const pollNoError … const pollErrClosing … const pollErrTimeout … const pollErrNotPollable …

const pdNil … const pdReady … const pdWait … const pollBlockSize …

type pollDesc …

type pollInfo …

const pollClosing … const pollEventErr … const pollExpiredReadDeadline … const pollExpiredWriteDeadline … const pollFDSeq … const pollFDSeqBits … const pollFDSeqMask …

func (i pollInfo) closing() bool { … }
func (i pollInfo) eventErr() bool { … }
func (i pollInfo) expiredReadDeadline() bool { … }
func (i pollInfo) expiredWriteDeadline() bool { … }

// info returns the pollInfo corresponding to pd.
func (pd *pollDesc) info() pollInfo { … }

// publishInfo updates pd.atomicInfo (returned by pd.info)
// using the other values in pd.
// It must be called while holding pd.lock,
// and it must be called after changing anything
// that might affect the info bits.
// In practice this means after changing closing
// or changing rd or wd from < 0 to >= 0.
func (pd *pollDesc) publishInfo() { … }

// setEventErr sets the result of pd.info().eventErr() to b.
// We only change the error bit if seq == 0 or if seq matches pollFDSeq
// (issue #59545).
func (pd *pollDesc) setEventErr(b bool, seq uintptr) { … }

type pollCache …

var netpollInitLock …
var netpollInited …
var pollcache …
var netpollWaiters …

//go:linkname poll_runtime_pollServerInit internal/poll.runtime_pollServerInit
func poll_runtime_pollServerInit() { … }

func netpollGenericInit() { … }

func netpollinited() bool { … }

// poll_runtime_isPollServerDescriptor reports whether fd is a
// descriptor being used by netpoll.
func poll_runtime_isPollServerDescriptor(fd uintptr) bool { … }

//go:linkname poll_runtime_pollOpen internal/poll.runtime_pollOpen
func poll_runtime_pollOpen(fd uintptr) (*pollDesc, int) { … }

//go:linkname poll_runtime_pollClose internal/poll.runtime_pollClose
func poll_runtime_pollClose(pd *pollDesc) { … }

func (c *pollCache) free(pd *pollDesc) { … }

// poll_runtime_pollReset, which is internal/poll.runtime_pollReset,
// prepares a descriptor for polling in mode, which is 'r' or 'w'.
// This returns an error code; the codes are defined above.
//
//go:linkname poll_runtime_pollReset internal/poll.runtime_pollReset
func poll_runtime_pollReset(pd *pollDesc, mode int) int { … }

// poll_runtime_pollWait, which is internal/poll.runtime_pollWait,
// waits for a descriptor to be ready for reading or writing,
// according to mode, which is 'r' or 'w'.
// This returns an error code; the codes are defined above.
//
//go:linkname poll_runtime_pollWait internal/poll.runtime_pollWait
func poll_runtime_pollWait(pd *pollDesc, mode int) int { … }

//go:linkname poll_runtime_pollWaitCanceled internal/poll.runtime_pollWaitCanceled
func poll_runtime_pollWaitCanceled(pd *pollDesc, mode int) { … }

//go:linkname poll_runtime_pollSetDeadline internal/poll.runtime_pollSetDeadline
func poll_runtime_pollSetDeadline(pd *pollDesc, d int64, mode int) { … }

//go:linkname poll_runtime_pollUnblock internal/poll.runtime_pollUnblock
func poll_runtime_pollUnblock(pd *pollDesc) { … }

// netpollready is called by the platform-specific netpoll function.
// It declares that the fd associated with pd is ready for I/O.
// The toRun argument is used to build a list of goroutines to return
// from netpoll. The mode argument is 'r', 'w', or 'r'+'w' to indicate
// whether the fd is ready for reading or writing or both.
//
// This returns a delta to apply to netpollWaiters.
//
// This may run while the world is stopped, so write barriers are not allowed.
//
//go:nowritebarrier
func netpollready(toRun *gList, pd *pollDesc, mode int32) int32 { … }

func netpollcheckerr(pd *pollDesc, mode int32) int { … }

func netpollblockcommit(gp *g, gpp unsafe.Pointer) bool { … }

func netpollgoready(gp *g, traceskip int) { … }

// returns true if IO is ready, or false if timed out or closed
// waitio - wait only for completed IO, ignore errors
// Concurrent calls to netpollblock in the same mode are forbidden, as pollDesc
// can hold only a single waiting goroutine for each mode.
func netpollblock(pd *pollDesc, mode int32, waitio bool) bool { … }

// netpollunblock moves either pd.rg (if mode == 'r') or
// pd.wg (if mode == 'w') into the pdReady state.
// This returns any goroutine blocked on pd.{rg,wg}.
// It adds any adjustment to netpollWaiters to *delta;
// this adjustment should be applied after the goroutine has
// been marked ready.
func netpollunblock(pd *pollDesc, mode int32, ioready bool, delta *int32) *g { … }

func netpolldeadlineimpl(pd *pollDesc, seq uintptr, read, write bool) { … }

func netpollDeadline(arg any, seq uintptr, delta int64) { … }

func netpollReadDeadline(arg any, seq uintptr, delta int64) { … }

func netpollWriteDeadline(arg any, seq uintptr, delta int64) { … }

// netpollAnyWaiters reports whether any goroutines are waiting for I/O.
func netpollAnyWaiters() bool { … }

// netpollAdjustWaiters adds delta to netpollWaiters.
func netpollAdjustWaiters(delta int32) { … }

func (c *pollCache) alloc() *pollDesc { … }

// makeArg converts pd to an interface{}.
// makeArg does not do any allocation. Normally, such
// a conversion requires an allocation because pointers to
// types which embed internal/runtime/sys.NotInHeap (which pollDesc is)
// must be stored in interfaces indirectly. See issue 42076.
func (pd *pollDesc) makeArg() (i any) { … }

var pdEface …
var pdType …
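The info/publishInfo pair above follows a general pattern: fields protected by pd.lock are summarized into a single atomic word of bit flags that fast paths can read without locking. A minimal sketch of that pattern, outside the runtime, might look like this (the bit names mirror the real ones, but the struct layout is simplified and the real word also carries an fd sequence number in its upper bits):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// Illustrative bit layout of the published info word.
const (
	pollClosing              = 1 << 0
	pollEventErr             = 1 << 1
	pollExpiredReadDeadline  = 1 << 2
	pollExpiredWriteDeadline = 1 << 3
)

type pollDesc struct {
	lock       sync.Mutex    // protects the fields below
	atomicInfo atomic.Uint32 // lock-free summary for fast paths
	closing    bool
	rd, wd     int64 // read/write deadlines; < 0 means expired
}

// info returns the current summary without taking the lock.
func (pd *pollDesc) info() uint32 { return pd.atomicInfo.Load() }

// publishInfo re-derives the summary bits from the locked fields.
// Call with pd.lock held, after any change that could affect the bits.
// The event-error bit is owned by a separate CAS-based setter, so it
// is carried over rather than recomputed.
func (pd *pollDesc) publishInfo() {
	var info uint32
	if pd.closing {
		info |= pollClosing
	}
	if pd.rd < 0 {
		info |= pollExpiredReadDeadline
	}
	if pd.wd < 0 {
		info |= pollExpiredWriteDeadline
	}
	x := pd.atomicInfo.Load()
	for !pd.atomicInfo.CompareAndSwap(x, (x&pollEventErr)|info) {
		x = pd.atomicInfo.Load()
	}
}

func main() {
	pd := &pollDesc{}
	pd.lock.Lock()
	pd.closing = true
	pd.rd = -1
	pd.publishInfo()
	pd.lock.Unlock()
	fmt.Println(pd.info()&pollClosing != 0, pd.info()&pollExpiredReadDeadline != 0)
}
```

The CAS loop is what lets setEventErr flip its bit concurrently without holding pd.lock: publishInfo never clobbers the pollEventErr bit, only the bits it owns.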
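poll_runtime_pollReset and poll_runtime_pollWait both report results through the small integer codes declared at the top (pollNoError, pollErrClosing, pollErrTimeout, pollErrNotPollable), which internal/poll translates into Go errors. A simplified sketch of the mapping performed by netpollcheckerr, using a hypothetical flat pollState struct in place of the real descriptor (the real function reads the atomic info word and also handles the event-scanning error bit):

```go
package main

import "fmt"

// Error codes handed back to internal/poll; 0 means success.
// The values here mirror the runtime's convention but should be
// treated as illustrative.
const (
	pollNoError        = 0 // no error
	pollErrClosing     = 1 // descriptor is closed
	pollErrTimeout     = 2 // I/O deadline reached
	pollErrNotPollable = 3 // general error polling descriptor
)

// pollState is a hypothetical flattened view of a descriptor's status.
type pollState struct {
	closing              bool
	expiredReadDeadline  bool
	expiredWriteDeadline bool
}

// checkerr maps descriptor state to an error code for mode 'r' or 'w',
// in the spirit of netpollcheckerr (simplified).
func checkerr(s pollState, mode int32) int {
	if s.closing {
		return pollErrClosing
	}
	if (mode == 'r' && s.expiredReadDeadline) || (mode == 'w' && s.expiredWriteDeadline) {
		return pollErrTimeout
	}
	return pollNoError
}

func main() {
	fmt.Println(checkerr(pollState{closing: true}, 'r'))             // closed wins over timeout
	fmt.Println(checkerr(pollState{expiredReadDeadline: true}, 'r')) // read deadline hit
	fmt.Println(checkerr(pollState{}, 'w'))                          // ready to proceed
}
```

Note the precedence: a closing descriptor reports pollErrClosing even if a deadline has also expired, which matches how callers expect close to dominate.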
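The netpollblock/netpollunblock comments describe a three-state, single-waiter handoff on pd.rg/pd.wg: the word is pdNil (nothing pending), pdReady (I/O readiness posted), or pdWait (a goroutine is committing to sleep). The protocol can be sketched with plain atomics; this is an illustration of the state machine only, with a buffered channel standing in for goroutine parking (the runtime instead stores the *g in the word and uses gopark/netpollgoready):

```go
package main

import "sync/atomic"

// States of the wait word, mirroring pdNil/pdReady/pdWait.
const (
	pdNil   uintptr = 0 // nothing pending
	pdReady uintptr = 1 // I/O readiness posted and not yet consumed
	pdWait  uintptr = 2 // a waiter is (about to be) parked
)

type gate struct {
	state atomic.Uintptr
	wake  chan struct{} // stand-in for parking/readying a goroutine
}

func newGate() *gate { return &gate{wake: make(chan struct{}, 1)} }

// block waits until readiness is posted, then consumes it
// (pdReady -> pdNil), in the spirit of netpollblock. As in the
// runtime, at most one waiter per gate is allowed.
func (g *gate) block() {
	for {
		if g.state.CompareAndSwap(pdReady, pdNil) {
			return // readiness already posted; consume it
		}
		if g.state.CompareAndSwap(pdNil, pdWait) {
			<-g.wake // "park" until unblock wakes us
		}
	}
}

// unblock posts readiness and wakes the parked waiter if there is
// one, in the spirit of netpollunblock with ioready == true.
func (g *gate) unblock() {
	if g.state.Swap(pdReady) == pdWait {
		g.wake <- struct{}{}
	}
}

func main() {
	g := newGate()
	done := make(chan struct{})
	go func() { g.block(); close(done) }()
	g.unblock()
	<-done // block returned after readiness was posted
}
```

Because readiness is latched in the word itself, unblock-before-block works too: a later block simply consumes the pending pdReady without parking, which is why the runtime can deliver readiness from netpoll even when no goroutine is waiting yet.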