const NoCompression …
const BestSpeed …
const BestCompression …
const DefaultCompression …
const HuffmanOnly …
const logWindowSize …
const windowSize …
const windowMask …
const baseMatchLength …
const minMatchLength …
const maxMatchLength …
const baseMatchOffset …
const maxMatchOffset …
const maxFlateBlockTokens …
const maxStoreBlockSize …
const hashBits …
const hashSize …
const hashMask …
const maxHashOffset …
const skipNever …

type compressionLevel …

var levels …

type compressor …

func (d *compressor) fillDeflate(b []byte) int { … }

func (d *compressor) writeBlock(tokens []token, index int) error { … }

// fillWindow will fill the current window with the supplied
// dictionary and calculate all hashes.
// This is much faster than doing a full encode.
// Should only be used after a reset.
func (d *compressor) fillWindow(b []byte) { … }

// Try to find a match starting at pos whose length is greater than prevLength.
// We only look at chainCount possibilities before giving up.
func (d *compressor) findMatch(pos int, prevHead int, prevLength int, lookahead int) (length, offset int, ok bool) { … }

func (d *compressor) writeStoredBlock(buf []byte) error { … }

const hashmul …

// hash4 returns a hash representation of the first 4 bytes
// of the supplied slice.
// The caller must ensure that len(b) >= 4.
func hash4(b []byte) uint32 { … }

// bulkHash4 will compute hashes using the same
// algorithm as hash4.
func bulkHash4(b []byte, dst []uint32) { … }

// matchLen returns the number of matching bytes in a and b
// up to length 'max'. Both slices must be at least 'max'
// bytes in size.
func matchLen(a, b []byte, max int) int { … }

// encSpeed will compress and store the currently added data,
// if enough has been accumulated or we are at the end of the stream.
// Any error that occurred will be in d.err.
func (d *compressor) encSpeed() { … }

func (d *compressor) initDeflate() { … }

func (d *compressor) deflate() { … }

func (d *compressor) fillStore(b []byte) int { … }

func (d *compressor) store() { … }

// storeHuff compresses and stores the currently added data
// when the d.window is full or we are at the end of the stream.
// Any error that occurred will be in d.err.
func (d *compressor) storeHuff() { … }

func (d *compressor) write(b []byte) (n int, err error) { … }

func (d *compressor) syncFlush() error { … }

func (d *compressor) init(w io.Writer, level int) (err error) { … }

func (d *compressor) reset(w io.Writer) { … }

func (d *compressor) close() error { … }

// NewWriter returns a new [Writer] compressing data at the given level.
// Following zlib, levels range from 1 ([BestSpeed]) to 9 ([BestCompression]);
// higher levels typically run slower but compress more. Level 0
// ([NoCompression]) does not attempt any compression; it only adds the
// necessary DEFLATE framing.
// Level -1 ([DefaultCompression]) uses the default compression level.
// Level -2 ([HuffmanOnly]) will use Huffman compression only, giving
// a very fast compression for all types of input, but sacrificing considerable
// compression efficiency.
//
// If level is in the range [-2, 9] then the error returned will be nil.
// Otherwise the error returned will be non-nil.
func NewWriter(w io.Writer, level int) (*Writer, error) { … }
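As a usage sketch of the exported entry point above: a minimal round trip that compresses a small payload with NewWriter at BestSpeed and finishes the stream with Close. The payload string and buffer names are illustrative, not taken from this file.

package main

import (
	"bytes"
	"compress/flate"
	"fmt"
	"log"
)

func main() {
	var buf bytes.Buffer

	// Levels 1..9 trade speed for ratio; 0 stores, -1 is the default, -2 is Huffman-only.
	w, err := flate.NewWriter(&buf, flate.BestSpeed)
	if err != nil {
		log.Fatal(err) // non-nil only when level is outside [-2, 9]
	}

	// Write buffers input; compressed output reaches buf incrementally.
	if _, err := w.Write([]byte("hello, hello, hello, flate")); err != nil {
		log.Fatal(err)
	}

	// Close flushes pending data and emits the final DEFLATE block.
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}

	fmt.Printf("compressed to %d bytes\n", buf.Len())
}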
// NewWriterDict is like [NewWriter] but initializes the new
// [Writer] with a preset dictionary. The returned [Writer] behaves
// as if the dictionary had been written to it without producing
// any compressed output. The compressed data written to w
// can only be decompressed by a [Reader] initialized with the
// same dictionary.
func NewWriterDict(w io.Writer, level int, dict []byte) (*Writer, error) { … }

type dictWriter …

func (w *dictWriter) Write(b []byte) (n int, err error) { … }

var errWriterClosed …

type Writer …

// Write writes data to w, which will eventually write the
// compressed form of data to its underlying writer.
func (w *Writer) Write(data []byte) (n int, err error) { … }

// Flush flushes any pending data to the underlying writer.
// It is useful mainly in compressed network protocols, to ensure that
// a remote reader has enough data to reconstruct a packet.
// Flush does not return until the data has been written.
// Calling Flush when there is no pending data still causes the [Writer]
// to emit a sync marker of at least 4 bytes.
// If the underlying writer returns an error, Flush returns that error.
//
// In the terminology of the zlib library, Flush is equivalent to Z_SYNC_FLUSH.
func (w *Writer) Flush() error { … }

// Close flushes and closes the writer.
func (w *Writer) Close() error { … }

// Reset discards the writer's state and makes it equivalent to
// the result of [NewWriter] or [NewWriterDict] called with dst
// and w's level and dictionary.
func (w *Writer) Reset(dst io.Writer) { … }
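A companion sketch for the preset-dictionary path above: data written through NewWriterDict can only be decoded by a reader primed with the same dictionary via NewReaderDict (the reader-side counterpart in this package), and Reset lets the same Writer be reused on a new stream. The dictionary and payload strings are placeholders.

package main

import (
	"bytes"
	"compress/flate"
	"fmt"
	"io"
	"log"
)

func main() {
	dict := []byte("a frequently repeated preamble")
	var buf bytes.Buffer

	w, err := flate.NewWriterDict(&buf, flate.DefaultCompression, dict)
	if err != nil {
		log.Fatal(err)
	}
	// Matches against the dictionary compress well even in a short stream.
	if _, err := w.Write([]byte("a frequently repeated preamble, then the payload")); err != nil {
		log.Fatal(err)
	}
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}

	// Decompression must be primed with the identical dictionary.
	r := flate.NewReaderDict(&buf, dict)
	defer r.Close()
	out, err := io.ReadAll(r)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s\n", out)

	// Reset reuses the Writer (keeping its level and dictionary) on a new stream.
	var next bytes.Buffer
	w.Reset(&next)
}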