In the Linux kernel, the following vulnerability has been resolved:

netfs: Fix unbuffered write error handling

If all the subrequests in an unbuffered write stream fail, the subrequest collector doesn't update the stream->transferred value and it retains its initial LONG_MAX value. Unfortunately, if all active streams fail, then we take the smallest value of { LONG_MAX, LONG_MAX, ... } as the value to set in wreq->transferred - which is then returned from ->write_iter().

LONG_MAX was chosen as the initial value so that all the streams can be quickly assessed by taking the smallest value of all stream->transferred - but this only works if we've set any of them.

Fix this by adding a flag to indicate whether the value in stream->transferred is valid and checking that when we integrate the values. stream->transferred can then be initialised to zero.

This was found by running the generic/750 xfstest against cifs with cache=none. It splices data to the target file. Once (if) it has used up all the available scratch space, the writes start failing with ENOSPC. This causes ->write_iter() to fail. However, it was returning wreq->transferred, i.e. LONG_MAX, rather than an error (because it thought the amount transferred was non-zero) and iter_file_splice_write() would then try to clean up that amount of pipe bufferage - leading to an oops when it overran. The kernel log showed:

    CIFS: VFS: Send error in write = -28

followed by:

    BUG: kernel NULL pointer dereference, address: 0000000000000008

with:

    RIP: 0010:iter_file_splice_write+0x3a4/0x520
    do_splice+0x197/0x4e0

or:

    RIP: 0010:pipe_buf_release (include/linux/pipe_fs_i.h:282)
    iter_file_splice_write (fs/splice.c:755)

Also put a warning check into splice to announce if ->write_iter() returned that it had written more than it was asked to.
No PoCs from references.
- https://github.com/fkie-cad/nvd-json-data-feeds
- https://github.com/w4zu/Debian_security