In the Linux kernel, the following vulnerability has been resolved:

powerpc/fadump: Move fadump_cma_init to setup_arch() after initmem_init()

During early init CMA_MIN_ALIGNMENT_BYTES can be PAGE_SIZE, since pageblock_order is still zero and it gets initialized later during initmem_init(), e.g.:

setup_arch() -> initmem_init() -> sparse_init() -> set_pageblock_order()

One such use case where this causes an issue is:

early_setup() -> early_init_devtree() -> fadump_reserve_mem() -> fadump_cma_init()

This causes the CMA memory alignment check to be bypassed in cma_init_reserved_mem(). Later, cma_activate_area() can then hit a VM_BUG_ON_PAGE(pfn & ((1 << order) - 1)) if the reserved memory area was not pageblock_order aligned.

Fix it by moving fadump_cma_init() to after initmem_init(), where other such CMA reservations are also made.
No PoCs from references.
- https://github.com/bygregonline/devsec-fastapi-report
- https://github.com/cku-heise/euvd-api-doc
- https://github.com/w4zu/Debian_security