2016-09-29  UPSTREAM: arm64/kernel: fix incorrect EL0 check in inv_entry macro  (Ard Biesheuvel)
The implementation of macro inv_entry refers to its 'el' argument without the required leading backslash, which results in an undefined symbol 'el' being passed into the kernel_entry macro rather than the index of the exception level as intended. This undefined symbol strangely enough does not result in build failures, although it is visible in vmlinux: $ nm -n vmlinux |head U el 0000000000000000 A _kernel_flags_le_hi32 0000000000000000 A _kernel_offset_le_hi32 0000000000000000 A _kernel_size_le_hi32 000000000000000a A _kernel_flags_le_lo32 ..... However, it does result in incorrect code being generated for invalid exceptions taken from EL0, since the argument check in kernel_entry assumes EL1 if its argument does not equal '0'. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Change-Id: I406c1207682a4dff3054a019c26fdf1310b08ed1 (cherry picked from commit b660950c60a7278f9d8deb7c32a162031207c758) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-09-29  UPSTREAM: arm64: Add workaround for Cavium erratum 27456  (Andrew Pinski)
On ThunderX T88 pass 1.x through 2.1 parts, broadcast TLBI instructions may cause the icache to become corrupted if it contains data for a non-current ASID. This patch implements the workaround (which invalidates the local icache when switching the mm) by using code patching. Signed-off-by: Andrew Pinski <apinski@cavium.com> Signed-off-by: David Daney <david.daney@cavium.com> Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Change-Id: I60e6d17926b067a4e022d7b159e239114303a547 (cherry picked from commit 104a0c02e8b1936c049e18a6d4e4ab040fb61213) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-09-29  UPSTREAM: arm64: Add macros to read/write system registers  (Mark Rutland)
Rather than crafting custom macros for reading/writing each system register, provide generic accessors, read_sysreg and write_sysreg, for this purpose. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Suzuki Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Change-Id: I1d6cf948bc6660dfd096ff5a18eba682941098c1 (cherry picked from commit 3600c2fdc09a43a30909743569e35a29121602ed) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
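For reference, accessors of this kind are normally thin wrappers around mrs/msr inline assembly. A minimal sketch of the shape such macros take is shown below (illustrative only; the authoritative definitions live in the arm64 sysreg header, and the example function name is hypothetical):

    #include <linux/types.h>
    #include <linux/stringify.h>

    /* Sketch of generic system-register accessors (illustrative). */
    #define read_sysreg(r) ({ \
            u64 __val; \
            asm volatile("mrs %0, " __stringify(r) : "=r" (__val)); \
            __val; \
    })

    #define write_sysreg(v, r) do { \
            u64 __val = (u64)(v); \
            asm volatile("msr " __stringify(r) ", %0" : : "r" (__val)); \
    } while (0)

    /* Example: update SCTLR_EL1 without a register-specific helper. */
    static inline void example_set_sctlr_bits(u64 set)
    {
            write_sysreg(read_sysreg(sctlr_el1) | set, sctlr_el1);
    }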
2016-09-29  UPSTREAM: arm64/efi: refactor EFI init and runtime code for reuse by 32-bit ARM  (Ard Biesheuvel)
This refactors the EFI init and runtime code that will be shared between arm64 and ARM so that it can be built for both archs. Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Change-Id: Ieee70bbe117170d2054a9c82c4f1a8143b7e302b (cherry picked from commit f7d924894265794f447ea799dd853400749b5a22) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-09-29  UPSTREAM: arm64/efi: split off EFI init and runtime code for reuse by 32-bit ARM  (Ard Biesheuvel)
This splits off the early EFI init and runtime code that - discovers the EFI params and the memory map from the FDT, and installs the memblocks and config tables. - prepares and installs the EFI page tables so that UEFI Runtime Services can be invoked at the virtual address installed by the stub. This will allow it to be reused for 32-bit ARM. Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Change-Id: I143e4b38a5426f70027eff6cc5f732ac370ae69d (cherry picked from commit e5bc22a42e4d46cc203fdfb6d2c76202b08666a0) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-09-29  UPSTREAM: arm64/efi: mark UEFI reserved regions as MEMBLOCK_NOMAP  (Ard Biesheuvel)
Change the EFI memory reservation logic to use memblock_mark_nomap() rather than memblock_reserve() to mark UEFI reserved regions as occupied. In addition to reserving them against allocations done by memblock, this will also prevent them from being covered by the linear mapping. Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Change-Id: Ia3ce78f40f8d41a9afdd42238fe9cbfd81bbff08 (cherry picked from commit 4dffbfc48d65e5d8157a634fd670065d237a9377) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-09-29  BACKPORT: arm64: only consider memblocks with NOMAP cleared for linear mapping  (Ard Biesheuvel)
Take the new memblock attribute MEMBLOCK_NOMAP into account when deciding whether a certain region is or should be covered by the kernel direct mapping. Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Change-Id: Id7346a09bb3aee5e9a5ef8812251f80cf8265532 (cherry picked from commit 68709f45385aeddb0ca96a060c0c8259944f321b) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-09-29  UPSTREAM: mm/memblock: add MEMBLOCK_NOMAP attribute to memblock memory table  (Ard Biesheuvel)
This introduces the MEMBLOCK_NOMAP attribute and the required plumbing to make it usable as an indicator that some parts of normal memory should not be covered by the kernel direct mapping. It is up to the arch to actually honor the attribute when laying out this mapping, but the memblock code itself is modified to disregard these regions for allocations and other general use. Cc: linux-mm@kvack.org Cc: Alexander Kuleshov <kuleshovmail@gmail.com> Cc: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Change-Id: I55cd3abdf514ac54c071fa0037d8dac73bda798d (cherry picked from commit bf3d3cc580f9960883ebf9ea05868f336d9491c2) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
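As a rough usage sketch (an assumed example rather than part of the patch itself), an architecture can leave a firmware-owned range in the memory table while keeping it out of the linear map, and query the flag later when deciding what may be mapped; memblock_is_map_memory() is the query helper relied on by the follow-up arm64 patches:

    #include <linux/init.h>
    #include <linux/memblock.h>

    /* Illustrative only: account a range as memory but keep it unmapped. */
    static void __init note_firmware_region(phys_addr_t base, phys_addr_t size)
    {
            memblock_add(base, size);        /* still counted as memory      */
            memblock_mark_nomap(base, size); /* excluded from direct mapping */
    }

    /* Later, e.g. when building the linear map or validating an address: */
    static bool __init phys_is_mappable(phys_addr_t addr)
    {
            return memblock_is_map_memory(addr);
    }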
2016-09-27  ANDROID: dm: android-verity: Remove fec_header location constraint  (Badhri Jagan Sridharan)
This CL removes the requirement that the fec_header be located right after the ECC data. (Cherry-picked from https://android-review.googlesource.com/#/c/280401) Bug: 28865197 Signed-off-by: Badhri Jagan Sridharan <Badhri@google.com> Change-Id: Ie04c8cf2dd755f54d02dbdc4e734a13d6f6507b5
2016-09-24  BACKPORT: audit: consistently record PIDs with task_tgid_nr()  (Paul Moore)
Unfortunately we record PIDs in audit records using a variety of methods despite the correct way being the use of task_tgid_nr(). This patch converts all of these callers, except for the case of AUDIT_SET in audit_receive_msg() (see the comment in the code). Reported-by: Jeff Vander Stoep <jeffv@google.com> Signed-off-by: Paul Moore <paul@paul-moore.com> Bug: 28952093 (cherry picked from commit fa2bea2f5cca5b8d4a3e5520d2e8c0ede67ac108) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: If6645f9de8bc58ed9755f28dc6af5fbf08d72a00
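For context, task_tgid_nr() reports the thread-group id (the process PID as userspace sees it), whereas task_pid_nr() reports the per-thread id. A hedged sketch of the convention this change standardizes on (function and buffer names illustrative):

    #include <linux/audit.h>
    #include <linux/sched.h>

    /* Illustrative: record the thread-group id, not the calling thread's id. */
    static void example_audit_log_pid(struct audit_buffer *ab)
    {
            audit_log_format(ab, " pid=%d", task_tgid_nr(current));
    }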
2016-09-23  android-base.cfg: Enable kernel ASLR  (Jeff Vander Stoep)
Bug: 30369029 Change-Id: I0c1c932255866f308d67de1df2ad52c9c19c4799
2016-09-23  UPSTREAM: vmlinux.lds.h: allow arch specific handling of ro_after_init data section  (Heiko Carstens)
commit c74ba8b3480d ("arch: Introduce post-init read-only memory") introduced the __ro_after_init attribute which allows variables to be added to the ro_after_init data section. This new section was added to rodata, even though it contains writable data. This in turn causes problems on architectures which mark the page table entries that point to rodata read-only very early. This patch allows architectures to implement their own handling of the .data..ro_after_init section. Usually that would be: - mark the rodata section read-only very early - mark the ro_after_init section read-only within mark_rodata_ro Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Bug: 31660652 Change-Id: If68cb4d86f88678c9bac8c47072775ab85ef5770 (cherry picked from commit 32fb2fc5c357fb99616bbe100dbcb27bc7f5d045) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-09-23  UPSTREAM: ARM/vdso: Mark the vDSO code read-only after init  (David Brown)
Although the ARM vDSO is cleanly separated by code/data with the code being read-only in userspace mappings, the code page is still writable from the kernel. There have been exploits (such as http://itszn.com/blog/?p=21) that take advantage of this on x86 to go from a bad kernel write to full root. Prevent this specific exploit class on ARM as well by putting the vDSO code page in post-init read-only memory as well. Before: vdso: 1 text pages at base 80927000 root@Vexpress:/ cat /sys/kernel/debug/kernel_page_tables ---[ Modules ]--- ---[ Kernel Mapping ]--- 0x80000000-0x80100000 1M RW NX SHD 0x80100000-0x80600000 5M ro x SHD 0x80600000-0x80800000 2M ro NX SHD 0x80800000-0xbe000000 984M RW NX SHD After: vdso: 1 text pages at base 8072b000 root@Vexpress:/ cat /sys/kernel/debug/kernel_page_tables ---[ Modules ]--- ---[ Kernel Mapping ]--- 0x80000000-0x80100000 1M RW NX SHD 0x80100000-0x80600000 5M ro x SHD 0x80600000-0x80800000 2M ro NX SHD 0x80800000-0xbe000000 984M RW NX SHD Inspired by https://lkml.org/lkml/2016/1/19/494 based on work by the PaX Team, Brad Spengler, and Kees Cook. Signed-off-by: David Brown <david.brown@linaro.org> Signed-off-by: Kees Cook <keescook@chromium.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Brad Spengler <spender@grsecurity.net> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Emese Revfy <re.emese@gmail.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mathias Krause <minipli@googlemail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Nathan Lynch <nathan_lynch@mentor.com> Cc: PaX Team <pageexec@freemail.hu> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: kernel-hardening@lists.openwall.com Cc: linux-arch <linux-arch@vger.kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Link: http://lkml.kernel.org/r/1455748879-21872-8-git-send-email-keescook@chromium.org Signed-off-by: Ingo Molnar <mingo@kernel.org> Bug: 31660652 Change-Id: I8d3cb7707644343aa907b2d584312ccdad63e270 (cherry picked from commit 11bf9b865898961cee60a41c483c9f27ec76e12e) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-09-23  UPSTREAM: x86/vdso: Mark the vDSO code read-only after init  (Kees Cook)
The vDSO does not need to be writable after __init, so mark it as __ro_after_init. The result kills the exploit method of writing to the vDSO from kernel space resulting in userspace executing the modified code, as shown here to bypass SMEP restrictions: http://itszn.com/blog/?p=21 The memory map (with added vDSO address reporting) shows the vDSO moving into read-only memory: Before: [ 0.143067] vDSO @ ffffffff82004000 [ 0.143551] vDSO @ ffffffff82006000 ---[ High Kernel Mapping ]--- 0xffffffff80000000-0xffffffff81000000 16M pmd 0xffffffff81000000-0xffffffff81800000 8M ro PSE GLB x pmd 0xffffffff81800000-0xffffffff819f3000 1996K ro GLB x pte 0xffffffff819f3000-0xffffffff81a00000 52K ro NX pte 0xffffffff81a00000-0xffffffff81e00000 4M ro PSE GLB NX pmd 0xffffffff81e00000-0xffffffff81e05000 20K ro GLB NX pte 0xffffffff81e05000-0xffffffff82000000 2028K ro NX pte 0xffffffff82000000-0xffffffff8214f000 1340K RW GLB NX pte 0xffffffff8214f000-0xffffffff82281000 1224K RW NX pte 0xffffffff82281000-0xffffffff82400000 1532K RW GLB NX pte 0xffffffff82400000-0xffffffff83200000 14M RW PSE GLB NX pmd 0xffffffff83200000-0xffffffffc0000000 974M pmd After: [ 0.145062] vDSO @ ffffffff81da1000 [ 0.146057] vDSO @ ffffffff81da4000 ---[ High Kernel Mapping ]--- 0xffffffff80000000-0xffffffff81000000 16M pmd 0xffffffff81000000-0xffffffff81800000 8M ro PSE GLB x pmd 0xffffffff81800000-0xffffffff819f3000 1996K ro GLB x pte 0xffffffff819f3000-0xffffffff81a00000 52K ro NX pte 0xffffffff81a00000-0xffffffff81e00000 4M ro PSE GLB NX pmd 0xffffffff81e00000-0xffffffff81e0b000 44K ro GLB NX pte 0xffffffff81e0b000-0xffffffff82000000 2004K ro NX pte 0xffffffff82000000-0xffffffff8214c000 1328K RW GLB NX pte 0xffffffff8214c000-0xffffffff8227e000 1224K RW NX pte 0xffffffff8227e000-0xffffffff82400000 1544K RW GLB NX pte 0xffffffff82400000-0xffffffff83200000 14M RW PSE GLB NX pmd 0xffffffff83200000-0xffffffffc0000000 974M pmd Based on work by PaX Team and Brad Spengler. Signed-off-by: Kees Cook <keescook@chromium.org> Acked-by: Andy Lutomirski <luto@kernel.org> Acked-by: H. Peter Anvin <hpa@linux.intel.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Brad Spengler <spender@grsecurity.net> Cc: Brian Gerst <brgerst@gmail.com> Cc: David Brown <david.brown@linaro.org> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Emese Revfy <re.emese@gmail.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mathias Krause <minipli@googlemail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: PaX Team <pageexec@freemail.hu> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: kernel-hardening@lists.openwall.com Cc: linux-arch <linux-arch@vger.kernel.org> Link: http://lkml.kernel.org/r/1455748879-21872-7-git-send-email-keescook@chromium.org Signed-off-by: Ingo Molnar <mingo@kernel.org> Bug: 31660652 Change-Id: Iafbf7314a1106c297ea883031ee96c4f53c04a2b (cherry picked from commit 018ef8dcf3de5f62e2cc1a9273cc27e1c6ba8de5) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-09-23  UPSTREAM: lkdtm: Verify that '__ro_after_init' works correctly  (Kees Cook)
The new __ro_after_init section should be writable before init, but not after. Validate that it gets updated at init and can't be written to afterwards. Signed-off-by: Kees Cook <keescook@chromium.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: David Brown <david.brown@linaro.org> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Emese Revfy <re.emese@gmail.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mathias Krause <minipli@googlemail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: PaX Team <pageexec@freemail.hu> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: kernel-hardening@lists.openwall.com Cc: linux-arch <linux-arch@vger.kernel.org> Link: http://lkml.kernel.org/r/1455748879-21872-6-git-send-email-keescook@chromium.org Signed-off-by: Ingo Molnar <mingo@kernel.org> Bug: 31660652 Change-Id: I75301d0497fde49a02f13b4e75300111ddadda9d (cherry picked from commit 7cca071ccbd2a293ea69168ace6abbcdce53098e) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-09-23  UPSTREAM: arch: Introduce post-init read-only memory  (Kees Cook)
One of the easiest ways to protect the kernel from attack is to reduce the internal attack surface exposed when a "write" flaw is available. By making as much of the kernel read-only as possible, we reduce the attack surface. Many things are written to only during __init, and never changed again. These cannot be made "const" since the compiler will do the wrong thing (we do actually need to write to them). Instead, move these items into a memory region that will be made read-only during mark_rodata_ro() which happens after all kernel __init code has finished. This introduces __ro_after_init as a way to mark such memory, and adds some documentation about the existing __read_mostly marking. This improves the security of the Linux kernel by marking formerly read-write memory regions as read-only on a fully booted up system. Based on work by PaX Team and Brad Spengler. Signed-off-by: Kees Cook <keescook@chromium.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Brad Spengler <spender@grsecurity.net> Cc: Brian Gerst <brgerst@gmail.com> Cc: David Brown <david.brown@linaro.org> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Emese Revfy <re.emese@gmail.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mathias Krause <minipli@googlemail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: PaX Team <pageexec@freemail.hu> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: kernel-hardening@lists.openwall.com Cc: linux-arch <linux-arch@vger.kernel.org> Link: http://lkml.kernel.org/r/1455748879-21872-5-git-send-email-keescook@chromium.org Signed-off-by: Ingo Molnar <mingo@kernel.org> Bug: 31660652 Change-Id: I640f6d858d9770a5e480d12a1c716adf8842feb0 (cherry picked from commit c74ba8b3480da6ddaea17df2263ec09b869ac496) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
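Usage of the new annotation is intentionally minimal; below is a small example (names hypothetical) of a variable that is written once during boot and read-only afterwards:

    #include <linux/cache.h>        /* __ro_after_init */
    #include <linux/init.h>

    /* Written only from __init code; becomes read-only in mark_rodata_ro(). */
    static int example_policy __ro_after_init;

    static int __init example_policy_setup(char *str)
    {
            example_policy = 1;     /* legal: init code runs before the section is sealed */
            return 1;
    }
    __setup("example_policy", example_policy_setup);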
2016-09-23  UPSTREAM: x86/mm: Always enable CONFIG_DEBUG_RODATA and remove the Kconfig option  (Kees Cook)
This removes the CONFIG_DEBUG_RODATA option and makes it always enabled. This simplifies the code and also makes it clearer that read-only mapped memory is just as fundamental a security feature in kernel-space as it is in user-space. Suggested-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Kees Cook <keescook@chromium.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: David Brown <david.brown@linaro.org> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Emese Revfy <re.emese@gmail.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mathias Krause <minipli@googlemail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: PaX Team <pageexec@freemail.hu> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: kernel-hardening@lists.openwall.com Cc: linux-arch <linux-arch@vger.kernel.org> Link: http://lkml.kernel.org/r/1455748879-21872-4-git-send-email-keescook@chromium.org Signed-off-by: Ingo Molnar <mingo@kernel.org> Bug: 31660652 Change-Id: I3e79c7c4ead79a81c1445f1b3dd28003517faf18 (cherry picked from commit 9ccaf77cf05915f51231d158abfd5448aedde758) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-09-23  UPSTREAM: mm/init: Add 'rodata=off' boot cmdline parameter to disable read-only kernel mappings  (Kees Cook)
It may be useful to debug writes to the readonly sections of memory, so provide a cmdline "rodata=off" to allow for this. This can be expanded in the future to support "log" and "write" modes, but that will need to be architecture-specific. This also makes KDB software breakpoints more usable, as read-only mappings can now be disabled on any kernel. Suggested-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Kees Cook <keescook@chromium.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: David Brown <david.brown@linaro.org> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Emese Revfy <re.emese@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mathias Krause <minipli@googlemail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: PaX Team <pageexec@freemail.hu> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: kernel-hardening@lists.openwall.com Cc: linux-arch <linux-arch@vger.kernel.org> Link: http://lkml.kernel.org/r/1455748879-21872-3-git-send-email-keescook@chromium.org Signed-off-by: Ingo Molnar <mingo@kernel.org> Bug: 31660652 Change-Id: I67b818ca390afdd42ab1c27cb4f8ac64bbdb3b65 (cherry picked from commit d2aa1acad22f1bdd0cfa67b3861800e392254454) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
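A hedged sketch of how such a boot parameter is typically wired up (illustrative shape; the real hook lives in init/main.c and assumes mark_rodata_ro() is declared for the architecture):

    #include <linux/init.h>
    #include <linux/printk.h>
    #include <linux/string.h>

    /* Illustrative: parse "rodata=off" and skip mark_rodata_ro() if requested. */
    static bool rodata_enabled = true;

    static int __init set_debug_rodata(char *str)
    {
            strtobool(str, &rodata_enabled);
            return 1;       /* parameter consumed */
    }
    __setup("rodata=", set_debug_rodata);

    static void mark_readonly(void)
    {
            if (rodata_enabled)
                    mark_rodata_ro();
            else
                    pr_info("Kernel memory protection disabled.\n");
    }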
2016-09-23  UPSTREAM: asm-generic: Consolidate mark_rodata_ro()  (Kees Cook)
Instead of defining mark_rodata_ro() in each architecture, consolidate it. Signed-off-by: Kees Cook <keescook@chromium.org> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Gross <agross@codeaurora.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Ashok Kumar <ashoks@broadcom.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Borislav Petkov <bp@suse.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: David Brown <david.brown@linaro.org> Cc: David Hildenbrand <dahi@linux.vnet.ibm.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Emese Revfy <re.emese@gmail.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Helge Deller <deller@gmx.de> Cc: James E.J. Bottomley <jejb@parisc-linux.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Luis R. Rodriguez <mcgrof@suse.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathias Krause <minipli@googlemail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Nicolas Pitre <nicolas.pitre@linaro.org> Cc: PaX Team <pageexec@freemail.hu> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ross Zwisler <ross.zwisler@linux.intel.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Stephen Boyd <sboyd@codeaurora.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Toshi Kani <toshi.kani@hp.com> Cc: kernel-hardening@lists.openwall.com Cc: linux-arch <linux-arch@vger.kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Cc: linux-parisc@vger.kernel.org Link: http://lkml.kernel.org/r/1455748879-21872-2-git-send-email-keescook@chromium.org Signed-off-by: Ingo Molnar <mingo@kernel.org> Bug: 31660652 Change-Id: Iec0c44b3f5d7948954da93fba6cb57888a2709de (cherry picked from commit e267d97b83d9cecc16c54825f9f3ac7f72dc1e1e) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-09-22  UPSTREAM: arm64: spinlock: fix spin_unlock_wait for LSE atomics  (Will Deacon)
Commit d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers") fixed spin_unlock_wait for LL/SC-based atomics under the premise that the LSE atomics (in particular, the LDADDA instruction) are indivisible. Unfortunately, these instructions are only indivisible when used with the -AL (full ordering) suffix and, consequently, the same issue can theoretically be observed with LSE atomics, where a later (in program order) load can be speculated before the write portion of the atomic operation. This patch fixes the issue by performing a CAS of the lock once we've established that it's unlocked, in much the same way as the LL/SC code. Fixes: d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers") Signed-off-by: Will Deacon <will.deacon@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit 3a5facd09da848193f5bcb0dea098a298bc1a29d) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: Icedaa4c508784bf43d0b5787586480fd668ccc49
2016-09-22  UPSTREAM: arm64: avoid TLB conflict with CONFIG_RANDOMIZE_BASE  (Mark Rutland)
When CONFIG_RANDOMIZE_BASE is selected, we modify the page tables to remap the kernel at a newly-chosen VA range. We do this with the MMU disabled, but do not invalidate TLBs prior to re-enabling the MMU with the new tables. Thus the old mapping entries may still live in TLBs, and we risk violating Break-Before-Make requirements, leading to TLB conflicts and/or other issues. We invalidate TLBs when we uninstall the idmap in early setup code, but prior to this we are subject to issues relating to the Break-Before-Make violation. Avoid these issues by invalidating the TLBs before the new mappings can be used by the hardware. Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR") Cc: <stable@vger.kernel.org> # 4.6+ Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit fd363bd417ddb6103564c69cfcbd92d9a7877431) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: I6c23ce55cdd8b66587b6787b8f28df8535e39f24
2016-09-22  UPSTREAM: arm64: Only select ARM64_MODULE_PLTS if MODULES=y  (Catalin Marinas)
Selecting CONFIG_RANDOMIZE_BASE=y and CONFIG_MODULES=n fails to build the module PLTs support: CC arch/arm64/kernel/module-plts.o /work/Linux/linux-2.6-aarch64/arch/arm64/kernel/module-plts.c: In function ‘module_emit_plt_entry’: /work/Linux/linux-2.6-aarch64/arch/arm64/kernel/module-plts.c:32:49: error: dereferencing pointer to incomplete type ‘struct module’ This patch selects ARM64_MODULE_PLTS conditionally only if MODULES is enabled. Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR") Cc: <stable@vger.kernel.org> # 4.6+ Reported-by: Jeff Vander Stoep <jeffv@google.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit b9c220b589daaf140f5b8ebe502c98745b94e65c) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: I446cb3aa78f1c64b5aa1e2e90fda13f7d46cac33
2016-09-22  UPSTREAM: arm64: kasan: Use actual memory node when populating the kernel image shadow  (Catalin Marinas)
With the 16KB or 64KB page configurations, the generic vmemmap_populate() implementation warns on potential offnode page_structs via vmemmap_verify() because the arm64 kasan_init() passes NUMA_NO_NODE instead of the actual node for the kernel image memory. Fixes: f9040773b7bb ("arm64: move kernel image to base of vmalloc area") Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: James Morse <james.morse@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit 2f76969f2eef051bdd63d38b08d78e790440b0ad) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: I8985e5b4628a9c7076767d4565f7633635813b5c
2016-09-22  UPSTREAM: arm64: lse: deal with clobbered IP registers after branch via PLT  (Ard Biesheuvel)
The LSE atomics implementation uses runtime patching to patch in calls to out of line non-LSE atomics implementations on cores that lack hardware support for LSE. To avoid paying the overhead cost of a function call even if no call ends up being made, the bl instruction is kept invisible to the compiler, and the out of line implementations preserve all registers, not just the ones that they are required to preserve as per the AAPCS64. However, commit fd045f6cd98e ("arm64: add support for module PLTs") added support for routing branch instructions via veneers if the branch target offset exceeds the range of the ordinary relative branch instructions. Since this deals with jump and call instructions that are exposed to ELF relocations, the PLT code uses x16 to hold the address of the branch target when it performs an indirect branch-to-register, something which is explicitly allowed by the AAPCS64 (and ordinary compiler generated code does not expect register x16 or x17 to retain their values across a bl instruction). Since the lse runtime patched bl instructions don't adhere to the AAPCS64, they don't deal with this clobbering of registers x16 and x17. So add them to the clobber list of the asm() statements that perform the call instructions, and drop x16 and x17 from the list of registers that are callee saved in the out of line non-LSE implementations. In addition, since we have given these functions two scratch registers, they no longer need to stack/unstack temp registers. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [will: factored clobber list into #define, updated Makefile comment] Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit 5be8b70af1ca78cefb8b756d157532360a5fd663) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: Ia44a54eba315a47a6b8aaa2259b444e0139162c0
2016-09-22  UPSTREAM: arm64: mm: check at build time that PAGE_OFFSET divides the VA space evenly  (Ard Biesheuvel)
Commit 8439e62a1561 ("arm64: mm: use bit ops rather than arithmetic in pa/va translations") changed the boundary check against PAGE_OFFSET from an arithmetic comparison to a bit test. This means we now silently assume that PAGE_OFFSET is a power of 2 that divides the kernel virtual address space into two equal halves. So make that assumption explicit. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit 6d2aa549de1fc998581d216de3853aa131aa4446) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: I8c3bc8cdb7d7f7dea092fd1a208b04583a141054
2016-09-22  UPSTREAM: arm64: kasan: Fix zero shadow mapping overriding kernel image shadow  (Catalin Marinas)
With the 16KB and 64KB page size configurations, SWAPPER_BLOCK_SIZE is PAGE_SIZE and ARM64_SWAPPER_USES_SECTION_MAPS is 0. Since kimg_shadow_end is not page aligned (_end shifted by KASAN_SHADOW_SCALE_SHIFT), the edges of previously mapped kernel image shadow via vmemmap_populate() may be overridden by subsequent calls to kasan_populate_zero_shadow(), leading to kernel panics like below: ------------------------------------------------------------------------------ Unable to handle kernel paging request at virtual address fffffc100135068c pgd = fffffc8009ac0000 [fffffc100135068c] *pgd=00000009ffee0003, *pud=00000009ffee0003, *pmd=00000009ffee0003, *pte=00e0000081a00793 Internal error: Oops: 9600004f [#1] PREEMPT SMP Modules linked in: CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.5.0-rc4+ #1984 Hardware name: Juno (DT) task: fffffe09001a0000 ti: fffffe0900200000 task.ti: fffffe0900200000 PC is at __memset+0x4c/0x200 LR is at kasan_unpoison_shadow+0x34/0x50 pc : [<fffffc800846f1cc>] lr : [<fffffc800821ff54>] pstate: 00000245 sp : fffffe0900203db0 x29: fffffe0900203db0 x28: 0000000000000000 x27: 0000000000000000 x26: 0000000000000000 x25: fffffc80099b69d0 x24: 0000000000000001 x23: 0000000000000000 x22: 0000000000002000 x21: dffffc8000000000 x20: 1fffff9001350a8c x19: 0000000000002000 x18: 0000000000000008 x17: 0000000000000147 x16: ffffffffffffffff x15: 79746972100e041d x14: ffffff0000000000 x13: ffff000000000000 x12: 0000000000000000 x11: 0101010101010101 x10: 1fffffc11c000000 x9 : 0000000000000000 x8 : fffffc100135068c x7 : 0000000000000000 x6 : 000000000000003f x5 : 0000000000000040 x4 : 0000000000000004 x3 : fffffc100134f651 x2 : 0000000000000400 x1 : 0000000000000000 x0 : fffffc100135068c Process swapper/0 (pid: 1, stack limit = 0xfffffe0900200020) Call trace: [<fffffc800846f1cc>] __memset+0x4c/0x200 [<fffffc8008220044>] __asan_register_globals+0x5c/0xb0 [<fffffc8008a09d34>] _GLOBAL__sub_I_65535_1_sunrpc_cache_lookup+0x1c/0x28 [<fffffc8008f20d28>] kernel_init_freeable+0x104/0x274 [<fffffc80089e1948>] kernel_init+0x10/0xf8 [<fffffc8008093a00>] ret_from_fork+0x10/0x50 ------------------------------------------------------------------------------ This patch aligns kimg_shadow_start and kimg_shadow_end to SWAPPER_BLOCK_SIZE in all configurations. Fixes: f9040773b7bb ("arm64: move kernel image to base of vmalloc area") Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit 2776e0e8ef683a42fe3e9a5facf576b73579700e) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: I13a6b38aefbeddd20bc87cb1382f2787bbc5cf9c
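The fix itself is small: conceptually, the kernel image shadow bounds are aligned to SWAPPER_BLOCK_SIZE before any zero-shadow population happens. A sketch of the idea (helper name hypothetical, not the literal diff):

    #include <linux/kernel.h>       /* round_down / round_up */

    /*
     * Illustrative: with SWAPPER_BLOCK_SIZE == PAGE_SIZE (16KB/64KB pages),
     * unaligned shadow edges could later be overwritten by the zero shadow;
     * aligning both bounds avoids that.
     */
    static void __init align_kimg_shadow(u64 *shadow_start, u64 *shadow_end,
                                         u64 block_size)
    {
            *shadow_start = round_down(*shadow_start, block_size);
            *shadow_end = round_up(*shadow_end, block_size);
    }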
2016-09-22  UPSTREAM: arm64: consistently use p?d_set_huge  (Mark Rutland)
Commit 324420bf91f60582 ("arm64: add support for ioremap() block mappings") added new p?d_set_huge functions which do the hard work to generate and set a correct block entry. These differ from open-coded huge page creation in the early page table code by explicitly setting the P?D_TYPE_SECT bits (which are implicitly retained by mk_sect_prot() for any valid prot), but are otherwise identical (and cannot fail on arm64). For simplicity and consistency, make use of these in the initial page table creation code. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit c661cb1c537e2364bfdabb298fb934fd77445e98) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: I25e58a1626831c2c709abcded989d1770fea851c
2016-09-22  UPSTREAM: arm64: fix KASLR boot-time I-cache maintenance  (Mark Rutland)
Commit f80fb3a3d50843a4 ("arm64: add support for kernel ASLR") missed a DSB necessary to complete I-cache maintenance in the primary boot path, and hence stale instructions may still be present in the I-cache and may be executed until the I-cache maintenance naturally completes. Since commit 8ec41987436d566f ("arm64: mm: ensure patched kernel text is fetched from PoU"), all CPUs invalidate their I-caches after their MMU is enabled. Prior to a CPU's MMU having been enabled, arbitrary lines may have been fetched from the PoC into I-caches. We never patch text expected to be executed with the MMU off. Thus, it is unnecessary to perform broadcast I-cache maintenance in the primary boot path. This patch reduces the scope of the I-cache maintenance to the local CPU, and adds the missing DSB with similar scope, matching prior maintenance in the primary boot path. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit b90b4a608ea2401cc491828f7a385edd2e236e37) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: Ic66b5fec29867b86782ad6c3243642afc1f40080
2016-09-22  UPSTREAM: arm64: hugetlb: partial revert of 66b3923a1a0f  (Will Deacon)
Commit 66b3923a1a0f ("arm64: hugetlb: add support for PTE contiguous bit") introduced support for huge pages using the contiguous bit in the PTE as opposed to block mappings, which may be slightly unwieldy (512M) in 64k page configurations. Unfortunately, this support has resulted in some late regressions when running the libhugetlbfs test suite with 64k pages and CONFIG_DEBUG_VM as a result of a BUG: | readback (2M: 64): ------------[ cut here ]------------ | kernel BUG at fs/hugetlbfs/inode.c:446! | Internal error: Oops - BUG: 0 [#1] SMP | Modules linked in: | CPU: 7 PID: 1448 Comm: readback Not tainted 4.5.0-rc7 #148 | Hardware name: linux,dummy-virt (DT) | task: fffffe0040964b00 ti: fffffe00c2668000 task.ti: fffffe00c2668000 | PC is at remove_inode_hugepages+0x44c/0x480 | LR is at remove_inode_hugepages+0x264/0x480 Rather than revert the entire patch, simply avoid advertising the contiguous huge page sizes for now while people are actively working on a fix. This patch can then be reverted once things have been sorted out. Cc: David Woods <dwoods@ezchip.com> Reported-by: Steve Capper <steve.capper@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit ff7925848b50050732ac0401e0acf27e8b241d7b) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: I3a9751fa79b2d2871dbdc06ea1aa3d1336bb4f4f
2016-09-22  UPSTREAM: arm64: make irq_stack_ptr more robust  (Yang Shi)
Switching between stacks is only valid if we are tracing ourselves while on the irq_stack, so it is only valid when in current and non-preemptible context, otherwise it is just zeroed off. Fixes: 132cd887b5c5 ("arm64: Modify stack trace and dump for use with irq_stack") Acked-by: James Morse <james.morse@arm.com> Tested-by: James Morse <james.morse@arm.com> Signed-off-by: Yang Shi <yang.shi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit a80a0eb70c358f8c7dda4bb62b2278dc6285217b) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: I431d3d5e8e1f556ddfef283af88dd2f63b825f7c
2016-09-22  UPSTREAM: arm64: efi: invoke EFI_RNG_PROTOCOL to supply KASLR randomness  (Ard Biesheuvel)
Since arm64 does not use a decompressor that supplies an execution environment where it is feasible to some extent to provide a source of randomness, the arm64 KASLR kernel depends on the bootloader to supply some random bits in the /chosen/kaslr-seed DT property upon kernel entry. On UEFI systems, we can use the EFI_RNG_PROTOCOL, if supplied, to obtain some random bits. At the same time, use it to randomize the offset of the kernel Image in physical memory. Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit 2b5fe07a78a09a32002642b8a823428ade611f16) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: I9cb7ae5727dfdf3726b1c9544bce74722ec77bbd
2016-09-22  UPSTREAM: efi: stub: use high allocation for converted command line  (Ard Biesheuvel)
Before we can move the command line processing before the allocation of the kernel, which is required for detecting the 'nokaslr' option which controls that allocation, move the converted command line higher up in memory, to prevent it from interfering with the kernel itself. Since x86 needs the address to fit in 32 bits, use UINT_MAX as the upper bound there. Otherwise, use ULONG_MAX (i.e., no limit) Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit 48fcb2d0216103d15306caa4814e2381104df6d8) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: Ie959355658d3f2f1819bee842c77cc5eef54b8e7
2016-09-22  UPSTREAM: efi: stub: add implementation of efi_random_alloc()  (Ard Biesheuvel)
This implements efi_random_alloc(), which allocates a chunk of memory of a certain size at a certain alignment, and uses the random_seed argument it receives to randomize the address of the allocation. This is implemented by iterating over the UEFI memory map, counting the number of suitable slots (aligned offsets) within each region, and picking a random number between 0 and 'number of slots - 1' to select the slot. This should guarantee that each possible offset is equally likely to be chosen. Suggested-by: Kees Cook <keescook@chromium.org> Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit 2ddbfc81eac84a299cb4747a8764bc43f23e9008) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: I8f59e3e91a71c752d69fd08ca43a890977c82919
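The slot arithmetic is the core of the scheme; a simplified sketch of the per-region computation (illustrative, operating on plain addresses rather than EFI memory descriptors, with a hypothetical helper name):

    #include <linux/kernel.h>
    #include <linux/types.h>

    /* Number of 'align'-aligned offsets at which an allocation of
     * 'alloc_size' bytes fits entirely inside [start, start + size). */
    static u64 region_slot_count(u64 start, u64 size, u64 alloc_size, u64 align)
    {
            u64 first = ALIGN(start, align);                      /* round up   */
            u64 last;

            if (size < alloc_size || start + size - alloc_size < first)
                    return 0;
            last = round_down(start + size - alloc_size, align);  /* round down */
            return (last - first) / align + 1;
    }

    /* The caller sums this over all usable regions, draws
     * target = random_seed % total_slots, and walks the map again to find
     * the region and offset corresponding to that slot. */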
2016-09-22  BACKPORT: efi: stub: implement efi_get_random_bytes() based on EFI_RNG_PROTOCOL  (Ard Biesheuvel)
This exposes the firmware's implementation of EFI_RNG_PROTOCOL via a new function efi_get_random_bytes(). Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit e4fbf4767440472f9d23b0f25a2b905e1c63b6a8) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: Id46036b78c2efd223b6cd5488e512fd93e8f597d
2016-09-22  BACKPORT: arm64: kaslr: randomize the linear region  (Ard Biesheuvel)
When KASLR is enabled (CONFIG_RANDOMIZE_BASE=y), and entropy has been provided by the bootloader, randomize the placement of RAM inside the linear region if sufficient space is available. For instance, on a 4KB granule/3 levels kernel, the linear region is 256 GB in size, and we can choose any 1 GB aligned offset that is far enough from the top of the address space to fit the distance between the start of the lowest memblock and the top of the highest memblock. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit 031a4213c11a5db475f528c182f7b3858df11db) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: I272de8ee358351d95eacc7dc5f47600adec3e813
2016-09-22  UPSTREAM: arm64: mm: treat memstart_addr as a signed quantity  (Ard Biesheuvel)
Commit c031a4213c11 ("arm64: kaslr: randomize the linear region") implements randomization of the linear region, by subtracting a random multiple of PUD_SIZE from memstart_addr. This causes the virtual mapping of system RAM to move upwards in the linear region, and at the same time causes memstart_addr to assume a value which may be negative if the offset of system RAM in the physical space is smaller than its offset relative to PAGE_OFFSET in the virtual space. Since memstart_addr is effectively an offset now, redefine its type as s64 so that expressions involving shifting or division preserve its sign. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit 020d044f66874eba058ce8264fc550f3eca67879) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: I0482ebc13baaa9005cf372795e656c2417be9d1c
2016-09-22  UPSTREAM: arm64: vmemmap: use virtual projection of linear region  (Ard Biesheuvel)
Commit dd006da21646 ("arm64: mm: increase VA range of identity map") made some changes to the memory mapping code to allow physical memory to reside at an offset that exceeds the size of the virtual mapping. However, since the size of the vmemmap area is proportional to the size of the VA area, but it is populated relative to the physical space, we may end up with the struct page array being mapped outside of the vmemmap region. For instance, on my Seattle A0 box, I can see the following output in the dmesg log. vmemmap : 0xffffffbdc0000000 - 0xffffffbfc0000000 ( 8 GB maximum) 0xffffffbfc0000000 - 0xffffffbfd0000000 ( 256 MB actual) We can fix this by deciding that the vmemmap region is not a projection of the physical space, but of the virtual space above PAGE_OFFSET, i.e., the linear region. This way, we are guaranteed that the vmemmap region is of sufficient size, and we can even reduce the size by half. Cc: <stable@vger.kernel.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit dfd55ad85e4a7fbaa82df12467515ac3c81e8a3e) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: I8112d910f9659941dab6de5b3791f395150c77f1
2016-09-22  BACKPORT: arm64: add support for kernel ASLR  (Ard Biesheuvel)
This adds support for KASLR, based on entropy provided by the bootloader in the /chosen/kaslr-seed DT property. Depending on the size of the address space (VA_BITS) and the page size, the entropy in the virtual displacement is up to 13 bits (16k/2 levels) and up to 25 bits (all 4 levels), with the sidenote that displacements that result in the kernel image straddling a 1GB/32MB/512MB alignment boundary (for 4KB/16KB/64KB granule kernels, respectively) are not allowed, and will be rounded up to an acceptable value. If CONFIG_RANDOMIZE_MODULE_REGION_FULL is enabled, the module region is randomized independently from the core kernel. This makes it less likely that the location of core kernel data structures can be determined by an adversary, but causes all function calls from modules into the core kernel to be resolved via entries in the module PLTs. If CONFIG_RANDOMIZE_MODULE_REGION_FULL is not enabled, the module region is randomized by choosing a page aligned 128 MB region inside the interval [_etext - 128 MB, _stext + 128 MB). This gives between 10 and 14 bits of entropy (depending on page size), independently of the kernel randomization, but still guarantees that modules are within the range of relative branch and jump instructions (with the caveat that, since the module region is shared with other uses of the vmalloc area, modules may need to be loaded further away if the module region is exhausted). Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit f80fb3a3d50843a401dac4b566b3b131da8077a2) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: I3f5fafa4e92e5ff39259d57065541366237eb021
2016-09-22  UPSTREAM: arm64: add support for building vmlinux as a relocatable PIE binary  (Ard Biesheuvel)
This implements CONFIG_RELOCATABLE, which links the final vmlinux image with a dynamic relocation section, allowing the early boot code to perform a relocation to a different virtual address at runtime. This is a prerequisite for KASLR (CONFIG_RANDOMIZE_BASE). Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit 1e48ef7fcc374051730381a2a05da77eb4eafdb0) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: If02e065722d438f85feb62240fc230e16f58e912
2016-09-22  UPSTREAM: arm64: switch to relative exception tables  (Ard Biesheuvel)
Instead of using absolute addresses for both the exception location and the fixup, use offsets relative to the exception table entry values. Not only does this cut the size of the exception table in half, it is also a prerequisite for KASLR, since absolute exception table entries are subject to dynamic relocation, which is incompatible with the sorting of the exception table that occurs at build time. This patch also introduces the _ASM_EXTABLE preprocessor macro (which exists on x86 as well) and its _asm_extable assembly counterpart, as shorthands to emit exception table entries. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit 6c94f27ac847ff8ef15b3da5b200574923bd6287) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: Icedda8ee8c32843c439765783816d7d71ca0073a
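For context, a relative entry stores 32-bit offsets measured from the entry's own fields, so the table needs no load-time relocation and shrinks to half the size; a hedged sketch of the representation and of how absolute addresses are recovered (helper names illustrative):

    #include <linux/types.h>

    /* Sketch of a relative exception table entry (illustrative layout). */
    struct exception_table_entry {
            int insn;       /* offset of the faulting instruction, from &insn */
            int fixup;      /* offset of the fixup code, from &fixup          */
    };

    static inline unsigned long ex_to_insn(const struct exception_table_entry *x)
    {
            return (unsigned long)&x->insn + x->insn;
    }

    static inline unsigned long ex_to_fixup(const struct exception_table_entry *x)
    {
            return (unsigned long)&x->fixup + x->fixup;
    }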
2016-09-22  UPSTREAM: extable: add support for relative extables to search and sort routines  (Ard Biesheuvel)
This adds support to the generic search_extable() and sort_extable() implementations for dealing with exception table entries whose fields contain relative offsets rather than absolute addresses. Acked-by: Helge Deller <deller@gmx.de> Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> Acked-by: H. Peter Anvin <hpa@linux.intel.com> Acked-by: Tony Luck <tony.luck@intel.com> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit a272858a3c1ecd4a935ba23c66668f81214bd110) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: I9d144d351d547c49bf3203a69dfff3cb71a51177
2016-09-22  UPSTREAM: scripts/sortextable: add support for ET_DYN binaries  (Ard Biesheuvel)
Add support to scripts/sortextable for handling relocatable (PIE) executables, whose ELF type is ET_DYN, not ET_EXEC. Other than adding support for the new type, no changes are needed. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit 7b957b6e603623ef8b2e8222fa94b976df613fa2) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: If55296ef4934b99c38ceb5acbd7c4a7fb23f24c1
2016-09-22  UPSTREAM: arm64: futex.h: Add missing PAN toggling  (James Morse)
futex.h's futex_atomic_cmpxchg_inatomic() does not use the __futex_atomic_op() macro and needs its own PAN toggling. This was missed when the feature was implemented. Fixes: 338d4f49d6f ("arm64: kernel: Add support for Privileged Access Never") Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit 811d61e384e24759372bb3f01772f3744b0a8327) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: I6e7b338a1af17b784d4196101422c3acee3b88ed
2016-09-22  UPSTREAM: arm64: make asm/elf.h available to asm files  (Ard Biesheuvel)
This reshuffles some code in asm/elf.h and puts a #ifndef __ASSEMBLY__ around its C definitions so that the CPP defines can be used in asm source files as well. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit 4a2e034e5cdadde4c712f79bdd57d1455c76a3db) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: Ic499e950d2ef297d10848862a6dfa07b90887f4c
2016-09-22  UPSTREAM: arm64: avoid dynamic relocations in early boot code  (Ard Biesheuvel)
Before implementing KASLR for arm64 by building a self-relocating PIE executable, we have to ensure that values we use before the relocation routine is executed are not subject to dynamic relocation themselves. This applies not only to virtual addresses, but also to values that are supplied by the linker at build time and relocated using R_AARCH64_ABS64 relocations. So instead, use assemble time constants, or force the use of static relocations by folding the constants into the instructions. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit 2bf31a4a05f5b00f37d65ba029d36a0230286cb7) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: Icce0176591e3c0ae444e1ea54258efe677933c5b
2016-09-22  UPSTREAM: arm64: avoid R_AARCH64_ABS64 relocations for Image header fields  (Ard Biesheuvel)
Unfortunately, the current way of using the linker to emit build time constants into the Image header will no longer work once we switch to the use of PIE executables. The reason is that such constants are emitted into the binary using R_AARCH64_ABS64 relocations, which are resolved at runtime, not at build time, and the places targeted by those relocations will contain zeroes before that. So refactor the endian swapping linker script constant generation code so that it emits the upper and lower 32-bit words separately. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit 6ad1fe5d9077a1ab40bf74b61994d2e770b00b14) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: Iaa809a0b5fcf628e1e49cd6aaa0f31f31ce95c23
2016-09-22  UPSTREAM: arm64: add support for module PLTs  (Ard Biesheuvel)
This adds support for emitting PLTs at module load time for relative branches that are out of range. This is a prerequisite for KASLR, which may place the kernel and the modules anywhere in the vmalloc area, making it more likely that branch target offsets exceed the maximum range of +/- 128 MB. In this version, I removed the distinction between relocations against .init executable sections and ordinary executable sections. The reason is that it is hardly worth the trouble, given that .init.text usually does not contain that many far branches, and this version now only reserves PLT entry space for jump and call relocations against undefined symbols (since symbols defined in the same module can be assumed to be within +/- 128 MB) For example, the mac80211.ko module (which is fairly sizable at ~400 KB) built with -mcmodel=large gives the following relocation counts: relocs branches unique !local .text 3925 3347 518 219 .init.text 11 8 7 1 .exit.text 4 4 4 1 .text.unlikely 81 67 36 17 ('unique' means branches to unique type/symbol/addend combos, of which !local is the subset referring to undefined symbols) IOW, we are only emitting a single PLT entry for the .init sections, and we are better off just adding it to the core PLT section instead. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit fd045f6cd98ec4953147b318418bd45e441e52a3) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: I1b46bb817e7d16a1b9a394b100c9e5de46c0837c
2016-09-22  UPSTREAM: arm64: move brk immediate argument definitions to separate header  (Ard Biesheuvel)
Instead of reversing the header dependency between asm/bug.h and asm/debug-monitors.h, split off the brk instruction immediate value defines into a new header asm/brk-imm.h, and include it from both. This solves the circular dependency issue that prevents BUG() from being used in some header files, and keeps the definitions together. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit f98deee9a9f8c47d05a0f64d86440882dca772ff) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: Id4827af98ab3d413828c589bc379acecabeff108
2016-09-22  UPSTREAM: arm64: mm: use bit ops rather than arithmetic in pa/va translations  (Ard Biesheuvel)
Since PAGE_OFFSET is chosen such that it cuts the kernel VA space right in half, and since the size of the kernel VA space itself is always a power of 2, we can treat PAGE_OFFSET as a bitmask and replace the additions/subtractions with 'or' and 'and-not' operations. For the comparison against PAGE_OFFSET, a mov/cmp/branch sequence ends up getting replaced with a single tbz instruction. For the additions and subtractions, we save a mov instruction since the mask is folded into the instruction's immediate field. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit 8439e62a15614e8fcd43835d57b7245cd9870dc5) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: I1ea4ef654dd7b7693f8713dab28ca0739b8a2c62
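Concretely, because PAGE_OFFSET sits at the exact midpoint of the VA space and has no low bits set, the translations collapse into mask operations; a hedged sketch of the resulting helpers (shape as described above, not necessarily the verbatim upstream macros):

    /* Illustrative: PAGE_OFFSET's bits can simply be masked off or OR'd in. */
    #define __virt_to_phys(x)  (((phys_addr_t)(x) & ~PAGE_OFFSET) + PHYS_OFFSET)
    #define __phys_to_virt(x)  ((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)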
2016-09-22  UPSTREAM: arm64: mm: only perform memstart_addr sanity check if DEBUG_VM  (Ard Biesheuvel)
Checking whether memstart_addr has been assigned every time it is referenced adds a branch instruction that may hurt performance if the reference in question occurs on a hot path. So only perform the check if CONFIG_DEBUG_VM=y. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [catalin.marinas@arm.com: replaced #ifdef with VM_BUG_ON] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit a92405f082d43267575444a6927085e4c8a69e4e) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: Ia5f206d9a2dbbdbfc3f05fe985d4eca309f0d889