path: root/arch
2016-10-12FROMLIST: arm64: Handle faults caused by inadvertent user access with PAN enabledCatalin Marinas
When TTBR0_EL1 is set to the reserved page, an erroneous kernel access to user space would generate a translation fault. This patch adds the checks for the software-set PSR_PAN_BIT to emulate a permission fault and report it accordingly. This patch also updates the description of the synchronous external aborts on translation table walks. Cc: Will Deacon <will.deacon@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Kees Cook <keescook@chromium.org> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Change-Id: I623113fc8bf6d5f023aeec7a0640b62a25ef8420 (cherry picked from commit be3db9340c8011d22f06715339b66bcbbd4893bd) Signed-off-by: Sami Tolvanen <samitolvanen@google.com> [Conflict fixes in lsk-v4.4-android topic branch] Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
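A hedged sketch of the check this adds (the helper name matches the arm64 fault-handling code; the exact FROMLIST diff may differ):

  /*
   * Sketch: with software TTBR0-PAN, a kernel access to user space hits the
   * reserved (zeroed) page and faults as a translation fault; treat it as a
   * permission fault when the saved pstate has the software PSR_PAN_BIT set.
   */
  static bool is_permission_fault(unsigned int esr, struct pt_regs *regs)
  {
          unsigned int ec       = ESR_ELx_EC(esr);
          unsigned int fsc_type = esr & ESR_ELx_FSC_TYPE;

          if (IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN) &&
              fsc_type == ESR_ELx_FSC_FAULT && (regs->pstate & PSR_PAN_BIT))
                  return true;

          return ec == ESR_ELx_EC_DABT_CUR && fsc_type == ESR_ELx_FSC_PERM;
  }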
2016-10-12FROMLIST: arm64: Disable TTBR0_EL1 during normal kernel executionCatalin Marinas
When the TTBR0 PAN feature is enabled, the kernel entry points need to disable access to TTBR0_EL1. The PAN status of the interrupted context is stored as part of the saved pstate, reusing the PSR_PAN_BIT (22). Restoring access to TTBR0_PAN is done on exception return if returning to user or returning to a context where PAN was disabled. Context switching via switch_mm() must defer the update of TTBR0_EL1 until a return to user or an explicit uaccess_enable() call. Special care needs to be taken for two cases where TTBR0_EL1 is set outside the normal kernel context switch operation: EFI run-time services (via efi_set_pgd) and CPU suspend (via cpu_(un)install_idmap). Code has been added to avoid deferred TTBR0_EL1 switching as in switch_mm() and restore the reserved TTBR0_EL1 when uninstalling the special TTBR0_EL1. This patch also removes a stale comment on the switch_mm() function. Cc: Will Deacon <will.deacon@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Kees Cook <keescook@chromium.org> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Change-Id: Id1198cf1cde022fad10a94f95d698fae91d742aa (cherry picked from commit d26cfd64c973b31f73091c882e07350e14fdd6c9) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
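A minimal sketch of the deferred switch described above (system_uses_ttbr0_pan() and the thread_info ttbr0 field follow the naming in this series; treat the details as illustrative):

  /*
   * switch_mm() no longer writes TTBR0_EL1 directly; it stashes the physical
   * address of the next mm's pgd, which is installed on the next return to
   * user or on an explicit uaccess_enable() call.
   */
  static inline void update_saved_ttbr0(struct task_struct *tsk,
                                        struct mm_struct *mm)
  {
          if (system_uses_ttbr0_pan())
                  task_thread_info(tsk)->ttbr0 = virt_to_phys(mm->pgd);
  }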
2016-10-12FROMLIST: arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1Catalin Marinas
This patch adds the uaccess macros/functions to disable access to user space by setting TTBR0_EL1 to a reserved zeroed page. Since the value written to TTBR0_EL1 must be a physical address, for simplicity this patch introduces a reserved_ttbr0 page at a constant offset from swapper_pg_dir. The uaccess_disable code uses the ttbr1_el1 value adjusted by the reserved_ttbr0 offset. Enabling access to user is done by restoring TTBR0_EL1 with the value from the struct thread_info ttbr0 variable. Interrupts must be disabled during the uaccess_ttbr0_enable code to ensure the atomicity of the thread_info.ttbr0 read and TTBR0_EL1 write. This patch also moves the get_thread_info asm macro from entry.S to assembler.h for reuse in the uaccess_ttbr0_* macros. Cc: Will Deacon <will.deacon@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Kees Cook <keescook@chromium.org> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Change-Id: Idf09a870b8612dce23215bce90d88781f0c0c3aa (cherry picked from commit 940d37234182d2675ab8ab46084840212d735018) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
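In C form the mechanism looks roughly like this (a sketch; RESERVED_TTBR0_OFFSET is an illustrative name for the fixed distance between swapper_pg_dir, whose physical address sits in TTBR1_EL1, and the reserved_ttbr0 page — the series itself also provides asm-macro variants):

  static inline void uaccess_ttbr0_disable(void)
  {
          unsigned long ttbr = read_sysreg(ttbr1_el1) + RESERVED_TTBR0_OFFSET;

          write_sysreg(ttbr, ttbr0_el1);  /* user accesses now fault */
          isb();
  }

  static inline void uaccess_ttbr0_enable(void)
  {
          unsigned long flags;

          /*
           * IRQs off: the thread_info.ttbr0 read and the TTBR0_EL1 write
           * must not be split by a context switch.
           */
          local_irq_save(flags);
          write_sysreg(current_thread_info()->ttbr0, ttbr0_el1);
          isb();
          local_irq_restore(flags);
  }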
2016-10-12FROMLIST: arm64: Factor out TTBR0_EL1 post-update workaround into a specific asm macroCatalin Marinas
This patch takes the errata workaround code out of cpu_do_switch_mm into a dedicated post_ttbr0_update_workaround macro which will be reused in a subsequent patch. Cc: Will Deacon <will.deacon@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Kees Cook <keescook@chromium.org> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Change-Id: I69f94e4c41046bd52ca9340b72d97bfcf955b586 (cherry picked from commit 4398e6a1644373a4c2f535f4153c8378d0914630) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12FROMLIST: arm64: Factor out PAN enabling/disabling into separate uaccess_* macrosCatalin Marinas
This patch moves the directly coded alternatives for turning PAN on/off into separate uaccess_{enable,disable} macros or functions. The asm macros take a few arguments which will be used in subsequent patches. Note that any (unlikely) access that the compiler might generate between uaccess_enable() and uaccess_disable(), other than those explicitly specified by the user access code, will not be protected by PAN. Cc: Will Deacon <will.deacon@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Kees Cook <keescook@chromium.org> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Change-Id: Ic3fddd706400c8798f57456c56361d84d234f6ef (cherry picked from commit a4820644c627b82cbc865f2425bb788c94743b16) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
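A sketch of the C flavour of these helpers: PSTATE.PAN is flipped via runtime-patched alternatives, so CPUs without the feature execute a plain NOP (simplified; the real macros also take the extra arguments mentioned above):

  static inline void uaccess_enable(void)
  {
          /* clear PAN: kernel may access user mappings */
          asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,
                          CONFIG_ARM64_PAN));
  }

  static inline void uaccess_disable(void)
  {
          /* set PAN: kernel accesses to user mappings fault again */
          asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,
                          CONFIG_ARM64_PAN));
  }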
2016-10-12UPSTREAM: arm64: Handle el1 synchronous instruction aborts cleanlyLaura Abbott
Executing from a non-executable area gives an ugly message:
  lkdtm: Performing direct entry EXEC_RODATA
  lkdtm: attempting ok execution at ffff0000084c0e08
  lkdtm: attempting bad execution at ffff000008880700
  Bad mode in Synchronous Abort handler detected on CPU2, code 0x8400000e -- IABT (current EL)
  CPU: 2 PID: 998 Comm: sh Not tainted 4.7.0-rc2+ #13
  Hardware name: linux,dummy-virt (DT)
  task: ffff800077e35780 ti: ffff800077970000 task.ti: ffff800077970000
  PC is at lkdtm_rodata_do_nothing+0x0/0x8
  LR is at execute_location+0x74/0x88
The 'IABT (current EL)' indicates the error but it's a bit cryptic without knowledge of the ARM ARM. There is also no indication of the specific address which triggered the fault. The increase in kernel page permissions makes hitting this case more likely as well. Handling the case in the vectors gives a much more familiar looking error message:
  lkdtm: Performing direct entry EXEC_RODATA
  lkdtm: attempting ok execution at ffff0000084c0840
  lkdtm: attempting bad execution at ffff000008880680
  Unable to handle kernel paging request at virtual address ffff000008880680
  pgd = ffff8000089b2000
  [ffff000008880680] *pgd=00000000489b4003, *pud=0000000048904003, *pmd=0000000000000000
  Internal error: Oops: 8400000e [#1] PREEMPT SMP
  Modules linked in:
  CPU: 1 PID: 997 Comm: sh Not tainted 4.7.0-rc1+ #24
  Hardware name: linux,dummy-virt (DT)
  task: ffff800077f9f080 ti: ffff800008a1c000 task.ti: ffff800008a1c000
  PC is at lkdtm_rodata_do_nothing+0x0/0x8
  LR is at execute_location+0x74/0x88
Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Change-Id: Ifba74589ba2cf05b28335d4fd3e3140ef73668db (cherry picked from commit 9adeb8e72dbfe976709df01e259ed556ee60e779) Signed-off-by: Sami Tolvanen <samitolvanen@google.com> [Conflict fixes in lsk-v4.4-android topic branch] Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
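The helper the upstream patch adds for routing these aborts into the normal fault path (ESR_ELx_EC() is the accessor introduced by a commit further down this log):

  static inline bool is_el1_instruction_abort(unsigned int esr)
  {
          return ESR_ELx_EC(esr) == ESR_ELx_EC_IABT_CUR;
  }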
2016-10-12UPSTREAM: arm64: include alternative handling in dcache_by_line_opAndre Przywara
The newly introduced dcache_by_line_op macro is used in at least one place at the moment to issue a "dc cvau" instruction, which is affected by ARM errata 819472, 826319, 827319 and 824069. Change the macro to allow for alternative patching in there to protect affected Cortex-A53 cores. Signed-off-by: Andre Przywara <andre.przywara@arm.com> [catalin.marinas@arm.com: indentation fixups] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Change-Id: I450594dc311b09b6b832b707a9abb357608cc6e4 (cherry picked from commit 823066d9edcdfe4cedb06216c2b1f91efaf68a87) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12UPSTREAM: arm64: fix "dc cvau" cache operation on errata-affected coreAndre Przywara
The ARM errata 819472, 826319, 827319 and 824069 for affected Cortex-A53 cores require promoting "dc cvau" instructions to "dc civac" as well. Mark the usage of the instruction in __flush_cache_user_range so that it is also covered by our alternative patching efforts. For that we introduce an assembly macro which deals with alternatives while still tagging the instructions as USER. Signed-off-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Change-Id: If5e7933ba32331b2aa28fc5d9e019649452f0f6c (cherry picked from commit 290622efc76ece22ef76a30bf117755891ab27f6) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12UPSTREAM: Revert "arm64: alternatives: add enable parameter to conditional asm macros"Andre Przywara
Commit 77ee306c0aea9 ("arm64: alternatives: add enable parameter to conditional asm macros") extended the alternative assembly macros. Unfortunately this does not really work as one would expect: the enable parameter correctly protects the alternative section magic, but not the actual code sequences. This results in having both the original instruction(s) _and_ the alternative ones if enable is false. Since there is no user of these macros anyway, just revert it. This reverts commit 77ee306c0aea9a219daec256ad25982944affef8. Signed-off-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Change-Id: I608104891335dfa2dacdb364754ae2658088ddf2 (cherry picked from commit b82bfa4793cd0f8fde49b85e0ad66906682e7447) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12UPSTREAM: arm64: Add new asm macro copy_pageGeoff Levand
Kexec and hibernate need to copy pages of memory, but may not have all of the kernel mapped, and are unable to call copy_page(). Add a simplistic copy_page() macro that can be inlined in these situations. lib/copy_page.S provides a bigger, better version, but uses more registers. Signed-off-by: Geoff Levand <geoff@infradead.org> [Changed asm label to 9998, added commit message] Signed-off-by: James Morse <james.morse@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Change-Id: If23a454e211b1f57f8ba1a2a00b44dabf4b82932 (cherry picked from commit 5003dbde45961dd7ab3d8a09ab9ad8bcb604db40) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
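The macro is small enough to quote in roughly its upstream form (a sketch; compare lib/copy_page.S for the optimised variant):

  .macro copy_page dest:req src:req t1:req t2:req t3:req t4:req t5:req t6:req t7:req t8:req
  9998: ldp     \t1, \t2, [\src]
        ldp     \t3, \t4, [\src, #16]
        ldp     \t5, \t6, [\src, #32]
        ldp     \t7, \t8, [\src, #48]
        add     \src, \src, #64
        stnp    \t1, \t2, [\dest]
        stnp    \t3, \t4, [\dest, #16]
        stnp    \t5, \t6, [\dest, #32]
        stnp    \t7, \t8, [\dest, #48]
        add     \dest, \dest, #64
        tst     \src, #(PAGE_SIZE - 1)
        b.ne    9998b
  .endm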
2016-10-12UPSTREAM: arm64: kill ESR_LNX_EXECMark Rutland
Currently we treat ESR_EL1 bit 24 as software-defined for distinguishing instruction aborts from data aborts, but this bit is architecturally RES0 for instruction aborts, and could be allocated for an arbitrary purpose in future. Additionally, we hard-code the value in entry.S without the mnemonic, making the code difficult to understand. Instead, remove ESR_LNX_EXEC, and distinguish aborts based on the esr, which we already pass to the sole use of ESR_LNX_EXEC. A new helper, is_el0_instruction_abort() is added to make the logic clear. Any instruction aborts taken from EL1 will already have been handled by bad_mode, so we need not handle that case in the helper. For consistency, the existing permission_fault helper is renamed to is_permission_fault, and the return type is changed to bool. There should be no functional changes as the return value was a boolean expression, and the result is only used in another boolean expression. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Dave P Martin <dave.martin@arm.com> Cc: Huang Shijie <shijie.huang@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Change-Id: Iaf66fa5f3b13cf985b11a3b0a40c4333fe9ef833 (cherry picked from commit 541ec870ef31433018d245614254bd9d810a9ac3) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
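The new helper is a one-liner on top of the EC accessor:

  static inline bool is_el0_instruction_abort(unsigned int esr)
  {
          return ESR_ELx_EC(esr) == ESR_ELx_EC_IABT_LOW;
  }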
2016-10-12UPSTREAM: arm64: add macro to extract ESR_ELx.ECMark Rutland
Several places open-code extraction of the EC field from an ESR_ELx value, in subtly different ways. This is unfortunate duplication and variation, and the precise logic used to extract the field is a distraction. This patch adds a new macro, ESR_ELx_EC(), to extract the EC field from an ESR_ELx value in a consistent fashion. Existing open-coded extractions in core arm64 code are moved over to the new helper. KVM code is left as-is for the moment. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Huang Shijie <shijie.huang@arm.com> Cc: Dave P Martin <dave.martin@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Change-Id: Ib634a4795277d243fce5dd30b139e2ec1465bee9 (cherry picked from commit 275f344bec51e9100bae81f3cc8c6940bbfb24c0) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
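The accessor itself, shown with the pre-existing shift/mask definitions from asm/esr.h for context:

  #define ESR_ELx_EC_SHIFT        (26)
  #define ESR_ELx_EC_MASK         (UL(0x3F) << ESR_ELx_EC_SHIFT)
  #define ESR_ELx_EC(esr)         (((esr) & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT)

Call sites then read uniformly, e.g. switch (ESR_ELx_EC(esr)) { ... }, instead of each open-coding the shift and mask.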
2016-10-12UPSTREAM: arm64: mm: mark fault_info table constMark Rutland
Unlike the debug_fault_info table, we never intentionally alter the fault_info table at runtime, and all derived pointers are treated as const currently. Make the table const so that it can be placed in .rodata and protected from unintentional writes, as we do for the syscall tables. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Change-Id: I3fb0bb55427835c165cc377d8dc2a3fa9e6e950d (cherry picked from commit bbb1681ee3653bdcfc6a4ba31902738118311fd4) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12UPSTREAM: arm64: fix dump_instr when PAN and UAO are in useMark Rutland
If the kernel is set to show unhandled signals, and a user task does not handle a SIGILL as a result of an instruction abort, we will attempt to log the offending instruction with dump_instr before killing the task. We use dump_instr to log the encoding of the offending userspace instruction. However, dump_instr is also used to dump instructions from kernel space, and internally always switches to KERNEL_DS before dumping the instruction with get_user. When both PAN and UAO are in use, reading a user instruction via get_user while in KERNEL_DS will result in a permission fault, which leads to an Oops. As we have regs corresponding to the context of the original instruction abort, we can inspect this and only flip to KERNEL_DS if the original abort was taken from the kernel, avoiding this issue. At the same time, remove the redundant (and incorrect) comments regarding the order dump_mem and dump_instr are called in. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Robin Murphy <robin.murphy@arm.com> Cc: <stable@vger.kernel.org> #4.6+ Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reported-by: Vladimir Murzin <vladimir.murzin@arm.com> Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> Fixes: 57f4959bad0a154a ("arm64: kernel: Add support for User Access Override") Signed-off-by: Will Deacon <will.deacon@arm.com> Change-Id: I54c00f3598d227a7e2767b357cb453075dcce7bd (cherry picked from commit c5cea06be060f38e5400d796e61cfc8c36e52924) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
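The shape of the fix, as a sketch following the upstream commit (the actual instruction dumping is split into a __dump_instr() helper):

  static void __dump_instr(const char *lvl, struct pt_regs *regs);

  static void dump_instr(const char *lvl, struct pt_regs *regs)
  {
          if (!user_mode(regs)) {
                  /* Fault came from the kernel: KERNEL_DS is legitimate. */
                  mm_segment_t fs = get_fs();

                  set_fs(KERNEL_DS);
                  __dump_instr(lvl, regs);
                  set_fs(fs);
          } else {
                  /* User fault: read the instruction with the user's fs,
                   * so PAN/UAO do not turn get_user() into an Oops. */
                  __dump_instr(lvl, regs);
          }
  }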
2016-10-12BACKPORT: arm64: Fold proc-macros.S into assembler.hGeoff Levand
To allow the assembler macros defined in arch/arm64/mm/proc-macros.S to be used outside the mm code move the contents of proc-macros.S to asm/assembler.h. Also, delete proc-macros.S, and fix up all references to proc-macros.S. Signed-off-by: Geoff Levand <geoff@infradead.org> Acked-by: Pavel Machek <pavel@ucw.cz> [rebased, included dcache_by_line_op] Signed-off-by: James Morse <james.morse@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Change-Id: I09e694442ffd25dcac864216d0369c9727ad0090 (cherry picked from commit 7b7293ae3dbd0a1965bf310b77fed5f9bb37bb93) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12UPSTREAM: arm64: choose memstart_addr based on minimum sparsemem section alignmentArd Biesheuvel
This redefines ARM64_MEMSTART_ALIGN in terms of the minimal alignment required by sparsemem vmemmap. This comes down to using 1 GB for all translation granules if CONFIG_SPARSEMEM_VMEMMAP is enabled. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Change-Id: I05b8bc6ab24f677f263b09d7c31fcce4f21269b1 (cherry picked from commit 06e9bf2fd9b372bc1c757995b6ca1cfab0720acb) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12UPSTREAM: arm64/mm: ensure memstart_addr remains sufficiently alignedArd Biesheuvel
After choosing memstart_addr to be the highest multiple of ARM64_MEMSTART_ALIGN less than or equal to the first usable physical memory address, we clip the memblocks to the maximum size of the linear region. Since the kernel may be high up in memory, we take care not to clip the kernel itself, which means we have to clip some memory from the bottom if this occurs, to ensure that the distance between the first and the last usable physical memory address can be covered by the linear region. However, we fail to update memstart_addr if this clipping from the bottom occurs, which means that we may still end up with virtual addresses that wrap into the userland range. So increment memstart_addr as appropriate to prevent this from happening. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Change-Id: I72306207cc46a30b780f5e00b9ef23aa8409867e (cherry picked from commit 2958987f5da2ebcf6a237c5f154d7e3340e60945) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
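A hedged sketch of the adjustment in arm64_memblock_init() (names from arch/arm64/mm/init.c; the exact form may differ in the backport):

  /*
   * If we had to clip memory from the bottom to fit the linear region,
   * move memstart_addr up with it, keeping it suitably aligned, so that
   * linear-map virtual addresses cannot wrap into the userland range.
   */
  if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {
          memstart_addr = round_up(memblock_end_of_DRAM() - linear_region_size,
                                   ARM64_MEMSTART_ALIGN);
          memblock_remove(0, memstart_addr);
  }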
2016-10-12UPSTREAM: arm64/kernel: fix incorrect EL0 check in inv_entry macroArd Biesheuvel
The implementation of macro inv_entry refers to its 'el' argument without the required leading backslash, which results in an undefined symbol 'el' to be passed into the kernel_entry macro rather than the index of the exception level as intended. This undefined symbol strangely enough does not result in build failures, although it is visible in vmlinux:
  $ nm -n vmlinux | head
                   U el
  0000000000000000 A _kernel_flags_le_hi32
  0000000000000000 A _kernel_offset_le_hi32
  0000000000000000 A _kernel_size_le_hi32
  000000000000000a A _kernel_flags_le_lo32
  .....
However, it does result in incorrect code being generated for invalid exceptions taken from EL0, since the argument check in kernel_entry assumes EL1 if its argument does not equal '0'. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Change-Id: I406c1207682a4dff3054a019c26fdf1310b08ed1 (cherry picked from commit b660950c60a7278f9d8deb7c32a162031207c758) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
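The fix is a single missing backslash in entry.S; roughly:

   .macro  inv_entry, el, reason, regsize = 64
  -  kernel_entry el, \regsize
  +  kernel_entry \el, \regsize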
2016-10-12UPSTREAM: arm64: Add macros to read/write system registersMark Rutland
Rather than crafting custom macros for reading/writing each system register, provide generic accessors, read_sysreg and write_sysreg, for this purpose. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Suzuki Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Change-Id: I1d6cf948bc6660dfd096ff5a18eba682941098c1 (cherry picked from commit 3600c2fdc09a43a30909743569e35a29121602ed) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
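The accessors are thin wrappers around mrs/msr, roughly as merged upstream (__stringify comes from linux/stringify.h):

  #define read_sysreg(r) ({                                       \
          u64 __val;                                              \
          asm volatile("mrs %0, " __stringify(r) : "=r" (__val)); \
          __val;                                                  \
  })

  #define write_sysreg(v, r) do {                                 \
          u64 __val = (u64)(v);                                   \
          asm volatile("msr " __stringify(r) ", %0"               \
                       : : "r" (__val));                          \
  } while (0)

Call sites then read naturally, e.g. u64 esr = read_sysreg(esr_el1); or write_sysreg(ttbr, ttbr0_el1);.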
2016-10-12UPSTREAM: arm64/efi: refactor EFI init and runtime code for reuse by 32-bit ARMArd Biesheuvel
This refactors the EFI init and runtime code that will be shared between arm64 and ARM so that it can be built for both archs. Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Change-Id: Ieee70bbe117170d2054a9c82c4f1a8143b7e302b (cherry picked from commit f7d924894265794f447ea799dd853400749b5a22) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12UPSTREAM: arm64/efi: split off EFI init and runtime code for reuse by 32-bit ARMArd Biesheuvel
This splits off the early EFI init and runtime code that
 - discovers the EFI params and the memory map from the FDT, and installs the memblocks and config tables.
 - prepares and installs the EFI page tables so that UEFI Runtime Services can be invoked at the virtual address installed by the stub.
This will allow it to be reused for 32-bit ARM. Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Change-Id: I143e4b38a5426f70027eff6cc5f732ac370ae69d (cherry picked from commit e5bc22a42e4d46cc203fdfb6d2c76202b08666a0) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12UPSTREAM: arm64/efi: mark UEFI reserved regions as MEMBLOCK_NOMAPArd Biesheuvel
Change the EFI memory reservation logic to use memblock_mark_nomap() rather than memblock_reserve() to mark UEFI reserved regions as occupied. In addition to reserving them against allocations done by memblock, this will also prevent them from being covered by the linear mapping. Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Change-Id: Ia3ce78f40f8d41a9afdd42238fe9cbfd81bbff08 (cherry picked from commit 4dffbfc48d65e5d8157a634fd670065d237a9377) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12BACKPORT: arm64: only consider memblocks with NOMAP cleared for linear mappingArd Biesheuvel
Take the new memblock attribute MEMBLOCK_NOMAP into account when deciding whether a certain region is or should be covered by the kernel direct mapping. Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Change-Id: Id7346a09bb3aee5e9a5ef8812251f80cf8265532 (cherry picked from commit 68709f45385aeddb0ca96a060c0c8259944f321b) Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
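Combined with the previous entry, the effect on map_mem() is a simple skip; a sketch of the v4.4-era loop:

  static void __init map_mem(void)
  {
          struct memblock_region *reg;

          /* map all the memory banks, except regions marked NOMAP */
          for_each_memblock(memory, reg) {
                  phys_addr_t start = reg->base;
                  phys_addr_t end = start + reg->size;

                  if (start >= end)
                          break;
                  if (memblock_is_nomap(reg))
                          continue;       /* e.g. UEFI reserved regions */

                  __map_memblock(start, end);
          }
  }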
2016-10-12UPSTREAM: arm64: spinlock: fix spin_unlock_wait for LSE atomicsWill Deacon
Commit d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers") fixed spin_unlock_wait for LL/SC-based atomics under the premise that the LSE atomics (in particular, the LDADDA instruction) are indivisible. Unfortunately, these instructions are only indivisible when used with the -AL (full ordering) suffix and, consequently, the same issue can theoretically be observed with LSE atomics, where a later (in program order) load can be speculated before the write portion of the atomic operation. This patch fixes the issue by performing a CAS of the lock once we've established that it's unlocked, in much the same way as the LL/SC code. Fixes: d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers") Signed-off-by: Will Deacon <will.deacon@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit 3a5facd09da848193f5bcb0dea098a298bc1a29d) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: Icedaa4c508784bf43d0b5787586480fd668ccc49
2016-10-12UPSTREAM: arm64: avoid TLB conflict with CONFIG_RANDOMIZE_BASEMark Rutland
When CONFIG_RANDOMIZE_BASE is selected, we modify the page tables to remap the kernel at a newly-chosen VA range. We do this with the MMU disabled, but do not invalidate TLBs prior to re-enabling the MMU with the new tables. Thus the old mapping entries may still live in TLBs, and we risk violating Break-Before-Make requirements, leading to TLB conflicts and/or other issues. We invalidate TLBs when we uninstall the idmap in early setup code, but prior to this we are subject to issues relating to the Break-Before-Make violation. Avoid these issues by invalidating the TLBs before the new mappings can be used by the hardware. Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR") Cc: <stable@vger.kernel.org> # 4.6+ Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit fd363bd417ddb6103564c69cfcbd92d9a7877431) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: I6c23ce55cdd8b66587b6787b8f28df8535e39f24
2016-10-12UPSTREAM: arm64: Only select ARM64_MODULE_PLTS if MODULES=yCatalin Marinas
Selecting CONFIG_RANDOMIZE_BASE=y and CONFIG_MODULES=n fails to build the module PLTs support:
  CC      arch/arm64/kernel/module-plts.o
  /work/Linux/linux-2.6-aarch64/arch/arm64/kernel/module-plts.c: In function ‘module_emit_plt_entry’:
  /work/Linux/linux-2.6-aarch64/arch/arm64/kernel/module-plts.c:32:49: error: dereferencing pointer to incomplete type ‘struct module’
This patch selects ARM64_MODULE_PLTS conditionally only if MODULES is enabled. Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR") Cc: <stable@vger.kernel.org> # 4.6+ Reported-by: Jeff Vander Stoep <jeffv@google.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Bug: 30369029 Patchset: kaslr-arm64-4.4 (cherry picked from commit b9c220b589daaf140f5b8ebe502c98745b94e65c) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: I446cb3aa78f1c64b5aa1e2e90fda13f7d46cac33
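In Kconfig terms the change amounts to one conditional select (sketch of the relevant fragment of arch/arm64/Kconfig):

  config RANDOMIZE_BASE
          bool "Randomize the address of the kernel image"
          select ARM64_MODULE_PLTS if MODULES     # was: select ARM64_MODULE_PLTS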
2016-10-12UPSTREAM: arch/arm/include/asm/pgtable-3level.h: add pmd_mkclean for THPMinchan Kim
MADV_FREE needs pmd_dirty and pmd_mkclean for detecting recent overwrite of the contents since MADV_FREE syscall is called for THP page. This patch adds pmd_mkclean for THP page MADV_FREE support. Signed-off-by: Minchan Kim <minchan@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Russell King <rmk@arm.linux.org.uk> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: "Kirill A. Shutemov" <kirill@shutemov.name> Cc: Shaohua Li <shli@kernel.org> Cc: <yalin.wang2010@gmail.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Chen Gang <gang.chen.5i5j@gmail.com> Cc: Chris Zankel <chris@zankel.net> Cc: Daniel Micay <danielmicay@gmail.com> Cc: Darrick J. Wong <darrick.wong@oracle.com> Cc: David S. Miller <davem@davemloft.net> Cc: Helge Deller <deller@gmx.de> Cc: Hugh Dickins <hughd@google.com> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Jason Evans <je@fb.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Kirill A. Shutemov <kirill@shutemov.name> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Michael Kerrisk <mtk.manpages@gmail.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mika Penttil <mika.penttila@nextfour.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Richard Henderson <rth@twiddle.net> Cc: Rik van Riel <riel@redhat.com> Cc: Roland Dreier <roland@kernel.org> Cc: Shaohua Li <shli@kernel.org> Cc: Wu Fengguang <fengguang.wu@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Bug: 30369029 Patchset: rework-pagetable (cherry picked from commit 44842045e4baaf406db2954dd2e07152fa61528d) Signed-off-by: Jeff Vander Stoep <jeffv@google.com> Change-Id: I59d53667aa8c40dea4f18fc58acc7d27f4a85a04
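pgtable-3level.h already defines its pmd bit helpers through a PMD_BIT_FUNC macro, so the addition is essentially one line (a sketch; L_PMD_SECT_DIRTY is the Linux software dirty bit used in that header):

  #define PMD_BIT_FUNC(fn, op) \
  static inline pmd_t pmd_##fn(pmd_t pmd) { pmd_val(pmd) op; return pmd; }

  PMD_BIT_FUNC(mkclean,   &= ~L_PMD_SECT_DIRTY);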
2016-10-11Merge branch 'linux-linaro-lsk-v4.4' into linux-linaro-lsk-v4.4-androidAlex Shi
Conflicts: kernel/cpuset.c
2016-10-05Merge remote-tracking branch 'lts/linux-4.4.y' into linux-linaro-lsk-v4.4Alex Shi
2016-10-05Merge remote-tracking branch 'lts/linux-4.4.y' into linux-linaro-lsk-v4.4Alex Shi
Conflicts: resolve the conflict on pax_copy for arch/ia64/include/asm/uaccess.h arch/powerpc/include/asm/uaccess.h arch/sparc/include/asm/uaccess_32.h
2016-10-05Merge branch 'v4.4/topic/mm-kaslr-pax_usercopy' into linux-linaro-lsk-v4.4Alex Shi
2016-10-05usercopy: fold builtin_const check into inline functionKees Cook
Instead of having each caller of check_object_size() need to remember to check for a const size parameter, move the check into check_object_size() itself. This actually matches the original implementation in PaX, though this commit cleans up the now-redundant builtin_const() calls in the various architectures. Signed-off-by: Kees Cook <keescook@chromium.org> (cherry picked from commit 81409e9e28058811c9ea865345e1753f8f677e44) Signed-off-by: Alex Shi <alex.shi@linaro.org>
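After the fold, callers no longer repeat the constant-size test; check_object_size() itself becomes (per the upstream commit):

  static __always_inline void check_object_size(const void *ptr, unsigned long n,
                                                bool to_user)
  {
          /* Constant-size copies are validated at compile time; only
           * runtime-sized copies need the object-size check. */
          if (!__builtin_constant_p(n))
                  __check_object_size(ptr, n, to_user);
  }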
2016-09-30MIPS: paravirt: Fix undefined reference to smp_bootstrapMatt Redfearn
commit 951c39cd3bc0aedf67fbd8fb4b9380287e6205d1 upstream. If the paravirt machine is compiled without CONFIG_SMP, the following linker error occurs:
  arch/mips/kernel/head.o: In function `kernel_entry':
  (.ref.text+0x10): undefined reference to `smp_bootstrap'
due to the kernel entry macro always including SMP startup code. Wrap this code in CONFIG_SMP to fix the error. Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/14212/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-09-30MIPS: Add a missing ".set pop" in an early commitHuacai Chen
commit 3cbc6fc9c99f1709203711f125bc3b79487aba06 upstream. Commit 842dfc11ea9a21 ("MIPS: Fix build with binutils 2.24.51+") is missing a ".set pop" in the fpu_restore_16even macro, so add it. Signed-off-by: Huacai Chen <chenhc@lemote.com> Acked-by: Manuel Lauss <manuel.lauss@gmail.com> Cc: Steven J . Hill <Steven.Hill@caviumnetworks.com> Cc: Fuxin Zhang <zhangfx@lemote.com> Cc: Zhangjin Wu <wuzhangjin@gmail.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14210/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-09-30MIPS: Avoid a BUG warning during prctl(PR_SET_FP_MODE, ...)Marcin Nowakowski
commit b244614a60ab7ce54c12a9cbe15cfbf8d79d0967 upstream. cpu_has_fpu macro uses smp_processor_id() and is currently executed with preemption enabled, that triggers the warning at runtime. It is assumed throughout the kernel that if any CPU has an FPU, then all CPUs would have an FPU as well, so it is safe to perform the check with preemption enabled - change the code to use raw_ variant of the check to avoid the warning. Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14125/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-09-30MIPS: Remove compact branch policy Kconfig entriesPaul Burton
commit b03c1e3b8eed9026733c473071d1f528358a0e50 upstream. Commit c1a0e9bc885d ("MIPS: Allow compact branch policy to be changed") added Kconfig entries allowing for the compact branch policy used by the compiler for MIPSr6 kernels to be specified. This can be useful for debugging, particularly in systems where compact branches have recently been introduced. Unfortunately mainline gcc 5.x supports MIPSr6 but not the -mcompact-branches compiler flag, leading to MIPSr6 kernels failing to build with gcc 5.x with errors such as:
  mipsel-linux-gnu-gcc: error: unrecognized command line option '-mcompact-branches=optimal'
  make[2]: *** [kernel/bounds.s] Error 1
Fixing this by hiding the Kconfig entry behind another seems to be more hassle than it's worth, as MIPSr6 & compact branches have been around for a while now and if policy does need to be set for debug it can be done easily enough with KCFLAGS. Therefore remove the compact branch policy Kconfig entries & their handling in the Makefile. This reverts commit c1a0e9bc885d ("MIPS: Allow compact branch policy to be changed"). Signed-off-by: Paul Burton <paul.burton@imgtec.com> Reported-by: kbuild test robot <fengguang.wu@intel.com> Fixes: c1a0e9bc885d ("MIPS: Allow compact branch policy to be changed") Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14241/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
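Setting the policy for a debug build now goes through KCFLAGS instead, e.g. (assuming a toolchain that accepts the flag):

  make ARCH=mips KCFLAGS="-mcompact-branches=optimal" vmlinux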
2016-09-30MIPS: vDSO: Fix Malta EVA mapping to vDSO page structsJames Hogan
commit 554af0c396380baf416f54c439b99b495180b2f4 upstream. The page structures associated with the vDSO pages in the kernel image are calculated using virt_to_page(), which uses __pa() under the hood to find the pfn associated with the virtual address. The vDSO data pointers however point to kernel symbols, so __pa_symbol() should really be used instead. Since there is no equivalent to virt_to_page() which uses __pa_symbol(), fix init_vdso_image() to work directly with pfns, calculated with __phys_to_pfn(__pa_symbol(...)). This issue broke the Malta Enhanced Virtual Addressing (EVA) configuration which has a non-default implementation of __pa_symbol(). This is because it uses a physical alias so that the kernel executes from KSeg0 (VA 0x80000000 -> PA 0x00000000), while RAM is provided to the kernel in the KUSeg range (VA 0x00000000 -> PA 0x80000000) which uses the same underlying RAM. Since there are no page structures associated with the low physical address region, some arbitrary kernel memory would be interpreted as a page structure for the vDSO pages and badness ensues. Fixes: ebb5e78cc634 ("MIPS: Initial implementation of a VDSO") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14229/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
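A sketch of init_vdso_image() after the fix (field names per arch/mips/kernel/vdso.c; details may differ slightly):

  static void __init init_vdso_image(struct mips_vdso_image *image)
  {
          unsigned long num_pages, i, data_pfn;

          num_pages = image->size / PAGE_SIZE;

          /* image->data is a kernel-image symbol: use __pa_symbol(), not
           * virt_to_page()/__pa(), which assume a linear-map address. */
          data_pfn = __phys_to_pfn(__pa_symbol(image->data));
          for (i = 0; i < num_pages; i++)
                  image->mapping.pages[i] = pfn_to_page(data_pfn + i);
  }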
2016-09-30MIPS: SMP: Fix possibility of deadlock when bringing CPUs onlineMatt Redfearn
commit 8f46cca1e6c06a058374816887059bcc017b382f upstream. This patch fixes the possibility of a deadlock when bringing up secondary CPUs. The deadlock occurs because the set_cpu_online() is called before synchronise_count_slave(). This can cause a deadlock if the boot CPU, having scheduled another thread, attempts to send an IPI to the secondary CPU, which it sees has been marked online. The secondary is blocked in synchronise_count_slave() waiting for the boot CPU to enter synchronise_count_master(), but the boot cpu is blocked in smp_call_function_many() waiting for the secondary to respond to it's IPI request. Fix this by marking the CPU online in cpu_callin_map and synchronising counters before declaring the CPU online and calculating the maps for IPIs. Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Reported-by: Justin Chen <justinpopo6@gmail.com> Tested-by: Justin Chen <justinpopo6@gmail.com> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14302/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
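The ordering after the fix, in sketch form (names per arch/mips/kernel/smp.c; the exact cpumask calls are illustrative):

  /* Secondary CPU bring-up: become visible to the master and synchronise
   * counters *before* declaring ourselves online, so the boot CPU cannot
   * IPI us while we are still blocked in the counter sync. */
  cpumask_set_cpu(cpu, &cpu_callin_map);
  synchronise_count_slave(cpu);
  set_cpu_online(cpu, true);      /* only now may IPIs target this CPU */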
2016-09-30MIPS: Fix pre-r6 emulation FPU initialisationPaul Burton
commit 7e956304eb8a285304a78582e4537e72c6365f20 upstream. In the mipsr2_decoder() function, used to emulate pre-MIPSr6 instructions that were removed in MIPSr6, the init_fpu() function is called if a removed pre-MIPSr6 floating point instruction is the first floating point instruction used by the task. However, init_fpu() performs various actions that rely upon not being migrated. For example, in the most basic case it sets the coprocessor 0 Status.CU1 bit to enable the FPU & then loads FP register context into the FPU registers. If the task were to migrate during this time, it may end up attempting to load FP register context on a different CPU where it hasn't set the CU1 bit, leading to errors such as:
  do_cpu invoked from kernel context![#2]:
  CPU: 2 PID: 7338 Comm: fp-prctl Tainted: G D 4.7.0-00424-g49b0c82 #2
  task: 838e4000 ti: 88d38000 task.ti: 88d38000
  $ 0 : 00000000 00000001 ffffffff 88d3fef8
  $ 4 : 838e4000 88d38004 00000000 00000001
  $ 8 : 3400fc01 801f8020 808e9100 24000000
  $12 : dbffffff 807b69d8 807b0000 00000000
  $16 : 00000000 80786150 00400fc4 809c0398
  $20 : 809c0338 0040273c 88d3ff28 808e9d30
  $24 : 808e9d30 00400fb4
  $28 : 88d38000 88d3fe88 00000000 8011a2ac
  Hi : 0040273c
  Lo : 88d3ff28
  epc : 80114178 _restore_fp+0x10/0xa0
  ra : 8011a2ac mipsr2_decoder+0xd5c/0x1660
  Status: 1400fc03 KERNEL EXL IE
  Cause : 1080002c (ExcCode 0b)
  PrId : 0001a920 (MIPS I6400)
  Modules linked in:
  Process fp-prctl (pid: 7338, threadinfo=88d38000, task=838e4000, tls=766527d0)
  Stack : 00000000 00000000 00000000 88d3fe98 00000000 00000000 809c0398 809c0338
          808e9100 00000000 88d3ff28 00400fc4 00400fc4 0040273c 7fb69e18 004a0000
          004a0000 004a0000 7664add0 8010de18 00000000 00000000 88d3fef8 88d3ff28
          808e9100 00000000 766527d0 8010e534 000c0000 85755000 8181d580 00000000
          00000000 00000000 004a0000 00000000 766527d0 7fb69e18 004a0000 80105c20
          ...
  Call Trace:
  [<80114178>] _restore_fp+0x10/0xa0
  [<8011a2ac>] mipsr2_decoder+0xd5c/0x1660
  [<8010de18>] do_ri+0x90/0x6b8
  [<80105c20>] ret_from_exception+0x0/0x10
Fix this by disabling preemption around the call to init_fpu(), ensuring that it starts & completes on one CPU. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Fixes: b0a668fb2038 ("MIPS: kernel: mips-r2-to-r6-emul: Add R2 emulator for MIPS R6") Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14305/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
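The fix itself is a small preemption guard in mipsr2_decoder(), per the commit description (fragment; err is the decoder's existing status variable):

  preempt_disable();      /* init_fpu() must start & finish on one CPU */
  err = init_fpu();
  preempt_enable();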
2016-09-30net: smc91x: fix SMC accessesRussell King
[ Upstream commit 2fb04fdf30192ff1e2b5834e9b7745889ea8bbcb ] Commit b70661c70830 ("net: smc91x: use run-time configuration on all ARM machines") broke some ARM platforms through several mistakes. Firstly, the access size must correspond to the following rule: (a) at least one of 16-bit or 8-bit access size must be supported (b) 32-bit accesses are optional, and may be enabled in addition to the above. Secondly, it provides no emulation of 16-bit accesses, instead blindly making 16-bit accesses even when the platform specifies that only 8-bit is supported. Reorganise smc91x.h so we can make use of the existing 16-bit access emulation already provided - if 16-bit accesses are supported, use 16-bit accesses directly, otherwise if 8-bit accesses are supported, use the provided 16-bit access emulation. If neither, BUG(). This exactly reflects the driver behaviour prior to the commit being fixed. Since the conversion incorrectly cut down the available access sizes on several platforms, we also need to go through every platform and fix up the overly-restrictive access size: Arnd assumed that if a platform can perform 32-bit, 16-bit and 8-bit accesses, then only a 32-bit access size needed to be specified - not so, all available access sizes must be specified. This likely fixes some performance regressions in doing this: if a platform does not support 8-bit accesses, 8-bit accesses have been emulated by performing a 16-bit read-modify-write access. Tested on the Intel Assabet/Neponset platform, which supports only 8-bit accesses, which was broken by the original commit. Fixes: b70661c70830 ("net: smc91x: use run-time configuration on all ARM machines") Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Tested-by: Robert Jarzmik <robert.jarzmik@free.fr> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
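Per the rule above, a platform that can perform all three access widths must now say so explicitly in its platform data, e.g. (illustrative values; the SMC91X_* flags are from include/linux/smc91x.h):

  static struct smc91x_platdata smc91x_platdata = {
          /* list every width the bus actually supports, not just 32-bit */
          .flags = SMC91X_USE_8BIT | SMC91X_USE_16BIT | SMC91X_USE_32BIT |
                   SMC91X_NOWAIT,
  };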
2016-09-30crypto: arm/aes-ctr - fix NULL dereference in tail processingArd Biesheuvel
commit f82e90b28654804ab72881d577d87c3d5c65e2bc upstream. The AES-CTR glue code avoids calling into the blkcipher API for the tail portion of the walk, by comparing the remainder of walk.nbytes modulo AES_BLOCK_SIZE with the residual nbytes, and jumping straight into the tail processing block if they are equal. This tail processing block checks whether nbytes != 0, and does nothing otherwise. However, in case of an allocation failure in the blkcipher layer, we may enter this code with walk.nbytes == 0, while nbytes > 0. In this case, we should not dereference the source and destination pointers, since they may be NULL. So instead of checking for nbytes != 0, check for (walk.nbytes % AES_BLOCK_SIZE) != 0, which implies the former in non-error conditions. Fixes: 86464859cc77 ("crypto: arm - AES in ECB/CBC/CTR/XTS modes using ARMv8 Crypto Extensions") Reported-by: xiakaixu <xiakaixu@huawei.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-09-30crypto: arm64/aes-ctr - fix NULL dereference in tail processingArd Biesheuvel
commit 2db34e78f126c6001d79d3b66ab1abb482dc7caa upstream. The AES-CTR glue code avoids calling into the blkcipher API for the tail portion of the walk, by comparing the remainder of walk.nbytes modulo AES_BLOCK_SIZE with the residual nbytes, and jumping straight into the tail processing block if they are equal. This tail processing block checks whether nbytes != 0, and does nothing otherwise. However, in case of an allocation failure in the blkcipher layer, we may enter this code with walk.nbytes == 0, while nbytes > 0. In this case, we should not dereference the source and destination pointers, since they may be NULL. So instead of checking for nbytes != 0, check for (walk.nbytes % AES_BLOCK_SIZE) != 0, which implies the former in non-error conditions. Fixes: 49788fe2a128 ("arm64/crypto: AES-ECB/CBC/CTR/XTS using ARMv8 NEON and Crypto Extensions") Reported-by: xiakaixu <xiakaixu@huawei.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
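The same one-line change fixes both the arm and arm64 glue (this entry and the previous one); sketched as a diff of the tail-processing guard:

  -  if (nbytes) {
  +  if (walk.nbytes % AES_BLOCK_SIZE) {
        u8 *tdst = walk.dst.virt.addr + blocks * AES_BLOCK_SIZE;
        u8 *tsrc = walk.src.virt.addr + blocks * AES_BLOCK_SIZE;

The new condition implies walk.nbytes != 0, so the source/destination pointers are only dereferenced when the walk actually produced a tail.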
2016-09-24Merge branch 'v4.4/topic/mm-kaslr-pax_usercopy' into linux-linaro-lsk-v4.4Alex Shi
2016-09-24openrisc: fix the fix of copy_from_user()Guenter Roeck
commit 8e4b72054f554967827e18be1de0e8122e6efc04 upstream. Since commit acb2505d0119 ("openrisc: fix copy_from_user()"), copy_from_user() returns the number of bytes requested, not the number of bytes not copied. Cc: Al Viro <viro@zeniv.linux.org.uk> Fixes: acb2505d0119 ("openrisc: fix copy_from_user()") Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-09-24avr32: fix 'undefined reference to `___copy_from_user'Guenter Roeck
commit 65c0044ca8d7c7bbccae37f0ff2972f0210e9f41 upstream. avr32 builds fail with:
  arch/avr32/kernel/built-in.o: In function `arch_ptrace':
  (.text+0x650): undefined reference to `___copy_from_user'
  arch/avr32/kernel/built-in.o:(___ksymtab+___copy_from_user+0x0): undefined reference to `___copy_from_user'
  kernel/built-in.o: In function `proc_doulongvec_ms_jiffies_minmax':
  (.text+0x5dd8): undefined reference to `___copy_from_user'
  kernel/built-in.o: In function `proc_dointvec_minmax_sysadmin':
  sysctl.c:(.text+0x6174): undefined reference to `___copy_from_user'
  kernel/built-in.o: In function `ptrace_has_cap':
  ptrace.c:(.text+0x69c0): undefined reference to `___copy_from_user'
  kernel/built-in.o:ptrace.c:(.text+0x6b90): more undefined references to `___copy_from_user' follow
Fixes: 8630c32275ba ("avr32: fix copy_from_user()") Cc: Al Viro <viro@zeniv.linux.org.uk> Acked-by: Havard Skinnemoen <hskinnemoen@gmail.com> Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-09-24ia64: copy_from_user() should zero the destination on access_ok() failureAl Viro
commit a5e541f796f17228793694d64b507f5f57db4cd7 upstream. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-09-24ppc32: fix copy_from_user()Al Viro
commit 224264657b8b228f949b42346e09ed8c90136a8e upstream. copy_from_user() should clear the destination on access_ok() failures. Also remove the useless range truncation logic. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-09-24sparc32: fix copy_from_user()Al Viro
commit 917400cecb4b52b5cde5417348322bb9c8272fa6 upstream. Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-09-24mn10300: copy_from_user() should zero on access_ok() failure...Al Viro
commit ae7cc577ec2a4a6151c9e928fd1f595d953ecef1 upstream. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-09-24nios2: copy_from_user() should zero the tail of destinationAl Viro
commit e33d1f6f72cc82fcfc3d1fb20c9e3ad83b1928fa upstream. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
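The run of copy_from_user() fixes above (openrisc, avr32, ia64, ppc32, sparc32, mn10300, nios2) all converge on the same contract; a generic sketch of the pattern (the arch-specific __copy_from_user differs, and a partial copy must likewise zero only the uncopied tail):

  static inline unsigned long
  copy_from_user(void *to, const void __user *from, unsigned long n)
  {
          if (likely(access_ok(VERIFY_READ, from, n)))
                  n = __copy_from_user(to, from, n);
          else
                  memset(to, 0, n);       /* never leak stale kernel data */
          return n;                       /* number of bytes NOT copied */
  }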