Age | Commit message | Author |
|
In the case of a normal kexec kernel load, all CPUs are offlined
before machine_kexec() is called. But in the crash/panic case, the
CPUs are left spinning in cpu_relax() in the machine_crash_nonpanic_core()
SMP function and are not marked offline.
When a crash kernel has been loaded with kexec and a panic triggers it,
machine_kexec() checks the number of online CPUs. If more than one CPU
is online, machine_kexec() fails with the error below:
kexec: error: multiple CPUs still online
Mark the CPU offline in the machine_crash_nonpanic_core() SMP function
before entering the cpu_relax() loop.
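A minimal sketch of the shape of the change (assuming the usual
machine_crash_nonpanic_core() layout in arch/arm/kernel/machine_kexec.c;
details are illustrative, not the exact diff):

  static void machine_crash_nonpanic_core(void *unused)
  {
      struct pt_regs regs;

      crash_setup_regs(&regs, NULL);
      crash_save_cpu(&regs, smp_processor_id());
      flush_cache_all();

      /* Mark this CPU offline so machine_kexec() sees only one CPU
       * online when the crash kernel is started. */
      set_cpu_online(smp_processor_id(), false);
      atomic_dec(&waiting_for_crash_ipi);
      while (1)
          cpu_relax();
  }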
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com>
Acked-by: Stephen Warren <swarren@wwwdotorg.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
|
|
Architectures should fully validate whether kexec is possible as part of
machine_kexec_prepare(), so that user-space's kexec_load() operation can
report any problems. Performing validation in machine_kexec() itself is
too late, since it is not allowed to return.
Prior to this patch, ARM's machine_kexec() was testing after-the-fact
whether machine_kexec_prepare() was able to disable all but one CPU.
Instead, modify machine_kexec_prepare() to validate all conditions
necessary for machine_kexec() to succeed. BUG if the validation
succeeded, yet disabling the CPUs didn't actually work.
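In outline, the split looks like this (a sketch; can_offline_secondaries()
is a hypothetical stand-in for whatever checks machine_kexec_prepare()
actually performs):

  int machine_kexec_prepare(struct kimage *image)
  {
      /* Validate here, where kexec_load() can still return an error
       * to user space, e.g. that all but one CPU can be taken
       * offline.  (The check shown is illustrative only.) */
      if (num_possible_cpus() > 1 && !can_offline_secondaries())
          return -EINVAL;

      return 0;
  }

  void machine_kexec(struct kimage *image)
  {
      /* Too late to fail gracefully: if the validation above passed
       * but CPUs are still online, something is badly wrong. */
      BUG_ON(num_online_cpus() > 1);
      /* ... proceed with the reboot into the new kernel ... */
  }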
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Commit 15e7e5c1ebf5 ("ARM: 7749/1: spinlock: retry trylock operation if
strex fails on free lock") modified our arch_spin_trylock to retry the
acquisition if the lock appeared uncontended, but the strex failed.
This patch does the same for rwlocks, which were missed by the original
patch.
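A sketch of the resulting write-trylock, retrying while strex reports
failure (modelled on arch_write_trylock() in
arch/arm/include/asm/spinlock.h; treat as illustrative):

  static inline int arch_write_trylock(arch_rwlock_t *rw)
  {
      unsigned long contended, res;

      do {
          __asm__ __volatile__(
          "    ldrex   %0, [%2]\n"
          "    mov     %1, #0\n"
          "    teq     %0, #0\n"
          "    strexeq %1, %3, [%2]"     /* only attempt if lock is free */
          : "=&r" (contended), "=&r" (res)
          : "r" (&rw->lock), "r" (0x80000000)
          : "cc");
      } while (res);       /* res != 0: strex failed, retry on a free lock */

      if (!contended) {
          smp_mb();
          return 1;
      }
      return 0;
  }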
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The res variable is written before we've finished with the input
operands (namely the lock address), so ensure that we mark it as `early
clobber' to avoid unintended register sharing.
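A simplified, non-retrying illustration of the constraint issue (not the
exact kernel code): res (%1) is written by the mov before the lock
address (%2) is used again by strexeq, so the outputs need the '&'
early-clobber modifier to stop the compiler sharing their registers with
the inputs:

  static inline int trylock_example(unsigned long *lock)
  {
      unsigned long contended, res;

      __asm__ __volatile__(
      "    ldrex   %0, [%2]\n"
      "    mov     %1, #0\n"          /* res written here ...            */
      "    teq     %0, #0\n"
      "    strexeq %1, %3, [%2]"      /* ... but %2 is still needed here */
      : "=&r" (contended), "=&r" (res)   /* "=&r", not "=r" */
      : "r" (lock), "r" (1UL)
      : "cc");

      return !contended && !res;
  }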
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Vince Weaver reports an oops in the ARM perf event code while
running his perf_fuzzer tool on a pandaboard running v3.11-rc4.
Unable to handle kernel paging request at virtual address 73fd14cc
pgd = eca6c000
[73fd14cc] *pgd=00000000
Internal error: Oops: 5 [#1] SMP ARM
Modules linked in: snd_soc_omap_hdmi omapdss snd_soc_omap_abe_twl6040 snd_soc_twl6040 snd_soc_omap snd_soc_omap_hdmi_card snd_soc_omap_mcpdm snd_soc_omap_mcbsp snd_soc_core snd_compress regmap_spi snd_pcm snd_page_alloc snd_timer snd soundcore
CPU: 1 PID: 2790 Comm: perf_fuzzer Not tainted 3.11.0-rc4 #6
task: eddcab80 ti: ed892000 task.ti: ed892000
PC is at armpmu_map_event+0x20/0x88
LR is at armpmu_event_init+0x38/0x280
pc : [<c001c3e4>] lr : [<c001c17c>] psr: 60000013
sp : ed893e40 ip : ecececec fp : edfaec00
r10: 00000000 r9 : 00000000 r8 : ed8c3ac0
r7 : ed8c3b5c r6 : edfaec00 r5 : 00000000 r4 : 00000000
r3 : 000000ff r2 : c0496144 r1 : c049611c r0 : edfaec00
Flags: nZCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment user
Control: 10c5387d Table: aca6c04a DAC: 00000015
Process perf_fuzzer (pid: 2790, stack limit = 0xed892240)
Stack: (0xed893e40 to 0xed894000)
3e40: 00000800 c001c17c 00000002 c008a748 00000001 00000000 00000000 c00bf078
3e60: 00000000 edfaee50 00000000 00000000 00000000 edfaec00 ed8c3ac0 edfaec00
3e80: 00000000 c073ffac ed893f20 c00bf180 00000001 00000000 c00bf078 ed893f20
3ea0: 00000000 ed8c3ac0 00000000 00000000 00000000 c0cb0818 eddcab80 c00bf440
3ec0: ed893f20 00000000 eddcab80 eca76800 00000000 eca76800 00000000 00000000
3ee0: 00000000 ec984c80 eddcab80 c00bfe68 00000000 00000000 00000000 00000080
3f00: 00000000 ed892000 00000000 ed892030 00000004 ecc7e3c8 ecc7e3c8 00000000
3f20: 00000000 00000048 ecececec 00000000 00000000 00000000 00000000 00000000
3f40: 00000000 00000000 00297810 00000000 00000000 00000000 00000000 00000000
3f60: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
3f80: 00000002 00000002 000103a4 00000002 0000016c c00128e8 ed892000 00000000
3fa0: 00090998 c0012700 00000002 000103a4 00090ab8 00000000 00000000 0000000f
3fc0: 00000002 000103a4 00000002 0000016c 00090ab0 00090ab8 000107a0 00090998
3fe0: bed92be0 bed92bd0 0000b785 b6e8f6d0 40000010 00090ab8 00000000 00000000
[<c001c3e4>] (armpmu_map_event+0x20/0x88) from [<c001c17c>] (armpmu_event_init+0x38/0x280)
[<c001c17c>] (armpmu_event_init+0x38/0x280) from [<c00bf180>] (perf_init_event+0x108/0x180)
[<c00bf180>] (perf_init_event+0x108/0x180) from [<c00bf440>] (perf_event_alloc+0x248/0x40c)
[<c00bf440>] (perf_event_alloc+0x248/0x40c) from [<c00bfe68>] (SyS_perf_event_open+0x4f4/0x8fc)
[<c00bfe68>] (SyS_perf_event_open+0x4f4/0x8fc) from [<c0012700>] (ret_fast_syscall+0x0/0x48)
Code: 0a000005 e3540004 0a000016 e3540000 (0791010c)
This is because event->attr.config in armpmu_event_init()
contains a very large number copied directly from userspace and
is never checked against the size of the array indexed in
armpmu_map_hw_event(). Fix the problem by checking the value of
config before indexing the array and rejecting invalid config
values.
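The fix boils down to a range check before the table lookup; a sketch
(mirroring armpmu_map_hw_event() in arch/arm/kernel/perf_event.c,
details may vary):

  static int
  armpmu_map_hw_event(const unsigned (*event_map)[PERF_COUNT_HW_MAX],
                      u64 config)
  {
      int mapping;

      /* config comes straight from userspace; reject anything outside
       * the table before using it as an index. */
      if (config >= PERF_COUNT_HW_MAX)
          return -EINVAL;

      mapping = (*event_map)[config];
      return mapping == HW_OP_UNSUPPORTED ? -ENOENT : mapping;
  }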
Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Tested-by: Vince Weaver <vincent.weaver@maine.edu>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
It is possible to construct an event group with a software event as a
group leader and then subsequently add a hardware event to the group.
This results in the event group being validated by adding all members
of the group to a fake PMU and attempting to allocate each event on
their respective PMU.
Unfortunately, for software events without a corresponding arm_pmu, this
results in a kernel crash attempting to dereference the ->get_event_idx
function pointer.
This patch fixes the problem by checking explicitly for software events
and ignoring those in event validation (since they can always be
scheduled). We will probably want to revisit this for 3.12, since the
validation checks don't appear to work correctly when dealing with
multiple hardware PMUs anyway.
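A sketch of the check added to event validation (based on
validate_event() in arch/arm/kernel/perf_event.c; the surrounding checks
are illustrative):

  static int
  validate_event(struct pmu_hw_events *hw_events,
                 struct perf_event *event)
  {
      struct arm_pmu *armpmu = to_arm_pmu(event->pmu);

      /* Software events never have an arm_pmu behind them and can
       * always be scheduled; don't dereference ->get_event_idx. */
      if (is_software_event(event))
          return 1;

      if (event->state == PERF_EVENT_STATE_OFF)
          return 1;

      return armpmu->get_event_idx(hw_events, event) >= 0;
  }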
Cc: <stable@vger.kernel.org>
Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Tested-by: Vince Weaver <vincent.weaver@maine.edu>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Aaro Koskinen reports the following oops:
Installing fiq handler from c001b110, length 0x164
Unable to handle kernel paging request at virtual address ffff1224
pgd = c0004000
[ffff1224] *pgd=00000000, *pte=11fff0cb, *ppte=11fff00a
...
[<c0013154>] (set_fiq_handler+0x0/0x6c) from [<c0365d38>] (ams_delta_init_fiq+0xa8/0x160)
r6:00000164 r5:c001b110 r4:00000000 r3:fefecb4c
[<c0365c90>] (ams_delta_init_fiq+0x0/0x160) from [<c0365b14>] (ams_delta_init+0xd4/0x114)
r6:00000000 r5:fffece10 r4:c037a9e0
[<c0365a40>] (ams_delta_init+0x0/0x114) from [<c03613b4>] (customize_machine+0x24/0x30)
This is because the vectors page is now write-protected, and to change
code in there we must write to its original alias. Make that change,
and adjust the cache flushing such that the code will become visible
to the instruction stream on VIVT CPUs.
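A sketch of the resulting set_fiq_handler() (names such as vectors_page
and FIQ_OFFSET follow arch/arm/kernel/fiq.c; treat the details as
illustrative):

  void set_fiq_handler(void *start, unsigned int length)
  {
      void *base = vectors_page;          /* writable alias of the page */
      unsigned offset = FIQ_OFFSET;

      memcpy(base + offset, start, length);

      /* Make the new code visible to the I-side: flush the writable
       * alias on aliasing caches, and always flush the 0xffff0000
       * mapping the CPU actually executes from. */
      if (!cache_is_vipt_nonaliasing())
          flush_icache_range((unsigned long)base + offset,
                             (unsigned long)base + offset + length);
      flush_icache_range(0xffff0000 + offset,
                         0xffff0000 + offset + length);
  }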
Reported-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Tested-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Fix yet another build failure caused by a weird set of configuration
settings:
LD init/built-in.o
arch/arm/kernel/built-in.o: In function `__dabt_usr':
/home/tom3q/kernel/arch/arm/kernel/entry-armv.S:377: undefined reference to `kuser_cmpxchg64_fixup'
arch/arm/kernel/built-in.o: In function `__irq_usr':
/home/tom3q/kernel/arch/arm/kernel/entry-armv.S:387: undefined reference to `kuser_cmpxchg64_fixup'
caused by:
CONFIG_KUSER_HELPERS=n
CONFIG_CPU_32v6K=n
CONFIG_NEEDS_SYSCALL_FOR_CMPXCHG=n
Reported-by: Tomasz Figa <tomasz.figa@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
|
|
vdso-like page)
Olof reports that noMMU builds error out with:
arch/arm/kernel/signal.c: In function 'setup_return':
arch/arm/kernel/signal.c:413:25: error: 'mm_context_t' has no member named 'sigpage'
This shows one of the evilnesses of IS_ENABLED(). Get rid of it here
and replace it with #ifdef's - and as no noMMU platform can make use
of sigpage, depend on CONFIG_MMU, not CONFIG_ARM_MPU.
Reported-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Unfortunately, I never committed the fix for a nasty oops which can
occur as a result of that commit:
------------[ cut here ]------------
kernel BUG at /home/olof/work/batch/include/linux/mm.h:414!
Internal error: Oops - BUG: 0 [#1] PREEMPT SMP ARM
Modules linked in:
CPU: 0 PID: 490 Comm: killall5 Not tainted 3.11.0-rc3-00288-gabe0308 #53
task: e90acac0 ti: e9be8000 task.ti: e9be8000
PC is at special_mapping_fault+0xa4/0xc4
LR is at __do_fault+0x68/0x48c
This doesn't show up unless you do quite a bit of testing; a simple
boot test does not do this, so all my nightly tests were passing fine.
The reason for this is that install_special_mapping() expects the
page array to stick around, but here only one page was being inserted
and its pointer was stored on the kernel stack - which is why this was
blowing up.
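In outline, the fix keeps the page array in static storage instead of on
the stack (a hedged before/after fragment; sigpage, addr and mm are
placeholders):

  /* Wrong: the array lives on the kernel stack and is gone once this
   * function returns, but install_special_mapping() keeps using the
   * array it was given, not a copy of it. */
  struct page *page[1] = { sigpage };
  install_special_mapping(mm, addr, PAGE_SIZE, VM_READ | VM_EXEC, page);

  /* Fixed: give it an array with static storage duration. */
  static struct page *signal_page[1];
  signal_page[0] = sigpage;
  install_special_mapping(mm, addr, PAGE_SIZE, VM_READ | VM_EXEC,
                          signal_page);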
Reported-by: Olof Johansson <olof@lixom.net>
Tested-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
|
|
Commit 8bd26e3a7 (arm: delete __cpuinit/__CPUINIT usage from all ARM
users) caused some code to leak into sections which are discarded
through the removal of __CPUINIT annotations. Add appropriate .text
annotations to bring these back into the kernel text.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
If one process calls sys_reboot and that process then stops other
CPUs while those CPUs are within a spin_lock() region we can
potentially encounter a deadlock scenario like below.
      CPU 0                         CPU 1
      -----                         -----
                                    spin_lock(my_lock)
      smp_send_stop()
       <send IPI>                   handle_IPI()
                                     disable_preemption/irqs
                                      while(1);
       <PREEMPT>
      spin_lock(my_lock) <--- Waits forever
We shouldn't attempt to run any other tasks after we send a stop
IPI to a CPU so disable preemption so that this task runs to
completion. We use local_irq_disable() here for cross-arch
consistency with x86.
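In outline, the change is simply this (sketched against a
machine_restart()-style path in arch/arm/kernel/process.c; the halt and
power-off paths get the same treatment):

  void machine_restart(char *cmd)
  {
      /* After telling the other CPUs to stop, this task must run to
       * completion: no preemption, no interrupts. */
      local_irq_disable();
      smp_send_stop();

      arm_pm_restart(reboot_mode, cmd);

      /* If the platform failed to reboot, hang here. */
      while (1)
          cpu_relax();
  }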
Reported-by: Sundarajan Srinivasan <sundaraj@codeaurora.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
If kuser helpers are not provided by the kernel, disable user access to
the vectors page. With the kuser helpers gone, there is no reason for
this page to be visible to userspace.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Move the signal handlers into a VDSO page rather than keeping them in
the vectors page. This allows us to place them randomly within this
page, and also map the page at a random location within userspace,
further protecting these code fragments from ROP attacks. The new
VDSO page is also poisoned in the same way as the vectors page.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Provide a kernel configuration option to allow the kernel user helpers
to be removed from the vector page, thereby preventing their use with
ROP (return orientated programming) attacks. This option is only
visible for CPU architectures which natively support all the operations
which kernel user helpers would normally provide, and must be enabled
with caution.
Cc: <stable@vger.kernel.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
FIQ should no longer copy the FIQ code into the user visible vector
page. Instead, it should use the hidden page. This change makes
that happen.
Cc: <stable@vger.kernel.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Use linker magic to create the vectors and vector stubs: we can tell the
linker to place them at an appropriate VMA, but keep the LMA within the
kernel. This gets rid of some unnecessary symbol manipulation, and
has the linker calculate the relocations appropriately.
Cc: <stable@vger.kernel.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Move the machine vector stubs into the page above the vector page,
which we can prevent from being visible to userspace. Also move
the reset stub, and place the swi vector at a location that the
'ldr' can reach.
This hides pointers into the kernel which could give valuable
information to attackers, and reduces the number of exploitable
instructions at a fixed address.
Cc: <stable@vger.kernel.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Poison the memory between each kuser helper. This ensures that any
branch between the kuser helpers will be appropriately trapped.
Cc: <stable@vger.kernel.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Fill the empty regions of the vectors page with an exception generating
instruction. This ensures that any inappropriate branch to the vector
page is appropriately trapped, rather than just encountering some code
to execute. (The vectors page was filled with zero before, which
corresponds with the "andeq r0, r0, r0" instruction - a no-op.)
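A sketch of the poisoning loop (the constant shown is an encoding that is
undefined in both ARM and Thumb state; the exact value and its placement
in early_trap_init() are illustrative):

  unsigned int i;

  /* Fill the page with an instruction that traps in both ISAs, so a
   * stray branch into the page faults instead of executing NOP-like
   * zeroes ("andeq r0, r0, r0"). */
  for (i = 0; i < PAGE_SIZE / sizeof(u32); i++)
      ((u32 *)vectors_page)[i] = 0xe7fddef1;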
Cc: <stable@vger.kernel.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
is_smp() test
Commit 621a0147d5c921f4cc33636ccd0602ad5d7cbfbc ("ARM: 7757/1: mm:
don't flush icache in switch_mm with hardware broadcasting") breaks
the boot on OMAP2430SDP with omap2plus_defconfig. Tracked to an
undefined instruction abort from the CP15 read in
cache_ops_need_broadcast(). It turns out that gcc 4.5 reorders the
extended CP15 read above the is_smp() test. This breaks ARM1136 r0
cores, since they don't support several CP15 registers that later ARM
cores do. ARM1136JF-S TRM section 3.2.1 "Register allocation" has the
details.
So mark the extended CP15 read as clobbering memory, which prevents
the compiler from reordering it before the is_smp() test. Russell
states that the code generated from this approach is preferable to
marking the inline asm as volatile. Remove the existing condition
code clobber as it's obsolete, per Nico's post:
http://www.spinics.net/lists/arm-kernel/msg261208.html
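Roughly, the accessor and its caller end up as below (modelled on
read_cpuid_ext() in arch/arm/include/asm/cputype.h and
cache_ops_need_broadcast(); illustrative):

  #define read_cpuid_ext(ext_reg)                              \
      ({                                                       \
          unsigned int __val;                                  \
          asm("mrc p15, 0, %0, c0, " ext_reg                   \
              : "=r" (__val)                                   \
              :                                                \
              : "memory");      /* was: "cc" */                \
          __val;                                               \
      })

  static inline int cache_ops_need_broadcast(void)
  {
      if (!is_smp())
          return 0;     /* must be evaluated first on ARM1136 r0 ... */

      /* ... the "memory" clobber stops the compiler hoisting this
       * extended CP15 read above the is_smp() check. */
      return ((read_cpuid_ext(CPUID_EXT_MMFR3) >> 12) & 0xf) < 2;
  }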
This patch is a collaboration with Will Deacon and Russell King.
Comments from Paul Walmsley:
Russell, if you accept this one, might you also add Will's ack from the lists:
Comments from Paul Walmsley:
I'd also be obliged if you could add a Cc: line for Jonathan Austin, since he helped test:
Signed-off-by: Paul Walmsley <paul@pwsan.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Cc: Tony Lindgren <tony@atomide.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Jonathan Austin <jonathan.austin@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The name changed in response to review comments for the nvic irqchip
driver when the original name was already accepted into Russell King's
tree.
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
So, there's a comment I put at the top of this, which people seem to
fail to read. So let's fix it for them instead.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
a.out support on ARM requires that argc, argv and envp are passed in
r0-r2 respectively, which requires hacking load_aout_binary to
prevent argc being clobbered by the return code. Whilst mainline kernels
do set the registers up in start_thread, the aout loader has never
carried the hack in mainline.
Initialising the registers in this way actually goes against the libc
expectations for ELF binaries, where argc, argv and envp are passed on
the stack, with r0 being used to hold a pointer to an exit function for
cleaning up after the dynamic linker if required. If the pointer is
NULL, then it is ignored. When execing an ELF binary, Linux currently
zeroes r0, then sets it to argc and then finally clobbers it with the
return value of the execve syscall, so we actually end up with:
r0 = 0
stack[0] = argc
r1 = stack[1] = argv
r2 = stack[2] = envp
libc treats r1 and r2 as undefined. The clobbering of r0 by sys_execve
works for user-spawned threads, but when executing an ELF binary from a
kernel thread (via call_usermodehelper), the execve is performed on the
ret_from_fork path, which restores r0 from the saved pt_regs, resulting
in argc being presented to the C library. This has horrible consequences
when the application exits, since we have an exit function registered
using argc, resulting in a jump to hyperspace.
This patch solves the problem by removing the partial a.out support from
arch/arm/ altogether.
Cc: <stable@vger.kernel.org>
Cc: Ashish Sangwan <ashishsangwan2@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
As of commit b9d4d42ad9 (ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on
pre-ARMv6 CPUs), the mm switching on VIVT processors is done in the
finish_arch_post_lock_switch() function to avoid whole cache flushing
with interrupts disabled. The need for deferred mm switch is stored as a
thread flag (TIF_SWITCH_MM). However, with preemption enabled, we can
have another thread switch before finish_arch_post_lock_switch(). If the
new thread has the same mm as the previous 'next' thread, the scheduler
will not call switch_mm() and the TIF_SWITCH_MM flag won't be set for
the new thread.
This patch moves the switch-pending flag to the mm_context_t structure,
since it is specific to the mm rather than to the thread.
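In outline (based on mm_context_t in arch/arm/include/asm/mmu.h; the
exact layout is illustrative):

  typedef struct {
  #ifdef CONFIG_CPU_HAS_ASID
      atomic64_t    id;
  #else
      /* Deferred-switch marker now lives with the mm, not in a
       * per-thread TIF flag, so a preempting thread sharing the mm
       * still sees the pending switch. */
      int           switch_pending;
  #endif
      unsigned int  vmalloc_seq;
  } mm_context_t;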
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Marc Kleine-Budde <mkl@pengutronix.de>
Tested-by: Marc Kleine-Budde <mkl@pengutronix.de>
Cc: <stable@vger.kernel.org> # 3.5+
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Commit 93dc688 (ARM: 7684/1: errata: Workaround for Cortex-A15 erratum 798181 (TLBI/DSB operations)) causes the following undefined instruction error on a mx53 (Cortex-A8):
Internal error: Oops - undefined instruction: 0 [#1] SMP ARM
CPU: 0 PID: 275 Comm: modprobe Not tainted 3.11.0-rc2-next-20130722-00009-g9b0f371 #881
task: df46cc00 ti: df48e000 task.ti: df48e000
PC is at check_and_switch_context+0x17c/0x4d0
LR is at check_and_switch_context+0xdc/0x4d0
This problem happens because check_and_switch_context() calls dummy_flush_tlb_a15_erratum() without checking if we are really running on a Cortex-A15 or not.
To avoid this issue, only call dummy_flush_tlb_a15_erratum() inside
check_and_switch_context() if erratum_a15_798181() returns true, which means that we are really running on a Cortex-A15.
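The guard, in sketch form (placed in the TLB-flush-pending path of
check_and_switch_context() in arch/arm/mm/context.c; the surrounding
code is illustrative):

  if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending)) {
      local_flush_bp_all();
      local_flush_tlb_all();
      /* Only poke the A15 workaround on an affected Cortex-A15; the
       * dummy TLBI sequence is undefined on e.g. Cortex-A8. */
      if (erratum_a15_798181())
          dummy_flush_tlb_a15_erratum();
  }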
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Roger Quadros <rogerq@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Secondary CPUs write to __boot_cpu_mode with caches disabled, and thus a
cached value of __boot_cpu_mode may be incoherent with that in memory.
This could lead to a failure to detect mismatched boot modes.
This patch adds flushing to ensure that writes by secondaries to
__boot_cpu_mode are made visible before we test against it.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <cdall@cs.columbia.edu>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Commit a469abd0f868 ("ARM: elf: add new hwcap for identifying atomic
ldrd/strd instructions") added a new hwcap to identify LPAE on CPUs
which support it. Whilst the hwcap data is correct, the string reported
in /proc/cpuinfo actually matches on HWCAP_VFPD32, which was missing
an entry in the string table.
This patch fixes this problem by adding a "vfpd32" string at the correct
offset, preventing us from falsely advertising LPAE on CPUs which do not
support it.
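For reference, the tail of the hwcap string table then reads roughly as
below (from arch/arm/kernel/setup.c; earlier entries elided and
illustrative):

  static const char *hwcap_str[] = {
      /* ... earlier entries ... */
      "vfpv4",
      "idiva",
      "idivt",
      "vfpd32",   /* previously missing, so "lpae" was printed for HWCAP_VFPD32 */
      "lpae",
      NULL
  };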
[will: added commit message]
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Tetsuyuki Kobayashi <koba@kmckk.co.jp>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Currently, compare_cpu_mode_with_primary uses a mixture of macro
arguments and hardcoded registers, and does so incorrectly, as it
stores (__boot_cpu_mode_offset | BOOT_CPU_MODE_MISMATCH) to
(__boot_cpu_mode + &__boot_cpu_mode_offset), which could corrupt an
arbitrary portion of memory.
This patch fixes up compare_cpu_mode_with_primary to use the macro
arguments, correctly updating __boot_cpu_mode.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <cdall@cs.columbia.edu>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
When map_lowmem() runs, and processes a memory bank whose start or end
is not section-aligned, memory must be allocated to store the 2nd-level
page tables. Those allocations are made by calling memblock_alloc().
At this point, the only memory that is free *and* mapped is memory which
has already been mapped by map_lowmem() itself. For this reason, we must
calculate the first point at which map_lowmem() will need to allocate
memory, and set the memblock allocation limit to a lower address, so that
memblock_alloc() is guaranteed to return memory that is already mapped.
This patch enhances sanity_check_meminfo() to calculate that memory
address, and pass it to memblock_set_current_limit(), rather than just
assuming the limit is arm_lowmem_limit.
The algorithm applied is:
* Default memblock_limit to arm_lowmem_limit in the absence of any other
limit; arm_lowmem_limit is the highest memory that is mapped by
map_lowmem().
* While walking the list of memblocks, if the start of a block is not
aligned, 2nd-level page tables will need to be allocated to map the
first few pages of the block. Hence, the memblock_limit must be before
the start of the block.
* Similarly, if the end of any block is not aligned, 2nd-level page
tables will need to be allocated to map the last few pages of the
block. Hence, the memblock_limit must point at the end of the block,
rounded down to section-alignment.
* The memory blocks are assumed to be sorted in address order, so the
first unaligned block start or end is used to set the limit.
With this algorithm, the start or end of almost any bank can be non-
section-aligned. The only exception is that the start of bank 0 must
be section-aligned, since otherwise memory would need to be allocated
when mapping the start of bank 0, which occurs before any free memory
is mapped.
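A sketch of the resulting limit calculation, following the algorithm
above (variable names loosely follow arch/arm/mm/mmu.c and are
illustrative):

  phys_addr_t memblock_limit = 0;

  /* for each lowmem bank, in address order: */
  for (i = 0; i < meminfo.nr_banks; i++) {
      phys_addr_t bank_start = meminfo.bank[i].start;
      phys_addr_t bank_end   = bank_start + meminfo.bank[i].size;

      if (!memblock_limit) {
          if (!IS_ALIGNED(bank_start, SECTION_SIZE))
              memblock_limit = bank_start;
          else if (!IS_ALIGNED(bank_end, SECTION_SIZE))
              memblock_limit = round_down(bank_end, SECTION_SIZE);
      }
  }

  /* Nothing unaligned found: everything mapped by map_lowmem() up to
   * arm_lowmem_limit is safe to hand out. */
  if (!memblock_limit)
      memblock_limit = arm_lowmem_limit;

  memblock_set_current_limit(memblock_limit);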
[swarren, wrote commit description, rewrote calculation of memblock_limit]
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Commit ae8a8b9553bd ("ARM: 7691/1: mm: kill unused TLB_CAN_READ_FROM_L1_CACHE
and use ALT_SMP instead") added early function returns for page table
cache flushing operations on ARMv7 SMP CPUs.
Unfortunately, when targeting Thumb-2, these `mov pc, lr' sequences
assemble to 2 bytes which can lead to corruption of the instruction
stream after code patching.
This patch fixes the alternates to use wide (32-bit) instructions for
Thumb-2, therefore ensuring that the patching code works correctly.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
This non-user-visible option lacked any kind of documentation. This
is quite common for non-user-visible options; certain people can't
understand the point of documenting such options with help text.
However, here we have a case in point: developers don't understand the
option either, as they were thinking that when the option is not set,
the decompressor should produce no output whatsoever. This is
incorrect, as the purpose of this option is to control whether a
multiplatform kernel uses the kernel debugging macros to produce
output or not.
So let's document this via help rather than commentary to prevent others
falling into this misunderstanding.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Pull UML fixes from Richard Weinberger:
"Special thanks goes to Toralf Föster for continuously testing UML and
reporting issues!"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/uml:
um: remove dead code
um: siginfo cleanup
uml: Fix which_tmpdir failure when /dev/shm is a symlink, and in other edge cases
um: Fix wait_stub_done() error handling
um: Mark stub pages mapping with VM_PFNMAP
um: Fix return value of strnlen_user()
|
|
Pull MIPS fixes from Ralf Baechle:
"MIPS fixes for 3.11. Half of then is for Netlogic the remainder
touches things across arch/mips.
Nothing really dramatic and by rc1 standards MIPS will be in fairly
good shape with this applied. Tested by building all MIPS defconfigs
of which with this pull request four platforms won't build. And yes,
it boots also on my favorite test systems"
* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus:
MIPS: kvm: Kconfig: Drop HAVE_KVM dependency from VIRTUALIZATION
MIPS: Octeon: Fix DT pruning bug with pip ports
MIPS: KVM: Mark KVM_GUEST (T&E KVM) as BROKEN_ON_SMP
MIPS: tlbex: fix broken build in v3.11-rc1
MIPS: Netlogic: Add XLP PIC irqdomain
MIPS: Netlogic: Fix USB block's coherent DMA mask
MIPS: tlbex: Fix typo in r3000 tlb store handler
MIPS: BMIPS: Fix thinko to release slave TP from reset
MIPS: Delete dead invocation of exception_exit().
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64
Pull arm64 fixes from Catalin Marinas:
- Post -rc1 update to the common reboot infrastructure.
- Fixes (user cache maintenance fault handling, !COMPAT compilation,
CPU online and interrupt handling).
* tag 'arm64-stable' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64:
arm64: use common reboot infrastructure
arm64: mm: don't treat user cache maintenance faults as writes
arm64: add '#ifdef CONFIG_COMPAT' for aarch32_break_handler()
arm64: Only enable local interrupts after the CPU is marked online
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Pull s390 fixes from Martin Schwidefsky:
"An update for the BFP jit to the latest and greatest, two patches to
get kdump working again, the random-abort ptrace extention for
transactional execution, the z90crypt module alias for ap and a tiny
cleanup"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
s390/zcrypt: Alias for new zcrypt device driver base module
s390/kdump: Allow copy_oldmem_page() copy to virtual memory
s390/kdump: Disable mmap for s390
s390/bpf,jit: add pkt_type support
s390/bpf,jit: address randomize and write protect jit code
s390/bpf,jit: use generic jit dumper
s390/bpf,jit: call module_free() from any context
s390/qdio: remove unused variable
s390/ptrace: PTRACE_TE_ABORT_RAND
|
|
Pull KVM fix from Paolo Bonzini:
"This single patch fixes a regression caused by one of the
optimizations introduced in 3.11, which is generally visible only on
AMD processors"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
KVM: MMU: avoid fast page fault fixing mmio page fault
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management and ACPI fixes from Rafael Wysocki:
"These are fixes collected over the last week, most importnatly two
cpufreq reverts fixing regressions introduced in 3.10, an autoseelp
fix preventing systems using it from crashing during shutdown and two
ACPI scan fixes related to hotplug.
Specifics:
- Two cpufreq commits from the 3.10 cycle introduced regressions.
The first of them was buggy (it did way more than it needed to
do) and the second one attempted to fix an issue introduced by the
first one. Fixes from Srivatsa S Bhat revert both.
- If autosleep triggers during system shutdown and the shutdown
callbacks of some device drivers have been called already, it may
crash the system. Fix from Liu Shuo prevents that from happening
by making try_to_suspend() check system_state.
- The ACPI memory hotplug driver doesn't clear its driver_data on
errors, which may cause a NULL pointer dereference to happen later.
Fix from Toshi Kani.
- The ACPI namespace scanning code should not try to attach scan
handlers to device objects that have them already, which may
confuse things quite a bit, and it should rescan the whole
namespace branch starting at the given node after receiving a bus
check notify event even if the device at that particular node has
been discovered already. Fixes from Rafael J Wysocki.
- New ACPI video blacklist entry for a system whose initial backlight
setting from the BIOS doesn't make sense. From Lan Tianyu.
- Garbage string output avoidance for ACPI PNP from Liu Shuo.
- Two Kconfig fixes for issues introduced recently in the s3c24xx
cpufreq driver (when moving the driver to drivers/cpufreq) from
Paul Bolle.
- Trivial comment fix in pm_wakeup.h from Chanwoo Choi"
* tag 'pm+acpi-3.11-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
ACPI / video: ignore BIOS initial backlight value for Fujitsu E753
PNP / ACPI: avoid garbage in resource name
cpufreq: Revert commit 2f7021a8 to fix CPU hotplug regression
cpufreq: s3c24xx: fix "depends on ARM_S3C24XX" in Kconfig
cpufreq: s3c24xx: rename CONFIG_CPU_FREQ_S3C24XX_DEBUGFS
PM / Sleep: Fix comment typo in pm_wakeup.h
PM / Sleep: avoid 'autosleep' in shutdown progress
cpufreq: Revert commit a66b2e to fix suspend/resume regression
ACPI / memhotplug: Fix a stale pointer in error path
ACPI / scan: Always call acpi_bus_scan() for bus check notifications
ACPI / scan: Do not try to attach scan handlers to devices having them
|
|
Commit 7b6d864b48d9 (reboot: arm: change reboot_mode to use enum
reboot_mode) changed the way reboot is handled on arm, which has a
direct impact on arm64 as we share the reset driver on the VE platform.
The obvious fix is to move arm64 to use the same infrastructure.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[catalin.marinas@arm.com: removed reboot_mode = REBOOT_HARD default setting]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
On arm64, cache maintenance faults appear as data aborts with the CM
bit set in the ESR. The WnR bit, usually used to distinguish between
faulting loads and stores, always reads as 1 and (slightly confusingly)
the instructions are treated as reads by the architecture.
This patch fixes our fault handling code to treat cache maintenance
faults in the same way as loads.
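A sketch of the fault decode after the fix (bit names as used in
arch/arm64/mm/fault.c of that era; treat as illustrative):

  #define ESR_WRITE     (1 << 6)    /* WnR */
  #define ESR_CM        (1 << 8)    /* cache maintenance operation */
  #define ESR_LNX_EXEC  (1 << 24)

  if (esr & ESR_LNX_EXEC) {
      vm_flags = VM_EXEC;
  } else if ((esr & ESR_WRITE) && !(esr & ESR_CM)) {
      /* WnR reads as 1 for DC/IC ops, but they behave as reads, so
       * don't demand a writable VMA for them. */
      vm_flags = VM_WRITE;
      mm_flags |= FAULT_FLAG_WRITE;
  }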
Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
If 'COMPAT' is not defined, aarch32_break_handler() fails to build
because it is defined twice; it works independently of 'COMPAT', so
remove the dummy definition.
The related error:
arch/arm64/kernel/debug-monitors.c:249:5: error: redefinition of ‘aarch32_break_handler’
In file included from arch/arm64/kernel/debug-monitors.c:29:0:
/root/linux-next/arch/arm64/include/asm/debug-monitors.h:89:12: note: previous definition of ‘aarch32_break_handler’ was here
Signed-off-by: Chen Gang <gang.chen@asianux.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
There is a slight chance that (timer) interrupts are triggered before a
secondary CPU has been marked online, with implications for softirq thread
affinity.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Kirill Tkhai <tkhai@yandex.ru>
|
|
Virtualization does not always need KVM capabilities so drop the
dependency. The KVM symbol already depends on HAVE_KVM.
Fixes the following problem on a randconfig:
warning: (REMOTEPROC && RPMSG) selects VIRTUALIZATION which has unmet direct
dependencies (HAVE_KVM)
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Acked-by: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/5443/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
"me" is not used.
Signed-off-by: Richard Weinberger <richard@nod.at>
|
|
Currently we use both struct siginfo and siginfo_t.
Let's use struct siginfo internally to avoid an ongoing
compiler warning. We are allowed to do so because
struct siginfo and siginfo_t are equivalent.
Signed-off-by: Richard Weinberger <richard@nod.at>
|
|
During the pruning of the device tree octeon_fdt_pip_iface() is called
for each PIP interface and every port up to the port count is removed
from the device tree. However, the count was set to the return value of
cvmx_helper_interface_enumerate() which doesn't actually return the
count but just returns zero on success. This effectively removed *all*
ports from the tree.
Use cvmx_helper_ports_on_interface() instead to fix this. This
successfully restores the 3 ports of my ERLite-3 and fixes the "kernel
assigns random MAC addresses" issue.
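In essence (a two-line sketch; variable names are illustrative):

  /* cvmx_helper_interface_enumerate() returns 0 on success, not a port
   * count, so pruning by its return value removed every port. */
  cvmx_helper_interface_enumerate(interface);
  count = cvmx_helper_ports_on_interface(interface);
  /* ports numbered >= count are then pruned from the device tree */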
Signed-off-by: Faidon Liambotis <paravoid@debian.org>
Tested-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Acked-by: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/5587/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|