author		David Rientjes <rientjes@google.com>	2010-12-22 17:23:54 -0800
committer	H. Peter Anvin <hpa@linux.intel.com>	2010-12-23 15:27:15 -0800
commit		c1c3443c9c5e9be92641029ed229a41563e44506
tree		44094a0e5430f162ccfe17cbd0d45ada361c2f9c
parent		f51bf3073a145a5b3263fd882c52d6ec04b687da
x86, numa: Fake node-to-cpumask for NUMA emulation
It's necessary to fake the node-to-cpumask mapping so that an emulated
node ID returns a cpumask that includes all cpus that have affinity to
the memory it represents.
This is a little intrusive because it requires knowledge of the physical
topology of the system. setup_physnodes() gives us that information, but
since NUMA emulation ends up altering the physnodes array, it's necessary
to reset it before cpus are brought online.
Accordingly, the physnodes array is moved out of .init.data and into
.cpuinit.data, since it will be needed by cpuup callbacks.
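For illustration only, the section change amounts to swapping the annotation
on the array; the declaration below is an assumed sketch, not quoted from the
patch:

    /* previously: static struct bootnode physnodes[MAX_NUMNODES] __initdata; */
    /* kept for cpuup callbacks, so it now lives in .cpuinit.data:            */
    static struct bootnode physnodes[MAX_NUMNODES] __cpuinitdata;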
This works regardless of whether numa=fake is used on the command line and
regardless of whether the fake node setup succeeds or fails. The physnodes
array always contains the physical topology of the machine when
CONFIG_NUMA_EMU is enabled, so it can be used to set up the correct
node-to-cpumask mappings in all cases, since setup_physnodes() is called
whenever the array needs to be repopulated with the correct data.
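Schematically, the boot-time flow looks like the sketch below; setup_physnodes()
is named above, while the emulation entry point and the argument names are
placeholders rather than the real signatures:

    /* schematic only, not the literal diff */
    setup_physnodes(start, end, acpi, amd);   /* capture physical topology            */
    numa_emulation(...);                      /* may carve physnodes[] into fake nodes */
    setup_physnodes(start, end, acpi, amd);   /* repopulate before cpus come online   */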
To fake the actual mappings, numa_add_cpu() and numa_remove_cpu() are
rewritten for CONFIG_NUMA_EMU so that we first find the physical node to
which each cpu has local affinity, then iterate through all online nodes
to find the emulated nodes that have local affinity to that physical
node, and finally map the cpu to each of those emulated nodes.
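The following is a small, self-contained user-space sketch of that lookup.
It is illustrative only, not the kernel code from this patch: the data, the
cpu_to_physnode[] table, and emu_numa_add_cpu() are made up, and only the
physnodes/node_to_cpumask_map names mirror the kernel's. It models two
physical nodes, each split into two emulated nodes, and maps each cpu to
every emulated node carved out of its physical node:

    #include <stdio.h>

    #define NR_CPUS        4
    #define NR_PHYS_NODES  2
    #define NR_EMU_NODES   4

    /* Physical node memory ranges (simplified stand-in for physnodes[]). */
    struct range { unsigned long start, end; };
    static const struct range physnodes[NR_PHYS_NODES] = {
            { 0x00000000UL, 0x40000000UL },   /* physical node 0: first 1G  */
            { 0x40000000UL, 0x80000000UL },   /* physical node 1: second 1G */
    };

    /* Start address of each emulated node; two fake nodes per physical node. */
    static const unsigned long emu_node_start[NR_EMU_NODES] = {
            0x00000000UL, 0x20000000UL,       /* allocated on physical node 0 */
            0x40000000UL, 0x60000000UL,       /* allocated on physical node 1 */
    };

    /* Physical node each cpu has local affinity to (stand-in for resolving
     * the cpu's apicid against the detected topology). */
    static const int cpu_to_physnode[NR_CPUS] = { 0, 0, 1, 1 };

    /* node_to_cpumask_map[nid], modelled as a plain bitmask of cpus. */
    static unsigned int node_to_cpumask_map[NR_EMU_NODES];

    /* Map @cpu to every emulated node allocated on the cpu's physical node,
     * mirroring the numa_add_cpu() logic described above. */
    static void emu_numa_add_cpu(int cpu)
    {
            int physnid = cpu_to_physnode[cpu];
            int nid;

            for (nid = 0; nid < NR_EMU_NODES; nid++) {
                    unsigned long addr = emu_node_start[nid];

                    if (addr >= physnodes[physnid].start &&
                        addr <  physnodes[physnid].end)
                            node_to_cpumask_map[nid] |= 1u << cpu;
            }
    }

    int main(void)
    {
            int cpu, nid;

            for (cpu = 0; cpu < NR_CPUS; cpu++)
                    emu_numa_add_cpu(cpu);

            for (nid = 0; nid < NR_EMU_NODES; nid++)
                    printf("emulated node %d -> cpumask 0x%x\n",
                           nid, node_to_cpumask_map[nid]);
            return 0;
    }

With this layout the sketch prints a cpumask of 0x3 (cpus 0-1) for both
emulated nodes carved out of physical node 0 and 0xc (cpus 2-3) for both
emulated nodes from physical node 1; numa_remove_cpu() is the inverse
operation, clearing the cpu's bit in each of those masks.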
Signed-off-by: David Rientjes <rientjes@google.com>
LKML-Reference: <alpine.DEB.2.00.1012221701520.3701@chino.kir.corp.google.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>