path: root/kernel
2017-06-27Merge 4.4.74 into android-4.4Greg Kroah-Hartman
Changes in 4.4.74
configfs: Fix race between create_link and configfs_rmdir
can: gs_usb: fix memory leak in gs_cmd_reset()
cpufreq: conservative: Allow down_threshold to take values from 1 to 10
vb2: Fix an off by one error in 'vb2_plane_vaddr'
mac80211: don't look at the PM bit of BAR frames
mac80211/wpa: use constant time memory comparison for MACs
mac80211: fix CSA in IBSS mode
mac80211: fix IBSS presp allocation size
serial: efm32: Fix parity management in 'efm32_uart_console_get_options()'
x86/mm/32: Set the '__vmalloc_start_set' flag in initmem_init()
mfd: omap-usb-tll: Fix inverted bit use for USB TLL mode
staging: rtl8188eu: prevent an underflow in rtw_check_beacon_data()
iio: proximity: as3935: recalibrate RCO after resume
USB: hub: fix SS max number of ports
usb: core: fix potential memory leak in error path during hcd creation
pvrusb2: reduce stack usage pvr2_eeprom_analyze()
USB: gadget: dummy_hcd: fix hub-descriptor removable fields
usb: r8a66597-hcd: select a different endpoint on timeout
usb: r8a66597-hcd: decrease timeout
drivers/misc/c2port/c2port-duramar2150.c: checking for NULL instead of IS_ERR()
usb: xhci: ASMedia ASM1042A chipset need shorts TX quirk
USB: gadgetfs, dummy-hcd, net2280: fix locking for callbacks
mm/memory-failure.c: use compound_head() flags for huge pages
swap: cond_resched in swap_cgroup_prepare()
genirq: Release resources in __setup_irq() error path
alarmtimer: Prevent overflow of relative timers
usb: dwc3: exynos fix axius clock error path to do cleanup
MIPS: Fix bnezc/jialc return address calculation
alarmtimer: Rate limit periodic intervals
mm: larger stack guard gap, between vmas
Allow stack to grow up to address space limit
mm: fix new crash in unmapped_area_topdown()
Linux 4.4.74
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
2017-06-26alarmtimer: Rate limit periodic intervalsThomas Gleixner
commit ff86bf0c65f14346bf2440534f9ba5ac232c39a0 upstream. The alarmtimer code has another source of potentially rearming itself too fast. Interval timers with a very small interval have a similar CPU hog effect as the previously fixed overflow issue. The reason is that alarmtimers do not implement the normal protection against this kind of problem which the other posix timers use:

  timer expires -> queue signal -> deliver signal -> rearm timer

This scheme brings the rearming under scheduler control and prevents permanently firing timers which hog the CPU. Bringing this scheme to the alarm timer code is a major overhaul because it lacks all the necessary mechanisms completely. So for a quick fix limit the interval to one jiffie. This is not problematic in practice as alarmtimers are usually backed by an RTC for suspend which has 1 second resolution. It could therefore be argued that the resolution of this clock should be set to 1 second in general, but that's outside the scope of this fix. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Kostya Serebryany <kcc@google.com> Cc: syzkaller <syzkaller@googlegroups.com> Cc: John Stultz <john.stultz@linaro.org> Cc: Dmitry Vyukov <dvyukov@google.com> Link: http://lkml.kernel.org/r/20170530211655.896767100@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
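For illustration, the quick fix amounts to a clamp like the following standalone sketch; TICK_NSEC is hard-coded assuming HZ=100 and all names are illustrative, not the kernel's code:

  #include <stdint.h>
  #include <stdio.h>

  #define NSEC_PER_SEC 1000000000LL
  #define TICK_NSEC    (NSEC_PER_SEC / 100)   /* illustrative: assumes HZ=100 */

  /* Clamp a periodic alarm interval so it can never fire faster than one jiffie. */
  static int64_t clamp_alarm_interval(int64_t interval_ns)
  {
          return interval_ns < TICK_NSEC ? TICK_NSEC : interval_ns;
  }

  int main(void)
  {
          printf("%lld\n", (long long)clamp_alarm_interval(1));                /* raised to one tick */
          printf("%lld\n", (long long)clamp_alarm_interval(5 * NSEC_PER_SEC)); /* left unchanged     */
          return 0;
  }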
2017-06-26alarmtimer: Prevent overflow of relative timersThomas Gleixner
commit f4781e76f90df7aec400635d73ea4c35ee1d4765 upstream. Andrey reported an alarmtimer-related RCU stall while fuzzing the kernel with syzkaller. The reason for this is an overflow in ktime_add() which brings the resulting time into negative space and causes immediate expiry of the timer. The following rearm with a small interval does not bring the timer back into positive space due to the same issue. This results in a permanently firing alarmtimer which hogs the CPU. Use ktime_add_safe() instead which detects the overflow and clamps the result to KTIME_SEC_MAX. Reported-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Kostya Serebryany <kcc@google.com> Cc: syzkaller <syzkaller@googlegroups.com> Cc: John Stultz <john.stultz@linaro.org> Cc: Dmitry Vyukov <dvyukov@google.com> Link: http://lkml.kernel.org/r/20170530211655.802921648@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
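The clamping behaviour can be illustrated with a small standalone sketch; the overflow check and the KTIME_SEC_MAX clamp mirror the description above, not the exact kernel source:

  #include <stdint.h>
  #include <stdio.h>

  #define KTIME_MAX     INT64_MAX
  #define NSEC_PER_SEC  1000000000LL
  #define KTIME_SEC_MAX (KTIME_MAX / NSEC_PER_SEC)

  /* Saturating addition in the spirit of ktime_add_safe(): instead of wrapping
   * into negative space on overflow, clamp to KTIME_SEC_MAX seconds. */
  static int64_t ktime_add_saturating(int64_t lhs, int64_t rhs)
  {
          int64_t res = (int64_t)((uint64_t)lhs + (uint64_t)rhs);

          if (res < 0 || res < lhs || res < rhs)
                  res = KTIME_SEC_MAX * NSEC_PER_SEC;
          return res;
  }

  int main(void)
  {
          /* A relative expiry close to KTIME_MAX no longer goes negative. */
          printf("%lld\n", (long long)ktime_add_saturating(KTIME_MAX - 10, 1000));
          return 0;
  }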
2017-06-26genirq: Release resources in __setup_irq() error pathHeiner Kallweit
commit fa07ab72cbb0d843429e61bf179308aed6cbe0dd upstream. In case __irq_set_trigger() fails the resources requested via irq_request_resources() are not released. Add the missing release call into the error handling path. Fixes: c1bacbae8192 ("genirq: Provide irq_request/release_resources chip callbacks") Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/655538f5-cb20-a892-ff15-fbd2dd1fa4ec@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
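The shape of the fix is the common acquire/act/release-on-error pattern; a toy standalone version (stub functions standing in for irq_request_resources(), __irq_set_trigger() and irq_release_resources(), not the genirq code) looks like this:

  #include <stdio.h>

  /* Stand-ins for the genirq helpers; the trigger failure is simulated. */
  static int  request_resources(void) { return 0; }
  static void release_resources(void) { puts("resources released"); }
  static int  set_trigger(void)       { return -1; /* simulate failure */ }

  static int setup_irq_sketch(void)
  {
          int ret = request_resources();

          if (ret)
                  return ret;

          ret = set_trigger();
          if (ret) {
                  /* The missing piece the patch adds: undo the request above. */
                  release_resources();
                  return ret;
          }
          return 0;
  }

  int main(void)
  {
          return setup_irq_sketch() ? 1 : 0;
  }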
2017-06-23UPSTREAM: bpf: don't let ldimm64 leak map addresses on unprivilegedDaniel Borkmann
[ Upstream commit 0d0e57697f162da4aa218b5feafe614fb666db07 ] The patch fixes two things at once: 1) It checks the env->allow_ptr_leaks and only prints the map address to the log if we have the privileges to do so, otherwise it just dumps 0 as we would when kptr_restrict is enabled on %pK. Given the latter is off by default and not every distro sets it, I don't want to rely on this, hence the 0 by default for unprivileged. 2) Printing of ldimm64 in the verifier log is currently broken in that we don't print the full immediate, but only the 32 bit part of the first insn part for ldimm64. Thus, fix this up as well; it's okay to access, since we verified all ldimm64 earlier already (including just constants) through replace_map_fd_with_map_ptr(). Fixes: 1be7f75d1668 ("bpf: enable non-root eBPF programs") Fixes: cbd357008604 ("bpf: verifier (add ability to receive verification log)") Reported-by: Jann Horn <jannh@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Bug: 62199770 Change-Id: I62ee47d06ddc669ba2863e8cf24f8f3e7683a461
2017-06-14Merge 4.4.72 into android-4.4Greg Kroah-Hartman
Changes in 4.4.72
bnx2x: Fix Multi-Cos
ipv6: xfrm: Handle errors reported by xfrm6_find_1stfragopt()
cxgb4: avoid enabling napi twice to the same queue
tcp: disallow cwnd undo when switching congestion control
vxlan: fix use-after-free on deletion
ipv6: Fix leak in ipv6_gso_segment().
net: ping: do not abuse udp_poll()
net: ethoc: enable NAPI before poll may be scheduled
net: bridge: start hello timer only if device is up
sparc64: mm: fix copy_tsb to correctly copy huge page TSBs
sparc: Machine description indices can vary
sparc64: reset mm cpumask after wrap
sparc64: combine activate_mm and switch_mm
sparc64: redefine first version
sparc64: add per-cpu mm of secondary contexts
sparc64: new context wrap
sparc64: delete old wrap code
arch/sparc: support NR_CPUS = 4096
serial: ifx6x60: fix use-after-free on module unload
ptrace: Properly initialize ptracer_cred on fork
KEYS: fix dereferencing NULL payload with nonzero length
KEYS: fix freeing uninitialized memory in key_update()
crypto: gcm - wait for crypto op not signal safe
drm/amdgpu/ci: disable mclk switching for high refresh rates (v2)
nfsd4: fix null dereference on replay
nfsd: Fix up the "supattr_exclcreat" attributes
kvm: async_pf: fix rcu_irq_enter() with irqs enabled
KVM: cpuid: Fix read/write out-of-bounds vulnerability in cpuid emulation
arm: KVM: Allow unaligned accesses at HYP
KVM: async_pf: avoid async pf injection when in guest mode
dmaengine: usb-dmac: Fix DMAOR AE bit definition
dmaengine: ep93xx: Always start from BASE0
xen/privcmd: Support correctly 64KB page granularity when mapping memory
xen-netfront: do not cast grant table reference to signed short
xen-netfront: cast grant table reference first to type int
ext4: fix SEEK_HOLE
ext4: keep existing extra fields when inode expands
ext4: fix fdatasync(2) after extent manipulation operations
usb: gadget: f_mass_storage: Serialize wake and sleep execution
usb: chipidea: udc: fix NULL pointer dereference if udc_start failed
usb: chipidea: debug: check before accessing ci_role
staging/lustre/lov: remove set_fs() call from lov_getstripe()
iio: light: ltr501 Fix interchanged als/ps register field
iio: proximity: as3935: fix AS3935_INT mask
drivers: char: random: add get_random_long()
random: properly align get_random_int_hash
stackprotector: Increase the per-task stack canary's random range from 32 bits to 64 bits on 64-bit platforms
cpufreq: cpufreq_register_driver() should return -ENODEV if init fails
target: Re-add check to reject control WRITEs with overflow data
drm/msm: Expose our reservation object when exporting a dmabuf.
Input: elantech - add Fujitsu Lifebook E546/E557 to force crc_enabled
cpuset: consider dying css as offline
fs: add i_blocksize()
ufs: restore proper tail allocation
fix ufs_isblockset()
ufs: restore maintaining ->i_blocks
ufs: set correct ->s_maxsize
ufs_extend_tail(): fix the braino in calling conventions of ufs_new_fragments()
ufs_getfrag_block(): we only grab ->truncate_mutex on block creation path
cxl: Fix error path on bad ioctl
btrfs: use correct types for page indices in btrfs_page_exists_in_range
btrfs: fix memory leak in update_space_info failure path
KVM: arm/arm64: Handle possible NULL stage2 pud when ageing pages
scsi: qla2xxx: don't disable a not previously enabled PCI device
powerpc/eeh: Avoid use after free in eeh_handle_special_event()
powerpc/numa: Fix percpu allocations to be NUMA aware
powerpc/hotplug-mem: Fix missing endian conversion of aa_index
perf/core: Drop kernel samples even though :u is specified
drm/vmwgfx: Handle vmalloc() failure in vmw_local_fifo_reserve()
drm/vmwgfx: limit the number of mip levels in vmw_gb_surface_define_ioctl()
drm/vmwgfx: Make sure backup_handle is always valid
drm/nouveau/tmr: fully separate alarm execution/pending lists
ALSA: timer: Fix race between read and ioctl
ALSA: timer: Fix missing queue indices reset at SNDRV_TIMER_IOCTL_SELECT
ASoC: Fix use-after-free at card unregistration
drivers: char: mem: Fix wraparound check to allow mappings up to the end
tty: Drop krefs for interrupted tty lock
serial: sh-sci: Fix panic when serial console and DMA are enabled
net: better skb->sender_cpu and skb->napi_id cohabitation
mm: consider memblock reservations for deferred memory initialization sizing
NFS: Ensure we revalidate attributes before using execute_ok()
NFSv4: Don't perform cached access checks before we've OPENed the file
Make __xfs_xattr_put_listen preperly report errors.
arm64: hw_breakpoint: fix watchpoint matching for tagged pointers
arm64: entry: improve data abort handling of tagged pointers
RDMA/qib,hfi1: Fix MR reference count leak on write with immediate
usercopy: Adjust tests to deal with SMAP/PAN
arm64: armv8_deprecated: ensure extension of addr
arm64: ensure extension of smp_store_release value
Linux 4.4.72
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
2017-06-14perf/core: Drop kernel samples even though :u is specifiedJin Yao
commit cc1582c231ea041fbc68861dfaf957eaf902b829 upstream. When doing sampling, for example:

  perf record -e cycles:u ...

On workloads that do a lot of kernel entry/exits we see kernel samples, even though :u is specified. This is due to skid existing. This might be a security issue because it can leak kernel addresses even though kernel sampling support is disabled. The patch drops the kernel samples if exclude_kernel is specified. For example, test on Haswell desktop:

  perf record -e cycles:u <mgen>
  perf report --stdio

Before patch applied:

  99.77%  mgen  mgen              [.] buf_read
   0.20%  mgen  mgen              [.] rand_buf_init
   0.01%  mgen  [kernel.vmlinux]  [k] apic_timer_interrupt
   0.00%  mgen  mgen              [.] last_free_elem
   0.00%  mgen  libc-2.23.so      [.] __random_r
   0.00%  mgen  libc-2.23.so      [.] _int_malloc
   0.00%  mgen  mgen              [.] rand_array_init
   0.00%  mgen  [kernel.vmlinux]  [k] page_fault
   0.00%  mgen  libc-2.23.so      [.] __random
   0.00%  mgen  libc-2.23.so      [.] __strcasestr
   0.00%  mgen  ld-2.23.so        [.] strcmp
   0.00%  mgen  ld-2.23.so        [.] _dl_start
   0.00%  mgen  libc-2.23.so      [.] sched_setaffinity@@GLIBC_2.3.4
   0.00%  mgen  ld-2.23.so        [.] _start

We can see kernel symbols apic_timer_interrupt and page_fault.

After patch applied:

  99.79%  mgen  mgen          [.] buf_read
   0.19%  mgen  mgen          [.] rand_buf_init
   0.00%  mgen  libc-2.23.so  [.] __random_r
   0.00%  mgen  mgen          [.] rand_array_init
   0.00%  mgen  mgen          [.] last_free_elem
   0.00%  mgen  libc-2.23.so  [.] vfprintf
   0.00%  mgen  libc-2.23.so  [.] rand
   0.00%  mgen  libc-2.23.so  [.] __random
   0.00%  mgen  libc-2.23.so  [.] _int_malloc
   0.00%  mgen  libc-2.23.so  [.] _IO_doallocbuf
   0.00%  mgen  ld-2.23.so    [.] do_lookup_x
   0.00%  mgen  ld-2.23.so    [.] open_verify.constprop.7
   0.00%  mgen  ld-2.23.so    [.] _dl_important_hwcaps
   0.00%  mgen  libc-2.23.so  [.] sched_setaffinity@@GLIBC_2.3.4
   0.00%  mgen  ld-2.23.so    [.] _start

There are only userspace symbols. Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: acme@kernel.org Cc: jolsa@kernel.org Cc: kan.liang@intel.com Cc: mark.rutland@arm.com Cc: will.deacon@arm.com Cc: yao.jin@intel.com Link: http://lkml.kernel.org/r/1495706947-3744-1-git-send-email-yao.jin@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-06-14cpuset: consider dying css as offlineTejun Heo
commit 41c25707d21716826e3c1f60967f5550610ec1c9 upstream. In most cases, a cgroup controller doesn't care about the lifetimes of cgroups. For the controller, a css becomes online when ->css_online() is called on it and offline when ->css_offline() is called. However, cpuset is special in that the user interface it exposes cares whether certain cgroups exist or not. Combined with the RCU delay between cgroup removal and css offlining, this can lead to user-visible behavior oddities where operations which should succeed after cgroup removals fail for some time period. The effects of cgroup removals are delayed when seen from userland. This patch adds css_is_dying() which tests whether offline is pending and updates is_cpuset_online() so that the function returns false also while offline is pending. This gets rid of the userland visible delays. Signed-off-by: Tejun Heo <tj@kernel.org> Reported-by: Daniel Jordan <daniel.m.jordan@oracle.com> Link: http://lkml.kernel.org/r/327ca1f5-7957-fbb9-9e5f-9ba149d40ba2@oracle.com Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-06-14stackprotector: Increase the per-task stack canary's random range from 32 bits to 64 bits on 64-bit platformsDaniel Micay
commit 5ea30e4e58040cfd6434c2f33dc3ea76e2c15b05 upstream. The stack canary is an 'unsigned long' and should be fully initialized to random data rather than only 32 bits of random data. Signed-off-by: Daniel Micay <danielmicay@gmail.com> Acked-by: Arjan van de Ven <arjan@linux.intel.com> Acked-by: Rik van Riel <riel@redhat.com> Acked-by: Kees Cook <keescook@chromium.org> Cc: Arjan van Ven <arjan@linux.intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: kernel-hardening@lists.openwall.com Cc: stable@vger.kernel.org Link: http://lkml.kernel.org/r/20170504133209.3053-1-danielmicay@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
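As a rough illustration of the difference (the upstream fix seeds the canary in the fork path from get_random_long() rather than a 32-bit value; everything below is a userspace stand-in, not kernel code, and rand() is only used to show which bits get populated):

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  /* 32-bit seed (old behaviour) vs. full-width seed (new behaviour) for an
   * unsigned long canary on a 64-bit platform. */
  static unsigned long seed32(void) { return (uint32_t)rand(); }
  static unsigned long seed64(void)
  {
          return ((unsigned long)rand() << 33) ^ ((unsigned long)rand() << 16) ^ (unsigned long)rand();
  }

  int main(void)
  {
          srand((unsigned)time(NULL));
          printf("old-style canary: %#018lx\n", seed32());  /* upper 32 bits always zero */
          printf("new-style canary: %#018lx\n", seed64());  /* all 64 bits populated     */
          return 0;
  }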
2017-06-14ptrace: Properly initialize ptracer_cred on forkEric W. Biederman
commit c70d9d809fdeecedb96972457ee45c49a232d97f upstream. When I introduced ptracer_cred I failed to consider the weirdness of fork where the task_struct copies the old value by default. This winds up leaving ptracer_cred set even when a process forks and the child process does not wind up being ptraced. Because ptracer_cred is not set on non-ptraced processes whose parents were ptraced this has broken the ability of the enlightenment window manager to start setuid children. Fix this by properly initializing ptracer_cred in ptrace_init_task. This must be done with a little bit of care to preserve the current value of ptracer_cred when ptrace carries through fork. Re-reading the ptracer_cred from the ptracing process at this point is inconsistent with how PT_PTRACE_CAP has been maintained all of these years. Tested-by: Takashi Iwai <tiwai@suse.de> Fixes: 64b875f7ac8a ("ptrace: Capture the ptracer's creds not PT_PTRACE_CAP") Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-06-03schedstats/eas: guard properly to avoid breaking non-smp schedstats usersChris Redpath
Add appropriate #ifdef guards to ensure the smp-only easstats structs are not used when smp is not enabled. Arnd got a report from buildbot, analysed it, and pointed out exactly what the issue was. Reported-by: "Arnd Bergmann" <arnd@arndb.de> Suggested-by: "Arnd Bergmann" <arnd@arndb.de> Fixes: 4b85765a3dd9 ("sched/fair: Add eas (& cas) specific rq, sd and task stats") Signed-off-by: Chris Redpath <chris.redpath@arm.com> Change-Id: I60554dea20137f6774db3f59b4afd40a06554cfc
2017-06-02sched/tune: don't use schedtune before it is readyChris Redpath
When EAS is enabled during boot, we have to be careful not to use schedtune from fair.c before it is ready or it will warn us and we'll get a traceback in the console. Change-Id: I1a5cf29b18af626545c636c51219f9ed497c19fa Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02sched/fair: use SCHED_CAPACITY_SCALE for energy normalizationPatrick Bellasi
Change-Id: I686d26975f4a7dd830ff8441ff986e35461a7d55 Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com> Signed-off-by: Srinath Sridharan <srinathsr@google.com>
2017-06-02sched/{fair,tune}: use reciprocal_value to compute boost marginPatrick Bellasi
Change-Id: I493b07360c46eee0b72c2a046dab9ec6cb3427ef Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com> Signed-off-by: Srinath Sridharan <srinathsr@google.com>
2017-06-02sched/tune: Initialize raw_spin_lock in boosted_groupsSrinath Sridharan
bug: 32668852 Change-Id: Ice96230d88939d5973b1b6310085d1b3df9c47d9 Signed-off-by: Srinath Sridharan <srinathsr@google.com>
2017-06-02sched/tune: report when SchedTune has not been initializedPatrick Bellasi
Change-Id: Iba4e5e3d220451f04272d555e6b8e0af83a7f09d Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com> Signed-off-by: Srinath Sridharan <srinathsr@google.com>
2017-06-02sched/tune: fix sched_energy_diff tracepointChris Redpath
The sched_energy_diff tracepoint is in a place where it can never trace payoff or nrg.delta. If CONFIG_SCHED_TUNE is enabled, put it in a place where those values exist. If it is not enabled, trace from the current location. Change-Id: Id5442f2b34ec76625491d27c0f4285433ca12699 Reported-by: Valentin Schneider <valentin.schneider@arm.com> Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02sched/tune: increase group count to 5Chris Redpath
We use 5 groups everywhere else; this should default to the same. Change-Id: I05a20bdcf8046ea90a2e36979940cef11246e735 Signed-off-by: Chris Redpath <chris.redpath@arm.com> Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
2017-06-02cpufreq/schedutil: use boosted_cpu_util for PELT to match WALTChris Redpath
When using WALT we always used boosted cpu util for OPP selection. This is the primary purpose for boosted cpu util, but we hadn't changed the PELT utilization check to do the same thing. Fix that here. Change-Id: Id5ffb26eac23b25fe754255221f6d21b8cededfd Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com> Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02sched/fair: Fix sched_group_energy() to support per-cpu capacity statesMorten Rasmussen
sched_group_energy() was supposed to support per-cpu capacity states (DVFS), however, while fixing a hotplug issue this was broken as we bail out if there is no SD_SHARE_CAP_STATES flag set. This patch implements the hotplug race check differently and should therefore reinstate support for per-cpu capacity states. Change-Id: I5b865666c9ce833dcfa6514c574580d75aa0a195 Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
2017-06-02sched/fair: discount task contribution to find CPU with lowest utilizationValentin Schneider
In some cases, the new_util of a task can be the same on several CPUs. This causes an issue because the target_util is only updated if the current new_util is strictly smaller than target_util. To fix that, the cpu_util_wake() return value is used alongside the new_util value. If two CPUs compute the same new_util value, we'll now also look at their cpu_util_wake() return value. In this case, the CPU that last ran the task will be chosen in priority. Change-Id: Ia1ea2c4b3ec39621372c2f748862317d5b497723 Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
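A self-contained toy version of that tie-break (structures and numbers made up for illustration; not the find_best_target() code):

  #include <stdio.h>

  struct cand {
          int cpu;
          unsigned long new_util;   /* cpu util if the task is placed here */
          unsigned long wake_util;  /* cpu_util_wake()-style value         */
  };

  /* Prefer the lowest new_util; on a tie, prefer the lower wake_util,
   * i.e. the cpu that last ran the task. */
  static int pick_target(const struct cand *c, int n)
  {
          unsigned long best_util = ~0UL, best_wake = ~0UL;
          int best = -1;

          for (int i = 0; i < n; i++) {
                  if (c[i].new_util < best_util ||
                      (c[i].new_util == best_util && c[i].wake_util < best_wake)) {
                          best      = c[i].cpu;
                          best_util = c[i].new_util;
                          best_wake = c[i].wake_util;
                  }
          }
          return best;
  }

  int main(void)
  {
          struct cand cands[] = {
                  { 0, 300, 280 },
                  { 1, 300, 120 },  /* same new_util as cpu0, lower wake_util -> wins */
                  { 2, 450, 400 },
          };
          printf("target cpu = %d\n", pick_target(cands, 3));
          return 0;
  }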
2017-06-02sched/fair: ensure utilization signals are synchronized before useChris Redpath
wake_cap performs task and cpu utilization synchronization which is what allows us to subtract current task util from prev_cpu util and have a sensible number to work with. It looks as though if wake_wide returns 0, we could potentially not execute wake_cap, which would result in unsynced signals we then use for energy calculations. This is not necessarily an issue we've seen in traces, but it looks as though it should be changed. Change-Id: Ic54a3cba2a10d946ea20113a04371dea04115e82 Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02sched/fair: remove task util from own cpu when placing waking taskChris Redpath
When we place a waking task with find_best_target, we calculate the existing and new utilisation of each candidate cpu. However, we do not remove any blocked load resulting from the waking task on the previous cpu which might cause unnecessary migrations. Switch to using cpu_util_wake which does this for us, which requires moving cpu_util_wake a few functions earlier. Also, we have multiple potential cpu utilization signals here, so update the necessary bits to allow WALT to work properly (including not subtracting task util for WALT). When WALT is in use, cpu utilization is the utilization in the previous completed window, whilst the task utilization ignores fully idle windows. There seems to be no way to have a decently accurate estimate of how much (if any) utilization from this task remains on the prev cpu. Instead, just return cpu_util when we're using WALT. Change-Id: I448203ab98ffb5c020dfb6b218581eef1f5601f7 Reported-by: Patrick Bellasi <patrick.bellasi@arm.com> Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02trace:sched: Make util_avg in load_avg trace reflect PELT/WALT as usedChris Redpath
With the ability to choose between WALT and PELT for utilisation tracking we can have the situation where we're using WALT to make all the decisions and reporting PELT figures in the sched_load_avg_(cpu|task) trace points. This is not too much of an issue, but when analysing trace it is nice to see numbers representing what the scheduler is using rather than needing to add in additional sched_walt_* traces to figure it out. Add reporting for both types, and make the util_avg member reflect what will be seen from cpu or task_util functions in the scheduler. Change-Id: I2abbd2c5fa70822096d0f3372b4c12b1c6af1590 Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02sched/fair: Add eas (& cas) specific rq, sd and task statsDietmar Eggemann
The statistic counters are placed in the eas (& cas) wakeup path. Each of them has one representation for the runqueue (rq), the sched_domain (sd) and the task. A task counter is always incremented. A rq counter is always incremented for the rq the scheduler is currently running on. A sd counter is only incremented if a relation to a sd exists. The counters are exposed:

(1) In /proc/schedstat for rq's and sd's:

  $ cat /proc/schedstat
  ...
  cpu0 71422 0 2321254 ...
  eas 44144 0 0 19446 0 24698 568435 51621 156932 133 222011 17459 120279 516814 83 0 156962 359235 176439 139981 <- runqueue for cpu0
  ...
  domain0 3 42430 42331 ...
  eas 0 0 0 14200 0 0 0 0 0 0 0 0 0 0 0 0 0 0 66355 0 <- MC sched domain for cpu0
  ...

The per-cpu eas vector has the following elements:

  sis_attempts sis_idle sis_cache_affine sis_suff_cap sis_idle_cpu sis_count ||
  secb_attempts secb_sync secb_idle_bt secb_insuff_cap secb_no_nrg_sav secb_nrg_sav secb_count ||
  fbt_attempts fbt_no_cpu fbt_no_sd fbt_pref_idle fbt_count ||
  cas_attempts cas_count

The following relations exist between these counters (from cpu0 eas vector above):

  sis_attempts = sis_idle + sis_cache_affine + sis_suff_cap + sis_idle_cpu + sis_count
  44144 = 0 + 0 + 19446 + 0 + 24698

  secb_attempts = secb_sync + secb_idle_bt + secb_insuff_cap + secb_no_nrg_sav + secb_nrg_sav + secb_count
  568435 = 51621 + 156932 + 133 + 222011 + 17459 + 120279

  fbt_attempts = fbt_no_cpu + fbt_no_sd + fbt_pref_idle + fbt_count + (return -1)
  516814 = 83 + 0 + 156962 + 359235 + (534)

  cas_attempts = cas_count + (return -1 or smp_processor_id())
  176439 = 139981 + (36458)

(2) In /proc/$PROCESS_PID/task/$TASK_PID/sched for a task. Example: main thread of system_server:

  $ cat /proc/1083/task/1083/sched
  ...
  se.statistics.nr_wakeups_sis_attempts      : 945
  se.statistics.nr_wakeups_sis_idle          : 0
  se.statistics.nr_wakeups_sis_cache_affine  : 0
  se.statistics.nr_wakeups_sis_suff_cap      : 219
  se.statistics.nr_wakeups_sis_idle_cpu      : 0
  se.statistics.nr_wakeups_sis_count         : 726
  se.statistics.nr_wakeups_secb_attempts     : 10376
  se.statistics.nr_wakeups_secb_sync         : 1462
  se.statistics.nr_wakeups_secb_idle_bt      : 6984
  se.statistics.nr_wakeups_secb_insuff_cap   : 3
  se.statistics.nr_wakeups_secb_no_nrg_sav   : 927
  se.statistics.nr_wakeups_secb_nrg_sav      : 206
  se.statistics.nr_wakeups_secb_count        : 794
  se.statistics.nr_wakeups_fbt_attempts      : 8914
  se.statistics.nr_wakeups_fbt_no_cpu        : 0
  se.statistics.nr_wakeups_fbt_no_sd         : 0
  se.statistics.nr_wakeups_fbt_pref_idle     : 6987
  se.statistics.nr_wakeups_fbt_count         : 1554
  se.statistics.nr_wakeups_cas_attempts      : 3107
  se.statistics.nr_wakeups_cas_count         : 1195
  ...

The same relations between the counters as in the per-cpu case apply.

Change-Id: Ie7d01267c78a3f41f60a3ef52917d5a5d463f195 Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02sched/core: Fix PELT jump to max OPP upon util increaseAndres Oportus
Change-Id: Ic80b588ec466ef707f658dcea039fd0d6b384b63 Signed-off-by: Andres Oportus <andresoportus@google.com>
2017-06-02sched: EAS & 'single cpu per cluster'/cpu hotplug interoperabilityDietmar Eggemann
For Energy-Aware Scheduling (EAS) to work properly, even in the case that there is only one cpu per cluster or that cpus are hot-plugged out, the Energy Model (EM) data on all energy-aware sched domains (sd) has to be present for all online cpus. Mainline sd hierarchy setup code will remove sd's which are not useful for task scheduling, e.g. in the following situations:

1. Only 1 cpu is/remains in one cluster of a multi cluster system. This remaining cpu only has DIE and no MC sd.

2. A complete cluster in a two cluster system is hot-plugged out. The cpus of the remaining cluster only have MC and no DIE sd.

To make sure that all online cpus keep all their energy-aware sd's, the sd degenerate functionality has been changed to not free a sd if its first sched group (sg) contains EM data in case:

1. There is only 1 cpu left in the sd.

2. There have to be at least 2 sg's if certain sd flags are set.

Instead of freeing such a sd it now clears only its SD_LOAD_BALANCE flag. This will make sure that the EAS functionality will always see all energy-aware sd's for all online cpus. It will introduce a tiny performance degradation for operations on affected cpus since the hot-path macro for_each_domain() has to deal with sd's not contributing to task scheduling at all now. In most cases the existing code makes sure that task scheduling is not invoked on a sd with !SD_LOAD_BALANCE. However, a small change is necessary in update_sd_lb_stats() to make sure that sd->parent is only initialized to !NULL in case the parent sd contains more than 1 sg. The handling of newidle decay values before the SD_LOAD_BALANCE check in rebalance_domains() stays unchanged.

Test (w/ CONFIG_SCHED_DEBUG):

JUNO r0 default system:

  $ cat /proc/cpuinfo | grep "^CPU part"
  CPU part : 0xd03
  CPU part : 0xd07
  CPU part : 0xd07
  CPU part : 0xd03
  CPU part : 0xd03
  CPU part : 0xd03

SD names and flags:

  $ cat /proc/sys/kernel/sched_domain/cpu*/domain*/name
  MC
  DIE
  MC
  DIE
  MC
  DIE
  MC
  DIE
  MC
  DIE
  MC
  DIE

  $ printf "%x\n" `cat /proc/sys/kernel/sched_domain/cpu*/domain*/flags`
  832f
  102f
  832f
  102f
  832f
  102f
  832f
  102f
  832f
  102f
  832f
  102f

Test 1: Hotplug-out one A57 (CPU part 0xd07) cpu:

  $ echo 0 > /sys/devices/system/cpu/cpu1/online

  $ cat /proc/cpuinfo | grep "^CPU part"
  CPU part : 0xd03
  CPU part : 0xd07
  CPU part : 0xd03
  CPU part : 0xd03
  CPU part : 0xd03

SD names and flags for remaining A57 (cpu2) cpu:

  $ cat /proc/sys/kernel/sched_domain/cpu2/domain*/name
  MC
  DIE

  $ printf "%x\n" `cat /proc/sys/kernel/sched_domain/cpu2/domain*/flags`
  832e <-- MC SD with !SD_LOAD_BALANCE
  102f

Test 2: Hotplug-out the entire A57 cluster:

  $ echo 0 > /sys/devices/system/cpu/cpu1/online
  $ echo 0 > /sys/devices/system/cpu/cpu2/online

  $ cat /proc/cpuinfo | grep "^CPU part"
  CPU part : 0xd03
  CPU part : 0xd03
  CPU part : 0xd03
  CPU part : 0xd03

SD names and flags for the remaining A53 (CPU part 0xd03) cluster:

  $ cat /proc/sys/kernel/sched_domain/cpu*/domain*/name
  MC
  DIE
  MC
  DIE
  MC
  DIE
  MC
  DIE

  $ printf "%x\n" `cat /proc/sys/kernel/sched_domain/cpu*/domain*/flags`
  832f
  102e <-- DIE SD with !SD_LOAD_BALANCE
  832f
  102e
  832f
  102e
  832f
  102e

Change-Id: If24aa2b2628f334abbf0207d39e2a86168d9d673 Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
2017-06-02UPSTREAM: sched/core: Fix group_entity's share updateVincent Guittot
The update of the share of a cfs_rq is done when its load_avg is updated but before the group_entity's load_avg has been updated for the past time slot. This generates wrong load_avg accounting which can be significant when small tasks are involved in the scheduling. Let's take the example of a task a that is dequeued from its task group A:

  root (cfs_rq)
    \
    (se) A (cfs_rq)
           \
           (se) a

Task "a" was the only task in task group A, which becomes idle when a is dequeued. We have the sequence:

- dequeue_entity a->se
  - update_load_avg(a->se)
  - dequeue_entity_load_avg(A->cfs_rq, a->se)
  - update_cfs_shares(A->cfs_rq)
    A->cfs_rq->load.weight == 0
    A->se->load.weight is updated with the new share (0 in this case)
- dequeue_entity A->se
  - update_load_avg(A->se), but its weight is now null, so the last time slot (up to a tick) will be accounted with a weight of 0 instead of its real weight during the time slot.

The last time slot will be accounted as an idle one whereas it was a running one. If the running time of task a is short enough that no tick happens when it runs, all running time of group entity A->se will be accounted as idle time. Instead, we should update the share of a cfs_rq (in fact the weight of its group entity) only after having updated the load_avg of the group_entity. update_cfs_shares() now takes the sched_entity as a parameter instead of the cfs_rq, and the weight of the group_entity is updated only once its load_avg has been synced with current time. Change-Id: Id6ce3be1767b44b444ce2a77ed1ba063e57c0664 Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: pjt@google.com Link: http://lkml.kernel.org/r/1482335426-7664-1-git-send-email-vincent.guittot@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org> (cherry picked from commit 89ee048f3cc796db6f26906c6bef4edf0bee70fd) [minor cherry pick stuff] Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02UPSTREAM: sched/fair: Fix calc_cfs_shares() fixed point arithmetics width confusionPeter Zijlstra
Commit: fde7d22e01aa ("sched/fair: Fix overly small weight for interactive group entities") did something non-obvious but also did it buggy yet latent. The problem was exposed for real by a later commit in the v4.7 merge window: 2159197d6677 ("sched/core: Enable increased load resolution on 64-bit kernels") ... after which tg->load_avg and cfs_rq->load.weight had different units (10 bit fixed point and 20 bit fixed point resp.). Add a comment to explain the use of cfs_rq->load.weight over the 'natural' cfs_rq->avg.load_avg and add scale_load_down() to correct for the difference in unit. Since this is (now, as per a previous commit) the only user of calc_tg_weight(), collapse it. The effects of this bug should be randomly inconsistent SMP-balancing of cgroups workloads. Change-Id: If1e565662ea163485edd94a12aef644d0e0dfe7a Reported-by: Jirka Hladky <jhladky@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Fixes: 2159197d6677 ("sched/core: Enable increased load resolution on 64-bit kernels") Fixes: fde7d22e01aa ("sched/fair: Fix overly small weight for interactive group entities") Signed-off-by: Ingo Molnar <mingo@kernel.org> (cherry picked from commit ea1dc6fc6242f991656e35e2ed3d90ec1cd13418) Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02UPSTREAM: sched/fair: Fix incorrect task group ->load_avgVincent Guittot
A scheduler performance regression has been reported by Joseph Salisbury, which he bisected back to: 3d30544f0212 ("sched/fair: Apply more PELT fixes) The regression triggers when several levels of task groups are involved (read: SystemD) and cpu_possible_mask != cpu_present_mask. The root cause is that group entity's load (tg_child->se[i]->avg.load_avg) is initialized to scale_load_down(se->load.weight). During the creation of a child task group, its group entities on possible CPUs are attached to parent's cfs_rq (tg_parent) and their loads are added to the parent's load (tg_parent->load_avg) with update_tg_load_avg(). But only the load on online CPUs will then be updated to reflect real load, whereas load on other CPUs will stay at the initial value. The result is a tg_parent->load_avg that is higher than the real load, the weight of group entities (tg_parent->se[i]->load.weight) on online CPUs is smaller than it should be, and the task group gets less running time than it could expect. ( This situation can be detected with /proc/sched_debug. The ".tg_load_avg" of the task group will be much higher than sum of ".tg_load_avg_contrib" of online cfs_rqs of the task group. ) The load of group entities doesn't have to be initialized to anything other than 0, because their load will increase when an entity is attached. Change-Id: Ie55021ff98ba49016adfddb2444e9c9709939226 Reported-by: Joseph Salisbury <joseph.salisbury@canonical.com> Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: <stable@vger.kernel.org> # 4.8.x Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: joonwoop@codeaurora.org Fixes: 3d30544f0212 ("sched/fair: Apply more PELT fixes) Link: http://lkml.kernel.org/r/1476881123-10159-1-git-send-email-vincent.guittot@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org> (cherry picked from commit b5a9b340789b2b24c6896bcf7a065c31a4db671c) Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02UPSTREAM: sched/fair: Fix effective_load() to consistently use smoothed loadPeter Zijlstra
Starting with the following commit: fde7d22e01aa ("sched/fair: Fix overly small weight for interactive group entities") calc_tg_weight() doesn't compute the right value as expected by effective_load(). The difference is in the 'correction' term. In order to ensure \Sum rw_j >= rw_i we cannot use tg->load_avg directly, since that might be lagging a correction on the current cfs_rq->avg.load_avg value. Therefore we use tg->load_avg - cfs_rq->tg_load_avg_contrib + cfs_rq->avg.load_avg. Now, per the referenced commit, calc_tg_weight() doesn't use cfs_rq->avg.load_avg, as is later used in @w, but uses cfs_rq->load.weight instead. So stop using calc_tg_weight() and do it explicitly. The effects of this bug are wake_affine() making randomly poor choices in cgroup-intense workloads. Change-Id: I1c0058ff674650cf295c8dc3b88a5a3de4bddab0 Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: <stable@vger.kernel.org> # v4.3+ Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Fixes: fde7d22e01aa ("sched/fair: Fix overly small weight for interactive group entities") Signed-off-by: Ingo Molnar <mingo@kernel.org> (cherry picked from commit 7dd4912594daf769a46744848b05bd5bc6d62469) Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02UPSTREAM: sched/fair: Propagate asynchrous detachVincent Guittot
A task can be asynchronously detached from cfs_rq when migrating between CPUs. The load of the migrated task is then removed from source cfs_rq during its next update. We use this event to set propagation flag. During the load balance, we take advantage of the update of blocked load to propagate any pending changes. The propagation relies on patch: "sched: Fix hierarchical order in rq->leaf_cfs_rq_list" ... which orders children and parents, to ensure that it's done in one pass. Change-Id: I33782e35fc4711f5901e8c23d6aa7ec5f2ff7ee5 Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Morten.Rasmussen@arm.com Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bsegall@google.com Cc: kernellwp@gmail.com Cc: pjt@google.com Cc: yuyang.du@intel.com Link: http://lkml.kernel.org/r/1478598827-32372-6-git-send-email-vincent.guittot@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org> (cherry picked from commit 4e5160766fcc9f41bbd38bac11f92dce993644aa) Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02UPSTREAM: sched/fair: Propagate load during synchronous attach/detachVincent Guittot
When a task moves from/to a cfs_rq, we set a flag which is then used to propagate the change at parent level (sched_entity and cfs_rq) during next update. If the cfs_rq is throttled, the flag will stay pending until the cfs_rq is unthrottled. For propagating the utilization, we copy the utilization of group cfs_rq to the sched_entity. For propagating the load, we have to take into account the load of the whole task group in order to evaluate the load of the sched_entity. Similarly to what was done before the rewrite of PELT, we add a correction factor in case the task group's load is greater than its share so it will contribute the same load of a task of equal weight. Change-Id: Id34a9888484716961c9027299c0b4d82881a39d1 Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Morten.Rasmussen@arm.com Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bsegall@google.com Cc: kernellwp@gmail.com Cc: pjt@google.com Cc: yuyang.du@intel.com Link: http://lkml.kernel.org/r/1478598827-32372-5-git-send-email-vincent.guittot@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org> (cherry picked from commit 09a43ace1f986b003c118fdf6ddf1fd685692d49) Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02UPSTREAM: sched/fair: Fix hierarchical order in rq->leaf_cfs_rq_listVincent Guittot
Fix the insertion of cfs_rq in rq->leaf_cfs_rq_list to ensure that a child will always be called before its parent. The hierarchical order in the shares update list has been introduced by commit: 67e86250f8ea ("sched: Introduce hierarchal order on shares update list") With the current implementation a child can still be put after its parent. Let's take the example of:

       root
        \
         b
        /\
       c  d*
          |
          e*

with root -> b -> c already enqueued but not d -> e, so the leaf_cfs_rq_list looks like:

  head -> c -> b -> root -> tail

The branch d -> e will be added the first time that they are enqueued, starting with e then d. When e is added, its parent is not already on the list so e is put at the tail:

  head -> c -> b -> root -> e -> tail

Then, d is added at the head because its parent is already on the list:

  head -> d -> c -> b -> root -> e -> tail

e is not placed at the right position and will be called last whereas it should be called at the beginning. Because it follows the bottom-up enqueue sequence, we are sure that we will finish by adding either a cfs_rq without parent or a cfs_rq with a parent that is already on the list. We can use this event to detect when we have finished adding a new branch. For the others, whose parents are not already added, we have to ensure that they will be added after their children that have just been inserted the steps before, and after any potential parents that are already in the list. The easiest way is to put the cfs_rq just after the last inserted one and to keep track of it until the branch is fully added. Change-Id: I4fe0b8502ea628c13d14e8e5c5279bce67fb8845 Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Morten.Rasmussen@arm.com Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bsegall@google.com Cc: kernellwp@gmail.com Cc: pjt@google.com Cc: yuyang.du@intel.com Link: http://lkml.kernel.org/r/1478598827-32372-3-git-send-email-vincent.guittot@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org> (cherry picked from commit 9c2791f936ef5fd04a118b5c284f2c9a95f4a647) Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02BACKPORT: sched/fair: Factorize PELT updateVincent Guittot
Every time we modify the load/utilization of a sched_entity, we start to sync it with its cfs_rq. This update is done in different ways:

- when attaching/detaching a sched_entity, we update the cfs_rq and then we sync the entity with the cfs_rq.
- when enqueueing/dequeuing the sched_entity, we update both sched_entity and cfs_rq metrics to now.

Use update_load_avg() every time we have to update and sync cfs_rq and sched_entity before changing the state of a sched_entity. Change-Id: Ibde9a7e07ac80e9d5753bb4a0c30dfb3643cc666 Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Morten.Rasmussen@arm.com Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bsegall@google.com Cc: kernellwp@gmail.com Cc: pjt@google.com Cc: yuyang.du@intel.com Link: http://lkml.kernel.org/r/1478598827-32372-4-git-send-email-vincent.guittot@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org> [backported FROMLIST] Signed-off-by: Andres Oportus <andresoportus@google.com> (cherry picked from commit d31b1a66cbe0931733583ad9d9e8c6cfd710907d) Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02UPSTREAM: sched/fair: Factorize attach/detach entityVincent Guittot
Factorize post_init_entity_util_avg() and part of attach_task_cfs_rq() in one function attach_entity_cfs_rq(). Create symmetric detach_entity_cfs_rq() function. Change-Id: I44fc6bb5e71460be65f6b8928d4620c6c27a6a67 Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Morten.Rasmussen@arm.com Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bsegall@google.com Cc: kernellwp@gmail.com Cc: pjt@google.com Cc: yuyang.du@intel.com Link: http://lkml.kernel.org/r/1478598827-32372-2-git-send-email-vincent.guittot@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org> (cherry picked from commit df217913e72ec7e603d8b68cc4c70646cf7000db) Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02UPSTREAM: sched/fair: Improve PELT stuff some morePeter Zijlstra
Vincent noted that the update_tg_load_avg() usage in commit: 3d30544f0212 ("sched/fair: Apply more PELT fixes") isn't entirely sufficient. We need to call this function every time cfs_rq->avg.load changes, this includes when update_cfs_rq_load_avg() returns true, but {attach,detach}_entity_load_avg() themselves also change it. This means we need to unconditionally call update_tg_load_avg(). Also, add more comments. Change-Id: I7e55fceb587601f73c760c8b0d47a7ef2b777b9e Reported-by: Vincent Guittot <vincent.guittot@linaro.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org> (cherry picked from commit 7c3edd2c300b7ef2005a69dc727692ee07434aa5) Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02UPSTREAM: sched/fair: Apply more PELT fixesPeter Zijlstra
One additional 'rule' for using update_cfs_rq_load_avg() is that one should call update_tg_load_avg() if it returns true. Add a bunch of comments to hopefully clarify some of the rules:

 o You need to update the cfs_rq _before_ any entity attach/detach; this is important because, while for mathematical consistency this isn't strictly needed, it is required for the physical interpretation of the model: you attach/detach _now_.

 o When you modify the cfs_rq avg, you have to then call update_tg_load_avg() in order to propagate changes upwards.

 o (Fair) entities are always attached; switched_{to,from}_fair() deal with !fair. This directly follows from the definition of the cfs_rq averages, namely that they are a direct sum of all (runnable or blocked) entities on that rq.

It is the second rule that this patch enforces, but it adds comments pertaining to all of them. Change-Id: Icdc906e98c67b84cb9582c893bc761a9886be57a Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org> (cherry picked from commit 3d30544f02120b884bba2a9466c87dba980e3be5) Signed-off-by: Chris Redpath <chris.redpath@arm.com>
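The second rule boils down to a pattern like the following toy sketch (stub functions only, not the scheduler code):

  #include <stdbool.h>
  #include <stdio.h>

  /* Stubs standing in for the two functions named by the rule. */
  static bool update_cfs_rq_load_avg(void)  /* true when cfs_rq->avg.load changed */
  {
          return true;                      /* pretend the average changed */
  }

  static void update_tg_load_avg(void)
  {
          puts("propagate cfs_rq average up to the task group");
  }

  int main(void)
  {
          /* Rule: whenever update_cfs_rq_load_avg() reports a change,
           * follow it with update_tg_load_avg(). */
          if (update_cfs_rq_load_avg())
                  update_tg_load_avg();
          return 0;
  }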
2017-06-02UPSTREAM: sched/fair: Fix post_init_entity_util_avg() serializationPeter Zijlstra
Chris Wilson reported a divide by 0 at:

  post_init_entity_util_avg():

  >    725	if (cfs_rq->avg.util_avg != 0) {
  >    726		sa->util_avg = cfs_rq->avg.util_avg * se->load.weight;
  > -> 727		sa->util_avg /= (cfs_rq->avg.load_avg + 1);
  >    728
  >    729		if (sa->util_avg > cap)
  >    730			sa->util_avg = cap;
  >    731	} else {

Which given the lack of serialization, and the code generated from update_cfs_rq_load_avg() is entirely possible:

  if (atomic_long_read(&cfs_rq->removed_load_avg)) {
  	s64 r = atomic_long_xchg(&cfs_rq->removed_load_avg, 0);
  	sa->load_avg = max_t(long, sa->load_avg - r, 0);
  	sa->load_sum = max_t(s64, sa->load_sum - r * LOAD_AVG_MAX, 0);
  	removed_load = 1;
  }

turns into:

  ffffffff81087064:  49 8b 85 98 00 00 00    mov    0x98(%r13),%rax
  ffffffff8108706b:  48 85 c0                test   %rax,%rax
  ffffffff8108706e:  74 40                   je     ffffffff810870b0
  ffffffff81087070:  4c 89 f8                mov    %r15,%rax
  ffffffff81087073:  49 87 85 98 00 00 00    xchg   %rax,0x98(%r13)
  ffffffff8108707a:  49 29 45 70             sub    %rax,0x70(%r13)
  ffffffff8108707e:  4c 89 f9                mov    %r15,%rcx
  ffffffff81087081:  bb 01 00 00 00          mov    $0x1,%ebx
  ffffffff81087086:  49 83 7d 70 00          cmpq   $0x0,0x70(%r13)
  ffffffff8108708b:  49 0f 49 4d 70          cmovns 0x70(%r13),%rcx

Which you'll note ends up with 'sa->load_avg - r' in memory at ffffffff8108707a.

By calling post_init_entity_util_avg() under rq->lock we're sure to be fully serialized against PELT updates and cannot observe intermediate state like this. Change-Id: I56c11886102b7859df82e26c88b1b7c200a39f6e Reported-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Yuyang Du <yuyang.du@intel.com> Cc: bsegall@google.com Cc: morten.rasmussen@arm.com Cc: pjt@google.com Cc: steve.muckle@linaro.org Fixes: 2b8c41daba32 ("sched/fair: Initiate a new task's util avg to a bounded value") Link: http://lkml.kernel.org/r/20160609130750.GQ30909@twins.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org> (cherry picked from commit b7fa30c9cc48c4f55663420472505d3b4f6e1705) Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02BACKPORT: sched/fair: Initiate a new task's util avg to a bounded valueYuyang Du
A new task's util_avg is set to full utilization of a CPU (100% time running). This accelerates a new task's utilization ramp-up, useful to boost its execution in early time. However, it may result in (insanely) high utilization for a transient time period when a flood of tasks are spawned. Importantly, it violates the "fundamentally bounded" CPU utilization, and its side effect is negative if we don't take any measure to bound it. This patch proposes an algorithm to address this issue. It has two methods to approach a sensible initial util_avg:

(1) An expected (or average) util_avg based on its cfs_rq's util_avg:

  util_avg = cfs_rq->util_avg / (cfs_rq->load_avg + 1) * se.load.weight

(2) A trajectory of how successive new tasks' util develops, which gives 1/2 of the left utilization budget to a new task such that the additional util is noticeably large (when overall util is low) or unnoticeably small (when overall util is high enough). In the meantime, the aggregate utilization is well bounded:

  util_avg_cap = (1024 - cfs_rq->avg.util_avg) / 2^n

where n denotes the nth task. If util_avg is larger than util_avg_cap, then the effective util is clamped to the util_avg_cap. Change-Id: Idafe989b24d9e70911666f09800bf1d5a011e1f4 Reported-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Signed-off-by: Yuyang Du <yuyang.du@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bsegall@google.com Cc: morten.rasmussen@arm.com Cc: pjt@google.com Cc: steve.muckle@linaro.org Link: http://lkml.kernel.org/r/1459283456-21682-1-git-send-email-yuyang.du@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org> (cherry picked from commit 2b8c41daba327c633228169e8bd8ec067ab443f8) [integrate with schedfreq - schedfreq has a tuneable for init task util but this commit removes the use of the tuneable since we have a new algorithm for calculating an initial utilisation. I've left the tuneable in place, but it is no longer used even when schedfreq is the CPUFreq governor] Signed-off-by: Chris Redpath <chris.redpath@arm.com>
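The two methods reduce to a little arithmetic; this standalone sketch (simplified names, SCHED_CAPACITY_SCALE assumed to be 1024 as in the message, not the kernel source) reproduces the described clamping:

  #include <stdio.h>

  #define SCHED_CAPACITY_SCALE 1024L

  /* Simplified stand-in for the initialisation described above. */
  static long init_util_avg(long cfs_util_avg, long cfs_load_avg, long se_weight)
  {
          long cap = (SCHED_CAPACITY_SCALE - cfs_util_avg) / 2;  /* method (2): half the remaining budget */
          long util;

          if (cap <= 0)
                  return 0;

          if (cfs_util_avg != 0) {
                  /* method (1): expected util based on the cfs_rq's current state */
                  util = cfs_util_avg * se_weight / (cfs_load_avg + 1);
                  if (util > cap)
                          util = cap;   /* clamp to the bounded budget */
          } else {
                  util = cap;
          }
          return util;
  }

  int main(void)
  {
          printf("%ld\n", init_util_avg(0, 0, 1024));       /* empty rq -> 512, half the budget */
          printf("%ld\n", init_util_avg(900, 2048, 1024));  /* busy rq  -> clamped to 62        */
          return 0;
  }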
2017-06-02sched/fair: Simplify idle_idx handling in select_idle_sibling()Dietmar Eggemann
Rename best_idle to best_idle_cpu so the same name is used like in find_best_target(). Fix if (best_idle > 0) since best_idle_cpu = 0 is a valid target. Use 'unsigned long' data type for best_idle_capacity. Since we're looking for the shallowest best_idle_cstate initialize best_idle_cstate = INT_MAX. For cpus which are not idle (idle_idx = -1) the condition 'if (idle_idx < best_idle_cstate && ...)' is never executed. Change-Id: Ic5b63d58478696b3d1ec6253cf739a69a574cf99 Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com> (cherry picked from commit 8bff5e9c0968108d465e1f2a4624fc5ec2f00849) Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02sched/fair: refactor find_best_target() for simplicityDietmar Eggemann
Simplify backup_capacity handling and use 'unsigned long' data type for cpu capacity, simplify target_util handling, simplify idle_idx handling & refactor min_util, new_util. Also return first idle cpu for prefer_idle task immediately. Change-Id: Ic89e140f7b369f3965703fdc8463013d16e9b94a Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02sched/fair: Change cpu iteration order in find_best_target()Dietmar Eggemann
The schedtune task parameter 'boosted' is mapped into the cpu iteration order. Currently for 'boosted' equal true the iteration starts at the last cpu (NR_CPUS-1) whereas for 'boosted' equal false it starts at the first cpu (0). This only has the desired effect if the cpu topology ordering matches the underlying assumption. This e.g. is the case for the Qc snapdragon 821 with its [L0 L1 b0 b1] cpu topology layout (L=lower max freq, b=higher max freq). This results in cpus with higher maximum capacity being given the highest logical cpu ids. However not all big.LITTLE systems enumerate their cpus in the same way. For example, the ARM Versatile Express Juno board has 6 cpus for which the default configuration has topology [L0 b0 b1 L1 L2 L3]. To make this approach independent from the cpu topology layout it now iterates over the cpus in the order of the sched_groups of the EAS sched_domain (sd_ea). The order of cpu iteration is different for the different cpu types in case the cpu is used to dereference sd_ea. Considering the Qc snapdragon 821 again, for cpus L0 and L1 the order is 'L0->L1->b0->b1' whereas for b0 and b1 the order is 'b0->b1->L0->L1'. This approach does not allow the exact same iteration order as with the currently used flat iteration over [0 .. NR_CPUS-1] but the cpus are ordered by the original cpu capacity. The cpu iteration is now done in the sd_ea sched_group order required by the 'boosted' value ['L0->L1->b0->b1'/'b0->b1->L0->L1'] rather than forward/backward over the flat cpu space ['L0->L1->b0->b1'/ 'b1->b0->L1->L0']. Change-Id: I8fbe2073dedd2ecb1c750620c6000c11a5ff4358 Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com> (cherry picked from commit a0c6a4272c3968c0ff50d3fed65f5865b72d777b) Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02sched/core: Add first cpu w/ max/min orig capacity to root domainDietmar Eggemann
This will allow us to start iterating from a cpu with max or min original capacity in the wakeup path regardless of which cpu the scheduler is currently running on (smp_processor_id()) or the previous cpu of the task (task_cpu(p)). This iteration has to happen on a sched_domain spanning all cpus in the order of the sched_groups of this sched_domain seen by the starting cpu. In case of an SMP system the first cpu with max orig capacity and the one with min orig capacity is the same. This can temporarily happen on a big.LITTLE system with hotplug as well. E.g. the different order of cpu iteration can be used to map the schedtune task parameter 'boosted' into the cpu iteration order in find_best_target(). Use of READ_ONCE()/WRITE_ONCE() to avoid load/store tearing. Change-Id: I812fbd9c7e5f506617e456c0eec3edcd2c016e92 Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com> (cherry picked from commit fd6e9543c1fd8971a5e2e68e39b2f6e591d46114) Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02sched/core: Remove remnants of commit fd5c98da1a42Dietmar Eggemann
Commit fd5c98da1a42 "WIP: sched: Store system-wide maximum cpu capacity in root domain" was replaced by commit 8148bdfff4f5 "WIP: sched: Update max cpu capacity in case of max frequency constraints" which didn't remove all the now unused bits. Change-Id: I067f6366431f43337cffa7a2a8e0de32dd33d2f9 Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com> (cherry picked from commit 6d284a607cec51bcafca313bc396bc3103b1e876) Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02sched: Remove sysctl_sched_is_big_littleDietmar Eggemann
With the new wakeup approach this sysctl is not necessary any more. Change-Id: I52114b3c918791f6a4f9f30f50002919ccbc1a9c Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com> (cherry picked from commit 885c0d503bcdf0ef4e9b46822496f16b20aa3bbd) Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02sched/fair: Code !is_big_little path into select_energy_cpu_brute()Dietmar Eggemann
This patch replaces the existing EAS upstream implementation of select_energy_cpu_brute() with the one of find_best_target() used in Android previously. It also removes the cpumask 'and' from select_energy_cpu_brute, see the existing use of 'cpu = smp_processor_id()' in select_task_rq_fair(). Change-Id: If678c002efaa87d1ba3ec9989a4e9f8df98b83ec Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com> [ added guarding for non-schedtune builds ] Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02EAS: sched/fair: Re-integrate 'honor sync wakeups' into wakeup pathDietmar Eggemann
This patch re-integrates the part which was initially provided by 3b9d7554aeec ("EAS: sched/fair: tunable to honor sync wakeups") into energy_aware_wake_cpu() into select_energy_cpu_brute(). Change-Id: I748fde3ecdeb44651179bce0a5bb8dd82d1903f6 Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com> (cherry picked from commit b75b7286cb068d5761621ea134c23dd131db953f) Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02Fixup!: sched/fair.c: Set SchedTune specific struct energy_env.taskDietmar Eggemann
This has to be done in the caller of the SchedTune version of energy_diff() to avoid a NULL pointer dereference in energy_diff(). Change-Id: I3f0f68dbd11efb15bbb3b1832f8294419ed85241 Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com> (cherry picked from commit 14531d4e245d063f713ee5ed835df958e6c7838f) Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02sched/fair: Energy-aware wake-up task placementMorten Rasmussen
When the system is not overutilized, place waking tasks on the most energy efficient cpu. Previous attempts reduced the search space by matching task utilization to cpu capacity before consulting the energy model, as this is an expensive operation. The search heuristics didn't work very well and, lacking any better alternatives, this patch takes the brute-force route and tries all potential targets. This approach doesn't scale, but it might be sufficient for many embedded applications while work is continuing on a heuristic that can minimize the necessary computations. The heuristic must be derived from the platform energy model rather than make additional assumptions, such as that lower capacity implies better energy efficiency. PeterZ mentioned in the past that we might be able to derive some simpler deciding functions using mathematical (modal?) analysis. Change-Id: I772bacb4c8fd599f8006fa422f842e66377a9c6c Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com> [rebase: on top of msm-google/android-msm-marlin-3.18] Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com> (cherry picked from commit a894422dbdb7b77ea2acfe7ff909ccb5ded23514) Signed-off-by: Chris Redpath <chris.redpath@arm.com>