author	Joonwoo Park <joonwoop@codeaurora.org>	2015-07-08 15:42:30 -0700
committer	David Keitel <dkeitel@codeaurora.org>	2016-03-23 20:02:05 -0700
commit	c459d156283b9ca32c053ce327ece301e5821db4 (patch)
tree	3f5dfc2dafa51bff6c334af9c4da74c2f3429775 /kernel
parent	e9c6508168c4a313dae5e7c11b2e458a7e7fb88b (diff)
sched: avoid unnecessary HMP scheduler stat re-accounting
When a sched_entity's runnable average changes, we decrease the HMP scheduler's statistics for that sched_entity before the update and increase them again after it, so that they reflect the updated runnable average. In that window, however, other CPUs see the updating CPU's load as lower than it actually is. This is suboptimal and can lead to improper task placement and load-balance decisions.

We can avoid this situation, at least with window-based load tracking, because the sched_entity's load average, which is used for PELT, does not affect the HMP scheduler's load-tracking statistics. Fix this by updating the HMP statistics only when the HMP scheduler uses PELT-based load statistics.

Change-Id: I9eb615c248c79daab5d22cbb4a994f94be6a968d
[joonwoop@codeaurora.org: applied fix into __update_load_avg() instead of update_entity_load_avg().]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
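To make the re-accounting window concrete, below is a self-contained toy model in C of the decrease/update/increase pattern described above. Everything in it (hmp_cpu_load, pelt_recompute, the simplified update function) is invented for illustration; only the sched_use_pelt guard and the decrease-before/increase-after shape come from the patch itself.

#include <stdbool.h>
#include <stdio.h>

static bool sched_use_pelt;      /* stand-in for the flag added by the patch */
static long hmp_cpu_load = 1024; /* stand-in for the per-CPU HMP statistics  */

/* fake PELT-style recomputation, only here to change the value */
static long pelt_recompute(long avg)
{
	return avg / 2 + 100;
}

static void update_load_avg(long *task_avg)
{
	/*
	 * Only touch the HMP stats when the HMP scheduler actually
	 * consumes PELT-based load. Between the decrement and the
	 * increment, other CPUs would read hmp_cpu_load as lower
	 * than it really is.
	 */
	if (sched_use_pelt)
		hmp_cpu_load -= *task_avg;     /* dec_hmp_sched_stats_fair() analogue */

	*task_avg = pelt_recompute(*task_avg); /* __update_load_avg() analogue */

	if (sched_use_pelt)
		hmp_cpu_load += *task_avg;     /* re-add with the updated value */
}

int main(void)
{
	long avg = 512;

	/* window-based load tracking: the dec/inc pair is skipped entirely */
	sched_use_pelt = false;
	update_load_avg(&avg);
	printf("hmp_cpu_load = %ld, task avg = %ld\n", hmp_cpu_load, avg);
	return 0;
}

With sched_use_pelt false, as under window-based load tracking, hmp_cpu_load is never transiently reduced; that is exactly what the one-line guard in the diff below achieves.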
Diffstat (limited to 'kernel')
-rw-r--r--	kernel/sched/fair.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c564897cbd4e..39f656fcc0ac 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4466,7 +4466,7 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
 		return 0;
 	sa->last_update_time = now;
 
-	if (!cfs_rq && weight) {
+	if (sched_use_pelt && !cfs_rq && weight) {
 		se = container_of(sa, struct sched_entity, avg);
 		if (entity_is_task(se) && se->on_rq)
 			dec_hmp_sched_stats_fair(rq_of(cfs_rq), task_of(se));