author    Joonwoo Park <joonwoop@codeaurora.org>   2015-08-14 14:18:15 -0700
committer David Keitel <dkeitel@codeaurora.org>    2016-03-23 20:02:24 -0700
commit    28f67e5a50d7c1bfc41cd7eb0f940f5daaa347c2
tree      55287e0e14ec32bdd739079aeeb197c1aa732aa0 /kernel
parent    44af3b5e0308c9a7061b18344893d2b07a91b1c9
sched: set HMP scheduler's default initial task load to 100%
Set the default initial task load to 100% so that newly created tasks
wake up on the highest-performance CPUs.
Change-Id: Ie762a3f629db554fb5cfa8c1d7b8b2391badf573
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
Diffstat (limited to 'kernel')
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fc0ff96a1fd8..a9f3199bdcf6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2675,7 +2675,7 @@ static inline void decay_scaled_stat(struct sched_avg *sa, u64 periods);
 /* Initial task load. Newly created tasks are assigned this load. */
 unsigned int __read_mostly sched_init_task_load_pelt;
 unsigned int __read_mostly sched_init_task_load_windows;
-unsigned int __read_mostly sysctl_sched_init_task_load_pct = 15;
+unsigned int __read_mostly sysctl_sched_init_task_load_pct = 100;
 
 static inline unsigned int task_load(struct task_struct *p)
 {