author    Syed Rameez Mustafa <rameezmustafa@codeaurora.org>  2014-04-25 12:27:25 -0700
committer David Keitel <dkeitel@codeaurora.org>  2016-03-23 19:59:28 -0700
commit    e640249dbade56af7bc968fce2f5ede230602e6e (patch)
tree      6457669720789e8b892b7328074289570a69a66c /kernel/sched
parent    b7b5f7911932c94fa036d6d92a0aa2c9bf2c21b0 (diff)
sched/fair: Limit MAX_PINNED_INTERVAL for more frequent load balancing
Should the system get stuck in a state where load balancing keeps failing because all tasks are pinned, deferring load balancing for up to half a second may cause further performance problems. Eventually not all tasks will be pinned, so load balancing should not be deferred for a great length of time.

Change-Id: I06f93b5448353b5871645b9274ce4419dc9fae0f
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
Diffstat (limited to 'kernel/sched')
-rw-r--r--  kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index cabc223ffd32..b04af1c436cc 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7699,7 +7699,7 @@ static struct rq *find_busiest_queue(struct lb_env *env,
* Max backoff if we encounter pinned tasks. Pretty arbitrary value, but
* so long as it is large enough.
*/
-#define MAX_PINNED_INTERVAL 512
+#define MAX_PINNED_INTERVAL 16
/* Working cpumask for load_balance and load_balance_newidle. */
DEFINE_PER_CPU(cpumask_var_t, load_balance_mask);