author     Vikram Mulukutla <markivx@codeaurora.org>  2015-06-10 17:17:46 -0700
committer  David Keitel <dkeitel@codeaurora.org>      2016-03-23 20:02:01 -0700
commit     09417ad30eeee22816471313bf13417c3039b930 (patch)
tree       6e360e8d597e58eabd8f4cf8cd28ff842332d43e
parent     8d4dce6c804e760fb9c3ff1e937dfe63d59af40c (diff)
sched: Fix racy invocation of fixup_busy_time via move_queued_task
set_task_cpu() uses fixup_busy_time() to redistribute a task's load
information between the source and destination runqueues. fixup_busy_time()
assumes that both the source and destination runqueue locks are held
unless the task is being concurrently woken up. However, this assumption
no longer holds: move_queued_task() does not acquire the destination
CPU's runqueue lock, due to optimizations introduced in recent kernels.
Acquire both the source and destination runqueue locks before invoking
set_task_cpu() in move_queued_task().
Change-Id: I39fadf0508ad42e511db43428e52c8aa8bf9baf6
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
[joonwoop@codeaurora.org: fixed conflict in move_queued_task().]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
-rw-r--r--  kernel/sched/core.c  2
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2baf7e319942..7b3be71b6e2f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2678,7 +2678,9 @@ static struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new
 	p->on_rq = TASK_ON_RQ_MIGRATING;
 	dequeue_task(rq, p, 0);
+	double_lock_balance(rq, cpu_rq(new_cpu));
 	set_task_cpu(p, new_cpu);
+	double_unlock_balance(rq, cpu_rq(new_cpu));
 	raw_spin_unlock(&rq->lock);
 	rq = cpu_rq(new_cpu);