Scheduler
The scheduler that the OS adopts is a CFS (Completely Fair Scheduler) style scheduler.
The basic structure that carries per-task scheduling information is struct TaskTime, which consists of:
pub struct TaskTime {
    // CFS related
    vruntime: usize,         // virtual runtime accumulated by the task
    weight: usize,           // weight derived from the task's niceness
    remaining: usize,        // remaining time slice of the current run
    pub last_restore: usize, // time when the task was last switched in
    // PELT related
    period: usize,           // PELT period the load counters refer to
    history_load: usize,     // decayed load accumulated over past periods
    running_load: usize,     // load accumulated in the current period
}
The scheduler assigns each task a virtual runtime, which tracks how long the task has executed on the CPU. The scheduler always chooses the task with the smallest virtual runtime to run next. Each task is also assigned a different time slice according to its niceness, stored in the weight field of TaskTime.
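As a rough sketch of how vruntime typically advances in a CFS-style design (the constant and helper below are illustrative, not taken from this codebase):

/// Illustrative weight of a nice-0 task; the real kernel derives
/// each task's weight from its niceness.
const NICE_0_WEIGHT: usize = 1024;

/// Hypothetical sketch: advance a task's virtual runtime after it ran
/// for `delta` ticks. Higher-weight tasks accumulate vruntime more
/// slowly, so the smallest-vruntime policy grants them more CPU time.
fn update_vruntime(vruntime: &mut usize, weight: usize, delta: usize) {
    *vruntime += delta * NICE_0_WEIGHT / weight;
}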
PELT (Per-Entity Load Tracking) is used to track a task's CPU occupancy over a period of time; this occupancy is used to classify the task as either interactive or batch.
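A minimal sketch of the per-period accounting that PELT implies, assuming a geometric decay at each period boundary (the decay factor 15/16 is illustrative, not this kernel's actual value):

/// Hypothetical sketch: at a period boundary, fold the load observed in
/// the finished period into a geometrically decayed history, so recent
/// activity dominates the estimate. A mostly-idle (interactive) task
/// keeps a low history_load; a CPU-bound (batch) task keeps a high one.
fn close_pelt_period(history_load: &mut usize, running_load: &mut usize) {
    *history_load = *history_load * 15 / 16 + *running_load;
    *running_load = 0;
}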
The scheduler tries its best to balance the load across the different CPUs; the balancing method is implemented as follows:
/// The function is used to balance the load of all processors. It adopts an `O(n)` algorithm.
///
/// Its basic idea is stealing: a processor tries to steal tasks from another processor to make the two balanced.
pub fn balance() {
    let curr_hart = Processor::hart_id();
    // Pick a pseudo-random peer other than the current hart.
    let next_hart = (curr_hart + (get_time() % (CPUS - 1)) + 1) % CPUS;
    // Lock in ascending hart-id order to avoid deadlock.
    let (mut recv, mut send) = if curr_hart < next_hart {
        let curr_processor = PROCESSORS[curr_hart].lock();
        let next_processor = PROCESSORS[next_hart].lock();
        (curr_processor, next_processor)
    } else {
        let next_processor = PROCESSORS[next_hart].lock();
        let curr_processor = PROCESSORS[curr_hart].lock();
        (curr_processor, next_processor)
    };
    // Really naive implementation: keep stealing while the move still
    // leaves the receiver no more loaded than the sender.
    while let Some((task, _, is_realtime)) = send.pop() {
        if task.lock().task_time.history_load() + recv.load() <= send.load() {
            task.lock().task_time.clear(); // to make it run first
            recv.push(&task, is_realtime);
        } else {
            // Moving this task would overshoot; put it back and stop.
            send.push(&task, is_realtime);
            break;
        }
    }
}
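Note the two-branch locking above: the processor locks are always acquired in ascending hart-id order, so two harts that happen to balance against each other at the same time cannot deadlock on each other's run-queue locks.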
There is a switch thread for each CPU in the kernel. When scheduling takes place, the CPU first switches to the switch thread, which then switches to the task that the scheduler chooses.
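A minimal sketch of what each per-CPU switch thread could look like (the entry-point name is hypothetical, not from this codebase):

/// Hypothetical sketch: the switch thread loops forever. schedule()
/// picks the next task and switches to it; when that task yields,
/// control returns here and the loop schedules again.
fn switch_thread_main() -> ! {
    loop {
        schedule();
    }
}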
The scheduling code is listed below. Load balancing between cores is applied once per PELT period, on the first switch that occurs after the period changes.
/// Switch from idle task to the next task.
///
/// When the next task yields, it will get into this function again.
pub fn schedule() {
    let task = Processor::curr_processor().lock().pop();
    if let Some((task, time, _)) = task {
        if task.lock().task_status == TaskStatus::Ready {
            task.lock().task_status = TaskStatus::Running;
        }
        // set up rest time
        task.lock().task_time.setup(time);
        let task_ctx = task.lock().task_ctx_ptr();
        let idle_task_ctx = {
            let mut processor = PROCESSORS[Processor::hart_id()].lock();
            *processor.local_curr_task_mut() = Some(task.phantom());
            processor.local_idle_task_ctx_ptr()
        };
        extern "C" {
            fn _switch(curr_ctx: *mut TaskContext, next_ctx: *const TaskContext);
        }
        unsafe {
            _switch(idle_task_ctx, task_ctx);
        }
        // The task has yielded; requeue it if it is still runnable.
        let mut processor = PROCESSORS[Processor::hart_id()].lock();
        if task.lock().task_status == TaskStatus::Running {
            processor.push_normal(&task);
        }
        // check if timer is up
        TIMER.notify(get_time());
    } else {
        // TODO: try to implement steal with balance
    }
    // avoid balance when there is only one CPU
    if CPUS != 1 {
        let mut processor = Processor::curr_processor().lock();
        let now = pelt_period(get_time());
        if processor.pelt_period != now {
            processor.pelt_period = now;
            // Only one designated hart balances in each period.
            if now % CPUS == Processor::hart_id() {
                drop(processor);
                Processor::balance();
            }
        }
    }
}
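The pelt_period helper used above is not shown here; a plausible definition, assuming a fixed period length measured in timer ticks, would be:

/// Illustrative period length; the real value is kernel-defined.
const PELT_PERIOD_TICKS: usize = 1024;

/// Hypothetical sketch: map a raw timestamp to its PELT period index,
/// so schedule() can detect a period boundary by comparing indices.
fn pelt_period(time: usize) -> usize {
    time / PELT_PERIOD_TICKS
}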