



};

struct hrtl_deadline_info {
    enum hrtl_deadline_info_event event;
    pid_t pid;
    hrtl_time_t runtime;
    hrtl_time_t now;
};

Listing 11.25: HRTL task structure

Two variables containing time information are included in a deadline info block.

They indicate when a deadline was defined (runtime) and when a task signaled that a deadline was met (now).

Deadline events are stored in lockless ring buffers. Each CPU defines its own deadline event ring buffer, thus only one writer exists for each buffer. No locking mechanism is needed when an event is put into a buffer, since the commit cannot be interrupted by another CPU.
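Because each per-CPU buffer has exactly one writer, the commit reduces to a single store of the new head index. The following is a minimal userspace sketch of such a single-producer put; the names (dl_ring, dl_event, dl_ring_put), the capacity, and the use of C11 atomics are illustrative assumptions and not taken from the HRTL sources.

#include <stdatomic.h>
#include <stdbool.h>
#include <sys/types.h>

#define DL_RING_SIZE 256                  /* assumed capacity, power of two */

struct dl_event {
    pid_t pid;
    unsigned long long runtime;           /* when the deadline was defined */
    unsigned long long now;               /* when the deadline was met */
};

struct dl_ring {
    struct dl_event buf[DL_RING_SIZE];
    _Atomic unsigned int head;            /* advanced only by the owning CPU */
    _Atomic unsigned int tail;            /* advanced only by the reader */
};

/* Producer side, called on the owning CPU only: no lock is needed because
 * no other CPU ever writes to this buffer; publishing the new head with a
 * release store is the whole commit. */
static bool dl_ring_put(struct dl_ring *r, const struct dl_event *e)
{
    unsigned int head = atomic_load_explicit(&r->head, memory_order_relaxed);
    unsigned int tail = atomic_load_explicit(&r->tail, memory_order_acquire);

    if (head - tail == DL_RING_SIZE)
        return false;                     /* buffer full, event is dropped */

    r->buf[head & (DL_RING_SIZE - 1)] = *e;
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return true;
}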

A deadline watchdog has to request new deadline events itself. Delivering a POSIX signal instead is not possible, because the sending task (or interrupt handler) may block on a spinlock while the signal is enqueued.
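The consumer side therefore polls the per-CPU buffers. The companion sketch below reuses the assumed types from the previous sketch; dl_ring_get is again an illustrative name.

/* Consumer side, called from the watchdog: it polls a per-CPU ring and
 * copies out the oldest unread event, if any. */
static bool dl_ring_get(struct dl_ring *r, struct dl_event *out)
{
    unsigned int tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    unsigned int head = atomic_load_explicit(&r->head, memory_order_acquire);

    if (tail == head)
        return false;                     /* no new deadline events */

    *out = r->buf[tail & (DL_RING_SIZE - 1)];
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return true;
}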

11.7. Balancing Dynamic Partitions

struct budget_current_frame {
    cpumask_t budget_cpus;
    unsigned long budget_groups[NR_CPUS][__HRTL_BUDGET_GROUP_LONG];
};

struct budget_window {
    void (*update)(struct budget_window *window, struct hrtl_budget *budget,
                   unsigned int new, unsigned int old);
    void (*frame_init)(struct budget_window *window,
                       struct budget_current_frame *frame);
    raw_spinlock_t lock;
    struct budget_current_frame frame;
    unsigned int pos;
    unsigned int slices;
    struct budget_window_element window[HRTL_BUDGET_WINDOW_SIZE];
};

Listing 11.26: Time window

The actual part of a window that moves forward as time advances is frame (budget_current_frame). A window update (moving the frame) is performed by a (Linux) timer running on the system CPU every HRTL_BUDGET_WINDOW_UPDATE microseconds (default is 5000). A frame defines a bit mask for the CPUs that are available for scheduling between two window updates and a bit mask for the partitions that were running in that time. A time window stores frames for the last HRTL_BUDGET_WINDOW_SIZE window updates (default is 80).6 On a window update, the usage for this frame is added to the usage for the previous HRTL_BUDGET_WINDOW_SIZE - 1 frames to compute the total CPU usage over the time window.

Furthermore, a new slot in the window array is allocated for the next frame. The outdated window slot is subtracted from the total CPU usage. The new frame inherits the bit mask of available CPUs from the previous frame. The currently running partition on each available CPU is marked as running in the bit mask for active partitions.
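This update step can be condensed into a small sketch. The slot layout, the plain slice counter, and all names are assumptions standing in for the budget_current_frame bookkeeping of Listing 11.26.

#define WINDOW_SIZE 80                    /* mirrors the default window size */

struct frame_slot {
    unsigned int slices;                  /* usage accumulated in this frame */
};

struct window_sketch {
    struct frame_slot slot[WINDOW_SIZE];
    unsigned int pos;                     /* slot of the current frame */
    unsigned int total_slices;            /* usage over the whole window */
};

/* Called by the window-update timer; the slot at pos has been filled while
 * its frame was current. */
static void window_update(struct window_sketch *w)
{
    /* Add the finished frame to the running total over the window. */
    w->total_slices += w->slot[w->pos].slices;

    /* Reuse the oldest slot for the next frame: subtract its outdated
     * contribution and clear it so the new frame accumulates from zero. */
    w->pos = (w->pos + 1) % WINDOW_SIZE;
    w->total_slices -= w->slot[w->pos].slices;
    w->slot[w->pos].slices = 0;
}

Immediately after the addition the total covers the last WINDOW_SIZE frames; subtracting the recycled slot then makes room for the usage that the next frame will accumulate.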

Each CPU that is available for scheduling during a frame extends the number of slices by one. A slice is distributed in equal parts to the partitions running on that CPU. The number of used slices for a partition is stored in the partition's data object.

The total CPU usage of a partition over a time window is thus given by the ratio of the partition's used slices to the number of slices available during that window, expressed as a percentage.
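As a rough sketch, this figure is a plain ratio; the helper below and its names are illustrative assumptions only.

/* Share of the window used by one partition, in percent. used_slices comes
 * from the partition's data object, window_slices is the number of slices
 * available over the whole window. */
static unsigned int partition_usage_percent(unsigned int used_slices,
                                            unsigned int window_slices)
{
    if (window_slices == 0)
        return 0;                         /* no CPU was available at all */
    return (100 * used_slices) / window_slices;
}

With the defaults, a single continuously available CPU contributes 80 slices per 400 millisecond window, so a partition that used 20 of them accounts for 25 percent of that window.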

11.7.2. Group Distribution

The HRTL dynamic partition scheduler maintains two different queues for available time slices (one for reserved and one for non-reserved CPUs). Time slices are queued according to the priority of the task currently running in each slice. The slice running the task with the lowest priority is enqueued at the head of the queue. If a task's CPU budget is depleted, its priority is lowered, so that tasks from partitions with valid CPU budget are always enqueued after partitions that have overused their CPU budgets. If a task becomes ready for execution, it is therefore easy to find a time slice for that task (or to determine that none is available). The new task can run if the combination of task priority and partition budget is higher than that of the task at the head of the queue. In this case, the queue's head is removed from the queue and prepared for a task switch.7 The previously removed slice is enqueued again with the new task's properties after the switch has taken place.

6The default window length is 400 milliseconds: 5000 μs · 80.
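The ordering rule amounts to a comparison on a two-part key, budget state first and task priority second. The sketch below uses assumed types and names, not the HRTL definitions.

#include <stdbool.h>

struct slice_key {
    bool budget_ok;                       /* partition still has CPU budget */
    int  prio;                            /* higher value = higher priority */
};

/* True if a newly ready task with key "cand" may take over the slice at
 * the head of the queue, i.e. the slice with the weakest key. */
static bool slice_preemptable(struct slice_key head, struct slice_key cand)
{
    if (cand.budget_ok != head.budget_ok)
        return cand.budget_ok;            /* valid budget beats overused one */
    return cand.prio > head.prio;         /* otherwise compare priorities */
}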

Each CPU in the system provides one time slice at a time, or none if the CPU is running a static partition. The slice is enqueued in either the reserved or the non-reserved queue. The previous paragraphs explained how a slice can be found for a known task. In three different situations, the reverse is needed: a suitable task has to be found for a slice:

1. A time slice changes the queue (reserved ↔ non-reserved).

2. The partition of the task running in a time slice changes the budget state (available ↔ overused).

3. The task running in a time slice is dequeued (suspend, terminate, ...).

Dynamic partitions are organised in four queues. Like the queues for available time slices, partitions are enqueued for reserved and non-reserved CPUs. However, a partition can be present in both categories at the same time, since dynamic partitions have different budget values for the two kinds of CPUs. Partitions that have overused their CPU budget are separated from those that have not. Partitions are enqueued according to the task with the highest priority level that is not already running. A task for a free time slice is always found at the head of the queue of partitions that still have a valid CPU budget.
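A minimal sketch of that lookup for one CPU category (reserved or non-reserved), assuming singly linked partition queues kept in the order described above; all names are illustrative.

struct partition {
    struct partition *next;               /* ordered: best ready task first */
    int best_ready_prio;                  /* highest prio not already running */
};

struct partition_queues {
    struct partition *with_budget;        /* still within their CPU budget */
    struct partition *overused;           /* CPU budget already exceeded */
};

/* A freed time slice takes its next task from the head of the valid-budget
 * queue; only if that queue is empty does it fall back to partitions that
 * have overused their budget. */
static struct partition *pick_partition(const struct partition_queues *q)
{
    return q->with_budget ? q->with_budget : q->overused;
}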

Each dynamic partition maintains a priority-based array for its tasks. Tasks of the same priority are stored in a queue. A task is in that array if it is ready to run but not already scheduled in any time slice. A task that is added to or removed from the array changes the partition ordering.
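Such a per-partition array can be pictured with the familiar bitmap-plus-queues pattern. The sketch below is modelled on that pattern as an assumption; it is not the HRTL data structure itself.

#include <stddef.h>
#include <strings.h>                      /* ffs() */

#define NR_PRIO 32                        /* assumed number of priority levels */

struct ready_task {
    struct ready_task *next;
    int prio;                             /* 0 = highest priority level */
};

struct prio_array {
    unsigned int bitmap;                  /* bit p set: queue[p] is non-empty */
    struct ready_task *queue[NR_PRIO];    /* ready, not scheduled in a slice */
};

/* Insert a task that became ready but is not running in any time slice.
 * (HRTL keeps a FIFO per level; pushing at the head keeps the sketch short.) */
static void prio_array_add(struct prio_array *a, struct ready_task *t)
{
    t->next = a->queue[t->prio];
    a->queue[t->prio] = t;
    a->bitmap |= 1u << t->prio;
}

/* Highest-priority ready task, or NULL if the partition has none pending. */
static struct ready_task *prio_array_peek(const struct prio_array *a)
{
    if (!a->bitmap)
        return NULL;
    return a->queue[ffs((int)a->bitmap) - 1];  /* lowest set bit wins */
}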

11.7.3. SCHED_HRTL Integration

A time slice is removed from the non-reserved queue by a callback function that is registered in the HRTL CPU reservation process (Section 11.4.2). The slice is enqueued in the reserved queue when a system group starts execution. The periodic() callback function of the HRTL_SCHED_SYSTEM scheduler module informs the budget accounting core about the new space.

The pick_next_task() function from the HRTL scheduling class is always called on every task switch on every CPU (Listing 11.21). The two functions

7A task switch is realised by various callbacks from the Linux scheduler core.
