Programming Assignment

Approach

Process Creation

If you only want to create a process from a given program, without the flexibility that the fork + exec combination offers (such as file descriptor redirection), then spawn is a simpler and more efficient choice. Implementing it is what this lab requires.

To implement spawn, one could indeed simply stitch fork and exec together. Note, however, that fork first copies the parent's address space and exec then replaces it, so when the two are fused, the initial copy of the address space is wasted work, as the following code shows:

let memory_set = MemorySet::from_existed_user(&parent_inner.memory_set);  // futile code!
...
let (memory_set, user_sp, entry_point) = MemorySet::from_elf(elf_data);

Stride Scheduling Algorithm

We implement a simple stride scheduling algorithm. Since the idea is straightforward, this section goes straight to the implementation.

First, to record each process's stride and priority, two fields, stride and prio, are added to the task control block (ignoring performance and for simplicity of implementation, the pass field is omitted; pass is instead recomputed each time the stride is advanced).

At scheduling time (TaskManager.fetch()), we pick from the ready queue the process with the smallest stride and add its pass to its stride, where pass is computed as a preset large constant BigStride divided by the process priority prio. For efficiency the ready queue could be a priority queue; for simplicity, a single linear scan is used here to find the minimum.
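To make the selection and the pass computation concrete, here is a minimal standalone sketch. `Task`, `fetch`, and the priorities below are simplified stand-ins invented for this illustration, not the kernel's actual TaskControlBlock / TaskManager; only BIG_STRIDE and the field names follow this lab.

```rust
// A minimal standalone model of the stride scheduler described above.
const BIG_STRIDE: isize = 1_000_000;

struct Task {
    id: usize,
    prio: isize,   // scheduling priority, required to be >= 2
    stride: isize, // accumulated stride
}

/// One linear scan for the minimum stride (as in TaskManager::fetch),
/// then charge the chosen task its pass = BIG_STRIDE / prio.
fn fetch(ready: &mut Vec<Task>) -> Option<usize> {
    if ready.is_empty() {
        return None;
    }
    let mut best = 0;
    for i in 1..ready.len() {
        if ready[i].stride < ready[best].stride {
            best = i;
        }
    }
    let task = &mut ready[best];
    task.stride += BIG_STRIDE / task.prio;
    Some(task.id)
}

fn main() {
    let mut q = vec![
        Task { id: 0, prio: 2, stride: 0 },
        Task { id: 1, prio: 10, stride: 0 },
    ];
    // The higher-priority task (larger prio, smaller pass) runs more often.
    let picks: Vec<usize> = (0..6).map(|_| fetch(&mut q).unwrap()).collect();
    println!("{:?}", picks); // [0, 1, 1, 1, 1, 1]
}
```

The kernel patch in the Code section charges the pass in TaskManager::add when a task is re-queued rather than at fetch time; the scheduling idea is the same.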

Finally, after each time slice, the process with the smallest stride is scheduled again. This time-slice round-robin mechanism is already implemented and needs no modification; the relevant code is shown below:

// os/src/task/mod.rs

pub fn trap_handler() -> ! {
    ...
    Trap::Interrupt(Interrupt::SupervisorTimer) => {
        set_next_trigger();
        suspend_current_and_run_next();
    }
    ...
}

Whenever the preset timer interval (one time slice) expires, a timer interrupt traps into the kernel. The kernel's trap_handler sees that the trap cause is a supervisor timer interrupt, sets the next timer trigger, and calls suspend_current_and_run_next to put the current process back on the ready queue and reschedule.

Backward Compatibility

Starting with this lab, the kernel must remain backward compatible, i.e. pass all test cases from the previous chapter. As the documentation hints, the git cherry-pick family of commands can be used to port commits from other branches onto this chapter's branch. Usage:

  1. Apply a specific commit onto the current branch: git cherry-pick <commit id>
  2. If a conflict arises, open the conflicting file, e.g. os/src/syscall/process.rs, and edit it to resolve the conflict.
  3. Once resolved, mark the file as resolved: git add os/src/syscall/process.rs. Repeat steps 2 and 3 until all conflicts are resolved.
  4. Continue the cherry-pick: git cherry-pick --continue

Because this chapter moves from tasks to processes, the framework changes are substantial, and the code modified in earlier chapters is not that large, so I ended up porting my changes by hand. For the later labs, though, cherry-pick is still recommended.

Code

diff --git a/os/src/config.rs b/os/src/config.rs
index 5761cdd..b836e56 100644
--- a/os/src/config.rs
+++ b/os/src/config.rs
@@ -23,3 +23,5 @@ pub const TRAP_CONTEXT_BASE: usize = TRAMPOLINE - PAGE_SIZE;
pub const CLOCK_FREQ: usize = 12500000;
/// the physical memory end
pub const MEMORY_END: usize = 0x88000000;
+/// big constant for process scheduling
+pub const BIG_STRIDE: isize = 1000_000;
\ No newline at end of file
diff --git a/os/src/mm/frame_allocator.rs b/os/src/mm/frame_allocator.rs
index 44f20cd..2fa6939 100644
--- a/os/src/mm/frame_allocator.rs
+++ b/os/src/mm/frame_allocator.rs
@@ -1,7 +1,7 @@
//! Implementation of [`FrameAllocator`] which
//! controls all the frames in the operating system.
use super::{PhysAddr, PhysPageNum};
-use crate::config::MEMORY_END;
+use crate::config::{MEMORY_END, PAGE_SIZE};
use crate::sync::UPSafeCell;
use alloc::vec::Vec;
use core::fmt::{self, Debug, Formatter};
@@ -134,3 +134,10 @@ pub fn frame_allocator_test() {
drop(v);
println!("frame_allocator_test passed!");
}
+
+/// Check if the remaining physical memory is sufficient
+pub fn is_mem_sufficient(_len: usize) -> bool {
+ let fa = FRAME_ALLOCATOR.exclusive_access();
+ let page_cnt = fa.end - fa.current + fa.recycled.len();
+ (_len + PAGE_SIZE - 1) / PAGE_SIZE <= page_cnt
+}
diff --git a/os/src/mm/memory_set.rs b/os/src/mm/memory_set.rs
index c3d15f3..dca8551 100644
--- a/os/src/mm/memory_set.rs
+++ b/os/src/mm/memory_set.rs
@@ -60,6 +60,19 @@ impl MemorySet {
None,
);
}
+ /// Remove framed area
+ pub fn remove_framed_area(&mut self, range: VPNRange) {
+ let mut idx = 0;
+ for area in self.areas.iter_mut() {
+ if area.vpn_range.get_start().0 == range.get_start().0
+ && area.vpn_range.get_end().0 == range.get_end().0 {
+ area.unmap(&mut self.page_table);
+ break;
+ }
+ idx += 1;
+ }
+ self.areas.remove(idx);
+ }
/// remove a area
pub fn remove_area_with_start_vpn(&mut self, start_vpn: VirtPageNum) {
if let Some((idx, area)) = self
diff --git a/os/src/mm/mod.rs b/os/src/mm/mod.rs
index d216861..8ff9c7b 100644
--- a/os/src/mm/mod.rs
+++ b/os/src/mm/mod.rs
@@ -11,9 +11,9 @@ mod heap_allocator;
mod memory_set;
mod page_table;

-pub use address::{PhysAddr, PhysPageNum, VirtAddr, VirtPageNum};
-use address::{StepByOne, VPNRange};
-pub use frame_allocator::{frame_alloc, FrameTracker};
+pub use address::{PhysAddr, PhysPageNum, VirtAddr, VirtPageNum, VPNRange};
+use address::StepByOne;
+pub use frame_allocator::{frame_alloc, FrameTracker, is_mem_sufficient};
pub use memory_set::remap_test;
pub use memory_set::{MapPermission, MemorySet, KERNEL_SPACE};
pub use page_table::{translated_byte_buffer, translated_refmut, translated_str, PageTableEntry};
diff --git a/os/src/syscall/mod.rs b/os/src/syscall/mod.rs
index 8e0a7dd..ba663a1 100644
--- a/os/src/syscall/mod.rs
+++ b/os/src/syscall/mod.rs
@@ -45,8 +45,11 @@ mod process;

use fs::*;
use process::*;
+
+use crate::task::current_task;
/// handle syscall exception with `syscall_id` and other arguments
pub fn syscall(syscall_id: usize, args: [usize; 3]) -> isize {
+ current_task().unwrap().inc_syscall_times(syscall_id);
match syscall_id {
SYSCALL_READ => sys_read(args[0], args[1] as *const u8, args[2]),
SYSCALL_WRITE => sys_write(args[0], args[1] as *const u8, args[2]),
diff --git a/os/src/syscall/process.rs b/os/src/syscall/process.rs
index f7aa9c3..13b9abb 100644
--- a/os/src/syscall/process.rs
+++ b/os/src/syscall/process.rs
@@ -1,14 +1,16 @@
//! Process management syscalls
+
+use core::mem::size_of;
+
use alloc::sync::Arc;

use crate::{
config::MAX_SYSCALL_NUM,
loader::get_app_data_by_name,
- mm::{translated_refmut, translated_str},
+ mm::{is_mem_sufficient, translated_byte_buffer, translated_refmut, translated_str, VirtAddr},
task::{
- add_task, current_task, current_user_token, exit_current_and_run_next,
- suspend_current_and_run_next, TaskStatus,
- },
+ add_task, current_task, current_user_token, exit_current_and_run_next, suspend_current_and_run_next, TaskStatus
+ }, timer::get_time_us,
};

#[repr(C)]
@@ -119,10 +121,24 @@ pub fn sys_waitpid(pid: isize, exit_code_ptr: *mut i32) -> isize {
/// HINT: What if [`TimeVal`] is splitted by two pages ?
pub fn sys_get_time(_ts: *mut TimeVal, _tz: usize) -> isize {
trace!(
- "kernel:pid[{}] sys_get_time NOT IMPLEMENTED",
+ "kernel:pid[{}] sys_get_time",
current_task().unwrap().pid.0
);
- -1
+ let _us = get_time_us();
+ let time_val = TimeVal {
+ sec: _us / 1_000_000,
+ usec: _us % 1_000_000,
+ };
+ let buffers = translated_byte_buffer(
+ current_user_token(), _ts as *const u8, size_of::<TimeVal>());
+ let mut time_val_ptr = &time_val as *const _ as *const u8;
+ for buffer in buffers {
+ unsafe {
+ time_val_ptr.copy_to(buffer.as_mut_ptr(), buffer.len());
+ time_val_ptr = time_val_ptr.add(buffer.len());
+ }
+ }
+ 0
}

/// YOUR JOB: Finish sys_task_info to pass testcases
@@ -130,28 +146,60 @@ pub fn sys_get_time(_ts: *mut TimeVal, _tz: usize) -> isize {
/// HINT: What if [`TaskInfo`] is splitted by two pages ?
pub fn sys_task_info(_ti: *mut TaskInfo) -> isize {
trace!(
- "kernel:pid[{}] sys_task_info NOT IMPLEMENTED",
+ "kernel:pid[{}] sys_task_info",
current_task().unwrap().pid.0
);
- -1
+ let task_info = TaskInfo {
+ status: TaskStatus::Running,
+ syscall_times: current_task().unwrap().get_syscall_times(),
+ time: current_task().unwrap().get_scheduled_timespan(),
+ };
+ let buffers = translated_byte_buffer(
+ current_user_token(), _ti as *const u8, size_of::<TaskInfo>());
+ let mut task_info_ptr = &task_info as *const _ as *const u8;
+ for buffer in buffers {
+ unsafe {
+ task_info_ptr.copy_to(buffer.as_mut_ptr(), buffer.len());
+ task_info_ptr = task_info_ptr.add(buffer.len());
+ }
+ }
+ 0
}

/// YOUR JOB: Implement mmap.
pub fn sys_mmap(_start: usize, _len: usize, _port: usize) -> isize {
trace!(
- "kernel:pid[{}] sys_mmap NOT IMPLEMENTED",
+ "kernel:pid[{}] sys_mmap",
current_task().unwrap().pid.0
);
- -1
+ let start_va = VirtAddr::from(_start);
+ // 1. illegal start virtual address or port
+ if !start_va.aligned() || _port & !0x7 != 0 || _port & 0x7 == 0 {
+ return -1;
+ }
+
+ // 2. Check is there sufficient physical memory
+ if !is_mem_sufficient(_len) {
+ return -1;
+ }
+
+ let end_va = VirtAddr::from(_start + _len);
+ current_task().unwrap().mmap(start_va, end_va, _port)
}

/// YOUR JOB: Implement munmap.
pub fn sys_munmap(_start: usize, _len: usize) -> isize {
trace!(
- "kernel:pid[{}] sys_munmap NOT IMPLEMENTED",
+ "kernel:pid[{}] sys_munmap",
current_task().unwrap().pid.0
);
- -1
+ let start_va = VirtAddr::from(_start);
+ if !start_va.aligned() {
+ return -1;
+ }
+
+ let end_va = VirtAddr::from(_start + _len);
+ current_task().unwrap().munmap(start_va, end_va)
}

/// change data segment size
@@ -168,17 +216,34 @@ pub fn sys_sbrk(size: i32) -> isize {
/// HINT: fork + exec =/= spawn
pub fn sys_spawn(_path: *const u8) -> isize {
trace!(
- "kernel:pid[{}] sys_spawn NOT IMPLEMENTED",
+ "kernel:pid[{}] sys_spawn",
current_task().unwrap().pid.0
);
- -1
+
+ // spawn a new process based upon _path
+ let token = current_user_token();
+ let _path = translated_str(token, _path);
+ let data_opt = get_app_data_by_name(_path.as_str());
+ if data_opt.is_none() { // invalid file name
+ return -1;
+ }
+ let current_task = current_task().unwrap();
+ let new_task = current_task.spawn(data_opt.unwrap());
+ let new_pid = new_task.getpid();
+ add_task(new_task);
+ new_pid as isize
}

// YOUR JOB: Set task priority.
pub fn sys_set_priority(_prio: isize) -> isize {
trace!(
- "kernel:pid[{}] sys_set_priority NOT IMPLEMENTED",
+ "kernel:pid[{}] sys_set_priority",
current_task().unwrap().pid.0
);
- -1
+ if _prio <= 1 {
+ return -1;
+ }
+ let current_task = current_task().unwrap();
+ current_task.set_prio(_prio);
+ _prio
}
diff --git a/os/src/task/manager.rs b/os/src/task/manager.rs
index 99393a4..b0f2b52 100644
--- a/os/src/task/manager.rs
+++ b/os/src/task/manager.rs
@@ -19,11 +19,27 @@ impl TaskManager {
}
/// Add process back to ready queue
pub fn add(&mut self, task: Arc<TaskControlBlock>) {
+ task.inc_stride();
self.ready_queue.push_back(task);
}
/// Take a process out of the ready queue
pub fn fetch(&mut self) -> Option<Arc<TaskControlBlock>> {
- self.ready_queue.pop_front()
+ // self.ready_queue.pop_front()
+ let mut min_stride = isize::MAX;
+ let mut best_idx = 0;
+ for (idx, tcb) in self.ready_queue.iter().enumerate() {
+ let stride = tcb.get_stride();
+ if min_stride > stride {
+ min_stride = stride;
+ best_idx = idx;
+ }
+ }
+ if min_stride == isize::MAX {
+ None
+ } else {
+ self.ready_queue.swap(0, best_idx);
+ self.ready_queue.pop_front()
+ }
}
}

diff --git a/os/src/task/processor.rs b/os/src/task/processor.rs
index f05fa09..14b8ae0 100644
--- a/os/src/task/processor.rs
+++ b/os/src/task/processor.rs
@@ -8,6 +8,7 @@ use super::__switch;
use super::{fetch_task, TaskStatus};
use super::{TaskContext, TaskControlBlock};
use crate::sync::UPSafeCell;
+use crate::timer::get_time_ms;
use crate::trap::TrapContext;
use alloc::sync::Arc;
use lazy_static::*;
@@ -61,6 +62,11 @@ pub fn run_tasks() {
let mut task_inner = task.inner_exclusive_access();
let next_task_cx_ptr = &task_inner.task_cx as *const TaskContext;
task_inner.task_status = TaskStatus::Running;
+ // if the task is being called for the first time,
+ if task_inner.time == 0usize {
+ // then set it to the current time
+ task_inner.time = get_time_ms();
+ }
// release coming task_inner manually
drop(task_inner);
// release coming task TCB manually
diff --git a/os/src/task/task.rs b/os/src/task/task.rs
index 1402c31..e60021d 100644
--- a/os/src/task/task.rs
+++ b/os/src/task/task.rs
@@ -1,9 +1,10 @@
//! Types related to task management & Functions for completely changing TCB
use super::TaskContext;
use super::{kstack_alloc, pid_alloc, KernelStack, PidHandle};
-use crate::config::TRAP_CONTEXT_BASE;
-use crate::mm::{MemorySet, PhysPageNum, VirtAddr, KERNEL_SPACE};
+use crate::config::{BIG_STRIDE, MAX_SYSCALL_NUM, TRAP_CONTEXT_BASE};
+use crate::mm::{MapPermission, MemorySet, PhysPageNum, VPNRange, VirtAddr, VirtPageNum, KERNEL_SPACE};
use crate::sync::UPSafeCell;
+use crate::timer::get_time_ms;
use crate::trap::{trap_handler, TrapContext};
use alloc::sync::{Arc, Weak};
use alloc::vec::Vec;
@@ -34,6 +35,86 @@ impl TaskControlBlock {
let inner = self.inner_exclusive_access();
inner.memory_set.token()
}
+ /// Set the schduling priority
+ pub fn set_prio(&self, prio: isize) {
+ let mut inner = self.inner_exclusive_access();
+ inner.set_prio(prio);
+ }
+ /// Get the schduling stride
+ pub fn get_stride(&self) -> isize {
+ let inner = self.inner_exclusive_access();
+ inner.stride
+ }
+ /// Increase the stride by pass
+ pub fn inc_stride(&self) {
+ let mut inner = self.inner_exclusive_access();
+ inner.inc_stride();
+ }
+ /// Get syscall times
+ pub fn get_syscall_times(&self) -> [u32; MAX_SYSCALL_NUM] {
+ let inner = self.inner.exclusive_access();
+ inner.syscall_times.clone()
+ }
+ /// Get time span between current time and first scheduled time
+ pub fn get_scheduled_timespan(&self) -> usize {
+ let inner = self.inner.exclusive_access();
+ get_time_ms() - inner.time
+ }
+ /// Increase syscall times
+ pub fn inc_syscall_times(&self, syscall_id: usize) -> bool {
+ if syscall_id >= MAX_SYSCALL_NUM {
+ return false;
+ }
+ let mut inner = self.inner.exclusive_access();
+ inner.syscall_times[syscall_id] += 1;
+ true
+ }
+ /// Map some pages to memory_set
+ pub fn mmap(&self, start_va: VirtAddr, end_va: VirtAddr, port: usize) -> isize {
+ let mut inner = self.inner.exclusive_access();
+ let vpn_range = VPNRange::new(
+ VirtPageNum::from(start_va),
+ end_va.ceil()
+ );
+
+ // 3. Check if trying to map mapped page
+ for vpn in vpn_range {
+ if let Some(pte) = inner.memory_set.translate(vpn) {
+ if pte.is_valid() {
+ return -1;
+ }
+ }
+ }
+
+ // After checking all errors may occur,
+ // push a new map_area to current memory_set
+ let perm = MapPermission::from_bits((port as u8) << 1).unwrap() | MapPermission::U;
+ inner.memory_set.insert_framed_area(start_va, end_va, perm);
+ 0
+ }
+
+ /// Unmap some pages in memory set
+ pub fn munmap(&self, start_va: VirtAddr, end_va: VirtAddr) -> isize {
+ let mut inner = self.inner.exclusive_access();
+ let vpn_range = VPNRange::new(
+ VirtPageNum::from(start_va),
+ end_va.ceil(),
+ );
+
+ // 1. Check if trying to unmap unmapped page
+ for vpn in vpn_range {
+ if let Some(pte) = inner.memory_set.translate(vpn) {
+ if !pte.is_valid() {
+ return -1
+ }
+ }
+ }
+
+ // After checking all errors may occur,
+ // remove some pages from current memory_set
+ inner.memory_set.remove_framed_area(vpn_range);
+ 0
+ }
}

pub struct TaskControlBlockInner {
@@ -68,6 +149,18 @@ pub struct TaskControlBlockInner {

/// Program break
pub program_brk: usize,
+
+ /// Scheduling priority
+ pub prio: isize,
+
+ /// Scheduling stride
+ pub stride: isize,
+
+ /// The syscall times
+ pub syscall_times: [u32; MAX_SYSCALL_NUM],
+
+ /// The first time the task was scheduled
+ pub time: usize,
}

impl TaskControlBlockInner {
@@ -85,6 +178,13 @@ impl TaskControlBlockInner {
pub fn is_zombie(&self) -> bool {
self.get_status() == TaskStatus::Zombie
}
+ pub fn set_prio(&mut self, prio: isize) {
+ self.prio = prio;
+ }
+ pub fn inc_stride(&mut self) {
+ let pass = BIG_STRIDE / self.prio;
+ self.stride += pass;
+ }
}

impl TaskControlBlock {
@@ -118,6 +218,10 @@ impl TaskControlBlock {
exit_code: 0,
heap_bottom: user_sp,
program_brk: user_sp,
+ prio: 16,
+ stride: 0,
+ syscall_times: [0u32; MAX_SYSCALL_NUM],
+ time: 0usize,
})
},
};
@@ -191,6 +295,10 @@ impl TaskControlBlock {
exit_code: 0,
heap_bottom: parent_inner.heap_bottom,
program_brk: parent_inner.program_brk,
+ prio: 16,
+ stride: 0,
+ syscall_times: [0u32; MAX_SYSCALL_NUM],
+ time: 0usize,
})
},
});
@@ -206,6 +314,55 @@ impl TaskControlBlock {
// ---- release parent PCB
}

+ /// parent process spawn the child process
+ pub fn spawn(self: &Arc<Self>, elf_data: &[u8]) -> Arc<Self> {
+ let mut parent_inner = self.inner_exclusive_access();
+ // create a user space based upon elf_data
+ let (memory_set, user_sp, entry_point) = MemorySet::from_elf(elf_data);
+ let trap_cx_ppn = memory_set
+ .translate(VirtAddr::from(TRAP_CONTEXT_BASE).into())
+ .unwrap()
+ .ppn();
+ // alloc a pid and a kernel stack in kernel space
+ let pid_handle = pid_alloc();
+ let kernel_stack = kstack_alloc();
+ let kernel_stack_top = kernel_stack.get_top();
+ let task_control_block = Arc::new(TaskControlBlock {
+ pid: pid_handle,
+ kernel_stack,
+ inner: unsafe {
+ UPSafeCell::new(TaskControlBlockInner {
+ trap_cx_ppn,
+ base_size: user_sp,
+ task_cx: TaskContext::goto_trap_return(kernel_stack_top),
+ task_status: TaskStatus::Ready,
+ memory_set,
+ parent: Some(Arc::downgrade(self)),
+ children: Vec::new(),
+ exit_code: 0,
+ heap_bottom: parent_inner.heap_bottom,
+ program_brk: parent_inner.program_brk,
+ prio: 16,
+ stride: 0,
+ syscall_times: [0u32; MAX_SYSCALL_NUM],
+ time: 0usize,
+ })
+ },
+ });
+ // add child
+ parent_inner.children.push(task_control_block.clone());
+ // modify trap_cx
+ let trap_cx = task_control_block.inner_exclusive_access().get_trap_cx();
+ *trap_cx = TrapContext::app_init_context(
+ entry_point,
+ user_sp,
+ KERNEL_SPACE.exclusive_access().token(),
+ kernel_stack_top,
+ trap_handler as usize,
+ );
+ task_control_block
+ }
+
/// get pid of process
pub fn getpid(&self) -> usize {
self.pid.0

Q&A Assignment

The stride algorithm is simple in principle, but it has a significant problem. For example, take two processes with pass = 10, with strides stored as 8-bit unsigned integers: p1.stride = 255, p2.stride = 250. After p2 runs for one time slice, p1 should in theory run next.

Q1: In practice, is it p1's turn to run? Why?

A1: No. The largest value an 8-bit unsigned integer can represent is 255, so after p2 runs for one time slice, p2.stride = p2.stride + pass = 250 + 10 = 4 (overflow!). As a result, the next process scheduled is still p2.
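The wraparound in A1 is easy to reproduce with Rust's wrapping 8-bit arithmetic (a standalone illustration, not kernel code):

```rust
// Reproduce the 8-bit stride overflow from A1.
fn main() {
    let pass: u8 = 10;
    let p1: u8 = 255; // p1.stride
    let p2: u8 = 250; // p2.stride

    // p2 runs one time slice; its stride wraps around modulo 256.
    let p2_next = p2.wrapping_add(pass);
    assert_eq!(p2_next, 4); // 250 + 10 = 260 = 4 (mod 256)

    // Under a plain unsigned comparison p2 still looks smaller,
    // so a naive scheduler picks p2 again instead of p1.
    assert!(p2_next < p1);
    println!("p2.stride after one slice: {}", p2_next); // 4
}
```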

The earlier requirement that process priorities be >= 2 is in fact meant to address this problem. It can be shown that, ignoring overflow, if all process priorities are >= 2 and the algorithm is followed strictly, then STRIDE_MAX - STRIDE_MIN <= BigStride / 2.

Q2: Why? Give a brief informal argument (a rigorous proof is not required).

A2: With priorities >= 2, we have pass <= BigStride / 2. Because the scheduler always runs the process with the minimum stride, the gap between the maximum and minimum stride can never exceed a single pass, so STRIDE_MAX - STRIDE_MIN <= pass <= BigStride / 2.
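The bound can be checked empirically (this is a spot check, not a proof; the priorities below are arbitrary choices >= 2, and BIG_STRIDE matches the constant in os/src/config.rs):

```rust
// Empirical check of STRIDE_MAX - STRIDE_MIN <= BIG_STRIDE / 2,
// assuming no overflow (u64 strides) and all priorities >= 2.
const BIG_STRIDE: u64 = 1_000_000; // same constant as in os/src/config.rs

fn main() {
    let prios: [u64; 4] = [2, 3, 5, 16];
    let mut strides = [0u64; 4];
    for _ in 0..10_000 {
        // Strict stride scheduling: always run the minimum-stride process.
        let j = (0..strides.len()).min_by_key(|&j| strides[j]).unwrap();
        strides[j] += BIG_STRIDE / prios[j];
        let max = *strides.iter().max().unwrap();
        let min = *strides.iter().min().unwrap();
        assert!(max - min <= BIG_STRIDE / 2);
    }
    println!("bound held for 10000 scheduling steps");
}
```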

Q3: Given the conclusion above, and now taking overflow into account, we can design a special comparator for Stride so that BinaryHeap<Stride>'s pop method returns the truly smallest Stride. Complete the partial_cmp function in the code below, assuming two Strides are never equal.

use core::cmp::Ordering;

struct Stride(u64);

impl PartialOrd for Stride {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        // ...
    }
}

impl PartialEq for Stride {
    fn eq(&self, other: &Self) -> bool {
        false
    }
}

TIPS: with stride stored in 8 bits and BigStride = 255: (125 < 255) == false, (129 < 255) == true.

A3:

use core::cmp::Ordering;

const BIG_STRIDE: u64 = 1_000_000; // same value as in os/src/config.rs

struct Stride(u64);

impl PartialOrd for Stride {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        let half = BIG_STRIDE / 2;
        if self.0 < other.0 {
            if other.0 - self.0 <= half {
                Some(Ordering::Less)
            } else {
                Some(Ordering::Greater) // other has wrapped around
            }
        } else if self.0 - other.0 <= half {
            Some(Ordering::Greater)
        } else {
            Some(Ordering::Less) // self has wrapped around
        }
    }
}

impl PartialEq for Stride {
    fn eq(&self, other: &Self) -> bool {
        false
    }
}
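The same comparison logic can be checked against the numbers in the TIPS line using an 8-bit stride. This is a standalone sketch; Stride8 and BIG_STRIDE_U8 are names made up for this demo.

```rust
use core::cmp::Ordering;

// 8-bit version of the comparator above, matching the TIPS:
// BigStride = 255, so half = 127.
const BIG_STRIDE_U8: u8 = 255;

struct Stride8(u8);

impl PartialEq for Stride8 {
    fn eq(&self, _other: &Self) -> bool {
        false // two strides are assumed never equal
    }
}

impl PartialOrd for Stride8 {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        let half = BIG_STRIDE_U8 / 2; // 127
        if self.0 < other.0 {
            if other.0 - self.0 <= half {
                Some(Ordering::Less)
            } else {
                Some(Ordering::Greater) // other has wrapped around
            }
        } else if self.0 - other.0 <= half {
            Some(Ordering::Greater)
        } else {
            Some(Ordering::Less) // self has wrapped around
        }
    }
}

fn main() {
    // The two cases from the TIPS line:
    assert_eq!(Stride8(125) < Stride8(255), false); // 255 wrapped past 125
    assert_eq!(Stride8(129) < Stride8(255), true);  // ordinary comparison
    println!("comparator matches the TIPS");
}
```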