MIT 6.828 Fall 2018 Notes - Homework 7: xv6 locking

Homework: xv6 locking

Interrupts in ide.c

Explain in a few sentences why the kernel panicked. You may find it useful to look up the stack trace (the sequence of %eip values printed by panic) in the kernel.asm listing.

I changed the iderw() function in ide.c as the homework asks (a sketch of the change is below); it took four or five runs before the kernel finally panicked.
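A sketch of the change (per the homework handout: sti() right after the acquire(), cli() right before the release(); the middle of iderw() is unchanged):

// ide.c, iderw() -- only the relevant lines are shown
acquire(&idelock);  //DOC:acquire-lock
sti();              // added: re-enable interrupts while idelock is held

// ... append b to idequeue, start the request, wait for it to finish ...

cli();              // added: disable interrupts again before releasing
release(&idelock);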

❯ make qemu
qemu-system-i386 -serial mon:stdio -drive file=fs.img,index=1,media=disk,format=raw -drive file=xv6.img,index=0,media=disk,format=raw -smp 2 -m 512
xv6...
cpu1: starting 1
cpu0: starting 0
lapicid 1: panic: sched locks
 80103ca1 80103e12 80105a87 8010575c 801022b7 80100191 801014e5 8010155f 801037c4 8010575f

Looking the %eip values up in kernel.asm gives this execution order: trapasm.S: trapret -> proc.c: forkret -> fs.c: iinit -> fs.c: readsb -> bio.c: bread -> ide.c: iderw -> trapasm.S: alltraps -> trap.c: trap -> proc.c: yield -> proc.c: sched

So while the first user process was starting, a timer interrupt arrived inside iderw() (presumably after the sti() and before the cli()) and triggered a reschedule; because ncli was not 1, sched() panicked with "sched locks".
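The check that fires is in sched() (proc.c, quoted roughly from xv6-public). When the timer interrupt lands inside iderw()'s critical section, yield() acquires ptable.lock on top of the already-held idelock, so ncli (the pushcli() nesting count) is 2 rather than 1:

// proc.c, sched() -- the check that produced "panic: sched locks"
if(!holding(&ptable.lock))
  panic("sched ptable.lock");
if(mycpu()->ncli != 1)      // ncli == 2 here: idelock and ptable.lock both held
  panic("sched locks");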

Interrupts in file.c

Explain in a few sentences why the kernel didn't panic. Why do file_table_lock and ide_lock have different behavior in this respect?

The kernel probably doesn't panic because the window between acquire() and release() of the file table lock is so short that a timer interrupt is very unlikely to land inside it; ide_lock, by contrast, is held while iderw() waits for the disk, so with interrupts enabled a timer interrupt inside that critical section is almost inevitable.
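For comparison, the critical sections protected by the file table lock are only a few instructions long, e.g. filealloc() in file.c (quoted roughly from xv6-public; the lock is named ftable.lock there rather than file_table_lock):

// file.c, filealloc() -- the lock is held only while scanning a small
// in-memory array, so the window for a timer interrupt is tiny.
struct file*
filealloc(void)
{
  struct file *f;

  acquire(&ftable.lock);
  for(f = ftable.file; f < ftable.file + NFILE; f++){
    if(f->ref == 0){
      f->ref = 1;
      release(&ftable.lock);
      return f;
    }
  }
  release(&ftable.lock);
  return 0;
}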

xv6 lock implementation

Why does release() clear lk->pcs[0] and lk->cpu before clearing lk->locked? Why not wait until after?

Otherwise the following could happen: the moment the thread on cpu0 clears lk->locked, cpu1, which has been spinning in acquire(), immediately grabs the lock and starts writing lk->cpu and lk->pcs[0], while cpu0 is still writing those same fields, which is a data race.

// Release the lock.
void
release(struct spinlock *lk)
{
  if(!holding(lk))
    panic("release");

  lk->pcs[0] = 0;
  lk->cpu = 0;

  // __sync_synchronize() keeps the memory operations above from being
  // reordered past the store below, so the critical section's accesses
  // cannot happen after the lock is released.
  __sync_synchronize();

  // lk->locked = 0 might not be atomic, so release the lock with an assembly store.
  asm volatile("movl $0, %0" : "+m" (lk->locked) : );

  popcli();
}
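For reference, the matching acquire() (spinlock.c, quoted roughly from memory) writes lk->cpu and lk->pcs only after the xchg succeeds; these are exactly the stores that would race with release() if lk->locked were cleared first:

void
acquire(struct spinlock *lk)
{
  pushcli(); // disable interrupts to avoid deadlock

  if(holding(lk))
    panic("acquire");

  // The xchg is atomic; spin until the lock is free.
  while(xchg(&lk->locked, 1) != 0)
    ;

  // Keep the critical section's memory accesses after this point.
  __sync_synchronize();

  // Record debugging info; these stores would race with another CPU's
  // release() if that release() cleared lk->locked before clearing them.
  lk->cpu = mycpu();
  getcallerpcs(&lk, lk->pcs);
}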