
Advanced Operating Systems: Lecture 15 - Mr. Farhan Zaidi


CS703 - Advanced Operating Systems
By Mr. Farhan Zaidi

Lecture No. 15


Threads considered marvelous

- Threads are wonderful when some action may block for a while, like a slow I/O operation, an RPC, etc.
- Your code remains clean and “linear”.
- Moreover, aggregate performance is often far higher than without threading.
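As a minimal sketch (the 0.05-second sleep stands in for a slow, blocking call such as a disk read or an RPC), several threads can block concurrently while the per-request code stays linear:

```python
import queue
import threading
import time

results = queue.Queue()

def fetch(name):
    time.sleep(0.05)            # stands in for a slow, blocking call
    results.put((name, "done"))

# Each worker simply blocks inside fetch(); the calling code stays
# clean and linear, and the three waits overlap instead of adding up.
workers = [threading.Thread(target=fetch, args=(n,)) for n in ("a", "b", "c")]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(results.qsize())          # 3
```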


Threads considered harmful

- They are fundamentally non-deterministic, and hence invite Heisenbugs.
- Reentrant code is really hard to write.
- Surprising scheduling can be a huge headache.
- When something “major” changes the state of a system, cleaning up the threads running on the basis of the old state is a pain.
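The non-determinism is easy to demonstrate. The unsynchronized counter below is a classic Heisenbug: the read-modify-write in `counter += 1` can interleave across threads, so the amount of lost work varies from run to run (the thread and iteration counts here are arbitrary):

```python
import threading

counter = 0

def bump(n):
    global counter
    for _ in range(n):
        counter += 1    # read-modify-write: NOT atomic across threads

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The "correct" answer is 400000, but lost updates can make the result
# smaller -- and by a different amount on every run.
print(counter)
```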


Classic issues

- Threads get forked off, then block for some reason. The address space soon bloats as the number of threads grows beyond a threshold, causing the application to crash.
- Erratic use of synchronization primitives. The program becomes incredibly hard to debug, with some problems seen only now and then.
- As threads grow in number, synchronization overhead becomes significant; semaphore and lock queues become very large due to waiting threads.
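One common mitigation for this kind of bloat is to cap how many threads may run at once. A sketch with a bounded semaphore (the limit of 4 is arbitrary):

```python
import threading
import time

limit = threading.BoundedSemaphore(4)   # at most 4 workers at once (arbitrary cap)
lock = threading.Lock()
active = 0
active_peak = 0

def worker():
    global active, active_peak
    with limit:                 # blocks while 4 workers are already running
        with lock:
            active += 1
            active_peak = max(active_peak, active)
        time.sleep(0.001)       # stands in for the real (possibly blocking) work
        with lock:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(active_peak)              # never exceeds 4
```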








Bottom line?

- Concurrency bugs are incredibly common and very hard to track down and fix.
- Programmers find concurrency unnatural.
- Try to package it better? Identify software engineering paradigms that can ease the task of gaining high performance.


Event-oriented paradigm


Classic solution?

- Go ahead and build an event-driven application, and use threads as helpers.
- Connect each “external event source” to a main hand-crafted “event monitoring routine”.
- This routine will often use signals or a kernel-supplied event notification mechanism to detect that I/O is available.
- It then packages the event as an event object and puts it on a queue, kicking off the event scheduler if it was asleep.
- The scheduler de-queues events and processes them one by one, forking lightweight threads as needed (when blocking I/O is required).
- The Flash web server is built on this paradigm.
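The scheme above can be sketched as a minimal event loop; the event names and the `queue.Queue`-based scheduler here are illustrative, not Flash's actual implementation:

```python
import queue
import threading

events = queue.Queue()      # central event queue
handled = []                # record of non-blocking events we processed

def handle(event):
    kind, payload = event
    if kind == "blocking-io":
        # Work that must block is handed to a lightweight helper thread
        # so the scheduler itself never stalls.
        threading.Thread(target=payload).start()
    else:
        handled.append(kind)

def scheduler():
    # De-queue events and process them one by one. Queue.get() blocks
    # when there is no work, and a put() from any producer wakes us up.
    while True:
        event = events.get()
        if event is None:   # sentinel: shut the scheduler down
            break
        handle(event)

t = threading.Thread(target=scheduler)
t.start()
events.put(("timer", None))
events.put(("blocking-io", lambda: None))
events.put(None)
t.join()
print(handled)              # ['timer']
```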


Problems with the architecture?

- Only works if all the events show up and none is missed.
- Depends on the OS not “losing” events; in other words, the event notification mechanism must be efficient and scalable.
- The scheduler needs a way to block when there is no work to do, and must be sure that event notification can wake it up.
- Common event notification mechanisms in Linux, Windows, etc.: select() and poll().
- Newer, much more efficient and scalable mechanisms: epoll (Linux kernel 2.6 and above only) and I/O completion ports (Windows NT, XP, etc.).
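Python's `selectors` module wraps exactly these mechanisms (it picks epoll on Linux and falls back to poll()/select() elsewhere), so a readiness-based loop can be sketched portably; the socketpair here just stands in for a real network connection:

```python
import selectors
import socket

sel = selectors.DefaultSelector()   # epoll on Linux, kqueue on BSD/macOS,
                                    # select()/poll() where nothing better exists

a, b = socket.socketpair()          # connected pair, just for demonstration
sel.register(b, selectors.EVENT_READ)

a.sendall(b"ping")                  # makes b readable

received = None
# select() blocks until a registered descriptor is ready (or the timeout expires).
for key, _mask in sel.select(timeout=1):
    received = key.fileobj.recv(4)

print(received)                     # b'ping'
sel.unregister(b)
a.close()
b.close()
```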


Goals of “the perfect scheduler”

- Minimize latency: the metric is response time (on user time scales, ~50-150 milliseconds) or job completion time.
- Maximize throughput: maximize the number of jobs completed per unit time.
- Maximize utilization: keep the CPU and I/O devices busy. A recurring theme in OS scheduling.
- Fairness: everyone gets to make progress; no one starves.


Problem cases

- I/O goes idle because of blindness about job types.
- Optimization involves favoring jobs of type “A” over “B”. Lots of A's? B's starve.
- An interactive process gets trapped behind others: response time worsens for no reason.
- Priorities: A depends on B, but A's priority > B's, so B never runs.


First come first served (FCFS or FIFO)

Simplest scheduling algorithm:

- Run jobs in the order that they arrive.
- Uni-programming: run each job until done (non-preemptive).
- Multi-programming: put a job at the back of the queue when it blocks on I/O (we'll assume this).
- Advantage: simplicity.


FCFS (2)

Disadvantage: wait time depends on arrival order; unfair to later jobs (worst case: a long job arrives first).

Example: three jobs (times: A=100, B=1, C=2) arrive nearly simultaneously. What's the average completion time?

cpu:  A ........................... B .. C
time: 0                       100   101  103

Completion times are 100, 101, and 103, so the average is about 101.3.

And now?

cpu:  B  C  A ...........................
time: 0  1  3                         103

Completion times are 1, 3, and 103, so the average is about 35.7.
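The averages in the example above can be checked with a few lines of code (burst times as in the slide):

```python
def fcfs_completion_times(bursts):
    """Completion time of each job when run to completion in arrival order."""
    t, done = 0, []
    for burst in bursts:
        t += burst
        done.append(t)
    return done

# A arrives first: completions at 100, 101, 103.
first = fcfs_completion_times([100, 1, 2])
print(sum(first) / len(first))    # 101.33...

# Short jobs first (B, C, A): completions at 1, 3, 103.
second = fcfs_completion_times([1, 2, 100])
print(sum(second) / len(second))  # 35.66...
```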


FCFS Convoy effect

A CPU-bound job will hold the CPU until it is done or until it issues an I/O request (a rare occurrence, since the thread is CPU-bound). The result is long periods during which no I/O requests are issued while the CPU is held: poor I/O device utilization.

Example: one CPU-bound job, many I/O-bound jobs.

- The CPU-bound job runs (I/O devices idle).
- The CPU-bound job blocks.
- The I/O-bound job(s) run and quickly block on I/O.
- The CPU-bound job runs again.
- The I/O completes.
- The CPU-bound job still runs while the I/O devices sit idle (and this continues...).
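A toy discrete-time simulation makes the effect visible; the burst lengths, I/O time, and tick count below are arbitrary, and the FIFO ready queue dispatches whichever job has waited longest:

```python
from collections import deque

# Hypothetical burst lengths, in ticks: one CPU-bound job with long
# bursts, two I/O-bound jobs with 1-tick bursts.
burst = {"cpu": 10, "io1": 1, "io2": 1}
IO_TICKS = 2                 # every burst ends with a 2-tick I/O request
TICKS = 40

ready = deque(["cpu", "io1", "io2"])
in_io = {}                   # job -> I/O ticks remaining
running, remaining = None, 0
io_busy = 0

for _ in range(TICKS):
    # Advance pending I/O; completed jobs rejoin the FIFO ready queue.
    for job in list(in_io):
        in_io[job] -= 1
        if in_io[job] == 0:
            del in_io[job]
            ready.append(job)
    if in_io:
        io_busy += 1
    # FIFO dispatch: the job at the head holds the CPU until its burst ends.
    if running is None and ready:
        running = ready.popleft()
        remaining = burst[running]
    if running is not None:
        remaining -= 1
        if remaining == 0:
            in_io[running] = IO_TICKS
            running = None

# The I/O devices are busy only in the brief windows when the CPU-bound
# job is itself blocked -- idle for most of the run.
print(f"I/O devices busy {io_busy}/{TICKS} ticks")
```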





