Listing 18.17: Canceling a PLINQ Query
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public class Program
{
    public static List<string> ParallelEncrypt(
        List<string> data,
        CancellationToken cancellationToken)
    {
        return data.AsParallel().WithCancellation(
            cancellationToken).Select(
            (item) => Encrypt(item)).ToList();
    }

    public static void Main()
    {
        // Utility.GetData(), Encrypt(), and stars are defined
        // elsewhere in the chapter.
        List<string> data = Utility.GetData(1000000).ToList();
        CancellationTokenSource cts =
            new CancellationTokenSource();

        Console.WriteLine("Push ENTER to exit.");

        Task task = Task.Factory.StartNew(() =>
        {
            data = ParallelEncrypt(data, cts.Token);
        }, cts.Token);

        // Wait for the user's input
        Console.Read();

        cts.Cancel();
        Console.Write(stars);

        try { task.Wait(); }
        catch (AggregateException) { }
    }
}
OUTPUT 18.8:
ERROR: The operation was canceled.
As with a parallel loop, canceling a PLINQ query requires a CancellationToken, which is available via the CancellationTokenSource.Token property. However, rather than overloading every PLINQ query to support the cancellation token, the ParallelQuery<T> object returned by IEnumerable's AsParallel() method includes a WithCancellation() extension method that simply takes a CancellationToken. As a result, calling Cancel() on the CancellationTokenSource object will request the parallel query to cancel, because it checks the IsCancellationRequested property on the CancellationToken.

As mentioned, canceling a PLINQ query will throw an exception in place of returning the complete result. Therefore, all canceled PLINQ queries will need to be wrapped by try{...}/catch(OperationCanceledException){...} blocks to avoid an unhandled exception. Alternatively, as shown in Listing 18.17, pass the CancellationToken to both ParallelEncrypt() and as a second parameter on StartNew(). This will cause task.Wait() to throw an AggregateException whose InnerException property will be set to a TaskCanceledException.
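For example, a minimal sketch of the try/catch approach (not part of Listing 18.17; it reuses that listing's Encrypt() helper, which is defined elsewhere in the chapter) might look as follows:

public static List<string> TryParallelEncrypt(
    List<string> data, CancellationToken cancellationToken)
{
    try
    {
        return data.AsParallel().WithCancellation(
            cancellationToken).Select(
            (item) => Encrypt(item)).ToList();
    }
    catch (OperationCanceledException)
    {
        // Canceled before a complete result could be produced.
        return null;
    }
}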
Multithreading before .NET Framework 4

TPL is a fantastic library covering a multitude of multithreading patterns with extensibility points to handle even more. However, there is one significant drawback to TPL: It is available only for the .NET Framework 4 or for use with the Rx library in .NET 3.5. In this section, we cover multithreading technology before TPL.
Asynchronous Operations with System.Threading.Thread
Listing 18.18 (with Output 18.9) provides an example. Like TPL, there is a fundamental type, System.Threading.Thread, which is used to control an asynchronous operation. Like System.Threading.Tasks.Task in TPL, Thread includes a Start() method and a wait equivalent, Join().
Listing 18.18: Starting a Method Using System.Threading.Thread
using System;
using System.Threading;

public class RunningASeparateThread
{
    public const int Repetitions = 1000;

    public static void Main()
    {
        ThreadStart threadStart = DoWork;
        Thread thread = new Thread(threadStart);
        thread.Start();

        for (int count = 0; count < Repetitions; count++)
        {
            Console.Write('-');
        }

        thread.Join();
    }

    public static void DoWork()
    {
        for (int count = 0; count < Repetitions; count++)
        {
            Console.Write('.');
        }
    }
}
OUTPUT 18.9:

(Interleaved runs of '-' and '.' characters; the exact interleaving varies from run to run.)
Like the output of Listing 18.9, which used TPL, Listing 18.18's code (see Output 18.9) intersperses . and - in the output. The code that is to execute in a new thread appears in the DoWork() method. The DoWork() method outputs a . during each iteration within a loop. Besides the fact that it contains code for starting another thread, the Main() method is virtually identical in structure to DoWork(), except that it displays '-' instead of '.'. The resultant output is a series of dashes until the thread context switches, at which time the program displays periods until the next thread switch, and so on.[2]
In order for code to run under the context of a different thread, you
need a delegate of type System.Threading.ThreadStart or System.
Threading.ParameterizedThreadStart (the latter allows for a single
parameter of type object), identifying the code to execute. Given a Thread
instance created using the thread-start delegate constructor, you can start
the thread executing with a call to thread.Start(). (Listing 18.18 shows
the ThreadStart explicitly to identify the delegate type. In general, DoWork
could be passed directly to the thread constructor using C# 2.0’s delegate
inference.) Starting the thread simply involves a call to Thread.Start().
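For example, a minimal variation of Listing 18.18's Main() using delegate inference:

// Equivalent to the explicit ThreadStart declaration above.
Thread thread = new Thread(DoWork);
thread.Start();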
As soon as the call to thread.Start() returns, the DoWork() method begins execution while the for loop in the Main() method continues to execute. The threads are now independent and neither waits for the other. The output from Listing 18.18 and Listing 18.19 will intermingle the output of each thread, instead of creating a series of periods followed by a series of dashes.
Thread Management
Threads include a number of methods and properties for managing their execution.

• Join(): Once threads are started, you can cause a "wait for completion" with a call to thread.Join(). The calling thread will wait until the thread instance terminates. The Join() method is overloaded to take either an int or a TimeSpan to support a maximum amount of time to wait for thread completion before continuing execution.
2. As mentioned earlier, it is possible to increase the chances of a thread context switch by
using Start /low /b <program.exe> to execute the program.
• IsBackground: Another thread configuration option is the thread.IsBackground property. By default, a thread is a foreground thread, meaning the process will not terminate until the thread completes. In contrast, setting the IsBackground property to true will allow process execution to terminate prior to a thread's completion.

• Priority: When waiting with the Join() method, for example, you can increase or decrease the thread's priority by setting the Priority property to a new ThreadPriority enum value (Lowest, BelowNormal, Normal, AboveNormal, or Highest).

• ThreadState: A thread's state is accessible through the ThreadState property, a more precise reflection of the Boolean IsAlive property. The ThreadState enum flag values are Aborted, AbortRequested, Background, Running, Stopped, StopRequested, Suspended, SuspendRequested, Unstarted, and WaitSleepJoin. The flag names indicate activities that may occur on a thread. Two noteworthy methods are Thread.Sleep() and Abort().
• Thread.Sleep(): Thread.Sleep() is a static method that pauses the current thread for a period. A single parameter (in milliseconds, or a TimeSpan) specifies how long the active thread waits before continuing execution. This enables switching to a different thread for a specific period. This method is not intended for accurate timing, however: returns can occur hundreds of milliseconds before or after the specified time.
• Abort(): A thread's Abort() method causes a ThreadAbortException to be thrown within the target thread at whatever location the thread is executing when Abort() is invoked. As already detailed, aborting a thread introduces uncertainty into the thread's behavior and could cause data integrity and resource cleanup problems. Developers should consider the Abort() method to be a last resort. Instead, they should rely on threads running to completion and/or signaling them to escape out of whatever code is running via some shared state (see the sketch following this list).
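To illustrate the alternative, the following is a minimal sketch (the class and member names are hypothetical) of signaling a thread via shared state rather than calling Abort():

using System;
using System.Threading;

public class CooperativeStop
{
    // Shared state signaling the worker to finish; volatile so the
    // read in the loop is not optimized away (see Chapter 19).
    private static volatile bool _Stop;

    public static void Main()
    {
        Thread thread = new Thread(DoWork);
        thread.Start();

        Thread.Sleep(1000); // Let the worker run briefly.
        _Stop = true;       // Request a graceful exit.
        thread.Join();      // Wait for the worker to observe the flag.
    }

    private static void DoWork()
    {
        while (!_Stop)
        {
            // ... perform one unit of work, leaving data consistent ...
        }
    }
}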
From this list of Thread members, only Join() and ThreadState have Task equivalents. For the most part, this is because there are generally preferable equivalents or the behavior of the member is undesirable as a best practice. For example, aborting a thread may threaten data integrity or result in inadequate resource deallocation, as mentioned earlier in the chapter. Therefore, given the .NET Framework 4, developers should generally avoid these members in favor of their Task equivalents or alternative patterns entirely.
In summary, the general priority for selecting from the asynchronous class options is Task, ThreadPool, and then Thread. In other words, use TPL, but if that doesn't fit, use ThreadPool; if that still doesn't suffice, use Thread. One particular Thread member that is likely to crop up more frequently, because there is no Task or ThreadPool equivalent, is Thread.Sleep(). However, if it doesn't introduce too much unnecessary complexity, consider using a timer in place of Sleep(), as sketched below.
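For instance, here is a minimal sketch (hypothetical names) using System.Threading.Timer, which invokes a callback on a thread pool thread after a due time and then at a repeating interval:

using System;
using System.Threading;

public class TimerInsteadOfSleep
{
    public static void Main()
    {
        // First callback after 1 second, then repeating every second,
        // until the timer is disposed.
        using (Timer timer = new Timer(
            state => Console.WriteLine("Tick"), null, 1000, 1000))
        {
            Console.WriteLine("Push ENTER to exit.");
            Console.Read();
        }
    }
}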
Thread Pooling
Regardless of the number of processors, an excess of threads negatively affects performance. To efficiently manage thread creation, TPL makes extensive use of the CLR's thread pool, System.Threading.ThreadPool. Most importantly, the thread pool dynamically determines when to use existing threads rather than creating new ones. Fortunately, the .NET 3.5 Framework includes a version of System.Threading.ThreadPool, so it is available even without TPL.

Accessing threads in ThreadPool is similar to explicit use of the Thread class except that the invocation is via a static method, QueueUserWorkItem() (see Listing 18.19).
Listing 18.19: Using ThreadPool Instead of Instantiating Threads Explicitly
using System;
using System.Threading;

public class Program
{
    public const int Repetitions = 1000;

    public static void Main()
    {
        ThreadPool.QueueUserWorkItem(DoWork, '.');

        for (int count = 0; count < Repetitions; count++)
        {
            Console.Write('-');
        }

        // Pause until the thread completes
        Thread.Sleep(1000);
    }

    public static void DoWork(object state)
    {
        for (int count = 0; count < Repetitions; count++)
        {
            Console.Write(state);
        }
    }
}
The output is similar to Output 18.9, an intermingling of . and -. This provides more-efficient execution on single- and multiprocessor computers. The efficiency is achieved by reusing threads over and over, rather than reconstructing them for every asynchronous call.
Unfortunately, thread pool use is not without its pitfalls. Activities such as I/O operations and other framework methods that internally use the thread pool can consume threads as well. Consuming all threads within the pool can delay execution and, in extreme cases, cause a deadlock. Similarly, if the asynchronous code will take a long time to execute, then it is inappropriate to consume a shared thread from the thread pool; instead, favor explicit Thread instantiation (or, given TPL, use TaskCreationOptions.LongRunning as mentioned earlier and sketched below).
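A minimal sketch of the TPL option mentioned above (with Thread.Sleep() standing in for lengthy work):

using System;
using System.Threading;
using System.Threading.Tasks;

public class LongRunningWork
{
    public static void Main()
    {
        // LongRunning hints that the scheduler should give the task a
        // dedicated thread instead of consuming a thread pool thread.
        Task task = Task.Factory.StartNew(
            () => Thread.Sleep(5000), // stand-in for lengthy work
            TaskCreationOptions.LongRunning);
        task.Wait();
    }
}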
Unfortunately, another disadvantage of the thread pool is that, unlike either Thread or Task, the ThreadPool API does not return a handle to the thread or task itself. This prevents the calling thread from controlling it with the thread management functions described earlier in the chapter. Even monitoring state is not available without explicitly adding a custom implementation. Assuming these deficiencies are not critical, developers should consider using the thread pool over explicit thread creation because of its increased efficiency, at least prior to .NET Framework 4 and TPL; the fact that TPL uses the thread pool internally indicates the significance of using it for the majority of multithreading scenarios.
Unhandled Exceptions on the AppDomain
To catch all exceptions from a thread (for which appropriate handling is known), you surround the root code block with a try/catch/finally block, just as you would for all code within Main(). However, what happens if a third-party component creates an alternate thread and throws an unhandled exception from that thread? Similarly, what if queued work on the thread pool throws an exception? A try/catch block in Main() will not catch an exception on an alternate thread. Furthermore, without access to any "handle" for the thread (such as a Task), there is no way to catch any exceptions that it might throw. Even if there were, the code could never appropriately recover from all possible exceptions and continue executing. (In fact, this is why in .NET 4.0 exceptions such as System.StackOverflowException will not be caught and instead will tear down the application.) The general unhandled-exceptions guideline is for the program to shut down and restart in a clean state instead of behaving erratically or hanging because of an invalid state.

However, instead of crashing suddenly or ignoring an unhandled exception entirely if it occurs on an alternate thread, it is often desirable to save any working data and/or log the exception for error reporting and future debugging. This requires a mechanism to register for notifications of unhandled exceptions.

Registering for unhandled exceptions on the main application domain occurs via an application domain's UnhandledException event. Listing 18.20 demonstrates that process, and Output 18.10 shows the results.
Listing 18.20: Registering for Unhandled Exceptions
using System;
using System.Threading;

public class Program
{
    public static void Main()
    {
        try
        {
            // Register a callback to
            // receive notifications
            // of any unhandled exception.
            AppDomain.CurrentDomain.UnhandledException +=
                OnUnhandledException;

            ThreadPool.QueueUserWorkItem(
                state =>
                {
                    throw new Exception(
                        "Arbitrary Exception");
                });

            // ...
            // Wait for the unhandled exception to fire
            // ADVANCED: Use ManualResetEvent to avoid
            // timing-dependent code.
            Thread.Sleep(10000);
            Console.WriteLine("Still running");
        }
        finally
        {
            Console.WriteLine("Exiting");
        }
    }

    public static void ThrowException()
    {
        throw new ApplicationException(
            "Arbitrary exception");
    }

    static void OnUnhandledException(
        object sender,
        UnhandledExceptionEventArgs eventArgs)
    {
        Exception exception =
            (Exception)eventArgs.ExceptionObject;
        Console.WriteLine("ERROR ({0}):{1} > {2}",
            exception.GetType().Name,
            exception.Message,
            exception.InnerException.Message);
    }
}
OUTPUT 18.10:

Still running
Exiting
ERROR (AggregateException):One or more errors occurred. > Arbitrary Exception

The UnhandledException callback will fire for all unhandled exceptions on threads within the application domain, including the main thread. This is a notification mechanism, not a mechanism to catch and process exceptions so that the application can continue. After the event, the application will exit. In fact, the unhandled exception will cause the Windows Error dialog to display (Dr. Watson). And for console applications, the exception will appear on the console.

Astute readers will note that in Listing 18.20 we use ThreadPool rather than Task. This is because of the likelihood that the garbage collector will not have executed on the Task before the application begins to shut down, in which case any exceptions within the finalization will be suppressed rather than going unhandled. The likelihood of this case in most programs is generally low, but the best practice for avoiding significant unhandled exceptions during application exit is to support task cancellation: cancel the task and wait for it to exit before shutting down the application, as sketched below.
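A minimal sketch of that practice (the structure here is hypothetical; the loop body stands in for real, cancelable work):

using System;
using System.Threading;
using System.Threading.Tasks;

public class GracefulExit
{
    public static void Main()
    {
        CancellationTokenSource cts = new CancellationTokenSource();
        Task task = Task.Factory.StartNew(() =>
        {
            while (!cts.Token.IsCancellationRequested)
            {
                // ... perform and checkpoint one unit of work ...
            }
        }, cts.Token);

        // At shutdown: request cancellation, then wait for the task.
        cts.Cancel();
        try { task.Wait(); }
        catch (AggregateException) { /* log before exiting */ }
    }
}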
SUMMARY
This chapter delved into the details surrounding the creation and manipulation of threads using the .NET Framework 4-introduced Task Parallel Library, or TPL. This library includes new APIs for executing for and foreach loops such that iterations can potentially run in parallel. Underlying TPL is a new fundamental threading class, System.Threading.Tasks.Task, the basic threading unit on which all of TPL is based. It provides the standard multithreaded programming and monitoring activities and keeps them relatively simple. Given that Task forms the basis for parallel loops (Parallel.For() and Parallel.ForEach()), PLINQ, and more, it is clear that Task and its peer classes also enable a multitude of more complex threading scenarios, including unhandled-exception handling and Task chaining/notifications via Task.ContinueWith<T>.

In addition, the chapter demonstrated Parallel LINQ (PLINQ), in which a single extension method, AsParallel(), transforms all further LINQ queries to run in parallel. The elegance and simplicity with which this fits into the framework is superb.
The chapter closes with a section on multithreaded programming prior to TPL. The foundational class for this is System.Threading.Thread, and, when appropriate, static methods on ThreadPool provide an efficient means of reusing Threads rather than creating new ones, a relatively inefficient operation. The priority order for choosing an asynchronous class is Task, ThreadPool, and then Thread, resorting to Thread only for members such as Thread.Sleep(), for which neither Task nor ThreadPool offers an equivalent. In making this evaluation, don't forget to consider using the Rx library in order to gain access to TPL and PLINQ within .NET 3.5.

There is one glaring omission from the chapter: synchronization. The introduction mentioned multithreading problems such as deadlocks and race conditions, but the chapter never discussed how to avoid them. This is the topic of the next chapter.
19
Synchronization and More
Multithreading Patterns
IN THE PRECEDING CHAPTER, we discussed the details of multithreaded programming using the Task Parallel Library (TPL) and Parallel LINQ (PLINQ). One topic specifically avoided, however, was thread synchronization: preventing race conditions while avoiding deadlocks. Thread synchronization is the topic of this chapter.

We begin with a multithreaded example with no thread synchronization around shared data, resulting in a race condition in which data integrity is lost. This serves as the introduction for why we need thread synchronization, followed by myriad mechanisms and best practices for doing it.

The second half of the chapter looks at some additional multithreading patterns. This is really a continuation of the patterns first introduced in Chapter 18, except that they depend on several of the synchronization tools introduced in this chapter. In addition, the chapter includes a discussion of three timers and Windows-based user interface programming.
(Chapter-opening topic map: Synchronization, covering Monitor, lock, volatile, System.Threading.Interlocked, more synchronization types (Mutex, WaitHandle, reset events), synchronization best practices, and thread local storage; plus Timers, the Asynchronous Programming Model, the Background Worker pattern, and Windows UI programming as multithreading patterns.)
This entire chapter uses TPL, so the samples cannot be compiled on frameworks prior to .NET Framework 4. However, unless specifically identified as a .NET Framework 4 API, the only reason for the .NET Framework 4 restriction is the use of the System.Threading.Tasks.Task class to execute the asynchronous operation. Modifying the code to instantiate a System.Threading.Thread and use a Thread.Join() to wait for the thread to execute will allow the vast majority of samples to compile on earlier frameworks.

Furthermore (as mentioned in the preceding chapter), Microsoft released the Reactive Extensions to .NET (Rx), a separate download that adds support for TPL and PLINQ within the .NET 3.5 framework. This framework also includes the concurrent and synchronization types introduced in this chapter. For this reason, code listings that depend on Task or that introduce C# 4.0 synchronization classes are, in fact, available from .NET 3.5 using the functionality backported to the .NET 3.5 Framework via Rx and a reference to the System.Threading.dll assembly.
Synchronization
Running a new thread is a relatively simple programming task. What makes multithreaded programming difficult, however, is identifying which data multiple threads could access simultaneously. The program must synchronize such data to prevent simultaneous access. Consider Listing 19.1.
Listing 19.1: Unsynchronized State
using System;
using System.Threading.Tasks;

class Program
{
    const int _Total = int.MaxValue;
    static long _Count = 0;

    public static void Main()
    {
        Task task = Task.Factory.StartNew(Decrement);

        // Increment
        for (int i = 0; i < _Total; i++)
        {
            _Count++;
        }

        task.Wait();
        Console.WriteLine("Count = {0}", _Count);
    }

    static void Decrement()
    {
        // Decrement
        for (int i = 0; i < _Total; i++)
        {
            _Count--;
        }
    }
}
One possible result of Listing 19.1 appears in Output 19.1.

OUTPUT 19.1:

Count = 113449949

The important thing to note about Listing 19.1 is that the output is not 0. It would have been if Decrement() had been called directly (sequentially). However, when calling Decrement() asynchronously, a race condition occurs because the individual steps within the _Count++ and _Count-- statements intermingle. (As discussed in the Thread Basics Beginner Topic early in Chapter 18, a single statement in C# will likely involve multiple steps.) Consider the sample execution in Table 19.1.

Table 19.1 shows a parallel execution (or a thread context switch) by the transition of instructions appearing from one column to the other. The value of _Count after a particular line has completed appears in the last column. In this sample execution, _Count++ executes twice and _Count-- occurs once. However, the resultant _Count value is 0, not 1. Copying a result back to _Count essentially wipes out any changes to _Count that occurred since the read of _Count on the same thread.
The problem in Listing 19.1 is a race condition, where multiple threads have simultaneous access to the same data elements. As this sample execution demonstrates, allowing multiple threads to access the same data elements likely undermines data integrity, even on a single-processor computer. To remedy this, the code needs synchronization around the data. Code or data synchronized for simultaneous access by multiple threads is thread-safe.
TABLE 19.1: Sample Pseudocode Execution

Main Thread                                        Decrement Thread                                   Count
...                                                ...                                                ...
Copy the value 0 out of _Count.                                                                       0
Increment the copied value (0), resulting in 1.                                                       0
Copy the resultant value (1) into _Count.                                                             1
Copy the value 1 out of _Count.                                                                       1
                                                   Copy the value 1 out of _Count.                    1
Increment the copied value (1), resulting in 2.                                                       1
Copy the resultant value (2) into _Count.                                                             2
                                                   Decrement the copied value (1), resulting in 0.    2
                                                   Copy the resultant value (0) into _Count.          0
...                                                ...                                                ...
Synchronization 753
bigger than a native (pointer-size) integer will not be read or written
partially. Assuming a 64-bit operating system, therefore, reads and
writes to a long (64 bits) will be atomic. However, reads and writes to a
128-bit variable such as decimal may not be atomic. Therefore, write
operations to change a decimal variable may be interrupted after copy-
ing only 32 bits, resulting in the reading of an incorrect value, known as
a torn read.
BEGINNER TOPIC
Multiple Threads and Local Variables
Note that it is not necessary to synchronize local variables. Local variables are loaded onto the stack, and each thread has its own logical stack. Therefore, each local variable has its own instance for each method call. By default, local variables are not shared across method calls; therefore, they are also not shared among multiple threads.

However, this does not mean local variables are entirely without concurrency issues, since code could easily expose a local variable to multiple threads. A parallel for loop that shares a local variable between iterations, for example, will expose the variable to concurrent access and a race condition (see Listing 19.2).
Listing 19.2: Unsynchronized Local Variables
using System;
using System.Threading.Tasks;
class Program
{
public static void Main()
{
int x = 0;
Parallel.For(0, int.MaxValue, i =>
{
x++;
x--;
});
Console.WriteLine("Count = {0}", x);
}
}
In this example, x (a local variable) is accessed within a parallel for loop, so multiple threads will modify it simultaneously, creating a race condition very similar to the one in Listing 19.1. The output is unlikely to yield the value 0, even though x is incremented and decremented the same number of times.
Synchronization Using Monitor
To synchronize multiple threads so that they cannot execute particular sections of code simultaneously, use a monitor to block the second thread from entering a protected code section before the first thread has exited that section. The monitor functionality is part of a class called System.Threading.Monitor, and the beginning and end of protected code sections are marked with calls to the static methods Monitor.Enter() and Monitor.Exit(), respectively.

Listing 19.3 demonstrates synchronization using the Monitor class explicitly. As this listing shows, it is important that all code between calls to Monitor.Enter() and Monitor.Exit() be surrounded with a try/finally block. Without this, an exception could occur within the protected section and Monitor.Exit() may never be called, thereby blocking other threads indefinitely.
Listing 19.3: Synchronizing with a Monitor Explicitly
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    readonly static object _Sync = new object();
    const int _Total = int.MaxValue;
    static long _Count = 0;

    public static void Main()
    {
        Task task = Task.Factory.StartNew(Decrement);

        // Increment
        for (int i = 0; i < _Total; i++)
        {
            bool lockTaken = false;
            Monitor.Enter(_Sync, ref lockTaken);
            try
            {
                _Count++;
            }
            finally
            {
                if (lockTaken)
                {
                    Monitor.Exit(_Sync);
                }
            }
        }

        task.Wait();
        Console.WriteLine("Count = {0}", _Count);
    }

    static void Decrement()
    {
        for (int i = 0; i < _Total; i++)
        {
            bool lockTaken = false;
            Monitor.Enter(_Sync, ref lockTaken);
            try
            {
                _Count--;
            }
            finally
            {
                if (lockTaken)
                {
                    Monitor.Exit(_Sync);
                }
            }
        }
    }
}

The results of Listing 19.3 appear in Output 19.2.
OUTPUT 19.2:
Count = 0
Note that calls to Monitor.Enter() and Monitor.Exit() are associated with each other by sharing the same object reference passed as the parameter (in this case _Sync).

The Monitor.Enter() overload that takes the lockTaken parameter was only added to the framework in .NET 4.0. Before that, no such lockTaken parameter was available and there was no way to reliably catch an exception that occurred between the Monitor.Enter() call and the try block. Placing the try block immediately following the Monitor.Enter() call was reliable in release code because the JIT prevented any such asynchronous exception from sneaking in. However, anything other than a try block immediately following the Monitor.Enter(), including any instructions that the compiler may have injected within debug code, could prevent the JIT from reliably resuming execution within the try block. Therefore, if an exception did occur, it would leak the lock (the lock would remain acquired) rather than executing the finally block and releasing it, likely causing a deadlock when another thread tries to acquire the lock.
Monitor also supports a Pulse() method for allowing a thread to enter the "ready queue," indicating it is up next for execution. This is a common means of synchronizing producer-consumer patterns so that no "consume" occurs until there has been a "produce." The producer thread that owns the monitor (by calling Monitor.Enter()) calls Monitor.Pulse() to signal the consumer thread (which may already have called Monitor.Enter()) that an item is available for consumption, so "get ready." For a single Pulse() call, only one thread (the consumer in this case) can enter the ready queue. When the producer thread calls Monitor.Exit(), the consumer thread takes the lock (Monitor.Enter() completes) and enters the critical section to begin "consuming" the item. Once the consumer processes the waiting item, it calls Exit(), thus allowing the producer (currently blocked with Monitor.Enter()) to produce again. In this example, only one thread can enter the ready queue at a time, ensuring that there is no "consumption" without "production" and vice versa.
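As an illustration, the following is a minimal sketch of the pattern (the queue and names are hypothetical); it pairs Pulse() with Monitor.Wait(), which releases the lock and blocks until a pulse arrives:

using System.Collections.Generic;
using System.Threading;

class ProducerConsumer
{
    readonly static object _Sync = new object();
    readonly static Queue<string> _Items = new Queue<string>();

    public static void Produce(string item)
    {
        lock (_Sync)
        {
            _Items.Enqueue(item);
            // Move one waiting consumer into the ready queue.
            Monitor.Pulse(_Sync);
        }
    }

    public static string Consume()
    {
        lock (_Sync)
        {
            // Wait() releases the lock and blocks until pulsed;
            // the loop re-checks the condition after waking.
            while (_Items.Count == 0)
            {
                Monitor.Wait(_Sync);
            }
            return _Items.Dequeue();
        }
    }
}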
Using the lock Keyword
Because of the frequent need for synchronization using Monitor in multithreaded code, and the fact that the try/finally block could easily be forgotten, C# provides a special keyword to handle this locking synchronization pattern. Listing 19.4 demonstrates the use of the lock keyword, and Output 19.3 shows the results.
Listing 19.4: Synchronization Using the lock Keyword
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    readonly static object _Sync = new object();
    const int _Total = int.MaxValue;
    static long _Count = 0;

    public static void Main()
    {
        Task task = Task.Factory.StartNew(Decrement);

        // Increment
        for (int i = 0; i < _Total; i++)
        {
            lock (_Sync)
            {
                _Count++;
            }
        }

        task.Wait();
        Console.WriteLine("Count = {0}", _Count);
    }

    static void Decrement()
    {
        for (int i = 0; i < _Total; i++)
        {
            lock (_Sync)
            {
                _Count--;
            }
        }
    }
}

OUTPUT 19.3:

Count = 0
By locking the section of code accessing _Count (using either lock or Monitor), you make the Main() and Decrement() methods thread-safe, meaning they can be safely called from multiple threads simultaneously. (Prior to C# 4.0 the concept was the same, except the compiler-emitted code depended on the Monitor.Enter() overload without the lockTaken parameter, and the call to Monitor.Enter() was emitted before the try block.)

Synchronization comes at a cost to performance. Listing 19.4, for example, takes an order of magnitude longer to execute than Listing 19.1 does, which demonstrates lock's relatively slow execution compared to the execution of incrementing and decrementing the count.

Even when lock is insignificant in comparison with the work it synchronizes, programmers should avoid indiscriminate synchronization in order to avoid the possibility of deadlocks and unnecessary synchronization on multiprocessor computers that could instead be executing code in parallel. The general best practice for object design is to synchronize mutable static state (there is no need to synchronize something that never changes) and not any instance data. Programmers who allow multiple threads to access a particular object must provide synchronization for the object. Any class that explicitly deals with threads is likely to want to make instances thread-safe to some extent.
Choosing a lock Object
Whether the lock keyword or the Monitor class is used explicitly, it is crucial that programmers carefully select the lock object.

In the previous examples, the synchronization variable, _Sync, is declared as both private and read-only. It is declared read-only to ensure that the value is not changed between calls to Monitor.Enter() and Monitor.Exit(). This allows correlation between entering and exiting the synchronized block.
Similarly, the code declares _Sync as private so that no synchronization block outside the class can synchronize the same object instance, causing the code to block.

If the data is public, then the synchronization object may be public so that other classes can synchronize using the same object instance. This makes it harder to avoid deadlock. Fortunately, the need for this pattern is rare. For public data, it is preferable to leave synchronization entirely outside the class, allowing the calling code to take locks with its own synchronization object.

It's important that the synchronization object not be a value type. If the lock keyword is used on a value type, then the compiler will report an error. (In the case of accessing the System.Threading.Monitor class explicitly [not via lock], no such error will occur at compile time. Instead, the code will throw an exception with the call to Monitor.Exit(), indicating there was no corresponding Monitor.Enter() call.) The issue is that when using a value type, the runtime makes a copy of the value, places it in the heap (boxing occurs), and passes the boxed value to Monitor.Enter(). Similarly, Monitor.Exit() receives a boxed copy of the original variable. The result is that Monitor.Enter() and Monitor.Exit() receive different synchronization object instances so that no correlation between the two calls occurs.
Why to Avoid Locking on this, typeof(type), and string
One common pattern is to lock on the this keyword for instance data in a class, and on the type instance obtained from typeof(type) (for example, typeof(MyType)) for static data. Such a pattern provides a synchronization target for all states associated with a particular object instance when this is used, and for all static data of a type when typeof(type) is used. The problem is that the synchronization target that this (or typeof(type)) points to could participate in the synchronization target for an entirely different synchronization block created in an unrelated block of code. In other words, although only the code within the instance itself can block using the this keyword, the caller that created the instance can pass that instance to a synchronization lock.

The result is that two different synchronization blocks that synchronize two entirely different sets of data could block each other. Although perhaps unlikely, sharing the same synchronization target could have an unintended performance impact and, in extreme cases, even cause a deadlock. Instead of locking on this or even typeof(type), it is better to define a private, read-only field on which no one will block except for the class that has access to it.

Another lock type to avoid is string, due to string interning. If the same string constant appears within multiple locations, it is likely that all locations will refer to the same instance, making the scope of the lock a lot greater than expected.

In summary, use a per-synchronization context instance of type object for the lock target.
ADVANCED TOPIC
Avoid Synchronizing with MethodImplAttribute
One synchronization mechanism that was introduced in .NET 1.0 was the MethodImplAttribute. Used in conjunction with MethodImplOptions.Synchronized, this attribute marks a method as synchronized so that only one thread can execute the method at a time. To achieve this, the just-in-time compiler essentially treats the method as though it were surrounded by lock(this), or by locking on the type in the case of a static method. Such an implementation means that, in fact, the method and all other methods on the same class decorated with the same attribute and enum parameter are synchronized, not just each method relative to itself. In other words, given two or more methods on the same class decorated with the attribute, only one of them will be able to execute at a time, and the one executing will block all calls by other threads to itself or to any other method in the class with the same decoration. Furthermore, since the synchronization is on this (or, even worse, on the type), it suffers the same detriments as lock(this) (or worse, for the static case) discussed in the previous section. As a result, it is a best practice to avoid the attribute altogether.
Declaring Fields as volatile
On occasion, the compiler and/or CPU may optimize code in such a way that the instructions do not occur in the exact order they are coded, or some instructions are optimized out. Such optimizations are innocuous when code executes on one thread. However, with multiple threads, such optimizations may have unintended consequences because the optimizations may change the order of execution of a field's read or write operations relative to an alternate thread's access to the same field.

One way to stabilize this is to declare fields using the volatile keyword. This keyword forces all reads and writes to the volatile field to occur at the exact location the code identifies, instead of at some other location that the optimization produces. The volatile modifier identifies that the field is susceptible to modification by the hardware, operating system, or another thread. As such, the data is "volatile," and the keyword instructs the compilers and runtime to handle it more exactly.
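For example, a minimal sketch of such a declaration (the class and field names are hypothetical):

class Worker
{
    // All reads and writes of _Stop occur exactly where coded; the
    // optimizer may not cache the value in a register across the loop.
    private volatile bool _Stop;

    public void Run()
    {
        while (!_Stop)
        {
            // ... do work ...
        }
    }

    public void RequestStop()
    {
        _Stop = true;
    }
}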
Using the System.Threading.Interlocked Class
The mutual exclusion pattern described so far provides the minimum of tools for handling synchronization within a process (application domain). However, synchronization with System.Threading.Monitor is a relatively expensive operation, and an alternative solution that the processor supports directly targets specific synchronization patterns.

Listing 19.5 sets _Data to a new value as long as the preceding value was null. As indicated by the method name, this pattern is the compare/exchange pattern. Instead of manually placing a lock around behaviorally equivalent compare and exchange code, the Interlocked.CompareExchange() method provides a built-in method for a synchronous operation that does the same check for a value (null) and swaps the first two parameters if the value is equal. Table 19.2 shows other synchronization methods supported by Interlocked.
Listing 19.5: Synchronization Using System.Threading.Interlocked
using System.Threading;

class SynchronizationUsingInterlocked
{
    private static object _Data;

    // Initialize data if not yet assigned.
    static void Initialize(object newValue)
    {
        // If _Data is null then set it to newValue.
        Interlocked.CompareExchange(
            ref _Data, newValue, null);
    }

    // ...
}
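Although Table 19.2 is the reference for the other methods, a minimal sketch (a hypothetical class) of two of the most common, Interlocked.Increment() and Interlocked.Decrement(), shows how Listing 19.1's counter could be made thread-safe without a lock:

using System.Threading;

class InterlockedCounter
{
    static long _Count = 0;

    public static void Increment()
    {
        // Atomically adds 1 to _Count.
        Interlocked.Increment(ref _Count);
    }

    public static void Decrement()
    {
        // Atomically subtracts 1 from _Count.
        Interlocked.Decrement(ref _Count);
    }
}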