Developers employ multiple threads in an attempt to achieve greater performance, scalability, or responsiveness in their applications. In the previous article, we discussed the Thread object and synchronization primitives. Now we’ll examine two more essential constructs: the Interlocked and Monitor classes.

A word about examples
Microsoft’s approach has long been to make the easiest 75 percent of development tasks very easy, the next 20 percent slightly intricate but straightforward, and the most esoteric 5 percent achievable with some pain. In the subject matter of multithreading, our previous article fell in the first category. This article begins to deal with some of the mechanisms needed to solve problems in the 20 percent range, which are by nature harder to build and which carry more esoteric implementation requirements and restrictions.

Building a middle-tier cache or multiprocess control mechanism is impractical due to space constraints, so we’ll have to use smaller, more contained examples. The code surrounding the primitive constructs may seem contrived, but it is intended to show where, generally, such constructs are useful in the real world.

The Interlocked class
A classic set of threading bugs occurs when an operation in one thread is interrupted at the worst possible moment and an operation in another thread steps in to perform the worst possible activity on a common piece of data. These bugs revolve around so-called atomic operations, which are operations that should not be split up (interrupted).

The Interlocked class provides a set of methods for performing atomic operations. These methods address four atomic operations:

  • The interlocked increment/decrement pair (which counts as two operations)
  • The exchange operation
  • The test-and-exchange operation

Let’s say a server has many threads performing tasks, and after each completion, the thread increments a numberOfCompletions counter. Suppose the programmer wrote the ThreadStart method (void Run()) as in Listing A.

Using the code marked //Bad seems innocuous until you realize that the increment operation may compile to three instructions: a load from memory, an add, and a store back to memory. One thread might load the value from memory and be interrupted there. Another thread then loads, increments, and stores the value and is later interrupted. Finally, the first thread increments the value it loaded when it was last running and writes it back to memory, leaving the counter stale and incorrect. The problem may or may not occur, depending on the compiler you use and the processor you are targeting. To be safe, use the coding construct marked //Good in the listing. Interlocked.Increment(…) is guaranteed to complete uninterrupted, once it is begun. The same code with the Interlocked.Decrement(…) method would, of course, result in an atomically decremented counter.
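The pattern is easy to sketch. The following is a minimal approximation of what Listing A describes, not the listing itself; the class and member names are hypothetical:

```csharp
using System.Threading;

class Worker
{
    // Shared across all worker threads.
    static int numberOfCompletions = 0;

    // The ThreadStart method each worker runs.
    public void Run()
    {
        // ... perform the task ...

        // Bad: the increment may compile to a separate load, add, and
        // store, and a thread can be preempted between any two of them.
        // numberOfCompletions++;

        // Good: guaranteed to complete without interruption once begun.
        Interlocked.Increment(ref numberOfCompletions);
    }
}
```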

A second circumstance that can lead to unexpected issues is called the “test-and-exchange” problem, as shown in Listing B. Imagine that we’re using an m_bInUse Boolean on a pooled object. We want to iterate through a list of such objects, and if we find an object that is not in use, set the flag to indicate it is in use and start to use it.

In the Run method in Listing B, if we used the code marked //Bad, the processor might interrupt a thread that had just finished testing the flag and had found the object not to be in use. That is, the interrupt came just after our thread had decided to use the object but just before our thread had marked it as in use. If another thread examined this object while our thread was suspended, it might mark the object in use and begin using it. Then, if our thread were resumed, it would continue its already-determined course of action and begin to use the object as well, leaving two threads using the same pooled object.

The code marked //Good describes the proper way to handle this. The CompareExchange(…) method shown looks at the s.m_bInUse location, compares it to 0, and if it is equal, replaces the 0 with a 1 and returns the 0. This is guaranteed to happen without interruption.
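A sketch of the same idea, again with hypothetical names. Note that m_bInUse must be an int rather than a bool, because Interlocked.CompareExchange compares and replaces integer values:

```csharp
using System.Threading;

class PooledObject
{
    // 0 = free, 1 = in use; an int so Interlocked can operate on it.
    public int m_bInUse = 0;
}

class Pool
{
    PooledObject[] m_objects;

    public Pool(int size)
    {
        m_objects = new PooledObject[size];
        for (int i = 0; i < size; i++)
            m_objects[i] = new PooledObject();
    }

    public PooledObject Acquire()
    {
        foreach (PooledObject s in m_objects)
        {
            // Bad: another thread can claim s between the test and the set.
            // if (s.m_bInUse == 0) { s.m_bInUse = 1; return s; }

            // Good: the compare-to-0 and replace-with-1 happen as one
            // uninterruptible step; a return value of 0 means we won.
            if (Interlocked.CompareExchange(ref s.m_bInUse, 1, 0) == 0)
                return s;
        }
        return null; // nothing free right now
    }
}
```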

The Monitor class
A monitor is a mechanism for ensuring that only one thread at a time may be running a certain piece of code. A monitor has a lock, and only one thread at a time may acquire it. To run in certain blocks of code, a thread must have acquired the monitor. A monitor is always associated with a specific object and cannot be dissociated from or replaced within that object. The Monitor class is a collection of static methods that provides access to the monitor associated with a particular object, which is specified through the method’s first argument.

The example we’ll use to explore monitors has many instances of a worker thread class, which, while running, tries to write to a log file through a Logger class.

If a piece of code requires possession of a monitor lock, it is preceded by an acquisition and followed by the relinquishing of that monitor. See Listing C, particularly in the body of the Logger.Log(…) method, for an example of this construct.

In Listing C, we create a logger that uses a Monitor on the log’s StreamWriter object to ensure that only one thread at a time writes to the log file. We then create 100 WorkerThread objects, all of which are simultaneously interested in logging. In the Log method, we use the Monitor.Enter(…) method to acquire the lock prior to writing to the log, and the Monitor.Exit(…) method to relinquish it. This prevents two worker threads from writing to the log at the same time. This can also be accomplished through two constructs we discussed in the last article—the lock primitive and the MethodImpl attribute.
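The core of such a Log method might look like the following sketch. Exception safety is deliberately omitted here, as it is in the listings, and the names are hypothetical:

```csharp
using System.IO;
using System.Threading;

class Logger
{
    StreamWriter m_writer;

    public Logger(string path)
    {
        m_writer = new StreamWriter(path);
    }

    public void Log(string message)
    {
        Monitor.Enter(m_writer);     // block until we own the writer's monitor
        m_writer.WriteLine(message); // only one thread at a time gets here
        Monitor.Exit(m_writer);      // hand the monitor to the next thread
    }
}
```

The body of Log is essentially what the C# lock primitive expands to, minus the exception-handling scaffolding that lock adds.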

When a thread is blocked on the Monitor.Enter(…) method, it is in what is called the “ready” queue. When a monitor lock becomes available, the next thread in the ready queue receives the lock. Beyond the simple lock acquire/release mechanism, monitors enable one thread to temporarily relinquish the lock and wait for another thread to give it back. A thread temporarily relinquishes a lock through the Monitor.Wait(…) method. When this happens, it is put into a “waiting” queue. Any number of threads may exist in this queue. Another thread, once it holds the lock, may release a thread in the waiting queue via the Monitor.Pulse(…) and Monitor.PulseAll(…) methods. When Monitor.Pulse(…) is called, the first thread in the waiting queue is moved into the ready queue; it receives the lock once the pulsing thread releases it. Once that thread relinquishes the lock in turn, whether via Monitor.Wait(…) or Monitor.Exit(…), the effect of the Pulse operation is spent. On the other hand, if Monitor.PulseAll(…) is called, all threads in the waiting queue are moved into the ready queue, and each is, in turn, given the monitor lock.

See Listing D and Listing E for examples of this construct. Due to the length of the examples (which are quite similar to Listing A and Listing B), some of the code is removed. Download this code to see the complete examples.

Listing D incorporates three changes. First, we added a class and static member, Lock.m_lock, upon which locking operations are now performed. Next, we changed the Logger.Log(…) method to execute Wait(…) after acquiring the lock. Finally, after creating all of the worker threads, the Runner.Main(…) method begins a cycle in which it sleeps for a quarter of a second and then issues a Pulse on the monitor. The effect of this is that once per 250 ms, a single thread is permitted to write to the log.
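A condensed sketch of those three changes, assuming the structure described above (the full listing is not reproduced here):

```csharp
using System.Threading;

class Lock
{
    // A single shared object whose monitor gates all log writes.
    public static object m_lock = new object();
}

class Logger
{
    public void Log(string message)
    {
        Monitor.Enter(Lock.m_lock);
        Monitor.Wait(Lock.m_lock);  // release the lock and wait to be pulsed
        // ... write to the log (we hold the lock again at this point) ...
        Monitor.Exit(Lock.m_lock);
    }
}

class Runner
{
    static void Main()
    {
        // ... create and start the 100 worker threads ...
        while (true)
        {
            Thread.Sleep(250);           // a quarter of a second
            Monitor.Enter(Lock.m_lock);  // must hold the lock to pulse
            Monitor.Pulse(Lock.m_lock);  // move one waiter to the ready queue
            Monitor.Exit(Lock.m_lock);
        }
    }
}
```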

This sort of construct is useful in cases where a rather lengthy but linear process must be performed on continually arriving entities, such as a server handling single-threaded client requests.

Listing E shows the use of the Monitor.PulseAll(…) method. Since the rest of the example is identical to that in Listing B, we only show the changed Main method.

The change in Listing E, the PulseAll method, results in all waiting threads being moved to the ready queue, with each receiving the lock in turn. The effect of this is that once per 250 ms, every waiting thread is given a chance to write to the log once. This construct might be useful in a situation where a single event is intended to trigger an operation in parallel across many threads, such as a cache refresh.
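Only the pulse loop in Main changes; a sketch, assuming the Lock.m_lock member described for Listing D:

```csharp
// Wake every waiting thread once per cycle instead of just one.
while (true)
{
    Thread.Sleep(250);
    Monitor.Enter(Lock.m_lock);
    Monitor.PulseAll(Lock.m_lock); // move ALL waiters to the ready queue
    Monitor.Exit(Lock.m_lock);
}
```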

A parting thought on monitors
For clarity, the examples above skipped an esoteric but important structure. Consider what would happen if a monitor lock were acquired and a null reference were subsequently encountered, throwing an exception. Control would jump out of the method before reaching the Monitor.Exit() statement, so the monitor would never be exited and the lock never released. This would almost undoubtedly cause problems in your code. To prevent this from happening, if you use the Monitor.Enter/Exit structure, you should always build it as shown in Listing F to ensure that the exit is called in all cases.
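Without reproducing Listing F, the safe shape wraps the guarded code in try/finally:

```csharp
Monitor.Enter(m_writer);
try
{
    // If this throws, control jumps out of the method...
    m_writer.WriteLine(message);
}
finally
{
    // ...but the finally block still runs, so the lock is always released.
    Monitor.Exit(m_writer);
}
```

This is essentially the code the C# lock statement generates on your behalf.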

We’ve discussed two constructs for controlling threads’ execution. In the Interlocked class, we looked at a set of static methods for performing atomic operations. In the Monitor class, we looked at a mechanism for controlling multiple threads within a process. In future articles, we will describe several other basic constructs and go on to implement a thread pool. Then, we’ll explore some more advanced constructs, such as thread-local storage and overlapped I/O.