Threading


Thread as an operating system object

A thread is an operating system kernel object. A thread is a path of code execution: application code is executed through and by a thread. The thread is the code executor.

When a process starts running, a single thread is created along with it, called the "primary thread". A process owns its threads and may create many of them.

Code and data belong to the process, while code execution is performed by threads. Each thread is associated with a separate call stack.

When a CLR application starts, a default application domain is created inside an operating system process and is associated with the primary thread. A CLR application may, of course, create multiple threads.


Concurrent execution and multi-tasking

In a single-processor system only one thread runs at any instant. In a multi-processor system only one thread per processor runs at any instant.

Multiple threads share the same CPU. Each thread in turn enters the CPU, executes for a short period of time and then exits, leaving the CPU to the next thread. This goes on and on until system shutdown. Because each succession happens within a very short period of time, it creates the illusion of concurrent execution.

In effect the processor switches between different threads, which gives the impression that applications execute concurrently. This effect is known as multi-tasking.

In a single-processor single-core machine, there is no possibility for true concurrent execution.
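The degree of real parallelism available can be checked at run time; a minimal sketch (the output obviously depends on the machine):

```csharp
using System;

class ProcessorInfo
{
    static void Main()
    {
        // Number of logical processors visible to this process.
        // On a single-processor single-core machine this is 1, and
        // any "concurrency" is an illusion created by time slicing.
        Console.WriteLine("Logical processors: " + Environment.ProcessorCount);
    }
}
```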


Thread context switch and preemption

CPU time is the ultimate resource for a thread; threads starve for processor time. Each thread is scheduled fractions of processor time, called time slices, by an operating system component known as the scheduler, according to a thread attribute called priority.

The CPU state of a thread is stored when the thread exits the CPU and restored when it enters it again. That is, the thread context is preserved between subsequent executions. This procedure is known as a thread context switch.

The thread context includes the thread's set of CPU registers and the thread's call stack, in the address space of the owning process.

Preemption is the interruption of a thread's execution caused by a thread context switch.


Thread safety and thread synchronization: the necessity

A thread enters the processor, executes for a scheduled period of time and then exits the processor and starts waiting for its next turn. Threads are actually executed one after the other.

Multiple threads can make calls to the fields, properties and methods of a certain object. Those calls must be coordinated; otherwise the next thread might break what the previous thread was doing, leaving that object, and its data, in an inconsistent state.

Imagine, for example, a thread that iterates the elements of a list. Its scheduled time elapses before the iteration completes and the thread exits the CPU. Another thread enters the CPU, starts executing and deletes some elements of that very same list. The list is now in an inconsistent state, and disaster strikes when the first thread, the one iterating the list, resumes execution.

Thread safety is the protection of shared resources against "concurrent" thread access. A protected resource is said to be thread-safe. A thread-safe class is a class that protects its members by preventing concurrent thread access.

Protected resources are accessed by multiple threads in a coordinated manner. That coordination is called thread synchronization. In a single-threaded application all data is accessed solely by the primary thread and no synchronization is required. In most multi-threaded applications some kind of thread synchronization is required.


User interface elements are not thread safe

Consider user interface elements, such as controls placed on a Form. Those elements are under the control of the primary thread (actually, under the control of the thread that created them). Any attempt to alter such a user interface element directly from a thread other than the one that created it throws an exception.

When a secondary thread tries to alter a user interface element, such as a TextBox, an InvalidOperationException is thrown. This happens because user interface elements are not thread-safe: accessing an element from a thread other than its creator thread causes unpredictable results, which is why the exception is thrown.
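The standard way around this restriction in Windows.Forms is to marshal the call back to the creator thread through Control.Invoke(), guarded by Control.InvokeRequired. A minimal sketch, assuming a Form containing a textBox1 control:

```csharp
// Called from any thread; safe either way.
void AppendLine(string text)
{
    if (textBox1.InvokeRequired)
    {
        // We are on a secondary thread: re-issue the call
        // on the thread that created textBox1.
        textBox1.Invoke(new Action<string>(AppendLine), text);
    }
    else
    {
        // We are on the creator (UI) thread: direct access is legal.
        textBox1.Text += text + Environment.NewLine;
    }
}
```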

Synchronization objects

Synchronization and thread safety are implemented by using synchronization objects. A thread uses a synchronization object in order to gain access to a shared resource. A synchronization object protects shared resources and forces threads to access a resource in a coordinated manner.

There are BCL classes, such as the Monitor, the Mutex or the EventWaitHandle class, that represent specialized synchronization objects.

Besides the synchronization classes there are also syntactic constructs which make synchronization possible, implemented internally by using a synchronization class.

Signaled and non-signaled

Synchronization objects have an internal state which can be either signaled or non-signaled.

From the perspective of a calling thread, a signaled object permits access to the shared resource or resources while a non-signaled object prevents access to the shared resource or resources.

In general, a signaled object is to a thread something similar to a green traffic light.

Blocking and waiting

A non-signaled synchronization object refuses access to the resource; threads calling on that object block and start waiting. A signaled synchronization object allows access to the resource.

Threads gain access to a shared resource by issuing a call through a synchronization object. A protected resource may already be locked by another thread using an exclusive lock.

If access is not possible at the time of the call, the calling thread blocks and remains in a waiting state indefinitely or until a specified timeout elapses.

If access is granted, the thread acquires a lock on the resource and proceeds by executing the code that accesses the protected resource.

In essence a synchronization object serializes thread access to a shared resource in order to protect it from concurrent thread access.

Deadlocks

A deadlock is a situation where two or more threads are each waiting for another to release a resource, such as an exclusive lock on an object.

Imagine the situation where thread A waits for thread B and thread B waits for thread A.

Multiple threads sharing a common resource should not be mutually dependent. Multi-threaded applications must be carefully designed in order to avoid deadlocks.
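The classic two-lock deadlock, together with the usual remedy of acquiring locks in one agreed global order, can be sketched as follows (lockA and lockB are illustrative names):

```csharp
object lockA = new object();
object lockB = new object();

// DEADLOCK PRONE: thread 1 runs Transfer1 while thread 2 runs Transfer2.
// Each thread holds one lock and waits forever for the other.
void Transfer1() { lock (lockA) { lock (lockB) { /* ... */ } } }
void Transfer2() { lock (lockB) { lock (lockA) { /* ... */ } } }

// REMEDY: every code path acquires the locks in the same order, A then B.
// With a single global lock order, a circular wait can never form.
void SafeTransfer() { lock (lockA) { lock (lockB) { /* ... */ } } }
```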


Thread affinity

Thread affinity is a term used to denote the association of a thread to something. For example, in a multi-processor system thread affinity means the association of a thread to a certain processor: the processor the thread should run on.

Regarding synchronization objects, when a synchronization class requires thread affinity it means that a thread must have ownership of a specific object in order to set that object back to the signaled state; otherwise the call that tries to signal the object is invalid and results in an exception.

In simple words, a synchronization object can be released only by the thread that owns it.
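The Mutex class is one synchronization class with thread affinity: a mutex must be released by the thread that acquired it. A sketch (in the classic .NET Framework, releasing a mutex the calling thread does not own throws an ApplicationException):

```csharp
Mutex mutex = new Mutex();  // an unnamed, initially unowned mutex

void Worker()
{
    mutex.WaitOne();            // the calling thread now owns the mutex
    try
    {
        // access the protected resource here
    }
    finally
    {
        mutex.ReleaseMutex();   // legal: released by the owning thread
    }
}

// Calling mutex.ReleaseMutex() from a thread that never called WaitOne()
// is invalid and throws an exception.
```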


Atomic operation and atomic statement

In computing, an atomic operation is an operation, possibly composed of other operations, that appears to the rest of the system as a single operation; a database transaction, for example.

A statement or expression in a high-level language, such as C#, may take one or more CPU instructions to complete. A statement or expression is atomic if it takes just a single CPU instruction to complete.

On a 32-bit CPU, the assignment of a field 32 bits long or less is atomic. Other 32-bit operations may not be atomic, though, and operations on fields longer than 32 bits are certainly non-atomic.
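For such non-atomic fields the System.Threading.Interlocked class offers atomic alternatives. A sketch with an illustrative 64-bit field:

```csharp
// ASSUMPTION: illustrative field and method names, not from the text above.
long ticks = 0;   // 64 bits: plain reads and writes are not atomic on a 32-bit CPU

void Writer()
{
    Interlocked.Exchange(ref ticks, DateTime.Now.Ticks); // atomic 64-bit write
}

long Reader()
{
    return Interlocked.Read(ref ticks);                  // atomic 64-bit read
}
```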


Race condition

A race condition is a flaw in application logic where the output of the application differs unpredictably between runs, depending on which of two or more threads enters a particular code block first. That is, the application result depends on the outcome of a race between two or more parts.

Imagine for example the increment of an integer variable. That increment operation requires three machine instructions in order to complete.

    1. load the value of the variable into a CPU register
    2. increment the value
    3. store the value back to the variable.

The thread performing the operation might be preempted at the second step by another thread which executes the whole operation to completion. When the first thread resumes execution it has no idea about the change that happened in between.

In a multi-threaded application, an unprotected non-atomic statement may suffer a race condition. Multi-threaded applications must be carefully designed in order to avoid race conditions.
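The three-step increment above can be protected either with a lock or, more cheaply, with Interlocked.Increment(), which performs the load-increment-store sequence atomically:

```csharp
int hits = 0;

void UnsafeIncrement()
{
    hits = hits + 1;                 // load, increment, store: preemptable in between
}

void SafeIncrement()
{
    Interlocked.Increment(ref hits); // the whole operation happens atomically
}
```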


Reentrant code

Code is considered reentrant when it can safely be called again, possibly by multiple threads, even while a previous call to that same code is still executing. In a sense reentrant code is thread-safe code, although thread safety has a broader meaning.

A reentrant method or property should function properly when re-entered, that is, called again while a previous execution of the same member is still in progress.


Thread apartments

An apartment is an execution context which acts as a container for threads and objects sharing the same thread access rules. The term apartment comes from the COM (Component Object Model) universe.

A Single-Threaded Apartment (STA) is an apartment which contains exactly one thread. A Multi-Threaded Apartment (MTA) is an apartment which contains one or more threads. Any type of apartment may contain multiple objects.

In an STA model, all code of all contained objects is executed by a single thread. Calls outside the apartment are automatically synchronized by COM.

In an MTA model, calls are not automatically synchronized. Each contained object should manually synchronize its members, if it is to provide thread safe code.

Objects in the same apartment execute calls issued by any thread in the apartment without any special COM intervention. Cross apartment calls require parameter marshalling.

Marshalling is the transformation of data to a format suitable for transmission to another context. That context could be another process local or remote.

CLR does not use apartments. CLR managed objects that are going to be used by multiple threads should provide synchronized access to their members in a thread-safe manner.

Although CLR does not use apartments, Thread class provides the GetApartmentState() and SetApartmentState() methods and the, now obsolete, ApartmentState property. Those calls get/set a value of type ApartmentState, defined as

    public enum ApartmentState
    {
        STA = 0,
        MTA = 1,
        Unknown = 2,
    }  
    

CLR managed threads are by default MTA threads.

The apartment of the user interface primary thread is controlled by an attribute marking the Main() method. CLR provides the STAThreadAttribute and MTAThreadAttribute attribute classes for that reason.

The primary user interface thread of a CLR application performs many calls into the Win32 API, which is designed to work in an STA model; hence the [STAThread] attribute above Main().
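A typical Windows.Forms entry point therefore looks like the following (MainForm is an assumed Form-derived class):

```csharp
static class Program
{
    [STAThread]   // the primary UI thread joins a single-threaded apartment
    static void Main()
    {
        Application.Run(new MainForm()); // MainForm is an assumed Form class
    }
}
```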

All this apartment support is provided by the CLR to help applications when they interoperate with unmanaged code, and COM objects specifically.


Process class and Thread class

The System.Diagnostics.Process class represents an operating system process, local or remote. The Process class makes it possible to start and stop local system processes, control and monitor applications.

The Process.Threads property returns a ProcessThreadCollection containing System.Diagnostics.ProcessThread objects representing the operating system threads owned by the process.
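A sketch that lists the operating system threads of the current process (note that ProcessThread is a System.Diagnostics type, distinct from System.Threading.Thread):

```csharp
Process process = Process.GetCurrentProcess();

foreach (ProcessThread pt in process.Threads)
{
    // ProcessThread exposes OS-level information only;
    // it cannot be used to control a managed thread.
    Console.WriteLine("Id: " + pt.Id + "  State: " + pt.ThreadState);
}
```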

The System.Threading.Thread class represents a thread. Thread class can create, tune and control a thread.

Creating and using managed threads

A secondary thread object is created by using a constructor of the Thread class. A thread is actually a code executor, so it needs to know which method to call first. That thread starting method is passed, as a delegate value, to the Thread constructor. The thread starts execution right after a call to its Start() method.

    void ThreadProc()
    {
        // code here
    }

...

    Thread t = new Thread(ThreadProc);
    t.Start();


CAUTION: Attempting to terminate an application while any of its foreground threads is still running hangs the application.
            

A call to Thread.Start() does not mean that the thread will start executing immediately. It actually starts when the current thread, the one that called Thread.Start(), yields the remainder of its execution time or is preempted by the operating system.

Any code called from inside a thread's starting method is executed in the context of that thread, regardless of where that code belongs.

Remember, code and data belong to a process; threads are just code executors. One exception to this rule, regarding data only, concerns local variables. Each thread has its own separate call stack and local variables reside in that stack. Value-type local variables and value-type method parameters, not passed by reference, cannot be accessed by another thread. They remain isolated in the thread's private call stack.

Passing data back to the primary UI thread in a synchronized manner

The System.Windows.Forms.WindowsFormsSynchronizationContext provides a synchronization context for the Windows.Forms user interface elements. WindowsFormsSynchronizationContext inherits from System.Threading.SynchronizationContext.

SynchronizationContext descendants make it possible to perform a cross-thread call by using either the SynchronizationContext.Post() or the SynchronizationContext.Send() method. Those methods accept two parameters: a delegate value of type SendOrPostCallback and an object.

    public virtual void Send(SendOrPostCallback d, object state);
    public virtual void Post(SendOrPostCallback d, object state);

The SendOrPostCallback delegate type is defined as

    public delegate void SendOrPostCallback(object state);
    

The WindowsFormsSynchronizationContext.Post() or WindowsFormsSynchronizationContext.Send() method calls the passed-in delegate, passing it the second parameter, the state object. The delegate is then executed not in the calling thread's context but in the primary thread's context, the thread that controls the user interface. Such a call is a synchronized call; it is perfectly valid and no exception is thrown.

Thus the Send() and Post() methods make it possible to pass data from a secondary thread back to the primary UI thread.

    void ThreadProc()
    {
        /* calling directly from one thread to another causes an InvalidOperationException */
        //SynchronizedMethod("Thread started at: " + DateTime.Now.ToString());  // InvalidOperationException here

        /* synchronized calls from one thread to another */
        synContext.Send(SynchronizedMethod, "Thread started at: " + DateTime.Now.ToString());
        Thread.Sleep(3000);
        synContext.Send(SynchronizedMethod, "Thread terminated at: " + DateTime.Now.ToString());
    }

    void SynchronizedMethod(object state)
    {
        textBox1.Text += state.ToString() + Environment.NewLine;
    }


Thread properties

The Name property

A Thread object can have a name. Naming a thread object eases debugging. The Thread.Name property can be assigned at any time, but only once; a second attempt causes an InvalidOperationException.

The ManagedThreadId property

A read-only integer property which returns a unique ID representing the thread object.

The Priority property

The Priority property is of type System.Threading.ThreadPriority.

    public enum ThreadPriority
    {
        Lowest = 0,
        BelowNormal = 1,
        Normal = 2,
        AboveNormal = 3,
        Highest = 4,
    }
    


Even though threads are executing within the CLR, they are assigned processor time slices by the operating system scheduler component. Processor time scheduling is based on thread priority.

Priority property controls the execution priority of the thread. Set this property before calling the Start() method.

Normal is the best choice in most situations. Avoid the AboveNormal and Highest priorities.
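Setting the priority, before starting the thread, can be sketched as (ThreadProc as in the earlier examples):

```csharp
Thread t = new Thread(ThreadProc);       // ThreadProc as defined earlier
t.Priority = ThreadPriority.BelowNormal; // set before Start()
t.Start();
```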

The actual scheduled priority of a thread is also affected by the setting of the System.Diagnostics.Process.PriorityClass property, of type ProcessPriorityClass:

    public enum ProcessPriorityClass
    {
        Normal = 32,
        Idle = 64,
        High = 128,
        RealTime = 256,
        BelowNormal = 16384,
        AboveNormal = 32768,
    }

For example

    Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
    

CAUTION: Use priority settings with extreme care.

The IsAlive property

A read-only boolean property. True means the thread has been started and has not yet terminated, either normally or by abort.

The ThreadState property

The ThreadState property is of type System.Threading.ThreadState.

    [Flags]
    public enum ThreadState
    {
        Running = 0,
        StopRequested = 1,
        SuspendRequested = 2,
        Background = 4,
        Unstarted = 8,
        Stopped = 16,
        WaitSleepJoin = 32,
        Suspended = 64,
        AbortRequested = 128,
        Aborted = 256,
    }

ThreadState specifies the execution state of a thread object. It is a read-only property. ThreadState is a bit field (a set), which means its value is a combination of those flags. Most of the flags are self-descriptive; the rest are clarified next.

The ThreadState property value comes as a consequence of method calls:

    caller            method              ThreadState
    --------------    ----------------    -----------------------------
    another thread    Thread.Start()      Running
    the thread        Thread.Sleep()      WaitSleepJoin
    the thread        Monitor.Wait()      WaitSleepJoin
    the thread        Thread.Join()       WaitSleepJoin
    any thread        Thread.Suspend()    SuspendRequested -> Suspended
    another thread    Thread.Resume()     Running
    another thread    Thread.Abort()      AbortRequested -> Aborted

The IsBackground property

A read-write boolean property.

Threads may be either foreground or background. Foreground and background threads are scheduled identically; the difference is that a background thread does not keep the process alive.

An application remains alive as long as at least one foreground thread is still alive. When all foreground threads have terminated, the CLR terminates the process too, stopping any running background threads. Those stopped background threads do not complete their execution.
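Marking a thread as background before starting it, so that it cannot keep the process alive (ThreadProc as in the earlier examples):

```csharp
Thread worker = new Thread(ThreadProc); // ThreadProc as defined earlier
worker.IsBackground = true;  // set before Start(); the process may now
worker.Start();              // exit without waiting for this thread
```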

Thread parameterized initialization

The Thread class provides four constructors. Avoid the two constructors expecting a maxStackSize argument.

The other two constructors expect a single delegate.

    public Thread(ThreadStart start);
    public Thread(ParameterizedThreadStart start);
    

Here are the delegates

    public delegate void ThreadStart();    
    public delegate void ParameterizedThreadStart(object obj);
 

Those delegates represent the thread's starting procedure (method). This means that it is possible to pass an initialization object to the thread, by using a method with an object parameter as the thread's starting method.

For this to work, a call to the second overload of Thread.Start() is required, passing it the initialization object, which in turn is passed to the thread's starting method.

    void ThreadProc(object info)
    {
        // code here
    }

...

    t = new Thread(ThreadProc);
    t.Start(DateTime.Now);
    

Threads and Exceptions

Unhandled thread exceptions (since C# 2.0) terminate the application, so a thread starting procedure (method) should always be guarded against exceptions.

    void ThreadProc()
    {
        try
        {
            throw new ApplicationException("this is an exception thrown inside a thread");
        }
        catch (Exception ex)
        {
            synContext.Send(SynchronizedMethod, DateTime.Now + ", ERROR: " + ex.Message); 
        }            
    }
    

CAUTION: Unhandled exceptions thrown by a secondary thread are not caught by any general exception handler installed using the Application.ThreadException event.

Synchronization: Locking using the Monitor class and the keyword lock

The most common form of synchronization is the protection of a source code region by placing a lock. The code region is protected using a lock object; anything that inherits from System.Object can serve as the lock object.

A thread issues a call in order to acquire the lock object by which the code region is protected. If the lock is available, the calling thread gains ownership of the lock object immediately and enters the code region. Otherwise the thread blocks, waiting to acquire the lock. When the entered thread is done with that code region it releases the lock object, making it available to other waiting threads.

In essence a protected code region is locked. Such a region is known as a critical section.

The static System.Threading.Monitor class provides the Enter() and Exit() methods for marking a source code region as a critical section. Both those methods accept a single parameter of type object which is the lock object. The Monitor.Enter() method acquires the lock object while the Monitor.Exit() method releases the lock object.

CAUTION: Always use a private instance field of type object as the lock object when protecting instance methods, and a private static field of type object when protecting static methods. Failing to do so may cause deadlocks. Also note that the Monitor class can be used effectively across application domains if the lock object derives from MarshalByRefObject.

    double SumList()
    {
        Monitor.Enter(syncLock);    // syncLock is a private field of type object
        try
        {
            double total = 0;

            for (int i = 0; i < list.Count; i++)
                total += (double)list[i];

            return total;
        }
        finally
        {
            Monitor.Exit(syncLock);
        }            
    }

The Monitor class provides a variation of the Enter() method, the overloaded TryEnter() method.

    public static bool TryEnter(object obj);
    public static bool TryEnter(object obj, int millisecondsTimeout);
    public static bool TryEnter(object obj, TimeSpan timeout);
    

TryEnter() tries to acquire the lock, returning either immediately or after a specified timeout. It returns true on success.

    double SumList()
    {
        if (Monitor.TryEnter(syncLock, 1000))   // try waiting for a second
        {
            try
            {
                double total = 0;

                for (int i = 0; i < list.Count; i++)
                    total += (double)list[i];

                return total;
            }
            finally
            {
                Monitor.Exit(syncLock);
            }
        }

        return 0;   // the lock could not be acquired within the timeout
    }
    

A call to Monitor.Exit() without a previous call to Monitor.Enter() or TryEnter() on the same object throws a SynchronizationLockException.

The lock keyword introduces a lock statement and has the same effect as the Monitor.Enter()/Exit() pair.

    double SumList()
    {
        lock (syncLock)
        {
            double total = 0;

            for (int i = 0; i < list.Count; i++)
                total += (double)list[i];

            return total;                
        }
    }
    

Actually, the lock statement is implemented by the compiler using the Monitor class. In both cases, Monitor class or lock statement, the unlocking of the code block is guaranteed by the CLR even in case of an exception inside the block.

Locks may be nested on the same object. In that case the lock is active until the exit of the outermost locked block.

    lock (syncLock)
    {
        lock (syncLock)
        {
                 
        }     
    } // the lock is released here


Another variation of a nested lock using the Monitor class.

    void SomeMethod()
    {
        Monitor.Enter(syncLock);    
        try
        {
            // code here
        }
        finally
        {
            Monitor.Exit(syncLock);
        } // the lock is still active here
    }
    

...

        Monitor.Enter(syncLock);    
        try
        {
            SomeMethod();
        }
        finally
        {
            Monitor.Exit(syncLock);
        } // the lock is released here


The Monitor class has thread affinity (see the Thread affinity section above).

As with Win32 API critical sections, blocking through the Monitor class or the lock statement is process-wide, not cross-process; threads of other processes are not affected by those blocking mechanisms.

MethodImplAttribute class

The System.Runtime.CompilerServices.MethodImplAttribute attribute has the same effect as the lock statement and the Monitor.Enter()/Exit() pair when it is used to mark methods with the MethodImplOptions.Synchronized flag.

    [MethodImpl(MethodImplOptions.Synchronized)]
    double SumList()
    {
        double total = 0;

        for (int i = 0; i < list.Count; i++)
            total += (double)list[i];

        return total;                
    }

    [MethodImpl(MethodImplOptions.Synchronized)]
    void AlterList()
    {
        for (int i = 0; i < list.Count; i++)
            list[i] = ((double)list[i]) / 2.0;                
    }


A thread-safe class

A thread-safe class is a class which protects all of its members from concurrent access.

    public class Coords
    {
        private object syncLock = new object();

        private int x = 0;
        private int y = 0;

        public int X 
        {
            get { lock (syncLock) { return x; } }
            set { lock (syncLock) { x = value; } }
        }

        public int Y
        {
            get { lock (syncLock) { return y; } }
            set { lock (syncLock) { y = value; } }
        }
    }

Synchronization is all or nothing: if even a single thread is able to bypass the protection and access the resource directly, synchronization is meaningless.

ContextBoundObject class and synchronized contexts

A context is a set of rules defining an environment where an object or a collection of objects resides. Objects subject to a context's rules are called context-bound objects; objects not bound to a context are called agile objects.

Context-bound classes are marked with attributes which denote the context usage rules. Those attributes may regard method interception, synchronization or transaction settings.

The abstract System.ContextBoundObject class is the base class for all context bound classes.

A class which inherits from ContextBoundObject and is marked with the System.Runtime.Remoting.Contexts.SynchronizationAttribute attribute is a thread-safe class. Every instance member of such a class is synchronized, that is, protected from concurrent thread access. Static members are not synchronized, though.

    [Synchronization()]
    public class SyncCoords: ContextBoundObject
    {

        private int x = 0;
        private int y = 0;

        public int X
        {
            get { return x; }
            set { x = value; }
        }

        public int Y
        {
            get { return y; }
            set { y = value; }
        }
    }
 

Avoid using SynchronizationAttribute as

    [Synchronization(true)]

since it requires any call from outside the context to be serialized because of reentrancy complications. [In that mode, when a thread context switch happens, the context's lock is released and re-obtained later upon method re-entry. In between, any thread may call any method without protection.]

Blocking and interrupting

A thread is considered blocked when the ThreadState.WaitSleepJoin flag is contained in its ThreadState property.

The methods Thread.Sleep(), Thread.Join(), Monitor.Enter(), Monitor.TryEnter(), Monitor.Wait(), or any waiting call of a WaitHandle-derived synchronization object, block the thread and include the ThreadState.WaitSleepJoin flag in the ThreadState property.

The Thread.Interrupt() method un-blocks a thread which is in the ThreadState.WaitSleepJoin state and makes it continue its execution. The Interrupt() method causes an exception to be thrown in the interrupted thread, though, so a thread that may be interrupted should be prepared to catch and handle that exception.

If Thread.Interrupt() is called on a thread that is not blocked, no exception occurs at that point and the thread continues its execution normally. The ThreadInterruptedException is thrown later, the next time the thread blocks, that is, the next time it enters the ThreadState.WaitSleepJoin state.
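A thread that may be interrupted should therefore wrap its blocking calls in a handler for ThreadInterruptedException. A sketch:

```csharp
void ThreadProc()
{
    try
    {
        Thread.Sleep(Timeout.Infinite);  // blocks: WaitSleepJoin state
    }
    catch (ThreadInterruptedException)
    {
        // another thread called Interrupt() on us; clean up or resume here
    }
}

// elsewhere, from another thread:
//     t.Interrupt();   // un-blocks the sleeping thread via the exception
```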

Avoid code like the following

    if ((myThread.ThreadState & ThreadState.WaitSleepJoin) > 0)
      myThread.Interrupt();

since it is not atomic and therefore not thread-safe.

The static Thread.Sleep() works like the Win32 API Sleep() function. It puts the calling thread to sleep by suspending its execution; the time the thread is asleep is scheduled to other threads.

    Thread.Sleep(0);                   // relinquish the remainder of the time slice to any other thread
    Thread.Sleep(3000);                // remain idle for 3 seconds
    Thread.Sleep(Timeout.Infinite);    // remain idle forever, until an Interrupt() call
    

NOTE: The Sleep() method suspends Windows message processing. That means that if the primary thread calls Sleep(), no refreshing of the user interface takes place and the application stops responding.

The static Thread.SpinWait(int iterations) causes the calling thread to loop for the specified number of iterations. This is roughly similar to

    for (int i = 0; i < iterations; i++)
    {
    }

The SpinWait() does not block the thread. It just forces it to iterate. Classes such as Monitor and ReaderWriterLock use the SpinWait() internally.

Using Monitor.Wait() and Monitor.Pulse()

CLR maintains two queues for an object used in locking: a waiting queue and a ready queue.

Threads ready to acquire a lock are placed in the ready queue.

The Monitor.Wait() method can be used only by threads owning a lock. When a thread calls Monitor.Wait(), it releases the lock it owns and it is placed in the waiting queue.

The Monitor.Pulse() and Monitor.PulseAll() can be used only by threads owning a lock. When a thread calls Pulse() the first thread in the waiting queue, if any, is moved to the ready queue. PulseAll() moves all waiting threads to the ready queue. When the thread that called Pulse() or PulseAll() releases the lock, possibly by exiting the protected code block, the first thread in the ready queue re-acquires the lock and starts executing.

From a thread's perspective there is no way to know whether Monitor.Pulse() has been called. And if Pulse() is called when there are no waiting threads at all, a subsequent call to Monitor.Wait() by a thread may lead to a deadlock.

    /* the secondary thread's starting method.
       It is called by the primary thread too.
       WARNING: for this to work, it must be entered by the secondary thread first */
    void ThreadProc()
    {
        Send("Enter");

        lock (syncLock)
        {
            if (!IsPrimaryThread())
            {
                Send("waiting....");

                Monitor.Wait(syncLock); // this releases the lock, letting the other threads acquire it

                Send("waked up");
            }
            else
            {
                Send("waking up the second thread");
                Monitor.Pulse(syncLock);  // wake up any waiting thread
            }                
        }

        Send("Exit");
    }
    
    

Here is a more usable example of using Monitor.Wait() and Monitor.Pulse(). The secondary thread processes a list, sends a notification to the primary thread and, if a flag is set, terminates; otherwise it goes back to waiting.

The primary thread, each time a button is clicked, wakes up the secondary thread and puts it to work.

    void ThreadProc()
    {    
        lock (syncLock)
        {
            Monitor.Wait(syncLock);

            while (true)
            {
                ArrayList list = new ArrayList();

                for (int i = 0; i < 5; i++)
                    list.Add(random.Next());

                synContext.Send(SynchronizedMethod, list);

                if (!exitFlag)
                    Monitor.Wait(syncLock); // go to sleep, this releases the lock, letting the other threads acquire it
                else
                {
                    t = null;
                    break;
                }                        
            }
        }
    } 

    private void button1_Click(object sender, EventArgs e)
    {
        if (t == null)
        {
            t = new Thread(ThreadProc);
            t.Start();

            Thread.Sleep(1000);     // crude way to make sure the secondary thread is waiting before the Pulse() call
        }            

        lock (syncLock)
        {
            Monitor.Pulse(syncLock); // wake up the secondary thread
        }
    }
 

see also:

Synchronization: Waitable classes (also known as Wait Handles)

The Win32 API provides a set of synchronization objects, each having a name and a handle. The handle of a named object, which uniquely identifies the object to the operating system, can be obtained and used by one or more processes when calling the so-called "wait" functions of the Win32 API, in order to synchronize multiple threads running in one or more processes. That is, Win32 waitable synchronization objects provide inter-process thread synchronization.

Those Win32 API objects are: the Mutex, the Semaphore, the Event and the Timer objects.

The CLR provides synchronization classes that represent that synchronization functionality. The base class of all those waitable classes is the abstract System.Threading.WaitHandle class.

    public abstract class WaitHandle : MarshalByRefObject, IDisposable
    {
        // Fields
        public const int WaitTimeout = 258;

        // Methods
        public virtual bool WaitOne();
        public virtual bool WaitOne(int millisecondsTimeout, bool exitContext);
        public virtual bool WaitOne(TimeSpan timeout, bool exitContext);
        
        public static int WaitAny(WaitHandle[] waitHandles);
        public static int WaitAny(WaitHandle[] waitHandles, int millisecondsTimeout, bool exitContext);
        public static int WaitAny(WaitHandle[] waitHandles, TimeSpan timeout, bool exitContext);
        
        public static bool WaitAll(WaitHandle[] waitHandles);
        public static bool WaitAll(WaitHandle[] waitHandles, int millisecondsTimeout, bool exitContext);
        public static bool WaitAll(WaitHandle[] waitHandles, TimeSpan timeout, bool exitContext);

        public static bool SignalAndWait(WaitHandle toSignal, WaitHandle toWaitOn);
        public static bool SignalAndWait(WaitHandle toSignal, WaitHandle toWaitOn, int millisecondsTimeout, bool exitContext);
        public static bool SignalAndWait(WaitHandle toSignal, WaitHandle toWaitOn, TimeSpan timeout, bool exitContext);

        public virtual void Close();

        // Properties
        public virtual IntPtr Handle { get; set; }
        public SafeWaitHandle SafeWaitHandle { get; set; }
    }

A thread that calls any of the wait methods of a WaitHandle object blocks if the object is in the non-signaled state and waits until the object becomes signaled. A successful wait call obtains ownership of the WaitHandle object. That ownership must be released. The WaitHandle subclass used, and the wait method called, determine how that release is done.

WaitHandle is an IDisposable object. A call to WaitHandle.Close() is required in order to free the operating system resources behind the wait handle when you are done using the object. WaitHandle.Close() actually calls IDisposable.Dispose(). A call to WaitHandle.Close() is not required, though, if the WaitHandle object is going to be alive for the lifetime of the application. When an application terminates, all handles are closed automatically.
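Since WaitHandle implements IDisposable, a using statement is one way to guarantee the handle is closed; a small sketch with a local, unnamed Mutex:

```csharp
using System;
using System.Threading;

class WaitHandleDisposal
{
    static void Main()
    {
        // the using statement calls Dispose(), closing the operating
        // system handle when the block is exited, even on an exception
        using (Mutex m = new Mutex(false))
        {
            if (m.WaitOne())
            {
                try
                {
                    Console.WriteLine("owned");
                }
                finally
                {
                    m.ReleaseMutex();
                }
            }
        }
    }
}
```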

The hierarchy of the provided waitable classes is as follows

    WaitHandle
        Mutex
        Semaphore
        EventWaitHandle
            AutoResetEvent
            ManualResetEvent 
            

Mutex class

Mutex stands for mutual exclusion. Only one thread at a time can own a mutex. Only the owning thread is allowed to access the protected resource.

Mutex state is set to signaled when it is not owned by any thread, and non-signaled when it is owned.

A Mutex object can be named or unnamed. A named Mutex object represents an operating system named mutex object. An unnamed Mutex object is a local mutex. A named mutex is created by using one of the constructors of the Mutex class that accept a mutex name. The passed-in name may be the name of an already existing mutex, thus it is possible to create multiple Mutex objects for the same named mutex object. The static Mutex.OpenExisting() method opens an existing named mutex. Named mutexes are inter-process objects, accessible by any thread in any process in the system.

When creating a Mutex object it is possible to ask for initial ownership of the mutex by passing a boolean value to one of the constructors of the Mutex class.

        public Mutex();
        public Mutex(bool initiallyOwned);
        public Mutex(bool initiallyOwned, string name);
        public Mutex(bool initiallyOwned, string name, out bool createdNew);
        public Mutex(bool initiallyOwned, string name, out bool createdNew, MutexSecurity mutexSecurity);

The overloaded Mutex.WaitOne() instance method is used by a thread to ask for mutex ownership. The calling thread blocks, waiting until the mutex becomes available or until an optionally specified timeout interval elapses. It returns true if the mutex is acquired.

        public virtual bool WaitOne();
        public virtual bool WaitOne(int millisecondsTimeout, bool exitContext);
        public virtual bool WaitOne(TimeSpan timeout, bool exitContext);

The Mutex.ReleaseMutex() instance method releases the mutex once. A thread that already owns a mutex may call WaitOne() multiple times on that mutex without blocking its execution, but every successful call to WaitOne() requires a corresponding ReleaseMutex() call.
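A short sketch of this recursive acquisition on a local, unnamed mutex:

```csharp
using System;
using System.Threading;

class RecursiveMutex
{
    static void Main()
    {
        Mutex m = new Mutex(false);

        m.WaitOne();        // first acquisition: this thread now owns the mutex
        m.WaitOne();        // same thread: does not block, recursion count grows

        m.ReleaseMutex();   // one release per successful WaitOne()
        m.ReleaseMutex();   // only now is the mutex free for other threads

        Console.WriteLine("fully released");
        m.Close();
    }
}
```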

If a thread that owns a mutex terminates without releasing it, the mutex is considered abandoned. Since .Net 2.0, when a thread tries to take ownership of an abandoned mutex, an AbandonedMutexException is thrown.

The static Mutex.OpenExisting(string name) opens an existing system-wide named mutex object. If the named mutex does not exist in the system an exception is thrown.
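A sketch of the failure case (the name "MyAppMutex" is hypothetical): when no mutex with the given name exists in the system, OpenExisting() throws a WaitHandleCannotBeOpenedException.

```csharp
using System;
using System.Threading;

class OpenExistingSketch
{
    static void Main()
    {
        try
        {
            // "MyAppMutex" is a hypothetical name; a mutex with that name
            // must have been created elsewhere for this call to succeed
            Mutex m = Mutex.OpenExisting("MyAppMutex");
            m.Close();
        }
        catch (WaitHandleCannotBeOpenedException)
        {
            Console.WriteLine("no such mutex");
        }
    }
}
```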

Mutex class has thread affinity.

This example simulates a downloader application. A Downloader object is passed a list of Command objects, which represent download information, and a delegate value to be used in notifying the caller upon completion of the whole operation. The Downloader object creates a coordinator thread and starts it. The coordinator thread then creates three worker threads, each one associated with a Mutex completion handle, and then issues a WaitHandle.WaitAll(completionHandles) call. completionHandles is an array of Mutex objects. Another Mutex object is used to protect access to the command list. Each worker thread, when it starts executing, acquires its associated completion Mutex, setting it to non-signaled, and releases it upon completion. When all completion handles are released, and thus signaled, the coordinator thread unblocks and notifies the caller of the completion of the download operation.

    public class Downloader : Disposable
    {
        static private int counter = 0;

        private int id = counter++;
        private int totalDataSize = 0;
        private int totalCommands = 0;
        private ArrayList commandList;
        private CompletionDelegate completionCallBack;

        private Mutex[] completionHandles = new Mutex[]
        {
            new Mutex(false),
            new Mutex(false),
            new Mutex(false)
        };

        private Mutex listProtector = new Mutex(false);

        private bool GetNextCommand(ref Command cmd)
        {
            cmd = null;

            if (listProtector.WaitOne())
            {
                try
                {                    
                    if (commandList.Count > 0)
                    {
                        cmd = (Command)commandList[0];
                        commandList.Remove(cmd);
                    }                    
                }
                finally
                {
                    listProtector.ReleaseMutex();
                }
            }

            return cmd != null;
        }
        
        private void CoordinatorThreadProc()
        {
            Thread worker;

            foreach (object completionHandle in completionHandles)
            {
                worker = new Thread(WorkerThreadProc);
                worker.Start(completionHandle);
            }

            Thread.Sleep(500);  // crude way to let the workers acquire their completion handles first

            WaitHandle.WaitAll(completionHandles);

            completionCallBack(this);
        }

        private void WorkerThreadProc(object info)
        {
            Mutex completionHandle = (Mutex)info;
            completionHandle.WaitOne();
            try
            {
                Command cmd = null;
                while (GetNextCommand(ref cmd))
                    cmd.Execute();
            }
            finally
            {
                completionHandle.ReleaseMutex();
            }
        }

        protected override void DisposeUnmanagedResources()
        {
            if (!IsDisposed)
            {
                foreach (WaitHandle wh in completionHandles)
                    wh.Close();

                listProtector.Close();
            }
        }

        public Downloader(ArrayList CommandList, CompletionDelegate CompletionCallBack)
        {
            commandList = CommandList;
            completionCallBack = CompletionCallBack;

            totalCommands = commandList.Count;

            foreach (Command cmd in commandList)
                totalDataSize += cmd.DataSize;
        }


        public void Execute()
        {
            if (commandList.Count > 0)
            {
                Thread coordinator = new Thread(CoordinatorThreadProc);
                coordinator.Start();
            }
        }


        public override string ToString()
        {
            return "Job ID: " + id.ToString() +
                   ", Commands: " + totalCommands.ToString() +
                   ", Total DataSize: " + totalDataSize.ToString();
        }        
        

    }


Single instance application with a Mutex class

Since a mutex may be a system-wide named object, it can be used to ensure that only a single instance of an application is running at a given time.

    static void Main()
    {
        Mutex m = new Mutex(false, "{03B4ED6A-C87D-47a1-BB81-A6E8B086CA08}");

        if (m.WaitOne(1500, false))
        {
            try
            {
                Application.EnableVisualStyles();
                Application.SetCompatibleTextRenderingDefault(false);
                Application.Run(new MainForm());
            }
            finally
            {
                m.ReleaseMutex();
            }
        }
        else
        {
            MessageBox.Show("An instance of this application is already running!");
        }         
    }

Semaphore class

A semaphore is a synchronization object which allows multiple ownership by a specified maximum number of threads. A semaphore object maintains a counter between zero and that specified maximum value. Each time a thread acquires the semaphore the counter decreases. Each time a thread releases the semaphore the counter increases. If the counter is zero semaphore ownership can not be obtained and the calling thread is blocked.

In other words, while a mutex permits exclusive access to a single thread, a semaphore permits access to a specified maximum number of threads.
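The counter behavior can be observed with a local semaphore and zero-timeout waits; a small sketch:

```csharp
using System;
using System.Threading;

class SemaphoreCounter
{
    static void Main()
    {
        // initial count 2, maximum count 2: up to two owners at once
        Semaphore s = new Semaphore(2, 2);

        s.WaitOne();                        // counter 2 -> 1
        s.WaitOne();                        // counter 1 -> 0
        bool third = s.WaitOne(0, false);   // counter is 0: fails immediately

        Console.WriteLine(third);           // False

        s.Release(2);                       // counter 0 -> 2
        s.Close();
    }
}
```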

A semaphore, just like a mutex, may have a name. A named semaphore is a system-wide semaphore. The Semaphore class provides overloaded constructors for creating named semaphores.

The Semaphore class provides the methods, already known from the Mutex class,

    WaitOne()
    and OpenExisting()

for acquiring ownership of a Semaphore object.

The

    public int Release();
    public int Release(int releaseCount);        

methods are used to release ownership of a semaphore. Over-releasing a Semaphore, that is releasing it past its maximum count, results in a SemaphoreFullException.
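A minimal sketch of the over-release error on a local semaphore:

```csharp
using System;
using System.Threading;

class OverRelease
{
    static void Main()
    {
        // counter already at its maximum of 1
        Semaphore s = new Semaphore(1, 1);

        try
        {
            s.Release();    // would push the counter past the maximum
        }
        catch (SemaphoreFullException)
        {
            Console.WriteLine("over-released");
        }
    }
}
```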

Semaphore class has no thread affinity.

This example again simulates a downloader application. A Downloader object is passed a list of Command objects, which represent download information, and a delegate value to be used in notifying the caller upon completion of the whole operation. The Downloader object creates a coordinator thread and starts it. The coordinator thread then creates three worker threads, each one associated with a Mutex completion handle, and then issues a WaitHandle.WaitAll(completionHandles) call. completionHandles is an array of Mutex objects. Another Mutex object is used to protect access to the command list. Each worker thread, when it starts executing, acquires its associated completion Mutex, setting it to non-signaled, and releases it upon completion. When all completion handles are released, and thus signaled, the coordinator thread unblocks and notifies the caller of the completion of the download operation.

Bandwidth is not an unlimited resource. This downloader application limits concurrent thread access to network resources by using a Semaphore which permits a maximum of two threads. Each worker thread calls Semaphore.WaitOne() just before starting the execution of a download command, so effectively a maximum of two threads access the network at any given time. When the worker thread is done downloading the command, it calls Semaphore.Release().

    public class Downloader : Disposable
    {
        static private int counter = 0;
        static public readonly Semaphore Semaphore = new Semaphore(2, 2);

        private int id = counter++;
        private int totalDataSize = 0;
        private int totalCommands = 0;
        private ArrayList commandList;
        private CompletionDelegate completionCallBack;

        private Mutex[] completionHandles = new Mutex[]
        {
            new Mutex(false),
            new Mutex(false),
            new Mutex(false)
        };

        private Mutex listProtector = new Mutex(false);

        private bool GetNextCommand(ref Command cmd)
        {
            cmd = null;

            if (listProtector.WaitOne())
            {
                try
                {                    
                    if (commandList.Count > 0)
                    {
                        cmd = (Command)commandList[0];
                        commandList.Remove(cmd);
                    }                    
                }
                finally
                {
                    listProtector.ReleaseMutex();
                }
            }

            return cmd != null;
        }
        
        private void CoordinatorThreadProc()
        {
            Thread worker;

            foreach (object completionHandle in completionHandles)
            {
                worker = new Thread(WorkerThreadProc);
                worker.Start(completionHandle);
            }

            Thread.Sleep(500);  // crude way to let the workers acquire their completion handles first

            WaitHandle.WaitAll(completionHandles);

            completionCallBack(this);
        }

        private void WorkerThreadProc(object info)
        {
            Mutex completionHandle = (Mutex)info;
            completionHandle.WaitOne();
            try
            {
                Command cmd = null;
                while (GetNextCommand(ref cmd))
                {
                    Downloader.Semaphore.WaitOne();
                    try
                    {
                        cmd.Execute();
                    }
                    finally
                    {
                        Downloader.Semaphore.Release();
                    } 
                }
            }
            finally
            {
                completionHandle.ReleaseMutex();
            }
        }

        protected override void DisposeUnmanagedResources()
        {
            if (!IsDisposed)
            {
                foreach (WaitHandle wh in completionHandles)
                    wh.Close();

                listProtector.Close();
            }
        }

        public Downloader(ArrayList CommandList, CompletionDelegate CompletionCallBack)
        {
            commandList = CommandList;
            completionCallBack = CompletionCallBack;

            totalCommands = commandList.Count;

            foreach (Command cmd in commandList)
                totalDataSize += cmd.DataSize;
        }


        public void Execute()
        {
            if (commandList.Count > 0)
            {
                Thread coordinator = new Thread(CoordinatorThreadProc);
                coordinator.Start();
            }
        }


        public override string ToString()
        {
            return "Job ID: " + id.ToString() +
                   ", Commands: " + totalCommands.ToString() +
                   ", Total DataSize: " + totalDataSize.ToString();
        }        
        

    }



see also:

EventWaitHandle class and the AutoResetEvent and ManualResetEvent derived classes

An event synchronization object has nothing to do with CLR events.

An event synchronization object can be explicitly set to the signaled state by a method call. An event object is mostly used for sending notifications to threads regarding the occurrence of an event, such as the completion of an operation.

There are two types of event objects: the auto reset event and the manual reset event. Resetting an event means setting an already signaled event back to the non-signaled state, thus making threads block again.

If an auto reset event is in the signaled state, it remains signaled until a single thread is released, and then it goes to the non-signaled state automatically.

If a manual reset event is in the signaled state, it remains signaled, permitting any number of waiting threads to be released, and requires an explicit call to be set to the non-signaled state again.

An event object, just like a mutex and a semaphore, may have a name. A named event is a system-wide event.

The BCL provides the EventWaitHandle class, which represents an event synchronization object. The EventWaitHandle class provides constructors for creating either an auto or a manual reset event. The EventWaitHandle object may be named or not. Two of those constructors are shown below.

        public EventWaitHandle(bool initialState, EventResetMode mode);
        public EventWaitHandle(bool initialState, EventResetMode mode, string name);
        

The AutoResetEvent and ManualResetEvent derived classes correspond to an auto and a manual reset event respectively, and they provide just a constructor for instantiating the object. All other functionality is inherited from the EventWaitHandle base class. Those derived classes, AutoResetEvent and ManualResetEvent, are always local, since they don't have a constructor accepting an event name.

The EventWaitHandle class provides the methods, already known from the Mutex and Semaphore classes,

    WaitOne()
    and OpenExisting()

for acquiring ownership of an event object.

The EventWaitHandle.Set() method sets the object to the signaled state. An auto reset event goes back to the non-signaled state as soon as a single thread passes the gate. The EventWaitHandle.Reset() method sets the object to the non-signaled state explicitly. A manual reset event requires this call, otherwise the gate stays open forever.
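The difference between the two reset modes can be demonstrated with zero-timeout waits on local event objects; a small sketch:

```csharp
using System;
using System.Threading;

class EventReset
{
    static void Main()
    {
        AutoResetEvent auto = new AutoResetEvent(true);       // initially signaled
        ManualResetEvent manual = new ManualResetEvent(true); // initially signaled

        // auto reset: the first successful wait resets the event,
        // so a second zero-timeout wait fails
        Console.WriteLine(auto.WaitOne(0, false));   // True
        Console.WriteLine(auto.WaitOne(0, false));   // False

        // manual reset: the event stays signaled until Reset() is called
        Console.WriteLine(manual.WaitOne(0, false)); // True
        Console.WriteLine(manual.WaitOne(0, false)); // True
        manual.Reset();
        Console.WriteLine(manual.WaitOne(0, false)); // False
    }
}
```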

EventWaitHandle and derived classes have no thread affinity.

This example again simulates a downloader application. A Downloader object is passed a list of Command objects, which represent download information, and a delegate value to be used in notifying the caller upon completion of the whole operation. The Downloader object creates a coordinator thread and starts it. The coordinator thread then creates three worker threads, each one associated with an AutoResetEvent completion handle, and then issues a WaitHandle.WaitAll(completionHandles) call. completionHandles is an array of AutoResetEvent objects. Another AutoResetEvent object is used to protect access to the command list. Each worker thread, when it starts executing, waits on its associated completion handle, an AutoResetEvent, which resets it to non-signaled, and signals it upon completion. When all completion handles are signaled, the coordinator thread unblocks and notifies the caller of the completion of the download operation.

    public class Downloader : Disposable
    {
        static private int counter = 0;

        private int id = counter++;
        private int totalDataSize = 0;
        private int totalCommands = 0;
        private ArrayList commandList;
        private CompletionDelegate completionCallBack;

        private AutoResetEvent[] completionHandles = new AutoResetEvent[]
        {
            new AutoResetEvent(true),
            new AutoResetEvent(true),
            new AutoResetEvent(true)
        };

        private AutoResetEvent listProtector = new AutoResetEvent(true);



        private bool GetNextCommand(ref Command cmd)
        {
            cmd = null;

            if (listProtector.WaitOne())
            {
                try
                {                    
                    if (commandList.Count > 0)
                    {
                        cmd = (Command)commandList[0];
                        commandList.Remove(cmd);
                    }                    
                }
                finally
                {
                    listProtector.Set();
                }
            }

            return cmd != null;
        }
        
        private void CoordinatorThreadProc()
        {
            Thread worker;

            foreach (object completionHandle in completionHandles)
            {
                worker = new Thread(WorkerThreadProc);
                worker.Start(completionHandle);
            }

            Thread.Sleep(500);  // crude way to let the workers acquire their completion handles first

            WaitHandle.WaitAll(completionHandles);

            completionCallBack(this);
        }

        private void WorkerThreadProc(object info)
        {
            AutoResetEvent completionHandle = (AutoResetEvent)info;
            completionHandle.WaitOne();
            try
            {
                Command cmd = null;
                while (GetNextCommand(ref cmd))
                    cmd.Execute();
            }
            finally
            {
                completionHandle.Set();
            }
        }

        protected override void DisposeUnmanagedResources()
        {
            if (!IsDisposed)
            {
                foreach (WaitHandle wh in completionHandles)
                    wh.Close();

                listProtector.Close();
            }
        }

        public Downloader(ArrayList CommandList, CompletionDelegate CompletionCallBack)
        {
            commandList = CommandList;
            completionCallBack = CompletionCallBack;

            totalCommands = commandList.Count;

            foreach (Command cmd in commandList)
                totalDataSize += cmd.DataSize;
        }


        public void Execute()
        {
            if (commandList.Count > 0)
            {
                Thread coordinator = new Thread(CoordinatorThreadProc);
                coordinator.Start();
            }
        }


        public override string ToString()
        {
            return "Job ID: " + id.ToString() +
                   ", Commands: " + totalCommands.ToString() +
                   ", Total DataSize: " + totalDataSize.ToString();
        }        
        

    }

 

The keyword volatile and volatile reads and writes

The C# compiler, and the JIT compiler beneath it, optimize reads and writes of values from and to memory. This means that a value might be kept in a CPU register or processor cache for speedy access instead of being re-read from its actual memory location every time.

Volatile reads and writes ensure that a value is read from or written to its memory location and not a processor cache. A volatile read guarantees that it always returns the latest value of a memory location, while a volatile write guarantees that the value written is immediately visible to any code.

Thus volatile reads and writes are in essence synchronized reads and writes.

The keyword volatile can be used to mark a class or struct field as a volatile field. Local variables cannot be declared as volatile.

Reads and writes of a volatile field are synchronized, so neither the lock keyword, nor the Monitor.Enter()/Monitor.Exit() pair of calls, nor any other thread synchronization technique is required just to read or write the field. Multiple threads can safely access a volatile field. Note, however, that volatility does not make compound operations such as an increment atomic; those still require locking or the Interlocked class.

   public class Coords
   {
        private volatile int x = 0;
        private volatile int y = 0;

        public int X 
        {
            get { return x; }  
            set { x = value; }  
        }

        public int Y
        {
            get { return y; }  
            set { y = value; }  
        }
    }


Another way to have volatile reads and writes, without using the volatile keyword, is to use the overloaded Thread.VolatileRead() and Thread.VolatileWrite() static methods.

Volatile reads and writes are supported for reference type values, pointer values, integral and enum values, IntPtr and UIntPtr values.

Thread.VolatileRead() and Thread.VolatileWrite() can also be used with 64-bit fields on 32-bit systems, such as a long field, forcing an atomic read or write operation even at that length. The volatile keyword itself cannot be applied to long or double fields.
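A minimal sketch of the static methods on a long field, which the volatile keyword cannot be applied to:

```csharp
using System;
using System.Threading;

class VolatileLong
{
    // a long field cannot be declared volatile, but the static
    // Thread.VolatileRead()/VolatileWrite() methods still work on it
    static long counter = 0;

    static void Main()
    {
        Thread.VolatileWrite(ref counter, 42L);
        long value = Thread.VolatileRead(ref counter);
        Console.WriteLine(value);   // 42
    }
}
```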

see also:

Interlocked operations

Interlocked operations are simple synchronized operations performed on numeric variables.

Win32 provides an interlocked API which includes functions such as the InterlockedIncrement() and InterlockedDecrement() functions. That set of interlocked Win32 API functions provides synchronized access to a variable which is shared by multiple threads. In essence, an interlocked call is an atomic call.

The BCL provides the static Interlocked class which encapsulates a part of the Win32 interlocked API.

    public static class Interlocked
    {
        public static extern int Increment(ref int location);
        public static extern long Increment(ref long location);    
        
        public static extern int Decrement(ref int location);
        public static extern long Decrement(ref long location);  
        
        public static extern double Exchange(ref double location1, double value);
        public static extern int Exchange(ref int location1, int value);
        public static extern long Exchange(ref long location1, long value);
        public static extern IntPtr Exchange(ref IntPtr location1, IntPtr value);
        public static extern object Exchange(ref object location1, object value);
        public static extern float Exchange(ref float location1, float value);
        public static T Exchange<T>(ref T location1, T value) where T: class;      
        
        public static extern int CompareExchange(ref int location1, int value, int comparand);
        public static T CompareExchange<T>(ref T location1, T value, T comparand) where T: class;
        public static extern double CompareExchange(ref double location1, double value, double comparand);
        public static extern long CompareExchange(ref long location1, long value, long comparand);
        public static extern IntPtr CompareExchange(ref IntPtr location1, IntPtr value, IntPtr comparand);
        public static extern object CompareExchange(ref object location1, object value, object comparand);
        public static extern float CompareExchange(ref float location1, float value, float comparand);          
    
        public static int Add(ref int location1, int value);
        public static long Add(ref long location1, long value);

        public static long Read(ref long location);
    }

The Interlocked class may be used with local variables or fields of a class or struct.

    class InterlockedTest
    {
        static public int ID = 0;
        private int accumulator = 0;

        public InterlockedTest()
        {
            Interlocked.Increment(ref ID);
        }

        public int Add(int value)
        {
            return Interlocked.Add(ref accumulator, value);
        }
    }
 




Thread Local Storage (TLS)

Although process code is executed by threads, process data is shared by all threads of an application. This is because all threads in a process share the same address space. In other words, the fields of an instance of a class (an object) are stored in the same memory location regardless of the number of threads running. So if a thread changes a field, that change is visible to other threads running at the same time.

Thread local storage (TLS) is a way to give each thread its own local copy of data. TLS data is unique per thread and per application domain.

There are two ways to have TLS data:

    Data slots
    and thread-relative static fields.
    

A data slot is a memory location which is unique to a combination of thread and application domain. A data slot is in essence an isolated local data store.

There are two types of data slots: named and unnamed. Both types are implemented using the LocalDataStoreSlot class, which has no public members other than those it inherits from its immediate base class, System.Object.

Here are the Thread class methods regarding data slot handling.

    public static LocalDataStoreSlot AllocateDataSlot();
    public static LocalDataStoreSlot AllocateNamedDataSlot(string name);
    public static LocalDataStoreSlot GetNamedDataSlot(string name);
    
    public static object GetData(LocalDataStoreSlot slot);
    public static void SetData(LocalDataStoreSlot slot, object data);
    

Here is an example

    void DataSlotThreadProc()
    {
        LocalDataStoreSlot slot = Thread.GetNamedDataSlot("Slot Value");
        Thread.SetData(slot, 1234);
        int x = (int)Thread.GetData(slot);

        string S = "Thread ID: " + Thread.CurrentThread.ManagedThreadId.ToString() +
                    ", Slot Value: " + x.ToString();

        synContext.Send(SynchronizedMethod, S);
    }
 

GetNamedDataSlot() returns a named data slot. If the named data slot does not exist, a new data slot is allocated. AllocateDataSlot() allocates an unnamed data slot, while the GetData() and SetData() pair is used with both named and unnamed data slots.

A thread-relative static field is a field marked with the ThreadStaticAttribute attribute. A thread-relative static field is a TLS field and the field's data is unique to each thread that uses the field.

Thread-relative static fields provide better performance than data slots. Since those fields are static, they are initialized the first time the class is loaded, which happens when a thread uses the class for the first time. But since a thread-relative static field's value is unique per thread, subsequent threads have to initialize the field for their own context.

Here is an example

    [ThreadStatic()]
    static private int? tlsData;

    static private int TlsData
    {
        get { return tlsData ?? 0; }
        set { tlsData = value; }
    }

    void ThreadStaticFieldThreadProc()
    {
        TlsData++;

        string S = "Thread ID: " + Thread.CurrentThread.ManagedThreadId.ToString() +
        ", Thread static field value: " + TlsData.ToString();

        synContext.Send(SynchronizedMethod, S);
    }


see also:

Thread.Suspend() and Thread.Resume() methods

The Thread.Suspend() and Thread.Resume() instance methods have been marked obsolete since .Net 2.0 and are going to be removed from the class in a future .Net release.

Thread.Suspend() suspends the execution of a thread while Thread.Resume() resumes it. When Thread.Suspend() is called, the CLR tries to find a safe point inside the thread's code before actually suspending the thread, because a garbage collection can only be performed safely once threads have reached such safe points.

As a side note, when a garbage collection is about to be performed, the CLR suspends all threads. Only the thread that performs the collection keeps running.

Thread.Abort() method

The Thread.Abort() method stops a thread permanently. An aborted thread cannot be restarted.

When Thread.Abort() is called, it raises a ThreadAbortException on the target thread, so a thread that might be aborted should catch that exception.

When the ThreadAbortException is thrown, the thread has two options inside the catch block: either terminate, or cancel the abort by calling Thread.ResetAbort().

The thread that initially called Thread.Abort() may elect to wait for the target thread to finish execution, possibly by calling Thread.Join(). Thread.Join() is a blocking call, so in that case it is safer to pass a timeout to the Join() call, because the target thread may opt to continue execution by calling ResetAbort().
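Here is a minimal sketch of the canceling option (the shouldSurvive condition is hypothetical):

    void AbortableThreadProc()
    {
        try
        {
            while (true)
                Thread.Sleep(100);
        }
        catch (ThreadAbortException)
        {
            if (shouldSurvive)          // hypothetical condition set elsewhere
                Thread.ResetAbort();    // cancel the abort
        }

        // reached only if ResetAbort() was called; execution continues after
        // the catch block, it does not re-enter the interrupted loop
    }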

Here is an example.

    SynchronizationContext synContext;
    Thread t = null;
    bool executing = false;

    void ThreadProc()
    {
        try
        {
            executing = true;
            synContext.Send(SynchronizedMethod, "thread started");

            while (true)
            {
                Thread.Sleep(100);
            }
        }
        catch (ThreadAbortException)
        {
            executing = false;
            synContext.Send(SynchronizedMethod, "thread aborted");
        }
    } 

    private void btnStart_Click(object sender, EventArgs e)
    {
        if (!executing)
        {
            t = new Thread(ThreadProc);
            t.Start();
        }
    }

    private void btnAbort_Click(object sender, EventArgs e)
    {
        if (executing)
        {
            t.Abort();
            t.Join(1000);
        }
    }
    

Timers

A timer is a device allowing a method to be called periodically at a specified time interval. The BCL provides three timer classes (WPF provides the DispatcherTimer class):

    System.Threading.Timer class
    System.Timers.Timer class and
    System.Windows.Forms.Timer class.
    

The first two are threading timers. Threading timers execute in the context of a secondary thread; in fact both of them use the ThreadPool class (described elsewhere). The third timer executes in the context of the primary thread, the user interface thread, and, being a component, it can be dropped on a form.

The System.Threading.Timer class

    public sealed class Timer : MarshalByRefObject, IDisposable
    {
        public Timer(TimerCallback callback);
        public Timer(TimerCallback callback, object state, int dueTime, int period);
        public Timer(TimerCallback callback, object state, long dueTime, long period);
        public Timer(TimerCallback callback, object state, TimeSpan dueTime, TimeSpan period);
        public Timer(TimerCallback callback, object state, uint dueTime, uint period);

        public bool Change(int dueTime, int period);
        public bool Change(long dueTime, long period);
        public bool Change(TimeSpan dueTime, TimeSpan period);
        public bool Change(uint dueTime, uint period);
        public void Dispose();
        public bool Dispose(WaitHandle notifyObject);
    }
    
    

The System.Timers.Timer class

    public class Timer : Component, ISupportInitialize
    {
        public Timer();
        public Timer(double interval);
        
        public bool AutoReset { get; set; }
        public bool Enabled { get; set; }
        public double Interval { get; set; }
        public override ISite Site { get; set; }
        public ISynchronizeInvoke SynchronizingObject { get; set; }

        public event ElapsedEventHandler Elapsed;
        
        public void BeginInit();
        public void Close();
        public void EndInit();
        public void Start();
        public void Stop();
    }    


The System.Windows.Forms.Timer class

    public class Timer : Component
    {
        public Timer();
        public Timer(IContainer container);

        public virtual bool Enabled { get; set; }
        public int Interval { get; set; }
        public object Tag { get; set; }

        public event EventHandler Tick;

        public void Start();
        public void Stop();
    }
    
    

Using a Timer class is very easy. A Timer needs to know two things: the method to execute and the time interval.

Regarding the System.Threading.Timer class, that information is passed to a constructor. The TimerCallback delegate is defined as

    public delegate void TimerCallback(object state);

The dueTime parameter is the delay before the first call and period is the interval between subsequent calls. The state parameter is a user-defined object passed to the method.

Regarding System.Timers.Timer and System.Windows.Forms.Timer, the method is given as an event handler while the interval is a property.

The first two timers require a disposing call when you are done using them: System.Threading.Timer.Dispose() and System.Timers.Timer.Close() respectively.
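As a quick sketch, passing Timeout.Infinite as the period makes a System.Threading.Timer fire only once, and Change() can re-arm it later:

    // one-shot timer: a single callback after 2 seconds, no repetition
    var oneShot = new System.Threading.Timer(
        state => Console.WriteLine("fired once"),
        null,               // state
        2000,               // dueTime
        Timeout.Infinite);  // period: no further callbacks

    // later, re-arm the same timer for another single shot in 5 seconds
    oneShot.Change(5000, Timeout.Infinite);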

Here is an example

        System.Threading.Timer timer1;
        System.Timers.Timer timer2;
        System.Windows.Forms.Timer timer3;

        bool executing = false;

        void StartTimers()
        {
            if (!executing)
            {
                timer1 = new System.Threading.Timer(Threading_TimerTick, "System.Threading", 3000, 3000);

                timer2 = new System.Timers.Timer(2500);
                timer2.Elapsed += new ElapsedEventHandler(Timers_TimerTick);
                timer2.Start();

                timer3 = new System.Windows.Forms.Timer();
                timer3.Interval = 2000;
                timer3.Tick += new EventHandler(WindowsForms_TimerTick); 
                timer3.Start();

                executing = true;
            }
        }

        void StopTimers()
        {
            if (executing)
            {
                timer1.Dispose();
                timer2.Stop();
                timer2.Close();
                timer3.Stop();

                executing = false;
            }
        }

        void Threading_TimerTick(object state)
        {
            DisplayTimerMessage(state.ToString());
        }

        void Timers_TimerTick(object sender, ElapsedEventArgs e)
        {
            DisplayTimerMessage("System.Timers");
        }

        void WindowsForms_TimerTick(object sender, EventArgs e)
        {
            DisplayTimerMessage("System.Windows.Forms");
        }

Control.Invoke Method

Apart from the WindowsFormsSynchronizationContext class, there is another way to synchronize a call from a secondary thread to the primary user interface thread: the Control.Invoke() method.

        public object Invoke(Delegate method);
        public object Invoke(Delegate method, params object[] args);
        

The Control.Invoke() invokes the passed delegate in the context of the thread that created the control. In case of a user interface element dropped on the main form, this thread is the primary user interface thread.

The Delegate class is an abstract class. In practice the most useful overload is the second one, which accepts a params parameter: it is valid to pass a delegate value of any delegate type along with a matching array of arguments.

The Control.InvokeRequired property returns true if it is called from a thread other than the creator thread, so it is a safe way to know if an Invoke() call is required or not.

Here is an example

    delegate void MessageDelegate(string msg);

    void DisplayTimerMessage(string msg)
    {
        if (textBox1.InvokeRequired)
            textBox1.Invoke(new MessageDelegate(DisplayTimerMessage), msg + " [invoked]");
        else
            textBox1.Text += msg + Environment.NewLine;
    } 

As written above, the method first checks whether an Invoke() call is required and, if so, recursively calls itself through Invoke(). The second call, the Invoke()-ed one, is then executed in the context of the primary thread.

BackgroundWorker Class

The System.ComponentModel.BackgroundWorker class is an easy way to work with threads. BackgroundWorker provides an easy and synchronized way to report progress, completion and cancellation of the whole operation. It also makes it easy to cancel, that is abort, the operation by just setting a simple flag, thus avoiding the risks of calling Thread.Abort(). Furthermore, it is a component and can be dropped on a form.

BackgroundWorker class actually encapsulates one of the background threads of the ThreadPool. (ThreadPool is described elsewhere).

    public class BackgroundWorker : Component
    {
        public BackgroundWorker();

        public bool CancellationPending { get; }
        public bool IsBusy { get; }
        public bool WorkerReportsProgress { get; set; }
        public bool WorkerSupportsCancellation { get; set; }

        public event DoWorkEventHandler DoWork;
        public event ProgressChangedEventHandler ProgressChanged;
        public event RunWorkerCompletedEventHandler RunWorkerCompleted;

        public void CancelAsync();
        public void ReportProgress(int percentProgress);
        public void ReportProgress(int percentProgress, object userState);
        public void RunWorkerAsync();
        public void RunWorkerAsync(object argument);
    }
    

Using the BackgroundWorker class is very easy. The DoWork event is the callback for the thread method, and the second overload of RunWorkerAsync() accepts a user-defined parameter. BackgroundWorker traps any exception thrown inside the handler linked to DoWork and reports the error through the parameters of the RunWorkerCompleted event. Avoid non-synchronized calls from inside the DoWork handler.

The ProgressChanged event can be used to report the progress of the operation while the RunWorkerCompleted can be used to report the termination of the operation. Both those calls do not require synchronization.

The RunWorkerAsync() method starts the thread. The CancelAsync() method requests cancellation of the operation; the worker code must notice the request and stop cooperatively, so there is no ThreadAbortException to trap. Just check the RunWorkerCompletedEventArgs.Cancelled property in the RunWorkerCompleted event.

The ReportProgress() is used to fire the ProgressChanged event.

The simplest way to use the BackgroundWorker class is as follows

    void WorkerMethod(object sender, DoWorkEventArgs e)
    {
        // code here                   
    }

...

    worker = new BackgroundWorker();
    worker.DoWork += new DoWorkEventHandler(WorkerMethod);
    worker.RunWorkerAsync(null);    



Investigate the BackgroundWorker class. It's a valuable tool. Here is a full example.

    class Command
    {
        static Random random = new Random();

        private int dataSize = random.Next(30, 50);
        private int remainSize = 0;

        public void Execute(BackgroundWorker worker, DoWorkEventArgs e)
        {
            remainSize = dataSize;
            int donePercent = 0;

            while (remainSize > 0)
            {
                if (worker.CancellationPending) // BackgroundWorker.CancelAsync() has been called
                {
                    e.Cancel = true;
                    break;
                }
                else
                {
                    remainSize--;
                    donePercent = ((dataSize - remainSize) * 100) / dataSize;
                    Thread.Sleep(150);

                    worker.ReportProgress(donePercent, this);  // feed the BackgroundWorker.ProgressChanged event 
                }
            }
        }

        public int DataSize { get { return dataSize; } }
        public int RemainSize { get { return remainSize; } }
    }



    public partial class MainForm : Form
    {
        public MainForm()
        {
            InitializeComponent(); 
        }

        BackgroundWorker worker;
        bool executing = false;

        /* this handler is not synchronized */
        void Worker_Work(object sender, DoWorkEventArgs e)
        {
            Command cmd = (Command)e.Argument;              // e.Argument is the object passed to the BackgroundWorker.RunWorkerAsync()      
            cmd.Execute((BackgroundWorker)sender,  e);

            /* e.Result will be passed to the RunWorkerCompletedEventArgs of the BackgroundWorker.RunWorkerCompleted event */
            e.Result = cmd;                                  
        }

        /* synchronized handler */
        void Worker_ProgressChanged(object sender, ProgressChangedEventArgs e)
        {
            Command cmd = (Command)e.UserState;         // e.UserState is an object passed to the BackgroundWorker.ReportProgress()
            progressBar1.Value = e.ProgressPercentage;  // same as above

            textBox1.Text = "worker progress: " + e.ProgressPercentage.ToString() 
                            + "%, Total size: " + cmd.DataSize.ToString() + " (" + cmd.RemainSize.ToString() + ")";
        }

        /* synchronized handler */
        void Worker_Completed(object sender, RunWorkerCompletedEventArgs e)
        {
            if (e.Error != null)
                textBox1.Text = e.Error.Message;
            else if (e.Cancelled)
                textBox1.Text += Environment.NewLine + "Canceled";
            else
                textBox1.Text += Environment.NewLine + "DONE";

            executing = false;
            progressBar1.Value = 0;
        }


        private void btnStart_Click(object sender, EventArgs e)
        {
            if (!executing)
            {
                textBox1.Text = "";                
                
                /* create a worker */
                worker = new BackgroundWorker();

                /* configure a worker */
                worker.WorkerReportsProgress = true;
                worker.WorkerSupportsCancellation = true;

                /* link worker events */
                worker.DoWork += new DoWorkEventHandler(Worker_Work);
                worker.ProgressChanged += new ProgressChangedEventHandler(Worker_ProgressChanged);
                worker.RunWorkerCompleted += new RunWorkerCompletedEventHandler(Worker_Completed);
                
                /* start the worker */
                worker.RunWorkerAsync(new Command());    

                executing = true;
            }
        }


        private void btnStop_Click(object sender, EventArgs e)
        {
            if (executing)
            {
                if (worker.WorkerSupportsCancellation)
                {
                    worker.CancelAsync();   // this sets BackgroundWorker.CancellationPending to true
                    executing = false;
                }
            }
        }

        private void btnClear_Click(object sender, EventArgs e)
        {
            textBox1.Text = "";
        }

        private void MainForm_FormClosing(object sender, FormClosingEventArgs e)
        {
            e.Cancel = executing;

            if (e.Cancel)
                MessageBox.Show("Please stop the worker manually! The worker is still executing...");
        }
 
    }

ThreadPool class

The System.Threading.ThreadPool static class represents a pool of recycled, never-terminated threads that can be used to execute a passed method.

CLR itself uses ThreadPool threads to carry out many tasks such as asynchronous method calls (Asynchronous method calls are not described here), timer callbacks and wait operations.

ThreadPool threads are background threads, that is, they are automatically terminated by the CLR when all application foreground threads have terminated, which happens when the parent process terminates.

ThreadPool threads use a default stack size, a default priority and run in a multi-threaded apartment context, not in a single-threaded apartment. (Thread Apartments described elsewhere).

There is no limit on the number of threads the ThreadPool may contain. There is a limit, though, on the number of threads that can be active, that is, concurrently running. By default that limit is 25 worker threads per CPU and 1000 I/O completion threads.

The ThreadPool.GetMaxThreads() returns the number of the threads that can be active at a given time. The ThreadPool.SetMaxThreads() sets that number, that is it changes the default limit of active threads.

Any request above the number the GetMaxThreads() returns is queued until a thread is available. The ThreadPool.GetAvailableThreads() returns the number of threads that are not currently active and thus available to undertake a task.
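Here is a minimal sketch querying those numbers:

    int workerThreads, ioThreads;

    // upper limit of concurrently active pool threads
    ThreadPool.GetMaxThreads(out workerThreads, out ioThreads);
    Console.WriteLine("Max: " + workerThreads + " worker, " + ioThreads + " I/O");

    // threads currently idle and available to undertake a task
    ThreadPool.GetAvailableThreads(out workerThreads, out ioThreads);
    Console.WriteLine("Available: " + workerThreads + " worker, " + ioThreads + " I/O");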

Most of the time ThreadPool worker threads sit quietly, waiting for a code to use their services. When a ThreadPool worker thread is assigned a task, it executes the specified method and then returns to the pool waiting for the next request.

A ThreadPool worker thread never terminates, at least while the application is running. It is active or inactive (running or idle) but never aborted or terminated. Since it is not possible to get a reference to a ThreadPool thread, it is not possible to call Thread.Abort(), Thread.Join(), or any other Thread instance method on it. So there is actually no way to manually terminate a ThreadPool thread.

[Avoid using the ThreadPool.BindHandle() method. Use the FileStream class instead. FileStream class uses ThreadPool.BindHandle() internally to implement asynchronous IO.]

CAUTION: Unhandled exceptions on a thread pool thread, thrown by user code, terminate the application.

The simplest way to use a ThreadPool thread is the

    static bool QueueUserWorkItem(WaitCallback callBack, object state);
    

method. WaitCallback delegate is defined as

    public delegate void WaitCallback(object state);
    

The ThreadPool executes the passed-in callBack delegate value, passing it the user-defined state object. QueueUserWorkItem() queues the work item for execution as soon as a pool thread becomes available.

    void CallBack(object state)
    {
        try
        {
            // user code here
        }
        catch (Exception)
        {
            ...
        }
        
    }
    

...

    ThreadPool.QueueUserWorkItem(CallBack, null);
    
    

The second way to use ThreadPool is the

    static RegisteredWaitHandle RegisterWaitForSingleObject(WaitHandle waitObject, 
                                                            WaitOrTimerCallback callBack, 
                                                            object state, 
                                                            int millisecondsTimeOutInterval, 
                                                            bool executeOnlyOnce);

method. WaitOrTimerCallback delegate is defined as

    public delegate void WaitOrTimerCallback(object state, bool timedOut);
    

RegisterWaitForSingleObject() defers execution until the waitObject is signalled or the millisecondsTimeOutInterval elapses (-1 for an infinite timeout). Avoid using a Mutex as the waitObject; use an EventWaitHandle or a Semaphore instead. When the waitObject becomes signalled or the timeout elapses, whichever comes first, the ThreadPool executes the passed-in callBack delegate value, passing it the user-defined state object. The executeOnlyOnce flag controls whether the callback runs only once or on every signal.

    void CallBack(object state, bool timedOut)
    {
        try
        {
            // user code here
        }
        catch (Exception)
        {
            ...
        }
    }
     

...

    ManualResetEvent waitHandle = new ManualResetEvent(false);  // initially in non-signalled state
    
    ThreadPool.RegisterWaitForSingleObject(
                        waitHandle,   // WaitHandle waitObject
                        CallBack,     // WaitOrTimerCallback callBack
                        null,         // Object state      
                        -1,           // int millisecondsTimeOutInterval 
                        true          // bool executeOnlyOnce
                        );                           
 
     // any code here 
     
     waitHandle.Set();                  // signal the handle
     

     

Here is a full example.

    public enum JobStatus
    {
        Working,
        Completed,
        Aborted,
        Error
    }

    /* conveys information regarding a queued task */
    public class JobInfo
    {
        private JobStatus status;
        private object userState;
        private string message;

        public JobInfo(JobStatus Status, object UserState, string Message)
        {
            status = Status;
            userState = UserState;
            message = Message;
        }

        public JobStatus Status { get { return status; } }
        public object UserState { get { return userState; } }
        public string Message { get { return message; } }
    }

    public delegate void JobInfoDelegate(JobInfo info);


    /* Represents a job queued for execution using a thread pool thread.
       The Execute() method is called from inside the thread context 
       and sends feed back information using the passed delegate value */
    class Command
    {
        static Random random = new Random();

        private int id;
        private int dataSize = random.Next(30, 50);
        private int remainSize = 0;
        private int donePercent = 0;

        public Command(int ID)
        {
            id = ID;
        }

        public void Execute(JobInfoDelegate JobInfoCallBack)
        {
            remainSize = dataSize;
            donePercent = 0;

            try
            {
                while (remainSize > 0)
                {
                    remainSize--;
                    donePercent = ((dataSize - remainSize) * 100) / dataSize;

                    JobInfoCallBack(new JobInfo(JobStatus.Working, this, ""));
                    Thread.Sleep(150);
                }

                JobInfoCallBack(new JobInfo(JobStatus.Completed, this, ""));
            }
            catch (Exception ex)
            {
                JobInfoCallBack(new JobInfo(JobStatus.Error, this, ex.GetType().FullName + ": " + ex.Message));
            }
        }

        public int ID { get { return id; } }
        public int DataSize { get { return dataSize; } }
        public int RemainSize { get { return remainSize; } }
        public int DonePercent { get { return donePercent; } }
    }
    
    
    
    public partial class MainForm : Form
    {
        public MainForm()
        {
            InitializeComponent(); 
        }

        const int COMMAND_COUNT = 35;

        EventWaitHandle eventWaitHandle;
        string[] lines;
        int commandsLeft = 0;  

        /* Called by a Command object to provide feedback about the operation.
           It first checks whether Control.InvokeRequired is true and, if so, re-calls
           itself through Invoke(), so that it executes in a synchronized manner */
        void JobInfoCallBack(JobInfo info)
        { 
            if (progressBar1.InvokeRequired)
                progressBar1.Invoke(new JobInfoDelegate(JobInfoCallBack), info);
            else
            {
                Command cmd = (Command)info.UserState;                

                switch (info.Status)
                {
                    case JobStatus.Working: 
                        lines[cmd.ID] = "ID: " + cmd.ID.ToString() + ", working: " + cmd.DonePercent.ToString() + "%, Total size: " + cmd.DataSize.ToString() + " (" + cmd.RemainSize.ToString() + " left)";
                        textBox1.Lines = lines;
                        break;
                    case JobStatus.Completed:
                        progressBar1.Value = ((COMMAND_COUNT - commandsLeft) * 100) / COMMAND_COUNT;
                        lines[cmd.ID] = "ID: " + cmd.ID.ToString() + "   DONE";
                        textBox1.Lines = lines;
                        commandsLeft--;
                        break;
                    case JobStatus.Aborted:
                        commandsLeft--;          

                        break;
                    case JobStatus.Error:
                        lines[cmd.ID] = "ID: " + cmd.ID.ToString() + "   ERROR: " + info.Message;
                        textBox1.Lines = lines;
                        commandsLeft--;
                        break;
                }

                if (commandsLeft <= 0)
                    progressBar1.Value = 0;
            }
        }

        /* the call back for the ThreadPool.QueueUserWorkItem() */
        void QueueUserWorkItem_CallBack(object state)
        {
            ((Command)state).Execute(JobInfoCallBack);
        }

        /* the call back for the ThreadPool.RegisterWaitForSingleObject() */
        void RegisterWaitForSingleObject_CallBack(object state, bool timedOut)
        {
            ((Command)state).Execute(JobInfoCallBack);
        }

        /* starts a ThreadPool.QueueUserWorkItem() operation */
        private void btnStart_Click(object sender, EventArgs e)
        {
            if (commandsLeft == 0)
            {
                commandsLeft = COMMAND_COUNT;

                textBox1.Text = "";
                lines = new string[COMMAND_COUNT];

                for (int i = 0; i < COMMAND_COUNT; i++)
                    ThreadPool.QueueUserWorkItem(QueueUserWorkItem_CallBack, new Command(i));
 
            }
        }

        /* starts a ThreadPool.RegisterWaitForSingleObject() operation */
        private void btnStartWaiting_Click(object sender, EventArgs e)
        {
            if (commandsLeft == 0)
            {
                eventWaitHandle = new ManualResetEvent(false);
                commandsLeft = COMMAND_COUNT;

                textBox1.Text = "";
                lines = new string[COMMAND_COUNT];

                for (int i = 0; i < COMMAND_COUNT; i++)
                {
                    ThreadPool.RegisterWaitForSingleObject(
                        eventWaitHandle,                        // WaitHandle waitObject
                        RegisterWaitForSingleObject_CallBack,   // WaitOrTimerCallback callBack
                        new Command(i),                         // Object state      
                        -1,                                     // int millisecondsTimeOutInterval 
                        true                                    // bool executeOnlyOnce
                        );                    
                }

                eventWaitHandle.Set();
            }
        }

        private void MainForm_FormClosing(object sender, FormClosingEventArgs e)
        {
            e.Cancel = commandsLeft > 0;

            if (e.Cancel)
                MessageBox.Show("Please wait! Thread pool threads are still executing...");
        }

    }    
    

ReaderWriterLockSlim class (and ReaderWriterLock class)

System.Threading.ReaderWriterLockSlim class is a multi-shared-reads exclusive-write synchronizer. It permits either multiple read access or exclusive write access to a resource. Furthermore, special read locks, called upgradeable read locks, can be promoted to write locks within the same thread, thus covering situations where a reader thread might need to perform a write to the resource, if a certain condition is met.

The advantage of ReaderWriterLockSlim over other locking devices is that it permits concurrent access by multiple readers: a read lock does not block another read lock.

    public class ReaderWriterLockSlim : IDisposable
    { 
        public ReaderWriterLockSlim();  
        public ReaderWriterLockSlim(LockRecursionPolicy recursionPolicy);
        
        public int CurrentReadCount { get; }
        public bool IsReadLockHeld { get; }
        public bool IsUpgradeableReadLockHeld { get; }
        public bool IsWriteLockHeld { get; }
        public LockRecursionPolicy RecursionPolicy { get; }
        public int RecursiveReadCount { get; }
        public int RecursiveUpgradeCount { get; }
        public int RecursiveWriteCount { get; }
        public int WaitingReadCount { get; }
        public int WaitingUpgradeCount { get; }
        public int WaitingWriteCount { get; }        
        
        public void EnterReadLock();
        public void ExitReadLock();        
        public void EnterUpgradeableReadLock();
        public void ExitUpgradeableReadLock();        
        public void EnterWriteLock();        
        public void ExitWriteLock();     

        public bool TryEnterReadLock(int millisecondsTimeout);
        public bool TryEnterReadLock(TimeSpan timeout);
        public bool TryEnterUpgradeableReadLock(int millisecondsTimeout);
        public bool TryEnterUpgradeableReadLock(TimeSpan timeout);
        public bool TryEnterWriteLock(int millisecondsTimeout);
        public bool TryEnterWriteLock(TimeSpan timeout);        
        
        public void Dispose();  
    }

ReaderWriterLockSlim class is new to .Net 3.5 and replaces the ReaderWriterLock class. ReaderWriterLockSlim class overcomes many weaknesses found in ReaderWriterLock class. ReaderWriterLock class still exists though.

ReaderWriterLockSlim locks and unlocks a resource through pairs of EnterXXX() and ExitXXX() methods. The TryEnterXXX() counterparts of the EnterXXX() methods accept a timeout parameter and return true on success.

The EnterUpgradeableReadLock(), TryEnterUpgradeableReadLock() and ExitUpgradeableReadLock() methods are used to acquire and release a read lock that may be promoted to a write lock.

Nested locks, in ReaderWriterLockSlim parlance, are called recursive locks. A read lock is considered recursive when it is nested inside another read lock, while a write lock is considered recursive when it is nested inside another write lock.

Nested locks are allowed only if the ReaderWriterLockSlim object is created using the second overload of the constructor, passing it LockRecursionPolicy.SupportsRecursion as an argument. Otherwise recursive locks throw a LockRecursionException.
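A minimal sketch of the difference the recursion policy makes:

    // with SupportsRecursion, nested locks of the same kind are legal
    var rwl = new ReaderWriterLockSlim(LockRecursionPolicy.SupportsRecursion);

    rwl.EnterReadLock();
    rwl.EnterReadLock();      // recursive read lock: allowed
    rwl.ExitReadLock();
    rwl.ExitReadLock();

    // had rwl been created with the default constructor (NoRecursion),
    // the nested EnterReadLock() call would throw LockRecursionException
    rwl.Dispose();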

Locking hierarchy, from bottom to top, is: read lock, upgradeable lock and write lock.

Regardless of the recursion setting, a thread in read mode cannot be promoted to upgradeable or write mode, while a thread in upgradeable mode can be promoted to write mode or downgraded to read mode. These rules are established in order to avoid deadlocks.

Regardless of the recursion setting, only a single thread can be in write mode at a time. That writer thread gains exclusive access to the resource; no other thread can acquire the lock. If no write lock is held, any number of threads can be in read mode, but only one of them can be in upgradeable mode.

ReaderWriterLockSlim favors writer threads: a writer thread waiting to acquire the lock is given priority over waiting reader threads.

ReaderWriterLockSlim class implements the IDisposable interface. Calling ReaderWriterLockSlim.Dispose() when done, frees system resources.

ReaderWriterLockSlim class has thread affinity.

Here is a List-like example class which uses a ReaderWriterLockSlim object to provide protected access to its internal list.

    public class List : Disposable
    {
 
        static private Random random = new Random();

        private ArrayList list = new ArrayList();
        ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();

        // disposes the lock along with the owning object
        protected override void DisposeUnmanagedResources()
        {
            rwLock.Dispose();
        }


        // adds a few random values under an exclusive write lock
        // and returns their sum
        public int Add()
        {
            rwLock.EnterWriteLock();
            try
            {
                int count = random.Next(5, 15);
                int value;
                int Result = 0;

                for (int i = 0; i < count; i++)
                {
                    value = random.Next(1, 20);
                    Result += value;
                    list.Add(value);
                    Thread.Sleep(150);   // simulate lengthy work
                }
                return Result;
                    
            }
            finally
            {
                rwLock.ExitWriteLock();
            }
        }

        // removes a third of the elements, entering in upgradeable read mode
        // first and promoting to write mode only if there is work to do
        public int Delete()
        {
            rwLock.EnterUpgradeableReadLock();
            try
            {
                int Result = 0;
                int count = list.Count / 3;

                if (count > 0)
                {
                    rwLock.EnterWriteLock();    // promote to write lock
                    try
                    {

                        for (int i = 0; i < count; i++)
                        {
                            Result += (int)list[0];
                            list.RemoveAt(0);
                            Thread.Sleep(150);
                        }

                    }
                    finally
                    {
                        rwLock.ExitWriteLock();
                    }
                }

                return Result;
            }
            finally
            {
                rwLock.ExitUpgradeableReadLock();
            }

        }

        // sums the list elements under a shared read lock
        public int Total()
        {
            rwLock.EnterReadLock();
            try
            {
                int Result = 0;

                for (int i = 0; i < list.Count; i++)
                {
                    Result += (int)list[i];
                }

                Thread.Sleep(50);   // simulate lengthy work
                return Result;
            }
            finally
            {
                rwLock.ExitReadLock();
            }
                
        }

    }
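A usage sketch for the class above might look like this (the thread count is arbitrary, and it is assumed that the Disposable base class exposes a public Dispose() method):

```csharp
// Hypothetical driver: one writer adds values while several readers compute totals
List list = new List();

Thread writer = new Thread(() => Console.WriteLine("Added: " + list.Add()));
Thread[] readers = new Thread[3];
for (int i = 0; i < readers.Length; i++)
    readers[i] = new Thread(() => Console.WriteLine("Total: " + list.Total()));

writer.Start();
foreach (Thread t in readers)
    t.Start();

writer.Join();
foreach (Thread t in readers)
    t.Join();

list.Dispose();   // frees the internal ReaderWriterLockSlim
```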

Copyright © 2009 Theodoros Bebekis, Thessaloniki, Greece (teo point bebekis at gmail point com)