Have you ever spent hours trying to debug a non-deterministic problem in your multi-threaded application? If so, you should definitely read this article. If not, it is still a good way to revise your knowledge of threading challenges in C#. Being aware of a few common facts about threading can help you considerably in building well-designed, error-proof multi-threaded applications in the future.

1. Threads share data if they have a common reference to the same object instance

The code below:
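The original listing is not shown here, so below is a minimal sketch consistent with the description. The names SomeClass, _isWorkDone and DoWork come from the text; the Program scaffolding and the Join call (which makes the output deterministic) are my additions.

```csharp
using System;
using System.Threading;

class SomeClass
{
    // An instance field: shared by any threads that hold
    // a reference to the same SomeClass instance.
    bool _isWorkDone;

    public void DoWork()
    {
        if (!_isWorkDone)
        {
            _isWorkDone = true;
            Console.WriteLine("Work done");
        }
    }
}

class Program
{
    static void Main()
    {
        var instance = new SomeClass();

        // Both threads call DoWork() on the SAME instance,
        // so they share the _isWorkDone field.
        var newThread = new Thread(instance.DoWork);
        newThread.Start();
        newThread.Join();

        instance.DoWork(); // the main thread's call prints nothing
    }
}
```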

When executed, it prints "Work done" on the screen just once. As you can see, both threads (the main one and newThread) call the DoWork() method on the same instance of SomeClass. As a result, although the _isWorkDone field is non-static, they share it. In consequence, "Work done" is printed only once, whereas a programmer who is not aware of the above would expect it to be printed twice.

2. “Finally” blocks in background threads are not executed when the process terminates

The code below:
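The original listing is missing here; the sketch below reconstructs the scenario described in the surrounding text (the printed messages come from the text, the rest of the scaffolding is my assumption).

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        var worker = new Thread(() =>
        {
            try
            {
                Console.WriteLine("Doing some work...");
                Thread.Sleep(Timeout.Infinite); // simulate never-ending work
            }
            finally
            {
                // Never executed: the process terminates while the
                // background thread is still inside the try block.
                Console.WriteLine("Running the finally block...");
            }
        });

        worker.IsBackground = true; // a background thread does not keep the process alive
        worker.Start();

        Thread.Sleep(500); // give the worker a moment to print its message
        Console.WriteLine("Closing the program...");
        // Main returns here; all background threads are killed abruptly,
        // so the finally block above never runs.
    }
}
```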

When executed, it prints "Doing some work…" followed by "Closing the program…". As you can see, when the process terminates (because the main thread has finished its execution), the "finally" block in the background thread is not executed. Not being aware of this can cause serious trouble when there is cleanup work to be done at the end, such as closing streams, releasing resources or deleting temporary files.

3. Captured variables in lambda expressions are shared as well

You could presume that the below code:
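The original snippet is not shown; this is a sketch of the classic loop-variable capture pitfall the text describes (the Program scaffolding is my addition).

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // All ten lambdas capture the SAME variable i, which keeps
        // changing while the threads are starting and running.
        for (int i = 0; i < 10; i++)
            new Thread(() => Console.Write(i)).Start();
    }
}
```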

Should result in the following output: 0123456789. Well, it doesn't. The result is completely non-deterministic! The trick here is that the i variable refers to the same memory location throughout the lifetime of the "for" loop. As a result, each thread calls Console.Write on a variable which may still be changing while the threads run. The solution is to use a temporary variable:
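A sketch of the fix, assuming the loop body from the description (the name temp is illustrative):

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        for (int i = 0; i < 10; i++)
        {
            int temp = i; // a fresh variable per iteration, so each
                          // lambda captures its own copy of the value
            new Thread(() => Console.Write(temp)).Start();
        }
    }
}
```

Each digit from 0 to 9 is now printed exactly once, although the order in which the threads run is still non-deterministic.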

4. Locking does not restrict access to the synchronizing object itself in any way

This means that if one thread calls lock(x) and another thread calls x.ToString(), the latter will not be blocked. In other words: objects used for locking act as lockers; they are not themselves locked.
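A small sketch of this behaviour (the object name x follows the text; the rest is my scaffolding):

```csharp
using System;
using System.Threading;

class Program
{
    static readonly object x = new object();

    static void Main()
    {
        new Thread(() =>
        {
            lock (x)
            {
                Thread.Sleep(2000); // hold the lock for a while
            }
        }).Start();

        Thread.Sleep(100); // make sure the lock is already taken

        // NOT blocked: calling a method on x is unrelated to lock(x).
        // Only another lock(x) would have to wait for the lock above.
        Console.WriteLine(x.ToString());
    }
}
```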

5. Try/catch/finally blocks in scope when a thread is created are of no relevance to the thread when it starts executing

You should be aware that the code below:
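The original listing is missing; below is a sketch of the scenario based on the Divide method named in the text (the division setup and Program scaffolding are my assumptions).

```csharp
using System;
using System.Threading;

class Program
{
    public static void Main()
    {
        try
        {
            new Thread(Divide).Start();
            Thread.Sleep(1000);
        }
        catch (Exception)
        {
            // Never reached: the exception is thrown on the new thread,
            // which has its own call stack. It cannot propagate to the
            // thread that created it.
            Console.WriteLine("Exception caught!");
        }
    }

    static void Divide()
    {
        int zero = 0;
        int result = 1 / zero; // DivideByZeroException -- uncaught, kills the process
    }
}
```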

Will not result in the exception being caught in the "catch" block of the Main method. It will, unfortunately, remain uncaught and will cause the program to shut down. The most natural way to solve this problem is, of course, to move the try/catch block into the Divide method.

6. The fact that an object is thread-safe does not imply that you don't need to lock around accessing it

Take a look at the code below:
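The original snippet is not shown; this is a runnable sketch of the check-then-act pattern the text describes. The names _list, _listLock and the string items are illustrative, and a plain List<string> stands in for the "fully thread-safe" list class from the text.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class Program
{
    // Stand-in for a fully thread-safe list (names are illustrative).
    static readonly List<string> _list = new List<string>();
    static readonly object _listLock = new object();

    static void TryAddUnsafe(string item)
    {
        // Even if Contains and Add were each internally thread-safe,
        // the sequence is NOT atomic: another thread may add the same
        // item between the check and the Add call.
        if (!_list.Contains(item))
            _list.Add(item);
    }

    static void TryAddSafe(string item)
    {
        // Fix: make the whole check-then-act sequence atomic.
        lock (_listLock)
        {
            if (!_list.Contains(item))
                _list.Add(item);
        }
    }

    static void Main()
    {
        var t1 = new Thread(() => TryAddSafe("apple"));
        var t2 = new Thread(() => TryAddSafe("apple"));
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        Console.WriteLine(_list.Count); // "apple" was added only once
    }
}
```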

Let’s say that the class of _list is fully thread-safe. Nevertheless, there is still a possibility that, between the check and the addition, another thread has already added the same item. This example shows that using a thread-safe class does not automatically make the code that uses it thread-safe.

7. Your program’s instructions can be reordered by the compiler, CLR or CPU to improve efficiency

This one might be really tricky for developers who are not aware of it. To explain this process, let’s first take a look at the following piece of code:
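The original listing is missing; the sketch below matches the fields and methods named in the surrounding text (_value, _done, A, B), with the class name Foo and the Main scaffolding as my assumptions.

```csharp
using System;
using System.Threading;

class Foo
{
    int _value;
    bool _done;

    public void A()
    {
        _value = 1;
        _done = true;
    }

    public void B()
    {
        if (_done)
            Console.WriteLine(_value); // can, surprisingly, print "0"
    }
}

class Program
{
    static void Main()
    {
        var foo = new Foo();
        // Run A and B on different threads; B may observe _done == true
        // yet still read a stale 0 from _value.
        var t = new Thread(foo.A);
        t.Start();
        foo.B();
        t.Join();
    }
}
```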

The question is: is it possible, assuming that A and B run concurrently on different threads, that B will write “0” on the screen? Intuitively it is not: we cannot reach Console.WriteLine before _done is set to true, and before _done is set to true we first assign 1 to the _value field. Surprisingly, though, it is possible, because:

  • The compiler, CLR or CPU may reorder your program’s instructions to improve efficiency.
  • The compiler, CLR or CPU may introduce caching optimizations, so that assignments to variables are not visible to other threads right away.

Okay, so now the question arises: how do we solve such a challenge? There are at least a few solutions, but the one usually preferred is to create memory fences. In short, a full memory barrier (fence) prevents any kind of instruction reordering or caching around that barrier. In C# you call Thread.MemoryBarrier() to generate such a fence. In the above example the following code would solve our problem:
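A sketch of the fenced version, following the four-barrier arrangement described in Threading in C# (barriers 1 and 4 are the ones strictly required for correctness; 2 and 3 ensure the write and read happen promptly):

```csharp
using System;
using System.Threading;

class Foo
{
    int _value;
    bool _done;

    public void A()
    {
        _value = 1;
        Thread.MemoryBarrier(); // barrier 1: _value is written before _done
        _done = true;
        Thread.MemoryBarrier(); // barrier 2: _done is flushed promptly
    }

    public void B()
    {
        Thread.MemoryBarrier(); // barrier 3: forces a fresh read of _done
        if (_done)
        {
            Thread.MemoryBarrier(); // barrier 4: forces a fresh read of _value
            Console.WriteLine(_value); // now guaranteed to print "1"
        }
    }
}
```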

If you would like to know more about the different challenges and problems that you can encounter while building multi-threaded applications, I strongly recommend a free e-book by Joseph Albahari: Threading in C#. If you know some other interesting scenarios connected with threading in C#, feel free to share them with us by leaving a comment!

.Net Team Lead at Aspire Systems Poland. Dedicated to .NET technologies, especially ASP.NET MVC. After hours plays squash, reads fantasy books and spends time with friends.