Article 41 - Programming Thread Safety

The emergence of multi-core and multi-threaded processors has led to an increasing number of software programs that are multi-threaded. While it is a good idea to support multi-threading, there are factors to consider to ensure that the program executes as intended; otherwise, it may suffer from defects such as race conditions. In fact, older programs were already written with multi-threaded support even on single-processor machines, in particular under older operating systems that enable multi-tasking.

A previously published article, Article 34 - Programming Threads, may also be of interest.

Critical Section

A critical section is a fragment of the program that contains statements that access or modify shared memory (variables). Often, the critical section requires mutual exclusion, that is, at most one thread may execute it at any given time.

For example, a critical section may be within a function that increments a global counter and returns the updated value.
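
As an illustration (not from the original article; the class and method names are hypothetical), such a critical section in Java might look like this, with and without mutual exclusion:

    public class Counter {
        private static int count = 0;

        // Without mutual exclusion the read-increment-write below is a race
        // condition when called from multiple threads.
        public static int incrementUnsafe() {
            return ++count;         // critical section: read, add one, write back
        }

        // One common fix in Java: make the critical section mutually exclusive.
        public static synchronized int incrementSafe() {
            return ++count;
        }
    }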

Locks and Semaphores

Locks and semaphores are basic building blocks for supporting mutual exclusion. A lock is normally a binary object (set to indicate that it is locked, otherwise it is unlocked), while a semaphore holds an integer count, usually ranging from 0 to n, although binary semaphores (locks, in other words) also occur. Both must support atomic operations.

Given a lock object such as a semaphore, there are two operations: acquire and release. The acquire operation is required prior to a critical section; it checks whether the lock object is currently locked. If not, the thread is permitted to continue into the critical section. If the lock object is locked (is set), then the thread must continue to check until the lock is unlocked. This may involve busy waiting, in which the processor spins in a loop, repeatedly checking the lock condition without making progress. There are mechanisms to defer lock checking, such as the waiting and notifying system in Java, but those are advanced topics beyond this article.
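
For instance (an illustration using the Java standard library rather than anything from the original article), ReentrantLock exposes exactly this acquire/release pattern around a critical section:

    import java.util.concurrent.locks.ReentrantLock;

    public class LockedCounter {
        private final ReentrantLock lock = new ReentrantLock();
        private int count = 0;

        public int increment() {
            lock.lock();            // acquire: blocks until the lock is free
            try {
                return ++count;     // critical section
            } finally {
                lock.unlock();      // release, even if the critical section throws
            }
        }
    }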

A semaphore is very similar; instead of providing mutual exclusion (unless a binary semaphore is used), it can permit up to n threads to access the shared object. A semaphore provides the operations V() and P(), which increment and decrement the count, respectively. This is often used in producer and consumer systems. Pay particular attention to the two variables used (emptyCount and fullCount) in the example below.

Definition of a Semaphore.
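
The original listing is not reproduced here, so the following is only a minimal Java sketch of how such a semaphore might be defined, using busy-waiting (as described above) rather than Java's wait/notify mechanism; the class and method names simply follow the P()/V() terminology used in this article.

    public class Semaphore {
        private int count;

        public Semaphore(int initialCount) {
            this.count = initialCount;
        }

        // P(): decrement the count; busy-wait while it is zero.
        public void P() {
            while (true) {
                synchronized (this) {
                    if (count > 0) {
                        count--;
                        return;
                    }
                }
                // otherwise spin (busy-waiting) and try again
            }
        }

        // V(): increment the count.
        public synchronized void V() {
            count++;
        }
    }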

Using a Semaphore to guard against critical sections.
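
Again, this is a hedged sketch rather than the article's own listing: a binary semaphore (initial count of 1) built from the Semaphore sketch above can guard the counter-increment critical section from earlier. The class name GuardedCounter is hypothetical.

    public class GuardedCounter {
        private static final Semaphore mutex = new Semaphore(1); // binary semaphore
        private static int count = 0;

        public static int increment() {
            mutex.P();              // acquire before entering the critical section
            try {
                return ++count;     // critical section: shared counter update
            } finally {
                mutex.V();          // release afterwards, even if an exception occurs
            }
        }
    }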

The complete definition of a Semaphore and its usage.
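
The combined listing is likewise unavailable, so the sketch below assumes a classic bounded-buffer producer/consumer arrangement in order to show emptyCount and fullCount (the two variables highlighted earlier) together with a mutex built from the same Semaphore sketch; the buffer type and capacity are assumptions.

    import java.util.ArrayDeque;
    import java.util.Queue;

    public class ProducerConsumer {
        private static final int CAPACITY = 10;                              // assumed buffer size
        private static final Semaphore emptyCount = new Semaphore(CAPACITY); // free slots
        private static final Semaphore fullCount  = new Semaphore(0);        // filled slots
        private static final Semaphore mutex      = new Semaphore(1);        // guards the buffer
        private static final Queue<Integer> buffer = new ArrayDeque<>();

        static void produce(int item) {
            emptyCount.P();         // wait for a free slot
            mutex.P();
            buffer.add(item);       // critical section: modify the shared buffer
            mutex.V();
            fullCount.V();          // signal that one more item is available
        }

        static int consume() {
            fullCount.P();          // wait for an available item
            mutex.P();
            int item = buffer.remove();
            mutex.V();
            emptyCount.V();         // signal that one more slot is free
            return item;
        }
    }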

Implementation

There are only a limited number of known methods to implement a locking structure. Although some hardware architectures provide atomic operations such as compare-and-swap or test-and-set, it is good to understand a low-level software approach as well.

Recall that in compare-and-swap, an expected (old) value is compared to the current value of the shared access variable, and only if they match is the variable set to the new value. Thus, three values are required: the variable, the old value, and the new value.

Definition and usage of a compare-and-swap lock.
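
The original listing is not shown, but a minimal Java sketch of a compare-and-swap lock can be built on the standard AtomicBoolean class, whose compareAndSet(expectedOldValue, newValue) only writes when the current value matches the expected old value; the class name CasLock is hypothetical.

    import java.util.concurrent.atomic.AtomicBoolean;

    public class CasLock {
        // false = unlocked, true = locked
        private final AtomicBoolean locked = new AtomicBoolean(false);

        public void acquire() {
            // Keep trying to swap false -> true; the swap only succeeds
            // (and only writes) when the lock is currently unlocked.
            while (!locked.compareAndSet(false, true)) {
                // busy-wait (spin) until the swap succeeds
            }
        }

        public void release() {
            locked.set(false);
        }
    }

    // Usage:
    //     lock.acquire();
    //     /* critical section */
    //     lock.release();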

Recall that test-and-set relies on busy-waiting. Each thread sets the shared access variable to true (indicating that it wants to enter the critical section), and the function it calls returns the old value. The thread must keep trying in this busy-waiting loop: if another thread is already within the critical section, the function returns true (as set by the previous thread that entered). Thus, the conditional statement checks whether the returned value is true or not.

Definition and usage of a test-and-set lock.
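
Likewise, here is a hedged sketch of a test-and-set lock in Java: AtomicBoolean's getAndSet(true) writes true every time and returns the old value, which matches the behaviour described above; the class name TasLock is hypothetical.

    import java.util.concurrent.atomic.AtomicBoolean;

    public class TasLock {
        private final AtomicBoolean locked = new AtomicBoolean(false);

        public void acquire() {
            // getAndSet(true) always writes true and returns the previous value.
            // A return value of true means another thread already holds the lock,
            // so keep spinning (busy-waiting) until false is returned.
            while (locked.getAndSet(true)) {
                // spin
            }
        }

        public void release() {
            locked.set(false);
        }
    }

    // Usage:
    //     lock.acquire();
    //     /* critical section */
    //     lock.release();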

Note that the difference between compare-and-swap and test-and-set is that test-and-set modifies the memory location every time, whereas compare-and-swap only modifies the memory location if the old value matches the value stored in the variable. In this sense, compare-and-swap can be faster, since writing to memory often requires more machine cycles.

One software approach (on a single-processor machine) is to set the process priority to high so that the current thread is never evicted by preemption or by interrupts. The order of operations is to set the priority to high and disable interrupts, enter and execute the critical section code, and finally set the priority back to low and re-enable interrupts.

There are issues with this approach: it does not work on multi-processor machines, and if the critical section is stuck in an infinite loop, there is no way to recover, since interrupts (such as the system clock) are disabled.

Another approach (which can be extended to multi-processor systems) is to use algorithms that rely on atomic operations provided by the hardware. We will not discuss any of these algorithms in this article.

Remember to ensure thread safety when architecting your multi-threaded application.
