1. Thread Synchronisation
2. Locks
3. Deadlock
4. Semaphores
5. Mutex (mutual exclusion)
6. Thread
7. Event
8. Waitable timer
Answer:
1. Selvam (2004) defines Thread Synchronisation as follows: in a multithreaded environment, each thread has its own local stack and registers. If multiple threads read and write the same resource without coordination, the resulting value may not be correct. For example, suppose an application contains two threads, one reading content from a file and another writing content to the same file. If the write thread writes while the read thread is reading the same data, the data might become corrupted. In this situation, we want to lock the file access. A thread synchronization object has two states: signaled and non-signaled.
The signaled state allows threads to access and modify the data; the non-signaled state does not allow the data to be accessed or modified.
Several kinds of thread synchronization objects, described below, are used to coordinate multiple threads.
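As a small illustration of this locking idea (my own sketch, not Selvam's code, assuming the Win32 API that the cited article targets), the fragment below lets a writer thread and a reader thread share one buffer; the critical section acts as the lock that keeps one thread out while the other is using the data:

    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    static CRITICAL_SECTION g_cs;                  /* guards the shared buffer */
    static char g_buffer[64] = "initial contents";

    static DWORD WINAPI WriterThread(LPVOID arg)
    {
        (void)arg;
        EnterCriticalSection(&g_cs);               /* lock the shared resource */
        strncpy(g_buffer, "updated contents", sizeof(g_buffer) - 1);
        LeaveCriticalSection(&g_cs);               /* unlock when finished */
        return 0;
    }

    static DWORD WINAPI ReaderThread(LPVOID arg)
    {
        (void)arg;
        EnterCriticalSection(&g_cs);               /* reader never sees a half-written buffer */
        printf("read: %s\n", g_buffer);
        LeaveCriticalSection(&g_cs);
        return 0;
    }

    int main(void)
    {
        InitializeCriticalSection(&g_cs);
        HANDLE h[2];
        h[0] = CreateThread(NULL, 0, WriterThread, NULL, 0, NULL);
        h[1] = CreateThread(NULL, 0, ReaderThread, NULL, 0, NULL);
        WaitForMultipleObjects(2, h, TRUE, INFINITE);
        CloseHandle(h[0]);
        CloseHandle(h[1]);
        DeleteCriticalSection(&g_cs);
        return 0;
    }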
2. Wang (2001) states that there are two types of locks: read locks and write locks. A read/write lock manager allows a server to manage its data resource for client read and write requests. Let's review the difference between a read lock and a write lock. The following table lists the locking levels and the compatibility between them.

Requested lock    Already granted: read    Already granted: write
Read              yes                      no
Write             no                       no
A read lock is required before reading a data item; a write lock is required before writing a data item. Multiple parties can read a data item at the same time with no problem. This is indicated in the table at the intersection of the requested lock (read) and the already granted lock (also read), where we find the 'yes' compatibility label. On the other hand, one cannot write a data item while there are still readers, and similarly one cannot read a data item while there is a writer. For practice, look up these two statements in the compatibility table. Any client that wants to read a data item should first obtain a read lock and release it after the data item is read. Any client that wants to write a data item should first obtain a write lock and release it after the data item is written. (Wang, 2001)
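Wang's article implements read/write locks in Java; as a rough Win32 analogue (an assumption on my part, not Wang's code), the sketch below uses the built-in slim reader/writer (SRW) lock, which grants shared access to readers and exclusive access to a single writer:

    #include <windows.h>
    #include <stdio.h>

    static SRWLOCK g_lock  = SRWLOCK_INIT;   /* read/write lock */
    static int     g_value = 0;              /* shared data item */

    static DWORD WINAPI Reader(LPVOID arg)
    {
        (void)arg;
        AcquireSRWLockShared(&g_lock);       /* many readers may hold this at once */
        printf("reader sees %d\n", g_value);
        ReleaseSRWLockShared(&g_lock);
        return 0;
    }

    static DWORD WINAPI Writer(LPVOID arg)
    {
        (void)arg;
        AcquireSRWLockExclusive(&g_lock);    /* writer waits until no readers or writers remain */
        g_value++;
        ReleaseSRWLockExclusive(&g_lock);
        return 0;
    }

    int main(void)
    {
        HANDLE h[3];
        h[0] = CreateThread(NULL, 0, Reader, NULL, 0, NULL);
        h[1] = CreateThread(NULL, 0, Writer, NULL, 0, NULL);
        h[2] = CreateThread(NULL, 0, Reader, NULL, 0, NULL);
        WaitForMultipleObjects(3, h, TRUE, INFINITE);
        for (int i = 0; i < 3; i++) CloseHandle(h[i]);
        return 0;
    }

The two reader threads can hold the lock concurrently, which corresponds to the 'yes' cell in the compatibility table; the writer must wait for exclusive access.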
3. Wikipedia (2009) describes a deadlock as a situation wherein two or more competing actions are each waiting for the other to finish, and thus neither ever does. It is often illustrated with a paradox like "the chicken or the egg", or with the illogical statute passed by the Kansas Legislature: "When two trains approach each other at a crossing, both shall come to a full stop and neither shall start up again until the other has gone."
In computer science, deadlock refers to a specific condition when two or more processes are each waiting for each other to release a resource, or more than two processes are waiting for resources in a circular chain (see Necessary conditions). Deadlock is a common problem in multiprocessing where many processes share a specific type of mutually exclusive resource known as a software, or soft, lock. Computers intended for the time-sharing and/or real-time markets are often equipped with a hardware lock (or hard lock) which guarantees exclusive access to processes, forcing serialization. Deadlocks are particularly troubling because there is no general solution to avoid (soft) deadlocks.
This situation may be likened to two people who are drawing diagrams with only one pencil and one ruler between them. If one person takes the pencil and the other takes the ruler, a deadlock occurs when the person holding the pencil needs the ruler and the person holding the ruler needs the pencil: neither request can be satisfied, so neither person can finish.
The telecommunications description of deadlock is a little stronger: deadlock occurs when none of the processes meet the condition to move to another state (as described in the process's finite state machine) and all the communication channels are empty. The second condition is often left out on other systems but is important in the telecommunication context.
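To make the circular wait concrete, here is a deliberately broken sketch (my own, not from the cited article): two Win32 mutexes stand in for the pencil and the ruler, and the two threads acquire them in opposite order, so the program can hang forever:

    #include <windows.h>
    #include <stdio.h>

    static HANDLE g_pencil, g_ruler;   /* two mutually exclusive resources */

    static DWORD WINAPI PersonA(LPVOID arg)
    {
        (void)arg;
        WaitForSingleObject(g_pencil, INFINITE);  /* A grabs the pencil... */
        Sleep(100);
        WaitForSingleObject(g_ruler, INFINITE);   /* ...then waits for the ruler */
        printf("A finished drawing\n");
        ReleaseMutex(g_ruler);
        ReleaseMutex(g_pencil);
        return 0;
    }

    static DWORD WINAPI PersonB(LPVOID arg)
    {
        (void)arg;
        WaitForSingleObject(g_ruler, INFINITE);   /* B grabs the ruler... */
        Sleep(100);
        WaitForSingleObject(g_pencil, INFINITE);  /* ...then waits for the pencil: circular wait */
        printf("B finished drawing\n");
        ReleaseMutex(g_pencil);
        ReleaseMutex(g_ruler);
        return 0;
    }

    int main(void)
    {
        g_pencil = CreateMutex(NULL, FALSE, NULL);
        g_ruler  = CreateMutex(NULL, FALSE, NULL);
        HANDLE h[2];
        h[0] = CreateThread(NULL, 0, PersonA, NULL, 0, NULL);
        h[1] = CreateThread(NULL, 0, PersonB, NULL, 0, NULL);
        WaitForMultipleObjects(2, h, TRUE, INFINITE);  /* with the Sleep calls, this almost always hangs */
        return 0;
    }

Acquiring both mutexes in the same fixed order in every thread removes the circular wait and therefore the deadlock.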
4. Selvam (2004) shows that a semaphore is used to synchronize threads. A semaphore is a thread synchronization object that allows anywhere from zero up to a specified maximum number of threads to access a resource simultaneously.
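A minimal sketch (again my own, assuming the Win32 API): a semaphore created with a maximum count of 2 lets at most two of the five worker threads use the resource at the same time:

    #include <windows.h>
    #include <stdio.h>

    static HANDLE g_sem;   /* at most 2 threads inside the guarded section */

    static DWORD WINAPI Worker(LPVOID arg)
    {
        int id = (int)(INT_PTR)arg;
        WaitForSingleObject(g_sem, INFINITE);       /* decrement the count, or block at zero */
        printf("worker %d is using the resource\n", id);
        Sleep(200);
        ReleaseSemaphore(g_sem, 1, NULL);           /* increment the count again */
        return 0;
    }

    int main(void)
    {
        HANDLE h[5];
        g_sem = CreateSemaphore(NULL, 2, 2, NULL);  /* initial count 2, maximum count 2 */
        for (int i = 0; i < 5; i++)
            h[i] = CreateThread(NULL, 0, Worker, (LPVOID)(INT_PTR)i, 0, NULL);
        WaitForMultipleObjects(5, h, TRUE, INFINITE);
        for (int i = 0; i < 5; i++) CloseHandle(h[i]);
        CloseHandle(g_sem);
        return 0;
    }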
5. Webopedia (2009) points out that Mutex is short for Mutual Exclusion object. In computer programming, a mutex is a program object that allows multiple program threads to share the same resource, such as file access, but not simultaneously. When a program is started, a mutex is created with a unique name. After this stage, any thread that needs the resource must lock the mutex from other threads while it is using the resource. The mutex is set to unlock when the data is no longer needed or the routine is finished.
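The create/lock/unlock cycle described above can be sketched with the Win32 mutex functions as follows (an illustration of the general idea rather than Webopedia's example; the mutex name used here is just a placeholder):

    #include <windows.h>
    #include <stdio.h>

    static HANDLE g_fileMutex;   /* protects access to a shared resource */
    static int    g_counter = 0;

    static DWORD WINAPI Worker(LPVOID arg)
    {
        (void)arg;
        WaitForSingleObject(g_fileMutex, INFINITE);  /* lock: only one thread past this point */
        g_counter++;                                 /* exclusive use of the shared resource */
        ReleaseMutex(g_fileMutex);                   /* unlock when finished */
        return 0;
    }

    int main(void)
    {
        HANDLE h[4];
        /* Create the mutex once, when the program starts; the name is optional. */
        g_fileMutex = CreateMutex(NULL, FALSE, TEXT("Local\\DemoFileMutex"));
        for (int i = 0; i < 4; i++)
            h[i] = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
        WaitForMultipleObjects(4, h, TRUE, INFINITE);
        printf("counter = %d\n", g_counter);         /* always 4 thanks to the mutex */
        for (int i = 0; i < 4; i++) CloseHandle(h[i]);
        CloseHandle(g_fileMutex);
        return 0;
    }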
6. Whatis.com (2009) states that in computer programming, a thread is placeholder information associated with a single use of a program that can handle multiple concurrent users. From the program's point of view, a thread is the information needed to serve one individual user or a particular service request. If multiple users are using the program, or concurrent requests from other programs occur, a thread is created and maintained for each of them. The thread allows a program to know which user is being served as the program is alternately re-entered on behalf of different users. (One way thread information is kept is by storing it in a special data area and putting the address of that data area in a register. The operating system always saves the contents of the register when the program is interrupted and restores it when it gives the program control again.)
A thread and a task are similar and are often confused. Most computers can only execute one program instruction at a time, but because they operate so fast, they appear to run many programs and serve many users simultaneously. The computer operating system gives each program a "turn" at running, then requires it to wait while another program gets a turn. Each of these programs is viewed by the operating system as a task for which certain resources are identified and kept track of. The operating system manages each application program in your PC system (spreadsheet, word processor, Web browser) as a separate task and lets you look at and control items on a task list. If the program initiates an I/O request, such as reading a file or writing to a printer, it creates a thread. The data kept as part of a thread allows a program to be reentered at the right place when the I/O operation completes. Meanwhile, other concurrent uses of the program are maintained on other threads. Most of today's operating systems provide support for both multitasking and multithreading. They also allow multithreading within program processes so that the system is saved the overhead of creating a new process for each thread.
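For concreteness, a minimal sketch (assuming the Win32 API used by the other cited sources) that creates one thread per simulated request, so each request is served on its own thread:

    #include <windows.h>
    #include <stdio.h>

    /* Each thread serves one simulated request. */
    static DWORD WINAPI ServeRequest(LPVOID arg)
    {
        int requestId = (int)(INT_PTR)arg;
        printf("serving request %d on thread %lu\n", requestId, GetCurrentThreadId());
        return 0;
    }

    int main(void)
    {
        HANDLE h[2];
        for (int i = 0; i < 2; i++)
            h[i] = CreateThread(NULL, 0, ServeRequest, (LPVOID)(INT_PTR)(i + 1), 0, NULL);
        WaitForMultipleObjects(2, h, TRUE, INFINITE);   /* wait for both requests to finish */
        for (int i = 0; i < 2; i++) CloseHandle(h[i]);
        return 0;
    }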
7. Selvam (2004) shows that an event is used to synchronize threads. An event is a thread synchronization object that is set to the signaled or non-signaled state; whether it resets manually or automatically depends on the event declaration.
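A short sketch (mine, not Selvam's) contrasting the two declarations: an auto-reset event returns to the non-signaled state after releasing a single waiting thread, while a manual-reset event stays signaled until it is reset explicitly:

    #include <windows.h>
    #include <stdio.h>

    static HANDLE g_autoEvt, g_manualEvt;

    static DWORD WINAPI Waiter(LPVOID arg)
    {
        HANDLE evt = (HANDLE)arg;
        WaitForSingleObject(evt, INFINITE);
        printf("thread %lu released\n", GetCurrentThreadId());
        return 0;
    }

    int main(void)
    {
        /* bManualReset = FALSE -> auto-reset; TRUE -> manual-reset. Both start non-signaled. */
        g_autoEvt   = CreateEvent(NULL, FALSE, FALSE, NULL);
        g_manualEvt = CreateEvent(NULL, TRUE,  FALSE, NULL);

        HANDLE a[2], m[2];
        for (int i = 0; i < 2; i++) {
            a[i] = CreateThread(NULL, 0, Waiter, g_autoEvt,   0, NULL);
            m[i] = CreateThread(NULL, 0, Waiter, g_manualEvt, 0, NULL);
        }

        SetEvent(g_autoEvt);     /* auto-reset: wakes exactly one waiter, then goes non-signaled */
        SetEvent(g_manualEvt);   /* manual-reset: stays signaled, so both waiters are released */
        Sleep(500);
        SetEvent(g_autoEvt);     /* release the second auto-reset waiter */

        WaitForMultipleObjects(2, a, TRUE, INFINITE);
        WaitForMultipleObjects(2, m, TRUE, INFINITE);
        return 0;
    }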
8. MSDN (2009) describes that a Waitable Timer object is a synchronization object whose state is set to signaled when the specified due time arrives. There are two types of waitable timers that can be created: manual-reset and synchronization. A timer of either type can also be a periodic timer.
Manual-reset timer - A timer whose state remains signaled until SetWaitableTimer is called to establish a new due time.
Synchronization timer - A timer whose state remains signaled until a thread completes a wait operation on the timer object.
Periodic timer - A timer that is reactivated each time the specified period expires, until the timer is reset or canceled. A periodic timer is either a periodic manual-reset timer or a periodic synchronization timer.
The behavior of a waitable timer can be summarized as follows:
- When a timer is set, it is canceled if it was already active, the state of the timer is set to nonsignaled, and the timer is placed in the kernel timer queue.
- When a timer expires, the timer is set to the signaled state. If the timer has a completion routine, it is queued to the thread that set the timer. The completion routine remains in the asynchronous procedure call (APC) queue of the thread until the thread enters an alertable wait state. At that time, the APC is dispatched and the completion routine is called. If the timer is periodic, it is placed back in the kernel timer queue.
- When a timer is canceled, it is removed from the kernel timer queue if it was pending. If the timer had expired and there is still an APC queued to the thread that set the timer, the APC is removed from the thread's APC queue. The signaled state of the timer is not affected.
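The usual calling sequence, condensed from the pattern documented on the cited MSDN page, is to create the timer, set a due time (negative values are relative and expressed in 100-nanosecond units), and wait for the handle to become signaled:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Manual-reset waitable timer, initially non-signaled. */
        HANDLE hTimer = CreateWaitableTimer(NULL, TRUE, NULL);
        if (hTimer == NULL) {
            printf("CreateWaitableTimer failed (%lu)\n", GetLastError());
            return 1;
        }

        LARGE_INTEGER dueTime;
        dueTime.QuadPart = -20000000LL;   /* relative due time: 2 seconds in 100-ns units */

        /* Period 0 = one-shot timer; no completion routine (APC) in this sketch. */
        if (!SetWaitableTimer(hTimer, &dueTime, 0, NULL, NULL, FALSE)) {
            printf("SetWaitableTimer failed (%lu)\n", GetLastError());
            return 1;
        }

        printf("waiting for the timer...\n");
        WaitForSingleObject(hTimer, INFINITE);   /* returns when the timer becomes signaled */
        printf("timer was signaled\n");

        CloseHandle(hTimer);
        return 0;
    }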
1. Selvam, R. (2004). "CodeProject: Thread Synchronization for Beginners". The Code Project, Retrieved from http://www.codeproject.com/KB/threads/Synchronization.aspx
2. Wang, T. (2001). "Java Thread Programming: Implement Read & Write Locks". Thomas Wang's Home Page, Retrieved from http://www.concentric.net/~Ttwang/tech/rwlock.htm
3. Wikipedia (2009). "Deadlock". Wikipedia, The Free Encyclopedia, Retrieved from http://en.wikipedia.org/wiki/Deadlock
4. Webopedia (2009). "What is Mutex?". Webopedia, Retrieved from http://www.webopedia.com/TERM/m/mutex.html
5. Whatis.com (2009). "What is Thread?". TechTarget, Retrieved from http://searchcio-midmarket.techtarget.com/sDefinition/0,,sid183_gci213139,00.html
6. MSDN (2009). "Waitable Timer Objects (Windows)". MSDN Microsoft Developer Network, Retrieved from http://msdn.microsoft.com/en-us/library/ms687012(VS.85).aspx