PLC Session 10

Session 10: Concurrency

A. Introduction

Concurrency is the property of a program, algorithm, or problem that allows it to be decomposed into order-independent or partially ordered components or units.

 

Concurrency can occur at four levels:

  • Machine instruction level
  • High-level language statement level
  • Unit level
  • Program level

There are three types of concurrency: quasi-concurrency, physical concurrency, and logical concurrency.

Quasi-concurrency: the execution of symmetric units in which a number of coroutines cooperate to intertwine their execution sequences. It supports a single thread of control.

Physical concurrency: several program units from the same program literally execute simultaneously, on separate processors. It supports multiple threads of control.

Logical concurrency: programs appear, to the programmer and the application, to have multiple processors providing actual concurrency, when in fact their execution takes place in interleaved fashion on a single processor.

B. Introduction to Subprogram-Level Concurrency

 

Subprograms are concurrent when more than one program unit executes at the same time (physically on separate processors, or logically in some time-sliced fashion on a single-processor computer system).

In concurrent programming, concurrent accesses to shared resources can lead to unexpected or erroneous behavior, so the parts of the program where the shared resource is accessed must be protected. This protected section is the critical section.

A critical section is thus a set of instructions that must be controlled so as to allow exclusive access to one process at a time.

A solution to the critical section problem must meet three conditions:

  1. mutual exclusion: if process X is executing in its critical section, no other process is executing in its critical section.
  2. progress: if no process is executing in its critical section and there exists some processes that wish to enter their critical sections, then only those processes that are not executing in their remainder section can participate in the decision of which will enter its critical section next, and this decision cannot be postponed indefinitely.
  3. bounded waiting: there must exist a bound on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
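
To make the critical section concrete, here is a minimal, illustrative sketch in Java (the class and field names are invented for the example): java.util.concurrent.locks.ReentrantLock guarantees mutually exclusive access to the shared balance.

```java
import java.util.concurrent.locks.ReentrantLock;

public class BankAccount {
    private final ReentrantLock lock = new ReentrantLock(); // guards the critical section
    private long balance = 0;

    public void deposit(long amount) {
        lock.lock();           // only one thread may pass at a time (mutual exclusion)
        try {
            balance += amount; // critical section: access to the shared resource
        } finally {
            lock.unlock();     // always release, even if the critical section throws
        }
    }

    public long getBalance() {
        lock.lock();
        try {
            return balance;
        } finally {
            lock.unlock();
        }
    }
}
```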

 

C. Difference between Concurrency and Parallel computing

Concurrency means that two or more tasks can start, run, and complete in overlapping time periods; they are not necessarily executing at the same instant (their execution may be interleaved on a single processor).

Parallelism requires that at least two processes/tasks are literally executing at the same moment in time, for example on separate processor cores.

 

D. Subprogram-level Concurrency

  1. Task

A task is a unit of a program that can be in concurrent execution with other units of the same program. Tasks usually work together.

A task provides one thread of control in a program and can communicate with other tasks through shared nonlocal variables, through message passing, or through parameters.

 

General categories of tasks:

  • A heavyweight task executes in its own address space.

Advantage: tasks are well isolated from one another.

Disadvantage: high memory consumption and costly task creation and switching.

  • Lightweight tasks all run in the same address space.

Advantage: less memory consumption and more efficient task switching.

Disadvantage: concurrent access to the shared address space must be carefully synchronized.

  • A task is disjoint if it does not communicate with or affect the execution of any other task in the program in any way.

  2. Task Synchronization

Synchronization is a mechanism that controls the order in which tasks execute.

 

Two kinds of Synchronization:

1. Cooperation synchronization: a task must wait for another task before it can continue.

Example of cooperation synchronization:

The producer-consumer problem: the consumer has to wait for the producer to produce a product before the consumer can do its task (consuming the product). A minimal sketch follows.
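
As a hedged illustration of this cooperation in Java (the names are invented for the example; java.util.concurrent.CountDownLatch is just one way to express the waiting): the consumer below waits until the producer signals that the product is ready.

```java
import java.util.concurrent.CountDownLatch;

public class CooperationDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch produced = new CountDownLatch(1); // "product is ready" signal

        Thread producer = new Thread(() -> {
            System.out.println("Producer: producing the product...");
            produced.countDown();                         // signal the consumer
        });

        Thread consumer = new Thread(() -> {
            try {
                produced.await();                         // wait for the producer
                System.out.println("Consumer: got the product, doing its task.");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        consumer.start();
        producer.start();
        consumer.join();
        producer.join();
    }
}
```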

2. Competition synchronization: both tasks require the use of some resource that cannot be used simultaneously.

 

To provide competition synchronization, mutually exclusive access to the shared resource must be guaranteed.

A program called a scheduler manages the sharing of processors among tasks.

 

Example of competition synchronization:

A shared counter, as sketched below.
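
A minimal sketch of the shared-counter case (class and method names are illustrative): two threads increment the same counter, and the synchronized methods provide the mutually exclusive access that competition synchronization requires; without them, interleaved read-add-write sequences could lose updates.

```java
public class SharedCounter {
    private int count = 0;

    // synchronized guarantees mutually exclusive access to the shared counter,
    // so concurrent increments cannot interleave and lose updates
    public synchronized void increment() {
        count++;            // read, add, write: unsafe if two threads overlap here
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        SharedCounter counter = new SharedCounter();
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) counter.increment();
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println(counter.get()); // 200000 with synchronization
    }
}
```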

 

  3. Task Execution States

Task or thread life cycle (the execution states of a task from start to end):

  1. New− A new thread begins its life cycle in the new state. It remains in this state until the program starts the thread. It is also referred to as a born thread.
  2. Runnable− After a newly born thread is started, the thread becomes runnable. A thread in this state is considered to be executing its task.
  3. Waiting− Sometimes, a thread transitions to the waiting state while the thread waits for another thread to perform a task. A thread transitions back to the runnable state only when another thread signals the waiting thread to continue executing.
  4. Timed Waiting− A runnable thread can enter the timed waiting state for a specified interval of time. A thread in this state transitions back to the runnable state when that time interval expires or when the event it is waiting for occurs.
  5. Terminated (Dead)− A runnable thread enters the terminated state when it completes its task or otherwise terminates. It is then no longer active in any sense.
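
These states correspond closely to Java's Thread.State values, so they can be observed with Thread.getState(); the following is a small, illustrative sketch (the class name and the sleep-based timing are assumptions for the example).

```java
public class ThreadStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();
        Thread worker = new Thread(() -> {
            synchronized (lock) {
                try {
                    lock.wait();   // worker suspends itself until it is notified
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        System.out.println(worker.getState()); // NEW: created but not yet started
        worker.start();
        Thread.sleep(100);                      // give the worker time to reach wait()
        System.out.println(worker.getState()); // WAITING: waiting for another thread's signal
        synchronized (lock) {
            lock.notify();                      // wake the worker up
        }
        worker.join();
        System.out.println(worker.getState()); // TERMINATED: the task has completed
    }
}
```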

 

  4. Thread Liveness Problems

Liveness is a concurrent application’s ability to execute in a timely manner.  A kind of “availability of executions”.

 

Liveness problems that can happen are:

  • Deadlock

Deadlock describes a situation where two or more threads are blocked forever, waiting for each other.

For example:

Alphonse and Gaston are friends, and great believers in courtesy. A strict rule of courtesy is that when you bow to a friend, you must remain bowed until your friend has a chance to return the bow. Unfortunately, this rule does not account for the possibility that two friends might bow to each other at the same time.
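
A hedged sketch of how that situation looks in code (the lock objects and names are invented for the example): each thread acquires one lock and then waits forever for the lock the other thread holds. Running it will typically hang; acquiring the locks in one fixed global order would avoid the deadlock.

```java
public class DeadlockDemo {
    private static final Object bowA = new Object();
    private static final Object bowB = new Object();

    public static void main(String[] args) {
        Thread alphonse = new Thread(() -> {
            synchronized (bowA) {                 // Alphonse bows (takes lock A)
                pause();                          // give Gaston time to take lock B
                synchronized (bowB) {             // waits forever: Gaston holds B
                    System.out.println("Alphonse bows back");
                }
            }
        });
        Thread gaston = new Thread(() -> {
            synchronized (bowB) {                 // Gaston bows (takes lock B)
                pause();
                synchronized (bowA) {             // waits forever: Alphonse holds A
                    System.out.println("Gaston bows back");
                }
            }
        });
        alphonse.start();
        gaston.start();                           // both threads block forever
    }

    private static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```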

 

  • Starvation happens if a thread “greedily” occupies a shared resource by holding its lock for too long, without ever yielding, so that other threads are unable to make progress.

For example: a thread that spins (busy-waits) while holding the lock.

 

  • Livelock means that two threads block each other’s progress only logically (unlike deadlock, neither thread is actually blocked). Each thread is still running, but neither makes any progress. Think of two people in a hallway who cannot pass each other: when one steps left to give way, the other steps right for the same purpose, and this repeats forever.

 

E. Methods of Providing Synchronization

There are three such methods:

  • Semaphores
  • Monitors
  • Message Passing

 

  1. Semaphores

A very simple mechanism used to provide synchronization of tasks. A semaphore provides limited access to a data structure by placing guards around the code that accesses the structure.

Using semaphores to provide cooperation and competition synchronization creates an unsafe programming environment, because there is no way to statically check that they are used correctly.

 

Semaphores are of two types:

  • Binary Semaphore
  • Counting semaphore

 

A binary semaphore can take only the values 0 and 1. A counting semaphore can take any non-negative integer value.
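
As an illustrative sketch using java.util.concurrent.Semaphore (the guarded resource and names are assumptions for the example): a counting semaphore with three permits limits the resource to three tasks at a time, while a semaphore with a single permit behaves as a binary semaphore, i.e. a simple lock.

```java
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    // counting semaphore: at most 3 tasks may use the pooled resource at once
    private static final Semaphore pool = new Semaphore(3);
    // binary semaphore: values 0 and 1 only, i.e. a simple lock
    private static final Semaphore mutex = new Semaphore(1);

    public static void useResource(int id) throws InterruptedException {
        pool.acquire();                    // wait (P operation) until a permit is free
        try {
            mutex.acquire();               // guard the shared output
            try {
                System.out.println("Task " + id + " is using the resource");
            } finally {
                mutex.release();           // release (V operation)
            }
            Thread.sleep(50);              // simulate work while holding a pool permit
        } finally {
            pool.release();
        }
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 6; i++) {
            final int id = i;
            new Thread(() -> {
                try { useResource(id); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }).start();
        }
    }
}
```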

 

  2. Monitors

A monitor is a set of multiple routines which are protected by a mutual exclusion lock. None of the routines in the monitor can be executed by a thread until that thread acquires the lock. This means that only ONE thread can execute within the monitor at a time. Any other threads must wait for the thread that’s currently executing to give up control of the lock.

However, a thread can actually suspend itself inside a monitor and then wait for an event to occur. If this happens, then another thread is given the opportunity to enter the monitor. The thread that was suspended will eventually be notified that the event it was waiting for has now occurred, which means it can wake up and reacquire the lock.

Monitors are a better way to provide competition synchronization than semaphores. However, cooperation synchronization is still subject to the same problems as with semaphores.
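
In Java, every object can act as a monitor: synchronized methods give the mutual exclusion (competition synchronization), and wait/notifyAll provide the cooperation synchronization. A minimal, illustrative bounded-buffer monitor (the class name and capacity are assumptions for the example):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class BoundedBuffer {
    private final Deque<Integer> items = new ArrayDeque<>();
    private final int capacity = 10;

    // Only one thread can be inside any synchronized method at a time (the monitor lock).
    public synchronized void put(int item) throws InterruptedException {
        while (items.size() == capacity) {
            wait();                 // producer suspends itself inside the monitor
        }
        items.addLast(item);
        notifyAll();                // wake any consumer waiting for an item
    }

    public synchronized int take() throws InterruptedException {
        while (items.isEmpty()) {
            wait();                 // consumer suspends itself until an item arrives
        }
        int item = items.removeFirst();
        notifyAll();                // wake any producer waiting for free space
        return item;
    }
}
```

A producer thread calls put and a consumer thread calls take; whichever thread finds the buffer full or empty suspends itself inside the monitor until the other thread signals it.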

 

 

Differences between Monitors and Semaphores

Both Monitors and Semaphores are used for the same purpose – thread synchronization. But, monitors are simpler to use than semaphores because they handle all of the details of lock acquisition and release. An application using semaphores has to release any locks a thread has acquired when the application terminates – this must be done by the application itself. If the application does not do this, then any other thread that needs the shared resource will not be able to proceed.

Another difference when using semaphores is that every routine accessing a shared resource has to explicitly acquire a lock before using the resource. This can easily be forgotten when coding the routines dealing with multithreading. Monitors, unlike semaphores, automatically acquire the necessary locks.

  3. Message Passing

Message passing can be synchronous or asynchronous. Cooperation and competition synchronization of tasks can be conveniently handled with message passing.

Message passing models (Erlang, for example) do not have any shared state; all synchronization and communication is done by exchanging messages.
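
Java has no built-in Erlang-style mailboxes, but the message-passing style can be sketched with a java.util.concurrent.BlockingQueue acting as the channel (the names and the STOP sentinel are assumptions for the example): the two tasks share no mutable state and communicate only by exchanging messages.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MessagePassingDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(4); // the "channel"

        Thread sender = new Thread(() -> {
            try {
                mailbox.put("hello");   // blocks if the mailbox is full
                mailbox.put("world");
                mailbox.put("STOP");    // sentinel message to end the conversation
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread receiver = new Thread(() -> {
            try {
                while (true) {
                    String msg = mailbox.take();      // blocks until a message arrives
                    if (msg.equals("STOP")) break;
                    System.out.println("received: " + msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        sender.start();
        receiver.start();
        sender.join();
        receiver.join();
    }
}
```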

Source(s):

https://en.wikipedia.org/wiki/Concurrency_(computer_science)

Concept of Programming Languages – Chapter 13 – Concurrency

https://en.wikipedia.org/wiki/Critical_section

http://stackoverflow.com/questions/1897993/difference-between-concurrent-programming-and-parallel-programming

https://cs.nyu.edu/courses/spring98/G22.2110/class12.html

https://www.quora.com/What-is-the-difference-between-concurrency-and-parallelism

http://www2.cs.uregina.ca/~hamilton/courses/330/notes/synchro/node2.html

https://www.tutorialspoint.com/java/java_multithreading.htm

https://docs.oracle.com/javase/tutorial/essential/concurrency/deadlock.html

Thread liveness problems: deadlock, starvation and livelock

https://www.tutorialspoint.com/operating_system/os_semaphores_qa1.htm

http://stackoverflow.com/questions/1853284/whats-the-difference-between-the-message-passing-and-shared-memory-concurrency

 

Slide Binus Maya
