Concurrency control

By Marko Jovic in Clean C++

As Gordon Moore famously observed, the number of transistors we can pack onto a chip, and with it the overall processing power of our computers, doubles roughly every two years. Consequently, the number of CPU cores in your home or office machine keeps increasing. But is that trend followed by our programs? Are we utilizing all the processing power at our disposal?

To use this parallel hardware effectively, we write concurrent software. As a result, the responsiveness and throughput of our programs skyrocket and everything is perfect. Until it isn't. For all its benefits, concurrency introduces a whole new set of problems. Hard-to-detect problems.

Therefore, to avoid needless headaches and stress (we can agree that we already have enough), let’s consider some common issues and connect them with appropriate solutions.

As you surely know, concurrency is usually achieved through threads. By sharing mutable data between those threads, we are essentially opening the programming equivalent of Pandora's box. So the first piece of advice is exactly what you expect: do not share data between threads. If you have to, make that data immutable.

struct Data {
	int value {5};
};

// inside thread_1
void thread1Fun(const Data &data) {
	...
}

// inside thread_2
void thread2Fun(const Data &data) {
	...
}
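
To make this concrete, here is a minimal, self-contained sketch (the printValue function and the shared object are illustrative additions, not part of the snippet above) in which two threads only read the same immutable object, so no synchronization is needed:

#include <functional>
#include <iostream>
#include <thread>

struct Data {
    int value {5};
};

// Both threads only read the shared object, so there is no data race.
void printValue(const Data &data) {
    std::cout << data.value << '\n';  // output may interleave, but the access itself is safe
}

int main() {
    const Data shared {};                                 // immutable shared state
    std::thread thread_1(printValue, std::cref(shared));  // pass by const reference
    std::thread thread_2(printValue, std::cref(shared));
    thread_1.join();
    thread_2.join();
}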

If you still end up sharing mutable data (don't), then it's time to introduce a mutual exclusion mechanism. Such mechanisms, like locks (mutexes), semaphores, and monitors, prevent data races: situations where multiple threads access the same piece of memory at the same time and at least one of them writes to it.

#include <mutex>

std::mutex mtx;

void criticalFunction(ImportantData &data) {
    std::lock_guard<std::mutex> lock(mtx);
    // you can now safely access the data
    ...
}

In the example above you can see how to use a lock in conjunction with RAII. std::lock_guard is a simple class template that calls lock() on the argument in its constructor and unlock() in its destructor. Consequently, when the lock_guard goes out of scope, the lock is guaranteed to be unlocked. Generally speaking, you should work at the highest abstraction level you can and reuse already tested classes from the standard library. Save yourself some time and don’t try to reinvent the wheel.
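
If you are curious what such a wrapper boils down to, here is a stripped-down sketch of the RAII idea behind it (a hypothetical MutexGuard shown purely for illustration, not the actual standard library implementation; in real code, stick with std::lock_guard):

#include <mutex>

// Hypothetical illustration of the RAII pattern behind std::lock_guard.
class MutexGuard {
public:
    explicit MutexGuard(std::mutex &m) : mtx(m) { mtx.lock(); }  // acquire in the constructor
    ~MutexGuard() { mtx.unlock(); }                              // release in the destructor
    MutexGuard(const MutexGuard &) = delete;                     // a guard should not be copied
    MutexGuard &operator=(const MutexGuard &) = delete;
private:
    std::mutex &mtx;
};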

On the other hand, what if we write another function that accesses our ImportantData and forget to instantiate the lock_guard? We’re back to square one. To avoid this situation where we have to manually lock the data in each function using it, we can define the lock along with the data it protects.

#include <mutex>

class ImportantData {
public:
    void changeAndProcessValue(int newValue) {
        std::lock_guard<std::mutex> lock(mtx);
        // value is only ever touched while mtx is held
        ...
    }
private:
    int value {3};
    std::mutex mtx;
};

// inside thread_1
void criticalFunction1(ImportantData &data) {
	data.changeAndProcessValue(4);
}

// inside thread_2
void criticalFunction2(ImportantData &data) {
	data.changeAndProcessValue(5);
}

Let’s consider another situation: one thread in your code has to acquire some data and then notify the second thread which is waiting to process that data. Enter std::condition_variable.

#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex mtx;
std::condition_variable conditionVar;
bool dataAcquired {false};

void processData() {
    std::unique_lock<std::mutex> lck(mtx);
    conditionVar.wait(lck, []{ return dataAcquired; });  // wait until the data is ready
    ...
}

void acquireData() {
    {
        std::lock_guard<std::mutex> lck(mtx);
        ...
        dataAcquired = true;
    }
    conditionVar.notify_one();  // wake up the processing thread
}

int main() {
    std::thread thread_1(processData);
    std::thread thread_2(acquireData);
    thread_1.join();
    thread_2.join();
}

The code is pretty self-explanatory: acquireData() collects data from a corresponding source and then notifies processData(), which does the processing. The key point, however, is that the condition variable waits with a predicate. Without the predicate, our code would be susceptible to two race conditions (see the sketch after this list):

  • Lost wakeup: The sender notifies the receiver before the receiver has reached its waiting state. Consequently, the notification is lost.
  • Spurious wakeup: Even though there was no notification, the receiver wakes up.
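
Conceptually, the predicate overload of wait() behaves like the loop below (a simplified sketch of its semantics, not the actual library source). Checking the predicate before the first wait covers the lost wakeup; re-checking it after every wakeup covers the spurious one:

#include <condition_variable>
#include <mutex>

// Roughly what conditionVar.wait(lck, pred) boils down to.
template <typename Predicate>
void waitWithPredicate(std::condition_variable &cv,
                       std::unique_lock<std::mutex> &lck,
                       Predicate pred) {
    while (!pred()) {  // predicate already true? then we never block, so no notification is lost
        cv.wait(lck);  // woke up, possibly spuriously? re-check the predicate before proceeding
    }
}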

As you can see, concurrency is a complicated topic with plenty of potential benefits and pitfalls. Although we have only scratched the surface, applying these concepts will give you confidence that your code will not fail at a critical moment because of an ‘odd’ synchronization issue.

Stay tuned and happy coding.
