C++ mutex for Race Conditions | Fixing Order Counter Bugs with lock_guard

Key takeaway

From “order counts are wrong” to correct synchronization: mutex, critical sections, deadlock prevention, and RAII locks in C++.

Introduction: Why did the counter break?

“Order counts don’t match the database”

On event day, multiple worker threads incremented one shared counter. After the batch, totals were tens of thousands off from the DB.

Cause: plain int counter + counter++ from many threads → lost updates (data race). Fix: std::atomic<int> for a single counter, or std::mutex when you update several fields consistently.
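For the single-counter case, the fix mentioned above can be sketched like this (the function name runTwoWorkers is illustrative, not from the original incident):

```cpp
#include <atomic>
#include <thread>

// std::atomic<int> makes ++ a single indivisible read-modify-write,
// so concurrent increments can no longer be lost.
std::atomic<int> counter{0};

int runTwoWorkers() {
    counter = 0;
    auto work = [] {
        for (int i = 0; i < 100000; ++i)
            counter.fetch_add(1, std::memory_order_relaxed);  // no lost updates
    };
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    return counter.load();  // always 200000, unlike a plain int
}
```

With a plain `int`, the same two workers typically produce a total well below 200000 because concurrent `counter++` operations overwrite each other.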

Mutex enforces mutual exclusion: only one thread runs the critical section at a time.

sequenceDiagram
  participant T1 as Thread 1
  participant M as mutex
  participant T2 as Thread 2
  T1->>M: lock()
  M-->>T1: acquired
  T2->>M: lock()
  Note over T2: blocked
  Note over T1: critical section
  T1->>M: unlock()
  M-->>T2: acquired
  Note over T2: critical section

After reading:

  • Distinguish a race condition from a data race
  • Protect sections with std::mutex, lock_guard, unique_lock
  • Avoid deadlock with ordering / std::lock / scoped_lock
  • Use RAII so locks release on exceptions

Table of contents

  1. Race condition and data race
  2. Bug scenarios
  3. std::mutex usage
  4. lock_guard and unique_lock
  5. Avoiding deadlock
  6. Common mistakes
  7. Performance notes
  8. Best practices
  9. Production patterns
  10. Encapsulation pattern

1. Race condition and data race

Race condition: outcome depends on scheduling order.

Data race (C++): concurrent unsynchronized access where at least one is a write → undefined behavior.

Mutex serializes access to shared data so read-modify-write sequences don’t interleave incorrectly.


2. Bug scenarios (summary)

  • Stock / check-then-act: guard both check and update with one lock
  • Log buffer: string += from many threads → protect with mutex
  • Work queue: empty then front/pop must be one atomic operation → mutex + often condition_variable
  • Hit/miss counters: update related stats under one lock for consistent snapshots
  • Config map: concurrent read/write on std::map → use shared_mutex or an external mutex
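The first scenario (check-then-act) is worth spelling out, because splitting the check and the update across two lock acquisitions reintroduces the race. A minimal sketch, with a hypothetical Inventory class:

```cpp
#include <mutex>

// Hypothetical inventory guard: the stock check and the decrement must
// happen under ONE lock, or two threads can both pass the check and oversell.
class Inventory {
public:
    explicit Inventory(int initial) : stock_(initial) {}

    // Returns true if the order was filled.
    bool tryTake(int qty) {
        std::lock_guard<std::mutex> lock(m_);
        if (stock_ < qty) return false;  // check...
        stock_ -= qty;                   // ...and act, while still holding the lock
        return true;
    }

    int stock() const {
        std::lock_guard<std::mutex> lock(m_);
        return stock_;
    }

private:
    mutable std::mutex m_;  // mutable: const methods may still lock
    int stock_;
};
```

Note that a separate `if (inv.stock() >= qty) inv.take(qty);` at the call site would be broken even though each method locks internally: the invariant must hold across the whole check-then-act sequence.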

3. std::mutex — basic usage

// Paste: g++ -std=c++17 -pthread -o mutex_safe mutex_safe.cpp && ./mutex_safe
#include <mutex>
#include <thread>
#include <iostream>

int counter = 0;
std::mutex counter_mutex;

void safeIncrement() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> lock(counter_mutex);
        ++counter;
    }
}

int main() {
    std::thread t1(safeIncrement);
    std::thread t2(safeIncrement);
    t1.join();
    t2.join();
    std::cout << "counter=" << counter << "\n";
    return 0;
}

Prefer lock_guard / unique_lock over raw lock()/unlock() so exceptions can’t leave the mutex locked.

try_lock: non-blocking attempt; if you lock, you must unlock (RAII still best).
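try_lock can also be used without giving up RAII, via unique_lock's std::try_to_lock tag. A sketch (tryDoWork is an illustrative name):

```cpp
#include <mutex>

std::mutex work_mutex;

// Non-blocking attempt: do the work only if the lock is free right now,
// otherwise skip instead of blocking. RAII still releases the lock.
bool tryDoWork(int& shared, int delta) {
    std::unique_lock<std::mutex> lock(work_mutex, std::try_to_lock);
    if (!lock.owns_lock())
        return false;   // another thread holds it; caller can retry later
    shared += delta;    // critical section
    return true;        // unlock happens in ~unique_lock
}
```

One caveat: calling try_lock on a std::mutex the same thread already owns is undefined behavior, just like a second lock() would be.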


4. lock_guard and unique_lock

RAII: lock in constructor, unlock in destructor.

unique_lock: can unlock()/lock() mid-scope—needed for condition_variable.

scoped_lock (C++17): lock multiple mutexes deadlock-free:

std::scoped_lock lock(m1, m2);
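Why condition_variable specifically requires unique_lock: wait() must be able to unlock the mutex while sleeping and relock it on wake-up, which lock_guard cannot do. A minimal blocking-queue sketch (names push/popBlocking are illustrative):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

std::mutex qm;
std::condition_variable qcv;
std::queue<int> q;

void push(int v) {
    {
        std::lock_guard<std::mutex> lock(qm);
        q.push(v);
    }                   // unlock before notify so the waiter can run immediately
    qcv.notify_one();
}

int popBlocking() {
    std::unique_lock<std::mutex> lock(qm);       // must be unique_lock:
    qcv.wait(lock, [] { return !q.empty(); });   // wait() unlocks/relocks it
    int v = q.front();
    q.pop();
    return v;
}
```

The predicate form of wait() also handles spurious wake-ups: the thread only proceeds once the queue is actually non-empty.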

5. Avoiding deadlock

Cause: two threads lock A then B vs B then A.

Fixes:

  1. Global lock order (always A then B)
  2. std::lock(lock1, lock2) with defer_lock + unique_lock
  3. std::scoped_lock(m1, m2) (recommended on C++17+)
  4. Minimize critical section—no I/O inside locks
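Fix 3 in practice: the classic two-account transfer, where naive lock-A-then-lock-B deadlocks if another thread transfers in the opposite direction. A sketch, assuming a hypothetical Account struct:

```cpp
#include <mutex>

struct Account {
    std::mutex m;
    int balance = 0;
};

// Safe for both transfer(a, b, ...) and a concurrent transfer(b, a, ...):
// scoped_lock acquires both mutexes with the same deadlock-avoidance
// algorithm as std::lock, regardless of argument order.
void transfer(Account& from, Account& to, int amount) {
    std::scoped_lock lock(from.m, to.m);  // C++17
    from.balance -= amount;
    to.balance   += amount;
}
```

Before C++17 the equivalent is `std::lock(l1, l2)` on two `std::unique_lock`s constructed with `std::defer_lock` (fix 2 above).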

6. Common mistakes

  • return before unlock with manual lock/unlock → use RAII
  • Mutex far from data → easy to access data unlocked → encapsulate
  • Double-lock same std::mutex in one thread → deadlock (non-recursive)
  • Callbacks under lock → can re-enter or block on other locks → call callbacks outside the lock
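The last mistake has a standard remedy: snapshot the callback list under the lock, then invoke the copies with the lock released. A sketch (subscribe/notifyAll are illustrative names):

```cpp
#include <functional>
#include <mutex>
#include <vector>

std::mutex cb_mutex;
std::vector<std::function<void(int)>> callbacks;

void subscribe(std::function<void(int)> cb) {
    std::lock_guard<std::mutex> lock(cb_mutex);
    callbacks.push_back(std::move(cb));
}

// Copy under the lock, invoke outside it: a callback that calls
// subscribe() (or blocks on another lock) can no longer deadlock us.
void notifyAll(int event) {
    std::vector<std::function<void(int)>> snapshot;
    {
        std::lock_guard<std::mutex> lock(cb_mutex);
        snapshot = callbacks;  // cheap copy while locked
    }
    for (auto& cb : snapshot)
        cb(event);             // run with cb_mutex released
}
```

The trade-off is that a callback may still fire briefly after it was unsubscribed; for most notification patterns that is acceptable.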

7. Performance

  Approach    When to use
  mutex       Multiple variables, invariants across fields
  atomic      Single counters/flags, no invariants with other vars
  Lock-free   Expert-only; hard to get right

Keep lock duration minimal; don’t hold mutexes across I/O.


8. Best practices

  • Prefer RAII (lock_guard, unique_lock, scoped_lock)
  • Encapsulate data + mutex in a type
  • Use shared_mutex when reads dominate
  • Use ThreadSanitizer: g++ -fsanitize=thread -g

9. Production patterns

  • Thread-safe queue wrapper: all methods lock internally
  • shared_mutex: shared_lock for read, unique_lock for write
  • Snapshot publish: atomic_store/atomic_load of shared_ptr<const Data> for rare writes

10. Encapsulation pattern

Bundle data + mutex in one struct; expose only thread-safe methods.
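A sketch of this pattern using the hit/miss counters from section 2 (the Stats class name is illustrative):

```cpp
#include <mutex>
#include <utility>

// Data and its mutex live in one type; callers cannot touch the counters
// without going through the locking methods, so the invariant
// (hits and misses from the same moment) always holds.
class Stats {
public:
    void hit()  { std::lock_guard<std::mutex> lock(m_); ++hits_; }
    void miss() { std::lock_guard<std::mutex> lock(m_); ++misses_; }

    // Consistent snapshot of both fields under one lock.
    std::pair<int, int> snapshot() const {
        std::lock_guard<std::mutex> lock(m_);
        return {hits_, misses_};
    }

private:
    mutable std::mutex m_;
    int hits_ = 0;
    int misses_ = 0;
};
```

Compare this with a free-floating `std::mutex stats_mutex;` next to two global ints: nothing stops new code from reading the globals without locking.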


Implementation checklist

  • Shared state protected by mutex or atomic
  • RAII locks
  • Minimal critical sections
  • Multi-lock: scoped_lock or fixed order
  • Container find + erase performed under a single lock

Prerequisites

  • std::thread basics
  • Atomics
  • RAII

Keywords

C++ mutex, lock_guard, unique_lock, shared_mutex, scoped_lock, race condition, deadlock, critical section, data race

Summary

  • Data race ⇒ UB; synchronize with mutex/atomic/correct algorithms
  • lock_guard for simple scopes; unique_lock for condition_variable or mid-scope unlock
  • scoped_lock for multiple mutexes
  • Deadlocks: consistent ordering or std::lock/scoped_lock

Next: condition_variable

FAQ

When is this useful?

A. Every multi-threaded C++ program with shared mutable data—servers, games, UI backends.

Read first?

A. Thread basics, then series index.

One-line summary: Wrap shared data in mutex + RAII; avoid deadlocks with scoped_lock and lock order discipline.

References

  • Thread basics
  • Advanced threading
  • condition_variable
  • Atomics
  • Stack vs heap