Tags: c++, multithreading, boost, boost-thread, boost-mutex

Boost w/ C++ - Curious mutex behavior


I'm experimenting with Boost threads because, to my knowledge, I can write a multi-threaded Boost application and compile it on Windows or Linux, whereas pthreads, which I'm more familiar with, is strictly for use on *NIX systems.

I have the following sample application, which is borrowed from another SO question:


#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/bind.hpp>
#include <iostream>
#include <unistd.h>   // usleep()

#define NAP_DURATION    (10000UL)   // 10ms

boost::mutex io_mutex;

void count(int id)
{
    for (int i = 0; i < 1000; ++i)
    {
        boost::mutex::scoped_lock lock(io_mutex);
        std::cout << "Thread ID:" << id << ": " << i << std::endl;
        if (id == 1)
        {
            std::cout << "I'm thread " << id << " and I'm taking a short nap" << std::endl;
            usleep(NAP_DURATION);
        }
        else
        {
            std::cout << "I'm thread " << id << ", I drink 100 cups of coffee and don't need a nap" << std::endl;
        }
        std::cout << "Thread ID:" << id << ": " << i << std::endl;
        boost::thread::yield();
    }
}

int main(int argc, char* argv[])
{
    boost::thread thrd1( boost::bind(&count, 1));
    boost::thread thrd2( boost::bind(&count, 2));

    thrd1.join();
    thrd2.join();
    return 0;
}

I installed Boost on my Ubuntu 14.04 LTS system via:

sudo apt-get install libboost-all-dev

And I compile the above code via:

g++ test.cpp -lboost_system -lboost_thread -I"$BOOST_INCLUDE" -L"$BOOST_LIB"

I've run into what appear to be some interesting inconsistencies. If I set a lengthy NAP_DURATION, say 1 second (1000000), it seems that thread 1 holds on to the mutex until it completes all of its iterations, and it's very rare that thread 2 gets the lock before thread 1 is done. This happens even when I set NAP_DURATION to just a few milliseconds.

When I've written similar applications using pthreads, the lock would typically alternate more or less randomly between the threads, since the other thread would already be blocked on the mutex.


So, to the question(s):

  1. Is this expected behavior?
  2. Is there a way to control this behavior, such as making scoped locks behave as if locking operations were queued?
  3. If the answer to (2) is "no", is it possible to achieve something similar with Boost condition variables and not having to worry about lock/unlock calls failing?
  4. Are scoped_locks guaranteed to unlock? I'm using the RAII approach rather than manually locking/unlocking because apparently the unlock operation can fail and throw an exception, and I'm trying to make this code solid.

Thank you.

Clarifications

I'm aware that putting the calling thread to sleep won't unlock the mutex, since the lock is still in scope, but the scheduling I expected was along the lines of:

  • Thread1 locks, gets the mutex.
  • Thread2 locks, blocks.
  • Thread1 executes, releases the lock, and immediately attempts to lock again.
  • Thread2 was already waiting on the lock, gets it before thread1.

Solution

  • Is this expected behavior?

    Yes and no. You shouldn't have any expectations about which thread will get the mutex, since it's unspecified. But it's certainly within the range of expected behavior: when thread 1 releases the mutex and immediately tries to lock it again, it is already running on a core, so it usually wins the race against thread 2, which first has to be woken up.

  • Is there a way to control this behavior, such as making scoped locks behave as if locking operations were queued?

    Don't use mutexes this way. Just don't. Use mutexes only in such a way that they're held for very short periods of time relative to the other things a thread is doing.
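
    For example, the loop from the question could be restructured so the mutex is held only around the std::cout calls and the nap happens with the lock released. This is just a sketch of that idea, reusing the names from the question:

#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <iostream>
#include <unistd.h>   // usleep()

#define NAP_DURATION    (10000UL)   // 10ms

boost::mutex io_mutex;

void count(int id)
{
    for (int i = 0; i < 1000; ++i)
    {
        {
            // Hold the mutex only while touching std::cout.
            boost::mutex::scoped_lock lock(io_mutex);
            std::cout << "Thread ID:" << id << ": " << i << std::endl;
        }   // lock released here, before any sleeping

        if (id == 1)
            usleep(NAP_DURATION);   // nap with the mutex unlocked

        boost::this_thread::yield();
    }
}

    With the sleep outside the critical section, the other thread can make progress while thread 1 naps, and neither thread monopolizes the mutex for long.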

  • If the answer to (2) is "no", is it possible to achieve something similar with Boost condition variables and not having to worry about lock/unlock calls failing?

    Sure. Code what you want.
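
    If you genuinely want the two threads to take strict turns, one way to sketch it with a Boost condition variable is to protect a "whose turn is it" flag and have each thread wait for its turn. The turn variable and the two-argument count() below are hypothetical additions, not part of the original code:

#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>
#include <boost/bind.hpp>
#include <iostream>

boost::mutex io_mutex;
boost::condition_variable turn_cv;
int turn = 1;   // id of the thread whose turn it is

void count(int id, int other)
{
    for (int i = 0; i < 1000; ++i)
    {
        boost::unique_lock<boost::mutex> lock(io_mutex);
        while (turn != id)          // wait until it's this thread's turn
            turn_cv.wait(lock);     // wait() releases the lock while blocked

        std::cout << "Thread ID:" << id << ": " << i << std::endl;

        turn = other;               // hand the turn to the other thread
        turn_cv.notify_one();
    }                               // unique_lock unlocks on scope exit
}

int main()
{
    boost::thread thrd1(boost::bind(&count, 1, 2));
    boost::thread thrd2(boost::bind(&count, 2, 1));
    thrd1.join();
    thrd2.join();
    return 0;
}

    Note that this enforces a fixed ping-pong order; it's only worth doing if the algorithm actually requires that ordering.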

  • Are scoped_locks guaranteed to unlock? I'm using the RAII approach rather than manually locking/unlocking because apparently the unlock operation can fail and throw an exception, and I'm trying to make this code solid.

    It's not clear what it is you're worried about, but the RAII approach is recommended.
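
    As a small illustration of the RAII guarantee: the lock's destructor runs even when an exception propagates out of the locked scope, so the mutex is released during stack unwinding. The names here (may_throw, m) are made up for the example:

#include <boost/thread/mutex.hpp>
#include <iostream>
#include <stdexcept>

boost::mutex m;

void may_throw()
{
    boost::mutex::scoped_lock lock(m);
    throw std::runtime_error("oops");   // lock's destructor still runs during
                                        // stack unwinding, releasing the mutex
}

int main()
{
    try { may_throw(); } catch (const std::exception&) { /* ignore */ }

    // If the mutex had been left locked above, this would deadlock; it doesn't.
    boost::mutex::scoped_lock lock(m);
    std::cout << "mutex was released by RAII" << std::endl;
    return 0;
}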