Component bslmt_meteredmutex
[Package bslmt]

Provide a mutex capable of keeping track of wait and hold time.

Namespaces

namespace  bslmt

Detailed Description

Outline
Purpose:
Provide a mutex capable of keeping track of wait and hold time.
Classes:
bslmt::MeteredMutex: mutex capable of keeping track of wait and hold time
See also:
Description:
This component provides a class, bslmt::MeteredMutex, that functions as a mutex and has the additional capability of keeping track of wait time and hold time. This class can be used, for example, to evaluate the performance of an application based on its lock-contention behavior.
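As a quick orientation, the following sketch exercises the methods discussed in this component (the accumulated times are integral nanosecond counts):
  #include <bslmt_meteredmutex.h>
  #include <bsls_types.h>

  void sample()
  {
      bslmt::MeteredMutex mutex;

      mutex.lock();        // any time spent blocked here adds to wait time
      // ... critical section ...
      mutex.unlock();      // the hold interval is recorded here

      bsls::Types::Int64 waited = mutex.waitTime();  // total wait time
      bsls::Types::Int64 held   = mutex.holdTime();  // total hold time
      (void)waited;  (void)held;
  }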
Precise Definitions of Wait and Hold Time:
Wait time is defined as the sum of the time intervals between each call to lock (or tryLock) on the underlying mutex and the return of that call. Note that if one or more threads are waiting for the lock at the point when waitTime is called, those in-progress waiting intervals are not included in the returned wait time.
Hold time is defined as the sum of the time intervals between the return from each call to lock (or each successful call to tryLock) on the underlying mutex and the subsequent call to unlock. Note that if a thread is holding the lock at the point when holdTime is called, that in-progress holding interval is not included in the returned hold time.
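To illustrate these definitions, consider this single-threaded sketch (the sleep merely lengthens the hold interval; exact values depend on the clock and the scheduler):
  bslmt::MeteredMutex mutex;

  mutex.lock();                           // uncontended, so negligible wait
  bslmt::ThreadUtil::microSleep(100000);  // hold the lock for roughly 0.1s
  assert(0 == mutex.holdTime());          // in-progress hold not yet counted
  mutex.unlock();                         // hold interval is recorded here

  assert(mutex.holdTime() > 0);           // now includes the ~0.1s interval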
Performance:
It should be noted that the overhead of keeping track of wait and hold time is very small. We do not use additional mutexes to manipulate these times; instead, we use atomic data types (which have very small overhead compared to a mutex) to update these times atomically.
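In simplified form, the technique looks like the following. This is a sketch only, not this component's actual implementation; it assumes bsls::TimeUtil::getTimer for nanosecond timestamps and bsls::AtomicInt64 for lock-free accumulation:
  #include <bslmt_mutex.h>
  #include <bsls_atomic.h>
  #include <bsls_timeutil.h>
  #include <bsls_types.h>

  class SimpleMeteredMutex {
      // Sketch of the metering technique: wait and hold totals are
      // accumulated with atomic additions, so no second mutex is needed.

      bslmt::Mutex       d_mutex;      // underlying mutex
      bsls::AtomicInt64  d_waitTime;   // total wait time, in nanoseconds
      bsls::AtomicInt64  d_holdTime;   // total hold time, in nanoseconds
      bsls::Types::Int64 d_holdStart;  // written only by the lock holder

    public:
      SimpleMeteredMutex() : d_holdStart(0) {}

      void lock()
      {
          bsls::Types::Int64 start = bsls::TimeUtil::getTimer();
          d_mutex.lock();
          d_holdStart = bsls::TimeUtil::getTimer();
          d_waitTime += d_holdStart - start;    // single atomic add
      }

      void unlock()
      {
          bsls::Types::Int64 delta =
                                  bsls::TimeUtil::getTimer() - d_holdStart;
          d_mutex.unlock();
          d_holdTime += delta;                  // single atomic add
      }

      bsls::Types::Int64 waitTime() const { return d_waitTime; }
      bsls::Types::Int64 holdTime() const { return d_holdTime; }
  };
Because each total is updated with a single atomic addition, neither the lock path nor the unlock path ever acquires a second lock.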
Inaccuracy of waitTime and holdTime:
Times reported by waitTime and holdTime are close approximations, not 100% accurate. This inaccuracy can sometimes cause surprising behavior. For example, one might incorrectly assume that lock() and while (tryLock() != 0); are effectively the same (both prevent the thread from advancing until the lock is acquired), but the wait time reported in the first case can be much more accurate than in the second, because lock is called only once (and thus measurement error is introduced only once), whereas every iteration of the tryLock loop introduces its own error.
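The two acquisition styles, side by side (a sketch; tryLock is assumed to return 0 on success, as in the spin loop above):
  void acquireOnce(bslmt::MeteredMutex *mutex)
  {
      mutex->lock();    // a single timed call: error is introduced once
  }

  void acquireBySpinning(bslmt::MeteredMutex *mutex)
  {
      while (mutex->tryLock() != 0) {
          // each failed attempt is a separately timed call, so per-call
          // measurement error accumulates in the reported wait time
      }
  }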
Usage:
In the following example, we have NUM_THREADS threads (sequentially numbered from 0 to NUM_THREADS-1) and two counters, evenCount and oddCount. evenCount is incremented by the even-numbered threads, and oddCount by the odd-numbered ones. We consider two strategies for incrementing these counters. In the first strategy (strategy1), we use two mutexes (one for each counter); in the second strategy (strategy2), we use a single mutex for both counters.
  int oddCount = 0;
  int evenCount = 0;

  typedef bslmt::MeteredMutex Obj;
  Obj oddMutex;
  Obj evenMutex;
  Obj globalMutex;

  enum { k_USAGE_NUM_THREADS = 4, k_USAGE_SLEEP_TIME = 100000 };
  bslmt::Barrier usageBarrier(k_USAGE_NUM_THREADS);

  void executeInParallel(int                               numThreads,
                         bslmt::ThreadUtil::ThreadFunction function)
      // Create the specified 'numThreads' threads, each executing the
      // specified 'function'.  Number each thread (sequentially from 0 to
      // 'numThreads - 1') by passing 'i' to the 'i'th thread.  Finally,
      // join all the threads.
  {
      bslmt::ThreadUtil::Handle *threads =
                                   new bslmt::ThreadUtil::Handle[numThreads];
      assert(threads);

      for (int i = 0; i < numThreads; ++i) {
          bslmt::ThreadUtil::create(&threads[i],
                                    function,
                                    (void *)(bsls::Types::IntPtr)i);
      }
      for (int i = 0; i < numThreads; ++i) {
          bslmt::ThreadUtil::join(threads[i]);
      }

      delete [] threads;
  }

  extern "C" {
      void *strategy1(void *arg)
      {
          usageBarrier.wait();
          int remainder = (int)(bsls::Types::IntPtr)arg % 2;
          if (remainder == 1) {
              oddMutex.lock();
              ++oddCount;
              bslmt::ThreadUtil::microSleep(k_USAGE_SLEEP_TIME);
              oddMutex.unlock();
          }
          else {
              evenMutex.lock();
              ++evenCount;
              bslmt::ThreadUtil::microSleep(k_USAGE_SLEEP_TIME);
              evenMutex.unlock();
          }
          return NULL;
      }
  } // extern "C"

  extern "C" {
      void *strategy2(void *arg)
      {
          usageBarrier.wait();
          int remainder = (int)(bsls::Types::IntPtr)arg % 2;
          if (remainder == 1) {
              globalMutex.lock();
              ++oddCount;
              bslmt::ThreadUtil::microSleep(k_USAGE_SLEEP_TIME);
              globalMutex.unlock();
          }
          else {
              globalMutex.lock();
              ++evenCount;
              bslmt::ThreadUtil::microSleep(k_USAGE_SLEEP_TIME);
              globalMutex.unlock();
          }
          return NULL;
      }
  } // extern "C"
Then in the application main:
  executeInParallel(k_USAGE_NUM_THREADS, strategy1);
  bsls::Types::Int64 waitTimeForStrategy1 =
                                  oddMutex.waitTime() + evenMutex.waitTime();

  executeInParallel(k_USAGE_NUM_THREADS, strategy2);
  bsls::Types::Int64 waitTimeForStrategy2 = globalMutex.waitTime();

  assert(waitTimeForStrategy2 > waitTimeForStrategy1);
  if (veryVerbose) {
      P(waitTimeForStrategy1);
      P(waitTimeForStrategy2);
  }
We measured the wait times for each strategy. Intuitively, the wait time for the second strategy should be greater than that of the first, and the output was consistent with this expectation:
  waitTimeForStrategy1 = 400787000
  waitTimeForStrategy2 = 880765000