Component bdlcc_fixedqueue
[Package bdlcc]

Provide a thread-aware fixed-size queue of values.

Namespaces

namespace  bdlcc

Detailed Description

Purpose:
Provide a thread-aware fixed-size queue of values.
Classes:
bdlcc::FixedQueue thread-aware fixed-size queue of TYPE values
Description:
This component defines a type, bdlcc::FixedQueue, that provides an efficient, thread-aware fixed-size queue of values. This class is ideal for synchronization and communication between threads in a producer-consumer model. Under most circumstances developers should prefer the newer {bdlcc_boundedqueue} (see Comparison to BoundedQueue).
The queue provides pushBack and popFront methods for pushing data into the queue and popping data from the queue. In case of overflow (queue full when pushing) or underflow (queue empty when popping), the methods block until data or free space in the queue appears. Non-blocking methods, tryPushBack and tryPopFront, are also provided; they fail immediately, returning a non-zero value, in case of overflow or underflow.
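For example, a caller that must never block might use the try methods and check the returned status. The following is a minimal sketch (the int element type, the pushed values, and the capacity of 2 are arbitrary choices for illustration):
  #include <bdlcc_fixedqueue.h>

  void tryMethodsExample()
  {
      bdlcc::FixedQueue<int> queue(2);    // capacity of two elements

      int rc = queue.tryPushBack(1);      // succeeds: 'rc' is 0
      rc = queue.tryPushBack(2);          // succeeds: 'rc' is 0
      rc = queue.tryPushBack(3);          // queue is full: 'rc' is non-zero

      int value;
      rc = queue.tryPopFront(&value);     // succeeds: 'rc' is 0, 'value' is 1
  }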
The queue may be placed into a "disabled" state using the disable method. When disabled, pushBack and tryPushBack fail immediately (they do not block, and any invocations already blocked in pushBack return immediately with a non-zero status). The queue may be restored to normal operation with the enable method.
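For example, the following minimal sketch (a hypothetical helper continuing the int queue used above) illustrates the expected behavior, assuming the documented convention that the push methods return 0 on success and a non-zero value on failure:
  void disableExample(bdlcc::FixedQueue<int> *queue)
  {
      queue->disable();

      // While the queue is disabled, 'pushBack' and 'tryPushBack' return a
      // non-zero status immediately; elements already in the queue can
      // still be popped.

      int rc = queue->pushBack(42);       // fails: 'rc' is non-zero
      rc = queue->tryPushBack(42);        // fails: 'rc' is non-zero

      queue->enable();                    // restore normal operation
      rc = queue->pushBack(42);           // succeeds: 'rc' is 0
  }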
Unlike bdlcc::Queue, a fixed queue is not double-ended, has no timed API (such as timedPushBack and timedPopFront), and has no forcePush methods, since the queue capacity is fixed. Also, this component is not based on bdlc::Queue, so there is no API for direct access to the underlying queue. These limitations are a trade-off for a significant gain in performance compared to bdlcc::Queue.
Comparison To BoundedQueue:
Both bdlcc::FixedQueue and bdlcc::BoundedQueue provide thread-aware bounded queues. Under most circumstances developers should prefer {bdlcc_boundedqueue}: it is newer, has additional features, and typically provides better performance. bdlcc::BoundedQueue is not quite a drop-in replacement for bdlcc::FixedQueue, so both types are currently maintained. There is additional information about the performance of various queues in the article Concurrent Queue Evaluation (https://tinyurl.com/mr2un9f7).
Template Requirements:
bdlcc::FixedQueue is a template that is parameterized on the type of element contained within the queue. The supplied template argument, TYPE, must provide a default constructor, a copy constructor, and an assignment operator. If the default constructor accepts a bslma::Allocator*, TYPE must declare the uses bslma::Allocator trait (see bslma_usesbslmaallocator) so that the allocator of the queue is propagated to the elements contained in the queue.
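For example, an allocator-aware element type might declare the trait as follows. This is a minimal sketch; MyElement and its bsl::string member are hypothetical and serve only to show where the nested trait declaration goes:
  #include <bslma_allocator.h>
  #include <bslma_usesbslmaallocator.h>
  #include <bslmf_nestedtraitdeclaration.h>
  #include <bsl_string.h>

  class MyElement {
      // A hypothetical allocator-aware element type for a
      // 'bdlcc::FixedQueue<MyElement>'.

      // DATA
      bsl::string d_payload;  // allocator-aware member

    public:
      // TRAITS
      BSLMF_NESTED_TRAIT_DECLARATION(MyElement, bslma::UsesBslmaAllocator);

      // CREATORS
      explicit MyElement(bslma::Allocator *basicAllocator = 0)
      : d_payload(basicAllocator)
      {
      }

      MyElement(const MyElement&  original,
                bslma::Allocator *basicAllocator = 0)
      : d_payload(original.d_payload, basicAllocator)
      {
      }

      // The compiler-generated copy-assignment operator suffices here.
  };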
Exception safety:
A bdlcc::FixedQueue is exception neutral, and all of the methods of bdlcc::FixedQueue provide the strong exception safety guarantee except for pushBack and tryPushBack, which provide the basic exception guarantee (see bsldoc_glossary).
Memory Usage:
bdlcc::FixedQueue is most efficient when dealing with small objects or fundamental types (as a thread-safe container, its methods pass objects by value). We recommend:
  • Large objects be stored as shared pointers (or possibly raw pointers), as sketched below.
  • Clients take care in specifying the queue capacity (specified in a number of objects, not a number of bytes).
Note that the implementation of bdlcc::FixedQueue currently creates a fixed size array of the contained object type.
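For example, a queue intended to carry large payloads might be declared to hold shared pointers rather than the objects themselves. This is a minimal sketch; MyLargeObject and the capacity of 64 are hypothetical:
  #include <bdlcc_fixedqueue.h>
  #include <bsl_memory.h>

  struct MyLargeObject {
      // A hypothetical object too large to pass by value efficiently.

      char d_buffer[4096];
  };

  void sharedPtrQueueExample()
  {
      // Each queue slot holds a shared pointer, so the queue copies only
      // pointer-sized handles; the capacity (64) counts elements, not bytes.

      bdlcc::FixedQueue<bsl::shared_ptr<MyLargeObject> > queue(64);

      bsl::shared_ptr<MyLargeObject> obj = bsl::make_shared<MyLargeObject>();
      queue.pushBack(obj);
  }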
Move Semantics in C++03:
Move-only types are supported by FixedQueue on C++11 platforms only (where BSLMF_MOVABLEREF_USES_RVALUE_REFERENCES is defined) and are not supported on C++03 platforms. Unfortunately, in C++03, there are user types where a bslmf::MovableRef will not safely degrade to an lvalue reference when a move constructor is not available (types providing a constructor template taking any type), so bslmf::MovableRefUtil::move cannot be used directly on a user-supplied template type. See internal bug report 99039150 for more information.
Usage:
This section illustrates intended use of this component.
Example 1: A Simple Thread Pool:
In the following example a bdlcc::FixedQueue is used to communicate between a single "producer" thread and multiple "consumer" threads. The "producer" will push work requests onto the queue, and each "consumer" will iteratively take a work request from the queue and service the request. This example shows a partial, simplified implementation of the bdlmt::FixedThreadPool class. See component bdlmt_fixedthreadpool for more information.
First, we define utility classes that handle a simple "work item":
  struct my_WorkData {
      // Work data...
  };

  struct my_WorkRequest {
      enum RequestType {
          e_WORK = 1,
          e_STOP = 2
      };

      RequestType d_type;
      my_WorkData d_data;
      // Work data...
  };
Next, we provide a simple function to service an individual work item. The details are unimportant for this example:
  void myDoWork(my_WorkData& data)
  {
      // do some stuff...
      (void)data;
  }
Then, we define a myConsumer function that will pop elements off the queue and process them. Note that the call to queue->popFront() will block until there is an element available on the queue. This function will be executed in multiple threads, so that each thread waits in queue->popFront(), and bdlcc::FixedQueue guarantees that each thread gets a unique element from the queue:
  void myConsumer(bdlcc::FixedQueue<my_WorkRequest> *queue)
  {
      while (1) {
          // 'popFront()' will wait for a 'my_WorkRequest' until available.

          my_WorkRequest item = queue->popFront();
          if (item.d_type == my_WorkRequest::e_STOP) { break; }
          myDoWork(item.d_data);
      }
  }
Finally, we define a myProducer function that serves multiple roles: it creates the bdlcc::FixedQueue, starts the consumer threads, and then produces and enqueues work items. When work requests are exhausted, this function enqueues one e_STOP item for each consumer thread. This e_STOP item indicates to the consumer thread to terminate its thread-handling function.
Note that, although the producer cannot control which thread pops a particular work item, it can rely on the knowledge that each consumer thread will read a single e_STOP item and then terminate.
  void myProducer(int numThreads)
  {
      enum {
          k_MAX_QUEUE_LENGTH = 100,
          k_NUM_WORK_ITEMS   = 1000
      };

      bdlcc::FixedQueue<my_WorkRequest> queue(k_MAX_QUEUE_LENGTH);

      bslmt::ThreadGroup consumerThreads;
      consumerThreads.addThreads(bdlf::BindUtil::bind(&myConsumer, &queue),
                                 numThreads);

      for (int i = 0; i < k_NUM_WORK_ITEMS; ++i) {
          my_WorkRequest item;
          item.d_type = my_WorkRequest::e_WORK;
          item.d_data = my_WorkData(); // some stuff to do
          queue.pushBack(item);
      }

      for (int i = 0; i < numThreads; ++i) {
          my_WorkRequest item;
          item.d_type = my_WorkRequest::e_STOP;
          queue.pushBack(item);
      }

      consumerThreads.joinAll();
  }