bsls_atomic

Detailed Description

Purpose

Provide types with atomic operations.

Classes

bsls::AtomicBool: atomic boolean type
bsls::AtomicInt: atomic 32-bit integer type
bsls::AtomicInt64: atomic 64-bit integer type
bsls::AtomicUint: atomic 32-bit unsigned integer type
bsls::AtomicUint64: atomic 64-bit unsigned integer type
bsls::AtomicPointer: parameterized atomic pointer type

See also
bsls_atomicoperations

Description

This component provides classes with atomic operations for int, Int64, unsigned int, Uint64, pointer, and bool types. These classes are based on atomic operations supplied by the bsls_atomicoperations component. The bsls::AtomicInt and bsls::AtomicInt64 classes represent the corresponding atomic integer types, and provide overloaded operators and functions for common arithmetic operations. The bsls::AtomicPointer class represents the atomic pointer type, and provides atomic operations to manipulate and dereference a pointer. The bsls::AtomicBool class represents an atomic boolean type and provides operations to set and retrieve its value.
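
For example, the following minimal sketch shows the basic interface of three of these classes (the surrounding main function and the printed output are illustrative only, not part of this component):

#include <bsls_atomic.h>

#include <cstdio>

int main()
{
    // Integer counter: the overloaded operators are atomic and sequentially
    // consistent.
    bsls::AtomicInt count;                   // zero-initialized
    ++count;                                 // atomic increment
    count += 5;                              // atomic add
    int snapshot = count.loadRelaxed();      // relaxed (unordered) read

    // Atomic pointer: read, written, and dereferenced atomically.
    int value = 42;
    bsls::AtomicPointer<int> ptr;            // initialized to null
    ptr = &value;

    // Atomic boolean flag.
    bsls::AtomicBool done;                   // initialized to 'false'
    done = true;

    std::printf("count=%d *ptr=%d done=%d\n", snapshot, *ptr, (int)done);
    return 0;
}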

Memory Order and Consistency Guarantees of Atomic Operations

Atomic operations provided by this component ensure various memory ordering consistency guarantees. Memory ordering guarantees of atomic operations are designed to ensure visibility and synchronization order of memory reads and writes between threads that perform atomic operations. The operations on objects of the provided classes ensure the most strict consistency guarantee, sequential consistency (described below), unless explicitly qualified with a less strict consistency guarantee (i.e., Acquire, Release, Acquire/Release or Relaxed).

This component implements memory order and consistency guarantees as defined in the C++ 2011 Standard (sections: [intro.multithreaded], [atomics.order]).

The following memory ordering guarantees are supported:

relaxed: the operation guarantees only the atomicity of the individual read or write, with no additional ordering of other memory accesses
acquire: the operation orders subsequent memory accesses after the atomic operation
release: the operation orders preceding memory accesses before the atomic operation
acquire/release: the operation provides both the acquire and the release guarantees
sequential consistency: the operation provides the acquire/release guarantees and additionally participates in a single global order of all sequentially consistent operations (this is the default)

Acquire and Release Memory Consistency Guarantees

Operations providing acquire and release guarantees are essential to synchronizing the memory state between multiple threads. For example, consider two threads, A and B, that perform store and load operations to shared memory locations. Without any synchronization, store operations in thread A can be freely reordered with respect to load operations in thread B; that is, thread A can perform two store operations to two memory locations in a certain order and thread B can see those operations done in a different order, due to such effects as compiler or processor reordering of store and load operations and cache synchronization between processors and cores.

However, stores in thread A can be ordered with loads in thread B using a combination of store-release and load-acquire operations. A store-release operation in thread A followed by a load-acquire operation in thread B to the same memory location guarantees that thread B sees all other stores done in thread A prior to the store-release operation. The store-release in thread A effectively synchronizes the memory state with the load-acquire in thread B.
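
For example, a value can be published from one thread to another by pairing a store-release of a flag in the producer with a load-acquire of the same flag in the consumer. The following sketch is illustrative (the function names and the spin loop are not part of this component; storeRelease and loadAcquire are the explicitly qualified operations of bsls::AtomicInt):

static int             sharedData = 0;   // ordinary, non-atomic data
static bsls::AtomicInt ready(0);         // synchronization flag

void producerThread()
{
    sharedData = 42;              // plain store
    ready.storeRelease(1);        // release: makes the prior store visible to
                                  // any thread that load-acquires 'ready'
}

void consumerThread()
{
    while (0 == ready.loadAcquire()) {
        // spin until the producer publishes the data
    }
    // 'sharedData == 42' is guaranteed to be observed here.
}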

An acquire-release operation is a load-modify-store operation that, if performed in both threads A and B on the same memory location, synchronizes stores and loads between threads A and B in both directions.
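
A common application of an acquire/release read-modify-write operation is a simple spinlock. The sketch below is illustrative only (the lock and unlock functions are not part of this component); it uses the swapAcqRel and storeRelease methods of bsls::AtomicInt:

static bsls::AtomicInt lockFlag(0);   // 0 == unlocked, 1 == locked

void lock()
{
    // Atomically set the flag to 1 and test its previous value; the acquire
    // semantics order the critical section after the lock acquisition.
    while (lockFlag.swapAcqRel(1)) {
        // spin while another thread holds the lock
    }
}

void unlock()
{
    // The release semantics make writes performed in the critical section
    // visible to the next thread that acquires the lock.
    lockFlag.storeRelease(0);
}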

Sequential Consistency Memory Consistency Guarantee

Finally, load and store operations with sequential consistency are guaranteed to be performed in a single global total order among all threads in the process. To illustrate the total order, let's consider the so-called "independent reads of independent writes" example:

static bsls::AtomicInt x(0);
static bsls::AtomicInt y(0);

int r1, r2, r3, r4;

void thread1() {
    x = 1;  // sequential consistency store
}

void thread2() {
    y = 1;  // sequential consistency store
}

void thread3() {
    r1 = x;  // sequential consistency load
    r2 = y;  // sequential consistency load
}

void thread4() {
    r3 = y;  // sequential consistency load
    r4 = x;  // sequential consistency load
}

Here, the threadN functions are executed concurrently by different threads (note that the values x and y are written by independent threads). Sequential consistency guarantees that if thread3 observes the values of x and y as r1 == 1 && r2 == 0, then thread4 cannot observe the writes to x and y in the opposite order, i.e., it cannot observe r3 == 1 && r4 == 0.

Usage

This section illustrates intended use of this component.

Example 1: Usage Statistics on a Thread Pool

This example demonstrates a common use of atomic integer types for statistics counters. The program creates a series of threads to process transactions. As each thread completes a transaction, it atomically increments the transaction counters.

For this example, we assume the existence of the functions processNextTransaction, createWorkerThread, and waitAllThreads. The function createWorkerThread spawns a new thread, which executes the workerThread function. waitAllThreads blocks until all the worker threads complete.

First, we declare the shared counters (note that, in contrast to the raw types defined in bsls_atomicoperations, these atomic types are zero-initialized at construction):

static bsls::AtomicInt64 transactionCount;
static bsls::AtomicInt64 successCount;
static bsls::AtomicInt64 failureCount;

Next, for each transaction processed, we atomically increment either the success or the failure counter as well as the total transaction count:

static void workerThread(int *stop)
{
    while (!(*stop)) {
        if (processNextTransaction()) {
            ++failureCount;
        } else {
            ++successCount;
        }
        ++transactionCount;
    }
}

Finally, we write a function, serverMain, that provides the overall control logic for the server. This function spawns the threads and then waits for all work to be completed; when all of the threads have finished, this function returns normally:

void serverMain()
{
    const int num_threads = 10;
    for (int i = 0; i < num_threads; ++i) {
        createWorkerThread();
    }
    waitAllThreads();
}

Note that functions createWorkerThread and waitAllThreads can be implemented using any thread-support package.
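
For illustration only, here is one possible sketch of those two functions implemented with the standard <thread> facility (this scaffolding is an assumption of the example, not part of this component; some other part of the program must eventually set stopFlag to a nonzero value so that the workers return and waitAllThreads can complete):

#include <thread>
#include <vector>

static int                      stopFlag = 0;  // polled by 'workerThread'
static std::vector<std::thread> workers;       // spawned worker threads

void createWorkerThread()
{
    workers.emplace_back(workerThread, &stopFlag);
}

void waitAllThreads()
{
    for (std::thread& t : workers) {
        t.join();
    }
    workers.clear();
}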

Example 2: Thread-Safe Counted Handle

The following example demonstrates the use of atomic integer operations to implement a thread-safe ref-counted handle similar to a shared pointer. Each handle (of type my_CountedHandle) maintains a pointer to a representation object, my_CountedHandleRep, which, in turn, stores both a pointer to the managed object and a reference counter.

Both the handle class and the representation class are template classes with two template parameters. The template parameter, INSTANCE, represents the type of the "instance", or managed object.

A representation object can be shared by several handle objects. When a handle object is assigned to a second handle object, the address of the representation is copied to the second handle, and the reference count on the representation is atomically incremented. When a handle releases its reference to the representation, it atomically decrements the reference count. If the resulting reference count becomes 0 (and there are no more references to the object), the handle deletes the representation object and the representation object, in turn, deletes the managed object (INSTANCE).

Class my_CountedHandleRep

First, we define class my_CountedHandleRep. This class manages a single INSTANCE object on behalf of multiple "handle" objects; since different "handle" objects may be active in different threads, class my_CountedHandleRep must be (fully) thread-safe. Specifically, methods increment and decrement must work atomically.

The class declaration for my_CountedHandleRep is identical to the same class in component bsls_atomicoperations, with a single exception: member d_count is of type bsls::AtomicInt, rather than bsls::AtomicOperations::Int. Whereas bsls::AtomicOperations::Int is merely a typedef for a platform-specific data type to be used in atomic integer operations, bsls::AtomicInt encapsulates those atomic operations as member functions and operator overloads. Class my_CountedHandleRep will benefit from this encapsulation: its method implementations will be able to operate on d_count as if it were a standard integer.
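
To illustrate the difference, compare how a shared counter is incremented with the two components (the bsls_atomicoperations calls below are recalled from that component's interface and are shown only for contrast):

// Using the raw type and static methods of 'bsls_atomicoperations':
bsls::AtomicOperations::AtomicTypes::Int rawCount;
bsls::AtomicOperations::initInt(&rawCount, 0);
bsls::AtomicOperations::incrementInt(&rawCount);

// Using the encapsulated type provided by this component:
bsls::AtomicInt count;   // zero-initialized
++count;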

Note that, as in the example in component bsls_atomicoperations, this rep class is intended to be used only by class my_CountedHandle, and thus all methods of class my_CountedHandleRep are declared private, and friend status is granted to class my_CountedHandle:

// =========================
// class my_CountedHandleRep
// =========================

template <class INSTANCE>
class my_CountedHandle;

template <class INSTANCE>
class my_CountedHandleRep {
    // DATA
    INSTANCE        *d_instance_p;  // address of managed instance
    bsls::AtomicInt  d_count;       // number of active references

    // FRIENDS
    friend class my_CountedHandle<INSTANCE>;

    // NOT IMPLEMENTED
    my_CountedHandleRep(const my_CountedHandleRep&);
    my_CountedHandleRep& operator=(const my_CountedHandleRep&);

  private:
    // PRIVATE CLASS METHODS
    static void
    deleteObject(my_CountedHandleRep<INSTANCE> *object);

    // PRIVATE CREATORS
    my_CountedHandleRep(INSTANCE *instance);
    ~my_CountedHandleRep();

    // PRIVATE MANIPULATORS
    void increment();
    int decrement();
};

Class my_CountedHandle

Then, we create class my_CountedHandle that provides an individual handle to the shared, reference-counted object. Each my_CountedHandle object acts as a smart pointer, supplying an overloaded operator-> that provides access to the underlying INSTANCE object via pointer semantics.

my_CountedHandle can also be copied freely; the copy constructor will use the increment method from my_CountedHandleRep to note the extra copy. Similarly, the destructor will call my_CountedHandleRep::decrement to note that there is one fewer handle referring to the underlying INSTANCE, and will delete the "rep" object when its reference count is reduced to zero.

Similar to my_CountedHandleRep, the class declaration for my_CountedHandle is identical to that in bsls_atomicoperations:

// ======================
// class my_CountedHandle
// ======================

template <class INSTANCE>
class my_CountedHandle {
    // DATA
    my_CountedHandleRep<INSTANCE> *d_rep_p;  // shared rep.

  public:
    // CREATORS
    my_CountedHandle();
    my_CountedHandle(INSTANCE *instance);
    my_CountedHandle(const my_CountedHandle<INSTANCE>& other);
    ~my_CountedHandle();

    // ACCESSORS
    INSTANCE *operator->() const;
    int numReferences() const;
};

Function Definitions for my_CountedHandleRep

Next, we provide a definition for the static deleteObject method, which is called by the destructor for class my_CountedHandle for the last instance of my_CountedHandle using the given "rep" object:

template <class INSTANCE>
inline
void my_CountedHandleRep<INSTANCE>::deleteObject(
                                        my_CountedHandleRep<INSTANCE> *object)
{
    delete object;
}

Then, we define the constructor for the my_CountedHandleRep<INSTANCE> class. Member d_count is initialized to 1, reflecting the fact that this constructor will be called by a new instance of my_CountedHandle, which instance is our first and only handle when this constructor is called: notice that d_count (of type bsls::AtomicInt) is initialized as if it were a simple integer; its constructor guarantees that the initialization is done atomically.

template <class INSTANCE>
inline
my_CountedHandleRep<INSTANCE>::my_CountedHandleRep(INSTANCE *instance)
: d_instance_p(instance)
, d_count(1)
{
}

Then, we define the destructor, which simply deletes the managed instance, d_instance_p:

template <class INSTANCE>
inline
my_CountedHandleRep<INSTANCE>::~my_CountedHandleRep()
{
    delete d_instance_p;
}

Next, we define method increment, which is called by my_CountedHandle to add a new reference to the current "rep" object; it simply increments d_count using the prefix operator++:

// MANIPULATORS
template <class INSTANCE>
inline
void my_CountedHandleRep<INSTANCE>::increment()
{
    ++d_count;
}

The above operation must be done atomically in a multi-threaded context; class bsls::AtomicInt provides this guarantee for all its overloaded operators, and my_CountedHandleRep relies upon this guarantee.

Then, we implement method decrement, which is called by my_CountedHandle when a reference to the current "rep" object is being deleted:

template <class INSTANCE>
inline
int my_CountedHandleRep<INSTANCE>::decrement()
{
    return --d_count;
}

This method atomically decrements the number of references to this my_CountedHandleRep and, once again, atomicity is guaranteed by the underlying type of d_count.

Function Definitions for my_CountedHandle

Next, we define the first constructor for my_CountedHandle, which is used when creating a handle for a new INSTANCE; note that the INSTANCE is constructed separately, and a pointer to that object is passed as the constructor's argument (instance):

// ----------------------
// class my_CountedHandle
// ----------------------

// CREATORS
template <class INSTANCE>
inline
my_CountedHandle<INSTANCE>::my_CountedHandle(INSTANCE *instance)
{
    d_rep_p = new my_CountedHandleRep<INSTANCE>(instance);
}

Then, we define the copy constructor; the new object copies the pointer to the underlying my_CountedHandleRep and then increments its reference counter:

template <class INSTANCE>
inline
my_CountedHandle<INSTANCE>::my_CountedHandle(
                                     const my_CountedHandle<INSTANCE>& other)
: d_rep_p(other.d_rep_p)
{
    if (d_rep_p) {
        d_rep_p->increment();
    }
}

Next, we define the destructor that decrements the "rep" object's reference count using the decrement method. The decrement method returns the object's reference count after the decrement is completed, and my_CountedHandle uses this value to determine whether the "rep" object should be deleted:

template <class INSTANCE>
inline
my_CountedHandle<INSTANCE>::~my_CountedHandle()
{
    if (d_rep_p && 0 == d_rep_p->decrement()) {
        my_CountedHandleRep<INSTANCE>::deleteObject(d_rep_p);
    }
}

Now, we define member operator->(), which provides basic pointer semantics for my_CountedHandle:

// ACCESSORS
template <class INSTANCE>
inline
INSTANCE *my_CountedHandle<INSTANCE>::operator->() const
{
    return d_rep_p->d_instance_p;
}

Finally, we define method numReferences, which returns the value of the reference counter:

template <class INSTANCE>
inline
int my_CountedHandle<INSTANCE>::numReferences() const
{
    return d_rep_p ? d_rep_p->d_count : 0;
}

Note that, while class my_CountedHandleRep is itself fully thread-safe, it does not guarantee thread safety for the INSTANCE object. In order to provide thread safety for the INSTANCE in the general case, the "rep" would need to use a more general concurrency mechanism such as a mutex.
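
For example, the handle might be exercised as follows (my_Object and the useHandles function are hypothetical and serve only to illustrate the interface):

class my_Object {
  public:
    void process() {}
};

void useHandles()
{
    my_CountedHandle<my_Object> handle1(new my_Object());   // count == 1
    {
        my_CountedHandle<my_Object> handle2(handle1);       // count == 2
        handle2->process();        // pointer semantics via 'operator->'
    }                              // 'handle2' destroyed: count back to 1

    // When 'handle1' goes out of scope the count drops to 0, and both the
    // "rep" object and the managed 'my_Object' instance are deleted.
}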

Example 3: Thread-Safe Lock-Free Singly-Linked List

This example demonstrates the use of atomic pointers to implement a fast, thread-aware, singly-linked list. The example class, my_PtrStack, is a templatized pointer stack, supporting push and pop methods. The class is implemented using a singly-linked list. Nodes in the list are linked together using atomic operations. Instances of this structure are allocated using the provided allocator. When nodes are freed, they are cached on a free list. This free list is also implemented as a singly-linked list, using atomic pointer operations.

This example parallels the third usage example given for component bsls_atomicoperations, presenting a different implementation of my_PtrStack<T>, with an identical public interface. Note that, where the bsls_atomicoperations example uses the basic data type bsls::AtomicOperations::AtomicTypes::Pointer for members d_list and d_freeList, this implementation uses instead the higher-level type bsls::AtomicPointer<T>.

First, we create a class template, my_PtrStack, parameterized by TYPE. Instances of this template maintain a list of nodes and a free-node list. Each node has a pointer to a data item, d_item_p, a link to the next node in the list, d_next_p, and an atomic flag, d_inUseFlag, intended for lock-free list manipulation. The definition of the my_PtrStack class is provided below:

template <class TYPE>
class my_PtrStack {
    // TYPES
    struct Node {
        TYPE            *d_item_p;
        Node            *d_next_p;
        bsls::AtomicInt  d_inUseFlag;  // used to lock this node
    };

    // DATA
    bsls::AtomicPointer<Node> d_list;      // list of active nodes
    bsls::AtomicPointer<Node> d_freeList;  // list of cached free nodes

    // PRIVATE MANIPULATORS
    Node *allocateNode();
    void freeNode(Node *node);
    void deleteNodes(Node *node);

  public:
    // CREATORS
    my_PtrStack();
    ~my_PtrStack();

    // MANIPULATORS
    void push(TYPE *item);
    TYPE *pop();
};

Then, we write a constructor that default-initializes the stack. In the corresponding example in bsls_atomicoperations, the constructor must also initialize the atomic pointer d_freeList. Since this example uses the encapsulated type bsls::AtomicPointer, initialization of these member variables is done in their default constructors. Hence, no explicit code is required in this constructor:

// CREATORS
template <class TYPE>
inline my_PtrStack<TYPE>::my_PtrStack()
{
}

Next, we define the deleteNodes method and the destructor, which delete the nodes that the my_PtrStack object owns. Note that we don't need to worry about concurrent access to the node lists in the destructor, as the destructor can be executed in only a single thread:

template <class TYPE>
inline void my_PtrStack<TYPE>::deleteNodes(Node *node)
{
    while (node) {
        Node *next = node->d_next_p;
        delete node;
        node = next;
    }
}

template <class TYPE>
inline my_PtrStack<TYPE>::~my_PtrStack()
{
    deleteNodes(d_list);
    deleteNodes(d_freeList);
}

Then, we define method allocateNode to get a node from the free list in a thread-safe manner by leveraging atomic operations to ensure proper thread synchronization:

// PRIVATE MANIPULATORS
template <class TYPE>
typename my_PtrStack<TYPE>::Node *my_PtrStack<TYPE>::allocateNode()
{
    Node *node;
    while (1) {
        node = d_freeList;  // get the current head
        if (!node) {
            break;
        }

Next, we try locking the node, and start over if locking fails:

        if (node->d_inUseFlag.swap(1)) {
            continue;
        }

Then, we atomically modify the head if it has not changed. testAndSwap compares d_freeList to node, replacing node with node->d_next_p only if it matches d_freeList. If d_freeList did not match node, then the free list has been changed on another thread, between its assignment to the node and the call to testAndSwap. If the list head has changed, then try again:

        if (d_freeList.testAndSwap(node, node->d_next_p) == node) {
            break;
        }

        // Unlock the node.
        node->d_inUseFlag = 0;
    }

Next, we allocate a new node if there were no nodes in the free node list:

    if (!node) {
        node = new Node();  // should allocate with 'd_allocator_p', but
                            // here we use 'new' directly for simplicity
        node->d_inUseFlag = 1;
    }
    return node;
}

Note that the node is returned in the locked state and remains locked until the caller has finished inserting it into a list.

Then, we define the freeNode method to add a given node to the free list; freeNode also needs to be synchronized using atomic operations:

template <class TYPE>
inline void my_PtrStack<TYPE>::freeNode(Node *node)
{
    if (!node) {
        return;
    }

    while (1) {
        node->d_next_p = d_freeList;

        // Atomically test and swap the head of the list with the new node.
        // If the list head has been changed (by another thread), try again.
        if (d_freeList.testAndSwap(node->d_next_p, node) == node->d_next_p) {
            break;
        }
    }

    // Unlock the 'node'.
    node->d_inUseFlag = 0;
}

Now, we begin to define the public "stack-like" interface for my_PtrStack. Note that the push method is similar to freeNode, except that it assigns an item value and operates on d_list, which maintains the list of active nodes:

// MANIPULATORS
template <class TYPE>
void my_PtrStack<TYPE>::push(TYPE *item)
{
    Node *node = allocateNode();
    node->d_item_p = item;

    while (1) {
        node->d_next_p = d_list;
        if (d_list.testAndSwap(node->d_next_p, node) == node->d_next_p) {
            break;
        }
    }
    node->d_inUseFlag = 0;
}

Finally, we define the pop method that removes the node at the top of the active node list, d_list, adds it to the free-node list, and returns the data item contained in the node to the caller:

template <class TYPE>
TYPE *my_PtrStack<TYPE>::pop()
{
    Node *node;
    while (1) {
        node = d_list;
        if (!node) {
            break;
        }
        if (node->d_inUseFlag.swap(1)) {
            continue;  // node is locked by another thread; try again
        }
        if (d_list.testAndSwap(node, node->d_next_p) == node) {
            break;  // 'node' was removed from the list; keep it locked
        }
        node->d_inUseFlag = 0;  // the list head changed in another thread;
                                // unlock the node and try again
    }
    TYPE *item = node ? node->d_item_p : 0;
    if (node) {
        freeNode(node);
    }
    return item;
}

Notice that if the stack was empty, a NULL pointer is returned.
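
For example, the stack might be exercised as follows (a single-threaded sketch; in a real application push and pop would typically be invoked concurrently from multiple threads):

void useStack()
{
    my_PtrStack<int> stack;

    int a = 1, b = 2;
    stack.push(&a);
    stack.push(&b);

    int *top   = stack.pop();   // returns '&b' (the last item pushed)
    int *next  = stack.pop();   // returns '&a'
    int *empty = stack.pop();   // the stack is now empty: returns 0

    (void)top; (void)next; (void)empty;
}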