Provide a memory manager to manage pools of varying block sizes.
This component implements a memory manager, bdlma::ConcurrentMultipool, that maintains a configurable number of bdlma::ConcurrentPool objects, each dispensing memory blocks of a unique size. The bdlma::ConcurrentPool objects are placed in an array, starting at index 0, with each successive pool managing memory blocks of a size twice that of the previous pool. Each multipool allocation (deallocation) request allocates memory from (returns memory to) the internal pool managing memory blocks of the smallest size not less than the requested size, or else from a separately managed list of memory blocks, if no internal pool managing memory blocks of sufficient size exists. Both the release method and the destructor of a bdlma::ConcurrentMultipool release all memory currently allocated via the object.
A bdlma::ConcurrentMultipool can thus be pictured as an array of bdlma::ConcurrentPool objects, each dispensing blocks of one fixed size, together with a separately managed list for requests too large for any pool. Note that a "chunk" is a large, contiguous block of memory, internal to a bdlma::ConcurrentPool maintained by the multipool, from which memory blocks of uniform size are dispensed to users.
bdlma::ConcurrentMultipool is fully thread-safe, meaning any operation on the same object can be safely invoked from any thread.
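As a minimal sketch of what that guarantee permits (this example is not part of the component's own documentation and uses only the allocate, deallocate, and release methods described here), several threads may allocate from and deallocate to the same multipool without any external locking:

    #include <bdlma_concurrentmultipool.h>

    #include <thread>
    #include <vector>

    int main()
    {
        bdlma::ConcurrentMultipool pool;  // default number of pools,
                                          // default allocator

        std::vector<std::thread> workers;
        for (int t = 0; t < 4; ++t) {
            workers.emplace_back([&pool] {
                for (int i = 0; i < 1000; ++i) {
                    void *p = pool.allocate(24);  // served by the 32-byte pool
                    pool.deallocate(p);           // block becomes reusable
                }
            });
        }
        for (std::thread& w : workers) {
            w.join();
        }

        pool.release();  // release any remaining pooled memory
        return 0;
    }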
When creating a bdlma::ConcurrentMultipool, clients can optionally configure:

1) NUMBER OF POOLS: the number of internal bdlma::ConcurrentPool objects maintained by the multipool;
2) GROWTH STRATEGY: whether each pool's chunk size grows geometrically (starting from 1) or is fixed at the maximum blocks per chunk;
3) MAX BLOCKS PER CHUNK: the maximum number of blocks of a given size allocated at once to replenish a pool; and
4) BASIC ALLOCATOR: the allocator used to supply the underlying memory.
A default-constructed multipool has a relatively small, implementation-defined number of pools, N, with respective block sizes ranging from 2^3 = 8 to 2^(N+2). By default, the initial chunk size (i.e., the number of blocks of a given size allocated at once to replenish a pool's memory) is 1, and each pool's chunk size grows geometrically until it reaches an implementation-defined maximum, at which it is capped. Finally, unless otherwise specified, all memory comes from the allocator that was the currently installed default allocator at the time the bdlma::ConcurrentMultipool was created.
Using the various pooling options described above, we can configure the number of pools maintained, whether replenishment should be adaptive (i.e., geometric starting with 1) or fixed at a maximum chunk size, what that maximum chunk size should be (which need not be an integral power of 2), and the underlying allocator used to supply memory. Note that both GROWTH STRATEGY and MAX BLOCKS PER CHUNK can be specified separately either as a single value applying to all of the maintained pools, or as an array of values, with the elements applying to each individually maintained pool.
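For instance, the following sketch configures a multipool with six pools and geometric growth capped at 30 blocks per chunk. The constructor overload used here is assumed to mirror those of bdlma::Multipool (a pool count, a bsls::BlockGrowth::Strategy, a maximum blocks-per-chunk value, and an allocator); consult the component's function-level documentation for the exact signatures:

    #include <bdlma_concurrentmultipool.h>
    #include <bslma_default.h>
    #include <bsls_blockgrowth.h>

    void configureExample()
    {
        bslma::Allocator *alloc = bslma::Default::defaultAllocator();

        // Six pools (block sizes 8, 16, 32, 64, 128, and 256 bytes), with
        // geometric chunk growth capped at 30 blocks per chunk.  (Assumed
        // overload; see the component documentation.)
        bdlma::ConcurrentMultipool mp(6,
                                      bsls::BlockGrowth::BSLS_GEOMETRIC,
                                      30,
                                      alloc);

        void *p = mp.allocate(100);   // served by the 128-byte pool
        void *q = mp.allocate(1000);  // larger than any pool's block size:
                                      // served by the overflow list
        mp.deallocate(p);
        mp.deallocate(q);
    }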
This section illustrates intended use of this component.
A bdlma::ConcurrentMultipool can be used by containers that hold different types of elements, each of uniform size, for efficient memory allocation of new elements. Suppose we have a factory class, my_MessageFactory, that creates messages based on user requests. Each message is created with the most efficient memory storage possible, using predefined 8-byte, 16-byte, and 32-byte buffers. If the message size exceeds the three predefined values, a generic message is used. For efficient memory allocation of messages, we use a bdlma::ConcurrentMultipool.
First, we define our message types as follows:
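The classes below are an illustrative sketch of such a hierarchy (the names my_Message, my_SmallMessage, and my_GenericMessage are this example's, not the component's); the 16-byte and 32-byte variants are analogous to my_SmallMessage:

    #include <cstring>

    // Abstract message interface.
    class my_Message {
      public:
        virtual ~my_Message() {}
        virtual const char *getMessage() = 0;
    };

    // Message stored in a fixed 8-byte buffer.  'my_MediumMessage' (16
    // bytes) and 'my_LargeMessage' (32 bytes) follow the same pattern.
    class my_SmallMessage : public my_Message {
        char d_buffer[8];
        int  d_length;

      public:
        my_SmallMessage(const char *msg, int length)
        : d_length(length)
        {
            std::memcpy(d_buffer, msg, length);
        }

        const char *getMessage() { return d_buffer; }
    };

    // Message whose payload exceeds the largest fixed-size buffer; the
    // payload buffer is allocated separately (by the factory) and is held,
    // not owned, by the message.
    class my_GenericMessage : public my_Message {
        char *d_buffer_p;

      public:
        explicit my_GenericMessage(char *buffer) : d_buffer_p(buffer) {}

        const char *getMessage() { return d_buffer_p; }
    };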
Then we define our factory class, my_MessageFactory, as follows:
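A sketch of the factory interface follows; it owns a bdlma::ConcurrentMultipool member (named d_multipool, as referenced below) from which every message object and payload buffer is allocated:

    #include <bdlma_concurrentmultipool.h>
    #include <bslma_allocator.h>

    class my_MessageFactory {
        // This class implements an efficient message factory that builds
        // and returns messages, and manages all memory used by them.

        // DATA
        bdlma::ConcurrentMultipool d_multipool;  // memory manager for
                                                 // messages

      public:
        // CREATORS
        explicit my_MessageFactory(bslma::Allocator *basicAllocator = 0);
            // Create a message factory.  Optionally specify a
            // 'basicAllocator' used to supply memory.  If 'basicAllocator'
            // is 0, the currently installed default allocator is used.

        ~my_MessageFactory();
            // Destroy this factory and reclaim all messages created by it.

        // MANIPULATORS
        my_Message *createMessage(const char *data);
            // Create a message storing the specified 'data'.

        void disposeAllMessages();
            // Dispose of all created messages.

        void disposeMessage(my_Message *message);
            // Dispose of the specified 'message'.
    };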
The use of a multipool and the release method enables the disposeAllMessages method to quickly deallocate all memory blocks used to create messages:
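A sketch of that method, given the factory above:

    void my_MessageFactory::disposeAllMessages()
    {
        // One call returns every outstanding block (every message object
        // and every generic payload buffer) to the multipool at once.
        d_multipool.release();
    }

Note that release deallocates memory without invoking the messages' destructors, which is acceptable here because the message types do not manage any other resources.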
The multipool can also reuse deallocated memory. Once a message is destroyed by the disposeMessage method, memory allocated for that message is reclaimed by the multipool and can be used to create the next message having the same size:
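For example (a sketch using the multipool's deleteObject method, which destroys the object and returns its footprint to the pool that dispensed it):

    void my_MessageFactory::disposeMessage(my_Message *message)
    {
        // Destroy the message and return its memory block to the multipool,
        // where it is available for the next same-sized allocation.
        d_multipool.deleteObject(message);
    }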
A multipool optimizes the allocation of memory by using dynamically-allocated buffers (also known as chunks) to supply memory. As each chunk can satisfy multiple memory block requests before requiring additional dynamic memory allocation, the number of dynamic allocation requests needed is greatly reduced.
For the number of pools managed by the multipool, we chose to use the implementation-defined default value instead of calculating and specifying a value. Note that if users want to specify the number of pools, the value can be calculated as the smallest N such that the following relationship holds, where size is the footprint of the largest object to be pooled (this follows from the pools' block sizes ranging from 2^3 to 2^(N+2)):

    2 ^ (N + 2) >= size
Continuing on with the usage example:
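A sketch of the factory's constructor, which simply forwards the optional allocator to the multipool member:

    my_MessageFactory::my_MessageFactory(bslma::Allocator *basicAllocator)
    : d_multipool(basicAllocator)
    {
    }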
Note that in the destructor, all outstanding messages are reclaimed automatically when d_multipool is destroyed:
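So the destructor body can be left empty (a sketch):

    my_MessageFactory::~my_MessageFactory()
    {
        // No explicit cleanup is needed: destroying 'd_multipool' releases
        // all memory allocated through it, including every outstanding
        // message.
    }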
A bdlma::ConcurrentMultipool is ideal for allocating the different sized messages since repeated deallocations might be necessary (which renders a bdlma::SequentialPool unsuitable) and the sizes of these types are all different:
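A sketch of createMessage, dispatching on the message length to the smallest suitable message type (and, for oversized messages, drawing the payload buffer from the multipool as well); it assumes the 16-byte my_MediumMessage and 32-byte my_LargeMessage variants mentioned earlier:

    #include <cstring>
    #include <new>

    my_Message *my_MessageFactory::createMessage(const char *data)
    {
        const int length = static_cast<int>(std::strlen(data));

        if (length < 8) {
            // All 'my_SmallMessage' objects have the same footprint, so
            // they are dispensed from (and returned to) a single pool.
            return new (d_multipool.allocate(sizeof(my_SmallMessage)))
                                              my_SmallMessage(data, length);
        }
        if (length < 16) {
            return new (d_multipool.allocate(sizeof(my_MediumMessage)))
                                             my_MediumMessage(data, length);
        }
        if (length < 32) {
            return new (d_multipool.allocate(sizeof(my_LargeMessage)))
                                              my_LargeMessage(data, length);
        }

        // Oversized message: allocate both the payload buffer and the
        // 'my_GenericMessage' footprint from the multipool.
        char *buffer = static_cast<char *>(d_multipool.allocate(length + 1));
        std::memcpy(buffer, data, length + 1);
        return new (d_multipool.allocate(sizeof(my_GenericMessage)))
                                                  my_GenericMessage(buffer);
    }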
bslma::Allocator is used throughout the interfaces of BDE components. Suppose we would like to create a multipool allocator, my_MultipoolAllocator, that allocates memory from multiple bdlma::ConcurrentPool objects in a similar fashion to bdlma::ConcurrentMultipool. This class can be used directly to implement such an allocator.
Note that the documentation for this class is simplified for this usage example. Please see bdlmca_multipoolallocator for full documentation of a similar class.
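The sketch below shows what such an adapter could look like: it derives from bslma::Allocator and forwards allocate and deallocate to a held bdlma::ConcurrentMultipool (the class and member names are illustrative, and the full, documented version lives in the companion allocator component):

    #include <bdlma_concurrentmultipool.h>
    #include <bslma_allocator.h>

    class my_MultipoolAllocator : public bslma::Allocator {
        // This class implements the 'bslma::Allocator' protocol in terms
        // of a 'bdlma::ConcurrentMultipool'.

        // DATA
        bdlma::ConcurrentMultipool d_multipool;  // underlying multipool

      public:
        // CREATORS
        explicit my_MultipoolAllocator(bslma::Allocator *basicAllocator = 0)
        : d_multipool(basicAllocator)
        {
        }

        // MANIPULATORS
        void *allocate(size_type size)
            // Return a block of at least the specified 'size' bytes,
            // obtained from the internal pool of the smallest sufficient
            // block size (or from the overflow list), or 0 if 'size' is 0.
        {
            if (0 == size) {
                return 0;
            }
            return d_multipool.allocate(size);
        }

        void deallocate(void *address)
            // Return the memory at the specified 'address' to the
            // multipool; do nothing if 'address' is 0.
        {
            if (address) {
                d_multipool.deallocate(address);
            }
        }
    };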