Provide an allocator to manage pools of varying object sizes.
This component provides an allocator, bdlma::ConcurrentMultipoolAllocator, that implements the bdlma::ManagedAllocator protocol and maintains a configurable number of bdlma::ConcurrentPool objects, each dispensing memory blocks of a unique size. The bdlma::ConcurrentPool objects are placed in an array, starting at index 0, with each successive pool managing memory blocks of a size twice that of the previous pool. Each allocation (deallocation) request is satisfied by (returned to) the internal pool managing memory blocks of the smallest size not less than the requested size, or, if no internal pool manages blocks of sufficient size, by a separately managed list of memory blocks. Both the release method and the destructor of a bdlma::ConcurrentMultipoolAllocator release all memory currently allocated via the object.
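For illustration, the following sketch shows the allocation behavior just described (the pool count and request sizes are arbitrary choices for this example):

    #include <bdlma_concurrentmultipoolallocator.h>

    int main()
    {
        // An allocator with 4 pools manages block sizes 8, 16, 32, and 64.
        bdlma::ConcurrentMultipoolAllocator alloc(4);

        void *p1 = alloc.allocate(12);   // served by the 16-byte pool
        void *p2 = alloc.allocate(64);   // served by the 64-byte pool
        void *p3 = alloc.allocate(200);  // no pool is large enough; served by
                                         // the separately managed block list

        alloc.deallocate(p2);            // returns the block to its pool

        alloc.release();                 // releases 'p1', 'p3', and all other
                                         // outstanding memory at once
        return 0;
    }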
The main difference between a bdlma::ConcurrentMultipoolAllocator and a bdlma::ConcurrentMultipool is that, very often, a bdlma::ConcurrentMultipoolAllocator is managed through a bslma::Allocator pointer. Hence, every call to the allocate method involves a virtual function call, which is slower than invoking the non-virtual allocate method on a bdlma::ConcurrentMultipool. However, since bslma::Allocator * is widely used across BDE interfaces, a bdlma::ConcurrentMultipoolAllocator is more generally applicable than a bdlma::ConcurrentMultipool.
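For example, because it implements the bslma::Allocator protocol, the allocator can be handed directly to any allocator-aware BDE type (a small sketch; the container and its contents are arbitrary choices):

    #include <bdlma_concurrentmultipoolallocator.h>
    #include <bsl_string.h>
    #include <bsl_vector.h>

    int main()
    {
        bdlma::ConcurrentMultipoolAllocator multipoolAllocator(8);

        // Passed wherever a 'bslma::Allocator *' is accepted; each
        // allocation made through that pointer incurs one virtual call.
        bsl::vector<bsl::string> names(&multipoolAllocator);
        names.push_back("alpha");
        names.push_back("beta");
        return 0;
    }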
When creating a bdlma::ConcurrentMultipoolAllocator, clients can optionally configure:

 - the number of pools maintained;
 - the growth strategy used to replenish each pool (geometric, starting at 1, or fixed at a maximum chunk size);
 - the maximum number of blocks per chunk (which need not be an integral power of 2); and
 - the allocator used to supply the underlying memory.
A default-constructed multipool allocator has a relatively small, implementation-defined number of pools, N, with respective block sizes ranging from 2^3 = 8 to 2^(N+2). By default, the initial chunk size (i.e., the number of blocks of a given size allocated at once to replenish a pool's memory) is 1, and each pool's chunk size grows geometrically until it reaches an implementation-defined maximum, at which it is capped. Finally, unless otherwise specified, all memory comes from the allocator that was the currently installed default allocator at the time the bdlma::ConcurrentMultipoolAllocator was created.
Using the various pooling options described above, we can configure the number of pools maintained, whether replenishment should be adaptive (i.e., geometric starting with 1) or fixed at a maximum chunk size, what that maximum chunk size should be (which need not be an integral power of 2), and the underlying allocator used to supply memory. Note that both GROWTH STRATEGY and MAX BLOCKS PER CHUNK can be specified separately either as a single value applying to all of the maintained pools, or as an array of values, with the elements applying to each individually maintained pool.
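For illustration, a sketch of one such configuration follows (the exact constructor overloads should be verified against the component header; the values chosen here are arbitrary):

    #include <bdlma_concurrentmultipoolallocator.h>
    #include <bsls_blockgrowth.h>

    int main()
    {
        // 6 pools manage block sizes 8, 16, 32, 64, 128, and 256.
        const int numPools = 6;

        // Grow each pool's chunk size geometrically, starting at 1 block...
        bsls::BlockGrowth::Strategy growth = bsls::BlockGrowth::BSLS_GEOMETRIC;

        // ...but never replenish with more than 30 blocks at a time (the cap
        // need not be a power of 2).
        const int maxBlocksPerChunk = 30;

        bdlma::ConcurrentMultipoolAllocator alloc(numPools,
                                                  growth,
                                                  maxBlocksPerChunk);

        void *p = alloc.allocate(100);  // served by the 128-byte pool
        alloc.deallocate(p);
        return 0;
    }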
This section illustrates intended use of this component.
A bdlma::ConcurrentMultipoolAllocator can be used to supply memory to node-based data structures such as bsl::set, bsl::list, or bsl::map. Suppose we are implementing a container of named graphs, where a graph is defined by a set of edges and nodes. The various fixed-sized nodes can be allocated efficiently by a bdlma::ConcurrentMultipoolAllocator.
First, the edge class, my_Edge, is defined as follows:
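A minimal sketch of such an edge class is shown below (the data member names are illustrative assumptions):

    class my_Node;  // defined next

    class my_Edge {
        // This class represents an edge connecting two nodes of a graph.

        // DATA
        my_Node *d_first_p;   // first node  (held, not owned)
        my_Node *d_second_p;  // second node (held, not owned)

      public:
        // CREATORS
        my_Edge(my_Node *first, my_Node *second)
            // Create an edge connecting the specified 'first' and 'second'
            // nodes.
        : d_first_p(first)
        , d_second_p(second)
        {
        }
    };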
Then, the node class, my_Node, is defined as follows:
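A sketch of the node class might look like the following; the bslma allocator trait is declared so that allocator-aware containers propagate their allocator to each node (member and function names are assumptions):

    #include <bslma_allocator.h>
    #include <bslma_usesbslmaallocator.h>
    #include <bslmf_nestedtraitdeclaration.h>
    #include <bsl_set.h>

    class my_Node {
        // This class represents a node that may be connected to any number
        // of edges.

        // DATA
        bsl::set<my_Edge *> d_edges;  // edges attached to this node

      public:
        // TRAITS
        BSLMF_NESTED_TRAIT_DECLARATION(my_Node, bslma::UsesBslmaAllocator);

        // CREATORS
        explicit my_Node(bslma::Allocator *basicAllocator = 0)
            // Create a node with no attached edges.  Optionally specify a
            // 'basicAllocator' used to supply memory; if 0, the currently
            // installed default allocator is used.
        : d_edges(basicAllocator)
        {
        }

        // MANIPULATORS
        void addEdge(my_Edge *edge) { d_edges.insert(edge); }
    };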
Then we define the graph class, my_Graph, as follows:
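Again as a sketch only (this version stores its edges and nodes in bsl::list objects; the actual definition, and hence sizeof(my_Graph), may differ):

    #include <bsl_list.h>

    class my_Graph {
        // This class represents a graph defined by its edges and nodes.

        // DATA
        bsl::list<my_Edge> d_edges;  // edges of this graph
        bsl::list<my_Node> d_nodes;  // nodes of this graph

      public:
        // TRAITS
        BSLMF_NESTED_TRAIT_DECLARATION(my_Graph, bslma::UsesBslmaAllocator);

        // CREATORS
        explicit my_Graph(bslma::Allocator *basicAllocator = 0)
            // Create an empty graph.  Optionally specify a 'basicAllocator'
            // used to supply memory.
        : d_edges(basicAllocator)
        , d_nodes(basicAllocator)
        {
        }
    };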
Then finally, the container for the collection of named graphs, my_NamedGraphContainer, is defined as follows:
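A sketch of the container, mapping graph names to graphs (the createGraph manipulator is an assumed convenience for this example):

    #include <bsl_map.h>
    #include <bsl_string.h>

    class my_NamedGraphContainer {
        // This class maps graph names to graphs.

        // DATA
        bsl::map<bsl::string, my_Graph> d_graphMap;  // name -> graph

      public:
        // TRAITS
        BSLMF_NESTED_TRAIT_DECLARATION(my_NamedGraphContainer,
                                       bslma::UsesBslmaAllocator);

        // CREATORS
        explicit my_NamedGraphContainer(bslma::Allocator *basicAllocator = 0)
            // Create an empty container.  Optionally specify a
            // 'basicAllocator' used to supply memory.
        : d_graphMap(basicAllocator)
        {
        }

        // MANIPULATORS
        void createGraph(const bsl::string& name)
            // Create a new, empty graph having the specified 'name'.
        {
            d_graphMap[name];  // default-constructs a 'my_Graph' for 'name'
        }
    };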
Finally, in main, we can create a bdlma::ConcurrentMultipoolAllocator and pass it to our my_NamedGraphContainer. Since we know that the maximum block size needed is 32 bytes (from sizeof(my_Graph)), we can calculate the number of pools needed from the block-size bound described in the configuration discussion above:
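Restating that bound, with 32 as the largest block size required here, N must satisfy:

    largestBlockSize <= 2^(N+2),  i.e.,  32 <= 2^(N+2)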
Solving this inequality, the smallest N that satisfies the relationship is 3:
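A sketch of main, under the assumption stated above that 32 bytes is the largest pooled block size needed:

    #include <bdlma_concurrentmultipoolallocator.h>

    int main()
    {
        enum { k_NUM_POOLS = 3 };  // pools of block sizes 8, 16, and 32

        // All memory for the container, its graphs, and their nodes and
        // edges comes from 'basicAllocator'.
        bdlma::ConcurrentMultipoolAllocator basicAllocator(k_NUM_POOLS);

        my_NamedGraphContainer container(&basicAllocator);
        container.createGraph("exampleGraph");

        // Destroying 'basicAllocator' (or calling its 'release' method)
        // frees all memory allocated through it.
        return 0;
    }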