BDE 4.14.0 Production release
bdlmt.h
/// @file bdlmt.h
///
///
/// @defgroup bdlmt Package bdlmt
/// @brief Basic Development Library Multi Thread (bdlmt)
/// @addtogroup bdl
/// @{
/// @addtogroup bdlmt
/// [bdlmt]: group__bdlmt.html
/// @{
///
/// # Purpose {#bdlmt-purpose}
/// Provides thread pools and event schedulers.
///
/// # Mnemonic {#bdlmt-mnemonic}
/// Basic Development Library Multi Thread (bdlmt)
///
/// @see bdlcc
///
/// # Description {#bdlmt-description}
/// The 'bdlmt' ("Basic Development Library Multi Thread") package
/// provides components for creating and managing thread pools, and components for
/// scheduling (time-based) events.
///
/// A "thread pool" is a collection of processor threads that are managed
/// together and used interchangeably to support user requests. The
/// @ref bdlmt_threadpool component allows clients to configure the pool so that it
/// grows and shrinks according to user demand, manage thread availability, and
/// schedule client "jobs" to be run independently as threads in the pool become
/// available. It does this by placing client requests on an internal job
/// queue, and controlling multiple threads as they remove jobs from the queue
/// and execute them.
///
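/// The following minimal sketch is illustrative only; it assumes the
/// 'bdlmt::ThreadPool' constructor taking thread attributes, minimum and
/// maximum thread counts, and a maximum idle time, along with the 'start',
/// 'enqueueJob', 'drain', and 'stop' methods (see the @ref bdlmt_threadpool
/// component documentation for the authoritative interface):
/// @code
/// #include <bdlmt_threadpool.h>
/// #include <bslmt_threadattributes.h>
/// #include <bsl_iostream.h>
///
/// // A job to be executed on a worker thread in the pool.
/// void myJob()
/// {
///     bsl::cout << "processing a client request" << bsl::endl;
/// }
///
/// void useThreadPool()
/// {
///     bslmt::ThreadAttributes attributes;            // default attributes
///     bdlmt::ThreadPool       pool(attributes,
///                                  2,                // minimum # of threads
///                                  4,                // maximum # of threads
///                                  1000);            // max idle time (ms)
///
///     pool.start();              // spawn the minimum number of threads
///     pool.enqueueJob(&myJob);   // place 'myJob' on the internal job queue
///     pool.drain();              // wait for queued jobs to be processed
///     pool.stop();               // wind the pool down
/// }
/// @endcode
///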
/// A "multi-queue thread pool" defines a dynamic, configurable pool of queues,
/// each of which is processed by a thread in a thread pool, such that elements
/// on a given queue are processed serially, regardless of which thread is
/// processing the queue at a given time. In addition to the ability to create
/// and delete queues, clients are able to tune the underlying thread pool.
///
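/// As an illustrative sketch only, the following assumes the
/// 'bdlmt::MultiQueueThreadPool' methods 'start', 'createQueue',
/// 'enqueueJob', and 'stop' (see the @ref bdlmt_multiqueuethreadpool
/// component documentation for the authoritative interface):
/// @code
/// #include <bdlmt_multiqueuethreadpool.h>
/// #include <bslmt_threadattributes.h>
///
/// void processOrder() { /* ... */ }   // example jobs for this sketch
/// void processTrade() { /* ... */ }
///
/// void useMultiQueueThreadPool()
/// {
///     bslmt::ThreadAttributes     attributes;
///     bdlmt::MultiQueueThreadPool pool(attributes, 2, 4, 1000);
///
///     pool.start();
///
///     int ordersQueue = pool.createQueue();   // jobs on a given queue are
///     int tradesQueue = pool.createQueue();   // processed serially
///
///     pool.enqueueJob(ordersQueue, &processOrder);
///     pool.enqueueJob(tradesQueue, &processTrade);
///
///     pool.stop();                            // wind the pool down
/// }
/// @endcode
///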
/// A "timer-event scheduler" defines a thread-safe event scheduler. It
/// provides methods to schedule and cancel recurring events (also referred to
/// as "clocks") and non-recurring events. The callbacks are processed by a
/// separate thread (called the dispatcher thread).
///
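/// The sketch below is illustrative only; it assumes the
/// 'bdlmt::TimerEventScheduler' methods 'start', 'scheduleEvent',
/// 'startClock', and 'stop', with absolute times taken from the system
/// realtime clock (see the @ref bdlmt_timereventscheduler component
/// documentation for the authoritative interface and clock semantics):
/// @code
/// #include <bdlmt_timereventscheduler.h>
/// #include <bsls_systemtime.h>
/// #include <bsls_timeinterval.h>
///
/// void onTimeout()   { /* handle the one-time event  */ }
/// void onHeartbeat() { /* handle the recurring event */ }
///
/// void useScheduler()
/// {
///     bdlmt::TimerEventScheduler scheduler;
///     scheduler.start();                    // start the dispatcher thread
///
///     // Schedule a one-time event 5 seconds from now.
///     scheduler.scheduleEvent(
///         bsls::SystemTime::nowRealtimeClock() + bsls::TimeInterval(5),
///         &onTimeout);
///
///     // Schedule a recurring event (a "clock") every 30 seconds.
///     scheduler.startClock(bsls::TimeInterval(30), &onHeartbeat);
///
///     // ... later ...
///     scheduler.stop();                     // stop dispatching callbacks
/// }
/// @endcode
///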
/// ## Hierarchical Synopsis
///
/// The 'bdlmt' package currently has 9 components having 2 levels of physical
/// dependency. The list below shows the hierarchical ordering of the components.
/// The order of components within each level is not architecturally significant,
/// just alphabetical.
/// @code
///   2. bdlmt_multiqueuethreadpool
///      bdlmt_threadmultiplexor
///
///   1. bdlmt_eventscheduler
///      bdlmt_fixedthreadpool
///      bdlmt_multiprioritythreadpool
///      bdlmt_signaler
///      bdlmt_threadpool
///      bdlmt_throttle
///      bdlmt_timereventscheduler
/// @endcode
/// ## Component Synopsis
///
/// @ref bdlmt_eventscheduler :
///      Provide a thread-safe recurring and one-time event scheduler.
///
/// @ref bdlmt_fixedthreadpool :
///      Provide portable implementation for a fixed-size pool of threads.
///
/// @ref bdlmt_multiprioritythreadpool :
///      Provide a mechanism to parallelize a prioritized sequence of jobs.
///
/// @ref bdlmt_multiqueuethreadpool :
///      Provide a pool of queues, each processed serially by a thread pool.
///
/// @ref bdlmt_signaler :
///      Provide an implementation of a managed signals and slots system.
///
/// @ref bdlmt_threadmultiplexor :
///      Provide a mechanism for partitioning a collection of threads.
///
/// @ref bdlmt_threadpool :
///      Provide portable implementation for a dynamic pool of threads.
///
/// @ref bdlmt_throttle :
///      Provide mechanism for limiting the rate at which actions may occur.
///
/// @ref bdlmt_timereventscheduler :
///      Provide a thread-safe recurring and non-recurring event scheduler.
///
/// ## Generic Overview of Thread Pools
///
/// At the current time, this generic overview applies only to
/// 'bdlmt::MultipriorityThreadPool'. The plan is for the other thread pools
/// to move to this model at a later date.
///
/// As Figure 1 illustrates, a thread pool allows its clients to enqueue units of
/// work to be processed concurrently in multiple threads. Each work item, or
/// "job", consists of a function along with the address of its associated input
/// data. When executed, this address is supplied to the function as its only
/// argument; note that this function must have external linkage and return
/// 'void':
/// @code
/// extern "C" void job(void *); // Idiomatic C-style function signature
/// @endcode
/// Alternatively, both the function and its data can be encapsulated and
/// supplied in the form of an (invokable) function object, or "functor", taking
/// no arguments and returning 'void'.
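/// For example (an illustrative sketch only), such a functor might bind a
/// request identifier to its processing logic:
/// @code
/// class ProcessRequestJob {
///     // A functor whose 'operator()' takes no arguments and returns 'void'.
///
///     int d_requestId;   // identifies the request to process
///
///   public:
///     explicit ProcessRequestJob(int requestId)
///     : d_requestId(requestId)
///     {
///     }
///
///     void operator()() const
///     {
///         // ... process the request identified by 'd_requestId' ...
///     }
/// };
/// @endcode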
/// @code
/// +---------------------------------------------------------------------------+
/// |                       ThreadPool *Control* Methods                        |
/// |                                                                           |
/// |  Front Operations         Middle Operations      Back Operations          |
/// |  ----------------         -----------------      ---------------          |
/// |  int startThreads()       void removeJobs()      void enableQueue()       |
/// |  void stopThreads()       void drainJobs()       void disableQueue()      |
/// |  int resumeProcessing()                          int enqueueJob(func,arg) |
/// |  int suspendProcessing()                         int enqueueJob(job)      |
/// |                                                                           |
/// +- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -+
/// |          +--<--+--<--+--<--+--<--+--<--+----------------+                 |
/// |          |     |     |     |     |     |                |                 |
/// | Front <==| Job | Job | Job | Job | Job |                |==< Back         |
/// |          |     |     |     |     |     |                |                 |
/// |          +--<--+--<--+--<--+--<--+--<--+----------------+                 |
/// |                                                                           |
/// |             ,----------------.            ,-----------------.             |
/// |            ( N Worker Threads )          ( Thread Attributes )            |
/// |             `----------------'            `-----------------'             |
/// +---------------------------------------------------------------------------+
///              Figure 1: Illustration of Generalized Thread Pool
/// @endcode
/// In addition to enqueuing jobs, a thread pool must supply primitive control
/// functionality such as creating and destroying worker threads, enabling and
/// disabling the enqueuing of new jobs, causing the queue to block until there
/// are no pending jobs, and removing (i.e., canceling) all pending (i.e., not
/// yet running) jobs. Different kinds of threadpools will provide different
/// functionality and/or performance characteristics, corresponding to those of
/// the underlying thread-enabled ('bdlcc') queue -- e.g., (limited-capacity)
/// 'FixedQueue', (heap-based) 'PriorityQueue', and (array-based)
/// 'MultipriorityQueue'. Nonetheless, each of the threadpool objects in 'bdlmt'
/// should provide a suite of input and control operations that are consistent
/// in both name and behavior across the 'bdlmt' package.
///
/// Due to the intricate nature of threadpools, it is easy to convolve behaviors
/// in subtly different ways for functions having the same name. Consider, for
/// example, the method 'void drainJobs()', the basic functionality of which is
/// to 'block' the caller until all of the pending jobs complete (i.e., the
/// queue is empty and all worker threads are idle). Should 'drainJobs()' also
/// leave the queue in the disabled state? Even if that is a common usage
/// pattern, it is often useful to start with simple, orthogonal behaviors, and
/// if needed, define more complex behaviors in terms of them.
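///
/// For example, a client wanting "drain and disable" semantics can compose it
/// from the orthogonal primitives (a sketch using the generic method names
/// from Figure 1; 'drainAndDisable' is a hypothetical helper, not part of any
/// 'bdlmt' interface):
/// @code
/// #include <bdlmt_multiprioritythreadpool.h>
///
/// void drainAndDisable(bdlmt::MultipriorityThreadPool *pool)
/// {
///     pool->disableQueue();   // reject newly enqueued jobs
///     pool->drainJobs();      // block until all pending jobs complete
/// }
/// @endcode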
///
/// In the case of a thread pool, it is instructive to break the functionality
/// into three categories of operations relative to the underlying queue: Front,
/// Middle, and Back. At the back of the queue (refer to Figure 1), we need to
/// enable/disable clients from adding work items. Enabling or disabling the
/// queue does not affect the items already in the queue [Middle], nor any
/// worker threads processing these items [Front].
///
/// In the middle of the queue, we have two operations that result in purging
/// all pending items in the queue: 'drainJobs()' and 'removeJobs()'. If we
/// invoke 'removeJobs()', then all currently pending (i.e., not started) work
/// items will be removed (i.e., canceled). During this process, clients
/// attempting to add work items [Back] will block, but their eventual success
/// or failure (which is based solely on whether the queue is enabled or
/// disabled) is not affected. Note that jobs that are already in progress
/// [Front] are also unaffected. Similarly, invoking our orthogonal
/// 'drainJobs()' method will block enqueuing clients until all pending jobs
/// have completed, but will not affect the enabled state of the thread pool
/// [Back], nor the processing of work items [Front].
///
/// Finally we come to the front of the queue, which addresses the processing of
/// jobs. A (typically fixed) number of worker threads is specified at
/// construction. The thread pool "wakes up" in an enabled state, but without
/// having created the worker threads. Invoking the 'startThreads()' method
/// attempts to create these threads (unless they are already created). The
/// 'startThreads()' method returns 0 if all of these threads are started, and a
/// non-zero value otherwise (in which case none of the worker threads are
/// started). Redundant calls to 'startThreads()' do nothing and return zero.
/// Invoking 'stopThreads()' destroys each worker thread (after it completes any
/// current job). Note that the current contents of the queue [Middle], and the
/// ability to enqueue new jobs [Back] are not affected.
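///
/// Using the generic names from Figure 1, a start/stop cycle might look like
/// the following sketch ('bdlmt::MultipriorityThreadPool' is used only because
/// it is the pool to which this overview currently applies):
/// @code
/// #include <bdlmt_multiprioritythreadpool.h>
///
/// void runJobs(bdlmt::MultipriorityThreadPool *pool)
/// {
///     if (0 != pool->startThreads()) {
///         return;               // no worker threads were started
///     }
///
///     // ... enqueue jobs; the worker threads pop and execute them ...
///
///     pool->stopThreads();      // each worker finishes its current job,
///                               // then is destroyed
/// }
/// @endcode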
///
/// Whether or not started threads should be pulling jobs from the queue and
/// processing them is not necessarily the same question as whether the
/// user-specified number of worker threads has been created. In addition to
/// being *enabled* and *started*, let's consider one more possible state,
/// *suspended*. If a thread pool is in the *suspended* state, then even when
/// it is in the *started* state, it will not attempt to pop jobs from the
/// queue and execute them.
///
/// A thread pool is created enabled, not suspended, and not started. All
/// three of these qualities are orthogonal, and any one of them can be
/// changed at any time.
///
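/// For example (a sketch using the generic method names from Figure 1, with
/// 'bdlmt::MultipriorityThreadPool'), each quality can be toggled without
/// affecting the other two:
/// @code
/// #include <bdlmt_multiprioritythreadpool.h>
///
/// void toggleStates(bdlmt::MultipriorityThreadPool *pool)
/// {
///     pool->startThreads();        // started:   worker threads now exist
///     pool->suspendProcessing();   // suspended: workers stop popping jobs
///     pool->disableQueue();        // disabled:  new jobs are rejected
///
///     pool->enableQueue();         // enabled again; still suspended
///     pool->resumeProcessing();    // processing resumes; still started
/// }
/// @endcode
///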
/// The vast majority of users will be uninterested in both the 'suspend' and
/// 'disable' features, so it is imperative that newly created thread pools be
/// both non-suspended and enabled so that users can remain blissfully ignorant
/// of them. It is also important that the first usage examples, if not all of
/// them, omit use of these features to minimize learning time for the typical
/// user.
///
/// To conclude this generic overview, we note that there is one common usage
/// that, although not minimal, arguably deserves to be a method of every thread
/// pool class: 'void shutdown()'. This method is best described as a
/// composition of the simple, orthogonal functions described above. In order
/// to shut down a thread pool, we need to first disable the enqueuing of any
/// additional jobs, then remove all of the pending work items, and finally stop
/// all of the active threads:
/// @code
/// void shutdown()
/// {
///     disableQueue();
///     removeJobs();
///     stopThreads();
/// }
/// @endcode
/// By making sure that our initial operations are simple and orthogonal, we can
/// ensure that the precise meaning of more complex operations is kept clear.
///
/// ## Synchronous Signals on Unix
///
/// A thread pool ensures that, on Unix platforms, all the threads in the pool
/// block all asynchronous signals. Specifically, all signals except the
/// following synchronous signals are blocked:
/// @code
/// SIGBUS
/// SIGFPE
/// SIGILL
/// SIGSEGV
/// SIGSYS
/// SIGABRT
/// SIGTRAP
/// SIGIOT
/// @endcode
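///
/// Conceptually, the effect is as if each worker thread established a signal
/// mask like the following before processing jobs (an illustrative POSIX
/// sketch, not the actual implementation):
/// @code
/// #include <pthread.h>
/// #include <signal.h>
///
/// void blockAsynchronousSignals()
/// {
///     sigset_t mask;
///     sigfillset(&mask);            // start from "block every signal"
///
///     sigdelset(&mask, SIGBUS);     // leave the synchronous signals
///     sigdelset(&mask, SIGFPE);     // unblocked
///     sigdelset(&mask, SIGILL);
///     sigdelset(&mask, SIGSEGV);
///     sigdelset(&mask, SIGSYS);
///     sigdelset(&mask, SIGABRT);
///     sigdelset(&mask, SIGTRAP);
///     sigdelset(&mask, SIGIOT);
///
///     pthread_sigmask(SIG_BLOCK, &mask, 0);   // add to the calling thread's
///                                             // blocked set
/// }
/// @endcode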
///
/// @}
/** @} */