comdb2.dbapi2 - DB-API 2.0 compatible Comdb2 API

This module provides a DB-API 2.0 compatible Comdb2 API.

Overview

This module provides a Comdb2 interface that conforms to the Python Database API Specification v2.0.

Basic Usage

The main class used for interacting with a Comdb2 is Connection, which you create by calling the connect factory function. Any errors that are encountered when connecting to or querying the database are raised as instances of the Error class.

A basic usage example looks like this:

from comdb2 import dbapi2
conn = dbapi2.connect('mattdb', autocommit=True)
cursor = conn.cursor()
cursor.execute("select 1, 'a' union all select 2, 'b'")
for row in cursor.fetchall():
    print(row)

The above would result in the following output:

[1, 'a']
[2, 'b']

To reduce the amount of boilerplate required for fetching result sets, we implement two extensions to the interface required by the Python DB-API: Cursor objects are iterable, yielding one row of the result set per iteration, and Cursor.execute returns the Cursor itself. By utilizing these extensions, the basic example can be shortened to:

from comdb2 import dbapi2
conn = dbapi2.connect('mattdb', autocommit=True)
for row in conn.cursor().execute("select 1, 'a' union all select 2, 'b'"):
    print(row)

Graceful Teardown and Error Handling

Non-trivial applications should guarantee that the Connection is closed when it is no longer needed, preferably by using contextlib.closing. They should also be prepared to handle any errors returned by the database. So, a more thorough version of the example above would be:

from comdb2 import dbapi2
from contextlib import closing
try:
    with closing(dbapi2.connect('mattdb', autocommit=True)) as conn:
        query = "select 1, 'a' union all select 2, 'b'"
        for row in conn.cursor().execute(query):
            print(row)
except dbapi2.Error as exc:
    print("Comdb2 exception encountered: %s" % exc)

In this example, contextlib.closing is used to guarantee that Connection.close is called at the end of the with block, and an exception handler has been added for exceptions of type Error. All exceptions raised by this module are subclasses of Error. See Exceptions for details on when each exception type is raised.

Controlling the Type Used For Result Rows

As you can see, rows are returned as a list of column values in positional order. If you’d prefer to get the columns back as some other type, you can set Connection.row_factory to one of the factories provided by comdb2.factories - for example:

from comdb2 import dbapi2
from comdb2 import factories
conn = dbapi2.connect('mattdb', autocommit=True)
conn.row_factory = factories.dict_row_factory
c = conn.cursor()
for row in c.execute("select 1 as 'x', 2 as 'y' union all select 3, 4"):
    print(row)

This program will print each row as a dict rather than a list:

{'y': 2, 'x': 1}
{'y': 4, 'x': 3}
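
The comdb2.factories module also provides a namedtuple-based row factory. As a hedged sketch (assuming it is named namedtuple_row_factory, following the same naming convention as dict_row_factory), the example above could instead be written as:

from comdb2 import dbapi2
from comdb2 import factories
conn = dbapi2.connect('mattdb', autocommit=True)
conn.row_factory = factories.namedtuple_row_factory  # assumed factory name
for row in conn.cursor().execute("select 1 as 'x', 2 as 'y' union all select 3, 4"):
    print(row.x, row.y)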

Parameter Binding

In real applications you’ll often need to pass parameters into a SQL query. This is done using parameter binding, either by name or by position.

By Name

In the query, placeholders are specified using %(name)s, and a mapping of name to parameter value is passed to Cursor.execute along with the query. The %( and )s are fixed, and the name inside them varies for each parameter. For example:

>>> query = "select 25 between %(a)s and %(b)s"
>>> print(conn.cursor().execute(query, {'a': 20, 'b': 42}).fetchall())
[[1]]
>>> params = {'a': 20, 'b': 23}
>>> print(conn.cursor().execute(query, params).fetchall())
[[0]]

In this example, we run the query with two different sets of parameters, producing different results. First, we execute the query with parameter a bound to 20 and b bound to 42. In this case, because 20 <= 25 <= 42, the expression evaluates to true, and a row containing a single column with a value of 1 is returned.

When we run the same query with parameter b bound to 23, a row containing a single column with a value of 0 is returned instead, because 20 <= 25 <= 23 is false.

Warning

Because named parameters are bound using %(name)s, other % signs in a query must be escaped. For example, WHERE name LIKE 'M%' becomes WHERE name LIKE 'M%%'.

Danger

This applies even when no parameters are passed at all - passing None or no parameters behaves the same as passing an empty dict.
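
To make the escaping rule concrete, here is a small hedged sketch (the output shown assumes Comdb2's SQLite-style LIKE semantics). The literal % in the pattern is doubled, and the query runs even though no parameters are passed:

>>> query = "select 'MATT' like 'M%%'"
>>> print(conn.cursor().execute(query).fetchall())
[[1]]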

By Position

You can bind parameters positionally rather than by name, by using ? for each placeholder and providing a list or tuple of parameter values. For example:

>>> query = "select 25 between ? and ?"
>>> print(conn.cursor().execute(query, [20, 42]).fetchall())
[[1]]

In this example, we execute the query with the first ? bound to 20 and the second ? bound to 42, so a row containing a single column with a value of 1 is returned, as in the previous example.

Warning

Unlike when binding parameters by name, you must not escape % signs in the SQL when binding parameters positionally.

For example, you could do WHERE val % 5 = ?, but not WHERE val %% 5 = ?.
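
For instance, a minimal sketch mixing a literal % (modulo) operator with a ? placeholder; note that the % is not doubled here:

>>> query = "select ? % 5"
>>> print(conn.cursor().execute(query, [42]).fetchall())
[[2]]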

Tip

You can pass an empty tuple of parameters to avoid the need to escape % signs in the SQL even when you don’t want to bind any parameters, like:

>>> query = "select 42 % 20"
>>> print(conn.cursor().execute(query, ()).fetchall())
[[2]]

Compare this against what happens if you don’t pass any parameters at all:

>>> print(conn.cursor().execute(query).fetchall())
Traceback (most recent call last):
...
comdb2.dbapi2.InterfaceError: Invalid Python format string for query

Types

For all Comdb2 types, the same Python type is used for binding a parameter value as is returned for a SQL query result column of that type. In brief, SQL types are mapped to Python types according to the following table:

SQL type       Python type
--------       -----------
NULL           None
integer        int
real           float
blob           bytes
text           str
datetime       datetime.datetime
datetimeus     DatetimeUs

See Comdb2 to Python Type Mappings for a thorough explanation of these type mappings and their implications.

Note

This module uses byte strings to represent BLOB columns, and Unicode strings to represent TEXT columns. This is a very common source of problems for new users. Make sure to carefully read String and Blob Types on that page.
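
As a quick hedged illustration of that distinction (the output shown is illustrative): a str parameter is bound and returned as TEXT, while a bytes parameter is bound and returned as a BLOB:

>>> params = {'t': 'hello', 'b': b'hello'}
>>> print(conn.cursor().execute("select %(t)s, %(b)s", params).fetchall())
[['hello', b'hello']]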

Autocommit Mode

In all of the examples above, we gave the autocommit=True keyword argument when calling connect. This opts out of DB-API compliant transaction handling, in order to use Comdb2’s native transaction semantics.

By default, DB-API cursors are always in a transaction. You can commit that transaction using Connection.commit, or roll it back using Connection.rollback. For example:

conn = dbapi2.connect('mattdb')
cursor = conn.cursor()
query = "insert into simple(key, val) values (%(key)s, %(val)s)"
cursor.execute(query, {'key': 1, 'val': 2})
cursor.execute(query, {'key': 3, 'val': 4})
cursor.execute(query, {'key': 5, 'val': 6})
conn.commit()

There are several things to note here. The first is that the insert statements that were sent to the database don’t take effect immediately, because they are implicitly part of a transaction that must be explicitly completed. This is different from other Comdb2 APIs, where you must execute a BEGIN statement to start a transaction, and where statements otherwise take effect immediately.

The second thing to note is that there are certain error conditions where a Comdb2 connection can automatically recover when outside of a transaction, but not within a transaction. In other words, using transactions when they aren’t needed can introduce new failure modes into your program.

In order to provide greater compatibility with other Comdb2 interfaces and to eliminate the performance costs and extra error cases introduced by using transactions unnecessarily, we allow you to pass the non-standard autocommit=True keyword argument when creating a new Connection. This results in the implicit transaction not being created. You can still start a transaction explicitly by passing a BEGIN statement to Cursor.execute. For example:

conn = dbapi2.connect('mattdb', autocommit=True)
cursor = conn.cursor()
cursor.execute("delete from simple where 1=1")
cursor.execute("begin")
query = "insert into simple(key, val) values (%(key)s, %(val)s)"
cursor.execute(query, {'key': 1, 'val': 2})
cursor.execute(query, {'key': 3, 'val': 4})
cursor.execute(query, {'key': 5, 'val': 6})
cursor.execute("rollback")

In this example, because we’ve used autocommit=True the delete statement takes effect immediately (that is, it is automatically committed). We then explicitly create a transaction, insert 3 rows, then decide not to commit it, and instead explicitly roll back the transaction.

To summarize: you cannot use autocommit mode if you intend to pass the Connection to a library that requires DB-API compliant connections. You should prefer autocommit mode when you don’t want to use transactions (for example, read-only queries where no particular consistency guarantees are required across queries). If you do intend to use transactions but won’t pass the Connection to a library that requires DB-API compliance, you can choose either mode. It may be easier to port existing code if you use autocommit mode, but avoiding autocommit mode may allow you to use 3rd party libraries in the future that require DB-API compliant connections.

DB-API Compliance

The interface provided by this module conforms to the Python Database API Specification v2.0, with a few specific exceptions:

  1. DB-API requires Date and DateFromTicks constructors, which we don’t provide because Comdb2 has no type for representing a date without a time component.

  2. DB-API requires Time and TimeFromTicks constructors, which we don’t provide because Comdb2 has no type for representing a time without a date component.

  3. DB-API is unclear about the required behavior of multiple calls to Connection.cursor on a connection. Comdb2 does not have a concept of cursors as distinct from connection handles, so each time Connection.cursor is called, we call Cursor.close on any existing, open cursor for that connection.

API Documentation

Constants

The DB-API requires several constants that describe the capabilities and behavior of the module.

comdb2.dbapi2.apilevel = '2.0'

This module conforms to the Python Database API Specification 2.0.

comdb2.dbapi2.threadsafety = 1

Two threads can use this module, but can’t share one Connection.

comdb2.dbapi2.paramstyle = 'pyformat'

The SQL placeholder format for this module is %(name)s.

Comdb2’s native placeholder format is @name, but that cannot be used by this module because it’s not an acceptable DB-API 2.0 placeholder style. This module uses pyformat for named parameters because it is the only DB-API 2.0 paramstyle that we can translate into Comdb2’s placeholder format without needing a SQL parser. This module also supports the qmark parameter style for binding parameters positionally.

Note

An int value is bound as %(my_int)s, not as %(my_int)d - the last character is always s.

Note

When binding parameters by name, any % sign is recognized as the start of a pyformat style placeholder, and so any literal % characters in a SQL statement must be escaped by doubling. WHERE name LIKE 'M%' becomes WHERE name LIKE 'M%%'. This does not apply when binding parameters positionally with ? placeholders, nor when the literal % appears in a parameter value as opposed to literally in the query. In either of those cases, the % characters must not be escaped.

Warning

Literal % signs in the query must be escaped when no parameters are passed at all – passing None or no parameters behaves the same as passing an empty dict. You can avoid the need to escape % signs in an unparametrized query by instead passing an empty tuple as parameters, which causes the statement to be treated as having qmark placeholders instead of pyformat placeholders.

Connections and Cursors

The user interacts with the database through Connection and Cursor objects.

comdb2.dbapi2.connect(database_name, tier='default', autocommit=False, host=None)[source]

Establish a connection to a Comdb2 database.

All arguments are passed directly through to the Connection constructor.

Note

DB-API 2.0 requires the module to expose connect, but not Connection. If portability across database modules is a concern, you should always use connect to create your connections rather than calling the Connection constructor directly.

Returns:

A handle for the newly established connection.

Return type:

Connection

class comdb2.dbapi2.Connection(database_name, tier='default', autocommit=False, host=None)[source]

Represents a connection to a Comdb2 database.

By default, the connection will be made to the cluster configured as the machine-wide default for the given database. This is almost always what you want. If you need to connect to a database that’s running on your local machine rather than a cluster, you can pass “local” as the tier. It’s also permitted to specify “dev”, “alpha”, “beta”, or “prod” as the tier, which will connect you directly to the tier you specify (firewall permitting).

Alternately, you can pass a machine name as the host argument, to connect directly to an instance of the given database on that host, rather than on a cluster or the local machine.

The connection will use UTC as its timezone by default - you can change this with a SET TIMEZONE statement if needed.

By default, or if autocommit is False, cursor will return cursors that behave as mandated by the Python DB API: every statement to be executed is implicitly considered to be part of a transaction, and that transaction must be ended explicitly with a call to commit (or rollback). If autocommit is True, cursor will instead return cursors that behave more in line with Comdb2’s traditional behavior: the side effects of any given statement are immediately committed unless you previously started a transaction by executing a begin statement.

Note

Using autocommit=True will ease porting from code using other Comdb2 APIs, both because other Comdb2 APIs implicitly commit after each statement in the same way as an autocommit Connection will, and because there are certain operations that Comdb2 will implicitly retry when outside of a transaction but won’t retry when inside a transaction - meaning that a non-autocommit Connection has extra failure modes. You should strongly consider using autocommit=True, especially for read-only use cases.

Note

Python does not guarantee that object finalizers will be called when the interpreter exits, so to ensure that the connection is cleanly released you should call the close method when you’re done with it. You can use contextlib.closing to guarantee the connection is released when a block completes.

Note

DB-API 2.0 requires the module to expose connect, but not Connection. If portability across database modules is a concern, you should always use connect to create your connections rather than instantiating this class directly.

Parameters:
  • database_name (str) – The name of the database to connect to.

  • tier (str) – The cluster to connect to.

  • host (str) – Alternately, a single remote host to connect to.

  • autocommit (bool) – Whether to automatically commit after DML statements, disabling DB-API 2.0’s automatic implicit transactions.

close(ack_current_event=True)[source]

Gracefully close the Comdb2 connection.

Once a Connection has been closed, no further operations may be performed on it.

If the connection was used to consume events from a Lua consumer, then ack_current_event tells the database what to do with the last event that was delivered. By default it will be marked as consumed and won’t be redelivered, but if ack_current_event=False then the event will be redelivered to another consumer for processing.

If a socket pool is running on the machine and the connection was in a clean state, this will turn over the connection to the socket pool. This cannot be done if the connection was in a transaction or in the middle of retrieving a result set. Other restrictions may apply as well.

You can ensure that this gets called at the end of a block using something like:

>>> with contextlib.closing(connect('mattdb')) as conn:
...     for row in conn.cursor().execute("select 1"):
...         print(row)

commit()[source]

Commit any pending transaction to the database.

This method will fail if the Connection is in autocommit mode and no transaction was explicitly started.

cursor()[source]

Return a new Cursor for this connection.

This calls Cursor.close on any outstanding Cursor; only one Cursor is allowed per Connection at a time.

Note

Although outstanding cursors are closed, uncommitted transactions started by them are not rolled back, so the new Cursor will begin in the middle of that uncommitted transaction.

Returns:

A new cursor on this connection.

Return type:

Cursor

rollback()[source]

Roll back the current transaction.

This method will fail if the Connection is in autocommit mode and no transaction was explicitly started.

Note

Closing a connection without committing the changes first will cause an implicit rollback to be performed, but will also prevent that connection from being contributed to the socket pool, if one is available. Because of this, an explicit rollback should be preferred when possible.

property row_factory

Factory used when constructing result rows.

By default, or when set to None, each row is returned as a list of column values. If you’d prefer to receive rows as a dict or as a collections.namedtuple, you can set this property to one of the factories provided by the comdb2.factories module.

Example

>>> from comdb2.factories import dict_row_factory
>>> conn.row_factory = dict_row_factory
>>> cursor = conn.cursor()
>>> cursor.execute("SELECT 1 as 'foo', 2 as 'bar'")
>>> cursor.fetchone()
{'foo': 1, 'bar': 2}

Added in version 0.9.

class comdb2.dbapi2.Cursor[source]

Class used to send requests through a database connection.

This class is not meant to be instantiated directly; it should always be created using Connection.cursor. It provides methods for sending requests to the database and for reading back the result sets produced by the database.

Queries are made using the execute and callproc methods. Result sets can be consumed with the fetchone, fetchmany, or fetchall methods, or (as a nonstandard DB-API 2.0 extension) by iterating over the Cursor.

Note

Only one Cursor per Connection can exist at a time. Creating a new one will close the previous one.

__iter__()[source]

Iterate over all rows in a result set.

By default each row is returned as a list, where the elements in the list correspond to the result row’s columns in positional order, but this can be changed with the Connection.row_factory property.

Note

This is not required by DB-API 2.0; for maximum portability applications should prefer to use fetchone or fetchmany or fetchall instead.

Example

>>> cursor.execute("select 1, 2 UNION ALL select 3, 4")
>>> for row in cursor:
...     print(row)
[1, 2]
[3, 4]

property arraysize

Controls the number of rows to fetch at a time with fetchmany.

The default is 1, meaning that a single row will be fetched at a time.

callproc(procname, parameters)[source]

Call a stored procedure with the given name.

The parameters sequence must contain one entry for each argument that the procedure requires.

If the called procedure emits a result set, it is made available through the fetch methods, or by iterating over the Cursor, as though it was returned by a select statement.

Parameters:
  • procname (str) – The name of the stored procedure to be executed.

  • parameters (Sequence[Any]) – A sequence of values to be passed, in order, as the parameters to the stored procedure. Each element must be an instance of one of the Python types listed in Comdb2 to Python Type Mappings.

Returns:

A copy of the input parameters.

Return type:

List[Any]
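
Example

A hedged sketch, assuming a stored procedure named get_users exists on the database and takes a single argument:

>>> cursor = conn.cursor()
>>> cursor.callproc('get_users', ['active'])  # hypothetical procedure and argument
['active']
>>> for row in cursor:  # any result set emitted by the procedure
...     print(row)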

close()[source]

Close the cursor now.

From this point forward an exception will be raised if any operation is attempted with this Cursor.

Note

This does not roll back any uncommitted operations executed by this Cursor. A new Cursor created from the Connection that this Cursor uses will start off in the middle of that uncommitted transaction.

property connection

Return a reference to the Connection that this Cursor uses.

property description

Provides the name and type of each column in the latest result set.

This read-only attribute will contain one element per column in the result set. Each of those elements will be a 7-element sequence whose first element is the name of that column, whose second element is a type code for that column, and whose five remaining elements are None.

The type codes can be compared for equality against the STRING, NUMBER, DATETIME, and BINARY objects exposed by this module. If you need more granularity (e.g. knowing whether the result is a REAL or an INTEGER) you can compare the type code for equality against the members of the cdb2.TYPE dictionary. Or, of course, you can use isinstance to check the type of object returned as that column’s value.

Example

>>> cursor = connect('mattdb').cursor()
>>> cursor.execute("select 1 as 'x', '2' as 'y', 3.0 as 'z'")
>>> cursor.description[0][:2] == ('x', NUMBER)
True
>>> cursor.description[1][:2] == ('y', STRING)
True
>>> cursor.description[2][:2] == ('z', NUMBER)
True
>>> cursor.description[2][:2] == ('z', TYPE['INTEGER'])
False
>>> cursor.description[2][:2] == ('z', TYPE['REAL'])
True

execute(sql, parameters=None, *, column_types=None)[source]

Execute a database operation (query or command).

The sql string may contain either named placeholders represented as %(name)s or positionally ordered placeholders represented as ?.

When parameters is a mapping or None, the statement is treated as using named placeholders, and so any literal % signs in it must be escaped by doubling them, to distinguish them from the start of a named placeholder. When a sequence of parameters is provided instead, any placeholders must be positional ? placeholders, and literal % signs in the SQL must not be escaped.

Note

Using placeholders should always be the preferred method of parameterizing the SQL query, as it prevents SQL injection vulnerabilities, and is faster than dynamically building SQL strings.

If column_types is provided and non-empty, it must be a sequence of members of the ColumnType enumeration. The database will coerce the data in the Nth column of the result set to the Nth given column type. An error will be raised if the number of elements in column_types doesn’t match the number of columns in the result set, or if one of the elements is not a supported column type, or if coercion fails. If column_types is empty or not provided, no coercion is performed.

Note

Database APIs are not required to allow result set column types to be specified explicitly. We allow this as a non-standard DB-API 2.0 extension.

Parameters:
  • sql (str) – The SQL string to execute, as a Python format string.

  • parameters (Mapping[str, Any] | Sequence[Any]) – If the SQL statement has %(param_name)s style placeholders, you must pass a mapping from parameter name to value. If the SQL statement has ? style placeholders, you must instead pass an ordered sequence of parameter values.

  • column_types (Sequence[int]) – An optional sequence of types (values of the ColumnType enumeration) which the columns of the result set will be coerced to.

Returns:

As a nonstandard DB-API 2.0 extension, this method returns the Cursor that it was called on, which can be used as an iterator over the result set returned by the query. Iterating over it will yield one list per row in the result set, where the elements in the list correspond to the result columns within the row, in positional order.

The Connection.row_factory property can be used to return rows as a different type.

Return type:

Cursor

Example

>>> cursor.execute("select 1, 2 UNION ALL select %(x)s, %(y)s",
...                {'x': 2, 'y': 4})
>>> cursor.fetchall()
[[1, 2], [2, 4]]
>>> cursor.execute("select 1, 2 UNION ALL select ?, ?", [2, 4]])
>>> cursor.fetchall()
[[1, 2], [2, 4]]
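
The non-standard column_types extension can be sketched as follows (assuming the ColumnType enumeration has an INTEGER member and that the database coerces the text value to an integer):

>>> from comdb2.dbapi2 import ColumnType
>>> cursor.execute("select '1' as 'x'", column_types=[ColumnType.INTEGER])
>>> cursor.fetchall()
[[1]]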

executemany(sql, seq_of_parameters)[source]

Execute the same SQL statement repeatedly with different parameters.

This is currently equivalent to calling execute multiple times, once for each element provided in seq_of_parameters.

Parameters:
  • sql (str) – The SQL string to execute, as a Python format string of the format expected by execute.

  • seq_of_parameters (Sequence[Mapping[str, Any]] | Sequence[Sequence[Any]]) – The sql statement will be executed once per element in this sequence, using each successive element as the parameter values for the corresponding call to execute.
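
Example

A hedged sketch, reusing the simple table from the earlier transaction examples:

>>> query = "insert into simple(key, val) values (%(key)s, %(val)s)"
>>> cursor.executemany(query, [{'key': 1, 'val': 2}, {'key': 3, 'val': 4}])
>>> conn.commit()  # needed unless the connection is in autocommit mode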

fetchall()[source]

Fetch all remaining rows of the current result set.

Returns:

Returns a list containing all remaining rows of the result set. By default each row is returned as a list, where the elements in the list correspond to the result row’s columns in positional order, but this can be changed with the Connection.row_factory property.

Return type:

list

fetchmany(n=None)[source]

Fetch the next set of rows of the current result set.

Parameters:

n – Maximum number of rows to be returned. If this argument is not given, Cursor.arraysize is used as the maximum.

Returns:

Returns a list containing the next n rows of the result set. If fewer than n rows remain, the returned list will contain fewer than n elements. If no rows remain, the list will be empty. By default each row is returned as a list, where the elements in the list correspond to the result row’s columns in positional order, but this can be changed with the Connection.row_factory property.

Return type:

list
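
Example

A minimal sketch of fetching a three-row result set two rows at a time:

>>> cursor.execute("select 1 union all select 2 union all select 3")
>>> cursor.fetchmany(2)
[[1], [2]]
>>> cursor.fetchmany(2)
[[3]]
>>> cursor.fetchmany(2)
[]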

fetchone()[source]

Fetch the next row of the current result set.

Returns:

If no rows remain in the current result set, None is returned, otherwise the next row of the result set is returned. By default the row is returned as a list, where the elements in the list correspond to the result row’s columns in positional order, but this can be changed with the Connection.row_factory property.

property rowcount

Provides the count of rows modified by the last transaction.

For Cursor objects on a Connection that is not using autocommit mode, this count is valid only after the transaction is committed with Connection.commit(). For Cursor objects on a Connection that is using autocommit mode, this count is valid after a successful COMMIT, or after an INSERT, UPDATE, or DELETE statement run outside of an explicit transaction. At all other times, -1 is returned.

In particular, -1 is returned whenever a transaction is in progress, because Comdb2 by default handles commit conflicts with other transactions by rerunning each statement of the transaction. As a result, row counts obtained within a transaction are meaningless at the default transaction isolation level; either more or fewer rows may be affected when the transaction eventually commits.

Also, -1 is returned after SELECT or SELECTV, because querying the row count requires calling cdb2_get_effects, which would consume the result set before the user could iterate over it. Likewise, -1 is returned after EXEC PROCEDURE, because a stored procedure could emit a result set.
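
Example

A hedged sketch of these rules, reusing the simple table from the earlier examples; the count of 3 shown after the DELETE is illustrative and depends on the table's contents:

>>> conn = dbapi2.connect('mattdb', autocommit=True)
>>> cursor = conn.cursor()
>>> cursor.execute("delete from simple where 1=1")
>>> cursor.rowcount  # valid: the DELETE ran outside of a transaction
3
>>> cursor.execute("select * from simple")
>>> cursor.rowcount  # row counts are not reported after SELECT
-1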

setinputsizes(sizes)[source]

No-op; implemented for PEP-249 compliance.

setoutputsize(size, column=None)[source]

No-op; implemented for PEP-249 compliance.

Type Objects

Several constants are provided that are meant to be compared for equality against the type codes returned by Cursor.description.

comdb2.dbapi2.STRING = TypeObject('CSTRING',)

The type codes of TEXT result columns compare equal to this constant.

comdb2.dbapi2.BINARY = TypeObject('BLOB',)

The type codes of BLOB result columns compare equal to this constant.

comdb2.dbapi2.NUMBER = TypeObject('INTEGER', 'REAL')

The type codes of numeric result columns compare equal to this constant.

comdb2.dbapi2.DATETIME = TypeObject('DATETIME', 'DATETIMEUS')

The type codes of datetime result columns compare equal to this constant.

comdb2.dbapi2.ROWID = comdb2.dbapi2.STRING

This is required by PEP-249, but we just make it an alias for STRING, because Comdb2 doesn’t have a ROWID result type.

Type Constructors

The DB-API requires constructor functions for DATETIME and BLOB parameter values, since different databases might want to use different types to represent them. We use datetime.datetime objects for DATETIME columns, and bytes objects for BLOB columns - you may use these constructors to create them if you so choose, but you are not required to.
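
For example, a minimal sketch of the two constructors most commonly used directly:

>>> Timestamp(2009, 2, 13, 23, 31, 30)
datetime.datetime(2009, 2, 13, 23, 31, 30)
>>> Binary('hello')
b'hello'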

comdb2.dbapi2.Timestamp(year, month, day, hour, minute, second)

Creates an object suitable for binding as a DATETIME parameter.

Returns:

An object representing the given date and time

Return type:

datetime.datetime

comdb2.dbapi2.TimestampFromTicks(seconds_since_epoch)

Creates an object suitable for binding as a DATETIME parameter.

Parameters:

seconds_since_epoch (float) – An offset from the Unix epoch, in seconds

Returns:

An object representing the date and time seconds_since_epoch after the Unix epoch, with millisecond precision

Return type:

datetime.datetime

comdb2.dbapi2.Binary(string)

Creates an object suitable for binding as a BLOB parameter.

If the input argument was a str object, it is encoded as a UTF-8 byte string and returned. Otherwise, the input argument is passed to the bytes constructor, and the result returned.

Parameters:

string – A string from which the new object is constructed

Return type:

bytes

Returns:

A byte string representing the given input

DatetimeUs

A class is provided for differentiating Comdb2’s DATETIMEUS type from its DATETIME type.

class comdb2.dbapi2.DatetimeUs(year, month, day[, hour[, minute[, second[, microsecond[, tzinfo]]]]])

Provides a distinct representation for Comdb2’s DATETIMEUS type.

Historically, Comdb2 provided a DATETIME type with millisecond precision. Comdb2 R6 added a DATETIMEUS type, which instead has microsecond precision.

This module represents each Comdb2 type with a distinct Python type. For backwards compatibility with older Comdb2 databases, datetime.datetime is mapped to the DATETIME type, and this class to the DATETIMEUS type. Because this is a subclass of datetime.datetime, you don’t need to do anything special when reading a DATETIMEUS type out of the database. You can use isinstance if you need to check whether you’ve been given a datetime.datetime (meaning the column was of the DATETIME type) or a DatetimeUs (meaning the column was of the DATETIMEUS type), but all the same methods will work on either.

When binding a parameter of type DATETIMEUS, you must pass an instance of this class, as a datetime.datetime would instead be bound as a DATETIME. Instances of this class can be created using this constructor, or the fromdatetime alternate constructor, or any of the other alternate constructors inherited from datetime.datetime.
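
As a minimal sketch, a DatetimeUs can be constructed and inspected just like a datetime.datetime; the difference only matters when the value is bound as a parameter, where it is sent as a DATETIMEUS rather than a DATETIME:

>>> import datetime
>>> from comdb2.dbapi2 import DatetimeUs
>>> dt = DatetimeUs(2009, 2, 13, 23, 31, 30, 123456)
>>> isinstance(dt, datetime.datetime)
True
>>> isinstance(datetime.datetime(2009, 2, 13), DatetimeUs)
False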

classmethod fromdatetime(datetime)

Return a DatetimeUs copied from a given datetime.datetime

Additionally, two constructor functions are provided for DATETIMEUS parameters, for consistency with the required DATETIME constructors documented above.

comdb2.dbapi2.TimestampUs(year, month, day, hour, minute, second)

Creates an object suitable for binding as a DATETIMEUS parameter.

Returns:

An object representing the given date and time

Return type:

DatetimeUs

comdb2.dbapi2.TimestampUsFromTicks(seconds_since_epoch)

Creates an object suitable for binding as a DATETIMEUS parameter.

Parameters:

seconds_since_epoch (float) – An offset from the Unix epoch, in seconds

Returns:

An object representing the date and time seconds_since_epoch after the Unix epoch, with microsecond precision

Return type:

DatetimeUs

class comdb2.dbapi2.ColumnType

This is an alias for comdb2.cdb2.ColumnType, reexported from comdb2.dbapi2 for convenience.

Exceptions

exception comdb2.dbapi2.Error[source]

This is the base class of all exceptions raised by this module.

In addition to being available at the module scope, this class and the other exception classes derived from it are exposed as attributes on Connection objects, to simplify error handling in environments where multiple connections from different modules are used.

exception comdb2.dbapi2.Warning[source]

Exception raised for important warnings.

This is required to exist by the DB-API interface, but we never raise it.

exception comdb2.dbapi2.InterfaceError[source]

Exception raised for errors caused by misuse of this module.

exception comdb2.dbapi2.DatabaseError[source]

Base class for all errors reported by the database.

exception comdb2.dbapi2.InternalError[source]

Exception raised for internal errors reported by the database.

exception comdb2.dbapi2.OperationalError[source]

Exception raised for errors related to the database’s operation.

These errors are not necessarily the result of a bug either in the application or in the database - for example, dropped connections.

exception comdb2.dbapi2.ProgrammingError[source]

Exception raised for programming errors reported by the database.

For example, this will be raised for syntactically incorrect SQL, or for passing a different number of parameters than are required by the query.

exception comdb2.dbapi2.IntegrityError[source]

Exception raised for integrity errors reported by the database.

For example, a subclass of this will be raised if committing would violate a foreign key constraint, a constraint that a column may not be null, or a constraint that an index may not contain duplicates. Other types of constraint violations may raise this type directly.

exception comdb2.dbapi2.UniqueKeyConstraintError[source]

Exception raised when a unique key constraint would be broken.

Committing after either an INSERT or an UPDATE could result in this being raised, by either adding a new row that violates a unique (non-dup) key constraint or modifying an existing row in a way that would violate one.

Added in version 1.1.

exception comdb2.dbapi2.ForeignKeyConstraintError[source]

Exception raised when a foreign key constraint would be broken.

This would be raised when committing if the changes being committed would violate referential integrity according to a foreign key constraint configured on the database. For instance, deleting a row from a parent table would raise this if rows corresponding to its key still exist in a child table and the constraint doesn’t have ON DELETE CASCADE. Likewise, inserting a row into a child table would raise this if there was no row in the parent table matching the new row’s key.

Added in version 1.1.

exception comdb2.dbapi2.NonNullConstraintError[source]

Exception raised when a non-null constraint would be broken.

Committing after either an INSERT or an UPDATE could result in this being raised if it would result in a null being stored in a non-nullable column. Note that columns in a Comdb2 are not nullable by default.

Added in version 1.1.

exception comdb2.dbapi2.DataError[source]

Exception raised for errors related to the processed data.

For example, this will be raised if you attempt to write a value that’s out of range for the column type that would store it, or if you specify an invalid timezone name for the connection.

exception comdb2.dbapi2.NotSupportedError[source]

Exception raised when unsupported operations are attempted.

This is the exception inheritance layout:

Exception
 +-- Warning
 +-- Error
      +-- InterfaceError
      +-- DatabaseError
           +-- DataError
           +-- OperationalError
           +-- IntegrityError
           |    +-- UniqueKeyConstraintError
           |    +-- ForeignKeyConstraintError
           |    +-- NonNullConstraintError
           +-- InternalError
           +-- ProgrammingError
           +-- NotSupportedError

Exceptions for Polymorphic Clients

Most exception types that can be raised by this module are also exposed as attributes on the Connection class:

comdb2.dbapi2.Connection.Error
comdb2.dbapi2.Connection.Warning
comdb2.dbapi2.Connection.InterfaceError
comdb2.dbapi2.Connection.DatabaseError
comdb2.dbapi2.Connection.InternalError
comdb2.dbapi2.Connection.OperationalError
comdb2.dbapi2.Connection.ProgrammingError
comdb2.dbapi2.Connection.IntegrityError
comdb2.dbapi2.Connection.DataError
comdb2.dbapi2.Connection.NotSupportedError

Aliases for Error and its subclasses. This is an optional extension to the DB-API specification, designed to simplify writing polymorphic code that works with any type of DB-API connection.

Note

In order to make any meaningful use of this feature, you need to be writing code that could be passed a connection created by one of at least two different DB-API compliant modules, and both of those modules must implement this optional extension.