aerospike.Client — Client Class
Overview
The client connects through a seed node (the address of a single node) to an Aerospike database cluster. From the seed node, the client learns of the other nodes and establishes connections to them. It also gets the partition map of the cluster, which is how it knows where every record actually lives.
The client handles the connections, including re-establishing them ahead of executing an operation. It keeps track of changes to the cluster through a cluster-tending thread.
See also
Boilerplate Code For Examples
Assume every in-line example runs this code beforehand:
Warning
Only run example code on a brand new Aerospike server. This code deletes all records in the demo
set!
# Imports
import aerospike
from aerospike import exception as ex
import sys
# Configure the client
config = {
'hosts': [ ('127.0.0.1', 3000)]
}
# Create a client and connect it to the cluster
try:
client = aerospike.client(config).connect()
client.truncate('test', "demo", 0)
except ex.ClientError as e:
print("Error: {0} [{1}]".format(e.msg, e.code))
sys.exit(1)
# Record key tuple: (namespace, set, key)
keyTuple = ('test', 'demo', 'key')
Basic example:
# Write a record
client.put(keyTuple, {'name': 'John Doe', 'age': 32})
# Read a record
(key, meta, record) = client.get(keyTuple)
Methods
To create a new client, use aerospike.client().
Connection
- class aerospike.Client
- connect([username, password])
If there is currently no connection to the cluster, connect to it. The optional username and password only apply when connecting to the Enterprise Edition of Aerospike.
- Parameters:
username (str) – a defined user with roles in the cluster. See admin_create_user().
password (str) – the password will be hashed by the client using bcrypt.
- Raises:
ClientError, for example when a connection cannot be established to a seed node (any single node in the cluster from which the client learns of the other nodes).
Note
Python client 5.0.0 and up will fail to connect to Aerospike server 4.8.x or older. If you see the error “-10, ‘Failed to connect’”, please make sure you are using server 4.9 or later.
See also
- is_connected()
Tests the connections between the client and the nodes of the cluster. If the result is False, the client will require another call to connect().
- Return type:
bool
Changed in version 2.0.0.
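For instance, a sketch of a connectivity check, assuming the boilerplate client above (close() is shown only to force the disconnected state):

```python
# The boilerplate client is connected after aerospike.client(config).connect()
print(client.is_connected())  # True

# After closing, the client must call connect() again before use
client.close()
print(client.is_connected())  # False
```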
Record Operations
- class aerospike.Client
- put(key, bins: dict[, meta: dict[, policy: dict[, serializer=aerospike.SERIALIZER_NONE]]])
Create a new record, or remove / add bins to a record.
- Parameters:
bins (dict) – contains bin name-value pairs of the record.
meta (dict) – record metadata to be set. see Metadata Dictionary.
policy (dict) – see Write Policies.
serializer – override the serialization mode of the client with one of the Serialization Constants. To use a class-level, user-defined serialization function registered with aerospike.set_serializer(), use aerospike.SERIALIZER_USER.
- Raises:
a subclass of
AerospikeError
.
Example:
# Insert a record with bin1
client.put(keyTuple, {'bin1': 4})

# Insert another bin named bin2
client.put(keyTuple, {'bin2': "value"})

# Remove bin1 from this record
client.put(keyTuple, {'bin1': aerospike.null()})

# Removing the last bin should delete this record
client.put(keyTuple, {'bin2': aerospike.null()})
- exists(key[, policy: dict]) -> (key, meta)
Check if a record with a given key exists in the cluster.
Returns the record’s key and metadata in a tuple.
If the record does not exist, the tuple's metadata will be None.
- Parameters:
policy (dict) – see Read Policies.
- Return type:
tuple (key, meta)
- Raises:
a subclass of
AerospikeError
.
# Check non-existent record
(key, meta) = client.exists(keyTuple)
print(key)   # ('test', 'demo', 'key', bytearray(b'...'))
print(meta)  # None

# Check existing record
client.put(keyTuple, {'bin1': 4})
(key, meta) = client.exists(keyTuple)
print(key)   # ('test', 'demo', 'key', bytearray(b'...'))
print(meta)  # {'ttl': 2592000, 'gen': 1}
Changed in version 2.0.3.
- get(key[, policy: dict]) -> (key, meta, bins)
Returns a record with a given key.
- Parameters:
policy (dict) – see Read Policies.
- Returns:
a Record Tuple.
- Raises:
RecordNotFound.

# Get nonexistent record
try:
    client.get(keyTuple)
except ex.RecordNotFound as e:
    print("Error: {0} [{1}]".format(e.msg, e.code))
    # Error: 127.0.0.1:3000 AEROSPIKE_ERR_RECORD_NOT_FOUND [2]

# Get existing record
client.put(keyTuple, {'bin1': 4})
(key, meta, bins) = client.get(keyTuple)
print(key)   # ('test', 'demo', None, bytearray(b'...'))
print(meta)  # {'ttl': 2592000, 'gen': 1}
print(bins)  # {'bin1': 4}
Changed in version 2.0.0.
- select(key, bins: list[, policy: dict]) -> (key, meta, bins)
Returns specific bins of a record.
If a bin does not exist, it will not show up in the returned Record Tuple.
- Parameters:
bins (list) – a list of bin names to select from the record.
policy (dict) – optional Read Policies.
- Returns:
a Record Tuple.
- Raises:
a subclass of AerospikeError.

# Record to select from
client.put(keyTuple, {'bin1': 4, 'bin2': 3})

# Only get bin1
(key, meta, bins) = client.select(keyTuple, ['bin1'])

# Similar output to get()
print(key)   # ('test', 'demo', 'key', bytearray(b'...'))
print(meta)  # {'ttl': 2592000, 'gen': 1}
print(bins)  # {'bin1': 4}

# Get all bins
(key, meta, bins) = client.select(keyTuple, ['bin1', 'bin2'])
print(bins)  # {'bin1': 4, 'bin2': 3}

# Get nonexistent bin
(key, meta, bins) = client.select(keyTuple, ['bin3'])
print(bins)  # {}
Changed in version 2.0.0.
- touch(key[, val=0[, meta: dict[, policy: dict]]])
Touch the given record, setting its time-to-live and incrementing its generation.
- Parameters:
val (int) – TTL in seconds, with 0 resolving to the default value in the server config.
meta (dict) – record metadata to be set. See Metadata Dictionary.
policy (dict) – see Operate Policies.
- Raises:
a subclass of
AerospikeError
.
# Insert record and get its metadata
client.put(keyTuple, bins={"bin1": 4})
(key, meta) = client.exists(keyTuple)
print(meta)  # {'ttl': 2592000, 'gen': 1}

# Explicitly set TTL to 120
# and increment generation
client.touch(keyTuple, 120)

# Record metadata should be updated
(key, meta) = client.exists(keyTuple)
print(meta)  # {'ttl': 120, 'gen': 2}
- remove(key[, meta: dict[, policy: dict]])
Remove a record matching the key from the cluster.
- Parameters:
meta (dict) – contains the expected generation of the record in a key called "gen".
policy (dict) – see Remove Policies. May be passed as a keyword argument.
- Raises:
a subclass of
AerospikeError
.
# Insert a record
client.put(keyTuple, {"bin1": 4})

# Try to remove it with the wrong generation
try:
    client.remove(keyTuple, meta={'gen': 5}, policy={'gen': aerospike.POLICY_GEN_EQ})
except ex.AerospikeError as e:
    # Error: AEROSPIKE_ERR_RECORD_GENERATION [3]
    print("Error: {0} [{1}]".format(e.msg, e.code))

# Remove it, ignoring the generation
client.remove(keyTuple)
- remove_bin(key, list[, meta: dict[, policy: dict]])
Remove a list of bins from a record with a given key. Equivalent to setting those bins to aerospike.null() with a put().
- Parameters:
list (list) – the bins names to be removed from the record.
meta (dict) – record metadata to be set. See Metadata Dictionary.
policy (dict) – optional Write Policies.
- Raises:
a subclass of
AerospikeError
.
# Insert a record
bins = {"bin1": 0, "bin2": 1}
client.put(keyTuple, bins)

# Remove bin1
client.remove_bin(keyTuple, ['bin1'])

# Only bin2 should remain
(keyTuple, meta, bins) = client.get(keyTuple)
print(bins)  # {'bin2': 1}
Batch Operations
- class aerospike.Client
- get_many(keys[, policy: dict]) -> [(key, meta, bins), ...]
Deprecated since version 12.0.0: Use batch_read() instead.
Batch-read multiple records, and return them as a list.
Any record that does not exist will have None for metadata and bins in its record tuple.
- Parameters:
policy (dict) – see Batch Policies.
- Returns:
a list of Record Tuples.
- Raises:
a ClientError if the batch is too big.
# Keys
keyTuples = [
    ('test', 'demo', '1'),
    ('test', 'demo', '2'),
    ('test', 'demo', '3'),
]
# Only insert two records with the first and second key
client.put(keyTuples[0], {'bin1': 'value'})
client.put(keyTuples[1], {'bin1': 'value'})

# Try to get records with all three keys
records = client.get_many(keyTuples)

# The third record tuple should have 'meta' and 'bins' set to None
# because there is no third record
print(records[0])
print(records[1])
print(records[2])

# Expected output:
# (('test', 'demo', '1', bytearray(...)), {'ttl': 2592000, 'gen': 1}, {'bin1': 'value'})
# (('test', 'demo', '2', bytearray(...)), {'ttl': 2592000, 'gen': 1}, {'bin1': 'value'})
# (('test', 'demo', '3', bytearray(...)), None, None)
- exists_many(keys[, policy: dict]) -> [(key, meta), ...]
Deprecated since version 12.0.0: Use batch_read() instead.
Batch-read metadata for multiple keys.
Any record that does not exist will have None for metadata in its tuple.
- Parameters:
policy (dict) – see Batch Policies.
- Returns:
a list of (key, meta) tuples.
# Keys
keyTuples = [
    ('test', 'demo', '1'),
    ('test', 'demo', '2'),
    ('test', 'demo', '3'),
]
# Only insert two records with the first and second key
client.put(keyTuples[0], {'bin1': 'value'})
client.put(keyTuples[1], {'bin1': 'value'})

# Check for existence of records using all three keys
keyMetadata = client.exists_many(keyTuples)
print(keyMetadata[0])
print(keyMetadata[1])
print(keyMetadata[2])

# (('test', 'demo', '1', bytearray(...)), {'ttl': 2592000, 'gen': 1})
# (('test', 'demo', '2', bytearray(...)), {'ttl': 2592000, 'gen': 1})
# (('test', 'demo', '3', bytearray(...)), None)
- select_many(keys, bins: list[, policy: dict]) -> [(key, meta, bins), ...]
Deprecated since version 12.0.0: Use batch_read() instead.
Batch-read specific bins from multiple records.
Any record that does not exist will have None for metadata and bins in its tuple.
- Parameters:
bins (list) – a list of bin names to read from the records.
policy (dict) – see Batch Policies.
- Returns:
a list of Record Tuples.
# Insert 4 records with these keys
keyTuples = [
    ('test', 'demo', 1),
    ('test', 'demo', 2),
    ('test', 'demo', 3),
    ('test', 'demo', 4)
]
# Only records 1, 2, 4 have a bin called bin2
client.put(keyTuples[0], {'bin1': 20, 'bin2': 40})
client.put(keyTuples[1], {'bin1': 11, 'bin2': 50})
client.put(keyTuples[2], {'bin1': 50, 'bin3': 20})
client.put(keyTuples[3], {'bin1': 87, 'bin2': 76, 'bin3': 40})

# Get all 4 records and filter out every bin except bin2
records = client.select_many(keyTuples, ['bin2'])
for record in records:
    print(record)
# (('test', 'demo', 1, bytearray(...)), {'ttl': 2592000, 'gen': 1}, {'bin2': 40})
# (('test', 'demo', 2, bytearray(...)), {'ttl': 2592000, 'gen': 1}, {'bin2': 50})
# (('test', 'demo', 3, bytearray(...)), {'ttl': 2592000, 'gen': 1}, {})
# (('test', 'demo', 4, bytearray(...)), {'ttl': 2592000, 'gen': 1}, {'bin2': 76})
- batch_get_ops(keys, ops, policy: dict) -> [(key, meta, bins), ...]
Deprecated since version 12.0.0: Use batch_operate() instead.
Batch-read multiple records, and return them as a list.
Any record that does not exist will have an exception type as metadata and None as bins in its record tuple.
- Parameters:
ops (list) – a list of operations to apply.
policy (dict) – see Batch Policies.
- Returns:
a list of Record Tuples.
- Raises:
a ClientError if the batch is too big.
# Insert records for 3 players and their scores
keyTuples = [("test", "demo", i) for i in range(1, 4)]
bins = [
    {"scores": [1, 4, 3, 10]},
    {"scores": [20, 1, 4, 28]},
    {"scores": [50, 20, 10, 20]},
]
for keyTuple, bin in zip(keyTuples, bins):
    client.put(keyTuple, bin)

# Get the highest score for each player
from aerospike_helpers.operations import list_operations
ops = [
    list_operations.list_get_by_rank("scores", -1, aerospike.LIST_RETURN_VALUE)
]
records = client.batch_get_ops(keyTuples, ops)

# Print results
for _, _, bins in records:
    print(bins)
# {'scores': 10}
# {'scores': 28}
# {'scores': 50}
Note
The following batch methods will return a BatchRecords object with a result value of 0 if one of the following is true:
- All transactions are successful.
- One or more transactions failed because a record was filtered out by an expression, or the record was not found.
Otherwise:
- If the Python client layer's code throws an error, such as a connection error or parameter error, an exception will be raised.
- If the underlying C client throws an error, the returned BatchRecords object will have a result value equal to an as_status error code. In this case, the BatchRecords object has a list of batch records called batch_records, and each batch record contains the result of that transaction.
- batch_write(batch_records: BatchRecords[, policy_batch: dict]) -> BatchRecords
Write/read multiple records for specified batch keys in one batch call.
This method allows different sub-commands for each key in the batch. The resulting status and operated bins are set in batch_records.results and batch_records.record.
- Parameters:
batch_records (BatchRecords) – a BatchRecords object used to specify the operations to carry out.
policy_batch (dict) – see Batch Policies.
- Returns:
a reference to the batch_records argument, of type BatchRecords.
- Raises:
a subclass of AerospikeError. See the note above batch_write() for details.
from aerospike_helpers.batch import records as br
from aerospike_helpers.operations import operations as op

# Keys
keyTuples = [
    ('test', 'demo', 'Robert'),
    ('test', 'demo', 'Daniel'),
    ('test', 'demo', 'Patrick'),
]
client.put(keyTuples[0], {'id': 100, 'balance': 400})
client.put(keyTuples[1], {'id': 101, 'balance': 200})
client.put(keyTuples[2], {'id': 102, 'balance': 300})

# Apply different operations to different keys
batchRecords = br.BatchRecords(
    [
        # Remove Robert from the system
        br.Remove(
            key=keyTuples[0],
        ),
        # Modify Daniel's ID and balance
        br.Write(
            key=keyTuples[1],
            ops=[
                op.write("id", 200),
                op.write("balance", 100),
                op.read("id"),
            ],
        ),
        # Read Patrick's ID
        br.Read(
            key=keyTuples[2],
            ops=[
                op.read("id")
            ],
            policy=None
        ),
    ]
)
client.batch_write(batchRecords)

# batch_write modifies its BatchRecords argument.
# Results for each BatchRecord are set in the result, record, and in_doubt fields.
for batchRecord in batchRecords.batch_records:
    print(batchRecord.result)
    print(batchRecord.record)
# Note how written bins return None if their values aren't read,
# and removed records have an empty bins dictionary
# 0
# (('test', 'demo', 'Robert', bytearray(b'...')), {'ttl': 4294967295, 'gen': 0}, {})
# 0
# (('test', 'demo', 'Daniel', bytearray(b'...')), {'ttl': 2592000, 'gen': 2}, {'id': 200, 'balance': None})
# 0
# (('test', 'demo', 'Patrick', bytearray(b'...')), {'ttl': 2592000, 'gen': 1}, {'id': 102})
Note
Requires server version >= 6.0.0.
See also
For more information about the batch helpers, see the aerospike_helpers.batch package.
- batch_read(keys: list[, bins: list][, policy_batch: dict]) -> BatchRecords
Read multiple records.
If a list of bin names is not provided, return all the bins for each record.
If a list of bin names is provided, return only these bins for the given list of records.
If an empty list of bin names is provided, only the metadata of each record will be returned. Each BatchRecord.record in BatchRecords.batch_records will only be a 2-tuple (key, meta).
- Parameters:
keys (list) – the key tuples of the records to fetch.
bins (list[str]) – list of bin names to fetch for each record.
policy_batch (dict) – see Batch Policies.
- Returns:
an instance of BatchRecords.
- Raises:
a subclass of AerospikeError. See the note above batch_write() for details.
Note
Requires server version >= 6.0.0.
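A minimal sketch of batch_read(), assuming the boilerplate client and a server >= 6.0.0; the key values here are illustrative:

```python
# Insert two records; the third key has no record
keyTuples = [('test', 'demo', i) for i in range(1, 4)]
client.put(keyTuples[0], {'bin1': 'a'})
client.put(keyTuples[1], {'bin1': 'b'})

# Read all bins for every key
batchRecords = client.batch_read(keyTuples)
for batchRecord in batchRecords.batch_records:
    # A result of 0 means that record was read successfully
    print(batchRecord.result, batchRecord.record)

# Read only metadata by passing an empty bin list;
# each record is then a 2-tuple (key, meta)
metaOnly = client.batch_read(keyTuples, [])
```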
- batch_operate(keys: list, ops: list[, policy_batch: dict][, policy_batch_write: dict][, ttl: int]) -> BatchRecords
Perform the same read/write transactions on multiple keys.
Note
Prior to Python client 14.0.0, using the batch_operate() method with only read operations caused an error. This bug was fixed in version 14.0.0.
- Parameters:
keys (list) – the keys to operate on.
ops (list) – list of operations to apply.
policy_batch (dict) – see Batch Policies.
policy_batch_write (dict) – see Batch Write Policies.
ttl (int) – the time-to-live (expiration) of each record in seconds.
- Returns:
an instance of BatchRecords.
- Raises:
a subclass of AerospikeError. See the note above batch_write() for details.
from aerospike_helpers.operations import operations as op

# Insert 3 records
keys = [("test", "demo", f"employee{i}") for i in range(1, 4)]
bins = [
    {"id": 100, "balance": 200},
    {"id": 101, "balance": 400},
    {"id": 102, "balance": 300}
]
for key, bin in zip(keys, bins):
    client.put(key, bin)

# Increment ID by 100 and balance by 500 for all employees
ops = [
    op.increment("id", 100),
    op.increment("balance", 500),
    op.read("balance")
]
batchRecords = client.batch_operate(keys, ops)
print(batchRecords.result)
# 0

# Print each individual transaction's result,
# and its record if it was read from
for batchRecord in batchRecords.batch_records:
    print(f"{batchRecord.result}: {batchRecord.record}")
# 0: (('test', 'demo', 'employee1', bytearray(b'...')), {'ttl': 2592000, 'gen': 2}, {'id': None, 'balance': 700})
# 0: (('test', 'demo', 'employee2', bytearray(b'...')), {'ttl': 2592000, 'gen': 2}, {'id': None, 'balance': 900})
# 0: (('test', 'demo', 'employee3', bytearray(b'...')), {'ttl': 2592000, 'gen': 2}, {'id': None, 'balance': 800})
Note
Requires server version >= 6.0.0.
- batch_apply(keys: list, module: str, function: str, args: list[, policy_batch: dict][, policy_batch_apply: dict]) BatchRecords
Apply UDF (user defined function) on multiple keys.
- Parameters:
keys (list) – The keys to operate on.
module (str) – the name of the UDF module.
function (str) – the name of the UDF to apply to the record identified by key.
args (list) – the arguments to the UDF.
policy_batch (dict) – See Batch Policies.
policy_batch_apply (dict) – See Batch Apply Policies.
- Returns:
an instance of BatchRecords.
- Raises:
a subclass of AerospikeError. See the note above batch_write() for details.
# Insert 3 records
keys = [("test", "demo", f"employee{i}") for i in range(1, 4)]
bins = [
    {"id": 100, "balance": 200},
    {"id": 101, "balance": 400},
    {"id": 102, "balance": 300}
]
for key, bin in zip(keys, bins):
    client.put(key, bin)

# Apply a user-defined function (UDF) to a batch
# of records using batch_apply
client.udf_put("batch_apply.lua")
args = ["balance", 0.5, 100]
batchRecords = client.batch_apply(keys, "batch_apply", "tax", args)

print(batchRecords.result)
# 0

for batchRecord in batchRecords.batch_records:
    print(f"{batchRecord.result}: {batchRecord.record}")
# 0: (('test', 'demo', 'employee1', bytearray(b'...')), {'ttl': 2592000, 'gen': 2}, {'SUCCESS': 0})
# 0: (('test', 'demo', 'employee2', bytearray(b'...')), {'ttl': 2592000, 'gen': 2}, {'SUCCESS': 100})
# 0: (('test', 'demo', 'employee3', bytearray(b'...')), {'ttl': 2592000, 'gen': 2}, {'SUCCESS': 50})
-- Deduct tax and fees from bin
function tax(record, binName, taxRate, fees)
    if aerospike:exists(record) then
        record[binName] = record[binName] * (1 - taxRate) - fees
        aerospike:update(record)
    else
        record[binName] = 0
        aerospike:create(record)
    end
    return record[binName]
end
Note
Requires server version >= 6.0.0.
- batch_remove(keys: list[, policy_batch: dict][, policy_batch_remove: dict]) BatchRecords
Note
Requires server version >= 6.0.0.
Remove multiple records by key.
- Parameters:
keys (list) – The keys to remove.
policy_batch (dict) – Optional aerospike batch policy Batch Policies.
policy_batch_remove (dict) – Optional aerospike batch remove policy Batch Remove Policies.
- Returns:
an instance of BatchRecords.
- Raises:
a subclass of AerospikeError. See the note above batch_write() for details.
# Insert 3 records
keys = [("test", "demo", f"employee{i}") for i in range(1, 4)]
bins = [
    {"id": 100, "balance": 200},
    {"id": 101, "balance": 400},
    {"id": 102, "balance": 300}
]
for key, bin in zip(keys, bins):
    client.put(key, bin)

batchRecords = client.batch_remove(keys)

# A result of 0 means success
print(batchRecords.result)
# 0

for batchRecord in batchRecords.batch_records:
    print(batchRecord.result)
    print(batchRecord.record)
# 0
# (('test', 'demo', 'employee1', bytearray(b'...')), {'ttl': 4294967295, 'gen': 0}, {})
# 0
# (('test', 'demo', 'employee2', bytearray(b'...')), {'ttl': 4294967295, 'gen': 0}, {})
# 0
# (('test', 'demo', 'employee3', bytearray(b'...')), {'ttl': 4294967295, 'gen': 0}, {})
String Operations
- class aerospike.Client
Note
Please see aerospike_helpers.operations.operations for the new way to use string operations.
- append(key, bin, val[, meta: dict[, policy: dict]])
Append a string to the string value in bin.
- Parameters:
bin (str) – the name of the bin.
val (str) – the string to append to the bin value.
meta (dict) – record metadata to be set. See Metadata Dictionary.
policy (dict) – optional Operate Policies.
- Raises:
a subclass of
AerospikeError
.
client.put(keyTuple, {'bin1': 'Martin Luther King'})
client.append(keyTuple, 'bin1', ' jr.')
(_, _, bins) = client.get(keyTuple)
print(bins)  # {'bin1': 'Martin Luther King jr.'}
- prepend(key, bin, val[, meta: dict[, policy: dict]])
Prepend the string value in bin with the string val.
- Parameters:
bin (str) – the name of the bin.
val (str) – the string to prepend to the bin value.
meta (dict) – record metadata to be set. See Metadata Dictionary.
policy (dict) – optional Operate Policies.
- Raises:
a subclass of
AerospikeError
.
client.put(keyTuple, {'bin1': 'Freeman'})
client.prepend(keyTuple, 'bin1', 'Gordon ')
(_, _, bins) = client.get(keyTuple)
print(bins)  # {'bin1': 'Gordon Freeman'}
Numeric Operations
- class aerospike.Client
Note
Please see aerospike_helpers.operations.operations for the new way to use numeric operations via the operate command.
- increment(key, bin, offset[, meta: dict[, policy: dict]])
Increment the numeric value in bin by the value of offset.
- Parameters:
bin (str) – the name of the bin.
offset (int or float) – the value by which to increment the value in bin.
meta (dict) – record metadata to be set. See Metadata Dictionary.
policy (dict) – optional Operate Policies. Note: the exists policy option may not be aerospike.POLICY_EXISTS_CREATE_OR_REPLACE nor aerospike.POLICY_EXISTS_REPLACE.
- Raises:
a subclass of
AerospikeError
.
# Start with 100 lives
client.put(keyTuple, {'lives': 100})

# Gain health
client.increment(keyTuple, 'lives', 10)
(key, meta, bins) = client.get(keyTuple)
print(bins)  # {'lives': 110}

# Take damage
client.increment(keyTuple, 'lives', -90)
(key, meta, bins) = client.get(keyTuple)
print(bins)  # {'lives': 20}
List Operations
Note
Please see aerospike_helpers.operations.list_operations for the new way to use list operations. Old-style list operations are deprecated. The docs for old-style list operations were removed in client 6.0.0. The code supporting these methods will be removed in a coming release.
Map Operations
Note
Please see aerospike_helpers.operations.map_operations for the new way to use map operations. Old-style map operations are deprecated. The docs for old-style map operations were removed in client 6.0.0. The code supporting these methods will be removed in a coming release.
Single-Record Transactions
- class aerospike.Client
- operate(key, list: list[, meta: dict[, policy: dict]]) -> (key, meta, bins)
Performs an atomic transaction, with multiple bin operations, against a single record with a given key.
Starting with Aerospike server version 3.6.0, non-existent bins are not present in the returned Record Tuple. The returned record tuple will only contain one element per bin, even if multiple operations were performed on the bin. (In Aerospike server versions prior to 3.6.0, non-existent bins being read will have a None value.)
- Parameters:
list (list) – See aerospike_helpers.operations package.
meta (dict) – record metadata to be set. See Metadata Dictionary.
policy (dict) – optional Operate Policies.
- Returns:
a Record Tuple.
- Raises:
a subclass of
AerospikeError
.
from aerospike_helpers.operations import operations

# Add name, update age, and return attributes
client.put(keyTuple, {'age': 25, 'career': 'delivery boy'})
ops = [
    operations.increment("age", 1000),
    operations.write("name", "J."),
    operations.prepend("name", "Phillip "),
    operations.append("name", " Fry"),
    operations.read("name"),
    operations.read("career"),
    operations.read("age")
]
(key, meta, bins) = client.operate(keyTuple, ops)

print(key)  # ('test', 'demo', None, bytearray(b'...'))

# The generation should only increment once,
# because a transaction is *atomic*
print(meta)  # {'ttl': 2592000, 'gen': 2}

# Displays all bins selected by read operations
print(bins)  # {'name': 'Phillip J. Fry', 'career': 'delivery boy', 'age': 1025}
Note
operate() can now have multiple write operations on a single bin.
Changed in version 2.1.3.
- operate_ordered(key, list: list[, meta: dict[, policy: dict]]) -> (key, meta, bins)
Performs an atomic transaction, with multiple bin operations, against a single record with a given key. The results will be returned as a list of (bin-name, result) tuples. The order of the elements in the list will correspond to the order of the operations from the input parameters.
Write operations, or read operations that fail, will not return a (bin-name, result) tuple.
- Parameters:
list (list) – See aerospike_helpers.operations package.
meta (dict) – record metadata to be set. See Metadata Dictionary.
policy (dict) – optional Operate Policies.
- Returns:
a Record Tuple.
- Raises:
a subclass of
AerospikeError
.
from aerospike_helpers.operations import operations

# Add name, update age, and return attributes
client.put(keyTuple, {'age': 25, 'career': 'delivery boy'})
ops = [
    operations.increment("age", 1000),
    operations.write("name", "J."),
    operations.prepend("name", "Phillip "),
    operations.append("name", " Fry"),
    operations.read("name"),
    operations.read("career"),
    operations.read("age")
]
(key, meta, bins) = client.operate_ordered(keyTuple, ops)

# Same output for key and meta as operate(),
# but read operations are returned as (bin-name, value) pairs
print(bins)  # [('name', 'Phillip J. Fry'), ('career', 'delivery boy'), ('age', 1025)]
Changed in version 2.1.3.
User Defined Functions
- class aerospike.Client
- udf_put(filename[, udf_type=aerospike.UDF_TYPE_LUA[, policy: dict]])
Register a UDF module with the cluster.
- Parameters:
filename (str) – the path to the UDF module to be registered with the cluster.
udf_type (int) – aerospike.UDF_TYPE_LUA.
policy (dict) – currently, timeout in milliseconds is the only available policy.
- Raises:
a subclass of
AerospikeError
.
Note
To run this example, do not run the boilerplate code.
import aerospike

config = {
    'hosts': [ ('127.0.0.1', 3000)],
    'lua': { 'user_path': '/path/to/lua/user_path'}
}
client = aerospike.client(config)

# Register the UDF module and copy it to the Lua 'user_path'
client.udf_put('/path/to/my_module.lua')
client.close()
- udf_remove(module[, policy: dict])
Remove a previously registered UDF module from the cluster.
- Parameters:
- Raises:
a subclass of
AerospikeError
.
client.udf_remove('my_module.lua')
- udf_list([policy: dict]) []
Return the list of UDF modules registered with the cluster.
- Parameters:
policy (dict) – currently, timeout in milliseconds is the only available policy.
- Return type:
list
- Raises:
a subclass of
AerospikeError
.
print(client.udf_list())
# [
#   {'content': bytearray(b''),
#    'hash': bytearray(b'195e39ceb51c110950bd'),
#    'name': 'my_udf1.lua',
#    'type': 0},
#   {'content': bytearray(b''),
#    'hash': bytearray(b'8a2528e8475271877b3b'),
#    'name': 'stream_udf.lua',
#    'type': 0}
# ]
- udf_get(module: str[, language: int = aerospike.UDF_TYPE_LUA[, policy: dict]]) str
Return the content of a UDF module which is registered with the cluster.
- Parameters:
module (str) – the UDF module to read from the cluster.
language (int) – aerospike.UDF_TYPE_LUA.
policy (dict) – currently, timeout in milliseconds is the only available policy.
- Return type:
str
- Raises:
a subclass of
AerospikeError
.
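As a sketch, assuming a module named my_module.lua was registered earlier with udf_put():

```python
# Fetch the Lua source of a registered UDF module from the cluster
source = client.udf_get('my_module.lua')
print(source)
```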
- apply(key, module, function, args[, policy: dict])
Apply a registered (see udf_put()) record UDF to a particular record.
- Parameters:
- Returns:
the value optionally returned by the UDF, one of str, int, float, bytearray, list, dict.
- Raises:
a subclass of AerospikeError.
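For example, assuming the batch_apply module shown earlier (with its tax record UDF) has been registered, a single-record apply might look like:

```python
# Apply the 'tax' record UDF to one record:
# balance = 400 * (1 - 0.5) - 100 = 100
client.put(keyTuple, {'balance': 400})
result = client.apply(keyTuple, 'batch_apply', 'tax', ['balance', 0.5, 100])
print(result)  # 100
```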
See also
- scan_apply(ns, set, module, function[, args[, policy: dict[, options]]]) int
Deprecated since version 7.0.0: aerospike.Query should be used instead.
Initiate a scan and apply a record UDF to each record matched by the scan.
This method blocks until the scan is complete.
- Parameters:
- Return type:
int
- Returns:
a job ID that can be used with job_info() to check the status of the aerospike.JOB_SCAN.
.- Raises:
a subclass of
AerospikeError
.
- query_apply(ns, set, predicate, module, function[, args[, policy: dict]]) int
Initiate a query and apply a record UDF to each record matched by the query.
This method blocks until the query is complete.
- Parameters:
ns (str) – the namespace in the aerospike cluster.
set (str) – the set name. Should be None if you want to query records in the namespace which are in no set.
predicate (tuple) – the tuple produced by one of the aerospike.predicates methods.
module (str) – the name of the UDF module.
function (str) – the name of the UDF to apply to the records matched by the query.
args (list) – the arguments to the UDF.
policy (dict) – optional Write Policies.
- Return type:
int
- Returns:
a job ID that can be used with job_info() to check the status of the aerospike.JOB_QUERY.
.- Raises:
a subclass of
AerospikeError
.
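A sketch of query_apply(), reusing the tax UDF from the batch_apply() example; it assumes that module is registered and that a secondary index exists on the id bin:

```python
import time
from aerospike import predicates as p

# Apply the UDF to every record in test/demo where id == 100
job_id = client.query_apply('test', 'demo', p.equals('id', 100),
                            'batch_apply', 'tax', ['balance', 0.5, 100])

# The UDF runs as a background job; poll until it completes
while client.job_info(job_id, aerospike.JOB_QUERY)['status'] != aerospike.JOB_STATUS_COMPLETED:
    time.sleep(0.25)
```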
- job_info(job_id, module[, policy: dict]) dict
Return the status of a job running in the background.
The returned dict contains these keys:
- "status": see Job Statuses for possible values.
- "records_read": number of scanned records.
- "progress_pct": progress percentage of the job.
- Parameters:
job_id (int) – the job ID returned by scan_apply() or query_apply().
module – one of Job Constants.
policy – optional Info Policies.
- Returns:
a dict of job status information, with the keys described above.
- Raises:
a subclass of
AerospikeError
.
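For instance, polling a background job started by query_apply() (job_id is assumed to come from that call):

```python
import time

# Poll until the server reports the job as completed
while True:
    response = client.job_info(job_id, aerospike.JOB_QUERY)
    # e.g. {'status': 1, 'records_read': 10, 'progress_pct': 50}
    print(response)
    if response['status'] == aerospike.JOB_STATUS_COMPLETED:
        break
    time.sleep(0.25)
```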
Info Operations
- class aerospike.Client
- get_node_names() []
Return the list of hosts and node names present in a connected cluster.
- Returns:
a list of node info dictionaries.
- Raises:
a subclass of AerospikeError.
# Assuming two nodes
nodes = client.get_node_names()
print(nodes)
# [{'address': '1.1.1.1', 'port': 3000, 'node_name': 'BCER199932C'},
#  {'address': '1.1.1.1', 'port': 3010, 'node_name': 'ADFFE7782CD'}]
Changed in version 6.0.0.
- get_nodes() []
Return the list of hosts present in a connected cluster.
- Returns:
a list of node address tuples.
- Raises:
a subclass of AerospikeError.
# Assuming two nodes
nodes = client.get_nodes()
print(nodes)  # [('127.0.0.1', 3000), ('127.0.0.1', 3010)]
Changed in version 3.0.0.
Warning
In versions < 3.0.0, get_nodes will not work when using TLS.
- info_single_node(command, host[, policy: dict]) str
Send an info command to a single node specified by host name.
- Parameters:
command (str) – the info command. See Info Command Reference.
host (str) – a node name. Example: ‘BCER199932C’
policy (dict) – optional Info Policies.
- Return type:
str
- Raises:
a subclass of
AerospikeError
.
Note
Use
get_node_names()
as an easy way to get host IP to node name mappings.
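A sketch that pairs get_node_names() with info_single_node(), assuming the boilerplate client:

```python
# Pick a node name from the cluster, then query only that node
nodes = client.get_node_names()
nodeName = nodes[0]['node_name']
response = client.info_single_node('namespaces', nodeName)
print(response)  # e.g. 'test\n'
```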
- info_all(command[, policy: dict]) -> {}
Send an info command to all nodes in the cluster to which the client is connected.
If any of the individual requests fail, this will raise an exception.
- Parameters:
command (str) –
policy (dict) – optional Info Policies.
- Return type:
dict
- Raises:
a subclass of
AerospikeError
.
response = client.info_all("namespaces")
print(response)  # {'BB9020011AC4202': (None, 'test\n')}
New in version 3.0.0.
- info_random_node(command[, policy: dict]) str
Send an info command to a single random node.
- Parameters:
command (str) –
the info command. See Info Command Reference.
policy (dict) – optional Info Policies.
- Return type:
str
- Raises:
a subclass of
AerospikeError
.
Changed in version 6.0.0.
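For example (a sketch, assuming the boilerplate client):

```python
# Ask an arbitrary node for its server build version
response = client.info_random_node('build')
print(response)
```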
- set_xdr_filter(data_center, namespace, expression_filter[, policy: dict]) str
Set the cluster’s xdr filter using an Aerospike expression.
The cluster’s current filter can be removed by setting expression_filter to None.
- Parameters:
data_center (str) – The data center to apply the filter to.
namespace (str) – The namespace to apply the filter to.
expression_filter (AerospikeExpression) – The filter to set. See expressions at
aerospike_helpers
.
policy (dict) – optional Info Policies.
- Raises:
a subclass of
AerospikeError
.
See also
Changed in version 5.0.0.
Warning
Requires Aerospike server version >= 5.3.
- get_expression_base64(expression) str
Get the base64 representation of a compiled aerospike expression.
See aerospike_helpers.expressions package for more details on expressions.
- Parameters:
expression (AerospikeExpression) – the compiled expression.
- Raises:
a subclass of
AerospikeError
.
from aerospike_helpers import expressions as exp

# Compile expression
expr = exp.Eq(exp.IntBin("bin1"), 6).compile()

base64 = client.get_expression_base64(expr)
print(base64)
# kwGTUQKkYmluMQY=
Changed in version 7.0.0.
- shm_key() int
Expose the value of the shm_key for this client if shared-memory cluster tending is enabled.
- truncate(namespace, set, nanos[, policy: dict])
Remove all records in the namespace / set whose last updated time is older than the given time.
This method is many orders of magnitude faster than deleting records one at a time. See Truncate command reference.
This asynchronous server call may return before the truncation is complete. The user can still write new records after the server returns, because new records will have last-update times greater than the truncate cutoff (set at the time of the truncate call).
- Parameters:
namespace (str) – The namespace to truncate.
set (str) – The set to truncate. Pass in None to truncate a namespace instead.
nanos (long) – A cutoff threshold; records last updated before the threshold will be removed. Units are in nanoseconds since the UNIX epoch (1970-01-01). A value of 0 indicates that all records in the set should be truncated regardless of update time. The value must not be in the future.
policy (dict) – See Info Policies.
- Return type:
Status indicating the success of the operation.
- Raises:
a subclass of
AerospikeError
.
Note
Requires Aerospike server version >= 3.12
import time

client.put(("test", "demo", "key1"), {"bin": 4})
time.sleep(1)

# Take threshold time
current_time = time.time()
time.sleep(1)

client.put(("test", "demo", "key2"), {"bin": 5})

threshold_ns = int(current_time * 10**9)

# Remove all items in set `demo` created before threshold time
# Record using key1 should be removed
client.truncate('test', 'demo', threshold_ns)

# Remove all items in namespace
# client.truncate('test', None, 0)
Index Operations
- class aerospike.Client
- index_string_create(ns, set, bin, name[, policy: dict])
Create a string index with name on the bin in the specified ns, set.
- Parameters:
- Raises:
a subclass of
AerospikeError
.
- index_integer_create(ns, set, bin, name[, policy])
Create an integer index with name on the bin in the specified ns, set.
- Parameters:
- Raises:
a subclass of
AerospikeError
.
- index_blob_create(ns, set, bin, name[, policy])
Create a blob index with name on the bin in the specified ns, set.
- Parameters:
- Raises:
a subclass of
AerospikeError
.
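The scalar index methods above share the same argument order. A minimal sketch (the bin and index names are hypothetical, and the client calls are commented out so the snippet runs without a cluster):

```python
ns, set_name = 'test', 'demo'

# With a connected client you would run, for example:
# client.index_string_create(ns, set_name, 'name', 'demo_name_str_idx')
# client.index_integer_create(ns, set_name, 'age', 'demo_age_int_idx')
# client.index_blob_create(ns, set_name, 'avatar', 'demo_avatar_blob_idx')

# A simple "<set>_<bin>_<type>_idx" naming convention (illustrative only,
# not required by the API)
def index_name(set_name, bin_name, type_tag):
    return '{0}_{1}_{2}_idx'.format(set_name, bin_name, type_tag)

print(index_name('demo', 'age', 'int'))  # demo_age_int_idx
```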
- index_list_create(ns, set, bin, index_datatype, name[, policy: dict])
Create an index named name for numeric, string or GeoJSON values (as defined by index_datatype) on records of the specified ns, set whose bin is a list.
- Parameters:
ns (str) – the namespace in the aerospike cluster.
set (str) – the set name.
bin (str) – the name of the bin the secondary index is built on.
index_datatype – Possible values are aerospike.INDEX_STRING, aerospike.INDEX_NUMERIC, aerospike.INDEX_BLOB, and aerospike.INDEX_GEO2DSPHERE.
name (str) – the name of the index.
policy (dict) – optional Info Policies.
- Raises:
a subclass of
AerospikeError
.
Note
Requires server version >= 3.8.0
- index_map_keys_create(ns, set, bin, index_datatype, name[, policy: dict])
Create an index named name for numeric, string or GeoJSON values (as defined by index_datatype) on records of the specified ns, set whose bin is a map. The index will include the keys of the map.
- Parameters:
ns (str) – the namespace in the aerospike cluster.
set (str) – the set name.
bin (str) – the name of the bin the secondary index is built on.
index_datatype – Possible values are aerospike.INDEX_STRING, aerospike.INDEX_NUMERIC, aerospike.INDEX_BLOB, and aerospike.INDEX_GEO2DSPHERE.
name (str) – the name of the index.
policy (dict) – optional Info Policies.
- Raises:
a subclass of
AerospikeError
.
Note
Requires server version >= 3.8.0
- index_map_values_create(ns, set, bin, index_datatype, name[, policy: dict])
Create an index named name for numeric, string or GeoJSON values (as defined by index_datatype) on records of the specified ns, set whose bin is a map. The index will include the values of the map.
- Parameters:
ns (str) – the namespace in the aerospike cluster.
set (str) – the set name.
bin (str) – the name of the bin the secondary index is built on.
index_datatype – Possible values are aerospike.INDEX_STRING, aerospike.INDEX_NUMERIC, aerospike.INDEX_BLOB, and aerospike.INDEX_GEO2DSPHERE.
name (str) – the name of the index.
policy (dict) – optional Info Policies.
- Raises:
a subclass of
AerospikeError
.
Note
Requires server version >= 3.8.0
import aerospike

client = aerospike.client({'hosts': [('127.0.0.1', 3000)]})

# Assume the bin fav_movies in the set test.demo should contain
# a dict { (str) _title_ : (int) _times_viewed_ }

# Create a secondary index on the string keys of test.demo records whose 'fav_movies' bin is a map
client.index_map_keys_create('test', 'demo', 'fav_movies', aerospike.INDEX_STRING, 'demo_fav_movies_titles_idx')

# Create a secondary index on the integer values of test.demo records whose 'fav_movies' bin is a map
client.index_map_values_create('test', 'demo', 'fav_movies', aerospike.INDEX_NUMERIC, 'demo_fav_movies_views_idx')

client.close()
- index_geo2dsphere_create(ns, set, bin, name[, policy: dict])
Create a geospatial 2D spherical index with name on the bin in the specified ns, set.
- Parameters:
- Raises:
a subclass of
AerospikeError
.
See also
Note
Requires server version >= 3.7.0
import aerospike

client = aerospike.client({'hosts': [('127.0.0.1', 3000)]})
client.index_geo2dsphere_create('test', 'pads', 'loc', 'pads_loc_geo')
client.close()
- index_remove(ns: str, name: str[, policy: dict])
Remove the index with name from the namespace.
- Parameters:
ns (str) – the namespace in the aerospike cluster.
name (str) – the name of the index.
policy (dict) – optional Info Policies.
- Raises:
a subclass of
AerospikeError
.
- get_cdtctx_base64(ctx: list) str
Get the base64 representation of aerospike CDT ctx.
See aerospike_helpers.cdt_ctx module for more details on CDT context.
- Parameters:
ctx (list) – Aerospike CDT context: generated by aerospike CDT ctx helper
aerospike_helpers
.
- Raises:
a subclass of
AerospikeError
.
import aerospike
from aerospike_helpers import cdt_ctx

config = {'hosts': [('127.0.0.1', 3000)]}
client = aerospike.client(config)

ctxs = [cdt_ctx.cdt_ctx_list_index(0)]
ctxs_base64 = client.get_cdtctx_base64(ctxs)
print("Base64 encoding of ctxs:", ctxs_base64)

client.close()
Changed in version 7.1.1.
Admin Operations
The admin methods implement the security features of the Enterprise Edition of Aerospike. These methods will raise a SecurityNotSupported
when the client is connected to a Community Edition cluster (see
aerospike.exception
).
A user is validated by the client against the server whenever a connection is established through the use of a username and password (passwords hashed using bcrypt). When security is enabled, each operation is validated against the user's roles. Users are assigned roles, which are collections of Privilege Objects.
import aerospike
from aerospike import exception as ex
import time
config = {'hosts': [('127.0.0.1', 3000)] }
client = aerospike.client(config).connect('ipji', 'life is good')
try:
dev_privileges = [{'code': aerospike.PRIV_READ}, {'code': aerospike.PRIV_READ_WRITE}]
client.admin_create_role('dev_role', dev_privileges)
client.admin_grant_privileges('dev_role', [{'code': aerospike.PRIV_READ_WRITE_UDF}])
client.admin_create_user('dev', 'you young whatchacallit... idiot', ['dev_role'])
time.sleep(1)
print(client.admin_query_user('dev'))
print(client.admin_query_users())
except ex.AdminError as e:
print("Error [{0}]: {1}".format(e.code, e.msg))
client.close()
See also
- class aerospike.Client
- admin_create_role(role, privileges[, policy: dict[, whitelist[, read_quota[, write_quota]]]])
Create a custom role containing a
list
of privileges, as well as an optional whitelist and quotas.
- Parameters:
role (str) – The name of the role.
privileges (list) – A list of Privilege Objects.
policy (dict) – See Admin Policies.
whitelist (list) – A list of whitelist IP addresses that can contain wildcards, for example 10.1.2.0/24.
read_quota (int) – Maximum reads per second limit. Pass in 0 for no limit.
write_quota (int) – Maximum writes per second limit. Pass in 0 for no limit.
- Raises:
One of the
AdminError
subclasses.
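A sketch of assembling these arguments (the PRIV_READ value is a placeholder for the real aerospike.PRIV_READ constant, and the role name is made up, so the snippet runs without a connected Enterprise Edition cluster):

```python
PRIV_READ = 10  # placeholder for aerospike.PRIV_READ

# Privilege Objects are dicts with a 'code' key and optional
# 'ns'/'set' keys that scope the privilege to a namespace or set
privileges = [
    {'code': PRIV_READ},                               # cluster-wide read
    {'code': PRIV_READ, 'ns': 'test', 'set': 'demo'},  # scoped to test.demo
]
whitelist = ['10.1.2.0/24']  # addresses may contain wildcards

# With a connected Enterprise Edition client:
# client.admin_create_role('report_role', privileges,
#                          whitelist=whitelist, read_quota=500, write_quota=0)
print(privileges[1]['ns'])  # test
```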
- admin_set_whitelist(role, whitelist[, policy: dict])
Add a whitelist to a role.
- Parameters:
role (str) – The name of the role.
whitelist (list) – List of IP strings the role is allowed to connect to. Setting this to None will clear the whitelist for that role.
policy (dict) – See Admin Policies.
- Raises:
One of the
AdminError
subclasses.
- admin_set_quotas(role[, read_quota[, write_quota[, policy: dict]]])
Add quotas to a role.
- Parameters:
role (str) – the name of the role.
read_quota (int) – Maximum reads per second limit. Pass in 0 for no limit.
write_quota (int) – Maximum writes per second limit. Pass in 0 for no limit.
policy (dict) – See Admin Policies.
- Raises:
one of the
AdminError
subclasses.
- admin_drop_role(role[, policy: dict])
Drop a custom role.
- Parameters:
role (str) – the name of the role.
policy (dict) – See Admin Policies.
- Raises:
one of the
AdminError
subclasses.
- admin_grant_privileges(role, privileges[, policy: dict])
Add privileges to a role.
- Parameters:
role (str) – the name of the role.
privileges (list) – a list of Privilege Objects.
policy (dict) – See Admin Policies.
- Raises:
one of the
AdminError
subclasses.
- admin_revoke_privileges(role, privileges[, policy: dict])
Remove privileges from a role.
- Parameters:
role (str) – the name of the role.
privileges (list) – a list of Privilege Objects.
policy (dict) – See Admin Policies.
- Raises:
one of the
AdminError
subclasses.
- admin_get_role(role[, policy: dict]) {}
Get a
dict
of privileges, whitelist, and quotas associated with a role.
- Parameters:
role (str) – the name of the role.
policy (dict) – See Admin Policies.
- Returns:
a Role Object.
- Raises:
one of the
AdminError
subclasses.
- admin_get_roles([policy: dict]) {}
Get the names of all roles and their attributes.
- Parameters:
policy (dict) – See Admin Policies.
- Returns:
a
dict
of Role Objects keyed by role names.
- Raises:
one of the
AdminError
subclasses.
- admin_query_role(role[, policy: dict]) []
Get the
list
of privileges associated with a role.
- Parameters:
role (str) – the name of the role.
policy (dict) – See Admin Policies.
- Returns:
a
list
of Privilege Objects.
- Raises:
one of the
AdminError
subclasses.
- admin_query_roles([policy: dict]) {}
Get all named roles and their privileges.
- Parameters:
policy (dict) – optional Admin Policies.
- Returns:
a
dict
of Privilege Objects keyed by role name.
- Raises:
one of the
AdminError
subclasses.
- admin_create_user(username, password, roles[, policy: dict])
Create a user and grant it roles.
- Parameters:
username (str) – the username to be added to the Aerospike cluster.
password (str) – the password associated with the given username.
roles (list) – the list of role names assigned to the user.
policy (dict) – optional Admin Policies.
- Raises:
one of the
AdminError
subclasses.
- admin_drop_user(username[, policy: dict])
Drop the user with a specified username from the cluster.
- Parameters:
username (str) – the username to be dropped from the aerospike cluster.
policy (dict) – optional Admin Policies.
- Raises:
one of the
AdminError
subclasses.
- admin_change_password(username, password[, policy: dict])
Change the password of a user.
This operation can only be performed by that same user.
- Parameters:
username (str) – the username of the user.
password (str) – the password associated with the given username.
policy (dict) – optional Admin Policies.
- Raises:
one of the
AdminError
subclasses.
- admin_set_password(username, password[, policy: dict])
Set the password of a user by a user administrator.
- Parameters:
username (str) – the username to be added to the aerospike cluster.
password (str) – the password associated with the given username.
policy (dict) – optional Admin Policies.
- Raises:
one of the
AdminError
subclasses.
- admin_grant_roles(username, roles[, policy: dict])
Add roles to a user.
- Parameters:
username (str) – the username of the user.
roles (list) – a list of role names.
policy (dict) – optional Admin Policies.
- Raises:
one of the
AdminError
subclasses.
- admin_revoke_roles(username, roles[, policy: dict])
Remove roles from a user.
- Parameters:
username (str) – the username to have the roles revoked.
roles (list) – a list of role names.
policy (dict) – optional Admin Policies.
- Raises:
one of the
AdminError
subclasses.
- admin_query_user_info(user: str[, policy: dict]) dict
Retrieve roles and other info for a given user.
- Parameters:
user (str) – the username of the user.
policy (dict) – optional Admin Policies.
- Returns:
a
dict
of user data. See User Dictionary.
- admin_query_users_info([policy: dict]) list
Retrieve roles and other info for all users.
- Parameters:
policy (dict) – optional Admin Policies.
- Returns:
a
list
of users’ data. See User Dictionary.
- admin_query_user(username[, policy: dict]) []
Deprecated since version 12.0.0:
admin_query_user_info()
should be used instead.
Return the list of roles granted to the specified user.
- Parameters:
username (str) – the username of the user.
policy (dict) – optional Admin Policies.
- Returns:
a
list
of role names.
- Raises:
one of the
AdminError
subclasses.
- admin_query_users([policy: dict]) {}
Deprecated since version 12.0.0:
admin_query_users_info()
should be used instead.
Get the roles of all users.
- Parameters:
policy (dict) – optional Admin Policies.
- Returns:
a
dict
of roles keyed by username.
- Raises:
one of the
AdminError
subclasses.
Metrics
- class aerospike.Client
- enable_metrics(policy: aerospike_helpers.metrics.MetricsPolicy | None = None)
Enable extended periodic cluster and node latency metrics.
- Parameters:
policy (MetricsPolicy) – Optional metrics policy
- Raises:
AerospikeError
or one of its subclasses.
- disable_metrics()
Disable extended periodic cluster and node latency metrics.
- Raises:
AerospikeError
or one of its subclasses.
User Dictionary
The user dictionary has the following key-value pairs:
"read_info"
(list[int]
): list of read statistics. List may be None. Current statistics by offset are:
0: read quota in records per second
1: single record read transaction rate (TPS)
2: read scan/query record per second rate (RPS)
3: number of limitless read scans/queries
Future server releases may add additional statistics.
"write_info"
(list[int]
): list of write statistics. List may be None. Current statistics by offset are:
0: write quota in records per second
1: single record write transaction rate (TPS)
2: write scan/query record per second rate (RPS)
3: number of limitless write scans/queries
Future server releases may add additional statistics.
"conns_in_use"
(int
): number of currently open connections.
"roles"
(list[str]
): list of assigned role names.
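A sketch of reading these fields (the dict below is hand-written sample data; with a live cluster it would come from admin_query_user_info()):

```python
# Sample user dictionary of the shape described above
user = {
    'read_info':  [1000, 0, 0, 0],  # offset 0: read quota in records per second
    'write_info': [500, 0, 0, 0],   # offset 0: write quota in records per second
    'conns_in_use': 3,
    'roles': ['dev_role'],
}

# The statistics lists may be None, so guard before indexing
read_quota = user['read_info'][0] if user['read_info'] else None
print(read_quota, user['roles'])  # 1000 ['dev_role']
```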
Scan and Query Constructors
- class aerospike.Client
- scan(namespace[, set]) Scan
Deprecated since version 7.0.0:
aerospike.Query
should be used instead.
Returns an
aerospike.Scan
object to scan all records in a namespace / set.
If set is omitted or set to
None
, the object returns all records in the namespace.
- Parameters:
- Returns:
an
aerospike.Scan
class.
- query(namespace[, set]) Query
Return an
aerospike.Query
object to be used for executing queries over a specified set in a namespace.
See aerospike.Query — Query Class for more details.
- Parameters:
- Returns:
an
aerospike.Query
class.
Tuples
Key Tuple
- key
The key tuple, which is sent and returned by various operations, has the structure
(namespace, set, primary key[, digest])
- namespace (
str
)Name of the namespace.
This must be preconfigured on the cluster.
- set (
str
)Name of the set.
The set is created automatically if it does not exist.
- primary key (str, int, or bytearray)
The primary key that identifies the record within its namespace and set.
- digest
The record’s RIPEMD-160 digest.
The first three parts of the tuple get hashed through RIPEMD-160, and the digest is used by the clients and cluster nodes to locate the record. A key tuple is also valid if it has the digest part filled and the primary key part set to
None
.
The following code example shows:
How to use the key tuple in a put operation
How to fetch the key tuple in a get operation
import aerospike

# NOTE: change this to your Aerospike server's seed node address
seedNode = ('127.0.0.1', 3000)
config = {'hosts': [seedNode]}
client = aerospike.client(config)

# The key tuple comprises the following:
namespaceName = 'test'
setName = 'setname'
primaryKeyName = 'pkname'
keyTuple = (namespaceName, setName, primaryKeyName)

# Insert a record
recordBins = {'bin1': 0, 'bin2': 1}
client.put(keyTuple, recordBins)

# Now fetch that record
(key, meta, bins) = client.get(keyTuple)

# The key should be in the second format
# Notice how there is no primary key
# and there is the record's digest
print(key)
# Expected output:
# ('test', 'setname', None, bytearray(b'b\xc7[\xbb\xa4K\xe2\x9al\xd12!&\xbf<\xd9\xf9\x1bPo'))

# Cleanup
client.remove(keyTuple)
client.close()

See also
Record Tuple
- record
The record tuple, which is returned by various read operations, has the structure:
(key, meta, bins)
We reuse the code example in the key-tuple section and print the
meta
and
bins
values that were returned from
get()
:
import aerospike

# NOTE: change this to your Aerospike server's seed node address
seedNode = ('127.0.0.1', 3000)
config = {'hosts': [seedNode]}
client = aerospike.client(config)

namespaceName = 'test'
setName = 'setname'
primaryKeyName = 'pkname'
keyTuple = (namespaceName, setName, primaryKeyName)

# Insert a record
recordBins = {'bin1': 0, 'bin2': 1}
client.put(keyTuple, recordBins)

# Now fetch that record
(key, meta, bins) = client.get(keyTuple)

# Generation is 1 because this is the first time we wrote the record
print(meta)
# Expected output:
# {'ttl': 2592000, 'gen': 1}

# The bin-value pairs we inserted
print(bins)
# {'bin1': 0, 'bin2': 1}

client.remove(keyTuple)
client.close()
See also
Metadata Dictionary
The metadata dictionary has the following key-value pairs:
"ttl"
(int
): record time to live in seconds. See TTL Constants for possible special values.
"gen"
(int
): record generation
Policies
Write Policies
- policy
A
dict
of optional write policies, which are applicable to
put()
,
query_apply()
, and
remove_bin()
.
- max_retries (
int
) - Maximum number of retries before aborting the current transaction. The initial attempt is not counted as a retry.
If max_retries is exceeded, the transaction will return error
AEROSPIKE_ERR_TIMEOUT
.
Default: 0
Warning
Database writes that are not idempotent (such as “add”) should not be retried because the write operation may be performed multiple times if the client timed out previous transaction attempts. It’s important to use a distinct write policy for non-idempotent writes, which sets max_retries = 0.
- max_retries (
- sleep_between_retries (
int
) - Milliseconds to sleep between retries. Enter
0
to skip sleep.
Default: 0
- sleep_between_retries (
- socket_timeout (
int
) - Socket idle timeout in milliseconds when processing a database command.
If socket_timeout is not 0 and the socket has been idle for at least socket_timeout, both max_retries and total_timeout are checked. If max_retries and total_timeout are not exceeded, the transaction is retried.
If both socket_timeout and total_timeout are non-zero and socket_timeout > total_timeout, then socket_timeout will be set to total_timeout. If socket_timeout is 0, there will be no socket idle limit.
Default: 30000
- socket_timeout (
- total_timeout (
int
) - Total transaction timeout in milliseconds.
The total_timeout is tracked on the client and sent to the server along with the transaction in the wire protocol. The client will most likely timeout first, but the server also has the capability to timeout the transaction.
If total_timeout is not 0 and total_timeout is reached before the transaction completes, the transaction will return error AEROSPIKE_ERR_TIMEOUT. If total_timeout is 0, there will be no total time limit.
Default: 1000
- total_timeout (
- compress (
bool
) - Compress client requests and server responses.
Use zlib compression on write or batch read commands when the command buffer size is greater than 128 bytes. In addition, tell the server to compress its response on read commands. The server response compression threshold is also 128 bytes.
This option will increase CPU and memory usage (for extra compressed buffers), but decrease the size of data sent over the network.
Default:
False
- compress (
- key
- One of the Key Policy Options values such as
aerospike.POLICY_KEY_DIGEST
Default:aerospike.POLICY_KEY_DIGEST
- exists
- One of the Existence Policy Options values such as
aerospike.POLICY_EXISTS_CREATE
Default:aerospike.POLICY_EXISTS_IGNORE
- ttl
The default time-to-live (expiration) of the record in seconds. This field will only be used if the write transaction:
Doesn’t contain a metadata dictionary with a ttl value.
Contains a metadata dictionary with a ttl value set to aerospike.TTL_CLIENT_DEFAULT.
There are also special values that can be set for this option. See TTL Constants.
- gen
- One of the Generation Policy Options values such as
aerospike.POLICY_GEN_IGNORE
Default:aerospike.POLICY_GEN_IGNORE
- commit_level
- One of the Commit Level Policy Options values such as
aerospike.POLICY_COMMIT_LEVEL_ALL
Default:aerospike.POLICY_COMMIT_LEVEL_ALL
- durable_delete (
bool
) - Perform durable delete.
Default:
False
- durable_delete (
- expressions
list
- Compiled aerospike expressions
aerospike_helpers
used for filtering records within a transaction.
Default: None
Note
Requires Aerospike server version >= 5.2.
- expressions
- compression_threshold (
int
) Compress data for transmission if the object size is greater than a given number of bytes.
Default:
0
, meaning ‘never compress’
- compression_threshold (
- replica
Algorithm used to determine target node. One of the Replica Options values.
Default:
aerospike.POLICY_REPLICA_SEQUENCE
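Putting several of the options above together, a write policy for non-idempotent writes (per the max_retries warning) is just a dict. The exists value below is a placeholder for aerospike.POLICY_EXISTS_UPDATE so the snippet runs without importing the client:

```python
POLICY_EXISTS_UPDATE = 2  # placeholder for aerospike.POLICY_EXISTS_UPDATE

# Never retry "add"-style writes; require the record to already exist
non_idempotent_write_policy = {
    'max_retries': 0,
    'total_timeout': 1000,  # milliseconds
    'exists': POLICY_EXISTS_UPDATE,
}

# With a connected client:
# client.put(keyTuple, {'counter': 1}, policy=non_idempotent_write_policy)
print(non_idempotent_write_policy['max_retries'])  # 0
```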
Read Policies
- policy
A
dict
of optional read policies, which are applicable toget()
,exists()
,select()
.- max_retries (
int
) - Maximum number of retries before aborting the current transaction. The initial attempt is not counted as a retry.
If max_retries is exceeded, the transaction will return error
AEROSPIKE_ERR_TIMEOUT
.
Default: 2
- max_retries (
- sleep_between_retries (
int
) - Milliseconds to sleep between retries. Enter
0
to skip sleep.
Default: 0
- sleep_between_retries (
- socket_timeout (
int
) - Socket idle timeout in milliseconds when processing a database command.
If socket_timeout is not 0 and the socket has been idle for at least socket_timeout, both max_retries and total_timeout are checked. If max_retries and total_timeout are not exceeded, the transaction is retried.
If both socket_timeout and total_timeout are non-zero and socket_timeout > total_timeout, then socket_timeout will be set to total_timeout. If socket_timeout is 0, there will be no socket idle limit.
Default: 30000
- socket_timeout (
- total_timeout (
int
) - Total transaction timeout in milliseconds.
The total_timeout is tracked on the client and sent to the server along with the transaction in the wire protocol. The client will most likely timeout first, but the server also has the capability to timeout the transaction.
If total_timeout is not 0 and total_timeout is reached before the transaction completes, the transaction will return error AEROSPIKE_ERR_TIMEOUT. If total_timeout is 0, there will be no total time limit.
Default: 1000
- total_timeout (
- compress (
bool
) - Compress client requests and server responses.
Use zlib compression on write or batch read commands when the command buffer size is greater than 128 bytes. In addition, tell the server to compress its response on read commands. The server response compression threshold is also 128 bytes.
This option will increase CPU and memory usage (for extra compressed buffers), but decrease the size of data sent over the network.
Default:
False
- compress (
- deserialize (
bool
) - Should raw bytes representing a list or map be deserialized to a list or dictionary.
Set to False for backup programs that just need access to raw bytes.
Default:
True
- deserialize (
- key
- One of the Key Policy Options values such as
aerospike.POLICY_KEY_DIGEST
Default:aerospike.POLICY_KEY_DIGEST
- read_mode_ap
- One of the AP Read Mode Policy Options values such as
aerospike.AS_POLICY_READ_MODE_AP_ONE
Default:aerospike.AS_POLICY_READ_MODE_AP_ONE
New in version 3.7.0.
- read_mode_sc
- One of the SC Read Mode Policy Options values such as
aerospike.POLICY_READ_MODE_SC_SESSION
New in version 3.7.0.
- read_touch_ttl_percent
Determine how record TTL (time to live) is affected on reads. When enabled, the server can efficiently operate as a read-based LRU cache where the least recently used records are expired. The value is expressed as a percentage of the TTL sent on the most recent write such that a read within this interval of the record’s end of life will generate a touch.
For example, if the most recent write had a TTL of 10 hours and
"read_touch_ttl_percent"
is set to 80, the next read within 8 hours of the record’s end of life (equivalent to 2 hours after the most recent write) will result in a touch, resetting the TTL to another 10 hours.
Values:
0: Use server config default-read-touch-ttl-pct for the record’s namespace/set.
-1: Do not reset record TTL on reads.
1 - 100: Reset record TTL on reads when within this percentage of the most recent write TTL.
Default:
0
Note
Requires Aerospike server version >= 7.1.
- replica
- One of the Replica Options values such as
aerospike.POLICY_REPLICA_MASTER
Default:aerospike.POLICY_REPLICA_SEQUENCE
- expressions
list
- Compiled aerospike expressions
aerospike_helpers
used for filtering records within a transaction.
Default: None
Note
Requires Aerospike server version >= 5.2.
- expressions
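The read_touch_ttl_percent example above can be checked with plain arithmetic (no client needed):

```python
# Most recent write TTL of 10 hours, read_touch_ttl_percent of 80
ttl_hours = 10
read_touch_ttl_percent = 80

# Reads within this many hours of the record's end of life trigger a touch
touch_window_hours = ttl_hours * read_touch_ttl_percent / 100
# Reads during the first part of the record's life do not
no_touch_hours = ttl_hours - touch_window_hours

print(touch_window_hours, no_touch_hours)  # 8.0 2.0
```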
Operate Policies
- policy
A
dict
of optional operate policies, which are applicable toappend()
,prepend()
,increment()
,operate()
, and atomic list and map operations.- max_retries (
int
) - Maximum number of retries before aborting the current transaction. The initial attempt is not counted as a retry.
If max_retries is exceeded, the transaction will return error
AEROSPIKE_ERR_TIMEOUT
.
Default: 0
Warning
Database writes that are not idempotent (such as “add”) should not be retried because the write operation may be performed multiple times if the client timed out previous transaction attempts. It’s important to use a distinct write policy for non-idempotent writes, which sets max_retries = 0.
- max_retries (
- sleep_between_retries (
int
) - Milliseconds to sleep between retries. Enter
0
to skip sleep.
Default: 0
- sleep_between_retries (
- socket_timeout (
int
) - Socket idle timeout in milliseconds when processing a database command.
If socket_timeout is not 0 and the socket has been idle for at least socket_timeout, both max_retries and total_timeout are checked. If max_retries and total_timeout are not exceeded, the transaction is retried.
If both socket_timeout and total_timeout are non-zero and socket_timeout > total_timeout, then socket_timeout will be set to total_timeout. If socket_timeout is 0, there will be no socket idle limit.
Default: 30000
- socket_timeout (
- total_timeout (
int
) - Total transaction timeout in milliseconds.
The total_timeout is tracked on the client and sent to the server along with the transaction in the wire protocol. The client will most likely timeout first, but the server also has the capability to timeout the transaction.
If total_timeout is not 0 and total_timeout is reached before the transaction completes, the transaction will return error AEROSPIKE_ERR_TIMEOUT. If total_timeout is 0, there will be no total time limit.
Default: 1000
- total_timeout (
- compress (
bool
) - Compress client requests and server responses.
Use zlib compression on write or batch read commands when the command buffer size is greater than 128 bytes. In addition, tell the server to compress its response on read commands. The server response compression threshold is also 128 bytes.
This option will increase CPU and memory usage (for extra compressed buffers), but decrease the size of data sent over the network.
Default:
False
- compress (
- key
- One of the Key Policy Options values such as
aerospike.POLICY_KEY_DIGEST
Default:aerospike.POLICY_KEY_DIGEST
- gen
- One of the Generation Policy Options values such as
aerospike.POLICY_GEN_IGNORE
Default:aerospike.POLICY_GEN_IGNORE
- ttl (
int
) The default time-to-live (expiration) of the record in seconds. This field will only be used if an operate transaction contains a write operation and either:
Doesn’t contain a metadata dictionary with a ttl value.
Contains a metadata dictionary with a ttl value set to aerospike.TTL_CLIENT_DEFAULT.
There are also special values that can be set for this option. See TTL Constants.
- ttl (
- read_touch_ttl_percent
Determine how record TTL (time to live) is affected on reads. When enabled, the server can efficiently operate as a read-based LRU cache where the least recently used records are expired. The value is expressed as a percentage of the TTL sent on the most recent write such that a read within this interval of the record’s end of life will generate a touch.
For example, if the most recent write had a TTL of 10 hours and
"read_touch_ttl_percent"
is set to 80, the next read within 8 hours of the record’s end of life (equivalent to 2 hours after the most recent write) will result in a touch, resetting the TTL to another 10 hours.
Values:
0: Use server config default-read-touch-ttl-pct for the record’s namespace/set.
-1: Do not reset record TTL on reads.
1 - 100: Reset record TTL on reads when within this percentage of the most recent write TTL.
Default:
0
Note
Requires Aerospike server version >= 7.1.
- replica
- One of the Replica Options values such as
aerospike.POLICY_REPLICA_MASTER
Default:aerospike.POLICY_REPLICA_SEQUENCE
- commit_level
- One of the Commit Level Policy Options values such as
aerospike.POLICY_COMMIT_LEVEL_ALL
Default:aerospike.POLICY_COMMIT_LEVEL_ALL
- read_mode_ap
- One of the AP Read Mode Policy Options values such as
aerospike.AS_POLICY_READ_MODE_AP_ONE
Default:aerospike.AS_POLICY_READ_MODE_AP_ONE
New in version 3.7.0.
- read_mode_sc
- One of the SC Read Mode Policy Options values such as
aerospike.POLICY_READ_MODE_SC_SESSION
New in version 3.7.0.
- exists
- One of the Existence Policy Options values such as
aerospike.POLICY_EXISTS_CREATE
Default:aerospike.POLICY_EXISTS_IGNORE
- durable_delete (
bool
) - Perform durable delete.
Default:
False
- durable_delete (
- expressions
list
- Compiled aerospike expressions
aerospike_helpers
used for filtering records within a transaction.
Default: None
Note
Requires Aerospike server version >= 5.2.
- expressions
Apply Policies
- policy
A
dict
of optional apply policies, which are applicable toapply()
.- max_retries (
int
) - Maximum number of retries before aborting the current transaction. The initial attempt is not counted as a retry.
If max_retries is exceeded, the transaction will return error
AEROSPIKE_ERR_TIMEOUT
.
Default: 0
Warning
Database writes that are not idempotent (such as “add”) should not be retried because the write operation may be performed multiple times if the client timed out previous transaction attempts. It’s important to use a distinct write policy for non-idempotent writes, which sets max_retries = 0;
- max_retries (
- sleep_between_retries (
int
) - Milliseconds to sleep between retries. Enter
0
to skip sleep.Default:0
- sleep_between_retries (
- socket_timeout (
int
) - Socket idle timeout in milliseconds when processing a database command.If socket_timeout is not
0
and the socket has been idle for at least socket_timeout, both max_retries and total_timeout are checked. If max_retries and total_timeout are not exceeded, the transaction is retried.If bothsocket_timeout
andtotal_timeout
are non-zero andsocket_timeout
>total_timeout
, thensocket_timeout
will be set tototal_timeout
. Ifsocket_timeout
is0
, there will be no socket idle limit.Default:30000
- socket_timeout (
- total_timeout (
int
) - Total transaction timeout in milliseconds.The total_timeout is tracked on the client and sent to the server along with the transaction in the wire protocol. The client will most likely timeout first, but the server also has the capability to timeout the transaction.If
total_timeout
is not0
andtotal_timeout
is reached before the transaction completes, the transaction will return errorAEROSPIKE_ERR_TIMEOUT
. Iftotal_timeout
is0
, there will be no total time limit.Default:1000
- total_timeout (
- compress (
bool
) - Compress client requests and server responses.Use zlib compression on write or batch read commands when the command buffer size is greater than 128 bytes. In addition, tell the server to compress it’s response on read commands. The server response compression threshold is also 128 bytes.This option will increase cpu and memory usage (for extra compressed buffers), but decrease the size of data sent over the network.Default:
False
- compress (
- key
- One of the Key Policy Options values such as
aerospike.POLICY_KEY_DIGEST
Default:aerospike.POLICY_KEY_DIGEST
- replica
- One of the Replica Options values such as
aerospike.POLICY_REPLICA_MASTER
Default:aerospike.POLICY_REPLICA_SEQUENCE
- commit_level
- One of the Commit Level Policy Options values such as
aerospike.POLICY_COMMIT_LEVEL_ALL
Default:aerospike.POLICY_COMMIT_LEVEL_ALL
- ttl (
int
) The default time-to-live (expiration) of the record in seconds. This field will only be used if an apply transaction doesn’t have an apply policy with a
ttl
value that overrides this field.There are also special values that can be set for this field. See TTL Constants.
- ttl (
- durable_delete (
bool
) - Perform durable deleteDefault:
False
- durable_delete (
- expressions
list
- Compiled aerospike expressions
aerospike_helpers
used for filtering records within a transaction.Default: NoneNote
Requires Aerospike server version >= 5.2.
- expressions
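The socket_timeout / total_timeout interaction described above can be expressed as a small function. This is an illustrative sketch of the documented clamping rule, not client code (effective_socket_timeout is a hypothetical name):

```python
def effective_socket_timeout(socket_timeout: int, total_timeout: int) -> int:
    # Per the policy docs: if both values are non-zero and socket_timeout
    # exceeds total_timeout, socket_timeout is clamped down to total_timeout.
    # A socket_timeout of 0 means no socket idle limit at all.
    if socket_timeout != 0 and total_timeout != 0 and socket_timeout > total_timeout:
        return total_timeout
    return socket_timeout

# With the defaults above (socket_timeout=30000, total_timeout=1000),
# the effective socket idle limit is 1000 ms.
print(effective_socket_timeout(30000, 1000))  # 1000
```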
Remove Policies
- policy
A dict of optional remove policies, which are applicable to remove().
- max_retries (int)
  Maximum number of retries before aborting the current transaction. The initial attempt is not counted as a retry.
  If max_retries is exceeded, the transaction will return the error AEROSPIKE_ERR_TIMEOUT.
  Default: 0
  Warning
  Database writes that are not idempotent (such as "add") should not be retried, because the write operation may be performed multiple times if the client timed out previous transaction attempts. It is important to use a distinct write policy for non-idempotent writes, which sets max_retries = 0.
- sleep_between_retries (int)
  Milliseconds to sleep between retries. Enter 0 to skip sleep.
  Default: 0
- socket_timeout (int)
  Socket idle timeout in milliseconds when processing a database command.
  If socket_timeout is not 0 and the socket has been idle for at least socket_timeout, both max_retries and total_timeout are checked. If max_retries and total_timeout are not exceeded, the transaction is retried.
  If both socket_timeout and total_timeout are non-zero and socket_timeout > total_timeout, then socket_timeout will be set to total_timeout. If socket_timeout is 0, there will be no socket idle limit.
  Default: 30000
- total_timeout (int)
  Total transaction timeout in milliseconds.
  The total_timeout is tracked on the client and sent to the server along with the transaction in the wire protocol. The client will most likely time out first, but the server also has the capability to time out the transaction.
  If total_timeout is not 0 and total_timeout is reached before the transaction completes, the transaction will return the error AEROSPIKE_ERR_TIMEOUT. If total_timeout is 0, there will be no total time limit.
  Default: 1000
- compress (bool)
  Compress client requests and server responses.
  Use zlib compression on write or batch read commands when the command buffer size is greater than 128 bytes. In addition, tell the server to compress its response on read commands. The server response compression threshold is also 128 bytes.
  This option will increase CPU and memory usage (for extra compressed buffers), but decrease the size of data sent over the network.
  Default: False
- key
  One of the Key Policy Options values such as aerospike.POLICY_KEY_DIGEST.
  Default: aerospike.POLICY_KEY_DIGEST
- commit_level
  One of the Commit Level Policy Options values such as aerospike.POLICY_COMMIT_LEVEL_ALL.
  Default: aerospike.POLICY_COMMIT_LEVEL_ALL
- gen
  One of the Generation Policy Options values such as aerospike.POLICY_GEN_IGNORE.
  Default: aerospike.POLICY_GEN_IGNORE
- generation (int)
  The generation of the record. This value is limited to a 16-bit unsigned integer.
- durable_delete (bool)
  Perform durable delete.
  Default: False
  Note
  Requires Enterprise server version >= 3.10
- replica
  One of the Replica Options values such as aerospike.POLICY_REPLICA_MASTER.
  Default: aerospike.POLICY_REPLICA_SEQUENCE
- expressions (list)
  Compiled aerospike expressions (see aerospike_helpers) used for filtering records within a transaction.
  Default: None
  Note
  Requires Aerospike server version >= 5.2.
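A remove policy might be assembled like the sketch below. The values mirror the fields documented above; the plain integer stands in for an aerospike generation-policy constant so the snippet runs without the module, and the client call is commented out because it needs a live cluster:

```python
# Sketch of a remove policy dict; values mirror the fields documented above.
remove_policy = {
    "max_retries": 0,        # deletes are not idempotent-safe, so never retry
    "total_timeout": 1000,   # ms
    "durable_delete": True,  # Enterprise server >= 3.10 only
    "generation": 2,         # guards against concurrent writers when paired
                             # with a gen policy such as aerospike.POLICY_GEN_EQ
}
# Usage against a live cluster (see the boilerplate at the top of this page):
# client.remove(('test', 'demo', 'key'), policy=remove_policy)
```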
Batch Policies
- policy
A dict of optional batch policies, which are applicable to get_many(), exists_many() and select_many().
- max_retries (int)
  Maximum number of retries before aborting the current transaction. The initial attempt is not counted as a retry.
  If max_retries is exceeded, the transaction will return the error AEROSPIKE_ERR_TIMEOUT.
  Default: 2
- sleep_between_retries (int)
  Milliseconds to sleep between retries. Enter 0 to skip sleep.
  Default: 0
- socket_timeout (int)
  Socket idle timeout in milliseconds when processing a database command.
  If socket_timeout is not 0 and the socket has been idle for at least socket_timeout, both max_retries and total_timeout are checked. If max_retries and total_timeout are not exceeded, the transaction is retried.
  If both socket_timeout and total_timeout are non-zero and socket_timeout > total_timeout, then socket_timeout will be set to total_timeout. If socket_timeout is 0, there will be no socket idle limit.
  Default: 30000
- total_timeout (int)
  Total transaction timeout in milliseconds.
  The total_timeout is tracked on the client and sent to the server along with the transaction in the wire protocol. The client will most likely time out first, but the server also has the capability to time out the transaction.
  If total_timeout is not 0 and total_timeout is reached before the transaction completes, the transaction will return the error AEROSPIKE_ERR_TIMEOUT. If total_timeout is 0, there will be no total time limit.
  Default: 1000
- compress (bool)
  Compress client requests and server responses.
  Use zlib compression on write or batch read commands when the command buffer size is greater than 128 bytes. In addition, tell the server to compress its response on read commands. The server response compression threshold is also 128 bytes.
  This option will increase CPU and memory usage (for extra compressed buffers), but decrease the size of data sent over the network.
  Default: False
- read_mode_ap
  One of the AP Read Mode Policy Options values such as aerospike.AS_POLICY_READ_MODE_AP_ONE.
  Default: aerospike.AS_POLICY_READ_MODE_AP_ONE
  New in version 3.7.0.
- read_mode_sc
  One of the SC Read Mode Policy Options values such as aerospike.POLICY_READ_MODE_SC_SESSION.
  New in version 3.7.0.
- read_touch_ttl_percent
  Determine how record TTL (time to live) is affected on reads. When enabled, the server can efficiently operate as a read-based LRU cache where the least recently used records are expired. The value is expressed as a percentage of the TTL sent on the most recent write, such that a read within this interval of the record's end of life will generate a touch.
  For example, if the most recent write had a TTL of 10 hours and read_touch_ttl_percent is set to 80, the next read within 8 hours of the record's end of life (equivalent to 2 hours after the most recent write) will result in a touch, resetting the TTL to another 10 hours.
  Values:
  - 0 : Use the server config default-read-touch-ttl-pct for the record's namespace/set.
  - -1 : Do not reset record TTL on reads.
  - 1-100 : Reset record TTL on reads when within this percentage of the most recent write TTL.
  Default: 0
  Note
  Requires Aerospike server version >= 7.1.
- replica
  One of the Replica Options values such as aerospike.POLICY_REPLICA_MASTER.
  Default: aerospike.POLICY_REPLICA_SEQUENCE
- concurrent (bool)
  Determine if batch commands to each server are run in parallel threads.
  Default: False
- allow_inline (bool)
  Allow the batch to be processed immediately in the server's receiving thread when the server deems it to be appropriate. If False, the batch will always be processed in separate transaction threads. This field is only relevant for the new batch index protocol.
  Default: True
- allow_inline_ssd (bool)
  Allow the batch to be processed immediately in the server's receiving thread for SSD namespaces. If False, the batch will always be processed in separate service threads. Server versions < 6.0 ignore this field.
  Inline processing can introduce the possibility of unfairness because the server can process the entire batch before moving on to the next command.
  Default: False
- deserialize (bool)
  Should raw bytes be deserialized to as_list or as_map. Set to False for backup programs that just need access to raw bytes.
  Default: True
- expressions (list)
  Compiled aerospike expressions (see aerospike_helpers) used for filtering records within a transaction.
  Default: None
  Note
  Requires Aerospike server version >= 5.2.
- respond_all_keys (bool)
  Should all batch keys be attempted regardless of errors. This field is used on both the client and server. The client handles node-specific errors and the server handles key-specific errors.
  If True, every batch key is attempted regardless of previous key-specific errors. Node-specific errors such as timeouts stop keys to that node, but keys directed at other nodes will continue to be processed.
  If False, the server will stop the batch to its node on most key-specific errors. The exceptions are AEROSPIKE_ERR_RECORD_NOT_FOUND and AEROSPIKE_FILTERED_OUT, which never stop the batch. The client will stop the entire batch on node-specific errors for sync commands that are run in sequence (concurrent == False). The client will not stop the entire batch for async commands or sync commands run in parallel.
  Server versions < 6.0 do not support this field and treat this value as False for key-specific errors.
  Default: True
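Putting a few of these fields together, a batch read policy might look like the sketch below. The field values mirror the documentation above; the client call is commented out because it requires a live cluster:

```python
# Sketch of a batch policy dict for get_many() / exists_many() / select_many().
batch_policy = {
    "max_retries": 2,           # the documented default for batch commands
    "concurrent": True,         # run sub-batches to each node in parallel threads
    "respond_all_keys": True,   # keep attempting keys past key-specific errors
    "deserialize": True,        # turn raw bytes back into lists/maps
}
# Usage against a live cluster (see the boilerplate at the top of this page):
# keys = [('test', 'demo', i) for i in range(3)]
# records = client.get_many(keys, policy=batch_policy)
```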
Batch Write Policies
- policy
A dict of optional batch write policies, which are applicable to batch_write(), batch_operate() and Write.
- key
  One of the Key Policy Options values such as aerospike.POLICY_KEY_DIGEST.
  Default: aerospike.POLICY_KEY_DIGEST
- commit_level
  One of the Commit Level Policy Options values such as aerospike.POLICY_COMMIT_LEVEL_ALL.
  Default: aerospike.POLICY_COMMIT_LEVEL_ALL
- gen
  One of the Generation Policy Options values such as aerospike.POLICY_GEN_IGNORE.
  Default: aerospike.POLICY_GEN_IGNORE
- exists
  One of the Existence Policy Options values such as aerospike.POLICY_EXISTS_CREATE.
  Default: aerospike.POLICY_EXISTS_IGNORE
- durable_delete (bool)
  Perform durable delete.
  Default: False
- expressions (list)
  Compiled aerospike expressions (see aerospike_helpers) used for filtering records within a transaction.
  Default: None
- ttl (int)
  The time-to-live (expiration) in seconds to apply to every record in the batch. This field will only be used if:
  1. A batch_write() call contains a Write that either:
     - doesn't contain a metadata dictionary with a ttl value, or
     - contains a metadata dictionary with a ttl value set to aerospike.TTL_CLIENT_DEFAULT.
  2. A batch_operate() call either:
     - doesn't pass in a ttl argument, or
     - passes in aerospike.TTL_CLIENT_DEFAULT to the ttl parameter.
  There are also special values that can be set for this field. See TTL Constants.
  Default: 0
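The precedence rules above can be sketched as a resolver. TTL_CLIENT_DEFAULT is a real constant in the aerospike module, but its numeric value below (-1) is a stand-in chosen only so the snippet runs, and resolve_batch_ttl is a hypothetical helper, not part of the client API:

```python
TTL_CLIENT_DEFAULT = -1  # assumed stand-in for aerospike.TTL_CLIENT_DEFAULT

def resolve_batch_ttl(record_ttl, policy_ttl):
    """Pick the TTL actually used: a record-level ttl wins unless it is
    absent (None) or explicitly defers to the client-default sentinel."""
    if record_ttl is None or record_ttl == TTL_CLIENT_DEFAULT:
        return policy_ttl
    return record_ttl

print(resolve_batch_ttl(None, 300))                # no record ttl -> policy ttl: 300
print(resolve_batch_ttl(TTL_CLIENT_DEFAULT, 300))  # explicit deferral -> 300
print(resolve_batch_ttl(60, 300))                  # record-level ttl wins -> 60
```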
Batch Apply Policies
- policy
A dict of optional batch apply policies, which are applicable to batch_apply() and Apply.
- key
  One of the Key Policy Options values such as aerospike.POLICY_KEY_DIGEST.
  Default: aerospike.POLICY_KEY_DIGEST
- commit_level
  One of the Commit Level Policy Options values such as aerospike.POLICY_COMMIT_LEVEL_ALL.
  Default: aerospike.POLICY_COMMIT_LEVEL_ALL
- ttl (int)
  Time to live (expiration) of the record in seconds. See TTL Constants for possible special values.
  Note that the TTL value will be employed ONLY on write/update calls.
  Default: 0
- expressions (list)
  Compiled aerospike expressions (see aerospike_helpers) used for filtering records within a transaction.
  Default: None
Batch Remove Policies
- policy
A dict of optional batch remove policies, which are applicable to batch_remove() and Remove.
- key
  One of the Key Policy Options values such as aerospike.POLICY_KEY_DIGEST.
  Default: aerospike.POLICY_KEY_DIGEST
- commit_level
  One of the Commit Level Policy Options values such as aerospike.POLICY_COMMIT_LEVEL_ALL.
  Default: aerospike.POLICY_COMMIT_LEVEL_ALL
- gen
  One of the Generation Policy Options values such as aerospike.POLICY_GEN_IGNORE.
  Default: aerospike.POLICY_GEN_IGNORE
- generation (int)
  Generation of the record.
  Default: 0
- durable_delete (bool)
  Perform durable delete.
  Default: False
- expressions (list)
  Compiled aerospike expressions (see aerospike_helpers) used for filtering records within a transaction.
  Default: None
Batch Read Policies
- policy
A dict of optional batch read policies, which are applicable to Read.
- read_mode_ap
  One of the AP Read Mode Policy Options values such as aerospike.AS_POLICY_READ_MODE_AP_ONE.
  Default: aerospike.AS_POLICY_READ_MODE_AP_ONE
- read_mode_sc
  One of the SC Read Mode Policy Options values such as aerospike.POLICY_READ_MODE_SC_SESSION.
- expressions (list)
  Compiled aerospike expressions (see aerospike_helpers) used for filtering records within a transaction.
  Default: None
- read_touch_ttl_percent
  Determine how record TTL (time to live) is affected on reads. When enabled, the server can efficiently operate as a read-based LRU cache where the least recently used records are expired. The value is expressed as a percentage of the TTL sent on the most recent write, such that a read within this interval of the record's end of life will generate a touch.
  For example, if the most recent write had a TTL of 10 hours and read_touch_ttl_percent is set to 80, the next read within 8 hours of the record's end of life (equivalent to 2 hours after the most recent write) will result in a touch, resetting the TTL to another 10 hours.
  Values:
  - 0 : Use the server config default-read-touch-ttl-pct for the record's namespace/set.
  - -1 : Do not reset record TTL on reads.
  - 1-100 : Reset record TTL on reads when within this percentage of the most recent write TTL.
  Default: 0
  Note
  Requires Aerospike server version >= 7.1.
Info Policies
Admin Policies
List Policies
- policy
A dict of optional list policies, which are applicable to list operations.
- write_flags
  Write flags for the operation.
  One of the List Write Flags values such as aerospike.LIST_WRITE_DEFAULT.
  Default: aerospike.LIST_WRITE_DEFAULT
  Values should be or'd together:
  aerospike.LIST_WRITE_ADD_UNIQUE | aerospike.LIST_WRITE_INSERT_BOUNDED
- list_order
  Ordering to maintain for the list.
  One of List Order, such as aerospike.LIST_ORDERED.
  Default: aerospike.LIST_UNORDERED
Example:
list_policy = {
    "write_flags": aerospike.LIST_WRITE_ADD_UNIQUE | aerospike.LIST_WRITE_INSERT_BOUNDED,
    "list_order": aerospike.LIST_ORDERED
}
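Because write flags are bit flags, combining and testing them is plain integer bit arithmetic. The numeric values below are made-up stand-ins for the aerospike.LIST_WRITE_* constants (the real values may differ), chosen only to show the mechanics of or'ing flags together:

```python
# Made-up stand-ins for aerospike.LIST_WRITE_* constants (actual values differ).
LIST_WRITE_ADD_UNIQUE = 1 << 0
LIST_WRITE_INSERT_BOUNDED = 1 << 1

# Combine flags with bitwise OR, exactly as in the policy example above.
flags = LIST_WRITE_ADD_UNIQUE | LIST_WRITE_INSERT_BOUNDED

# Each flag can then be checked independently with a bitwise AND.
print(bool(flags & LIST_WRITE_ADD_UNIQUE))      # True
print(bool(flags & LIST_WRITE_INSERT_BOUNDED))  # True
```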
Map Policies
- policy
A dict of optional map policies, which are applicable to map operations.
- map_write_flags
  Write flags for the map operation.
  One of the Map Write Flag values such as aerospike.MAP_WRITE_FLAGS_DEFAULT.
  Default: aerospike.MAP_WRITE_FLAGS_DEFAULT
  Values should be or'd together:
  aerospike.MAP_WRITE_FLAGS_CREATE_ONLY | aerospike.MAP_WRITE_FLAGS_NO_FAIL
  Note
  This is only valid for Aerospike Server versions >= 4.3.0.
- map_order
  Ordering to maintain for the map entries.
  One of Map Order, such as aerospike.MAP_KEY_ORDERED.
  Default: aerospike.MAP_UNORDERED
- persist_index (bool)
Example:
# Server >= 4.3.0
map_policy = {
    'map_order': aerospike.MAP_UNORDERED,
    'map_write_flags': aerospike.MAP_WRITE_FLAGS_CREATE_ONLY
}
Bit Policies
- policy
A dict of optional bit policies, which are applicable to bitwise operations.
Note
Requires server version >= 4.6.0
- bit_write_flags
  Write flags for the bit operation.
  One of the Bitwise Write Flags values such as aerospike.BIT_WRITE_DEFAULT.
  Default: aerospike.BIT_WRITE_DEFAULT
Example:
bit_policy = {
    'bit_write_flags': aerospike.BIT_WRITE_UPDATE_ONLY
}
HyperLogLog Policies
- policy
A dict of optional HyperLogLog policies, which are applicable to HyperLogLog operations.
Note
Requires server version >= 4.9.0
- flags
  Write flags for the HLL operation.
  One of the HyperLogLog Write Flags values such as aerospike.HLL_WRITE_DEFAULT.
  Default: aerospike.HLL_WRITE_DEFAULT
Example:
HLL_policy = {
    'flags': aerospike.HLL_WRITE_UPDATE_ONLY
}
Misc
Role Objects
- Role
A dict describing attributes associated with a specific role:
- "privileges": a list of Privilege Objects.
- "whitelist": a list of IP address strings.
- "read_quota": an int representing the allowed read transactions per second.
- "write_quota": an int representing the allowed write transactions per second.
Privilege Objects
- privilege
A dict describing a privilege and where it applies:
- "code": one of the Privileges values.
- "ns": optional str specifying the namespace where the privilege applies. If not specified, the privilege applies globally.
- "set": optional str specifying the set within the namespace where the privilege applies. If not specified, the privilege applies to the entire namespace.
Example:
{'code': aerospike.PRIV_READ, 'ns': 'test', 'set': 'demo'}
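A Role dict can be built from such Privilege Objects as sketched below. The integer 10 is a made-up stand-in for aerospike.PRIV_READ (the real constant's value may differ), and the admin call is commented out because it needs a live Enterprise cluster:

```python
# Sketch of a Role dict assembled from Privilege Objects, mirroring the
# shapes documented above.
read_demo = {'code': 10, 'ns': 'test', 'set': 'demo'}  # 10: stand-in for aerospike.PRIV_READ
role = {
    "privileges": [read_demo],
    "whitelist": ["10.0.0.1"],
    "read_quota": 1000,   # allowed read transactions per second
    "write_quota": 100,   # allowed write transactions per second
}
# Usage against a live Enterprise cluster (hypothetical role name):
# client.admin_create_role("demo_reader", role["privileges"])
```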
Partition Objects
- partition_filter
A dict of partition information used by the client to perform partition queries or scans. Useful for resuming terminated queries and querying particular partitions or records.
- "begin": optional int signifying which partition to start at.
  Default: 0 (the first partition)
- "count": optional int signifying how many partitions to process.
  Default: 4096 (all partitions)
- "digest": optional dict containing the keys "init" and "value" signifying whether the digest has been calculated, and the digest value.
- "partition_status": optional dict containing partition_status tuples. These can be used to resume a query/scan.
  Default: {} (all partitions)
Default: {} (all partitions will be queried/scanned).
# Example of a query policy using partition_filter.
# partition_status is most easily used to resume a query
# and can be obtained by calling Query.get_partitions_status()
partition_status = {
    0: (0, False, False, bytearray([0]*20))...
}
policy = {
    "partition_filter": {
        "partition_status": partition_status,
        "begin": 0,
        "count": 4096
    },
}
- partition_status
Note
Requires Aerospike server version >= 6.0.
A dict of partition status information used by the client to set the partition status of a partition query or scan. This is useful for resuming either of those.
The dictionary contains these key-value pairs:
- "retry": bool representing the overall retry status of this partition query (i.e. does this query/scan need to be retried?)
- "done": bool representing whether all partitions were finished.
In addition, the dictionary contains keys of the partition IDs (int), and each partition ID is mapped to a tuple containing the status details of that partition. That tuple has the following values, in this order:
- id: int representing the partition ID number.
- init: bool representing whether the digest being queried was calculated.
- retry: bool representing whether this partition should be retried.
- digest: bytearray representing the digest of the record being queried. Should be 20 bytes long.
- bval: int used in conjunction with "digest" to determine the last record received by a partition query.
Default: {} (all partitions will be queried).
# Example of a query policy using partition_status.
# Here is the form of partition_status:
# partition_status = {
#     0: (0, False, False, bytearray([0]*20), 0)...
# }
partition_status = query.get_partitions_status()
policy = {
    "partition_filter": {
        "partition_status": partition_status,
        "begin": 0,
        "count": 4096
    },
}