Batch Writer

A Sidetree operation (Create, Update, Recover, or Deactivate) is posted by a client to the operations REST endpoint. The operation is validated according to the Sidetree DID Operations specification and added to the Operation Queue.

The Batch Writer:

  1. Drains operations from the Operation Queue

  2. Batches multiple Sidetree operations together into Sidetree batch files as per Sidetree file structure spec

  3. Stores the Sidetree batch files into Content Addressable Storage (CAS)

  4. Anchors a reference to the main Sidetree batch file (core index file) on the anchoring system as a Sidetree transaction

The number of operations that can be stored in the Sidetree batch files is limited by the Sidetree protocol parameter MAX_OPERATION_COUNT. The Batch Writer cuts a batch when the number of operations in its queue reaches MAX_OPERATION_COUNT or when the batch writer timeout (startup parameter batch-writer-timeout) expires.
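
As a rough illustration of the cut decision, a minimal sketch in Go is shown below. The function and parameter names are illustrative only and are not taken from the Orb or sidetree-core-go code:

    // Sketch of the batch-cut decision: a batch is cut when the queue holds at
    // least maxOperationCount operations, or when batchTimeout has elapsed since
    // the oldest pending operation was queued.
    package batchwriter

    import "time"

    func shouldCutBatch(queueLen, maxOperationCount uint, oldestQueued time.Time, batchTimeout time.Duration) bool {
        if queueLen == 0 {
            return false
        }

        if queueLen >= maxOperationCount {
            return true
        }

        return time.Since(oldestQueued) >= batchTimeout
    }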

Operation Queue

The Operation Queue in Orb is an implementation of the sidetree-core-go operation queue interface. The implemented functions are:

  1. Add: Adds an operation to the queue

  2. Remove: Returns and removes up to N operations from the queue

  3. Peek: Returns (but does not remove) up to N operations from the queue

  4. Len: Returns the current length of the queue
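
A simplified view of this interface is sketched below. The signatures are illustrative only; the actual sidetree-core-go interface uses richer operation types and, as described under Remove Operations, returns Ack and Nack functions from Remove:

    // Simplified sketch of an operation queue interface (not the exact
    // sidetree-core-go signatures).
    package opqueue

    // Operation is a placeholder for a queued Sidetree operation.
    type Operation struct {
        Data []byte
    }

    type OperationQueue interface {
        // Add adds an operation to the queue and returns the new queue length.
        Add(op *Operation) (uint, error)
        // Remove returns and removes up to num operations from the queue.
        Remove(num uint) ([]*Operation, error)
        // Peek returns (but does not remove) up to num operations from the queue.
        Peek(num uint) ([]*Operation, error)
        // Len returns the current length of the queue.
        Len() uint
    }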

Orb’s implementation of the Operation Queue is backed by an AMQP message broker and a database. Each Orb instance in a domain stores a task entry in the op-queue database on startup. The task entry contains an ID and an update time. The ID is the unique ID of the Orb instance (a GUID generated by the Task Manager on startup), and the update time is the timestamp of when the task entry was last updated. (This timestamp is used to check whether the Orb instance is still alive.)
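
The task entry can be pictured roughly as follows (the field names are assumptions for illustration, not the actual database schema):

    // Illustrative shape of an op-queue task entry.
    package opqueue

    import "time"

    type taskEntry struct {
        // ID is the unique ID (GUID) of the Orb instance that owns the task.
        ID string
        // UpdatedTime is the time the entry was last updated; other instances use
        // it to decide whether this instance is still alive.
        UpdatedTime time.Time
    }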

Add Operation

A Sidetree operation is posted by a client to the operations REST endpoint (which is exposed by the sidetree-core-go library). After the operation is validated, the Add function of Orb’s Operation Queue is invoked. The Operation Queue then publishes a message containing the operation to the AMQP orb.operation queue. Each Orb instance has a pool of subscribers for the orb.operation queue; the number of subscribers in the pool is determined by startup parameter op-queue-pool. One of the subscribers on an Orb instance handles the message by adding the operation to the op-queue database and also to an in-memory queue. Operations are stored in the database for recovery purposes, i.e. if the Orb instance goes down, another instance reposts its operations. (See Recovery for details.) Each database entry contains the contents of the operation and is also tagged (indexed) by:

  1. Task ID: Associates an operation with a specific task entry (as described above)

  2. Expiration Time: Tells the Database Expiry service when this entry may be deleted. This value is set to (current time) + (batch writer timeout) + (1 minute).

../../_images/op-queue-add.svg
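
The subscriber side of Add can be sketched as follows. Only the expiration calculation follows the rule described above; the type and function names are hypothetical:

    // Sketch of how a stored operation entry might be built: it is tagged with
    // the instance's task ID and expires at
    // (current time) + (batch writer timeout) + (1 minute).
    package opqueue

    import "time"

    type storedOperation struct {
        Operation  []byte    // serialized Sidetree operation
        TaskID     string    // associates the operation with this instance's task entry
        ExpiryTime time.Time // tells the Database Expiry service when the entry may be deleted
    }

    func newStoredOperation(op []byte, taskID string, batchWriterTimeout time.Duration) *storedOperation {
        return &storedOperation{
            Operation:  op,
            TaskID:     taskID,
            ExpiryTime: time.Now().Add(batchWriterTimeout + time.Minute),
        }
    }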

Remove Operations

The sidetree-core-go library queries the Operation Queue to determine whether there are enough operations to cut a batch (according to the Sidetree protocol parameter MAX_OPERATION_COUNT) or whether the batch has timed out (according to startup parameter batch-writer-timeout). When the batch is cut, the sidetree-core-go library calls the Remove function on the Operation Queue to remove up to N operations from the queue. The Remove function is quasi-transactional: along with the operations, it returns an Ack function and a Nack function.
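
Conceptually, a caller of Remove uses the Ack and Nack functions as sketched below. The queue signature here is an assumption; it only illustrates the quasi-transactional contract:

    // Sketch of the quasi-transactional Remove contract: the caller receives the
    // operations plus Ack/Nack callbacks and invokes exactly one of them.
    package opqueue

    // removableQueue captures only the part of the queue used here; the
    // signatures are illustrative, not the actual sidetree-core-go ones.
    type removableQueue interface {
        Remove(num uint) (ops [][]byte, ack func(), nack func(error), err error)
    }

    func cutBatch(q removableQueue, batchSize uint, process func([][]byte) error) error {
        ops, ack, nack, err := q.Remove(batchSize)
        if err != nil {
            return err
        }

        if err := process(ops); err != nil {
            nack(err) // repost the operations so that they may be retried

            return err
        }

        ack() // delete the removed operations from the op-queue database

        return nil
    }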

Ack Function

When the sidetree-core-go library has successfully processed the operations, it calls the Ack function. The Ack function deletes the removed operations from the op-queue database.

Nack Function

If the sidetree-core-go library fails to process the operations, it calls the Nack function. The Nack function reposts the operations to the AMQP orb.operation queue so that they may be retried (potentially by another server instance), and the operations are deleted from the database. Each operation message is reposted with a delay to give the server a chance to recover from whatever caused processing to fail in the first place. The delay is calculated from the number of failed retries along with the startup parameters mq-redelivery-initial-interval, mq-redelivery-max-interval, and mq-redelivery-multiplier. The retries header on the message is incremented before the message is reposted. Once the maximum number of retries for an operation has been reached, the operation is discarded.

../../_images/op-queue-cut.svg
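
The redelivery delay grows with the number of failed attempts. A sketch of a typical exponential-backoff calculation is shown below; the parameters map to the startup parameters named above, but the exact formula used by Orb may differ:

    // Sketch of an exponential-backoff redelivery delay derived from
    // mq-redelivery-initial-interval, mq-redelivery-multiplier and
    // mq-redelivery-max-interval.
    package opqueue

    import "time"

    func redeliveryDelay(retries int, initial, max time.Duration, multiplier float64) time.Duration {
        delay := initial

        for i := 0; i < retries; i++ {
            delay = time.Duration(float64(delay) * multiplier)
            if delay >= max {
                return max
            }
        }

        return delay
    }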

Recovery

An Orb server may go down with pending operations in the queue. The Operation Queue Monitor Task is registered with the Task Manager on startup and periodically runs on one server instance in the domain. The period is specified by startup parameter op-queue-task-monitor-interval. This task monitors the operation queue tasks of other servers so that, if a server goes down, the operations associated with that server are reposted to the AMQP orb.operation queue.

Each Orb instance periodically (also using the period specified by op-queue-task-monitor-interval) updates the update time of its own task entry in the database in order to indicate to other servers that the instance is still alive.

When the monitor task runs, it queries the op-queue database for all task entries (excluding its own) and checks the update time of each entry. If the update time is older than the expiration configured with startup parameter op-queue-task-expiration, the server that owns the task is considered to be down. At this point, the op-queue database is queried for the operations associated with the task, and each operation is reposted to the queue. (As described in the section above, each operation message reposted to the orb.operation queue has a retries header which is incremented before the message is reposted. Once the maximum number of retries for an operation has been reached, the operation is discarded.) All operations associated with the task are then deleted from the database, and the task entry itself is also deleted. (The task entry is deleted because, when the dead server comes back online, it will generate a new task ID.)

../../_images/op-queue-recovery.svg
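
The monitor task can be sketched as follows, reusing the taskEntry and storedOperation sketches from earlier sections. All of the store and repost dependencies are hypothetical placeholders for Orb's internal APIs:

    // Sketch of the Operation Queue Monitor Task.
    package opqueue

    import "time"

    type monitor struct {
        myTaskID       string
        taskExpiration time.Duration // op-queue-task-expiration
        store          interface {
            GetTasks() ([]taskEntry, error)
            GetOperations(taskID string) ([]storedOperation, error)
            DeleteTask(taskID string) error // deletes the task entry and its operations
        }
        repost func(op storedOperation) error // republish to orb.operation (discarded if max retries exceeded)
    }

    func (m *monitor) run() error {
        tasks, err := m.store.GetTasks()
        if err != nil {
            return err
        }

        for _, t := range tasks {
            // Skip our own task and tasks whose owner is still alive.
            if t.ID == m.myTaskID || time.Since(t.UpdatedTime) < m.taskExpiration {
                continue
            }

            ops, err := m.store.GetOperations(t.ID)
            if err != nil {
                return err
            }

            for _, op := range ops {
                if err := m.repost(op); err != nil {
                    return err
                }
            }

            // Delete the operations and the task entry; the dead instance will
            // generate a new task ID when it comes back online.
            if err := m.store.DeleteTask(t.ID); err != nil {
                return err
            }
        }

        return nil
    }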

Anchor Writer

When a batch of operations is cut from the operation queue, the sidetree-core-go library creates a Sidetree anchor object and invokes WriteAnchor. The Orb implementation of WriteAnchor performs the following steps:

  1. Retrieves previous anchors for all DIDs in the batch

  2. Resolves the witnesses for the batch

  3. Creates an Anchor Linkset containing the operations

  4. Posts an Offer activity (containing the anchor linkset) to each of the witnesses

../../_images/write-anchor.svg
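
A sketch of the WriteAnchor flow is shown below. Every type and method in this sketch is a hypothetical placeholder used only to show the order of the steps; it is not Orb's actual API:

    // Sketch of the WriteAnchor steps.
    package anchorwriter

    // Operation is a placeholder for a batched Sidetree operation.
    type Operation struct{ Suffix string }

    // AnchorLinkset is a placeholder for the anchor linkset structure.
    type AnchorLinkset struct{}

    type anchorWriter struct {
        previousAnchors interface {
            Get(didSuffixes []string) (map[string]string, error)
        }
        witnesses interface {
            Resolve(batch []Operation) ([]string, error)
        }
        linksets interface {
            Create(anchor string, prev map[string]string, ops []Operation) (*AnchorLinkset, error)
        }
        outbox interface {
            PostOffer(linkset *AnchorLinkset, witness string) error
        }
    }

    func (w *anchorWriter) WriteAnchor(anchor string, ops []Operation) error {
        suffixes := make([]string, len(ops))
        for i, op := range ops {
            suffixes[i] = op.Suffix
        }

        // 1. Retrieve previous anchors for all DIDs in the batch.
        prev, err := w.previousAnchors.Get(suffixes)
        if err != nil {
            return err
        }

        // 2. Resolve the witnesses for the batch.
        witnessList, err := w.witnesses.Resolve(ops)
        if err != nil {
            return err
        }

        // 3. Create an Anchor Linkset containing the operations.
        linkset, err := w.linksets.Create(anchor, prev, ops)
        if err != nil {
            return err
        }

        // 4. Post an Offer activity (containing the anchor linkset) to each witness.
        for _, witness := range witnessList {
            if err := w.outbox.PostOffer(linkset, witness); err != nil {
                return err
            }
        }

        return nil
    }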

Witness Proof Handler

A witness accepts an Offer of an anchor linkset by posting an Accept activity to the inbox. The proof is extracted from the anchor linkset and the proof handler is invoked.

The current status of the anchor is retrieved from the anchor status database. If the status of the anchor is not yet complete, the proof from the Accept activity is added to the existing set of proofs. The witness policy is then evaluated to determine whether the anchor has a sufficient number of proofs. If the witness policy is not satisfied then nothing else is done; otherwise:

  1. The status of the anchor is marked as complete

  2. The anchor linkset is updated with all witness proofs

  3. The anchor linkset is published to the orb.anchor_linkset queue so that it may be processed by the Anchor Linkset Handler

../../_images/proof-handler.svg
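
The proof-handling logic can be sketched as follows. All of the store, policy and publisher dependencies are hypothetical placeholders, not Orb's actual types:

    // Sketch of the witness proof handling described above.
    package proofhandler

    type proofHandler struct {
        status interface {
            Get(anchorID string) (complete bool, proofs [][]byte, err error)
            AddProof(anchorID string, proof []byte) error
            MarkComplete(anchorID string) error
        }
        policy interface {
            Satisfied(proofs [][]byte) (bool, error)
        }
        attachProofs func(linkset []byte, proofs [][]byte) []byte // adds all witness proofs to the linkset
        publish      func(linkset []byte) error                   // publishes to orb.anchor_linkset
    }

    func (h *proofHandler) HandleProof(anchorID string, proof, linkset []byte) error {
        complete, proofs, err := h.status.Get(anchorID)
        if err != nil {
            return err
        }

        if complete {
            return nil // the anchor already has a sufficient number of proofs
        }

        if err := h.status.AddProof(anchorID, proof); err != nil {
            return err
        }

        allProofs := append(proofs, proof)

        ok, err := h.policy.Satisfied(allProofs)
        if err != nil || !ok {
            return err // witness policy not yet satisfied (or evaluation failed): nothing else is done
        }

        if err := h.status.MarkComplete(anchorID); err != nil {
            return err
        }

        // Update the anchor linkset with all witness proofs and publish it to the
        // orb.anchor_linkset queue for the Anchor Linkset Handler.
        return h.publish(h.attachProofs(linkset, allProofs))
    }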

Anchor Linkset Handler

On startup, the Anchor Linkset Handler subscribes to the orb.anchor_linkset queue in order to receive witnessed anchor linksets. Upon receiving an anchor linkset from the queue:

  1. The verifiable credential is extracted from the anchor linkset and saved to the verifiable credential database

  2. The anchor linkset is saved to the Content Addressable Storage (CAS)

  3. The anchor linkset is published to the orb.anchor queue so that it may be processed by the Observer

  4. A Create activity (containing the anchor linkset) is posted to all followers

../../_images/anchor-linkset-handler.svg
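
The steps above can be sketched as follows; the dependencies are hypothetical placeholders for Orb's internal components:

    // Sketch of the anchor linkset handling steps.
    package linksethandler

    type handler struct {
        extractVC     func(linkset []byte) ([]byte, error) // pulls the verifiable credential out of the linkset
        saveVC        func(vc []byte) error                // verifiable credential database
        saveToCAS     func(linkset []byte) (string, error) // Content Addressable Storage
        publishAnchor func(linkset []byte) error           // orb.anchor queue (consumed by the Observer)
        postCreate    func(linkset []byte) error           // Create activity to all followers
    }

    func (h *handler) handle(linkset []byte) error {
        // 1. Extract the verifiable credential and save it.
        vc, err := h.extractVC(linkset)
        if err != nil {
            return err
        }

        if err := h.saveVC(vc); err != nil {
            return err
        }

        // 2. Save the anchor linkset to CAS.
        if _, err := h.saveToCAS(linkset); err != nil {
            return err
        }

        // 3. Publish the anchor linkset to the orb.anchor queue for the Observer.
        if err := h.publishAnchor(linkset); err != nil {
            return err
        }

        // 4. Post a Create activity (containing the anchor linkset) to all followers.
        return h.postCreate(linkset)
    }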