This is a modular framework for implementing, deploying and testing a distributed ordering service. The main task of such a service is maintaining a totally ordered Log of client Requests. This implementation uses multiple instances of an ordering protocol and multiplexes their outputs into the final Log. The ordering protocol instances running on each peer are orchestrated by a Manager module that decides which instance is responsible for which part of the Log, when to execute a checkpoint protocol and which client requests are to be ordered by which ordering instance. The decisions of the Manager must be consistent across all peers.
The Log is a sequence of Entries. Each Entry has a sequence number (SN) defining its position in the Log, and contains a Batch of Requests. The Log is logically partitioned into Segments - parts of the Log attributed to a single instance of an ordering protocol. It is the Manager's task to create these Segments and to instantiate the ordering protocol for each created Segment.
The set of all possible client Requests is partitioned (based on their hashes) into subsets called Buckets. The Manager assigns a Bucket to each Segment it creates. The ordering protocol instance responsible for a Segment creates Batches only from Requests in the assigned Bucket. It is the Manager's task to create Segments and assign Buckets in a way that ensures no two concurrently ordered Segments are assigned the same Bucket. This is required to prevent request duplication.
The Manager observes the Log and creates new Segments as the Log fills up. When the Manager creates a new Segment, it triggers the Orderer that orders the Segment. Ordering a Segment means committing new Entries with the SNs of that Segment. Periodically, the Manager triggers the Checkpointer to create checkpoints of the Log. The Manager observes the created checkpoints and issues new Segments as the checkpoints advance, respecting the watermark window.
Create a GOPATH directory and make sure you are the owner of it:

```
sudo mkdir -p /opt/gopath/
sudo chown -R $user:$group /opt/gopath/
```

where `$user` and `$group` are your user and group, respectively.
Create a directory to clone the repository into:

```
mkdir -p /opt/gopath/src/github.com/hyperledger-labs/
```
Clone this repository under the directory you created:

```
cd /opt/gopath/src/github.com/hyperledger-labs/
git clone https://github.com/hyperledger-labs/mirbft.git
```

Check out the `research-iss` branch.
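For example, from inside the cloned repository:

```
cd mirbft
git checkout research-iss
```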
With `/opt/gopath/src/github.com/hyperledger-labs/mirbft` as the working directory, go to the deployment directory:

```
cd deployment
```
Configure the `user` and `group` in `vars.sh`.
To install Golang and requirements:

```
source scripts/install-local.sh
```
NOTE: The `install-local.sh` script, among other dependencies, installs Go in the home directory, sets GOPATH to `/opt/gopath/bin/`, and edits `~/.bashrc`. The default path to the repository is set to `/opt/gopath/src/github.com/hyperledger-labs/mirbft/`.
The `run-protoc.sh` script needs to be run from the project root directory (i.e., `mirbft`) before compiling the Go files.
IMPORTANT: Go modules are not supported. Disable them before installation:

```
export GO111MODULE=off
```
Compile and install the Go code by running `go install ./...` from the project root directory.
Detailed instructions can be found here.
**Batch**: An ordered sequence of client Requests. All Requests in a Batch must belong to the same Bucket. The Batch is defined in the `request` package.
**Bucket**: A subset of all possible client Requests. Each Request maps to exactly one Bucket (the mapping is based on the Request's hash). The Manager assigns one Bucket to each Segment, and the Orderer of the Segment uses only Requests from the assigned Bucket to propose new Batches. The Bucket is defined in the `request` package.
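A minimal sketch of such a hash-based mapping, assuming SHA-256 and a simple modulo reduction (the function name and hashing scheme are illustrative, not the actual implementation in the `request` package):

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// bucketOf deterministically maps a request payload to one of numBuckets
// Buckets by hashing it and reducing the digest modulo the Bucket count.
// (Illustrative only; the actual mapping lives in the request package.)
func bucketOf(request []byte, numBuckets int) int {
	h := sha256.Sum256(request)
	// Interpret the first 8 bytes of the digest as an unsigned integer.
	return int(binary.BigEndian.Uint64(h[:8]) % uint64(numBuckets))
}

func main() {
	// The same Request always lands in the same Bucket.
	fmt.Println(bucketOf([]byte("some client request"), 16))
}
```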
**Checkpointer**: Module responsible for creating checkpoints of the Log. The Checkpointer listens to the Manager, which notifies the Checkpointer about each SN at which a checkpoint should occur. The Checkpointer triggers a separate instance of the checkpointing protocol for each such SN. When a checkpoint is stable, the Checkpointer submits it to the Log. Defined in the `checkpointer` package.
**Entry**: One element of the Log. It contains a sequence number (SN) defining its position in the Log and a Batch of Requests. Defined in the `log` package.
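A minimal sketch of this structure in Go, with placeholder types (the actual definitions live in the `log` and `request` packages):

```go
package main

import "fmt"

// Request is opaque client data (placeholder type for illustration).
type Request []byte

// Batch is an ordered sequence of Requests from a single Bucket.
type Batch struct {
	Requests []Request
}

// Entry occupies one position in the Log: the sequence number (SN)
// and the Batch of Requests agreed upon for that SN.
type Entry struct {
	SN    int32
	Batch Batch
}

func main() {
	e := Entry{SN: 42, Batch: Batch{Requests: []Request{Request("req")}}}
	fmt.Printf("entry at SN %d holds %d request(s)\n", e.SN, len(e.Batch.Requests))
}
```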
**Log**: A sequence of Entries replicated by the peers. The `log` package implements this abstraction and all related functionality.
**Manager**: Module orchestrating all components of the ordering service implementation. The Manager observes the Log, issues Segments, and triggers the Checkpointer. It maintains a watermark window into which all issued Segments must fall. The decisions of the Manager must be consistent across all peers. Defined in the `manager` package.
**Orderer**: Module implementing the actual ordering of Batches, i.e., committing new Entries to the Log. The Orderer listens to the Manager for new Segments. Whenever the Manager issues a new Segment, the Orderer creates a new instance of the ordering protocol, which proposes and agrees on one Request Batch for each SN in the Segment. When a Batch has been agreed upon for a particular SN, the Orderer commits the (SN, Batch) pair as an Entry to the Log. Defined in the `orderer` package.
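A self-contained sketch of this commit step, using a plain map as a hypothetical stand-in for the replicated Log:

```go
package main

import "fmt"

// Batch and Entry mirror the placeholder types sketched above.
type Batch struct{ Requests [][]byte }

type Entry struct {
	SN    int32
	Batch Batch
}

// Log is an illustrative stand-in for the replicated Log:
// committed Entries indexed by their SN.
type Log map[int32]Entry

// commit records the Batch agreed upon for a particular SN as an Entry,
// as the Orderer does once its protocol instance has decided that SN.
// (Hypothetical; the real commit path is in the orderer and log packages.)
func (l Log) commit(sn int32, b Batch) {
	l[sn] = Entry{SN: sn, Batch: b}
}

func main() {
	l := Log{}
	l.commit(7, Batch{Requests: [][]byte{[]byte("req")}})
	fmt.Println(len(l)) // 1
}
```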
**Request**: Opaque client data. Each Request deterministically maps to a Bucket. Defined in the `request` package.
**Segment**: Part of the Log, i.e., a subset of (not necessarily contiguous) SNs, ordered independently by an Orderer. Segments are disjoint: no SN can appear in more than one Segment. The Segment data structure (defined in the `manager` package) completely describes an instance of the ordering protocol: the SNs it is responsible for, the sequence of leaders, the set of followers, the assigned Bucket, as well as information on when it is safe to start ordering it.
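The description above translates naturally into a struct; a hypothetical sketch (field names are assumptions, the actual type is defined in the `manager` package):

```go
package main

import "fmt"

// Segment describes one ordering protocol instance: the SNs it is
// responsible for, its leaders and followers, the assigned Bucket, and
// when it is safe to start ordering. (Hypothetical field names; the
// actual definition is in the manager package.)
type Segment struct {
	SNs         []int32  // not necessarily contiguous; disjoint across Segments
	Leaders     []string // sequence of leaders
	Followers   []string // set of followers
	Bucket      int      // the single Bucket this Segment may draw Requests from
	StartsAfter int32    // e.g., an SN whose checkpoint makes starting safe
}

func main() {
	s := Segment{SNs: []int32{4, 6, 8}, Bucket: 3}
	fmt.Printf("segment orders %d SNs from Bucket %d\n", len(s.SNs), s.Bucket)
}
```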
**SN (sequence number)**: 32-bit integer referencing a particular position in the Log.
**Watermark window**: A range of SNs for which Entries can be proposed. The watermark window starts at the last stable checkpoint and has a fixed length that is a system parameter.
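A small sketch of the corresponding membership check, with illustrative names (not the actual API):

```go
package main

import "fmt"

// inWatermarkWindow reports whether an Entry may be proposed for sn:
// the window starts at the last stable checkpoint and spans windowSize
// SNs, a system parameter. (Illustrative helper, not the actual API.)
func inWatermarkWindow(sn, lastStableCheckpoint, windowSize int32) bool {
	return sn >= lastStableCheckpoint && sn < lastStableCheckpoint+windowSize
}

func main() {
	// With a stable checkpoint at SN 100 and a window of 256 SNs,
	// SNs 100 through 355 may be proposed.
	fmt.Println(inWatermarkWindow(355, 100, 256)) // true
	fmt.Println(inWatermarkWindow(356, 100, 256)) // false
}
```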