raft alternatives and similar packages
Based on the "Distributed Systems" category.
Nomad
Nomad is an easy-to-use, flexible, and performant workload orchestrator that can deploy a mix of microservice, batch, containerized, and non-containerized applications. Nomad is easy to operate and scale and has native Consul and Vault integrations.
go-zero
DISCONTINUED. go-zero is a web and RPC framework written in Go. It is built to ensure the stability of busy services through resilient design. The built-in goctl tool greatly improves development productivity. [Moved to: https://github.com/zeromicro/go-zero]
rpcx
Best microservices framework in Go, like Alibaba Dubbo, but with more features; scales easily. Try it. Test it. If you feel it's better, use it! Java has Dubbo, Golang has rpcx! Built for the cloud!
Encore
Development platform for building robust, type-safe distributed systems with declarative infrastructure.
gleam
Fast, efficient, and scalable distributed map/reduce system with DAG execution, in memory or on disk, written in pure Go; runs standalone or distributed.
glow
Glow is an easy-to-use distributed computation system written in Go, similar to Hadoop MapReduce, Spark, Flink, Storm, etc. I am also working on another similar pure Go system, https://github.com/chrislusf/gleam, which is more flexible and more performant.
Olric
Distributed in-memory object store. It can be used as an embedded Go library and as a language-independent service.
Dragonfly
Dragonfly is an open source P2P-based file distribution and image acceleration system. It is hosted by the Cloud Native Computing Foundation (CNCF) as an Incubating Level Project.
go-doudou
go-doudou (doudou, pronounced /dəudəu/) is a lightweight microservice framework based on the OpenAPI 3.0 spec (for REST) and Protobuf v3 (for gRPC). It supports monolithic service applications as well.
resgate
A Realtime API Gateway used with NATS to build REST, realtime, and RPC APIs, where all your clients are synchronized seamlessly.
go-sundheit
A library for defining service health for Go services. It lets you register async health checks for your dependencies and the service itself, and provides a health endpoint that exposes their status, along with health metrics.
Maestro
Take control of your data, connect with anything, and expose it anywhere through protocols such as HTTP, GraphQL, and gRPC.
celeriac
Golang client library for interacting with and monitoring Celery workers, tasks, and events.
drmaa
Compute cluster (HPC) job submission library for Go, based on the open DRMAA standard.
README
raft
raft is a Go library that manages a replicated log and can be used with an FSM to manage replicated state machines. It is a library for providing consensus.
The use cases for such a library are far-reaching, such as replicated state machines which are a key component of many distributed systems. They enable building Consistent, Partition Tolerant (CP) systems, with limited fault tolerance as well.
Building
If you wish to build raft, you'll need Go version 1.2+ installed.
Please check your installation with:

```
go version
```
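To pull the library into your own project, the usual go get works:

```
go get github.com/hashicorp/raft
```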
Documentation
For complete documentation, see the associated Godoc.
To prevent complications with cgo, the primary backend MDBStore is in a separate repository, called raft-mdb. That is the recommended implementation for the LogStore and StableStore.

A pure Go backend using Bbolt is also available, called raft-boltdb. It can also be used as a LogStore and StableStore.
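As a rough sketch of how these pieces fit together (the data directory, bind address, and node ID below are placeholders; consult the Godoc for the exact constructor signatures), a node backed by raft-boltdb might be wired up like this:

```go
package raftexample

import (
	"net"
	"os"
	"path/filepath"
	"time"

	"github.com/hashicorp/raft"
	raftboltdb "github.com/hashicorp/raft-boltdb"
)

// newRaftNode assembles the stores, snapshot store, and transport
// that raft.NewRaft expects. Paths and addresses are illustrative.
func newRaftNode(dir, bindAddr string, fsm raft.FSM) (*raft.Raft, error) {
	conf := raft.DefaultConfig()
	conf.LocalID = raft.ServerID(bindAddr) // placeholder ID scheme

	// raft-boltdb serves as both the LogStore and the StableStore.
	store, err := raftboltdb.NewBoltStore(filepath.Join(dir, "raft.db"))
	if err != nil {
		return nil, err
	}

	// File-based snapshot store, retaining the two most recent snapshots.
	snaps, err := raft.NewFileSnapshotStore(dir, 2, os.Stderr)
	if err != nil {
		return nil, err
	}

	addr, err := net.ResolveTCPAddr("tcp", bindAddr)
	if err != nil {
		return nil, err
	}
	trans, err := raft.NewTCPTransport(bindAddr, addr, 3, 10*time.Second, os.Stderr)
	if err != nil {
		return nil, err
	}

	return raft.NewRaft(conf, fsm, store, store, snaps, trans)
}
```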
Community Contributed Examples
Raft gRPC Example - Utilizing the Raft repository with gRPC
Tagged Releases
As of September 2017, HashiCorp uses tags for this library to clearly indicate major version updates. We recommend you vendor your application's dependency on this library.
v0.1.0 is the original stable version of the library that was in main and has been maintained with no breaking API changes. This was in use by Consul prior to version 0.7.0.
v1.0.0 takes the changes that were staged in the library-v2-stage-one branch. This version manages server identities using a UUID, and so introduces some breaking API changes. It also versions the Raft protocol and requires some special steps when interoperating with Raft servers running older versions of the library (see the detailed comment in config.go about version compatibility). You can reference https://github.com/hashicorp/consul/pull/2222 for an idea of what was required to port Consul to these new interfaces.
This version also includes some new features, including non-voting servers, a new address provider abstraction in the transport layer, and more resilient snapshots.
Protocol
raft is based on the paper "Raft: In Search of an Understandable Consensus Algorithm".
A high level overview of the Raft protocol is described below, but for details please read the full Raft paper followed by the raft source. Any questions about the raft protocol should be sent to the raft-dev mailing list.
Protocol Description
Raft nodes are always in one of three states: follower, candidate or leader. All nodes initially start out as a follower. In this state, nodes can accept log entries from a leader and cast votes. If no entries are received for some time, nodes self-promote to the candidate state. In the candidate state nodes request votes from their peers. If a candidate receives a quorum of votes, then it is promoted to a leader. The leader must accept new log entries and replicate to all the other followers. In addition, if stale reads are not acceptable, all queries must also be performed on the leader.
Once a cluster has a leader, it is able to accept new log entries. A client can request that a leader append a new log entry, which is an opaque binary blob to Raft. The leader then writes the entry to durable storage and attempts to replicate to a quorum of followers. Once the log entry is considered committed, it can be applied to a finite state machine. The finite state machine is application specific, and is implemented using an interface.
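As a minimal sketch against this library's raft.FSM interface (the kvFSM type and its "key=value" command encoding are invented for illustration; the Snapshot and Restore methods that complete the interface are sketched after the next paragraph):

```go
package raftexample

import (
	"strings"
	"sync"
	"time"

	"github.com/hashicorp/raft"
)

// kvFSM is an illustrative application FSM: a replicated key/value map.
type kvFSM struct {
	mu   sync.Mutex
	data map[string]string
}

// Apply is invoked on every node once a log entry is committed. The
// entry's Data is an opaque blob to Raft; the "key=value" encoding
// here is our own convention.
func (f *kvFSM) Apply(l *raft.Log) interface{} {
	f.mu.Lock()
	defer f.mu.Unlock()
	if kv := strings.SplitN(string(l.Data), "=", 2); len(kv) == 2 {
		f.data[kv[0]] = kv[1]
	}
	return nil
}

// proposeKV asks the leader to append an entry. The returned future
// resolves once the entry is committed, or fails (for example, if
// this node is not the leader or quorum is lost).
func proposeKV(r *raft.Raft, key, value string) error {
	return r.Apply([]byte(key+"="+value), 5*time.Second).Error()
}
```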
An obvious question relates to the unbounded nature of a replicated log. Raft provides a mechanism by which the current state is snapshotted, and the log is compacted. Because of the FSM abstraction, restoring the state of the FSM must result in the same state as a replay of old logs. This allows Raft to capture the FSM state at a point in time, and then remove all the logs that were used to reach that state. This is performed automatically without user intervention, and prevents unbounded disk usage as well as minimizing time spent replaying logs.
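Continuing the sketch above (the JSON encoding is an arbitrary illustrative choice, and the encoding/json and io imports are assumed in addition to the previous ones):

```go
// kvSnapshot captures a point-in-time copy of the key/value map so
// Raft can truncate the log entries that were replayed to reach it.
type kvSnapshot struct {
	data map[string]string
}

// Persist streams the captured state into the snapshot sink.
func (s *kvSnapshot) Persist(sink raft.SnapshotSink) error {
	if err := json.NewEncoder(sink).Encode(s.data); err != nil {
		sink.Cancel()
		return err
	}
	return sink.Close()
}

// Release is called when Raft is finished with the snapshot.
func (s *kvSnapshot) Release() {}

// Snapshot clones the map under the lock, so Persist can run in the
// background without blocking new Apply calls.
func (f *kvFSM) Snapshot() (raft.FSMSnapshot, error) {
	f.mu.Lock()
	defer f.mu.Unlock()
	clone := make(map[string]string, len(f.data))
	for k, v := range f.data {
		clone[k] = v
	}
	return &kvSnapshot{data: clone}, nil
}

// Restore discards the current state and reloads it from a snapshot
// stream; afterward the FSM must match the snapshotted state exactly.
func (f *kvFSM) Restore(rc io.ReadCloser) error {
	defer rc.Close()
	m := map[string]string{}
	if err := json.NewDecoder(rc).Decode(&m); err != nil {
		return err
	}
	f.mu.Lock()
	defer f.mu.Unlock()
	f.data = m
	return nil
}
```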
Lastly, there is the issue of updating the peer set when new servers join or existing servers leave. As long as a quorum of nodes is available this is not an issue, as Raft provides mechanisms to dynamically update the peer set. If a quorum of nodes is unavailable, this becomes a very challenging problem. For example, suppose there are only 2 peers: A and B. The quorum size is also 2, meaning both nodes must agree to commit a log entry. If either A or B fails, it is now impossible to reach quorum. This means the cluster is unable to add or remove a node, or to commit any additional log entries, resulting in unavailability. At this point, manual intervention is required to remove either A or B and to restart the remaining node in bootstrap mode.
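When a quorum is available, membership changes are ordinary calls on the leader. A hedged sketch, with placeholder IDs and addresses:

```go
// addNode asks the leader to add a new voting member. A prevIndex of
// zero skips the optimistic-concurrency check on the membership change.
func addNode(r *raft.Raft, id, addr string) error {
	return r.AddVoter(raft.ServerID(id), raft.ServerAddress(addr), 0, 10*time.Second).Error()
}

// removeNode asks the leader to drop a member from the peer set.
func removeNode(r *raft.Raft, id string) error {
	return r.RemoveServer(raft.ServerID(id), 0, 10*time.Second).Error()
}
```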
A Raft cluster of 3 nodes can tolerate a single node failure, while a cluster of 5 can tolerate 2 node failures. The recommended configuration is to either run 3 or 5 raft servers. This maximizes availability without greatly sacrificing performance.
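Since quorum is a simple majority, floor(N/2) + 1, an even-sized cluster tolerates no more failures than the next smaller odd-sized one:

```go
// quorum returns the votes needed to commit in an n-voter cluster;
// the cluster tolerates n - quorum(n) failed nodes:
//
//	n=1 -> quorum 1, tolerates 0
//	n=3 -> quorum 2, tolerates 1
//	n=5 -> quorum 3, tolerates 2
//	n=7 -> quorum 4, tolerates 3
func quorum(n int) int { return n/2 + 1 }
```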
In terms of performance, Raft is comparable to Paxos. Assuming stable leadership, committing a log entry requires a single round trip to half of the cluster. Thus performance is bound by disk I/O and network latency.