README
consistent
This library provides a consistent hashing function which simultaneously achieves both uniformity and consistency.
For detailed information about the concept, you should take a look at the following resources:
- Consistent Hashing with Bounded Loads on Google Research Blog
- Improving load balancing with a new consistent-hashing algorithm on Vimeo Engineering Blog
- Consistent Hashing with Bounded Loads paper on arXiv
Overview
In this package, keys are distributed among partitions, and partitions are distributed among members.
When you create a new consistent instance or call Add/Remove:
- The member's name is hashed and inserted into the hash ring,
- The average load is calculated with the algorithm defined in the paper,
- Partitions are distributed among members by hashing partition IDs, and no member exceeds the average load.
The average load cannot be exceeded, so if every member is already at maximum load when you try to add a new member, the call panics.
When you want to locate a key by calling LocateKey:
- The key (a byte slice) is hashed,
- The hash is reduced modulo the number of partitions: MOD(hash result, partition count),
- The result of this modulo is the partition in which the key is located,
- The owner of that partition was already determined when the members were added, so LocateKey returns the partition owner immediately.
Apart from the hashing itself, consistent allocates no memory when locating a key.
Note that the number of partitions cannot be changed after creation.
Notable Users
buraksezer/consistent is used in production by the following projects:
- buraksezer/olric: Embeddable, distributed data structures in Go.
- open-telemetry/opentelemetry-operator: Kubernetes Operator for OpenTelemetry Collector.
- megaease/easegress: A Cloud Native traffic orchestration system.
- chrislusf/seaweedfs: SeaweedFS is a distributed storage system for blobs, objects, files, and data warehouses, built to store and serve billions of files fast!
- erda-project/erda: An enterprise-grade Cloud-Native application platform for Kubernetes.
- celo-org/celo-blockchain: Global payments infrastructure built for mobile.
- koderover/zadig: Zadig is a cloud native, distributed, developer-oriented continuous delivery product.
- mason-leap-lab/infinicache: InfiniCache: A cost-effective memory cache that is built atop ephemeral serverless functions.
- opencord/voltha-lib-go: Voltha common library code.
- kubeedge/edgemesh: Simplified network and services for edge applications.
- authorizer-tech/access-controller: An implementation of a distributed access-control server based on Google Zanzibar, "Google's Consistent, Global Authorization System".
- Conflux-Chain/confura: An implementation of an Ethereum Infura-equivalent public RPC service on Conflux Network.
Install
With a correctly configured Go environment:
go get github.com/buraksezer/consistent
You will find some useful usage samples in the _examples folder.
Configuration
type Config struct {
	// Hasher is responsible for generating an unsigned, 64-bit hash of the
	// provided byte slice.
	Hasher Hasher

	// Keys are distributed among partitions. Prime numbers are good for
	// distributing keys uniformly. Select a big PartitionCount if you have
	// too many keys.
	PartitionCount int

	// Members are replicated on the consistent hash ring. This number
	// controls how many times each member is replicated on the ring.
	ReplicationFactor int

	// Load is used to calculate the average load. See the code, the paper
	// and Google's blog post to learn about it.
	Load float64
}
Any hash algorithm that implements the Hasher interface can be used as the hasher. Please take a look at the Examples section for a sample.
Usage
The LocateKey function finds a member in the cluster for your key:
// With a properly configured and initialized consistent instance
key := []byte("my-key")
member := c.LocateKey(key)
It returns a thread-safe copy of the member you added before.
The second most frequently used function is GetClosestN.
// With a properly configured and initialized consistent instance
key := []byte("my-key")
members, err := c.GetClosestN(key, 2)
This may be useful to find backup nodes to store your key.
Benchmarks
On an early 2015 MacBook:
BenchmarkAddRemove-4 100000 22006 ns/op
BenchmarkLocateKey-4 5000000 252 ns/op
BenchmarkGetClosestN-4 500000 2974 ns/op
Examples
The most basic use of the consistent package looks like this. For a detailed list of functions, visit godoc.org. More sample code can be found under _examples.
package main

import (
	"fmt"

	"github.com/buraksezer/consistent"
	"github.com/cespare/xxhash"
)

// In your code, you probably have a custom data type
// for your cluster members. Just add a String function to implement
// the consistent.Member interface.
type myMember string

func (m myMember) String() string {
	return string(m)
}

// The consistent package doesn't provide a default hashing function.
// You should provide a proper one to distribute keys/members uniformly.
type hasher struct{}

func (h hasher) Sum64(data []byte) uint64 {
	// You should use a proper hash function for uniformity.
	return xxhash.Sum64(data)
}

func main() {
	// Create a new consistent instance.
	cfg := consistent.Config{
		PartitionCount:    7,
		ReplicationFactor: 20,
		Load:              1.25,
		Hasher:            hasher{},
	}
	c := consistent.New(nil, cfg)

	// Add some members to the consistent hash table.
	// Add calculates the average load and distributes partitions over members.
	node1 := myMember("node1.olric.com")
	c.Add(node1)

	node2 := myMember("node2.olric.com")
	c.Add(node2)

	key := []byte("my-key")
	// LocateKey calculates the partition ID for the given key:
	// partID := hash(key) % partitionCount
	// The partitions are already distributed among members by Add.
	owner := c.LocateKey(key)
	fmt.Println(owner.String())
	// Prints node2.olric.com
}
Another useful example is _examples/relocation_percentage.go. It creates a consistent object with 8 members and distributes partitions among them, then adds a 9th member. Here is the result with a proper configuration and hash function:
bloom:consistent burak$ go run _examples/relocation_percentage.go
partID: 218 moved to node2.olric from node0.olric
partID: 173 moved to node9.olric from node3.olric
partID: 225 moved to node7.olric from node0.olric
partID: 85 moved to node9.olric from node7.olric
partID: 220 moved to node5.olric from node0.olric
partID: 33 moved to node9.olric from node5.olric
partID: 254 moved to node9.olric from node4.olric
partID: 71 moved to node9.olric from node3.olric
partID: 236 moved to node9.olric from node2.olric
partID: 118 moved to node9.olric from node3.olric
partID: 233 moved to node3.olric from node0.olric
partID: 50 moved to node9.olric from node4.olric
partID: 252 moved to node9.olric from node2.olric
partID: 121 moved to node9.olric from node2.olric
partID: 259 moved to node9.olric from node4.olric
partID: 92 moved to node9.olric from node7.olric
partID: 152 moved to node9.olric from node3.olric
partID: 105 moved to node9.olric from node2.olric
6% of the partitions are relocated
The moved partition count depends heavily on your configuration and the quality of the hash function. You should tune the configuration to find an optimal set of parameters for your system.
_examples/load_distribution.go is also useful for understanding load distribution. It creates a consistent object with 8 members and locates 1M keys. It also calculates the average load, which no member may exceed. Here is the result:
Maximum key count for a member should be around this: 147602
member: node2.olric, key count: 100362
member: node5.olric, key count: 99448
member: node0.olric, key count: 147735
member: node3.olric, key count: 103455
member: node6.olric, key count: 147069
member: node1.olric, key count: 121566
member: node4.olric, key count: 147932
member: node7.olric, key count: 132433
The average load can be calculated with the following formula:
load := (consistent.AverageLoad() * float64(keyCount)) / float64(config.PartitionCount)
Contributions
Please don't hesitate to fork the project and send a pull request, or just e-mail me to ask questions and share ideas.
License
MIT License - see LICENSE for more details.