bucket alternatives and similar packages
Based on the "Database" category.
- vitess: provides servers and tools which facilitate scaling of MySQL databases for large scale web services.
- groupcache: a caching and cache-filling library, intended as a replacement for memcached in many cases.
- TinyGo: Go compiler for small places. Microcontrollers, WebAssembly, and command-line tools. Based on LLVM.
- go-cache: an in-memory key:value store/cache (similar to Memcached) library for Go, suitable for single-machine applications.
- VictoriaMetrics: fast, resource-effective and scalable open source time series database. May be used as long-term remote storage for Prometheus. Supports PromQL.
- buntdb: a fast, embeddable, in-memory key/value database for Go with custom indexing and spatial support.
- xo: generates idiomatic Go code for databases based on existing schema definitions or custom queries, supporting PostgreSQL, MySQL, SQLite, Oracle, and Microsoft SQL Server.
- sql-migrate: database migration tool. Allows embedding migrations into the application using go-bindata.
- immudb: a lightweight, high-speed immutable database for systems and applications written in Go.
- nutsdb: a simple, fast, embeddable, persistent key/value store written in pure Go. It supports fully serializable transactions and many data structures such as list, set, sorted set.
- skeema: pure-SQL schema management system for MySQL, with support for sharding and external online schema change tools.
- Bitcask: an embeddable, persistent and fast key-value (KV) database written in pure Go with predictable read/write performance, low latency and high throughput thanks to the bitcask on-disk layout (LSM+WAL).
README
DEPRECATED, WARNING: The Couchbase Group introduced collections. The main feature of this repository was to provide collection-like usage within a single bucket. Since collections are already implemented as a feature in Couchbase v2, this library is no longer necessary, and this repository is no longer maintained.
Bucket
Simple Couchbase framework.
The project specifically focuses on the one-bucket-as-database approach and makes it easier to manage complex data sets. It gets rid of embedded JSON objects inside each document and separates them into different documents behind the scenes.
Disclaimer:
This is still a work in progress. We take no responsibility for code that breaks after a new version is released.
Features:
- Automatic index generator with indexable tags.
- Simple usage through the handler.
- Follows Couchbase usage best practices behind the scenes, transparently to the user of the library.
Rules:
- Only structs can be referenced (see the sketch below).
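A minimal illustration of the rule, using the cb_referenced tag that is described in the Additional section below (the type and field names here are made up for the example):

type parent struct {
	// Valid: the field is a pointer to a struct, so it can be separated
	// into its own document behind the scenes.
	Child *child `json:"child" cb_referenced:"child"`
	// A plain string is not a struct, so it cannot be referenced and stays
	// embedded in the parent document.
	Note string `json:"note"`
}

type child struct {
	Name string `json:"name"`
}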
How to use:
Create a new handler with the New function, which requires a configuration:
bucket.New({bucket.Configuration})
type Configuration struct {
	// The address of the Couchbase server
	ConnectionString string
	// Username and password to access Couchbase
	Username         string
	Password         string
	// The name and password of the bucket you want to use
	BucketName       string
	BucketPassword   string
	// The separator of your choice; it separates the prefix from the field name
	Separator        string
}
After that you can use the Insert, Get, Remove, Upsert, Touch, GetAndTouch and Ping methods of the handler.
package main

import (
	"context"
	"fmt"

	"github.com/PumpkinSeed/bucket"
	"github.com/couchbase/gocb"
)

var conf = &bucket.Configuration{
	ConnectionString: "myServer.org:1234",
	Username:         "cbUsername",
	Password:         "cbPassword",
	BucketName:       "testBucket",
	BucketPassword:   "testBucketPassword",
	Separator:        "::",
}

type myStruct struct {
	// The field is exported so the json tag takes effect when the document is stored.
	JustAField string `json:"just_a_field"`
}

func main() {
	var in = &myStruct{
		JustAField: "basic",
	}
	var cas bucket.Cas
	var typ = "prefix"
	ctx := context.Background()

	h, err := bucket.New(conf)
	if err != nil {
		// handle error
	}

	// insert
	cas, id, err := h.Insert(ctx, typ, "myID", in, 0)
	if err != nil {
		// handle error
	}
	_ = cas // the returned CAS value is not used further in this example

	// get
	var out = &myStruct{}
	err = h.Get(ctx, typ, id, out)
	if err != nil {
		// handle error
	}

	// touch
	err = h.Touch(ctx, typ, id, in, 0)
	if err != nil {
		// handle error
	}

	// get and touch
	var secondOut = &myStruct{}
	err = h.GetAndTouch(ctx, typ, id, secondOut, 0)
	if err != nil {
		// handle error
	}

	// ping
	var services []gocb.ServiceType
	res, err := h.Ping(ctx, services)
	if err != nil {
		// handle error
	}
	fmt.Println(res)

	// upsert
	in.JustAField = "updated"
	cas, newID, err := h.Upsert(ctx, typ, id, in, 0)
	if err != nil {
		// handle error
	}

	// remove
	err = h.Remove(ctx, typ, newID, in)
	if err != nil {
		// handle error
	}
}
Important:
- The typ parameter will be the prefix of the stored struct, so use the same value for the same type (see the sketch after this list).
- IDs should be unique; if the ID parameter is an empty string (""), a globally unique ID is generated automatically.
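A small convention sketch, not part of the bucket API (the constant names are hypothetical): keeping the typ values in one place makes it easier to honor the same-value-for-the-same-type rule.

const (
	// Hypothetical typ values; define one per stored struct type and reuse
	// it in every Insert/Get/Upsert/Remove call for that type, e.g.
	// h.Insert(ctx, typUser, "", &user{...}, 0).
	typUser  = "user"
	typOrder = "order"
)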
Additional:
Embedded structs can be separated into a new entry with the cb_referenced tag. The tag value determines the typ of the referenced struct.
type example struct {
	// Fields are exported so the json and cb_referenced tags take effect.
	RefMe    *refMe    `json:"ref_me" cb_referenced:"separate_entry"`
	IgnoreMe *ignoreMe `json:"ignore_me"`
}

type refMe struct {
	ReferencedField int `json:"referenced_field"`
}

type ignoreMe struct {
	NotReferencedField int `json:"not_referenced_field"`
}
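A hedged usage sketch, reusing the handler h and context ctx from the earlier example (the typ value "example" and the field values are arbitrary):

val := &example{
	RefMe:    &refMe{ReferencedField: 42},
	IgnoreMe: &ignoreMe{NotReferencedField: 7},
}

// RefMe is expected to be stored as its own document, with its typ taken from
// the cb_referenced tag value ("separate_entry"), while IgnoreMe stays
// embedded in the parent document.
_, exampleID, err := h.Insert(ctx, "example", "", val, 0)
if err != nil {
	// handle error
}

// Get is expected to reassemble the referenced parts behind the scenes.
restored := &example{}
err = h.Get(ctx, "example", exampleID, restored)
if err != nil {
	// handle error
}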
You can also index struct fields by adding the cb_indexable:"true" tag to the field and then calling bucket.Index({context.Context}, yourStruct).
type example struct {
	IndexMe       string `json:"index_me" cb_indexable:"true"`
	ButNotThisOne string `json:"but_not_this_one"`
}