Confluent Kafka Golang Client alternatives and similar packages
Based on the "Messaging" category.
Alternatively, view Confluent Kafka Golang Client alternatives based on common mentions on social networks and blogs.
- gorush: A push notification server using APNs2 and Google GCM.
- Centrifugo: Real-time messaging (Websockets or SockJS) server in Go.
- machinery: An asynchronous task queue/job queue based on distributed message passing.
- NATS Go Client: A lightweight and high-performance publish-subscribe and distributed queueing messaging system.
- NATS: A lightweight and highly performant publish-subscribe and distributed queueing messaging system.
- gopush-cluster: A Go push server cluster.
- Benthos: A message streaming bridge between a range of protocols.
- Mercure: Server and library to dispatch server-sent updates using the Mercure protocol (built on top of Server-Sent Events).
- Uniqush-Push: A Redis-backed unified push service for server-side notifications to mobile devices.
- zmq4: A Go interface to ZeroMQ version 4. Also available for version 3 and version 2.
- Gollum: An n:m multiplexer that gathers messages from different sources and broadcasts them to a set of destinations.
- EventBus: A lightweight event bus with async compatibility.
- Asynq: A simple, reliable, and efficient distributed task queue for Go built on top of Redis.
- mangos: Pure Go implementation of Nanomsg ("Scalable Protocols") with transport interoperability.
- emitter: Emits events the Go way, with wildcards, predicates, cancellation possibilities and many other good wins.
- oplog: A generic oplog/replication system for REST APIs.
- messagebus: A simple async message bus for Go, well suited as an event bus when doing event sourcing, CQRS, or DDD.
- guble: A messaging server using push notifications (Google Firebase Cloud Messaging, Apple Push Notification services, SMS) as well as websockets and a REST API, featuring distributed operation and message persistence.
- Bus: Minimalist message bus implementation for internal communication.
- rabbus: A tiny wrapper over AMQP exchanges and queues.
- nsq-event-bus: A tiny wrapper around NSQ topics and channels.
- drone-line: Sending Line notifications using a binary, Docker, or Drone CI.
- grabbit: A lightweight transactional message bus on top of RabbitMQ.
- go-mq: RabbitMQ client with declarative configuration.
- RapidMQ: A lightweight and reliable library for managing a local message queue.
- go-events: Simple Node.js-style EventEmitter for Go.
- redisqueue: Provides a producer and consumer of a queue that uses Redis streams.
- go-notify: Native implementation of the freedesktop notification spec.
- go-res: Package for building REST/real-time services where clients are synchronized seamlessly, using NATS and Resgate.
- Commander: A high-level event-driven consumer/producer supporting various "dialects" such as Apache Kafka.
- structured pubsub: Publish and subscribe functionality within a single process in Go.
- go-vitotrol: A client library for the Viessmann Vitotrol web service.
- ami: Go client for reliable queues based on Redis Cluster Streams.
- jazz: A simple RabbitMQ abstraction layer for queue administration and publishing and consuming of messages.
- hare: A user-friendly library for sending messages and listening to TCP sockets.
- gosd: A library for scheduling when to dispatch a message to a channel.
- rmqconn: RabbitMQ reconnection. A wrapper over amqp.Connection and amqp.Dial that re-establishes the connection when it is broken, until Close() is explicitly called.
README
Confluent's Golang Client for Apache Kafka™
confluent-kafka-go is Confluent's Golang client for Apache Kafka and the Confluent Platform.
Features:

- High performance - confluent-kafka-go is a lightweight wrapper around librdkafka, a finely tuned C client.
- Reliability - There are a lot of details to get right when writing an Apache Kafka client. We get them right in one place (librdkafka) and leverage this work across all of our clients (also confluent-kafka-python and confluent-kafka-dotnet).
- Supported - Commercial support is offered by Confluent.
- Future proof - Confluent, founded by the creators of Kafka, is building a streaming platform with Apache Kafka at its core. It's a high priority for us that client features keep pace with core Apache Kafka and components of the Confluent Platform.
The Golang bindings provide a high-level Producer and Consumer with support for the balanced consumer groups of Apache Kafka 0.9 and above.
See the API documentation for more information.
License: Apache License v2.0
Examples
High-level balanced consumer
```go
package main

import (
	"fmt"

	"gopkg.in/confluentinc/confluent-kafka-go.v1/kafka"
)

func main() {
	c, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers": "localhost",
		"group.id":          "myGroup",
		"auto.offset.reset": "earliest",
	})
	if err != nil {
		panic(err)
	}

	c.SubscribeTopics([]string{"myTopic", "^aRegex.*[Tt]opic"}, nil)

	for {
		msg, err := c.ReadMessage(-1)
		if err == nil {
			fmt.Printf("Message on %s: %s\n", msg.TopicPartition, string(msg.Value))
		} else {
			// The client will automatically try to recover from all errors.
			fmt.Printf("Consumer error: %v (%v)\n", err, msg)
		}
	}

	c.Close()
}
```
Producer
```go
package main

import (
	"fmt"

	"gopkg.in/confluentinc/confluent-kafka-go.v1/kafka"
)

func main() {
	p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": "localhost"})
	if err != nil {
		panic(err)
	}
	defer p.Close()

	// Delivery report handler for produced messages
	go func() {
		for e := range p.Events() {
			switch ev := e.(type) {
			case *kafka.Message:
				if ev.TopicPartition.Error != nil {
					fmt.Printf("Delivery failed: %v\n", ev.TopicPartition)
				} else {
					fmt.Printf("Delivered message to %v\n", ev.TopicPartition)
				}
			}
		}
	}()

	// Produce messages to topic (asynchronously)
	topic := "myTopic"
	for _, word := range []string{"Welcome", "to", "the", "Confluent", "Kafka", "Golang", "client"} {
		p.Produce(&kafka.Message{
			TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
			Value:          []byte(word),
		}, nil)
	}

	// Wait for message deliveries before shutting down
	p.Flush(15 * 1000)
}
```
More elaborate examples are available in the [examples](examples) directory, including [how to configure](examples/confluent_cloud_example) the Go client for use with Confluent Cloud.
Getting Started
Supports Go 1.11+ and librdkafka 1.4.0+.
Using Go Modules
Starting with Go 1.13, you can use Go Modules to install confluent-kafka-go.
Import the kafka package from GitHub in your code:

```go
import "github.com/confluentinc/confluent-kafka-go/kafka"
```

Build your project:

```
go build ./...
```

If you are building for Alpine Linux (musl), -tags musl must be specified:

```
go build -tags musl ./...
```

A dependency on the latest stable version of confluent-kafka-go should be automatically added to your go.mod file.
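For reference, the resulting go.mod might end up looking roughly like the sketch below; the module path and pinned version are illustrative, not prescriptive.

```
module example.com/myapp

go 1.13

require github.com/confluentinc/confluent-kafka-go v1.4.0
```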
Install the client
If Go modules can't be used, we recommend that you version-pin the confluent-kafka-go import to v1 using gopkg.in:

Manual install:

```
go get -u gopkg.in/confluentinc/confluent-kafka-go.v1/kafka
```

Golang import:

```go
import "gopkg.in/confluentinc/confluent-kafka-go.v1/kafka"
```
librdkafka
Prebuilt librdkafka binaries are included with the Go client and librdkafka does not need to be installed separately on the build or target system. The following platforms are supported by the prebuilt librdkafka binaries:
- Mac OSX x64
- glibc-based Linux x64 (e.g., RedHat, Debian, CentOS, Ubuntu) - without GSSAPI/Kerberos support
- musl-based Linux x64 (Alpine) - without GSSAPI/Kerberos support
When building your application for Alpine Linux (musl libc) you must pass -tags musl to go get, go build, etc.

CGO_ENABLED must NOT be set to 0, since the Go client is based on the C library librdkafka.

If GSSAPI/Kerberos authentication support is required, you will need to install librdkafka separately (see the Installing librdkafka chapter below) and then build your Go application with -tags dynamic.
Installing librdkafka
If the bundled librdkafka build is not supported on your platform, or you need a librdkafka with GSSAPI/Kerberos support, you must install librdkafka manually on the build and target system using one of the following alternatives:
- For Debian and Ubuntu based distros, install librdkafka-dev from the standard repositories or using Confluent's Deb repository.
- For Redhat based distros, install librdkafka-devel using Confluent's YUM repository.
- For MacOS X, install librdkafka from Homebrew. You may also need pkg-config if you don't already have it: brew install librdkafka pkg-config.
- For Alpine: apk add librdkafka-dev pkgconf
- confluent-kafka-go is not supported on Windows.
- For source builds, see instructions below.
Build from source:

```
git clone https://github.com/edenhill/librdkafka.git
cd librdkafka
./configure
make
sudo make install
```
After installing librdkafka you will need to build your Go application with -tags dynamic.
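Combined with the build invocation shown earlier, that typically means something like:

```
go build -tags dynamic ./...
```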
Note: If you use the master branch of the Go client, then you need to use the master branch of librdkafka.
confluent-kafka-go requires librdkafka v1.4.0 or later.
API Strands
There are two main API strands: function and channel based.
Function Based Consumer
Messages, errors and events are polled through the consumer.Poll() function.
Pros:
- More direct mapping to underlying librdkafka functionality.
Cons:
- Makes it harder to read from multiple channels, but a go-routine easily solves that (see Cons in the channel-based consumer below about outdated events).
- Slower than the channel consumer.
See [examples/consumer_example](examples/consumer_example)
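As a rough illustration of the polling style (a minimal sketch, not taken from the examples directory; broker address, group id and topic name are placeholders), a Poll() loop might look like this:

```go
package main

import (
	"fmt"

	"gopkg.in/confluentinc/confluent-kafka-go.v1/kafka"
)

func main() {
	// Placeholder configuration, mirroring the consumer example above.
	c, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers": "localhost",
		"group.id":          "myGroup",
		"auto.offset.reset": "earliest",
	})
	if err != nil {
		panic(err)
	}
	defer c.Close()

	if err := c.SubscribeTopics([]string{"myTopic"}, nil); err != nil {
		panic(err)
	}

	run := true
	for run {
		// Poll returns one event (message, error, rebalance, ...) or nil on timeout.
		switch e := c.Poll(100).(type) {
		case *kafka.Message:
			fmt.Printf("Message on %s: %s\n", e.TopicPartition, string(e.Value))
		case kafka.Error:
			fmt.Printf("Consumer error: %v\n", e)
			run = false
		case nil:
			// Poll timed out with no event; keep polling.
		}
	}
}
```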
Channel Based Consumer (deprecated)
Deprecated: The channel based consumer is deprecated due to the channel issues mentioned below. Use the function based consumer.
Messages, errors and events are posted on the consumer.Events channel for the application to read.
Pros:
- Possibly more Golang:ish
- Makes reading from multiple channels easy
- Fast
Cons:
- Outdated events and messages may be consumed due to the buffering nature of channels. The extent is limited, but not remedied, by the Events channel buffer size (go.events.channel.size).
See [examples/consumer_channel_example](examples/consumer_channel_example)
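A minimal sketch of this deprecated channel-based style, assuming the go.events.channel.enable configuration property is turned on (configuration values are placeholders):

```go
package main

import (
	"fmt"

	"gopkg.in/confluentinc/confluent-kafka-go.v1/kafka"
)

func main() {
	// go.events.channel.enable switches the consumer to the (deprecated)
	// Events channel; go.events.channel.size bounds its buffer.
	c, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers":        "localhost",
		"group.id":                 "myGroup",
		"go.events.channel.enable": true,
		"go.events.channel.size":   1000,
	})
	if err != nil {
		panic(err)
	}
	defer c.Close()

	if err := c.SubscribeTopics([]string{"myTopic"}, nil); err != nil {
		panic(err)
	}

	// Messages, errors and events all arrive on the same channel.
	for ev := range c.Events() {
		switch e := ev.(type) {
		case *kafka.Message:
			fmt.Printf("Message on %s: %s\n", e.TopicPartition, string(e.Value))
		case kafka.Error:
			fmt.Printf("Consumer error: %v\n", e)
		}
	}
}
```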
Channel Based Producer
Application writes messages to the producer.ProduceChannel(). Delivery reports are emitted on the producer.Events or specified private channel.
Pros:
- Go:ish
- Proper channel backpressure if librdkafka internal queue is full.
Cons:
- Double queueing: messages are first queued in the channel (size is configurable) and then inside librdkafka.
See [examples/producer_channel_example](examples/producer_channel_example)
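A minimal sketch of producing via the ProduceChannel() channel (broker address and topic are placeholders; not taken from the examples directory):

```go
package main

import (
	"fmt"

	"gopkg.in/confluentinc/confluent-kafka-go.v1/kafka"
)

func main() {
	p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": "localhost"})
	if err != nil {
		panic(err)
	}
	defer p.Close()

	// Read delivery reports from the Events channel.
	go func() {
		for e := range p.Events() {
			if m, ok := e.(*kafka.Message); ok {
				if m.TopicPartition.Error != nil {
					fmt.Printf("Delivery failed: %v\n", m.TopicPartition)
				} else {
					fmt.Printf("Delivered message to %v\n", m.TopicPartition)
				}
			}
		}
	}()

	// Sending on the ProduceChannel provides the channel backpressure
	// described above when queues fill up.
	topic := "myTopic"
	p.ProduceChannel() <- &kafka.Message{
		TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
		Value:          []byte("hello"),
	}

	// Wait for outstanding deliveries before shutting down.
	p.Flush(15 * 1000)
}
```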
Function Based Producer
Application calls producer.Produce() to produce messages. Delivery reports are emitted on the producer.Events or specified private channel.
Pros:
- Go:ish
Cons:
- Produce() is a non-blocking call; if the internal librdkafka queue is full the call will fail.
- Somewhat slower than the channel producer.
See [examples/producer_example](examples/producer_example)
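A minimal sketch of Produce() used with a private delivery channel, as mentioned above (broker address and topic are placeholders):

```go
package main

import (
	"fmt"

	"gopkg.in/confluentinc/confluent-kafka-go.v1/kafka"
)

func main() {
	p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": "localhost"})
	if err != nil {
		panic(err)
	}
	defer p.Close()

	// A private channel that receives only this message's delivery report.
	deliveryChan := make(chan kafka.Event, 1)

	topic := "myTopic"
	err = p.Produce(&kafka.Message{
		TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
		Value:          []byte("hello"),
	}, deliveryChan)
	if err != nil {
		// Produce is non-blocking; it fails immediately if the internal queue is full.
		fmt.Printf("Produce failed: %v\n", err)
		return
	}

	// Wait for the delivery report on the private channel.
	m := (<-deliveryChan).(*kafka.Message)
	if m.TopicPartition.Error != nil {
		fmt.Printf("Delivery failed: %v\n", m.TopicPartition)
	} else {
		fmt.Printf("Delivered message to %v\n", m.TopicPartition)
	}
}
```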
Tests
See [kafka/README](kafka/README.md)
Contributing
Contributions to the code, examples, documentation, etc., are very much appreciated.
Make your changes, run gofmt, tests, etc, push your branch, create a PR, and sign the CLA.
*Note that all licence references and agreements mentioned in the Confluent Kafka Golang Client README section above
are relevant to that project's source code only.