goworker alternatives and similar packages
Based on the "Goroutines" category.
Alternatively, view goworker alternatives based on common mentions on social networks and blogs.
- ants: a high-performance and low-cost goroutine pool in Go
- pond: a minimalistic and high-performance goroutine worker pool written in Go
- pool: a limited consumer goroutine or unlimited goroutine pool for easier goroutine handling and cancellation
- Goflow: a simple way to control goroutine execution order based on dependencies
- artifex: a simple in-memory job queue for Golang using worker-based dispatching
- go-workers: a library for safely running groups of workers concurrently or consecutively that require input and output through channels
- async: a safe way to execute functions asynchronously, recovering them in case of panic; also provides an error stack to help discover failure causes
- gollback: simple asynchronous function utilities for Go, for managing execution of closures and callbacks
- semaphore: a semaphore pattern implementation with timeout of lock/unlock operations
- Hunch: provides functions like All, First, Retry, and Waterfall that make asynchronous flow control more intuitive
- go-do-work: dynamically resizable pools of goroutines which can queue an unbounded number of jobs
- goccm: limits the number of goroutines that are allowed to run concurrently
- gpool: a generic context-aware resizable goroutine pool to bound concurrency, based on a semaphore
- routine: goroutine control, an abstraction of Main and some useful Executors
- go-actor: a tiny library for writing concurrent programs in Go using the actor model
- gowl: a process management and process monitoring tool in one; an infinite worker pool gives you the ability to control the pool and processes and monitor their status
- kyoo: an unlimited job queue for Go, using a pool of concurrent workers processing the job queue entries
- go-waitgroup: a sync.WaitGroup with error handling and concurrency control
- channelify: make functions return a channel for parallel processing via goroutines
- go-tools/multithreading: a collection of tools for Golang
- execpool: a pool that spins up a given number of processes in advance and attaches stdin and stdout when needed; very similar to FastCGI but works for any command
- conexec: a concurrent toolkit to help execute funcs concurrently in an efficient and safe way; supports specifying an overall timeout to avoid blocking
- hands: a process controller used to control the execution and return strategies of multiple goroutines
- concurrency-limiter: a concurrency limiter with support for timeouts, dynamic priority, and context cancellation of goroutines
- queue: gives you queue group accessibility; helps you limit goroutines, wait for all goroutines to finish, and more
- async-job: an asynchronous queue job manager with light, clear, and fast code
- gowp (github.com/akshaybharambe14/gowp): a high-performance, type-safe, concurrency-limiting worker pool package for Go
README
goworker
goworker is a Resque-compatible, Go-based background worker. It allows you to push jobs into a queue using an expressive language like Ruby while harnessing the efficiency and concurrency of Go to minimize job latency and cost.
goworker workers can run alongside Ruby Resque clients so that you can keep all but your most resource-intensive jobs in Ruby.
Installation
To install goworker, use
go get github.com/benmanns/goworker
to install the package, and then from your worker
import "github.com/benmanns/goworker"
Getting Started
To create a worker, write a function matching the signature
func(string, ...interface{}) error
and register it using
goworker.Register("MyClass", myFunc)
Here is a simple worker that prints its arguments:
package main
import (
"fmt"
"github.com/benmanns/goworker"
)
func myFunc(queue string, args ...interface{}) error {
fmt.Printf("From %s, %v\n", queue, args)
return nil
}
func init() {
goworker.Register("MyClass", myFunc)
}
func main() {
if err := goworker.Work(); err != nil {
fmt.Println("Error:", err)
}
}
To create workers that share a database pool or other resources, use a closure to share variables.
package main
import (
"fmt"
"github.com/benmanns/goworker"
)
func newMyFunc(uri string) func(queue string, args ...interface{}) error {
foo := NewFoo(uri)
return func(queue string, args ...interface{}) error {
foo.Bar(args)
return nil
}
}
func init() {
goworker.Register("MyClass", newMyFunc("http://www.example.com/"))
}
func main() {
if err := goworker.Work(); err != nil {
fmt.Println("Error:", err)
}
}
Here is a simple worker with settings:
package main
import (
"fmt"
"github.com/benmanns/goworker"
)
func myFunc(queue string, args ...interface{}) error {
fmt.Printf("From %s, %v\n", queue, args)
return nil
}
func init() {
settings := goworker.WorkerSettings{
URI: "redis://localhost:6379/",
Connections: 100,
Queues: []string{"myqueue", "delimited", "queues"},
UseNumber: true,
ExitOnComplete: false,
Concurrency: 2,
Namespace: "resque:",
Interval: 5.0,
}
goworker.SetSettings(settings)
goworker.Register("MyClass", myFunc)
}
func main() {
if err := goworker.Work(); err != nil {
fmt.Println("Error:", err)
}
}
goworker worker functions receive the queue they are serving and a slice of interfaces. To use them as parameters to other functions, use Go type assertions to convert them into usable types.
// Expecting (int, string, float64); numbers arrive as json.Number
// because the UseNumber worker setting is enabled.
func myFunc(queue string, args ...interface{}) error {
idNum, ok := args[0].(json.Number)
if !ok {
return errorInvalidParam
}
id, err := idNum.Int64()
if err != nil {
return errorInvalidParam
}
name, ok := args[1].(string)
if !ok {
return errorInvalidParam
}
weightNum, ok := args[2].(json.Number)
if !ok {
return errorInvalidParam
}
weight, err := weightNum.Float64()
if err != nil {
return errorInvalidParam
}
doSomething(id, name, weight)
return nil
}
For testing, it is helpful to use the redis-cli program to insert jobs onto the Redis queue:
redis-cli -r 100 RPUSH resque:queue:myqueue '{"class":"MyClass","args":["hi","there"]}'
will insert 100 jobs for the MyClass worker onto the myqueue queue. It is equivalent to:
class MyClass
@queue = :myqueue
end
100.times do
Resque.enqueue MyClass, ['hi', 'there']
end
or
goworker.Enqueue(&goworker.Job{
Queue: "myqueue",
Payload: goworker.Payload{
Class: "MyClass",
Args: []interface{}{"hi", "there"},
},
})
Flags
There are several flags which control the operation of the goworker client.
-queues="comma,delimited,queues"
This is the only required flag. The recommended practice is to separate your Resque workers from your goworkers with different queues. Otherwise, Resque worker classes that have no goworker analog will cause the goworker process to fail the jobs. Because of this, there is no default queue, nor is there a way to select all queues (à la Resque's * queue). If you have multiple queues you can assign them weights; a queue with a weight of 2 will be checked twice as often as a queue with a weight of 1: -queues='high=2,low=1'.

-interval=5.0
Specifies the wait period between polling if no job was in the queue the last time one was requested.

-concurrency=25
Specifies the number of concurrently executing workers. This number can be as low as 1 or rather comfortably as high as 100,000, and should be tuned to your workflow and the availability of outside resources.

-connections=2
Specifies the maximum number of Redis connections that goworker will consume between the poller and all workers. There is not much performance gain over two, and a slight penalty when using only one. This is configurable in case you need to keep connection counts low for cloud Redis providers that limit plans on maxclients.

-uri=redis://localhost:6379/
Specifies the URI of the Redis database from which goworker polls for jobs. Accepts URIs of the format redis://user:pass@host:port/db or unix:///path/to/redis.sock. The flag may also be set by the environment variable $($REDIS_PROVIDER) or $REDIS_URL. E.g. set $REDIS_PROVIDER to REDISTOGO_URL on Heroku to let the Redis To Go add-on configure the Redis database.

-namespace=resque:
Specifies the namespace from which goworker retrieves jobs and stores stats on workers.

-exit-on-complete=false
Exits goworker when there are no jobs left in the queue. This is helpful in conjunction with the time command to benchmark different configurations.
You can also configure your own flags for use within your workers. Be sure to set them before calling goworker.Work(). It is okay to call flag.Parse() before calling goworker.Work() if you need to do additional processing on your flags.
Signal Handling in goworker
To stop goworker, send a QUIT, TERM, or INT signal to the process. This will immediately stop job polling. There can be up to $CONCURRENCY jobs currently running, which will continue to run until they are finished.
Failure Modes
Like Resque, goworker makes no guarantees about the safety of jobs in the event of process shutdown. Workers must be both idempotent and tolerant to loss of the job in the event of failure.
If the process is killed with a KILL signal or by a system failure, the job currently in the poller's buffer may be lost without any representation in either the queue or the worker variable.

If you are running goworker on a system like Heroku, which sends a TERM to signal a process that it needs to stop and then sends a KILL ten seconds later to force it to stop, your jobs must finish within 10 seconds or they may be lost. Jobs will be recoverable from the Redis database under resque:worker:<hostname>:<process-id>-<worker-id>:<queues> as a JSON object with keys queue, run_at, and payload, but the process is manual. Additionally, there is no guarantee that the job in Redis under the worker key has not finished, if the process is killed before goworker can flush the update to Redis.
Contributing
- Fork it
- Create your feature branch (git checkout -b my-new-feature)
- Commit your changes (git commit -am 'Add some feature')
- Push to the branch (git push origin my-new-feature)
- Create new Pull Request