go-do-work alternatives and similar packages
Based on the "Goroutines" category.
- ants: a high-performance and low-cost goroutine pool in Go.
- goworker: a Go-based background worker that runs 10 to 100,000* times faster than Ruby-based workers.
- pool: a limited consumer goroutine or unlimited goroutine pool for easier goroutine handling and cancellation.
- go-workers: DISCONTINUED. A library for safely running groups of workers, concurrently or consecutively, that require input and output through channels.
- async: a safe way to execute functions asynchronously, recovering them in case of panic. It also provides an error stack to help discover failure causes.
- gollback: simple Go asynchronous function utilities for managing the execution of closures and callbacks.
- Hunch: provides functions like All, First, Retry, and Waterfall that make asynchronous flow control more intuitive.
- gpool: a generic, context-aware, resizable goroutine pool that bounds concurrency with a semaphore.
- gowl: a process management and process monitoring tool in one. An infinite worker pool gives you the ability to control the pool and its processes and monitor their status.
- routine: goroutine control, an abstraction of the Main routine plus some useful Executors. If you don't want to manage goroutines yourself, use this.
- kyoo: an unlimited job queue for Go, using a pool of concurrent workers to process queue entries.
- execpool: a pool that spins up a given number of processes in advance and attaches stdin and stdout when needed. Very similar to FastCGI, but works for any command.
- concurrency-limiter: a concurrency limiter with support for timeouts, dynamic priority, and context cancellation of goroutines.
- conexec: a concurrent toolkit that helps execute funcs concurrently in an efficient and safe way, with support for an overall timeout to avoid blocking.
- queue: gives you queue group accessibility; helps you limit goroutines, wait for all of them to finish, and more.
- hands: a process controller used to control the execution and return strategies of multiple goroutines.
- async-job: a lightweight, clear, and fast asynchronous queue job manager.
go-do-work
`gdw` makes use of eapache's delightfully clever channels package in order to provide dynamically resizable pools of goroutines which can queue an infinite number of jobs.
Installation

```shell
go get github.com/c3mb0/go-do-work
```
Types of Pools
There are currently 2 types of pools in `gdw`: `Worker` and `Rebel`. Their internal mechanics are the same, except that jobs queued in a `WorkerPool` are waitable. This allows for separation of concerns: jobs whose results and/or execution are of interest go in one, and fire-and-forget jobs go in the other. You can safely mix them without affecting one another.
Usage
Any object that implements the `Job` interface is eligible to be queued and executed by `gdw`. There is, however, a big difference between queuing an object and queuing an object pointer.
Consider the following example:
```go
package main

import (
	"fmt"

	gdw "github.com/c3mb0/go-do-work"
)

type adder struct {
	count int
}

// Value receiver: each execution works on its own copy of the queued object.
func (a adder) DoWork() {
	a.count++
	fmt.Print(a.count, " ")
}

func main() {
	test := adder{count: 0}
	pool := gdw.NewWorkerPool(2)
	defer pool.Close()
	pool.Add(test, 5)
	pool.Wait()
	fmt.Println()
}
```
Here we create a new `WorkerPool` with a pool size of 2. We then queue 5 `test` jobs. The resulting output is:

```
1 1 1 1 1
```
Let's queue an object pointer instead of an object:
```go
package main

import (
	"fmt"

	gdw "github.com/c3mb0/go-do-work"
)

type adder struct {
	count int
}

// Pointer receiver: every execution mutates the same underlying object.
func (a *adder) DoWork() {
	a.count++
	fmt.Print(a.count, " ")
}

func main() {
	test := &adder{count: 0}
	pool := gdw.NewWorkerPool(2)
	defer pool.Close()
	pool.Add(test, 5)
	pool.Wait()
	fmt.Println()
}
```
The resulting output is:

```
1 2 3 4 5
```
When you queue an object, each goroutine in the pool works on a copy of the object provided. On the other hand, when you queue an object pointer, all goroutines work on the same object. These approaches both have their use cases, but keep in mind that the latter approach needs to be thread-safe. Thus, the correct implementation would be:
```go
package main

import (
	"fmt"
	"sync/atomic"

	gdw "github.com/c3mb0/go-do-work"
)

type adder struct {
	count uint32
}

// Atomic operations make the shared counter safe across workers.
func (a *adder) DoWork() {
	atomic.AddUint32(&a.count, 1)
	fmt.Print(atomic.LoadUint32(&a.count), " ")
}

func main() {
	test := &adder{count: 0}
	pool := gdw.NewWorkerPool(2)
	defer pool.Close()
	pool.Add(test, 5)
	pool.Wait()
	fmt.Println()
}
```
Pool Size and Safety
You can safely increase or decrease a pool's size at runtime without losing already queued data or shutting down already running goroutines. The only caveat is that you cannot set the pool size to 0. Details can be found here.
The following example demonstrates pool resizing in action:
```go
package main

import (
	"fmt"
	"time"

	gdw "github.com/c3mb0/go-do-work"
)

type adder struct {
	count int
}

func (a adder) DoWork() {
	a.count++
	fmt.Print(a.count, " ")
	time.Sleep(2 * time.Second)
}

func main() {
	test := adder{count: 0}
	pool := gdw.NewWorkerPool(3)
	defer pool.Close()
	pool.Add(test, 5)
	time.Sleep(1 * time.Second)
	pool.SetPoolSize(1) // shrink the pool while jobs are still queued
	fmt.Printf("\n%d\n", pool.GetQueueDepth())
	pool.Wait()
	fmt.Println()
}
```
Check the output for some magic!
Batching
Instead of waiting for the entire pool to finish, you can wait for a specific group of jobs. This is done via "batching":
```go
package main

import (
	"fmt"
	"time"

	gdw "github.com/c3mb0/go-do-work"
)

type adder struct {
	count uint32
}

func (a adder) DoWork() {
	a.count++
	time.Sleep(1 * time.Second)
}

func main() {
	test := adder{count: 0}
	pool := gdw.NewWorkerPool(3)
	defer pool.Close()
	batch1 := pool.NewTempBatch()
	batch2 := pool.NewTempBatch()
	pool.NewBatch("my batch")
	defer batch1.Clean()
	defer batch2.Clean()
	defer pool.CleanBatch("my batch")
	batch1.Add(test, 5)
	batch2.Add(test, 10)
	batch3, _ := pool.LoadBatch("my batch")
	batch3.Add(test, 4)
	batch1.Wait()
	fmt.Println("batch 1 done")
	batch2.Wait()
	fmt.Println("batch 2 done")
	fmt.Println(pool.GetQueueDepth())
	pool.Wait() // includes jobs added through batches
}
```
Keep in mind that even though batches are separately waitable, jobs queued through them contribute to the job count in the pool.
Collecting Results
If you would like to get some results back from your jobs, the most practical approach is to slip a channel into the object of interest:
```go
package main

import (
	"fmt"

	gdw "github.com/c3mb0/go-do-work"
)

type adder struct {
	count  int
	result chan int
}

func (a adder) DoWork() {
	a.count++
	a.result <- a.count
}

func main() {
	result := make(chan int)
	test := adder{
		count:  0,
		result: result,
	}
	pool := gdw.NewWorkerPool(3)
	defer pool.Close()
	pool.Add(test, 5)
	go func() {
		for res := range result {
			fmt.Print(res, " ")
		}
	}()
	pool.Wait()
	close(result) // close the result channel after the pool has completed
	fmt.Println()
}
```
This works for both object and object pointer jobs.
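For a runnable, dependency-free rendition of the same pattern, the sketch below replaces the pool with bare goroutines and a `sync.WaitGroup`; the part that carries over is that the result channel is closed only after every job is known to have finished.

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// squarer is the job: it carries its own results channel.
type squarer struct {
	n      int
	result chan int
}

func (s squarer) DoWork() { s.result <- s.n * s.n }

func collectSquares(ns []int) []int {
	result := make(chan int)
	var wg sync.WaitGroup
	for _, n := range ns {
		wg.Add(1)
		go func(job squarer) {
			defer wg.Done()
			job.DoWork()
		}(squarer{n: n, result: result})
	}
	done := make(chan []int)
	go func() { // drain results while the jobs run
		var out []int
		for r := range result {
			out = append(out, r)
		}
		done <- out
	}()
	wg.Wait()
	close(result) // safe: no job can send anymore
	out := <-done
	sort.Ints(out) // arrival order is nondeterministic
	return out
}

func main() {
	fmt.Println(collectSquares([]int{1, 2, 3, 4})) // prints [1 4 9 16]
}
```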