Description
An extension of the worker queue pattern described in http://marcio.io/2015/07/handling-1-million-requests-per-minute-with-golang/ or http://nesv.github.io/golang/2014/02/25/worker-queues-in-go.html to allow graceful stopping, queryable work status, and more generic jobs.
Bifrost alternatives and similar packages
Based on the "Goroutines" category.
- ants: 🐜🐜🐜 a high-performance and low-cost goroutine pool in Go, inspired by fasthttp.
- goworker: a Go-based background worker that runs 10 to 100,000* times faster than Ruby-based workers.
- pool: :speedboat: a limited consumer goroutine or unlimited goroutine pool for easier goroutine handling and cancellation.
- pond: 🔘 a minimalistic and high-performance goroutine worker pool written in Go.
- Goflow: a simple way to control goroutine execution order based on dependencies.
- go-workers: 👷 a library for safely running groups of workers, concurrently or consecutively, that require input and output through channels.
- async: a safe way to execute functions asynchronously, recovering them in case of panic. It also provides an error stack to help discover failure causes.
- artifex: a simple in-memory job queue for Golang using worker-based dispatching.
- semaphore: 🚦 semaphore pattern implementation with timeout of lock/unlock operations.
- gollback: simple Go asynchronous function utilities for managing execution of closures and callbacks.
- go-do-work: dynamically resizable pools of goroutines which can queue an infinite number of jobs.
- Hunch: provides functions like All, First, Retry, and Waterfall that make asynchronous flow control more intuitive.
- gpool: a generic, context-aware, resizable goroutine pool to bound concurrency based on a semaphore.
- routine: goroutine control; an abstraction of the Main and some useful Executors (if you don't know how to manage goroutines, use this).
- kyoo: an unlimited job queue for Go, using a pool of concurrent workers to process the job queue entries.
- go-waitgroup: a sync.WaitGroup with error handling and concurrency control.
- gowl: a process management and process monitoring tool in one. An infinite worker pool gives you the ability to control the pool and processes and monitor their status.
- channelify: make functions return a channel for parallel processing via goroutines.
- conexec: a concurrent toolkit to help execute functions concurrently in an efficient and safe way. It supports specifying an overall timeout to avoid blocking.
- execpool: a pool that spins up a given number of processes in advance and attaches stdin and stdout when needed. Very similar to FastCGI, but works for any command.
- hands: a process controller used to control the execution and return strategies of multiple goroutines.
- concurrency-limiter: a concurrency limiter with support for timeouts, dynamic priority, and context cancellation of goroutines.
- queue: package queue gives you queue group accessibility; it helps you limit goroutines, wait for all goroutines to finish, and more.
- github.com/akshaybharambe14/gowp: a high-performance, type-safe, concurrency-limiting worker pool package for Go.
- oversight: a complete implementation of Erlang supervision trees.
ᛉ Bifröst - a queryable in-process worker queue
![gofrost](repo/vikgopher.gif "BiFrost")
Package bifrost contains functionality to create an in-process job queue with a configurable number of worker goroutines. It also includes the ability to query scheduled jobs for status (completed jobs are purged at a configurable interval).
```go
package main

import (
	"encoding/json"
	"fmt"
	"github.com/serdmanczyk/bifrost"
	"os"
	"time"
)

func main() {
	stdoutWriter := json.NewEncoder(os.Stdout)

	dispatcher := bifrost.NewWorkerDispatcher(
		bifrost.Workers(4),
		bifrost.JobExpiry(time.Millisecond),
	)

	// Queue a job func
	tracker := dispatcher.QueueFunc(func() error {
		time.Sleep(time.Microsecond)
		return nil
	})

	// Queue a 'JobRunner'
	dispatcher.Queue(bifrost.JobRunnerFunc(func() error {
		time.Sleep(time.Microsecond)
		return nil
	}))

	// Print out incomplete status
	status := tracker.Status()
	stdoutWriter.Encode(&status)
	// {"ID":0,"Complete":false,"Start":"2017-03-23T21:51:27.140681968-07:00"}

	// Wait on completion
	<-tracker.Done()

	// Status is now complete
	status = tracker.Status()
	stdoutWriter.Encode(&status)
	// {"ID":0,"Complete":true,"Success":true,"Start":"2017-03-23T21:51:27.140681968-07:00","Finish":"2017-03-23T21:51:27.140830827-07:00"}

	// Queue a job that will 'fail'
	tracker = dispatcher.QueueFunc(func() error {
		time.Sleep(time.Microsecond)
		return fmt.Errorf("Failed")
	})

	// Show failure status
	<-tracker.Done()
	status = tracker.Status()
	stdoutWriter.Encode(&status)
	// {"ID":2,"Complete":true,"Success":false,"Error":"Failed","Start":"2017-03-23T21:51:27.141026625-07:00","Finish":"2017-03-23T21:51:27.141079871-07:00"}

	// Query for a job's status.
	tracker, _ = dispatcher.JobStatus(tracker.ID())
	status = tracker.Status()
	stdoutWriter.Encode(&status)
	// {"ID":2,"Complete":true,"Success":false,"Error":"Failed","Start":"2017-03-23T21:51:27.141026625-07:00","Finish":"2017-03-23T21:51:27.141079871-07:00"}

	// Show all jobs
	jobs := dispatcher.Jobs()
	stdoutWriter.Encode(jobs)
	// [{"ID":2,"Complete":true,"Success":false,"Error":"Failed","Start":"2017-03-23T21:51:27.141026625-07:00","Finish":"2017-03-23T21:51:27.141079871-07:00"},{"ID":0,"Complete":true,"Success":true,"Start":"2017-03-23T21:51:27.140681968-07:00","Finish":"2017-03-23T21:51:27.140830827-07:00"},{"ID":1,"Complete":true,"Success":true,"Start":"2017-03-23T21:51:27.140684331-07:00","Finish":"2017-03-23T21:51:27.140873087-07:00"}]

	// Wait for jobs to be purged
	time.Sleep(time.Millisecond * 5)

	// Should now be empty
	jobs = dispatcher.Jobs()
	stdoutWriter.Encode(jobs)
	// []

	dispatcher.Stop()
}
```
Why?
If you've read the blog posts [Handling 1 Million Requests per Minute with Go](http://marcio.io/2015/07/handling-1-million-requests-per-minute-with-golang/) or [Writing worker queues, in Go](http://nesv.github.io/golang/2014/02/25/worker-queues-in-go.html), this will look very familiar. The main machinery in Bifrost is basically identical to the functionality described in those posts, but with a couple of added features I wanted for my project.
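For context, here is a minimal sketch of the worker-queue pattern those posts describe, simplified to a single shared job channel rather than the channel-of-channels registration they use. The `Job` type and `worker` function are illustrative only and are not Bifrost's internals; Bifrost layers tracking, expiry, and graceful shutdown on top of machinery like this.

```go
package main

import (
	"fmt"
	"sync"
)

// Job is a placeholder for a unit of work (illustrative only).
type Job func() error

// worker pulls jobs from the shared queue until the channel is closed.
func worker(id int, jobs <-chan Job, wg *sync.WaitGroup) {
	defer wg.Done()
	for job := range jobs {
		if err := job(); err != nil {
			fmt.Printf("worker %d: job failed: %v\n", id, err)
		}
	}
}

func main() {
	jobs := make(chan Job, 16) // the job queue
	var wg sync.WaitGroup

	// Start a fixed pool of workers.
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go worker(i, jobs, &wg)
	}

	// Queue some work.
	for i := 0; i < 10; i++ {
		n := i
		jobs <- func() error {
			fmt.Println("processing", n)
			return nil
		}
	}

	// Closing the queue lets workers drain it and exit (a "graceful" stop).
	close(jobs)
	wg.Wait()
}
```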
Added Features:
- Generic jobs: any `func() error` or any type that implements `Run() error` can be queued as a job (see the sketch after this list).
- Graceful shutdown: when the dispatcher is stopped, it waits for running jobs to complete.
- Tracking: queued jobs are given an ID that can be used to query for status later.
- Cleanup: completed jobs are cleaned up after a configurable amount of time.
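To illustrate the generic-jobs point, here is a hedged sketch of queueing a user-defined type alongside a plain `func() error`. It assumes, per the list above, that `dispatcher.Queue` accepts any value with a `Run() error` method and that `QueueFunc` returns a tracker as in the earlier example; the `emailJob` type is hypothetical.

```go
package main

import (
	"fmt"
	"time"

	"github.com/serdmanczyk/bifrost"
)

// emailJob is a hypothetical job type; per the feature list, any type with a
// Run() error method can be handed to dispatcher.Queue.
type emailJob struct {
	to   string
	body string
}

func (e emailJob) Run() error {
	fmt.Printf("sending %q to %s\n", e.body, e.to)
	return nil
}

func main() {
	dispatcher := bifrost.NewWorkerDispatcher(
		bifrost.Workers(2),
		bifrost.JobExpiry(time.Minute),
	)
	defer dispatcher.Stop()

	// Queue the custom JobRunner; the dispatcher treats it like any other job.
	dispatcher.Queue(emailJob{to: "ops@example.com", body: "hello"})

	// Queue a plain func() error and keep its tracker so we can wait on it.
	tracker := dispatcher.QueueFunc(func() error { return nil })
	<-tracker.Done()
	fmt.Printf("%+v\n", tracker.Status())
}
```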
Lacks (might add these later):
- Lost jobs: if the dispatcher is stopped before all jobs are sent to a worker, unsent jobs may be ignored.
- Errant jobs: jobs taking longer than expected cannot be cancelled (a caller-side workaround for bounding the wait is sketched after this list).
- Single process: this package does not include functionality to schedule jobs across multiple processes via AMQP, gRPC, or otherwise.
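Because in-flight jobs cannot be cancelled, about the best a caller can do today is bound how long it waits on one. A rough caller-side sketch, using only the `tracker.Done()` channel from the example above (this is not part of the package):

```go
package main

import (
	"fmt"
	"time"

	"github.com/serdmanczyk/bifrost"
)

func main() {
	dispatcher := bifrost.NewWorkerDispatcher(
		bifrost.Workers(1),
		bifrost.JobExpiry(time.Minute),
	)
	defer dispatcher.Stop()

	// A job that takes longer than we are willing to wait.
	tracker := dispatcher.QueueFunc(func() error {
		time.Sleep(time.Second)
		return nil
	})

	// Bound the wait, not the job: the goroutine keeps running either way.
	select {
	case <-tracker.Done():
		fmt.Println("finished:", tracker.Status())
	case <-time.After(100 * time.Millisecond):
		fmt.Println("still running; check back later via dispatcher.JobStatus")
	}
}
```

Note that the job itself still runs to completion, and `dispatcher.Stop()` will wait for it as part of the graceful shutdown.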
For an example, see the [test](dispatcher_test.go) or example [command line app](example/main.go).
The obligatory "not for use in production" disclaimer applies, but I do welcome feedback.
Etymology
Bifröst (popularly pronounced "B-eye-frost", or traditionally something closer to "Beefroast") is the bridge between the realms of Earth and Asgard (the heavens) in Norse mythology.
The Futhark rune ᛉ (Elhaz/Algiz) is seen as the symbol for Bifröst, or at least according to this thing I Googled.
The intended symbolism is that the dispatcher is a 'bridge' between the scheduling goroutine and the worker goroutines.
Honestly I just needed a cool Norse thing to name this, I was reaching. Not to be taken too seriously.