go-floc alternatives and similar packages
Based on the "Goroutines" category.
- ants - 🐜🐜🐜 ants is a high-performance and low-cost goroutine pool in Go.
- goworker - goworker is a Go-based background worker that runs 10 to 100,000* times faster than Ruby-based workers.
- pond - 🔘 Minimalistic and high-performance goroutine worker pool written in Go.
- pool - :speedboat: A limited consumer goroutine or unlimited goroutine pool for easier goroutine handling and cancellation.
- Goflow - A simple way to control goroutine execution order based on dependencies.
- artifex - Simple in-memory job queue for Golang using worker-based dispatching.
- go-workers - 👷 Library for safely running groups of workers concurrently or consecutively that require input and output through channels.
- async - A safe way to execute functions asynchronously, recovering them in case of panic. It also provides an error stack to facilitate discovery of failure causes.
- gollback - Go asynchronous simple function utilities, for managing execution of closures and callbacks.
- semaphore - 🚦 Semaphore pattern implementation with timeout of lock/unlock operations.
- Hunch - Hunch provides functions like All, First, Retry, Waterfall, etc. that make asynchronous flow control more intuitive.
- go-do-work - Dynamically resizable pools of goroutines which can queue an infinite number of jobs.
- gpool - A generic, context-aware, resizable goroutine pool to bound concurrency based on a semaphore.
- routine - Goroutine control, an abstraction of the Main and some useful Executors. (If you don't know how to manage goroutines, use this.)
- go-actor - A tiny library for writing concurrent programs in Go using the actor model.
- gowl - Gowl is a process management and process monitoring tool in one. An infinite worker pool gives you the ability to control the pool and processes and monitor their status.
- kyoo - Unlimited job queue for Go, using a pool of concurrent workers processing the job queue entries.
- go-waitgroup - A sync.WaitGroup with error handling and concurrency control.
- channelify - Make functions return a channel for parallel processing via goroutines.
- conexec - A concurrent toolkit to help execute functions concurrently in an efficient and safe way. It supports specifying an overall timeout to avoid blocking.
- execpool - A pool that spins up a given number of processes in advance and attaches stdin and stdout when needed. Very similar to FastCGI, but works for any command.
- concurrency-limiter - Concurrency limiter with support for timeouts, dynamic priority, and context cancellation of goroutines.
- hands - Hands is a process controller used to control the execution and return strategies of multiple goroutines.
- queue - Package queue gives you queue group accessibility. It helps you limit goroutines, wait for all goroutines to finish, and much more.
- async-job - AsyncJob is an asynchronous queue job manager with light code, clear and fast. I hope so! 😬
- github.com/akshaybharambe14/gowp - High-performance, type-safe, concurrency-limiting worker pool package for Go!
README
go-floc
Floc: Orchestrate goroutines with ease.
The goal of the project is to make the process of running goroutines in parallel and synchronizing them easy.
Announcements
Hooray! The new version v2 was released on the 1st of December, 2017!
Installation and requirements
The package requires Go v1.8 or later.
To install the package, use `go get gopkg.in/workanator/go-floc.v2`.
Documentation and examples
Please refer to the Godoc reference of the package for more details.
Some examples are available at the Godoc reference. Additional examples can be found in go-floc-showcase.
Features
- Easy to use functional interface.
- Simple parallelism and synchronization of jobs.
- As little overhead as possible, in comparison to direct use of goroutines and sync primitives.
- Provide better control over execution with one entry point and one exit point.
Introduction
Floc introduces some terms which are widely used throughout the package.
Flow
Flow is the overall process, which can be controlled through `floc.Flow`. A flow can be canceled or completed with any arbitrary data at any point of execution. A flow has only one entry point and only one exit point.
```go
// Design the job
flow := run.Sequence(do, something, here, ...)

// The entry point: run the job
result, data, err := floc.Run(flow)

// The exit point: check the result of the job
if err != nil {
	// Handle the error
} else if result.IsCompleted() {
	// Handle the success
} else {
	// Handle other cases
}
```
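Because a flow may also be canceled or failed, the result can be inspected in more detail. The sketch below assumes `floc.Result` exposes `IsCanceled` and `IsFailed` alongside the `IsCompleted` method shown above; verify the exact API against the Godoc reference.

```go
// A sketch of a more detailed result check. IsCanceled and IsFailed are
// assumed to exist on floc.Result next to IsCompleted; check the Godoc reference.
switch {
case result.IsCompleted():
	fmt.Println("completed with data:", data)
case result.IsCanceled():
	fmt.Println("canceled with data:", data)
case result.IsFailed():
	fmt.Println("failed:", err)
}
```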
Job
Job in Floc is the smallest piece of a flow. The prototype of a job function is `floc.Job`. Each job can read/write data with `floc.Context` and control the flow with `floc.Control`.

The `Cancel()`, `Complete()` and `Fail()` methods of `floc.Control` have a permanent effect: once finished, the flow cannot be canceled or completed anymore. Calling `Fail` and returning an error from a job are almost equivalent.
```go
func ValidateContentLength(ctx floc.Context, ctrl floc.Control) error {
	request := ctx.Value("request").(http.Request)

	// Fail the flow with an error if the request body is too big
	if request.ContentLength > MaxContentLength {
		return errors.New("content is too big")
	}

	return nil
}
```
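For comparison, here is the same check written with an explicit `Fail` call on `floc.Control` instead of a returned error. This is only a sketch: the `Fail(data, err)` signature is an assumption based on the description above, so verify it against the Godoc reference.

```go
// A hypothetical variant of ValidateContentLength that finishes the flow
// explicitly. The Fail(data, err) signature is an assumption; check the Godoc reference.
func ValidateContentLengthExplicit(ctx floc.Context, ctrl floc.Control) error {
	request := ctx.Value("request").(http.Request)

	if request.ContentLength > MaxContentLength {
		// Finishing the flow has a permanent effect: once Fail is called,
		// later jobs can no longer complete or cancel the flow.
		ctrl.Fail(nil, errors.New("content is too big"))
	}

	return nil
}
```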
Example
Let's have some fun and write a simple example which calculates some statistics on a given text.
```go
const Text = `Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed
do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim
veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum
dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident,
sunt in culpa qui officia deserunt mollit anim id est laborum.`

const keyStatistics = 1

var sanitizeWordRe = regexp.MustCompile(`\W`)

type Statistics struct {
	Words      []string
	Characters int
	Occurrence map[string]int
}

// Split the text into words and sanitize them
SplitToWords := func(ctx floc.Context, ctrl floc.Control) error {
	statistics := ctx.Value(keyStatistics).(*Statistics)

	statistics.Words = strings.Split(Text, " ")
	for i, word := range statistics.Words {
		statistics.Words[i] = sanitizeWordRe.ReplaceAllString(word, "")
	}

	return nil
}

// Count and sum the number of characters in each word
CountCharacters := func(ctx floc.Context, ctrl floc.Control) error {
	statistics := ctx.Value(keyStatistics).(*Statistics)

	for _, word := range statistics.Words {
		statistics.Characters += len(word)
	}

	return nil
}

// Count the number of unique words
CountUniqueWords := func(ctx floc.Context, ctrl floc.Control) error {
	statistics := ctx.Value(keyStatistics).(*Statistics)

	statistics.Occurrence = make(map[string]int)
	for _, word := range statistics.Words {
		statistics.Occurrence[word]++
	}

	return nil
}

// Print the result
PrintResult := func(ctx floc.Context, ctrl floc.Control) error {
	statistics := ctx.Value(keyStatistics).(*Statistics)

	fmt.Printf("Words Total : %d\n", len(statistics.Words))
	fmt.Printf("Unique Word Count : %d\n", len(statistics.Occurrence))
	fmt.Printf("Character Count : %d\n", statistics.Characters)

	return nil
}

// Design the flow and run it
flow := run.Sequence(
	SplitToWords,
	run.Parallel(
		CountCharacters,
		CountUniqueWords,
	),
	PrintResult,
)

ctx := floc.NewContext()
ctx.AddValue(keyStatistics, new(Statistics))

ctrl := floc.NewControl(ctx)

_, _, err := floc.RunWith(ctx, ctrl, flow)
if err != nil {
	panic(err)
}

// Output:
// Words Total : 64
// Unique Word Count : 60
// Character Count : 370
```
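The snippet above is meant to live inside a function together with the necessary imports. A minimal standalone program using the same entry points could look roughly like the sketch below; the import paths, including the `run` sub-package path, are assumptions derived from the installation instructions, and the `sayHello` job and `keyGreeting` key are purely illustrative.

```go
// A minimal, self-contained sketch using the same entry points as the example
// above. Import paths (including the run sub-package) are assumptions.
package main

import (
	"fmt"

	floc "gopkg.in/workanator/go-floc.v2"
	"gopkg.in/workanator/go-floc.v2/run"
)

func main() {
	const keyGreeting = 1

	// A single job that reads a value from the flow context and prints it.
	sayHello := func(ctx floc.Context, ctrl floc.Control) error {
		fmt.Println(ctx.Value(keyGreeting).(string))
		return nil
	}

	ctx := floc.NewContext()
	ctx.AddValue(keyGreeting, "Hello, Floc!")
	ctrl := floc.NewControl(ctx)

	_, _, err := floc.RunWith(ctx, ctrl, run.Sequence(sayHello))
	if err != nil {
		panic(err)
	}
}
```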
Contributing
Please find information about contributing in [CONTRIBUTING.md](CONTRIBUTING.md) and the list of the brave people who spent their priceless time and effort to make the project better in [CONTRIBUTORS.md](CONTRIBUTORS.md).
*Note that all licence references and agreements mentioned in the go-floc README section above
are relevant to that project's source code only.