Limiting concurrency with a semaphore

When doing CPU-intensive tasks like image resizing, it doesn’t make sense to create more goroutines than there are available CPUs.

Things will not go any faster, and you might even lose performance due to the additional bookkeeping and the cost of switching between goroutines.

One way to limit concurrency (i.e. the number of goroutines running at the same time) is to use a semaphore.

You can acquire (enter) a semaphore and release (leave) it.

A semaphore has a fixed capacity. If the capacity is exhausted, acquire blocks until a release operation frees a slot.

A buffered channel is a natural semaphore.
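
Sending a value into the channel acquires a slot and receiving a value releases it; the channel’s buffer size is the semaphore’s capacity. Here’s the idiom in its smallest form (a minimal sketch; the capacity of 2 is arbitrary):

package main

func main() {
	// a buffered channel used as a semaphore with capacity 2
	sem := make(chan struct{}, 2)

	sem <- struct{}{} // acquire: blocks if the buffer is already full
	// ... work that must be limited would happen here ...
	<-sem // release: frees a slot for another acquirer
}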

Here’s an example of using a buffered channel as a semaphore to limit the number of goroutines active at any given time:

package main

import (
	"fmt"
	"sync"
	"time"
)

var (
	semaphoreSize = 4

	mu                 sync.Mutex
	totalTasks         int
	curConcurrentTasks int
	maxConcurrentTasks int
)

func timeConsumingTask() {
	// track how many tasks run concurrently so we can verify the limit
	mu.Lock()
	totalTasks++
	curConcurrentTasks++
	if curConcurrentTasks > maxConcurrentTasks {
		maxConcurrentTasks = curConcurrentTasks
	}
	mu.Unlock()

	// in a real system this would be a CPU intensive operation
	time.Sleep(10 * time.Millisecond)

	mu.Lock()
	curConcurrentTasks--
	mu.Unlock()
}

func main() {
	sem := make(chan struct{}, semaphoreSize)
	var wg sync.WaitGroup
	for i := 0; i < 32; i++ {
		// acquire semaphore; blocks while semaphoreSize goroutines are running
		sem <- struct{}{}
		wg.Add(1)

		go func() {
			timeConsumingTask()
			// release semaphore
			<-sem
			wg.Done()
		}()
	}

	// wait for all tasks to finish
	wg.Wait()

	fmt.Printf("total tasks         : %d\n", totalTasks)
	fmt.Printf("max concurrent tasks: %d\n", maxConcurrentTasks)
}

Output:

total tasks         : 32
max concurrent tasks: 4

We use the technique described in waiting for goroutines to finish (a sync.WaitGroup) to wait for all tasks to complete.

Often the right number of concurrent tasks is equal to the number of CPUs, which can be obtained with runtime.NumCPU().
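
If you don’t want to roll your own, the golang.org/x/sync/semaphore package provides a weighted semaphore that implements the same idea. A minimal sketch of the pattern above using it (the worker count and task body are placeholders) might look like:

package main

import (
	"context"
	"fmt"
	"runtime"
	"time"

	"golang.org/x/sync/semaphore"
)

func main() {
	ctx := context.Background()
	maxWorkers := int64(runtime.NumCPU())
	sem := semaphore.NewWeighted(maxWorkers)

	for i := 0; i < 32; i++ {
		// Acquire blocks until a slot is free (or the context is cancelled)
		if err := sem.Acquire(ctx, 1); err != nil {
			break
		}
		go func(n int) {
			defer sem.Release(1)
			// stand-in for a CPU intensive operation
			time.Sleep(10 * time.Millisecond)
			fmt.Println("finished task", n)
		}(i)
	}

	// acquiring the full capacity waits for all in-flight goroutines to finish
	if err := sem.Acquire(ctx, maxWorkers); err != nil {
		panic(err)
	}
}

Acquiring the semaphore’s full capacity at the end is a common way to wait for all outstanding goroutines without a separate sync.WaitGroup.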
