Limiting concurrency with a semaphore


When doing CPU-intensive tasks like image resizing, it doesn’t make sense to create more goroutines than available CPUs.

Things will not go any faster, and you might even lose performance due to the additional bookkeeping and switching between goroutines.

One way to limit concurrency (i.e., the number of goroutines running at the same time) is to use a semaphore.

You can enter (acquire) a semaphore and leave (release) it.

A semaphore has a fixed capacity. If the semaphore is at capacity, acquire blocks until a release operation frees a slot.

A buffered channel is a natural semaphore.

Here’s an example of using a channel as a semaphore to limit the number of goroutines active at any given time:

package main

import (
	"fmt"
	"sync"
	"time"
)

var (
	semaphoreSize = 4

	mu                 sync.Mutex
	totalTasks         int
	curConcurrentTasks int
	maxConcurrentTasks int
)

func timeConsumingTask() {
	mu.Lock()
	totalTasks++
	curConcurrentTasks++
	if curConcurrentTasks > maxConcurrentTasks {
		maxConcurrentTasks = curConcurrentTasks
	}
	mu.Unlock()

	// in a real system this would be a CPU-intensive operation
	time.Sleep(10 * time.Millisecond)

	mu.Lock()
	curConcurrentTasks--
	mu.Unlock()
}

func main() {
	sem := make(chan struct{}, semaphoreSize)
	var wg sync.WaitGroup
	for i := 0; i < 32; i++ {
		// acquire semaphore
		sem <- struct{}{}
		wg.Add(1)
		go func() {
			defer wg.Done()
			timeConsumingTask()
			// release semaphore
			<-sem
		}()
	}

	// wait for all tasks to finish
	wg.Wait()

	fmt.Printf("total tasks         : %d\n", totalTasks)
	fmt.Printf("max concurrent tasks: %d\n", maxConcurrentTasks)
}

total tasks         : 32
max concurrent tasks: 4

We use the technique described in waiting for goroutines to finish to wait for all tasks to complete.

Often the right number of concurrent tasks is equal to the number of CPUs, which can be obtained with runtime.NumCPU().
