Limiting concurrency with a semaphore
When doing CPU-intensive tasks like image resizing, it doesn’t make sense to create more goroutines than available CPUs.
Things will not go any faster and you might even lose performance due to the additional bookkeeping and switching between goroutines.
One way to limit concurrency (i.e. the number of goroutines running at the same time) is to use a semaphore.
You can enter (acquire) a semaphore and leave (release) a semaphore.
A semaphore has a fixed capacity. If the semaphore is at capacity, acquire blocks until a release operation frees a slot.
A buffered channel is a natural semaphore.
Here’s an example of using a channel acting as a semaphore to limit the number of goroutines active at any given time:
```go
package main

import (
	"fmt"
	"sync"
	"time"
)

var (
	semaphoreSize = 4

	mu                 sync.Mutex
	totalTasks         int
	curConcurrentTasks int
	maxConcurrentTasks int
)

func timeConsumingTask() {
	mu.Lock()
	totalTasks++
	curConcurrentTasks++
	if curConcurrentTasks > maxConcurrentTasks {
		maxConcurrentTasks = curConcurrentTasks
	}
	mu.Unlock()

	// in a real system this would be a CPU-intensive operation
	time.Sleep(10 * time.Millisecond)

	mu.Lock()
	curConcurrentTasks--
	mu.Unlock()
}

func main() {
	sem := make(chan struct{}, semaphoreSize)
	var wg sync.WaitGroup
	for i := 0; i < 32; i++ {
		// acquire semaphore
		sem <- struct{}{}
		wg.Add(1)
		go func() {
			timeConsumingTask()
			// release semaphore
			<-sem
			wg.Done()
		}()
	}
	// wait for all tasks to finish
	wg.Wait()
	fmt.Printf("total tasks : %d\n", totalTasks)
	fmt.Printf("max concurrent tasks: %d\n", maxConcurrentTasks)
}
```
Output:

```
total tasks : 32
max concurrent tasks: 4
```
We use the technique described in waiting for goroutines to finish to wait for all tasks to finish.
Often the right number of concurrent tasks is equal to the number of CPUs, which can be obtained with runtime.NumCPU().