Concurrent Programming in Go

Gene Kuo
5 min read · Oct 15, 2021

Concurrency is the ability to do many things at once on a computer. Any Go code can be run concurrently by starting it in a Goroutine. This article introduces some common tools, such as Goroutines, WaitGroups, Channels, and Mutexes, with examples. Go offers more advanced concurrency techniques, but they are outside the scope of this article.

Goroutines

When we run our program and create a new Goroutine, we are running two Goroutines: the main one and the one we just created. The operations in different Goroutines may run in any order. In the following listing, we use time.Sleep to wait for the created Goroutines to complete their work. However, this is not an elegant solution, since we don't know how long we have to wait for the others to complete, and it is also inefficient.

package main

import (
    "fmt"
    "time"
)

func work(id int) {
    time.Sleep(3 * time.Second)
    fmt.Println("work: ", id, " is finished")
}

func main() {
    for i := 0; i < 5; i++ {
        go work(i)
    }
    time.Sleep(4 * time.Second)
}

WaitGroup

We can use the sync.WaitGroup type to synchronize these Goroutines. We add a parameter called wg, a pointer to a sync.WaitGroup, to the work function. When the operation in the work function is finished, we call wg.Done() to tell the WaitGroup that this Goroutine has completed, and we return.

Before we create each Goroutine in the main function, we call wg.Add(1) to notify the WaitGroup that one more Goroutine is running. We then call wg.Wait() to wait for all Goroutines to finish.

package main

import (
    "log"
    "sync"
    "time"
)

func work(id int, wg *sync.WaitGroup) {
    defer wg.Done()
    time.Sleep(3 * time.Second)
    log.Println("work: ", id, " is finished")
}

func main() {
    wg := &sync.WaitGroup{}
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go work(i, wg)
    }
    wg.Wait()
}

Channels

Goroutines use channels for communication and coordination. Channels can be used as variables, passed to functions as parameters, stored in structures, and so on. When a Goroutine sends a value on an unbuffered channel, the send operation blocks until another Goroutine receives on the same channel; the sender can do nothing else in the meantime. Similarly, a receive blocks until another Goroutine sends on the same channel. So we can modify the previous listing to use a channel to tell the main Goroutine that the work in the other Goroutines is completed.

package main

import (
    "log"
    "time"
)

func work(id int, c chan int) {
    time.Sleep(3 * time.Second)
    log.Println("work: ", id, " is completed")
    c <- id
}

func main() {
    c := make(chan int)
    for i := 0; i < 5; i++ {
        go work(i, c)
    }
    for i := 0; i < 5; i++ {
        id := <-c
        log.Println("work: ", id, " has finished")
    }
}

We can also treat the values collected by the receive operation <-c as the results of all the Goroutines running concurrently.

package main

import (
    "log"
    "time"
)

func work(in chan int, out chan int) {
    input := <-in
    log.Println("input: ", input)
    time.Sleep(3 * time.Second) // perform calculation
    out <- input
}

func main() {
    in := make(chan int)
    out := make(chan int)
    var res int

    // Start Goroutines that wait for input
    for i := 0; i < 5; i++ {
        go work(in, out)
    }

    // Send input on the in channel
    for i := 0; i < 5; i++ {
        in <- i
    }

    // Collect results from the out channel
    for i := 0; i < 5; i++ {
        output := <-out
        res += output
    }
    log.Println("result: ", res)
}

By sending and receiving values on channels, we can design a pipeline where each Goroutine does a simple piece of work and sends its result to the next Goroutine, eventually producing a sophisticated result.

A pipeline is useful for processing large streams of data without using a large amount of memory, and it helps break complex problems into simple stages.

In the following listing, we use close to close a channel, signifying that no more values will be sent on it. On the receiving side, we have to check whether the channel has been closed to avoid looping forever:

v, ok := <-c

Alternatively, we can use range on a channel to read values until the channel is closed; the range mechanism checks whether the channel is closed for us.

We run the last stage, produce, on the main Goroutine, so main waits until the other Goroutines have finished and no more values are expected. Then the main Goroutine exits.

package main

import (
    "log"
    "strings"
)

func source(next chan string) {
    for _, v := range []string{"apple", "stone", "orange", "grape"} {
        next <- v
    }
    close(next)
}

func filter(in chan string, next chan string) {
    for v := range in {
        if !strings.Contains(v, "stone") {
            next <- v
        }
    }
    close(next)
}

func produce(in chan string) {
    for v := range in {
        log.Println(v)
    }
}

func main() {
    c0 := make(chan string)
    c1 := make(chan string)
    go source(c0)
    go filter(c0, c1)
    produce(c1)
}

Shared Values

Sometimes we have to deal with values or state shared among running Goroutines. If two Goroutines read and write a shared value at the same time, we get a race condition, where the Goroutines race to use the value and the behavior is undefined.

Goroutines can mutually exclude each other from using a shared value at the same time with a mutex, provided by the sync package as sync.Mutex. Mutexes provide Lock and Unlock methods. If a Goroutine calls Lock on a locked mutex, it blocks until the mutex is unlocked and it can lock the mutex itself.

The pattern for using a mutex is to make sure that any code accessing the shared value first locks the mutex, uses the value, and then unlocks the mutex, typically with a defer statement.

package main

import (
    "fmt"
    "sync"
    "time"
)

type Counter struct {
    sync.Mutex
    count map[string]int
}

func (c *Counter) enter(name string) {
    c.Lock()
    defer c.Unlock()
    c.count[name]++
}

func main() {
    c := &Counter{count: make(map[string]int)}
    go c.enter("apple")
    go c.enter("orange")
    go c.enter("grape")
    go c.enter("apple")
    go c.enter("stone")
    // Wait for the Goroutines to finish
    time.Sleep(300 * time.Millisecond)
    fmt.Println(c.count)
}

We have to be careful when we use a mutex to guard shared values. We should do only simple operations while the mutex is locked, since we may be blocking other Goroutines for a long time. And if we try to lock the same mutex again without unlocking it, the second Lock will block forever and cause a deadlock.

Summary

It is simple to create Goroutines to do concurrent work for us efficiently. We can use a WaitGroup to wait for and synchronize multiple Goroutines and to collect results from them, though in Go this is often done preferably with channels.

Channels let Goroutines communicate by sending and receiving values. They can also chain Goroutines together into streaming pipelines that simplify complex operations and leverage concurrency.

We can safeguard shared data when multiple Goroutines try to modify it at the same time using the Mutex provided by the sync package. We can avoid deadlocks and race conditions by using this mechanism in the right way: keep the operations performed while the mutex is locked simple, use a single Mutex per piece of shared data, and unlock the Mutex when done with the shared data.

There are more advanced techniques regarding concurrency and efficiency in Go. I will cover those concepts and techniques in future articles.

Thanks for reading.
