Go in 2 Weeks #06 | Days 10–11: Goroutines & Channels — Concurrency That Scales

Key takeaways

Spawn goroutines, send on channels, use select and WaitGroup, and compare Go’s message-passing style to C++ threads and mutexes.

Series overview

📚 Go in 2 Weeks #06 | Full series index

This post covers Days 10–11 of the two-week Go curriculum for C++ developers.

Previous: #05 Error handling ← | → Next: #07 Modules & testing


Introduction: lightweight concurrency

std::thread in C++ often costs megabytes of stack per thread—hundreds of threads can exhaust memory. Goroutines start around a few KB and grow as needed—you can run tens of thousands. That is a big reason C++ engineers fall for Go.

You will learn:

  • Goroutine basics
  • Channels for safe communication
  • select for multiple channels
  • Patterns: worker pool, pipeline

Real-world notes

Moving from C++ to Go

Same theme: faster iteration, GC safety, simple deploys.


Table of contents

  1. Goroutines: lightweight threads
  2. Channels: communication
  3. Buffered channels
  4. select: multiplexing
  5. Concurrency patterns
  6. Exercises

1. Goroutines: lightweight threads

C++ vs Go

#include <thread>
#include <iostream>
#include <vector>

void worker(int id) {
    std::cout << "Worker " << id << " running\n";
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 10; i++) {
        threads.emplace_back(worker, i);
    }
    for (auto& t : threads) {
        t.join();
    }
    return 0;
}

The Go equivalent — note it launches 10,000 goroutines where the C++ version stops at 10 threads:

package main

import (
    "fmt"
    "sync"
)

func worker(id int) {
    fmt.Printf("Worker %d running\n", id)
}

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 10000; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            worker(id)
        }(i)
    }
    wg.Wait()
}

Differences:

  • Cheaper creation than OS threads
  • Small initial stack, grows as needed
  • M:N scheduling
  • C++: hundreds of threads; Go: many thousands of goroutines

sync.WaitGroup

package main

import (
    "fmt"
    "sync"
    "time"
)

func task(id int, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Printf("Task %d starting\n", id)
    time.Sleep(time.Second)
    fmt.Printf("Task %d done\n", id)
}

func main() {
    var wg sync.WaitGroup
    for i := 1; i <= 5; i++ {
        wg.Add(1)
        go task(i, &wg)
    }
    wg.Wait()
    fmt.Println("All tasks completed")
}

2. Channels: goroutine communication

C++ mutex vs Go channel

std::mutex mtx;
std::vector<int> results;

void worker(int id) {
    int result = id * 2;
    std::lock_guard<std::mutex> lock(mtx);
    results.push_back(result);
}

The Go version replaces the mutex with a channel:

func worker(id int, ch chan int) {
    ch <- id * 2
}

func main() {
    ch := make(chan int)
    for i := 0; i < 10; i++ {
        go worker(i, ch)
    }
    results := make([]int, 0, 10)
    for i := 0; i < 10; i++ {
        results = append(results, <-ch)
    }
    _ = results
}

As the Go proverb goes: “Do not communicate by sharing memory; instead, share memory by communicating.”

Channel basics

ch := make(chan int)
go func() { ch <- 42 }()
value := <-ch
close(ch)
v, ok := <-ch  // after close: zero value, ok==false

Channel directions

func sender(ch chan<- int) {
    ch <- 1
    close(ch)
}

func receiver(ch <-chan int) {
    for v := range ch {
        _ = v
    }
}

3. Buffered channels

// Unbuffered: synchronous handoff
ch := make(chan int)
go func() { ch <- 1 }()
<-ch

// Buffered: async until full
b := make(chan int, 3)
b <- 1
b <- 2
b <- 3

Producer–consumer example: put a buffered channel between producer and consumer, have the producer close it when done, and let the consumer range over it to drain.


4. select: multiple channels

Basic select

select {
case msg := <-ch1:
    _ = msg
case msg := <-ch2:
    _ = msg
}

Timeout

select {
case result := <-ch:
    fmt.Println("Received:", result)
case <-time.After(1 * time.Second):
    fmt.Println("Timeout!")
}

Non-blocking with default

select {
case ch <- 1:
    fmt.Println("Sent")
default:
    fmt.Println("Would block")
}

5. Concurrency patterns

Worker pool

func worker(id int, jobs <-chan int, results chan<- int) {
    for job := range jobs {
        _ = id
        results <- job * 2
    }
}

Pipeline

func generator(nums ...int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for _, n := range nums {
            out <- n
        }
    }()
    return out
}

func square(in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for n := range in {
            out <- n * n
        }
    }()
    return out
}

Fan-out / fan-in

Split work across workers, merge results—often with sync.WaitGroup and one output channel.

Context for cancel/timeout

ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
defer cancel()
select {
case <-ctx.Done():
    // cancelled or timed out
default:
    // not cancelled yet — a non-blocking check; real code usually
    // selects on ctx.Done() alongside its work channels instead
}

6. Exercises

Exercise 1: parallel downloads

package main

import (
    "fmt"
    "io"
    "net/http"
    "sync"
)

func download(url string, wg *sync.WaitGroup, results chan<- string) {
    defer wg.Done()

    resp, err := http.Get(url)
    if err != nil {
        results <- fmt.Sprintf("%s: error - %v", url, err)
        return
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        results <- fmt.Sprintf("%s: read error - %v", url, err)
        return
    }

    results <- fmt.Sprintf("%s: %d bytes", url, len(body))
}

func main() {
    urls := []string{
        "https://golang.org",
        "https://github.com",
        "https://stackoverflow.com",
    }

    var wg sync.WaitGroup
    results := make(chan string, len(urls))

    for _, url := range urls {
        wg.Add(1)
        go download(url, &wg, results)
    }

    go func() {
        wg.Wait()
        close(results)
    }()

    for result := range results {
        fmt.Println(result)
    }
}

Exercise 2: rate limiter

package main

import (
    "fmt"
    "time"
)

func rateLimiter(requests <-chan int, rate time.Duration) {
    ticker := time.NewTicker(rate)
    defer ticker.Stop()

    for req := range requests {
        <-ticker.C
        fmt.Printf("Processing request %d at %v\n", req, time.Now())
    }
}

func main() {
    requests := make(chan int, 10)
    go rateLimiter(requests, 500*time.Millisecond)

    for i := 1; i <= 5; i++ {
        requests <- i
    }
    close(requests)

    time.Sleep(3 * time.Second)
}

Exercise 3: timeout

package main

import (
    "fmt"
    "time"
)

func longRunningTask(result chan<- string) {
    time.Sleep(3 * time.Second)
    result <- "Task completed"
}

func main() {
    result := make(chan string, 1) // buffered so the goroutine can exit (not leak) even after a timeout
    go longRunningTask(result)

    select {
    case res := <-result:
        fmt.Println(res)
    case <-time.After(2 * time.Second):
        fmt.Println("Timeout: task took too long")
    }
}

Exercise 4: concurrent map access

package main

import (
    "fmt"
    "sync"
)

type SafeCounter1 struct {
    mu    sync.Mutex
    count map[string]int
}

func (c *SafeCounter1) Inc(key string) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.count[key]++
}

func (c *SafeCounter1) Value(key string) int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.count[key]
}

type SafeCounter2 struct {
    ops chan func(map[string]int)
}

func NewSafeCounter2() *SafeCounter2 {
    c := &SafeCounter2{ops: make(chan func(map[string]int))}
    go func() {
        count := make(map[string]int)
        for op := range c.ops {
            op(count)
        }
    }()
    return c
}

func (c *SafeCounter2) Inc(key string) {
    c.ops <- func(count map[string]int) {
        count[key]++
    }
}

func (c *SafeCounter2) Value(key string) int {
    result := make(chan int)
    c.ops <- func(count map[string]int) {
        result <- count[key]
    }
    return <-result
}

func main() {
    counter1 := &SafeCounter1{count: make(map[string]int)}
    var wg sync.WaitGroup
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            counter1.Inc("key")
        }()
    }
    wg.Wait()
    fmt.Println("Counter1:", counter1.Value("key"))

    counter2 := NewSafeCounter2()
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            counter2.Inc("key")
        }()
    }
    wg.Wait()
    fmt.Println("Counter2:", counter2.Value("key"))
}

Wrap-up: Days 10–11 checklist

  • go launches a goroutine
  • sync.WaitGroup waits for groups
  • Channel ops <-, close, range
  • Buffered vs unbuffered
  • select, timeouts, non-blocking default
  • Worker pool and pipeline patterns
  • Four exercises

C++ → Go

  C++                       Go
  std::thread               go statement
  join()                    sync.WaitGroup
  mutex                     sync.Mutex or channels
  shared memory + locks     prefer channels

Concurrency vs parallelism

graph TD
    A[Concurrency] --> B[Structure: many tasks at once]
    C[Parallelism] --> D[Execution: many tasks at once on cores]
    B --> E[Goroutines & channels]
    D --> F[Multi-core]
    E --> G[GOMAXPROCS]
    F --> G

Model:

  • Concurrency: structuring work (goroutines, channels)
  • Parallelism: actually running simultaneously—runtime maps goroutines to threads

Next

Go modules & testing—dependency management and go test.


📚 Series navigation

← #05 Errors | 📑 Index | #07 Testing →



TL;DR: Goroutines are cheap; channels are safe; select is powerful—Go’s concurrency story is simpler than raw pthreads for most servers.

  • [Go #05] Error handling
  • [Go #09] Context & graceful shutdown
  • Two-week Go curriculum
  • C++ std::thread basics

Keywords

Go goroutine, Go channel, select golang, concurrent programming, Golang tutorial, C++ thread comparison, buffered channel.

Practical tips

Debugging

  • Reproduce deadlocks with small programs; check for missing receive or WaitGroup.Done.

Performance

  • Profile before tuning GOMAXPROCS or buffer sizes.

Code review

  • Document channel ownership (who closes).

Field checklist

Before coding

  • Is a channel the clearest sync primitive?
  • Who closes channels?

While coding

  • No send on closed channel?
  • WaitGroup balanced?

At review

  • Cancellation (context) where needed?

FAQ

Q. Practical use?

A. HTTP servers, pipelines, background workers—anywhere you’d use a thread pool in C++ but want simpler wiring.

Q. Prerequisites?

A. #05 for errors; #04 for interfaces.

Q. Deeper?

A. go.dev/blog (concurrency articles), cppreference.