A Practical Guide to Go Concurrency


In modern software development, concurrency is a key technique for improving application performance and responsiveness. Go is known for its native concurrency support; this guide takes a deep look at the core concepts, hands-on techniques, and best practices of concurrent programming in Go.


Table of Contents

  1. Concurrency fundamentals
  2. Goroutines in practice
  3. Channel communication patterns
  4. Synchronization primitives
  5. Concurrency patterns and design patterns
  6. Performance tuning and debugging
  7. Case studies
  8. Common pitfalls and best practices

1. Concurrency Fundamentals

1.1 Concurrency vs. Parallelism

Definitions

// Concurrency: handling multiple tasks at once (logically simultaneous)
// Parallelism: executing multiple tasks at once (physically simultaneous)

// Concurrency example: a single core juggling multiple tasks
func concurrentExample() {
    go task1() // task 1
    go task2() // task 2
    // The two tasks interleave via time slicing
}

// Parallelism example: multiple cores executing simultaneously
func parallelExample() {
    runtime.GOMAXPROCS(4) // allow up to 4 OS threads to run Go code
    go task1() // the scheduler may run these four
    go task2() // goroutines on different cores
    go task3() // at the same time
    go task4()
}

1.2 The Go Runtime Scheduler

The GMP model

// G: goroutine, a user-level thread
// M: machine, an OS thread
// P: processor, a logical processor

func demonstrateGMP() {
    // Inspect the current GOMAXPROCS setting
    fmt.Printf("GOMAXPROCS: %d\n", runtime.GOMAXPROCS(0))
    
    // Inspect the current number of goroutines
    fmt.Printf("NumGoroutine: %d\n", runtime.NumGoroutine())
    
    // Create several goroutines and observe the scheduler
    for i := 0; i < 10; i++ {
        go func(id int) {
            fmt.Printf("Goroutine %d is running\n", id)
            runtime.Gosched() // voluntarily yield the processor
        }(i)
    }
}
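
The runtime can also narrate the scheduler's behavior directly via the standard GODEBUG facility, with no code changes:

GODEBUG=schedtrace=1000 go run main.go   # print a scheduler summary every 1000 ms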

1.3 The Memory Model and Data Races

Memory visibility rules

import "sync"

var (
    data int
    flag bool
    mu   sync.Mutex
)

// Bad: a data race
func badMemoryModel() {
    // Writer goroutine
    go func() {
        data = 42    // write 1
        flag = true  // write 2
    }()
    
    // Reader goroutine
    go func() {
        if flag {           // read 1
            fmt.Println(data) // read 2; may observe a stale value
        }
    }()
}

// Good: a mutex guarantees memory visibility
func goodMemoryModel() {
    // Writer goroutine
    go func() {
        mu.Lock()
        data = 42
        flag = true
        mu.Unlock()
    }()
    
    // Reader goroutine
    go func() {
        mu.Lock()
        if flag {
            fmt.Println(data) // guaranteed to observe the latest value
        }
        mu.Unlock()
    }()
}

2. Goroutines in Practice

2.1 Goroutine Lifecycle Management

Creating and waiting

import (
    "fmt"
    "sync"
    "time"
)

// Manage goroutine lifecycles with a WaitGroup
func goroutineLifecycle() {
    var wg sync.WaitGroup
    
    // Start several worker goroutines
    for i := 0; i < 5; i++ {
        wg.Add(1) // increment the wait counter
        go func(id int) {
            defer wg.Done() // guarantee Done is called on exit
            
            fmt.Printf("Worker %d starting\n", id)
            time.Sleep(time.Second * time.Duration(id))
            fmt.Printf("Worker %d finished\n", id)
        }(i)
    }
    
    wg.Wait() // wait for all goroutines to finish
    fmt.Println("All workers completed")
}

Graceful shutdown pattern

import (
    "context"
    "fmt"
    "time"
)

// Graceful shutdown with Context
func gracefulShutdown() {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()
    
    // Start worker goroutines
    go worker(ctx, "worker-1")
    go worker(ctx, "worker-2")
    
    // Simulate running for 5 seconds, then shut down
    time.Sleep(5 * time.Second)
    fmt.Println("Shutting down...")
    cancel() // broadcast the cancellation signal
    
    // Give the goroutines a moment to clean up
    time.Sleep(1 * time.Second)
}

func worker(ctx context.Context, name string) {
    ticker := time.NewTicker(1 * time.Second)
    defer ticker.Stop()
    
    for {
        select {
        case <-ctx.Done():
            fmt.Printf("%s: shutting down\n", name)
            return
        case <-ticker.C:
            fmt.Printf("%s: working\n", name)
        }
    }
}
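
In a real service the cancellation usually comes from the OS rather than a timer. A minimal sketch, assuming the worker function defined above and using the standard signal.NotifyContext (Go 1.16+) to tie the same shutdown to Ctrl-C:

import (
    "context"
    "fmt"
    "os"
    "os/signal"
    "syscall"
)

func runUntilInterrupted() {
    // ctx is cancelled automatically on SIGINT/SIGTERM
    ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
    defer stop()

    go worker(ctx, "worker-1")

    <-ctx.Done() // block until a signal arrives
    fmt.Println("signal received, exiting")
}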

2.2 Goroutine Pools

A fixed-size worker pool

import (
    "fmt"
    "sync"
    "time"
)

type WorkerPool struct {
    workerCount int
    jobQueue    chan Job
    wg          sync.WaitGroup
}

type Job struct {
    ID   int
    Data string
}

func NewWorkerPool(workerCount, queueSize int) *WorkerPool {
    return &WorkerPool{
        workerCount: workerCount,
        jobQueue:    make(chan Job, queueSize),
    }
}

func (wp *WorkerPool) Start() {
    for i := 0; i < wp.workerCount; i++ {
        wp.wg.Add(1)
        go wp.worker(i)
    }
}

func (wp *WorkerPool) worker(id int) {
    defer wp.wg.Done()
    
    for job := range wp.jobQueue {
        fmt.Printf("Worker %d processing job %d: %s\n", 
                   id, job.ID, job.Data)
        time.Sleep(time.Millisecond * 500) // simulate work
    }
}

func (wp *WorkerPool) Submit(job Job) {
    wp.jobQueue <- job
}

func (wp *WorkerPool) Stop() {
    close(wp.jobQueue)
    wp.wg.Wait()
}

// Usage example
func demonstrateWorkerPool() {
    pool := NewWorkerPool(3, 10)
    pool.Start()
    
    // Submit jobs
    for i := 0; i < 10; i++ {
        pool.Submit(Job{
            ID:   i,
            Data: fmt.Sprintf("task-%d", i),
        })
    }
    
    pool.Stop()
}

2.3 Dynamic Goroutine Management

Load-based scale-up and scale-down

import (
    "fmt"
    "sync"
    "sync/atomic"
    "time"
)

type DynamicPool struct {
    minWorkers    int
    maxWorkers    int
    currentWorkers int64
    jobQueue      chan Job
    workerQueue   chan chan Job
    quit          chan struct{}
    wg            sync.WaitGroup
}

func NewDynamicPool(min, max, queueSize int) *DynamicPool {
    return &DynamicPool{
        minWorkers:  min,
        maxWorkers:  max,
        jobQueue:    make(chan Job, queueSize),
        workerQueue: make(chan chan Job, max),
        quit:        make(chan struct{}),
    }
}

func (dp *DynamicPool) Start() {
    // Start the minimum number of workers
    for i := 0; i < dp.minWorkers; i++ {
        dp.addWorker()
    }
    
    // Start the dispatcher
    go dp.dispatcher()
    
    // Start the monitor
    go dp.monitor()
}

func (dp *DynamicPool) addWorker() {
    // Increment first, then check: a load-then-add sequence would let
    // concurrent callers race past the maxWorkers limit
    if atomic.AddInt64(&dp.currentWorkers, 1) > int64(dp.maxWorkers) {
        atomic.AddInt64(&dp.currentWorkers, -1)
        return
    }
    
    dp.wg.Add(1)
    
    go func() {
        defer dp.wg.Done()
        defer atomic.AddInt64(&dp.currentWorkers, -1)
        
        jobChan := make(chan Job)
        
        for {
            // Register with the worker queue
            select {
            case dp.workerQueue <- jobChan:
            case <-dp.quit:
                return
            }
            
            // Wait for a job
            select {
            case job := <-jobChan:
                // Process the job
                fmt.Printf("Processing job %d\n", job.ID)
                time.Sleep(time.Millisecond * 100)
            case <-dp.quit:
                return
            }
        }
    }()
}

func (dp *DynamicPool) dispatcher() {
    for {
        select {
        case job := <-dp.jobQueue:
            // Hand the job to an available worker
            select {
            case workerJobQueue := <-dp.workerQueue:
                workerJobQueue <- job
            default:
                // No worker available; try to create one
                if atomic.LoadInt64(&dp.currentWorkers) < int64(dp.maxWorkers) {
                    dp.addWorker()
                }
                // Requeue the job and try again
                go func() {
                    dp.jobQueue <- job
                }()
            }
        case <-dp.quit:
            return
        }
    }
}

func (dp *DynamicPool) monitor() {
    ticker := time.NewTicker(5 * time.Second)
    defer ticker.Stop()
    
    for {
        select {
        case <-ticker.C:
            queueLen := len(dp.jobQueue)
            workers := atomic.LoadInt64(&dp.currentWorkers)
            
            fmt.Printf("Queue length: %d, Workers: %d\n", queueLen, workers)
            
            // Simple scale-up heuristic
            if queueLen > 5 && workers < int64(dp.maxWorkers) {
                dp.addWorker()
            }
            
        case <-dp.quit:
            return
        }
    }
}

func (dp *DynamicPool) Submit(job Job) {
    select {
    case dp.jobQueue <- job:
    default:
        fmt.Println("Job queue is full, dropping job")
    }
}

func (dp *DynamicPool) Stop() {
    close(dp.quit)
    dp.wg.Wait()
}
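
A minimal usage sketch of the pool above; the sizes and the sleep are arbitrary, for illustration only:

func demonstrateDynamicPool() {
    pool := NewDynamicPool(2, 8, 16) // min 2, max 8 workers, queue of 16
    pool.Start()

    for i := 0; i < 20; i++ {
        pool.Submit(Job{ID: i, Data: fmt.Sprintf("task-%d", i)})
    }

    time.Sleep(3 * time.Second) // let the workers drain the queue
    pool.Stop()
}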

3. Channel Communication Patterns

3.1 Channel Basics

Creation and basic use

import (
    "fmt"
    "time"
)

// Unbuffered channel
func unbufferedChannel() {
    ch := make(chan string)
    
    go func() {
        ch <- "Hello" // send blocks until there is a receiver
    }()
    
    msg := <-ch // receive blocks until there is a sender
    fmt.Println(msg)
}

// Buffered channel
func bufferedChannel() {
    ch := make(chan int, 3) // buffer of size 3
    
    // Three sends succeed without blocking
    ch <- 1
    ch <- 2
    ch <- 3
    
    // Receive the values
    fmt.Println(<-ch) // 1
    fmt.Println(<-ch) // 2
    fmt.Println(<-ch) // 3
}

// Channel direction
func channelDirections() {
    ch := make(chan string, 1)
    
    // Send-only view of the channel
    go sender(ch)
    
    // Receive-only view of the channel
    go receiver(ch)
    
    time.Sleep(time.Second)
}

func sender(ch chan<- string) { // can only send
    ch <- "message"
}

func receiver(ch <-chan string) { // can only receive
    msg := <-ch
    fmt.Println("Received:", msg)
}

3.2 Select and Non-Blocking Operations

Basic select usage

import (
    "fmt"
    "time"
)

func basicSelect() {
    ch1 := make(chan string)
    ch2 := make(chan string)
    
    go func() {
        time.Sleep(1 * time.Second)
        ch1 <- "channel 1"
    }()
    
    go func() {
        time.Sleep(2 * time.Second)
        ch2 <- "channel 2"
    }()
    
    for i := 0; i < 2; i++ {
        select {
        case msg1 := <-ch1:
            fmt.Println("Received from ch1:", msg1)
        case msg2 := <-ch2:
            fmt.Println("Received from ch2:", msg2)
        }
    }
}

// Timeout control
func selectWithTimeout() {
    ch := make(chan string)
    
    go func() {
        time.Sleep(3 * time.Second)
        ch <- "too late"
    }()
    
    select {
    case msg := <-ch:
        fmt.Println("Received:", msg)
    case <-time.After(2 * time.Second):
        fmt.Println("Timeout!")
    }
}

// Non-blocking operations
func nonBlockingOperations() {
    ch := make(chan string, 1)
    
    // 非阻塞发送
    select {
    case ch <- "message":
        fmt.Println("Sent message")
    default:
        fmt.Println("Channel is full")
    }
    
    // 非阻塞接收
    select {
    case msg := <-ch:
        fmt.Println("Received:", msg)
    default:
        fmt.Println("No message available")
    }
}

3.3 Channel Patterns

Producer-consumer

import (
    "fmt"
    "math/rand"
    "sync"
    "time"
)

type Producer struct {
    id     int
    output chan<- int
}

func (p *Producer) Produce(count int) {
    for i := 0; i < count; i++ {
        value := rand.Intn(100)
        p.output <- value
        fmt.Printf("Producer %d produced: %d\n", p.id, value)
        time.Sleep(time.Millisecond * 500)
    }
}

type Consumer struct {
    id    int
    input <-chan int
}

func (c *Consumer) Consume() {
    for value := range c.input {
        fmt.Printf("Consumer %d consumed: %d\n", c.id, value)
        time.Sleep(time.Millisecond * 300)
    }
}

func producerConsumerDemo() {
    buffer := make(chan int, 10)
    var producers, consumers sync.WaitGroup
    
    // Start the producers
    for i := 0; i < 2; i++ {
        producers.Add(1)
        go func(id int) {
            defer producers.Done()
            producer := &Producer{id: id, output: buffer}
            producer.Produce(5)
        }(i)
    }
    
    // Start the consumers
    for i := 0; i < 3; i++ {
        consumers.Add(1)
        go func(id int) {
            defer consumers.Done()
            consumer := &Consumer{id: id, input: buffer}
            consumer.Consume()
        }(i)
    }
    
    // Close the channel once all producers are done; a single shared
    // WaitGroup would deadlock here, because the consumers cannot
    // finish until the channel is closed
    producers.Wait()
    close(buffer)
    
    // Wait for the consumers to drain the buffer
    consumers.Wait()
}

The fan-out / fan-in pattern

import (
    "fmt"
    "sync"
)

// Fan-out: distribute work across multiple goroutines
func fanOut(input <-chan int, workers int) []<-chan int {
    outputs := make([]<-chan int, workers)
    
    for i := 0; i < workers; i++ {
        output := make(chan int)
        outputs[i] = output
        
        go func(out chan<- int) {
            defer close(out)
            for n := range input {
                // Process the value (here: just square it)
                result := n * n
                out <- result
            }
        }(output)
    }
    
    return outputs
}

// Fan-in: merge results from multiple goroutines
func fanIn(inputs ...<-chan int) <-chan int {
    output := make(chan int)
    var wg sync.WaitGroup
    
    // Start one goroutine per input channel
    for _, input := range inputs {
        wg.Add(1)
        go func(ch <-chan int) {
            defer wg.Done()
            for n := range ch {
                output <- n
            }
        }(input)
    }
    
    // Close the output channel once all inputs are drained
    go func() {
        wg.Wait()
        close(output)
    }()
    
    return output
}

func fanOutFanInDemo() {
    // Create the input stream
    input := make(chan int)
    go func() {
        defer close(input)
        for i := 1; i <= 10; i++ {
            input <- i
        }
    }()
    
    // Fan out to 3 workers
    outputs := fanOut(input, 3)
    
    // Fan the results back in
    result := fanIn(outputs...)
    
    // Collect the results
    for n := range result {
        fmt.Printf("Result: %d\n", n)
    }
}

The pipeline pattern

import (
    "fmt"
)

// Stage 1: generate numbers
func generate(nums ...int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for _, n := range nums {
            out <- n
        }
    }()
    return out
}

// Stage 2: square each value
func square(input <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for n := range input {
            out <- n * n
        }
    }()
    return out
}

// Stage 3: keep only even values
func filterEven(input <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for n := range input {
            if n%2 == 0 {
                out <- n
            }
        }
    }()
    return out
}

func pipelineDemo() {
    // Assemble the pipeline
    numbers := generate(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
    squared := square(numbers)
    filtered := filterEven(squared)
    
    // Consume the final stage
    for result := range filtered {
        fmt.Printf("Final result: %d\n", result)
    }
}
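
These stages run until their input is drained; if a downstream consumer stops early, the upstream goroutines block forever on their sends. The usual remedy (as in the Go blog's pipelines article) is to pass a done channel into every stage. A minimal sketch of a cancellable stage, reusing the shape of square above:

// squareCancellable stops as soon as done is closed, even if the
// downstream consumer never drains out
func squareCancellable(done <-chan struct{}, input <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for n := range input {
            select {
            case out <- n * n:
            case <-done: // the consumer gave up; exit without leaking
                return
            }
        }
    }()
    return out
}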

3.4 Closing Channels Safely

Safe close patterns

import (
    "fmt"
    "sync"
)

// Single sender, multiple receivers
func singleSenderMultiReceiver() {
    dataCh := make(chan int, 5)
    done := make(chan struct{})
    var wg sync.WaitGroup
    
    // The single sender
    go func() {
        defer close(dataCh) // the sender owns the close
        for i := 0; i < 10; i++ {
            select {
            case dataCh <- i:
                fmt.Printf("Sent: %d\n", i)
            case <-done:
                return // early exit
            }
        }
    }()
    
    // Multiple receivers
    for i := 0; i < 3; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for {
                select {
                case data, ok := <-dataCh:
                    if !ok {
                        fmt.Printf("Receiver %d: channel closed\n", id)
                        return
                    }
                    fmt.Printf("Receiver %d got: %d\n", id, data)
                case <-done:
                    return
                }
            }
        }(i)
    }
    
    wg.Wait()
}

// Multiple senders, single receiver
func multiSenderSingleReceiver() {
    dataCh := make(chan int, 5)
    done := make(chan struct{})
    var wg sync.WaitGroup
    
    // Multiple senders
    for i := 0; i < 3; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for j := 0; j < 5; j++ {
                select {
                case dataCh <- id*10+j:
                    fmt.Printf("Sender %d sent: %d\n", id, id*10+j)
                case <-done:
                    return
                }
            }
        }(i)
    }
    
    // Close the channel once all senders are done
    go func() {
        wg.Wait()
        close(dataCh)
    }()
    
    // The single receiver
    for data := range dataCh {
        fmt.Printf("Received: %d\n", data)
    }
}
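
With multiple senders and multiple receivers, no single sender can safely close the data channel. One option, if early termination must be signalled from either side, is to never close the data channel at all and instead close a dedicated stop channel, guarded by sync.Once so concurrent callers are safe. A minimal sketch (the type and names are illustrative):

type SafeCloser struct {
    once sync.Once
    stop chan struct{} // closed exactly once to broadcast "stop"
}

func NewSafeCloser() *SafeCloser {
    return &SafeCloser{stop: make(chan struct{})}
}

// Close may be called from any goroutine, any number of times
func (sc *SafeCloser) Close() {
    sc.once.Do(func() { close(sc.stop) })
}

// Done exposes the broadcast channel for select statements
func (sc *SafeCloser) Done() <-chan struct{} {
    return sc.stop
}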

4. Synchronization Primitives

4.1 Mutex

Basic usage

import (
    "fmt"
    "sync"
    "time"
)

type SafeCounter struct {
    mutex sync.Mutex
    count int
}

func (c *SafeCounter) Increment() {
    c.mutex.Lock()
    defer c.mutex.Unlock() // guarantee the unlock
    c.count++
}

func (c *SafeCounter) GetCount() int {
    c.mutex.Lock()
    defer c.mutex.Unlock()
    return c.count
}

func mutexDemo() {
    counter := &SafeCounter{}
    var wg sync.WaitGroup
    
    // Increment concurrently from several goroutines
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := 0; j < 1000; j++ {
                counter.Increment()
            }
        }()
    }
    
    wg.Wait()
    fmt.Printf("Final count: %d\n", counter.GetCount()) // 输出: 10000
}

RWMutex: the read-write lock

import (
    "fmt"
    "sync"
    "time"
)

type SafeMap struct {
    mutex sync.RWMutex
    data  map[string]int
}

func NewSafeMap() *SafeMap {
    return &SafeMap{
        data: make(map[string]int),
    }
}

func (sm *SafeMap) Set(key string, value int) {
    sm.mutex.Lock()         // write lock
    defer sm.mutex.Unlock()
    sm.data[key] = value
}

func (sm *SafeMap) Get(key string) (int, bool) {
    sm.mutex.RLock()        // read lock
    defer sm.mutex.RUnlock()
    value, exists := sm.data[key]
    return value, exists
}

func (sm *SafeMap) GetAll() map[string]int {
    sm.mutex.RLock()
    defer sm.mutex.RUnlock()
    
    // Return a copy to guard against concurrent modification
    result := make(map[string]int)
    for k, v := range sm.data {
        result[k] = v
    }
    return result
}

func rwMutexDemo() {
    safeMap := NewSafeMap()
    var wg sync.WaitGroup
    
    // Start the writers
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for j := 0; j < 10; j++ {
                key := fmt.Sprintf("key-%d-%d", id, j)
                safeMap.Set(key, id*10+j)
                time.Sleep(time.Millisecond * 10)
            }
        }(i)
    }
    
    // Start the readers
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for j := 0; j < 20; j++ {
                key := fmt.Sprintf("key-%d-%d", id%5, j%10)
                if value, exists := safeMap.Get(key); exists {
                    fmt.Printf("Reader %d found %s = %d\n", id, key, value)
                }
                time.Sleep(time.Millisecond * 5)
            }
        }(i)
    }
    
    wg.Wait()
    fmt.Printf("Final map size: %d\n", len(safeMap.GetAll()))
}
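
For read-heavy maps the standard library also offers sync.Map, which can avoid lock contention entirely; it trades away type safety, so the RWMutex wrapper above remains the common default. A quick sketch:

func syncMapSketch() {
    var m sync.Map // the zero value is ready to use
    
    m.Store("answer", 42)              // write
    if v, ok := m.Load("answer"); ok { // read
        fmt.Println(v)
    }
    m.Range(func(k, v any) bool { // iterate; return false to stop
        fmt.Println(k, v)
        return true
    })
}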

4.2 Atomic Operations

Basic atomics

import (
    "fmt"
    "sync"
    "sync/atomic"
)

type AtomicCounter struct {
    count int64
}

func (ac *AtomicCounter) Increment() {
    atomic.AddInt64(&ac.count, 1)
}

func (ac *AtomicCounter) Decrement() {
    atomic.AddInt64(&ac.count, -1)
}

func (ac *AtomicCounter) GetCount() int64 {
    return atomic.LoadInt64(&ac.count)
}

func (ac *AtomicCounter) SetCount(value int64) {
    atomic.StoreInt64(&ac.count, value)
}

func (ac *AtomicCounter) CompareAndSwap(old, new int64) bool {
    return atomic.CompareAndSwapInt64(&ac.count, old, new)
}

func atomicDemo() {
    counter := &AtomicCounter{}
    var wg sync.WaitGroup
    
    // Start the incrementers
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := 0; j < 1000; j++ {
                counter.Increment()
            }
        }()
    }
    
    // Start the decrementers
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := 0; j < 500; j++ {
                counter.Decrement()
            }
        }()
    }
    
    wg.Wait()
    fmt.Printf("Final count: %d\n", counter.GetCount()) // 输出: 7500
}

Atomic pointer operations

import (
    "fmt"
    "sync"
    "sync/atomic"
    "unsafe"
)

type Config struct {
    Host string
    Port int
}

type ConfigManager struct {
    config unsafe.Pointer
}

func NewConfigManager(config *Config) *ConfigManager {
    cm := &ConfigManager{}
    atomic.StorePointer(&cm.config, unsafe.Pointer(config))
    return cm
}

func (cm *ConfigManager) GetConfig() *Config {
    return (*Config)(atomic.LoadPointer(&cm.config))
}

func (cm *ConfigManager) UpdateConfig(newConfig *Config) {
    atomic.StorePointer(&cm.config, unsafe.Pointer(newConfig))
}

func atomicPointerDemo() {
    cm := NewConfigManager(&Config{Host: "localhost", Port: 8080})
    var wg sync.WaitGroup
    
    // Start the readers
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for j := 0; j < 100; j++ {
                config := cm.GetConfig()
                fmt.Printf("Reader %d: %s:%d\n", id, config.Host, config.Port)
            }
        }(i)
    }
    
    // Start the updater
    wg.Add(1)
    go func() {
        defer wg.Done()
        for i := 0; i < 10; i++ {
            newConfig := &Config{
                Host: fmt.Sprintf("host-%d", i),
                Port: 8080 + i,
            }
            cm.UpdateConfig(newConfig)
            fmt.Printf("Updated config to: %s:%d\n", newConfig.Host, newConfig.Port)
        }
    }()
    
    wg.Wait()
}
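
The unsafe.Pointer dance above predates atomic.Value; since Go 1.4 the same read-mostly config pattern can be written without unsafe at all (and since Go 1.19, the generic atomic.Pointer[T] makes it fully type-safe). A sketch with atomic.Value, reusing the Config type above:

type ValueConfigManager struct {
    config atomic.Value // holds a *Config
}

func NewValueConfigManager(c *Config) *ValueConfigManager {
    cm := &ValueConfigManager{}
    cm.config.Store(c) // store before any Load to avoid a nil assertion
    return cm
}

func (cm *ValueConfigManager) GetConfig() *Config {
    return cm.config.Load().(*Config)
}

func (cm *ValueConfigManager) UpdateConfig(c *Config) {
    cm.config.Store(c)
}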

4.3 Cond: Condition Variables

Basic condition variable usage

import (
    "fmt"
    "sync"
    "time"
)

type Queue struct {
    mutex sync.Mutex
    cond  *sync.Cond
    items []int
    maxSize int
}

func NewQueue(maxSize int) *Queue {
    q := &Queue{
        items: make([]int, 0),
        maxSize: maxSize,
    }
    q.cond = sync.NewCond(&q.mutex)
    return q
}

func (q *Queue) Put(item int) {
    q.mutex.Lock()
    defer q.mutex.Unlock()
    
    // Wait until the queue has room
    for len(q.items) >= q.maxSize {
        fmt.Printf("Queue full, waiting...\n")
        q.cond.Wait()
    }
    
    q.items = append(q.items, item)
    fmt.Printf("Added item: %d, queue size: %d\n", item, len(q.items))
    
    // Wake waiting consumers
    q.cond.Broadcast()
}

func (q *Queue) Get() int {
    q.mutex.Lock()
    defer q.mutex.Unlock()
    
    // Wait until the queue is non-empty
    for len(q.items) == 0 {
        fmt.Printf("Queue empty, waiting...\n")
        q.cond.Wait()
    }
    
    item := q.items[0]
    q.items = q.items[1:]
    fmt.Printf("Removed item: %d, queue size: %d\n", item, len(q.items))
    
    // Wake waiting producers
    q.cond.Broadcast()
    
    return item
}

func condDemo() {
    queue := NewQueue(3)
    var wg sync.WaitGroup
    
    // Start the producers
    for i := 0; i < 2; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for j := 0; j < 5; j++ {
                item := id*10 + j
                queue.Put(item)
                time.Sleep(time.Millisecond * 100)
            }
        }(i)
    }
    
    // Start the consumers
    for i := 0; i < 3; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for j := 0; j < 3; j++ {
                item := queue.Get()
                fmt.Printf("Consumer %d got: %d\n", id, item)
                time.Sleep(time.Millisecond * 200)
            }
        }(i)
    }
    
    wg.Wait()
}

4.4 Once: One-Time Execution

Run code exactly once

import (
    "fmt"
    "sync"
)

type Singleton struct {
    data string
}

var (
    instance *Singleton
    once     sync.Once
)

func GetInstance() *Singleton {
    once.Do(func() {
        fmt.Println("Creating singleton instance...")
        instance = &Singleton{data: "singleton data"}
    })
    return instance
}

// Config initialization example
type Config struct {
    DatabaseURL string
    APIKey      string
}

var (
    config *Config
    configOnce sync.Once
)

func InitConfig() {
    configOnce.Do(func() {
        fmt.Println("Initializing configuration...")
        config = &Config{
            DatabaseURL: "postgresql://localhost:5432/mydb",
            APIKey:      "secret-api-key",
        }
    })
}

func GetConfig() *Config {
    InitConfig() // make sure the config is initialized
    return config
}

func onceDemo() {
    var wg sync.WaitGroup
    
    // Several goroutines race to fetch the singleton
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            instance := GetInstance()
            fmt.Printf("Goroutine %d got instance: %p\n", id, instance)
        }(i)
    }
    
    wg.Wait()
    
    // Several goroutines race to initialize the config
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            config := GetConfig()
            fmt.Printf("Goroutine %d got config: %s\n", id, config.DatabaseURL)
        }(i)
    }
    
    wg.Wait()
}

4.5 WaitGroup

Advanced WaitGroup patterns

import (
    "context"
    "fmt"
    "sync"
    "time"
)

// WaitGroup with a timeout
func waitGroupWithTimeout(wg *sync.WaitGroup, timeout time.Duration) bool {
    done := make(chan struct{})
    
    go func() {
        wg.Wait()
        close(done)
    }()
    
    select {
    case <-done:
        return true
    case <-time.After(timeout):
        return false
    }
}

// WaitGroup tied to a Context
func waitGroupWithContext(ctx context.Context, wg *sync.WaitGroup) error {
    done := make(chan struct{})
    
    go func() {
        wg.Wait()
        close(done)
    }()
    
    select {
    case <-done:
        return nil
    case <-ctx.Done():
        return ctx.Err()
    }
}

// Nested WaitGroups
func nestedWaitGroupDemo() {
    var mainWG sync.WaitGroup
    
    for i := 0; i < 3; i++ {
        mainWG.Add(1)
        go func(groupID int) {
            defer mainWG.Done()
            
            var subWG sync.WaitGroup
            fmt.Printf("Group %d starting sub-tasks\n", groupID)
            
            // Each group starts several subtasks
            for j := 0; j < 5; j++ {
                subWG.Add(1)
                go func(taskID int) {
                    defer subWG.Done()
                    fmt.Printf("Group %d, Task %d executing\n", groupID, taskID)
                    time.Sleep(time.Millisecond * 100)
                }(j)
            }
            
            subWG.Wait()
            fmt.Printf("Group %d completed all sub-tasks\n", groupID)
        }(i)
    }
    
    // Wait with a timeout
    if waitGroupWithTimeout(&mainWG, 5*time.Second) {
        fmt.Println("All groups completed within timeout")
    } else {
        fmt.Println("Timeout waiting for groups to complete")
    }
}

5. Concurrency Patterns and Design Patterns

5.1 The Worker Pool Pattern

A load-balancing worker pool

import (
    "context"
    "fmt"
    "sync"
    "sync/atomic"
    "time"
)

type Task interface {
    Execute() error
    GetPriority() int
}

type SimpleTask struct {
    ID       int
    Priority int
    Data     string
}

func (t *SimpleTask) Execute() error {
    fmt.Printf("Executing task %d: %s\n", t.ID, t.Data)
    time.Sleep(time.Millisecond * 100) // simulate work
    return nil
}

func (t *SimpleTask) GetPriority() int {
    return t.Priority
}

type LoadBalancedPool struct {
    workers    []Worker
    taskQueue  chan Task
    workerLoad []int64 // per-worker load
    ctx        context.Context
    cancel     context.CancelFunc
    wg         sync.WaitGroup
}

type Worker struct {
    id        int
    taskChan  chan Task
    pool      *LoadBalancedPool
    busyCount int64
}

func NewLoadBalancedPool(workerCount, queueSize int) *LoadBalancedPool {
    ctx, cancel := context.WithCancel(context.Background())
    
    pool := &LoadBalancedPool{
        workers:    make([]Worker, workerCount),
        taskQueue:  make(chan Task, queueSize),
        workerLoad: make([]int64, workerCount),
        ctx:        ctx,
        cancel:     cancel,
    }
    
    // Initialize the workers
    for i := 0; i < workerCount; i++ {
        pool.workers[i] = Worker{
            id:       i,
            taskChan: make(chan Task, 10),
            pool:     pool,
        }
    }
    
    return pool
}

func (p *LoadBalancedPool) Start() {
    // Start the dispatcher
    p.wg.Add(1)
    go p.dispatcher()
    
    // Start all the workers
    for i := range p.workers {
        p.wg.Add(1)
        go p.workers[i].start(p.ctx)
    }
}

func (p *LoadBalancedPool) dispatcher() {
    defer p.wg.Done()
    
    for {
        select {
        case task := <-p.taskQueue:
            // Find the least-loaded worker
            minLoad := atomic.LoadInt64(&p.workerLoad[0])
            minIndex := 0
            
            for i := 1; i < len(p.workerLoad); i++ {
                load := atomic.LoadInt64(&p.workerLoad[i])
                if load < minLoad {
                    minLoad = load
                    minIndex = i
                }
            }
            
            // Hand the task to that worker
            select {
            case p.workers[minIndex].taskChan <- task:
                atomic.AddInt64(&p.workerLoad[minIndex], 1)
            case <-p.ctx.Done():
                return
            }
            
        case <-p.ctx.Done():
            return
        }
    }
}

func (w *Worker) start(ctx context.Context) {
    defer w.pool.wg.Done()
    
    for {
        select {
        case task := <-w.taskChan:
            atomic.AddInt64(&w.busyCount, 1)
            err := task.Execute()
            if err != nil {
                fmt.Printf("Worker %d task failed: %v\n", w.id, err)
            }
            atomic.AddInt64(&w.busyCount, -1)
            atomic.AddInt64(&w.pool.workerLoad[w.id], -1)
            
        case <-ctx.Done():
            return
        }
    }
}

func (p *LoadBalancedPool) Submit(task Task) error {
    select {
    case p.taskQueue <- task:
        return nil
    case <-p.ctx.Done():
        return p.ctx.Err()
    default:
        return fmt.Errorf("task queue is full")
    }
}

func (p *LoadBalancedPool) Stop() {
    p.cancel()
    p.wg.Wait()
}

func (p *LoadBalancedPool) GetStats() map[string]interface{} {
    stats := make(map[string]interface{})
    stats["workers"] = len(p.workers)
    stats["queue_length"] = len(p.taskQueue)
    
    loads := make([]int64, len(p.workerLoad))
    for i := range p.workerLoad {
        // Load via the slice element's address; ranging by value would
        // take the address of a local copy instead
        loads[i] = atomic.LoadInt64(&p.workerLoad[i])
    }
    stats["worker_loads"] = loads
    
    return stats
}
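
A short usage sketch of the pool; the sizes and the sleep are arbitrary, for illustration only:

func loadBalancedPoolDemo() {
    pool := NewLoadBalancedPool(4, 32)
    pool.Start()
    defer pool.Stop()

    for i := 0; i < 10; i++ {
        task := &SimpleTask{ID: i, Priority: 1, Data: fmt.Sprintf("job-%d", i)}
        if err := pool.Submit(task); err != nil {
            fmt.Printf("submit failed: %v\n", err)
        }
    }

    time.Sleep(2 * time.Second) // let the workers finish
    fmt.Printf("stats: %v\n", pool.GetStats())
}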

5.2 Publish-Subscribe

A type-safe event bus

import (
    "fmt"
    "reflect"
    "sync"
)

type EventBus struct {
    mutex       sync.RWMutex
    subscribers map[reflect.Type][]reflect.Value
}

func NewEventBus() *EventBus {
    return &EventBus{
        subscribers: make(map[reflect.Type][]reflect.Value),
    }
}

func (eb *EventBus) Subscribe(fn interface{}) error {
    fnType := reflect.TypeOf(fn)
    if fnType.Kind() != reflect.Func {
        return fmt.Errorf("subscriber must be a function")
    }
    
    if fnType.NumIn() != 1 {
        return fmt.Errorf("subscriber function must have exactly one parameter")
    }
    
    eventType := fnType.In(0)
    fnValue := reflect.ValueOf(fn)
    
    eb.mutex.Lock()
    defer eb.mutex.Unlock()
    
    eb.subscribers[eventType] = append(eb.subscribers[eventType], fnValue)
    return nil
}

func (eb *EventBus) Publish(event interface{}) {
    eventType := reflect.TypeOf(event)
    eventValue := reflect.ValueOf(event)
    
    eb.mutex.RLock()
    subscribers, exists := eb.subscribers[eventType]
    eb.mutex.RUnlock()
    
    if !exists {
        return
    }
    
    var wg sync.WaitGroup
    for _, subscriber := range subscribers {
        wg.Add(1)
        go func(fn reflect.Value) {
            defer wg.Done()
            defer func() {
                if r := recover(); r != nil {
                    fmt.Printf("Subscriber panic: %v\n", r)
                }
            }()
            
            fn.Call([]reflect.Value{eventValue})
        }(subscriber)
    }
    
    wg.Wait()
}

// Event type definitions
type UserCreatedEvent struct {
    UserID   int
    Username string
    Email    string
}

type OrderPlacedEvent struct {
    OrderID    int
    UserID     int
    TotalPrice float64
}

// Usage example
func pubSubDemo() {
    bus := NewEventBus()
    
    // Subscribe to user-created events
    bus.Subscribe(func(event UserCreatedEvent) {
        fmt.Printf("Email service: Sending welcome email to %s\n", event.Email)
    })
    
    bus.Subscribe(func(event UserCreatedEvent) {
        fmt.Printf("Analytics service: User %d created\n", event.UserID)
    })
    
    // Subscribe to order events
    bus.Subscribe(func(event OrderPlacedEvent) {
        fmt.Printf("Payment service: Processing payment for order %d\n", event.OrderID)
    })
    
    bus.Subscribe(func(event OrderPlacedEvent) {
        fmt.Printf("Inventory service: Updating stock for order %d\n", event.OrderID)
    })
    
    // Publish events
    bus.Publish(UserCreatedEvent{
        UserID:   1,
        Username: "john_doe",
        Email:    "john@example.com",
    })
    
    bus.Publish(OrderPlacedEvent{
        OrderID:    100,
        UserID:     1,
        TotalPrice: 99.99,
    })
}

5.3 The Circuit Breaker Pattern

An adaptive circuit breaker

import (
    "context"
    "errors"
    "fmt"
    "sync"
    "sync/atomic"
    "time"
)

type State int32

const (
    StateClosed State = iota
    StateOpen
    StateHalfOpen
)

type CircuitBreaker struct {
    maxRequests    uint32
    interval       time.Duration
    timeout        time.Duration
    readyToTrip    func(counts Counts) bool
    onStateChange  func(name string, from State, to State)
    
    mutex      sync.Mutex
    state      State
    generation uint64
    counts     Counts
    expiry     time.Time
    name       string
}

type Counts struct {
    Requests             uint32
    TotalSuccesses       uint32
    TotalFailures        uint32
    ConsecutiveSuccesses uint32
    ConsecutiveFailures  uint32
}

func (c *Counts) onRequest() {
    c.Requests++
}

func (c *Counts) onSuccess() {
    c.TotalSuccesses++
    c.ConsecutiveSuccesses++
    c.ConsecutiveFailures = 0
}

func (c *Counts) onFailure() {
    c.TotalFailures++
    c.ConsecutiveFailures++
    c.ConsecutiveSuccesses = 0
}

func (c *Counts) clear() {
    c.Requests = 0
    c.TotalSuccesses = 0
    c.TotalFailures = 0
    c.ConsecutiveSuccesses = 0
    c.ConsecutiveFailures = 0
}

type Settings struct {
    Name        string
    MaxRequests uint32
    Interval    time.Duration
    Timeout     time.Duration
    ReadyToTrip func(counts Counts) bool
    OnStateChange func(name string, from State, to State)
}

func NewCircuitBreaker(st Settings) *CircuitBreaker {
    cb := &CircuitBreaker{
        maxRequests:    st.MaxRequests,
        interval:       st.Interval,
        timeout:        st.Timeout,
        readyToTrip:    st.ReadyToTrip,
        onStateChange:  st.OnStateChange,
        name:           st.Name,
    }
    
    cb.toNewGeneration(time.Now())
    return cb
}

func (cb *CircuitBreaker) Execute(req func() (interface{}, error)) (interface{}, error) {
    generation, err := cb.beforeRequest()
    if err != nil {
        return nil, err
    }
    
    defer func() {
        e := recover()
        if e != nil {
            cb.afterRequest(generation, false)
            panic(e)
        }
    }()
    
    result, err := req()
    cb.afterRequest(generation, err == nil)
    return result, err
}

func (cb *CircuitBreaker) beforeRequest() (uint64, error) {
    cb.mutex.Lock()
    defer cb.mutex.Unlock()
    
    now := time.Now()
    state, generation := cb.currentState(now)
    
    if state == StateOpen {
        return generation, errors.New("circuit breaker is open")
    } else if state == StateHalfOpen && cb.counts.Requests >= cb.maxRequests {
        return generation, errors.New("too many requests")
    }
    
    cb.counts.onRequest()
    return generation, nil
}

func (cb *CircuitBreaker) afterRequest(before uint64, success bool) {
    cb.mutex.Lock()
    defer cb.mutex.Unlock()
    
    now := time.Now()
    state, generation := cb.currentState(now)
    if generation != before {
        return
    }
    
    if success {
        cb.onSuccess(state, now)
    } else {
        cb.onFailure(state, now)
    }
}

func (cb *CircuitBreaker) onSuccess(state State, now time.Time) {
    cb.counts.onSuccess()
    
    if state == StateHalfOpen && cb.counts.ConsecutiveSuccesses >= cb.maxRequests {
        cb.setState(StateClosed, now)
    }
}

func (cb *CircuitBreaker) onFailure(state State, now time.Time) {
    cb.counts.onFailure()
    
    if cb.readyToTrip != nil && cb.readyToTrip(cb.counts) {
        cb.setState(StateOpen, now)
    }
}

func (cb *CircuitBreaker) currentState(now time.Time) (State, uint64) {
    switch cb.state {
    case StateClosed:
        if !cb.expiry.IsZero() && cb.expiry.Before(now) {
            cb.toNewGeneration(now)
        }
    case StateOpen:
        if cb.expiry.Before(now) {
            cb.setState(StateHalfOpen, now)
        }
    }
    return cb.state, cb.generation
}

func (cb *CircuitBreaker) setState(state State, now time.Time) {
    if cb.state == state {
        return
    }
    
    prev := cb.state
    cb.state = state
    
    cb.toNewGeneration(now)
    
    if cb.onStateChange != nil {
        cb.onStateChange(cb.name, prev, state)
    }
}

func (cb *CircuitBreaker) toNewGeneration(now time.Time) {
    cb.generation++
    cb.counts.clear()
    
    var zero time.Time
    switch cb.state {
    case StateClosed:
        if cb.interval == 0 {
            cb.expiry = zero
        } else {
            cb.expiry = now.Add(cb.interval)
        }
    case StateOpen:
        cb.expiry = now.Add(cb.timeout)
    default: // StateHalfOpen
        cb.expiry = zero
    }
}

// Usage example
func circuitBreakerDemo() {
    // Simulate a flaky service
    var failureCount int64
    unreliableService := func() (interface{}, error) {
        count := atomic.AddInt64(&failureCount, 1)
        if count%3 == 0 {
            return "success", nil
        }
        return nil, errors.New("service failure")
    }
    
    cb := NewCircuitBreaker(Settings{
        Name:        "test-service",
        MaxRequests: 3,
        Interval:    time.Second * 5,
        Timeout:     time.Second * 10,
        ReadyToTrip: func(counts Counts) bool {
            return counts.ConsecutiveFailures > 2
        },
        OnStateChange: func(name string, from State, to State) {
            fmt.Printf("Circuit breaker '%s' changed from %v to %v\n", name, from, to)
        },
    })
    
    // Fire some requests
    for i := 0; i < 20; i++ {
        result, err := cb.Execute(unreliableService)
        if err != nil {
            fmt.Printf("Request %d failed: %v\n", i+1, err)
        } else {
            fmt.Printf("Request %d succeeded: %v\n", i+1, result)
        }
        time.Sleep(time.Millisecond * 500)
    }
}

5.4 The Rate Limiter Pattern

A token-bucket limiter

import (
    "context"
    "fmt"
    "sync"
    "time"
)

type TokenBucket struct {
    capacity   int           // bucket capacity
    tokens     int           // tokens currently available
    refillRate time.Duration // interval between refills
    mutex      sync.Mutex
    lastRefill time.Time
    ticker     *time.Ticker
    ctx        context.Context
    cancel     context.CancelFunc
}

func NewTokenBucket(capacity int, refillRate time.Duration) *TokenBucket {
    ctx, cancel := context.WithCancel(context.Background())
    
    tb := &TokenBucket{
        capacity:   capacity,
        tokens:     capacity,
        refillRate: refillRate,
        lastRefill: time.Now(),
        ctx:        ctx,
        cancel:     cancel,
    }
    
    tb.startRefilling()
    return tb
}

func (tb *TokenBucket) startRefilling() {
    tb.ticker = time.NewTicker(tb.refillRate)
    
    go func() {
        defer tb.ticker.Stop()
        for {
            select {
            case <-tb.ticker.C:
                tb.refill()
            case <-tb.ctx.Done():
                return
            }
        }
    }()
}

func (tb *TokenBucket) refill() {
    tb.mutex.Lock()
    defer tb.mutex.Unlock()
    
    now := time.Now()
    elapsed := now.Sub(tb.lastRefill)
    tokensToAdd := int(elapsed / tb.refillRate)
    
    if tokensToAdd > 0 {
        tb.tokens = min(tb.capacity, tb.tokens+tokensToAdd)
        tb.lastRefill = now
    }
}

func (tb *TokenBucket) Allow() bool {
    return tb.AllowN(1)
}

func (tb *TokenBucket) AllowN(n int) bool {
    tb.mutex.Lock()
    defer tb.mutex.Unlock()
    
    if tb.tokens >= n {
        tb.tokens -= n
        return true
    }
    return false
}

func (tb *TokenBucket) Wait(ctx context.Context) error {
    return tb.WaitN(ctx, 1)
}

func (tb *TokenBucket) WaitN(ctx context.Context, n int) error {
    for {
        if tb.AllowN(n) {
            return nil
        }
        
        select {
        case <-ctx.Done():
            return ctx.Err()
        case <-time.After(tb.refillRate / 10): // back off briefly, then retry
            continue
        }
    }
}

func (tb *TokenBucket) Stop() {
    tb.cancel()
}

func min(a, b int) int {
    if a < b {
        return a
    }
    return b
}

// A sliding-window limiter
type SlidingWindowLimiter struct {
    limit    int
    window   time.Duration
    requests []time.Time
    mutex    sync.Mutex
}

func NewSlidingWindowLimiter(limit int, window time.Duration) *SlidingWindowLimiter {
    return &SlidingWindowLimiter{
        limit:    limit,
        window:   window,
        requests: make([]time.Time, 0),
    }
}

func (swl *SlidingWindowLimiter) Allow() bool {
    swl.mutex.Lock()
    defer swl.mutex.Unlock()
    
    now := time.Now()
    cutoff := now.Add(-swl.window)
    
    // Drop requests that have fallen out of the window
    i := 0
    for i < len(swl.requests) && swl.requests[i].Before(cutoff) {
        i++
    }
    swl.requests = swl.requests[i:]
    
    // Reject if the window is full
    if len(swl.requests) >= swl.limit {
        return false
    }
    
    // Record the current request
    swl.requests = append(swl.requests, now)
    return true
}

// Usage example
func rateLimiterDemo() {
    // Token bucket
    fmt.Println("Token Bucket Demo:")
    bucket := NewTokenBucket(5, time.Millisecond*200)
    defer bucket.Stop()
    
    for i := 0; i < 15; i++ {
        if bucket.Allow() {
            fmt.Printf("Request %d: Allowed\n", i+1)
        } else {
            fmt.Printf("Request %d: Rate limited\n", i+1)
        }
        time.Sleep(time.Millisecond * 100)
    }
    
    fmt.Println("\nSliding Window Demo:")
    // 滑动窗口示例
    limiter := NewSlidingWindowLimiter(3, time.Second)
    
    for i := 0; i < 10; i++ {
        if limiter.Allow() {
            fmt.Printf("Request %d: Allowed\n", i+1)
        } else {
            fmt.Printf("Request %d: Rate limited\n", i+1)
        }
        time.Sleep(time.Millisecond * 300)
    }
}
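
Hand-rolled limiters are instructive, but for production use the semi-official golang.org/x/time/rate package implements a token bucket behind a very similar Allow/Wait surface. A minimal sketch:

import (
    "context"
    "fmt"

    "golang.org/x/time/rate"
)

func xTimeRateSketch() {
    // 10 tokens per second, bursts of up to 5
    limiter := rate.NewLimiter(rate.Limit(10), 5)

    if limiter.Allow() { // non-blocking check
        fmt.Println("request allowed")
    }

    // Blocking variant: wait until a token is available or ctx ends
    if err := limiter.Wait(context.Background()); err != nil {
        fmt.Println("wait cancelled:", err)
    }
}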

6. Performance Tuning and Debugging

6.1 Performance Monitoring

Collecting runtime statistics

import (
    "fmt"
    "runtime"
    "sync"
    "time"
)

type RuntimeStats struct {
    Timestamp    time.Time
    NumGoroutine int
    NumCPU       int
    GOMAXPROCS   int
    MemStats     runtime.MemStats
}

type PerformanceMonitor struct {
    stats   []RuntimeStats
    mutex   sync.RWMutex
    running bool
    stop    chan struct{}
}

func NewPerformanceMonitor() *PerformanceMonitor {
    return &PerformanceMonitor{
        stats: make([]RuntimeStats, 0),
        stop:  make(chan struct{}),
    }
}

func (pm *PerformanceMonitor) Start(interval time.Duration) {
    pm.mutex.Lock()
    if pm.running {
        pm.mutex.Unlock()
        return
    }
    pm.running = true
    pm.mutex.Unlock()
    
    go func() {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        
        for {
            select {
            case <-ticker.C:
                pm.collectStats()
            case <-pm.stop:
                return
            }
        }
    }()
}

func (pm *PerformanceMonitor) collectStats() {
    var memStats runtime.MemStats
    runtime.ReadMemStats(&memStats)
    
    stat := RuntimeStats{
        Timestamp:    time.Now(),
        NumGoroutine: runtime.NumGoroutine(),
        NumCPU:       runtime.NumCPU(),
        GOMAXPROCS:   runtime.GOMAXPROCS(0),
        MemStats:     memStats,
    }
    
    pm.mutex.Lock()
    pm.stats = append(pm.stats, stat)
    // Keep only the most recent 100 samples
    if len(pm.stats) > 100 {
        pm.stats = pm.stats[1:]
    }
    pm.mutex.Unlock()
}

func (pm *PerformanceMonitor) Stop() {
    pm.mutex.Lock()
    if !pm.running {
        pm.mutex.Unlock()
        return
    }
    pm.running = false
    pm.mutex.Unlock()
    
    close(pm.stop)
}

func (pm *PerformanceMonitor) GetCurrentStats() RuntimeStats {
    pm.mutex.RLock()
    defer pm.mutex.RUnlock()
    
    if len(pm.stats) == 0 {
        return RuntimeStats{}
    }
    return pm.stats[len(pm.stats)-1]
}

func (pm *PerformanceMonitor) PrintReport() {
    pm.mutex.RLock()
    defer pm.mutex.RUnlock()
    
    if len(pm.stats) == 0 {
        fmt.Println("No statistics available")
        return
    }
    
    latest := pm.stats[len(pm.stats)-1]
    fmt.Printf("=== Runtime Statistics Report ===\n")
    fmt.Printf("Timestamp: %v\n", latest.Timestamp.Format("2006-01-02 15:04:05"))
    fmt.Printf("Goroutines: %d\n", latest.NumGoroutine)
    fmt.Printf("CPUs: %d\n", latest.NumCPU)
    fmt.Printf("GOMAXPROCS: %d\n", latest.GOMAXPROCS)
    fmt.Printf("Memory Allocated: %s\n", formatBytes(latest.MemStats.Alloc))
    fmt.Printf("Total Allocations: %s\n", formatBytes(latest.MemStats.TotalAlloc))
    fmt.Printf("System Memory: %s\n", formatBytes(latest.MemStats.Sys))
    fmt.Printf("GC Cycles: %d\n", latest.MemStats.NumGC)
    fmt.Printf("Last GC: %v ago\n", time.Since(time.Unix(0, int64(latest.MemStats.LastGC))))
}

func formatBytes(bytes uint64) string {
    const unit = 1024
    if bytes < unit {
        return fmt.Sprintf("%d B", bytes)
    }
    div, exp := int64(unit), 0
    for n := bytes / unit; n >= unit; n /= unit {
        div *= unit
        exp++
    }
    return fmt.Sprintf("%.1f %cB", float64(bytes)/float64(div), "KMGTPE"[exp])
}

// Usage example
func monitoringDemo() {
    monitor := NewPerformanceMonitor()
    monitor.Start(time.Second)
    defer monitor.Stop()
    
    // Simulate some concurrent work
    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            // Simulate memory allocation
            data := make([]int, 1000)
            for j := range data {
                data[j] = j * id
            }
            time.Sleep(time.Millisecond * 100)
        }(i)
    }
    
    // Print statistics once per second
    for i := 0; i < 5; i++ {
        time.Sleep(time.Second)
        monitor.PrintReport()
        fmt.Println()
    }
    
    wg.Wait()
}

6.2 Profiling with pprof

Wiring up pprof

import (
    "context"
    "fmt"
    "net/http"
    _ "net/http/pprof"
    "runtime"
    "sync"
    "time"
)

// Start the pprof HTTP server
func startPprofServer(addr string) {
    go func() {
        fmt.Printf("pprof server starting on %s\n", addr)
        fmt.Printf("visit http://%s/debug/pprof/ to inspect the profiles\n", addr)
        if err := http.ListenAndServe(addr, nil); err != nil {
            fmt.Printf("pprof server error: %v\n", err)
        }
    }()
}
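
The HTTP endpoint suits long-running services; for a one-shot program the standard runtime/pprof package can write a CPU profile straight to a file, which go tool pprof then reads. A minimal sketch:

import (
    "os"
    "runtime/pprof"
)

func withCPUProfile(path string, fn func()) error {
    f, err := os.Create(path)
    if err != nil {
        return err
    }
    defer f.Close()

    if err := pprof.StartCPUProfile(f); err != nil {
        return err
    }
    defer pprof.StopCPUProfile()

    fn() // run the workload while the profiler samples it
    return nil
}

// Inspect afterwards with: go tool pprof cpu.out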

// A CPU-bound task
func cpuIntensiveTask(ctx context.Context, id int, result chan<- int) {
    defer close(result)
    
    sum := 0
    for i := 0; i < 1000000; i++ {
        select {
        case <-ctx.Done():
            return
        default:
            sum += i * i
        }
        
        if i%100000 == 0 {
            runtime.Gosched() // voluntarily yield the CPU
        }
    }
    
    result <- sum
}

// A memory-bound task
func memoryIntensiveTask(ctx context.Context, size int) [][]int {
    matrix := make([][]int, size)
    for i := range matrix {
        select {
        case <-ctx.Done():
            return nil
        default:
            matrix[i] = make([]int, size)
            for j := range matrix[i] {
                matrix[i][j] = i * j
            }
        }
    }
    return matrix
}

// A workload wired up for profiling
func profiledWorkload() {
    startPprofServer("localhost:6060")
    
    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()
    
    var wg sync.WaitGroup
    
    // CPU-bound tasks
    for i := 0; i < runtime.NumCPU(); i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            result := make(chan int)
            go cpuIntensiveTask(ctx, id, result)
            
            select {
            case sum := <-result:
                fmt.Printf("CPU task %d completed with sum: %d\n", id, sum)
            case <-ctx.Done():
                fmt.Printf("CPU task %d cancelled\n", id)
            }
        }(i)
    }
    
    // Memory-bound tasks
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            matrix := memoryIntensiveTask(ctx, 1000)
            if matrix != nil {
                fmt.Printf("Memory task %d completed with matrix size: %dx%d\n", 
                          id, len(matrix), len(matrix[0]))
            } else {
                fmt.Printf("Memory task %d cancelled\n", id)
            }
        }(i)
    }
    
    wg.Wait()
    fmt.Println("所有任务完成,保持pprof服务器运行...")
    time.Sleep(time.Minute) // 保持服务器运行以便分析
}

6.3 Data Race Detection

Races and how to detect them

import (
    "fmt"
    "sync"
    "sync/atomic"
)

// Bad: contains a data race
type UnsafeCounter struct {
    count int
}

func (c *UnsafeCounter) Increment() {
    c.count++ // data race!
}

func (c *UnsafeCounter) GetCount() int {
    return c.count // data race!
}

// Good: protected by a mutex
type SafeCounterMutex struct {
    mutex sync.Mutex
    count int
}

func (c *SafeCounterMutex) Increment() {
    c.mutex.Lock()
    defer c.mutex.Unlock()
    c.count++
}

func (c *SafeCounterMutex) GetCount() int {
    c.mutex.Lock()
    defer c.mutex.Unlock()
    return c.count
}

// Good: uses atomic operations
type SafeCounterAtomic struct {
    count int64
}

func (c *SafeCounterAtomic) Increment() {
    atomic.AddInt64(&c.count, 1)
}

func (c *SafeCounterAtomic) GetCount() int64 {
    return atomic.LoadInt64(&c.count)
}

// Race detection walkthrough
func raceDetectionDemo() {
    fmt.Println("=== Data race detection demo ===")
    fmt.Println("Run with 'go run -race main.go' to detect the races")
    
    // The unsafe counter (races under concurrency)
    unsafeCounter := &UnsafeCounter{}
    
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := 0; j < 1000; j++ {
                unsafeCounter.Increment()
            }
        }()
    }
    
    wg.Wait()
    fmt.Printf("不安全计数器结果: %d (期望: 10000)\n", unsafeCounter.GetCount())
    
    // The safe counter (mutex)
    safeCounterMutex := &SafeCounterMutex{}
    
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := 0; j < 1000; j++ {
                safeCounterMutex.Increment()
            }
        }()
    }
    
    wg.Wait()
    fmt.Printf("安全计数器(互斥锁)结果: %d\n", safeCounterMutex.GetCount())
    
    // 安全的计数器(使用原子操作)
    safeCounterAtomic := &SafeCounterAtomic{}
    
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := 0; j < 1000; j++ {
                safeCounterAtomic.Increment()
            }
        }()
    }
    
    wg.Wait()
    fmt.Printf("安全计数器(原子操作)结果: %d\n", safeCounterAtomic.GetCount())
}

6.4 Deadlock Detection and Avoidance

Deadlocks and their fixes

import (
    "context"
    "fmt"
    "sync"
    "time"
)

// Deadlock example 1: circular wait
func deadlockExample1() {
    fmt.Println("=== Deadlock example 1: circular wait ===")
    
    var mutex1, mutex2 sync.Mutex
    
    go func() {
        mutex1.Lock()
        fmt.Println("Goroutine 1: acquired lock 1")
        time.Sleep(time.Millisecond * 100)
        
        fmt.Println("Goroutine 1: trying to acquire lock 2")
        mutex2.Lock() // blocks
        fmt.Println("Goroutine 1: acquired lock 2")
        mutex2.Unlock()
        mutex1.Unlock()
    }()
    
    go func() {
        mutex2.Lock()
        fmt.Println("Goroutine 2: acquired lock 2")
        time.Sleep(time.Millisecond * 100)
        
        fmt.Println("Goroutine 2: trying to acquire lock 1")
        mutex1.Lock() // blocks
        fmt.Println("Goroutine 2: acquired lock 1")
        mutex1.Unlock()
        mutex2.Unlock()
    }()
    
    time.Sleep(time.Second * 2)
    fmt.Println("Deadlock detected!")
}

// Fix 1: lock ordering
func deadlockSolution1() {
    fmt.Println("=== Deadlock fix 1: lock ordering ===")
    
    var mutex1, mutex2 sync.Mutex
    var wg sync.WaitGroup
    
    // Always acquire the locks in the same order
    lockInOrder := func(name string) {
        defer wg.Done()
        
        mutex1.Lock() // lock 1 first
        fmt.Printf("%s: acquired lock 1\n", name)
        time.Sleep(time.Millisecond * 100)
        
        mutex2.Lock() // then lock 2
        fmt.Printf("%s: acquired lock 2\n", name)
        
        mutex2.Unlock()
        mutex1.Unlock()
        fmt.Printf("%s: released all locks\n", name)
    }
    
    wg.Add(2)
    go lockInOrder("Goroutine 1")
    go lockInOrder("Goroutine 2")
    
    wg.Wait()
    fmt.Println("All operations finished, no deadlock")
}

// Fix 2: lock acquisition with a timeout
// Go cannot declare methods inside a function, so the timed mutex
// lives at package level
type TimedMutex struct {
    ch chan struct{}
}

func NewTimedMutex() *TimedMutex {
    return &TimedMutex{ch: make(chan struct{}, 1)}
}

func (tm *TimedMutex) Lock() {
    tm.ch <- struct{}{}
}

func (tm *TimedMutex) Unlock() {
    <-tm.ch
}

func (tm *TimedMutex) TryLockTimeout(timeout time.Duration) bool {
    select {
    case tm.ch <- struct{}{}:
        return true
    case <-time.After(timeout):
        return false
    }
}

func deadlockSolution2() {
    fmt.Println("=== Deadlock fix 2: timeouts ===")
    
    mutex1 := NewTimedMutex()
    mutex2 := NewTimedMutex()
    var wg sync.WaitGroup
    
    worker := func(name string, first, second *TimedMutex) {
        defer wg.Done()
        
        first.Lock()
        fmt.Printf("%s: acquired first lock\n", name)
        
        if second.TryLockTimeout(time.Millisecond * 500) {
            fmt.Printf("%s: acquired second lock\n", name)
            second.Unlock()
        } else {
            fmt.Printf("%s: could not acquire second lock, backing off\n", name)
        }
        
        first.Unlock()
        fmt.Printf("%s: done\n", name)
    }
    
    wg.Add(2)
    go worker("Goroutine 1", mutex1, mutex2)
    go worker("Goroutine 2", mutex2, mutex1)
    
    wg.Wait()
    fmt.Println("All operations finished, deadlock avoided")
}

// Fix 3: cancellation via Context
func deadlockSolution3() {
    fmt.Println("=== Deadlock fix 3: Context ===")
    
    ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    defer cancel()
    
    var mutex1, mutex2 sync.Mutex
    var wg sync.WaitGroup
    
    worker := func(name string, ctx context.Context) {
        defer wg.Done()
        
        done := make(chan struct{})
        // Note: on timeout this inner goroutine is abandoned while still
        // blocked on a lock; tolerable in a demo, a leak in production
        go func() {
            mutex1.Lock()
            fmt.Printf("%s: acquired lock 1\n", name)
            time.Sleep(time.Millisecond * 100)
            
            mutex2.Lock()
            fmt.Printf("%s: acquired lock 2\n", name)
            
            mutex2.Unlock()
            mutex1.Unlock()
            close(done)
        }()
        
        select {
        case <-done:
            fmt.Printf("%s: finished\n", name)
        case <-ctx.Done():
            fmt.Printf("%s: cancelled, deadlock avoided\n", name)
        }
    }
    
    wg.Add(2)
    go worker("Goroutine 1", ctx)
    go worker("Goroutine 2", ctx)
    
    wg.Wait()
}

func deadlockDemo() {
    // Note: deadlockExample1 really deadlocks; shown for illustration only
    // deadlockExample1()
    
    deadlockSolution1()
    fmt.Println()
    deadlockSolution2()
    fmt.Println()
    deadlockSolution3()
}

7. Case Studies

7.1 Concurrent Request Handling in a Web Server

A high-performance HTTP server

import (
    "context"
    "fmt"
    "log"
    "net/http"
    "sync"
    "time"
)

type Server struct {
    addr        string
    server      *http.Server
    rateLimiter *TokenBucket
    monitor     *PerformanceMonitor
    middleware  []Middleware
}

type Middleware func(http.HandlerFunc) http.HandlerFunc

func NewServer(addr string) *Server {
    return &Server{
        addr:        addr,
        rateLimiter: NewTokenBucket(100, time.Millisecond*10),
        monitor:     NewPerformanceMonitor(),
        middleware:  make([]Middleware, 0),
    }
}

func (s *Server) Use(middleware Middleware) {
    s.middleware = append(s.middleware, middleware)
}

func (s *Server) applyMiddleware(handler http.HandlerFunc) http.HandlerFunc {
    for i := len(s.middleware) - 1; i >= 0; i-- {
        handler = s.middleware[i](handler)
    }
    return handler
}

// Rate-limiting middleware
func RateLimitMiddleware(limiter *TokenBucket) Middleware {
    return func(next http.HandlerFunc) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            if !limiter.Allow() {
                http.Error(w, "Too Many Requests", http.StatusTooManyRequests)
                return
            }
            next(w, r)
        }
    }
}

// Logging middleware
func LoggingMiddleware() Middleware {
    return func(next http.HandlerFunc) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            start := time.Now()
            next(w, r)
            duration := time.Since(start)
            log.Printf("%s %s %v", r.Method, r.URL.Path, duration)
        }
    }
}

// Panic-recovery middleware
func RecoveryMiddleware() Middleware {
    return func(next http.HandlerFunc) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            defer func() {
                if err := recover(); err != nil {
                    log.Printf("Panic recovered: %v", err)
                    http.Error(w, "Internal Server Error", http.StatusInternalServerError)
                }
            }()
            next(w, r)
        }
    }
}

func (s *Server) Start() error {
    // Wire up the middleware
    s.Use(RecoveryMiddleware())
    s.Use(LoggingMiddleware())
    s.Use(RateLimitMiddleware(s.rateLimiter))
    
    // Start the performance monitor
    s.monitor.Start(time.Second * 5)
    
    // Register the routes
    mux := http.NewServeMux()
    mux.HandleFunc("/", s.applyMiddleware(s.handleRoot))
    mux.HandleFunc("/api/data", s.applyMiddleware(s.handleData))
    mux.HandleFunc("/api/stats", s.applyMiddleware(s.handleStats))
    
    s.server = &http.Server{
        Addr:         s.addr,
        Handler:      mux,
        ReadTimeout:  15 * time.Second,
        WriteTimeout: 15 * time.Second,
        IdleTimeout:  60 * time.Second,
    }
    
    log.Printf("服务器启动在 %s", s.addr)
    return s.server.ListenAndServe()
}

func (s *Server) Stop(ctx context.Context) error {
    s.monitor.Stop()
    return s.server.Shutdown(ctx)
}

func (s *Server) handleRoot(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hello, 并发世界! 时间: %v\n", time.Now())
}

func (s *Server) handleData(w http.ResponseWriter, r *http.Request) {
    // Simulate some processing
    time.Sleep(time.Millisecond * 100)
    
    data := map[string]interface{}{
        "timestamp": time.Now(),
        "data":      []int{1, 2, 3, 4, 5},
        "message":   "processing complete",
    }
    
    w.Header().Set("Content-Type", "application/json")
    fmt.Fprintf(w, `{"timestamp":"%v","data":[1,2,3,4,5],"message":"processing complete"}`,
               data["timestamp"])
}

func (s *Server) handleStats(w http.ResponseWriter, r *http.Request) {
    stats := s.monitor.GetCurrentStats()
    fmt.Fprintf(w, "Goroutines: %d\nMemory: %s\n",
               stats.NumGoroutine, formatBytes(stats.MemStats.Alloc))
}

// Usage example
func webServerDemo() {
    server := NewServer(":8080")
    
    // Start the server in the background
    go func() {
        if err := server.Start(); err != nil && err != http.ErrServerClosed {
            log.Fatalf("server failed to start: %v", err)
        }
    }()
    
    // Simulate client requests
    time.Sleep(time.Second) // wait for the server to come up
    
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            
            resp, err := http.Get(fmt.Sprintf("http://localhost:8080/api/data?id=%d", id))
            if err != nil {
                log.Printf("request failed: %v", err)
                return
            }
            defer resp.Body.Close()
            
            log.Printf("request %d done, status: %d", id, resp.StatusCode)
        }(i)
    }
    
    wg.Wait()
    
    // Graceful shutdown
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()
    
    if err := server.Stop(ctx); err != nil {
        log.Printf("server shutdown error: %v", err)
    } else {
        log.Println("server shut down gracefully")
    }
}
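
A production server would normally tie shutdown to OS signals rather than a fixed sleep. Here is a sketch using the standard library's signal.NotifyContext (my addition, not from the original article; it requires Go 1.16+):

import (
    "context"
    "log"
    "net/http"
    "os"
    "os/signal"
    "time"
)

func runUntilInterrupted(server *Server) {
    // ctx is cancelled when the process receives SIGINT (Ctrl-C)
    ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
    defer stop()
    
    go func() {
        if err := server.Start(); err != nil && err != http.ErrServerClosed {
            log.Fatalf("server failed to start: %v", err)
        }
    }()
    
    <-ctx.Done() // block until the signal arrives
    
    shutdownCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()
    if err := server.Stop(shutdownCtx); err != nil {
        log.Printf("server shutdown error: %v", err)
    }
}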

8. Common Pitfalls and Best Practices

8.1 Common Concurrency Pitfalls

Pitfall 1: Goroutine Leaks

import (
    "context"
    "fmt"
    "time"
)

// Bad: this goroutine leaks
func goroutineLeakBad() {
    ch := make(chan int)
    
    // The goroutine is started with no shutdown mechanism
    go func() {
        for {
            select {
            case val := <-ch:
                fmt.Printf("Received: %d\n", val)
            }
            // no exit condition, so the goroutine runs forever
        }
    }()
    
    // The function returns, but the goroutine is still running
    ch <- 1
    time.Sleep(time.Second)
    // ch is never closed, so the goroutine blocks on it forever
}

// Good: use a Context to control the goroutine's lifetime
func goroutineLeakGood() {
    ch := make(chan int)
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel() // guarantees the Context is cancelled
    
    go func() {
        for {
            select {
            case val := <-ch:
                fmt.Printf("Received: %d\n", val)
            case <-ctx.Done():
                fmt.Println("Goroutine shutting down")
                return // clean exit
            }
        }
    }()
    
    ch <- 1
    time.Sleep(time.Second)
    // cancel() signals the goroutine to exit via the Context
}
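
Leaks like the one above are easy to miss by eye. As an aside not in the original article, the go.uber.org/goleak library can fail a test whenever goroutines outlive it; a minimal sketch, assuming the module has been added to go.mod:

import (
    "testing"

    "go.uber.org/goleak"
)

// TestNoLeak fails if any goroutine started during the test is still
// running when the test returns.
func TestNoLeak(t *testing.T) {
    defer goleak.VerifyNone(t)
    
    goroutineLeakGood() // the leaky variant above would fail this test
}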

Pitfall 2: Channel Misuse

import (
    "fmt"
    "time"
)

// Bad: sending on a closed channel
func channelMisuseBad() {
    ch := make(chan int, 1)
    close(ch)
    
    // This would panic!
    // ch <- 1 // panic: send on closed channel
}

// Bad: closing a channel twice
func channelMisuseBad2() {
    ch := make(chan int, 1)
    close(ch)
    
    // This would panic!
    // close(ch) // panic: close of closed channel
}

// Good: safe channel operations
func channelUseGood() {
    ch := make(chan int, 1)
    done := make(chan bool)
    
    go func() {
        defer func() {
            done <- true
        }()
        
        // non-blocking send
        select {
        case ch <- 1:
            fmt.Println("Sent successfully")
        default:
            fmt.Println("Channel is full")
        }
        
        // receive with a timeout
        select {
        case val := <-ch:
            fmt.Printf("Received: %d\n", val)
        case <-time.After(time.Second):
            fmt.Println("Receive timeout")
        }
    }()
    
    <-done
    close(ch) // close exactly once, after the sender has finished
}
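
When several goroutines might race to close the same channel, a common convention is to guard the close with sync.Once so it can run at most once. A minimal sketch (my addition, not from the original article):

import "sync"

// SafeCloser wraps a channel so Close may be called from any number
// of goroutines without panicking.
type SafeCloser struct {
    ch   chan int
    once sync.Once
}

func NewSafeCloser() *SafeCloser {
    return &SafeCloser{ch: make(chan int)}
}

func (s *SafeCloser) Close() {
    s.once.Do(func() { close(s.ch) }) // the close runs at most once
}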

Pitfall 3: Misusing Locks

import (
    "fmt"
    "sync"
    "time"
)

// Bad: forgetting to unlock
func lockMisuseBad() {
    var mutex sync.Mutex
    var data int
    
    increment := func() {
        mutex.Lock()
        data++
        // The early return below skips Unlock(), so later callers
        // deadlock waiting on Lock()
        if data > 10 {
            return // returns while still holding the lock
        }
        mutex.Unlock()
    }
    
    var wg sync.WaitGroup
    for i := 0; i < 20; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            increment()
        }()
    }
    
    wg.Wait()
    fmt.Printf("Final data: %d\n", data)
}

// Good: use defer to guarantee the unlock
func lockUseGood() {
    var mutex sync.Mutex
    var data int
    
    increment := func() {
        mutex.Lock()
        defer mutex.Unlock() // unlocks on every return path
        
        data++
        if data > 10 {
            return // safe to return early now
        }
    }
    
    var wg sync.WaitGroup
    for i := 0; i < 20; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            increment()
        }()
    }
    
    wg.Wait()
    fmt.Printf("Final data: %d\n", data)
}
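
Beyond defer, it often pays to keep the mutex and the data it guards in one struct, so the critical section lives behind a small method. A minimal sketch of that convention (my addition, not from the original article):

import "sync"

// Counter bundles the lock with the value it protects, so callers
// cannot touch the value without going through the methods.
type Counter struct {
    mu sync.Mutex
    n  int
}

func (c *Counter) Inc() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.n++
}

func (c *Counter) Value() int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.n
}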

8.2 Best-Practice Principles

Principle 1: Graceful Error Handling

import (
    "context"
    "fmt"
    "sync"
)

type WorkerPool struct {
    workers    int
    jobQueue   chan Job
    errorQueue chan error
    ctx        context.Context
    cancel     context.CancelFunc
    wg         sync.WaitGroup
}

type Job struct {
    ID   int
    Task func() error
}

func NewWorkerPool(workers int) *WorkerPool {
    ctx, cancel := context.WithCancel(context.Background())
    return &WorkerPool{
        workers:    workers,
        jobQueue:   make(chan Job, workers*2),
        errorQueue: make(chan error, workers),
        ctx:        ctx,
        cancel:     cancel,
    }
}

func (wp *WorkerPool) Start() {
    // start the worker goroutines
    for i := 0; i < wp.workers; i++ {
        wp.wg.Add(1)
        go wp.worker(i)
    }
    
    // start the error-handling goroutine
    wp.wg.Add(1)
    go wp.errorHandler()
}

func (wp *WorkerPool) worker(id int) {
    defer wp.wg.Done()
    
    for {
        select {
        case job := <-wp.jobQueue:
            if err := job.Task(); err != nil {
                // non-blocking error send
                select {
                case wp.errorQueue <- fmt.Errorf("worker %d, job %d: %w", id, job.ID, err):
                default:
                    // drop the error rather than block when the queue is full
                }
            }
        case <-wp.ctx.Done():
            return
        }
    }
}

func (wp *WorkerPool) errorHandler() {
    defer wp.wg.Done()
    
    for {
        select {
        case err := <-wp.errorQueue:
            fmt.Printf("Error occurred: %v\n", err)
            // hook for retries, structured logging, alerting, etc.
        case <-wp.ctx.Done():
            return
        }
    }
}

func (wp *WorkerPool) Submit(job Job) error {
    select {
    case wp.jobQueue <- job:
        return nil
    case <-wp.ctx.Done():
        return wp.ctx.Err()
    default:
        return fmt.Errorf("job queue is full")
    }
}

func (wp *WorkerPool) Stop() {
    wp.cancel()
    wp.wg.Wait()
}
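
A minimal usage sketch for this pool (my addition; the failure condition is invented for illustration):

func workerPoolDemo() {
    pool := NewWorkerPool(4)
    pool.Start()
    
    for i := 0; i < 10; i++ {
        id := i
        job := Job{ID: id, Task: func() error {
            if id%3 == 0 {
                return fmt.Errorf("simulated failure") // picked up by errorHandler
            }
            return nil
        }}
        if err := pool.Submit(job); err != nil {
            fmt.Printf("submit job %d: %v\n", id, err)
        }
    }
    
    // Stop cancels the workers' context immediately; a production pool
    // would usually drain the queue before cancelling.
    pool.Stop()
}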

Principle 2: Resource Cleanup and Timeout Control

import (
    "context"
    "fmt"
    "sync"
    "time"
)

type ResourceManager struct {
    resources map[string]*Resource
    mutex     sync.RWMutex
    cleanup   chan string
    ctx       context.Context
    cancel    context.CancelFunc
}

type Resource struct {
    ID        string
    CreatedAt time.Time
    LastUsed  time.Time
    Data      interface{}
    InUse     bool
}

func NewResourceManager() *ResourceManager {
    ctx, cancel := context.WithCancel(context.Background())
    rm := &ResourceManager{
        resources: make(map[string]*Resource),
        cleanup:   make(chan string, 100),
        ctx:       ctx,
        cancel:    cancel,
    }
    
    // start the background cleanup goroutine
    go rm.cleanupWorker()
    
    return rm
}

func (rm *ResourceManager) Get(id string, timeout time.Duration) (*Resource, error) {
    ctx, cancel := context.WithTimeout(rm.ctx, timeout)
    defer cancel()
    
    ticker := time.NewTicker(time.Millisecond * 100)
    defer ticker.Stop()
    
    for {
        select {
        case <-ctx.Done():
            return nil, ctx.Err()
        case <-ticker.C:
            rm.mutex.Lock()
            if resource, exists := rm.resources[id]; exists && !resource.InUse {
                resource.InUse = true
                resource.LastUsed = time.Now()
                rm.mutex.Unlock()
                return resource, nil
            }
            rm.mutex.Unlock()
        }
    }
}

func (rm *ResourceManager) Create(id string, data interface{}) {
    rm.mutex.Lock()
    defer rm.mutex.Unlock()
    
    rm.resources[id] = &Resource{
        ID:        id,
        CreatedAt: time.Now(),
        LastUsed:  time.Now(),
        Data:      data,
        InUse:     false,
    }
}

func (rm *ResourceManager) Release(id string) {
    rm.mutex.Lock()
    defer rm.mutex.Unlock()
    
    if resource, exists := rm.resources[id]; exists {
        resource.InUse = false
        resource.LastUsed = time.Now()
    }
}

func (rm *ResourceManager) cleanupWorker() {
    ticker := time.NewTicker(time.Minute)
    defer ticker.Stop()
    
    for {
        select {
        case <-rm.ctx.Done():
            return
        case <-ticker.C:
            rm.performCleanup()
        case id := <-rm.cleanup:
            rm.deleteResource(id)
        }
    }
}

func (rm *ResourceManager) performCleanup() {
    rm.mutex.Lock()
    defer rm.mutex.Unlock()
    
    cutoff := time.Now().Add(-time.Hour) // evict resources unused for over an hour
    
    for id, resource := range rm.resources {
        if !resource.InUse && resource.LastUsed.Before(cutoff) {
            delete(rm.resources, id)
            fmt.Printf("Cleaned up resource: %s\n", id)
        }
    }
}

func (rm *ResourceManager) deleteResource(id string) {
    rm.mutex.Lock()
    defer rm.mutex.Unlock()
    
    delete(rm.resources, id)
}

func (rm *ResourceManager) Stop() {
    rm.cancel()
}
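
A minimal usage sketch (my addition; "conn-1" and the string payload are made-up placeholders), showing the create/acquire/release cycle with a timeout:

func resourceManagerDemo() {
    rm := NewResourceManager()
    defer rm.Stop()
    
    rm.Create("conn-1", "a pretend connection")
    
    res, err := rm.Get("conn-1", time.Second)
    if err != nil {
        fmt.Printf("acquire failed: %v\n", err)
        return
    }
    fmt.Printf("acquired %s\n", res.ID)
    
    rm.Release("conn-1") // mark it reusable
    
    // Acquiring a missing ID polls until the timeout expires.
    if _, err := rm.Get("missing", time.Millisecond*300); err != nil {
        fmt.Printf("expected timeout: %v\n", err)
    }
}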

Principle 3: Observability and Monitoring

import (
    "fmt"
    "sync"
    "sync/atomic"
    "time"
)

type Metrics struct {
    TotalRequests   int64
    SuccessfulReqs  int64
    FailedRequests  int64
    AverageLatency  time.Duration
    ActiveGoroutines int32
}

type Observer struct {
    metrics     *Metrics
    latencies   []time.Duration
    latencyMux  sync.RWMutex
    subscribers []chan Metrics
    subMux      sync.RWMutex
}

func NewObserver() *Observer {
    return &Observer{
        metrics:     &Metrics{},
        latencies:   make([]time.Duration, 0, 1000),
        subscribers: make([]chan Metrics, 0),
    }
}

func (o *Observer) RecordRequest(latency time.Duration, success bool) {
    atomic.AddInt64(&o.metrics.TotalRequests, 1)
    
    if success {
        atomic.AddInt64(&o.metrics.SuccessfulReqs, 1)
    } else {
        atomic.AddInt64(&o.metrics.FailedRequests, 1)
    }
    
    // record the latency
    o.latencyMux.Lock()
    o.latencies = append(o.latencies, latency)
    if len(o.latencies) > 1000 {
        o.latencies = o.latencies[100:] // drop the oldest 100 samples to bound memory
    }
    o.updateAverageLatency()
    o.latencyMux.Unlock()
    
    // notify subscribers
    o.notifySubscribers()
}

func (o *Observer) updateAverageLatency() {
    if len(o.latencies) == 0 {
        return
    }
    
    var total time.Duration
    for _, latency := range o.latencies {
        total += latency
    }
    o.metrics.AverageLatency = total / time.Duration(len(o.latencies))
}

func (o *Observer) IncrementActiveGoroutines() {
    atomic.AddInt32(&o.metrics.ActiveGoroutines, 1)
}

func (o *Observer) DecrementActiveGoroutines() {
    atomic.AddInt32(&o.metrics.ActiveGoroutines, -1)
}

func (o *Observer) Subscribe() <-chan Metrics {
    ch := make(chan Metrics, 10)
    
    o.subMux.Lock()
    o.subscribers = append(o.subscribers, ch)
    o.subMux.Unlock()
    
    return ch
}

func (o *Observer) notifySubscribers() {
    metrics := o.GetMetrics()
    
    o.subMux.RLock()
    for _, ch := range o.subscribers {
        select {
        case ch <- metrics:
        default:
            // non-blocking send so a slow subscriber never stalls the observer
        }
    }
    o.subMux.RUnlock()
}

func (o *Observer) GetMetrics() Metrics {
    o.latencyMux.RLock()
    defer o.latencyMux.RUnlock()
    
    return Metrics{
        TotalRequests:    atomic.LoadInt64(&o.metrics.TotalRequests),
        SuccessfulReqs:   atomic.LoadInt64(&o.metrics.SuccessfulReqs),
        FailedRequests:   atomic.LoadInt64(&o.metrics.FailedRequests),
        AverageLatency:   o.metrics.AverageLatency,
        ActiveGoroutines: atomic.LoadInt32(&o.metrics.ActiveGoroutines),
    }
}

// Usage example
func observabilityDemo() {
    observer := NewObserver()
    
    // subscribe to metric updates
    metricsChan := observer.Subscribe()
    
    // print metrics as they arrive
    go func() {
        for metrics := range metricsChan {
            fmt.Printf("Total: %d, Success: %d, Failed: %d, Avg Latency: %v, Active: %d\n",
                      metrics.TotalRequests, metrics.SuccessfulReqs, metrics.FailedRequests,
                      metrics.AverageLatency, metrics.ActiveGoroutines)
        }
    }()
    
    // simulate concurrent requests
    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            
            observer.IncrementActiveGoroutines()
            defer observer.DecrementActiveGoroutines()
            
            start := time.Now()
            
            // simulate some work
            time.Sleep(time.Millisecond * time.Duration(50+id%100))
            
            latency := time.Since(start)
            success := id%10 != 0 // 10% failure rate
            
            observer.RecordRequest(latency, success)
        }(i)
    }
    
    wg.Wait()
    time.Sleep(time.Second) // wait for the final metric updates
}

8.3 Best Practices Summary

Core Design Principles

  1. Clear ownership and lifecycle management

    • Every goroutine should have a well-defined start and exit condition
    • Use Context to propagate cancellation signals
    • Make sure every resource can be cleaned up properly
  2. Error handling and fault tolerance

    • Don't let errors crash the whole program
    • Use error aggregation and retry mechanisms
    • Implement the circuit-breaker pattern to prevent cascading failures
  3. Performance and scalability

    • Use object pools to reduce allocations (see the sync.Pool sketch after this list)
    • Size the number of goroutines to the workload
    • Avoid over-synchronization
  4. Observability and debugging

    • Add sufficient logging and metrics
    • Profile with pprof
    • Implement health checks and monitoring
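
To make the object-pool point concrete, here is a minimal sync.Pool sketch (my addition; the 4 KB buffer size is arbitrary):

import "sync"

// bufPool reuses fixed-size scratch buffers instead of allocating a
// fresh one for every request.
var bufPool = sync.Pool{
    New: func() interface{} { return make([]byte, 4096) },
}

func handleWithPool() {
    buf := bufPool.Get().([]byte)
    defer bufPool.Put(buf) // return the buffer for reuse
    
    // ... use buf as scratch space ...
    _ = buf
}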

Code Conventions

// An example of well-structured concurrent code
func WellDesignedService() {
    // 1. Control lifetimes with a Context
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()
    
    // 2. Wait for all goroutines with a WaitGroup
    var wg sync.WaitGroup
    
    // 3. Communicate over channels
    resultChan := make(chan Result, 10)
    
    // 4. Collect errors on a dedicated channel
    errorChan := make(chan error, 10)
    
    // 5. Start the workers
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            defer func() {
                if r := recover(); r != nil {
                    errorChan <- fmt.Errorf("worker %d panic: %v", id, r)
                }
            }()
            
            // worker logic (doWork, Result, handleResult and handleError are placeholders)
            select {
            case result := <-doWork():
                resultChan <- result
            case <-ctx.Done():
                return
            }
        }(i)
    }
    
    // 6. Graceful shutdown: close the channels once all workers are done
    go func() {
        wg.Wait()
        close(resultChan)
        close(errorChan)
    }()
    
    // 7. Drain results and errors until both channels are closed
    for resultChan != nil || errorChan != nil {
        select {
        case result, ok := <-resultChan:
            if !ok {
                resultChan = nil // disable this case once closed
                continue
            }
            handleResult(result)
        case err, ok := <-errorChan:
            if !ok {
                errorChan = nil // disable this case once closed
                continue
            }
            handleError(err)
        case <-ctx.Done():
            return
        }
    }
}

Summary

This guide has worked through the major aspects of concurrent programming in Go:

  1. Fundamentals: the difference between concurrency and parallelism, and how the Go runtime scheduler works
  2. Core tools: how to use goroutines, channels, and the synchronization primitives correctly
  3. Design patterns: common concurrency patterns such as worker pools and publish/subscribe
  4. Performance: practical techniques for performance monitoring, profiling, and debugging
  5. Best practices: how to avoid the common pitfalls and write robust concurrent code

Go's concurrency model is rooted in CSP (Communicating Sequential Processes). Its guiding philosophy, "Don't communicate by sharing memory; share memory by communicating," makes concurrent programs safer and easier to maintain.

Mastering Go concurrency takes:

  • Theory: understand the concurrency model and the memory model
  • Practice: internalize the patterns through plenty of hands-on code
  • Engineering judgment: balance performance, maintainability, and complexity in real projects
  • Continuous learning: keep up with the language's evolution and community best practices

I hope this guide helps you go further with Go concurrency and build fast, high-quality concurrent applications.