Go Runtime and Scheduler

[by: redacted]

Greetings. Nerds.

Just another day, another Go poast. Today we're learning about the Go runtime and scheduler.

Ofc you might already know about the Go runtime, but I’ll explain it here anyways, and if you don’t, you can learn something new here ;3

So the Go runtime is an essential part of Go that manages the execution of Go programs. One of its standout features is the goroutine scheduler, which allows developers to write concurrent code with ease.

What is a Goroutine?

A goroutine is a lightweight thread managed by the Go runtime. Unlike traditional threads (if you don’t know about threads, yeet yourself over to Google and read about them), which can be resource-intensive, goroutines are inexpensive to create and manage, so you can spawn thousands of them without significant overhead. To create a goroutine you just put the keyword go in front of a function call, and that function runs concurrently.

//
package main

import (
	"fmt"
	"time"
)

func printEven() {
	fmt.Println("goroutine started")
	for i := 1; i <= 10; i++ {
		if i%2 == 0 {
			fmt.Printf("%d ", i)
		}
	}
	fmt.Println()
}

func main() {
	go printEven() // start printEven as a goroutine

	// wait for 2 seconds so goroutine can finish
	time.Sleep(2 * time.Second)
	fmt.Println("2 second wait done")
}

Output

goroutine started
2 4 6 8 10
2 second wait done

So here we created a function printEven that prints all the even numbers from 1 to 10. I used time.Sleep() to wait for 2 seconds before exiting main, because a goroutine doesn’t keep the program alive: as soon as main returns, the program exits without caring whether the goroutine finished or not (just like she did. jk.)
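Quick side note, here’s a minimal sketch of what would happen without the time.Sleep (this is my own illustrative variation; the exact behavior depends on scheduling, but most runs will show no output from the goroutine at all):

//
package main

import "fmt"

func printEven() {
	fmt.Println("goroutine started")
	for i := 1; i <= 10; i++ {
		if i%2 == 0 {
			fmt.Printf("%d ", i)
		}
	}
	fmt.Println()
}

func main() {
	go printEven() // goroutine is scheduled, but...
	// ...main returns right away, so the program usually exits
	// before printEven gets a chance to run at all.
}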

How the Go Scheduler Works

The Go scheduler uses a model called M:N scheduling, where N goroutines are multiplexed onto M operating system threads. This efficient model allows the scheduler to decide when and where to run goroutines based on their state, maximizing resource utilization.

Here are the key components of the scheduler:

  • G (Goroutine): Each goroutine has its own stack and context.
  • M (Machine): An operating system thread. Goroutines run on these threads.
  • P (Processor): A logical processor that holds a queue of runnable goroutines. An M must be attached to a P in order to run goroutines.

The scheduler operates by attaching available Ms to Ps, and each P then executes the goroutines from its queue. When a goroutine blocks (like waiting for I/O), the scheduler can simply switch to another runnable goroutine.
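You can actually peek at some of these numbers from inside a program. Here’s a minimal sketch I’m adding just as an illustration (GOMAXPROCS is the knob that controls how many Ps the scheduler uses):

//
package main

import (
	"fmt"
	"runtime"
)

func main() {
	fmt.Println("CPUs:", runtime.NumCPU())             // logical CPUs on this machine
	fmt.Println("Ps:", runtime.GOMAXPROCS(0))          // current number of Ps (passing 0 just queries it)
	fmt.Println("goroutines:", runtime.NumGoroutine()) // live Gs right now (at least the main goroutine)
}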

Simple Goroutine

//
package main

import (
	"fmt"
	"time"
)

func sayGM() {
	for i := 0; i < 5; i++ {
		fmt.Println("say gm")
		time.Sleep(100 * time.Millisecond) // some work here??
	}
}

func main() {
	go sayGM() // start goroutine

	for i := 0; i < 5; i++ {
		fmt.Println("meow")
		time.Sleep(150 * time.Millisecond) // some work here??
	}
}

Output

//
meow
say gm
say gm
meow
say gm
say gm
meow
say gm
meow
meow

Okay, so here we have a function with a for loop running 5 times. Holy shit, this is easy bro, you can just look at the code.

Anyways, inside the loop we have time.Sleep(); let’s say some random work is being done there. Same goes for the for loop in main, except there the delay is 150ms, so each iteration in main takes longer than one in sayGM. That’s why sayGM finishes all of its prints before the loop in main is done. We put go before calling sayGM to make it a goroutine. It’s that easy.

Goroutine Lifecycle

Goroutines have a lifecycle that moves through these states:

  1. Runnable: Goroutine is ready to run but not currently executing
  2. Running: Goroutine is currently being executed by an M
  3. Blocked: Goroutine is waiting for a resource (could be waiting for I/O let’s say)
  4. Dead: Goroutine has finished executing

The scheduler manages these states, which lets it utilize resources efficiently and keep concurrent applications responsive.
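To make the states a bit more concrete, here’s a small sketch I’m adding with a goroutine that blocks on a channel (the comments mark the typical flow; exactly when each transition happens is up to the scheduler):

//
package main

import (
	"fmt"
	"time"
)

func main() {
	ch := make(chan string)

	go func() {
		// Runnable -> Running once the scheduler picks it up
		msg := <-ch // Blocked: waiting on the channel
		fmt.Println("got:", msg)
		// Dead: the function returns and the goroutine is done
	}()

	time.Sleep(100 * time.Millisecond) // give the goroutine time to block
	ch <- "gm"                         // unblocks it: Blocked -> Runnable -> Running
	time.Sleep(100 * time.Millisecond) // let it print before main exits
}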

Using Wait Groups

To synchronize goroutines, we can use a wait group from the sync package. This allows the main function to wait for all goroutines to complete before exiting.

//
package main

import (
	"fmt"
	"sync"
	"time"
)

func sayGM(wg *sync.WaitGroup) {
	defer wg.Done() // signal completion when function exits
	for i := 0; i < 5; i++ {
		fmt.Println("say gm")
		time.Sleep(100 * time.Millisecond) // some work again?
	}
}

func main() {
	var wg sync.WaitGroup
	wg.Add(1) // add a goroutine to wait for

	go sayGM(&wg) // start goroutine

	// for i := 0; i < 5; i++ {
	// 	fmt.Println("meow")
	// 	time.Sleep(150 * time.Millisecond) // some work again?
	// }

	wg.Wait() // wait for all goroutines to finish
}

Output:

//
say gm
say gm
say gm
say gm
say gm

Same code again, but here we have a sync.WaitGroup called wg to track goroutines. wg.Add(1) increments the wait group counter to indicate that we are waiting for one goroutine. Inside sayGM we defer wg.Done() so completion is signalled when the goroutine finishes its execution. wg.Wait() blocks the main function until all goroutines have finished.

Scheduling Strategies

To optimize performance, the runtime applies a few strategies:

  • Preemptive Scheduling: The scheduler can preempt a running goroutine to allow others to run, especially during blocking operations or I/O (there’s a rough sketch of this right after the list).
  • Work Stealing: If a P is idle, it can “steal” goroutines from another P’s queue to keep processing efficient. (That’s the P from the G-M-P model.)
  • Garbage Collection: The runtime includes a garbage collector that runs concurrently with goroutines, managing memory efficiently. (more about garbage collection HERE)
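Here’s a rough sketch of preemption (my own example, not from the original): with GOMAXPROCS set to 1, a busy-looping goroutine would hog the only P forever without preemption, but on Go 1.14+ the runtime preempts it so main still gets to run. Exact behavior depends on your Go version, so treat this as an illustration.

//
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	runtime.GOMAXPROCS(1) // single P, so both goroutines have to share it

	go func() {
		for {
			// tight CPU-bound loop: no sleeps, no calls to yield at
		}
	}()

	time.Sleep(100 * time.Millisecond)
	// On Go 1.14+ the scheduler asynchronously preempts the busy loop,
	// so main is still scheduled and this line prints.
	fmt.Println("main still got scheduled")
}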

Multiple Goroutines

To practice working with multiple goroutines, let’s write a smol example, ‘cause your dumahh won’t be using just one goroutine in real code.

//
package main

import (
	"fmt"
	"sync"
	"time"
)

func count(id int, wg *sync.WaitGroup) {
	defer wg.Done() // signal completion when function exits
	for i := 0; i < 5; i++ {
		fmt.Printf("goroutine %d: %d\n", id, i)
		time.Sleep(time.Millisecond * 100) // do some work here
	}
}

func main() {
	var wg sync.WaitGroup
	numGoroutines := 3

	for i := 1; i <= numGoroutines; i++ {
		wg.Add(1)        // add goroutine to wait for
		go count(i, &wg) // start goroutine
	}

	wg.Wait() // wait for all goroutines to finish
	fmt.Println("all goroutines complete")
}

Output

//
goroutine 3: 0
goroutine 2: 0
goroutine 1: 0
goroutine 2: 1
goroutine 3: 1
goroutine 1: 1
goroutine 1: 2
goroutine 3: 2
goroutine 2: 2
goroutine 3: 3
goroutine 1: 3
goroutine 2: 3
goroutine 1: 4
goroutine 3: 4
goroutine 2: 4
all goroutines complete

What did we do here? Can’t you just read what’s in the code? You should be typing this code out if you’re actually reading this. HAHHH

The count function takes an id and a wait group. It prints a count specific to that goroutine. In main, we launch multiple goroutines by looping and starting count for each one, and we pass &wg because all of them need to call Done on the same WaitGroup (it must not be copied). The main function waits for all goroutines to complete using wg.Wait(). Notice in the output that the three goroutines are interleaved; the exact order is up to the scheduler and can change between runs.

Ez stuff huh? Now you can use goroutines, stop wasting some of your time, and maybe even improve your code.

The Go runtime and scheduler are powerful concurrency features that make it really easy to manage multiple tasks simultaneously. Understanding them can greatly enhance the efficiency and responsiveness of your applications.

If you want to explore more, you should check out these links: