Memory Management in Go
Greetings, nerds.
Now that we're done with the GC part, let's take a look at memory management and those scary-lookin' pointers. Pointers aren't actually scary; they're "KOOL!"
You prolly already know what pointers are, and if you don't, first go yeet yourself over to Google and read up on them. Anyway, pointers are like magical arrows pointing to the exact memory location of your variables. They're useful for improving performance and managing memory better.
Pointers: The Magical Arrows
//
package main

import "fmt"

func main() {
    x := 10
    p := &x // p points to x's memory address

    fmt.Println("value of x:", x)
    fmt.Println("memory address of x:", &x) // prints the memory address of x
    fmt.Println("value at pointer p:", *p)
}
Output
//
value of x: 10
memory address of x: 0xc000012070
value at pointer p: 10
So here:
- x holds the integer value 10
- p points to the memory address of x
- we can print the memory address using &x in fmt.Println(), and read the value behind the pointer using *p
Using Pointers in Real Code
"Oh, sike, but I can just declare variables and do my thing."
SHUT UP!
So why bother with pointers? They clean up your code, save memory, and make you look like you actually know what you're doing.
//
package main

import "fmt"

func increment(n *int) {
    *n++ // dereference n and increment its value
}

func main() {
    num := 10
    increment(&num) // pass the address of num
    fmt.Println("new value of num:", num)
}
Output:
//
new value of num: 11
See? We passed a pointer instead of copying the whole value. Cleaner, faster, and more efficient. Like a true pro.
Pitfalls of Memory Management
Again, Go's memory handling is not all sunshine and rainbows. Here's some stuff you gotta watch out for.
Dangling Pointers (more of a C/C++ horror, but…)
Go’s garbage collection makes life easier, so you won’t have classic dangling pointers haunting your code. But if you hold onto a pointer past its prime, it’s like clinging to a coupon that’s already expired. No use to anyone.
Here's a lil' code showing proper cleanup in action:
//
package main

import (
    "fmt"
    "os"
)

// FileHandler manages a file with proper cleanup
type FileHandler struct {
    file     *os.File
    isClosed bool
}

// NewFileHandler creates a new FileHandler for the specified filename
func NewFileHandler(filename string) (*FileHandler, error) {
    file, err := os.Create(filename)
    if err != nil {
        return nil, err
    }
    return &FileHandler{file: file}, nil
}

// BadClose closes the file without fully cleaning up:
// it leaves a stale pointer and doesn't mark the handler as closed
func (fh *FileHandler) BadClose() {
    fh.file.Close()
    // the file is closed, but fh.file still points to it
}

// SafeClose properly closes the file and marks the handler as closed
func (fh *FileHandler) SafeClose() {
    fh.file.Close()
    fh.file = nil      // clear the pointer
    fh.isClosed = true // mark as closed
}

// WriteData attempts to write data to the file.
// It returns an error if the file has already been closed.
func (fh *FileHandler) WriteData(data string) error {
    if fh.isClosed || fh.file == nil {
        return fmt.Errorf("file handle is already closed")
    }
    _, err := fh.file.WriteString(data)
    return err
}

func main() {
    fmt.Println("1: improper Close")
    handler1, _ := NewFileHandler("test1.txt")
    handler1.BadClose()

    // write after an improper close
    err := handler1.WriteData("test")
    fmt.Printf("write attempt after bad close: %v\n\n", err)

    fmt.Println("2: proper Close")
    handler2, _ := NewFileHandler("test2.txt")
    handler2.SafeClose()

    // write after a proper close
    err = handler2.WriteData("test")
    fmt.Printf("write attempt after safe close: %v\n", err)
}
Output
//
1: improper Close
write attempt after bad close: write test1.txt: file already closed
2: proper Close
write attempt after safe close: file handle is already closed
Memory Leaks
Again, thanks to Go's GC, memory leaks are rare, but they can still sneak in, especially if you have rogue goroutines or ever-growing data structures.
Here's some code showing leaky and safe memory usage:
//
package main

import (
    "fmt"
    "runtime"
    "time"
)

// global cache that can lead to memory leaks if not managed
var cache = make(map[string][]byte)

func leakyCode() {
    for i := 0; ; i++ {
        // continuously add to the cache without cleanup
        key := fmt.Sprintf("key-%d", i)
        cache[key] = make([]byte, 1024*1024) // allocate 1MB

        if i%10 == 0 {
            var m runtime.MemStats
            runtime.ReadMemStats(&m)
            fmt.Printf("memory usage (leaky): %v MB\n", m.Alloc/1024/1024)
        }
        time.Sleep(time.Millisecond * 100)
    }
}

func safeCode() {
    const maxItems = 10
    for i := 0; ; i++ {
        key := fmt.Sprintf("key-%d", i)
        cache[key] = make([]byte, 1024*1024)

        // clean up when the cache gets too big
        if len(cache) > maxItems {
            // remove the oldest entry
            delete(cache, fmt.Sprintf("key-%d", i-maxItems))
        }

        if i%10 == 0 {
            var m runtime.MemStats
            runtime.ReadMemStats(&m)
            fmt.Printf("memory usage (safe): %v MB\n", m.Alloc/1024/1024)
        }
        time.Sleep(time.Millisecond * 100)
    }
}

func main() {
    fmt.Println("starting memory leak demonstration...")

    // uncomment one of these to test
    // go leakyCode() // this will keep growing memory!
    // go safeCode()  // this will maintain stable memory usage

    // let it run for a while to see ze action
    time.Sleep(time.Second * 10)
    fmt.Println("done.")
}
Output (leaky)
//
starting memory leak demonstration...
memory usage (leaky): 1 MB
memory usage (leaky): 11 MB
memory usage (leaky): 21 MB
memory usage (leaky): 31 MB
memory usage (leaky): 41 MB
memory usage (leaky): 51 MB
memory usage (leaky): 61 MB
memory usage (leaky): 71 MB
memory usage (leaky): 81 MB
memory usage (leaky): 91 MB
memory usage (leaky): 101 MB
done.
Output (safe)
//
starting memory leak demonstration...
memory usage (safe): 1 MB
memory usage (safe): 11 MB
memory usage (safe): 14 MB
memory usage (safe): 13 MB
memory usage (safe): 12 MB
memory usage (safe): 22 MB
memory usage (safe): 21 MB
memory usage (safe): 20 MB
memory usage (safe): 19 MB
memory usage (safe): 18 MB
done.
See the difference? The safe version keeps memory stable, while the leaky code just hoards memory endlessly.
Data Races (The true wild west)
Goroutines are a Go superpower, but data races can be lurking if you don’t keep things synchronized.
//
package main

import (
    "fmt"
    "sync"
    "sync/atomic"
)

func main() {
    // UNSAFE: data race
    counter := 0
    var wg sync.WaitGroup
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            counter++ // RACE: unsafe concurrent access
        }()
    }
    wg.Wait()
    fmt.Println("unsafe counter (wrong result):", counter)

    // SAFE: using a mutex
    counter = 0
    var mutex sync.Mutex
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            mutex.Lock()
            counter++
            mutex.Unlock()
        }()
    }
    wg.Wait()
    fmt.Println("mutex-protected counter (correct):", counter)

    // SAFE: using atomics
    var atomicCounter atomic.Int64
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            atomicCounter.Add(1)
        }()
    }
    wg.Wait()
    fmt.Println("atomic counter (correct):", atomicCounter.Load())
}
Output
//
unsafe counter (wrong result): 997
mutex-protected counter (correct): 1000
atomic counter (correct): 1000
In simple words, Go handles a lot of memory management for you, but stay sharp, anon! Proper pointer use, garbage-collection awareness, and thread-safe practices will keep your code running smoothly without surprises.
Using new and make
In Go, you also have new and make for memory management.
You can use new to allocate memory for a variable:
//
package main

import "fmt"

func main() {
    p := new(int) // allocates a zeroed int and returns a pointer to it
    *p = 100
    fmt.Println("value at pointer p:", *p)
}
Output
//
value at pointer p: 100
And make is for creating slices, maps, and channels:
//
package main

import "fmt"

func main() {
    slice := make([]int, 0) // creates an empty slice of int
    slice = append(slice, 1, 2, 3)
    fmt.Println("slice contents:", slice)
}
Output
//
slice contents: [1 2 3]
Best Practices for Memory Management
- Keep Track of your Pointers: Be mindful of what they're pointing to!
- Use defer for Cleanup: If you're using resources that need to be cleaned up, use defer to ensure they get freed when no longer needed.
- Avoid Global Variables: They complicate memory and can lead to unexpected issues.