You might be coming from Python, right? ;-)

Python has a module named multiprocessing in its stdlib, which might well explain why you used this name in the title of your question, and why you apparently have trouble interpreting what @JimB meant by saying

If you need a separate process, you need to exec it yourself

"Multiprocessing" in Python

The thing is, Python's multiprocessing is a quite high-level thing which hides a whole lot of stuff under its hood. When you spawn a multiprocessing.Process and make it run a function, what really happens is this:

  1. The Python interpreter creates another operating-system process (using fork(2) on Unix-like systems or CreateProcess on Windows) and arranges for it to execute a Python interpreter, too.

    The crucial point is that you now have two processes running two Python interpreters.

  2. The Python interpreter in the child process is arranged to have a way to communicate with the Python interpreter in the parent process.

    This "communication link" necessarily involves some form of the IPC @JimB referred to. There is simply no other way to exchange data and actions between separate processes, precisely because a contemporary commodity OS enforces strict process separation.

  3. When you exchange Python objects between the processes, the two communicating interpreters serialize and deserialize them behind your back: objects are serialized before being sent over the IPC link and deserialized after being received. This is implemented using the pickle module.

Back to Go

Go does not have any direct solution which would closely match Python's multiprocessing, and I really doubt one could be sensibly implemented.

The chief reason mostly stems from the fact that Go is considerably lower-level than Python: it does not have Python's luxury of making sweeping assumptions about the types of the values it manages, and it also strives to have as few hidden costs in its constructs as possible.

Go also strives to steer clear of "framework-style" approaches to solving problems and to use "library-style" solutions when possible. (A good rundown of "framework vs library" is given, for instance, here.) Go has everything in its standard library to implement something akin to Python's multiprocessing, but there is no ready-made framework-y solution for this.

So what you could do is roll along these lines:

  1. Use os/exec to run another copy of your own process.

    • Make sure the spawned process "knows" it has been started in the special "slave" mode, so it can act accordingly.
    • Use any form of IPC to communicate with the new process. Exchanging data via the standard I/O streams of the child process is arguably the simplest way to go (except when you need to pass open files, but that is a harder topic, so let's not digress).
  2. Use any suitable package in the encoding/ hierarchy, such as binary, gob, or xml, to serialize and deserialize the data you exchange.

    The "go-to" solution is arguably encoding/gob, but encoding/json will do just fine, too.

  3. Invent and implement a simple protocol to tell the child process what to do, with which data, and how to communicate the results back to the master.

Is it really worth the trouble?

I would say no, it isn't, for a number of reasons:

  • Go has nothing like the dreaded GIL, so there is no need to sidestep it: real parallelism comes naturally whenever it is possible.

  • Memory safety is all in your hands, and achieving it is not really that hard when you dutifully obey the principle that a value sent over a channel is now owned by the receiver. In other words, sending a value over a channel also transfers ownership of that value.

  • The Go toolchain has an integrated race detector, so you can run your test suite with the -race flag and create evaluation builds of your program using go build -race for the same purpose: when a program instrumented this way runs, the race detector crashes it as soon as it detects an unsynchronized read/write memory access. The resulting report explains what went wrong, and where, complete with stack traces.

  • IPC is slow, so the gains may well be offset by the losses.

All in all, I see no real reason to use separate processes unless you're writing something like an e-mail processing server, where this concept comes naturally.

Answer from kostix on Stack Overflow

2 of 3

A channel is used for communicating between goroutines; you shouldn't both send and receive on an unbuffered channel from the same goroutine, as in this code, where the send blocks forever because nothing is receiving:

sensorData <- string(msg.Payload())
fmt.Println(<-sensorData) //currently not printing anything

If you'd like to test printing via the channel, you can use a buffered channel to avoid blocking on the send in the same goroutine, like this:

sensorData := make(chan string, 1)

Cheers
