I think I know part of the answer. I tried to summarize my understanding of the differences, in order of importance, between asyncio tasks and goroutines:
1) Unlike under asyncio, one rarely needs to worry that a goroutine will block for too long. OTOH, memory sharing across goroutines is akin to memory sharing across threads rather than across asyncio tasks, since goroutine execution-order guarantees are much weaker (even if the hardware has only a single core).
asyncio switches context only on an explicit await, yield and certain event-loop methods, while the Go runtime may switch on far subtler triggers (such as certain function calls). So asyncio is fully cooperative, while goroutines are only mostly cooperative (and the roadmap suggests they will become even less cooperative over time).
A really tight loop (such as a numeric computation) could still block the Go runtime (well, the thread it's running on). If that happens, it has less of an impact than in Python - unless it occurs in multiple threads.
2) Goroutines have off-the-shelf support for parallel computation, which would require a more sophisticated approach under asyncio.
The Go runtime can run threads in parallel (if multiple cores are available), so it's somewhat similar to running multiple asyncio event loops in a thread pool under a GIL-less Python runtime, with a language-aware load balancer in front.
3) The Go runtime will automatically handle blocking syscalls in a separate thread; this needs to be done explicitly under asyncio (e.g., using run_in_executor).
That said, in terms of memory cost, goroutines are very much like asyncio tasks rather than threads.
I suppose you could think of it working that way underneath, sure. It's not really accurate, but close enough.
But there is a big difference: in Go you can write straight line code, and all the I/O blocking is handled for you automatically. You can call Read, then Write, then Read, in simple straight line code. With Python asyncio, as I understand it, you need to queue up a function to handle the reads, rather than just calling Read.
For anyone who's tried both Go and Python async (e.g., uvloop + Sanic, apistar, etc.) for their webapp, what are the pros and cons of working in each language?
In Python terms, the event loop is built into Go. You would launch two goroutines with go async_say(...) and wait for them to complete, for example using a channel or a wait group.
A straightforward translation of your code to Go could look like this:
package main

import (
	"fmt"
	"time"
)

func async_say(delay time.Duration, msg string, done chan bool) {
	time.Sleep(delay)
	fmt.Println(msg)
	done <- true
}

func main() {
	done1 := make(chan bool, 1)
	go async_say(4*time.Second, "hello", done1)

	done2 := make(chan bool, 1)
	go async_say(6*time.Second, "world", done2)

	<-done1
	<-done2
}
Note that, unlike Python (and JavaScript, etc.), Go functions do not come in different colors depending on whether they are asynchronous or not. They can all be run asynchronously, and the equivalent of asyncio is built into the standard library.
You don't need this in Go; there, it would be an anti-pattern.
Instead, in Go, you have management of "pollable" descriptors — such as sockets — tightly integrated with the runtime and the goroutine scheduler.
This allows you to write normal sequential code which will internally be handled via a platform-specific "eventful" interface (such as epoll on Linux, kqueue on FreeBSD and IOCP on Windows).
As soon as a goroutine tries to perform any I/O on a socket and the socket is not ready, the goroutine gets suspended until the socket is ready, after which it resumes right where it was suspended.
Hence in Go, you merely create a separate goroutine to serve each request that should be performed or served concurrently with the others, and write plain sequential code to handle it.
For background, start here and here.
For tutorials explaining how the Go scheduler works, see, for instance, this and this.