I am new to Go channels, and I am trying to learn them by building a mock interaction between a kernel and processes through channels. The aim of this sample program is to have multiple processes (2) sending memory allocation requests to the kernel simultaneously over a single shared channel, and sending memory release requests to the kernel over a single but separate channel.
+-----------------+      +-------------------+      +-------------+
|                 |----->|  Alloc. Mem. Ch.  |----->|             |
|    Process A    |      +-------------------+      |   Kernel    |
|                 |----->| Release Mem. Ch.  |----->|             |
+-----------------+      +-------------------+      +-------------+
The program works if I only have allocation requests; as soon as I introduce release requests, the program runs into a deadlock.
Note: the process also creates a reply queue when sending an allocation request; however, that is not shown in the above diagram because it is not part of the problem.
The complete program is below:
package main
import (
"fmt"
// "log"
"time"
)
const (
_ float64 = iota
LowPrio
MedPrio
HghPrio
)
// Kernel type to communicate between processes and memory resources
type Kernel struct {
reqMemCh chan chan int
rlsMemCh chan int
}
func (k *Kernel) Init() {
k.reqMemCh = make(chan chan int, 2)
k.rlsMemCh = make(chan int, 2)
go k.AllocMem()
go k.RlsMem()
}
// Fetch memory on process request
func (k *Kernel) GetReqMemCh() chan chan int {
return k.reqMemCh
}
func (k *Kernel) GetRlsMemCh() chan int {
return k.rlsMemCh
}
func (k *Kernel) AllocMem() {
// loop over the items (process reply channels) received over
// the request channel
for pCh := range k.GetReqMemCh() {
// for now think 0 is the available index
// send this as a reply to the exclusive process reply channel
pCh <- 0
close(pCh)
}
}
// Release memory
func (k *Kernel) RlsMem() {
// we do not have to do anything here
}
// Process type which requests memory
type Proc struct {
ind int
prio float64
exeT time.Time
count int
memInd int
rqMemCh chan chan int
rlMemCh chan int
}
func (p *Proc) Init(
ind int,
prio float64,
rqMemCh chan chan int,
rlMemCh chan int,
) {
p.ind = ind
p.prio = prio
p.memInd = -1
p.rqMemCh = rqMemCh
p.rlMemCh = rlMemCh
}
func (p *Proc) GetReqMemCh() chan chan int {
return p.rqMemCh
}
func (p *Proc) GetRlsMemCh() chan int {
return p.rlMemCh
}
func (p *Proc) ReqMem() {
// create the reply channel exclusive to the process
// this channel will return the allocated memory id/address
rpCh := make(chan int)
// send the reply channel through the request channel
// to get back the allocation memory id
p.GetReqMemCh() <- rpCh
// Below line is blocking ...
for mi := range rpCh {
p.memInd = mi
}
}
func (p Proc) RlsMem() {
p.GetRlsMemCh() <- 0
}
func (p Proc) String() string {
return fmt.Sprintf(
"Proc(%d): Memory(%d), Count(%d)",
p.ind+1, p.memInd+1, p.count,
)
}
func main() {
k := &Kernel{}
k.Init()
p := &Proc{}
for i := 0; i < 3; i++ {
p.Init(i, LowPrio, k.GetReqMemCh(), k.GetRlsMemCh())
p.ReqMem()
p.RlsMem()
}
time.Sleep(time.Second)
}
and the error is as follows:
fatal error: all goroutines are asleep - deadlock!
goroutine 1 [chan send]:
main.Proc.RlsMem(...)
main.go:100
main.main()
main.go:119 +0xc5
goroutine 6 [chan receive]:
main.(*Kernel).AllocMem(0x0?)
main.go:41 +0x5e
created by main.(*Kernel).Init in goroutine 1
main.go:25 +0xc5
exit status 2
Any help will be really appreciated.
Cheers,
DD.
As Brits commented, you have a buffered channel that is reaching its capacity with nothing reading from it.
Per the language tour (1 2), sends and receives block until the other side is ready. While buffering a channel gives some lenience here, the behavior is the same once the buffer is full.
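For illustration only (a standalone sketch, not part of your program), the blocking behavior of a full buffered channel can be reproduced like this:

package main

import "fmt"

func main() {
    ch := make(chan int, 2) // buffered channel with capacity 2

    ch <- 1 // succeeds: buffer has room
    ch <- 2 // succeeds: buffer is now full
    fmt.Println("two sends completed without a receiver")

    // With nothing receiving from ch, a third send would block forever,
    // and with no other runnable goroutine the runtime reports
    // "fatal error: all goroutines are asleep - deadlock!".
    // ch <- 3
}

This is exactly what happens to k.rlsMemCh in your program: the first two sends from RlsMem land in the buffer, and the third send blocks because nothing ever receives from the channel.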
This can be fixed by adding a consumer for k.rlsMemCh. If you don't have any action planned for this, either remove the channel or have logic to drain it for now.
func (k *Kernel) Init() {
k.reqMemCh = make(chan chan int, 2)
k.rlsMemCh = make(chan int, 2)
go k.AllocMem()
go k.RlsMem()
}
func (k *Kernel) AllocMem() {
for pCh := range k.GetReqMemCh() {
pCh <- 0
close(pCh)
}
}
func (k *Kernel) RlsMem() {
// TODO: Add a for-select or for-range over k.rlsMemCh here
}
Draining might look like this:
func (k *Kernel) RlsMem() {
for {
<-k.GetRlsMemCh()
}
}
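If you later want the drain loop to stop cleanly, a for-range variant (one of the options mentioned in the TODO above) exits once the channel is closed. This is just a sketch and assumes some part of your program eventually calls close(k.rlsMemCh):

func (k *Kernel) RlsMem() {
    // discard release requests until rlsMemCh is closed
    for range k.GetRlsMemCh() {
        // TODO: mark the corresponding memory index as free
    }
}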