It will be hard to explain without posting the whole code, but I will try my best to describe it in detail. If you need more information, please let me know.
I have a Python program with 3 processes (multiprocessing) running in parallel. The first one is a video-preprocessing task, the second is an audio-preprocessing task, and the last is a DNN model call. All processes are written roughly like this:
from multiprocessing import Process

class NameOfTheProcess(Process):
    def __init__(self, queue, videoBuffer, audioBuffer):
        super().__init__()
        # ....

    def run(self):
        while True:  # so that the process runs until I stop the program
            # ....
The video preprocessing is simple face tracking; the results are put into a queue (which I use to share data between the processes).
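For illustration only, a minimal sketch of what such a video process could look like (this is my own assumption, not the original code; the webcam source, the Haar-cascade detector, and videoQueue stand in for the real face tracker):

import cv2
from multiprocessing import Queue

videoQueue = Queue()  # shared with the other processes
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # webcam as an example source
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    videoQueue.put(faces)  # share the tracking result via the queue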
The audio preprocessing is a process where I get an audio frame from the jack library, downsample it, and put it into a buffer. After a specific delay of 20 jack callbacks I start the DNN model process.
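Roughly, the audio side looks like the sketch below (again my own reconstruction, assuming the JACK-Client Python package; downsample() and N_CALLBACKS_BEFORE_START are hypothetical names, not the original code):

import jack
from multiprocessing import Queue

audioBuffer = Queue()  # shared with the DNN process
client = jack.Client("AudioPreprocessing")
inport = client.inports.register("input_1")

callback_count = 0
N_CALLBACKS_BEFORE_START = 20  # start the DNN process after 20 callbacks

def downsample(samples):
    # placeholder: keep every 2nd sample (the real code would filter properly)
    return samples[::2].copy()

@client.set_process_callback
def process(frames):
    global callback_count
    samples = inport.get_array()          # numpy view of the current jack frame
    audioBuffer.put(downsample(samples))  # hand the frame to the DNN process
    callback_count += 1
    if callback_count == N_CALLBACKS_BEFORE_START:
        pass  # here the DNN model process would be started

with client:
    client.connect("system:capture_1", inport.name)
    input("Press Return to stop\n")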
In the DNN model process I currently have only 4 simple steps: first I check whether the audio queue is empty; if not, I get an element from the queue; then I go through a "dummy" for loop over a range of 1000; after that I take the last x samples of that element and put them into another queue for later use.
The video preprocessing and the audio preprocessing work fine on their own, but as soon as I also start the DNN process I lose a lot of audio, and in the jack client I get many messages like 16:00:12.435 XRUN callback (7 skipped). I get the same issue when I start only the audio preprocessing and the DNN process, so in my mind the video preprocessing is not the problem.
After a while I figured out that when I remove the line audioBufferIn = self.audioBuffer.get() from the code below, the audio loss stops, but I need to read from the audio queue there somehow so I can work with the data.
from multiprocessing import Process

class DnnModelCall(Process):
    def __init__(self, queue, audioBuffer):
        super().__init__()
        print("DnnModelCall: init")
        self.queue = queue
        self.audioBuffer = audioBuffer

    def run(self):
        print("DnnModelCall: run")
        while True:
            if not self.audioBuffer.empty():
                k = 0
                audioBufferIn = self.audioBuffer.get()
                # audioBufferIn = self.audioBuffer.get(block=False)
                for i in range(0, 1000):  # "dummy" loop standing in for the DNN call
                    k += 1
                outputDnnBackPart = audioBufferIn[-2560:]  # last 2560 audio samples
                outputQueue = []
                outputQueue.extend(outputDnnBackPart)
                self.queue.put(outputQueue)
I have also tried it with block=False, but I get the same result.
Does anyone have an idea? If you need more information, let me know. Thanks in advance.
The problem was that the element in the queue was too big; transferring such a large item through the multiprocessing queue apparently stalls the audio process long enough to cause the xruns. The solution was to reduce the size: instead of one big chunk, I now send every audio frame (128 samples each) to the other process and collect them there in a buffer until I have the desired 40800 samples. Once the buffer is full, I start the processing.
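In outline, the fix looks like this (a minimal sketch of my solution; REQUIRED_SAMPLES and process_buffer() are names I use here for illustration only):

import numpy as np
from multiprocessing import Process

REQUIRED_SAMPLES = 40800  # desired buffer length; each queue item is one 128-sample frame

class DnnModelCall(Process):
    def __init__(self, queue, audioBuffer):
        super().__init__()
        self.queue = queue
        self.audioBuffer = audioBuffer

    def run(self):
        collected = np.empty(0, dtype=np.float32)  # local accumulation buffer
        while True:
            # each get() now transfers only 128 samples, so it returns quickly
            frame = self.audioBuffer.get()
            collected = np.concatenate((collected, frame))
            if len(collected) >= REQUIRED_SAMPLES:
                self.process_buffer(collected[:REQUIRED_SAMPLES])
                collected = collected[REQUIRED_SAMPLES:]

    def process_buffer(self, audioBufferIn):
        # the DNN call would go here; keep the last 2560 samples as before
        outputDnnBackPart = audioBufferIn[-2560:]
        self.queue.put(list(outputDnnBackPart))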