audiomath

What is the best way to schedule multiple audio events with audiomath?


I want to schedule the playback of multiple audio segments at arbitrary times. From the audiomath docs, I think I should use PsychPortAudio to get the most predictable latency. Also from the docs, I understand that I can schedule the playback of a single sound. Can multiple short segments be scheduled in advance at exact times (modulo latency)? Separately, and ideally, could this schedule be altered during playback?


Solution

  • Yes, coordinated pre-scheduling of multiple independent PsychPortAudio players is possible. As a minimal example, the following seems to work for me on the Mac:

    import audiomath as am
    am.BackEnd.Load('PsychToolboxInterface')   # use the PsychPortAudio back-end for low-latency, schedulable playback
    
    p1 = am.Player(am.TestSound('12'))
    p2 = am.Player(am.TestSound('34'))
    
    t0 = am.Seconds()        # current time on the clock used for scheduling
    
    p1.Play(when=t0 + 2.0)   # start p1 two seconds from now
    p2.Play(when=t0 + 2.2)   # overlaps p1, but offset by 200ms
    

    This won't let p1 and p2 run completely seamlessly: each scheduled onset will still be jittered by something on the order of a couple of hundred microseconds. That is very good by the standards of stimulus presentation, but, depending on the sound content, it may still be enough to create audible artifacts at the transitions between sounds. Whether that matters depends on the effect you want to achieve; the best approach is to try it and find out whether it meets your needs.

    Depending on exactly how you want things to sound and behave, there may be better strategies than PsychPortAudio pre-scheduling. For example: don't load the PsychPortAudio back-end at all, but play a single sound on a loop and change parts of the Sound data on the fly before the Player gets to them, as in the sketch below.
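
    Here is a minimal sketch of that looping approach under the default (PortAudio) back-end. It assumes the Player streams from the Sound's sample array (snd.y) rather than copying it, so that in-place edits to that array are heard on the next pass of the loop; the attribute and parameter names (y, loop) are as I recall them from the audiomath docs, so check them against your installed version:

    import time
    import audiomath as am           # default back-end; deliberately NOT loading PsychToolboxInterface
    
    snd = am.TestSound('12')         # a stereo test Sound; its samples live in snd.y (a NumPy array)
    p = am.Player(snd, loop=True)    # loop over the same Sound indefinitely
    p.Play()
    
    time.sleep(1.0)                  # let the first pass play
    
    # Overwrite part of the Sound's data in place. Because the Player reads
    # from the same array, the change is heard the next time playback
    # reaches (or loops back over) those samples.
    halfway = snd.y.shape[0] // 2
    snd.y[halfway:, :] *= 0.25       # e.g. attenuate the second half
    
    time.sleep(4.0)                  # keep the script alive long enough to hear the change
    p.Stop()

    Whether this beats pre-scheduling depends on whether your segments are known far enough in advance and how seamless the transitions need to be.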