I'm building an fMRI paradigm in which a stimulus disappears when the participant presses a button (up to 4 s), followed by a jitter (0-12 s) and then another stimulus presentation. I'm locking stimulus onsets to the scanner's 1 s TR, so I'm wondering how to round the jitter up so that the next onset lands on a whole second.
So, the task is initialized as:
stimulus 1 (≤4 s) -- jitter (e.g. 6 s) -- stimulus 2
But if the participant responds to stimulus 1 at 1.3 s, the trial becomes
stimulus 1 (1.3 s) -- jitter (6.7 s) -- stimulus 2
so stimulus 2 still starts 8.0 s after stimulus 1 onset, i.e. on the 1 s TR grid.
Does that make sense? Thanks for the help!
Here's the calculation I have so far for the extra time to add to the planned jitter:

difference = 1.0 - (RT - int(RT))  # e.g. 1.0 - (1.3 - 1) = 0.7
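For what it's worth, here is a minimal sketch of how that could be wrapped into a helper (Python is assumed since the snippet above uses int(); the names adjusted_jitter, rt, planned_jitter, and tr are just placeholders, not part of any particular package). The modulo guards against adding a full extra second when the response already falls exactly on a TR boundary:

TR = 1.0  # scanner repetition time in seconds

def adjusted_jitter(rt, planned_jitter, tr=TR):
    """Extend the planned jitter so stimulus 2 onsets on a TR boundary."""
    # Fraction of a TR elapsed past the last boundary (0.3 for rt = 1.3).
    remainder = rt % tr
    # Time needed to reach the next boundary; the outer modulo avoids
    # adding a full TR when the response is already aligned (remainder == 0).
    padding = (tr - remainder) % tr
    return planned_jitter + padding

# Example from the post: responding at 1.3 s with a planned 6 s jitter
# gives a 6.7 s jitter, so stimulus 2 starts 8.0 s after stimulus 1 onset.
print(adjusted_jitter(1.3, 6.0))  # ~6.7 (up to floating-point noise)

If floating-point noise matters for your onset bookkeeping, rounding the result to the nearest millisecond (e.g. round(value, 3)) keeps the logs tidy.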