I am implementing a protocol decoder that receives bytes over the UART of a microcontroller. The ISR takes bytes from the UART peripheral and puts them into a ring buffer. The main loop reads from the ring buffer and runs a state machine to decode the data.
The UART has an internal 32-byte receive FIFO and can generate an interrupt when this FIFO is quarter full, half full, three-quarters full, or completely full. How should I decide which of these interrupts should trigger my ISR, and what is the tradeoff involved?
Note - The protocol uses fixed-length 32-byte packets, sent every 10 ms.
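For reference, the current structure is roughly the following (simplified sketch; `uart_fifo_has_data()` and `uart_read_byte()` are illustrative stand-ins for the real peripheral access):

```c
#include <stdbool.h>
#include <stdint.h>

extern bool    uart_fifo_has_data(void);  /* illustrative peripheral access */
extern uint8_t uart_read_byte(void);

#define RB_SIZE 128u  /* power of two, so indices wrap with a cheap mask */

static volatile uint8_t  rb[RB_SIZE];
static volatile uint32_t rb_head;  /* written only by the ISR */
static volatile uint32_t rb_tail;  /* written only by main    */

void uart_rx_isr(void)
{
    /* Drain the hardware FIFO into the ring buffer. */
    while (uart_fifo_has_data()) {
        rb[rb_head & (RB_SIZE - 1u)] = uart_read_byte();
        rb_head++;
    }
}

/* Called from the main loop; feeds the decoder state machine. */
bool rb_pop(uint8_t *out)
{
    if (rb_tail == rb_head)
        return false;  /* buffer empty */
    *out = rb[rb_tail & (RB_SIZE - 1u)];
    rb_tail++;
    return true;
}
```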
This depends on a lot of things, most of all the maximum baud rate you need to support and how much CPU time your application needs for its other tasks.
Traditional ring buffers work on a byte-per-byte interrupt basis, but it is of course always nice to reduce the number of interrupts. In practice it probably doesn't matter much which threshold you pick: a higher fill level means fewer interrupts and less per-byte overhead, but also less remaining FIFO headroom before an overrun if the ISR is delayed.
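To put numbers on that tradeoff, here is a small helper (my addition, not part of the answer above; it assumes 8-N-1 framing, i.e. 10 bits per byte on the wire):

```c
#include <stdint.h>

/* Time budget in microseconds between the FIFO-threshold interrupt
 * firing and the FIFO overflowing, assuming 8-N-1 (10 bits/byte). */
static uint32_t isr_deadline_us(uint32_t baud, uint32_t fifo_size,
                                uint32_t threshold)
{
    uint32_t byte_time_us = (10u * 1000000u) / baud;  /* one byte on the wire */
    return (fifo_size - threshold) * byte_time_us;    /* headroom, in bytes */
}

/* Example at 115200 baud with a 32-byte FIFO (one byte is ~87 µs):
 *   threshold  8 (quarter full)        -> ~2.1 ms to service the ISR
 *   threshold 24 (three-quarters full) -> ~0.7 ms to service the ISR */
```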
It is much more important to implement a double-buffer scheme. You should of course not run the decoding state machine straight from the single ring buffer the ISR is writing into; that will turn into a race-condition nightmare.
Your main program should take the semaphore / disable the UART interrupt, copy out the whole buffer, then re-enable the interrupt. Ideally the "copy" is done by swapping a pointer rather than doing a hard copy. Benchmark the code doing this: it needs to complete in less than 10 / baudrate seconds, where 10 is the number of bits per byte on the wire (1 start + 8 data + 1 stop), assuming the UART is 8-N-1.
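A minimal sketch of that pointer-swap double buffer (names and the `uart_irq_disable()`/`uart_irq_enable()` masking calls are illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

extern void    uart_irq_disable(void);  /* illustrative interrupt masking */
extern void    uart_irq_enable(void);
extern bool    uart_fifo_has_data(void);
extern uint8_t uart_read_byte(void);

#define PKT_LEN 32u

static uint8_t buf_a[PKT_LEN], buf_b[PKT_LEN];
static uint8_t * volatile isr_buf  = buf_a;  /* ISR fills this           */
static uint8_t *          main_buf = buf_b;  /* main loop decodes this   */
static volatile uint32_t  isr_count;         /* bytes written to isr_buf */

void uart_rx_isr(void)
{
    while (uart_fifo_has_data() && isr_count < PKT_LEN)
        isr_buf[isr_count++] = uart_read_byte();
}

/* Called from the main loop. The "copy" is a pointer exchange inside a
 * critical section that must finish in well under one byte time. */
uint32_t fetch_packet(uint8_t **out)
{
    uart_irq_disable();           /* enter critical section          */
    uint8_t *full  = isr_buf;
    uint32_t count = isr_count;
    isr_buf   = main_buf;         /* hand the ISR the empty buffer   */
    isr_count = 0;
    uart_irq_enable();            /* leave critical section          */

    main_buf = full;              /* main loop now owns this buffer  */
    *out = full;
    return count;
}
```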
If DMA is available, prefer it over software ring buffers.
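With fixed 32-byte packets, one DMA transfer per packet gives you a single interrupt every 10 ms instead of several FIFO-threshold interrupts. A sketch assuming an STM32-style HAL (HAL_UART_Receive_DMA and the HAL_UART_RxCpltCallback hook are ST's API; adapt to your part):

```c
#include "stm32f4xx_hal.h"  /* adjust to your device family */

#define PKT_LEN 32u

extern UART_HandleTypeDef huart1;  /* set up elsewhere by your init code */

static uint8_t dma_pkt[PKT_LEN];
static volatile uint8_t pkt_ready;

void start_rx(void)
{
    /* DMA moves bytes out of the UART with no CPU involvement. */
    HAL_UART_Receive_DMA(&huart1, dma_pkt, PKT_LEN);
}

/* Fires once per complete 32-byte packet. */
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart == &huart1) {
        pkt_ready = 1;  /* signal the main loop; it should consume dma_pkt
                           before the next transfer overwrites it */
        HAL_UART_Receive_DMA(&huart1, dma_pkt, PKT_LEN);  /* re-arm */
    }
}
```

A real implementation would double-buffer dma_pkt as well, for the same race-condition reasons as above.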