I want to implement an 8ms delay in my driver code. I used the `msleep` function, but the loop only ran twice, and the time difference between the two prints in dmesg is actually 10ms. Shouldn't it be 2ms?
Part of the dmesg output:

```
[ 386.199343] this is ioctl
[ 386.199359] ioctl value = 0
[ 386.210085] ioctl value = 1
```
The `ioctl` function in the driver code:
```c
static long spiio_drv_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int i = 0;
    int value = 0;
    int *param = (int *)arg;

    printk("this is ioctl\n");
    switch (cmd)
    {
    case 0:                         /* no sleep */
        *param = 0;
        /* falls through */
    case 1:                         /* SPIIO_IOCTL_CMD */
        do {
            value = gpio_get_value(spiio_gpio);
            printk("ioctl value = %d\n", value);
            if (value == 1) {
                return 1;
            } else {
                if (*param) {
                    msleep(1);
                }
                i++;
            }
        } while (i < *param);
        break;
    default:
        printk("spiio ioctl cmd %d is error", cmd);
        return -1;
    }
    return 0;
}
```
What is the problem?
There are 2 problems.
The first problem is that you're expecting precision when it's impossible to guarantee that any delay or sleep won't be significantly longer than requested (e.g. an interrupt or a task switch can occur after the requested time expires but before the function returns control to your code). In loops, the best alternative is some kind of `sleep_until(when)` with a `when += period;` step, so that if one sleep takes longer than intended the next sleep automatically compensates (by being shorter) and the unwanted extra time can't accumulate and get worse with every iteration of the loop. Also, `msleep()` may not be the best function to use for short delays (compare `usleep_range()`). A minimal sketch of such a loop is shown below.
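Here is a rough sketch (not your driver's code) of what a drift-compensating poll loop could look like, using `usleep_range()` and an absolute `ktime_t` wakeup time; the hard-coded 8-iteration / 1 ms-per-iteration budget and the helper name are just illustrative assumptions:

```c
/* Sketch only: poll a GPIO for ~8 ms, sleeping ~1 ms per pass with
 * usleep_range() instead of msleep(), and deriving each wakeup from an
 * absolute time so late wakeups don't accumulate across iterations.
 */
#include <linux/delay.h>
#include <linux/gpio.h>
#include <linux/ktime.h>

static int poll_gpio_with_deadline(unsigned int gpio)
{
    ktime_t when = ktime_get();
    int i;

    for (i = 0; i < 8; i++) {
        s64 remaining_us;

        if (gpio_get_value(gpio) == 1)
            return 1;                       /* line went high */

        /* Advance the absolute wakeup time by one period, then sleep only
         * for whatever is left until then; a late wakeup on one pass is
         * compensated by a shorter sleep on the next. */
        when = ktime_add_ms(when, 1);
        remaining_us = ktime_us_delta(when, ktime_get());
        if (remaining_us > 0)
            usleep_range(remaining_us, remaining_us + 100);
    }
    return 0;                               /* still low after ~8 ms */
}
```

The key point is that the wakeup times are computed from an absolute `when`, not from "now + 1 ms" on each pass, so one late wakeup shrinks the next sleep instead of pushing the whole schedule out.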
The other problem is that the implementation of `msleep()` in Linux is bad. Specifically, Linux has two different systems for timing. In the first/oldest system, one timer was configured to generate an interrupt at a fixed frequency (e.g. 100 Hz, i.e. every 10 milliseconds), and everything (all time delays, all scheduling, etc.) was built on top of that tick. This was considered "acceptable" back when there were no multi-core CPUs and no power management. The second/newer system is "tickless" with high-resolution timers, where the underlying hardware timer is programmed to fire as close as possible to the soonest expiry time. `msleep()` is still built on the old tick-based (jiffies) timers, so it can only sleep a whole number of ticks; because your measured delay is exactly 10 milliseconds, it's extremely likely your kernel is using a 10-millisecond tick (HZ=100), and `msleep(1)` is being rounded up to at least one full tick.
Of course the "theoretically best" you could hope for would be a split approach, where the kernel sleeps for as much of the interval as it safely can (running other tasks or saving power) and then does a small final busy-wait loop to get maximum precision. AFAIK Linux has never done this automatically, for either "ticks" or "tickless", for any kind of delay or sleep; but you may be able to implement it yourself (e.g. using a combination of `schedule_timeout_uninterruptible()` and `ktime_get()` to implement some kind of `wait_until(when)`).
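Something along these lines, perhaps (an untested sketch; the helper name `wait_until()` and the ~2-tick safety margin are this example's own choices, not an existing kernel API):

```c
/* Untested sketch of the "split" approach: really sleep while the deadline
 * is comfortably far away, then busy-wait the last stretch for precision.
 */
#include <linux/jiffies.h>
#include <linux/ktime.h>
#include <linux/sched.h>
#include <asm/processor.h>      /* cpu_relax() */

static void wait_until(ktime_t when)
{
    for (;;) {
        s64 remaining_ns = ktime_to_ns(ktime_sub(when, ktime_get()));

        if (remaining_ns <= 0)
            return;                             /* deadline reached */

        if (remaining_ns > 2 * (s64)(NSEC_PER_SEC / HZ)) {
            /* Far from the deadline: sleep, but wake ~2 ticks early so a
             * late wakeup cannot overshoot the deadline. */
            schedule_timeout_uninterruptible(
                    nsecs_to_jiffies(remaining_ns) - 2);
        } else {
            /* Close to the deadline: spin for the final stretch. */
            cpu_relax();
        }
    }
}
```

A caller could then do something like `wait_until(ktime_add_ms(ktime_get(), 8));` to get a reasonably tight 8 ms delay, at the cost of burning CPU for the final tick or two.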
Note: You might also be able to make Linux less bad by reconfiguring the kernel (enabling CONFIG_HZ=1000, or the tickless/high-resolution timer options); but if you need to ensure your code works for other people, you'd want to test with the opposite (worst-case) configuration.