While playing a 4K video, the user can resize the player's window and the resulting image is scaled smoothly at run time.
On the other hand, a program written with libav that reads the 4K video file frame by frame and scales each frame down with the sws_scale function is far less efficient: resizing the whole file takes longer than the video's duration.
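Simplified, the loop looks roughly like this (the output size, pixel format and scaler flags below are just placeholders; error handling and cleanup are omitted):

    /* Decode a 4K file frame by frame and scale every frame on the CPU. */
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>
    #include <libavutil/imgutils.h>

    int main(int argc, char **argv) {
        AVFormatContext *fmt = NULL;
        avformat_open_input(&fmt, argv[1], NULL, NULL);
        avformat_find_stream_info(fmt, NULL);

        int vstream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
        const AVCodec *codec = avcodec_find_decoder(fmt->streams[vstream]->codecpar->codec_id);
        AVCodecContext *dec = avcodec_alloc_context3(codec);
        avcodec_parameters_to_context(dec, fmt->streams[vstream]->codecpar);
        avcodec_open2(dec, codec, NULL);

        /* Scale 3840x2160 down to e.g. 1920x1080 in software. */
        int dst_w = 1920, dst_h = 1080;
        struct SwsContext *sws = sws_getContext(dec->width, dec->height, dec->pix_fmt,
                                                dst_w, dst_h, AV_PIX_FMT_YUV420P,
                                                SWS_BILINEAR, NULL, NULL, NULL);
        uint8_t *dst_data[4]; int dst_linesize[4];
        av_image_alloc(dst_data, dst_linesize, dst_w, dst_h, AV_PIX_FMT_YUV420P, 16);

        AVPacket *pkt = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();
        while (av_read_frame(fmt, pkt) >= 0) {
            if (pkt->stream_index == vstream && avcodec_send_packet(dec, pkt) == 0) {
                while (avcodec_receive_frame(dec, frame) == 0) {
                    /* Every decoded frame is resized here, on the CPU. */
                    sws_scale(sws, (const uint8_t * const *)frame->data, frame->linesize,
                              0, frame->height, dst_data, dst_linesize);
                }
            }
            av_packet_unref(pkt);
        }
        /* cleanup omitted */
        return 0;
    }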
Why is that? Maybe because the player's FPS is lower and some frames are skipped, even though the video still looks smooth?
This is because most video players do the scaling in the video card's hardware: the decoded frame is uploaded as a texture and the GPU resamples it to the window size while drawing it. With GL, for example, scaling (or even the format conversion from YUV to RGB) is essentially free.
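To illustrate, here is a rough sketch of how a GL-based player can render a decoded YUV420P frame. Only the texture handling is shown; window/context creation, shader compilation and the actual quad drawing are omitted, and names like init_planes, upload_frame and texY are made up for this example. The decoded Y, U and V planes are uploaded as single-channel textures once per frame; the GPU then does the scaling (via GL_LINEAR texture filtering) and the YUV-to-RGB conversion (in the fragment shader) while rasterizing a window-sized quad:

    /* Assumes an OpenGL 3.x context already exists (created with GLFW/SDL/...). */
    #include <GL/gl.h>              /* or a loader such as glad/GLEW */
    #include <libavutil/frame.h>

    static GLuint tex[3];           /* Y, U, V planes as single-channel textures */

    /* Per output pixel: sample the three planes and convert (approximate
     * full-range BT.601) YUV to RGB. The scaling itself comes for free from
     * GL_LINEAR filtering as this shader runs over every window pixel. */
    static const char *frag_src =
        "#version 330 core\n"
        "in vec2 uv;\n"
        "out vec4 color;\n"
        "uniform sampler2D texY, texU, texV;\n"
        "void main() {\n"
        "    float y = texture(texY, uv).r;\n"
        "    float u = texture(texU, uv).r - 0.5;\n"
        "    float v = texture(texV, uv).r - 0.5;\n"
        "    color = vec4(y + 1.402*v, y - 0.344*u - 0.714*v, y + 1.772*u, 1.0);\n"
        "}\n";

    static void init_planes(int w, int h) {
        glGenTextures(3, tex);
        for (int i = 0; i < 3; i++) {
            int pw = i ? w / 2 : w;     /* chroma planes are half-size for 4:2:0 */
            int ph = i ? h / 2 : h;
            glBindTexture(GL_TEXTURE_2D, tex[i]);
            /* GL_LINEAR filtering is what makes resizing "free": the texture
             * units interpolate while the window-sized quad is rasterized. */
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, pw, ph, 0,
                         GL_RED, GL_UNSIGNED_BYTE, NULL);
        }
    }

    static void upload_frame(const AVFrame *f) {
        for (int i = 0; i < 3; i++) {
            int pw = i ? f->width / 2 : f->width;
            int ph = i ? f->height / 2 : f->height;
            glBindTexture(GL_TEXTURE_2D, tex[i]);
            glPixelStorei(GL_UNPACK_ROW_LENGTH, f->linesize[i]); /* respect stride */
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, pw, ph,
                            GL_RED, GL_UNSIGNED_BYTE, f->data[i]);
        }
        glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
        /* ...then draw one full-window textured quad with frag_src bound. */
    }

The CPU only touches each decoded byte once (the texture upload); the per-pixel work of resampling and color conversion happens on the GPU for whatever the current window size is, which is why the player can rescale in real time while the sws_scale loop cannot keep up.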