I have dockerized my PostgreSQL DB (about 4GB of data). It works fine on my notebook (Linux, 2-core CPU, SSD, 8GB RAM). I am now trying to move it via Docker Hub to a free-tier AWS EC2 t2.micro instance.
On container startup, a few scripts are executed that set up the schema and table structure and populate it with data using pg_restore (custom, compressed format). This takes about 20 minutes on my notebook, but on the t2.micro it looks like it will take hours or days (after 10 hours there are only 2.4GB in the cluster).
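For reference, the init script placed in /docker-entrypoint-initdb.d/ looks roughly like this (the database name, dump path and options are illustrative placeholders, not my exact setup):

    #!/bin/bash
    # Run by the official postgres image on first start-up;
    # names and paths below are just examples.
    set -e

    # create the target database
    psql -U postgres -c "CREATE DATABASE mydb;"

    # restore schema and data from a custom-format, compressed dump
    pg_restore -U postgres -d mydb --no-owner /backup/mydb.dump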
The question is: why is it so slow, when the process (postgres) is not using much CPU (the CPU credit balance is even increasing!) and shows no noticeable disk activity? What limit is causing this slow progress?
There are also log messages:
LOG: using stale statistics instead of current ones because stats collector is not responding
which I found mentioned here, but I don't know what causes it...
After about an hour I also reniced the process to 19, because the server was impossible to work with (responses were far too slow), but renicing it back to 0 seems to have no effect.
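For completeness, this is roughly how I changed the priority (the PID below is just an example):

    # find the postgres backend doing the restore
    ps aux | grep postgres

    # lower its priority so the machine stays usable (PID is an example)
    sudo renice -n 19 -p 12345

    # later, restore the default priority
    sudo renice -n 0 -p 12345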
Thanks in advance, J.
PS:
It seems that there is an I/O problem... here is the output of iostat:
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.23    0.45    0.26   97.54    0.66    0.87

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
xvda             90.79        33.11      1469.24    2718730  120629372
Also, in top I can see that the process is in state "D" (uninterruptible sleep, typically waiting on I/O) almost all the time.
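For anyone who wants to reproduce the check, these are generic commands, nothing specific to my setup:

    # CPU and per-device I/O statistics, refreshed every 5 seconds
    iostat 5

    # processes in uninterruptible sleep (state D), usually waiting on I/O
    ps -eo pid,stat,comm | awk '$2 ~ /D/'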
It is definitely caused by the I/O (burst) credit limit. The problem can occur with a small general-purpose (gp2) SSD volume (the free tier allows up to 30GB), whose burst limits cause very low disk performance after roughly 30 minutes of sustained full load, regardless of the CPU credit balance.
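If someone runs into the same thing: one way to confirm it is to watch the BurstBalance metric of the EBS volume in CloudWatch, for example with the AWS CLI (the volume ID and time range below are placeholders):

    aws cloudwatch get-metric-statistics \
        --namespace AWS/EBS \
        --metric-name BurstBalance \
        --dimensions Name=VolumeId,Value=vol-0123456789abcdef0 \
        --start-time 2017-01-01T00:00:00Z \
        --end-time 2017-01-01T12:00:00Z \
        --period 300 \
        --statistics Average

When the balance drops to zero, the volume falls back to its (very low) baseline IOPS, which matches the behaviour I saw.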
I was able to perform a full restore of the DB in several phases.
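One possible way to split it into phases (this is just a sketch of the idea, not my exact commands) is pg_restore's --section option, pausing between sections so the burst balance can recover:

    # schema definitions only (fast, little I/O)
    pg_restore -U postgres -d mydb --section=pre-data /backup/mydb.dump

    # the table data, the heavy part; it can be split further per table with -t
    pg_restore -U postgres -d mydb --section=data /backup/mydb.dump

    # indexes and constraints last
    pg_restore -U postgres -d mydb --section=post-data /backup/mydb.dump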