I have encountered peculiar slowness on macOS when using node-postgres or deno-postgres. I have a very simple table with two columns, and when I execute the query select * from table it runs very, very slowly. Running the same query directly from a SQL client is very fast.
To be precise: the table has 60 entries and two columns, and it lives on a remote PostgreSQL server (12.2).
I have the following three scripts.
# node v13.12.0
const { Client } = require('pg')

const client = new Client({
  user: 'u',
  host: 'address',
  database: 'db',
  password: 'pw',
  port: 5432,
})

client.connect()

const start = Date.now();
client.query('SELECT * from unit', (err, res) => {
  const ms = Date.now() - start;
  console.log(`db call ${ms}`);
  console.log(res.rows.length);
  client.end()
})
# deno 1.1.2
# v8 8.5.216
# typescript 3.9.2
import { Client } from "https://deno.land/x/postgres@v0.4.2/mod.ts";

const client = new Client({
  user: "u",
  database: "db",
  hostname: "addr",
  password: "pw",
  port: 5432,
});

await client.connect();

const start = Date.now();
const dataset = await client.query("SELECT * FROM unit");
const ms = Date.now() - start;

console.log(`db call ${ms}`);
console.log(dataset.rowsOfObjects().length);
# python 3.7.7
import psycopg2
from datetime import datetime

connection = psycopg2.connect(user="u",
                              password="p",
                              host="addr",
                              port="5432",
                              database="db")
cursor = connection.cursor()

a = datetime.now()
cursor.execute("select * from unit")
records = cursor.fetchall()
b = datetime.now()

c = b - a
print(len(records))
print(c.total_seconds() * 1000)
When I execute all three scripts on my macOS machine (10.15.5), I get the following results:

"select * from unit" (60 records):
  node    ~16,000 ms
  deno    ~16,000 ms
  python     ~240 ms

When I execute "select * from unit limit 5":
  node       ~480 ms
  deno       ~110 ms
  python     ~220 ms
When I execute "select * from unit" on the same Ubuntu server where Postgres is installed, all three scripts execute in around 10 ms.
I have enabled timing and full statement logging on the Postgres server, and I can see that in all of the above situations the query itself executes in under one millisecond, around ~0.6 ms.
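Since the server reports sub-millisecond execution, the remaining ~16 seconds must be spent on the client side or on the wire. To narrow down which phase is slow, here is a minimal sketch (pg's promise API, same placeholder credentials as above) that times the connection and the query round-trip separately:

  // Splits connection time from query round-trip time (node-postgres promise API).
  const { Client } = require('pg')

  async function main() {
    const client = new Client({
      user: 'u',
      host: 'address',
      database: 'db',
      password: 'pw',
      port: 5432,
    })

    const t0 = Date.now()
    await client.connect()                                 // TCP + auth handshake
    const t1 = Date.now()
    const res = await client.query('SELECT * FROM unit')   // query round-trip only
    const t2 = Date.now()

    console.log(`connect ${t1 - t0} ms, query ${t2 - t1} ms, rows ${res.rows.length}`)
    await client.end()
  }

  main()

Note that in the node script above client.connect() is not awaited, so its ~16,000 ms figure includes connection setup, while the deno and python scripts start the timer only after the connection is already established.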
At this point I have a feeling that the fault lies at the intersection of node/deno and my macOS, which probably means V8, or something else that node and deno share.
So, what could it be?
P.S. I also ran the node profiler and I see this:
[Summary]:
   ticks  total  nonlib   name
       0   0.0%    0.0%   JavaScript
     116  84.7%   99.1%   C++
      22  16.1%   18.8%   GC
      20  14.6%           Shared libraries
       1   0.7%           Unaccounted

[C++ entry points]:
   ticks    cpp   total   name
      45  54.9%   32.8%   T __ZN2v88internal32Builtin_DatePrototypeSetUTCHoursEiPmPNS0_7IsolateE
      36  43.9%   26.3%   T __ZN2v88internal21Builtin_HandleApiCallEiPmPNS0_7IsolateE
       1   1.2%    0.7%   T __ZN2v88internal23Builtin_DateConstructorEiPmPNS0_7IsolateE
But I have no idea what that might mean.
OK, I finally figured it out.
Since nothing was working, I decided to move my API to the remote server instead of running it locally. I started it up and was pleased to see instant communication between the API and the database... only to see exactly the same slowness in the frontend running on my machine.
And that is when it dawned on me: this is some sort of traffic shaping by my internet provider. I turned on a VPN and everything immediately started working as expected.
No wonder I couldn't understand why it was getting stuck; the issue was much further down the stack. This will be a lesson for me: always think outside the box that is the computer itself.
This also explains why it was sometimes working normally. However, it doesn't explain why the issue never affected the Python script; maybe psycopg2 communicates with the Postgres server in a slightly different manner that doesn't trigger the provider's filter. Who knows.
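If I ever want to test that guess, the first difference I would look at is SSL negotiation: as far as I can tell, psycopg2 goes through libpq, which tries SSL by default (sslmode=prefer), while node-postgres leaves SSL off unless it is configured, so the two drivers can look quite different on the wire. A rough sketch of such a check (purely a guess on my part, same placeholder credentials as before; the SSL variant will fail if the server has SSL disabled):

  // Speculative check: does enabling SSL on the node client change the timing pattern?
  const { Client } = require('pg')

  async function timeQuery(ssl) {
    const client = new Client({
      user: 'u',
      host: 'address',
      database: 'db',
      password: 'pw',
      port: 5432,
      ssl, // false = plain TCP; { rejectUnauthorized: false } = encrypted, no cert check
    })
    await client.connect()
    const start = Date.now()
    const res = await client.query('SELECT * FROM unit')
    console.log(`ssl=${JSON.stringify(ssl)}: ${Date.now() - start} ms, rows ${res.rows.length}`)
    await client.end()
  }

  async function main() {
    await timeQuery(false)
    await timeQuery({ rejectUnauthorized: false })
  }

  main()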