I use the Apache SSHD Java library's client to serve files over HTTP that are read via SSH. The proof-of-concept app works great.
But when I read hundreds of files serially (e.g. to display images in a gallery) I run into scalability problems: my server ends up with hundreds of OpenSSH daemon processes:

/usr/lib/openssh/sftp-server
sshd: username@notty
sshd: username [priv]

which exhausts the server's memory and crashes it.
I don't think this is a bug in OpenSSH (I'm using OpenSSH_5.9p1), but rather in how I'm using the Apache SSHD client.
Here is the code I run every time I serve a file:
SshClient client = SshClient.setUpDefaultClient();
client.getProperties().put(ClientFactoryManager.HEARTBEAT_INTERVAL, "50000");
client.start();
ClientSession session = client.connect("username", "server", 22).await().getSession();
session.addPasswordIdentity("password");
session.auth().await();
SftpClient sftp = session.createSftpClient();
// Create an HTTP response from an sftp channel stream
What, if anything, do I need to change to make hundreds of Apache SSHD client requests like this?
Any insight, specific or general, would be helpful.
You definitely must disconnect your SSH/SFTP session after you are done with it, and stop the client:
session.close(false);
client.stop();
Client pooling is not a bad idea, but try it only if disconnecting does not help.
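If disconnecting per request is too slow, the other direction is to reuse one client and one session for all transfers instead of creating them per file, so the server keeps a single sshd process instead of hundreds. Below is a minimal sketch of that idea, assuming the same Apache SSHD client API shown in the question; the import paths and some method names (e.g. `SftpClient.read`) vary between SSHD versions, so treat them as approximate:

```java
import java.io.InputStream;

import org.apache.sshd.client.SshClient;
import org.apache.sshd.client.session.ClientSession;
import org.apache.sshd.client.subsystem.sftp.SftpClient;

// Sketch: one SshClient, one ClientSession, one SftpClient shared by all
// requests, torn down once at application shutdown. Names like SftpGateway
// are illustrative, not part of the SSHD API.
public class SftpGateway implements AutoCloseable {
    private final SshClient client;
    private final ClientSession session;
    private final SftpClient sftp;

    public SftpGateway(String user, String host, String password) throws Exception {
        client = SshClient.setUpDefaultClient();
        client.start();
        session = client.connect(user, host, 22).await().getSession();
        session.addPasswordIdentity(password);
        session.auth().await();
        sftp = session.createSftpClient(); // one SFTP channel, reused for every file
    }

    // Called once per served file; streams over the shared SFTP channel.
    public InputStream open(String remotePath) throws Exception {
        return sftp.read(remotePath);
    }

    @Override
    public void close() throws Exception {
        sftp.close();
        session.close(false); // graceful disconnect
        client.stop();        // lets the remote sshd process exit
    }
}
```

Note that a single `SftpClient` channel is not necessarily safe for concurrent requests; if the gallery page fetches images in parallel, you would either serialize access to it or pool a small, fixed number of sessions rather than opening one per file.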