Tags: jsch, openssh, apache-mina-sshd

Hundreds of Apache SSHD clients leave hundreds of OpenSSH daemon processes running


I use the Apache SSHD Java library's client to serve files over HTTP that are read via SSH. The proof-of-concept app works great.

But when I want to read hundreds of files serially (e.g., to display images in a gallery), I run into scalability problems: my server ends up with hundreds of OpenSSH daemon processes:

  • under my username as /usr/lib/openssh/sftp-server and
  • sshd:username@notty, and
  • under root as sshd: username [priv]

which causes my server to run out of memory and crash.

I don't think this is a bug in OpenSSH (I'm using OpenSSH_5.9p1), but rather a problem with how I'm using the Apache SSHD client.

Here is the code I run every time I serve a file:

// Runs once per served file: a new client, connection, and SFTP channel each time
SshClient client = SshClient.setUpDefaultClient();
client.getProperties().put(ClientFactoryManager.HEARTBEAT_INTERVAL, "50000");
client.start();
ClientSession session = client.connect("username", "server", 22).await().getSession();
session.addPasswordIdentity("password");
session.auth().await();
SftpClient sftp = session.createSftpClient();
// Create an HTTP response from an sftp channel stream

Which of the following, if any, do I need in order to make hundreds of Apache SSHD client requests?

  1. Close/stop my session and/or client after each request? (The terminology here is so generic, yet the functionality is so precise, that I may have a wrong understanding of each component.)
  2. Client pooling?
  3. Server configuration to limit the number of daemon threads?
  4. Reduce the timeout (on the client or on the server)?

Any insight, specific or general, would be helpful.


Solution

  • You definitely must disconnect your SSH/SFTP session after you are done with it (a per-request cleanup sketch follows below):

    client.stop();
    

    Client pooling is not a bad idea, but consider it only if disconnecting after each request does not help.
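
For instance, a minimal per-request cleanup sketch could look like the following. It mirrors the API used in the question (createSftpClient() on the session, stop() on the client); the serveFile method, the hard-coded "username"/"server"/"password" values, and the import package names are placeholders/assumptions and may need adjusting to your SSHD version.

    // Package names below assume an older (0.x-era) Apache SSHD release and may
    // differ in newer versions; adjust the imports to match your SSHD artifact.
    import org.apache.sshd.SshClient;
    import org.apache.sshd.ClientSession;
    import org.apache.sshd.client.SftpClient;
    
    public class SftpFileServer {
    
        // Called once per HTTP request; exception handling is simplified for the sketch.
        void serveFile(String remotePath) throws Exception {
            SshClient client = SshClient.setUpDefaultClient();
            client.start();
            ClientSession session = null;
            SftpClient sftp = null;
            try {
                session = client.connect("username", "server", 22).await().getSession();
                session.addPasswordIdentity("password");
                session.auth().await();
                sftp = session.createSftpClient();
                // ... stream remotePath from the sftp channel into the HTTP response ...
            } finally {
                // Release the SFTP channel, the SSH session, and the client itself so the
                // corresponding sftp-server / "sshd: username@notty" processes can exit.
                if (sftp != null) {
                    sftp.close();
                }
                if (session != null) {
                    session.close(false);
                }
                client.stop();
            }
        }
    }

The important part is that the SFTP channel, the session, and the client are all released in the finally block on every request, even when streaming fails, so no lingering session (and therefore no server-side sftp-server/sshd process) is left behind. Keeping a single long-lived SshClient and only opening/closing sessions per request is a separate optimization, closer to the pooling idea above.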