I have a program written in Java on my local machine. This program connects to a remote machine using JSch and shows the output on a MessageConsole in the user interface.
I can update the MessageConsole in the UI even though the started script doesn't terminate and keeps printing random numbers: I read the channel's InputStream and show its contents on the MessageConsole. That works perfectly well.
My problem is the output after starting the C program: nothing appears on the InputStream. The output only shows up once the program has terminated completely.
All output is done with either printf or std::cout.
If I log into the remote machine manually and start the program by hand, my terminal shows all the output instantaneously.
I used a ChannelExec to start the program on the remote machine like this:
String program = "sudo ./foo/bar.out";
String continousOutput = "sh ~/foo/bar.sh";
//ProcessWorker(String command, SSHSession ssh, GUI gui)
pw = new ProcessWorker(program, ssh, this);
pw.execute();
The ProcessWorker pw is a SwingWorker subclass that keeps the connection open in the background and updates the GUI without locking it up.
This code does all the dirty work. The SSH connection is already established and works.
protected Void doInBackground() throws Exception {
    try {
        exec = ssh.session.openChannel("exec");
        // command is the String program or continousOutput
        ((ChannelExec) exec).setCommand(command);
        exec.setInputStream(null);
        ((ChannelExec) exec).setErrStream(System.err);
        InputStream in = exec.getInputStream();
        exec.connect();

        byte[] tmp = new byte[1024];
        while (true) {
            while (in.available() > 0) {
                int i = in.read(tmp, 0, 1024);
                if (i < 0) break;
                String response = new String(tmp, 0, i);
                check(response); // parse response
                System.out.println(response);
                //this.publish(); // desperate try to get something
            }
            if (exec.isClosed()) {
                if (in.available() > 0) continue;
                System.out.println("exit-status: " + exec.getExitStatus());
                break;
            }
            try { Thread.sleep(1000); } catch (Exception ee) {}
        }
        exec.disconnect();
    } catch (JSchException | IOException e) {
        e.printStackTrace();
    }
    return null;
}
I have run out of ideas about where the problem is, especially since it works perfectly fine with the script.
Any ideas?
The short answer is to allocate a PTY (pseudo-TTY) for the remote session. That will probably cause the remote process to emit its output as it's running, instead of all at the end (note that this must be called before exec.connect()):
exec.setPty(true);
When you run a program through the unix command line, you're communicating with the program through a TTY device. When you launch a program through SSH without requesting a PTY (TTY) for the session, you're communicating with the program through a set of pipes, one each for standard input, output, and error.
Unix programs typically buffer their output. The buffering behavior is built into the standard I/O logic used by most C programs (and also used or replicated by Perl, Python, and so on). When writing to a TTY, the typical behavior is to buffer output data until a full line is written (i.e., the program writes a newline character), then the full line is written to the TTY at once. This is what you experience when you run the program interactively.
When writing to a pipe or a file, the default behavior is to buffer until the buffer is filled up. The buffer might be 8 KB in size. This behavior is based on the assumption that the output isn't going to a human and so the program should behave in the most efficient way possible.
Your description sounds like the program is block-buffering its output, and it's not producing enough output to flush the buffer before the program finishes. You want the program to line-buffer its output instead (or unbuffer it completely). There are a few ways to do that:
Launch the program through stdbuf or unbuffer, if either is available on the system. These can be used to alter the buffering behavior of the process they start (for example, stdbuf -oL command forces line-buffered standard output).