I often want to tail log files on remote servers when running diagnostics for an application. You can either log in to the remote server via an SSH session and then run the tail command, or you can do it from a local shell without explicitly logging in to the remote server first. That’s pretty simple:
ssh myRemoteServer tail -f /path/to/logs/myapp.log
The problem here is that when you ctrl-c out of this command to kill the tail, the process keeps running on the remote machine. Some googling told me this is because the remote process has no controlling terminal.
From superuser.com:
This behaviour stems from the lack of a controlling terminal for the running process. When the remote process does not have a controlling terminal, the remote ssh process handling your session is unable to kill the command, which is left hanging in a zombie state to be eventually cleaned up by init.
So although the process on the remote server will eventually be cleaned up, it’s not great to leave a lot of zombie processes lying around. And you certainly don’t want to log on to every server and ps ax to kill them. Crazy.
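If you want to see the leftovers for yourself, a quick check on the remote host after you’ve ctrl-c’d the local command is something like the following (the log path is just the hypothetical one from the example above):

pgrep -af 'tail -f /path/to/logs/myapp.log'
# or, old school:
ps ax | grep 'tail -f'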
The answer, as described on superuser, is to simply add the -t flag when you connect via SSH from a local terminal. That forces pseudo-terminal allocation for the session, so the remote process gets a controlling terminal, and when you ctrl+c your tail locally the remote process is terminated along with it.
So for the initial example at the top:
ssh -t myRemoteServer tail -f /path/to/logs/myapp.log
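You can confirm the fix the same way: ctrl-c the command above, then check the remote host again; there should be nothing left behind (again, the log path is the hypothetical one from the examples):

ssh myRemoteServer pgrep -af 'tail -f'   # no output means no orphaned tail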
I use multitail a lot because it lets me tail log files on multiple remote servers from one command. A sample multitail script, now with -t, looks like this:
#!/bin/bash
# Tail the same application log on two remote servers in a single multitail window.
# Each -l tells multitail to run the given command and follow its output.
multitail -l "ssh -t myRemote1 tail -f /path/to/logs/myapp.log" \
          -l "ssh -t myRemote2 tail -f /path/to/logs/myapp.log"