I'm trying to update my application to run under systemd. When I used Upstart, I just created an /etc/init.d/myService script:
#!/bin/bash
# chkconfig: 2345 90 10
# description: myDescription
### BEGIN INIT INFO
# Provides:          myService
# Required-Start:    sshd
# Required-Stop:     sshd
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: start myService
# Description:
### END INIT INFO

SCRIPT=$(readlink -f "$0")
SCRIPTNAME=$(basename "$SCRIPT")
lockfile="/var/lock/subsys/myService"

do_start() {
    # create the subsys lockfile so the rc tools consider the service running
    if [ -d "/var/lock/subsys" ]; then
        touch "$lockfile"
    fi
    ...
}

do_stop() {
    ...
    # remove the subsys lockfile again on stop
    if [ -d "/var/lock/subsys" ]; then
        if [ -f "$lockfile" ]; then
            rm -f "$lockfile"
        fi
    fi
}

do_status() {
    ...
}

case "$1" in
    start)
        do_start
        exit 0
        ;;
    stop)
        do_stop
        exit 0
        ;;
    status)
        do_status
        exit 0
        ;;
    restart)
        do_stop
        do_start
        exit 0
        ;;
    *)
        echo "Usage: $SCRIPTNAME {start|stop|status|restart}" >&2
        exit 3
        ;;
esac
And everything worked fine.
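For completeness, the script was registered and driven with the standard SysV tools (assuming a chkconfig-based distribution, which the chkconfig header above suggests):

chkconfig --add myService
service myService start
service myService status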
Note that this script spawns several subprocesses that keep running in the background. To use it with systemd, I wrote the following service file (myService.service):
[Unit]
Description=My Description
Requires=sshd.service
After=sshd.service
Before=shutdown.target reboot.target halt.target

[Service]
Type=oneshot
ExecStart=/etc/init.d/myService start
ExecStop=/etc/init.d/myService stop
RemainAfterExit=yes
KillMode=none

[Install]
WantedBy=multi-user.target
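For reference, I install and activate the unit the usual way; /etc/systemd/system is the standard location for locally added units:

cp myService.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable myService.service
systemctl start myService.service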
If I run
systemctl stop myService.service
everything works fine: my application is stopped successfully by the /etc/init.d/myService stop command.
But I have the following issue: when I reboot the system, by the time /etc/init.d/myService stop executes, the processes that the script should stop have already been killed. There are quite a few processes to control (around 7), and the system must not terminate them on its own.
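A useful check for this kind of shutdown-ordering problem is to look at which unit systemd attributes each process to, since anything outside the service's own cgroup can be torn down independently of it. For example (the grep pattern is illustrative; adjust it to your process names):

systemctl status myService.service          # shows the unit's CGroup tree
ps axo pid,unit,cmd | grep -i myservice     # 'unit' column shows the owning unit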
I've tried Type=forking with PIDFile pointing at the pidfile of the process with the longest lifetime (it starts first and stops last), but all my processes were terminated again.
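For the record, the Type=forking variant looked roughly like this; the PIDFile path here is a placeholder, not my real pidfile:

[Service]
Type=forking
ExecStart=/etc/init.d/myService start
ExecStop=/etc/init.d/myService stop
# placeholder path: point PIDFile at the pidfile of the longest-lived process
PIDFile=/var/run/myService/master.pid
KillMode=none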
Is there any simple way to avoid my subprocesses being killed?
I found the solution.
I run Hadoop and HBase. Some of their components are started over an SSH connection to localhost, and processes started that way cannot be controlled by systemd: they end up in the SSH session's cgroup rather than in the service's. This design makes sense for a distributed system, but in my case everything runs on a single machine. So in hadoop/bin/slaves.sh I replaced
for slave in `cat "$HOSTLIST"|sed "s/#.*$//;/^$/d"`; do
  ssh $HADOOP_SSH_OPTS $slave $"${@// /\\ }" \
    2>&1 | sed "s/^/$slave: /" &
  if [ "$HADOOP_SLAVE_SLEEP" != "" ]; then
    sleep $HADOOP_SLAVE_SLEEP
  fi
done
to
for slave in `cat "$HOSTLIST"|sed "s/#.*$//;/^$/d"`; do
  # run the command locally instead of over ssh,
  # so the processes stay in the service's cgroup
  eval "$@"
  if [ "$HADOOP_SLAVE_SLEEP" != "" ]; then
    sleep $HADOOP_SLAVE_SLEEP
  fi
done
The problem is resolved, and the processes now show up in the service's process tree.
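This can be verified with the usual tools (systemd-cgls -u needs a reasonably recent systemd):

systemctl status myService.service    # the CGroup: section now lists the daemons
systemd-cgls -u myService.service     # full cgroup subtree of the unit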
HBase probably allows the same fix, but for now it starts with distributed=false and doesn't start any processes over SSH.
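For reference, standalone (non-distributed) mode corresponds to the standard hbase.cluster.distributed property in hbase-site.xml:

<property>
  <name>hbase.cluster.distributed</name>
  <value>false</value>
</property>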