When I use telepresence to debug a remote Kubernetes cluster, it throws this error:
RuntimeError: SSH to the cluster failed to start
This is the detailed output:
~ ⌚ 17:26:43
$ telepresence
T: How Telepresence uses sudo: https://www.telepresence.io/reference/install#dependencies
T: Invoking sudo. Please enter your sudo password.
Password:
T: Starting proxy with method 'vpn-tcp', which has the following limitations: All processes are affected, only one telepresence can run
T: per machine, and you can't use other VPNs. You may need to add cloud hosts and headless services with --also-proxy. For a full list
T: of method limitations see https://telepresence.io/reference/methods.html
T: Volumes are rooted at $TELEPRESENCE_ROOT. See https://telepresence.io/howto/volumes.html for details.
T: Starting network proxy to cluster using new Deployment telepresence-1582277212-643104-29913
Looks like there's a bug in our code. Sorry about that!
Traceback (most recent call last):
File "/usr/local/bin/telepresence/telepresence/cli.py", line 135, in crash_reporting
yield
File "/usr/local/bin/telepresence/telepresence/main.py", line 68, in main
socks_port, ssh = do_connect(runner, remote_info)
File "/usr/local/bin/telepresence/telepresence/connect/connect.py", line 119, in do_connect
args.from_pod
File "/usr/local/bin/telepresence/telepresence/connect/connect.py", line 70, in connect
raise RuntimeError("SSH to the cluster failed to start. See logfile.")
RuntimeError: SSH to the cluster failed to start. See logfile.
Here are the last few lines of the logfile (see /Users/dolphin/telepresence.log for the complete logs):
50.2 37 | QoS Class: Burstable
50.2 37 | Node-Selectors: <none>
50.2 37 | Tolerations: node.kubernetes.io/not-ready:NoExecute for 360s
50.2 37 | node.kubernetes.io/unreachable:NoExecute for 360s
50.2 37 | Events:
50.2 37 | Type Reason Age From Message
50.2 37 | ---- ------ ---- ---- -------
50.2 37 | Normal Scheduled 38s default-scheduler Successfully assigned dabai-fat/telepresence-1582277212-643104-29913-7bb5765b6-7xflh to azshara-k8s01
50.2 37 | Normal Pulled 35s kubelet, azshara-k8s01 Container image "datawire/telepresence-k8s:0.104" already present on machine
50.2 37 | Normal Created 34s kubelet, azshara-k8s01 Created container telepresence-1582277212-643104-29913
50.2 37 | Normal Started 34s kubelet, azshara-k8s01 Started container telepresence-1582277212-643104-29913
50.2 TEL | [37] ran in 0.50 secs.
What should I do to fix this problem? My Kubernetes server version is 1.15.2 and my client version is 1.17.3.
Install socat on your remote Kubernetes cluster host. I am using CentOS, so I install it like this:
sudo yum install socat -y
If you are using Ubuntu/Debian, install it like this:
sudo apt-get install socat -y
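If the cluster has more than one node, socat needs to be available on every node that might run the proxy pod (in the log above it was scheduled onto azshara-k8s01). As a minimal sketch, assuming you can SSH to the nodes listed by kubectl, you can check for socat and then retry telepresence; the node name below is just the one taken from the log:

# List the cluster nodes, then check each one for socat
kubectl get nodes -o wide
ssh azshara-k8s01 'socat -V | head -n 1'   # prints the socat version if it is installed

# Once socat is present on the node(s), run telepresence again
telepresence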