Error: FATAL: # nodes found, but does not have the required attribute to establish the connection. Try setting another attribute to open the connection using --attribute.
The issue started when I attempted to run a command using knife. The intent is to have the command run on every server that is part of a given role, as you'll see.
So, when I run the command, I get the following:
knife ssh "role:servers" "touch /home/ubuntu/file.txt"
WARNING: Failed to connect to ip-172-31-8-x.us-west-2.compute.internal -- SocketError: getaddrinfo: nodename nor servname provided, or not known
WARNING: Failed to connect to ip-172-31-94-xb.us-west-2.compute.internal -- SocketError: getaddrinfo: nodename nor servname provided, or not known
WARNING: Failed to connect to ip-172-31-99-x.us-west-2.compute.internal -- SocketError: getaddrinfo: nodename nor servname provided, or not known
When I try setting an attribute:
knife ssh "role:web" "touch /home/ubuntu/file.txt" -x ubuntu -a hostname
FATAL: 6 nodes found, but does not have the required attribute to establish the
connection. Try setting another attribute to open the connection using --attribute.
I tried different attributes, but with no luck:
knife ssh "role:servers" "touch /home/ubuntu/file.txt" -a ec2.public_hostname
knife ssh "role:servers" "touch /home/ubuntu/file.txt" -a public_hostname
knife ssh "role:servers" "touch /home/ubuntu/testfile.txt" --attribute 54.68.122.109 -i /Users/useraccount/.ssh/mykey.pem -x ubuntu
FATAL: 2 nodes found, but does not have the required attribute to establish the connection. Try setting another attribute to open the connection using --attribute.
Clearly I'm missing something.
I should note that I am able to SSH into the servers, but not by using the ip-172-31-x-x.us-west-2.compute.internal hostnames that the chef-clients detect at provisioning; I have to use the AWS public IPs instead.
Would this affect my being able to run the command above properly?
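For reference, this is roughly what does and doesn't work from my workstation (addresses redacted as above; the key and user are the same ones used in the knife commands):

# Works: AWS public IP
ssh -i /Users/useraccount/.ssh/mykey.pem ubuntu@54.200.xx.xxx

# Fails: the private DNS name doesn't resolve outside the VPC
ssh -i /Users/useraccount/.ssh/mykey.pem ubuntu@ip-172-31-xx-xx.us-west-2.compute.internal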
The Chef server lists the node data as:
FQDN: ip-172-31-xx-xx.us-west-2.compute.internal
IP Address: 172.31.xx.xx
The matching AWS settings are:
Private DNS: ip-172-31-xx-xx.us-west-2.compute.internal
Private IPs: 172.31.xx.xx
However, the information AWS provides for SSHing into an instance is:
Public DNS: ec2-54-200-xx-xxx.us-west-2.compute.amazonaws.com
Public IP: 54.200.xx.xxx
It's the AWS public DNS data that lets me SSH into a server properly (outside of knife).
If it's possible that the issue is the way chef-client provisions node data into the Chef server, is there a way to correct it so that the Chef server reflects each node's AWS public IP data?
Unless your workstation is on an AWS VPN, or actually running inside your VPC, you'll want to use -a public_ip_address.
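For example, reusing the role, user, and key path from the question:

knife ssh "role:servers" "touch /home/ubuntu/file.txt" -x ubuntu -i /Users/useraccount/.ssh/mykey.pem -a public_ip_address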
If this is a VPC instance, it is possible that you don't have that attribute set due to some issues with ohai. In this case, make sure that you set the ec2 hint for ohai. The easiest way to do so is to use the --hint ec2 flag when you bootstrap the node.
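For example (the address is a placeholder; adjust the user and key to your setup):

knife bootstrap 54.200.xx.xxx --hint ec2 -x ubuntu -i /Users/useraccount/.ssh/mykey.pem --sudo

For a node that is already bootstrapped, you can instead drop an (empty) hint file into ohai's hints directory on the node and re-run chef-client, which should have the same effect:

sudo mkdir -p /etc/chef/ohai/hints
sudo touch /etc/chef/ohai/hints/ec2.json
sudo chef-client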
Finally, if this node doesn't have a public IP address, as many don't in a VPC, then you'll need to use a proxy node to reach it. The --ssh-gateway flag is used for this, and would need to point to a node in the public subnet of your VPC.
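For example, with a hypothetical bastion host in the public subnet (bastion.example.com is a placeholder), connecting to the nodes by their private ipaddress attribute:

knife ssh "role:servers" "touch /home/ubuntu/file.txt" -x ubuntu -a ipaddress --ssh-gateway ubuntu@bastion.example.com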