Forgive me if this question is off-topic for this community; I couldn't find a more appropriate one and StackOverflow has always come through when I need it!
I am trying to set up a bastion host with Amazon EC2. I want the only way to connect to any of my instances to be an SSH from this bastion instance. The public subnet containing the bastion uses CIDR block 10.0.128.0/17, and the subnet containing my other instances uses CIDR block 10.0.0.0/17. I have network ACL and security group rules permitting SSH egress from the bastion to the other subnet, and permitting SSH ingress to the other subnet from the bastion. Everything should work. Unfortunately, my bastion is trying to communicate with the other instances using their public IPs, which of course are not in the 10.0.0.0/17 block, so the traffic is being blocked. How can I ensure that my bastion uses private IP addresses when communicating with other instances in the private subnet? This seems like it should be the default behavior for local traffic in a VPC, but apparently it's not!
EDIT:
I left out some key info. The "private" instance giving me trouble is actually public; it is a WordPress web server with public IP 52.14.20.167 (please don't spam it lol) and a custom `www` DNS name. However, while I want my bastion to be able to SSH into it using that DNS name, I still want all SSH traffic to be local so that my security groups and network ACLs can be very restrictive. According to this AWS doc:
We resolve a public DNS hostname to the public IPv4 address of the instance outside the network of the instance, and to the private IPv4 address of the instance from within the network of the instance.
However, I think this rule only applies to the AWS-provided (IP-like) public DNS names. My custom DNS name always resolves to the public IP, not the private one, as seen in the flow log from my bastion's subnet below. 10.0.128.6 is the bastion and 52.14.20.167 is the web server. I don't know what the 190.* and 14.* addresses are. So my more educated question is: how can I have custom DNS names resolve to private IPs for a bastion host and a second instance in the same VPC?
10.0.128.6 52.14.20.167 56008 22 6 7 420 1485465879 1485465996 REJECT OK
190.173.143.165 10.0.128.6 27754 22 6 1 40 1485466241 1485466296 REJECT OK
10.0.128.6 52.14.20.167 56012 22 6 7 420 1485466903 1485467016 REJECT OK
190.13.10.206 10.0.128.6 28583 22 6 1 40 1485467140 1485467197 REJECT OK
10.0.128.6 52.14.20.167 56014 22 6 7 420 1485467437 1485467557 REJECT OK
14.158.51.244 10.0.128.6 55532 22 6 1 44 1485467500 1485467557 REJECT OK
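For anyone debugging the same thing, here is a minimal sketch of a check you could run on the bastion to see whether a given name resolves to a private address. The two hostnames in the comments are placeholders: the AWS-assigned name is assumed to follow the usual `ec2-<dashed-ip>.<region>.compute.amazonaws.com` pattern (region suffix guessed here), and `www.example.com` stands in for the custom record.

```python
import ipaddress
import socket

def resolves_privately(hostname: str) -> bool:
    """Return True if `hostname` resolves to a private (RFC 1918) address."""
    ip = socket.gethostbyname(hostname)
    return ipaddress.ip_address(ip).is_private

# Run from the bastion (hostnames below are hypothetical examples):
#   resolves_privately("ec2-52-14-20-167.us-east-2.compute.amazonaws.com")
#     -> should be True inside the VPC, per the AWS doc quoted above
#   resolves_privately("www.example.com")
#     -> False in this setup, which is exactly the problem described
```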
After some question/answer back and forth with the OP, we determined that one of the root causes of the issue was the use of custom DNS names.
The user would access the bastion host, and from there would resolve the custom DNS name to a public IP address for the EC2 instance in question. This is why the traffic from the bastion host was not using the EC2 instances' private IP addresses. The key to using an EC2 instance's private IP address is to resolve the AWS-assigned public DNS hostname of that instance from within AWS. As the OP noted from the AWS docs, that AWS-assigned public DNS hostname will resolve to a public IP address from outside of AWS, but will resolve to a private IP address from inside of AWS. Thus the key was to get the user to use that AWS-assigned public DNS hostname.
One way to keep the custom DNS hostname, and still resolve to the private IP address of the EC2 instance from the bastion host, is to make the custom DNS name a `CNAME` record (rather than an `A` record) that points to the AWS-assigned public DNS name. Yes, this requires updating that DNS record whenever a new/different EC2 instance appears, but such an update would be required anyway for an `A` record to point to the new public IP address. By using a `CNAME`, things should work as desired.
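For illustration, a hypothetical BIND-style record for such a setup (the zone name is a placeholder, and the target is assumed to follow the `ec2-<dashed-ip>.<region>.compute.amazonaws.com` pattern for the instance's public IP; the actual region suffix would come from the EC2 console):

```
; zone file for example.com (hypothetical)
www   300   IN   CNAME   ec2-52-14-20-167.us-east-2.compute.amazonaws.com.
```

Clients outside AWS chasing this `CNAME` get the public IP; the bastion, resolving from inside the VPC, gets the private IP, so the restrictive security groups and network ACLs can stay in place.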
Hope this helps!