AWS Session Manager can't connect unless the SSH port is open


I'm trying to use AWS Systems Manager Session Manager to connect to my EC2 instances.

These are private EC2 instances, without public IP, sitting on a private subnet in a VPC with Internet access through a NAT Gateway.

Network ACLs are fully open (both inbound and outbound), but there's no Security Group that allows SSH access to the instances.

I went through all the Session Manager prerequisites (SSM Agent installed, Amazon Linux 2 AMI); however, when I try to connect to an instance through the AWS Console, I get a red warning sign saying: "We weren't able to connect to your instance. Common reasons for this include..."

Then, if I add a Security Group to the instance that allows SSH access (inbound port 22), wait a few seconds, and repeat the same connection procedure, the red warning doesn't come up and I can connect to the instance.

Even though I know these instances are safe (they have no public IP and sit in a private subnet), opening the SSH port on them is not a requirement I would expect from Session Manager. In fact, the official documentation lists one of its benefits as: "No open inbound ports and no need to manage bastion hosts or SSH keys".

I searched for related posts but couldn't find anything specific. Any ideas what I might be missing?

Thanks!

8 Answers

Marcin

Please make sure you are using the Session Manager console, not the EC2 console, to establish the session.

From my own experience, I know that the EC2 console's "Connect" option sometimes does not work at first.

However, if you go to the AWS Systems Manager console and then to Session Manager, you will be able to start a session to your instance. This assumes that your SSM Agent, IAM role, and internet connectivity are configured correctly. If so, you should see the SSM managed instances for which you can start a session.

Also, the Security Group should allow outbound connections. Inbound SSH is not needed if you set everything up correctly.
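For reference, you can do the same check and connect from the AWS CLI. A minimal sketch, assuming the instance ID is a placeholder and that the Session Manager plugin is installed on your local machine:

# List instances registered with Systems Manager; the instance
# must show up here (PingStatus "Online") before a session can start.
aws ssm describe-instance-information \
    --query "InstanceInformationList[].[InstanceId,PingStatus]"

# Start a session against a registered instance (placeholder ID).
aws ssm start-session --target i-0123456789abcdef0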

Nicolás García

Thanks for your response. I tried connecting using the Session Manager console instead of the EC2 console and it didn't work. Actually, I get the red warning only the first time I try to connect without the SSH port open. Then I assign a security group with inbound access to port 22 and can connect. Now, when I remove that security group and try connecting again, I don't get the red warning in the console but a blank screen: nothing happens and I can't get in.

That said, I found that my EC2 instances didn't have any outbound ports open in their security groups. I opened the entire TCP port range for outbound, without opening SSH inbound, and could connect. Then I restricted the outbound port range a little: I tried opening only the ephemeral range (reserved ports blocked) and the problem came up again.

My conclusion is that the entire TCP port range has to be open for outbound. This is better than opening SSH port 22 for inbound, but there's something I still don't fully understand. It is reasonable that outbound ports are needed in order to establish the connection and communicate with the instance, but why the reserved ports? Does the SSH server side use a reserved port for the backwards connection?
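A sketch of that outbound rule as an AWS CLI call (the security group ID is a placeholder):

# Open the full TCP port range outbound, as described above.
aws ec2 authorize-security-group-egress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 0-65535 \
    --cidr 0.0.0.0/0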

Bhargav

I was stuck with a similar issue. My Security Groups and NACLs had inbound and outbound rules open only to the precise ports and IPs needed, plus the ephemeral port range of 1024-65535 for all internal IPs.

Finally, what worked was opening port 443 outbound to all internet IPs. Even restricting 443 outbound to internal IP ranges did not work.
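As a sketch, that egress rule with the AWS CLI (placeholder security group ID):

# Allow HTTPS (443) outbound to any IPv4 address.
aws ec2 authorize-security-group-egress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 443 \
    --cidr 0.0.0.0/0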

A Kingscote

Despite what all the documentation says, you need to enable HTTPS inbound and it'll work.

eatsfood

The easiest way to do this is to create the three VPC interface endpoints that SSM requires in your VPC and associated subnets (service names: com.amazonaws.[REGION].ssm, com.amazonaws.[REGION].ssmmessages, and com.amazonaws.[REGION].ec2messages).

Then, you can add an ingress and an egress rule for only port 443 that allows communication within the VPC.

This is more secure than opening up large swathes of the Internet to your private instances, and faster, since the traffic stays on AWS' own network and does not have to traverse NATs or gateways.
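A sketch of creating the three endpoints with the AWS CLI; the VPC, subnet, and security group IDs are placeholders, and us-east-1 stands in for your region:

# Create the three interface endpoints SSM requires.
# --private-dns-enabled lets the agent keep using the default
# service hostnames inside the VPC.
for svc in ssm ssmmessages ec2messages; do
  aws ec2 create-vpc-endpoint \
      --vpc-id vpc-0123456789abcdef0 \
      --vpc-endpoint-type Interface \
      --service-name "com.amazonaws.us-east-1.$svc" \
      --subnet-ids subnet-0123456789abcdef0 \
      --security-group-ids sg-0123456789abcdef0 \
      --private-dns-enabled
done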


j7skov

Another item that tripped me up: make sure the security group for your VPC endpoints is open to all inbound connections on port 443, and allows all outbound.

I originally had mine tied to the security group of the EC2 instances I was connecting to (e.g. SG1), and when I created another security group (e.g. SG2), I could not connect. The reason was the above: I had set up my VPC endpoints' security group to reference SG1 instead of allowing all inbound connections on 443.
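A sketch of the corrected ingress rule, opened to the whole VPC instead of referencing a single instance security group (the group ID and VPC CIDR are placeholders):

# Allow inbound 443 to the endpoint network interfaces
# from anywhere inside the VPC.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0fedcba9876543210 \
    --protocol tcp \
    --port 443 \
    --cidr 10.0.0.0/16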

Alexander

I had a similar issue, and what helped me was restarting the SSM Agent on the server. I logged in with SSH and ran:

sudo systemctl restart amazon-ssm-agent

The Session Manager console immediately displayed the EC2 instance as available.
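If a restart alone doesn't help, it can be worth checking the agent's status and recent log output first (the log path below is where the agent writes on Amazon Linux 2):

# Check whether the agent is running.
sudo systemctl status amazon-ssm-agent

# Look for registration or connectivity errors.
sudo tail -n 50 /var/log/amazon/ssm/amazon-ssm-agent.log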

Nor.Z

I wanted to connect with SSM to an EC2 instance inside a private subnet, with a NAT instance setup. What worked for me:

  • The security group's outbound rule is set to all IPv4 traffic.

  • Allow an inbound rule for HTTPS (as other answers pointed out).

  • It seems you can delete the inbound HTTPS rule after you have
    verified that you can connect with SSM.

  • If you run into a situation where, at first, you have to allow
    All Traffic inbound and it just won't connect otherwise (HTTPS
    alone is not enough): what I did was restart/recreate the EC2
    instance and the NAT instance, then wait maybe 10 minutes to an
    hour for SSM to kick in.
