Networking in the Cloud Fundamentals: Connectivity Issues in EC2
Publisher | Corey Quinn
Media Type | audio
Categories Via RSS | Business News, News, Tech News
Publication Date | Jan 23, 2020
Episode Duration | 00:15:24

About Corey Quinn

Over the course of my career, I’ve worn many different hats in the tech world: systems administrator, systems engineer, director of technical operations, and director of DevOps, to name a few. Today, I’m a cloud economist at The Duckbill Group, the author of the weekly Last Week in AWS newsletter, and the host of two podcasts: Screaming in the Cloud and, you guessed it, AWS Morning Brief, which you’re about to listen to.

Transcript

Corey: Welcome to the AWS Morning Brief's miniseries, Networking in the Cloud, sponsored by ThousandEyes. ThousandEyes has released their cloud performance benchmark report for 2020. They effectively race the top five cloud providers. That's AWS, Google Cloud Platform, Microsoft Azure, IBM Cloud, and Alibaba Cloud, notably not including Oracle Cloud, because it is restricted to real clouds, not law firms. It winds up being an unbiased, third-party, metric-based perspective on cloud performance as it relates to end user experience. So this comes down to what real users see, not arbitrary benchmarks that can be gamed. It talks about architectural and connectivity differences between those five cloud providers and how that impacts performance. It talks about AWS Global Accelerator in exhausting detail. It talks about the Great Firewall of China and what effect that has on cloud performance in that region, and it talks about why regions like Asia and Latin America experience increased network latency on certain providers. To get your copy of this fascinating and detailed report, visit snark.cloud/realclouds, because again, Oracle's not invited. That's snark.cloud/realclouds, and my thanks to ThousandEyes for their continuing sponsorship of this ridiculous podcast segment.

Now, let's say you go ahead and spin up a pair of EC2 instances, and as would never happen until suddenly it does, you find that those two EC2 instances can't talk to one another. This episode of the AWS Morning Brief's Networking in the Cloud Podcast focuses on diagnosing connectivity issues in EC2. It is something that people don't have to care about until suddenly they really, really do. Let's start with our baseline premise, that we've spun up an EC2 instance, and a second EC2 instance can't talk to it. How do we go about troubleshooting our way through that process?

The first thing to check, above all else, and this goes back to my grumpy Unix systems administrator days, is: are both EC2 instances actually up?

Yes, the console says they're up. It is certainly billing you for both of those instances; I mean, this is the cloud we're talking about. It even says that the monitoring checks, there are two by default for each instance, are passing. That doesn't necessarily mean as much as you might hope. If you go into the EC2 console, you can validate through the system logs that they booted successfully. You can pull a screenshot out of them. If everything else were working, you could use AWS Systems Manager Session Manager, and if you'll forgive the ridiculous name, that's not a half bad way to go about getting access to an instance. It spins up a shell in your browser that you can use to poke around inside the instance, but that may or may not get you where you need to go. I'm assuming you're trying to connect to one or both of those instances and failing, so validate that you can get into both of those instances independently.
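
If you'd rather script that sanity check than click around the console, a minimal sketch with boto3 might look like the following. The region and instance IDs are placeholders, not anything from the episode. It pulls the instance state, the two default status checks, and the console output (which the API returns base64-encoded), so you can see whether the instances actually booted.

    import base64
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")             # assumed region
    instance_ids = ["i-0123456789abcdef0", "i-0fedcba9876543210"]  # placeholder IDs

    # Instance state plus the two default status checks (system and instance reachability).
    status = ec2.describe_instance_status(InstanceIds=instance_ids, IncludeAllInstances=True)
    for s in status["InstanceStatuses"]:
        print(s["InstanceId"], s["InstanceState"]["Name"],
              "system:", s["SystemStatus"]["Status"],
              "instance:", s["InstanceStatus"]["Status"])

    # The console output is the closest thing to "did it actually boot?"
    for iid in instance_ids:
        out = ec2.get_console_output(InstanceId=iid)
        text = base64.b64decode(out.get("Output", "") or "").decode("utf-8", "replace")
        print(f"--- {iid} ---")
        print(text[-2000:])  # last chunk of the boot log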

Something else to check: consider protocols. Very often, you may not have permitted SSH access to these things. Or maybe you can't ping them and you're assuming they're down. Well, an awful lot of networks block certain types of ICMP traffic, echo requests (type 8), for example. Otherwise, you may very well find that whatever protocol you're attempting to use isn't permitted all the way through. Note incidentally, just as an aside, that blocking all ICMP traffic is going to cause problems for your network. When packets are too large for part of the path and the sender needs to learn the smaller packet size to use for traffic crossing the internet, ICMP is how it gets told about that. You'll see increased latency if you block all ICMP traffic, and it's very difficult to diagnose, so please, for the love of God, don't do that.
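
As an illustration, not something from the episode, here's roughly how you might allow just the ICMP you care about in a security group with boto3 instead of blocking ICMP wholesale. The group ID and CIDR ranges are placeholders; for ICMP rules, FromPort carries the ICMP type and ToPort the code, with -1 meaning all codes.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region
    sg_id = "sg-0123456789abcdef0"                       # placeholder security group

    ec2.authorize_security_group_ingress(
        GroupId=sg_id,
        IpPermissions=[
            # Echo request (type 8), so ping works from inside the VPC.
            {"IpProtocol": "icmp", "FromPort": 8, "ToPort": -1,
             "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "ping from the VPC"}]},
            # Destination unreachable (type 3) carries "fragmentation needed";
            # dropping it is what quietly breaks path MTU discovery.
            {"IpProtocol": "icmp", "FromPort": 3, "ToPort": -1,
             "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "keep PMTUD working"}]},
        ],
    )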

Something else to consider as you go through the process of tearing apart what could possibly be going on with these EC2 instances not being able to speak to each other: try to connect to them via IP addresses rather than DNS names. I'm not saying the problem is always DNS, but it usually is DNS, and going by IP address removes a whole host of different problems that could be manifesting. Suddenly resolution failures, timeouts, bad DNS records, et cetera, fall by the wayside. When you have one system trying to talk to another and you're only using the IP, there's a whole class of problems you no longer have to think about.
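
A quick way to separate the two failure modes is to test name resolution and a raw TCP connection independently. This is a generic sketch using Python's standard library; the hostname, IP, and port are hypothetical.

    import socket

    host = "internal-service.example.com"   # hypothetical DNS name
    ip, port = "10.0.1.23", 22              # the instance's private IP and a port you expect open

    # Does the name resolve at all, and to what?
    try:
        addresses = sorted({a[4][0] for a in socket.getaddrinfo(host, port)})
        print("resolves to:", addresses)
    except socket.gaierror as e:
        print("DNS resolution failed:", e)

    # Skip DNS entirely: can we open a TCP connection to the raw IP?
    try:
        with socket.create_connection((ip, port), timeout=5):
            print(f"TCP connect to {ip}:{port} succeeded")
    except OSError as e:
        print(f"TCP connect to {ip}:{port} failed:", e)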

Something else to consider in the wonderful world of AWS is network ACLs. The best practice around network ACLs is, of course, don't use them. Have an ACL that permits all traffic, and then do everything else further down the stack. The reason is that no one thinks about network ACLs when diagnosing these problems. So if this is the issue, you're going to spend a lot of time spinning around and trying to figure out what it is that's going on.
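
If you do suspect a network ACL, it only takes a moment to dump the rules associated with the subnets involved. Here's a sketch with boto3, where the region and subnet ID are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")   # assumed region
    subnet_id = "subnet-0123456789abcdef0"                # placeholder subnet

    acls = ec2.describe_network_acls(
        Filters=[{"Name": "association.subnet-id", "Values": [subnet_id]}]
    )
    for acl in acls["NetworkAcls"]:
        # Entries are evaluated in rule-number order, lowest first.
        for entry in sorted(acl["Entries"], key=lambda e: e["RuleNumber"]):
            direction = "egress" if entry["Egress"] else "ingress"
            print(acl["NetworkAclId"], direction, entry["RuleNumber"],
                  entry["Protocol"], entry.get("CidrBlock"), entry["RuleAction"])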

The next, more likely place to look, and something to consider whenever you're trying to set up different ways of dividing traffic across various regimes of segmentation, is security groups. Security groups are fascinating, and the way that they interact with one another is not hugely well understood. Some people treat security groups like old school IP address restrictions: anything in the following network, which you can express in CIDR notation the way one would expect (or C-I-D-R, depending on how you enjoy pronouncing or mispronouncing things), is allowed in. Sure, that works, but you can also say that members of a particular security group are themselves allowed to speak to this other thing. That, in turn, is extraordinarily useful, but it also gets extremely complex, especially when you have multiple security groups layering upon one another.
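
To make the two styles concrete, here's an illustrative boto3 sketch (the group IDs and CIDR block are placeholders): one ingress rule keyed on a network range, and one keyed on membership in another security group.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region
    app_sg = "sg-0aaaaaaaaaaaaaaaa"   # placeholder: group on the instance being reached
    web_sg = "sg-0bbbbbbbbbbbbbbbb"   # placeholder: group on the instances doing the talking

    ec2.authorize_security_group_ingress(
        GroupId=app_sg,
        IpPermissions=[
            # Old-school CIDR style: anything in this network block may connect.
            {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
             "IpRanges": [{"CidrIp": "10.0.0.0/16"}]},
            # Group-reference style: anything carrying web_sg may connect,
            # regardless of what IP address it happens to have today.
            {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
             "UserIdGroupPairs": [{"GroupId": web_sg}]},
        ],
    )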

Assuming that you have multiple security group rules in place, the one that allows traffic takes precedence: security group rules are additive allow rules, so if any rule attached to the instance permits the traffic, it gets through. Note as well that there's a security group rule in place by default that allows all outbound traffic. If that's been removed, that could be a terrific reason why an instance is not able to speak to the larger internet.
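
As a quick, hypothetical check that the default allow-all egress rule is still present (the security group ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region
    sg_id = "sg-0123456789abcdef0"                       # placeholder

    sg = ec2.describe_security_groups(GroupIds=[sg_id])["SecurityGroups"][0]
    # The default egress rule is "all protocols (-1) to 0.0.0.0/0".
    allows_all_egress = any(
        perm.get("IpProtocol") == "-1"
        and any(r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", []))
        for perm in sg["IpPermissionsEgress"]
    )
    print("default allow-all egress still present:", allows_all_egress)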

One thing to consider when talking about the larger internet is what ThousandEyes does other than releasing cloud benchmark performance reports. That's right. They are a monitoring company that gives a global observer's perspective on the current state of the internet. If certain providers are having problems, they're well positioned to figure out who that provider is, where that provider is having the issue, and how that manifests, and then present that in real time to their customers. So if you have widely dispersed users and want to keep a bit ahead of what t...

Join me as I continue my series on cloud fundamentals with a look at connectivity issues in EC2, including how to troubleshoot said issues, why DNS is often the cause of connectivity issues and how to get around that, security groups and why they're fascinating, how IPv6 might be the cause of connectivity issues in certain instances, VPC considerations to keep in mind when troubleshooting issues, when you should try to replicate your issue by spinning up a new server and when you absolutely shouldn't, why connectivity issues should be resolved relatively quickly in the cloud era, what I believe the most transformative feature of the cloud is, and more.
