December 16, 2020
The AWS cloud offers a lot of advantages for hosting cybersecurity training environments (scalability, pay-as-you-go pricing, etc.), but there are also some key differences from traditional on-premises infrastructure. One of the most relevant differences for defenders to be aware of is the way AWS handles some of the lower-level networking in its Elastic Compute Cloud (EC2) service. In this post, we’ll examine some of these networking differences with a focus on security monitoring and network traffic analysis with tools like Zeek, Suricata, and Snort.
First, imagine the typical network monitoring setup you might see in an on-premises datacenter: several servers and workstations physically connected through a switch. The switch sees all network traffic passing from one server to another and acts as a central point for all traffic on that particular network segment. This makes it a logical place to inspect network traffic with the traffic analysis tool of choice.
When we compare this to what’s happening in our AWS VPCs, the break from traditional infrastructures becomes pretty obvious.
In an AWS VPC, there is no concept of a “switch”, at least not one that we as the customer have access to. There are probably plenty of good technical reasons for this that help power the scalability and cloud magic AWS provides, but for us it boils down to an inconvenient fact: there’s no single interface we can tap to see all the traffic on the network. In the above diagram, we have various EC2 instances deployed in a VPC. The network traffic between instances is handled by AWS infrastructure that is outside of our control as the customer. We can define how traffic is routed from instances to systems outside the VPC in the route table, but we don’t have the ability to inspect the traffic at that level.
AWS recognized that this presents some tangible pain points for their customers, so in mid-2019 they announced a new service to help address this concern: VPC Traffic Mirroring.
VPC Traffic Mirroring allows us to get more direct access to the network traffic on certain instance types (any powered by the Nitro system) by “mirroring” the traffic from specified network interfaces to a target destination. At a high level, a simple traffic mirroring setup for network monitoring and traffic analysis would look like the following.
Here we have several instances (traffic sources), each mirroring traffic from a single Elastic Network Interface (ENI) to a single traffic mirror target: an ENI on an instance running Zeek. With this setup, we can achieve more or less the same thing we would in a traditional on-premises environment. This works just fine in plenty of non-production environments and smaller lab scenarios. For production use cases or more realistic lab networks, we’ll want something slightly more complex that gives us more flexibility in the instances we can monitor.
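As a rough illustration, here’s what wiring up that simple setup might look like with boto3. The region, ENI IDs, and catch-all filter are placeholder assumptions; in practice you’d want tags, error handling, and a tighter mirror filter.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Placeholder ENI IDs -- swap in your own
ZEEK_ENI = "eni-0aaaaaaaaaaaaaaaa"  # ENI on the Zeek monitoring instance
SOURCE_ENIS = ["eni-0bbbbbbbbbbbbbbbb", "eni-0cccccccccccccccc"]  # ENIs to mirror

# 1. The mirror target: where mirrored traffic is delivered
target_id = ec2.create_traffic_mirror_target(
    NetworkInterfaceId=ZEEK_ENI,
    Description="Zeek sensor ENI",
)["TrafficMirrorTarget"]["TrafficMirrorTargetId"]

# 2. A filter describing which traffic to mirror (here: everything, both directions)
filter_id = ec2.create_traffic_mirror_filter(
    Description="Mirror all lab traffic",
)["TrafficMirrorFilter"]["TrafficMirrorFilterId"]

for direction in ("ingress", "egress"):
    ec2.create_traffic_mirror_filter_rule(
        TrafficMirrorFilterId=filter_id,
        TrafficDirection=direction,
        RuleNumber=100,
        RuleAction="accept",
        SourceCidrBlock="0.0.0.0/0",
        DestinationCidrBlock="0.0.0.0/0",
    )

# 3. One mirror session per source ENI
#    (remember the 10-sessions-per-ENI-target limit discussed below)
for i, source_eni in enumerate(SOURCE_ENIS, start=1):
    ec2.create_traffic_mirror_session(
        NetworkInterfaceId=source_eni,   # the source being mirrored
        TrafficMirrorTargetId=target_id,
        TrafficMirrorFilterId=filter_id,
        SessionNumber=i,
    )
```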
For most of our environments at Snap Labs, there are quite a few systems we might be interested in mirroring traffic from to perform network analysis. For example, Shirts Corp has over 20 instances that currently send logging and telemetry to the lab’s SIEM (Splunk). Unfortunately, there is a limit of 10 traffic mirror sources per target network interface for non-dedicated hosts, so sending this traffic to a single ENI on a Zeek instance won’t work in our labs.
To analyze traffic from our entire lab, we must use an intermediary Network Load Balancer as the traffic mirror target, then forward the traffic from there on to our traffic analysis tool. The NLBs we can deploy in AWS are highly available and can handle millions of requests per second. They also aren’t limited to 10 traffic mirror sources, so they make excellent VPC traffic mirror targets!
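A sketch of that plumbing with boto3 might look like the following. The VPC, subnet, and instance IDs are placeholders; the important detail is that mirrored traffic arrives VXLAN-encapsulated on UDP port 4789, so the NLB needs a UDP listener and target group on that port (with a TCP health check, since UDP health checks aren’t supported).

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")    # assumed region
elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Placeholder IDs -- swap in your own
VPC_ID = "vpc-0aaaaaaaaaaaaaaaa"
SUBNET_ID = "subnet-0bbbbbbbbbbbbbbbb"
ZEEK_INSTANCE_ID = "i-0cccccccccccccccc"  # instance running the analysis tooling

# 1. An internal NLB to sit in front of the analysis instance(s)
nlb_arn = elbv2.create_load_balancer(
    Name="traffic-mirror-nlb",
    Type="network",
    Scheme="internal",
    Subnets=[SUBNET_ID],
)["LoadBalancers"][0]["LoadBalancerArn"]

# 2. Mirrored traffic is VXLAN-encapsulated on UDP/4789
tg_arn = elbv2.create_target_group(
    Name="mirror-targets",
    Protocol="UDP",
    Port=4789,
    VpcId=VPC_ID,
    TargetType="instance",
    HealthCheckProtocol="TCP",  # UDP health checks aren't supported
    HealthCheckPort="22",
)["TargetGroups"][0]["TargetGroupArn"]

elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": ZEEK_INSTANCE_ID}],  # add more analysis instances here as needed
)

elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="UDP",
    Port=4789,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)

# 3. Use the NLB (instead of a single ENI) as the traffic mirror target
ec2.create_traffic_mirror_target(
    NetworkLoadBalancerArn=nlb_arn,
    Description="NLB mirror target for the lab",
)
```

Mirror sessions are then created the same way as before, just pointed at this NLB-backed target instead of the Zeek instance’s ENI.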
In this setup, all of our mirrored traffic is sent to the Network Load Balancer, which forwards it on to the network analysis tooling of our choice. There are a number of advantages to this type of setup. First, we have a lot of flexibility with our network traffic analysis instance: it would be relatively straightforward to swap in a different tool stack if our requirements changed, and we could even run separate tools simultaneously, since the NLB allows us to forward our mirrored traffic to multiple destinations.
Using a Network Load Balancer as our traffic mirror target is a great high-performance, high-availability option for analyzing network traffic. We can mirror traffic from a large number of sources while still running our network monitoring software on just a single instance. Sometimes, though, we don’t need an enterprise-grade solution for our lab environment. If a scenario only requires monitoring traffic on a handful of systems, it can be more cost effective to run monitoring software on each system and forward the logs to a central logging server running our favorite SIEM. This is exactly what we did during a recent custom lab build.
For this lab, we only needed to monitor traffic on two hosts and send the logs to a central location, where blue teamers can analyze them and identify malicious traffic like SSH brute-forcing or directory enumeration against web services. We ended up running Snort on each host to monitor its network interface, and we leveraged Filebeat to ship any generated alerts to an Elastic Stack (Elasticsearch, Logstash, Kibana) server.
This solution gave us visibility over the network traffic on our hosts comparable to the more complex traffic mirroring setup, while keeping the lab infrastructure more manageable and shedding the costs associated with traffic mirror sessions and NLBs. With only two hosts to monitor, we also weren’t concerned about the overhead of maintaining consistent Snort configs across the environment.
In this post we described, at a high level, a number of setups in AWS that can enable network traffic analysis. Stay tuned for a follow-up post where we’ll break down these scenarios in more detail and walk through their setup step by step!