February 15, 2021

Snort & Elastic Stack

Our customers often ask for detailed logging and monitoring capabilities in their lab environments, and we’ve implemented a number of unique scenarios to meet those requests. Two specific scenarios, outlined in this short series, make use of native AWS functionality along with popular open-source network traffic analysis tools to provide insight into network traffic in our lab environments. We’ve already covered how to implement VPC Traffic Mirroring and Zeek, and in this post we’ll walk through setting up the Snort IDS and Elastic Stack to identify potentially malicious network activity in a lab.

Lab Resources and Estimated Costs

The lab we’ll be creating in this post has several AWS resources which cost money to run. The following list covers the costs of the individual resources and the total estimated cost per hour to run this lab environment. The estimates are for the US-East-1 region.

  • 2 x t2.medium Linux instances - $0.0464/hr each

Total Estimated Lab Cost - $0.0928/hr

If following along and deploying resources, be sure to terminate the above resources when finished with the lab to avoid unexpected costs.
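
For example, both instances can be terminated with a single CLI call once the lab is complete (substituting the instance IDs assigned at launch):

aws ec2 terminate-instances --instance-ids <instance-id-1> <instance-id-2>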

Lab Design

[Lab design diagram: Local Traffic Inspection - Snort]

The last lab design we looked at for network monitoring forwarded traffic from several EC2 instances to a single interface on our Zeek host. Once the traffic reaches the Zeek instance interface, it can be analyzed for malicious indicators such as Command and Control (C2) traffic or network enumeration.

Our approach with Snort and Elastic differs slightly in that the heavy lifting of the traffic analysis occurs on the interface of each instance before the results are shipped off to Elastic. This approach is effective on a small scale, but as the number of systems grows, managing Snort configurations across them can become cumbersome without dedicated automation. For this post, our lab will consist of a single instance running Snort and a single instance running an Elastic Stack, both on Ubuntu Server 20.04.

To get our base infrastructure deployed, we can launch two Ubuntu instances into a VPC with a public subnet via the AWS Console or the following AWS CLI command:

aws ec2 run-instances \
 --image-id ami-03d315ad33b9d49c4 \
 --count 2 \
 --instance-type t2.medium \
 --key-name <your-keypair-name> \
 --security-group-ids <your-security-group> \
 --subnet-id <your-subnet-id>
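
After the instances launch, one quick way to list their IDs and public IP addresses (useful for SSH and for the Filebeat configuration later) is with describe-instances:

aws ec2 describe-instances \
 --filters "Name=instance-state-name,Values=running" \
 --query 'Reservations[].Instances[].[InstanceId,PublicIpAddress]' \
 --output table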

Once our instances are deployed and running, we can install Snort to start detecting malicious activity.

Installing and Configuring Snort

Installing Snort 3 on Ubuntu 20.04 is fairly straightforward, and there is some great existing documentation on getting it set up. We referenced this kifarunix.com article during our setup and recommend following along there before moving on to our Elastic setup.

Installation

  • Update & Upgrade
  • Install Dependencies
  • Install the Snort Data Acquisition library (DAQ) - this is also required
  • Install Snort 3
  • This step can take a while, so a little patience is necessary. There will also be a lot of warnings, none of which should be fatal to the install. A condensed sketch of these steps is shown after this list.
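
For reference, a condensed sketch of the build is below. The dependency list, versions, and install prefix follow common Snort 3 build guides and may differ slightly from the walkthrough, so treat this as a rough outline rather than a copy-paste recipe.

# Update the system
sudo apt update && sudo apt -y upgrade

# Install build dependencies (package names may vary slightly between guides)
sudo apt install -y build-essential cmake autoconf automake libtool pkg-config \
 libpcap-dev libpcre3-dev libdumbnet-dev libluajit-5.1-dev libssl-dev \
 zlib1g-dev libhwloc-dev liblzma-dev flex bison git

# Build and install the DAQ from source
git clone https://github.com/snort3/libdaq.git
cd libdaq && ./bootstrap && ./configure && make && sudo make install && cd ..

# Build and install Snort 3 from source (the slow step)
git clone https://github.com/snort3/snort3.git
cd snort3 && ./configure_cmake.sh --prefix=/usr/local
cd build && make && sudo make install
sudo ldconfig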

Once installation is complete, we’ll want to validate our install was successful by running Snort and checking our installation version.

snort -V

Configuration

After installation, we can configure Snort to monitor our desired interface and add some basic rules to fire alerts. Again, we’ll follow the configuration steps laid out on kifarunix.

  • Download Community Rules
  • Install the Snort OpenAppID
  • This allows for application layer detection, but is not required.
  • The link provided in the kifarunix.com walkthrough is out of date, so we’ll want to find the most recent download here: https://www.snort.org/downloads/#snort-3.0
  • Add any custom rules
  • We’ll make sure to create the custom ICMP rule for testing; an example rule is shown after this list
  • Configure logging and alerting
  • (Optional) Set up Snort as a service
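
For reference, the custom rule that produces the alerts shown below looks like the following. The rules file path and interface name here are placeholders; use whatever locations you chose while following the walkthrough, and note that the alert_fast output must be configured to write to a file (as in the walkthrough’s logging step) for the log checks below to work.

# Example local rule (e.g. in /usr/local/etc/rules/local.rules)
alert icmp any any -> $HOME_NET any (msg:"ICMP connection test"; sid:1000001; rev:1;)

# Run Snort against the monitored interface with fast alert output
sudo snort -c /usr/local/etc/snort/snort.lua -R /usr/local/etc/rules/local.rules \
 -i eth0 -A alert_fast -l /var/log/snort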

At this point, we have Snort 3 installed and configured with a set of community rules along with a rule to detect ICMP requests to the host. We can test our configuration by pinging the Snort host (making sure Snort is running first) and checking the contents of the alert_fast.txt log file we’ve created.

ping <snort-host-ip>
cat /var/log/snort/alert_fast.txt

If everything is functioning properly, our alert_fast.txt log file should look something like this.

02/11-14:14:37.704500 [**] [1:1000001:1] "ICMP connection test" [**] [Priority: 0] {ICMP} 172.16.2.24 -> 172.31.92.244
02/11-14:14:37.704528 [**] [1:1000001:1] "ICMP connection test" [**] [Priority: 0] {ICMP} 172.31.92.244 -> 172.16.2.24
02/11-14:14:38.708868 [**] [1:1000001:1] "ICMP connection test" [**] [Priority: 0] {ICMP} 172.16.2.24 -> 172.31.92.244
02/11-14:14:38.708894 [**] [1:1000001:1] "ICMP connection test" [**] [Priority: 0] {ICMP} 172.31.92.244 -> 172.16.2.24

Now that Snort has been installed and configured with a sample alert in place, we’re ready to move on to setting up Elastic Stack on our monitoring system.

Installing Elastic Stack

Our Elastic Stack system will ingest the alerts that Snort generates and allow us to create visualizations and security dashboards to easily identify potential malicious activity on the monitored host(s). The typical Elastic Stack has four components to power this functionality: Elasticsearch, Logstash, Kibana, and Filebeat. In our setup, we’re going to skip the Logstash configuration for simplicity, but for larger environments with high volumes of data it is a crucial piece of the data pipeline.

Again, there are many existing guides to setting up an Elastic Stack server. We’ve found this guide from Digital Ocean to be succinct and straightforward.

Since we won’t be following all of the steps in that guide, and will deviate from some of them, we’ll provide our own full walkthrough below.

Installing Elasticsearch

The Elasticsearch service is available through a package repository, but not included by default in Ubuntu 20.04. To install it, we can simply add the Elastic package repository, update, and install.

curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
sudo apt update
sudo apt install elasticsearch

Elasticsearch should automatically install and configure itself as a service. By default, it will listen on localhost on port 9200.

Once the installation finishes, we can start and enable the service, then confirm it’s running with a curl command.

sudo systemctl start elasticsearch
sudo systemctl enable elasticsearch
curl -X GET "localhost:9200"

If successful, our curl command should output the following:

{
 "name" : "Elasticsearch",
 "cluster_name" : "elasticsearch",
 "cluster_uuid" : "qqhFHPigQ9e2lk-a7AvLNQ",
 "version" : {
   "number" : "7.7.1",
   "build_flavor" : "default",
   "build_type" : "deb",
   "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
   "build_date" : "2020-03-26T06:34:37.794943Z",
   "build_snapshot" : false,
   "lucene_version" : "8.5.1",
   "minimum_wire_compatibility_version" : "6.8.0",
   "minimum_index_compatibility_version" : "6.0.0-beta1"
 },
 "tagline" : "You Know, for Search"
}

In order for us to accept logs directly from our Snort server via Filebeat, we need to enable Elasticsearch to listen on all interfaces and specify a few other settings. To accomplish this, we can modify the file at /etc/elasticsearch/elasticsearch.yml. In particular, we’ll need to modify the network.host, http.port, and discovery.seed_hosts values to match the following.

sudo nano /etc/elasticsearch/elasticsearch.yml

# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["localhost"]
#

We can save the file and restart the Elasticsearch service for the changes to take effect.

sudo systemctl restart elasticsearch
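
To confirm the new bind address took effect, the same version check from earlier should now also work remotely, for example from the Snort host:

curl -X GET "http://<elastic-server-ip>:9200"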

Now that Elasticsearch is running properly, let’s configure our visualization service, Kibana.

Installing Kibana

Kibana is the service that will power any dashboards or visualizations of data ingested by Elasticsearch. Since we’ve already added the Elastic package repository to this server, installation is quick and easy.

sudo apt install kibana
sudo systemctl enable kibana
sudo systemctl start kibana

By default, Kibana only listens on localhost port 5601 for connections. Since our lab is hosted in AWS, we’ll need to be able to access the Kibana dashboard remotely. To do this, we can edit two lines in the kibana.yml configuration file.

sudo nano /etc/kibana/kibana.yml

These should be the first two settings in the config file. Uncomment them and change the server.host value to 0.0.0.0 for all interfaces, or to a specific address.

server.port: 5601
server.host: "0.0.0.0"

After we’ve modified the Kibana configuration, we need to restart the service for the changes to take effect.

sudo systemctl restart kibana

Now our Kibana dashboard should accept external connections over port 5601. We can verify this by browsing to http://<kibana-server-ip>:5601/status.

[Screenshot: Kibana status page]

This configuration opens up the Kibana service externally with no authentication configured. Obviously, this isn’t an ideal setup outside of an isolated lab environment like the one we’re working from. To set up a more secure configuration requiring authentication, a simple solution is to use NGINX as a reverse proxy. The Digital Ocean guide referenced earlier has excellent details on this process.
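
As a rough sketch of that approach (adapted from common guides rather than copied from the one above), an NGINX server block like the following proxies requests to Kibana and requires credentials from an htpasswd file; the server name and file paths are placeholders.

# Create a credentials file first, e.g.:
#   echo "kibanaadmin:$(openssl passwd -apr1)" | sudo tee /etc/nginx/htpasswd.users

# /etc/nginx/sites-available/kibana
server {
    listen 80;
    server_name <your-server-name-or-ip>;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}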

Now that our Kibana dashboard is running, and Elasticsearch is ready to accept incoming logs, we’re ready to forward our Snort alerts to Elastic with Filebeat.

Ingesting Snort Alerts with Filebeat

Filebeat is a lightweight logging agent that runs on Linux systems and ships logs to a Logstash or Elasticsearch endpoint. In this lab setup, we’re going to send some basic system events along with alerts from Snort directly to Elasticsearch. Like Elasticsearch and Kibana, Filebeat is easily installable via a repository package.

Installing Filebeat

To install Filebeat through the installer package, we’ll need to add the Elastic repository to our Snort system then run the installer.

curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
sudo apt update
sudo apt install filebeat

Next, we’ll configure Filebeat to send our Snort alerts to Elastic by modifying the /etc/filebeat/filebeat.yml file. There are two sections we’ll need to modify: filebeat.inputs and output.elasticsearch. The first section should match the following:

sudo nano /etc/filebeat/filebeat.yml

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    - /var/log/snort/alert_fast.txt
    #- c:\programdata\elasticsearch\logs\*


# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["<elastic-server-ip>:9200"]

These configuration changes should ship our Snort alerts to the proper Elastic endpoint. We’ll also want to enable some other system logs to be sent. This is accomplished by enabling Filebeat’s “system” module.

sudo filebeat modules enable system
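
Before running the setup commands below, Filebeat’s built-in checks can validate the edited configuration file and the connection to the Elastic host:

sudo filebeat test config
sudo filebeat test output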

We’ll also want to configure the indexes and Kibana dashboards so our data loads nicely into our Kibana dashboard.

sudo filebeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["<elastic-server-ip>:9200"]'
sudo filebeat setup -E output.logstash.enabled=false -E output.elasticsearch.hosts=['<elastic-server-ip>:9200'] -E setup.kibana.host=<elastic-server-ip>:5601

Once all this is set, we’re finally ready to start the Filebeat service.

sudo systemctl start filebeat
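
To confirm data is flowing, one quick check is the Filebeat service status on the Snort host and, on the Elastic host, whether a filebeat-* index has appeared:

sudo systemctl status filebeat
curl -X GET "http://<elastic-server-ip>:9200/_cat/indices?v"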

Viewing Alerts in Kibana

At this point, we should have all the services running and configured to send Snort alerts and other system logs to Elastic, viewable in a Kibana dashboard!

To test out our lab setup, let’s ping the Snort server from another system in the lab (Elastic or our host would both suffice). When we do this, our custom Snort rule should create an alert entry in /var/log/snort/alert_fast.txt. Filebeat should pick up a change to this file and send the log data to Elasticsearch, where we can view it through the Kibana Discover page.
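
In Discover, selecting the filebeat-* index pattern and applying a simple KQL filter on the message field narrows the results to our test alerts:

message : "ICMP connection test"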

If our setup was successful, we should see something like the following at http://<elastic-server-ip>:5601/app/discover#/.

[Screenshot: Kibana Discover view showing the Snort ICMP alerts]

Conclusion

Network monitoring is a powerful tool in the defender’s arsenal, and our lab environments should provide a place to practice capturing and identifying malicious traffic at the network level.

In this post, we’ve quickly set up a basic network monitoring lab with Snort, Elasticsearch, Kibana, and Filebeat, and configured a test Snort alert for ICMP traffic. Our next step would be to configure more custom Snort alerts and test other kinds of malicious traffic against the Snort host, then build dashboards, visualizations, and security alerts for this traffic in Kibana.

Our previous post, VPC Traffic Mirroring and Zeek, walked through an alternative network traffic monitoring lab setup.

We hope you’ve enjoyed this series on setting up network monitoring in AWS lab environments! If you’re looking for more robust lab solutions with customized logging and monitoring capabilities, be sure to check out our Enterprise Lab Solutions.
