
suricata for tilera

by 로샤스 2014. 2. 20.
Suricata for Tilera

Overview

This repository contains a port of Suricata to Tilera's multicore processors. The intent of this repository is to collect work in progress on the Suricata port to Tilera and make it available to the community. Ultimately, the modifications to Suricata that support Tilera are expected to be folded back into the Suricata source base maintained by OISF.

This port supports Suricata on the Tile64, TilePro and TileGX processors, although the main focus has been Suricata on the TileGX-36 processor. Support for the Tile64 and TilePro processors remains in the code, but that part has not been maintained and may no longer be functional.

Suricata runs on the TILEncore-GX, TILEmpower-GX and TILExtreme-GX platforms.

This effort supports Suricata on Tilera as an Intrusion Detection System (IDS). Follow-on work to support inline IPS operation will come later; Suricata's IPS operation relies on either Netfilter/iptables or IPFW.

Suricata on Tilera should support all of the accompanying tools commonly found alongside Suricata (e.g. Barnyard, Snorby, Sguil), although exhaustive testing of each is ongoing.

Prerequisites for Suricata on Tilera

These instructions describe the procedure for cross compiling Suricata for a Tilera target platform on an x86 host with Tilera's MDE development software installed.

Suricata for Tilera is provided in source code form.

For TileGX based systems you'll need:

  • Tilera MDE 4.0.1 or later from Tilera
  • A suitable Tilera based TileGX platform from Tilera.

For Tile64 and TilePro based systems you'll need:

  • Tilera MDE 3.0 or later from Tilera
  • A suitable Tilera based platform from Tilera or one of their hardware partners.
  • Note: as mentioned above, Suricata for Tile64 and TilePro isn't currently being maintained.

Acquiring Suricata for Tilera

This repository contains the source code for Suricata. Suricata requires libyaml and libmagic, which at the time this effort began weren't delivered with the Tilera Multicore Development Environment (MDE). Therefore copies of these libraries have been made available on GitHub along with instructions on building them. These copies of libyaml and libmagic have been tested with Suricata on Tilera and are known to work. You may, however, use alternate versions of these libraries if you desire.

For assistance with acquiring Tilera hardware and development tools, visit http://www.tilera.com/about_tilera/contact/contact_form.

The following discussion assumes you are placing the suricata software in a work/TileGX/github subdirectory of your home directory. Therefore do the following:

cd
mkdir -p work/TileGX/github
cd work/TileGX/github

Retrieve the source code for Suricata, libyaml and libmagic as follows:

git clone git@github.com:decanio/suricata.git
git clone git@github.com:decanio/yaml.git
git clone git@github.com:decanio/libmagic.git

Building Suricata for Tilera

Both the libyaml and libmagic libraries must be built prior to building Suricata for Tilera.

Build libyaml as follows:

cd yaml
./configure --host=tile
make

Build libmagic as follows:

cd ../libmagic
./configure --host=tile
make
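
Before moving on, it can be worth confirming that configure picked up the Tilera cross toolchain rather than the host compiler. A minimal sketch, assuming the MDE tools are on your PATH and the usual automake/libtool output locations (both are assumptions about your tree):

# Check that the cross-built archives were produced
ls yaml/src/.libs/libyaml.a libmagic/src/.libs/libmagic.a

# Inspect the archive to verify it targets TILE-Gx rather than x86
# (tile-objdump is assumed to be the MDE's objdump)
tile-objdump -f yaml/src/.libs/libyaml.a | head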

Building Suricata for TileGX

The file doc/INSTALL.TILERA contains instructions on building Suricata for both the TilePro and TileGX architectures.

A makefile, tile/Makefile.tilegx, has been provided that automates building Suricata via the build target "build_static". This performs most of the operations required to build Suricata for Tilera.

It is probably necessary to modify several make variables to reflect your host environment. The file suricata/tile/Makefile.tilegx contains the following section, which may need to be modified in whole or in part.

#
# Local environment configuration
# Go ahead and modify these:
BASE_DIR=/home/tdecanio
RULES_DIR=$(BASE_DIR)/work/TileGX/emergingthreats
INSTALL_DIR=$(BASE_DIR)/work/TileGX/suricata-install-dir
RULES_CONFIG=/opt/suricata/etc/suricata-etpro.yaml
YAML_DIR=$(BASE_DIR)/work/TileGX/github/yaml
LIBMAGIC_DIR=$(BASE_DIR)/work/TileGX/github/libmagic
PCAPFILE=$(BASE_DIR)/work/suricata.pcap
LOG_DIR=$(BASE_DIR)/work/TileGX/logs

You will most likely need to modify the value of the BASE_DIR variable to point to your own home directory. If you installed the Suricata source code somewhere other than $(BASE_DIR)/work/TileGX, you will need to modify several other variables to point to the proper directories.
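
For example, if your username is alice and you used the directory layout from the checkout step above, the modified lines might look like this (alice is, of course, a placeholder):

BASE_DIR=/home/alice
RULES_DIR=$(BASE_DIR)/work/TileGX/emergingthreats
INSTALL_DIR=$(BASE_DIR)/work/TileGX/suricata-install-dir
YAML_DIR=$(BASE_DIR)/work/TileGX/github/yaml
LIBMAGIC_DIR=$(BASE_DIR)/work/TileGX/github/libmagic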

After modifying tile/Makefile.tilegx as described, Suricata for TileGX can be built as follows:

cd suricata
./autogen.sh
make -f tile/Makefile.tilegx build_static
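
If the build succeeds, it can be worth checking that the resulting binary actually targets TILE-Gx rather than the build host. A quick sketch (src/suricata is the usual automake output location, an assumption about this tree):

# Should report a TILE-Gx ELF executable, not x86-64
file src/suricata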

If you need to rebuild Suricata, simply run:

cd suricata
make

Installing Suricata on a TILEmpower-GX or TILExtreme-GX System

This section describes the procedure for installing Suricata on the SSD storage contained within TILEmpower-GX and TILExtreme-GX systems.

If your TILEmpower-GX or TILExtreme-GX does not already have at least the minimum release of software installed (MDE 4.0.1), follow the instructions in section 1.9 of the "Gx MDE Getting Started Guide" to install the Linux runtime environment on your system.

Customized Hypervisor and Linux Kernel Image for Suricata

In order to run Suricata your target needs to run a somewhat modified Tilera hypervisor and Linux kernel configuration. The tile/Makefile.tilegx makefile contains a target that will replace the SPI ROM boot image with an image that contains the necessary modifications to support Suricata.

tile/Makefile.tilegx contains the following line, which needs to be modified with the IP address of your target system.

NET_ARGS=--net 192.168.0.11

After changing the IP address run the following:

cd suricata
make -f tile/Makefile.tilegx reimage_net

This will take a couple of minutes to write a new boot image to your target. When it completes, reboot your target.
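
Note that make also accepts variable overrides on the command line, so the target address can be supplied without editing the makefile at all; for example:

make -f tile/Makefile.tilegx reimage_net NET_ARGS="--net 192.168.0.11"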

Installing the Suricata Binaries

The build procedure above created a Makefile with an install target that populates an installation directory, $(INSTALL_DIR) above, whose contents can be copied to your TILEmpower-GX or TILExtreme-GX file system.

cd suricata
make install

This will populate your $(INSTALL_DIR) directory with the Suricata files to be copied to your TILEmpower-GX or TILExtreme-GX target. The simplest way to do that is to create a tar file of the $(INSTALL_DIR) contents, copy the tar file to your target system using scp, and then extract it in the / directory of your TILEmpower-GX or TILExtreme-GX system.
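
A minimal sketch of that copy, assuming the default INSTALL_DIR from the makefile above, the target address used earlier, and a root login on the target (all assumptions about your particular setup):

# On the build host: bundle the install tree
cd ~/work/TileGX/suricata-install-dir
tar czf /tmp/suricata-install.tar.gz .
scp /tmp/suricata-install.tar.gz root@192.168.0.11:/tmp/

# On the target: unpack into the root file system
ssh root@192.168.0.11 'cd / && tar xzf /tmp/suricata-install.tar.gz'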

Preconfigured Tilera based Suricata Systems

If you would like a system with Suricata for Tilera already installed, please contact decanio.tom@gmail.com for assistance.

Running Suricata on TILEmpower-GX or TILExtreme-GX System

Suricata runs on Tilera-based platforms much as it runs on typical Intel-based platforms.

Copy the Suricata binary built above, along with your Suricata configuration files, to your Tilera platform.

Execute suricata with a command such as the following:

suricata -c /opt/suricata/etc/suricata.yaml --mpipe

Tilera TileGX platforms use Tilera's mPIPE hardware to deliver packets to Suricata. The --mpipe flag selects the mpipe runmode, which monitors the Ethernet interface(s) listed in the suricata.yaml configuration file (see the YAML configuration section below).

The 10 gigabit interface names are xgbe1, xgbe2, xgbe3 and xgbe4. The 1 gigabit interface names are gbe1, gbe2, gbe3 and gbe4.
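
For example, to monitor two of the 10 gigabit ports, the inputs list in the mpipe section of suricata.yaml (documented in full below) would name the xgbe interfaces:

# Monitoring two 10 gigabit ports (mirrors the mpipe section below)
mpipe:
  inputs:
    - interface: xgbe1
    - interface: xgbe2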

Running Suricata on TILEncore-GX

A TILEncore-GX PCIe card does not contain any local storage capable of holding Linux, Suricata, its configuration and the log files it produces. Instead it relies on booting the TileGX processor on the PCIe card with software provided over the PCIe bus from the host system.

If you haven't already done so, follow the instructions in the "GX MDE Getting Started Guide" to install the software and drivers necessary to boot the Tilera PCIe card from your host system.

Once that is done, you should be able to run Suricata on your TILEncore-GX card by executing the following on your host system.

cd suricata
make -f tile/Makefile.tilegx run_pci_static

This will reboot your TILEncore-GX card, boot Linux on it and start Suricata. The logs produced by Suricata will be available to the host in $(LOG_DIR), as specified in tile/Makefile.tilegx.
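
Once Suricata is running you can watch the alert output from the host side; a sketch assuming the LOG_DIR configured earlier and Suricata's standard fast.log output file:

tail -f ~/work/TileGX/logs/fast.log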

YAML Configuration for Tilera

The Tilera implementation of Suricata adds two optional sections to the YAML configuration file. These control the Tilera-specific runmode configuration and allow the list of monitored interfaces to be specified within the configuration file.

The section below configures Suricata to monitor the two Tilera gigabit Ethernet interfaces gbe3 and gbe4.

# Tilera mpipe configuration, for use on Tilera tilegx
mpipe:

  # Load balancing mode "static" or "dynamic".
  load-balance: static

  # Enable packet capture to pcie
  capture:
      enabled: no

  # List of interfaces we will listen on.
  inputs:
    - interface: gbe3
    - interface: gbe4

Suricata can be configured to utilize the available tiles on different Tilera processors by specifying the number of parallel processing pipelines to spawn and the number of detect threads to run in each pipeline. The configuration must leave one non-dataplane tile free to host several overhead threads.

The configuration below is a typical configuration for the Tilera TileGX-36 processor. It spawns 5 parallel pipelines, each utilizing 4 detect threads. This uses 35 Tilera dataplane tiles and fully utilizes the device.

# Tilera runmode configuration, for use on Tilera tilepro and tilegx
tile:

  # Number of parallel processing pipelines
  pipelines: 5

  # Number of detect threads per pipeline
  detect-per-pipeline: 4

  # Inter-tile queueing method ("simple" or "tmc")
  queue: simple

  # Use tilegx mica for zeroing memory
  mica-memcpy: no

If this section is omitted from the YAML configuration file, the configuration above is used by default.
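
The sizing arithmetic appears to be pipelines x (detect-per-pipeline + support threads) <= dataplane tiles: with the example above, 5 x (4 + 3) = 35, leaving the single non-dataplane tile for the overhead threads. Treating that cost of three support threads per pipeline as an assumption, a smaller configuration that leaves tiles free for other work might look like the following sketch:

# A reduced sketch: 3 pipelines of 4 detect threads each,
# leaving dataplane tiles unused for other work
tile:
  pipelines: 3
  detect-per-pipeline: 4
  queue: simple
  mica-memcpy: no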
