
Logstash and IIS

Note: If you are also using Kibana as your front end, you will need to add a MIME type of “application/json” for the .json extension in IIS.

We are pushing all of our logs into Elasticsearch using Logstash. IIS was the most painful part of the process, so I am writing up a few gotchas for Logstash 1.3.3 and IIS in general.

The process is relatively straightforward on paper:

  1. Logstash monitors the IIS log and pushes new entries into the pipeline
  2. Use a grok filter to split out the fields in the IIS log line (more on this below)
  3. Push the result into Elasticsearch

Firstly, there is a bug in the Logstash file input on Windows (it doesn’t handle files with the same name in different directories) which results in partial entries being read. To remedy this you need to get IIS to generate a single log file per server (the default is one per website). Once that is done we can read the IIS logs with a simple file input config.
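Something along these lines should do it; the path is just an example, so point it at wherever your server writes its single log file (and note the forward slashes, which the file input expects even on Windows):

    input {
      file {
        # IIS W3C logs, one file per server (see above)
        type => "iis"
        path => "C:/inetpub/logs/LogFiles/W3SVC/u_ex*.log"
      }
    }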

Once we have IIS log lines pumping through the veins of Logstash, we need to break down each line into its component fields. To do this we use the Logstash grok filter. In IIS the default logging format is W3C, but you are able to select which fields get output. The following config works for the default fields plus [bytes sent] (sc-bytes) so we can see bandwidth usage. The Heroku Grok Debugger is a lifesaver for debugging the grok string (paste an entry from your log into it and then paste your grok pattern in).
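Here is a sketch of that filter, assuming the default IIS 7.5 field order with sc-bytes added; check the #Fields: comment at the top of your own log and adjust the pattern to match:

    filter {
      grok {
        # Field order: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username
        #              c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status sc-bytes time-taken
        match => ["message", "%{TIMESTAMP_ISO8601:log_timestamp} %{IPORHOST:s_ip} %{WORD:cs_method} %{URIPATH:cs_uri_stem} %{NOTSPACE:cs_uri_query} %{NUMBER:s_port} %{NOTSPACE:cs_username} %{IPORHOST:c_ip} %{NOTSPACE:cs_useragent} %{NUMBER:sc_status} %{NUMBER:sc_substatus} %{NUMBER:sc_win32_status} %{NUMBER:sc_bytes} %{NUMBER:time_taken}"]
      }
    }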

Below is the complete IIS configuration for Logstash. There are a few other filters we use to enrich the event sent to Logstash, as well as a conditional to remove IIS log comments.
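Roughly it looks like this; the grok pattern, the geoip and useragent enrichment, and the Elasticsearch host are illustrative, so tweak them for your own setup:

    input {
      file {
        type => "iis"
        path => "C:/inetpub/logs/LogFiles/W3SVC/u_ex*.log"
      }
    }

    filter {
      if [type] == "iis" {
        # IIS writes comment lines (starting with #) such as the #Fields: header
        if [message] =~ /^#/ {
          drop {}
        }

        grok {
          match => ["message", "%{TIMESTAMP_ISO8601:log_timestamp} %{IPORHOST:s_ip} %{WORD:cs_method} %{URIPATH:cs_uri_stem} %{NOTSPACE:cs_uri_query} %{NUMBER:s_port} %{NOTSPACE:cs_username} %{IPORHOST:c_ip} %{NOTSPACE:cs_useragent} %{NUMBER:sc_status} %{NUMBER:sc_substatus} %{NUMBER:sc_win32_status} %{NUMBER:sc_bytes} %{NUMBER:time_taken}"]
        }

        # IIS logs in UTC; use the log line's timestamp rather than the read time
        date {
          match => ["log_timestamp", "YYYY-MM-dd HH:mm:ss"]
          timezone => "UTC"
        }

        # Enrichment: geolocate the client IP and break down the user agent
        geoip {
          source => "c_ip"
        }
        useragent {
          source => "cs_useragent"
        }
      }
    }

    output {
      elasticsearch {
        host => "localhost"
      }
    }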


Centralising Logs with Logstash and Kibana

[Screenshot: Kibana dashboard]

We have recently centralised our logs (IIS, CRM, and our application of about 5 components) into Elasticsearch on Windows Server, using Logstash as the data transformation pipeline (over RabbitMQ) and Kibana as the UI. It allows us to see all our logs in one place (and, if needed, on a single timeline), and developers can access live logs in a way that lets them easily slice and dice the information without requiring server access. And the front end (pictured), Kibana, is damn sexy! It’s dead easy as well. All in all it took about a day to set up.

Architecture

Log Producers

All servers that produce file logs have Logstash installed as a service. Logstash monitors the log file and puts new entries onto a local RabbitMQ exchange. There are much lighter-weight shippers out there, but they write directly to Elasticsearch; we wanted something a little more fault tolerant.
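On those servers the shipping config is just a file input and a rabbitmq output. The exchange name and routing key below are made up for illustration, so use whatever your own RabbitMQ setup expects:

    input {
      file {
        type => "iis"
        path => "C:/inetpub/logs/LogFiles/W3SVC/u_ex*.log"
      }
    }

    output {
      rabbitmq {
        host => "localhost"
        exchange => "logs"          # example exchange name
        exchange_type => "topic"
        key => "iis"                # example routing key
        durable => true
        persistent => true
      }
    }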

Log producers which we control (i.e. our custom components) write directly to RabbitMQ. We use NLog and a modified version (I’ll post more about that later) of the NLog.RabbitMq Target to write our log messages directly (async) to the local RabbitMQ exchange.

Log Server

Our centralised log server runs Elasticsearch (the datastore) and Kibana (the UI). It also has another Logstash agent that reads the messages off RabbitMQ, transforms them into more interesting events (extracting fields for search, geolocating IP addresses, etc.), and then dumps them into Elasticsearch.
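That indexing agent is another small Logstash config: a rabbitmq input feeding the filters (grok, date, geoip and friends from the IIS post above) and an elasticsearch output. The queue and exchange names are again just examples:

    input {
      rabbitmq {
        host => "localhost"
        queue => "logstash"        # example queue name
        exchange => "logs"         # bind to the exchange the shippers publish to
        key => "#"                 # receive everything
        durable => true
      }
    }

    # grok / date / geoip / useragent filters go here (see the IIS section)

    output {
      elasticsearch {
        host => "localhost"
      }
    }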
