Saturday 5 July 2014

How do you log a problem like MySql?

Well, following on from my blog post on MongoDB and ELK, I thought I would share my exploration for another client: logging MySql. Still using ELK, we are solving two issues in the MySql relational world:

1. How not to have to use Splunk and its slightly 'higher than free' licensing mechanism.
2. How to log every single query that goes through your db server, while:
           a) Not affecting current throughput and performance.
           b) Generating the right information to find connection issues and slow queries, and so unclog your systems before they get clogged.

The backdrop to all this was a previous client, where, as the business grew, so did the throughput on the servers. One day (night, actually) something horrible happened, and things ground to a halt (well... nearly). As a result I was given the wonderful task of trawling the slow logs to 'find the reason'. This became a daily task, and it wasn't fun.
Soon we automated the process, getting the data into a separate DB that we could generate graphs from. This was great... we trimmed about 4.5 million queries off the slow logs (anything over 1s) per year. But it still didn't show anything that went through the system in under 1s. Enough of these can throw the system out, and you won't see the reason why.
We tried a lot of things to solve the issue while still satisfying 2(a) above... none worked. Whatever we did introduced extra points of breakage, or added I/O writes on the DB servers and other services while we read the output.
Then, one day, I discovered a little-used and little-known MySql tool called mysql-proxy, written by Jan Kneschke. It is basically a small app, scriptable in Lua, that sits between client and server, passing information between the two. There is a lot you can do with this, but what I needed was something that could pick up the information on the way to the server and, in particular, pick up the query/execute times on the way back. The proxy lets you write your own Lua scripts, using the hooks it provides, to pick up this information. It also needs a Lua APR glue library to get microtime granularity... so thanks to Peter Odding for his lua-apr library.
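To give a flavour of how that works, here is a stripped-down sketch of the two hooks involved (this is not the actual script from the repo; the timing fields and output handling are simplified for illustration):

local apr = require("apr")          -- lua-apr, for microsecond-resolution timestamps

local time_sent = 0

-- called when a query arrives from the client, before it is forwarded to the server
function read_query(packet)
    if string.byte(packet) == proxy.COM_QUERY then
        time_sent = apr.time_now()
        -- ask the proxy to call read_query_result() when the server replies
        proxy.queries:append(1, packet, { resultset_is_needed = true })
        return proxy.PROXY_SEND_QUERY
    end
end

-- called when the result comes back from the server
function read_query_result(inj)
    local doc = {
        Query        = string.sub(inj.query, 2),      -- strip the leading command byte
        queryTime    = inj.query_time / 1000000,      -- the proxy reports these in microseconds;
        responseTime = inj.response_time / 1000000,   -- normalised to seconds here for illustration
        timeSent     = time_sent,
        timeReceived = apr.time_now(),
    }
    -- serialise doc as a Json line and append it to the output file that Logstash tails
end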
We tried it, and it was good. I even got it writing the output to a RabbitMQ server, with workers picking the messages up from there. We used it on non-production servers to prove the point.

Now, I thought... I have a new client. I have recently used the ELK stack to log MongoDB. I can take the proxy code I wrote, change the output method slightly (it was already outputting Json documents), make a simple curl call to ElasticSearch, and presto! Put Kibana up against the results and we should have connection/query/execute-time information delayed by only 1-2s... all manner of good things.
(Update... great idea, lousy implementation. Curl works, but it slows the process down considerably. So writing to a log file, letting Logstash update ES, and using logrotate to keep the log file to a minimum every hour, is the way to go for now.)
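For reference, the logrotate side is just a stanza along these lines (the path matches the Logstash configuration further down; the rotation count is arbitrary, and hourly rotation means running logrotate itself from an hourly cron job on a reasonably recent logrotate):

/var/log/mysql/bitonproxy.out {
    # rotate every hour; logrotate must itself be run from an hourly cron job
    hourly
    rotate 4
    # truncate in place so the proxy keeps its open file handle
    copytruncate
    compress
    missingok
    notifempty
}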

We have been in production with these for 8 months now (updated since my original blog post).
Master Queries
Web Slaves
We have slaves split between two groups (web and stats). On the left we have the master details, on the right some of the slave details. The top line gives a histogram of the number of queries in a 10 minute interval. By recording this information, the savings made have been mind-boggling. We have saved 60m queries per day (yes... 60m). We have been able to identify connections in the code that did nothing (except take up sockets), and we have identified queries that were being replicated when the data was only alive for a matter of minutes (think sessions).

Lockout and Server splits
We have another histogram based on the total lockout time of the queries on each slave. High is bad, and needs further investigation.

Query Detail Table
Then we have a histogram of the connections per 10 minutes on the slaves. This is after we found the code that produced about 25k connections per slave per 10 minutes: the 'smoking gun' I was partially hired to discover. Job done! It was also taken at a slow period on a Sunday, so no surprise that not a lot was happening.
Finally, we have a table of queries from one of the groups. I have removed any identifying information (no freebies for hackers), but in reality we also record server/client IPs, server threads... and all kinds of interesting information.

If you want to try this yourself, the code, along with instructions, can be found on GitHub here. As with all good things in life, it's free.

Without duplicating the instructions in the GitHub repo, I shall explain the information you get and the basic requirements from an ELK perspective. It is worth mentioning, though, that in this scenario we installed a copy of mysql-proxy on each DB server, listening on port 3306, while the MySql servers themselves listen on port 4040. This way, any application goes through the proxy (and our logging system) with no changes at all... which was cool. For those using TCP replication by statement, that will also go through the proxy... so you will need to change that if your proxy deals with more than one server, otherwise your data will become slightly messy.
But if it does only go to one server... you will automatically be recording that too... Yippee.
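For that port layout, the proxy invocation looks roughly like this (the script path is a placeholder; check the GitHub repo and your mysql-proxy version for the exact flags and script name):

mysql-proxy \
    --proxy-address=0.0.0.0:3306 \
    --proxy-backend-addresses=127.0.0.1:4040 \
    --proxy-lua-script=/path/to/the/logging-script.lua \
    --daemon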


Output from Mysql Server

Server                Server ip:port connected
client_connections    Number of client connections when this call was made
User                  User name running the query
proxyName             Proxy ip:port
server_version        Server version number
Client                Client ip:port making the connection
Thread                Server thread that ran the query (useful for seeing queues of queries)
QueryLen              Length of the query
Query                 The actual query (see below)
QueryType             The MySql query type
timeSent              Time the query was received from the client (microtime)
queryTime             The actual execute time of the query
responseTime          Time taken to process the query
timeReceived          Time the information was received back from the server
lockoutTime           timeReceived - timeSent - responseTime, i.e. queue time, network lag, etc.
current               Number of queries the proxy is dealing with at present

All of the above is fed into a Json document and sent to ElasticSearch, along with a timestamp and a TTL. You will probably have many thousands of these per second/minute/hour, depending on the size of your system; the TTL will save storage.
All times are at microtime resolution, which is really useful. Having come across some dynamic SQL in excess of 100k in size, I decided to limit the length of the recorded query. It defaults to the first 1k plus the last 30 bytes, but this is configurable within the Lua code. If you are happy storing H U G E queries... feel free to play.

This data is written to an output file, and Logstash reads that and writes to the ES server.
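To make that concrete, a single document in the output file looks something like this (one Json document per line in the real file; all of these values are invented for illustration):

{
    "timestamp": "2014-07-05T10:15:30.123456Z",
    "Server": "10.0.0.21:4040",
    "client_connections": 118,
    "User": "webapp",
    "proxyName": "10.0.0.21:3306",
    "server_version": "5.5.37",
    "Client": "10.0.0.45:52114",
    "Thread": "431022",
    "QueryLen": 44,
    "Query": "SELECT id, name FROM customers WHERE id = 42",
    "QueryType": 3,
    "timeSent": 1404555330.123456,
    "queryTime": 0.000812,
    "responseTime": 0.001204,
    "timeReceived": 1404555330.125912,
    "lockoutTime": 0.001252,
    "current": 4
}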

 Logstash Configuration

The configuration file for Logstash looks like this:
input {
  # tail the Json lines written by the proxy
  file {
    codec => json
    discover_interval => 5
    path => ["/var/log/mysql/bitonproxy.out"]
    start_position => "beginning"
  }
}
filter {
  # use the proxy's own timestamp as @timestamp, and drop fields we don't need in ES
  date {
    match => ["timestamp", "ISO8601"]
    target => "@timestamp"
    remove_field => ["timestamp"]
    remove_field => ["host"]
    remove_field => ["path"]
    remove_field => ["@version"]
    remove_field => ["message"]
  }
}
output {
  # ship to ElasticSearch over http (stdout is handy while testing)
  elasticsearch {
    index => "mysql"
    index_type => "query_data"
    host => "ES-Server"
    protocol => "http"
    bind_port => 9200
    manage_template => true
    codec => json
  }
  stdout { codec => json }
}

Replace ES-Server with your server definition. Because logrotate occasionally causes interim havoc with the log file, the formatting sometimes gets messed up. This produces a document with only a 'message' element, rather than the main elements you are looking for. The filter above removes the majority of these for you. You will end up losing one or two statements this way, but essentially, who cares? You still have 99.995% of all queries going through your system.
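If you would rather drop those mangled events altogether, the json codec tags them on parse failure, so a conditional in the filter block does the trick (optional; the _jsonparsefailure tag is what recent Logstash versions use):

filter {
  if "_jsonparsefailure" in [tags] {
    drop { }
  }
}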

ElasticSearch Mappings

The mappings, included in the GitHub repo, are reproduced here as well. They are important, as they enable ES to index the relevant fields as they are written, allowing for faster response times; searching on unindexed fields doesn't work well. This has to be done before you write to the ES index, but it does not require the index to exist: it will be created if it isn't there. If you haven't got ES or Kibana installed, please see my previous blog post for instructions.

curl -XPOST http://[MY_ES_SERVER]:9200/mysql/ -d '{
    "settings" : {
        "number_of_shards" : 8,
        "number_of_replicas" : 1,
        "index.analysis.analyzer.default.type": "standard",
        "index.refresh_interval": "5s"
    },
    "mappings" : {
     "_default_":{
            "_ttl" : {"enabled":true, "default":"1d"},
        "properties" : {
                "@timestamp":{"type":"date"},
                "Server":{"type":"string"},
                "client_connections":{"type":"long"},
                "User":{"type":"string"},
                "proxyName":{"type":"string"},
                "server_version":{"type":"string"},
                "Client":{"type":"string"},
                "Thread":{"type":"string"},
                "QueryLen":{"type":"long"},
                "Query":{"type":"string"},
                "QueryType":{"type":"long"},
                "timeSent":{"type":"double"},
                "queryTime":{"type":"double"},
                "responseTime":{"type":"double"},
                "timeReceived":{"type":"double"},
                "lockoutTime":{"type":"double"},
                "current":{"type":"long"}
            }
         },
        "query_data" : {
            "_ttl" : {"enabled":true, "default":"1d"},
            "properties" : {
                "@timestamp":{"type":"date"},
                "Server":{"type":"string"},
                "client_connections":{"type":"long"},
                "User":{"type":"string"},
                "proxyName":{"type":"string"},
                "server_version":{"type":"string"},
                "Client":{"type":"string"},
                "Thread":{"type":"string"},
                "QueryLen":{"type":"long"},
                "Query":{"type":"string"},
                "QueryType":{"type":"long"},
                "timeSent":{"type":"double"},
                "queryTime":{"type":"double"},
                "responseTime":{"type":"double"},
                "timeReceived":{"type":"double"},
                "lockoutTime":{"type":"double"},
                "current":{"type":"long"}
            }
        }
    }
}'


You will need to replace the server in the curl call with the ip of your ES cluster, and maybe the index name (I have called it... imaginatively... mysql). That is all. 
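A quick way to check that the mapping took (same substitution for the server name):

curl -XGET 'http://[MY_ES_SERVER]:9200/mysql/_mapping?pretty'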

The other major benefits of this are:

1. You will no longer need to record slow queries on the server side; you can monitor queries of any duration using Kibana's filters.
2. You can search the Sql/connection data by any other criterion, such as table/column names, using a search engine, which is designed for exactly this purpose. So finding heavily used tables/columns becomes a lot easier (see the example search after this list).
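As an illustration of the second point, a search like the following pulls back the ten slowest queries that mention a given table (the table name, the 0.5s threshold, and the assumption that queryTime is stored in seconds are all examples, so adjust to taste):

curl -XGET 'http://[MY_ES_SERVER]:9200/mysql/query_data/_search?pretty' -d '{
    "size": 10,
    "sort": [ { "queryTime": { "order": "desc" } } ],
    "query": {
        "filtered": {
            "query":  { "match": { "Query": "customers" } },
            "filter": { "range": { "queryTime": { "gte": 0.5 } } }
        }
    }
}'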


As an example, within the first 3 weeks of being in beta with this system, it was responsible for highlighting 3 queries which, when rationalised, have saved between 40-60m (million... get that?) queries per day, and for bringing 25k (that is thousand) connections per 10 minutes down to a maximum of 60 per 10 minutes. Yes... these were dumb queries for the size of the site, but they were never going to show up in the slow query logs or the Cacti graphs. The connections did show up in Cacti, but there was nothing to say this was out of the ordinary... until they were tied down to an old query, running on a cron, that was never used but went into a huge loop.
This meant we reduced the number of slaves in use by just over a third, for no actual software cost... and we haven't even started on the actual problem queries.

Hope this has been useful, please feel free to quote, feed back, ask for help...

Later...







