The other day I found myself trying to tune a Ruby on Rails app I had written as a side project. (The app lets me keep track of my favorite eateries and pubs. It’s searchable, includes multiple images, and has stored locations.) On past projects, I relied on SolarWinds® Papertrail™, path testing, a lot of trial and error, and a general feel to try to improve performance. This time I thought I would give SolarWinds AppOptics™ Dev Edition a try.
I’ve never done much with application performance management (APM) tools. I’d heard they were hard to deploy and configure, and pricey. With the announcement of AppOptics Dev Edition, which is designed to be quick to set up, integrated with Papertrail, and free for small pre-production environments, I thought I should give it a shot.
Installing AppOptics Dev Edition
Setting up AppOptics Dev Edition for my app was a breeze. It has a Ruby integration, so it was just a matter of adding the appoptics_apm gem to my Gemfile and setting an environment variable. I didn’t write anything custom around my application code, and the instrumentation was seamless.
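For reference, the whole setup amounted to roughly the following. This is a minimal sketch: the gem and the APPOPTICS_SERVICE_KEY variable are what the AppOptics Ruby agent documents, so check the current install instructions for your agent version.

```ruby
# Gemfile
gem "appoptics_apm"  # the AppOptics Ruby agent; instruments Rails automatically

# Then run `bundle install` and set the service key in the app's environment,
# for example:
#   export APPOPTICS_SERVICE_KEY="<your service key>"
```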
The next step for me was to see if using an APM solution like AppOptics Dev Edition lived up to the hype.
What Can APM Do for Me?
For background, the app I built is in Ruby on Rails with a hosted Elasticsearch service and hosted Redis and PostgreSQL services. I built the CRUD (create, read, update, delete) operations around the PostgreSQL service and use Rails for most of the app.
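For context, the client gems behind a stack like this look something like the following. The specific gems here are illustrative, typical choices rather than necessarily the exact ones in my Gemfile.

```ruby
# Gemfile (illustrative dependencies for this kind of stack)
gem "rails"
gem "pg"            # PostgreSQL adapter for the CRUD models
gem "redis"         # Redis client, later used as the cache backend
gem "elasticsearch" # client for the hosted Elasticsearch service
```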
I noticed the app felt a little sluggish on pages that made multiple searches to populate related entries. I decided to try some basic caching optimizations to see if I could speed things up. This type of performance tuning exercise seemed like a good time to see if using an APM tool like AppOptics Dev Edition would make a difference.
Since the returned places don’t change very often, this seemed like a good place to implement caching. I rewrote my controller action so that when a request comes through, it queries the Elasticsearch service the first time, and the Rails app then sticks the query response into a Redis cache. The next time the same query comes through, instead of hitting Elasticsearch, the request hits the Redis cache, which is faster.
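In simplified form, the rewritten action looks something like this. It’s a sketch rather than my exact code: it assumes the Rails cache store is backed by Redis (e.g., config.cache_store = :redis_cache_store) and uses a hypothetical Place.search helper as a stand-in for the Elasticsearch query.

```ruby
# app/controllers/places_controller.rb
require "digest"

class PlacesController < ApplicationController
  def index
    query = params[:q].to_s

    # fetch returns the cached value when the key already exists in Redis;
    # otherwise it runs the block (the Elasticsearch query) and writes the
    # result to the cache, so the next identical query skips Elasticsearch.
    @places = Rails.cache.fetch(cache_key_for(query), expires_in: 12.hours) do
      Place.search(query)  # hypothetical helper wrapping the Elasticsearch call
    end
  end

  private

  # Vary the cache key with the query so different searches don't collide.
  def cache_key_for(query)
    "places/search/#{Digest::SHA1.hexdigest(query)}"
  end
end
```

The expiration is there so that edits to a place eventually show up in search results even if I never invalidate the cache explicitly.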
Measuring Success
When I look at the logs for the first request in Papertrail (with debug logging enabled), I can see the general route. I can tell this one went to the Elasticsearch service, but I can’t easily see the details.
When I view the request in AppOptics, however, I can see all the services and spans included in the request. The route details and total time for the request are much easier to spot and understand. Here’s what an uncached request looked like in AppOptics:
To evaluate whether the caching I built improves the app’s performance, I search for the next occurrence of this request in the Papertrail logs to see if it’s still hitting the Elasticsearch service. There isn’t a reference to Elasticsearch, so it looks like it’s hitting the cache, but by default I can’t see anything about the call to Redis here.
However, when I look at the trace ID in AppOptics Dev Edition, it’s much clearer. I can see there are no Faraday or net.https spans, and the transaction ultimately completes faster.
The duration metric makes it easy to measure and report on the difference. For example, I went from 106.65 ms for the first request to 23.46 ms for the second instance of the same request, so the caching optimizations trimmed more than 75% off the total request time ((106.65 − 23.46) / 106.65 ≈ 78%). The AppOptics heat map for the transaction makes it easy to quickly understand what’s going on. I can see how this kind of visual data would let me move faster and with more confidence when developing or testing new code, or when familiarizing myself with older code I haven’t looked at in a while.
Bottom Line
Performance tuning was easier (and more fun) with AppOptics Dev Edition than relying on logs alone. Visually exploring the response-time breakdown let me quickly identify the slowest paths and performance bottlenecks to prioritize for caching. The request durations let me compare different tuning options and produce “before” and “after” stats.
I also found it easier to trigger investigations on conditions in AppOptics, like response time or exceptions, and then jump from the visual trace of the problematic request into the logs the app produced while handling it. This is by far one of the best integrated workflows I’ve used for debugging code-level issues.
I’ll still be in the Papertrail event viewer for most of my log searching and detailed debugging, but AppOptics Dev Edition is useful for performance tuning and for visually exploring request traces.