Scout Help Docs
Welcome to the help site for Scout Application Monitoring. Don’t have an account? Get started.
Browse through the sidebar, search, email us, or join our Slack community. We’re here to help.
Ruby Agent
View our docs for installing, configuring, and troubleshooting the Scout Ruby agent.
Elixir Agent
View our docs for installing, configuring, and troubleshooting the Scout Elixir agent.
Python Agent
View our docs for installing, configuring, and troubleshooting the Scout Python agent.
PHP Agent
View our docs for installing, configuring, and troubleshooting the Scout PHP agent.
NodeJS Agent
View our docs for installing, configuring, and troubleshooting the Scout NodeJS agent.
Overview
Scout Application Monitoring is a lightweight, production-grade application monitoring service built for modern development teams. Just embed our agent in your application: we handle the rest.
Here’s an overview of the key functionality in our application monitoring service:
Agents
We support Ruby on Rails, Elixir, NodeJS, Python, and PHP apps.
Our agent is designed to run in production environments and has low overhead. Every minute, the agent transmits metrics to our service over SSL.
There’s nothing to install on your servers.
User Interface
A complete overview of the Scout UX is available in the features area of this help site.
Ruby Agent
Requirements
Our Ruby agent supports Ruby on Rails 2.2+ and Ruby 1.8.7+. See a list of libraries we auto-instrument.
Memory Bloat detection and ScoutProf require Ruby 2.1+.
Installation
Tailored instructions are provided within our user interface. General instructions:
1. Add gem 'scout_apm' to your Gemfile, then run bundle install.
2. Download your customized config file*, placing it at config/scout_apm.yml. Your customized config file is available within your Scout account.
3. Deploy.
* - If you’ve installed Scout via the Heroku Addon, the provisioning process automatically sets the required settings via config vars and a configuration file isn’t required.
Troubleshooting
No Data
Not seeing any data?
1. Is there a log/scout_apm.log file?
Yes: Examine the log file for error messages: tail -n1000 log/scout_apm.log | grep "Starting monitoring" -A20. See something noteworthy? Proceed to the last step. Otherwise, continue to step 2.
No: The gem was never initialized by the application. Ensure that the scout_apm gem isn't confined to a Gemfile group your app doesn't run under, for example:
group :production do
  gem 'unicorn'
  gem 'scout_apm'
end
Jump to the last step if the gem is declared correctly but no log file is generated.
2. Was the gem included in your deployed bundle? Verify with: bundle list scout_apm
3. Did you download the config file, placing it in config/scout_apm.yml?
4. Did you restart the app and let it run for a while?
5. Are you sure the application has processed any requests? tail -n1000 log/production.log | grep "Processing"
6. Using Unicorn? Add preload_app true to your Unicorn config file.
7. Oops! Still not seeing any data? Check out the GitHub issues and send us an email with a link to your app within Scout and the contents of log/scout_apm.log. We typically respond within a couple of hours during the business day.
Significant time spent in “Controller” or “Job”
When viewing a transaction trace, you may see time spent in the “controller” or “job” layers. This is time that falls outside of Scout’s default instrumentation. There are two options for gathering additional instrumentation:
- Custom Instrumentation - use our API to instrument pieces of code that are potential bottlenecks.
- ScoutProf - install our BETA agent which adds ScoutProf. ScoutProf breaks down time spent within the controller layer. Note that ScoutProf does not instrument background jobs.
Missing memory metrics
Memory allocation metrics require the following:
- Ruby version 2.1+
- scout_apm version 2.0+
If the above requirements are not met, Scout continues to function but does not report allocation-related metrics.
Updating to the Newest Version
The latest version of scout_apm is listed in the changelog.
1. Ensure your Gemfile entry for Scout is: gem 'scout_apm'
2. Run bundle update scout_apm.
3. Re-deploy your application.
The gem version changelog is available here.
Configuration Options
The Ruby agent can be configured via the config/scout_apm.yml YAML file and/or environment variables. A config file with your organization key is available for download as part of the install instructions.
ERB evaluation
ERB is evaluated when loading the config file. For example, you can set the app name based on the hostname:
common: &defaults
name: <%= "ProjectPlanner.io (#{Rails.env})" %>
Configuration Reference
The following configuration settings are available:
Setting Name | Description | Default | Required
---|---|---|---
name | Name of the application (ex: 'Photos App'). | Rails.application.class.to_s.sub(/::Application$/, '') | Yes
key | The organization API key. | | Yes
monitor | Whether monitoring should be enabled. | false | No
log_level | The logging level of the agent. | INFO | No
log_file_path | The path to the scout_apm.log log file directory. Use stdout to log to STDOUT. | Environment#root+log/, or STDOUT if running on Heroku. | No
hostname | The hostname the metrics should be aggregated under. | Socket.gethostname | No
proxy | Specify the proxy URL (ex: https://proxy) if a proxy is required. | | No
host | The protocol + domain where the agent should report. | https://scoutapm.com | No
uri_reporting | By default Scout reports the URL and filtered query parameters with transaction traces. Sensitive parameters in the URL will be redacted. To exclude query params entirely, use path. | filtered_params | No
disabled_instruments | An Array of instruments that Scout should not install. Each Array element should be a string-ified, case-sensitive class name (ex: ['Elasticsearch','HttpClient']). The default installed instruments can be viewed in the agent source. | [] | No
ignore | An Array of web endpoints that Scout should not instrument. Routes that match the prefixed path (ex: ['/health', '/status']) will be ignored by the agent. | [] | No
enable_background_jobs | Indicates if background jobs should be monitored. | true | No
dev_trace | Indicates if DevTrace, the Scout development profiler, should be enabled. Note this setting only applies to the development environment. | false | No
profile | Indicates if ScoutProf, the Scout code profiler, should be enabled. | true | No
revision_sha | The Git SHA that corresponds to the version of the app being deployed. | See docs | No
detailed_middleware | When true, the time spent in each middleware is visible in transaction traces vs. an aggregate across all middlewares. This adds additional overhead and is disabled by default as middleware is an uncommon bottleneck. | false | No
collect_remote_ip | Automatically capture end user IP addresses as part of each trace's context. | true | No
timeline_traces | Send traces in both the summary and timeline formats. | true | No
auto_instruments | Instrument custom code with AutoInstruments. | false | No
auto_instruments_ignore | Excludes the listed file names from being autoinstrumented. Ex: ['application_controller']. | [] | No
Environment Variables
You can also configure Scout APM via environment variables. Environment variables override settings provided in scout_apm.yml.
To configure Scout via environment variables, uppercase the config key and prefix it with SCOUT_. For example, to set the key via an environment variable:
export SCOUT_KEY=YOURKEY
Deploy Tracking Config
Scout can track deploys, making it easier to correlate changes in your app to performance. To enable deploy tracking, first ensure you are on the latest version of scout_apm
. See our upgrade instructions.
Scout identifies deploys via the following:
- If you are using Capistrano, no extra configuration is required. Scout reads the contents of the REVISION and/or revisions.log file and parses out the SHA of the most recent release.
- If you are using Heroku, enable Dyno Metadata. This adds a HEROKU_SLUG_COMMIT environment variable to your dynos, which Scout then associates with deploys.
- If you are deploying via a custom approach, set a SCOUT_REVISION_SHA environment variable equal to the SHA of your latest release (see the example after this list).
- If the app resides in a Git repo, Scout parses the output of git rev-parse --short HEAD to determine the revision SHA.
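For the custom approach, a minimal sketch of a deploy script might look like the following (the script itself is illustrative; only the SCOUT_REVISION_SHA variable is part of Scout's configuration):
# in your custom deploy script (illustrative)
export SCOUT_REVISION_SHA=$(git rev-parse HEAD)
# ...then start or restart your app server as usual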
Enabling DevTrace
To enable DevTrace, our in-browser profiler:
1. Ensure you are on the latest version of scout_apm. See the update instructions above.
2. Add dev_trace: true to the development section of your scout_apm.yml config file (see the snippet after these steps).
3. Refresh your browser window and look for the speed badge.
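A sketch of what that development section might look like (assuming the common/&defaults layout used elsewhere in these docs):
# config/scout_apm.yml
development:
  <<: *defaults
  dev_trace: true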
Server Timing
View performance metrics (time spent in ActiveRecord, Redis, etc) for each of your app’s requests in Chrome’s Developer tools with the server_timing gem. Production-safe.
For install instructions and configuration options, see server_timing on GitHub.
ActionController::Metal
Prior to agent version 2.1.26, an extra step was required to instrument ActionController::Metal
and ActionController::Api
controllers. After 2.1.26, this is automatic.
The previous instructions which had an explicit include
are no longer
needed, but if that code is still in your controller, it will not harm
anything. It will be ignored by the agent and have no effect.
Rack
Rack instrumentation is more explicit than Rails instrumentation, since Rack applications can take nearly any form. After installing our agent, instrumenting Rack is a three step process:
- Configuring the agent
- Starting the agent
- Wrapping endpoints in tracing
Configuration
Rack apps are configured using the same approach as a Rails app: either via a config/scout_apm.yml
config file or environment variables.
- configuration file: create a config/scout_apm.yml file under your application root directory. The file structure is outlined here (a minimal sketch follows this list).
- environment variables: see our docs on configuring the agent via environment variables.
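A minimal config/scout_apm.yml sketch for a Rack app (the app name and key are placeholders; the layout mirrors the structure shown in the Environments section later in these docs):
# config/scout_apm.yml
common: &defaults
  name: "My Rack App"
  key: YOUR_KEY
  monitor: true
production:
  <<: *defaults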
Starting the Agent
Add the ScoutApm::Rack.install!
startup call as close to the spot you
run
your Rack application as possible. install!
should be called after you require other gems (ActiveRecord, Mongo, etc)
to install instrumentation for those libraries.
# config.ru
require 'scout_apm'
ScoutApm::Rack.install!
run MyApp
Adding endpoints
Wrap each endpoint in a call to ScoutApm::Rack#transaction(name, env)
.
name
- an unchanging string argument for what the endpoint is. Example:"API User Listing"
env
- the rack environment hash
The details here may be fairly application-specific.
Example:
app = Proc.new do |env|
ScoutApm::Rack.transaction("API User Listing", env) do
User.all.to_json
['200', {'Content-Type' => 'application/json'}, [users]]
end
end
If you run into any issues, or want advice on naming or wrapping endpoints, contact us at support@scoutapm.com for additional help.
Sinatra
Instrumenting a Sinatra application is similar to instrumenting a generic Rack application.
Configuration
The agent configuration (API key, app name, etc) follows the same process as the Rack application config.
Starting the agent
Add the ScoutApm::Rack.install!
startup call as close to the spot you
run
your Sinatra application as possible. install!
should be called after you require other gems (ActiveRecord, Mongo, etc).
require './main'
require 'scout_apm'
ScoutApm::Rack.install!
run Sinatra::Application
Adding endpoints
Wrap each endpoint in a call to ScoutApm::Rack#transaction(name, env)
. For example:
get '/' do
ScoutApm::Rack.transaction("get /", request.env) do
ActiveRecord::Base.connection.execute("SELECT * FROM pg_catalog.pg_tables")
"Hello!"
end
end
See our Rack docs for adding an endpoint for more details.
Custom Context
Context lets you see the forest from the trees. For example, you can add custom context to answer critical questions like:
- How many users are impacted by slow requests?
- How many trial customers are impacted by slow requests?
- How much of an impact are slow requests having on our highest paying customers?
It’s simple to add custom context to your app. There are two types of context:
User Context
For context used to identify users (ex: email, name):
ScoutApm::Context.add_user({})
Examples:
ScoutApm::Context.add_user(email: @user.email)
ScoutApm::Context.add_user(email: @user.email, location: @user.location.to_s)
General Context
ScoutApm::Context.add({})
Examples:
ScoutApm::Context.add(account: @account.name)
ScoutApm::Context.add(database_shard: @db.shard_id, monthly_spend: @account.monthly_spend)
Default Context
Scout reports the Request URI and the user’s remote IP Address by default.
Context Types
Context values can be any of the following types:
- Numeric
- String
- Boolean
- Time
- Date
Context Field Name Restrictions
Custom context names may contain alphanumeric characters, dashes, and underscores. Spaces are not allowed.
Attempts to add invalid context will be ignored.
Example: adding the current user’s email as context
Add the following to your ApplicationController
class:
before_filter :set_scout_context
Create the following method:
def set_scout_context
ScoutApm::Context.add_user(email: current_user.email) if current_user.is_a?(User)
end
Example: adding the monthly spend as context
Add the following line to the ApplicationController#set_scout_context
method defined above:
ScoutApm::Context.add(monthly_spend: current_org.monthly_spend) if current_org
Custom Instrumentation
Traces that allocate a significant amount of time to Controller or Job are good candidates for custom instrumentation. This indicates a significant amount of time is falling outside our default instrumentation.
Limits
We limit the number of metrics that can be instrumented. Tracking too many unique metrics can impact the performance of our UI. Do not dynamically generate metric types in your instrumentation (e.g., self.class.instrument("user_#{user.id}", "generate") { ... }), as this can quickly exceed our rate limits.
Instrumenting method calls
To instrument a method call, add the following to the class containing the method:
class User
include ScoutApm::Tracer
def export_activity
# Do export work
end
instrument_method :export_activity
end
The call to instrument_method
should be after the method definition.
Naming methods instrumented via instrument_method
In the example above, the metric will appear in traces as User#export_activity
. On timeseries charts, the time will be allocated to a Custom
type.
To modify the type:
instrument_method :export_activity, type: 'Exporting'
A new Exporting
metric will now appear on charts. The trace item will be displayed as Exporting/User/export_activity
.
To modify the name:
instrument_method :export_activity, type: 'Exporting', name: 'user_activity'
The trace item will now be displayed as Exporting/user_activity
.
Instrumenting blocks of code
To instrument a block of code, add the following:
class User
include ScoutApm::Tracer
def generate_profile_pic
self.class.instrument("User", "generate_profile_pic") do
# Do work
end
end
end
Naming methods instrumented via instrument(type, name)
In the example above, the metric appears in traces as User/generate_profile_pic. On timeseries charts, the time will be allocated to a User type. To modify the type or name, simply change the corresponding arguments to instrument.
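For example, a sketch reusing the block above, reporting the same work under an Exporting type instead of User:
def generate_profile_pic
  self.class.instrument("Exporting", "generate_profile_pic") do
    # Do work
  end
end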
Renaming transactions
There may be cases where you require more control over how Scout names transactions. For example, if you have a controller-action that renders both JSON and HTML formats and the rendering time varies significantly between the two, it may make sense to define a unique transaction name for each.
Use ScoutApm::Transaction#rename
:
class PostsController < ApplicationController
def index
ScoutApm::Transaction.rename("posts/foobar")
@posts = Post.all
end
end
In the example above, the default name for the transaction is posts/index
, which appears as PostsController#index
in the Scout UI. Renaming the transaction to posts/foobar
identifies the transaction as PostsController#foobar
in the Scout UI.
Do not generate high-cardinality transaction names (ex: ScoutApm::Transaction.rename("posts/foobar_#{current_user.id}")), as we limit the number of transactions that can be tracked. High-cardinality transaction names can quickly surpass this limit.
Testing instrumentation
Improper instrumentation can break your application. It’s important to test before deploying to production. The easiest way to validate your instrumentation is by running DevTrace and ensuring the new metric appears as desired.
After restarting your dev server with DevTrace enabled, refresh the browser page and view the trace output. The new metric should appear in the trace:
Rake + Rails Runner
Scout doesn’t have a dedicated API for instrumenting rake
tasks or transactions called via rails runner
. Instead, we suggest creating basic wrapper tasks that spawn a background job in a framework we support. These jobs are automatically monitored by Scout and appear in the Scout UI under “background jobs”.
For example, the following is a CronJob that triggers the execution of an IntercomSync
background job:
10 * * * * cd /home/deploy/your_app/current && rails runner 'IntercomSync.perform_later'
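Similarly, a basic rake wrapper task might look like the following sketch (the task and job names are illustrative, reusing the IntercomSync job above):
# lib/tasks/intercom.rake
namespace :intercom do
  desc "Enqueue the IntercomSync background job"
  task sync: :environment do
    IntercomSync.perform_later
  end
end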
Sneakers
Scout doesn’t instrument Sneakers (a background processing framework for Ruby and RabbitMQ) automatically. To add Sneakers instrumentation:
- Download the contents of this gist. Place the file inside your application’s
/lib
folder or similar. - In
config/boot.rb
, add:require File.expand_path('lib/scout_sneakers.rb', __FILE__)
- In your
Worker
class, immediately following thework
method, addinclude ScoutApm::BackgroundJobIntegrations::Sneakers::Instruments
.
This treats calls to the work
method as distinct transactions, named with the worker class.
Example usage:
class BaseWorker
include Sneakers::Worker
def work(attributes)
# Do work
end
# This MUST be included AFTER the work method is defined.
include ScoutApm::BackgroundJobIntegrations::Sneakers::Instruments
end
Docker
Scout runs within Docker containers without any special configuration.
It’s common to configure Docker containers with environment variables. Scout can use environment variables instead of the scout_apm.yml
config file.
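For example, a sketch of passing the Scout settings at container run time (the image name is illustrative):
docker run \
  -e SCOUT_MONITOR=true \
  -e SCOUT_KEY=YOURKEY \
  -e SCOUT_NAME="My App (Production)" \
  your-app-image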
Heroku
Scout runs on Heroku without any special configuration. When Scout detects that an app is being served via Heroku:
- Logging is set to STDOUT vs. logging to a file. Log messages are prefixed with [Scout] for easy filtering.
- The dyno name (ex: web.1) is reported vs. the dyno hostname. Dyno hostnames are dynamically generated and don't have any meaningful information.
Configuration
Scout can be configured via environment variables. This means you can use heroku config:set
to configure the agent. For example, you can set the application name that appears in the Scout UI with:
heroku config:set SCOUT_NAME='My Heroku App'
See the configuration section for more information on the available config settings and environment variable functionality.
Using the Scout Heroku Add-on
Scout is also available as a Heroku Add-on. The add-on automates setting the proper Heroku config variables during the provisioning process.
Cloud Foundry
Scout runs on Cloud Foundry without any special configuration.
We suggest a few configuration changes in the scout_apm.yml
file to best take advantage of Cloud Foundry:
- Set log_file_path: STDOUT to send the Scout APM log contents to the Loggregator.
- Use the application name configured via Cloud Foundry to identify the app.
- Override the hostname reported to Scout. Cloud Foundry hostnames are dynamically generated and don’t have any meaningful information. We suggest using a combination of the application name and the instance index.
A sample config for Cloud Foundry that implements the above suggestions:
common: &defaults
key: YOUR_KEY
monitor: true
# use the configured application name to identify the app.
name: <%= ENV['VCAP_APPLICATION'] ? JSON.parse(ENV['VCAP_APPLICATION'])['application_name'] : "YOUR APP NAME (#{Rails.env})" %>
# make logs available to the Loggregator
log_file_path: STDOUT
# reports w/a more identifiable instance name using the application name and instance index. ex: rails-sample.0
hostname: <%= ENV['VCAP_APPLICATION'] ? "#{JSON.parse(ENV['VCAP_APPLICATION'])['application_name']}.#{ENV['CF_INSTANCE_INDEX']}" : Socket.gethostname %>
production:
<<: *defaults
development:
<<: *defaults
monitor: false
test:
<<: *defaults
monitor: false
staging:
<<: *defaults
GraphQL
If you have a GraphQL endpoint which serves any number of queries, you likely want to have each of those types of queries show up in the Scout UI as different endpoints. You can accomplish this by renaming the transaction during the request like so:
scout_transaction_name = "GraphQL/" + operation_name
ScoutApm::Transaction.rename(scout_transaction_name)
Where operation_name
is determined dynamically based on the GraphQL query. E.g. get_profile
, find_user
, etc.
Do not generate high-cardinality transaction names, like ScoutApm::Transaction.rename("GraphQL/foobar_#{current_user.id}"), as we limit the number of transactions that can be tracked. High-cardinality transaction names can quickly surpass this limit.
Instrumented Libraries
The following libraries are currently instrumented:
- Datastores
- ActiveRecord
- ElasticSearch
- Mongoid
- Moped
- Redis
- Rack frameworks
- Rails
- Sinatra
- Grape
- Middleware
- Rails libraries
- ActionView
- ActionController
- External HTTP calls
- HTTPClient
- Net::HTTP
- Background Job Processing
- Sidekiq
- DelayedJob
- Resque
- Sneakers
- Shoryuken
Scout can also instrument request queuing time.
You can instrument your own code or other libraries via custom instrumentation.
Environments
It typically makes sense to treat each environment (production, staging, etc) as a separate application within Scout and ignore the development and test environments. Configure a unique app name for each environment as Scout aggregates data by the app name.
There are 2 approaches:
1. Modifying your scout_apm.yml config file
Here’s an example scout_apm.yml
configuration to achieve this:
common: &defaults
name: <%= "YOUR_APP_NAME (#{Rails.env})" %>
key: YOUR_KEY
log_level: info
monitor: true
production:
<<: *defaults
development:
<<: *defaults
monitor: false
test:
<<: *defaults
monitor: false
staging:
<<: *defaults
2. Setting the SCOUT_NAME environment variable
Setting the SCOUT_NAME and SCOUT_MONITOR environment variables will override settings in your scout_apm.yml config file.
To isolate data for a staging environment: SCOUT_NAME="YOUR_APP_NAME (Staging)"
.
To disable monitoring: SCOUT_MONITOR=false
.
See the full list of configuration options.
Disabling a Node
To disable Scout APM on any node in your environment, just set monitor: false
in your scout_apm.yml
configuration file on that server, and restart your app server. Example:
common: &defaults
name: <%= "YOUR_APP_NAME (#{Rails.env})" %>
key: YOUR_KEY
log_level: info
monitor: false
production:
<<: *defaults
Since the YAML config file allows ERB evaluation, you can even programmatically enable/disable nodes based on hostname. This example enables Scout APM on web1 through web5:
common: &defaults
name: <%= "YOUR_APP_NAME (#{Rails.env})" %>
key: YOUR_KEY
log_level: info
monitor: <%= Socket.gethostname.match(/web[1-5]/) %>
production:
<<: *defaults
Aft you’ve disabled a node in your configuration file and restarted your app server, the node show up as inactive in the UI after 10 minutes.
Ignoring transactions
There are a couple of approaches to ignore web requests and background jobs you don’t care to instrument. These approaches are listed below.
By the web endpoint path name
You can ignore requests to web endpoints that match specific paths (like /health_check
). See the ignore
setting in the configuration options.
In your code
To selectively ignore a web request or background job in your code, add the following within the transaction:
ScoutApm::Transaction.ignore!
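For example, a sketch that ignores an internal status endpoint (the controller is illustrative):
class StatusController < ApplicationController
  def show
    # Don't report this request to Scout
    ScoutApm::Transaction.ignore!
    render plain: "OK"
  end
end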
Sampling web requests
Use probability sampling to limit the number of web requests Scout analyzes:
# app/controllers/application_controller.rb
before_action :sample_requests_for_scout
def sample_requests_for_scout
# Sample rate should range from 0-1:
# * 0: captures no requests
# * 0.75: captures 75% of requests
# * 1: captures all requests
sample_rate = 0.75
if rand > sample_rate
Rails.logger.debug("[Scout] Ignoring request: #{request.original_url}")
ScoutApm::Transaction.ignore!
end
end
Ignoring all background jobs
You can ignore all background jobs by setting enable_background_jobs: false
in your configuration file. See the configuration options.
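For example, a sketch of the relevant scout_apm.yml setting (reusing the common/&defaults layout shown above):
common: &defaults
  name: <%= "YOUR_APP_NAME (#{Rails.env})" %>
  key: YOUR_KEY
  enable_background_jobs: false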
Overhead Considerations
Scout is built for production monitoring and is engineered to add minimal overhead. We test against several open-source benchmarks on significant releases to prevent releasing performance regressions.
There are a couple of scenarios worth mentioning where more overhead than expected may be observed.
Enabling the detailed_middleware option
By default, Scout aggregates all middleware timings together into a single “Middleware” category. Scout can provide a detailed breakdown of middleware timings by setting detailed_middleware: true
in the configuration settings.
This is false
by default as instrumenting each piece of middleware adds additional overhead. It’s common for Rails apps to use more than a dozen pieces of middleware. Typically, time spent in middleware is very small and isn’t worth instrumenting. Additionally, most of these middleware pieces are maintained by third-parties and are thus more difficult to optimize.
Resque Instrumentation
Since Resque works by forking a child process to run each job and exiting immediately when the job is finished, our instrumentation needs a way to aggregate the timing results and traces into a central store before reporting the information to our service. To support Resque, the Resque child process sends a simple payload to the parent, which is listening via WEBrick on localhost. As long as there is one WEBrick instance listening on the configured port, any Resque children will be able to send results back to it.
The overhead is usually small, but it is more significant than instrumenting background job frameworks like Sidekiq and DelayedJob that do not use forking. The lighter the jobs, the more overhead is incurred in the serialization and reporting to WEBrick. In our testing, for jobs that took ~18 ms each, we found the overhead to be about ~8%. If your jobs take longer than that, on average, the overhead will be lower.
Elixir Agent
Requirements
Our Elixir agent supports Phoenix 1.2.0+, Ecto 2.0+, and Elixir 1.4+.
Installation
Tailored instructions are provided within our user interface. General instructions for a Phoenix 1.3+ app:
A. Add the scout_apm dependency.
Your mix.exs
file:
# mix.exs
def deps do
[{:phoenix, "~> 1.4.0"},
...
{:scout_apm, "~> 1.0"}]
end
If your Mixfile manually specifies applications
, :scout_apm
must be added:
# mix.exs
def application do
[mod: {YourApp, []},
applications: [..., :scout_apm]]
end
Shell: mix deps.get
B. Download your customized config file, placing it at config/scout_apm.exs.
Your customized config file is available within your Scout account. Inside the file, replace "YourApp"
with the app name you’d like to appear within Scout.
C. Integrate into your Phoenix app.
Instrument Controllers. In lib/your_app_web.ex
:
# lib/your_app_web.ex
defmodule YourApp.Web do
def controller do
quote do
use Phoenix.Controller
use ScoutApm.Instrumentation
...
Instrument Templates. In config/config.exs
:
# config/config.exs
config :phoenix, :template_engines,
eex: ScoutApm.Instruments.EExEngine,
exs: ScoutApm.Instruments.ExsEngine
D. Integrate Ecto.
Using Ecto 2.x? In config/config.exs:
# config/config.exs
import_config "scout_apm.exs"

config :your_app, YourApp.Repo,
  loggers: [{Ecto.LogEntry, :log, []}, {ScoutApm.Instruments.EctoLogger, :log, []}]
Using Ecto 3.x? In lib/my_app/application.ex:
# lib/my_app/application.ex
defmodule MyApp.Application do
use Application
def start(_type, _args) do
import Supervisor.Spec
children = [
# ...
]
:ok = ScoutApm.Instruments.EctoTelemetry.attach(MyApp.Repo)
# ...
Supervisor.start_link(children, opts)
end
end
E. Restart your app.
Troubleshooting
Not seeing data?
1. Examine your log file for any lines that match ScoutApm. Look for:
[info] Setup ScoutApm.Watcher on ScoutApm.Store
[info] Setup ScoutApm.Watcher on ScoutApm.Config
[info] Setup ScoutApm.Watcher on ScoutApm.PersistentHistogram
[info] Setup ScoutApm.Watcher on ScoutApm.Logger
[info] Setup ScoutApm.Watcher on ScoutApm.Supervisor
If none of the above appears, ensure the scout_apm dependency was added to your mix.exs (and to applications, if your Mixfile lists them manually) and the app was recompiled.
2. Run mix deps and confirm scout_apm is listed among your compiled dependencies.
3. Is use ScoutApm.Instrumentation added to your controller block? This step is frequently missed if you are using multiple controller modules. See the third step in the Elixir install instructions.
4. Still stuck? Email us with your setup details and relevant log output; this helps us resolve issues faster. We typically respond within a couple of hours during the business day.
Configuration
The Elixir agent can be configured via the config/scout_apm.exs
file. A config file with your organization key is available for download as part of the install instructions. An application name and key are required:
config :scout_apm,
name: "Your App", # The app name that will appear within the Scout UI
key: "YOUR SCOUT KEY"
Alternatively, you can use environment variables of your choosing by formatting your configuration as a tuple with :system as the first element and the name of the expected environment variable as the second.
config :scout_apm,
name: { :system, "SCOUT_APP_NAME" },
key: { :system, "SCOUT_APP_KEY" }
Configuration Reference
The following configuration settings are available:
Setting Name | Description | Default | Required
---|---|---|---
name | Name of the application (ex: 'Photos App'). | | Yes
monitor | Whether monitoring data should be reported. | true | No
key | The organization API key. | | Yes
dev_trace | Indicates if DevTrace, the Scout development profiler, should be enabled. | false | No
host | The protocol + domain where the agent should report. | https://scoutapm.com | No
log_level | The logging level of the agent. Possible values: :debug, :info, :warn, and :error. | :info | No
revision_sha | The Git SHA associated with this release. | See docs | No
ignore | An array of URL prefixes to ignore in the Scout Plug instrumentation. Routes that match the prefixed path (ex: ['/health', '/status']) will be ignored by the agent. | [] | No
core_agent_dir | Path to create the directory which will store the Core Agent. | /tmp/scout_apm_core | No
core_agent_download | Whether to download the Core Agent automatically, if needed. | true | No
core_agent_launch | Whether to start the Core Agent automatically, if needed. | true | No
core_agent_tcp_ip | The TCP IP address the Core Agent uses to communicate with your application. | {127, 0, 0, 1} | No
core_agent_tcp_port | The TCP port the Core Agent uses to communicate with your application. | 9000 | No
core_agent_triple | If you are running a MUSL-based Linux (such as Alpine Linux), you may need to explicitly specify the platform triple, e.g. x86_64-unknown-linux-musl. | Auto detected | No
hostname | The hostname the metrics should be aggregated under. | | No
Updating to the Newest Version
A. Ensure your mix.exs dependency entry for scout_apm is: {:scout_apm, "~> 0.0"}
B. Run: mix deps.get scout_apm
C. Recompile and deploy.
Deploy Tracking Config
Scout can track deploys, making it easier to correlate changes in your app to performance. To enable deploy tracking, first ensure you are on the latest version of scout_apm
. See our upgrade instructions.
Scout identifies deploys via the following:
- A revision_sha config setting (see the sketch after this list).
- A SCOUT_REVISION_SHA environment variable equal to the SHA of your latest release.
- If you are using Heroku, enable Dyno Metadata. This adds a HEROKU_SLUG_COMMIT environment variable to your dynos, which Scout then associates with deploys.
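A minimal sketch of the config-based approach (the config file and SHA value are placeholders):
# config/prod.exs
config :scout_apm,
  revision_sha: "1b4e28ba"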
Auto-Instrumented Libraries
Our install instructions walk through instrumenting the following libraries:
- Phoenix
- controllers
- views
- templates
- Ecto 2.0/3.0
- Slime Templates
See instrumenting common libraries for guides on instrumenting other Elixir libraries.
Instrumenting Common Libraries
We’ve collected best practices for instrumenting common transactions and timing functions below. If you have a suggestion, please share it. See our custom instrumentation quickstart for more details on adding instrumentation.
- Transactions
- Timing
Phoenix Channels
Web or background transactions?
- web: For channel-related functions that impact the user-facing experience. Time spent in these transactions will appear on your app overview dashboard and in the "Web" area of the UI.
- background: For functions that don’t have an impact on the user-facing experience (example: click-tracking). These will be available in the “Background Jobs” area of the UI.
Naming channel transactions
Provide an identifiable name based on the message the handle_out/
or handle_in/
function receives.
An example:
defmodule FirestormWeb.Web.PostsChannel do
use FirestormWeb.Web, :channel
import ScoutApm.Tracing
# Will appear under "Web" in the UI, named "PostsChannel.update"
@transaction_opts [type: "web", name: "PostsChannel.update"]
deftransaction handle_out("update", msg, socket) do
push socket, "update", FetchView.render("index.json", msg)
end
end
Plug Chunked Response (HTTP Streaming)
In a Plug application, a chunked response needs to be instrumented directly, rather than relying on
the default Scout instrumentation Plug. The key part is to start_layer
beforehand, and then call
before_send
after the chunked response is complete.
def chunked(conn, _params) do
# The "Controller" argument is required, and should not be changed. The second argument is the
# name this endpoint will appear as in the Scout UI. The `action_name` function determines this
# automatically.
ScoutApm.TrackedRequest.start_layer("Controller", ScoutApm.Plugs.ControllerTimer.action_name(conn))
conn =
conn
|> put_resp_content_type("text/plain")
|> send_chunked(200)
{:ok, conn} =
Repo.transaction(fn ->
Example.build_chunked_query(...)
|> Enum.reduce_while(conn, fn data, conn ->
case chunk(conn, data) do
{:ok, conn} ->
{:cont, conn}
{:error, :closed} ->
{:halt, conn}
end
end)
end)
ScoutApm.Plugs.ControllerTimer.before_send(conn)
conn
end
Then have the default instrumentation ignore the endpoint’s URL prefix (since it is manually instrumented now). See the ignore configuration for more details.
config :scout_apm,
name: "My Scout App Name",
key: "My Scout Key",
ignore: ["/chunked"]
GenServer calls
It’s common to use GenServer
to handle background work outside the web request flow. Suggestions:
- Treat these as
background
transactions - Provide a
name
based on the message eachhandle_call/
function handles.
An example:
defmodule Flight.Checker do
use GenServer
import ScoutApm.Tracing
# Will appear under "Background Jobs" in the UI, named "Flight.handle_check".
@transaction_opts [type: "background", name: "check"]
def handle_call({:check, flight}, _from, state) do
# Do work...
end
end
Task.start
These execute asynchronously, so treat as a background
transaction.
Task.start(fn ->
# Will appear under "Background Jobs" in the UI, named "Crawler.crawl".
ScoutApm.Tracing.transaction(:background,"Crawler.crawl") do
Crawler.crawl(url)
end
end)
Task.Supervisor.start_child
Like Task.start
, these execute asynchronously, so treat as a background
transaction.
Task.Supervisor.start_child(YourApp.TaskSupervisor, fn ->
# Will appear under "Background Jobs" in the UI, named "Crawler.crawl".
ScoutApm.Tracing.transaction(:background,"Crawler.crawl") do
Crawler.crawl(url)
end
end)
Exq
To instrument Exq background jobs, import ScoutApm.Tracing
, use deftransaction
to define the function, and add a @transaction_opts
module attribute to optionally override the name and type:
defmodule MyWorker do
import ScoutApm.Tracing
# Will appear under "Background Jobs" in the UI, named "MyWorker.perform".
@transaction_opts [type: "background"]
deftransaction perform(arg1, arg2) do
# do work
end
end
Absinthe
Requests to the Absinthe plug can be grouped by the GraphQL operationName
under the “Web” UI by adding this plug to your pipeline.
HTTPoison
Download this Demo.HTTPClient module (you can rename to something more fitting) into your app’s /lib
folder, then alias Demo.HTTPClient
when calling HTTPoison
functions:
defmodule Demo.Web.PageController do
use Demo.Web, :controller
# Will route function calls to `HTTPoison` through `Demo.HTTPClient`, which times the execution of the HTTP call.
alias Demo.HTTPClient
def index(conn, _params) do
# "HTTP" will appear on timeseries charts. "HTTP/get" and the url "https://cnn.com" will appear in traces.
case HTTPClient.get("https://cnn.com") do
{:ok, %HTTPoison.Response{} = response} ->
# do something with response
render(conn, "index.html")
{:error, %HTTPoison.Error{} = error} ->
# do something with error
render(conn, "error.html")
end
HTTPClient.post("https://cnn.com", "")
HTTPClient.get!("http://localhost:4567")
render(conn, "index.html")
end
end
MongoDB Ecto
Download this example MongoDB Repo module to use in place of your existing MongoDB Repo module.
Custom Instrumentation
You can extend Scout to record additional types of transactions (background jobs, for example) and time the execution of code that falls outside our auto instrumentation.
For full details on instrumentation functions, see our ScoutApm.Tracing Hex docs.
Transactions & Timing
Scout’s instrumentation is divided into 2 areas:
- Transactions: these wrap around a flow of work, like a web request or a GenServer call. The UI groups data under transactions. Use the
deftransaction/2
macro or wrap blocks of code with thetransaction/4
macro. - Timing: these measure individual pieces of work, like an HTTP request to an outside service or an Ecto query, and displays timing information within a transaction trace. Use the
deftiming/2
macro or thetiming/4
macro.
Instrumenting transactions
deftransaction Macro Example
Replace your function def
with deftransaction
to instrument it. You can override the name and type by setting the @transaction_opts
attribute right before the function.
defmodule YourApp.Web.RoomChannel do
use Phoenix.Channel
import ScoutApm.Tracing
# Will appear under "Web" in the UI, named "YourApp.Web.RoomChannel.join".
@transaction_opts [type: "web"]
deftransaction join("topic:html", _message, socket) do
{:ok, socket}
end
# Will appear under "Background Jobs" in the UI, named "RoomChannel.ping".
@transaction_opts [type: "background", name: "RoomChannel.ping"]
deftransaction handle_in("ping", %{"body" => body}, socket) do
broadcast! socket, "new_msg", %{body: body}
{:noreply, socket}
end
end
transaction/4 Example
Wrap the block of code you’d like to instrument with transaction/4
:
import ScoutApm.Tracing
def do_async_work do
Task.start(fn ->
# Will appear under "Background Jobs" in the UI, named "Do Work".
transaction(:background, "Do Work") do
# Do work...
end
end)
end
See the ScoutApm.Tracing Hexdocs for details on instrumenting transactions.
Timing functions and blocks of code
deftiming Macro Example
Replace your function def
with deftiming
to instrument it. You can override the name and category by setting the @timing_opts
attribute right before the function.
defmodule Searcher do
import ScoutApm.Tracing
# Time associated with this function will appear under "Hound" in timeseries charts.
# The function will appear as `Hound/open_search` in transaction traces.
@timing_opts [category: "Hound"]
deftiming open_search(url) do
navigate_to(url)
end
# Time associated with this function will appear under "Hound" in timeseries charts.
# The function will appear as `Hound/homepage` in transaction traces.
@timing_opts [category: "Hound", name: "homepage"]
deftiming open_homepage(url) do
navigate_to(url)
end
end
timing/4 Example
Wrap the block of code you’d like to instrument with timing/4
:
defmodule PhoenixApp.PageController do
use PhoenixApp.Web, :controller
import ScoutApm.Tracing
def index(conn, _params) do
timing("Timer", "sleep") do
:timer.sleep(3000)
end
render conn, "index.html"
end
end
See the ScoutApm.Tracing Hexdocs for details on timing functions and blocks of code.
Limits on category arity
We limit the number of unique categories that can be instrumented. Tracking too many unique categories can impact the performance of our UI. Do not dynamically generate categories in your instrumentation (e.g., timing("user_#{user.id}", "generate", do: do_work())), as this can quickly exceed our rate limits.
Adding a description
Call ScoutApm.Tracing.update_desc/1
to add relevant information to the instrumented item. This description is then viewable in traces. An example:
timing("HTTP", "GitHub_Avatar") do
url = "https://github.com/#{user.id}.png"
update_desc("GET #{url}")
HTTPoison.get(url)
end
Tracking already executed time
Libraries like Ecto log details on executed queries. This includes timing information. To add a trace item for this, use ScoutApm.Tracing.track
. An example:
defmodule YourApp.Mongo.Repo do
use Ecto.Repo
# Scout instrumentation of Mongo queries. These appear in traces as "Ecto/Read", "Ecto/Write", etc.
def log(entry) do
ScoutApm.Tracing.track(
"Ecto",
query_name(entry),
entry.query_time,
:microseconds
)
super entry
end
end
In the example above, the metric will appear in traces as Ecto/#{query_name(entry)}
. On timeseries charts, the time will be allocated to Ecto
.
See the scout_apm hex docs for more information on track/
.
Testing instrumentation
Improper instrumentation can break your application. It’s important to test before deploying to production. The easiest way to validate your instrumentation is by running DevTrace and ensuring the new metric appears as desired.
After restarting your app with DevTrace enabled, refresh the browser page and view the trace output. The new metric should appear in the trace.
Custom Context
Context lets you see the forest from the trees. For example, you can add custom context to answer critical questions like:
- How many users are impacted by slow requests?
- How many trial customers are impacted by slow requests?
- How much of an impact are slow requests having on our highest paying customers?
It’s simple to add custom context to your app. There are two types of context:
User Context
For context used to identify users (ex: email, name):
ScoutApm.Context.add_user(key, value)
Examples:
ScoutApm.Context.add_user(:email, user.email)
ScoutApm.Context.add_user(:name, user.name)
General Context
ScoutApm.Context.add(key, value)
Examples:
ScoutApm.Context.add(:account, account.name)
ScoutApm.Context.add(:monthly_spend, account.monthly_spend)
Default Context
Scout reports the Request URI and the user’s remote IP Address by default.
Context Value Types
Context values can be any of the following types:
- Printable strings (String.printable?/1 returns true)
) - Boolean
- Number
Context Key Restrictions
Context keys can be an Atom
or String
with only printable characters. Custom context keys may contain alphanumeric characters, dashes, and underscores. Spaces are not allowed.
Attempts to add invalid context will be ignored.
Environments
It typically makes sense to treat each environment (production, staging, etc) as a separate application within Scout. To do so, configure a unique app name for each environment. Scout aggregates data by the app name.
An example:
# config/staging.exs
config :scout_apm,
name: "YOUR APP - Staging"
Ignoring Transactions
There are a couple of approaches to ignore web requests and background jobs you don’t care to instrument. These approaches are listed below.
By the web endpoint path name
You can ignore requests to web endpoints that match specific paths (like /health_check
). See the ignore
setting in the configuration options.
In your code
To selectively ignore a web request or background job in your code, add the following within the transaction:
ScoutApm.TrackedRequest.ignore()
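For example, a sketch that ignores a health-check endpoint in a Phoenix controller (module and action names are illustrative):
defmodule MyAppWeb.HealthController do
  use MyAppWeb, :controller

  def show(conn, _params) do
    # Don't report this request to Scout
    ScoutApm.TrackedRequest.ignore()
    send_resp(conn, 200, "OK")
  end
end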
Enabling DevTrace
To enable DevTrace, our in-browser profiler:
1. Add dev_trace: true to your dev config:
# config/dev.exs
config :scout_apm,
  dev_trace: true
2. Restart your app.
3. Refresh your browser window and look for the speed badge.
Server Timing
View performance metrics (time spent in Controller, Ecto, etc) for each of your app’s requests in Chrome’s Developer tools with the plug_server_timing package. Production-safe.
For install instructions and configuration options, see plug_server_timing on GitHub.
NodeJS Agent
Scout’s NodeJS agent supports many popular libraries to instrument middleware, request times, SQL queries, and more.
The base package is called @scout_apm/scout-apm
. See our install instructions for more details.
Source code and issues can be found on our @scout_apm/scout-apm
GitHub repository.
Requirements
@scout_apm/scout-apm
requires:
- NodeJS
- A POSIX operating system, such as Linux or macOS.
Instrumented Libraries
Scout provides instrumentation for many popular NodeJS libraries.
For all integrations, scout
should be required as early as possible:
const scout = require("@scout_apm/scout-apm");
Requiring scout
before other dependencies ensures that it is set up for use with your other dependencies. For example Postgres (or some library that depends on pg
):
const scout = require("@scout_apm/scout-apm");
const pg = require("pg");
In a Typescript project, if you do not import all of scout
, you will need to run setupRequireIntegrations
with the packages you want to set up:
import { setupRequireIntegrations } from "@scout_apm/scout-apm"; // alternatively, `import "@scout_apm/scout-apm";`
setupRequireIntegrations(["pg"]);
import { Client } from "pg";
Some configuration required
The libraries below require a small number of configuration updates. Click on the respective library for instructions.
You can instrument your own code or other libraries via custom instrumentation. You can suggest additional libraries you’d like Scout to instrument on GitHub.
Express
Scout supports Express 4.x+.
1. Install the @scout_apm/scout-apm package:
$ yarn add @scout_apm/scout-apm
2. Add to Express Middleware:
// Require scout-apm first, before other requires
const scout = require("@scout_apm/scout-apm");
const express = require("express");

// The "main" function
async function start() {
  // Trigger the download and installation of the core-agent
  await scout.install({
    allowShutdown: true, // allow shutting down spawned scout-agent processes from this program
    monitor: true, // enable monitoring
    name: "<application name>",
    key: "<scout key>",
  });

  // Create your express application and enable the app-wide scout middleware
  const app = express();
  app.use(scout.expressMiddleware());

  // ... add your routes/handlers and start the server ...
}
A complete example, including logging options, appears in the "Express middleware logging" section below.
3. Configure Scout via ENV variables:
export SCOUT_MONITOR=true
export SCOUT_KEY="[AVAILABLE IN THE SCOUT UI]"
export SCOUT_NAME="A FRIENDLY NAME FOR YOUR APP"
NOTE: Pass configuration to scout.install and, if a Scout agent instance does not already exist, one will be created for you on the fly. After awaiting or .then-ing the Promise returned by scout.install, you can be sure that the Scout agent is available, and you can enable the middleware by calling app.use(scout.expressMiddleware()). If you do not call scout.install({ ... }) and wait for setup to complete, the first inbound request will start the setup and eventually requests will be recorded (setup will not block requests, and recording will start when the agent has been set up).
If you've installed Scout via the Heroku Addon, the provisioning process automatically sets the required config vars.
4. Deploy. It takes just a few minutes for your data to first appear within the Scout UI.
Troubleshooting
Not seeing data? Email support@scoutapm.com with:
- A link to your app within Scout (if applicable)
- Your NodeJS version
- The name of the framework and version you are trying to instrument, e.g. Express 4.17.0
- Scout logs
We typically respond within a couple of hours during the business day.
Configuration Reference
Setting Name | Description | Default | Required
---|---|---|---
SCOUT_KEY | The organization API key. | | Yes
SCOUT_NAME | Name of the application (ex: 'Photos App'). | | Yes
SCOUT_MONITOR | Whether monitoring data should be reported. | false | Yes
SCOUT_REVISION_SHA | The Git SHA associated with this release. | See docs | No
SCOUT_LOG_LEVEL | Override the Scout log level. Can only be used to quiet the agent; will not override the underlying logger's level. | | No
SCOUT_SCM_SUBDIRECTORY | The relative path from the base of your Git repo to the directory which contains your application code. | | No
SCOUT_CORE_AGENT_DIR | Path to create the directory which will store the Core Agent. | /tmp/scout_apm_core | No
SCOUT_CORE_AGENT_DOWNLOAD | Whether to download the Core Agent automatically, if needed. | true | No
SCOUT_CORE_AGENT_LAUNCH | Whether to start the Core Agent automatically, if needed. | true | No
SCOUT_CORE_AGENT_PERMISSIONS | The permission bits to set when creating the directory of the Core Agent. | 700 | No
SCOUT_CORE_AGENT_TRIPLE | If you are running a MUSL-based Linux (such as Alpine Linux), you may need to explicitly specify the platform triple, e.g. x86_64-unknown-linux-musl. | Auto detected | No
SCOUT_CORE_AGENT_LOG_LEVEL | The log level of the core agent process. This should be one of: "trace", "debug", "info", "warn", "error". This does not affect the log level of the NodeJS library. To change that, directly configure logging as per the documentation. | "info" | No
SCOUT_CORE_AGENT_LOG_FILE | The log file for the core agent process. | "/path/to/your/log/file" | No
Environments
It typically makes sense to treat each environment (production, staging, etc) as a separate application within Scout and ignore the development and test environments. Configure a unique app name for each environment as Scout aggregates data by the app name.
For example, app-staging might be used to represent a Staging environment, whereas app-production would represent a Production environment.
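For example, a sketch of the staging environment's variables (values are placeholders):
export SCOUT_NAME="app-staging"
export SCOUT_MONITOR=true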
Logging
Scout logs internal activity via a configured logging function with the signature (msg: string, level: LogLevel) => void
.
Express middleware logging
To enable agent logging with the express
middleware, your middleware should be set up like the following:
const scout = require("@scout_apm/scout-apm");
const express = require("express");
// The "main" function of your application
async function start() {
// Create your express application
const app = express();
// Install scout
await scout.install(
// Configuration for the scout agent
{
allowShutdown: true, // allow shutting down spawned scout-agent processes from this program
monitor: true, // enable monitoring
name: "<application name>",
key: "<scout key>",
},
// Additional scout options
{
logFn: scout.consoleLogFn,
}
);
// Enable the app-wide scout middleware
app.use(scout.expressMiddleware());
// ... Add other middleware/handlers ...
app.listen(...);
}
If you are using winston
you may build a logFn
by passing a winston.Logger
to the exported scout.buildWinstonLogger
helper function:
logFn: scout.buildWinstonLogger(yourLogger),
If a winston.Logger
instance is provided, Scout’s logging defaults to the same log level as the instance, otherwise it defaults to ERROR
. You may set the logging to a stricter level to quiet the agent’s logging via the logLevel
in the config
sub-object (or SCOUT_LOG_LEVEL
via ENV). The underlying LoggerInterface’s level will take precedence if it is tighter than the logLevel
configuration.
Custom Instrumentation
You can extend Scout to trace transactions outside our officially supported libraries (e.g. Cron jobs and other web frameworks) and time the execution of sections of code that falls outside our provided instrumentation.
Asynchronous functionality can be marked as a transaction with code similar to the following:
scout.api.WebTransaction.run("transaction-name", (finishTransaction) => {
yourAsyncFunction()
.then(() => finishTransaction())
.catch(err => {
// error handling code goes here
finishTransaction();
});
});
For Asynchronous functionality in a callback-passing style:
scout.api.WebTransaction.run("transaction-name", (finishTransaction) => {
yourCallbackStyleAsyncFunction((err) => {
if (err) {
// error handling code goes here
return;
}
finishTransaction();
});
});
Synchronous functionality can be marked as transactions with code similar to the following:
scout.api.WebTransaction.runSync("sync-transaction-name", () => {
yourSyncFunction();
});
Transactions & Timing
Scout’s instrumentation is divided into 2 areas:
- Transactions: these wrap around an entire flow of work, like a web request or Cron job. The Scout Web UI groups data under transactions.
- Timing: these measure small pieces of code that occur inside of a transaction, like an HTTP request to an outside service, or a database call. This is displayed within a transaction trace in the UI.
Instrumenting Transactions
A transaction groups a sequence of work under a single name in the Scout UI. These are used to generate transaction traces. For example, you may create a transaction that wraps around the entire execution of a NodeJS script that is run as a Cron Job.
The Express integration does this all for you. You will only need to manually instrument transactions in special cases. Contact us at support@scoutapm.com for help.
Limits
We limit the number of unique transactions that can be instrumented. Tracking too many uniquely named transactions can impact the performance of the UI. Do not dynamically generate transaction names in your instrumentation as this can quickly exceed our rate limits. Use context to add high-dimensionality information instead.
Web or Background transactions?
Scout distinguishes between two types of transactions:
- WebTransaction: For transactions that impact the user-facing experience. Time spent in these transactions will appear on your app overview dashboard and in the "Web" area of the UI.
- BackgroundTransaction: For transactions that don't have an impact on the user-facing experience (example: cron jobs). These will be available in the "Background Jobs" area of the UI.
scout.api.WebTransaction.run("GET /users", () => { ... your code ... });
scout.api.BackgroundTransaction.run("your-bg-transaction", () => { ... your code ... });
Timing functions and blocks of code
In existing transactions, both automatically created with Express instruments, and also manually created, you can time sections of code that are interesting to your application.
Traces that allocate a significant amount of time to the Controller or Job layers are good candidates for custom instrumentation. This indicates a significant amount of time is falling outside our default instrumentation.
Asynchronous functionality may be instrumented with code similar to the following:
// NOTE: The transaction is *implicit* inside of express route handlers, if you are using the express middleware
scout.api.WebTransaction.run("transaction-name", (finishTransaction) => {
// Start the first instrumentation
const first = scout.api.instrument("instrument-name", (finishInstrument) => {
// instrument code
return yourAsyncFunction()
.then(() => finishInstrument());
});
// Start the second instrumentation
const second = scout.api.instrument("instrument-name", (finishInstrument) => {
// instrument code
return yourAsyncFunction()
.then(() => finishInstrument());
});
// Finish the transaction once all instrumentations are recorded
Promise.all([first, second])
.then(() => finishTransaction());
});
For Asynchronous functionality in a callback-passing style:
// NOTE: The transaction is *implicit* inside of express route handlers, if you are using the express middleware
scout.api.WebTransaction.run("transaction-name", (finishTransaction) => {
// Start the first instrumentation
const first = scout.api.instrument("first-instrumentation", (finishFirst) => {
// instrument code
yourCallbackStyleAsyncFunction((err) => {
if (err) {
// error handling code here
return;
}
finishFirst();
// Start a second instrumentation
const second = scout.api.instrument("second-instrumentation", (finishSecond) => {
// instrument code
yourCallbackStyleAsyncFunction((err) => {
if (err) {
// error handling code here
return;
}
finishSecond();
finishTransaction();
});
});
});
});
});
Synchronous functionality can be instrumented with code similar to the following:
// NOTE: The transaction is *implicit* inside of express route handlers, if you are using the express middleware
scout.api.WebTransaction.runSync("sync-transaction-name", (finishTransaction) => {
scout.api.instrumentSync("first-instrumentation", () => {
yourSyncFunction();
});
scout.api.instrumentSync("second-instrumentation", () => {
yourSyncFunction();
});
});
Limits
We limit the number of metrics that can be instrumented. Tracking too many unique metrics can impact the performance of our UI. Do not dynamically generate metric types in your instrumentation as this can quickly exceed our rate limits.
For high-cardinality details, use context.
Getting Started
With existing code like:
// A handler that handles GET /
const handler = (req, res) => {
// Functionality here
};
The express middleware automatically wraps your request and handler with a transaction/instrumentation as if you’d written the following:
scout.api.WebTransaction.run("Controller/GET /<your route>", finishTransaction => { // transaction name format is `<kind>/<name>`
scout.api.instrument("Controller/GET /<your route>", finishSpan => { // instrumentation name format is `<kind>/<name>`
// The original handler code
});
});
- kind - A high level area of the application. This defaults to Custom. Your whole application should have a very low number of unique strings here. In our built-in instruments, this is things like Template and SQL. For custom instrumentation, it can be strings like MongoDB or HTTP or similar. This should not change based on input or state of the application.
- name - A semi-detailed version of what the section of code is. It should be static between different invocations of the method. Individual details like a user ID, or counts, or other data points can be added as tags. Names like retrieve_from_api or GET are good names.
- span - An object that represents instrumenting this section of code. Tags can be set on it.
- tags - A dictionary of key/value pairs. Keys should be strings, but values can be any JSON-able structure. High-cardinality fields like a user ID are permitted.
Ignoring Transactions
If you don’t want to track the current transaction, at any point you can call scout.api.ignoreTransaction()
to ignore it:
const scout = require("@scout_apm/scout-apm");
if (isHealthCheck) {
  scout.api.ignoreTransaction();
}
You can use this whether the transaction was started from a built-in integration or custom instrumentation.
You can also ignore a set of URL path prefixes by configuring the ignore
setting in your ScoutConfiguration
:
scout.buildScoutConfiguration({
ignore: ["/health-check/", "/admin/"],
});
When specifying this as an environment variable, it should be a comma-separated list:
export SCOUT_IGNORE='/health-check/,/admin/'
Custom Context
Context lets you see the key attributes of requests. For example, you can add custom context to answer critical questions like:
- Which plan was the customer who had a slow request on?
- How many users are impacted by slow requests?
- How many trial customers are impacted by slow requests?
- How much of an impact are slow requests having on our highest paying customers?
It’s simple to add custom context to your app:
// Express only: Add context inside a handler function
app.get("/", (req, res) => {
scout.api.Context.add("Key", "Value"); // returns a Promise
})
Context Key Restrictions
The Context key must be a String with only printable characters. Custom context keys may contain alphanumeric characters, dashes, and underscores. Spaces are not allowed.
Attempts to add invalid context will be ignored.
Context Value Types
Context values can be any json-serializable type. Examples:
"1.1.1.1"
"free"
100
Updating to the Newest Version
yarn upgrade @scout_apm/scout-apm
The package changelog is available here.
Deploy Tracking Config
Scout can track deploys, making it easier to correlate specific deploys to changes in performance.
To ensure scout tracks your deploy, please provide the SCOUT_REVISION_SHA
environment variable. You may also set the revisionSHA
on a ScoutConfiguration
object instance:
const config = scout.buildScoutConfiguration({
monitor: true,
key: "<app key>",
name: "<app name>",
revisionSHA: "<sha>",
});
Python Agent
Scout’s Python agent supports many popular libraries to instrument SQL queries, template rendering, HTTP requests and more.
The package is called scout-apm
on PyPI.
Source code and issues can be found on our scout_apm_python GitHub repository.
Requirements
scout-apm
requires:
- Python 2.7 or 3.4+
- A POSIX operating system, such as Linux or macOS (Request Windows support).
Instrumented Libraries
Scout provides instrumentation for most popular Python libraries. Depending on the library, instrumentation either requires some configuration (e.g. Django) or is applied automatically by our agent (e.g. Requests).
Some configuration required
The libraries below require a small number of configuration updates. Click on the respective library for instructions.
- Bottle
- CherryPy
- Celery
- Dash
- Django
- Dramatiq
- Falcon
- Flask
- Flask SQLAlchemy
- Huey
- Hug
- Nameko
- Pyramid
- RQ
- SQLAlchemy
- Starlette
Additionally, Scout can instrument request queuing time.
Automatically applied
The libraries below are automatically detected by the agent during the startup process and do not require explicit configuration to add instrumentation.
- ElasticSearch
- Jinja2
- PyMongo
- Redis
- UrlLib3 (used by the popular Requests)
You can instrument your own code or other libraries via custom instrumentation. You can suggest additional libraries you’d like Scout to instrument on GitHub.
Bottle
General instructions for a Bottle app:
1 |
Install the pip install scout-apm |
2 |
Add Scout to your Bottle config: from scout_apm.bottle import ScoutPlugin app = bottle.default_app() app.config.update({ "scout.name": "YOUR_APP_NAME", "scout.key": "YOUR_KEY", "scout.monitor": True, }) scout = ScoutPlugin() bottle.install(scout) If you wish to configure Scout via environment variables, use
If you’ve installed Scout via the Heroku Addon, the provisioning process automatically sets |
3 |
Deploy. It takes approximately five minutes for your data to first appear within the Scout UI. |
CherryPy
Scout supports CherryPy 18.0.0+.
General instructions for a CherryPy app:
1 |
Install the pip install scout-apm |
2 |
Attach the Scout plugin to your app: import cherrypy from scout_apm.api import Config from scout_apm.cherrypy import ScoutPlugin class Views(object): @cherrypy.expose def index(self): return "Hi" app = cherrypy.Application(Views(), "/") Config.set( key="[AVAILABLE IN THE SCOUT UI]", monitor=True, name="A FRIENDLY NAME FOR YOUR APP", ) scout_plugin = ScoutPlugin(cherrypy.engine) scout_plugin.subscribe() If you wish to configure Scout via environment variables, use
If you’ve installed Scout via the Heroku Addon, the provisioning process automatically sets |
3 |
Deploy. It takes approximately five minutes for your data to first appear within the Scout UI. |
Celery
Scout supports Celery 3.1+.
Add the following to instrument Celery workers:
1 |
Install the pip install scout-apm |
2 |
Configure Scout in your Celery application file: import scout_apm.celery from scout_apm.api import Config from celery import Celery app = Celery('tasks', backend='redis://localhost', broker='redis://localhost') # If you are using app.config_from_object() to point to your Django settings # and have configured Scout there, this is not necessary: Config.set( key="[AVAILABLE IN THE SCOUT UI]", name="Same as Web App Name", monitor=True, ) scout_apm.celery.install(app)The `app` argument is optional and was added in version 2.12.0, but you should provide it for complete instrumentation. If you wish to configure Scout via environment variables, use
If you’ve installed Scout via the Heroku Addon, the provisioning process automatically sets |
3 |
Deploy. It takes approximately five minutes for your data to first appear within the Scout UI. Tasks will appear in the “Background Jobs” area of the Scout UI. |
Dash
Plotly Dash is built on top of Flask. Therefore you should use the Scout Flask integration with the underlying Flask application object. For example:
import dash from scout_apm.flask import ScoutApm app = dash.Dash("myapp") app.config.suppress_callback_exceptions = True flask_app = app.server # Setup as per Flask integration ScoutApm(flask_app) flask_app.config["SCOUT_NAME"] = "A FRIENDLY NAME FOR YOUR APP"
For full instructions, see the Flask integration.
Django
Scout supports Django 1.8+.
General instructions for a Django app:
1 |
Install the pip install scout-apm |
2 |
Configure Scout in your # settings.py INSTALLED_APPS = [ "scout_apm.django", # should be listed first # ... other apps ... ] # Scout settings SCOUT_MONITOR = True SCOUT_KEY = "[AVAILABLE IN THE SCOUT UI]" SCOUT_NAME = "A FRIENDLY NAME FOR YOUR APP" If you wish to configure Scout via environment variables, use
If you’ve installed Scout via the Heroku Addon, the provisioning process automatically sets |
3 |
Deploy. It takes approximately five minutes for your data to first appear within the Scout UI. |
Middleware
Scout automatically inserts its middleware into your settings on Django startup in its AppConfig.ready()
.
It adds one at the very start of the middleware stack, and one at the end, allowing it to profile your middleware and views.
This normally works just fine. However, if you need to customize the middleware order or prevent your settings from being changed, you can include the Scout middleware classes in your settings yourself. Scout will detect this and not automatically insert its middleware.
If you do customize the order, your metrics will be affected. Anything included before the timing middleware will not be profiled by Scout at all (unless you add custom instrumentation). Anything included after the view middleware will be profiled as part of your view, rather than as middleware.
To add the middleware if you’re using new-style Django middleware in the MIDDLEWARE
setting, which was added in Django 1.10:
# settings.py MIDDLEWARE = [ # ... any middleware to run first ... "scout_apm.django.middleware.MiddlewareTimingMiddleware", # ... your normal middleware stack ... "scout_apm.django.middleware.ViewTimingMiddleware", # ... any middleware to run last ... ]
To add the middleware if you’re using old-style Django middleware in the MIDDLEWARE_SETTINGS
setting, which was removed in Django 2.0:
# settings.py MIDDLEWARE_CLASSES = [ # ... any middleware to run first ... "scout_apm.django.middleware.OldStyleMiddlewareTimingMiddleware", # ... your normal middleware stack ... "scout_apm.django.middleware.OldStyleViewMiddleware", # ... any middleware to run last ... ]
Dramatiq
Scout supports Dramatiq 1.0+.
Add the following to instrument Dramatiq workers:
1 |
Install the pip install scout-apm |
2 |
Add Scout to your Dramatiq broker: import dramatiq from dramatiq.brokers.rabbitmq import RabbitmqBroker from scout_apm.dramatiq import ScoutMiddleware from scout_apm.api import Config broker = RabbitmqBroker() broker.add_middleware(ScoutMiddleware(), before=broker.middleware[0].__class__) Config.set( key="[AVAILABLE IN THE SCOUT UI]", name="Same as Web App Name", monitor=True, ) If you wish to configure Scout via environment variables, use
If you’ve installed Scout via the Heroku Addon, the provisioning process automatically sets |
3 |
Deploy. It takes approximately five minutes for your data to first appear within the Scout UI. Tasks will appear in the “Background Jobs” area of the Scout UI. |
Falcon
Scout supports Falcon 2.0+.
General instructions for a Falcon app:
1 |
Install the pip install scout-apm |
2 |
Attach the Scout middleware to your Falcon app: import falcon from scout_apm.falcon import ScoutMiddleware scout_middleware = ScoutMiddleware(config={ "key": "[AVAILABLE IN THE SCOUT UI]", "monitor": True, "name": "A FRIENDLY NAME FOR YOUR APP", }) api = falcon.API(middleware=[scout_middleware]) # Required for accessing extra per-request information scout_middleware.set_api(api) If you wish to configure Scout via environment variables, use
If you’ve installed Scout via the Heroku Addon, the provisioning process automatically sets |
3 |
Deploy. It takes approximately five minutes for your data to first appear within the Scout UI. |
Flask
Scout supports Flask 0.10+.
General instructions for a Flask app:
1 |
Install the pip install scout-apm |
2 |
Configure Scout inside your Flask app: from scout_apm.flask import ScoutApm # Setup a flask 'app' as normal # Attach ScoutApm to the Flask App ScoutApm(app) # Scout settings app.config["SCOUT_MONITOR"] = True app.config["SCOUT_KEY"] = "[AVAILABLE IN THE SCOUT UI]" app.config["SCOUT_NAME"] = "A FRIENDLY NAME FOR YOUR APP" If you wish to configure Scout via environment variables, use
If you’ve installed Scout via the Heroku Addon, the provisioning process automatically sets |
3 |
Deploy. It takes approximately five minutes for your data to first appear within the Scout UI. |
If your app uses flask-sqlalchemy
, see below for additional instrumentation instructions.
Flask SQLAlchemy
Instrument flask-sqlalchemy
queries by calling instrument_sqlalchemy()
on your SQLAlchemy
instance:
from flask_sqlalchemy import SQLAlchemy
from scout_apm.flask.sqlalchemy import instrument_sqlalchemy
app = ... # Your Flask app
db = SQLAlchemy(app)
instrument_sqlalchemy(db)
Huey
Scout supports Huey 2.0+.
Add the following to instrument your Huey application:
1 |
Install the pip install scout-apm |
2 |
If you are using Huey’s Django integration, you only need to set up the Django integration. Your Huey instance will be automatically instrumented. If you’re using Huey outside of the Django integration, add Scout to your Huey instance: from huey import SqliteHuey from scout_apm.api import Config from scout_apm.huey import attach_scout huey = SqliteHuey() Config.set( monitor=True, name="A FRIENDLY NAME FOR YOUR APP", key="[AVAILABLE IN THE SCOUT UI]", ) attach_scout(huey) If you wish to configure Scout via environment variables, use
If you’ve installed Scout via the Heroku Addon, the provisioning process automatically sets |
3 |
Deploy. It takes approximately five minutes for your data to first appear within the Scout UI. Tasks will appear in the “Background Jobs” area of the Scout UI. |
Hug
Scout supports Hug 2.5.1+. Hug is based on Falcon, so a Falcon version supported by our integration is also required.
General instructions for a Hug app:
1 |
Install the pip install scout-apm |
2 |
Configure Scout inside your Hug app:
import hug
from scout_apm.hug import integrate_scout
# Setup your Hug endpoints as usual
@hug.get("/")
def home():
return "Welcome home."
# Integrate scout with the Hug application for this module
integrate_scout(
__name__,
config={
"key": "[AVAILABLE IN THE SCOUT UI]",
"monitor": True,
"name": "A FRIENDLY NAME FOR YOUR APP",
},
)
If you wish to configure Scout via environment variables, use
If you’ve installed Scout via the Heroku Addon, the provisioning process automatically sets |
3 |
Deploy. It takes approximately five minutes for your data to first appear within the Scout UI. |
Nameko
General instructions for a Nameko app:
1 |
Install the pip install scout-apm |
2 |
Configure Scout once in the root of your app, and add a `ScoutReporter` to each Nameko service: from nameko.web.handlers import http from scout_apm.api import Config from scout_apm.nameko import ScoutReporter Config.set( key="[AVAILABLE IN THE SCOUT UI]", name="A FRIENDLY NAME FOR YOUR APP", monitor=True, ) class Service(object): name = "myservice" scout = ScoutReporter() @http("GET", "/") def home(self, request): return "Welcome home." If you wish to configure Scout via environment variables, use
If you’ve installed Scout via the Heroku Addon, the provisioning process automatically sets |
3 |
Deploy. It takes approximately five minutes for your data to first appear within the Scout UI. |
Pyramid
General instructions for a Pyramid app:
1 |
Install the pip install scout-apm |
2 |
Add Scout to your Pyramid config: from pyramid.config import Configurator import scout_apm.pyramid if __name__ == "__main__": with Configurator() as config: config.add_settings( SCOUT_KEY="[AVAILABLE IN THE SCOUT UI]", SCOUT_MONITOR=True, SCOUT_NAME="A FRIENDLY NAME FOR YOUR APP" ) config.include("scout_apm.pyramid") # Rest of your config... If you wish to configure Scout via environment variables, use
If you’ve installed Scout via the Heroku Addon, the provisioning process automatically sets |
3 |
Deploy. It takes approximately five minutes for your data to first appear within the Scout UI. |
RQ
Scout supports RQ 1.0+.
Do the following to instrument your RQ jobs:
1 |
Install the pip install scout-apm |
2 |
Use the Scout RQ worker class. If you’re using RQ directly, you can pass the
rq worker --job-class scout_apm.rq.Worker myqueue
If you’re using the RQ Heroku pattern, you can change your code to use the from scout_apm.rq import HerokuWorker as Worker If you’re using Django-RQ, instead use the custom worker setting to point to our custom Worker class: RQ = { "WORKER_CLASS": "scout_apm.rq.Worker", } If you’re using your own
from scout_apm.rq import Worker
class MyWorker(Worker):
# your custom behaviour here
pass
Or if you’re combining one or more other
from some.other.rq.extension import CustomWorker
from scout_apm.rq import WorkerMixin
class MyWorker(WorkerMixin, CustomWorker):
pass
|
3 |
Configure Scout. If you’re using Django-RQ, ensure you have the Django integration installed, and this is handled for you. If you’re using RQ directly, create a config file for it that runs the Scout API’s from scout_apm.api import Config Config.set( key="YOUR_SCOUT_KEY", name="Same as Web App Name", monitor=True, ) Pass the config file to If you wish to configure Scout via environment variables, you don’t need a config file. Set
If you’ve installed Scout via the Heroku Addon, the provisioning process automatically sets |
4 |
Deploy. It takes approximately five minutes for your data to first appear within the Scout UI. Tasks will appear in the “Background Jobs” area of the Scout UI. |
SQLAlchemy
Instrument SQLAlchemy queries:
from scout_apm.sqlalchemy import instrument_sqlalchemy
# Assuming something like engine = create_engine('sqlite:///:memory:', echo=True)
instrument_sqlalchemy(engine)
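For completeness, a minimal runnable sketch (the in-memory SQLite engine and the trivial query are purely illustrative):
from sqlalchemy import create_engine, text
from scout_apm.sqlalchemy import instrument_sqlalchemy

# Create the engine and instrument it before issuing any queries
engine = create_engine('sqlite:///:memory:', echo=True)
instrument_sqlalchemy(engine)

# Queries executed through this engine are now timed by Scout
with engine.connect() as conn:
    conn.execute(text("SELECT 1"))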
Starlette
Scout supports Starlette 0.12+.
General instructions for a Starlette app:
1 |
Install the pip install scout-apm |
2 |
Configure Scout and attach its middleware to your Starlette app: from scout_apm.api import Config from scout_apm.async_.starlette import ScoutMiddleware from starlette.applications import Starlette from starlette.middleware import Middleware Config.set( key="[AVAILABLE IN THE SCOUT UI]", name="A FRIENDLY NAME FOR YOUR APP", monitor=True, ) middleware = [ # Should be *first* in your stack, so it's the outermost and can track all requests Middleware(ScoutMiddleware), ] app = Starlette(middleware=middleware) If you’re using Starlette <0.13, which refactored the middleware API, instead use If you wish to configure Scout via environment variables, use
If you’ve installed Scout via the Heroku Addon, the provisioning process automatically sets |
3 |
Deploy. It takes approximately five minutes for your data to first appear within the Scout UI. |
Troubleshooting
Not seeing data? Email support@scoutapm.com with:
- A link to your app within Scout (if applicable)
- Your Python version
- The name of the framework and version you are trying to instrument, e.g. Flask 0.10.
We typically respond within a couple of hours during the business day.
Configuration Reference
Setting Name | Description | Default | Required |
---|---|---|---|
key | The organization API key. | | Yes |
name | Name of the application (ex: ‘Photos App’). | | Yes |
monitor | Whether monitoring data should be reported. | false | Yes |
hostname | The hostname the metrics should be aggregated under. | hostname | Yes |
collect_remote_ip | Automatically capture end user IP addresses as part of each trace’s context. | true | No |
ignore | A list of (relative) URL path prefixes to avoid collecting metrics for. If specified as an environment variable, it should be a comma-separated list. See Ignoring Transactions. | [] | No |
revision_sha | The Git SHA associated with this release. | See docs | No |
shutdown_timeout_seconds | Maximum amount of time, in seconds, to spend flushing outstanding events to the core agent at shutdown. Set to 0 to disable. | 2.0 | No |
scm_subdirectory | The relative path from the base of your Git repo to the directory which contains your application code. | | No |
There are also some configuration options that affect how the core agent process is run. Typically you don’t need to change these:
Setting Name | Description | Default | Required |
---|---|---|---|
core_agent_dir | Path to create the directory which will store the Core Agent. | /tmp/scout_apm_core | No |
core_agent_config_file | Point to a configuration file for the Core Agent. This may be useful for debugging your setup with files provided by Scout APM staff. Prior to version 2.13.0, this was called config_file. That name now works as an alias, and takes precedence to allow old configuration to continue to work. | | No |
core_agent_download | Whether to download the Core Agent automatically, if needed. | True | No |
core_agent_launch | Whether to start the Core Agent automatically, if needed. | True | No |
core_agent_log_file | The log file for the Core Agent to write its logs to. If not set, no log file is written. This does not affect the logging configuration of the Python library; to change that, directly configure the Python logging module as described below. Prior to version 2.13.0, this was called log_file. That name now works as an alias, and takes precedence to allow old configuration to continue to work. | | No |
core_agent_log_level | The log level of the Core Agent. This should be one of: "trace", "debug", "info", "warn", "error". This does not affect the log level of the Python library; to change that, directly configure the Python logging module as described below. Prior to version 2.6.0, this was called log_level. That name now works as an alias, and takes precedence to allow old configuration to continue to work. | "info" | No |
core_agent_permissions | The permission bits to set when creating the directory of the Core Agent. | 700 | No |
core_agent_socket_path | The path to the socket used to connect to the Core Agent, passed to it when launching. This does not normally need to be set, as it is automatically derived to live in the same directory as the Core Agent. Prior to version 2.13.0, this was called socket_path. That name now works as an alias, and takes precedence to allow old configuration to continue to work. | Auto detected | No |
core_agent_triple | If you are running a MUSL based Linux (such as ArchLinux), you may need to explicitly specify the platform triple, e.g. x86_64-unknown-linux-musl. | Auto detected | No |
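As a quick reference, a minimal sketch of setting several of the options above from Python code (the values shown are placeholders):
from scout_apm.api import Config

Config.set(
    key="[AVAILABLE IN THE SCOUT UI]",
    name="A FRIENDLY NAME FOR YOUR APP",
    monitor=True,
    ignore=["/health-check/"],
    shutdown_timeout_seconds=5.0,
)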
Environment Variables
You can also configure Scout APM via environment variables. Environment variables override settings provided from within Python.
To configure Scout via environment variables, uppercase the config key and prefix it with SCOUT_. For example, to set the key via an environment variable:
export SCOUT_KEY=YOURKEY
Environments
It typically makes sense to treat each environment (production, staging, etc) as a separate application within Scout and ignore the development and test environments. Configure a unique app name for each environment as Scout aggregates data by the app name.
A common approach is to set a SCOUT_NAME
environment variable that includes the app environment:
export SCOUT_NAME="YOUR_APP_NAME (Staging)"
This will override the SCOUT_NAME
value provided in your settings.py
file.
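For example, a minimal sketch of deriving the name inside settings.py from an environment variable of your own (APP_ENV here is an assumed, app-specific variable, not a Scout setting):
# settings.py (sketch)
import os

APP_ENV = os.environ.get("APP_ENV", "production")  # assumed app-specific variable

# One Scout app per environment; monitoring disabled outside production/staging
SCOUT_NAME = "YOUR_APP_NAME ({})".format(APP_ENV.title())
SCOUT_MONITOR = APP_ENV in ("production", "staging")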
Logging
Scout logs via the built-in Python logger, which means you can add a handler to the scout_apm
package. If you haven’t set up logging for it, use the examples below as a starting point.
Log Levels
The following log levels are available:
- CRITICAL
- ERROR
- WARNING
- INFO
- DEBUG
Django Logging
To log Scout agent output in your Django application, copy the following into your settings.py
file:
import os  # used for the LOG_LEVEL lookup below

LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'formatters': {
'stdout': {
'format': '%(asctime)s %(levelname)s %(message)s',
'datefmt': '%Y-%m-%dT%H:%M:%S%z',
},
},
'handlers': {
'stdout': {
'class': 'logging.StreamHandler',
'formatter': 'stdout',
},
'scout_apm': {
'level': 'DEBUG',
'class': 'logging.FileHandler',
'filename': 'scout_apm_debug.log',
},
},
'root': {
'handlers': ['stdout'],
'level': os.environ.get('LOG_LEVEL', 'DEBUG'),
},
'loggers': {
'scout_apm': {
'handlers': ['scout_apm'],
'level': 'DEBUG',
'propagate': True,
},
},
}
Flask Logging
Add the following to your Flask app:
import os
from logging.config import dictConfig

dictConfig({
'version': 1,
'disable_existing_loggers': False,
'formatters': {
'stdout': {
'format': '%(asctime)s %(levelname)s %(message)s',
'datefmt': '%Y-%m-%dT%H:%M:%S%z',
},
},
'handlers': {
'stdout': {
'class': 'logging.StreamHandler',
'formatter': 'stdout',
},
'scout_apm': {
'level': 'DEBUG',
'class': 'logging.FileHandler',
'filename': 'scout_apm_debug.log',
},
},
'root': {
'handlers': ['stdout'],
'level': os.environ.get('LOG_LEVEL', 'DEBUG'),
},
'loggers': {
'scout_apm': {
'handlers': ['scout_apm'],
'level': 'DEBUG',
'propagate': True,
},
},
})
If LOGGING
is already defined, merge the above into the existing Dictionary.
Celery Logging
Add the following to your Celery configuration:
import logging
logging.basicConfig(level='DEBUG')
Custom Instrumentation logging
If you’ve added custom Scout instrumentation, add the following to record the agent logs:
import logging
logging.basicConfig(level='DEBUG')
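If basicConfig is too broad for your script, a minimal sketch that scopes the debug output to the scout_apm logger only:
import logging

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))

# Attach the handler to just the "scout_apm" logger rather than the root logger
scout_logger = logging.getLogger("scout_apm")
scout_logger.setLevel(logging.DEBUG)
scout_logger.addHandler(handler)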
Custom Instrumentation
You can extend Scout to trace transactions outside our officially supported frameworks (e.g. Cron jobs and micro web frameworks) and time the execution of code that falls outside our auto instrumentation.
Transactions & Timing
Scout’s instrumentation is divided into 2 areas:
- Transactions: these wrap around a flow of work, like a web request or Cron job. The UI groups data under transactions. Use the @scout_apm.api.WebTransaction() decorator or wrap blocks of code via the with scout_apm.api.WebTransaction('') context manager.
- Timing: these measure individual pieces of work, like an HTTP request to an outside service, and display timing information within a transaction trace. Use the @scout_apm.api.instrument() decorator or the with scout_apm.api.instrument() as instrument context manager.
Instrumenting transactions
A transaction groups a sequence of work under a single name in the Scout UI. These are used to generate transaction traces. For example, you may create a transaction that wraps around the entire execution of a Python script that is run as a cron job.
Limits
We limit the number of unique transactions that can be instrumented. Tracking too many unique transactions can impact the performance of our UI. Do not dynamically generate transaction names in your instrumentation (e.g. with scout_apm.api.WebTransaction('update_user_' + user.id)) as this can quickly exceed our rate limits. Use context to add high-cardinality information instead.
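For example, a minimal sketch of the recommended pattern (update_user and user are hypothetical placeholders for your own code):
import scout_apm.api

# Static transaction name; the high-cardinality detail goes into context instead
with scout_apm.api.BackgroundTransaction("update_user"):
    scout_apm.api.Context.add("user_id", user.id)  # hypothetical user object
    update_user(user)                              # hypothetical application function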
Getting Started
Import the API module and configure Scout:
import scout_apm.api
# A dict containing the configuration for APM.
# See our Python help docs for all configuration options.
config = {
"name": "My App Name",
"key": "APM_Key",
"monitor": True,
}
# The `install()` method must be called early on within your app code in order
# to install the APM agent code and instrumentation.
scout_apm.api.install(config=config)
Web or Background transactions?
Scout distinguishes between two types of transactions:
- WebTransaction: For transactions that impact the user-facing experience. Time spent in these transactions will appear on your app overview dashboard and in the “Web” area of the UI.
- BackgroundTransaction: For transactions that don’t have an impact on the user-facing experience (example: cron jobs). These will be available in the “Background Jobs” area of the UI.
Explicit
scout_apm.api.WebTransaction.start("Foo") # or BackgroundTransaction.start()
# do some app work
scout_apm.api.WebTransaction.stop()
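If you use the explicit form, it is worth making sure stop() runs even when the work raises; a minimal sketch (do_work is a hypothetical placeholder), though the context manager form below handles this for you:
import scout_apm.api

scout_apm.api.WebTransaction.start("Foo")  # or BackgroundTransaction.start("Foo")
try:
    do_work()  # hypothetical application function
finally:
    scout_apm.api.WebTransaction.stop()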
As a context manager
with scout_apm.api.WebTransaction("Foo"): # or BackgroundTransaction()
# do some app work
As a decorator
@scout_apm.api.WebTransaction("Foo") # or BackgroundTransaction()
def my_foo_action(path):
# do some app work
Cron Job Example
#!/usr/bin/env python
import requests
import scout_apm.api
# A dict containing the configuration for APM.
# See our Python help docs for all configuration options.
config = {
"name": "My App Name",
"key": "YOUR_SCOUT_KEY",
"monitor": True,
}
# The `install()` method must be called early on within your app code in order
# to install the APM agent code and instrumentation.
scout_apm.api.install(config=config)
# Will appear under Background jobs in the Scout UI
with scout_apm.api.BackgroundTransaction("Foo"):
response = requests.get("https://httpbin.org/status/418")
print(response.text)
Timing functions and blocks of code
Traces that allocate a significant amount of time to View, Job, or Template are good candidates for custom instrumentation. This indicates that a significant amount of time is falling outside our default instrumentation.
Limits
We limit the number of metrics that can be instrumented. Tracking too many unique metrics can impact the performance of our UI. Do not dynamically generate metric types in your instrumentation (e.g. with scout_apm.api.instrument("Computation_for_user_" + user.email)
) as this can quickly exceed our rate limits.
For high-cardinality details, use tags: with scout_apm.api.instrument("Computation", tags={"user": user.email})
.
Getting Started
Import the API module:
import scout_apm.api
# or to not use the whole prefix on each call:
from scout_apm.api import instrument
scout_apm.api.instrument(name, tags={}, kind="Custom")
- name - A semi-detailed version of what the section of code is. It should be static between different invocations of the method. Individual details like a user ID, or counts, or other data points can be added as tags. Names like retrieve_from_api or GET are good names.
- kind - A high level area of the application. This defaults to Custom. Your whole application should have a very low number of unique strings here. In our built-in instruments, this is things like Template and SQL. For custom instrumentation, it can be strings like MongoDB or HTTP or similar. This should not change based on input or state of the application.
- tags - A dictionary of key/value pairs. Keys should be strings, but values can be any JSON-able structure. High-cardinality fields like a user ID are permitted.
As a context manager
Wrap a section of code in a unique “span” of work.
The yielded object can be used to add additional tags individually.
def foo():
with scout_apm.api.instrument("Computation 1") as instrument:
instrument.tag("record_count", 100)
# Work
with scout_apm.api.instrument("Computation 2", tags={"filtered_record_count": 50}) as instrument:
# Work
As a decorator
Wraps a whole function, timing the execution of specified function within a transaction trace. This uses the same API as the ContextManager style.
@scout_apm.api.instrument("Computation")
def bar():
# Work
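Putting the pieces together, a minimal sketch of a background transaction containing two timed sections (fetch_rows and render_report are hypothetical helpers; the kinds reuse the built-in SQL and Template categories mentioned above):
import scout_apm.api

with scout_apm.api.BackgroundTransaction("nightly_report"):
    with scout_apm.api.instrument("FetchRows", kind="SQL"):
        rows = fetch_rows()           # hypothetical helper
    with scout_apm.api.instrument("RenderReport", kind="Template"):
        report = render_report(rows)  # hypothetical helper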
Ignoring Transactions
If you don’t want to track the current transaction, at any point you can call ignore_transaction()
to ignore it:
import scout_apm.api
if is_health_check():
scout_apm.api.ignore_transaction()
You can use this whether the transaction was started from a built-in integration or custom instrumentation.
You can also ignore a set of URL path prefixes by configuring the ignore
setting:
from scout_apm.api import Config

Config.set(
ignore=["/health-check/", "/admin/"],
)
When specifying this as an environment variable, it should be a comma-separated list:
export SCOUT_IGNORE='/health-check/,/admin/'
Renaming Transactions
If you want to rename the current transaction, call rename_transaction()
with the new name:
import scout_apm.api
scout_apm.api.rename_transaction("Controller/" + derive_graphql_name())
You can use this whether the transaction was started from a built-in integration or custom instrumentation.
Custom Context
Context lets you see the key attributes of your requests and jobs. For example, you can add custom context to answer critical questions like:
- How many users are impacted by slow requests?
- How many trial customers are impacted by slow requests?
- How much of an impact are slow requests having on our highest paying customers?
It’s simple to add custom context to your app:
import scout_apm.api
# scout_apm.api.Context.add(key, value)
scout_apm.api.Context.add("user_email", request.user.email)
Context Key Restrictions
The Context key must be a String with only printable characters. Custom context keys may contain alphanumeric characters, dashes, and underscores. Spaces are not allowed.
Attempts to add invalid context will be ignored.
Context Value Types
Context values can be any json-serializable type. Examples:
"1.1.1.1"
"free"
100
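For instance, a minimal sketch adding a value of each of these types (the keys are illustrative):
import scout_apm.api

scout_apm.api.Context.add("ip", "1.1.1.1")
scout_apm.api.Context.add("plan", "free")
scout_apm.api.Context.add("record_count", 100)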
Updating to the Newest Version
pip install scout-apm --upgrade
The package changelog is available here.
Deploy Tracking Config
Scout can track deploys, making it easier to correlate specific deploys to changes in performance.
Scout identifies deploys via the following approaches:
- Setting the
revision_sha
configuration value:
from scout_apm.api import Config
Config.set(revision_sha=os.popen("git rev-parse HEAD").read().strip()) # if the app directory is a git repo
- Setting a
SCOUT_REVISION_SHA
environment variable equal to the SHA of your latest release. - If you are using Heroku, enable Dyno Metadata. This adds a
HEROKU_SLUG_COMMIT
environment variable to your dynos, which Scout then associates with deploys.
Ignoring Transactions
You can ignore transactions in two ways:
- The ignore configuration option. This is a list of URI paths that will be ignored if they match the path seen in Django, Flask, Bottle, or Pyramid.
- By calling scout_apm.api.ignore_transaction() from within your own code.
For example, to ignore a health check path via the configuration option:
from scout_apm.api import Config
Config.set(ignore=["/healthcheck"])
Or, in a Django settings.py:
SCOUT_IGNORE = ["/healthcheck"]
Given a health check view wired up in urls.py:
from django.conf.urls import url
from django.http import HttpResponse

def healthcheck(request):
    return HttpResponse()

urlpatterns = [
    url(r'^healthcheck/?$', healthcheck),
    # ...
]
PHP Agent
Scout’s PHP agent supports many popular libraries to instrument middleware, request times, SQL queries, and more.
The base package is called scoutapp/scout-apm-php, Laravel instrumentation is in the scoutapp/scout-apm-laravel package, and Symfony instrumentation is in the scoutapp/scout-apm-symfony-bundle package. See our install instructions for more details.
Source code and issues can be found on our scout-apm-php GitHub repository.
Requirements
scout-apm-php
requires:
- PHP
- A POSIX operating system, such as Linux or macOS.
Instrumented Libraries
Scout provides instrumentation for Laravel and Symfony.
Some configuration required
The libraries below require a small number of configuration updates. Click on the respective library for instructions.
- Laravel
- Middleware
- Controllers
- SQL queries
- Job queues
- Blade rendering
- Symfony
- Controllers
- SQL queries (Doctrine)
- Twig rendering
Additionally, Scout can instrument request queuing time.
You can instrument your own code or other libraries via custom instrumentation. You can suggest additional libraries you’d like Scout to instrument on GitHub.
Laravel
Scout supports Laravel 5.5+.
1 |
Install the composer require scoutapp/scout-apm-laravel
Note that the |
2 |
Install the sudo pecl install scoutapm
Several instruments require the native extension to be included, including timing of |
3 |
Configure Scout in your # Scout settings SCOUT_MONITOR=true SCOUT_KEY="[AVAILABLE IN THE SCOUT UI]" SCOUT_NAME="A FRIENDLY NAME FOR YOUR APP"
If you’ve installed Scout via the Heroku Addon, the provisioning process automatically sets |
4 |
Deploy. It takes approximately five minutes for your data to first appear within the Scout UI. |
Code Based Configuration
If for any reason you can’t use environment-based configuration, or it’s simply easier to manage Scout in code, you can configure Scout with a Laravel config file.
First create the skeleton configuration file at config/scout_apm.php
:
php artisan vendor:publish --provider="Scoutapm\Laravel\Providers\ScoutApmServiceProvider"
Then add any keys you want to override to the bottom of the file, following the template.
The keys should be lower case, without the SCOUT_ prefix. Any keys not mentioned will continue to be read from the environment.
$config['name'] = 'Overridden Name';
Finally, deploy and remember to update any cached configs.
Middleware
Scout automatically inserts its middleware into your application on Laravel startup.
It adds one at the very start of the middleware stack, and one at the end, allowing it to profile your middleware and controllers.
Symfony
Scout supports Symfony 4+.
1 |
Install the composer require scoutapp/scout-apm-symfony-bundle
Note that the |
2 |
Install the sudo pecl install scoutapm
Several instruments require the native extension to be included, including timing of |
3 |
Configure Scout in your <?xml version="1.0" ?> <container xmlns="http://symfony.com/schema/dic/services" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:scoutapm="http://example.org/schema/dic/scout_apm" xsi:schemaLocation="http://symfony.com/schema/dic/services https://symfony.com/schema/dic/services/services-1.0.xsd"> <scoutapm:config> <scoutapm:scoutapm name="my application name..." key="%env(SCOUT_KEY)%" monitor="true" /> </scoutapm:config> </container> It is recommended not to commit the Scout APM key to version control. Instead, configure via environment variables, e.g. in `.env.local`: SCOUT_KEY=your_scout_key_here Since the configuration XML above uses |
4 |
Add the bundle to `config/bundles.php`. <?php return [ // ... other bundles... Scoutapm\ScoutApmBundle\ScoutApmBundle::class => ['all' => true], ]; |
5 |
Deploy. It takes approximately five minutes for your data to first appear within the Scout UI. |
Troubleshooting
Not seeing data? Email support@scoutapm.com with:
- A link to your app within Scout (if applicable)
- Your PHP version
- The name of the framework and version you are trying to instrument, e.g. Laravel 5.8
- Scout logs
We typically respond within a couple of hours during the business day.
Configuration Reference
Setting Name | Description | Default | Required |
---|---|---|---|
SCOUT_KEY | The organization API key. | | Yes |
SCOUT_NAME | Name of the application (ex: ‘Photos App’). | | Yes |
SCOUT_MONITOR | Whether monitoring data should be reported. | false | Yes |
SCOUT_REVISION_SHA | The Git SHA associated with this release. | See docs | No |
SCOUT_LOG_LEVEL | Override the Scout log level. Can only be used to quiet the agent; it will not override the underlying logger’s level. | | No |
SCOUT_SCM_SUBDIRECTORY | The relative path from the base of your Git repo to the directory which contains your application code. | | No |
SCOUT_CORE_AGENT_DIR | Path to create the directory which will store the Core Agent. | /tmp/scout_apm_core | No |
SCOUT_CORE_AGENT_DOWNLOAD | Whether to download the Core Agent automatically, if needed. | True | No |
SCOUT_CORE_AGENT_LAUNCH | Whether to start the Core Agent automatically, if needed. | True | No |
SCOUT_CORE_AGENT_PERMISSIONS | The permission bits to set when creating the directory of the Core Agent. | 700 | No |
SCOUT_CORE_AGENT_TRIPLE | If you are running a MUSL based Linux (such as ArchLinux), you may need to explicitly specify the platform triple, e.g. x86_64-unknown-linux-musl. | Auto detected | No |
SCOUT_CORE_AGENT_LOG_LEVEL | The log level of the core agent process. This should be one of: "trace", "debug", "info", "warn", "error". This does not affect the log level of the PHP library. To change that, directly configure logging as per the documentation. | "info" | No |
SCOUT_CORE_AGENT_LOG_FILE | The log file for the core agent process to write its logs to. | | No |
SCOUT_DISABLED_INSTRUMENTS | A string containing a JSON array of instruments that Scout should not install. | [] | No |
Environments
It typically makes sense to treat each environment (production, staging, etc) as a separate application within Scout and ignore the development and test environments. Configure a unique app name for each environment as Scout aggregates data by the app name.
Logging
Scout logs internal activity via a configured Psr\Log\LoggerInterface
. The
Laravel instruments automatically wire up the framework’s logger to the
agent’s logging.
If required, you can override this by changing the container service log
.
Scout’s logging defaults to the same log level as the LoggerInterface provided,
but that can be set to a stricter level to quiet the agent’s logging via the
log_level
configuration. The underlying LoggerInterface’s level will take
precedence if it is tighter than the log_level
configuration.
Custom Instrumentation
You can extend Scout to trace transactions outside our officially supported libraries (e.g. Cron jobs and other web frameworks) and time the execution of sections of code that fall outside our provided instrumentation.
Transactions & Timing
Scout’s instrumentation is divided into 2 areas:
- Transactions: these wrap around an entire flow of work, like a web request or Cron job. The Scout Web UI groups data under transactions.
- Timing: these measure small pieces of code that occur inside of a transaction, like an HTTP request to an outside service, or a database call. This is displayed within a transaction trace in the UI.
Instrumenting Transactions
A transaction groups a sequence of work under a single name in the Scout UI. These are used to generate transaction traces. For example, you may create a transaction that wraps around the entire execution of a PHP script that is run as a cron job.
The Laravel instrumentation does this all for you. You will only need to manually instrument transactions in special cases. Contact us at support@scoutapm.com for help.
Limits
We limit the number of unique transactions that can be instrumented. Tracking too many uniquely named transactions can impact the performance of the UI. Do not dynamically generate transaction names in your instrumentation as this can quickly exceed our rate limits. Use context to add high-dimensionality information instead.
Web or Background transactions?
Scout distinguishes between two types of transactions:
- WebTransaction: For transactions that impact the user-facing experience. Time spent in these transactions will appear on your app overview dashboard and in the “Web” area of the UI.
- BackgroundTransaction: For transactions that don’t have an impact on the user-facing experience (example: cron jobs). These will be available in the “Background Jobs” area of the UI.
$agent->webTransaction("GET Users", function() { ... your code ... });
$agent->send();
Timing functions and blocks of code
Within existing transactions, whether created automatically by the Laravel instruments or created manually, you can time sections of code that are interesting to your application.
Traces that allocate a significant amount of time to the Controller or Job layers are good candidates for custom instrumentation. This indicates that a significant amount of time is falling outside our default instrumentation.
Limits
We limit the number of metrics that can be instrumented. Tracking too many unique metrics can impact the performance of our UI. Do not dynamically generate metric types in your instrumentation as this can quickly exceed our rate limits.
For high-cardinality details, use tags.
Getting Started
With existing code like:
$request = new ServiceRequest();
$request->setApiVersion($version);
It can be wrapped with instrumentation like so:
// At top, with other imports
use Scoutapm\Laravel\Facades\ScoutApm;
// Replacing the above code
$request = ScoutApm::instrument(
"Custom", // Kind
"Building Service Request", // Name
function ($span) use ($version) {
$request = new ServiceRequest();
$request->setApiVersion($version);
return $request;
}
);
- kind - A high level area of the application. This defaults to Custom. Your whole application should have a very low number of unique strings here. In our built-in instruments, this is things like Template and SQL. For custom instrumentation, it can be strings like MongoDB or HTTP or similar. This should not change based on input or state of the application.
- name - A semi-detailed version of what the section of code is. It should be static between different invocations of the method. Individual details like a user ID, or counts, or other data points can be added as tags. Names like retrieve_from_api or GET are good names.
- span - An object that represents instrumenting this section of code. You can set tags on it by calling $span->tag("key", "value").
- tags - A dictionary of key/value pairs. Keys should be strings, but values can be any JSON-able structure. High-cardinality fields like a user ID are permitted.
Custom Context
Context lets you see the key attributes of requests. For example, you can add custom context to answer critical questions like:
- Which plan was the customer who had a slow request on?
- How many users are impacted by slow requests?
- How many trial customers are impacted by slow requests?
- How much of an impact are slow requests having on our highest paying customers?
It’s simple to add custom context to your app:
use Scoutapm\Laravel\Facades\ScoutApm; // Laravel only: Add near the other use statements
ScoutApm::addContext("Key", "Value");
// or if you have an $agent instance:
$agent->addContext("Key", "Value");
Context Key Restrictions
The Context key must be a String with only printable characters. Custom context keys may contain alphanumeric characters, dashes, and underscores. Spaces are not allowed.
Attempts to add invalid context will be ignored.
Context Value Types
Context values can be any json-serializable type. Examples:
"1.1.1.1"
"free"
100
Updating to the Newest Version
composer update scoutapp/scout-apm-laravel
The package changelog is available here.
Deploy Tracking Config
Scout can track deploys, making it easier to correlate specific deploys to changes in performance.
Scout identifies deploys via the following approaches:
- Detecting the current git sha (this is automatically detected when
composer install
is run)
Core Agent
Some of the languages instrumented by Scout depend on a standalone binary for collecting and reporting data. We call this binary the Core Agent.
If the Core Agent is required for your language, the Scout agent library for that language will handle downloading, configuring, and launching the Core Agent automatically. However, you may manually manage the Core Agent through configuration options.
Launching Core Agent manually
1 |
Create a directory which your app has permissions to read, write and execute into (for our example we will use: /var/www) cd /var/www mkdir scout_apm_core |
2 |
Download and test the core agent: # 1. cd into the scout_apm_core directory cd ./scout_apm_core # 2. Download the core agent tarball curl https://s3-us-west-1.amazonaws.com/scout-public-downloads/apm_core_agent/release/scout_apm_core-latest-x86_64-unknown-linux-gnu.tgz --output core-agent-download.tgz # 3. Unzip the core-agent tar -xvzf core-agent-download.tgz # 4. Test that core agent is executable ./core-agent If everything has run successfully, you should see something similar to the following output: |
3 |
Start the core agent: ./core-agent start --daemonize true Note: this will not persist past a reboot. We suggest adding the core agent to upstart, systemd, or any other process manager you may be using.
For additional startup flags, check the in-executable help with |
4 |
Check to see that core agent socket is running: ./core-agent probe If you are using one of the supported languages (PHP, Python, Elixir, and Node.js), you will have to set the following configuration variables to point to the correct socket (as well as disabling the agent from re-downloading and launching the core agent again):
|
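For example, with the Python agent, a minimal sketch of pointing at a manually managed Core Agent (the socket path is illustrative; use the path of the Core Agent you launched):
from scout_apm.api import Config

Config.set(
    core_agent_download=False,  # don't download the Core Agent again
    core_agent_launch=False,    # don't launch it; it's managed manually
    core_agent_socket_path="/var/www/scout_apm_core/core-agent.sock",  # illustrative path
)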
Downloading the core agent to another directory
By default, the core agent will be downloaded into the /tmp directory. However, due to /tmp being mounted as non-executable, SELinux configuration, or your umask permissions, you may not be able to execute the core-agent in that directory.
To change the directory that Scout downloads to, use the configuration SCOUT_CORE_AGENT_DIR. Your app must have read, write, and execute permissions for this directory.
Read your language’s agent configuration reference for more detail.
Troubleshooting
Checking if the core agent is executable
In some cases, the core agent won’t be able to execute. You may be presented with an error message that looks similar to:
[Scout] Failed to launch core agent - exception core-agent exited with non-zero status.
Output: sh: 1: /tmp/scout_apm_core/scout_apm_core-v1.2.9-x86_64-unknown-linux-musl/core-agent: Permission denied
Try following Downloading the core agent to another directory above to see if you are able to execute the core agent in another directory to determine if there is a permissions issue with the default location.
If you continue having issues, please reach out to us at support@scoutapm.com.
Available platforms and architectures
Builds of the Core Agent are available for these platforms and architectures:
- Linux i686 (glibc)
- Linux x86-64 (glibc)
- Linux i686 (musl)
- Linux x86-64 (musl)
- OSX/Darwin x86-64
Other languages
Want to add tracing but Scout doesn’t support your app’s language? You can instrument just about anything (assuming you can communicate via a Unix Domain Socket) with Scout’s Core Agent API. For information, view the Core Agent API on GitHub.
Features
Scout is Application Monitoring built for modern development teams. It’s built to provide the fastest path to a slow line-of-code. Sign up for a trial.
App Performance Overview
The overview page provides an at-a-glance, auto-refreshing view of your app’s performance and resource usage (mean response time by category, 95th percentile response time, throughput, error rate, and more). You can quickly dive into endpoint activity via click-and-drag (or pinch-and-expand with a mobile device) on the overview chart.
Additionally, you can compare metrics in the overview chart and see how your app’s performance compares to different time periods.
Endpoint Details
You can view metrics for specific controller-action and background job workers. There is a similar chart interaction to the App Performance Overview page, with one difference: your selection will render an updated list of transaction traces that correspond to the selected time period:
You can sort traces by response time, object allocations, date, and more.
Transaction Traces
Scout collects detailed transactions across your web endpoints and background jobs automatically. The transaction traces provide a number of visual cues to direct you to hotspots. Dig into bottlenecks - down to the line-of-code, author, commit date, and deploy time - from this view.
SQL Queries
Scout captures a sanitized version of SQL queries. Click the “SQL” button next to a call to view details.
Don’t see an SQL button next to a database query?
Scout collects a sanitized version of SQL queries and displays these in transaction traces. To limit agent overhead sanitizing queries, we do not collect query statements with more than 16k characters.
This limit was raised to 16k characters from 4k characters in version 2.3.3 of the Ruby agent after determining the higher threshold was safe for production environments. If you have an older version of scout_apm
, update to the latest.
Code Backtraces
You’ll see “CODE” buttons next to method calls that are >= 500 ms. If you’ve enabled the GitHub integration, you can see the line-of-code, associated SQL or HTTP endpoint (if applicable), author, commit date, and deploy time for the relevant slow code.
If you don’t enable the GitHub integration, you’ll see a backtrace.
Trace Views
There are two displays for showing the details of a transaction trace:
- Summary View - Method calls are aggregated together and ordered from most to least expensive.
- Timeline View - Shows the execution order of calls as they occur during the transaction.
Summary View
Method calls are aggregated together and listed from most expensive to least expensive. The time displayed is the total time across all calls (not the time per-call).
Timeline View
See the execution order of your code.
The timeline view is especially helpful for:
- understanding the distribution of
Controller
time across a request. Is there a lot of time spent in your custom code at the beginning of a request? Is it spread out? Is it at the end of a request? - understanding the timing of distinct SQL queries. Is one instance of many nearly identical queries slow or all of them?
- getting the complete picture of parent and children method calls. How many SQL calls are being triggered by the same view partial?
Upgrading to the timeline view
If you see a message in the UI prompting you to upgrade, follow our Ruby agent upgrade instructions to update to the latest agent, which supports sending the timeline trace format.
Timeline view limitations
- No ScoutProf support
- No Background job support
- No DevTrace support
Trace Explorer
What was the slowest request yesterday? How has the app performed for user@domain.com
? Which endpoints are generating the bulk of slow requests? Trace Explorer lets you quickly filter the transaction traces collected by Scout, giving you answers to your unique questions.
Trace Explorer is accessed via the “Traces” navigation link when viewing an app.
How to use Trace Explorer
There are two main areas of Trace Explorer:
- Dimension Histograms - the top portion of the page generates a histogram representation for a number of trace dimensions (the response time distribution, count of traces by endpoints, and a display for each piece of custom context). Selecting a specific area of a chart filters the transactions to just the selected data.
- List of transaction traces - the bottom portion of the page lists the individual traces. The traces are updated to reflect those that match any filtered dimensions. You can increase the height of this pane by clicking and dragging the top portion of the pane. Clicking on a trace URI opens the transaction trace in a new browser tab.
AutoInstruments
In many apps, more than 30% of the time spent in a transaction is within custom code written by your development team. In traces, this shows up as time spent in “Controller” or “Job”. AutoInstruments helps break down the time spent in your custom code without the need to add custom instrumentation on your own.
AutoInstruments instruments code expressions in Ruby on Rails controllers by instrumenting Ruby’s Abstract Syntax Tree (AST) as code is loaded. These code expressions then appear in traces, just like the many libraries Scout already instruments:
In the screenshot of a trace above, 68% of the time would be allocated to the Controller
without enabling AutoInstruments. With AutoInstruments enabled, Controller
time is just 3% of the request and we can clearly see that most of the time is spent inside two method calls.
AutoInstruments is currently available for Ruby on Rails applications.
Enabling AutoInstruments
AutoInstruments is a BETA feature and available to apps using Ruby 2.3.1+. To enable:
1 | Within your Rails app’s directory, run: bundle update scout_apm AutoInstruments was released in |
2 | Set the # config/scout_apm.yml production: auto_instruments: trueIf you are using environment variables: SCOUT_AUTO_INSTRUMENTS=true |
3 | Deploy |
A detailed AutoInstruments FAQ is available in our reference area.
ScoutProf
Every millisecond, ScoutProf captures a backtrace of what each thread in your application is currently running. Over many backtraces, when you combine them, it tells a story of what code paths are taking up the most time in your application.
Compared with our more traditional instrumentation of libraries like ActiveRecord
, Net::HTTP
and similar, ScoutProf works with your custom code. Now, when your application spends time processing your data in custom application code, or in libraries that Scout doesn’t yet instrument, instead of only being able to assign that to a large bucket of ActionController
time, the time can be broken down to exactly what is taking up the most time.
Notice how the time in ActionController
is broken down:
We still employ our traditional instrumentation, because it can give us deeper insights into common libraries, capturing the SQL being run, or the URL being fetched.
Enabling ScoutProf
ScoutProf is a BETA feature. To enable:
1 |
Modify your |
2 |
|
3 | Deploy |
A detailed ScoutProf FAQ is available in our reference area.
Database Monitoring
When the database monitoring addon is enabled, you’ll gain access to both a high-level overview of your database query performance and detailed information on specific queries. Together, these pieces make it easier to get to the source of slow query performance.
Database Queries Overview
The high-level view helps you identify where to start:
The chart at the top shows your app’s most time-consuming queries over time. Beneath the chart, you’ll find a sortable list of queries grouped by a label (for Rails apps, this is the ActiveRecord model and operation) and the caller (a web endpoint or a background job):
This high-level view is engineered to reduce the investigation time required to:
- identify slow queries: it’s easy for queries to become more inefficient over time as the size of your data grows. Sorting queries by “95th percentile response time” and “mean response time” makes it easy to identify your slowest queries.
- solve capacity issues: an overloaded database can have a dramatic impact on your app’s performance. Sorting the list of queries by “% time consumed” shows you which queries are consuming the most time in your database.
Zooming
If there is a spike in time consumed or throughput, you can easily see what changed during that period. Click and drag over the area of interest on the chart:
Annotations are added to the queries list when zooming:
- The change in rank, based on % time consumed, of each query. Queries that jump significantly in rank may trigger a dramatic change in database performance.
- The % change across metrics in the zoom window vs. the larger timeframe. If the % change is not significant, the metric is faded.
Database Events
Scout highlights significant events in database performance in the sidebar. For example, if time spent in database queries increases dramatically, you’ll find an insight here. Clicking on an insight jumps to the time window referenced by the insight.
Database Query Details
After identifying an expensive query, you need to see where the query is called and the underlying SQL. Click on a query to reveal details:
You’ll see the raw SQL and a list of individual query execution times that appeared in transaction traces. Scout collects backtraces on queries consuming more than 500 ms. If we’ve collected a backtrace for the query, you’ll see an icon next to the timing information. Click on one of the traces to reveal that trace in a new window:
The source of that trace is immediately displayed.
Slow Query Insights
When the database monitoring addon is enabled, a new “Slow Query” insight is activated on your app dashboard:
This insight analyzes your queries in three dimensions, helping you focus on database optimizations that will most improve your app:
- Which queries are most impacting app performance? This is based on the total time consumed by each query, where time consumed is the average query latency multiplied by the query throughput (see the sketch after this list).
- Which queries are significant bottlenecks inside web endpoints and background jobs? A single query that is responsible for a large percentage of the time spent in a transaction is a great place to investigate for a performance win.
- Which queries are consistently slow? These are queries that have a high average latency.
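As a rough sketch of the first dimension, with purely illustrative numbers:
# Time consumed = mean query latency x query throughput (illustrative values).
mean_latency_ms    = 12.0   # average time per execution of the query
throughput_per_min = 900    # executions of the query per minute

time_consumed_ms_per_min = mean_latency_ms * throughput_per_min
# => 10800.0 ms of database time per minute attributable to this query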
Pricing
Database Monitoring is available as an addon. See your billing page for pricing information.
Database Addon Installation
Update (or install) the scout_apm gem in your application. There are no special libraries to install on your database servers.
Database Monitoring Library Support
Scout currently monitors queries executed via ActiveRecord, which includes most relational databases (PostgreSQL, MySQL, etc).
What does SQL#other mean?
Some queries may be identified by a generic SQL#other label. This indicates our agent was unable to generate a friendly label from the raw SQL query. Ensure you are running version 2.3.3 or higher of the scout_apm gem, as this release includes more advanced query labeling logic.
Memory Bloat Detection
If a user triggers a request to your Rails application that results in a large number of object allocations (example: loading a large number of ActiveRecord objects), your app may require additional memory. The additional memory required to load those objects is released back very slowly, so a single memory-hungry request can have a long-term impact on your Rails app’s memory usage.
Scout has three features to aid in fixing memory bloat.
Memory Bloat Insights
The Insights area of the dashboard identifies controller-actions and background jobs that have triggered significant memory increases. An overview of the object allocation breakdown by tier (ActiveRecord, ActionView, etc) is displayed on the dashboard.
Memory Traces
When inspecting a transaction trace, you’ll see a “Memory Allocation Breakdown” section:
For perspective, we display how this trace’s allocations compare to the norm.
Alerting
Alerting keeps your team updated if your app’s performance degrades. Alerts can be configured on the app as a whole and on individual endpoints. Metrics include:
- mean response time
- 95th percentile response time
- Apdex
- error rate
- throughput
Alert conditions
Configure alert conditions via the “Alerts” pill in the UI:
Notification groups
Alerts are sent to a notification group, which is composed of notification channels. You can configure these under your org’s settings menu:
Deploy Tracking
Correlate deploys with your app’s performance: Scout’s GitHub-enhanced deploy tracking makes it easy to identify the Git branch or tag running now and which team members contributed to every deploy.
Scout tracks your deploys without additional configuration if you are running Capistrano. If you aren’t using Capistrano or deploying your app to Heroku, see our deploy tracking configuration docs.
Sorting
You can sort by memory allocations throughout the UI: from the list of endpoints, to our pulldowns, to transaction traces.
Context
Context lets you see the forest, not just the trees. For example, you can add custom context to answer critical questions like:
- How many users are impacted by slow requests?
- How many trial customers are impacted by slow requests?
- How much of an impact are slow requests having on our highest paying customers?
Adding custom context is easy - learn how via Ruby, Elixir, or Python.
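For instance, in a Rails controller the Ruby agent’s context calls look roughly like this (the field names and the current_user helper are just examples):
class OrdersController < ApplicationController
  before_action :add_scout_context

  private

  # Attach searchable context to the current transaction.
  def add_scout_context
    # User-specific context (example fields; any key/value pairs work).
    ScoutApm::Context.add_user(email: current_user.email)
    # General context, e.g. the customer's plan and trial status.
    ScoutApm::Context.add(plan: current_user.plan, trial: current_user.trial?)
  end
end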
Context information is displayed in two areas:
- When viewing a transaction trace - click the “Context” section to see the context Scout has collected.
- When using Trace Explorer - filter traces by context.
Endpoints Performance
Endpoints Overview
The endpoints area within Scout provides a sortable view of your app’s overall performance aggregated by endpoint name. Click on an endpoint to view details.
Time Comparisons
You can easily compare the performance of your application between different time periods via the time selection on the top right corner of the UI.
Digest Email
At a frequency of your choice (daily or weekly), Scout crunches the numbers on your app’s performance (both web endpoints and background jobs). Performance is compared to the previous week, and highlights are mentioned in the email.
The email identifies performance trends, slow outliers, and attempts to narrow down issues to a specific cause (like slow HTTP requests to another service).
DevTrace
DevTrace is our development profiler: it’s included with our Ruby and Elixir libraries. DevTrace can be used for free without signup. Enabling DevTrace adds a speed badge when navigating your app in development. Clicking the speed badge reveals a shareable transaction trace of the request.
View our Ruby and Elixir instructions.
Request Queuing
Scout’s PHP, Python, and Ruby integrations can measure the time it takes a request to reach your application from farther upstream (a load balancer or web server). This appears in Scout as “Request Queuing” and provides an indication of your application’s capacity. Large request queuing time is an indication that your app needs more capacity.
To see this metric within Scout, you need to configure your upstream software, adding an HTTP header that our agent reads. This is typically a one-line change.
HTTP Header
The Scout agent depends on an HTTP request header set by an upstream load balancer (ex: HAProxy) or web server (ex: Apache, Nginx).
Protip: We suggest adding the header as early as possible in your infrastructure. This ensures you won’t miss performance issues that appear before the header is set.
The agent will read any of the following headers as the start time of the request:
%w(X-Queue-Start X-Request-Start X-QUEUE-START X-REQUEST-START x-queue-start x-request-start)
Include a value in the format t=MICROSECONDS_SINCE_EPOCH, where MICROSECONDS_SINCE_EPOCH is an integer count of the microseconds that have elapsed since the beginning of the Unix epoch.
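For illustration, here is how a conforming value can be computed in Ruby. This is a hypothetical Rack middleware used only to show the format; in practice the header should be set upstream (load balancer or web server) as early as possible, as in the examples below:
# Hypothetical Rack middleware, shown only to illustrate the header format.
# In a real deployment the load balancer or web server sets this header,
# otherwise the measured queue time would be close to zero.
class RequestStartStamp
  def initialize(app)
    @app = app
  end

  def call(env)
    microseconds = (Time.now.to_f * 1_000_000).to_i
    env["HTTP_X_REQUEST_START"] ||= "t=#{microseconds}"
    @app.call(env)
  end
end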
Nearly any front-end HTTP server or load balancer can be configured to add this header. Some examples are below.
AWS
The Python integration also parses the X-Amzn-Trace-Id header set by AWS ELBs if a queue start or request start header is not present.
Heroku
Time in queue is automatically collected for apps deployed on Heroku. This measures the time between when a request hits the Heroku router and when your app begins processing the request.
Apache
Apache’s mod_headers module includes a %t
variable that is formatted for Scout usage. To enable request queue reporting, add this code to your Apache config:
RequestHeader set X-Request-Start "%t"
Apache Request Queuing and File Uploads
If you are using Apache, you may observe a spike in queue time within Scout for actions that process large file uploads. Apache adds the X-Request-Start header as soon as the request hits Apache, so all of the time spent uploading a file will be reported as queue time.
This is different from Nginx, which will first buffer the file to a tmp file on disk, then once the upload is complete, add headers to the request.
HAProxy
HAProxy 1.5+ supports timestamped headers, which can be set in the frontend or backend section. We suggest putting this in the frontend to get a more accurate number:
http-request set-header X-Request-Start t=%Ts
Nginx
Nginx 1.2.6+ supports the use of the ${msec} variable. This makes adding the request queuing header straightforward.
General Nginx usage:
proxy_set_header X-Request-Start "t=${msec}";
Passenger 5+:
passenger_set_header X-Request-Start "t=${msec}";
Older Passenger versions:
passenger_set_cgi_param X-REQUEST-START "t=${msec}";
Note: The Nginx option is local to the location block, and isn’t inherited.
Chart Embeds
You can embed an app’s overview chart inside another web page (ex: an internal key metrics dashboard):
- Access the application dashboard within the Scout UI.
- Adjust the timeframe and metrics to those you’d like to include in the embedded chart.
- Click the embed icon and copy the relevant code.
Note that you’ll need to update the provided iframe URL with a Scout API key.
When clicking on an embedded chart, you’ll be redirected to the relevant application.
Data Retention
Scout stores 30 days of metrics and seven days of transaction traces.
Integrations
GitHub
Scout annotates several areas of the UI with additional data from the app’s associated Git repository when the GitHub integration is enabled.
Traces
When the GitHub integration is enabled, Scout displays the actual code from backtraces collected from transaction traces. The code is annotated with git blame data (the author and commit date), making it easier to track down the developers most familiar with bottlenecks.
Deploys
When the GitHub integration is enabled, Scout annotates deploys with the associated Git branch or tag. When hovering over a deploy, a diff summary is displayed, showing the changes between the selected deploy and the previous deploy.
Configuration
The GitHub integration is an app-specific integration, authenticated via OAuth. After authenticating, choose the Git repository name and branch name used for your application.
Missing some repositories?
When configuring the GitHub integration, you may notice that only personal repositories are shown and repositories owned by organizations are missing. Your organization is likely leveraging trusted applications. See GitHub’s docs on organization-approved applications for instructions approving Scout. Once Scout is listed as an approved application, the org’s repositories will be available within Scout.
Rollbar
When the Rollbar integration is enabled, Scout displays errors from the app’s associated Rollbar project alongside performance data within the Scout UI.
When the error count is in orange, a new error has appeared in the current timeframe. When the error count is in gray, older errors are continuing in this timeframe.
Configuration
The Rollbar configuration is an app-specific integration, configured by providing a read-only Rollbar Project Access Token (not an Account Access Token) in the app settings within Scout.
Sentry
When the Sentry integration is enabled, Scout displays errors from the app’s associated Sentry project alongside performance data within the Scout UI. You can either use the hosted service found on Sentry.io or you can use the self-hosted Sentry option.
When the error count is in orange, a new error has appeared in the current timeframe. When the error count is in gray, older errors are continuing in this timeframe.
Configuration
The Sentry configuration is an app-specific integration, configured by providing a read-only Sentry Access Token in the app settings within Scout.
Note: If you are using the self-hosted option, please make sure to include the full URL in the base URL field: https://self-hosted.sentry.com, not selfhost.sentry.com.
Slack
To integrate Slack with Scout’s Alert Notification system, use the Webhook feature on the Application > Notification Channels page. For Scout and Slack to work together, you need a third-party service called Zapier, which lets you connect different web services into custom workflows. Beyond Slack, a similar method to the one described below can be used to integrate with many other services; you can read more on our GitHub pages about how to integrate with PagerDuty, VictorOps, and xMatters.
Zapier Configuration
First, create an account with Zapier. Once you have done this, create a Zap by clicking the Make a Zap! button on the top right-hand side of the screen, as shown in the image below.
You need to create a Trigger (for Scout) and an Action (for Slack) so the two systems can communicate. Create the trigger by selecting Webhooks by Zapier as the App you want to work with.
Next, select the type of trigger you want: Catch Hook. You will then be given a URL, which is the Webhook we will link to from Scout. Copy this URL and then open up Scout.
Scout Configuration
In Scout, navigate to Application > Notification Channels and create a new Webhook, like the picture below, copying in the Zapier URL.
Next you will need to add or edit a Notification Group to include this new channel.
Create an Alert
At this point, if you carry on creating the Zap in Zapier, it will try to pull a sample Alert from Scout using the Webhook we just set up. It does this because it needs sample data from Scout to understand the format of the trigger and which fields are available from Scout. However, at this point there won’t be any Alerts it can use, because the Webhook has only just been set up. So here you have two options:
- Create a quick Alert in Scout to generate this sample data.
- Click Skip This Step and then Continue Without Samples.
We strongly recommend the first option: without sample data, you will not be able to use data from Scout later on when specifying the message to send to Slack.
To create a quick Alert, open up Scout, go to Alert > Alert Conditions and create a simple condition that will alert, and choose the Slack Notification Group we set up earlier.
Choose the Alert in Zapier
After the Alert has occurred in Scout, go back to Zapier and click the Ok, I did this button. Zapier will connect with Scout and look for an Alert matching this Webhook. Choose it as the sample you want to use and click Continue.
Create a Slack Action
Next, you need to add an Action step to the workflow; this is the part where we integrate Slack. Click Add a Step on the left-hand side of the page.
Next click the Action/Search option, and you will be given the option to choose an app to connect.
Choose Slack, and then a new Action will be created on the left-hand side of the screen.
There are many different types of Slack Action that you can choose to perform, but let’s choose Send Channel Message.
Next you can configure many aspects of the message that will be sent, such as which channel to send the message to and what particular data comes from the Scout Alert (shown in green). It is only possible to pull this data from Scout here if you created an Alert earlier like we advised.
Then you can send a test message to Slack to preview how it will look.
Then all that’s left to do is to give your Zap a descriptive name and enable it.
Now everything is set up so that whenever an Alert occurs in Scout which is linked to this Notification Channel, you will see a message in Slack.
API
Introduction
The Scout APM API is currently fairly narrow in scope: it is intended to support third-party dashboards by exporting summary data from your applications.
If you have ideas for API enhancements, please contact us at support@scoutapm.com.
Authorization
Obtaining a token
- Log into the ScoutAPM website
- Go to your organization’s settings
- Enter a name (for your use), and obtain a token
Sending Authorization
The token must be provided with every request, via one of several methods:
- An HTTP header named X-SCOUT-API
- As part of the JSON request body, as the top-level key key
- A URL query string argument named key
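As a quick sketch, fetching the application list from Ruby might look like the following. The base URL and the SCOUT_API_TOKEN environment variable are assumptions; substitute your own values:
require "net/http"
require "json"
require "uri"

uri = URI("https://scoutapm.com/api/v0/apps")            # assumed base URL

request = Net::HTTP::Get.new(uri)
request["X-SCOUT-API"] = ENV.fetch("SCOUT_API_TOKEN")    # token from your org's settings

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }

apps = JSON.parse(response.body).dig("results", "apps") || []
apps.each { |app| puts "#{app['id']}: #{app['name']}" }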
Response Format
Every endpoint returns a JSON object. At minimum, the object has a "header" field with embedded status and message fields. If the endpoint returned results, they appear under a results key.
{
"header": {
"status": {
"code": 200,
"message": "OK"
},
"apiVersion": "0.1"
},
"results": { ... }
}
API Endpoints
Applications
Applications List
/api/v0/apps
- returns a list of applications and their ids
Results:
"results": {
"apps": [
{
"name": "MyApp Staging",
"id": 100
},
{
"name": "MyApp Production",
"id": 101
}
]
}
Application Detail
/api/v0/apps/:id
- returns information about a specific application
Results:
"results": {
"app": {
"id": 101,
"name": "MyApp Production"
}
}
Metrics
Known Metric List
/api/v0/apps/:id/metrics
- returns a list of known metric types
Results:
"results": {
"availableMetrics": [
"response_time",
"response_time_95th",
"errors",
"throughput",
"queue_time"
]
}
Metric Data
/api/v0/apps/:id/metrics/:metric_type
- returns a time series dataset for the metric.
Parameters:
- from - start time, ISO8601 formatted
- to - end time, ISO8601 formatted
These two times must not be more than two weeks apart.
Results:
"results": {
"series": {
"response_time": [
[
"2016-05-16T22:00:00Z",
90.33333333333333
],
[
"2016-05-16T22:01:00Z",
86.87233333333333
]
]
}
}
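For example, a request for one hour of response-time data could be built like this (the app id is hypothetical; this sketch passes the token as the key query parameter and reuses the SCOUT_API_TOKEN environment variable from the example above):
require "time"
require "cgi"

app_id = 101                     # hypothetical application id
to     = Time.now.utc
from   = to - 3600               # one hour window (must be within two weeks of `to`)

path = "/api/v0/apps/#{app_id}/metrics/response_time" \
       "?from=#{CGI.escape(from.iso8601)}" \
       "&to=#{CGI.escape(to.iso8601)}" \
       "&key=#{ENV['SCOUT_API_TOKEN']}"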
Compliance and Privacy
What Data is Collected by the Scout APM Agent?
When you install our APM agent into your application, we instrument your code in order to gather timing and other data. The data collected for all transactions includes:
- Numeric metrics (timing, object allocations, memory)
- Controller (in MVC terms) name and invoked controller function name
- Background job name and invoked function name
- SQL table and operation (e.g. Users#select)
In addition to collecting general data for every transaction, Scout uses an algorithm to pick out the most interesting transactions. These detailed transactions gather more information about the specifics of the transaction including:
- URL path
- URL parameters
- SQL query strings (scrubbed and sanitized before being sent to Scout)
- Outgoing HTTP request URLs (of instrumented HTTP libraries)
- End user IP (the IP of a user making a request to your web server)
- File name and line number of slow functions (used to display a backtrace)
Some of this information can be disabled for detailed transactions. Refer to our configuration section for your language at https://docs.scoutapm.com.
In Ruby, you can set log_level to debug to inspect the entire payload sent by our agent.
HIPAA
Our agent can be installed safely in HIPAA compliant environments. To ensure user data is properly de-identified:
- Disable sending HTTP query params if these contain sensitive data via the uri_reporting config option.
- Do not add custom context (like reporting the current user in the session).
Email support@scoutapm.com with any questions regarding HIPAA.
GDPR
While our monitoring agents are primarily metric-focused, they can be configured to send personal data if the customer wishes.
Under the GDPR, Scout is defined as a Data Processor. You can view and sign our Data Processing Agreement on behalf of your organization.
PCI DSS
Scout’s payment and card information is handled by Stripe, which has been audited by an independent PCI Qualified Security Assessor and is certified as a PCI Level 1 Service Provider, the most stringent level of certification available in the payments industry.
Scout does not typically receive credit card data, making it compliant with Payment Card Industry Data Security Standards (PCI DSS) in most situations.
Service Status
We’re transparent about our uptime and service issues. If you appear to be experiencing issues with our service:
- Check our status site. You can subscribe to service incidents.
- Email us
Contacting Support
Don’t hesitate to contact us at support@scoutapm.com with any issues. We typically respond in a couple of hours during the business day.
Or, join us on Slack. We are often, but not always, in Slack.
Reference
How we collect metrics
Scout is engineered to monitor the performance of mission-critical production applications. Here’s a short overview of how this happens:
- Our agent is added as an application dependency (ex: for Ruby apps, add our gem to your Gemfile).
- The agent instruments key libraries (database access, controllers, views, etc) automatically using low-overhead instrumentation.
- Every minute, the agent connects over HTTPS through a 256-bit secure, encrypted connection and sends metrics to our servers.
Performance Overhead
Our agent is designed to run in production environments and is extensively benchmarked to ensure it performs on high-traffic applications.
Our most recent benchmarks (lower is better):
We’ve open-sourced our benchmarks so you can test on your own. If your results differ, reach out to us at support@scoutapm.com.
Call Aggregation
During a transaction, the Scout agent records each database call, each external HTTP request, each rendering of a view, and several other instrumented libraries. While each individual piece of this overall trace has a tiny memory footprint, large transactions can sometimes build up many thousands of them.
To limit our agent’s memory usage, we stop recording the details of every instrument after a relatively high limit. Detailed metrics and backtraces are collected for all calls up to the limit and aggregated metrics are collected for calls over the limit.
Security
We take the security of your code metrics extremely seriously. Keeping your data secure is fundamental to our business. Scout has been storing critical metrics for nearly a decade, and those same fundamentals are applied here:
- All data transmitted by our agent to our servers is sent as serialized JSON over SSL.
- Our UI is only served under SSL.
- When additional data is collected for slow calls (ex: SQL queries), query parameters are sanitized before sending these to our servers.
- Our infrastructure resides in an SOC2 compliant datacenter.
Information sent to our servers
The following data is sent to our servers from the agent:
- Timing information collected from our instrumentation
- Gems used by your application
- Transaction traces, which include:
  - The URL, including query parameters, of the slow request. This can be modified to exclude query params via the uri_reporting configuration option.
  - IP address of the client initiating the request
  - Sanitized SQL query statements
- Process memory and CPU usage
- Error counts
Git Integration
Scout only needs read-only access to your repository, but unfortunately, GitHub doesn’t currently allow this - they only offer read-write permissions through their OAuth API.
We have asked GitHub to offer read-only permissions, and they’ve said the feature is coming soon. In the meantime, we’re limited to the permissions structure GitHub offers. Our current Git security practices:
- we don’t clone your repository’s code; we only pull the commit history
- the commit history is secured on our servers according to industry best practices
- authentication subsystems within our application ensure your commit history is never exposed to anyone outside your account.
All that said, we suggest the following:
- Contact GitHub about allowing read-only access. This will ensure it stays top-of-mind.
- Remember that the integration is optional: you can view backtrace information without it. It’s likely possible to even write a UserScript to open the code locally in your editor or on GitHub.
Workaround for read-only Github Access
With a few extra steps, you can grant Scout read-only access. Here’s how:
- Create a team in your Github organization with read-only access to the respective application repositories.
- Create a new Github user and make them a member of that team.
- Authenticate with this user.
AutoInstruments FAQ
What files within a Rails app does AutoInstruments attempt to instrument?
AutoInstruments applies instrumentation to file names that match RAILS_ROOT/app/controllers/*_controller.rb.
Why is Autoinstruments limited to controllers?
Adding instrumentation induces a small amount of overhead to each instrumented code expression. If we added instrumentation to every line of code within a Rails app, the overhead would be too significant on a production deployment. By limiting Autoinstruments to controllers, we’re striking a balance between visibility and overhead.
What are some examples of code expressions that are instrumented?
Below are some examples of how autoinstrumented spans appear in traces.
# RAILS_ROOT/app/controllers/users_controller.rb
# This file will be instrumented as its name matches `app/controllers/*_controller.rb`.
class UsersController < ApplicationController
  def index
    fetch_users                           # <- Appears as `fetch_users` in traces.
    if rss? || xml?                       # <- This is broken into 2 spans within traces: (`rss?` and `xml?`)
      formatter = proc do |row|           # <- The entire block will appear under "proc do |row|..."
        row.to_json
      end
      return render_xml                   # <- Appears as `return render_xml`
    end
  end

  private

  def fetch_users
    return unless authorized?             # <- Appears as `return unless authorized?` in traces.
    source ||= params[:source].present?   # <- Appears as `params[:source].present?`
    @users = User.all(limit: 10)          # <- ActiveRecord queries are instrumented w/our AR instrumentation
  end
end
Is every method call to an autoinstrumented code expression recorded?
Prior to storing a span, our agent checks if the span’s total execution time is at least 5 ms. If the time spent is under this threshold, the span is thrown away and the time is allocated to the parent span. This decreases the amount of noise that appears in traces (spans consuming < 5 ms are unlikely optimization candidates) and decreases the memory usage of the agent. Only autoinstrumented spans are thrown away - spans that are explicitly instrumented are retained.
What do charts look like when autoinstruments is enabled?
When autoinstruments is enabled, a large portion of controller time will shift to autoinstruments:
This is expected.
How much overhead does autoinstruments add?
When autoinstruments is enabled, you can estimate the additional overhead by inspecting your overview chart. Measure the mean controller time before the deploy, then controller + autoinstruments after. The difference between these numbers is the additional overhead.
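For example, with hypothetical chart values:
# Hypothetical mean times from the overview chart, in milliseconds.
controller_before = 80.0   # mean "controller" time before the deploy
controller_after  = 22.0   # mean "controller" time after enabling autoinstruments
auto_instruments  = 63.0   # mean "autoinstruments" time after

overhead = (controller_after + auto_instruments) - controller_before
# => 5.0 ms of estimated additional overhead per request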
How can the overhead of autoinstruments be reduced?
By default, the Scout agent adds autoinstruments to every controller in your Rails app. You can reduce this overhead by excluding controllers from instrumentation via the auto_instruments_ignore option. To determine which controllers should be ignored:
- Ensure you are running version 2.6.1 of scout_apm or later.
- Adjust the Scout agent log level to DEBUG.
- Restart your app.
- After about 10 minutes, run the following command inside your RAILS_ROOT:
grep -A20 "AutoInstrument Significant Layer Histograms" log/scout_apm.log
For each controller file, this will display the total number of spans recorded and the ratio of significant to total spans. Look for controllers that have a large total and a small percentage of significant spans. In the output below, it makes sense to ignore application_controller, as only 10% of those spans are significant:
[09/23/19 07:27:52 -0600 Dereks-MacBook-Pro.local (87116)] DEBUG : AutoInstrument Significant Layer Histograms: {"/Users/dlite/projects/scout/apm/app/controllers/application_controller.rb"=>
{:total=>545, :significant=>0.1},
"/Users/dlite/projects/scout/apm/app/controllers/apps_controller.rb"=>
{:total=>25, :significant=>0.56},
"/Users/dlite/projects/scout/apm/app/controllers/checkin_controller.rb"=>
{:total=>31, :significant=>0.39},
"/Users/dlite/projects/scout/apm/app/controllers/status_pages_controller.rb"=>
{:total=>2, :significant=>0.5},
"/Users/dlite/projects/scout/apm/app/controllers/errors_controller.rb"=>
{:total=>2, :significant=>1.0},
"/Users/dlite/projects/scout/apm/app/controllers/insights_controller.rb"=>
{:total=>2, :significant=>1.0}}
Add the following to the common: &defaults section of the config/scout_apm.yml file to avoid instrumenting application_controller.rb:
common: &defaults
auto_instruments_ignore: ['application_controller']
ScoutProf FAQ
Does ScoutProf work with Stackprof?
ScoutProf and Stackprof are not guaranteed to operate at the same time. If you wish to use Stackprof, temporarily disable profiling in your config file (profile: false) or via environment variables (SCOUT_PROFILE=false). See the agent’s configuration options.
How is ScoutProf different than Stackprof?
Stackprof was the inspiration for ScoutProf. Although little original Stackprof code remains, we started with Stackprof’s core approach, integrated it with our APM agent gem, changed it heavily to work with threaded applications, and implemented an easy-to-understand UI on our trace view.
What do sample counts mean?
ScoutProf attempts to sample your application every millisecond, capturing a snapshot backtrace of what is running in each thread. Each successfully captured backtrace is a sample. Later, when we process the raw backtraces, identical traces are combined, and the sample count is how many times each unique backtrace was seen.
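To illustrate the idea (not the agent’s actual implementation): identical backtraces collapse into a single entry whose sample count is the number of times that backtrace was captured:
# Simplified illustration of how sample counts are derived.
samples = [
  ["users_controller.rb:5:in `index'", "user.rb:10:in `slow_lookup'"],
  ["users_controller.rb:5:in `index'", "user.rb:10:in `slow_lookup'"],
  ["users_controller.rb:5:in `index'", "report.rb:22:in `build'"]
]

counts = samples.each_with_object(Hash.new(0)) { |backtrace, h| h[backtrace] += 1 }
# => the first unique backtrace has a sample count of 2, the second a count of 1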
Why do sample counts vary?
Samples will be paused automatically for a few different reasons:
- If Ruby is in the middle of a GC run, samples won’t be taken.
- If the previous sampling hasn’t been run, a new sampling request won’t be added.
The specifics of exactly how often these scenarios happen depend on how and in what order your Ruby code runs. Different sample counts can be expected, even for the same endpoint.
What are the ScoutProf requirements?
- A Linux-based operating system
- Ruby 2.1+
What’s supported during BETA?
During our BETA period, ScoutProf has a few limitations:
- ScoutProf only runs on Linux. Support for additional distros will be added.
- ScoutProf only breaks down time spent in ActionController, not ActionView and not Sidekiq. Support for other areas will be added.
The ScoutProf-enabled version of scout_apm can be safely installed on all environments our agent supports: the limitations above only prevent ScoutProf from running.
Can ScoutProf be enabled for ActionController::API and ActionController::Metal actions?
Yes. Add the following to your controller:
def enable_scoutprof?; true; end
Billing
Free Trial
We offer a no risk, fully featured, free trial. Enter a credit or debit card anytime to continue using Scout APM after the end of your trial.
Billing Date
Your first bill is 30 days after your signup date.
Subscription Style
Per-Request
We currently offer three transaction-based pricing plans. Custom plans are available for higher transaction volume. Contact support@scoutapm.com for pricing options.
Replacing New Relic
Scout is an attractive alternative to New Relic for modern dev teams. Rather than wide breadth, we provide a laser focus on getting to slow custom application code fast, since debugging slow custom application code is typically the most time-intensive performance optimization work.
In many cases, Scout is able to replace New Relic as-is. However, there are cases where your app has specific needs we currently don’t provide. Don’t fret - here’s some of the more common scenarios and our suggestions for building a monitoring stack you’ll love:
Exception Monitoring - Scout doesn’t provide exception monitoring, but we do integrate with Rollbar and Sentry to provide a side-by-side view of your performance metrics and errors within the Scout UI.
Browser Monitoring (Real User Monitoring) - there are a number of dedicated tools for both Real User Monitoring (RUM) and synthetic monitoring. We’ve reviewed Raygun Pulse, an attractive RUM product. You can also continue to use New Relic for browser monitoring and use Scout for application monitoring.
Our Monitoring Stack
Curious about what a company that lives-and-breathes monitoring (us!) uses to monitor our apps? We shared our complete monitoring stack on our blog.
Talk to us about your monitoring stack
Don’t hesitate to email us if you need to talk through your monitoring stack. Monitoring is something we know and love.