
Ingesting MongoDB logs

Our connector reads MongoDB log output and extracts a wealth of useful attributes for you to explore (see the example extracted fields at the bottom of this page).

Honeycomb is unique in its ability to calculate metrics and statistics on the fly, while retaining the full-resolution log lines (and the original MongoDB query that started it all!).

Once you’ve got data flowing, be sure to take a look at our starter queries! These entry points show how we recommend comparing lock retention by normalized query, scan efficiency by collection, or read vs. write distribution by host.

The agent you’ll use to translate logs to events and send them to Honeycomb is called honeytail.

Configure Mongo query logging

By default, MongoDB will only log queries that took longer than 100ms to execute. For the most insight into your system, you’ll either want to lower that threshold or simply instruct your database to log all queries.

You can read more about the Database Profiler in the MongoDB documentation.

To turn on full query logging, run the following in your MongoDB shell:

> dbnames = db.getMongo().getDBNames()
> for (var k in dbnames) { adb = new Mongo().getDB(dbnames[k]); adb.setProfilingLevel(2, -1); }
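
If logging every query is too heavy for your workload, the other option mentioned above is to lower the slow-operation threshold instead. A minimal sketch from the system shell, assuming the legacy mongo client is on your path; the 20ms threshold and the database name test are placeholders:

# Set profiling level 1 (log only operations slower than 20ms) on one database;
# repeat per database, or loop over getDBNames() as in the snippet above.
mongo test --eval 'db.setProfilingLevel(1, 20)'

# Confirm the new settings
mongo test --eval 'printjson(db.getProfilingStatus())'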

To make full query logging permanent on MongoDB 2.6 and later, add this to your mongodb.conf file:

operationProfiling:
  slowOpThresholdMs: -1
  mode: all

Or, for versions older than 2.6:

profile = 2
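
Whichever config variant you use, the change takes effect only after mongod is restarted. On a systemd-based host, for example (the service name may differ by distribution):

# Restart the MongoDB server so the profiling settings are picked up
sudo systemctl restart mongod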

Note: Enabling full query logging can slow down MongoDB. If you have a high query volume, use Honeycomb’s TCP collector to capture your full query workload, and stick with the default profiling levels.

Install and run Honeytail

Download and install the latest honeytail by running:

wget -q https://honeycomb.io/download/honeytail/linux/honeytail_1.762_amd64.deb && \
      echo 'd7bed8a005cbc6a34b232c54f0f84b945f0bb90905c67f85cceaedee9bbbad1e  honeytail_1.762_amd64.deb' | sha256sum -c && \
      sudo dpkg -i honeytail_1.762_amd64.deb

The package installs honeytail, its config file /etc/honeytail/honeytail.conf, and some start scripts. The binary itself is just honeytail, available if you need it in unpackaged form or for ad-hoc use.
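
To sanity-check the install before editing the config, you can confirm the package status and that the binary is on your path (the /usr/bin location is typical but not guaranteed):

dpkg -s honeytail      # package version and install status
which honeytail        # typically /usr/bin/honeytail
honeytail --help       # quickest reference for the flags used below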

You should modify the config file, uncommenting and setting at least the write key, dataset, parser, and log file options used in the commands below.

Make sure to run through the Configure Mongo query logging steps above before running honeytail, in order to get the most out of your logs.


For the current MongoDB logfile, often located at /var/log/mongodb/mongod.log, first backfill the file to make sure that existing log lines are uploaded:

honeytail \
    --writekey=YOUR_API_KEY \
    --dataset=MongoDB \
    --parser=mongo \
    --file=/var/log/mongodb/mongod.log \
    --mongo.log_partials \
    --backfill

And then set honeytail up to tail new lines:

honeytail \
    --writekey=YOUR_API_KEY \
    --dataset=MongoDB \
    --parser=mongo \
    --file=/var/log/mongodb/mongod.log \
    --mongo.log_partials

The --mongo.log_partials flag is not required, but we recommend it: it tells honeytail to send an event to Honeycomb even if it had trouble parsing some parts of a log line.
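
Note that honeytail runs in the foreground. For an always-on tail you can use the start scripts installed by the package, or, as a rough sketch, background the process yourself (the /var/log/honeytail.log output path is just an example):

nohup honeytail \
    --writekey=YOUR_API_KEY \
    --dataset=MongoDB \
    --parser=mongo \
    --file=/var/log/mongodb/mongod.log \
    --mongo.log_partials \
    >> /var/log/honeytail.log 2>&1 &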

Backfilling archived logs

Regardless of whether you pick Automated or Manual setup, you may have other archived logs that you’d like to import into Honeycomb. After either setup process, you’ll have the honeytail agent installed and ready to use.

If you have a MongoDB logfile located at /var/log/mongodb/mongod.16.log, you can backfill using this command:

honeytail \
    --writekey=YOUR_API_KEY \
    --dataset=MongoDB \
    --parser=mongo \
    --file=/var/log/mongodb/mongod.16.log \
    --mongo.log_partials \
    --backfill

This command can be used at any point to backfill from archived log files. You can read more about our honeytail agent and its backfill behavior in the honeytail documentation.

Note: honeytail does not unzip log files, so you’ll need to decompress them before backfilling.
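
For example, if your rotated logs are gzipped, a small loop like this (paths and dataset name are illustrative) decompresses each archive and backfills it:

for f in /var/log/mongodb/mongod.*.log.gz; do
    gunzip "$f"                        # honeytail reads plain text only
    honeytail \
        --writekey=YOUR_API_KEY \
        --dataset=MongoDB \
        --parser=mongo \
        --file="${f%.gz}" \
        --mongo.log_partials \
        --backfill
done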

Once you’ve finished backfilling your old logs, we recommend transitioning to the default streaming behavior to stream live logs to Honeycomb.

Troubleshooting

First, check out honeytail Troubleshooting for general debugging tips.

No data is being sent, and --debug reveals “logline didn't parse, skipping” messages

Take a look at the --file being handed to honeytail and make sure it looks like a MongoDB log file (you can find the expected log message format for your MongoDB version in the MongoDB docs).
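
To see exactly what honeytail is (or is not) parsing, inspect the first few lines of the file and re-run honeytail in the foreground with --debug (same placeholder write key, dataset, and path as above):

head -n 5 /var/log/mongodb/mongod.log     # should show MongoDB-formatted log lines

honeytail \
    --writekey=YOUR_API_KEY \
    --dataset=MongoDB \
    --parser=mongo \
    --file=/var/log/mongodb/mongod.log \
    --mongo.log_partials \
    --debug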

If the logs look correct but honeytail is still failing to send events to Honeycomb, let us know! We’re available to help anytime via email or chat.

Only some queries seem to appear in Honeycomb

Did you remember to turn on full query logging? Our parser relies on reading your server’s MongoDB output logs, and that often requires a bit of configuration on your end.

Try checking the output of:

> db.getProfilingStatus()

If the returned profile level is 0, take another look at the steps described in Configure Mongo Query Logging.
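
Profiling levels are set per database, so it is worth checking each one rather than only the database your shell happens to be connected to. A quick check from the system shell, assuming the mongo client is available:

# Print the profiling level (0, 1, or 2) for every database
mongo --quiet --eval 'db.getMongo().getDBNames().forEach(function(d) { print(d + ": " + new Mongo().getDB(d).getProfilingLevel()); })'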

Still having trouble?

We’re happy to help—send us a message via chat anytime!

Example extracted MongoDB fields

Ingesting a MongoDB log line (resulting from an update; note that different MongoDB versions use significantly different log formats, so your mileage may vary):

Tue Sep 13 21:10:33.961 I COMMAND  [conn11896572] command data.$cmd command: update { update: "currentMood", updates: [ { q: { mood: "bright" }, u: { $set: { mood: "dark" } } } ], writeConcern: { getLastError: 1, w: 1 }, ordered: true } keyUpdates:0 writeConflicts:0 numYields:0 reslen:95 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } user_key_comparison_count:466 block_cache_hit_count:10 block_read_count:0 block_read_byte:0 internal_key_skipped_count:17 internal_delete_skipped_count:0 get_from_memtable_count:0 seek_on_memtable_count:2 seek_child_seek_count:12 0ms

… will produce MongoDB events for Honeycomb that look like:

field name type value
block_cache_hit_count float 10.0
block_read_byte float 0.0
block_read_count float 0.0
collection string $cmd
collection_write_lock float 1.0
command json {"ordered": true, "update": "currentMood", "updates": [{"q": {"mood": "bright"}, "u": {"$set": {"mood": "dark"}}}], "writeConcern": {"getLastError": 1, "w": 1}}
command_type string update
component string COMMAND
context string conn11896572
database string data
database_write_lock float 1.0
duration_ms float 0.0
get_from_memtable_count float 0.0
global_read_lock float 1.0
global_write_lock float 1.0
internal_delete_skipped_count float 0.0
internal_key_skipped_count float 17.0
keyUpdates float 0.0
namespace string data.$cmd
normalized_query string { "updates": [ { "$query": { "mood": 1 }, "$update": { "$set": { "mood": 1 } } } ] }
numYields float 0.0
operation string command
query json {"updates": [{"$query": {"mood": "bright"}, "$update": {"$set": {"mood": "dark"}}}]}
reslen float 95.0
seek_child_seek_count float 12.0
seek_on_memtable_count float 2.0
severity string informational
user_key_comparison_count float 466.0
writeConflicts float 0.0

Note: MongoDB log formats (and the information encoded within) vary widely between different MongoDB versions, and the fields extracted from your MongoDB log output may differ from those shown above.

Numbers are ingested as floats by default in Honeycomb, though you can coerce a field to an integer in the Schema section of your dataset’s Overview.

To learn more about those differences and what each of these fields mean, please refer to the MongoDB docs for your version.

Scrubbing personally identifiable information

While we believe strongly in the value of being able to track down the precise query causing a problem, we understand the concerns around exporting log data that may contain sensitive user information.

With that in mind, we recommend using honeytail’s MongoDB parser, but adding a --scrub_field=query flag to hash the concrete query value. The normalized_query attribute will still be representative of the shape of the query, and identifying patterns including specific queries will still be possible—but the sensitive information will be completely obscured before leaving your servers.
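
For example, the streaming invocation above becomes (same placeholder write key, dataset, and log path):

honeytail \
    --writekey=YOUR_API_KEY \
    --dataset=MongoDB \
    --parser=mongo \
    --file=/var/log/mongodb/mongod.log \
    --mongo.log_partials \
    --scrub_field=query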

More information about dropping or scrubbing sensitive fields can be found in the honeytail documentation.