metrics/metrics/src/lib.rs


//! High-speed metrics collection library.
//!
//! `metrics` provides a generalized metrics collection library targeted at users who want to log
//! metrics at high volume and high speed.
//!
//! # Design
//!
//! The library follows a pattern of "senders" and a "receiver."
//!
//! Callers create a [`Receiver`], which acts as a registry for all metrics that flow through it.
//! It allows creating new sinks as well as controllers, both necessary to push in and pull out
//! metrics from the system. It also manages background resources necessary for the registry to
//! operate.
//!
//! Once a [`Receiver`] is created, callers can either create a [`Sink`] for sending metrics, or a
//! [`Controller`] for getting metrics out.
//!
//! A [`Sink`] can be cheaply cloned, and offers convenience methods for getting the current time
//! as well as getting direct handles to a given metric. This allows users to either work with the
//! fuller API exposed by [`Sink`] or to take a compositional approach and embed fields that
//! represent each particular metric to be sent.
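//!
//! As a sketch of the compositional approach (the handle type below is illustrative, not the
//! crate's actual proxy API): a cloneable handle backed by the metric's atomic storage can be
//! embedded as a field, so a type can update its own metric without holding a reference to the
//! sink.
//!
//! ```
//! use std::sync::atomic::{AtomicU64, Ordering};
//! use std::sync::Arc;
//!
//! // Illustrative stand-in for a counter proxy handle; the handle points
//! // directly at the metric's atomic storage.
//! #[derive(Clone)]
//! struct CounterHandle(Arc<AtomicU64>);
//!
//! impl CounterHandle {
//!     fn record(&self, n: u64) { self.0.fetch_add(n, Ordering::Relaxed); }
//!     fn value(&self) -> u64 { self.0.load(Ordering::Relaxed) }
//! }
//!
//! // The handle is embedded as a field, so updating the metric is as easy
//! // as accessing a field on the object.
//! struct WidgetMachine {
//!     widgets_made: CounterHandle,
//! }
//!
//! let counter = CounterHandle(Arc::new(AtomicU64::new(0)));
//! let machine = WidgetMachine { widgets_made: counter.clone() };
//! machine.widgets_made.record(2);
//! assert_eq!(counter.value(), 2);
//! ```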
//!
//! A [`Controller`] provides both a synchronous and asynchronous snapshotting interface, which is
//! [`metrics-core`][metrics_core] compatible for exporting. This allows flexibility in
//! integration amongst traditional single-threaded or hand-rolled multi-threaded applications and
//! the emerging asynchronous Rust ecosystem.
//!
//! # Performance
//!
//! Users can expect to be able to send tens of millions of samples per second, with ingest
//! latencies at roughly 65-70ns at p50, and 250ns at p99. Depending on the workload -- counters
//! vs histograms -- latencies may be even lower, as counters and gauges are markedly faster to
//! update than histograms. Concurrent updates of the same metric will also cause natural
//! contention, reducing ingest throughput and increasing latency.
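//!
//! The reason counters and gauges are so fast is that each update is a single atomic
//! read-modify-write, with no channel, lock, or background thread on the hot path. A minimal
//! sketch of the idea (not the crate's internals verbatim):
//!
//! ```
//! use std::sync::atomic::{AtomicU64, Ordering};
//! use std::sync::Arc;
//! use std::thread;
//!
//! // Four writers hammer the same counter concurrently; every update still lands.
//! let widgets = Arc::new(AtomicU64::new(0));
//! let handles: Vec<_> = (0..4)
//!     .map(|_| {
//!         let widgets = Arc::clone(&widgets);
//!         thread::spawn(move || {
//!             for _ in 0..100_000 {
//!                 widgets.fetch_add(1, Ordering::Relaxed);
//!             }
//!         })
//!     })
//!     .collect();
//! for h in handles {
//!     h.join().unwrap();
//! }
//! assert_eq!(widgets.load(Ordering::Relaxed), 400_000);
//! ```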
//!
//! # Metrics
//!
//! Counters, gauges, and histograms are supported, and follow the definitions outlined in
//! [`metrics-core`][metrics_core].
//!
//! Here's a simple example of creating a receiver and working with a sink:
//!
//! ```
//! # extern crate metrics;
//! use metrics::Receiver;
//! use std::{thread, time::Duration};
//! let receiver = Receiver::builder().build().expect("failed to create receiver");
//! let mut sink = receiver.get_sink();
//!
//! // We can update a counter. Counters are monotonic, unsigned integers that start at 0 and
//! // increase over time.
//! sink.record_count("widgets", 5);
//!
//! // We can update a gauge. Gauges are signed, and hold on to the last value they were updated
//! // to, so you need to track the overall value on your own.
//! sink.record_gauge("red_balloons", 99);
//!
//! // We can update a timing histogram. For timing, we're using the built-in `Sink::now` method
//! // which utilizes a high-speed internal clock. This method returns the time in nanoseconds, so
//! // we get great resolution, but nanoseconds aren't required! If you want to record times
//! // in another unit, that's fine; just keep the unit in mind when viewing and
//! // using those metrics once exported. We also support passing `Instant` values -- both `start`
//! // and `end` need to be the same type, though! -- and we'll take the nanosecond output of that.
//! let start = sink.now();
//! thread::sleep(Duration::from_millis(10));
//! let end = sink.now();
//! sink.record_timing("db.queries.select_products_ns", start, end);
//!
//! // Finally, we can update a value histogram. Technically speaking, value histograms aren't
//! // fundamentally different from timing histograms: if you use a timing histogram, we compute
//! // the time difference for you, but otherwise they're identical under the hood.
//! let row_count = 46;
//! sink.record_value("db.queries.select_products_num_rows", row_count);
//! ```
//!
//! # Scopes
//!
//! Metrics can be scoped, not unlike loggers, at the [`Sink`] level. This allows sinks to easily
//! nest themselves without callers ever needing to care about where they're located.
//!
//! This feature is a simpler approach to tagging: while not as semantically rich, it provides the
//! level of detail necessary to distinguish a single metric between multiple callsites.
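//!
//! Scope composition amounts to joining the scope path onto the metric name with a `.`
//! separator. A sketch of the naming scheme (the helper below is illustrative; the crate does
//! this internally):
//!
//! ```
//! fn scoped_name(scopes: &[&str], name: &str) -> String {
//!     if scopes.is_empty() {
//!         name.to_string()
//!     } else {
//!         format!("{}.{}", scopes.join("."), name)
//!     }
//! }
//!
//! assert_eq!(scoped_name(&[], "widgets"), "widgets");
//! assert_eq!(scoped_name(&["secret", "supersecret"], "widgets"),
//!            "secret.supersecret.widgets");
//! ```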
//!
//! For example, after getting a [`Sink`] from the [`Receiver`], we can easily nest ourselves under
//! the root scope and then send some metrics:
//!
//! ```
//! # extern crate metrics;
//! use metrics::Receiver;
//! let receiver = Receiver::builder().build().expect("failed to create receiver");
//!
//! // This sink has no scope, i.e. it's at the root scope. The metric will end up as "widgets".
//! let mut root_sink = receiver.get_sink();
//! root_sink.record_count("widgets", 42);
//!
//! // This sink is under the "secret" scope. Since we derived ourselves from the root scope,
//! // we're not nested under anything, but our metric name will end up being "secret.widgets".
//! let mut scoped_sink = root_sink.scoped("secret");
//! scoped_sink.record_count("widgets", 42);
//!
//! // This sink is under the "supersecret" scope, but we're also nested! The metric name for this
//! // sample will end up being "secret.supersecret.widgets".
//! let mut scoped_sink_two = scoped_sink.scoped("supersecret");
//! scoped_sink_two.record_count("widgets", 42);
//!
//! // Sinks retain their scope even when cloned, so the metric name will be the same as above.
//! let mut cloned_sink = scoped_sink_two.clone();
//! cloned_sink.record_count("widgets", 42);
//!
//! // A sink can also be nested multiple levels at a time by using a slightly different
//! // input scope: a scope can be a single string, or a slice of strings, which is
//! // interpreted as nesting N levels deep.
//! //
//! // This metric name will end up being "super.secret.ultra.special.widgets".
//! let mut scoped_sink_three = root_sink.scoped(&["super", "secret", "ultra", "special"]);
//! scoped_sink_three.record_count("widgets", 42);
//! ```
//!
//! # Snapshots
//!
//! Naturally, we need a way to get the metrics out of the system, which is where snapshots come
//! into play. With a [`Controller`], we can take a snapshot of the current metrics in the
//! registry, and then output them to any desired system/interface via
//! [`Recorder`](metrics_core::Recorder). A number of pre-baked recorders (which only concern
//! themselves with formatting the data) and exporters (which take the formatted data and either
//! serve it up, such as exposing an HTTP endpoint, or write it somewhere, like stdout) are
//! available, some of which are exposed by this crate.
//!
//! Let's take an example of writing out our metrics in a YAML-like format, writing them via
//! `log!`:
//! ```
//! # extern crate metrics;
//! use metrics::{Receiver, recorders::TextRecorder, exporters::LogExporter};
//! use log::Level;
//! use std::{thread, time::Duration};
//! let receiver = Receiver::builder().build().expect("failed to create receiver");
//! let mut sink = receiver.get_sink();
//!
//! // Take some measurements, similar to what we had in other examples. Counters are
//! // monotonic, unsigned integers that start at 0 and only ever increase:
//! sink.record_count("widgets", 5);
//! sink.record_gauge("red_balloons", 99);
//!
//! let start = sink.now();
//! thread::sleep(Duration::from_millis(10));
//! let end = sink.now();
//! sink.record_timing("db.queries.select_products_ns", start, end);
//! sink.record_timing("db.gizmo_query", start, end);
//!
//! let num_rows = 46;
//! sink.record_value("db.queries.select_products_num_rows", num_rows);
//!
//! // Now create our exporter/recorder configuration, and wire it up.
//! let exporter = LogExporter::new(receiver.get_controller(), TextRecorder::new(), Level::Info);
//!
//! // This exporter takes a snapshot every 5 seconds, renders it, and writes it via `log!`
//! // at the informational level. It runs directly on the current thread, not on a
//! // background thread, so we leave the call commented out to avoid blocking this example:
//! //
//! // exporter.run(Duration::from_secs(5));
//! ```
//! Most exporters have the ability to run on the current thread or to be converted into a future
//! which can be spawned on any Tokio-compatible runtime.
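//!
//! As an illustrative sketch of the future-based mode (the `into_future` method name, its
//! signature, and the use of `tokio::run` are assumptions here for illustration, not
//! confirmed against this crate's API):
//!
//! ```no_run
//! # extern crate metrics;
//! # extern crate tokio;
//! # use metrics::{Receiver, recorders::TextRecorder, exporters::LogExporter};
//! # use log::Level;
//! # use std::time::Duration;
//! # let receiver = Receiver::builder().build().expect("failed to create receiver");
//! let exporter = LogExporter::new(receiver.get_controller(), TextRecorder::new(), Level::Info);
//!
//! // Hand the exporter's future to a Tokio runtime instead of blocking the current
//! // thread; the snapshot interval is assumed to be passed to `into_future`.
//! tokio::run(exporter.into_future(Duration::from_secs(5)));
//! ```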
//!
//! # Facade
//!
//! `metrics` is `metrics-facade` compatible, and can be installed as the global metrics facade:
//! ```
//! # #[macro_use] extern crate metrics_facade;
//! extern crate metrics;
//! use metrics::Receiver;
//!
//! Receiver::builder()
//! .build()
//! .expect("failed to create receiver")
//! .install();
//!
//! counter!("items_processed", 42);
//! ```
//!
//! [metrics_core]: https://docs.rs/metrics-core
//! [`Recorder`]: https://docs.rs/metrics-core/0.3.1/metrics_core/trait.Recorder.html
#![deny(missing_docs)]
#![warn(unused_extern_crates)]
mod builder;
mod common;
mod config;
mod control;
pub mod data;
mod helper;
mod receiver;
mod registry;
mod sink;
#[cfg(any(feature = "metrics-exporter-log", feature = "metrics-exporter-http"))]
pub mod exporters;
#[cfg(any(
feature = "metrics-recorder-text",
feature = "metrics-recorder-prometheus"
))]
pub mod recorders;
pub use self::{
builder::{Builder, BuilderError},
common::{Delta, MetricName, MetricScope},
control::{Controller, SnapshotError},
receiver::Receiver,
sink::{AsScoped, Sink, SinkError},
};