Entirely remove the event loop and switch to pure atomics.

Originally, metrics (and `hotmic` before it was converted) was based on
an event loop centered around `mio`'s `Poll` interface, with a custom
channel for reading and writing metrics. That model required a
dedicated thread to poll for writes, ingest them, and manage the
internal data structures in turn.

Eventually, I rewrote that portion to be based on `crossbeam-channel`,
but we still depended on a background thread to pop samples off the
channel and process them.

Instead, we've rewritten the core of metrics to be based purely on
atomics, with the caveat that we still have a background thread.
Instead of a single channel that metrics are funneled into, each
underlying metric becomes a single-track codepath: each metric is
backed by an atomic structure, which means we can pass handles to that
storage all the way to the callers themselves, eliminating the need to
funnel metrics into the "core" where they all contend for processing.

Counters and gauges are now, effectively, wrapped atomic integers,
which means we can process over 100 million counter/gauge updates per
core. Histograms are based on a brand-new atomic "bucket" that allows
for fast, unbounded writes and the ability to snapshot at any time.

The end result is that we can process a mixed workload (counter,
gauge, and histogram) at sample rates of up to 30 million samples per
second per core, with p999 ingest latencies in the low hundreds of
nanoseconds. Taking snapshots also no longer stalls ingest or drives
up its tail latencies, and writers can proceed with no interruption.

There is still a background thread, part of quanta's new "recent" time
support: it incrementally updates a shared global time, which can be
read more quickly than grabbing the time directly. For our purposes,
only histograms need the time, to perform the window upkeep inherent
to the sliding windows we use, and the time they need can be far less
precise than what quanta is capable of. This background thread is
spawned automatically when creating a receiver, and stops when the
receiver goes away. By default, it updates 20 times a second,
performing an operation which itself takes less than 100ns, so all in
all, this background thread should be imperceptible, performance-wise,
on all systems*.

On top of all of this, we've embraced the natural pattern of defining
metrics individually at the variable/field level, and added support
for proxy types, which can be acquired from a sink and embedded as
fields within your existing types, letting you update a metric as
easily as you access a field on an object. Sinks still have the
ability to have metrics pushed directly into them, but this just opens
up more possibilities.

* - famous last words

use crate::{
    builder::{Builder, BuilderError},
    common::MetricScope,
    config::Configuration,
    control::Controller,
    registry::{MetricRegistry, ScopeRegistry},
    sink::Sink,
};
use quanta::{Builder as UpkeepBuilder, Clock, Handle as UpkeepHandle};
use std::sync::Arc;

/// Central store for metrics.
///
/// `Receiver` is the nucleus for all metrics operations. While no operations are performed by it
/// directly, it holds the registries and references to resources, and so it must live as long as
/// any [`Sink`] or [`Controller`] does.
pub struct Receiver {
    metric_registry: Arc<MetricRegistry>,
    scope_registry: Arc<ScopeRegistry>,
    clock: Clock,
    _upkeep_handle: UpkeepHandle,
}

impl Receiver {
    pub(crate) fn from_config(config: Configuration) -> Result<Receiver, BuilderError> {
        // Configure our clock and the quanta upkeep thread. The upkeep thread maintains the
        // cached "recent" time for us, keeping it within `upkeep_interval` of the true time.
        // Reads of this cached time are faster than calling the underlying time source
        // directly, and for histogram windowing, we can afford a far coarser value than the
        // raw nanosecond precision quanta provides by default.
        let clock = Clock::new();
        let upkeep = UpkeepBuilder::new_with_clock(config.upkeep_interval, clock.clone());
        let _upkeep_handle = upkeep.start().map_err(|_| BuilderError::UpkeepFailure)?;

        let scope_registry = Arc::new(ScopeRegistry::new());
        let metric_registry = Arc::new(MetricRegistry::new(
            scope_registry.clone(),
            config,
            clock.clone(),
        ));

        Ok(Receiver {
            metric_registry,
            scope_registry,
            clock,
            _upkeep_handle,
        })
    }

    /// Creates a new [`Builder`] for building a [`Receiver`].
    pub fn builder() -> Builder {
        Builder::default()
    }

    /// Creates a [`Sink`] bound to this receiver.
    pub fn get_sink(&self) -> Sink {
        Sink::new(
            self.metric_registry.clone(),
            self.scope_registry.clone(),
            MetricScope::Root,
            self.clock.clone(),
        )
    }

    /// Creates a [`Controller`] bound to this receiver.
    pub fn get_controller(&self) -> Controller {
        Controller::new(self.metric_registry.clone(), self.scope_registry.clone())
    }
}