use crate::common::ValueHandle;
/// A reference to a [`Gauge`].
///
/// A [`Gauge`] is used for directly updating a gauge, without any lookup overhead.
#[derive(Clone)]
Entirely remove the event loop and switch to pure atomics.

Originally, metrics (and `hotmic` before it was converted) was based on an event loop centered around `mio`'s `Poll` interface, with a custom channel used to read and write metrics. That model required a dedicated thread to poll for writes and ingest them, managing the internal data structures in turn.

Eventually, I rewrote that portion to be based on `crossbeam-channel`, but we still depended on a background thread to pop samples off the channel and process them.

We've now rewritten the core of metrics to be based purely on atomics, with the caveat that we do still have a background thread. Instead of a single channel that all metrics are funneled into, each underlying metric becomes its own single-track codepath: each metric is backed by an atomic structure, which means we can hand out handles to that storage all the way to the callers themselves, eliminating the need to funnel metrics into a "core" where they all contend for processing.
Counters and gauges are now, effectively, wrapped atomic integers, which means we can process over 100 million counter/gauge updates per core.
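As a rough illustration of the "wrapped atomic integer" idea — this is a simplified sketch with a made-up `AtomicGauge` type, not the crate's actual `ValueHandle` — a clonable gauge handle can be little more than an `Arc<AtomicI64>`:

```rust
use std::sync::atomic::{AtomicI64, Ordering};
use std::sync::Arc;

/// A gauge backed by nothing more than an atomic integer. Cloning the
/// handle clones the `Arc`, so every clone writes to the same storage.
#[derive(Clone)]
struct AtomicGauge {
    value: Arc<AtomicI64>,
}

impl AtomicGauge {
    fn new() -> Self {
        Self { value: Arc::new(AtomicI64::new(0)) }
    }

    /// Updating the gauge is a single atomic store: no channel, no
    /// central event loop, no contention beyond the cache line itself.
    fn record(&self, value: i64) {
        self.value.store(value, Ordering::Relaxed);
    }

    fn load(&self) -> i64 {
        self.value.load(Ordering::Relaxed)
    }
}

fn main() {
    let g = AtomicGauge::new();
    let handle = g.clone(); // a second handle to the same storage
    handle.record(42);
    assert_eq!(g.load(), 42);
}
```

Because updates are plain atomic stores, handles can be passed all the way out to callers without any funneling through a central processing path.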
Histograms are based on a brand-new atomic "bucket" that allows for
fast, unbounded writes and the ability to snapshot at any time.
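The real atomic bucket chains storage blocks to stay unbounded; as a drastically simplified, fixed-capacity sketch (the `Bucket` type here is invented for illustration, and it glosses over the store/snapshot race a production bucket must handle), the write path can be a single `fetch_add` plus a store:

```rust
use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};

/// A drastically simplified atomic "bucket": writers claim a slot with
/// one fetch_add and store their value; a snapshot reads the filled
/// prefix at any time without stopping writers.
struct Bucket {
    len: AtomicUsize,
    slots: Vec<AtomicU64>,
}

impl Bucket {
    fn with_capacity(cap: usize) -> Self {
        Self {
            len: AtomicUsize::new(0),
            slots: (0..cap).map(|_| AtomicU64::new(0)).collect(),
        }
    }

    /// Lock-free write: one fetch_add to claim an index, one store.
    fn push(&self, value: u64) {
        let idx = self.len.fetch_add(1, Ordering::AcqRel);
        if idx < self.slots.len() {
            self.slots[idx].store(value, Ordering::Release);
        }
    }

    /// Snapshot the filled prefix; concurrent writers keep going.
    fn snapshot(&self) -> Vec<u64> {
        let n = self.len.load(Ordering::Acquire).min(self.slots.len());
        self.slots[..n].iter().map(|s| s.load(Ordering::Acquire)).collect()
    }
}

fn main() {
    let b = Bucket::with_capacity(8);
    for v in [3, 1, 4] {
        b.push(v);
    }
    assert_eq!(b.snapshot(), vec![3, 1, 4]);
}
```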
The end result is that we can process a mixed workload (counters, gauges, and histograms) at sample rates of up to 30 million samples per second per core, with p999 ingest latencies in the low hundreds of nanoseconds. Taking a snapshot no longer stalls ingest or drives up its tail latencies: writers proceed without interruption.
There is still a background thread, part of quanta's new "recent" time support: it incrementally updates a shared global time, which can be read far more cheaply than querying the clock directly. For our purposes, only histograms need the time, to perform the upkeep inherent to the sliding windows we use, and the time they need can be far less precise than what quanta is capable of. This background thread is spawned automatically when creating a receiver and stops when the receiver is dropped. By default, it updates 20 times a second, and each update takes less than 100ns, so all in all, this thread should be imperceptible, performance-wise, on all systems*.
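To make the "recent time" idea concrete — a sketch of the general technique, not quanta's actual API — the background thread amounts to a periodic atomic store that readers observe with a single load:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::{Duration, Instant};

/// A coarse shared clock: a background thread periodically stores the
/// elapsed nanoseconds, and readers get the (slightly stale) value with
/// a single atomic load instead of querying the clock. In real code the
/// thread would be tied to an owner's lifetime; here it is detached.
fn spawn_coarse_clock(interval: Duration) -> Arc<AtomicU64> {
    let now = Arc::new(AtomicU64::new(0));
    let writer = Arc::clone(&now);
    let start = Instant::now();
    thread::spawn(move || loop {
        writer.store(start.elapsed().as_nanos() as u64, Ordering::Release);
        thread::sleep(interval);
    });
    now
}

fn main() {
    // 50ms interval = 20 updates per second, as described above.
    let clock = spawn_coarse_clock(Duration::from_millis(50));
    thread::sleep(Duration::from_millis(120));
    let t = clock.load(Ordering::Acquire); // one atomic load, no syscall
    assert!(t > 0);
}
```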
On top of all of this, we've embraced the natural pattern of defining metrics individually at the variable/field level, and added support for proxy types, which can be acquired from a sink and embedded as fields within your existing types, letting you update a metric as easily as accessing a field on an object. Sinks can still have metrics pushed directly into them; this just opens up more possibilities.
* - famous last words
pub struct Gauge {
    handle: ValueHandle,
}

impl Gauge {
    /// Records a value for the gauge.
    pub fn record(&self, value: i64) {
        self.handle.update_gauge(value);
    }
}

impl From<ValueHandle> for Gauge {
    fn from(handle: ValueHandle) -> Self {
        Self { handle }
    }
}
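The field-embedding pattern described in the commit message can be sketched as follows; `GaugeProxy` and `ConnectionPool` are hypothetical stand-ins invented for illustration (in the crate itself, the proxy would be the `Gauge` above, acquired from a sink):

```rust
use std::sync::atomic::{AtomicI64, Ordering};
use std::sync::Arc;

/// Stand-in for a proxy type handed out by a sink.
#[derive(Clone)]
struct GaugeProxy(Arc<AtomicI64>);

impl GaugeProxy {
    fn record(&self, value: i64) {
        self.0.store(value, Ordering::Relaxed);
    }
}

/// Embedding the proxy as a field: updating the metric is just a method
/// call on the field, with no registry lookup on the hot path.
struct ConnectionPool {
    active: GaugeProxy,
    conns: Vec<u32>,
}

impl ConnectionPool {
    fn checkout(&mut self, id: u32) {
        self.conns.push(id);
        self.active.record(self.conns.len() as i64);
    }
}

fn main() {
    let storage = Arc::new(AtomicI64::new(0));
    let mut pool = ConnectionPool {
        active: GaugeProxy(Arc::clone(&storage)),
        conns: Vec::new(),
    };
    pool.checkout(1);
    pool.checkout(2);
    assert_eq!(storage.load(Ordering::Relaxed), 2);
}
```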