refactor: next-generation metrics (#80)

Toby Lawrence 2020-09-26 22:26:39 -04:00 committed by GitHub
parent a796126c27
commit 36834dd6c6
126 changed files with 4824 additions and 5789 deletions

.editorconfig Normal file

@@ -0,0 +1,13 @@
# http://editorconfig.org
root = true
[*]
indent_style = space
indent_size = 2
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
[*.rs]
indent_size = 4

.gitignore vendored

@@ -1,3 +1,4 @@
/target
**/*.rs.bk
Cargo.lock
/.vscode


@@ -1,12 +1,10 @@
[workspace]
members = [
"metrics-core",
"metrics",
"metrics-runtime",
"metrics-macros",
"metrics-util",
"metrics-exporter-log",
"metrics-exporter-http",
"metrics-observer-yaml",
"metrics-observer-prometheus",
"metrics-observer-json",
"metrics-benchmark",
"metrics-exporter-tcp",
"metrics-exporter-prometheus",
"metrics-tracing-context",
]


@@ -21,7 +21,7 @@ The Metrics project: a metrics ecosystem for Rust.
Running applications in production can be hard when you don't have insight into what the application is doing. We're lucky to have so many good system monitoring programs and services to show us how our servers are performing, but we still have to do the work of instrumenting our applications to gain deep insight into their behavior and performance.
_Metrics_ makes it easy to instrument your application to provide real-time insight into what's happening. It provides a number of practical features that make it easy for library and application authors to start collecting and exporting metrics from their codebase.
`metrics` makes it easy to instrument your application to provide real-time insight into what's happening. It provides a number of practical features that make it easy for library and application authors to start collecting and exporting metrics from their codebase.
# why would I collect metrics?
@@ -32,50 +32,35 @@ Some of the most common scenarios for collecting metrics from an application:
Importantly, this works for both library authors and application authors. If the libraries you use are instrumented, you unlock the power of being able to collect those metrics in your application for free, without any extra configuration. Everyone wins, and learns more about their application performance at the end of the day.
# project goals
Firstly, we want to establish standardized interfaces by which everyone can interoperate: this is the goal of the `metrics` and `metrics-core` crates.
`metrics` provides macros similar to `log`, which are essentially zero cost and invisible when not in use, but automatically funnel their data when a user opts in and installs a metrics recorder. This allows library authors to instrument their libraries without needing to care which metrics system end users will be utilizing.
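The facade pattern described here can be sketched in miniature with only the standard library. The `Recorder` trait, `counter!` macro, and `demo` function below are illustrative stand-ins, not the real `metrics` API: instrumentation calls are no-ops until something is installed into the global slot.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::OnceLock;

// Illustrative stand-in for the recorder interface; not the real `metrics` API.
trait Recorder: Send + Sync {
    fn record_counter(&self, name: &'static str, value: u64);
}

// The facade: a single global slot that starts out empty.
static RECORDER: OnceLock<Box<dyn Recorder>> = OnceLock::new();

// A `counter!`-style macro: forwards to the installed recorder, or is a no-op.
macro_rules! counter {
    ($name:expr, $value:expr) => {
        if let Some(r) = RECORDER.get() {
            r.record_counter($name, $value);
        }
    };
}

// A toy recorder that just sums every counter increment into a global.
static TOTAL: AtomicU64 = AtomicU64::new(0);

struct SummingRecorder;

impl Recorder for SummingRecorder {
    fn record_counter(&self, _name: &'static str, value: u64) {
        TOTAL.fetch_add(value, Ordering::Relaxed);
    }
}

fn demo() -> u64 {
    counter!("requests", 1); // nothing installed yet: this is a no-op
    let _ = RECORDER.set(Box::new(SummingRecorder));
    counter!("requests", 1);
    counter!("requests", 2);
    TOTAL.load(Ordering::Relaxed)
}
```

Until `RECORDER.set(...)` runs, each `counter!` expansion is just a check of an empty slot, which is why library authors can instrument freely without caring which metrics system end users install.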
`metrics-core` provides foundational traits for core components of the metrics ecosystem, primarily the output side. There are a large number of output formats and transports that application authors may consider or want to use. By focusing on the API boundary between the systems that collect metrics and the systems they're exported to, these pieces can be easily swapped around depending on the needs of the end user.
Secondly, we want to provide a best-in-class reference runtime: this is the goal of the `metrics-runtime` crate.
Unfortunately, a great interface is no good without a suitable implementation, and we want to make sure that users looking to instrument their applications for the first time have a batteries-included option that gets them off to the races quickly. The `metrics-runtime` crate provides a best-in-class implementation of a metrics collection system, including support for the core metric types -- counters, gauges, and histograms -- as well as support for important features such as scoping, labels, flexible approaches to recording, and more.
On top of that, collecting metrics isn't terribly useful unless you can export those values, and so `metrics-runtime` pulls in a small set of default observers and exporters to allow users to quickly set up their application to be observable by their existing downstream metrics aggregation/storage.
# project layout
The Metrics project provides a number of crates for both library and application authors.
If you're a library author, you'll only care about using [`metrics`] to instrument your library. If you're an application author, you'll primarily care about [`metrics-runtime`], but you may also want to use [`metrics`] to make instrumenting your own code even easier.
If you're a library author, you'll only care about using [`metrics`] to instrument your library. If
you're an application author, you'll likely also want to instrument your application, but you'll
care about "exporters" as a means to take those metrics and ship them somewhere for analysis.
Overall, this repository is home to the following crates:
* [`metrics`][metrics]: A lightweight metrics facade, similar to [`log`](https://docs.rs/log).
* [`metrics-core`][metrics-core]: Foundational traits for interoperable metrics libraries.
* [`metrics-runtime`][metrics-runtime]: A batteries-included metrics library.
* [`metrics-exporter-http`][metrics-exporter-http]: A metrics-core compatible exporter for serving metrics over HTTP.
* [`metrics-exporter-log`][metrics-exporter-log]: A metrics-core compatible exporter for forwarding metrics to logs.
* [`metrics-observer-json`][metrics-observer-json]: A metrics-core compatible observer that outputs JSON.
* [`metrics-observer-yaml`][metrics-observer-yaml]: A metrics-core compatible observer that outputs YAML.
* [`metrics-observer-prometheus`][metrics-observer-prometheus]: A metrics-core compatible observer that outputs the Prometheus exposition format.
* [`metrics-util`][metrics-util]: Helper types/functions used by the metrics ecosystem.
* [`metrics-macros`][metrics-macros]: Procedural macros that power `metrics`.
* [`metrics-tracing-context`][metrics-tracing-context]: Allow capturing [`tracing`][tracing] span
fields as metric labels.
* [`metrics-exporter-tcp`][metrics-exporter-tcp]: A `metrics`-compatible exporter for serving metrics over TCP.
* [`metrics-exporter-prometheus`][metrics-exporter-prometheus]: A `metrics`-compatible exporter for
serving a Prometheus scrape endpoint.
* [`metrics-util`][metrics-util]: Helper types/functions used by the `metrics` ecosystem.
# contributing
We're always looking for users who have thoughts on how to make metrics better, or users with interesting use cases. Of course, we're also happy to accept code contributions for outstanding feature requests! 😀
We're always looking for users who have thoughts on how to make `metrics` better, or users with interesting use cases. Of course, we're also happy to accept code contributions for outstanding feature requests! 😀
We'd love to chat about any of the above, or anything else, really! You can find us over on [Discord](https://discord.gg/eTwKyY9).
[metrics]: https://github.com/metrics-rs/metrics/tree/master/metrics
[metrics-core]: https://github.com/metrics-rs/metrics/tree/master/metrics-core
[metrics-runtime]: https://github.com/metrics-rs/metrics/tree/master/metrics-runtime
[metrics-exporter-http]: https://github.com/metrics-rs/metrics/tree/master/metrics-exporter-http
[metrics-exporter-log]: https://github.com/metrics-rs/metrics/tree/master/metrics-exporter-log
[metrics-observer-json]: https://github.com/metrics-rs/metrics/tree/master/metrics-observer-json
[metrics-observer-yaml]: https://github.com/metrics-rs/metrics/tree/master/metrics-observer-yaml
[metrics-observer-prometheus]: https://github.com/metrics-rs/metrics/tree/master/metrics-observer-prometheus
[metrics-macros]: https://github.com/metrics-rs/metrics/tree/master/metrics-macros
[metrics-tracing-context]: https://github.com/metrics-rs/metrics/tree/master/metrics-tracing-context
[metrics-exporter-tcp]: https://github.com/metrics-rs/metrics/tree/master/metrics-exporter-tcp
[metrics-exporter-prometheus]: https://github.com/metrics-rs/metrics/tree/master/metrics-exporter-prometheus
[metrics-util]: https://github.com/metrics-rs/metrics/tree/master/metrics-util
[tracing]: https://tracing.rs


@@ -15,6 +15,6 @@ jobs:
steps:
- template: azure-install-rust.yml
parameters:
rust_version: 1.39.0
rust_version: 1.40.0
- script: cargo test
displayName: cargo test


@@ -0,0 +1,15 @@
[package]
name = "metrics-benchmark"
version = "0.1.1-alpha.1"
authors = ["Toby Lawrence <toby@nuclearfurnace.com>"]
edition = "2018"
[dependencies]
log = "0.4"
env_logger = "0.7"
getopts = "0.2"
hdrhistogram = "7.0"
quanta = "0.6"
atomic-shim = "0.1"
metrics = { version = "0.13.0-alpha.0", path = "../metrics" }
metrics-util = { version = "0.4.0-alpha.0", path = "../metrics-util" }


@@ -1,22 +1,12 @@
#[macro_use]
extern crate log;
extern crate env_logger;
extern crate getopts;
extern crate hdrhistogram;
extern crate metrics_core;
extern crate metrics_runtime;
extern crate tokio;
#[macro_use]
extern crate metrics;
use atomic_shim::AtomicU64;
use getopts::Options;
use hdrhistogram::Histogram;
use metrics_runtime::{exporters::HttpExporter, observers::JsonBuilder, Receiver};
use quanta::Clock;
use log::{error, info};
use metrics::{gauge, histogram, increment};
use metrics_util::DebuggingRecorder;
use std::{
env,
ops::Sub,
sync::{
atomic::{AtomicBool, Ordering},
Arc,
@@ -28,23 +18,21 @@ use std::{
const LOOP_SAMPLE: u64 = 1000;
struct Generator {
t0: Option<u64>,
t0: Option<Instant>,
gauge: i64,
hist: Histogram<u64>,
done: Arc<AtomicBool>,
rate_counter: Arc<AtomicU64>,
clock: Clock,
}
impl Generator {
fn new(done: Arc<AtomicBool>, rate_counter: Arc<AtomicU64>, clock: Clock) -> Generator {
fn new(done: Arc<AtomicBool>, rate_counter: Arc<AtomicU64>) -> Generator {
Generator {
t0: None,
gauge: 0,
hist: Histogram::<u64>::new_with_bounds(1, u64::max_value(), 3).unwrap(),
done,
rate_counter,
clock,
}
}
@@ -59,22 +47,22 @@ impl Generator {
self.gauge += 1;
let t1 = self.clock.now();
let t1 = Instant::now();
if let Some(t0) = self.t0 {
let start = if counter % LOOP_SAMPLE == 0 {
self.clock.now()
let start = if counter % 1000 == 0 {
Some(Instant::now())
} else {
0
None
};
counter!("ok.gotem", 1);
timing!("ok.gotem", t0, t1);
gauge!("total", self.gauge);
increment!("ok");
gauge!("total", self.gauge as f64);
histogram!("ok", t1.sub(t0));
if start != 0 {
let delta = self.clock.now() - start;
self.hist.saturating_record(delta);
if let Some(val) = start {
let delta = Instant::now() - val;
self.hist.saturating_record(delta.as_nanos() as u64);
// We also increment our global counter for the sample rate here.
self.rate_counter
@@ -121,8 +109,7 @@ pub fn opts() -> Options {
opts
}
#[tokio::main]
async fn main() {
fn main() {
env_logger::init();
let args: Vec<String> = env::args().collect();
@@ -159,35 +146,23 @@ async fn main() {
info!("duration: {}s", seconds);
info!("producers: {}", producers);
let receiver = Receiver::builder()
.histogram(Duration::from_secs(5), Duration::from_millis(100))
.build()
.expect("failed to build receiver");
let recorder = DebuggingRecorder::new();
let snapshotter = recorder.snapshotter();
recorder.install().expect("failed to install recorder");
let controller = receiver.controller();
let addr = "0.0.0.0:23432"
.parse()
.expect("failed to parse http listen address");
let builder = JsonBuilder::new().set_pretty_json(true);
let exporter = HttpExporter::new(controller.clone(), builder, addr);
tokio::spawn(exporter.async_run());
receiver.install();
info!("receiver configured");
info!("sink configured");
// Spin up our sample producers.
let done = Arc::new(AtomicBool::new(false));
let rate_counter = Arc::new(AtomicU64::new(0));
let mut handles = Vec::new();
let clock = Clock::new();
for _ in 0..producers {
let d = done.clone();
let r = rate_counter.clone();
let c = clock.clone();
let handle = thread::spawn(move || {
Generator::new(d, r, c).run();
let mut gen = Generator::new(d, r);
gen.run();
});
handles.push(handle);
@@ -202,7 +177,7 @@ async fn main() {
let t1 = Instant::now();
let start = Instant::now();
let _snapshot = controller.snapshot();
let _snapshot = snapshotter.snapshot();
let end = Instant::now();
snapshot_hist.saturating_record(duration_as_nanos(end - start) as u64);
@@ -219,7 +194,7 @@ async fn main() {
info!("--------------------------------------------------------------------------------");
info!(" ingested samples total: {}", total);
info!(
"snapshot end-to-end: min: {:9} p50: {:9} p95: {:9} p99: {:9} p999: {:9} max: {:9}",
"snapshot retrieval: min: {:9} p50: {:9} p95: {:9} p99: {:9} p999: {:9} max: {:9}",
nanos_to_readable(snapshot_hist.min()),
nanos_to_readable(snapshot_hist.value_at_percentile(50.0)),
nanos_to_readable(snapshot_hist.value_at_percentile(95.0)),


@@ -1,47 +0,0 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [0.5.2] - 2019-11-21
### Changed
- Fixed a bug with the display output for `Key`. (#59)
## [0.5.1] - 2019-08-08
### Changed
- Fixed a bug with macros calling inner macros without a fully qualified name.
## [0.5.0] - 2019-07-29
### Added
- `Key` now supports labels. (#27)
- `Builder` for building observers in a more standardized way. (#30)
### Changed
- `Recorder` is now `Observer`. (#35)
## [0.4.0] - 2019-06-11
### Added
- Add `Key` as the basis for metric names. (#20)
- Add `AsNanoseconds` for defining types that can be used for start/end times. (#20)
## [0.3.1] - 2019-04-30
### Removed
- Removed extraneous import.
## [0.3.0] - 2019-04-30
### Added
- Added snapshot traits for composable snapshotting. (#8)
### Changed
- Reduced stuttering in type names. (#8)
## [0.2.0] - 2019-04-23
### Changed
- Changed from "exporter" to "recorder" in type names, documentation, etc.
## [0.1.2] - 2019-03-26
### Added
- Effective birth of the crate -- earlier versions were purely for others to experiment with. (#1)


@@ -1,30 +0,0 @@
# The Code of Conduct
This document is based on the [Rust Code of Conduct](https://www.rust-lang.org/conduct.html) and outlines the standard of conduct which is both expected and enforced as part of this project.
## Conduct
* We are committed to providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, nationality, or other similar characteristic.
* Avoid using overtly sexual nicknames or other nicknames that might detract from a friendly, safe and welcoming environment for all.
* Please be kind and courteous. There's no need to be mean or rude.
* Respect that people have differences of opinion and that every design or implementation choice carries a trade-off and numerous costs. There is seldom a right answer.
* Please keep unstructured critique to a minimum. If you have solid ideas you want to experiment with, make a fork and see how it works.
* We will exclude you from interaction if you insult, demean or harass anyone. That is not welcome behaviour. We interpret the term "harassment" as including the definition in the [Citizen Code of Conduct](http://citizencodeofconduct.org/); if you have any lack of clarity about what might be included in that concept, please read their definition. In particular, we don't tolerate behavior that excludes people in socially marginalized groups.
* Private harassment is also unacceptable. No matter who you are, if you feel you have been or are being harassed or made uncomfortable by a community member, please contact one of the repository Owners immediately. Whether you're a regular contributor or a newcomer, we care about making this community a safe place for you and we've got your back.
* Likewise any spamming, trolling, flaming, baiting or other attention-stealing behaviour is not welcome.
## Moderation
These are the policies for upholding our community's standards of conduct. If you feel that a thread needs moderation, please use the contact information above, or mention @tobz or @LucioFranco in the thread.
1. Remarks that violate this Code of Conduct, including hateful, hurtful, oppressive, or exclusionary remarks, are not allowed. (Cursing is allowed, but never targeting another user, and never in a hateful manner.)
2. Remarks that moderators find inappropriate, whether listed in the code of conduct or not, are also not allowed.
In the Rust community we strive to go the extra step to look out for each other. Don't just aim to be technically unimpeachable, try to be your best self. In particular, avoid flirting with offensive or sensitive issues, particularly if they're off-topic; this all too often leads to unnecessary fights, hurt feelings, and damaged trust; worse, it can drive people away from the community entirely.
And if someone takes issue with something you said or did, resist the urge to be defensive. Just stop doing what it was they complained about and apologize. Even if you feel you were misinterpreted or unfairly accused, chances are good there was something you could've communicated better — remember that it's your responsibility to make your fellow Rustaceans comfortable. Everyone wants to get along and we are all here first and foremost because we want to talk about cool technology. You will find that people will be eager to assume good intent and forgive as long as you earn their trust.
## Contacts:
- Toby Lawrence ([toby@nuclearfurnace.com](mailto:toby@nuclearfurnace.com))
- Lucio Franco ([luciofranco14@gmail.com](mailto:luciofranco14@gmail.com))


@@ -1,16 +0,0 @@
[package]
name = "metrics-core"
version = "0.5.2"
authors = ["Toby Lawrence <toby@nuclearfurnace.com>"]
edition = "2018"
license = "MIT"
description = "Foundational traits for interoperable metrics libraries."
homepage = "https://github.com/metrics-rs/metrics"
repository = "https://github.com/metrics-rs/metrics"
documentation = "https://docs.rs/metrics-core"
readme = "README.md"
categories = ["development-tools::debugging"]
keywords = ["metrics", "interface", "common"]


@@ -1,24 +0,0 @@
# metrics-core
[![conduct-badge][]][conduct] [![downloads-badge][] ![release-badge][]][crate] [![docs-badge][]][docs] [![license-badge][]](#license)
[conduct-badge]: https://img.shields.io/badge/%E2%9D%A4-code%20of%20conduct-blue.svg
[downloads-badge]: https://img.shields.io/crates/d/metrics-core.svg
[release-badge]: https://img.shields.io/crates/v/metrics-core.svg
[license-badge]: https://img.shields.io/crates/l/metrics-core.svg
[docs-badge]: https://docs.rs/metrics-core/badge.svg
[conduct]: https://github.com/metrics-rs/metrics/blob/master/CODE_OF_CONDUCT.md
[crate]: https://crates.io/crates/metrics-core
[docs]: https://docs.rs/metrics-core
__metrics-core__ defines foundational traits for interoperable metrics libraries in Rust.
## code of conduct
**NOTE**: All conversations and contributions to this project shall adhere to the [Code of Conduct][conduct].
## mandate / goals
This crate acts as the minimum viable trait for metrics libraries, and consumers of that data, for interoperating with each other.
If your library allows users to collect metrics, it should support metrics-core to allow for flexibility in output targets. If your library provides support for a target metrics backend, it should support metrics-core so that it can be easily plugged into applications using a supported metrics library.


@@ -1,347 +0,0 @@
//! Foundational traits for interoperable metrics libraries in Rust.
//!
//! # Common Ground
//! Most libraries, under the hood, are all based around a core set of data types: counters,
//! gauges, and histograms. While the API surface may differ, the underlying data is the same.
//!
//! # Metric Types
//!
//! ## Counters
//! Counters represent a single value that can only ever be incremented over time, or reset to
//! zero.
//!
//! Counters are useful for tracking things like operations completed, or errors raised, where
//! the value naturally begins at zero when a process or service is started or restarted.
//!
//! ## Gauges
//! Gauges represent a single value that can go up _or_ down over time.
//!
//! Gauges are useful for tracking things like the current number of connected users, or a stock
//! price, or the temperature outside.
//!
//! ## Histograms
//! Histograms measure the distribution of values for a given set of measurements.
//!
//! Histograms are generally used to derive statistics about a particular measurement from an
//! operation or event that happens over and over, such as the duration of a request, or number of
//! rows returned by a particular database query.
//!
//! Histograms allow you to answer questions about these measurements, such as:
//! - "What were the fastest and slowest requests in this window?"
//! - "What is the slowest request we've seen out of 90% of the requests measured? 99%?"
//!
//! Histograms are a convenient way to measure behavior not only at the median, but at the edges of
//! normal operating behavior.
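As a rough illustration of the percentile questions above, here is a std-only sketch that answers them from raw samples. Real histogram implementations (e.g. HdrHistogram) use bucketed storage rather than sorting every raw value, but the idea is the same; `value_at_percentile` is a hypothetical helper, for illustration only.

```rust
/// Returns the sample at or below which `pct` percent of the values fall.
/// Exact but naive: sorts the raw samples, which is fine for a sketch but
/// not for production-scale histograms.
fn value_at_percentile(samples: &mut [u64], pct: f64) -> u64 {
    assert!(!samples.is_empty() && (0.0..=100.0).contains(&pct));
    samples.sort_unstable();
    // Nearest-rank style index into the sorted samples.
    let rank = ((pct / 100.0) * (samples.len() as f64 - 1.0)).round() as usize;
    samples[rank]
}
```

For 100 fake request durations of 1..=100 ms, `value_at_percentile(&mut samples, 99.0)` reports 99, i.e. "the slowest request seen out of 99% of the requests measured."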
#![deny(missing_docs)]
use std::{borrow::Cow, fmt, slice::Iter, time::Duration};
/// An allocation-optimized string.
///
/// We specify `ScopedString` to attempt to get the best of both worlds: flexibility to provide a
/// static or dynamic (owned) string, while retaining the performance benefits of being able to
/// take ownership of owned strings and borrows of completely static strings.
pub type ScopedString = Cow<'static, str>;
/// A key/value pair used to further describe a metric.
#[derive(PartialEq, Eq, Hash, Clone, Debug)]
pub struct Label(ScopedString, ScopedString);
impl Label {
/// Creates a `Label` from a key and value.
pub fn new<K, V>(key: K, value: V) -> Self
where
K: Into<ScopedString>,
V: Into<ScopedString>,
{
Label(key.into(), value.into())
}
/// The key of this label.
pub fn key(&self) -> &str {
self.0.as_ref()
}
/// The value of this label.
pub fn value(&self) -> &str {
self.1.as_ref()
}
/// Consumes this `Label`, returning the key and value.
pub fn into_parts(self) -> (ScopedString, ScopedString) {
(self.0, self.1)
}
}
/// A metric key.
///
/// A key always includes a name, but can optionally include multiple labels used to further describe
/// the metric.
#[derive(PartialEq, Eq, Hash, Clone, Debug)]
pub struct Key {
name: ScopedString,
labels: Vec<Label>,
}
impl Key {
/// Creates a `Key` from a name.
pub fn from_name<N>(name: N) -> Self
where
N: Into<ScopedString>,
{
Key {
name: name.into(),
labels: Vec::new(),
}
}
/// Creates a `Key` from a name and vector of `Label`s.
pub fn from_name_and_labels<N, L>(name: N, labels: L) -> Self
where
N: Into<ScopedString>,
L: IntoLabels,
{
Key {
name: name.into(),
labels: labels.into_labels(),
}
}
/// Adds a new set of labels to this key.
///
/// New labels will be appended to any existing labels.
pub fn add_labels<L>(&mut self, new_labels: L)
where
L: IntoLabels,
{
self.labels.extend(new_labels.into_labels());
}
/// Name of this key.
pub fn name(&self) -> ScopedString {
self.name.clone()
}
/// Labels of this key, if they exist.
pub fn labels(&self) -> Iter<Label> {
self.labels.iter()
}
/// Maps the name of this `Key` to a new name.
pub fn map_name<F, S>(self, f: F) -> Self
where
F: FnOnce(ScopedString) -> S,
S: Into<ScopedString>,
{
Key {
name: f(self.name).into(),
labels: self.labels,
}
}
/// Consumes this `Key`, returning the name and any labels.
pub fn into_parts(self) -> (ScopedString, Vec<Label>) {
(self.name, self.labels)
}
}
impl fmt::Display for Key {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
if self.labels.is_empty() {
write!(f, "Key({})", self.name)
} else {
let kv_pairs = self
.labels
.iter()
.map(|label| format!("{} = {}", label.0, label.1))
.collect::<Vec<_>>();
write!(f, "Key({}, [{}])", self.name, kv_pairs.join(", "))
}
}
}
impl From<String> for Key {
fn from(name: String) -> Key {
Key::from_name(name)
}
}
impl From<&'static str> for Key {
fn from(name: &'static str) -> Key {
Key::from_name(name)
}
}
impl From<ScopedString> for Key {
fn from(name: ScopedString) -> Key {
Key::from_name(name)
}
}
impl<K, L> From<(K, L)> for Key
where
K: Into<ScopedString>,
L: IntoLabels,
{
fn from(parts: (K, L)) -> Key {
Key::from_name_and_labels(parts.0, parts.1)
}
}
impl<K, V> From<(K, V)> for Label
where
K: Into<ScopedString>,
V: Into<ScopedString>,
{
fn from(pair: (K, V)) -> Label {
Label::new(pair.0, pair.1)
}
}
impl<K, V> From<&(K, V)> for Label
where
K: Into<ScopedString> + Clone,
V: Into<ScopedString> + Clone,
{
fn from(pair: &(K, V)) -> Label {
Label::new(pair.0.clone(), pair.1.clone())
}
}
/// A value that can be converted to `Label`s.
pub trait IntoLabels {
/// Consumes this value, turning it into a vector of `Label`s.
fn into_labels(self) -> Vec<Label>;
}
impl IntoLabels for Vec<Label> {
fn into_labels(self) -> Vec<Label> {
self
}
}
impl<T, L> IntoLabels for &T
where
Self: IntoIterator<Item = L>,
L: Into<Label>,
{
fn into_labels(self) -> Vec<Label> {
self.into_iter().map(|l| l.into()).collect()
}
}
/// Used to do a nanosecond conversion.
///
/// This trait allows us to interchangeably accept raw integer time values, ones already in
/// nanoseconds, as well as the more conventional [`Duration`] which is a result of getting the
/// difference between two [`Instant`](std::time::Instant)s.
pub trait AsNanoseconds {
/// Performs the conversion.
fn as_nanos(&self) -> u64;
}
impl AsNanoseconds for u64 {
fn as_nanos(&self) -> u64 {
*self
}
}
impl AsNanoseconds for Duration {
fn as_nanos(&self) -> u64 {
self.as_nanos() as u64
}
}
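A short sketch of what this trait buys callers: a function bounded by `AsNanoseconds` accepts either raw nanosecond integers or a `Duration`. The trait and impls are restated from above so the snippet stands alone; `record_timing` is a hypothetical helper.

```rust
use std::time::Duration;

// Restated from the trait definition above so the example is self-contained.
pub trait AsNanoseconds {
    /// Performs the conversion.
    fn as_nanos(&self) -> u64;
}

impl AsNanoseconds for u64 {
    fn as_nanos(&self) -> u64 {
        *self
    }
}

impl AsNanoseconds for Duration {
    fn as_nanos(&self) -> u64 {
        // `Duration::as_nanos` (the inherent method) returns u128; truncate to u64.
        self.as_nanos() as u64
    }
}

/// Hypothetical helper: records a timing given either representation.
pub fn record_timing<T: AsNanoseconds>(value: T) -> u64 {
    value.as_nanos()
}
```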
/// A value that observes metrics.
pub trait Observer {
/// The method called when a counter is observed.
///
/// From the perspective of an observer, a counter and gauge are essentially identical, insofar
/// as they are both a single value tied to a key. From the perspective of a collector,
/// counters and gauges usually have slightly different modes of operation.
///
/// For the sake of flexibility on the exporter side, both are provided.
fn observe_counter(&mut self, key: Key, value: u64);
/// The method called when a gauge is observed.
///
/// From the perspective of an observer, a counter and gauge are essentially identical, insofar
/// as they are both a single value tied to a key. From the perspective of a collector,
/// counters and gauges usually have slightly different modes of operation.
///
/// For the sake of flexibility on the exporter side, both are provided.
fn observe_gauge(&mut self, key: Key, value: i64);
/// The method called when a histogram is observed.
///
/// Observers are expected to tally their own histogram views, so this will be called with all
/// of the underlying observed values, and callers will need to process them accordingly.
///
/// There is no guarantee that this method will not be called multiple times for the same key.
fn observe_histogram(&mut self, key: Key, values: &[u64]);
}
/// A value that can build an observer.
///
/// Observers are containers used for rendering a snapshot in a particular format.
/// As many systems are multi-threaded, we can't easily share a single observer amongst
/// multiple threads, and so we create an observer per observation, tying them together.
///
/// A builder allows us to generate an observer on demand, giving each specific recorder an
/// interface by which they can do any necessary configuration, initialization, etc of the
/// observer before handing it over to the exporter.
pub trait Builder {
/// The observer created by this builder.
type Output: Observer;
/// Creates a new observer.
fn build(&self) -> Self::Output;
}
/// A value that can produce a `T` by draining its content.
///
/// After being drained, the value should be ready to be reused.
pub trait Drain<T> {
/// Drain the `Observer`, producing a `T`.
fn drain(&mut self) -> T;
}
/// A value whose metrics can be observed by an `Observer`.
pub trait Observe {
/// Observe point-in-time view of the collected metrics.
fn observe<O: Observer>(&self, observer: &mut O);
}
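A minimal sketch of how `Observer` and `Drain` compose: an observer that renders each observation as a line of text and drains into a `String`, leaving itself ready for reuse. `Key` is simplified to a plain `String` wrapper here so the snippet stands alone; the real `Key` carries labels, as defined earlier in this file.

```rust
// Simplified stand-ins, restated so the example is self-contained.
pub struct Key(String);

pub trait Observer {
    fn observe_counter(&mut self, key: Key, value: u64);
    fn observe_gauge(&mut self, key: Key, value: i64);
    fn observe_histogram(&mut self, key: Key, values: &[u64]);
}

pub trait Drain<T> {
    fn drain(&mut self) -> T;
}

/// Renders each observation as one line of plain text.
#[derive(Default)]
pub struct TextObserver {
    lines: Vec<String>,
}

impl Observer for TextObserver {
    fn observe_counter(&mut self, key: Key, value: u64) {
        self.lines.push(format!("counter {} = {}", key.0, value));
    }

    fn observe_gauge(&mut self, key: Key, value: i64) {
        self.lines.push(format!("gauge {} = {}", key.0, value));
    }

    fn observe_histogram(&mut self, key: Key, values: &[u64]) {
        let sum: u64 = values.iter().sum();
        self.lines
            .push(format!("histogram {}: {} samples, sum {}", key.0, values.len(), sum));
    }
}

impl Drain<String> for TextObserver {
    // Render and reset, so the observer can be reused for the next snapshot.
    fn drain(&mut self) -> String {
        let out = self.lines.join("\n");
        self.lines.clear();
        out
    }
}
```

An exporter would typically hand such an observer to an `Observe` implementation, then call `drain` to obtain the rendered snapshot.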
/// Helper macro for generating a set of labels.
///
/// While a `Label` can be generated manually, most users will tend towards the key => value format
/// commonly used for defining hashes/maps in many programming languages. This macro allows users
/// to do the exact same thing in calls that depend on [`metrics_core::IntoLabels`].
///
/// # Examples
/// ```rust
/// # #[macro_use] extern crate metrics_core;
/// # use metrics_core::IntoLabels;
/// fn takes_labels<L: IntoLabels>(name: &str, labels: L) {
/// println!("name: {} labels: {:?}", name, labels.into_labels());
/// }
///
/// takes_labels("requests_processed", labels!("request_type" => "admin"));
/// ```
#[macro_export]
macro_rules! labels {
(@ { $($out:expr),* $(,)* } $(,)*) => {
std::vec![ $($out),* ]
};
(@ { } $k:expr => $v:expr, $($rest:tt)*) => {
$crate::labels!(@ { $crate::Label::new($k, $v) } $($rest)*)
};
(@ { $($out:expr),+ } $k:expr => $v:expr, $($rest:tt)*) => {
$crate::labels!(@ { $($out),+, $crate::Label::new($k, $v) } $($rest)*)
};
($($args:tt)*) => {
$crate::labels!(@ { } $($args)*, )
};
}


@@ -1,3 +0,0 @@
/target
**/*.rs.bk
Cargo.lock


@@ -1,19 +0,0 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [0.2.0] - 2019-07-29
### Changed
- Switch to metrics-core 0.5.0.
## [0.1.2] - 2019-05-11
### Changed
- Switch to metrics-core 0.4.0.
## [0.1.0] - 2019-05-05
### Added
- Effective birth of the crate.


@@ -1,30 +0,0 @@
# The Code of Conduct
This document is based on the [Rust Code of Conduct](https://www.rust-lang.org/conduct.html) and outlines the standard of conduct which is both expected and enforced as part of this project.
## Conduct
* We are committed to providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, nationality, or other similar characteristic.
* Avoid using overtly sexual nicknames or other nicknames that might detract from a friendly, safe and welcoming environment for all.
* Please be kind and courteous. There's no need to be mean or rude.
* Respect that people have differences of opinion and that every design or implementation choice carries a trade-off and numerous costs. There is seldom a right answer.
* Please keep unstructured critique to a minimum. If you have solid ideas you want to experiment with, make a fork and see how it works.
* We will exclude you from interaction if you insult, demean or harass anyone. That is not welcome behaviour. We interpret the term "harassment" as including the definition in the [Citizen Code of Conduct](http://citizencodeofconduct.org/); if you have any lack of clarity about what might be included in that concept, please read their definition. In particular, we don't tolerate behavior that excludes people in socially marginalized groups.
* Private harassment is also unacceptable. No matter who you are, if you feel you have been or are being harassed or made uncomfortable by a community member, please contact one of the repository Owners immediately. Whether you're a regular contributor or a newcomer, we care about making this community a safe place for you and we've got your back.
* Likewise any spamming, trolling, flaming, baiting or other attention-stealing behaviour is not welcome.
## Moderation
These are the policies for upholding our community's standards of conduct. If you feel that a thread needs moderation, please use the contact information above, or mention @tobz or @LucioFranco in the thread.
1. Remarks that violate this Code of Conduct, including hateful, hurtful, oppressive, or exclusionary remarks, are not allowed. (Cursing is allowed, but never targeting another user, and never in a hateful manner.)
2. Remarks that moderators find inappropriate, whether listed in the code of conduct or not, are also not allowed.
In the Rust community we strive to go the extra step to look out for each other. Don't just aim to be technically unimpeachable, try to be your best self. In particular, avoid flirting with offensive or sensitive issues, particularly if they're off-topic; this all too often leads to unnecessary fights, hurt feelings, and damaged trust; worse, it can drive people away from the community entirely.
And if someone takes issue with something you said or did, resist the urge to be defensive. Just stop doing what it was they complained about and apologize. Even if you feel you were misinterpreted or unfairly accused, chances are good there was something you could've communicated better — remember that it's your responsibility to make your fellow Rustaceans comfortable. Everyone wants to get along and we are all here first and foremost because we want to talk about cool technology. You will find that people will be eager to assume good intent and forgive as long as you earn their trust.
## Contacts:
- Toby Lawrence ([toby@nuclearfurnace.com](mailto:toby@nuclearfurnace.com))
- Lucio Franco ([luciofranco14@gmail.com](mailto:luciofranco14@gmail.com))

View File

@ -1,21 +0,0 @@
[package]
name = "metrics-exporter-http"
version = "0.3.0"
authors = ["Toby Lawrence <toby@nuclearfurnace.com>"]
edition = "2018"
license = "MIT"
description = "A metrics-core compatible exporter for serving metrics over HTTP."
homepage = "https://github.com/metrics-rs/metrics"
repository = "https://github.com/metrics-rs/metrics"
documentation = "https://docs.rs/metrics-exporter-http"
readme = "README.md"
categories = ["development-tools::debugging"]
keywords = ["metrics", "metrics-core", "exporter", "http"]
[dependencies]
metrics-core = { path = "../metrics-core", version = "^0.5" }
hyper = "^0.13"
log = "^0.4"

View File

@ -1,67 +0,0 @@
//! Exports metrics over HTTP.
//!
//! This exporter can utilize observers that can be converted to a textual representation
//! via [`Drain<String>`]. It will respond to any requests, regardless of the method or path.
//!
//! Awaiting on `async_run` will drive an HTTP server listening on the configured address.
#![deny(missing_docs)]
use hyper::{
service::{make_service_fn, service_fn},
{Body, Error, Response, Server},
};
use metrics_core::{Builder, Drain, Observe, Observer};
use std::{net::SocketAddr, sync::Arc};
/// Exports metrics over HTTP.
pub struct HttpExporter<C, B> {
controller: C,
builder: B,
address: SocketAddr,
}
impl<C, B> HttpExporter<C, B>
where
C: Observe + Send + Sync + 'static,
B: Builder + Send + Sync + 'static,
B::Output: Drain<String> + Observer,
{
/// Creates a new [`HttpExporter`] that listens on the given `address`.
///
/// Observers expose their output by being converted into strings.
pub fn new(controller: C, builder: B, address: SocketAddr) -> Self {
HttpExporter {
controller,
builder,
address,
}
}
/// Starts an HTTP server on the `address` the exporter was originally configured with,
/// responding to any request with the output of the configured observer.
pub async fn async_run(self) -> hyper::error::Result<()> {
let builder = Arc::new(self.builder);
let controller = Arc::new(self.controller);
let make_svc = make_service_fn(move |_| {
let builder = builder.clone();
let controller = controller.clone();
async move {
Ok::<_, Error>(service_fn(move |_| {
let builder = builder.clone();
let controller = controller.clone();
async move {
let mut observer = builder.build();
controller.observe(&mut observer);
let output = observer.drain();
Ok::<_, Error>(Response::new(Body::from(output)))
}
}))
}
});
Server::bind(&self.address).serve(make_svc).await
}
}

View File

@ -1,3 +0,0 @@
/target
**/*.rs.bk
Cargo.lock

View File

@ -1,23 +0,0 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [0.3.0] - 2019-07-29
### Changed
- Switch to metrics-core 0.5.0.
## [0.2.1] - 2019-06-11
### Changed
- Switch to metrics-core 0.4.0.
## [0.2.0] - 2019-05-01
### Changed
- Switch to metrics-core 0.3.0.
## [0.1.0] - 2019-04-23
### Added
- Effective birth of the crate.

View File

@ -1,30 +0,0 @@
# The Code of Conduct
This document is based on the [Rust Code of Conduct](https://www.rust-lang.org/conduct.html) and outlines the standard of conduct which is both expected and enforced as part of this project.
## Conduct
* We are committed to providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, nationality, or other similar characteristic.
* Avoid using overtly sexual nicknames or other nicknames that might detract from a friendly, safe and welcoming environment for all.
* Please be kind and courteous. There's no need to be mean or rude.
* Respect that people have differences of opinion and that every design or implementation choice carries a trade-off and numerous costs. There is seldom a right answer.
* Please keep unstructured critique to a minimum. If you have solid ideas you want to experiment with, make a fork and see how it works.
* We will exclude you from interaction if you insult, demean or harass anyone. That is not welcome behaviour. We interpret the term "harassment" as including the definition in the [Citizen Code of Conduct](http://citizencodeofconduct.org/); if you have any lack of clarity about what might be included in that concept, please read their definition. In particular, we don't tolerate behavior that excludes people in socially marginalized groups.
* Private harassment is also unacceptable. No matter who you are, if you feel you have been or are being harassed or made uncomfortable by a community member, please contact one of the repository Owners immediately. Whether you're a regular contributor or a newcomer, we care about making this community a safe place for you and we've got your back.
* Likewise any spamming, trolling, flaming, baiting or other attention-stealing behaviour is not welcome.
## Moderation
These are the policies for upholding our community's standards of conduct. If you feel that a thread needs moderation, please use the contact information above, or mention @tobz or @LucioFranco in the thread.
1. Remarks that violate this Code of Conduct, including hateful, hurtful, oppressive, or exclusionary remarks, are not allowed. (Cursing is allowed, but never targeting another user, and never in a hateful manner.)
2. Remarks that moderators find inappropriate, whether listed in the code of conduct or not, are also not allowed.
In the Rust community we strive to go the extra step to look out for each other. Don't just aim to be technically unimpeachable, try to be your best self. In particular, avoid flirting with offensive or sensitive issues, particularly if they're off-topic; this all too often leads to unnecessary fights, hurt feelings, and damaged trust; worse, it can drive people away from the community entirely.
And if someone takes issue with something you said or did, resist the urge to be defensive. Just stop doing what it was they complained about and apologize. Even if you feel you were misinterpreted or unfairly accused, chances are good there was something you could've communicated better — remember that it's your responsibility to make your fellow Rustaceans comfortable. Everyone wants to get along and we are all here first and foremost because we want to talk about cool technology. You will find that people will be eager to assume good intent and forgive as long as you earn their trust.
## Contacts:
- Toby Lawrence ([toby@nuclearfurnace.com](mailto:toby@nuclearfurnace.com))
- Lucio Franco ([luciofranco14@gmail.com](mailto:luciofranco14@gmail.com))

View File

@ -1,21 +0,0 @@
[package]
name = "metrics-exporter-log"
version = "0.4.0"
authors = ["Toby Lawrence <toby@nuclearfurnace.com>"]
edition = "2018"
license = "MIT"
description = "A metrics-core compatible exporter for forwarding metrics to logs."
homepage = "https://github.com/metrics-rs/metrics"
repository = "https://github.com/metrics-rs/metrics"
documentation = "https://docs.rs/metrics-exporter-log"
readme = "README.md"
categories = ["development-tools::debugging"]
keywords = ["metrics", "metrics-core", "exporter", "log"]
[dependencies]
metrics-core = { path = "../metrics-core", version = "^0.5" }
log = "^0.4"
tokio = { version = "0.2", features = ["time"] }

View File

@ -1,76 +0,0 @@
//! Exports metrics via the `log` crate.
//!
//! This exporter can utilize observers that can be converted to a textual representation
//! via [`Drain<String>`]. It will emit that output by logging via the `log` crate at the
//! specified level.
//!
//! # Run Modes
//! - Using `run` will block the current thread, capturing a snapshot and logging it based on the
//! configured interval.
//! - Using `async_run` will return a future that can be awaited on, mimicking the behavior of
//! `run`.
#![deny(missing_docs)]
#[macro_use]
extern crate log;
use log::Level;
use metrics_core::{Builder, Drain, Observe, Observer};
use std::{thread, time::Duration};
use tokio::time;
/// Exports metrics by converting them to a textual representation and logging them.
pub struct LogExporter<C, B>
where
B: Builder,
{
controller: C,
observer: B::Output,
level: Level,
interval: Duration,
}
impl<C, B> LogExporter<C, B>
where
B: Builder,
B::Output: Drain<String> + Observer,
C: Observe,
{
/// Creates a new [`LogExporter`] that logs at the configurable level.
///
/// Observers expose their output by being converted into strings.
pub fn new(controller: C, builder: B, level: Level, interval: Duration) -> Self {
LogExporter {
controller,
observer: builder.build(),
level,
interval,
}
}
/// Runs this exporter on the current thread, logging output at the interval
/// given on construction.
pub fn run(&mut self) {
loop {
thread::sleep(self.interval);
self.turn();
}
}
/// Runs this exporter once, logging output a single time.
pub fn turn(&mut self) {
self.controller.observe(&mut self.observer);
let output = self.observer.drain();
log!(self.level, "{}", output);
}
/// Converts this exporter into a future which logs output at the interval
/// given on construction.
pub async fn async_run(mut self) {
let mut interval = time::interval(self.interval);
loop {
interval.tick().await;
self.turn();
}
}
}
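The `run`/`turn`/`async_run` split above boils down to one observe-and-drain pass (`turn`) plus two scheduling strategies around it. A minimal std-only sketch of that cycle (the type and field names here are illustrative stand-ins, not the crate's API, and the loop is bounded so the sketch terminates):

```rust
use std::{thread, time::Duration};

// Illustrative stand-in for the exporter's observe-and-drain cycle;
// these names are not part of the crate's API.
struct SketchExporter {
    interval: Duration,
    snapshots: Vec<String>,
}

impl SketchExporter {
    /// One pass: observe the metrics and record the drained output.
    fn turn(&mut self) {
        let output = format!("snapshot #{}", self.snapshots.len());
        self.snapshots.push(output);
    }

    /// Blocking run mode: sleep, then take a turn, repeatedly.
    fn run(&mut self, turns: usize) {
        for _ in 0..turns {
            thread::sleep(self.interval);
            self.turn();
        }
    }
}
```

The real `run` loops forever; `async_run` keeps the same cycle but replaces `thread::sleep` with `tokio::time::interval` so it can be awaited.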

View File

@ -4,7 +4,9 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
<!-- next-header -->
## [Unreleased] - ReleaseDate
## [0.1.0] - 2019-07-29
### Added

View File

@ -0,0 +1,29 @@
[package]
name = "metrics-exporter-prometheus"
version = "0.1.0-alpha.4"
authors = ["Toby Lawrence <toby@nuclearfurnace.com>"]
edition = "2018"
license = "MIT"
description = "A metrics-compatible exporter that serves a Prometheus scrape endpoint."
homepage = "https://github.com/metrics-rs/metrics"
repository = "https://github.com/metrics-rs/metrics"
documentation = "https://docs.rs/metrics-exporter-prometheus"
readme = "README.md"
categories = ["development-tools::debugging"]
keywords = ["metrics", "telemetry", "prometheus"]
[dependencies]
metrics = { version = "0.13.0-alpha.1", path = "../metrics" }
metrics-util = { version = "0.4.0-alpha.1", path = "../metrics-util"}
hdrhistogram = "^7.1"
hyper = { version = "^0.13", default-features = false, features = ["tcp"] }
tokio = { version = "^0.2", features = ["rt-core", "tcp", "time", "macros"] }
parking_lot = "^0.10"
[dev-dependencies]
quanta = "^0.5"
tracing = "^0.1"
tracing-subscriber = "^0.2"

View File

@ -1,17 +1,17 @@
# metrics-exporter-http
# metrics-exporter-prometheus
[![conduct-badge][]][conduct] [![downloads-badge][] ![release-badge][]][crate] [![docs-badge][]][docs] [![license-badge][]](#license)
[conduct-badge]: https://img.shields.io/badge/%E2%9D%A4-code%20of%20conduct-blue.svg
[downloads-badge]: https://img.shields.io/crates/d/metrics-exporter-http.svg
[release-badge]: https://img.shields.io/crates/v/metrics-exporter-http.svg
[license-badge]: https://img.shields.io/crates/l/metrics-exporter-http.svg
[docs-badge]: https://docs.rs/metrics-exporter-http/badge.svg
[downloads-badge]: https://img.shields.io/crates/d/metrics-exporter-prometheus.svg
[release-badge]: https://img.shields.io/crates/v/metrics-exporter-prometheus.svg
[license-badge]: https://img.shields.io/crates/l/metrics-exporter-prometheus.svg
[docs-badge]: https://docs.rs/metrics-exporter-prometheus/badge.svg
[conduct]: https://github.com/metrics-rs/metrics/blob/master/CODE_OF_CONDUCT.md
[crate]: https://crates.io/crates/metrics-exporter-http
[docs]: https://docs.rs/metrics-exporter-http
[crate]: https://crates.io/crates/metrics-exporter-prometheus
[docs]: https://docs.rs/metrics-exporter-prometheus
__metrics-exporter-http__ is a metrics-core compatible exporter for serving metrics over HTTP.
__metrics-exporter-prometheus__ is a `metrics`-compatible exporter that serves a Prometheus scrape endpoint.
## code of conduct

View File

@ -0,0 +1,48 @@
use std::thread;
use std::time::Duration;
use metrics::{histogram, increment, register_counter, register_histogram};
use metrics_exporter_prometheus::PrometheusBuilder;
use quanta::Clock;
fn main() {
tracing_subscriber::fmt::init();
let builder = PrometheusBuilder::new();
builder
.install()
.expect("failed to install Prometheus recorder");
// We register these metrics, which gives us a chance to specify a description for them. The
// Prometheus exporter records this description and adds it as HELP text when the endpoint is
// scraped.
//
// Registering metrics ahead of using them is not required, but is the only way to specify the
// description of a metric.
register_counter!(
"tcp_server_loops",
"The iterations of the TCP server event loop so far."
);
register_histogram!(
"tcp_server_loop_delta_ns",
"The time taken for iterations of the TCP server event loop."
);
let clock = Clock::new();
let mut last = None;
// Loop over and over, pretending to do some work.
loop {
increment!("tcp_server_loops", "system" => "foo");
if let Some(t) = last {
let delta: Duration = clock.now() - t;
histogram!("tcp_server_loop_delta_ns", delta, "system" => "foo");
}
last = Some(clock.now());
thread::sleep(Duration::from_millis(750));
}
}

View File

@ -0,0 +1,581 @@
//! Records metrics in the Prometheus exposition format.
#![deny(missing_docs)]
use std::future::Future;
use hyper::{
service::{make_service_fn, service_fn},
{Body, Error as HyperError, Response, Server},
};
use metrics::{Key, Recorder, SetRecorderError};
use metrics_util::{
parse_quantiles, CompositeKey, Handle, Histogram, MetricKind, Quantile, Registry,
};
use parking_lot::RwLock;
use std::io;
use std::iter::FromIterator;
use std::net::{IpAddr, Ipv4Addr, SocketAddr};
use std::sync::Arc;
use std::thread;
use std::{collections::HashMap, time::SystemTime};
use tokio::{pin, runtime, select};
type PrometheusRegistry = Registry<CompositeKey, Handle>;
type HdrHistogram = hdrhistogram::Histogram<u64>;
/// Errors that could occur while installing a Prometheus recorder/exporter.
#[derive(Debug)]
pub enum Error {
/// Creating the networking event loop did not succeed.
Io(io::Error),
/// Installing the recorder did not succeed.
Recorder(SetRecorderError),
}
impl From<io::Error> for Error {
fn from(e: io::Error) -> Self {
Error::Io(e)
}
}
impl From<SetRecorderError> for Error {
fn from(e: SetRecorderError) -> Self {
Error::Recorder(e)
}
}
#[derive(Clone)]
enum Distribution {
/// A Prometheus histogram.
///
/// Exposes "bucketed" values to Prometheus, counting the number of samples
/// below a given threshold i.e. 100 requests faster than 20ms, 1000 requests
/// faster than 50ms, etc.
Histogram(Histogram),
/// A Prometheus summary.
///
/// Computes and exposes value quantiles directly to Prometheus i.e. 50% of
/// requests were faster than 200ms, and 99% of requests were faster than
/// 1000ms, etc.
Summary(HdrHistogram, u64),
}
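The "bucketed" counting described above amounts to cumulative counts of samples at or below each upper bound, which is what the `le="..."` series in the Prometheus exposition format carry. A small sketch (not the crate's implementation):

```rust
// Sketch of cumulative bucket counting: each bound gets the count of
// samples less than or equal to it, mirroring Prometheus histogram
// `le` buckets.
fn bucket_counts(samples: &[u64], bounds: &[u64]) -> Vec<u64> {
    bounds
        .iter()
        .map(|bound| samples.iter().filter(|s| **s <= *bound).count() as u64)
        .collect()
}
```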
struct Snapshot {
pub counters: HashMap<String, HashMap<Vec<String>, u64>>,
pub gauges: HashMap<String, HashMap<Vec<String>, f64>>,
pub distributions: HashMap<String, HashMap<Vec<String>, Distribution>>,
}
struct Inner {
registry: PrometheusRegistry,
distributions: RwLock<HashMap<String, HashMap<Vec<String>, Distribution>>>,
quantiles: Vec<Quantile>,
buckets: Vec<u64>,
buckets_by_name: Option<HashMap<String, Vec<u64>>>,
descriptions: RwLock<HashMap<String, &'static str>>,
}
impl Inner {
pub fn registry(&self) -> &PrometheusRegistry {
&self.registry
}
fn get_recent_metrics(&self) -> Snapshot {
let metrics = self.registry.get_handles();
let mut counters = HashMap::new();
let mut gauges = HashMap::new();
let mut sorted_overrides = self
.buckets_by_name
.as_ref()
.map(|h| Vec::from_iter(h.iter()))
.unwrap_or_else(|| vec![]);
sorted_overrides.sort_by(|(a, _), (b, _)| b.len().cmp(&a.len()));
for (key, handle) in metrics.into_iter() {
let (kind, key) = key.into_parts();
let (name, labels) = key_to_parts(key);
match kind {
MetricKind::Counter => {
let entry = counters
.entry(name)
.or_insert_with(|| HashMap::new())
.entry(labels)
.or_insert(0);
*entry = handle.read_counter();
}
MetricKind::Gauge => {
let entry = gauges
.entry(name)
.or_insert_with(|| HashMap::new())
.entry(labels)
.or_insert(0.0);
*entry = handle.read_gauge();
}
MetricKind::Histogram => {
let buckets = sorted_overrides
.iter()
.find(|(k, _)| name.ends_with(*k))
.map(|(_, buckets)| *buckets)
.unwrap_or(&self.buckets);
let mut wg = self.distributions.write();
let entry = wg
.entry(name.clone())
.or_insert_with(|| HashMap::new())
.entry(labels)
.or_insert_with(|| match buckets.is_empty() {
false => {
let histogram = Histogram::new(buckets)
.expect("failed to create histogram with buckets defined");
Distribution::Histogram(histogram)
}
true => {
let summary =
HdrHistogram::new(3).expect("failed to create histogram");
Distribution::Summary(summary, 0)
}
});
match entry {
Distribution::Histogram(histogram) => handle
.read_histogram_with_clear(|samples| histogram.record_many(samples)),
Distribution::Summary(summary, sum) => {
handle.read_histogram_with_clear(|samples| {
for sample in samples {
let _ = summary.record(*sample);
*sum += *sample;
}
})
}
}
}
}
}
let distributions = self.distributions.read().clone();
Snapshot {
counters,
gauges,
distributions,
}
}
pub fn render(&self) -> String {
let mut sorted_overrides = self
.buckets_by_name
.as_ref()
.map(|h| Vec::from_iter(h.iter()))
.unwrap_or_else(|| vec![]);
sorted_overrides.sort_by(|(a, _), (b, _)| b.len().cmp(&a.len()));
let Snapshot {
mut counters,
mut gauges,
mut distributions,
} = self.get_recent_metrics();
let ts = SystemTime::now()
.duration_since(SystemTime::UNIX_EPOCH)
.map(|d| d.as_secs())
.unwrap_or(0);
let mut output = format!(
"# metrics snapshot (ts={}) (prometheus exposition format)\n",
ts
);
let descriptions = self.descriptions.read();
for (name, mut by_labels) in counters.drain() {
if let Some(desc) = descriptions.get(name.as_str()) {
output.push_str("# HELP ");
output.push_str(name.as_str());
output.push_str(" ");
output.push_str(desc);
output.push_str("\n");
}
output.push_str("# TYPE ");
output.push_str(name.as_str());
output.push_str(" counter\n");
for (labels, value) in by_labels.drain() {
let full_name = render_labeled_name(&name, &labels);
output.push_str(full_name.as_str());
output.push_str(" ");
output.push_str(value.to_string().as_str());
output.push_str("\n");
}
output.push_str("\n");
}
for (name, mut by_labels) in gauges.drain() {
if let Some(desc) = descriptions.get(name.as_str()) {
output.push_str("# HELP ");
output.push_str(name.as_str());
output.push_str(" ");
output.push_str(desc);
output.push_str("\n");
}
output.push_str("# TYPE ");
output.push_str(name.as_str());
output.push_str(" gauge\n");
for (labels, value) in by_labels.drain() {
let full_name = render_labeled_name(&name, &labels);
output.push_str(full_name.as_str());
output.push_str(" ");
output.push_str(value.to_string().as_str());
output.push_str("\n");
}
output.push_str("\n");
}
for (name, mut by_labels) in distributions.drain() {
if let Some(desc) = descriptions.get(name.as_str()) {
output.push_str("# HELP ");
output.push_str(name.as_str());
output.push_str(" ");
output.push_str(desc);
output.push_str("\n");
}
let has_buckets = !self.buckets.is_empty()
    || sorted_overrides.iter().any(|(k, _)| name.ends_with(*k));
output.push_str("# TYPE ");
output.push_str(name.as_str());
output.push_str(" ");
output.push_str(if has_buckets { "histogram" } else { "summary" });
output.push_str("\n");
for (labels, distribution) in by_labels.drain() {
let (sum, count) = match distribution {
Distribution::Summary(summary, sum) => {
for quantile in &self.quantiles {
let value = summary.value_at_quantile(quantile.value());
let mut labels = labels.clone();
labels.push(format!("quantile=\"{}\"", quantile.value()));
let full_name = render_labeled_name(&name, &labels);
output.push_str(full_name.as_str());
output.push_str(" ");
output.push_str(value.to_string().as_str());
output.push_str("\n");
}
(sum, summary.len())
}
Distribution::Histogram(histogram) => {
for (le, count) in histogram.buckets() {
let mut labels = labels.clone();
labels.push(format!("le=\"{}\"", le));
let bucket_name = format!("{}_bucket", name);
let full_name = render_labeled_name(&bucket_name, &labels);
output.push_str(full_name.as_str());
output.push_str(" ");
output.push_str(count.to_string().as_str());
output.push_str("\n");
}
let mut labels = labels.clone();
labels.push("le=\"+Inf\"".to_owned());
let bucket_name = format!("{}_bucket", name);
let full_name = render_labeled_name(&bucket_name, &labels);
output.push_str(full_name.as_str());
output.push_str(" ");
output.push_str(histogram.count().to_string().as_str());
output.push_str("\n");
(histogram.sum(), histogram.count())
}
};
let sum_name = format!("{}_sum", name);
let full_sum_name = render_labeled_name(&sum_name, &labels);
output.push_str(full_sum_name.as_str());
output.push_str(" ");
output.push_str(sum.to_string().as_str());
output.push_str("\n");
let count_name = format!("{}_count", name);
let full_count_name = render_labeled_name(&count_name, &labels);
output.push_str(full_count_name.as_str());
output.push_str(" ");
output.push_str(count.to_string().as_str());
output.push_str("\n");
}
output.push_str("\n");
}
output
}
}
/// A Prometheus recorder.
///
/// This recorder should be composed with other recorders or installed globally via
/// [`metrics::set_boxed_recorder`][set_boxed_recorder].
pub struct PrometheusRecorder {
inner: Arc<Inner>,
}
impl PrometheusRecorder {
fn add_description_if_missing(&self, key: &Key, description: Option<&'static str>) {
if let Some(description) = description {
let mut descriptions = self.inner.descriptions.write();
if !descriptions.contains_key(key.name().as_ref()) {
descriptions.insert(key.name().to_string(), description);
}
}
}
}
/// Builder for creating and installing a Prometheus recorder/exporter.
pub struct PrometheusBuilder {
listen_address: SocketAddr,
quantiles: Vec<Quantile>,
buckets: Vec<u64>,
buckets_by_name: Option<HashMap<String, Vec<u64>>>,
}
impl PrometheusBuilder {
/// Creates a new [`PrometheusBuilder`].
pub fn new() -> Self {
let quantiles = parse_quantiles(&[0.0, 0.5, 0.9, 0.95, 0.99, 0.999, 1.0]);
Self {
listen_address: SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 9000),
quantiles,
buckets: vec![],
buckets_by_name: None,
}
}
/// Sets the listen address for the Prometheus scrape endpoint.
///
/// The HTTP listener that is spawned will respond to GET requests on any request path.
///
/// Defaults to `127.0.0.1:9000`.
pub fn listen_address(mut self, addr: impl Into<SocketAddr>) -> Self {
self.listen_address = addr.into();
self
}
/// Sets the quantiles to use when rendering histograms.
///
/// Quantiles represent a scale of 0 to 1, where percentiles represent a scale of 1 to 100, so
/// a quantile of 0.99 is the 99th percentile, and a quantile of 0.999 is the 99.9th percentile.
///
/// By default, the quantiles will be set to: 0.0, 0.5, 0.9, 0.95, 0.99, 0.999, and 1.0. Unless
/// buckets are configured, all histograms will be exposed as Prometheus summaries.
///
/// If buckets are set (via [`set_buckets`] or [`set_buckets_for_metric`]) then the affected
/// histograms will be exposed as true Prometheus histograms instead of summaries.
pub fn set_quantiles(mut self, quantiles: &[f64]) -> Self {
self.quantiles = parse_quantiles(quantiles);
self
}
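As a rough sketch of the quantile-to-label mapping described above: 0.0 and 1.0 become min/max labels, and everything in between becomes a percentile-style `pNN` label. The authoritative mapping lives in `metrics-util`'s `parse_quantiles`; the function below is only an illustration, and its exact label names are assumptions.

```rust
// Hedged sketch of mapping a quantile in [0.0, 1.0] to a percentile-style
// label; see metrics-util's parse_quantiles for the real behavior.
fn quantile_label(q: f64) -> String {
    if q == 0.0 {
        "min".to_string()
    } else if q == 1.0 {
        "max".to_string()
    } else {
        // "0.99" -> "99", "0.999" -> "999", "0.5" -> "5" (padded to "50")
        let digits = format!("{}", q);
        let mut digits = digits.trim_start_matches("0.").to_string();
        if digits.len() == 1 {
            digits.push('0');
        }
        format!("p{}", digits)
    }
}
```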
/// Sets the buckets to use when rendering histograms.
///
/// Bucket values represent the upper bound of each bucket. If buckets are set, then all
/// histograms will be rendered as true Prometheus histograms, instead of summaries.
pub fn set_buckets(mut self, values: &[u64]) -> Self {
self.buckets = values.to_vec();
self
}
/// Sets the buckets for a specific metric, overriding the default.
///
/// The match is suffix-based, and the longest match found will be used.
///
/// Bucket values represent the upper bound of each bucket. If buckets are set, then any
/// histograms that match will be rendered as true Prometheus histograms, instead of summaries.
///
/// Buckets set here take precedence over any default buckets configured via
/// [`set_buckets`], but only for metrics whose names match the given suffix.
pub fn set_buckets_for_metric(mut self, name: &str, values: &[u64]) -> Self {
let buckets = self.buckets_by_name.get_or_insert_with(|| HashMap::new());
buckets.insert(name.to_owned(), values.to_vec());
self
}
/// Builds the recorder and exporter and installs them globally.
///
/// An error will be returned if there's an issue with creating the HTTP server or with
/// installing the recorder as the global recorder.
pub fn install(self) -> Result<(), Error> {
let (recorder, exporter) = self.build();
metrics::set_boxed_recorder(Box::new(recorder))?;
let mut runtime = runtime::Builder::new()
.basic_scheduler()
.enable_all()
.build()?;
thread::Builder::new()
.name("metrics-exporter-prometheus-http".to_string())
.spawn(move || {
runtime.block_on(async move {
pin!(exporter);
loop {
select! {
_ = &mut exporter => {}
}
}
});
})?;
Ok(())
}
/// Builds the recorder and exporter and returns them both.
///
/// In most cases, users should prefer to use [`PrometheusBuilder::install`] to create and
/// install the recorder and exporter automatically for them. If a caller is combining
/// recorders, or needs to schedule the exporter to run in a particular way, this method
/// provides the flexibility to do so.
pub fn build(
self,
) -> (
PrometheusRecorder,
impl Future<Output = Result<(), HyperError>> + Send + Sync + 'static,
) {
let inner = Arc::new(Inner {
registry: Registry::new(),
distributions: RwLock::new(HashMap::new()),
quantiles: self.quantiles.clone(),
buckets: self.buckets.clone(),
buckets_by_name: self.buckets_by_name.clone(),
descriptions: RwLock::new(HashMap::new()),
});
let recorder = PrometheusRecorder {
inner: inner.clone(),
};
let address = self.listen_address;
let exporter = async move {
let make_svc = make_service_fn(move |_| {
let inner = inner.clone();
async move {
Ok::<_, HyperError>(service_fn(move |_| {
let inner = inner.clone();
async move {
let output = inner.render();
Ok::<_, HyperError>(Response::new(Body::from(output)))
}
}))
}
});
Server::bind(&address).serve(make_svc).await
};
(recorder, exporter)
}
}
impl Recorder for PrometheusRecorder {
fn register_counter(&self, key: Key, description: Option<&'static str>) {
self.add_description_if_missing(&key, description);
self.inner.registry().op(
CompositeKey::new(MetricKind::Counter, key),
|_| {},
|| Handle::counter(),
);
}
fn register_gauge(&self, key: Key, description: Option<&'static str>) {
self.add_description_if_missing(&key, description);
self.inner.registry().op(
CompositeKey::new(MetricKind::Gauge, key),
|_| {},
|| Handle::gauge(),
);
}
fn register_histogram(&self, key: Key, description: Option<&'static str>) {
self.add_description_if_missing(&key, description);
self.inner.registry().op(
CompositeKey::new(MetricKind::Histogram, key),
|_| {},
|| Handle::histogram(),
);
}
fn increment_counter(&self, key: Key, value: u64) {
self.inner.registry().op(
CompositeKey::new(MetricKind::Counter, key),
|h| h.increment_counter(value),
|| Handle::counter(),
);
}
fn update_gauge(&self, key: Key, value: f64) {
self.inner.registry().op(
CompositeKey::new(MetricKind::Gauge, key),
|h| h.update_gauge(value),
|| Handle::gauge(),
);
}
fn record_histogram(&self, key: Key, value: u64) {
self.inner.registry().op(
CompositeKey::new(MetricKind::Histogram, key),
|h| h.record_histogram(value),
|| Handle::histogram(),
);
}
}
fn key_to_parts(key: Key) -> (String, Vec<String>) {
let name = key.name();
let labels = key.labels();
let sanitize = |c| c == '.' || c == '=' || c == '{' || c == '}' || c == '+' || c == '-';
let name = name.replace(sanitize, "_");
let labels = labels
.into_iter()
.map(|label| {
let k = label.key();
let v = label.value();
format!(
"{}=\"{}\"",
k,
v.replace("\\", "\\\\")
.replace("\"", "\\\"")
.replace("\n", "\\n")
)
})
.collect();
(name, labels)
}
fn render_labeled_name(name: &str, labels: &[String]) -> String {
let mut output = name.to_string();
if !labels.is_empty() {
let joined = labels.join(",");
output.push_str("{");
output.push_str(&joined);
output.push_str("}");
}
output
}
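Putting `key_to_parts` and `render_labeled_name` together, the rendered series name for a labeled metric looks like `name{k="v",...}`, with disallowed characters in the metric name replaced by underscores. A condensed, self-contained sketch of that sanitize-and-join step (the function name and tuple-label shape here are simplifications, not the crate's signatures):

```rust
// Condensed sketch of the sanitize-and-join logic above: characters that
// Prometheus disallows in metric names are replaced with '_', and labels
// are joined into a `{k="v",...}` suffix.
fn render_series(name: &str, labels: &[(&str, &str)]) -> String {
    let sanitize = |c: char| matches!(c, '.' | '=' | '{' | '}' | '+' | '-');
    let mut output = name.replace(sanitize, "_");
    if !labels.is_empty() {
        let joined = labels
            .iter()
            .map(|(k, v)| format!("{}=\"{}\"", k, v.replace('"', "\\\"")))
            .collect::<Vec<_>>()
            .join(",");
        output.push('{');
        output.push_str(&joined);
        output.push('}');
    }
    output
}
```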

View File

@ -0,0 +1,35 @@
[package]
name = "metrics-exporter-tcp"
version = "0.1.0-alpha.3"
authors = ["Toby Lawrence <toby@nuclearfurnace.com>"]
edition = "2018"
license = "MIT"
description = "A metrics-compatible exporter that outputs metrics to clients over TCP."
homepage = "https://github.com/metrics-rs/metrics"
repository = "https://github.com/metrics-rs/metrics"
documentation = "https://docs.rs/metrics-exporter-tcp"
readme = "README.md"
categories = ["development-tools::debugging"]
keywords = ["metrics", "telemetry", "tcp"]
[dependencies]
metrics = { version = "0.13.0-alpha.1", path = "../metrics", features = ["std"] }
metrics-util = { version = "0.4.0-alpha.1", path = "../metrics-util" }
bytes = "0.5"
crossbeam-channel = "0.4"
prost = "0.6"
prost-types = "0.6"
mio = { version = "0.7", features = ["os-poll", "tcp"] }
tracing = "0.1"
[build-dependencies]
prost-build = "0.6"
built = "0.4"
[dev-dependencies]
quanta = "0.6"
tracing = "0.1"
tracing-subscriber = "0.2"

View File

@ -1,17 +1,17 @@
# metrics-exporter-log
# metrics-exporter-tcp
[![conduct-badge][]][conduct] [![downloads-badge][] ![release-badge][]][crate] [![docs-badge][]][docs] [![license-badge][]](#license)
[conduct-badge]: https://img.shields.io/badge/%E2%9D%A4-code%20of%20conduct-blue.svg
[downloads-badge]: https://img.shields.io/crates/d/metrics-exporter-log.svg
[release-badge]: https://img.shields.io/crates/v/metrics-exporter-log.svg
[license-badge]: https://img.shields.io/crates/l/metrics-exporter-log.svg
[docs-badge]: https://docs.rs/metrics-exporter-log/badge.svg
[downloads-badge]: https://img.shields.io/crates/d/metrics-exporter-tcp.svg
[release-badge]: https://img.shields.io/crates/v/metrics-exporter-tcp.svg
[license-badge]: https://img.shields.io/crates/l/metrics-exporter-tcp.svg
[docs-badge]: https://docs.rs/metrics-exporter-tcp/badge.svg
[conduct]: https://github.com/metrics-rs/metrics/blob/master/CODE_OF_CONDUCT.md
[crate]: https://crates.io/crates/metrics-exporter-log
[docs]: https://docs.rs/metrics-exporter-log
[crate]: https://crates.io/crates/metrics-exporter-tcp
[docs]: https://docs.rs/metrics-exporter-tcp
__metrics-exporter-log__ is a metrics-core compatible exporter for forwarding metrics to logs.
__metrics-exporter-tcp__ is a metrics-compatible exporter that outputs metrics to clients over TCP.
## code of conduct

View File

@ -0,0 +1,9 @@
fn main() {
println!("cargo:rerun-if-changed=proto/event.proto");
let mut prost_build = prost_build::Config::new();
prost_build.btree_map(&["."]);
prost_build
.compile_protos(&["proto/event.proto"], &["proto/"])
.unwrap();
built::write_built_file().unwrap();
}

View File

@ -0,0 +1,33 @@
use std::io::Read;
use std::net::TcpStream;
use bytes::{BufMut, BytesMut};
use prost::Message;
mod proto {
include!(concat!(env!("OUT_DIR"), "/event.proto.rs"));
}
fn main() {
let mut stream =
TcpStream::connect("127.0.0.1:5000").expect("failed to connect to TCP endpoint");
let mut buf = BytesMut::new();
let mut rbuf = [0u8; 1024];
loop {
match stream.read(&mut rbuf[..]) {
Ok(0) => {
println!("server disconnected, closing");
break;
}
Ok(n) => buf.put_slice(&rbuf[..n]),
Err(e) => eprintln!("read error: {:?}", e),
};
match proto::Metric::decode_length_delimited(&mut buf) {
Err(e) => eprintln!("decode error: {:?}", e),
Ok(msg) => println!("metric: {:?}", msg),
}
}
}
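
The client above relies on `prost`'s length-delimited framing: each `Metric` message is prefixed with its encoded length as a protobuf varint. A minimal stdlib-only sketch of that varint decoding (illustrative only; `decode_varint` is not part of the exporter's API):

```rust
// Decode a protobuf varint (LEB128) length prefix from a byte slice.
// Returns the decoded value and the number of prefix bytes consumed,
// or None if the prefix is incomplete or overlong.
fn decode_varint(buf: &[u8]) -> Option<(u64, usize)> {
    let mut value: u64 = 0;
    for (i, byte) in buf.iter().enumerate().take(10) {
        value |= u64::from(byte & 0x7f) << (7 * i);
        if byte & 0x80 == 0 {
            return Some((value, i + 1));
        }
    }
    None
}

fn main() {
    // 300 (0b1_0010_1100) encodes as [0xAC, 0x02].
    assert_eq!(decode_varint(&[0xac, 0x02, 0xff]), Some((300, 2)));
    // Single-byte values pass through directly.
    assert_eq!(decode_varint(&[0x05]), Some((5, 1)));
    // An incomplete prefix (continuation bit set, no next byte) yields None.
    assert_eq!(decode_varint(&[0x80]), None);
}
```

In the real client, `decode_length_delimited` performs this prefix decode and the message decode in one step.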

View File

@ -0,0 +1,30 @@
use std::thread;
use std::time::Duration;
use metrics::{histogram, increment};
use metrics_exporter_tcp::TcpBuilder;
use quanta::Clock;
fn main() {
tracing_subscriber::fmt::init();
let builder = TcpBuilder::new();
builder.install().expect("failed to install TCP recorder");
let mut clock = Clock::new();
let mut last = None;
loop {
increment!("tcp_server_loops", "system" => "foo");
if let Some(t) = last {
let delta: Duration = clock.now() - t;
histogram!("tcp_server_loop_delta_ns", delta, "system" => "foo");
}
last = Some(clock.now());
thread::sleep(Duration::from_millis(750));
}
}
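
The example measures the loop delta with `quanta::Clock` for high-resolution timing; the same pattern can be sketched with the standard library's `Instant` (an illustration only, not part of the example -- `loop_deltas` is a hypothetical helper):

```rust
use std::thread;
use std::time::{Duration, Instant};

// Run `iters` loop iterations, sleeping `sleep_ms` between them, and collect
// the delta observed at the top of each iteration -- the value the real
// example records via the histogram! macro.
fn loop_deltas(iters: usize, sleep_ms: u64) -> Vec<Duration> {
    let mut deltas = Vec::new();
    let mut last: Option<Instant> = None;
    for _ in 0..iters {
        if let Some(t) = last {
            deltas.push(t.elapsed());
        }
        last = Some(Instant::now());
        thread::sleep(Duration::from_millis(sleep_ms));
    }
    deltas
}

fn main() {
    let deltas = loop_deltas(3, 10);
    // Three iterations yield two deltas, each covering at least one sleep.
    assert_eq!(deltas.len(), 2);
    assert!(deltas.iter().all(|d| *d >= Duration::from_millis(10)));
}
```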

View File

@ -0,0 +1,28 @@
syntax = "proto3";
import "google/protobuf/timestamp.proto";
package event.proto;
message Metric {
string name = 1;
google.protobuf.Timestamp timestamp = 2;
map<string, string> labels = 3;
oneof value {
Counter counter = 4;
Gauge gauge = 5;
Histogram histogram = 6;
}
}
message Counter {
uint64 value = 1;
}
message Gauge {
double value = 1;
}
message Histogram {
uint64 value = 1;
}

View File

@ -0,0 +1,463 @@
//! A [`metrics`][metrics]-compatible exporter that outputs metrics to clients over TCP.
//!
//! This exporter creates a TCP server that, when connected to, will stream individual metrics to
//! the client using a Protocol Buffers encoding.
//!
//! # Backpressure
//! The exporter has configurable buffering, which allows users to trade off how many metrics they
//! want to be queued up at any given time. This buffer limit applies both to incoming metrics and
//! to the individual buffers for each connected client.
//!
//! By default, the buffer limit is set at 1024 metrics. When the incoming buffer -- metrics being
//! fed to the exporter -- is full, metrics will be dropped. If a client's buffer is full,
//! potentially due to slow network conditions or slow processing, then messages in the client's
//! buffer will be dropped in FIFO order (oldest first) to allow the exporter to continue fanning
//! out metrics to clients.
//!
//! If no buffer limit is set, then the exporter will ingest and enqueue as many metrics as possible,
//! potentially up until the point of memory exhaustion. A buffer limit is advised for this reason,
//! even if it is many multiples of the default.
//!
//! # Encoding
//! Metrics are encoded using Protocol Buffers. The protocol file can be found in the repository at
//! `proto/event.proto`.
//!
//! # Usage
//! The TCP exporter can be constructed by creating a [`TcpBuilder`], configuring it as needed, and
//! calling [`TcpBuilder::install`] to both spawn the TCP server as well as install the exporter
//! globally.
//!
//! If necessary, [`TcpBuilder::build`] can be called instead: it still spawns the TCP server, but
//! returns the recorder so that it can be composed with other recorders before being installed.
//!
//! ```
//! # use metrics_exporter_tcp::TcpBuilder;
//! # fn direct() {
//! // Install the exporter directly:
//! let builder = TcpBuilder::new();
//! builder.install().expect("failed to install TCP exporter");
//!
//! // Or install the TCP server and get the recorder:
//! let builder = TcpBuilder::new();
//! let recorder = builder.build().expect("failed to install TCP exporter");
//! # }
//! ```
//!
//! [metrics]: https://docs.rs/metrics
use std::collections::{BTreeMap, HashMap, VecDeque};
use std::io::{self, Write};
use std::net::SocketAddr;
use std::sync::Arc;
use std::thread;
use std::time::SystemTime;
use bytes::Bytes;
use crossbeam_channel::{bounded, unbounded, Receiver, Sender};
use metrics::{Key, Recorder, SetRecorderError};
use mio::{
net::{TcpListener, TcpStream},
Events, Interest, Poll, Token, Waker,
};
use prost::{EncodeError, Message};
use tracing::{error, trace, trace_span};
const WAKER: Token = Token(0);
const LISTENER: Token = Token(1);
const START_TOKEN: Token = Token(2);
const CLIENT_INTEREST: Interest = Interest::READABLE.add(Interest::WRITABLE);
mod proto {
include!(concat!(env!("OUT_DIR"), "/event.proto.rs"));
}
enum MetricValue {
Counter(u64),
Gauge(f64),
Histogram(u64),
}
/// Errors that could occur while installing a TCP recorder/exporter.
#[derive(Debug)]
pub enum Error {
/// Creating the networking event loop did not succeed.
Io(io::Error),
/// Installing the recorder did not succeed.
Recorder(SetRecorderError),
}
impl From<io::Error> for Error {
fn from(e: io::Error) -> Self {
Error::Io(e)
}
}
impl From<SetRecorderError> for Error {
fn from(e: SetRecorderError) -> Self {
Error::Recorder(e)
}
}
/// A TCP recorder.
pub struct TcpRecorder {
tx: Sender<(Key, MetricValue)>,
waker: Arc<Waker>,
}
/// Builder for creating and installing a TCP recorder/exporter.
pub struct TcpBuilder {
listen_addr: SocketAddr,
buffer_size: Option<usize>,
}
impl TcpBuilder {
/// Creates a new `TcpBuilder`.
pub fn new() -> TcpBuilder {
TcpBuilder {
listen_addr: ([127, 0, 0, 1], 5000).into(),
buffer_size: Some(1024),
}
}
/// Sets the listen address.
///
/// The exporter will accept connections on this address and immediately begin forwarding
/// metrics to the client.
///
/// Defaults to `127.0.0.1:5000`.
pub fn listen_address<A>(mut self, addr: A) -> TcpBuilder
where
A: Into<SocketAddr>,
{
self.listen_addr = addr.into();
self
}
/// Sets the buffer size for internal operations.
///
/// The buffer size controls two operational aspects: the number of metrics processed
/// per iteration of the event loop, and the number of buffered metrics each client
/// can hold.
///
/// This setting allows trading off responsiveness for throughput, where a smaller buffer
/// size will ensure that metrics are pushed to clients sooner, versus a larger buffer
/// size that allows us to push more at a time.
///
/// As well, the larger the buffer, the more messages a client can temporarily hold.
/// Clients have a circular buffer implementation so if their buffers are full, metrics
/// will be dropped as necessary to avoid backpressure in the recorder.
pub fn buffer_size(mut self, size: Option<usize>) -> TcpBuilder {
self.buffer_size = size;
self
}
/// Installs the recorder and exporter.
///
/// An error will be returned if there's an issue with creating the TCP server or with
/// installing the recorder as the global recorder.
pub fn install(self) -> Result<(), Error> {
let recorder = self.build()?;
metrics::set_boxed_recorder(Box::new(recorder))?;
Ok(())
}
/// Builds and installs the exporter, but returns the recorder.
///
/// In most cases, users should prefer to use [`TcpBuilder::install`] to create and install
/// the recorder and exporter automatically for them. If a caller is combining recorders,
/// however, then this method allows the caller the flexibility to do so.
pub fn build(self) -> Result<TcpRecorder, Error> {
let buffer_size = self.buffer_size;
let (tx, rx) = match buffer_size {
None => unbounded(),
Some(size) => bounded(size),
};
let poll = Poll::new()?;
let waker = Arc::new(Waker::new(poll.registry(), WAKER)?);
let mut listener = TcpListener::bind(self.listen_addr)?;
poll.registry()
.register(&mut listener, LISTENER, Interest::READABLE)?;
let recorder = TcpRecorder {
tx,
waker: Arc::clone(&waker),
};
thread::spawn(move || run_transport(poll, waker, listener, rx, buffer_size));
Ok(recorder)
}
}
impl TcpRecorder {
fn push_metric(&self, key: Key, value: MetricValue) {
let _ = self.tx.try_send((key, value));
let _ = self.waker.wake();
}
}
impl Recorder for TcpRecorder {
fn register_counter(&self, _key: Key, _description: Option<&'static str>) {}
fn register_gauge(&self, _key: Key, _description: Option<&'static str>) {}
fn register_histogram(&self, _key: Key, _description: Option<&'static str>) {}
fn increment_counter(&self, key: Key, value: u64) {
self.push_metric(key, MetricValue::Counter(value));
}
fn update_gauge(&self, key: Key, value: f64) {
self.push_metric(key, MetricValue::Gauge(value));
}
fn record_histogram(&self, key: Key, value: u64) {
self.push_metric(key, MetricValue::Histogram(value));
}
}
fn run_transport(
mut poll: Poll,
waker: Arc<Waker>,
listener: TcpListener,
rx: Receiver<(Key, MetricValue)>,
buffer_size: Option<usize>,
) {
let buffer_limit = buffer_size.unwrap_or(std::usize::MAX);
let mut events = Events::with_capacity(1024);
let mut clients = HashMap::new();
let mut clients_to_remove = Vec::new();
let mut next_token = START_TOKEN;
let mut buffered_pmsgs = VecDeque::with_capacity(buffer_limit);
loop {
let _span = trace_span!("transport");
// Poll until we get something. All events -- metrics wake-ups and network I/O -- flow
// through here so we can block without issue.
let _evspan = trace_span!("event loop");
if let Err(e) = poll.poll(&mut events, None) {
error!(error = %e, "error during poll");
continue;
}
drop(_evspan);
// Technically, this is an abuse of size_hint() but Mio will return the number of events
// for both parts of the tuple.
trace!(events = events.iter().size_hint().0, "return from poll");
let _pspan = trace_span!("process events");
for event in events.iter() {
match event.token() {
WAKER => {
// Read until we hit our buffer limit or there are no more messages.
let _mrxspan = trace_span!("metrics in");
loop {
if buffered_pmsgs.len() >= buffer_limit {
// We didn't drain ourselves here, so schedule a future wake so we
// continue to drain remaining metrics.
let _ = waker.wake();
break;
}
let msg = match rx.try_recv() {
Ok(msg) => msg,
Err(e) if e.is_empty() => {
trace!("metric rx drained");
break;
}
// If our sender is dead, we can't do anything else, so just return.
Err(_) => return,
};
let (key, value) = msg;
match convert_metric_to_protobuf_encoded(key, value) {
Ok(pmsg) => buffered_pmsgs.push_back(pmsg),
Err(e) => error!(error = ?e, "error encoding metric"),
}
}
drop(_mrxspan);
if buffered_pmsgs.is_empty() {
trace!("woken for metrics but no pmsgs buffered");
continue;
}
// Now fan out each of these items to each client.
for (token, (conn, wbuf, msgs)) in clients.iter_mut() {
// Before we potentially do any draining, try and drive the connection to
// make sure space is freed up as much as possible.
let done = drive_connection(conn, wbuf, msgs);
if done {
clients_to_remove.push(*token);
continue;
}
// With the encoded metrics, we push them into each client's internal
// list. We try to write as many of those buffers as possible to the
// client before being told to back off. If we encounter a partial write
// of a buffer, we store the remainder of that message in a special field
// so that we don't write incomplete metrics to the client.
//
// If there are more messages to hand off to a client than the client's
// internal list has room for, we remove as many as needed to do so. This
// means we prioritize sending newer metrics if connections are backed up.
let available = if msgs.len() < buffer_limit {
buffer_limit - msgs.len()
} else {
0
};
let to_drain = buffered_pmsgs.len().saturating_sub(available);
let _ = msgs.drain(0..to_drain);
msgs.extend(buffered_pmsgs.iter().take(buffer_limit).cloned());
let done = drive_connection(conn, wbuf, msgs);
if done {
clients_to_remove.push(*token);
}
}
// We've pushed each metric into each client's internal list, so we can clear
// ourselves and continue on.
buffered_pmsgs.clear();
// Remove any clients that were done.
for token in clients_to_remove.drain(..) {
if let Some((conn, _, _)) = clients.get_mut(&token) {
trace!(?conn, ?token, "removing client");
clients.remove(&token);
}
}
}
LISTENER => {
// Accept as many new connections as we can.
loop {
match listener.accept() {
Ok((mut conn, _)) => {
// Get our client's token and register the connection.
let token = next(&mut next_token);
poll.registry()
.register(&mut conn, token, CLIENT_INTEREST)
.expect("failed to register interest for client connection");
// Start tracking them.
clients
.insert(token, (conn, None, VecDeque::new()))
.ok_or(())
.expect_err("client mapped to existing token!");
}
Err(ref e) if would_block(e) => break,
Err(e) => {
error!("caught error while accepting client connections: {:?}", e);
return;
}
}
}
}
token => {
if event.is_writable() {
if let Some((conn, wbuf, msgs)) = clients.get_mut(&token) {
let done = drive_connection(conn, wbuf, msgs);
if done {
trace!(?conn, ?token, "removing client");
clients.remove(&token);
}
}
}
}
}
}
}
}
#[tracing::instrument(skip(wbuf, msgs))]
fn drive_connection(
conn: &mut TcpStream,
wbuf: &mut Option<Bytes>,
msgs: &mut VecDeque<Bytes>,
) -> bool {
trace!(?conn, "driving client");
loop {
let mut buf = match wbuf.take() {
// Send the leftover buffer first, if we have one.
Some(buf) => buf,
None => match msgs.pop_front() {
Some(msg) => msg,
None => {
trace!("client write queue drained");
return false;
}
},
};
match conn.write(&buf) {
// Zero write = client closed their connection, so remove them.
Ok(0) => {
trace!(?conn, "zero write, closing client");
return true;
}
Ok(n) if n < buf.len() => {
// We sent part of the buffer, but not everything. Keep track of the remaining
// chunk of the buffer. TODO: do we need to reregister ourselves to track writable
// status??
let remaining = buf.split_off(n);
trace!(
?conn,
written = n,
remaining = remaining.len(),
"partial write"
);
wbuf.replace(remaining);
return false;
}
Ok(_) => continue,
Err(ref e) if would_block(e) => return false,
Err(ref e) if interrupted(e) => return drive_connection(conn, wbuf, msgs),
Err(e) => {
error!(?conn, error = %e, "write failed");
return true;
}
}
}
}
fn convert_metric_to_protobuf_encoded(key: Key, value: MetricValue) -> Result<Bytes, EncodeError> {
let name = key.name().to_string();
let labels = key
.labels()
.map(|label| (label.key().to_owned(), label.value().to_owned()))
.collect::<BTreeMap<_, _>>();
let mvalue = match value {
MetricValue::Counter(cv) => proto::metric::Value::Counter(proto::Counter { value: cv }),
MetricValue::Gauge(gv) => proto::metric::Value::Gauge(proto::Gauge { value: gv }),
MetricValue::Histogram(hv) => {
proto::metric::Value::Histogram(proto::Histogram { value: hv })
}
};
let now: prost_types::Timestamp = SystemTime::now().into();
let metric = proto::Metric {
name,
labels,
timestamp: Some(now),
value: Some(mvalue),
};
let mut buf = Vec::new();
metric.encode_length_delimited(&mut buf)?;
Ok(Bytes::from(buf))
}
fn next(current: &mut Token) -> Token {
let next = current.0;
current.0 += 1;
Token(next)
}
fn would_block(err: &io::Error) -> bool {
err.kind() == io::ErrorKind::WouldBlock
}
fn interrupted(err: &io::Error) -> bool {
err.kind() == io::ErrorKind::Interrupted
}

25
metrics-macros/Cargo.toml Normal file
View File

@ -0,0 +1,25 @@
[package]
name = "metrics-macros"
version = "0.1.0-alpha.3"
authors = ["Toby Lawrence <toby@nuclearfurnace.com>"]
edition = "2018"
license = "MIT"
description = "Macros for the metrics crate."
homepage = "https://github.com/metrics-rs/metrics"
repository = "https://github.com/metrics-rs/metrics"
documentation = "https://docs.rs/metrics"
readme = "README.md"
categories = ["development-tools::debugging"]
keywords = ["metrics", "facade", "macros"]
[lib]
proc-macro = true
[dependencies]
syn = "1.0"
quote = "1.0"
proc-macro2 = "1.0"
proc-macro-hack = "0.5"

4
metrics-macros/README.md Normal file
View File

@ -0,0 +1,4 @@
# metrics-macros
This crate houses all of the procedural macros that are re-exported by `metrics`. Refer to the
documentation for `metrics` to find examples and more information on the available macros.

367
metrics-macros/src/lib.rs Normal file
View File

@ -0,0 +1,367 @@
extern crate proc_macro;
use self::proc_macro::TokenStream;
use proc_macro_hack::proc_macro_hack;
use quote::{format_ident, quote, ToTokens};
use syn::parse::{Error, Parse, ParseStream, Result};
use syn::{parse_macro_input, Expr, LitStr, Token};
#[cfg(test)]
mod tests;
enum Key {
NotScoped(LitStr),
Scoped(LitStr),
}
enum Labels {
Existing(Expr),
Inline(Vec<(LitStr, Expr)>),
}
struct WithoutExpression {
key: Key,
labels: Option<Labels>,
}
struct WithExpression {
key: Key,
op_value: Expr,
labels: Option<Labels>,
}
struct Registration {
key: Key,
description: Option<LitStr>,
labels: Option<Labels>,
}
impl Parse for WithoutExpression {
fn parse(mut input: ParseStream) -> Result<Self> {
let key = read_key(&mut input)?;
let labels = parse_labels(&mut input)?;
Ok(WithoutExpression { key, labels })
}
}
impl Parse for WithExpression {
fn parse(mut input: ParseStream) -> Result<Self> {
let key = read_key(&mut input)?;
input.parse::<Token![,]>()?;
let op_value: Expr = input.parse()?;
let labels = parse_labels(&mut input)?;
Ok(WithExpression {
key,
op_value,
labels,
})
}
}
impl Parse for Registration {
fn parse(mut input: ParseStream) -> Result<Self> {
let key = read_key(&mut input)?;
// This may or may not be the start of labels, if the description has been omitted, so
// we hold on to it until we can make sure nothing else is behind it, or if it's a
// full-fledged set of labels.
let (description, labels) = if input.peek(Token![,]) && input.peek3(Token![=>]) {
// We have a ", <something> =>" pattern, which can only be labels, so we have no
// description.
let labels = parse_labels(&mut input)?;
(None, labels)
} else if input.peek(Token![,]) && input.peek2(LitStr) {
// We already know we're not working with labels only, and if we have ", <literal
// string>" then we have to at least have a description, possibly with labels.
input.parse::<Token![,]>()?;
let description = input.parse::<LitStr>().ok();
let labels = parse_labels(&mut input)?;
(description, labels)
} else {
// We might have labels passed as an expression.
let labels = parse_labels(&mut input)?;
(None, labels)
};
Ok(Registration {
key,
description,
labels,
})
}
}
#[proc_macro_hack]
pub fn register_counter(input: TokenStream) -> TokenStream {
let Registration {
key,
description,
labels,
} = parse_macro_input!(input as Registration);
get_expanded_registration("counter", key, description, labels).into()
}
#[proc_macro_hack]
pub fn register_gauge(input: TokenStream) -> TokenStream {
let Registration {
key,
description,
labels,
} = parse_macro_input!(input as Registration);
get_expanded_registration("gauge", key, description, labels).into()
}
#[proc_macro_hack]
pub fn register_histogram(input: TokenStream) -> TokenStream {
let Registration {
key,
description,
labels,
} = parse_macro_input!(input as Registration);
get_expanded_registration("histogram", key, description, labels).into()
}
#[proc_macro_hack]
pub fn increment(input: TokenStream) -> TokenStream {
let WithoutExpression { key, labels } = parse_macro_input!(input as WithoutExpression);
let op_value = quote! { 1 };
get_expanded_callsite("counter", "increment", key, labels, op_value).into()
}
#[proc_macro_hack]
pub fn counter(input: TokenStream) -> TokenStream {
let WithExpression {
key,
op_value,
labels,
} = parse_macro_input!(input as WithExpression);
get_expanded_callsite("counter", "increment", key, labels, op_value).into()
}
#[proc_macro_hack]
pub fn gauge(input: TokenStream) -> TokenStream {
let WithExpression {
key,
op_value,
labels,
} = parse_macro_input!(input as WithExpression);
get_expanded_callsite("gauge", "update", key, labels, op_value).into()
}
#[proc_macro_hack]
pub fn histogram(input: TokenStream) -> TokenStream {
let WithExpression {
key,
op_value,
labels,
} = parse_macro_input!(input as WithExpression);
get_expanded_callsite("histogram", "record", key, labels, op_value).into()
}
fn get_expanded_registration(
metric_type: &str,
key: Key,
description: Option<LitStr>,
labels: Option<Labels>,
) -> proc_macro2::TokenStream {
let register_ident = format_ident!("register_{}", metric_type);
let key = key_to_quoted(key, labels);
let description = match description {
Some(s) => quote! { Some(#s) },
None => quote! { None },
};
quote! {
{
// Only do this work if there's a recorder installed.
if let Some(recorder) = metrics::try_recorder() {
// Registrations are fairly rare, don't attempt to cache here
// and just use an owned ref.
recorder.#register_ident(metrics::Key::Owned(#key), #description);
}
}
}
}
fn get_expanded_callsite<V>(
metric_type: &str,
op_type: &str,
key: Key,
labels: Option<Labels>,
op_values: V,
) -> proc_macro2::TokenStream
where
V: ToTokens,
{
let use_fast_path = can_use_fast_path(&labels);
let key = key_to_quoted(key, labels);
let op_values = if metric_type == "histogram" {
quote! {
metrics::__into_u64(#op_values)
}
} else {
quote! { #op_values }
};
let op_ident = format_ident!("{}_{}", op_type, metric_type);
if use_fast_path {
// We're on the fast path here, so we'll build our key, statically cache it,
// and use a borrowed reference to it for this and future operations.
quote! {
{
static CACHED_KEY: metrics::OnceKeyData = metrics::OnceKeyData::new();
// Only do this work if there's a recorder installed.
if let Some(recorder) = metrics::try_recorder() {
// Initialize our fast path.
let key = CACHED_KEY.get_or_init(|| { #key });
recorder.#op_ident(metrics::Key::Borrowed(&key), #op_values);
}
}
}
} else {
// We're on the slow path, so basically we register every single time.
//
// Recorders are expected to deduplicate any duplicate registrations.
quote! {
{
// Only do this work if there's a recorder installed.
if let Some(recorder) = metrics::try_recorder() {
recorder.#op_ident(metrics::Key::Owned(#key), #op_values);
}
}
}
}
}
fn can_use_fast_path(labels: &Option<Labels>) -> bool {
match labels {
None => true,
Some(labels) => match labels {
Labels::Existing(_) => false,
Labels::Inline(pairs) => pairs.iter().all(|(_, v)| matches!(v, Expr::Lit(_))),
},
}
}
fn read_key(input: &mut ParseStream) -> Result<Key> {
if let Ok(_) = input.parse::<Token![<]>() {
let s = input.parse::<LitStr>()?;
input.parse::<Token![>]>()?;
Ok(Key::Scoped(s))
} else {
let s = input.parse::<LitStr>()?;
Ok(Key::NotScoped(s))
}
}
fn quote_key_name(key: Key) -> proc_macro2::TokenStream {
match key {
Key::NotScoped(s) => {
quote! { #s }
}
Key::Scoped(s) => {
quote! {
format!("{}.{}", std::module_path!().replace("::", "."), #s)
}
}
}
}
fn key_to_quoted(key: Key, labels: Option<Labels>) -> proc_macro2::TokenStream {
let name = quote_key_name(key);
match labels {
None => quote! { metrics::KeyData::from_name(#name) },
Some(labels) => match labels {
Labels::Inline(pairs) => {
let labels = pairs
.into_iter()
.map(|(key, val)| quote! { metrics::Label::new(#key, #val) });
quote! { metrics::KeyData::from_name_and_labels(#name, vec![#(#labels),*]) }
}
Labels::Existing(e) => quote! { metrics::KeyData::from_name_and_labels(#name, #e) },
},
}
}
fn parse_labels(input: &mut ParseStream) -> Result<Option<Labels>> {
if input.is_empty() {
return Ok(None);
}
if !input.peek(Token![,]) {
// This is a hack to generate the proper error message for parsing the comma next without
// actually parsing it and thus removing it from the parse stream. Just makes the following
// code a bit cleaner.
input
.parse::<Token![,]>()
.map_err(|e| Error::new(e.span(), "expected labels, but comma not found"))?;
}
// Two possible states for labels: references to a label iterator, or key/value pairs.
//
// We check to see if we have the ", key =>" part, which tells us that we're taking in key/value
// pairs. If we don't have that, we check to see if we have a ", <expr>" part, which could be
// a labels iterator being handed to us. The type checking for `IntoLabels` in `metrics::Recorder`
// will do the heavy lifting from that point forward.
if input.peek(Token![,]) && input.peek2(LitStr) && input.peek3(Token![=>]) {
let mut labels = Vec::new();
loop {
if input.is_empty() {
break;
}
input.parse::<Token![,]>()?;
if input.is_empty() {
break;
}
let lkey: LitStr = input.parse()?;
input.parse::<Token![=>]>()?;
let lvalue: Expr = input.parse()?;
labels.push((lkey, lvalue));
}
return Ok(Some(Labels::Inline(labels)));
}
// Otherwise, it has to be an expression, possibly with a trailing comma.
input.parse::<Token![,]>()?;
// Unless it was an expression - clear the trailing comma.
if input.is_empty() {
return Ok(None);
}
let lvalue: Expr = input.parse().map_err(|e| {
Error::new(
e.span(),
"expected label expression, but expression not found",
)
})?;
// Expression can end with a trailing comma, handle it.
if input.peek(Token![,]) {
input.parse::<Token![,]>()?;
}
Ok(Some(Labels::Existing(lvalue)))
}

138
metrics-macros/src/tests.rs Normal file
View File

@ -0,0 +1,138 @@
use syn::parse_quote;
use super::*;
#[test]
fn test_quote_key_name_scoped() {
let stream = quote_key_name(Key::Scoped(parse_quote! {"qwerty"}));
let expected =
"format ! (\"{}.{}\" , std :: module_path ! () . replace (\"::\" , \".\") , \"qwerty\")";
assert_eq!(stream.to_string(), expected);
}
#[test]
fn test_quote_key_name_not_scoped() {
let stream = quote_key_name(Key::NotScoped(parse_quote! {"qwerty"}));
let expected = "\"qwerty\"";
assert_eq!(stream.to_string(), expected);
}
#[test]
fn test_get_expanded_registration() {
let stream = get_expanded_registration(
"mytype",
Key::NotScoped(parse_quote! {"mykeyname"}),
None,
None,
);
let expected = concat!(
"{ if let Some (recorder) = metrics :: try_recorder () { ",
"recorder . register_mytype (",
"metrics :: Key :: Owned (metrics :: KeyData :: from_name (\"mykeyname\")) , ",
"None",
") ; ",
"} }",
);
assert_eq!(stream.to_string(), expected);
}
/// If there are no dynamic labels - generate an invocation with caching.
#[test]
fn test_get_expanded_callsite_fast_path() {
let stream = get_expanded_callsite(
"mytype",
"myop",
Key::NotScoped(parse_quote! {"mykeyname"}),
None,
quote! { 1 },
);
let expected = concat!(
"{ ",
"static CACHED_KEY : metrics :: OnceKeyData = metrics :: OnceKeyData :: new () ; ",
"if let Some (recorder) = metrics :: try_recorder () { ",
"let key = CACHED_KEY . get_or_init (|| { ",
"metrics :: KeyData :: from_name (\"mykeyname\") ",
"}) ; ",
"recorder . myop_mytype (metrics :: Key :: Borrowed (& key) , 1) ; ",
"} }",
);
assert_eq!(stream.to_string(), expected);
}
/// If there are dynamic labels - generate a direct invocation.
#[test]
fn test_get_expanded_callsite_regular_path() {
let stream = get_expanded_callsite(
"mytype",
"myop",
Key::NotScoped(parse_quote! {"mykeyname"}),
Some(Labels::Existing(parse_quote! { mylabels })),
quote! { 1 },
);
let expected = concat!(
"{ ",
"if let Some (recorder) = metrics :: try_recorder () { ",
"recorder . myop_mytype (",
"metrics :: Key :: Owned (metrics :: KeyData :: from_name_and_labels (\"mykeyname\" , mylabels)) , ",
"1",
") ; ",
"} }",
);
assert_eq!(stream.to_string(), expected);
}
#[test]
fn test_key_to_quoted_no_labels() {
let stream = key_to_quoted(Key::NotScoped(parse_quote! {"mykeyname"}), None);
let expected = "metrics :: KeyData :: from_name (\"mykeyname\")";
assert_eq!(stream.to_string(), expected);
}
#[test]
fn test_key_to_quoted_existing_labels() {
let stream = key_to_quoted(
Key::NotScoped(parse_quote! {"mykeyname"}),
Some(Labels::Existing(Expr::Path(parse_quote! { mylabels }))),
);
let expected = "metrics :: KeyData :: from_name_and_labels (\"mykeyname\" , mylabels)";
assert_eq!(stream.to_string(), expected);
}
/// Registration can only operate on static labels (i.e. labels baked into the
/// Key).
#[test]
fn test_key_to_quoted_inline_labels() {
let stream = key_to_quoted(
Key::NotScoped(parse_quote! {"mykeyname"}),
Some(Labels::Inline(vec![
(parse_quote! {"mylabel1"}, parse_quote! { mylabel1 }),
(parse_quote! {"mylabel2"}, parse_quote! { "mylabel2" }),
])),
);
let expected = concat!(
"metrics :: KeyData :: from_name_and_labels (\"mykeyname\" , vec ! [",
"metrics :: Label :: new (\"mylabel1\" , mylabel1) , ",
"metrics :: Label :: new (\"mylabel2\" , \"mylabel2\")",
"])"
);
assert_eq!(stream.to_string(), expected);
}
#[test]
fn test_key_to_quoted_inline_labels_empty() {
let stream = key_to_quoted(
Key::NotScoped(parse_quote! {"mykeyname"}),
Some(Labels::Inline(vec![])),
);
let expected = concat!(
"metrics :: KeyData :: from_name_and_labels (\"mykeyname\" , vec ! [",
"])"
);
assert_eq!(stream.to_string(), expected);
}

View File

@ -1,3 +0,0 @@
/target
**/*.rs.bk
Cargo.lock

View File

@ -1,11 +0,0 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [0.1.0] - 2019-07-29
### Added
- Effective birth of the crate.

View File

@ -1,30 +0,0 @@
# The Code of Conduct
This document is based on the [Rust Code of Conduct](https://www.rust-lang.org/conduct.html) and outlines the standard of conduct which is both expected and enforced as part of this project.
## Conduct
* We are committed to providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, nationality, or other similar characteristic.
* Avoid using overtly sexual nicknames or other nicknames that might detract from a friendly, safe and welcoming environment for all.
* Please be kind and courteous. There's no need to be mean or rude.
* Respect that people have differences of opinion and that every design or implementation choice carries a trade-off and numerous costs. There is seldom a right answer.
* Please keep unstructured critique to a minimum. If you have solid ideas you want to experiment with, make a fork and see how it works.
* We will exclude you from interaction if you insult, demean or harass anyone. That is not welcome behaviour. We interpret the term "harassment" as including the definition in the [Citizen Code of Conduct](http://citizencodeofconduct.org/); if you have any lack of clarity about what might be included in that concept, please read their definition. In particular, we don't tolerate behavior that excludes people in socially marginalized groups.
* Private harassment is also unacceptable. No matter who you are, if you feel you have been or are being harassed or made uncomfortable by a community member, please contact one of the repository Owners immediately. Whether you're a regular contributor or a newcomer, we care about making this community a safe place for you and we've got your back.
* Likewise any spamming, trolling, flaming, baiting or other attention-stealing behaviour is not welcome.
## Moderation
These are the policies for upholding our community's standards of conduct. If you feel that a thread needs moderation, please use the contact information above, or mention @tobz or @LucioFranco in the thread.
1. Remarks that violate this Code of Conduct, including hateful, hurtful, oppressive, or exclusionary remarks, are not allowed. (Cursing is allowed, but never targeting another user, and never in a hateful manner.)
2. Remarks that moderators find inappropriate, whether listed in the code of conduct or not, are also not allowed.
In the Rust community we strive to go the extra step to look out for each other. Don't just aim to be technically unimpeachable, try to be your best self. In particular, avoid flirting with offensive or sensitive issues, particularly if they're off-topic; this all too often leads to unnecessary fights, hurt feelings, and damaged trust; worse, it can drive people away from the community entirely.
And if someone takes issue with something you said or did, resist the urge to be defensive. Just stop doing what it was they complained about and apologize. Even if you feel you were misinterpreted or unfairly accused, chances are good there was something you could've communicated better — remember that it's your responsibility to make your fellow Rustaceans comfortable. Everyone wants to get along and we are all here first and foremost because we want to talk about cool technology. You will find that people will be eager to assume good intent and forgive as long as you earn their trust.
## Contacts:
- Toby Lawrence ([toby@nuclearfurnace.com](mailto:toby@nuclearfurnace.com))
- Lucio Franco ([luciofranco14@gmail.com](mailto:luciofranco14@gmail.com))

@@ -1,22 +0,0 @@
[package]
name = "metrics-observer-json"
version = "0.1.1"
authors = ["Toby Lawrence <toby@nuclearfurnace.com>"]
edition = "2018"
license = "MIT"
description = "A metrics-core compatible observer that outputs JSON."
homepage = "https://github.com/metrics-rs/metrics"
repository = "https://github.com/metrics-rs/metrics"
documentation = "https://docs.rs/metrics-observer-json"
readme = "README.md"
categories = ["development-tools::debugging"]
keywords = ["metrics", "telemetry", "json"]
[dependencies]
metrics-core = { path = "../metrics-core", version = "^0.5" }
metrics-util = { path = "../metrics-util", version = "^0.3" }
hdrhistogram = { version = "^6.3", default-features = false }
serde_json = "^1.0"

@@ -1,18 +0,0 @@
# metrics-observer-json
[![conduct-badge][]][conduct] [![downloads-badge][] ![release-badge][]][crate] [![docs-badge][]][docs] [![license-badge][]](#license)
[conduct-badge]: https://img.shields.io/badge/%E2%9D%A4-code%20of%20conduct-blue.svg
[downloads-badge]: https://img.shields.io/crates/d/metrics-observer-json.svg
[release-badge]: https://img.shields.io/crates/v/metrics-observer-json.svg
[license-badge]: https://img.shields.io/crates/l/metrics-observer-json.svg
[docs-badge]: https://docs.rs/metrics-observer-json/badge.svg
[conduct]: https://github.com/metrics-rs/metrics/blob/master/CODE_OF_CONDUCT.md
[crate]: https://crates.io/crates/metrics-observer-json
[docs]: https://docs.rs/metrics-observer-json
__metrics-observer-json__ is a metrics-core compatible observer that outputs JSON.
## code of conduct
**NOTE**: All conversations and contributions to this project shall adhere to the [Code of Conduct][conduct].

@@ -1,187 +0,0 @@
//! Observes metrics in JSON format.
//!
//! Metric scopes are used to provide the hierarchy of metrics. As an example, for a
//! snapshot with two metrics — `server.msgs_received` and `server.msgs_sent` — we would
//! expect to see this output:
//!
//! ```c
//! {"server":{"msgs_received":42,"msgs_sent":13}}
//! ```
//!
//! If we added another metric — `configuration_reloads` — we would expect to see:
//!
//! ```c
//! {"configuration_reloads":2,"server":{"msgs_received":42,"msgs_sent":13}}
//! ```
//!
//! Metrics are sorted alphabetically.
//!
//! ## Histograms
//!
//! Histograms are rendered with a configurable set of quantiles that are provided when creating an
//! instance of `JsonBuilder`. They are formatted using human-readable labels when displayed to
//! the user. For example, 0.0 is rendered as "min", 1.0 as "max", and anything in between using
//! the common "pXXX" format, i.e. a quantile of 0.5 (the 50th percentile) is rendered as p50, a
//! quantile of 0.999 (the 99.9th percentile) as p999, and so on.
//!
//! All histograms include their sample count in the output.
//!
//! ```c
//! {"connect_time_count":15,"connect_time_min":1334,"connect_time_p50":1934,
//! "connect_time_p99":5330,"connect_time_max":139389}
//! ```
//!
#![deny(missing_docs)]
use hdrhistogram::Histogram;
use metrics_core::{Builder, Drain, Key, Label, Observer};
use metrics_util::{parse_quantiles, MetricsTree, Quantile};
use std::collections::HashMap;
/// Builder for [`JsonObserver`].
pub struct JsonBuilder {
quantiles: Vec<Quantile>,
pretty: bool,
}
impl JsonBuilder {
/// Creates a new [`JsonBuilder`] with default values.
pub fn new() -> Self {
let quantiles = parse_quantiles(&[0.0, 0.5, 0.9, 0.95, 0.99, 0.999, 1.0]);
Self {
quantiles,
pretty: false,
}
}
/// Sets the quantiles to use when rendering histograms.
///
/// Quantiles represent a scale of 0 to 1, where percentiles represent a scale of 1 to 100, so
/// a quantile of 0.99 is the 99th percentile, and a quantile of 0.999 is the 99.9th percentile.
///
/// By default, the quantiles will be set to: 0.0, 0.5, 0.9, 0.95, 0.99, 0.999, and 1.0.
pub fn set_quantiles(mut self, quantiles: &[f64]) -> Self {
self.quantiles = parse_quantiles(quantiles);
self
}
/// Sets whether or not to render the JSON as "pretty."
///
/// Pretty JSON refers to the formatting and indentation, where different fields are on
/// different lines, and depending on their depth from the root object, are indented.
///
/// By default, pretty mode is not enabled.
pub fn set_pretty_json(mut self, pretty: bool) -> Self {
self.pretty = pretty;
self
}
}
impl Builder for JsonBuilder {
type Output = JsonObserver;
fn build(&self) -> Self::Output {
JsonObserver {
quantiles: self.quantiles.clone(),
pretty: self.pretty,
tree: MetricsTree::default(),
histos: HashMap::new(),
}
}
}
impl Default for JsonBuilder {
fn default() -> Self {
Self::new()
}
}
/// Observes metrics in JSON format.
pub struct JsonObserver {
pub(crate) quantiles: Vec<Quantile>,
pub(crate) pretty: bool,
pub(crate) tree: MetricsTree,
pub(crate) histos: HashMap<Key, Histogram<u64>>,
}
impl Observer for JsonObserver {
fn observe_counter(&mut self, key: Key, value: u64) {
let (levels, name) = key_to_parts(key);
self.tree.insert_value(levels, name, value);
}
fn observe_gauge(&mut self, key: Key, value: i64) {
let (levels, name) = key_to_parts(key);
self.tree.insert_value(levels, name, value);
}
fn observe_histogram(&mut self, key: Key, values: &[u64]) {
let entry = self
.histos
.entry(key)
.or_insert_with(|| Histogram::<u64>::new(3).expect("failed to create histogram"));
for value in values {
entry
.record(*value)
.expect("failed to observe histogram value");
}
}
}
impl Drain<String> for JsonObserver {
fn drain(&mut self) -> String {
for (key, h) in self.histos.drain() {
let (levels, name) = key_to_parts(key);
let values = hist_to_values(name, h.clone(), &self.quantiles);
self.tree.insert_values(levels, values);
}
let result = if self.pretty {
serde_json::to_string_pretty(&self.tree)
} else {
serde_json::to_string(&self.tree)
};
let rendered = result.expect("failed to render json output");
self.tree.clear();
rendered
}
}
fn key_to_parts(key: Key) -> (Vec<String>, String) {
let (name, labels) = key.into_parts();
let mut parts = name.split('.').map(ToOwned::to_owned).collect::<Vec<_>>();
let name = parts.pop().expect("name didn't have a single part");
let labels = labels
.into_iter()
.map(Label::into_parts)
.map(|(k, v)| format!("{}=\"{}\"", k, v))
.collect::<Vec<_>>()
.join(",");
let label = if labels.is_empty() {
String::new()
} else {
format!("{{{}}}", labels)
};
let fname = format!("{}{}", name, label);
(parts, fname)
}
fn hist_to_values(
name: String,
hist: Histogram<u64>,
quantiles: &[Quantile],
) -> Vec<(String, u64)> {
let mut values = Vec::new();
values.push((format!("{} count", name), hist.len()));
for quantile in quantiles {
let value = hist.value_at_quantile(quantile.value());
values.push((format!("{} {}", name, quantile.label()), value));
}
values
}
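The "pXXX" labeling described in the module docs above comes from metrics-util's `Quantile` type; the function below is a simplified stand-in written for illustration (the name `quantile_label` and its exact string handling are assumptions, not the crate's code):

```rust
/// Simplified sketch of mapping quantile values to human-readable labels:
/// 0.0 -> "min", 1.0 -> "max", anything in between -> "pXXX".
fn quantile_label(q: f64) -> String {
    if q <= 0.0 {
        "min".to_string()
    } else if q >= 1.0 {
        "max".to_string()
    } else {
        // `Display` for f64 prints the shortest round-tripping form,
        // so 0.5 -> "0.5" and 0.999 -> "0.999".
        let digits = format!("{}", q).trim_start_matches("0.").to_string();
        if digits.len() == 1 {
            // Pad single digits so a quantile of 0.5 reads as the 50th percentile.
            format!("p{}0", digits)
        } else {
            format!("p{}", digits)
        }
    }
}

fn main() {
    assert_eq!(quantile_label(0.0), "min");
    assert_eq!(quantile_label(0.5), "p50");
    assert_eq!(quantile_label(0.95), "p95");
    assert_eq!(quantile_label(0.999), "p999");
    assert_eq!(quantile_label(1.0), "max");
}
```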

@@ -1,3 +0,0 @@
/target
**/*.rs.bk
Cargo.lock

@@ -1,30 +0,0 @@
# The Code of Conduct
This document is based on the [Rust Code of Conduct](https://www.rust-lang.org/conduct.html) and outlines the standard of conduct which is both expected and enforced as part of this project.
## Conduct
* We are committed to providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, nationality, or other similar characteristic.
* Avoid using overtly sexual nicknames or other nicknames that might detract from a friendly, safe and welcoming environment for all.
* Please be kind and courteous. There's no need to be mean or rude.
* Respect that people have differences of opinion and that every design or implementation choice carries a trade-off and numerous costs. There is seldom a right answer.
* Please keep unstructured critique to a minimum. If you have solid ideas you want to experiment with, make a fork and see how it works.
* We will exclude you from interaction if you insult, demean or harass anyone. That is not welcome behaviour. We interpret the term "harassment" as including the definition in the [Citizen Code of Conduct](http://citizencodeofconduct.org/); if you have any lack of clarity about what might be included in that concept, please read their definition. In particular, we don't tolerate behavior that excludes people in socially marginalized groups.
* Private harassment is also unacceptable. No matter who you are, if you feel you have been or are being harassed or made uncomfortable by a community member, please contact one of the repository Owners immediately. Whether you're a regular contributor or a newcomer, we care about making this community a safe place for you and we've got your back.
* Likewise any spamming, trolling, flaming, baiting or other attention-stealing behaviour is not welcome.
## Moderation
These are the policies for upholding our community's standards of conduct. If you feel that a thread needs moderation, please use the contact information above, or mention @tobz or @LucioFranco in the thread.
1. Remarks that violate this Code of Conduct, including hateful, hurtful, oppressive, or exclusionary remarks, are not allowed. (Cursing is allowed, but never targeting another user, and never in a hateful manner.)
2. Remarks that moderators find inappropriate, whether listed in the code of conduct or not, are also not allowed.
In the Rust community we strive to go the extra step to look out for each other. Don't just aim to be technically unimpeachable, try to be your best self. In particular, avoid flirting with offensive or sensitive issues, particularly if they're off-topic; this all too often leads to unnecessary fights, hurt feelings, and damaged trust; worse, it can drive people away from the community entirely.
And if someone takes issue with something you said or did, resist the urge to be defensive. Just stop doing what it was they complained about and apologize. Even if you feel you were misinterpreted or unfairly accused, chances are good there was something you could've communicated better — remember that it's your responsibility to make your fellow Rustaceans comfortable. Everyone wants to get along and we are all here first and foremost because we want to talk about cool technology. You will find that people will be eager to assume good intent and forgive as long as you earn their trust.
## Contacts:
- Toby Lawrence ([toby@nuclearfurnace.com](mailto:toby@nuclearfurnace.com))
- Lucio Franco ([luciofranco14@gmail.com](mailto:luciofranco14@gmail.com))

@@ -1,21 +0,0 @@
[package]
name = "metrics-observer-prometheus"
version = "0.1.4"
authors = ["Toby Lawrence <toby@nuclearfurnace.com>"]
edition = "2018"
license = "MIT"
description = "A metrics-core compatible observer that outputs the Prometheus exposition output."
homepage = "https://github.com/metrics-rs/metrics"
repository = "https://github.com/metrics-rs/metrics"
documentation = "https://docs.rs/metrics-observer-prometheus"
readme = "README.md"
categories = ["development-tools::debugging"]
keywords = ["metrics", "telemetry", "prometheus"]
[dependencies]
metrics-core = { path = "../metrics-core", version = "^0.5" }
metrics-util = { path = "../metrics-util", version = "^0.3" }
hdrhistogram = { version = "^6.3", default-features = false }

@@ -1,18 +0,0 @@
# metrics-observer-prometheus
[![conduct-badge][]][conduct] [![downloads-badge][] ![release-badge][]][crate] [![docs-badge][]][docs] [![license-badge][]](#license)
[conduct-badge]: https://img.shields.io/badge/%E2%9D%A4-code%20of%20conduct-blue.svg
[downloads-badge]: https://img.shields.io/crates/d/metrics-observer-prometheus.svg
[release-badge]: https://img.shields.io/crates/v/metrics-observer-prometheus.svg
[license-badge]: https://img.shields.io/crates/l/metrics-observer-prometheus.svg
[docs-badge]: https://docs.rs/metrics-observer-prometheus/badge.svg
[conduct]: https://github.com/metrics-rs/metrics/blob/master/CODE_OF_CONDUCT.md
[crate]: https://crates.io/crates/metrics-observer-prometheus
[docs]: https://docs.rs/metrics-observer-prometheus
__metrics-observer-prometheus__ is a metrics-core compatible observer that outputs the Prometheus exposition format.
## code of conduct
**NOTE**: All conversations and contributions to this project shall adhere to the [Code of Conduct][conduct].

@@ -1,298 +0,0 @@
//! Records metrics in the Prometheus exposition format.
#![deny(missing_docs)]
use hdrhistogram::Histogram;
use metrics_core::{Builder, Drain, Key, Label, Observer};
use metrics_util::{parse_quantiles, Quantile};
use std::iter::FromIterator;
use std::{collections::HashMap, time::SystemTime};
/// Builder for [`PrometheusObserver`].
pub struct PrometheusBuilder {
quantiles: Vec<Quantile>,
buckets: Vec<u64>,
buckets_by_name: Option<HashMap<String, Vec<u64>>>,
}
impl PrometheusBuilder {
/// Creates a new [`PrometheusBuilder`] with default values.
pub fn new() -> Self {
let quantiles = parse_quantiles(&[0.0, 0.5, 0.9, 0.95, 0.99, 0.999, 1.0]);
Self {
quantiles,
buckets: vec![],
buckets_by_name: None,
}
}
/// Sets the quantiles to use when rendering histograms.
///
/// Quantiles represent a scale of 0 to 1, where percentiles represent a scale of 1 to 100, so
/// a quantile of 0.99 is the 99th percentile, and a quantile of 0.999 is the 99.9th percentile.
///
/// By default, the quantiles will be set to: 0.0, 0.5, 0.9, 0.95, 0.99, 0.999, and 1.0.
pub fn set_quantiles(mut self, quantiles: &[f64]) -> Self {
self.quantiles = parse_quantiles(quantiles);
self
}
/// Sets the buckets to use when rendering histograms.
///
/// Bucket values represent the upper bound of each bucket.
///
/// This option changes the observer's output for histogram-type metrics from summaries
/// into Prometheus histograms.
pub fn set_buckets(mut self, values: &[u64]) -> Self {
self.buckets = values.to_vec();
self
}
/// Sets the buckets for a specific metric, overriding the defaults.
///
/// Matches on the metric name's suffix; the longest matching suffix wins.
///
/// This option changes the observer's output for matching histogram-type metrics from
/// summaries into Prometheus histograms, and takes precedence over `set_buckets`.
pub fn set_buckets_for_metric(mut self, name: &str, values: &[u64]) -> Self {
let buckets = self.buckets_by_name.get_or_insert_with(|| HashMap::new());
buckets.insert(name.to_owned(), values.to_vec());
self
}
}
impl Builder for PrometheusBuilder {
type Output = PrometheusObserver;
fn build(&self) -> Self::Output {
PrometheusObserver {
quantiles: self.quantiles.clone(),
buckets: self.buckets.clone(),
histos: HashMap::new(),
output: get_prom_expo_header(),
counters: HashMap::new(),
gauges: HashMap::new(),
buckets_by_name: self.buckets_by_name.clone(),
}
}
}
impl Default for PrometheusBuilder {
fn default() -> Self {
Self::new()
}
}
/// Records metrics in the Prometheus exposition format.
pub struct PrometheusObserver {
pub(crate) quantiles: Vec<Quantile>,
pub(crate) buckets: Vec<u64>,
pub(crate) histos: HashMap<String, HashMap<Vec<String>, (u64, Histogram<u64>)>>,
pub(crate) output: String,
pub(crate) counters: HashMap<String, HashMap<Vec<String>, u64>>,
pub(crate) gauges: HashMap<String, HashMap<Vec<String>, i64>>,
pub(crate) buckets_by_name: Option<HashMap<String, Vec<u64>>>,
}
impl Observer for PrometheusObserver {
fn observe_counter(&mut self, key: Key, value: u64) {
let (name, labels) = key_to_parts(key);
let entry = self
.counters
.entry(name)
.or_insert_with(|| HashMap::new())
.entry(labels)
.or_insert_with(|| 0);
*entry += value;
}
fn observe_gauge(&mut self, key: Key, value: i64) {
let (name, labels) = key_to_parts(key);
let entry = self
.gauges
.entry(name)
.or_insert_with(|| HashMap::new())
.entry(labels)
.or_insert_with(|| 0);
*entry = value;
}
fn observe_histogram(&mut self, key: Key, values: &[u64]) {
let (name, labels) = key_to_parts(key);
let entry = self
.histos
.entry(name)
.or_insert_with(|| HashMap::new())
.entry(labels)
.or_insert_with(|| {
let h = Histogram::<u64>::new(3).expect("failed to create histogram");
(0, h)
});
let (sum, h) = entry;
for value in values {
h.record(*value).expect("failed to observe histogram value");
*sum += *value;
}
}
}
impl Drain<String> for PrometheusObserver {
fn drain(&mut self) -> String {
let mut output: String = self.output.drain(..).collect();
for (name, mut by_labels) in self.counters.drain() {
output.push_str("\n# TYPE ");
output.push_str(name.as_str());
output.push_str(" counter\n");
for (labels, value) in by_labels.drain() {
let full_name = render_labeled_name(&name, &labels);
output.push_str(full_name.as_str());
output.push_str(" ");
output.push_str(value.to_string().as_str());
output.push_str("\n");
}
}
for (name, mut by_labels) in self.gauges.drain() {
output.push_str("\n# TYPE ");
output.push_str(name.as_str());
output.push_str(" gauge\n");
for (labels, value) in by_labels.drain() {
let full_name = render_labeled_name(&name, &labels);
output.push_str(full_name.as_str());
output.push_str(" ");
output.push_str(value.to_string().as_str());
output.push_str("\n");
}
}
let mut sorted_overrides = self
.buckets_by_name
.as_ref()
.map(|h| Vec::from_iter(h.iter()))
.unwrap_or_else(|| vec![]);
sorted_overrides.sort_by(|(a, _), (b, _)| b.len().cmp(&a.len()));
for (name, mut by_labels) in self.histos.drain() {
let buckets = sorted_overrides
.iter()
.find_map(|(k, buckets)| {
if name.ends_with(*k) {
Some(*buckets)
} else {
None
}
})
.unwrap_or(&self.buckets);
let use_quantiles = buckets.is_empty();
output.push_str("\n# TYPE ");
output.push_str(name.as_str());
output.push_str(" ");
output.push_str(if use_quantiles {
"summary"
} else {
"histogram"
});
output.push_str("\n");
for (labels, sh) in by_labels.drain() {
let (sum, hist) = sh;
if use_quantiles {
for quantile in &self.quantiles {
let value = hist.value_at_quantile(quantile.value());
let mut labels = labels.clone();
labels.push(format!("quantile=\"{}\"", quantile.value()));
let full_name = render_labeled_name(&name, &labels);
output.push_str(full_name.as_str());
output.push_str(" ");
output.push_str(value.to_string().as_str());
output.push_str("\n");
}
} else {
for bucket in buckets {
let value = hist.count_between(0, *bucket);
let mut labels = labels.clone();
labels.push(format!("le=\"{}\"", bucket));
let bucket_name = format!("{}_bucket", name);
let full_name = render_labeled_name(&bucket_name, &labels);
output.push_str(full_name.as_str());
output.push_str(" ");
output.push_str(value.to_string().as_str());
output.push_str("\n");
}
let mut labels = labels.clone();
labels.push("le=\"+Inf\"".to_owned());
let bucket_name = format!("{}_bucket", name);
let full_name = render_labeled_name(&bucket_name, &labels);
output.push_str(full_name.as_str());
output.push_str(" ");
output.push_str(hist.len().to_string().as_str());
output.push_str("\n");
}
let sum_name = format!("{}_sum", name);
let full_sum_name = render_labeled_name(&sum_name, &labels);
output.push_str(full_sum_name.as_str());
output.push_str(" ");
output.push_str(sum.to_string().as_str());
output.push_str("\n");
let count_name = format!("{}_count", name);
let full_count_name = render_labeled_name(&count_name, &labels);
output.push_str(full_count_name.as_str());
output.push_str(" ");
output.push_str(hist.len().to_string().as_str());
output.push_str("\n");
}
}
output
}
}
fn key_to_parts(key: Key) -> (String, Vec<String>) {
let (name, labels) = key.into_parts();
let sanitize = |c| c == '.' || c == '=' || c == '{' || c == '}' || c == '+' || c == '-';
let name = name.replace(sanitize, "_");
let labels = labels
.into_iter()
.map(Label::into_parts)
.map(|(k, v)| {
format!(
"{}=\"{}\"",
k,
v.replace("\\", "\\\\")
.replace("\"", "\\\"")
.replace("\n", "\\n")
)
})
.collect();
(name, labels)
}
fn render_labeled_name(name: &str, labels: &[String]) -> String {
let mut output = name.to_string();
if !labels.is_empty() {
let joined = labels.join(",");
output.push_str("{");
output.push_str(&joined);
output.push_str("}");
}
output
}
fn get_prom_expo_header() -> String {
let ts = SystemTime::now()
.duration_since(SystemTime::UNIX_EPOCH)
.map(|d| d.as_secs())
.unwrap_or(0);
format!(
"# metrics snapshot (ts={}) (prometheus exposition format)",
ts
)
}
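The name sanitization in `key_to_parts` and the label rendering in `render_labeled_name` above can be exercised in isolation. This sketch copies just those two pieces (labels pre-rendered as `key="value"` strings, as the observer does; `sanitize_name` is a name invented here for the inline closure):

```rust
/// Replace characters that are invalid in Prometheus metric names.
fn sanitize_name(name: &str) -> String {
    name.replace(
        |c: char| matches!(c, '.' | '=' | '{' | '}' | '+' | '-'),
        "_",
    )
}

/// Append a `{k="v",...}` label set to a metric name, if any labels exist.
fn render_labeled_name(name: &str, labels: &[String]) -> String {
    let mut output = name.to_string();
    if !labels.is_empty() {
        output.push('{');
        output.push_str(&labels.join(","));
        output.push('}');
    }
    output
}

fn main() {
    // Dotted scopes become underscores in the exposition format.
    assert_eq!(sanitize_name("server.msgs_received"), "server_msgs_received");

    let labels = vec![r#"service="http""#.to_string()];
    assert_eq!(
        render_labeled_name("requests_total", &labels),
        r#"requests_total{service="http"}"#
    );

    // No labels: the name is emitted bare.
    assert_eq!(render_labeled_name("uptime", &[]), "uptime");
}
```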

@@ -1,3 +0,0 @@
/target
**/*.rs.bk
Cargo.lock

@@ -1,30 +0,0 @@
# The Code of Conduct
This document is based on the [Rust Code of Conduct](https://www.rust-lang.org/conduct.html) and outlines the standard of conduct which is both expected and enforced as part of this project.
## Conduct
* We are committed to providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, nationality, or other similar characteristic.
* Avoid using overtly sexual nicknames or other nicknames that might detract from a friendly, safe and welcoming environment for all.
* Please be kind and courteous. There's no need to be mean or rude.
* Respect that people have differences of opinion and that every design or implementation choice carries a trade-off and numerous costs. There is seldom a right answer.
* Please keep unstructured critique to a minimum. If you have solid ideas you want to experiment with, make a fork and see how it works.
* We will exclude you from interaction if you insult, demean or harass anyone. That is not welcome behaviour. We interpret the term "harassment" as including the definition in the [Citizen Code of Conduct](http://citizencodeofconduct.org/); if you have any lack of clarity about what might be included in that concept, please read their definition. In particular, we don't tolerate behavior that excludes people in socially marginalized groups.
* Private harassment is also unacceptable. No matter who you are, if you feel you have been or are being harassed or made uncomfortable by a community member, please contact one of the repository Owners immediately. Whether you're a regular contributor or a newcomer, we care about making this community a safe place for you and we've got your back.
* Likewise any spamming, trolling, flaming, baiting or other attention-stealing behaviour is not welcome.
## Moderation
These are the policies for upholding our community's standards of conduct. If you feel that a thread needs moderation, please use the contact information above, or mention @tobz or @LucioFranco in the thread.
1. Remarks that violate this Code of Conduct, including hateful, hurtful, oppressive, or exclusionary remarks, are not allowed. (Cursing is allowed, but never targeting another user, and never in a hateful manner.)
2. Remarks that moderators find inappropriate, whether listed in the code of conduct or not, are also not allowed.
In the Rust community we strive to go the extra step to look out for each other. Don't just aim to be technically unimpeachable, try to be your best self. In particular, avoid flirting with offensive or sensitive issues, particularly if they're off-topic; this all too often leads to unnecessary fights, hurt feelings, and damaged trust; worse, it can drive people away from the community entirely.
And if someone takes issue with something you said or did, resist the urge to be defensive. Just stop doing what it was they complained about and apologize. Even if you feel you were misinterpreted or unfairly accused, chances are good there was something you could've communicated better — remember that it's your responsibility to make your fellow Rustaceans comfortable. Everyone wants to get along and we are all here first and foremost because we want to talk about cool technology. You will find that people will be eager to assume good intent and forgive as long as you earn their trust.
## Contacts:
- Toby Lawrence ([toby@nuclearfurnace.com](mailto:toby@nuclearfurnace.com))
- Lucio Franco ([luciofranco14@gmail.com](mailto:luciofranco14@gmail.com))

@@ -1,22 +0,0 @@
[package]
name = "metrics-observer-yaml"
version = "0.1.1"
authors = ["Toby Lawrence <toby@nuclearfurnace.com>"]
edition = "2018"
license = "MIT"
description = "A metrics-core compatible observer that outputs YAML."
homepage = "https://github.com/metrics-rs/metrics"
repository = "https://github.com/metrics-rs/metrics"
documentation = "https://docs.rs/metrics-observer-yaml"
readme = "README.md"
categories = ["development-tools::debugging"]
keywords = ["metrics", "telemetry", "yaml"]
[dependencies]
metrics-core = { path = "../metrics-core", version = "^0.5" }
metrics-util = { path = "../metrics-util", version = "^0.3" }
hdrhistogram = { version = "^6.3", default-features = false }
serde_yaml = "^0.8"

@@ -1,18 +0,0 @@
# metrics-observer-yaml
[![conduct-badge][]][conduct] [![downloads-badge][] ![release-badge][]][crate] [![docs-badge][]][docs] [![license-badge][]](#license)
[conduct-badge]: https://img.shields.io/badge/%E2%9D%A4-code%20of%20conduct-blue.svg
[downloads-badge]: https://img.shields.io/crates/d/metrics-observer-yaml.svg
[release-badge]: https://img.shields.io/crates/v/metrics-observer-yaml.svg
[license-badge]: https://img.shields.io/crates/l/metrics-observer-yaml.svg
[docs-badge]: https://docs.rs/metrics-observer-yaml/badge.svg
[conduct]: https://github.com/metrics-rs/metrics/blob/master/CODE_OF_CONDUCT.md
[crate]: https://crates.io/crates/metrics-observer-yaml
[docs]: https://docs.rs/metrics-observer-yaml
__metrics-observer-yaml__ is a metrics-core compatible observer that outputs YAML.
## code of conduct
**NOTE**: All conversations and contributions to this project shall adhere to the [Code of Conduct][conduct].

@@ -1,173 +0,0 @@
//! Observes metrics in YAML format.
//!
//! Metric scopes are used to provide the hierarchy and indentation of metrics. As an example, for
//! a snapshot with two metrics — `server.msgs_received` and `server.msgs_sent` — we would
//! expect to see this output:
//!
//! ```c
//! server:
//! msgs_received: 42
//! msgs_sent: 13
//! ```
//!
//! If we added another metric — `configuration_reloads` — we would expect to see:
//!
//! ```c
//! configuration_reloads: 2
//! server:
//! msgs_received: 42
//! msgs_sent: 13
//! ```
//!
//! Metrics are sorted alphabetically.
//!
//! ## Histograms
//!
//! Histograms are rendered with a configurable set of quantiles that are provided when creating an
//! instance of `YamlBuilder`. They are formatted using human-readable labels when displayed to
//! the user. For example, 0.0 is rendered as "min", 1.0 as "max", and anything in between using
//! the common "pXXX" format, i.e. a quantile of 0.5 (the 50th percentile) is rendered as p50, a
//! quantile of 0.999 (the 99.9th percentile) as p999, and so on.
//!
//! All histograms include their sample count in the output.
//!
//! ```c
//! connect_time count: 15
//! connect_time min: 1334
//! connect_time p50: 1934
//! connect_time p99: 5330
//! connect_time max: 139389
//! ```
//!
#![deny(missing_docs)]
use hdrhistogram::Histogram;
use metrics_core::{Builder, Drain, Key, Label, Observer};
use metrics_util::{parse_quantiles, MetricsTree, Quantile};
use std::collections::HashMap;
/// Builder for [`YamlObserver`].
pub struct YamlBuilder {
quantiles: Vec<Quantile>,
}
impl YamlBuilder {
/// Creates a new [`YamlBuilder`] with default values.
pub fn new() -> Self {
let quantiles = parse_quantiles(&[0.0, 0.5, 0.9, 0.95, 0.99, 0.999, 1.0]);
Self { quantiles }
}
/// Sets the quantiles to use when rendering histograms.
///
/// Quantiles represent a scale of 0 to 1, where percentiles represent a scale of 1 to 100, so
/// a quantile of 0.99 is the 99th percentile, and a quantile of 0.999 is the 99.9th percentile.
///
/// By default, the quantiles will be set to: 0.0, 0.5, 0.9, 0.95, 0.99, 0.999, and 1.0.
pub fn set_quantiles(mut self, quantiles: &[f64]) -> Self {
self.quantiles = parse_quantiles(quantiles);
self
}
}
impl Builder for YamlBuilder {
type Output = YamlObserver;
fn build(&self) -> Self::Output {
YamlObserver {
quantiles: self.quantiles.clone(),
tree: MetricsTree::default(),
histos: HashMap::new(),
}
}
}
impl Default for YamlBuilder {
fn default() -> Self {
Self::new()
}
}
/// Observes metrics in YAML format.
pub struct YamlObserver {
pub(crate) quantiles: Vec<Quantile>,
pub(crate) tree: MetricsTree,
pub(crate) histos: HashMap<Key, Histogram<u64>>,
}
impl Observer for YamlObserver {
fn observe_counter(&mut self, key: Key, value: u64) {
let (levels, name) = key_to_parts(key);
self.tree.insert_value(levels, name, value);
}
fn observe_gauge(&mut self, key: Key, value: i64) {
let (levels, name) = key_to_parts(key);
self.tree.insert_value(levels, name, value);
}
fn observe_histogram(&mut self, key: Key, values: &[u64]) {
let entry = self
.histos
.entry(key)
.or_insert_with(|| Histogram::<u64>::new(3).expect("failed to create histogram"));
for value in values {
entry
.record(*value)
.expect("failed to observe histogram value");
}
}
}
impl Drain<String> for YamlObserver {
fn drain(&mut self) -> String {
for (key, h) in self.histos.drain() {
let (levels, name) = key_to_parts(key);
let values = hist_to_values(name, h.clone(), &self.quantiles);
self.tree.insert_values(levels, values);
}
let rendered = serde_yaml::to_string(&self.tree).expect("failed to render yaml output");
self.tree.clear();
rendered
}
}
fn key_to_parts(key: Key) -> (Vec<String>, String) {
let (name, labels) = key.into_parts();
let mut parts = name.split('.').map(ToOwned::to_owned).collect::<Vec<_>>();
let name = parts.pop().expect("name didn't have a single part");
let labels = labels
.into_iter()
.map(Label::into_parts)
.map(|(k, v)| format!("{}=\"{}\"", k, v))
.collect::<Vec<_>>()
.join(",");
let label = if labels.is_empty() {
String::new()
} else {
format!("{{{}}}", labels)
};
let fname = format!("{}{}", name, label);
(parts, fname)
}
fn hist_to_values(
name: String,
hist: Histogram<u64>,
quantiles: &[Quantile],
) -> Vec<(String, u64)> {
let mut values = Vec::new();
values.push((format!("{} count", name), hist.len()));
for quantile in quantiles {
let value = hist.value_at_quantile(quantile.value());
values.push((format!("{} {}", name, quantile.label()), value));
}
values
}
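The dotted-scope splitting that produces the YAML hierarchy described in the module docs (`server.msgs_received` becoming `server:` / `msgs_received:`) boils down to the following; this is a simplified version of `key_to_parts` with label handling left out, and the name `split_scopes` is ours, not the crate's:

```rust
/// Split a dotted metric name into its scope levels and leaf name.
fn split_scopes(name: &str) -> (Vec<String>, String) {
    let mut parts: Vec<String> = name.split('.').map(ToOwned::to_owned).collect();
    // `split` always yields at least one element, so `pop` cannot fail here.
    let leaf = parts.pop().expect("name had no parts");
    (parts, leaf)
}

fn main() {
    let (levels, name) = split_scopes("server.msgs_received");
    assert_eq!(levels, vec!["server".to_string()]);
    assert_eq!(name, "msgs_received");

    // Unscoped metrics end up at the root of the tree.
    let (levels, name) = split_scopes("configuration_reloads");
    assert!(levels.is_empty());
    assert_eq!(name, "configuration_reloads");
}
```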

@@ -1,3 +0,0 @@
/target
**/*.rs.bk
Cargo.lock

@@ -1,90 +0,0 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [0.12.0] - 2019-10-18
### Changed
- Rename `Sink::record_counter` to `increment_counter`, `Sink::record_gauge` to `update_gauge`. (#47)
- Rename `Receiver::get_controller` and `Receiver::get_sink` to `controller` and `sink`, respectively. (#48)
- Switch to the new updated `Recorder` trait from `metrics`. (#47)
### Fixed
- Fixed some broken tests and incorrect documentation. (#53)
## [0.11.0] - 2019-07-29
### Added
- Metrics now support labels. (#27)
- Add support for proxy metrics. (#39)
### Changed
- `metrics` becomes `metrics-runtime`, with the `metrics` name moving to the new facade crate. (#27)
- Switch from "recorders" to "observers." (#35)
## [0.10.0] - 2019-06-11
### Changed
- Entirely remove the event loop and switch to pure atomics. (#13)
## [0.9.1] - 2019-05-01
### Added
- Expose exporters/recorders via a facade module in `metrics`. (#8)
## [0.9.0] - 2019-04-03
### Changed
- `hotmic` is renamed to `metrics`. (#2)
## [0.8.2] - 2019-03-19
### Added
- Histograms now track the sum of all values they record, to support target systems like Prometheus.
- Added the ability to get percentiles as quantiles. This is also to support target systems like Prometheus. These are derived from the existing percentile values and so can have extra decimal precision. This will be unified in a future breaking update.
## [0.8.1] - 2019-03-15
### Changed
- Fixed some issues with type visibility and documentation.
## [0.8.0] - 2019-03-15
### Changed
- Removed accessors from `Snapshot`. It is now an opaque type that can be turned into an iterator providing access to typed metric values, so that an external consumer can get all of the values in the snapshot, including their type, for proper exporting.
### Added
- A new "simple" snapshot type -- `SimpleSnapshot` -- which has easy-to-use accessors for metrics, identical to what `Snapshot` used to have.
- Allow retrieving snapshots asynchronously via `Controller::get_snapshot_async`. Utilizes a oneshot channel so the caller can poll asynchronously.
## [0.7.1] - 2019-01-28
### Changed
- Fixed a bug where new sinks with the same scope would overwrite each other's metrics. [#20](https://github.com/nuclearfurnace/hotmic/pull/20)
## [0.7.0] - 2019-01-27
### Changed
- Sink scopes can now be either a `&str` or `&[&str]`.
- Fixed a bug where the receiver loop ran its thread at 100%.
## [0.6.0] - 2019-01-24
### Changed
- Metrics auto-register themselves now. [#16](https://github.com/nuclearfurnace/hotmic/pull/16)
## [0.5.2] - 2019-01-19
### Changed
- Snapshot now implements [`Serialize`](https://docs.rs/serde/1.0.85/serde/trait.Serialize.html).
## [0.5.1] - 2019-01-19
### Changed
- Controller is now `Clone`.
## [0.5.0] - 2019-01-19
### Added
- Revamp API to provide easier usage. [#14](https://github.com/nuclearfurnace/hotmic/pull/14)
## [0.4.0] - 2019-01-14
Minimum supported Rust version is now 1.31.0, courtesy of switching to the 2018 edition.
### Changed
- Switch to integer-backed metric scopes. [#10](https://github.com/nuclearfurnace/hotmic/pull/10)
### Added
- Add clock support via `quanta`. [#12](https://github.com/nuclearfurnace/hotmic/pull/12)
## [0.3.0] - 2018-12-22
### Added
- Switch to crossbeam-channel and add scopes. [#4](https://github.com/nuclearfurnace/hotmic/pull/4)

View File

@ -1,30 +0,0 @@
# The Code of Conduct
This document is based on the [Rust Code of Conduct](https://www.rust-lang.org/conduct.html) and outlines the standard of conduct which is both expected and enforced as part of this project.
## Conduct
* We are committed to providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, nationality, or other similar characteristic.
* Avoid using overtly sexual nicknames or other nicknames that might detract from a friendly, safe and welcoming environment for all.
* Please be kind and courteous. There's no need to be mean or rude.
* Respect that people have differences of opinion and that every design or implementation choice carries a trade-off and numerous costs. There is seldom a right answer.
* Please keep unstructured critique to a minimum. If you have solid ideas you want to experiment with, make a fork and see how it works.
* We will exclude you from interaction if you insult, demean or harass anyone. That is not welcome behaviour. We interpret the term "harassment" as including the definition in the [Citizen Code of Conduct](http://citizencodeofconduct.org/); if you have any lack of clarity about what might be included in that concept, please read their definition. In particular, we don't tolerate behavior that excludes people in socially marginalized groups.
* Private harassment is also unacceptable. No matter who you are, if you feel you have been or are being harassed or made uncomfortable by a community member, please contact one of the repository Owners immediately. Whether you're a regular contributor or a newcomer, we care about making this community a safe place for you and we've got your back.
* Likewise any spamming, trolling, flaming, baiting or other attention-stealing behaviour is not welcome.
## Moderation
These are the policies for upholding our community's standards of conduct. If you feel that a thread needs moderation, please use the contact information above, or mention @tobz or @LucioFranco in the thread.
1. Remarks that violate this Code of Conduct, including hateful, hurtful, oppressive, or exclusionary remarks, are not allowed. (Cursing is allowed, but never targeting another user, and never in a hateful manner.)
2. Remarks that moderators find inappropriate, whether listed in the code of conduct or not, are also not allowed.
In the Rust community we strive to go the extra step to look out for each other. Don't just aim to be technically unimpeachable, try to be your best self. In particular, avoid flirting with offensive or sensitive issues, particularly if they're off-topic; this all too often leads to unnecessary fights, hurt feelings, and damaged trust; worse, it can drive people away from the community entirely.
And if someone takes issue with something you said or did, resist the urge to be defensive. Just stop doing what it was they complained about and apologize. Even if you feel you were misinterpreted or unfairly accused, chances are good there was something you could've communicated better — remember that it's your responsibility to make your fellow Rustaceans comfortable. Everyone wants to get along and we are all here first and foremost because we want to talk about cool technology. You will find that people will be eager to assume good intent and forgive as long as you earn their trust.
## Contacts:
- Toby Lawrence ([toby@nuclearfurnace.com](mailto:toby@nuclearfurnace.com))
- Lucio Franco ([luciofranco14@gmail.com](mailto:luciofranco14@gmail.com))

View File

@ -1,50 +0,0 @@
[package]
name = "metrics-runtime"
version = "0.13.0"
authors = ["Toby Lawrence <toby@nuclearfurnace.com>"]
edition = "2018"
license = "MIT"
description = "A batteries-included metrics library."
homepage = "https://github.com/metrics-rs/metrics"
repository = "https://github.com/metrics-rs/metrics"
documentation = "https://docs.rs/metrics"
readme = "README.md"
categories = ["development-tools::debugging"]
keywords = ["metrics", "telemetry", "histogram", "counter", "gauge"]
[features]
default = ["exporters", "observers"]
exporters = ["metrics-exporter-log", "metrics-exporter-http"]
observers = ["metrics-observer-yaml", "metrics-observer-json", "metrics-observer-prometheus"]
[[bench]]
name = "histogram"
harness = false
[dependencies]
metrics-core = { path = "../metrics-core", version = "^0.5" }
metrics-util = { path = "../metrics-util", version = "^0.3" }
metrics = { path = "../metrics", version = "^0.12", features = ["std"] }
im = "^15"
arc-swap = "^0.4"
parking_lot = "^0.10"
quanta = "^0.3"
crossbeam-utils = "^0.7"
metrics-exporter-log = { path = "../metrics-exporter-log", version = "^0.4", optional = true }
metrics-exporter-http = { path = "../metrics-exporter-http", version = "^0.3", optional = true }
metrics-observer-yaml = { path = "../metrics-observer-yaml", version = "^0.1", optional = true }
metrics-observer-prometheus = { path = "../metrics-observer-prometheus", version = "^0.1", optional = true }
metrics-observer-json = { path = "../metrics-observer-json", version = "^0.1", optional = true }
atomic-shim = "0.1.0"
[dev-dependencies]
log = "^0.4"
env_logger = "^0.7"
getopts = "^0.2"
hdrhistogram = "^7.1"
criterion = "^0.3"
lazy_static = "^1.3"
tokio = { version = "^0.2", features = ["macros", "rt-core"] }

View File

@ -1,39 +0,0 @@
# metrics
[![conduct-badge][]][conduct] [![downloads-badge][] ![release-badge][]][crate] [![docs-badge][]][docs] [![license-badge][]](#license)
[conduct-badge]: https://img.shields.io/badge/%E2%9D%A4-code%20of%20conduct-blue.svg
[downloads-badge]: https://img.shields.io/crates/d/metrics-runtime.svg
[release-badge]: https://img.shields.io/crates/v/metrics-runtime.svg
[license-badge]: https://img.shields.io/crates/l/metrics-runtime.svg
[docs-badge]: https://docs.rs/metrics-runtime/badge.svg
[conduct]: https://github.com/metrics-rs/metrics/blob/master/CODE_OF_CONDUCT.md
[crate]: https://crates.io/crates/metrics-runtime
[docs]: https://docs.rs/metrics-runtime
__metrics__ is a batteries-included metrics library.
## code of conduct
**NOTE**: All conversations and contributions to this project shall adhere to the [Code of Conduct][conduct].
# what's it all about?
`metrics-runtime` is the high-quality, batteries-included reference metrics runtime for the Metrics project.
This crate provides support for all of the goals espoused by the project as a whole: a runtime that can be used with `metrics`, and support for interoperating with `metrics-core`-compatible observers and exporters. On top of that, it provides a deliberately designed API meant to help you quickly and easily instrument your application.
As operators of systems at scale, we've attempted to distill this library down to the core features necessary to successfully instrument an application and ensure that you succeed at providing observability into your production systems.
## high-level technical features
- Supports the three most common metric types: counters, gauges, and histograms.
- Based on `metrics-core` for composability at the observer/exporter level.
- Access to ultra-high-speed timing facilities out-of-the-box with [quanta](https://github.com/nuclearfurnace/quanta).
- Scoped and labeled metrics for rich dimensionality.
- Bundled with a number of useful observers/exporters: export your metrics with ease.
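
The counter and gauge types listed above are backed by plain atomics internally (the `ValueState` enum later in this diff stores an `AtomicU64` for counters and an `AtomicI64` for gauges). A rough, std-only sketch of that storage pattern follows; the `Counter` and `Gauge` types here are illustrative stand-ins, not the crate's actual API:

```rust
use std::sync::atomic::{AtomicI64, AtomicU64, Ordering};

// Illustrative stand-ins for the runtime's internal counter/gauge storage;
// the real crate wraps these in shared handles behind a registry.
struct Counter(AtomicU64);
struct Gauge(AtomicI64);

impl Counter {
    fn new() -> Self {
        Counter(AtomicU64::new(0))
    }
    // Counters only ever move upward (or reset to zero).
    fn increment(&self, value: u64) {
        self.0.fetch_add(value, Ordering::Release);
    }
    fn value(&self) -> u64 {
        self.0.load(Ordering::Acquire)
    }
}

impl Gauge {
    fn new() -> Self {
        Gauge(AtomicI64::new(0))
    }
    // Gauges can be set outright, or moved up and down.
    fn set(&self, value: i64) {
        self.0.store(value, Ordering::Release);
    }
    fn increment(&self, value: i64) {
        self.0.fetch_add(value, Ordering::Release);
    }
    fn decrement(&self, value: i64) {
        self.0.fetch_sub(value, Ordering::Release);
    }
    fn value(&self) -> i64 {
        self.0.load(Ordering::Acquire)
    }
}

fn main() {
    let requests = Counter::new();
    requests.increment(3);
    let connections = Gauge::new();
    connections.set(0);
    connections.increment(5);
    connections.decrement(2);
    println!("requests={} connections={}", requests.value(), connections.value());
}
```

Because the hot path is a single atomic operation, writers never block each other, which is what makes the lock-free ingest numbers below possible.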
## performance
Even as a reference runtime, `metrics-runtime` still has extremely impressive performance. On modern cloud systems, you'll be able to ingest millions of samples per second per core, with p99 latencies in the low hundreds of nanoseconds. While `metrics-runtime` will not be low-enough overhead for every use case, it will meet or exceed the performance of other metrics libraries in Rust, in turn providing you with fast and predictably low-overhead measurements under production workloads.
There are a few example benchmark programs in the crate that simulate basic workloads. These programs specifically do not attempt to fully simulate a production workload in terms of number of metrics, frequency of ingestion, or dimensionality. They are brute-force benchmarks designed to showcase throughput and latency for varied concurrency profiles under high write contention.
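
The benchmark programs in this diff measure their own overhead by timing only every Nth iteration of the hot loop, so that reading the clock doesn't dominate what is being measured. A std-only sketch of that sampling pattern (names and the `SAMPLE_EVERY` constant are illustrative, not taken from the crate):

```rust
use std::time::Instant;

// Sample the latency of `op` once every SAMPLE_EVERY iterations, as the
// bundled benchmark programs do, so clock reads stay off the hot path.
const SAMPLE_EVERY: u64 = 100;

fn bench<F: FnMut()>(iterations: u64, mut op: F) -> Vec<u64> {
    let mut samples = Vec::new();
    for i in 0..iterations {
        // Only grab a timestamp on sampled iterations.
        let start = if i % SAMPLE_EVERY == 0 {
            Some(Instant::now())
        } else {
            None
        };
        op();
        if let Some(t0) = start {
            samples.push(t0.elapsed().as_nanos() as u64);
        }
    }
    samples
}

fn main() {
    let mut acc = 0u64;
    let samples = bench(10_000, || acc = acc.wrapping_add(1));
    // 10_000 iterations sampled every 100 yields 100 latency samples.
    println!("collected {} latency samples", samples.len());
}
```

The real benchmarks feed these sampled deltas into an `hdrhistogram::Histogram` to report p50/p95/p99/p999 sender latency, as shown in the `Generator` code elsewhere in this diff.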

View File

@ -1,76 +0,0 @@
#[macro_use]
extern crate criterion;
#[macro_use]
extern crate lazy_static;
use criterion::{Benchmark, Criterion, Throughput};
use metrics_runtime::data::AtomicWindowedHistogram;
use quanta::{Builder as UpkeepBuilder, Clock, Handle as UpkeepHandle};
use std::time::Duration;
lazy_static! {
static ref QUANTA_UPKEEP: UpkeepHandle = {
let builder = UpkeepBuilder::new(Duration::from_millis(10));
builder
.start()
.expect("failed to start quanta upkeep thread")
};
static ref RANDOM_INTS: Vec<u64> = vec![
21061184, 21301862, 21331592, 21457012, 21500016, 21537837, 21581557, 21620030, 21664102,
21678463, 21708437, 21751808, 21845243, 21850265, 21938879, 21971014, 22005842, 22034601,
22085552, 22101746, 22115429, 22139883, 22260209, 22270768, 22298080, 22299780, 22307659,
22354697, 22355668, 22359397, 22463872, 22496590, 22590978, 22603740, 22706352, 22820895,
22849491, 22891538, 22912955, 22919915, 22928920, 22968656, 22985992, 23033739, 23061395,
23077554, 23138588, 23185172, 23282479, 23290830, 23316844, 23386911, 23641319, 23677058,
23742930, 25350389, 25399746, 25404925, 25464391, 25478415, 25480015, 25632783, 25639769,
25645612, 25688228, 25724427, 25862192, 25954476, 25994479, 26008752, 26036460, 26038202,
26078874, 26118327, 26132679, 26207601, 26262418, 26270737, 26274860, 26431248, 26434268,
26562736, 26580134, 26593740, 26618561, 26844181, 26866971, 26907883, 27005270, 27023584,
27024044, 27057184, 23061395, 23077554, 23138588, 23185172, 23282479, 23290830, 23316844,
23386911, 23641319, 23677058, 23742930, 25350389, 25399746, 25404925, 25464391, 25478415,
25480015, 25632783, 25639769, 25645612, 25688228, 25724427, 25862192, 25954476, 25994479,
26008752, 26036460, 26038202, 26078874, 26118327, 26132679, 26207601, 26262418, 26270737,
26274860, 26431248, 26434268, 26562736, 26580134, 26593740, 26618561, 26844181, 26866971,
26907883, 27005270, 27023584, 27024044, 27057184, 23061395, 23077554, 23138588, 23185172,
23282479, 23290830, 23316844, 23386911, 23641319, 23677058, 23742930, 25350389, 25399746,
25404925, 25464391, 25478415, 25480015, 25632783, 25639769, 25645612, 25688228, 25724427,
25862192, 25954476, 25994479, 26008752, 26036460, 26038202, 26078874, 26118327, 26132679,
26207601, 26262418, 26270737, 26274860, 26431248, 26434268, 26562736, 26580134, 26593740,
26618561, 26844181, 26866971, 26907883, 27005270, 27023584, 27024044, 27057184, 23061395,
23077554, 23138588, 23185172, 23282479, 23290830, 23316844, 23386911, 23641319, 23677058,
23742930, 25350389, 25399746, 25404925, 25464391, 25478415, 25480015, 25632783, 25639769,
25645612, 25688228, 25724427, 25862192, 25954476, 25994479, 26008752, 26036460, 26038202,
26078874, 26118327, 26132679, 26207601, 26262418, 26270737, 26274860, 26431248, 26434268,
26562736, 26580134, 26593740, 26618561, 26844181, 26866971, 26907883, 27005270, 27023584,
27024044, 27057184, 27088034, 27088550, 27302898, 27353925, 27412984, 27488633, 27514155,
27558052, 27601937, 27606339, 27624514, 27680396, 27684064, 27963602, 27414982, 28450673
];
}
fn bucket_benchmark(c: &mut Criterion) {
// Trigger the quanta upkeep thread to spawn and start updating the time.
let _handle = &QUANTA_UPKEEP;
c.bench(
"histogram",
Benchmark::new("record", |b| {
let clock = Clock::new();
let bucket = AtomicWindowedHistogram::new(
Duration::from_secs(1),
Duration::from_millis(100),
clock,
);
b.iter(|| {
for value in RANDOM_INTS.iter() {
bucket.record(*value);
}
})
})
.throughput(Throughput::Elements(RANDOM_INTS.len() as u64)),
);
}
criterion_group!(benches, bucket_benchmark);
criterion_main!(benches);

View File

@ -1,306 +0,0 @@
#[macro_use]
extern crate log;
extern crate env_logger;
extern crate getopts;
extern crate hdrhistogram;
extern crate metrics_core;
extern crate metrics_runtime;
use atomic_shim::AtomicU64;
use getopts::Options;
use hdrhistogram::Histogram;
use metrics_runtime::{Receiver, Sink};
use quanta::Clock;
use std::{
env,
sync::{
atomic::{AtomicBool, Ordering},
Arc,
},
thread,
time::{Duration, Instant},
};
const LOOP_SAMPLE: u64 = 1000;
struct Generator {
stats: Sink,
t0: Option<u64>,
gauge: i64,
hist: Histogram<u64>,
done: Arc<AtomicBool>,
rate_counter: Arc<AtomicU64>,
clock: Clock,
}
impl Generator {
fn new(
stats: Sink,
done: Arc<AtomicBool>,
rate_counter: Arc<AtomicU64>,
clock: Clock,
) -> Generator {
Generator {
stats,
t0: None,
gauge: 0,
hist: Histogram::<u64>::new_with_bounds(1, u64::max_value(), 3).unwrap(),
done,
rate_counter,
clock,
}
}
fn run(&mut self) {
let mut counter = 0;
loop {
counter += 1;
if self.done.load(Ordering::Relaxed) {
break;
}
self.gauge += 1;
let t1 = self.stats.now();
if let Some(t0) = self.t0 {
let start = if counter % 1000 == 0 {
self.stats.now()
} else {
0
};
let _ = self.stats.increment_counter("ok", 1);
let _ = self.stats.record_timing("ok", t0, t1);
let _ = self.stats.update_gauge("total", self.gauge);
if start != 0 {
let delta = self.stats.now() - start;
self.hist.saturating_record(delta);
// We also increment our global counter for the sample rate here.
self.rate_counter
.fetch_add(LOOP_SAMPLE * 3, Ordering::AcqRel);
}
}
self.t0 = Some(t1);
}
}
fn run_cached(&mut self) {
let mut counter = 0;
let counter_handle = self.stats.counter("ok");
let timing_handle = self.stats.histogram("ok");
let gauge_handle = self.stats.gauge("total");
loop {
counter += 1;
if self.done.load(Ordering::Relaxed) {
break;
}
self.gauge += 1;
let t1 = self.clock.recent();
if let Some(t0) = self.t0 {
let start = if counter % LOOP_SAMPLE == 0 {
self.stats.now()
} else {
0
};
let _ = counter_handle.record(1);
let _ = timing_handle.record_timing(t0, t1);
let _ = gauge_handle.record(self.gauge);
if start != 0 {
let delta = self.stats.now() - start;
self.hist.saturating_record(delta);
// We also increment our global counter for the sample rate here.
self.rate_counter
.fetch_add(LOOP_SAMPLE * 3, Ordering::AcqRel);
}
}
self.t0 = Some(t1);
}
}
}
impl Drop for Generator {
fn drop(&mut self) {
info!(
" sender latency: min: {:9} p50: {:9} p95: {:9} p99: {:9} p999: {:9} max: {:9}",
nanos_to_readable(self.hist.min()),
nanos_to_readable(self.hist.value_at_percentile(50.0)),
nanos_to_readable(self.hist.value_at_percentile(95.0)),
nanos_to_readable(self.hist.value_at_percentile(99.0)),
nanos_to_readable(self.hist.value_at_percentile(99.9)),
nanos_to_readable(self.hist.max())
);
}
}
fn print_usage(program: &str, opts: &Options) {
let brief = format!("Usage: {} [options]", program);
print!("{}", opts.usage(&brief));
}
pub fn opts() -> Options {
let mut opts = Options::new();
opts.optopt(
"d",
"duration",
"number of seconds to run the benchmark",
"INTEGER",
);
opts.optopt("p", "producers", "number of producers", "INTEGER");
opts.optflag("c", "cached", "whether or not to use cached handles");
opts.optflag("h", "help", "print this help menu");
opts
}
fn main() {
env_logger::init();
let args: Vec<String> = env::args().collect();
let program = &args[0];
let opts = opts();
let matches = match opts.parse(&args[1..]) {
Ok(m) => m,
Err(f) => {
error!("Failed to parse command line args: {}", f);
return;
}
};
if matches.opt_present("help") {
print_usage(program, &opts);
return;
}
let use_cached = matches.opt_present("cached");
if use_cached {
info!("using cached handles");
}
info!("metrics benchmark");
// Build our sink and configure the facets.
let seconds = matches
.opt_str("duration")
.unwrap_or_else(|| "60".to_owned())
.parse()
.unwrap();
let producers = matches
.opt_str("producers")
.unwrap_or_else(|| "1".to_owned())
.parse()
.unwrap();
info!("duration: {}s", seconds);
info!("producers: {}", producers);
let receiver = Receiver::builder()
.histogram(Duration::from_secs(5), Duration::from_millis(100))
.build()
.expect("failed to build receiver");
let sink = receiver.sink();
let sink = sink.scoped(&["alpha", "pools", "primary"]);
info!("sink configured");
// Spin up our sample producers.
let done = Arc::new(AtomicBool::new(false));
let rate_counter = Arc::new(AtomicU64::new(0));
let mut handles = Vec::new();
let clock = Clock::new();
for _ in 0..producers {
let s = sink.clone();
let d = done.clone();
let r = rate_counter.clone();
let c = clock.clone();
let handle = thread::spawn(move || {
let mut gen = Generator::new(s, d, r, c);
if use_cached {
gen.run_cached();
} else {
gen.run();
}
});
handles.push(handle);
}
// Spin up the sink and let 'er rip.
let controller = receiver.controller();
// Poll the controller to figure out the sample rate.
let mut total = 0;
let mut t0 = Instant::now();
let mut snapshot_hist = Histogram::<u64>::new_with_bounds(1, u64::max_value(), 3).unwrap();
for _ in 0..seconds {
let t1 = Instant::now();
let start = Instant::now();
let _snapshot = controller.snapshot();
let end = Instant::now();
snapshot_hist.saturating_record(duration_as_nanos(end - start) as u64);
let turn_total = rate_counter.load(Ordering::Acquire);
let turn_delta = turn_total - total;
total = turn_total;
let rate = turn_delta as f64 / (duration_as_nanos(t1 - t0) / 1_000_000_000.0);
info!("sample ingest rate: {:.0} samples/sec", rate);
t0 = t1;
thread::sleep(Duration::new(1, 0));
}
info!("--------------------------------------------------------------------------------");
info!(" ingested samples total: {}", total);
info!(
"snapshot retrieval: min: {:9} p50: {:9} p95: {:9} p99: {:9} p999: {:9} max: {:9}",
nanos_to_readable(snapshot_hist.min()),
nanos_to_readable(snapshot_hist.value_at_percentile(50.0)),
nanos_to_readable(snapshot_hist.value_at_percentile(95.0)),
nanos_to_readable(snapshot_hist.value_at_percentile(99.0)),
nanos_to_readable(snapshot_hist.value_at_percentile(99.9)),
nanos_to_readable(snapshot_hist.max())
);
// Wait for the producers to finish so we can get their stats too.
done.store(true, Ordering::SeqCst);
for handle in handles {
let _ = handle.join();
}
}
fn duration_as_nanos(d: Duration) -> f64 {
(d.as_secs() as f64 * 1e9) + d.subsec_nanos() as f64
}
fn nanos_to_readable(t: u64) -> String {
let f = t as f64;
if f < 1_000.0 {
format!("{}ns", f)
} else if f < 1_000_000.0 {
format!("{:.0}μs", f / 1_000.0)
} else if f < 2_000_000_000.0 {
format!("{:.2}ms", f / 1_000_000.0)
} else {
format!("{:.3}s", f / 1_000_000_000.0)
}
}

View File

@ -1,203 +0,0 @@
#[macro_use]
extern crate log;
extern crate env_logger;
extern crate getopts;
extern crate hdrhistogram;
use atomic_shim::AtomicU64;
use getopts::Options;
use hdrhistogram::Histogram;
use quanta::Clock;
use std::{
env,
sync::{
atomic::{AtomicBool, Ordering},
Arc,
},
thread,
time::{Duration, Instant},
};
struct Generator {
counter: Arc<AtomicU64>,
clock: Clock,
hist: Histogram<u64>,
done: Arc<AtomicBool>,
}
impl Generator {
fn new(counter: Arc<AtomicU64>, done: Arc<AtomicBool>) -> Generator {
Generator {
counter,
clock: Clock::new(),
hist: Histogram::<u64>::new_with_bounds(1, u64::max_value(), 3).unwrap(),
done,
}
}
fn run(&mut self) {
let mut counter = 0;
loop {
if self.done.load(Ordering::Relaxed) {
break;
}
let start = if counter % 100 == 0 {
self.clock.now()
} else {
0
};
counter = self.counter.fetch_add(1, Ordering::AcqRel);
if start != 0 {
let delta = self.clock.now() - start;
self.hist.saturating_record(delta);
}
}
}
}
impl Drop for Generator {
fn drop(&mut self) {
info!(
" sender latency: min: {:9} p50: {:9} p95: {:9} p99: {:9} p999: {:9} max: {:9}",
nanos_to_readable(self.hist.min()),
nanos_to_readable(self.hist.value_at_percentile(50.0)),
nanos_to_readable(self.hist.value_at_percentile(95.0)),
nanos_to_readable(self.hist.value_at_percentile(99.0)),
nanos_to_readable(self.hist.value_at_percentile(99.9)),
nanos_to_readable(self.hist.max())
);
}
}
fn print_usage(program: &str, opts: &Options) {
let brief = format!("Usage: {} [options]", program);
print!("{}", opts.usage(&brief));
}
pub fn opts() -> Options {
let mut opts = Options::new();
opts.optopt(
"d",
"duration",
"number of seconds to run the benchmark",
"INTEGER",
);
opts.optopt("p", "producers", "number of producers", "INTEGER");
opts.optflag("h", "help", "print this help menu");
opts
}
fn main() {
env_logger::init();
let args: Vec<String> = env::args().collect();
let program = &args[0];
let opts = opts();
let matches = match opts.parse(&args[1..]) {
Ok(m) => m,
Err(f) => {
error!("Failed to parse command line args: {}", f);
return;
}
};
if matches.opt_present("help") {
print_usage(program, &opts);
return;
}
info!("metrics benchmark");
// Build our sink and configure the facets.
let seconds = matches
.opt_str("duration")
.unwrap_or_else(|| "60".to_owned())
.parse()
.unwrap();
let producers = matches
.opt_str("producers")
.unwrap_or_else(|| "1".to_owned())
.parse()
.unwrap();
info!("duration: {}s", seconds);
info!("producers: {}", producers);
// Spin up our sample producers.
let counter = Arc::new(AtomicU64::new(0));
let done = Arc::new(AtomicBool::new(false));
let mut handles = Vec::new();
for _ in 0..producers {
let c = counter.clone();
let d = done.clone();
let handle = thread::spawn(move || {
Generator::new(c, d).run();
});
handles.push(handle);
}
// Poll the controller to figure out the sample rate.
let mut total = 0;
let mut t0 = Instant::now();
let mut snapshot_hist = Histogram::<u64>::new_with_bounds(1, u64::max_value(), 3).unwrap();
for _ in 0..seconds {
let t1 = Instant::now();
let start = Instant::now();
let turn_total = counter.load(Ordering::Acquire);
let end = Instant::now();
snapshot_hist.saturating_record(duration_as_nanos(end - start) as u64);
let turn_delta = turn_total - total;
total = turn_total;
let rate = turn_delta as f64 / (duration_as_nanos(t1 - t0) / 1_000_000_000.0);
info!("sample ingest rate: {:.0} samples/sec", rate);
t0 = t1;
thread::sleep(Duration::new(1, 0));
}
info!("--------------------------------------------------------------------------------");
info!(" ingested samples total: {}", total);
info!(
"snapshot retrieval: min: {:9} p50: {:9} p95: {:9} p99: {:9} p999: {:9} max: {:9}",
nanos_to_readable(snapshot_hist.min()),
nanos_to_readable(snapshot_hist.value_at_percentile(50.0)),
nanos_to_readable(snapshot_hist.value_at_percentile(95.0)),
nanos_to_readable(snapshot_hist.value_at_percentile(99.0)),
nanos_to_readable(snapshot_hist.value_at_percentile(99.9)),
nanos_to_readable(snapshot_hist.max())
);
// Wait for the producers to finish so we can get their stats too.
done.store(true, Ordering::SeqCst);
for handle in handles {
let _ = handle.join();
}
}
fn duration_as_nanos(d: Duration) -> f64 {
(d.as_secs() as f64 * 1e9) + d.subsec_nanos() as f64
}
fn nanos_to_readable(t: u64) -> String {
let f = t as f64;
if f < 1_000.0 {
format!("{}ns", f)
} else if f < 1_000_000.0 {
format!("{:.0}μs", f / 1_000.0)
} else if f < 2_000_000_000.0 {
format!("{:.2}ms", f / 1_000_000.0)
} else {
format!("{:.3}s", f / 1_000_000_000.0)
}
}

View File

@ -1,96 +0,0 @@
use crate::{config::Configuration, Receiver};
use std::{error::Error, fmt, time::Duration};
/// Errors during receiver creation.
#[derive(Debug, Clone)]
pub enum BuilderError {
/// Failed to spawn the upkeep thread.
///
/// As histograms are windowed, reads and writes require getting the current time so they can
/// perform the required maintenance, or upkeep, on the internal structures to roll over old
/// buckets, etc.
///
/// Acquiring the current time is fast compared to most operations, but is a significant
    /// portion of the overall time it takes to write to a histogram, which limits throughput
/// under high load.
///
/// We spin up a background thread, or the "upkeep thread", which updates a global time source
/// that the read and write operations exclusively rely on. While this source is not as
/// up-to-date as the real clock, it is much faster to access.
UpkeepFailure,
#[doc(hidden)]
_NonExhaustive,
}
impl Error for BuilderError {}
impl fmt::Display for BuilderError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self {
BuilderError::UpkeepFailure => write!(f, "failed to spawn quanta upkeep thread"),
BuilderError::_NonExhaustive => write!(f, "non-exhaustive matching"),
}
}
}
/// Builder for [`Receiver`].
#[derive(Clone)]
pub struct Builder {
pub(crate) histogram_window: Duration,
pub(crate) histogram_granularity: Duration,
pub(crate) upkeep_interval: Duration,
}
impl Default for Builder {
fn default() -> Self {
Self {
histogram_window: Duration::from_secs(10),
histogram_granularity: Duration::from_secs(1),
upkeep_interval: Duration::from_millis(50),
}
}
}
impl Builder {
/// Creates a new [`Builder`] with default values.
pub fn new() -> Self {
Default::default()
}
/// Sets the histogram configuration.
///
/// Defaults to a 10 second window with 1 second granularity.
///
/// This controls both how long of a time window we track histogram data for, and the
/// granularity in which we roll off old data.
///
    /// As an example, with the default values, we would keep the last 10 seconds' worth of
    /// histogram data, and would remove 1 second's worth of data at a time as the window rolled
/// forward.
pub fn histogram(mut self, window: Duration, granularity: Duration) -> Self {
self.histogram_window = window;
self.histogram_granularity = granularity;
self
}
/// Sets the upkeep interval.
///
/// Defaults to 50 milliseconds.
///
/// This controls how often the time source, used internally by histograms, is updated with the
/// real time. For performance reasons, histograms use a sampled time source when they perform
/// checks to see if internal maintenance needs to occur. If the histogram granularity is set
/// very low, then this interval might need to be similarly reduced to make sure we're able to
/// update the time more often than histograms need to perform upkeep.
pub fn upkeep_interval(mut self, interval: Duration) -> Self {
self.upkeep_interval = interval;
self
}
/// Create a [`Receiver`] based on this configuration.
pub fn build(self) -> Result<Receiver, BuilderError> {
let config = Configuration::from_builder(&self);
Receiver::from_config(config)
}
}

View File

@ -1,382 +0,0 @@
use crate::data::AtomicWindowedHistogram;
use arc_swap::ArcSwapOption;
use atomic_shim::{AtomicI64, AtomicU64};
use metrics_core::Key;
use metrics_util::StreamingIntegers;
use quanta::Clock;
use std::{
fmt,
ops::Deref,
sync::{atomic::Ordering, Arc},
time::{Duration, Instant},
};
/// A scope, or context, for a metric.
#[doc(hidden)]
#[derive(PartialEq, Eq, Hash, Clone, Debug)]
pub enum Scope {
/// Root scope.
Root,
/// A nested scope, with arbitrarily deep nesting.
Nested(Vec<String>),
}
impl Scope {
/// Adds a new part to this scope.
pub fn add_part<S>(self, part: S) -> Self
where
S: Into<String>,
{
match self {
Scope::Root => Scope::Nested(vec![part.into()]),
Scope::Nested(mut parts) => {
parts.push(part.into());
Scope::Nested(parts)
}
}
}
pub(crate) fn into_string<S>(self, name: S) -> String
where
S: Into<String>,
{
match self {
Scope::Root => name.into(),
Scope::Nested(mut parts) => {
parts.push(name.into());
parts.join(".")
}
}
}
}
pub(crate) type ScopeHandle = u64;
#[derive(PartialEq, Eq, Hash, Clone, Debug)]
pub(crate) enum Kind {
Counter,
Gauge,
Histogram,
Proxy,
}
#[derive(PartialEq, Eq, Hash, Clone, Debug)]
pub(crate) struct Identifier(Key, ScopeHandle, Kind);
impl Identifier {
pub fn new<K>(key: K, handle: ScopeHandle, kind: Kind) -> Self
where
K: Into<Key>,
{
Identifier(key.into(), handle, kind)
}
pub fn kind(&self) -> Kind {
self.2.clone()
}
pub fn into_parts(self) -> (Key, ScopeHandle, Kind) {
(self.0, self.1, self.2)
}
}
#[derive(Debug)]
enum ValueState {
Counter(AtomicU64),
Gauge(AtomicI64),
Histogram(AtomicWindowedHistogram),
Proxy(ArcSwapOption<Box<ProxyFn>>),
}
#[derive(Debug)]
pub(crate) enum ValueSnapshot {
Single(Measurement),
Multiple(Vec<(Key, Measurement)>),
}
/// A point-in-time metric measurement.
#[derive(Debug)]
pub enum Measurement {
/// Counters represent a single value that can only ever be incremented over time, or reset to
/// zero.
Counter(u64),
/// Gauges represent a single value that can go up _or_ down over time.
Gauge(i64),
/// Histograms measure the distribution of values for a given set of measurements.
///
/// Histograms are slightly special in our case because we want to maintain full fidelity of
/// the underlying dataset. We do this by storing all of the individual data points, but we
/// use [`StreamingIntegers`] to store them in a compressed in-memory form. This allows
/// callers to pass around the compressed dataset and decompress/access the actual integers on
/// demand.
Histogram(StreamingIntegers),
}
#[derive(Clone, Debug)]
/// Handle to the underlying measurement for a metric.
pub(crate) struct ValueHandle {
state: Arc<ValueState>,
}
impl ValueHandle {
fn new(state: ValueState) -> Self {
ValueHandle {
state: Arc::new(state),
}
}
pub fn counter() -> Self {
Self::new(ValueState::Counter(AtomicU64::new(0)))
}
pub fn gauge() -> Self {
Self::new(ValueState::Gauge(AtomicI64::new(0)))
}
pub fn histogram(window: Duration, granularity: Duration, clock: Clock) -> Self {
Self::new(ValueState::Histogram(AtomicWindowedHistogram::new(
window,
granularity,
clock,
)))
}
pub fn proxy() -> Self {
Self::new(ValueState::Proxy(ArcSwapOption::new(None)))
}
pub fn update_counter(&self, value: u64) {
match self.state.deref() {
ValueState::Counter(inner) => {
inner.fetch_add(value, Ordering::Release);
}
_ => unreachable!("tried to access as counter, not a counter"),
}
}
pub fn update_gauge(&self, value: i64) {
match self.state.deref() {
ValueState::Gauge(inner) => inner.store(value, Ordering::Release),
_ => unreachable!("tried to access as gauge, not a gauge"),
}
}
pub fn increment_gauge(&self, value: i64) {
match self.state.deref() {
ValueState::Gauge(inner) => inner.fetch_add(value, Ordering::Release),
_ => unreachable!("tried to access as gauge, not a gauge"),
};
}
pub fn decrement_gauge(&self, value: i64) {
match self.state.deref() {
ValueState::Gauge(inner) => inner.fetch_sub(value, Ordering::Release),
_ => unreachable!("tried to access as gauge, not a gauge"),
};
}
pub fn update_histogram(&self, value: u64) {
match self.state.deref() {
ValueState::Histogram(inner) => inner.record(value),
_ => unreachable!("tried to access as histogram, not a histogram"),
}
}
pub fn update_proxy<F>(&self, value: F)
where
F: Fn() -> Vec<(Key, Measurement)> + Send + Sync + 'static,
{
match self.state.deref() {
ValueState::Proxy(inner) => {
inner.store(Some(Arc::new(Box::new(value))));
}
_ => unreachable!("tried to access as proxy, not a proxy"),
}
}
pub fn snapshot(&self) -> ValueSnapshot {
match self.state.deref() {
ValueState::Counter(inner) => {
let value = inner.load(Ordering::Acquire);
ValueSnapshot::Single(Measurement::Counter(value))
}
ValueState::Gauge(inner) => {
let value = inner.load(Ordering::Acquire);
ValueSnapshot::Single(Measurement::Gauge(value))
}
ValueState::Histogram(inner) => {
let stream = inner.snapshot();
ValueSnapshot::Single(Measurement::Histogram(stream))
}
ValueState::Proxy(maybe) => {
let measurements = match *maybe.load() {
None => Vec::new(),
Some(ref f) => f(),
};
ValueSnapshot::Multiple(measurements)
}
}
}
}
/// Trait for types that represent time and can be subtracted from each other to generate a delta.
pub trait Delta {
/// Get the delta between this value and another value.
///
/// For `Instant`, we explicitly return the nanosecond difference. For `u64`, we return the
/// integer difference, but the timescale itself can be whatever the user desires.
fn delta(&self, other: Self) -> u64;
}
impl Delta for u64 {
fn delta(&self, other: u64) -> u64 {
self.wrapping_sub(other)
}
}
impl Delta for Instant {
fn delta(&self, other: Instant) -> u64 {
let dur = *self - other;
dur.as_nanos() as u64
}
}
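The `Delta` impls above can be exercised standalone. A minimal sketch, with the trait reproduced locally so the snippet is self-contained (the real trait lives in this crate):

```rust
use std::time::Instant;

// Local copy of the `Delta` trait for illustration.
trait Delta {
    fn delta(&self, other: Self) -> u64;
}

impl Delta for u64 {
    fn delta(&self, other: u64) -> u64 {
        // Wrapping subtraction tolerates counters that lap each other.
        self.wrapping_sub(other)
    }
}

impl Delta for Instant {
    fn delta(&self, other: Instant) -> u64 {
        // `Instant - Instant` yields a `Duration`; we take its nanoseconds.
        (*self - other).as_nanos() as u64
    }
}

fn main() {
    // Integer timestamps: plain wrapping difference, in whatever unit the caller uses.
    assert_eq!(105u64.delta(100), 5);
    assert_eq!(0u64.delta(1), u64::MAX); // wraps instead of panicking

    // Instants: nanosecond difference.
    let start = Instant::now();
    let end = Instant::now();
    assert!(end.delta(start) < 1_000_000_000);
    println!("ok");
}
```

Note that for `Instant` the subtraction panics if `other` is later than `self`, so `start`/`end` must be passed in order.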
pub trait ProxyFnInner: Fn() -> Vec<(Key, Measurement)> {}
impl<F> ProxyFnInner for F where F: Fn() -> Vec<(Key, Measurement)> {}
pub type ProxyFn = dyn ProxyFnInner<Output = Vec<(Key, Measurement)>> + Send + Sync + 'static;
impl fmt::Debug for ProxyFn {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "ProxyFn")
}
}
#[cfg(test)]
mod tests {
use super::{Measurement, Scope, ValueHandle, ValueSnapshot};
use metrics_core::Key;
use quanta::Clock;
use std::borrow::Cow;
use std::time::Duration;
#[test]
fn test_metric_scope() {
let root_scope = Scope::Root;
assert_eq!(root_scope.into_string(""), "".to_string());
let root_scope = Scope::Root;
assert_eq!(root_scope.into_string("jambalaya"), "jambalaya".to_string());
let nested_scope = Scope::Nested(vec![]);
assert_eq!(nested_scope.into_string(""), "".to_string());
let nested_scope = Scope::Nested(vec![]);
assert_eq!(nested_scope.into_string("toilet"), "toilet".to_string());
let nested_scope = Scope::Nested(vec!["chamber".to_string(), "of".to_string()]);
assert_eq!(
nested_scope.into_string("secrets"),
"chamber.of.secrets".to_string()
);
let nested_scope = Scope::Nested(vec![
"chamber".to_string(),
"of".to_string(),
"secrets".to_string(),
]);
assert_eq!(
nested_scope.into_string("toilet"),
"chamber.of.secrets.toilet".to_string()
);
let mut nested_scope = Scope::Root;
nested_scope = nested_scope
.add_part("chamber")
.add_part("of".to_string())
.add_part(Cow::Borrowed("secrets"));
assert_eq!(
nested_scope.into_string(""),
"chamber.of.secrets.".to_string()
);
let mut nested_scope = Scope::Nested(vec![
"chamber".to_string(),
"of".to_string(),
"secrets".to_string(),
]);
nested_scope = nested_scope.add_part("part");
assert_eq!(
nested_scope.into_string("two"),
"chamber.of.secrets.part.two".to_string()
);
}
#[test]
fn test_metric_values() {
let counter = ValueHandle::counter();
counter.update_counter(42);
match counter.snapshot() {
ValueSnapshot::Single(Measurement::Counter(value)) => assert_eq!(value, 42),
_ => panic!("incorrect value snapshot type for counter"),
}
let gauge = ValueHandle::gauge();
gauge.update_gauge(23);
gauge.increment_gauge(20);
gauge.decrement_gauge(1);
match gauge.snapshot() {
ValueSnapshot::Single(Measurement::Gauge(value)) => assert_eq!(value, 42),
_ => panic!("incorrect value snapshot type for gauge"),
}
let (mock, _) = Clock::mock();
let histogram =
ValueHandle::histogram(Duration::from_secs(10), Duration::from_secs(1), mock);
histogram.update_histogram(8675309);
histogram.update_histogram(5551212);
match histogram.snapshot() {
ValueSnapshot::Single(Measurement::Histogram(stream)) => {
assert_eq!(stream.len(), 2);
let values = stream.decompress();
assert_eq!(&values[..], [8675309, 5551212]);
}
_ => panic!("incorrect value snapshot type for histogram"),
}
let proxy = ValueHandle::proxy();
proxy.update_proxy(|| vec![(Key::from_name("foo"), Measurement::Counter(23))]);
match proxy.snapshot() {
ValueSnapshot::Multiple(mut measurements) => {
assert_eq!(measurements.len(), 1);
let measurement = measurements.pop().expect("should have measurement");
assert_eq!(measurement.0.name().as_ref(), "foo");
match measurement.1 {
Measurement::Counter(i) => assert_eq!(i, 23),
_ => panic!("wrong measurement type"),
}
}
_ => panic!("incorrect value snapshot type for proxy"),
}
// This second one just makes sure that replacing the proxy function works as intended.
proxy.update_proxy(|| vec![(Key::from_name("bar"), Measurement::Counter(24))]);
match proxy.snapshot() {
ValueSnapshot::Multiple(mut measurements) => {
assert_eq!(measurements.len(), 1);
let measurement = measurements.pop().expect("should have measurement");
assert_eq!(measurement.0.name().as_ref(), "bar");
match measurement.1 {
Measurement::Counter(i) => assert_eq!(i, 24),
_ => panic!("wrong measurement type"),
}
}
_ => panic!("incorrect value snapshot type for proxy"),
}
}
}


@ -1,29 +0,0 @@
use crate::Builder;
use std::time::Duration;
/// Holds the configuration for complex metric types.
#[derive(Clone, Debug)]
pub(crate) struct Configuration {
pub histogram_window: Duration,
pub histogram_granularity: Duration,
pub upkeep_interval: Duration,
}
impl Configuration {
pub fn from_builder(builder: &Builder) -> Self {
Self {
histogram_window: builder.histogram_window,
histogram_granularity: builder.histogram_granularity,
upkeep_interval: builder.upkeep_interval,
}
}
#[allow(dead_code)]
pub(crate) fn mock() -> Self {
Self {
histogram_window: Duration::from_secs(5),
histogram_granularity: Duration::from_secs(1),
upkeep_interval: Duration::from_millis(10),
}
}
}


@ -1,43 +0,0 @@
use crate::{
data::Snapshot,
registry::{MetricRegistry, ScopeRegistry},
};
use metrics_core::{Observe, Observer};
use std::sync::Arc;
/// Handle for acquiring snapshots.
///
/// `Controller` is [`metrics-core`]-compatible as a snapshot provider, both for synchronous and
/// asynchronous snapshotting.
///
/// [`metrics-core`]: https://docs.rs/metrics-core
#[derive(Clone)]
pub struct Controller {
metric_registry: Arc<MetricRegistry>,
scope_registry: Arc<ScopeRegistry>,
}
impl Controller {
pub(crate) fn new(
metric_registry: Arc<MetricRegistry>,
scope_registry: Arc<ScopeRegistry>,
) -> Controller {
Controller {
metric_registry,
scope_registry,
}
}
/// Provide a snapshot of its collected metrics.
pub fn snapshot(&self) -> Snapshot {
self.metric_registry.snapshot()
}
}
impl Observe for Controller {
fn observe<O: Observer>(&self, observer: &mut O) {
self.metric_registry.observe(observer)
}
}


@ -1,27 +0,0 @@
use crate::common::ValueHandle;
/// A reference to a [`Counter`].
///
/// A [`Counter`] is used for directly updating a counter, without any lookup overhead.
#[derive(Clone)]
pub struct Counter {
handle: ValueHandle,
}
impl Counter {
/// Records a value for the counter.
pub fn record(&self, value: u64) {
self.handle.update_counter(value);
}
/// Increments the counter by one.
pub fn increment(&self) {
self.handle.update_counter(1);
}
}
impl From<ValueHandle> for Counter {
fn from(handle: ValueHandle) -> Self {
Self { handle }
}
}


@ -1,32 +0,0 @@
use crate::common::ValueHandle;
/// A reference to a [`Gauge`].
///
/// A [`Gauge`] is used for directly updating a gauge, without any lookup overhead.
#[derive(Clone)]
pub struct Gauge {
handle: ValueHandle,
}
impl Gauge {
/// Records a value for the gauge.
pub fn record(&self, value: i64) {
self.handle.update_gauge(value);
}
/// Increments the gauge's value
pub fn increment(&self, value: i64) {
self.handle.increment_gauge(value);
}
/// Decrements the gauge's value
pub fn decrement(&self, value: i64) {
self.handle.decrement_gauge(value);
}
}
impl From<ValueHandle> for Gauge {
fn from(handle: ValueHandle) -> Self {
Self { handle }
}
}


@ -1,375 +0,0 @@
use crate::common::{Delta, ValueHandle};
use crate::helper::duration_as_nanos;
use atomic_shim::AtomicU64;
use crossbeam_utils::Backoff;
use metrics_util::{AtomicBucket, StreamingIntegers};
use quanta::Clock;
use std::cmp;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::time::Duration;
/// A reference to a [`Histogram`].
///
/// A [`Histogram`] is used for directly updating a histogram, without any lookup overhead.
#[derive(Clone)]
pub struct Histogram {
handle: ValueHandle,
}
impl Histogram {
/// Records a timing for the histogram.
pub fn record_timing<D: Delta>(&self, start: D, end: D) {
let value = end.delta(start);
self.handle.update_histogram(value);
}
/// Records a value for the histogram.
pub fn record_value(&self, value: u64) {
self.handle.update_histogram(value);
}
}
impl From<ValueHandle> for Histogram {
fn from(handle: ValueHandle) -> Self {
Self { handle }
}
}
/// An atomic windowed histogram.
///
/// This histogram provides a windowed view of values that rolls forward over time, dropping old
/// values as they exceed the window of the histogram. Writes into the histogram are lock-free, as
/// well as snapshots of the histogram.
#[derive(Debug)]
pub struct AtomicWindowedHistogram {
buckets: Vec<AtomicBucket<u64>>,
bucket_count: usize,
granularity: u64,
upkeep_index: AtomicUsize,
index: AtomicUsize,
next_upkeep: AtomicU64,
clock: Clock,
}
impl AtomicWindowedHistogram {
/// Creates a new [`AtomicWindowedHistogram`].
///
/// Internally, a number of buckets will be created, based on how many times `granularity` goes
/// into `window`. As time passes, buckets will be cleared to avoid values older than the
/// `window` duration.
///
/// As buckets will hold values representing a period of time up to `granularity`, the
/// granularity can be lowered or raised to roll values off more precisely, or less precisely,
/// against the provided clock.
///
/// # Panics
/// Panics if `granularity` is not smaller than `window`.
pub fn new(window: Duration, granularity: Duration, clock: Clock) -> Self {
let window_ns = duration_as_nanos(window);
let granularity_ns = duration_as_nanos(granularity);
assert!(window_ns > granularity_ns);
let now = clock.recent();
let bucket_count = ((window_ns / granularity_ns) as usize) + 1;
let mut buckets = Vec::new();
for _ in 0..bucket_count {
buckets.push(AtomicBucket::new());
}
let next_upkeep = now + granularity_ns;
AtomicWindowedHistogram {
buckets,
bucket_count,
granularity: granularity_ns,
upkeep_index: AtomicUsize::new(0),
index: AtomicUsize::new(0),
next_upkeep: AtomicU64::new(next_upkeep),
clock,
}
}
/// Takes a snapshot of the current histogram.
///
/// Returns a [`StreamingIntegers`] value, representing all observed values in the
/// histogram. As writes happen concurrently, along with buckets being cleared, a snapshot is
/// not guaranteed to have all values present at the time the method was called.
pub fn snapshot(&self) -> StreamingIntegers {
// Run upkeep to make sure our window reflects any time passage since the last write.
let index = self.upkeep();
let mut streaming = StreamingIntegers::new();
// Start from the bucket ahead of the currently-being-written-to-bucket so that we outrace
// any upkeep and get access to more of the data.
for i in 0..self.bucket_count {
let bucket_index = (index + i + 1) % self.bucket_count;
let bucket = &self.buckets[bucket_index];
bucket.data_with(|block| streaming.compress(block));
}
streaming
}
/// Records a value to the histogram.
pub fn record(&self, value: u64) {
let index = self.upkeep();
self.buckets[index].push(value);
}
fn upkeep(&self) -> usize {
let backoff = Backoff::new();
loop {
// Start by figuring out if the histogram needs to perform upkeep.
let now = self.clock.recent();
let next_upkeep = self.next_upkeep.load(Ordering::Acquire);
if now <= next_upkeep {
let index = self.index.load(Ordering::Acquire);
let actual_index = index % self.bucket_count;
return actual_index;
}
// We do need to perform upkeep, but someone *else* might actually be doing it already,
// so go ahead and wait until the index is caught up with the upkeep index: the upkeep
// index will be ahead of index until upkeep is complete.
let mut upkeep_in_progress = false;
let mut index;
loop {
index = self.index.load(Ordering::Acquire);
let upkeep_index = self.upkeep_index.load(Ordering::Acquire);
if index == upkeep_index {
break;
}
upkeep_in_progress = true;
backoff.snooze();
}
// If we waited for another upkeep operation to complete, then there's the chance that
// enough time has passed that we're due for upkeep again, so restart our loop.
if upkeep_in_progress {
continue;
}
// Figure out how many buckets, up to the maximum, need to be cleared based on the
// delta between the target upkeep time and the actual time. We always clear at least
// one bucket, but may need to clear them all.
let delta = now - next_upkeep;
let bucket_depth = cmp::min((delta / self.granularity) as usize, self.bucket_count) + 1;
// Now that we know how many buckets we need to clear, update the index to point
// writers at the next bucket past the last one that we will be clearing.
let new_index = index + bucket_depth;
let prev_index = self
.index
.compare_and_swap(index, new_index, Ordering::SeqCst);
if prev_index == index {
// Clear the target bucket first, and then update the upkeep target time so new
// writers can proceed. We may still have other buckets to clean up if we had
// multiple rounds worth of upkeep to do, but this will let new writes proceed as
// soon as possible.
let clear_index = new_index % self.bucket_count;
self.buckets[clear_index].clear();
let now = self.clock.now();
let next_upkeep = now + self.granularity;
self.next_upkeep.store(next_upkeep, Ordering::Release);
// Now that we've cleared the actual bucket that writers will use going forward, we
// have to clear any older buckets that we skipped over. If our granularity was 1
// second, and we skipped over 4 seconds worth of buckets, we would still have
// 3 buckets to clear, etc.
let last_index = new_index - 1;
while index < last_index {
index += 1;
let clear_index = index % self.bucket_count;
self.buckets[clear_index].clear();
}
// We've cleared the old buckets, so upkeep is done. Push our upkeep index forward
// so that writers who were blocked waiting for upkeep to conclude can restart.
self.upkeep_index.store(new_index, Ordering::Release);
}
}
}
}
#[cfg(test)]
mod tests {
use super::{AtomicWindowedHistogram, Clock};
use crossbeam_utils::thread;
use std::time::Duration;
#[test]
fn test_histogram_simple_update() {
let (clock, _ctl) = Clock::mock();
let h = AtomicWindowedHistogram::new(Duration::from_secs(5), Duration::from_secs(1), clock);
h.record(1245);
let snapshot = h.snapshot();
assert_eq!(snapshot.len(), 1);
let values = snapshot.decompress();
assert_eq!(values.len(), 1);
assert_eq!(values.get(0).unwrap(), &1245);
}
#[test]
fn test_histogram_complex_update() {
let (clock, _ctl) = Clock::mock();
let h = AtomicWindowedHistogram::new(Duration::from_secs(5), Duration::from_secs(1), clock);
h.record(1245);
h.record(213);
h.record(1022);
h.record(1248);
let snapshot = h.snapshot();
assert_eq!(snapshot.len(), 4);
let values = snapshot.decompress();
assert_eq!(values.len(), 4);
assert_eq!(values.get(0).unwrap(), &1245);
assert_eq!(values.get(1).unwrap(), &213);
assert_eq!(values.get(2).unwrap(), &1022);
assert_eq!(values.get(3).unwrap(), &1248);
}
#[test]
fn test_windowed_histogram_rollover() {
let (clock, ctl) = Clock::mock();
// Set our granularity at right below a second, so that when we add a second, we don't
// land on the same exact value, and our "now" time should always be ahead of the upkeep
// time when we expect it to be.
let h =
AtomicWindowedHistogram::new(Duration::from_secs(5), Duration::from_millis(999), clock);
// Histogram is empty, snapshot is empty.
let snapshot = h.snapshot();
assert_eq!(snapshot.len(), 0);
// Immediately add two values, and observe the histogram and snapshot having two values.
h.record(1);
h.record(2);
let snapshot = h.snapshot();
assert_eq!(snapshot.len(), 2);
let total: u64 = snapshot.decompress().iter().sum();
assert_eq!(total, 3);
// Roll forward 3 seconds, should still have everything.
ctl.increment(Duration::from_secs(3));
let snapshot = h.snapshot();
assert_eq!(snapshot.len(), 2);
let total: u64 = snapshot.decompress().iter().sum();
assert_eq!(total, 3);
// Roll forward 1 second, should still have everything.
ctl.increment(Duration::from_secs(1));
let snapshot = h.snapshot();
assert_eq!(snapshot.len(), 2);
let total: u64 = snapshot.decompress().iter().sum();
assert_eq!(total, 3);
// Roll forward 1 second, should still have everything.
ctl.increment(Duration::from_secs(1));
let snapshot = h.snapshot();
assert_eq!(snapshot.len(), 2);
let total: u64 = snapshot.decompress().iter().sum();
assert_eq!(total, 3);
// Pump in some new values. We should have a total of 5 values now.
h.record(3);
h.record(4);
h.record(5);
let snapshot = h.snapshot();
assert_eq!(snapshot.len(), 5);
let total: u64 = snapshot.decompress().iter().sum();
assert_eq!(total, 15);
// Roll forward 6 seconds, in increments. The first one rolls over a single bucket, and
// cleans bucket #0, the first one we wrote to. The second and third ones get us right up
// to the last three values, and then clear them out.
ctl.increment(Duration::from_secs(1));
let snapshot = h.snapshot();
assert_eq!(snapshot.len(), 3);
let total: u64 = snapshot.decompress().iter().sum();
assert_eq!(total, 12);
ctl.increment(Duration::from_secs(4));
let snapshot = h.snapshot();
assert_eq!(snapshot.len(), 3);
let total: u64 = snapshot.decompress().iter().sum();
assert_eq!(total, 12);
ctl.increment(Duration::from_secs(1));
let snapshot = h.snapshot();
assert_eq!(snapshot.len(), 0);
// We should also be able to advance by vast periods of time and observe not only old
// values going away but also no weird overflow or indexing issues. This ensures that
// our upkeep code functions not just for under-load single-bucket rollovers but also
// "been idle for a while and just got a write" scenarios.
h.record(42);
let snapshot = h.snapshot();
assert_eq!(snapshot.len(), 1);
let total: u64 = snapshot.decompress().iter().sum();
assert_eq!(total, 42);
ctl.increment(Duration::from_secs(1000));
let snapshot = h.snapshot();
assert_eq!(snapshot.len(), 0);
}
#[test]
fn test_histogram_write_gauntlet_mt() {
let clock = Clock::new();
let clock2 = clock.clone();
let target = clock.now() + Duration::from_secs(5).as_nanos() as u64;
let h = AtomicWindowedHistogram::new(
Duration::from_secs(20),
Duration::from_millis(500),
clock,
);
thread::scope(|s| {
let t1 = s.spawn(|_| {
let mut total = 0;
while clock2.now() < target {
h.record(42);
total += 1;
}
total
});
let t2 = s.spawn(|_| {
let mut total = 0;
while clock2.now() < target {
h.record(42);
total += 1;
}
total
});
let t3 = s.spawn(|_| {
let mut total = 0;
while clock2.now() < target {
h.record(42);
total += 1;
}
total
});
let t1_total = t1.join().expect("thread 1 panicked during test");
let t2_total = t2.join().expect("thread 2 panicked during test");
let t3_total = t3.join().expect("thread 3 panicked during test");
let total = t1_total + t2_total + t3_total;
let snap = h.snapshot();
assert_eq!(total, snap.len());
})
.unwrap();
}
}


@ -1,12 +0,0 @@
//! Core data types for metrics.
mod counter;
pub use counter::Counter;
mod gauge;
pub use gauge::Gauge;
mod histogram;
pub use histogram::{AtomicWindowedHistogram, Histogram};
mod snapshot;
pub use snapshot::Snapshot;


@ -1,29 +0,0 @@
use crate::common::Measurement;
use metrics_core::Key;
/// A collection of point-in-time metric measurements.
#[derive(Default, Debug)]
pub struct Snapshot {
measurements: Vec<(Key, Measurement)>,
}
impl Snapshot {
pub(crate) fn new(measurements: Vec<(Key, Measurement)>) -> Self {
Self { measurements }
}
/// Number of measurements in this snapshot.
pub fn len(&self) -> usize {
self.measurements.len()
}
/// Whether or not the snapshot is empty.
pub fn is_empty(&self) -> bool {
self.measurements.is_empty()
}
/// Converts a [`Snapshot`] into the internal measurements.
pub fn into_measurements(self) -> Vec<(Key, Measurement)> {
self.measurements
}
}


@ -1,8 +0,0 @@
//! Commonly used exporters.
//!
//! Exporters define where metric output goes: standard output, HTTP, etc.
#[cfg(feature = "metrics-exporter-log")]
pub use metrics_exporter_log::LogExporter;
#[cfg(feature = "metrics-exporter-http")]
pub use metrics_exporter_http::HttpExporter;


@ -1,21 +0,0 @@
use std::time::Duration;
/// Converts a duration to nanoseconds.
pub fn duration_as_nanos(d: Duration) -> u64 {
(d.as_secs() * 1_000_000_000) + u64::from(d.subsec_nanos())
}
#[cfg(test)]
mod tests {
use super::duration_as_nanos;
use std::time::Duration;
#[test]
fn test_simple_duration_as_nanos() {
let d1 = Duration::from_secs(3);
let d2 = Duration::from_millis(500);
assert_eq!(duration_as_nanos(d1), 3_000_000_000);
assert_eq!(duration_as_nanos(d2), 500_000_000);
}
}


@ -1,342 +0,0 @@
//! High-speed metrics collection library.
//!
//! `metrics-runtime` provides a generalized metrics collection library targeted at users who want
//! to log metrics at high volume and high speed.
//!
//! # Design
//!
//! The library follows a pattern of "senders" and a "receiver."
//!
//! Callers create a [`Receiver`], which acts as a registry for all metrics that flow through it.
//! It allows creating new sinks as well as controllers, both necessary to push in and pull out
//! metrics from the system. It also manages background resources necessary for the registry to
//! operate.
//!
//! Once a [`Receiver`] is created, callers can either create a [`Sink`] for sending metrics, or a
//! [`Controller`] for getting metrics out.
//!
//! A [`Sink`] can be cheaply cloned, and offers convenience methods for getting the current time
//! as well as getting direct handles to a given metric. This allows users to either work with the
//! fuller API exposed by [`Sink`] or to take a compositional approach and embed fields that
//! represent each particular metric to be sent.
//!
//! A [`Controller`] provides both a synchronous and asynchronous snapshotting interface, which is
//! [`metrics-core`][metrics_core] compatible for exporting. This allows flexibility in
//! integration amongst traditional single-threaded or hand-rolled multi-threaded applications and
//! the emerging asynchronous Rust ecosystem.
//!
//! # Performance
//!
//! Users can expect to be able to send tens of millions of samples per second, with ingest
//! latencies at roughly 65-70ns at p50, and 250ns at p99. Depending on the workload -- counters
//! vs histograms -- latencies may be even lower, as counters and gauges are markedly faster to
//! update than histograms. Concurrent updates of the same metric will also cause natural
//! contention and lower the throughput/increase the latency of ingestion.
//!
//! # Metrics
//!
//! Counters, gauges, and histograms are supported, and follow the definitions outlined in
//! [`metrics-core`][metrics_core].
//!
//! Here's a simple example of creating a receiver and working with a sink:
//!
//! ```rust
//! # extern crate metrics_runtime;
//! use metrics_runtime::Receiver;
//! use std::{thread, time::Duration};
//! let receiver = Receiver::builder().build().expect("failed to create receiver");
//! let mut sink = receiver.sink();
//!
//! // We can update a counter. Counters are monotonic, unsigned integers that start at 0 and
//! // increase over time.
//! sink.increment_counter("widgets", 5);
//!
//! // We can update a gauge. Gauges are signed, and hold on to the last value they were updated
//! // to, so you need to track the overall value on your own.
//! sink.update_gauge("red_balloons", 99);
//!
//! // We can update a timing histogram. For timing, we're using the built-in `Sink::now` method
//! // which utilizes a high-speed internal clock. This method returns the time in nanoseconds, so
//! // we get great resolution, but giving the time in nanoseconds isn't required! If you want to
//! // send it in another unit, that's fine, but just pay attention to that fact when viewing and
//! // using those metrics once exported. We also support passing `Instant` values -- both `start`
//! // and `end` need to be the same type, though! -- and we'll take the nanosecond output of that.
//! let start = sink.now();
//! thread::sleep(Duration::from_millis(10));
//! let end = sink.now();
//! sink.record_timing("db.queries.select_products_ns", start, end);
//!
//! // Finally, we can update a value histogram. Technically speaking, value histograms aren't
//! // fundamentally different from timing histograms. If you use a timing histogram, we do the
//! // math for you of getting the time difference, but other than that, identical under the hood.
//! let row_count = 46;
//! sink.record_value("db.queries.select_products_num_rows", row_count);
//! ```
//!
//! # Scopes
//!
//! Metrics can be scoped, not unlike loggers, at the [`Sink`] level. This allows sinks to easily
//! nest themselves without callers ever needing to care about where they're located.
//!
//! This feature is a simpler approach to tagging: while not as semantically rich, it provides the
//! level of detail necessary to distinguish a single metric between multiple callsites.
//!
//! For example, after getting a [`Sink`] from the [`Receiver`], we can easily nest ourselves under
//! the root scope and then send some metrics:
//!
//! ```rust
//! # extern crate metrics_runtime;
//! # use metrics_runtime::Receiver;
//! # let receiver = Receiver::builder().build().expect("failed to create receiver");
//! // This sink has no scope aka the root scope. The metric will just end up as "widgets".
//! let mut root_sink = receiver.sink();
//! root_sink.increment_counter("widgets", 42);
//!
//! // This sink is under the "secret" scope. Since we derived ourselves from the root scope,
//! // we're not nested under anything, but our metric name will end up being "secret.widgets".
//! let mut scoped_sink = root_sink.scoped("secret");
//! scoped_sink.increment_counter("widgets", 42);
//!
//! // This sink is under the "supersecret" scope, but we're also nested! The metric name for this
//! // sample will end up being "secret.supersecret.widgets".
//! let mut scoped_sink_two = scoped_sink.scoped("supersecret");
//! scoped_sink_two.increment_counter("widgets", 42);
//!
//! // Sinks retain their scope even when cloned, so the metric name will be the same as above.
//! let mut cloned_sink = scoped_sink_two.clone();
//! cloned_sink.increment_counter("widgets", 42);
//!
//! // This sink will be nested four levels deeper than its parent by using a slightly different
//! // input scope: scope can be a single string, or multiple strings, which is interpreted as
//! // nesting N levels deep.
//! //
//! // This metric name will end up being "secret.super.secret.ultra.special.widgets".
//! let mut scoped_sink_three = scoped_sink.scoped(&["super", "secret", "ultra", "special"]);
//! scoped_sink_three.increment_counter("widgets", 42);
//! ```
//!
//! # Labels
//!
//! On top of scope support, metrics can also have labels. If scopes are for organizing metrics in
//! a hierarchy, then labels are for differentiating the same metric being emitted from multiple
//! sources.
//!
//! This is most easily demonstrated with an example:
//!
//! ```rust
//! # extern crate metrics_runtime;
//! # fn run_query(_: &str) -> u64 { 42 }
//! # use metrics_runtime::Receiver;
//! # let receiver = Receiver::builder().build().expect("failed to create receiver");
//! # let mut sink = receiver.sink();
//! // We might have a function that interacts with a database and returns the number of rows it
//! // touched in doing so.
//! fn process_query(query: &str) -> u64 {
//! run_query(query)
//! }
//!
//! // We might call this function multiple times, but hitting different tables.
//! let rows_a = process_query("UPDATE posts SET public = 1 WHERE public = 0");
//! let rows_b = process_query("UPDATE comments SET public = 1 WHERE public = 0");
//!
//! // Now, we want to track a metric that shows how many rows are updated overall, so the metric
//! // name should be the same no matter which table we update, but we'd also like to be able to
//! // differentiate by table, too!
//! sink.record_value_with_labels("db.rows_updated", rows_a, &[("table", "posts")]);
//! sink.record_value_with_labels("db.rows_updated", rows_b, &[("table", "comments")]);
//!
//! // If you want to send a specific set of labels with every metric from this sink, you can also
//! // add default labels. This action is additive, so you can call it multiple times to build up
//! // the set of labels sent with metrics, and labels are inherited when creating a scoped sink or
//! // cloning an existing sink, which allows label usage to either supplement scopes or to
//! // potentially replace them entirely.
//! sink.add_default_labels(&[("database", "primary")]);
//! # fn main() {}
//! ```
//!
//! As shown in the example, labels allow a user to submit values to the underlying metric name,
//! while also differentiating between unique situations, whatever the facet that the user decides
//! to utilize.
//!
//! Naturally, these methods can be slightly cumbersome and visually detracting, in which case
//! you can utilize the metric handles -- [`Counter`](crate::data::Counter),
//! [`Gauge`](crate::data::Gauge), and [`Histogram`](crate::data::Histogram) -- and create them
//! with labels ahead of time.
//!
//! These handles are bound to the given metric type, as well as the name, labels, and scope of the
//! sink. Thus, there is no overhead of looking up the metric as with the `record_*` methods, and
//! the values can be updated directly, and with less overhead, resulting in faster method calls.
//!
//! ```rust
//! # extern crate metrics_runtime;
//! # use metrics_runtime::Receiver;
//! # use std::time::Instant;
//! # let receiver = Receiver::builder().build().expect("failed to create receiver");
//! # let mut sink = receiver.sink();
//! // Let's create a counter.
//! let egg_count = sink.counter("eggs");
//!
//! // I want a baker's dozen of eggs!
//! egg_count.increment();
//! egg_count.record(12);
//!
//! // This updates the same metric as above! We have so many eggs now!
//! sink.increment_counter("eggs", 12);
//!
//! // Gauges and histograms don't have any extra helper methods, just `record`:
//! let gauge = sink.gauge("population");
//! gauge.record(8_000_000_000);
//!
//! let histogram = sink.histogram("distribution");
//!
//! // You can record a histogram value directly:
//! histogram.record_value(42);
//!
//! // Or handily pass it two [`Delta`]-compatible values, and have it calculate the delta for you:
//! let start = Instant::now();
//! let end = Instant::now();
//! histogram.record_timing(start, end);
//!
//! // Each of these methods also has a labels-aware companion:
//! let labeled_counter = sink.counter_with_labels("egg_count", &[("type", "large_brown")]);
//! let labeled_gauge = sink.gauge_with_labels("population", &[("country", "austria")]);
//! let labeled_histogram = sink.histogram_with_labels("distribution", &[("type", "performance")]);
//! ```
//!
//! # Proxies
//!
//! Sometimes, you may have a need to pull in "external" metrics: values related to your
//! application that your application itself doesn't generate, such as system-level metrics.
//!
//! [`Sink`] allows you to register a "proxy metric", which gives the ability to return metrics
//! on-demand when a snapshot is being taken. Users provide a closure that is run every time a
//! snapshot is being taken, which can return multiple metrics, which are then added to overall
//! list of metrics being held by `metrics-runtime` itself.
//!
//! If metrics are relatively expensive to calculate -- say, reading from the /proc filesystem on
//! Linux -- this can be a great alternative to polling them yourself and updating them on some
//! sort of schedule.
//!
//! ```rust
//! # extern crate metrics_runtime;
//! # extern crate metrics_core;
//! # use metrics_core::Key;
//! # use metrics_runtime::{Receiver, Measurement};
//! # use std::time::Instant;
//! # let receiver = Receiver::builder().build().expect("failed to create receiver");
//! # let mut sink = receiver.sink();
//! // A proxy is now registered under the name "load_stats", which is prepended to all the metrics
//! // generated by the closure, i.e. "load_stats.avg_1min". These metrics are also still scoped
//! // normally based on the [`Sink`].
//! sink.proxy("load_stats", || {
//! let mut values = Vec::new();
//! values.push((Key::from_name("avg_1min"), Measurement::Gauge(19)));
//! values.push((Key::from_name("avg_5min"), Measurement::Gauge(12)));
//! values.push((Key::from_name("avg_10min"), Measurement::Gauge(10)));
//! values
//! });
//! ```
//!
//! # Snapshots
//!
//! Naturally, we need a way to get the metrics out of the system, which is where snapshots come
//! into play. By utilizing a [`Controller`], we can take a snapshot of the current metrics in the
//! registry, and then output them to any desired system/interface by utilizing
//! [`Observer`](metrics_core::Observer). A number of pre-baked observers (which only concern
//! themselves with formatting the data) and exporters (which take the formatted data and either
//! serve it up, such as exposing an HTTP endpoint, or write it somewhere, like stdout) are
//! available, some of which are exposed by this crate.
//!
//! Let's take an example of rendering our metrics in a YAML-like format and writing them via
//! `log!`:
//! ```rust
//! # extern crate metrics_runtime;
//! use metrics_runtime::{
//! Receiver, observers::YamlBuilder, exporters::LogExporter,
//! };
//! use log::Level;
//! use std::{thread, time::Duration};
//! let receiver = Receiver::builder().build().expect("failed to create receiver");
//! let mut sink = receiver.sink();
//!
//! // Take some measurements, similar to what we had in other examples. Counters are monotonic,
//! // unsigned integers that start at 0 and only increase over time:
//! sink.increment_counter("widgets", 5);
//! sink.update_gauge("red_balloons", 99);
//!
//! let start = sink.now();
//! thread::sleep(Duration::from_millis(10));
//! let end = sink.now();
//! sink.record_timing("db.queries.select_products_ns", start, end);
//! sink.record_timing("db.gizmo_query", start, end);
//!
//! let num_rows = 46;
//! sink.record_value("db.queries.select_products_num_rows", num_rows);
//!
//! // Now create our exporter/observer configuration, and wire it up.
//! let exporter = LogExporter::new(
//! receiver.controller(),
//! YamlBuilder::new(),
//! Level::Info,
//! Duration::from_secs(5),
//! );
//!
//! // If run, this exporter would take a snapshot every 5 seconds, render it, and write it via
//! // `log!` at the informational level. This particular exporter runs directly on the current
//! // thread, and not on a background thread.
//! //
//! // exporter.run();
//! ```
//! Most exporters have the ability to run on the current thread or to be converted into a future
//! which can be spawned on any Tokio-compatible runtime.
//!
//! # Facade
//!
//! `metrics-runtime` is `metrics`-compatible, and can be installed as the global metrics facade:
//! ```
//! # #[macro_use] extern crate metrics;
//! extern crate metrics_runtime;
//! use metrics_runtime::Receiver;
//!
//! Receiver::builder()
//! .build()
//! .expect("failed to create receiver")
//! .install();
//!
//! counter!("items_processed", 42);
//! ```
//!
//! [metrics_core]: https://docs.rs/metrics-core
//! [`Observer`]: https://docs.rs/metrics-core/0.3.1/metrics_core/trait.Observer.html
#![deny(missing_docs)]
#![warn(unused_extern_crates)]
mod builder;
mod common;
mod config;
mod control;
pub mod data;
mod helper;
mod receiver;
mod registry;
mod sink;
#[cfg(any(feature = "metrics-exporter-log", feature = "metrics-exporter-http"))]
pub mod exporters;
#[cfg(any(
feature = "metrics-observer-yaml",
feature = "metrics-observer-json",
feature = "metrics-observer-prometheus"
))]
pub mod observers;
pub use self::{
builder::{Builder, BuilderError},
common::{Delta, Measurement, Scope},
control::Controller,
receiver::Receiver,
sink::{AsScoped, Sink, SinkError},
};


@ -1,11 +0,0 @@
//! Commonly used observers.
//!
//! Observers define the format of the metric output: YAML, JSON, etc.
#[cfg(feature = "metrics-observer-yaml")]
pub use metrics_observer_yaml::YamlBuilder;
#[cfg(feature = "metrics-observer-json")]
pub use metrics_observer_json::JsonBuilder;
#[cfg(feature = "metrics-observer-prometheus")]
pub use metrics_observer_prometheus::PrometheusBuilder;


@ -1,118 +0,0 @@
use crate::{
builder::{Builder, BuilderError},
common::Scope,
config::Configuration,
control::Controller,
registry::{MetricRegistry, ScopeRegistry},
sink::Sink,
};
use metrics::Recorder;
use metrics_core::Key;
use quanta::{Builder as UpkeepBuilder, Clock, Handle as UpkeepHandle};
use std::{cell::RefCell, sync::Arc};
thread_local! {
static SINK: RefCell<Option<Sink>> = RefCell::new(None);
}
/// Central store for metrics.
///
/// `Receiver` is the nucleus for all metrics operations. While no operations are performed by it
/// directly, it holds the registries and references to resources, and so it must live as long as
/// any [`Sink`] or [`Controller`] does.
pub struct Receiver {
metric_registry: Arc<MetricRegistry>,
scope_registry: Arc<ScopeRegistry>,
clock: Clock,
_upkeep_handle: UpkeepHandle,
}
impl Receiver {
pub(crate) fn from_config(config: Configuration) -> Result<Receiver, BuilderError> {
        // Configure our clock and the quanta upkeep thread. The upkeep thread maintains a cached
        // time that stays within `upkeep_interval` of the true time. Reads of this cached time are
        // faster than calling the underlying time source directly, and for histogram windowing we
        // can afford a much coarser value than the raw nanosecond precision quanta provides by
        // default.
let clock = Clock::new();
let upkeep = UpkeepBuilder::new_with_clock(config.upkeep_interval, clock.clone());
let _upkeep_handle = upkeep.start().map_err(|_| BuilderError::UpkeepFailure)?;
let scope_registry = Arc::new(ScopeRegistry::new());
let metric_registry = Arc::new(MetricRegistry::new(
scope_registry.clone(),
config,
clock.clone(),
));
Ok(Receiver {
metric_registry,
scope_registry,
clock,
_upkeep_handle,
})
}
/// Creates a new [`Builder`] for building a [`Receiver`].
pub fn builder() -> Builder {
Builder::default()
}
/// Installs this receiver as the global metrics facade.
pub fn install(self) {
metrics::set_boxed_recorder(Box::new(self)).unwrap();
}
/// Creates a [`Sink`] bound to this receiver.
pub fn sink(&self) -> Sink {
Sink::new(
self.metric_registry.clone(),
self.scope_registry.clone(),
Scope::Root,
self.clock.clone(),
)
}
/// Creates a [`Controller`] bound to this receiver.
pub fn controller(&self) -> Controller {
Controller::new(self.metric_registry.clone(), self.scope_registry.clone())
}
}
impl Recorder for Receiver {
fn increment_counter(&self, key: Key, value: u64) {
SINK.with(move |sink| {
let mut sink = sink.borrow_mut();
if sink.is_none() {
let new_sink = self.sink();
*sink = Some(new_sink);
}
sink.as_mut().unwrap().increment_counter(key, value);
});
}
fn update_gauge(&self, key: Key, value: i64) {
SINK.with(move |sink| {
let mut sink = sink.borrow_mut();
if sink.is_none() {
let new_sink = self.sink();
*sink = Some(new_sink);
}
sink.as_mut().unwrap().update_gauge(key, value);
});
}
fn record_histogram(&self, key: Key, value: u64) {
SINK.with(move |sink| {
let mut sink = sink.borrow_mut();
if sink.is_none() {
let new_sink = self.sink();
*sink = Some(new_sink);
}
sink.as_mut().unwrap().record_value(key, value);
});
}
}


@ -1,251 +0,0 @@
use crate::common::{Identifier, Kind, Measurement, ValueHandle, ValueSnapshot};
use crate::config::Configuration;
use crate::data::Snapshot;
use crate::registry::ScopeRegistry;
use arc_swap::ArcSwap;
use im::hashmap::HashMap;
use metrics_core::Observer;
use quanta::Clock;
use std::sync::Arc;
#[derive(Debug)]
pub(crate) struct MetricRegistry {
scope_registry: Arc<ScopeRegistry>,
metrics: ArcSwap<HashMap<Identifier, ValueHandle>>,
config: Configuration,
clock: Clock,
}
impl MetricRegistry {
pub fn new(scope_registry: Arc<ScopeRegistry>, config: Configuration, clock: Clock) -> Self {
MetricRegistry {
scope_registry,
metrics: ArcSwap::new(Arc::new(HashMap::new())),
config,
clock,
}
}
pub fn get_or_register(&self, id: Identifier) -> ValueHandle {
loop {
let old_metrics = self.metrics.load();
match old_metrics.get(&id) {
Some(handle) => return handle.clone(),
None => {
let value_handle = match id.kind() {
Kind::Counter => ValueHandle::counter(),
Kind::Gauge => ValueHandle::gauge(),
Kind::Histogram => ValueHandle::histogram(
self.config.histogram_window,
self.config.histogram_granularity,
self.clock.clone(),
),
Kind::Proxy => ValueHandle::proxy(),
};
let mut new_metrics = (**self.metrics.load()).clone();
match new_metrics.insert(id.clone(), value_handle.clone()) {
Some(other_value_handle) => {
// Somebody else beat us to it.
return other_value_handle;
}
None => {
let prev_metrics = self
.metrics
.compare_and_swap(&old_metrics, Arc::new(new_metrics));
if Arc::ptr_eq(&old_metrics, &prev_metrics) {
return value_handle;
}
// If we weren't able to cleanly update the map, then try again.
}
}
}
}
}
}
pub fn snapshot(&self) -> Snapshot {
let mut values = Vec::new();
let metrics = (**self.metrics.load()).clone();
for (id, value) in metrics.into_iter() {
let (key, scope_handle, _) = id.into_parts();
let scope = self.scope_registry.get(scope_handle);
match value.snapshot() {
ValueSnapshot::Single(measurement) => {
let key = key.map_name(|name| scope.into_string(name));
values.push((key, measurement));
}
ValueSnapshot::Multiple(mut measurements) => {
                    // Fold the key name this proxy was registered under into the scope, then
                    // use that scope to qualify each individual measurement.
let (base_key, labels) = key.into_parts();
let scope = scope.clone().add_part(base_key);
for (subkey, measurement) in measurements.drain(..) {
let scope = scope.clone();
let mut subkey = subkey.map_name(|name| scope.into_string(name));
subkey.add_labels(labels.clone());
values.push((subkey, measurement));
}
}
}
}
Snapshot::new(values)
}
pub fn observe<O: Observer>(&self, observer: &mut O) {
let metrics = (**self.metrics.load()).clone();
for (id, value) in metrics.into_iter() {
let (key, scope_handle, _) = id.into_parts();
let scope = self.scope_registry.get(scope_handle);
let observe = |observer: &mut O, key, measurement| match measurement {
Measurement::Counter(value) => observer.observe_counter(key, value),
Measurement::Gauge(value) => observer.observe_gauge(key, value),
Measurement::Histogram(stream) => stream.decompress_with(|values| {
observer.observe_histogram(key.clone(), values);
}),
};
match value.snapshot() {
ValueSnapshot::Single(measurement) => {
let key = key.map_name(|name| scope.into_string(name));
observe(observer, key, measurement);
}
ValueSnapshot::Multiple(mut measurements) => {
                    // Fold the key name this proxy was registered under into the scope, then
                    // use that scope to qualify each individual measurement.
let (base_key, labels) = key.into_parts();
let scope = scope.clone().add_part(base_key);
for (subkey, measurement) in measurements.drain(..) {
let scope = scope.clone();
let mut subkey = subkey.map_name(|name| scope.into_string(name));
subkey.add_labels(labels.clone());
observe(observer, subkey, measurement);
}
}
}
}
}
}
#[cfg(test)]
mod tests {
use super::{
Clock, Configuration, Identifier, Kind, Measurement, MetricRegistry, ScopeRegistry,
};
use crate::data::{Counter, Gauge, Histogram};
use metrics_core::{Key, Label};
use metrics_util::StreamingIntegers;
use std::mem;
use std::sync::Arc;
#[test]
fn test_snapshot() {
// Get our registry.
let sr = Arc::new(ScopeRegistry::new());
let config = Configuration::mock();
let (clock, _) = Clock::mock();
let mr = Arc::new(MetricRegistry::new(sr, config, clock));
// Set some metrics.
let cid = Identifier::new("counter", 0, Kind::Counter);
let counter: Counter = mr.get_or_register(cid).into();
counter.record(15);
let gid = Identifier::new("gauge", 0, Kind::Gauge);
let gauge: Gauge = mr.get_or_register(gid).into();
gauge.record(89);
let hid = Identifier::new("histogram", 0, Kind::Histogram);
let histogram: Histogram = mr.get_or_register(hid).into();
histogram.record_value(89);
let pid = Identifier::new("proxy", 0, Kind::Proxy);
let proxy = mr.get_or_register(pid);
proxy.update_proxy(|| vec![(Key::from_name("counter"), Measurement::Counter(13))]);
let mut snapshot = mr.snapshot().into_measurements();
snapshot.sort_by_key(|(k, _)| k.name());
let mut expected = vec![
(Key::from_name("counter"), Measurement::Counter(15)),
(Key::from_name("gauge"), Measurement::Gauge(89)),
(
Key::from_name("histogram"),
Measurement::Histogram(StreamingIntegers::new()),
),
(Key::from_name("proxy.counter"), Measurement::Counter(13)),
];
expected.sort_by_key(|(k, _)| k.name());
assert_eq!(snapshot.len(), expected.len());
for rhs in expected {
let lhs = snapshot.remove(0);
assert_eq!(lhs.0, rhs.0);
assert_eq!(mem::discriminant(&lhs.1), mem::discriminant(&rhs.1));
}
}
#[test]
fn test_snapshot_with_labels() {
// Get our registry.
let sr = Arc::new(ScopeRegistry::new());
let config = Configuration::mock();
let (clock, _) = Clock::mock();
let mr = Arc::new(MetricRegistry::new(sr, config, clock));
let labels = vec![Label::new("type", "test")];
// Set some metrics.
let cid = Identifier::new(("counter", labels.clone()), 0, Kind::Counter);
let counter: Counter = mr.get_or_register(cid).into();
counter.record(15);
let gid = Identifier::new(("gauge", labels.clone()), 0, Kind::Gauge);
let gauge: Gauge = mr.get_or_register(gid).into();
gauge.record(89);
let hid = Identifier::new(("histogram", labels.clone()), 0, Kind::Histogram);
let histogram: Histogram = mr.get_or_register(hid).into();
histogram.record_value(89);
let pid = Identifier::new(("proxy", labels.clone()), 0, Kind::Proxy);
let proxy = mr.get_or_register(pid);
proxy.update_proxy(|| vec![(Key::from_name("counter"), Measurement::Counter(13))]);
let mut snapshot = mr.snapshot().into_measurements();
snapshot.sort_by_key(|(k, _)| k.name());
let mut expected = vec![
(
Key::from_name_and_labels("counter", labels.clone()),
Measurement::Counter(15),
),
(
Key::from_name_and_labels("gauge", labels.clone()),
Measurement::Gauge(89),
),
(
Key::from_name_and_labels("histogram", labels.clone()),
Measurement::Histogram(StreamingIntegers::new()),
),
(
Key::from_name_and_labels("proxy.counter", labels),
Measurement::Counter(13),
),
];
expected.sort_by_key(|(k, _)| k.name());
assert_eq!(snapshot.len(), expected.len());
for rhs in expected {
let lhs = snapshot.remove(0);
assert_eq!(lhs.0, rhs.0);
assert_eq!(mem::discriminant(&lhs.1), mem::discriminant(&rhs.1));
}
}
}


@ -1,5 +0,0 @@
mod scope;
pub(crate) use self::scope::ScopeRegistry;
mod metric;
pub(crate) use self::metric::MetricRegistry;


@ -1,89 +0,0 @@
use crate::common::{Scope, ScopeHandle};
use parking_lot::RwLock;
use std::collections::HashMap;
#[derive(Debug)]
struct Inner {
id: u64,
forward: HashMap<Scope, ScopeHandle>,
backward: HashMap<ScopeHandle, Scope>,
}
impl Inner {
pub fn new() -> Self {
Inner {
id: 1,
forward: HashMap::new(),
backward: HashMap::new(),
}
}
}
#[derive(Debug)]
pub(crate) struct ScopeRegistry {
inner: RwLock<Inner>,
}
impl ScopeRegistry {
pub fn new() -> Self {
Self {
inner: RwLock::new(Inner::new()),
}
}
pub fn register(&self, scope: Scope) -> u64 {
let mut wg = self.inner.write();
// If the key is already registered, send back the existing scope ID.
if wg.forward.contains_key(&scope) {
return wg.forward.get(&scope).cloned().unwrap();
}
// Otherwise, take the current scope ID for this registration, store it, and increment
// the scope ID counter for the next registration.
let scope_id = wg.id;
let _ = wg.forward.insert(scope.clone(), scope_id);
let _ = wg.backward.insert(scope_id, scope);
wg.id += 1;
scope_id
}
pub fn get(&self, scope_id: ScopeHandle) -> Scope {
// See if we have an entry for the scope ID, and clone the scope if so.
let rg = self.inner.read();
rg.backward.get(&scope_id).cloned().unwrap_or(Scope::Root)
}
}
#[cfg(test)]
mod tests {
use super::{Scope, ScopeRegistry};
#[test]
fn test_simple_write_then_read() {
let nested1 = Scope::Root.add_part("nested1");
let nested2 = nested1.clone().add_part("nested2");
let sr = ScopeRegistry::new();
let doesnt_exist0 = sr.get(0);
let doesnt_exist1 = sr.get(1);
let doesnt_exist2 = sr.get(2);
assert_eq!(doesnt_exist0, Scope::Root);
assert_eq!(doesnt_exist1, Scope::Root);
assert_eq!(doesnt_exist2, Scope::Root);
let nested1_original = nested1.clone();
let nested1_id = sr.register(nested1);
let nested2_original = nested2.clone();
let nested2_id = sr.register(nested2);
let exists1 = sr.get(nested1_id);
let exists2 = sr.get(nested2_id);
assert_eq!(exists1, nested1_original);
assert_eq!(exists2, nested2_original);
}
}


@ -1,53 +0,0 @@
use parking_lot::RwLock;
use std::collections::HashMap;
pub struct Inner {
id: u64,
forward: HashMap<String, u64>,
backward: HashMap<u64, String>,
}
impl Inner {
pub fn new() -> Self {
Inner {
id: 1,
forward: HashMap::new(),
backward: HashMap::new(),
}
}
}
pub struct Scopes {
inner: RwLock<Inner>,
}
impl Scopes {
pub fn new() -> Self {
Scopes {
inner: RwLock::new(Inner::new()),
}
}
pub fn register(&self, scope: String) -> u64 {
let mut wg = self.inner.write();
// If the key is already registered, send back the existing scope ID.
if wg.forward.contains_key(&scope) {
return wg.forward.get(&scope).cloned().unwrap();
}
// Otherwise, take the current scope ID for this registration, store it, and increment
// the scope ID counter for the next registration.
let scope_id = wg.id;
let _ = wg.forward.insert(scope.clone(), scope_id);
let _ = wg.backward.insert(scope_id, scope);
wg.id += 1;
scope_id
}
pub fn get(&self, scope_id: u64) -> Option<String> {
// See if we have an entry for the scope ID, and clone the scope if so.
let rg = self.inner.read();
rg.backward.get(&scope_id).cloned()
}
}


@ -1,744 +0,0 @@
use crate::{
common::{Delta, Identifier, Kind, Measurement, Scope, ScopeHandle, ValueHandle},
data::{Counter, Gauge, Histogram},
registry::{MetricRegistry, ScopeRegistry},
};
use metrics_core::{IntoLabels, Key, Label, ScopedString};
use quanta::Clock;
use std::{collections::HashMap, error::Error, fmt, sync::Arc};
/// Errors during sink creation.
#[derive(Debug, Clone)]
pub enum SinkError {
    /// The scope value given was invalid, e.g. empty or containing illegal characters.
InvalidScope,
}
impl Error for SinkError {}
impl fmt::Display for SinkError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self {
SinkError::InvalidScope => write!(f, "given scope is invalid"),
}
}
}
/// A value that can be used as a metric scope.
///
/// This helper trait allows us to accept either a single string or a slice of strings to use as a
/// scope, to avoid needing to allocate in the case where we want to be able to specify multiple
/// scope levels in a single go.
pub trait AsScoped<'a> {
/// Creates a new [`Scope`] by adding `self` to the `base` scope.
fn as_scoped(&'a self, base: Scope) -> Scope;
}
/// Handle for sending metric samples.
#[derive(Debug)]
pub struct Sink {
metric_registry: Arc<MetricRegistry>,
metric_cache: HashMap<Identifier, ValueHandle>,
scope_registry: Arc<ScopeRegistry>,
scope: Scope,
scope_handle: ScopeHandle,
clock: Clock,
default_labels: Vec<Label>,
}
impl Sink {
pub(crate) fn new(
metric_registry: Arc<MetricRegistry>,
scope_registry: Arc<ScopeRegistry>,
scope: Scope,
clock: Clock,
) -> Sink {
let scope_handle = scope_registry.register(scope.clone());
Sink {
metric_registry,
metric_cache: HashMap::default(),
scope_registry,
scope,
scope_handle,
clock,
default_labels: Vec::new(),
}
}
/// Adds default labels for this sink and any derived sinks.
///
    /// Default labels are added to all metrics. If a metric already has its own labels specified,
    /// the default labels will be appended to the existing labels.
///
/// Labels are passed on, with scope, to any scoped children or cloned sinks.
pub fn add_default_labels<L>(&mut self, labels: L)
where
L: IntoLabels,
{
let labels = labels.into_labels();
self.default_labels.extend(labels);
}
/// Creates a scoped clone of this [`Sink`].
///
/// Scoping controls the resulting metric name for any metrics sent by this [`Sink`]. For
/// example, you might have a metric called `messages_sent`.
///
    /// With scoping, you can have independent versions of the same metric. This is useful for
    /// keeping the same "base" metric name while breaking the values down by some facet.
///
/// Going further with the above example, if you had a server, and listened on multiple
/// addresses, maybe you would have a scoped [`Sink`] per listener, and could end up with
/// metrics that look like this:
/// - `listener.a.messages_sent`
/// - `listener.b.messages_sent`
/// - `listener.c.messages_sent`
/// - etc
///
/// Scopes are also inherited. If you create a scoped [`Sink`] from another [`Sink`] which is
/// already scoped, the scopes will be merged together using a `.` as the string separator.
/// This makes it easy to nest scopes. Cloning a scoped [`Sink`], though, will inherit the
/// same scope as the original.
pub fn scoped<'a, S: AsScoped<'a> + ?Sized>(&self, scope: &'a S) -> Sink {
let new_scope = scope.as_scoped(self.scope.clone());
let mut sink = Sink::new(
self.metric_registry.clone(),
self.scope_registry.clone(),
new_scope,
self.clock.clone(),
);
if !self.default_labels.is_empty() {
sink.add_default_labels(self.default_labels.clone());
}
sink
}
/// Gets the current time, in nanoseconds, from the internal high-speed clock.
pub fn now(&self) -> u64 {
self.clock.now()
}
/// Increment a value for a counter identified by the given name.
///
/// # Examples
///
/// ```rust
/// # extern crate metrics_runtime;
/// # use metrics_runtime::Receiver;
/// # fn main() {
/// let receiver = Receiver::builder().build().expect("failed to create receiver");
/// let mut sink = receiver.sink();
/// sink.increment_counter("messages_processed", 1);
/// # }
/// ```
pub fn increment_counter<N>(&mut self, name: N, value: u64)
where
N: Into<Key>,
{
let key = self.construct_key(name);
let id = Identifier::new(key, self.scope_handle, Kind::Counter);
let value_handle = self.get_cached_value_handle(id);
value_handle.update_counter(value);
}
/// Increment a value for a counter identified by the given name and labels.
///
/// # Examples
///
/// ```rust
/// # extern crate metrics_runtime;
/// # use metrics_runtime::Receiver;
/// # fn main() {
/// let receiver = Receiver::builder().build().expect("failed to create receiver");
/// let mut sink = receiver.sink();
/// sink.increment_counter_with_labels("messages_processed", 1, &[("message_type", "mgmt")]);
/// # }
/// ```
pub fn increment_counter_with_labels<N, L>(&mut self, name: N, value: u64, labels: L)
where
N: Into<ScopedString>,
L: IntoLabels,
{
let key = self.construct_key((name, labels));
let id = Identifier::new(key, self.scope_handle, Kind::Counter);
let value_handle = self.get_cached_value_handle(id);
value_handle.update_counter(value);
}
/// Update a value for a gauge identified by the given name.
///
/// # Examples
///
/// ```rust
/// # extern crate metrics_runtime;
/// # use metrics_runtime::Receiver;
/// # fn main() {
/// let receiver = Receiver::builder().build().expect("failed to create receiver");
/// let mut sink = receiver.sink();
/// sink.update_gauge("current_offset", -131);
/// # }
/// ```
pub fn update_gauge<N>(&mut self, name: N, value: i64)
where
N: Into<Key>,
{
let key = self.construct_key(name);
let id = Identifier::new(key, self.scope_handle, Kind::Gauge);
let value_handle = self.get_cached_value_handle(id);
value_handle.update_gauge(value);
}
/// Update a value for a gauge identified by the given name and labels.
///
/// # Examples
///
/// ```rust
/// # extern crate metrics_runtime;
/// # use metrics_runtime::Receiver;
/// # fn main() {
/// let receiver = Receiver::builder().build().expect("failed to create receiver");
/// let mut sink = receiver.sink();
/// sink.update_gauge_with_labels("current_offset", -131, &[("source", "stratum-1")]);
/// # }
/// ```
pub fn update_gauge_with_labels<N, L>(&mut self, name: N, value: i64, labels: L)
where
N: Into<ScopedString>,
L: IntoLabels,
{
let key = self.construct_key((name, labels));
let id = Identifier::new(key, self.scope_handle, Kind::Gauge);
let value_handle = self.get_cached_value_handle(id);
value_handle.update_gauge(value);
}
/// Records the value for a timing histogram identified by the given name.
///
/// Both the start and end times must be supplied, but any values that implement [`Delta`] can
/// be used which allows for raw values from [`quanta::Clock`] to be used, or measurements from
/// [`Instant::now`](std::time::Instant::now).
///
/// # Examples
///
/// ```rust
/// # extern crate metrics_runtime;
/// # use metrics_runtime::Receiver;
/// # use std::thread;
/// # use std::time::Duration;
/// # fn main() {
/// let receiver = Receiver::builder().build().expect("failed to create receiver");
/// let mut sink = receiver.sink();
/// let start = sink.now();
/// thread::sleep(Duration::from_millis(10));
/// let end = sink.now();
/// sink.record_timing("sleep_time", start, end);
/// # }
/// ```
pub fn record_timing<N, V>(&mut self, name: N, start: V, end: V)
where
N: Into<Key>,
V: Delta,
{
let delta = end.delta(start);
self.record_value(name, delta);
}
/// Records the value for a timing histogram identified by the given name and labels.
///
/// Both the start and end times must be supplied, but any values that implement [`Delta`] can
/// be used which allows for raw values from [`quanta::Clock`] to be used, or measurements from
/// [`Instant::now`](std::time::Instant::now).
///
/// # Examples
///
/// ```rust
/// # extern crate metrics_runtime;
/// # use metrics_runtime::Receiver;
/// # use std::thread;
/// # use std::time::Duration;
/// # fn main() {
/// let receiver = Receiver::builder().build().expect("failed to create receiver");
/// let mut sink = receiver.sink();
/// let start = sink.now();
/// thread::sleep(Duration::from_millis(10));
/// let end = sink.now();
/// sink.record_timing_with_labels("sleep_time", start, end, &[("mode", "low_priority")]);
/// # }
/// ```
pub fn record_timing_with_labels<N, L, V>(&mut self, name: N, start: V, end: V, labels: L)
where
N: Into<ScopedString>,
L: IntoLabels,
V: Delta,
{
let delta = end.delta(start);
self.record_value_with_labels(name, delta, labels);
}
/// Records the value for a value histogram identified by the given name.
///
/// # Examples
///
/// ```rust
/// # extern crate metrics_runtime;
/// # use metrics_runtime::Receiver;
/// # use std::thread;
/// # use std::time::Duration;
/// # fn main() {
/// let receiver = Receiver::builder().build().expect("failed to create receiver");
/// let mut sink = receiver.sink();
/// sink.record_value("rows_returned", 42);
/// # }
/// ```
pub fn record_value<N>(&mut self, name: N, value: u64)
where
N: Into<Key>,
{
let key = self.construct_key(name);
let id = Identifier::new(key, self.scope_handle, Kind::Histogram);
let value_handle = self.get_cached_value_handle(id);
value_handle.update_histogram(value);
}
/// Records the value for a value histogram identified by the given name and labels.
///
/// # Examples
///
/// ```rust
/// # extern crate metrics_runtime;
/// # use metrics_runtime::Receiver;
/// # use std::thread;
/// # use std::time::Duration;
/// # fn main() {
/// let receiver = Receiver::builder().build().expect("failed to create receiver");
/// let mut sink = receiver.sink();
/// sink.record_value_with_labels("rows_returned", 42, &[("table", "posts")]);
/// # }
/// ```
pub fn record_value_with_labels<N, L>(&mut self, name: N, value: u64, labels: L)
where
N: Into<ScopedString>,
L: IntoLabels,
{
let key = self.construct_key((name, labels));
let id = Identifier::new(key, self.scope_handle, Kind::Histogram);
let value_handle = self.get_cached_value_handle(id);
value_handle.update_histogram(value);
}
/// Creates a handle to the given counter.
///
/// This handle can be embedded into an existing type and used to directly update the
/// underlying counter without requiring a [`Sink`]. This method can be called multiple times
/// with the same `name` and the handle will point to the single underlying instance.
///
/// [`Counter`] is clonable.
    ///
/// # Examples
///
/// ```rust
/// # extern crate metrics_runtime;
/// # use metrics_runtime::Receiver;
/// # fn main() {
/// let receiver = Receiver::builder().build().expect("failed to create receiver");
/// let mut sink = receiver.sink();
/// let counter = sink.counter("messages_processed");
/// counter.record(1);
///
/// // Alternate, simpler usage:
/// counter.increment();
/// # }
/// ```
pub fn counter<N>(&mut self, name: N) -> Counter
where
N: Into<Key>,
{
let key = self.construct_key(name);
self.get_owned_value_handle(key, Kind::Counter).into()
}
/// Creates a handle to the given counter, with labels attached.
///
/// This handle can be embedded into an existing type and used to directly update the
/// underlying counter without requiring a [`Sink`]. This method can be called multiple times
/// with the same `name`/`labels` and the handle will point to the single underlying instance.
///
/// [`Counter`] is clonable.
///
/// # Examples
///
/// ```rust
/// # extern crate metrics_runtime;
/// # use metrics_runtime::Receiver;
/// # fn main() {
/// let receiver = Receiver::builder().build().expect("failed to create receiver");
/// let mut sink = receiver.sink();
/// let counter = sink.counter_with_labels("messages_processed", &[("service", "secure")]);
/// counter.record(1);
///
/// // Alternate, simpler usage:
/// counter.increment();
/// # }
/// ```
pub fn counter_with_labels<N, L>(&mut self, name: N, labels: L) -> Counter
where
N: Into<ScopedString>,
L: IntoLabels,
{
self.counter((name, labels))
}
/// Creates a handle to the given gauge.
///
/// This handle can be embedded into an existing type and used to directly update the
/// underlying gauge without requiring a [`Sink`]. This method can be called multiple times
/// with the same `name` and the handle will point to the single underlying instance.
///
/// [`Gauge`] is clonable.
///
/// # Examples
///
/// ```rust
/// # extern crate metrics_runtime;
/// # use metrics_runtime::Receiver;
/// # fn main() {
/// let receiver = Receiver::builder().build().expect("failed to create receiver");
/// let mut sink = receiver.sink();
/// let gauge = sink.gauge("current_offset");
/// gauge.record(-131);
/// # }
/// ```
pub fn gauge<N>(&mut self, name: N) -> Gauge
where
N: Into<Key>,
{
let key = self.construct_key(name);
self.get_owned_value_handle(key, Kind::Gauge).into()
}
/// Creates a handle to the given gauge, with labels attached.
///
/// This handle can be embedded into an existing type and used to directly update the
/// underlying gauge without requiring a [`Sink`]. This method can be called multiple times
/// with the same `name`/`labels` and the handle will point to the single underlying instance.
///
/// [`Gauge`] is clonable.
///
/// # Examples
///
/// ```rust
/// # extern crate metrics_runtime;
/// # use metrics_runtime::Receiver;
/// # fn main() {
/// let receiver = Receiver::builder().build().expect("failed to create receiver");
/// let mut sink = receiver.sink();
/// let gauge = sink.gauge_with_labels("current_offset", &[("source", "stratum-1")]);
/// gauge.record(-131);
/// # }
/// ```
pub fn gauge_with_labels<N, L>(&mut self, name: N, labels: L) -> Gauge
where
N: Into<ScopedString>,
L: IntoLabels,
{
self.gauge((name, labels))
}
/// Creates a handle to the given histogram.
///
/// This handle can be embedded into an existing type and used to directly update the
/// underlying histogram without requiring a [`Sink`]. This method can be called multiple
/// times with the same `name` and the handle will point to the single underlying instance.
///
/// [`Histogram`] is clonable.
///
/// # Examples
///
/// ```rust
/// # extern crate metrics_runtime;
/// # use metrics_runtime::Receiver;
/// # use std::thread;
/// # use std::time::Duration;
/// # fn main() {
/// let receiver = Receiver::builder().build().expect("failed to create receiver");
/// let mut sink = receiver.sink();
/// let histogram = sink.histogram("request_duration");
///
/// let start = sink.now();
/// thread::sleep(Duration::from_millis(10));
/// let end = sink.now();
/// histogram.record_timing(start, end);
///
/// // Alternatively, you can just push the raw value into a histogram:
/// let delta = end - start;
/// histogram.record_value(delta);
/// # }
/// ```
pub fn histogram<N>(&mut self, name: N) -> Histogram
where
N: Into<Key>,
{
let key = self.construct_key(name);
self.get_owned_value_handle(key, Kind::Histogram).into()
}
/// Creates a handle to the given histogram, with labels attached.
///
/// This handle can be embedded into an existing type and used to directly update the
/// underlying histogram without requiring a [`Sink`]. This method can be called multiple
/// times with the same `name` and the handle will point to the single underlying instance.
///
/// [`Histogram`] is clonable.
///
/// # Examples
///
/// ```rust
/// # extern crate metrics_runtime;
/// # use metrics_runtime::Receiver;
/// # use std::thread;
/// # use std::time::Duration;
/// # fn main() {
/// let receiver = Receiver::builder().build().expect("failed to create receiver");
/// let mut sink = receiver.sink();
/// let histogram = sink.histogram_with_labels("request_duration", &[("service", "secure")]);
///
/// let start = sink.now();
/// thread::sleep(Duration::from_millis(10));
/// let end = sink.now();
/// histogram.record_timing(start, end);
///
/// // Alternatively, you can just push the raw value into a histogram:
/// let delta = end - start;
/// histogram.record_value(delta);
/// # }
/// ```
pub fn histogram_with_labels<N, L>(&mut self, name: N, labels: L) -> Histogram
where
N: Into<ScopedString>,
L: IntoLabels,
{
self.histogram((name, labels))
}
/// Creates a proxy metric.
///
/// Proxy metrics allow you to register a closure that, when a snapshot of the metric state is
/// requested, will be called and given a chance to return multiple metrics, which are added
/// to the overall set of actual metrics.
///
/// This can be useful for metrics which are expensive to constantly recalculate/poll, allowing
/// you to avoid calculating/pushing them yourself, with all the boilerplate that comes with
/// doing so periodically.
///
/// Individual metrics must provide their own key (name), which will be appended to the name
/// given when registering the proxy. A proxy can be reregistered at any time by calling this
/// function again with the same name.
///
/// # Examples
///
/// ```rust
/// # extern crate metrics_runtime;
/// # extern crate metrics_core;
/// # use metrics_runtime::{Receiver, Measurement};
/// # use metrics_core::Key;
/// # use std::thread;
/// # use std::time::Duration;
/// # fn main() {
/// let receiver = Receiver::builder().build().expect("failed to create receiver");
/// let mut sink = receiver.sink();
///
/// // A proxy is now registered under the name "load_stats", which is prepended to all the
/// // metrics generated by the closure, e.g. "load_stats.avg_1min". These metrics are also
/// // still scoped normally based on the [`Sink`].
/// sink.proxy("load_stats", || {
/// let mut values = Vec::new();
/// values.push((Key::from_name("avg_1min"), Measurement::Gauge(19)));
/// values.push((Key::from_name("avg_5min"), Measurement::Gauge(12)));
/// values.push((Key::from_name("avg_10min"), Measurement::Gauge(10)));
/// values
/// });
/// # }
/// ```
pub fn proxy<N, F>(&mut self, name: N, f: F)
where
N: Into<Key>,
F: Fn() -> Vec<(Key, Measurement)> + Send + Sync + 'static,
{
let id = Identifier::new(name.into(), self.scope_handle, Kind::Proxy);
let handle = self.get_cached_value_handle(id);
handle.update_proxy(f);
}
/// Creates a proxy metric, with labels attached.
///
/// Proxy metrics allow you to register a closure that, when a snapshot of the metric state is
/// requested, will be called and given a chance to return multiple metrics, which are added
/// to the overall set of actual metrics.
///
/// This can be useful for metrics which are expensive to constantly recalculate/poll, allowing
/// you to avoid calculating/pushing them yourself, with all the boilerplate that comes with
/// doing so periodically.
///
/// Individual metrics must provide their own key (name), which will be appended to the name
/// given when registering the proxy. A proxy can be reregistered at any time by calling this
/// function again with the same name.
///
/// # Examples
///
/// ```rust
/// # extern crate metrics_runtime;
/// # extern crate metrics_core;
/// # use metrics_runtime::{Receiver, Measurement};
/// # use metrics_core::Key;
/// # use std::thread;
/// # use std::time::Duration;
/// # fn main() {
/// let receiver = Receiver::builder().build().expect("failed to create receiver");
/// let mut sink = receiver.sink();
///
/// let system_name = "web03".to_string();
///
/// // A proxy is now registered under the name "load_stats", which is prepended to all the
/// // metrics generated by the closure, e.g. "load_stats.avg_1min". These metrics are also
/// // still scoped normally based on the [`Sink`].
/// sink.proxy_with_labels("load_stats", &[("system", system_name)], || {
/// let mut values = Vec::new();
/// values.push((Key::from_name("avg_1min"), Measurement::Gauge(19)));
/// values.push((Key::from_name("avg_5min"), Measurement::Gauge(12)));
/// values.push((Key::from_name("avg_10min"), Measurement::Gauge(10)));
/// values
/// });
/// # }
/// ```
pub fn proxy_with_labels<N, L, F>(&mut self, name: N, labels: L, f: F)
where
N: Into<ScopedString>,
L: IntoLabels,
F: Fn() -> Vec<(Key, Measurement)> + Send + Sync + 'static,
{
self.proxy((name, labels), f)
}
pub(crate) fn construct_key<K>(&self, key: K) -> Key
where
K: Into<Key>,
{
let mut key = key.into();
if !self.default_labels.is_empty() {
key.add_labels(self.default_labels.clone());
}
key
}
fn get_owned_value_handle<K>(&mut self, key: K, kind: Kind) -> ValueHandle
where
K: Into<Key>,
{
let id = Identifier::new(key.into(), self.scope_handle, kind);
self.get_cached_value_handle(id).clone()
}
fn get_cached_value_handle(&mut self, identifier: Identifier) -> &ValueHandle {
// This gross hack gets around lifetime rules until full NLL is stable. Without it, the
// borrow checker doesn't understand the flow control and thinks the reference lives all
// the way until the end of the function, which breaks when we try to take a mutable
// reference for inserting into the handle cache.
if let Some(handle) = self.metric_cache.get(&identifier) {
return unsafe { &*(handle as *const ValueHandle) };
}
let handle = self.metric_registry.get_or_register(identifier.clone());
self.metric_cache.insert(identifier.clone(), handle);
self.metric_cache.get(&identifier).unwrap()
}
}
impl Clone for Sink {
fn clone(&self) -> Sink {
Sink {
metric_registry: self.metric_registry.clone(),
metric_cache: self.metric_cache.clone(),
scope_registry: self.scope_registry.clone(),
scope: self.scope.clone(),
scope_handle: self.scope_handle,
clock: self.clock.clone(),
default_labels: self.default_labels.clone(),
}
}
}
impl<'a> AsScoped<'a> for str {
fn as_scoped(&'a self, base: Scope) -> Scope {
base.add_part(self.to_string())
}
}
impl<'a, 'b, T> AsScoped<'a> for T
where
&'a T: AsRef<[&'b str]>,
T: 'a,
{
fn as_scoped(&'a self, base: Scope) -> Scope {
self.as_ref()
.iter()
.fold(base, |s, ss| s.add_part(ss.to_string()))
}
}
#[cfg(test)]
mod tests {
use super::{Clock, MetricRegistry, Scope, ScopeRegistry, Sink};
use crate::config::Configuration;
use std::sync::Arc;
#[test]
fn test_construct_key() {
// TODO(tobz): this is a lot of boilerplate to get a `Sink` for testing, wonder if there's
// anything better we could be doing?
let sregistry = Arc::new(ScopeRegistry::new());
let config = Configuration::mock();
let (clock, _) = Clock::mock();
let mregistry = Arc::new(MetricRegistry::new(
sregistry.clone(),
config,
clock.clone(),
));
let mut sink = Sink::new(mregistry, sregistry, Scope::Root, clock);
let no_labels = sink.construct_key("foo");
assert_eq!(no_labels.name(), "foo");
assert_eq!(no_labels.labels().count(), 0);
let labels_given = sink.construct_key(("baz", &[("type", "test")]));
assert_eq!(labels_given.name(), "baz");
let label_str = labels_given
.labels()
.map(|l| format!("{}={}", l.key(), l.value()))
.collect::<Vec<_>>()
.join(",");
assert_eq!(label_str, "type=test");
sink.add_default_labels(&[("service", "foo")]);
let no_labels = sink.construct_key("bar");
assert_eq!(no_labels.name(), "bar");
let label_str = no_labels
.labels()
.map(|l| format!("{}={}", l.key(), l.value()))
.collect::<Vec<_>>()
.join(",");
assert_eq!(label_str, "service=foo");
let labels_given = sink.construct_key(("quux", &[("type", "test")]));
assert_eq!(labels_given.name(), "quux");
let label_str = labels_given
.labels()
.map(|l| format!("{}={}", l.key(), l.value()))
.collect::<Vec<_>>()
.join(",");
assert_eq!(label_str, "type=test,service=foo");
}
}

View File

@ -4,8 +4,8 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
<!-- next-header -->
## [0.1.0] - 2019-07-29
## [Unreleased] - ReleaseDate
### Added
- Effective birth of the crate.

View File

@ -0,0 +1,26 @@
[package]
name = "metrics-tracing-context"
version = "0.1.0-alpha.1"
authors = ["MOZGIII <mike-n@narod.ru>"]
edition = "2018"
license = "MIT"
description = "A crate to use tracing context as metrics labels."
homepage = "https://github.com/metrics-rs/metrics"
repository = "https://github.com/metrics-rs/metrics"
documentation = "https://docs.rs/metrics"
readme = "README.md"
categories = ["development-tools::debugging"]
keywords = ["metrics", "tracing"]
[dependencies]
metrics = { version = "0.13.0-alpha.1", path = "../metrics", features = ["std"] }
metrics-util = { version = "0.4.0-alpha.1", path = "../metrics-util" }
tracing = "0.1"
tracing-core = "0.1"
tracing-subscriber = "0.2"
[dev-dependencies]
parking_lot = "0.11"

View File

@ -0,0 +1,3 @@
# metrics-tracing-context
A crate to use tracing context as metrics labels.
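
A minimal wiring sketch, mirroring the crate-level docs (the `DebuggingRecorder` from `metrics-util` stands in for a real exporter):

```rust
use metrics::counter;
use metrics_tracing_context::{MetricsLayer, TracingContextLayer};
use metrics_util::{layers::Layer, DebuggingRecorder};
use tracing::{span, Level};
use tracing_subscriber::{layer::SubscriberExt, Registry};

fn main() {
    // Prepare tracing: `MetricsLayer` captures span fields.
    let subscriber = Registry::default().with(MetricsLayer::new());
    tracing::subscriber::set_global_default(subscriber).unwrap();

    // Prepare metrics: `TracingContextLayer` injects captured fields as labels.
    let recorder = TracingContextLayer::all().layer(DebuggingRecorder::new());
    metrics::set_boxed_recorder(Box::new(recorder)).unwrap();

    // Fields on the active span ("user") become labels on emitted metrics.
    let user = "ferris";
    let span = span!(Level::TRACE, "login", user);
    let _guard = span.enter();

    counter!("login_attempts", 1, "service" => "login_service");
}
```

The counter above is recorded with both the static `service` label and the `user` label captured from the enclosing span.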

View File

@ -0,0 +1,20 @@
//! Label filtering.
use metrics::Label;
/// The [`LabelFilter`] trait encapsulates the ability to filter labels, i.e. to
/// determine whether a particular span field should be included as a label or not.
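///
/// # Examples
///
/// A sketch of a custom filter that keeps only a `user` label (the name
/// `OnlyUser` is illustrative):
///
/// ```rust
/// use metrics::Label;
/// use metrics_tracing_context::LabelFilter;
///
/// struct OnlyUser;
///
/// impl LabelFilter for OnlyUser {
///     fn should_include_label(&self, label: &Label) -> bool {
///         label.key() == "user"
///     }
/// }
/// ```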
pub trait LabelFilter {
/// Returns `true` if the passed label should be included in the key.
fn should_include_label(&self, label: &Label) -> bool;
}
/// A [`LabelFilter`] that allows all labels.
#[derive(Debug, Copy, Clone, Eq, PartialEq)]
pub struct IncludeAll;
impl LabelFilter for IncludeAll {
fn should_include_label(&self, _label: &Label) -> bool {
true
}
}

View File

@ -0,0 +1,165 @@
//! Use [`tracing::span!`] fields as [`metrics`] labels.
//!
//! The `metrics-tracing-context` crate provides tools for injecting the
//! contextual data maintained via the `span!` macro from the [`tracing`] crate
//! into your metrics.
//!
//! # Use
//!
//! First, set up `tracing` and `metrics` crates:
//!
//! ```rust
//! # use metrics_util::DebuggingRecorder;
//! # use tracing_subscriber::Registry;
//! use metrics_util::layers::Layer;
//! use tracing_subscriber::layer::SubscriberExt;
//! use metrics_tracing_context::{MetricsLayer, TracingContextLayer};
//!
//! // Prepare tracing.
//! # let mysubscriber = Registry::default();
//! let subscriber = mysubscriber.with(MetricsLayer::new());
//! tracing::subscriber::set_global_default(subscriber).unwrap();
//!
//! // Prepare metrics.
//! # let myrecorder = DebuggingRecorder::new();
//! let recorder = TracingContextLayer::all().layer(myrecorder);
//! metrics::set_boxed_recorder(Box::new(recorder)).unwrap();
//! ```
//!
//! Then emit some metrics within spans and see the labels being injected!
//!
//! ```rust
//! # use metrics_util::{layers::Layer, DebuggingRecorder};
//! # use tracing_subscriber::{layer::SubscriberExt, Registry};
//! # use metrics_tracing_context::{MetricsLayer, TracingContextLayer};
//! # let mysubscriber = Registry::default();
//! # let subscriber = mysubscriber.with(MetricsLayer::new());
//! # tracing::subscriber::set_global_default(subscriber).unwrap();
//! # let myrecorder = DebuggingRecorder::new();
//! # let recorder = TracingContextLayer::all().layer(myrecorder);
//! # metrics::set_boxed_recorder(Box::new(recorder)).unwrap();
//! use tracing::{span, Level};
//! use metrics::counter;
//!
//! let user = "ferris";
//! let span = span!(Level::TRACE, "login", user);
//! let _guard = span.enter();
//!
//! counter!("login_attempts", 1, "service" => "login_service");
//! ```
//!
//! The code above will emit an increment for a `login_attempts` counter with
//! the following labels:
//! - `service=login_service`
//! - `user=ferris`
#![deny(missing_docs)]
use metrics::{Key, KeyData, Label, Recorder};
use metrics_util::layers::Layer;
use tracing::Span;
pub mod label_filter;
mod tracing_integration;
pub use label_filter::LabelFilter;
pub use tracing_integration::{MetricsLayer, SpanExt};
/// [`TracingContextLayer`] provides an implementation of a [`metrics_util::layers::Layer`]
/// that produces a [`TracingContext`].
pub struct TracingContextLayer<F> {
label_filter: F,
}
impl<F> TracingContextLayer<F> {
/// Creates a new [`TracingContextLayer`].
pub fn new(label_filter: F) -> Self {
Self { label_filter }
}
}
impl TracingContextLayer<label_filter::IncludeAll> {
/// Creates a new [`TracingContextLayer`] that allows all labels.
pub fn all() -> Self {
Self {
label_filter: label_filter::IncludeAll,
}
}
}
impl<R, F> Layer<R> for TracingContextLayer<F>
where
F: Clone,
{
type Output = TracingContext<R, F>;
fn layer(&self, inner: R) -> Self::Output {
TracingContext {
inner,
label_filter: self.label_filter.clone(),
}
}
}
/// [`TracingContext`] is a [`metrics::Recorder`] that injects labels from
/// [`tracing`] spans into metric keys.
pub struct TracingContext<R, F> {
inner: R,
label_filter: F,
}
impl<R, F> TracingContext<R, F>
where
F: LabelFilter,
{
fn enhance_labels(&self, labels: &mut Vec<Label>) {
let span = Span::current();
span.with_labels(|new_labels| {
labels.extend(
new_labels
.iter()
.filter(|&label| self.label_filter.should_include_label(label))
.cloned(),
);
});
}
fn enhance_key(&self, key: Key) -> Key {
let (name, mut labels) = key.into_owned().into_parts();
self.enhance_labels(&mut labels);
KeyData::from_name_and_labels(name, labels).into()
}
}
impl<R, F> Recorder for TracingContext<R, F>
where
R: Recorder,
F: LabelFilter,
{
fn register_counter(&self, key: Key, description: Option<&'static str>) {
self.inner.register_counter(key, description)
}
fn register_gauge(&self, key: Key, description: Option<&'static str>) {
self.inner.register_gauge(key, description)
}
fn register_histogram(&self, key: Key, description: Option<&'static str>) {
self.inner.register_histogram(key, description)
}
fn increment_counter(&self, key: Key, value: u64) {
let key = self.enhance_key(key);
self.inner.increment_counter(key, value);
}
fn update_gauge(&self, key: Key, value: f64) {
let key = self.enhance_key(key);
self.inner.update_gauge(key, value);
}
fn record_histogram(&self, key: Key, value: u64) {
let key = self.enhance_key(key);
self.inner.record_histogram(key, value);
}
}

View File

@ -0,0 +1,133 @@
//! The code that integrates with the `tracing` crate.
use metrics::Label;
use std::{any::TypeId, marker::PhantomData};
use tracing_core::span::{Attributes, Id, Record};
use tracing_core::{field::Visit, Dispatch, Field, Subscriber};
use tracing_subscriber::{layer::Context, registry::LookupSpan, Layer};
struct Labels(Vec<Label>);
impl Visit for Labels {
fn record_str(&mut self, field: &Field, value: &str) {
let label = Label::new(field.name(), value.to_owned());
self.0.push(label);
}
fn record_debug(&mut self, field: &Field, value: &dyn std::fmt::Debug) {
let value_string = format!("{:?}", value);
let label = Label::new(field.name(), value_string);
self.0.push(label);
}
}
impl Labels {
fn from_attributes(attrs: &Attributes<'_>) -> Self {
let mut labels = Self(Vec::new()); // TODO: Vec::with_capacity?
let record = Record::new(attrs.values());
record.record(&mut labels);
labels
}
}
impl AsRef<Vec<Label>> for Labels {
fn as_ref(&self) -> &Vec<Label> {
&self.0
}
}
/// Carries a type-erased function pointer that lets callers read the labels
/// stored for a span without naming the concrete subscriber type.
pub struct WithContext {
with_labels: fn(&Dispatch, &Id, f: &mut dyn FnMut(&Labels)),
}
impl WithContext {
pub fn with_labels<'a>(&self, dispatch: &'a Dispatch, id: &Id, f: &mut dyn FnMut(&Vec<Label>)) {
let mut ff = |labels: &Labels| f(labels.as_ref());
(self.with_labels)(dispatch, id, &mut ff)
}
}
/// [`MetricsLayer`] is a [`tracing_subscriber::Layer`] that captures the span
/// fields and allows them to be used later as metrics labels.
pub struct MetricsLayer<S> {
ctx: WithContext,
_subscriber: PhantomData<fn(S)>,
_priv: (),
}
impl<S> MetricsLayer<S>
where
S: Subscriber + for<'span> LookupSpan<'span>,
{
/// Create a new `MetricsLayer`.
pub fn new() -> Self {
let ctx = WithContext {
with_labels: Self::with_labels,
};
Self {
ctx,
_subscriber: PhantomData,
_priv: (),
}
}
fn with_labels(dispatch: &Dispatch, id: &Id, f: &mut dyn FnMut(&Labels)) {
let span = {
let subscriber = dispatch
.downcast_ref::<S>()
.expect("subscriber should downcast to expected type; this is a bug!");
subscriber
.span(id)
.expect("registry should have a span for the current ID")
};
let parents = span.parents();
for span in std::iter::once(span).chain(parents) {
let extensions = span.extensions();
if let Some(value) = extensions.get::<Labels>() {
f(value);
}
}
}
}
impl<S> Layer<S> for MetricsLayer<S>
where
S: Subscriber + for<'a> LookupSpan<'a>,
{
fn new_span(&self, attrs: &Attributes<'_>, id: &Id, cx: Context<'_, S>) {
let span = cx.span(id).expect("span must already exist!");
let labels = Labels::from_attributes(attrs);
span.extensions_mut().insert(labels);
}
unsafe fn downcast_raw(&self, id: TypeId) -> Option<*const ()> {
match id {
id if id == TypeId::of::<Self>() => Some(self as *const _ as *const ()),
id if id == TypeId::of::<WithContext>() => Some(&self.ctx as *const _ as *const ()),
_ => None,
}
}
}
/// An extension to `tracing::Span`, enabling access to labels.
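///
/// # Examples
///
/// A sketch of reading the labels of the current span (assuming a
/// `MetricsLayer` is installed on the subscriber):
///
/// ```rust
/// use metrics_tracing_context::SpanExt;
///
/// let span = tracing::Span::current();
/// span.with_labels(|labels| {
///     for label in labels {
///         println!("{} = {}", label.key(), label.value());
///     }
/// });
/// ```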
pub trait SpanExt {
/// Run the provided function with a read-only access to labels.
fn with_labels<F>(&self, f: F)
where
F: FnMut(&Vec<Label>);
}
impl SpanExt for tracing::Span {
fn with_labels<F>(&self, mut f: F)
where
F: FnMut(&Vec<Label>),
{
self.with_subscriber(|(id, subscriber)| {
if let Some(ctx) = subscriber.downcast_ref::<WithContext>() {
ctx.with_labels(subscriber, id, &mut f)
}
});
}
}

View File

@ -0,0 +1,347 @@
use std::collections::HashSet;
use metrics::{counter, KeyData, Label};
use metrics_tracing_context::{LabelFilter, MetricsLayer, TracingContextLayer};
use metrics_util::{layers::Layer, DebugValue, DebuggingRecorder, MetricKind, Snapshotter};
use parking_lot::{const_mutex, Mutex, MutexGuard};
use tracing::dispatcher::{set_default, DefaultGuard, Dispatch};
use tracing::{span, Level};
use tracing_subscriber::{layer::SubscriberExt, Registry};
static TEST_MUTEX: Mutex<()> = const_mutex(());
struct TestGuard {
_test_mutex_guard: MutexGuard<'static, ()>,
_tracing_guard: DefaultGuard,
}
fn setup<F>(layer: TracingContextLayer<F>) -> (TestGuard, Snapshotter)
where
F: LabelFilter + Clone + 'static,
{
let test_mutex_guard = TEST_MUTEX.lock();
let subscriber = Registry::default().with(MetricsLayer::new());
let tracing_guard = set_default(&Dispatch::new(subscriber));
let recorder = DebuggingRecorder::new();
let snapshotter = recorder.snapshotter();
let recorder = layer.layer(recorder);
metrics::clear_recorder();
metrics::set_boxed_recorder(Box::new(recorder)).expect("failed to install recorder");
let test_guard = TestGuard {
_test_mutex_guard: test_mutex_guard,
_tracing_guard: tracing_guard,
};
(test_guard, snapshotter)
}
#[test]
fn test_basic_functionality() {
let (_guard, snapshotter) = setup(TracingContextLayer::all());
let user = "ferris";
let email = "ferris@rust-lang.org";
let span = span!(Level::TRACE, "login", user, user.email = email);
let _guard = span.enter();
counter!("login_attempts", 1, "service" => "login_service");
let snapshot = snapshotter.snapshot();
assert_eq!(
snapshot,
vec![(
MetricKind::Counter,
KeyData::from_name_and_labels(
"login_attempts",
vec![
Label::new("service", "login_service"),
Label::new("user", "ferris"),
Label::new("user.email", "ferris@rust-lang.org"),
],
)
.into(),
DebugValue::Counter(1),
)]
)
}
#[test]
fn test_macro_forms() {
let (_guard, snapshotter) = setup(TracingContextLayer::all());
let user = "ferris";
let email = "ferris@rust-lang.org";
let span = span!(Level::TRACE, "login", user, user.email = email);
let _guard = span.enter();
// No labels.
counter!("login_attempts_no_labels", 1);
// Static labels only.
counter!("login_attempts_static_labels", 1, "service" => "login_service");
// Dynamic labels only.
let node_name = "localhost".to_string();
counter!("login_attempts_dynamic_labels", 1, "node_name" => node_name.clone());
// Static and dynamic.
counter!("login_attempts_static_and_dynamic_labels", 1,
"service" => "login_service", "node_name" => node_name.clone());
let snapshot = snapshotter.snapshot();
let snapshot: HashSet<_> = snapshot.into_iter().collect();
assert_eq!(
snapshot,
vec![
(
MetricKind::Counter,
KeyData::from_name_and_labels(
"login_attempts_no_labels",
vec![
Label::new("user", "ferris"),
Label::new("user.email", "ferris@rust-lang.org"),
],
)
.into(),
DebugValue::Counter(1),
),
(
MetricKind::Counter,
KeyData::from_name_and_labels(
"login_attempts_static_labels",
vec![
Label::new("service", "login_service"),
Label::new("user", "ferris"),
Label::new("user.email", "ferris@rust-lang.org"),
],
)
.into(),
DebugValue::Counter(1),
),
(
MetricKind::Counter,
KeyData::from_name_and_labels(
"login_attempts_dynamic_labels",
vec![
Label::new("node_name", "localhost"),
Label::new("user", "ferris"),
Label::new("user.email", "ferris@rust-lang.org"),
],
)
.into(),
DebugValue::Counter(1),
),
(
MetricKind::Counter,
KeyData::from_name_and_labels(
"login_attempts_static_and_dynamic_labels",
vec![
Label::new("service", "login_service"),
Label::new("node_name", "localhost"),
Label::new("user", "ferris"),
Label::new("user.email", "ferris@rust-lang.org"),
],
)
.into(),
DebugValue::Counter(1),
),
]
.into_iter()
.collect()
)
}
#[test]
fn test_no_labels() {
let (_guard, snapshotter) = setup(TracingContextLayer::all());
let span = span!(Level::TRACE, "login");
let _guard = span.enter();
counter!("login_attempts", 1);
let snapshot = snapshotter.snapshot();
assert_eq!(
snapshot,
vec![(
MetricKind::Counter,
KeyData::from_name("login_attempts").into(),
DebugValue::Counter(1),
)]
)
}
#[test]
fn test_multiple_paths_to_the_same_callsite() {
let (_guard, snapshotter) = setup(TracingContextLayer::all());
let shared_fn = || {
counter!("my_counter", 1);
};
let path1 = || {
let path1_specific_dynamic = "foo_dynamic";
let span = span!(
Level::TRACE,
"path1",
shared_field = "path1",
path1_specific = "foo",
path1_specific_dynamic,
);
let _guard = span.enter();
shared_fn();
};
let path2 = || {
let path2_specific_dynamic = "bar_dynamic";
let span = span!(
Level::TRACE,
"path2",
shared_field = "path2",
path2_specific = "bar",
path2_specific_dynamic,
);
let _guard = span.enter();
shared_fn();
};
path1();
path2();
let snapshot = snapshotter.snapshot();
let snapshot: HashSet<_> = snapshot.into_iter().collect();
assert_eq!(
snapshot,
vec![
(
MetricKind::Counter,
KeyData::from_name_and_labels(
"my_counter",
vec![
Label::new("shared_field", "path1"),
Label::new("path1_specific", "foo"),
Label::new("path1_specific_dynamic", "foo_dynamic"),
],
)
.into(),
DebugValue::Counter(1),
),
(
MetricKind::Counter,
KeyData::from_name_and_labels(
"my_counter",
vec![
Label::new("shared_field", "path2"),
Label::new("path2_specific", "bar"),
Label::new("path2_specific_dynamic", "bar_dynamic"),
],
)
.into(),
DebugValue::Counter(1),
)
]
.into_iter()
.collect()
)
}
#[test]
fn test_nested_spans() {
let (_guard, snapshotter) = setup(TracingContextLayer::all());
let inner = || {
let inner_specific_dynamic = "foo_dynamic";
let span = span!(
Level::TRACE,
"inner",
shared_field = "inner",
inner_specific = "foo",
inner_specific_dynamic,
);
let _guard = span.enter();
counter!("my_counter", 1);
};
let outer = || {
let outer_specific_dynamic = "bar_dynamic";
let span = span!(
Level::TRACE,
"outer",
shared_field = "outer",
outer_specific = "bar",
outer_specific_dynamic,
);
let _guard = span.enter();
inner();
};
outer();
let snapshot = snapshotter.snapshot();
let snapshot: HashSet<_> = snapshot.into_iter().collect();
assert_eq!(
snapshot,
vec![(
MetricKind::Counter,
KeyData::from_name_and_labels(
"my_counter",
vec![
Label::new("shared_field", "inner"),
Label::new("inner_specific", "foo"),
Label::new("inner_specific_dynamic", "foo_dynamic"),
Label::new("shared_field", "outer"),
Label::new("outer_specific", "bar"),
Label::new("outer_specific_dynamic", "bar_dynamic"),
],
)
.into(),
DebugValue::Counter(1),
),]
.into_iter()
.collect()
)
}
#[derive(Clone)]
struct OnlyUser;
impl LabelFilter for OnlyUser {
fn should_include_label(&self, label: &Label) -> bool {
label.key() == "user"
}
}
#[test]
fn test_label_filtering() {
let (_guard, snapshotter) = setup(TracingContextLayer::new(OnlyUser));
let user = "ferris";
let email = "ferris@rust-lang.org";
let span = span!(Level::TRACE, "login", user, user.email = email);
let _guard = span.enter();
counter!("login_attempts", 1, "service" => "login_service");
let snapshot = snapshotter.snapshot();
assert_eq!(
snapshot,
vec![(
MetricKind::Counter,
KeyData::from_name_and_labels(
"login_attempts",
vec![
Label::new("service", "login_service"),
Label::new("user", "ferris"),
],
)
.into(),
DebugValue::Counter(1),
)]
)
}

View File

@ -4,7 +4,9 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
<!-- next-header -->
## [Unreleased] - ReleaseDate
## [0.3.1] - 2019-11-21
### Changed

View File

@ -1,30 +0,0 @@
# The Code of Conduct
This document is based on the [Rust Code of Conduct](https://www.rust-lang.org/conduct.html) and outlines the standard of conduct which is both expected and enforced as part of this project.
## Conduct
* We are committed to providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, nationality, or other similar characteristic.
* Avoid using overtly sexual nicknames or other nicknames that might detract from a friendly, safe and welcoming environment for all.
* Please be kind and courteous. There's no need to be mean or rude.
* Respect that people have differences of opinion and that every design or implementation choice carries a trade-off and numerous costs. There is seldom a right answer.
* Please keep unstructured critique to a minimum. If you have solid ideas you want to experiment with, make a fork and see how it works.
* We will exclude you from interaction if you insult, demean or harass anyone. That is not welcome behaviour. We interpret the term "harassment" as including the definition in the [Citizen Code of Conduct](http://citizencodeofconduct.org/); if you have any lack of clarity about what might be included in that concept, please read their definition. In particular, we don't tolerate behavior that excludes people in socially marginalized groups.
* Private harassment is also unacceptable. No matter who you are, if you feel you have been or are being harassed or made uncomfortable by a community member, please contact one of the repository Owners immediately. Whether you're a regular contributor or a newcomer, we care about making this community a safe place for you and we've got your back.
* Likewise any spamming, trolling, flaming, baiting or other attention-stealing behaviour is not welcome.
## Moderation
These are the policies for upholding our community's standards of conduct. If you feel that a thread needs moderation, please use the contact information above, or mention @tobz or @LucioFranco in the thread.
1. Remarks that violate this Code of Conduct, including hateful, hurtful, oppressive, or exclusionary remarks, are not allowed. (Cursing is allowed, but never targeting another user, and never in a hateful manner.)
2. Remarks that moderators find inappropriate, whether listed in the code of conduct or not, are also not allowed.
In the Rust community we strive to go the extra step to look out for each other. Don't just aim to be technically unimpeachable, try to be your best self. In particular, avoid flirting with offensive or sensitive issues, particularly if they're off-topic; this all too often leads to unnecessary fights, hurt feelings, and damaged trust; worse, it can drive people away from the community entirely.
And if someone takes issue with something you said or did, resist the urge to be defensive. Just stop doing what it was they complained about and apologize. Even if you feel you were misinterpreted or unfairly accused, chances are good there was something you could've communicated better — remember that it's your responsibility to make your fellow Rustaceans comfortable. Everyone wants to get along and we are all here first and foremost because we want to talk about cool technology. You will find that people will be eager to assume good intent and forgive as long as you earn their trust.
## Contacts:
- Toby Lawrence ([toby@nuclearfurnace.com](mailto:toby@nuclearfurnace.com))
- Lucio Franco ([luciofranco14@gmail.com](mailto:luciofranco14@gmail.com))

View File

@ -1,6 +1,6 @@
[package]
name = "metrics-util"
version = "0.3.1"
version = "0.4.0-alpha.3"
authors = ["Toby Lawrence <toby@nuclearfurnace.com>"]
edition = "2018"
@@ -15,20 +15,39 @@ readme = "README.md"
 categories = ["development-tools::debugging"]
 keywords = ["metrics", "quantile", "percentile"]
 
+[lib]
+bench = false
+
 [[bench]]
 name = "bucket"
 harness = false
 
+[[bench]]
+name = "registry"
+harness = false
+
 [[bench]]
 name = "streaming_integers"
 harness = false
 
 [dependencies]
-crossbeam-epoch = "^0.8"
-serde = "^1.0"
+metrics = { version = "0.13.0-alpha.1", path = "../metrics", features = ["std"] }
+crossbeam-epoch = "0.8"
+crossbeam-utils = "0.7"
+serde = "1.0"
+arc-swap = "0.4"
+atomic-shim = "0.1"
+parking_lot = "0.11"
+aho-corasick = { version = "0.7", optional = true }
+dashmap = "3"
 
 [dev-dependencies]
-crossbeam-utils = "^0.7"
-criterion = "^0.2.9"
-lazy_static = "^1.3"
-rand = "^0.6"
+criterion = "0.3"
+lazy_static = "1.3"
+rand = { version = "0.7", features = ["small_rng"] }
+rand_distr = "0.3"
+
+[features]
+default = ["std", "layer-filter"]
+std = []
+layer-filter = ["aho-corasick"]
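The new `layer-filter` feature marks `aho-corasick` as an optional dependency, pulled in only when the feature is enabled (it is, via `default`). As a minimal sketch of how such a gate typically shows up in code (the function name here is hypothetical, not the crate's actual source), `cfg!` resolves the feature flag at compile time:

```rust
// Hypothetical illustration of feature-gating keyed off the manifest above:
// `layer-filter = ["aho-corasick"]` means the optional dependency is only
// compiled when the `layer-filter` feature is enabled.
fn layer_filter_available() -> bool {
    // `cfg!` evaluates to a compile-time bool; with no features enabled
    // (e.g. `cargo build --no-default-features`), this is `false`.
    cfg!(feature = "layer-filter")
}

fn main() {
    println!("layer-filter available: {}", layer_filter_available());
}
```

Built standalone with no features declared, the check reports `false`; under the crate's defaults it would report `true`.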

View File

@@ -52,7 +52,7 @@ fn bucket_benchmark(c: &mut Criterion) {
                }
            })
        })
-        .throughput(Throughput::Elements(RANDOM_INTS.len() as u32)),
+        .throughput(Throughput::Elements(RANDOM_INTS.len() as u64)),
    );
}
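The `as u32` to `as u64` change tracks the criterion bump in this commit: criterion 0.3 takes a `u64` in `Throughput::Elements`, where 0.2 took a `u32`. Independent of criterion, the widening itself can be sketched with std only (`elements_from_len` is a hypothetical helper, not part of either crate):

```rust
use std::convert::TryFrom;

// Hypothetical helper mirroring the cast in the diff above: widen a `usize`
// collection length to the `u64` that criterion 0.3's `Throughput::Elements`
// expects. On all mainstream targets `usize` fits in `u64`, so `as u64` and
// the checked `try_from` agree.
fn elements_from_len(len: usize) -> u64 {
    u64::try_from(len).expect("usize length fits in u64")
}

fn main() {
    let random_ints = vec![3u64, 1, 4, 1, 5, 9];
    println!("throughput elements: {}", elements_from_len(random_ints.len()));
}
```

The checked conversion makes the intent explicit; `as u64` in the diff is the terser equivalent for this lossless case.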

View File

@@ -0,0 +1,111 @@
#[macro_use]
extern crate criterion;

use criterion::{BatchSize, Benchmark, Criterion};
use metrics::{Key, KeyData, Label, OnceKeyData};
use metrics_util::Registry;

fn registry_benchmark(c: &mut Criterion) {
    c.bench(
        "registry",
        Benchmark::new("cached op (basic)", |b| {
            let registry: Registry<Key, ()> = Registry::new();
            static KEY_DATA: OnceKeyData = OnceKeyData::new();

            b.iter(|| {
                let key = Key::Borrowed(KEY_DATA.get_or_init(|| KeyData::from_name("simple_key")));
                registry.op(key, |_| (), || ())
            })
        })
        .with_function("cached op (labels)", |b| {
            let registry: Registry<Key, ()> = Registry::new();
            static KEY_DATA: OnceKeyData = OnceKeyData::new();

            b.iter(|| {
                let key = Key::Borrowed(KEY_DATA.get_or_init(|| {
                    let labels = vec![Label::new("type", "http")];
                    KeyData::from_name_and_labels("simple_key", labels)
                }));
                registry.op(key, |_| (), || ())
            })
        })
        .with_function("uncached op (basic)", |b| {
            b.iter_batched_ref(
                || Registry::<Key, ()>::new(),
                |registry| {
                    let key = Key::Owned("simple_key".into());
                    registry.op(key, |_| (), || ())
                },
                BatchSize::SmallInput,
            )
        })
        .with_function("uncached op (labels)", |b| {
            b.iter_batched_ref(
                || Registry::<Key, ()>::new(),
                |registry| {
                    let labels = vec![Label::new("type", "http")];
                    let key = Key::Owned(("simple_key", labels).into());
                    registry.op(key, |_| (), || ())
                },
                BatchSize::SmallInput,
            )
        })
        .with_function("registry overhead", |b| {
            b.iter_batched(
                || (),
                |_| Registry::<(), ()>::new(),
                BatchSize::NumIterations(1),
            )
        })
        .with_function("key data overhead (basic)", |b| {
            b.iter(|| {
                let key = "simple_key";
                KeyData::from_name(key)
            })
        })
        .with_function("key data overhead (labels)", |b| {
            b.iter(|| {
                let key = "simple_key";
                let labels = vec![Label::new("type", "http")];
                KeyData::from_name_and_labels(key, labels)
            })
        })
        .with_function("owned key overhead (basic)", |b| {
            b.iter(|| {
                let key = "simple_key";
                Key::Owned(KeyData::from_name(key))
            })
        })
        .with_function("owned key overhead (labels)", |b| {
            b.iter(|| {
                let key = "simple_key";
                let labels = vec![Label::new("type", "http")];
                Key::Owned(KeyData::from_name_and_labels(key, labels))
            })
        })
        .with_function("cached key overhead (basic)", |b| {
            static KEY_DATA: OnceKeyData = OnceKeyData::new();
            b.iter(|| {
                let key_data = KEY_DATA.get_or_init(|| {
                    let key = "simple_key";
                    KeyData::from_name(key)
                });
                Key::Borrowed(key_data)
            })
        })
        .with_function("cached key overhead (labels)", |b| {
            static KEY_DATA: OnceKeyData = OnceKeyData::new();
            b.iter(|| {
                let key_data = KEY_DATA.get_or_init(|| {
                    let key = "simple_key";
                    let labels = vec![Label::new("type", "http")];
                    KeyData::from_name_and_labels(key, labels)
                });
                Key::Borrowed(key_data)
            })
        }),
    );
}

criterion_group!(benches, registry_benchmark);
criterion_main!(benches);
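Every "cached" case in this benchmark leans on one pattern: build the key data once in a `static`, then hand out cheap borrows of it on every subsequent call, so only the first lookup pays the construction cost. A minimal std-only sketch of that pattern, using `std::sync::OnceLock` as a stand-in for `OnceKeyData` (the `KeyData` struct below is a simplified hypothetical, not the real `metrics` type):

```rust
use std::sync::OnceLock;

// Simplified, hypothetical stand-in for the real `KeyData`: a metric name
// plus its labels.
struct KeyData {
    name: String,
    labels: Vec<(String, String)>,
}

impl KeyData {
    fn from_name(name: &str) -> Self {
        KeyData {
            name: name.to_string(),
            labels: Vec::new(),
        }
    }
}

// The "cached key" pattern from the benchmark: construct the key data once,
// then every later call borrows the same static allocation.
static KEY_DATA: OnceLock<KeyData> = OnceLock::new();

fn cached_key() -> &'static KeyData {
    KEY_DATA.get_or_init(|| KeyData::from_name("simple_key"))
}

fn main() {
    let first: *const KeyData = cached_key();
    let second: *const KeyData = cached_key();
    // Both calls resolve to the same allocation: no per-call construction.
    assert!(std::ptr::eq(first, second));
    println!("cached key: {}", cached_key().name);
}
```

This is exactly the cost difference the `cached op` versus `uncached op` cases measure: `Key::Borrowed` of a static versus `Key::Owned` built fresh on every iteration.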

Some files were not shown because too many files have changed in this diff.