Compare commits

...

82 Commits

Author SHA1 Message Date
Deirdre Connolly bde1a50315 Remove extra new lines 2021-10-08 23:56:20 -04:00
Deirdre Connolly d10286ff1e Do not start output with a '# metrics snapshot' etc prelude 2021-10-08 20:52:14 -04:00
Henry de Valence 971133128e deps: update to tokio 0.3
This uses a git dependency on Hyper for now.
2020-11-19 14:22:16 -08:00
Toby Lawrence 7ef47304ed
Merge pull request #130 from metrics-rs/fix_atomic_bucket
fix concurrent writer/uninitialized memory bug with AtomicBucket
2020-11-16 20:10:08 -05:00
Toby Lawrence 8b57975110 fix concurrent writer/uninitialized memory bug with AtomicBucket 2020-11-16 18:45:15 -05:00
Toby Lawrence 507493a59a
Merge pull request #129 from metrics-rs/unit_tweaks
core: fix unit binary vs decimal wonkiness
2020-11-16 18:23:19 -05:00
Toby Lawrence 027cde096a core: fix unit binary vs decimal wonkiness 2020-11-16 17:53:36 -05:00
Toby Lawrence ec1faac74c
Merge pull request #127 from metrics-rs/dependabot/cargo/arc-swap-1.0
Update arc-swap requirement from 0.4 to 1.0
2020-11-16 15:36:34 -05:00
Toby Lawrence 576a7e42b6
Merge pull request #125 from metrics-rs/experiment/name-parts
Switch to collecting metric names by part.
2020-11-16 15:29:11 -05:00
dependabot[bot] 1cdae103b9
Update arc-swap requirement from 0.4 to 1.0
Updates the requirements on [arc-swap](https://github.com/vorner/arc-swap) to permit the latest version.
- [Release notes](https://github.com/vorner/arc-swap/releases)
- [Changelog](https://github.com/vorner/arc-swap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/vorner/arc-swap/compare/v0.4.0...v1.0.0)

Signed-off-by: dependabot[bot] <support@github.com>
2020-11-16 07:18:14 +00:00
Toby Lawrence ba0530e2cd small docs tweak 2020-11-15 16:42:43 -05:00
Toby Lawrence ee379362f5 collapse from_owned_parts/from_hybrid_parts into from_parts 2020-11-15 15:49:05 -05:00
Toby Lawrence 2cd4e9e100 optimize NameParts::to_string 2020-11-15 15:13:47 -05:00
Toby Lawrence db02cd80da
Merge branch 'main' into experiment/name-parts 2020-11-13 13:26:42 -05:00
Toby Lawrence 55f289261c forgot dat flag 2020-11-13 12:56:48 -05:00
Toby Lawrence 2d807af5b0 gotta bump MSRV to 1.46 for coercing to unsized slices in const fns 2020-11-13 12:30:22 -05:00
Toby Lawrence 30c7ff1a98 a lot of tweaks 2020-11-13 11:06:53 -05:00
Toby Lawrence 7a8f3da859 wip 2020-11-12 23:23:32 -05:00
Toby Lawrence 8ebe921ef7
Merge pull request #123 from str4d/patch-1
PrometheusBuilder::install: Build exporter in runtime context
2020-11-11 14:46:52 -05:00
Toby Lawrence 4312b9f205
Merge pull request #121 from flub/doc-typo 2020-11-11 12:50:42 -05:00
str4d 01fcc020c7
PrometheusBuilder::install: Build exporter in runtime context
Closes https://github.com/metrics-rs/metrics/issues/122
2020-11-11 15:34:30 +00:00
Floris Bruynooghe 4fafc34869 Add docs build to CI
This should avoid accidentally breaking docs.
2020-11-10 21:06:25 +01:00
Floris Bruynooghe 1b1c271531 Trival docs typo 2020-11-10 21:00:05 +01:00
Toby Lawrence fbd603582e
(cargo-release) version 0.1.0-alpha.7 2020-11-02 19:58:55 -05:00
Toby Lawrence d72a744a76 dummy app for getting the sizes of various types related to keys 2020-11-02 19:57:37 -05:00
Toby Lawrence e3e96664d9
Merge pull request #116 from kilpatty/tokio-exporter-feature
metrics-exporter-prometheus: split off tokio and hyper functionality to another (default) feature
2020-11-02 12:16:04 -05:00
kilpatty 274ca273c4
metrics-exporter-prometheus: drop feature gate on build, rename other build to build_with_exporter, and fix doc comments 2020-11-02 07:20:55 -06:00
Sean Kilgarriff 6e61eb493b
Update metrics-exporter-prometheus/src/lib.rs
Co-authored-by: Toby Lawrence <tobz@users.noreply.github.com>
2020-11-02 07:17:47 -06:00
Toby Lawrence e926a6b6c6 commit notes + sizes example 2020-11-01 11:29:31 -05:00
Toby Lawrence 5fa7e8fd1d wip: inline vs dynamic custom enum approach 2020-11-01 10:44:44 -05:00
kilpatty e2ba57bfa6
metrics-exporter-prometheus: move PrometheusHandle to struct. Eliminate now useless pub function doc comments 2020-10-30 08:49:48 -05:00
kilpatty 1b852b32a4
metrics-exporter-prometheus: switch methods to gain access to the inner data of the recorder (using a type alias) 2020-10-29 16:18:13 -05:00
Toby Lawrence b67840d7d1 more doc fixes 2020-10-28 23:12:06 -04:00
Toby Lawrence 9560d26990 more doc fixes 2020-10-28 22:51:27 -04:00
Toby Lawrence 4b9b1b8bfe bunch of doc fixes 2020-10-28 22:43:42 -04:00
Toby Lawrence 8085682d49 fix broken intra doc links 2020-10-28 21:19:50 -04:00
Toby Lawrence 1e120b3671 cfg feature flag test redux 2020-10-28 21:14:46 -04:00
Toby Lawrence 248bc3c406 cfg feature flag test 2020-10-28 21:10:31 -04:00
Toby Lawrence d481392563 try putting docs on netlify 2020-10-28 20:59:03 -04:00
Toby Lawrence 6faae7718f add bench for tracing layer + fix record_bool bug 2020-10-28 20:22:37 -04:00
Toby Lawrence 540fcfd25b let the benchmark breathe a little better 2020-10-27 23:43:39 -04:00
kilpatty 10fc3736c4
metrics-exporter-prometheus: split off tokio and hyper functionality to another (default) feature 2020-10-27 14:53:57 -05:00
Toby Lawrence 4f1b57cccc woops 2020-10-27 10:04:56 -04:00
Toby Lawrence e98c94f238 moar README tweaks 2020-10-27 10:04:01 -04:00
Toby Lawrence 9385891cca no text header above splash logo 2020-10-27 09:53:14 -04:00
Toby Lawrence 18838e62f9 upsize splash logo to better fit 2020-10-27 09:52:29 -04:00
Toby Lawrence 282671945e
Merge pull request #114 from metrics-rs/fastest_possible_statics
core: remove scoping and simplify the macro "fast path"
2020-10-27 09:49:49 -04:00
Toby Lawrence cb54c40d13 remove scoping and switch to pure statics for fast path 2020-10-27 08:58:18 -04:00
Toby Lawrence cec241dec9 fix README links 2020-10-27 08:57:54 -04:00
Toby Lawrence b227c9cd49 add new logo to README splash :D :D :D 2020-10-27 08:51:10 -04:00
Toby Lawrence 9a3c293633 massive docs tweaks 2020-10-25 16:59:44 -04:00
Toby Lawrence d7efc64ff7
Merge pull request #112 from metrics-rs/drop_metrics_no_clients_tcp
metrics-exporter-tcp: drop metrics when no clients are connected.
2020-10-24 13:42:51 -04:00
Toby Lawrence 5763651a41 drop metrics if no clients are connected 2020-10-24 12:36:43 -04:00
Toby Lawrence 6e6761acc3 tweaks 2020-10-24 11:20:31 -04:00
Toby Lawrence a2955973d7 remove StreamingIntegers since we don't use it anymore 2020-10-24 11:18:07 -04:00
Toby Lawrence 5b57500d9d
(cargo-release) version 0.1.0-alpha.3 2020-10-24 11:02:26 -04:00
Toby Lawrence 1a9c073067
(cargo-release) version 0.1.0-alpha.6 2020-10-24 11:02:26 -04:00
Toby Lawrence 17813a6f71
(cargo-release) version 0.1.0-alpha.5 2020-10-24 11:02:25 -04:00
Toby Lawrence bb9bbe2dd7
(cargo-release) version 0.4.0-alpha.6 2020-10-24 11:02:25 -04:00
Toby Lawrence dfad2526bb
(cargo-release) version 0.13.0-alpha.8 2020-10-24 11:02:24 -04:00
Toby Lawrence b8583be834
(cargo-release) version 0.1.0-alpha.5 2020-10-24 11:02:24 -04:00
Toby Lawrence 808c290064
(cargo-release) version 0.1.0-alpha.2 2020-10-24 10:59:23 -04:00
Toby Lawrence 7e3535f94e
(cargo-release) version 0.1.0-alpha.5 2020-10-24 10:59:22 -04:00
Toby Lawrence c8137a5dd2
(cargo-release) version 0.1.0-alpha.4 2020-10-24 10:59:22 -04:00
Toby Lawrence fdcf2ad90f
(cargo-release) version 0.1.1-alpha.2 2020-10-24 10:59:22 -04:00
Toby Lawrence 4949c72ff1
(cargo-release) version 0.4.0-alpha.5 2020-10-24 10:59:21 -04:00
Toby Lawrence 4ae7f0dc8f
(cargo-release) version 0.13.0-alpha.7 2020-10-24 10:59:21 -04:00
Toby Lawrence 74ba1eb6bb
(cargo-release) version 0.1.0-alpha.4 2020-10-24 10:59:20 -04:00
Toby Lawrence 7ee3bd1903 fix up bad release 2020-10-24 10:58:35 -04:00
Toby Lawrence cf1d93c979
core: add unit support (#107) 2020-10-24 10:55:12 -04:00
Toby Lawrence 156e0bdced get changelog stuff in place 2020-10-24 10:54:55 -04:00
Toby Lawrence 1aec7fdb51 rejigger metrics-observer so it's not part of tests 2020-10-24 10:28:44 -04:00
Toby Lawrence f12e4101da Atomic::null constness somehow changed 2020-10-24 10:18:24 -04:00
Toby Lawrence 05e451f7e1 fix formatting 2020-10-24 10:14:44 -04:00
Toby Lawrence b8bee9b19e fix streaming bug + no_std attempt with util as the testbench 2020-10-24 10:14:44 -04:00
Toby Lawrence 91be4d0aad helper methods + metrics-observer test case 2020-10-24 10:12:46 -04:00
Toby Lawrence 9f8f9d360c first cut at units 2020-10-24 10:12:46 -04:00
dependabot[bot] 5f26d6567f
Update env_logger requirement from 0.7 to 0.8 (#108)
Updates the requirements on [env_logger](https://github.com/env-logger-rs/env_logger) to permit the latest version.
- [Release notes](https://github.com/env-logger-rs/env_logger/releases)
- [Changelog](https://github.com/env-logger-rs/env_logger/blob/master/CHANGELOG.md)
- [Commits](https://github.com/env-logger-rs/env_logger/compare/v0.7.0...v0.8.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-10-20 10:12:16 -04:00
dependabot[bot] 1522221f7d
Update crossbeam-channel requirement from 0.4 to 0.5 (#102)
Updates the requirements on [crossbeam-channel](https://github.com/crossbeam-rs/crossbeam) to permit the latest version.
- [Release notes](https://github.com/crossbeam-rs/crossbeam/releases)
- [Changelog](https://github.com/crossbeam-rs/crossbeam/blob/master/CHANGELOG.md)
- [Commits](https://github.com/crossbeam-rs/crossbeam/compare/crossbeam-channel-0.4.0...crossbeam-channel-0.5.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-10-17 13:43:22 -04:00
dependabot[bot] e14b7ffd11
Update crossbeam-epoch requirement from 0.8 to 0.9 (#103)
Updates the requirements on [crossbeam-epoch](https://github.com/crossbeam-rs/crossbeam) to permit the latest version.
- [Release notes](https://github.com/crossbeam-rs/crossbeam/releases)
- [Changelog](https://github.com/crossbeam-rs/crossbeam/blob/master/CHANGELOG.md)
- [Commits](https://github.com/crossbeam-rs/crossbeam/compare/crossbeam-epoch-0.8.0...crossbeam-epoch-0.9.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Toby Lawrence <tobz@users.noreply.github.com>
2020-10-17 13:42:42 -04:00
dependabot[bot] dc6164fd0c
Update crossbeam-utils requirement from 0.7 to 0.8 (#101)
Updates the requirements on [crossbeam-utils](https://github.com/crossbeam-rs/crossbeam) to permit the latest version.
- [Release notes](https://github.com/crossbeam-rs/crossbeam/releases)
- [Changelog](https://github.com/crossbeam-rs/crossbeam/blob/master/CHANGELOG.md)
- [Commits](https://github.com/crossbeam-rs/crossbeam/compare/crossbeam-utils-0.7.0...crossbeam-utils-0.8.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-10-17 13:41:54 -04:00
MOZGIII ff4795e74f
Add a const constructor for KeyData (#106) 2020-10-17 10:38:13 -04:00
62 changed files with 3069 additions and 1447 deletions

View File

@ -24,7 +24,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
rust_version: ['1.43.0', 'stable', 'nightly']
rust_version: ['1.46.0', 'stable', 'nightly']
os: [ubuntu-latest, windows-latest, macOS-latest]
steps:
- uses: actions/checkout@v1
@ -35,6 +35,23 @@ jobs:
override: true
- name: Run Tests
run: cargo test
docs:
runs-on: ubuntu-latest
env:
RUSTDOCFLAGS: -Dwarnings
steps:
- uses: actions/checkout@v2
- name: Install Rust Nightly
uses: actions-rs/toolchain@v1
with:
toolchain: nightly
override: true
components: rust-docs
- name: Check docs
uses: actions-rs/cargo@v1
with:
command: doc
args: --workspace --no-deps
bench:
name: Bench ${{ matrix.os }}
runs-on: ${{ matrix.os }}
@ -49,4 +66,7 @@ jobs:
toolchain: 'stable'
override: true
- name: Run Benchmarks
run: cargo bench
uses: actions-rs/cargo@v1
with:
command: bench
args: --all-features

COPYRIGHT
View File

@ -1,75 +1,94 @@
Short version for non-lawyers:
metrics is MIT licensed.
`metrics` is MIT licensed.
Longer version:
Copyrights in the metrics project are retained by their contributors. No
copyright assignment is required to contribute to the metrics project.
Copyrights in the `metrics` project are retained by their contributors. No copyright assignment is
required to contribute to the `metrics` project.
Some files include explicit copyright notices and/or license notices.
For full authorship information, see the version control history.
Some files include explicit copyright notices and/or license notices. For full authorship
information, see the version control history.
Except as otherwise noted (below and/or in individual files), metrics
is licensed under the MIT license <LICENSE> or
<http://opensource.org/licenses/MIT>.
Except as otherwise noted (below and/or in individual files), `metrics` is licensed under the MIT
license <LICENSE> or <http://opensource.org/licenses/MIT>.
metrics includes packages written by third parties.
The following third party packages are included, and carry
their own copyright notices and license terms:
`metrics` includes packages written by third parties. The following third party packages are
included, and carry their own copyright notices and license terms:
* Portions of the API design are derived from tic
<https://github.com/brayniac/tic>, which carries the following
license:
* Portions of the API design are derived from the `tic` crate which carries the following license:
Copyright (c) 2016 Brian Martin
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without restriction,
including without limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of the Software,
and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
Permission is hereby granted, free of charge, to any person obtaining a copy of this software
and associated documentation files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
The above copyright notice and this permission notice shall be included in all copies or
substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
* metrics is a fork of rust-lang-nursery/log which carries the following
license:
* metrics is a fork of `rust-lang-nursery/log` which carries the following license:
Copyright (c) 2014 The Rust Project Developers
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
Permission is hereby granted, free of charge, to any person obtaining a copy of this software
and associated documentation files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
The above copyright notice and this permission notice shall be included in all copies or
substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
* metrics-observer reuses code from `std::time::Duration` which carries the following license:
Permission is hereby granted, free of charge, to any person obtaining a copy of this software
and associated documentation files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or
substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
* metrics includes code from the `beef` crate which carries the following license:
Copyright (c) 2020 Maciej Hirsz <hello@maciej.codes>
The MIT License (MIT)
Permission is hereby granted, free of charge, to any person obtaining a copy of this software
and associated documentation files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or
substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

View File

@ -7,5 +7,9 @@ members = [
"metrics-exporter-tcp",
"metrics-exporter-prometheus",
"metrics-tracing-context",
"metrics-observer",
]
exclude = ["metrics-observer"]
[patch.crates-io]
hyper = { git = "https://github.com/hyperium/hyper/", rev = "ed2b22a7f66899d338691552fbcb6c0f2f4e06b9" }

View File

@ -1,18 +1,25 @@
# metrics
![Metrics - High-performance, protocol-agnostic instrumentation][splash]
[![conduct-badge][]][conduct] [![license-badge][]](#license) [![discord-badge][]][discord] ![last-commit-badge][] ![contributors-badge][]
[splash]: https://raw.githubusercontent.com/metrics-rs/metrics/main/assets/splash.png
[![Code of Conduct][conduct-badge]][conduct]
[![MIT licensed][license-badge]](#license)
[![Documentation][docs-badge]][docs]
[![Discord chat][discord-badge]][discord]
![last-commit-badge][]
![contributors-badge][]
[conduct-badge]: https://img.shields.io/badge/%E2%9D%A4-code%20of%20conduct-blue.svg
[license-badge]: https://img.shields.io/badge/license-MIT-blue
[conduct]: https://github.com/metrics-rs/metrics/blob/master/CODE_OF_CONDUCT.md
[license-badge]: https://img.shields.io/badge/license-MIT-blue
[docs-badge]: https://docs.rs/metrics/badge.svg
[docs]: https://docs.rs/metrics
[discord-badge]: https://img.shields.io/discord/500028886025895936
[discord]: https://discord.gg/eTwKyY9
[last-commit-badge]: https://img.shields.io/github/last-commit/metrics-rs/metrics
[contributors-badge]: https://img.shields.io/github/contributors/metrics-rs/metrics
The Metrics project: a metrics ecosystem for Rust.
## code of conduct
**NOTE**: All conversations and contributions to this project shall adhere to the [Code of Conduct][conduct].
@ -36,13 +43,12 @@ Importantly, this works for both library authors and application authors. If th
The Metrics project provides a number of crates for both library and application authors.
If you're a library author, you'll only care about using [`metrics`] to instrument your library. If
you're an application author, you'll likely also want to instrument your application, but you'll
care about "exporters" as a means to take those metrics and ship them somewhere for analysis.
If you're a library author, you'll only care about using [`metrics`][metrics] to instrument your library. If you're an application author, you'll likely also want to instrument your application, but you'll care about "exporters" as a means to take those metrics and ship them somewhere for analysis.
Overall, this repository is home to the following crates:
* [`metrics`][metrics]: A lightweight metrics facade, similar to [`log`](https://docs.rs/log).
* [`metrics`][metrics]: A lightweight metrics facade, similar to [`log`][log].
* [`metrics-macros`][metrics-macros]: Procedural macros that power `metrics`.
* [`metrics-tracing-context`][metrics-tracing-context]: Allow capturing [`tracing`][tracing] span
fields as metric labels.
@ -57,10 +63,11 @@ We're always looking for users who have thoughts on how to make `metrics` better
We'd love to chat about any of the above, or anything else, really! You can find us over on [Discord](https://discord.gg/eTwKyY9).
[metrics]: https://github.com/metrics-rs/metrics/tree/master/metrics
[metrics-macros]: https://github.com/metrics-rs/metrics/tree/master/metrics-macros
[metrics-tracing-context]: https://github.com/metrics-rs/metrics/tree/master/metrics-tracing-context
[metrics-exporter-tcp]: https://github.com/metrics-rs/metrics/tree/master/metrics-exporter-tcp
[metrics-exporter-prometheus]: https://github.com/metrics-rs/metrics/tree/master/metrics-exporter-prometheus
[metrics-util]: https://github.com/metrics-rs/metrics/tree/master/metrics-util
[metrics]: https://github.com/metrics-rs/metrics/tree/main/metrics
[metrics-macros]: https://github.com/metrics-rs/metrics/tree/main/metrics-macros
[metrics-tracing-context]: https://github.com/metrics-rs/metrics/tree/main/metrics-tracing-context
[metrics-exporter-tcp]: https://github.com/metrics-rs/metrics/tree/main/metrics-exporter-tcp
[metrics-exporter-prometheus]: https://github.com/metrics-rs/metrics/tree/main/metrics-exporter-prometheus
[metrics-util]: https://github.com/metrics-rs/metrics/tree/main/metrics-util
[log]: https://docs.rs/log
[tracing]: https://tracing.rs

assets/splash.png (new binary file, 45 KiB; image not shown)

View File

@ -1,15 +0,0 @@
trigger: ["master"]
pr: ["master"]
jobs:
# Check the crate formatting.
- template: ci/azure-rustfmt.yml
# Actaully test the crate.
- template: ci/azure-test-stable.yml
# Test it to make sure it still works on our minimum version.
- template: ci/azure-test-minimum.yaml
# Now test it against nightly w/ ASM support.
- template: ci/azure-test-nightly.yml

View File

@ -1,12 +1,12 @@
[package]
name = "metrics-benchmark"
version = "0.1.1-alpha.1"
version = "0.1.1-alpha.2"
authors = ["Toby Lawrence <toby@nuclearfurnace.com>"]
edition = "2018"
[dependencies]
log = "0.4"
env_logger = "0.7"
env_logger = "0.8"
getopts = "0.2"
hdrhistogram = "7.0"
quanta = "0.6"

View File

@ -4,6 +4,7 @@ use hdrhistogram::Histogram;
use log::{error, info};
use metrics::{gauge, histogram, increment};
use metrics_util::DebuggingRecorder;
use quanta::{Clock, Instant as QuantaInstant};
use std::{
env,
ops::Sub,
@ -18,7 +19,7 @@ use std::{
const LOOP_SAMPLE: u64 = 1000;
struct Generator {
t0: Option<Instant>,
t0: Option<QuantaInstant>,
gauge: i64,
hist: Histogram<u64>,
done: Arc<AtomicBool>,
@ -37,6 +38,7 @@ impl Generator {
}
fn run(&mut self) {
let mut clock = Clock::new();
let mut counter = 0;
loop {
counter += 1;
@ -47,11 +49,11 @@ impl Generator {
self.gauge += 1;
let t1 = Instant::now();
let t1 = clock.now();
if let Some(t0) = self.t0 {
let start = if counter % 1000 == 0 {
Some(Instant::now())
let start = if counter % LOOP_SAMPLE == 0 {
Some(clock.now())
} else {
None
};
@ -61,7 +63,7 @@ impl Generator {
histogram!("ok", t1.sub(t0));
if let Some(val) = start {
let delta = Instant::now() - val;
let delta = clock.now() - val;
self.hist.saturating_record(delta.as_nanos() as u64);
// We also increment our global counter for the sample rate here.
@ -78,7 +80,7 @@ impl Generator {
impl Drop for Generator {
fn drop(&mut self) {
info!(
" sender latency: min: {:9} p50: {:9} p95: {:9} p99: {:9} p999: {:9} max: {:9}",
" sender latency: min: {:8} p50: {:8} p95: {:8} p99: {:8} p999: {:8} max: {:8}",
nanos_to_readable(self.hist.min()),
nanos_to_readable(self.hist.value_at_percentile(50.0)),
nanos_to_readable(self.hist.value_at_percentile(95.0)),
@ -146,7 +148,7 @@ fn main() {
info!("duration: {}s", seconds);
info!("producers: {}", producers);
let recorder = DebuggingRecorder::new();
let recorder = DebuggingRecorder::with_ordering(false);
let snapshotter = recorder.snapshotter();
recorder.install().expect("failed to install recorder");
@ -194,7 +196,7 @@ fn main() {
info!("--------------------------------------------------------------------------------");
info!(" ingested samples total: {}", total);
info!(
"snapshot retrieval: min: {:9} p50: {:9} p95: {:9} p99: {:9} p999: {:9} max: {:9}",
"snapshot retrieval: min: {:8} p50: {:8} p95: {:8} p99: {:8} p999: {:8} max: {:8}",
nanos_to_readable(snapshot_hist.min()),
nanos_to_readable(snapshot_hist.value_at_percentile(50.0)),
nanos_to_readable(snapshot_hist.value_at_percentile(95.0)),
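The hunks above swap `std::time::Instant` for `quanta`'s clock in the benchmark's hot loop. As a standalone illustration of the measurement pattern now used (not part of the patch; it assumes `quanta = "0.6"` as pinned in the manifest above):

use quanta::Clock;

fn main() {
    // quanta's Clock hands out Instants backed by a fast monotonic source.
    let mut clock = Clock::new();

    let t0 = clock.now();
    let mut acc = 0u64;
    for i in 0..1_000_000u64 {
        acc = acc.wrapping_add(i);
    }
    let t1 = clock.now();

    // Subtracting two quanta Instants yields a std Duration, which is what the
    // benchmark records via `hist.saturating_record(delta.as_nanos() as u64)`.
    let delta = t1 - t0;
    println!("acc={} elapsed={}ns", acc, delta.as_nanos());
}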

View File

@ -1,6 +1,6 @@
[package]
name = "metrics-exporter-prometheus"
version = "0.1.0-alpha.4"
version = "0.1.0-alpha.7"
authors = ["Toby Lawrence <toby@nuclearfurnace.com>"]
edition = "2018"
@ -15,15 +15,21 @@ readme = "README.md"
categories = ["development-tools::debugging"]
keywords = ["metrics", "telemetry", "prometheus"]
[features]
default = ["tokio-exporter"]
tokio-exporter = ["hyper", "tokio"]
[dependencies]
metrics = { version = "0.13.0-alpha.1", path = "../metrics" }
metrics-util = { version = "0.4.0-alpha.1", path = "../metrics-util"}
metrics-util = { version = "0.4.0-alpha.1", path = "../metrics-util" }
hdrhistogram = "7.1"
hyper = { version = "0.13", default-features = false, features = ["tcp"] }
tokio = { version = "0.2", features = ["rt-core", "tcp", "time", "macros"] }
parking_lot = "0.11"
thiserror = "1.0"
# Optional
hyper = { version = "0.14.0-dev", default-features = false, features = ["tcp", "server", "http1", "http2"], optional = true }
tokio = { version = "0.3", features = ["rt", "net", "time", "macros"], optional = true }
[dev-dependencies]
quanta = "0.6"
tracing = "0.1"

View File

@ -1,23 +1,28 @@
//! Records metrics in the Prometheus exposition format.
#![deny(missing_docs)]
#![cfg_attr(docsrs, feature(doc_cfg), deny(broken_intra_doc_links))]
use std::future::Future;
#[cfg(feature = "tokio-exporter")]
use hyper::{
service::{make_service_fn, service_fn},
{Body, Error as HyperError, Response, Server},
};
use metrics::{Key, Recorder, SetRecorderError};
use metrics::{Key, Recorder, SetRecorderError, Unit};
use metrics_util::{
parse_quantiles, CompositeKey, Handle, Histogram, MetricKind, Quantile, Registry,
};
use parking_lot::RwLock;
use std::collections::HashMap;
use std::io;
use std::iter::FromIterator;
use std::net::{IpAddr, Ipv4Addr, SocketAddr};
use std::sync::Arc;
#[cfg(feature = "tokio-exporter")]
use std::thread;
use std::{collections::HashMap, time::SystemTime};
use std::time::SystemTime;
use thiserror::Error as ThisError;
#[cfg(feature = "tokio-exporter")]
use tokio::{pin, runtime, select};
type PrometheusRegistry = Registry<CompositeKey, Handle>;
@ -31,6 +36,7 @@ pub enum Error {
Io(#[from] io::Error),
/// Binding/listening to the given address did not succeed.
#[cfg(feature = "tokio-exporter")]
#[error("failed to bind to given listen address: {0}")]
Hyper(#[from] HyperError),
@ -206,7 +212,6 @@ impl Inner {
output.push_str(value.to_string().as_str());
output.push_str("\n");
}
output.push_str("\n");
}
for (name, mut by_labels) in gauges.drain() {
@ -228,7 +233,6 @@ impl Inner {
output.push_str(value.to_string().as_str());
output.push_str("\n");
}
output.push_str("\n");
}
let mut sorted_overrides = self
@ -311,8 +315,6 @@ impl Inner {
output.push_str(count.to_string().as_str());
output.push_str("\n");
}
output.push_str("\n");
}
output
@ -322,7 +324,7 @@ impl Inner {
/// A Prometheus recorder.
///
/// This recorder should be composed with other recorders or installed globally via
/// [`metrics::set_boxed_recorder`][set_boxed_recorder].
/// [`metrics::set_boxed_recorder`].
///
///
pub struct PrometheusRecorder {
@ -330,16 +332,37 @@ pub struct PrometheusRecorder {
}
impl PrometheusRecorder {
/// Gets a [`PrometheusHandle`] to this recorder.
pub fn handle(&self) -> PrometheusHandle {
PrometheusHandle {
inner: self.inner.clone(),
}
}
fn add_description_if_missing(&self, key: &Key, description: Option<&'static str>) {
if let Some(description) = description {
let mut descriptions = self.inner.descriptions.write();
if !descriptions.contains_key(key.name().as_ref()) {
if !descriptions.contains_key(key.name().to_string().as_str()) {
descriptions.insert(key.name().to_string(), description);
}
}
}
}
/// Handle to [`PrometheusRecorder`].
///
/// Useful for exposing a scrape endpoint on an existing HTTP/HTTPS server.
pub struct PrometheusHandle {
inner: Arc<Inner>,
}
impl PrometheusHandle {
/// Returns the metrics in Prometheus accepted String format.
pub fn render(&self) -> String {
self.inner.render()
}
}
/// Builder for creating and installing a Prometheus recorder/exporter.
pub struct PrometheusBuilder {
listen_address: SocketAddr,
@ -379,8 +402,9 @@ impl PrometheusBuilder {
/// By default, the quantiles will be set to: 0.0, 0.5, 0.9, 0.95, 0.99, 0.999, and 1.0. This means
/// that all histograms will be exposed as Prometheus summaries.
///
/// If buckets are set (via [`set_buckets`] or [`set_buckets_for_metric`]) then all histograms will
/// be exposed as summaries instead.
/// If buckets are set (via [`set_buckets`][Self::set_buckets] or
/// [`set_buckets_for_metric`][Self::set_buckets_for_metric]) then all histograms will be exposed
/// as summaries instead.
pub fn set_quantiles(mut self, quantiles: &[f64]) -> Self {
self.quantiles = parse_quantiles(quantiles);
self
@ -414,15 +438,18 @@ impl PrometheusBuilder {
///
/// An error will be returned if there's an issue with creating the HTTP server or with
/// installing the recorder as the global recorder.
#[cfg(feature = "tokio-exporter")]
pub fn install(self) -> Result<(), Error> {
let (recorder, exporter) = self.build()?;
metrics::set_boxed_recorder(Box::new(recorder))?;
let mut runtime = runtime::Builder::new()
.basic_scheduler()
let runtime = runtime::Builder::new_current_thread()
.enable_all()
.build()?;
let (recorder, exporter) = {
let _guard = runtime.enter();
self.build_with_exporter()
}?;
metrics::set_boxed_recorder(Box::new(recorder))?;
thread::Builder::new()
.name("metrics-exporter-prometheus-http".to_string())
.spawn(move || {
@ -439,13 +466,31 @@ impl PrometheusBuilder {
Ok(())
}
/// Builds the recorder and returns it.
/// This function is only enabled when default features are not set.
pub fn build(self) -> Result<PrometheusRecorder, Error> {
let inner = Arc::new(Inner {
registry: Registry::new(),
distributions: RwLock::new(HashMap::new()),
quantiles: self.quantiles.clone(),
buckets: self.buckets.clone(),
buckets_by_name: self.buckets_by_name,
descriptions: RwLock::new(HashMap::new()),
});
let recorder = PrometheusRecorder { inner };
Ok(recorder)
}
/// Builds the recorder and exporter and returns them both.
///
/// In most cases, users should prefer to use [`PrometheusBuilder::install`] to create and
/// install the recorder and exporter automatically for them. If a caller is combining
/// recorders, or needs to schedule the exporter to run in a particular way, this method
/// provides the flexibility to do so.
pub fn build(
#[cfg(feature = "tokio-exporter")]
pub fn build_with_exporter(
self,
) -> Result<
(
@ -494,7 +539,7 @@ impl PrometheusBuilder {
}
impl Recorder for PrometheusRecorder {
fn register_counter(&self, key: Key, description: Option<&'static str>) {
fn register_counter(&self, key: Key, _unit: Option<Unit>, description: Option<&'static str>) {
self.add_description_if_missing(&key, description);
self.inner.registry().op(
CompositeKey::new(MetricKind::Counter, key),
@ -503,7 +548,7 @@ impl Recorder for PrometheusRecorder {
);
}
fn register_gauge(&self, key: Key, description: Option<&'static str>) {
fn register_gauge(&self, key: Key, _unit: Option<Unit>, description: Option<&'static str>) {
self.add_description_if_missing(&key, description);
self.inner.registry().op(
CompositeKey::new(MetricKind::Gauge, key),
@ -512,7 +557,7 @@ impl Recorder for PrometheusRecorder {
);
}
fn register_histogram(&self, key: Key, description: Option<&'static str>) {
fn register_histogram(&self, key: Key, _unit: Option<Unit>, description: Option<&'static str>) {
self.add_description_if_missing(&key, description);
self.inner.registry().op(
CompositeKey::new(MetricKind::Histogram, key),
@ -550,7 +595,11 @@ fn key_to_parts(key: Key) -> (String, Vec<String>) {
let name = key.name();
let labels = key.labels();
let sanitize = |c| c == '.' || c == '=' || c == '{' || c == '}' || c == '+' || c == '-';
let name = name.replace(sanitize, "_");
let name = name
.parts()
.map(|s| s.replace(sanitize, "_"))
.collect::<Vec<_>>()
.join("_");
let labels = labels
.into_iter()
.map(|label| {
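Putting the pieces above together: with the `tokio-exporter` feature disabled (the crate pulled in with `default-features = false`), `build()` yields just the `PrometheusRecorder`, and the new `PrometheusHandle::render()` produces the exposition text for whatever HTTP server the application already runs. A minimal sketch of that wiring; `PrometheusBuilder::new()` is assumed here, since the constructor is not shown in this diff:

use metrics::increment;
use metrics_exporter_prometheus::PrometheusBuilder;

fn main() {
    // Build just the recorder; no Hyper/Tokio exporter is spawned.
    let recorder = PrometheusBuilder::new()
        .build()
        .expect("failed to build Prometheus recorder");

    // Grab a handle before installing the recorder globally; it renders the
    // exposition text on demand.
    let handle = recorder.handle();
    metrics::set_boxed_recorder(Box::new(recorder))
        .expect("failed to install Prometheus recorder");

    increment!("example_requests_total");

    // Hand this string to an existing scrape endpoint, e.g. a route on the
    // application's own HTTP server.
    println!("{}", handle.render());
}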

View File

@ -0,0 +1,11 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
<!-- next-header -->
## [Unreleased] - ReleaseDate
### Added
- Effective birth of the crate.

View File

@ -1,6 +1,6 @@
[package]
name = "metrics-exporter-tcp"
version = "0.1.0-alpha.3"
version = "0.1.0-alpha.5"
authors = ["Toby Lawrence <toby@nuclearfurnace.com>"]
edition = "2018"
@ -19,7 +19,7 @@ keywords = ["metrics", "telemetry", "tcp"]
metrics = { version = "0.13.0-alpha.1", path = "../metrics", features = ["std"] }
metrics-util = { version = "0.4.0-alpha.1", path = "../metrics-util" }
bytes = "0.5"
crossbeam-channel = "0.4"
crossbeam-channel = "0.5"
prost = "0.6"
prost-types = "0.6"
mio = { version = "0.7", features = ["os-poll", "tcp"] }

View File

@ -25,9 +25,9 @@ fn main() {
Err(e) => eprintln!("read error: {:?}", e),
};
match proto::Metric::decode_length_delimited(&mut buf) {
match proto::Event::decode_length_delimited(&mut buf) {
Err(e) => eprintln!("decode error: {:?}", e),
Ok(msg) => println!("metric: {:?}", msg),
Ok(msg) => println!("event: {:?}", msg),
}
}
}

View File

@ -1,7 +1,7 @@
use std::thread;
use std::time::Duration;
use metrics::{histogram, increment};
use metrics::{histogram, increment, register_histogram, Unit};
use metrics_exporter_tcp::TcpBuilder;
use quanta::Clock;
@ -15,6 +15,8 @@ fn main() {
let mut clock = Clock::new();
let mut last = None;
register_histogram!("tcp_server_loop_delta_ns", Unit::Nanoseconds);
loop {
increment!("tcp_server_loops", "system" => "foo");

View File

@ -4,6 +4,22 @@ import "google/protobuf/timestamp.proto";
package event.proto;
message Metadata {
string name = 1;
enum MetricType {
COUNTER = 0;
GAUGE = 1;
HISTOGRAM = 2;
}
MetricType metric_type = 2;
oneof unit {
string unit_value = 3;
}
oneof description {
string description_value = 4;
}
}
message Metric {
string name = 1;
google.protobuf.Timestamp timestamp = 2;
@ -26,3 +42,10 @@ message Gauge {
message Histogram {
uint64 value = 1;
}
message Event {
oneof event {
Metadata metadata = 1;
Metric metric = 2;
}
}
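Because metrics are now wrapped in an `Event` envelope (see the `metrics-observer` example change above, which decodes `proto::Event` instead of `proto::Metric`), consumers have to match on the `oneof` before acting. A rough consumer-side sketch, assuming the prost-generated module layout used by the exporter (`proto::event::Event` with `Metadata` and `Metric` variants):

fn handle_event(event: proto::Event) {
    match event.event {
        // Metadata carries the metric type plus optional unit/description.
        // `metric_type` is the raw enum value (an i32) in prost-generated code.
        Some(proto::event::Event::Metadata(meta)) => {
            println!("metadata for {}: type={}", meta.name, meta.metric_type);
        }
        // Metric carries the actual timestamped value.
        Some(proto::event::Event::Metric(metric)) => {
            println!("metric {} at {:?}", metric.name, metric.timestamp);
        }
        // An unset oneof is possible in protobuf; ignore it.
        None => {}
    }
}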

View File

@ -23,7 +23,7 @@
//! `proto/event.proto`.
//!
//! # Usage
//! The TCP exporter can be constructed by creating a [`TcpBuilder], configuring it as needed, and
//! The TCP exporter can be constructed by creating a [`TcpBuilder`], configuring it as needed, and
//! calling [`TcpBuilder::install`] to both spawn the TCP server as well as install the exporter
//! globally.
//!
@ -44,16 +44,21 @@
//! ```
//!
//! [metrics]: https://docs.rs/metrics
#![deny(missing_docs)]
#![cfg_attr(docsrs, feature(doc_cfg), deny(broken_intra_doc_links))]
use std::collections::{BTreeMap, HashMap, VecDeque};
use std::io::{self, Write};
use std::net::SocketAddr;
use std::sync::Arc;
use std::sync::{
atomic::{AtomicBool, Ordering},
Arc, Mutex,
};
use std::thread;
use std::time::SystemTime;
use bytes::Bytes;
use crossbeam_channel::{bounded, unbounded, Receiver, Sender};
use metrics::{Key, Recorder, SetRecorderError};
use metrics::{Key, Recorder, SetRecorderError, Unit};
use mio::{
net::{TcpListener, TcpStream},
Events, Interest, Poll, Token, Waker,
@ -70,12 +75,19 @@ mod proto {
include!(concat!(env!("OUT_DIR"), "/event.proto.rs"));
}
use self::proto::metadata::MetricType;
enum MetricValue {
Counter(u64),
Gauge(f64),
Histogram(u64),
}
enum Event {
Metadata(Key, MetricType, Option<Unit>, Option<&'static str>),
Metric(Key, MetricValue),
}
/// Errors that could occur while installing a TCP recorder/exporter.
#[derive(Debug)]
pub enum Error {
@ -98,10 +110,55 @@ impl From<SetRecorderError> for Error {
}
}
#[derive(Clone)]
struct State {
client_count: Arc<Mutex<usize>>,
should_send: Arc<AtomicBool>,
waker: Arc<Waker>,
}
impl State {
pub fn from_waker(waker: Waker) -> State {
State {
client_count: Arc::new(Mutex::new(0)),
should_send: Arc::new(AtomicBool::new(false)),
waker: Arc::new(waker),
}
}
pub fn should_send(&self) -> bool {
self.should_send.load(Ordering::Relaxed)
}
pub fn increment_clients(&self) {
// This is slightly overkill _but_ it means we can ensure no wrapping
// addition or subtraction, keeping our "if no clients, don't send" logic
// intact in the face of a logic mistake on our part.
let mut count = self.client_count.lock().unwrap();
*count = count.saturating_add(1);
self.should_send.store(true, Ordering::SeqCst);
}
pub fn decrement_clients(&self) {
// This is slightly overkill _but_ it means we can ensure no wrapping
// addition or subtraction, keeping our "if no clients, don't send" logic
// intact in the face of a logic mistake on our part.
let mut count = self.client_count.lock().unwrap();
*count = count.saturating_sub(1);
if *count == 0 {
self.should_send.store(false, Ordering::SeqCst);
}
}
pub fn wake(&self) {
let _ = self.waker.wake();
}
}
/// A TCP recorder.
pub struct TcpRecorder {
tx: Sender<(Key, MetricValue)>,
waker: Arc<Waker>,
tx: Sender<Event>,
state: State,
}
/// Builder for creating and installing a TCP recorder/exporter.
@ -174,35 +231,57 @@ impl TcpBuilder {
};
let poll = Poll::new()?;
let waker = Arc::new(Waker::new(poll.registry(), WAKER)?);
let waker = Waker::new(poll.registry(), WAKER)?;
let mut listener = TcpListener::bind(self.listen_addr)?;
poll.registry()
.register(&mut listener, LISTENER, Interest::READABLE)?;
let state = State::from_waker(waker);
let recorder = TcpRecorder {
tx,
waker: Arc::clone(&waker),
state: state.clone(),
};
thread::spawn(move || run_transport(poll, waker, listener, rx, buffer_size));
thread::spawn(move || run_transport(poll, listener, rx, state, buffer_size));
Ok(recorder)
}
}
impl TcpRecorder {
fn register_metric(
&self,
key: Key,
metric_type: MetricType,
unit: Option<Unit>,
description: Option<&'static str>,
) {
let _ = self
.tx
.try_send(Event::Metadata(key, metric_type, unit, description));
self.state.wake();
}
fn push_metric(&self, key: Key, value: MetricValue) {
let _ = self.tx.try_send((key, value));
let _ = self.waker.wake();
if self.state.should_send() {
let _ = self.tx.try_send(Event::Metric(key, value));
self.state.wake();
}
}
}
impl Recorder for TcpRecorder {
fn register_counter(&self, _key: Key, _description: Option<&'static str>) {}
fn register_counter(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
self.register_metric(key, MetricType::Counter, unit, description);
}
fn register_gauge(&self, _key: Key, _description: Option<&'static str>) {}
fn register_gauge(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
self.register_metric(key, MetricType::Gauge, unit, description);
}
fn register_histogram(&self, _key: Key, _description: Option<&'static str>) {}
fn register_histogram(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
self.register_metric(key, MetricType::Histogram, unit, description);
}
fn increment_counter(&self, key: Key, value: u64) {
self.push_metric(key, MetricValue::Counter(value));
@ -219,15 +298,16 @@ impl Recorder for TcpRecorder {
fn run_transport(
mut poll: Poll,
waker: Arc<Waker>,
listener: TcpListener,
rx: Receiver<(Key, MetricValue)>,
rx: Receiver<Event>,
state: State,
buffer_size: Option<usize>,
) {
let buffer_limit = buffer_size.unwrap_or(std::usize::MAX);
let mut events = Events::with_capacity(1024);
let mut clients = HashMap::new();
let mut clients_to_remove = Vec::new();
let mut metadata = HashMap::new();
let mut next_token = START_TOKEN;
let mut buffered_pmsgs = VecDeque::with_capacity(buffer_limit);
@ -257,7 +337,7 @@ fn run_transport(
if buffered_pmsgs.len() >= buffer_limit {
// We didn't drain ourselves here, so schedule a future wake so we
// continue to drain remaining metrics.
let _ = waker.wake();
state.wake();
break;
}
@ -270,10 +350,22 @@ fn run_transport(
// If our sender is dead, we can't do anything else, so just return.
Err(_) => return,
};
let (key, value) = msg;
match convert_metric_to_protobuf_encoded(key, value) {
Ok(pmsg) => buffered_pmsgs.push_back(pmsg),
Err(e) => error!(error = ?e, "error encoding metric"),
match msg {
Event::Metadata(key, metric_type, unit, desc) => {
let entry = metadata
.entry(key)
.or_insert_with(|| (metric_type, None, None));
let (_, uentry, dentry) = entry;
*uentry = unit;
*dentry = desc;
}
Event::Metric(key, value) => {
match convert_metric_to_protobuf_encoded(key, value) {
Ok(pmsg) => buffered_pmsgs.push_back(pmsg),
Err(e) => error!(error = ?e, "error encoding metric"),
}
}
}
}
drop(_mrxspan);
@ -290,6 +382,7 @@ fn run_transport(
let done = drive_connection(conn, wbuf, msgs);
if done {
clients_to_remove.push(*token);
state.decrement_clients();
continue;
}
@ -314,6 +407,7 @@ fn run_transport(
let done = drive_connection(conn, wbuf, msgs);
if done {
clients_to_remove.push(*token);
state.decrement_clients();
}
}
@ -326,6 +420,7 @@ fn run_transport(
if let Some((conn, _, _)) = clients.get_mut(&token) {
trace!(?conn, ?token, "removing client");
clients.remove(&token);
state.decrement_clients();
}
}
}
@ -340,9 +435,12 @@ fn run_transport(
.register(&mut conn, token, CLIENT_INTEREST)
.expect("failed to register interest for client connection");
// Start tracking them.
state.increment_clients();
// Start tracking them, and enqueue all of the metadata.
let metadata = generate_metadata_messages(&metadata);
clients
.insert(token, (conn, None, VecDeque::new()))
.insert(token, (conn, None, metadata))
.ok_or(())
.expect_err("client mapped to existing token!");
}
@ -361,6 +459,7 @@ fn run_transport(
if done {
trace!(?conn, ?token, "removing client");
clients.remove(&token);
state.decrement_clients();
}
}
}
@ -370,6 +469,23 @@ fn run_transport(
}
}
fn generate_metadata_messages(
metadata: &HashMap<Key, (MetricType, Option<Unit>, Option<&'static str>)>,
) -> VecDeque<Bytes> {
let mut bufs = VecDeque::new();
for (key, (metric_type, unit, desc)) in metadata.iter() {
let msg = convert_metadata_to_protobuf_encoded(
key,
metric_type.clone(),
unit.clone(),
desc.clone(),
)
.expect("failed to encode metadata buffer");
bufs.push_back(msg);
}
bufs
}
#[tracing::instrument(skip(wbuf, msgs))]
fn drive_connection(
conn: &mut TcpStream,
@ -391,7 +507,7 @@ fn drive_connection(
};
match conn.write(&buf) {
// Zero write = client closedd their connection, so remove 'em.
// Zero write = client closed their connection, so remove 'em.
Ok(0) => {
trace!(?conn, "zero write, closing client");
return true;
@ -421,6 +537,28 @@ fn drive_connection(
}
}
fn convert_metadata_to_protobuf_encoded(
key: &Key,
metric_type: MetricType,
unit: Option<Unit>,
desc: Option<&'static str>,
) -> Result<Bytes, EncodeError> {
let name = key.name().to_string();
let metadata = proto::Metadata {
name,
metric_type: metric_type.into(),
unit: unit.map(|u| proto::metadata::Unit::UnitValue(u.as_str().to_owned())),
description: desc.map(|d| proto::metadata::Description::DescriptionValue(d.to_owned())),
};
let event = proto::Event {
event: Some(proto::event::Event::Metadata(metadata)),
};
let mut buf = Vec::new();
event.encode_length_delimited(&mut buf)?;
Ok(Bytes::from(buf))
}
fn convert_metric_to_protobuf_encoded(key: Key, value: MetricValue) -> Result<Bytes, EncodeError> {
let name = key.name().to_string();
let labels = key
@ -442,9 +580,12 @@ fn convert_metric_to_protobuf_encoded(key: Key, value: MetricValue) -> Result<By
timestamp: Some(now),
value: Some(mvalue),
};
let event = proto::Event {
event: Some(proto::event::Event::Metric(metric)),
};
let mut buf = Vec::new();
metric.encode_length_delimited(&mut buf)?;
event.encode_length_delimited(&mut buf)?;
Ok(Bytes::from(buf))
}
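The heart of the "drop metrics when no clients are connected" change is the `State` type above: a mutex-guarded, saturating client counter plus an `AtomicBool` that the hot path reads with a relaxed load. A condensed standalone sketch of that pattern (simplified names, not the exporter's actual types):

use std::sync::{
    atomic::{AtomicBool, Ordering},
    Arc, Mutex,
};

#[derive(Clone)]
struct ClientGate {
    count: Arc<Mutex<usize>>,
    should_send: Arc<AtomicBool>,
}

impl ClientGate {
    fn new() -> Self {
        ClientGate {
            count: Arc::new(Mutex::new(0)),
            should_send: Arc::new(AtomicBool::new(false)),
        }
    }

    // Hot path: a relaxed load is enough, since observing a stale flag only
    // means sending (or skipping) a handful of extra metrics.
    fn should_send(&self) -> bool {
        self.should_send.load(Ordering::Relaxed)
    }

    // Saturating arithmetic means a miscounted disconnect can never wrap the
    // counter and silently re-enable sends with zero clients.
    fn client_connected(&self) {
        let mut count = self.count.lock().unwrap();
        *count = count.saturating_add(1);
        self.should_send.store(true, Ordering::SeqCst);
    }

    fn client_disconnected(&self) {
        let mut count = self.count.lock().unwrap();
        *count = count.saturating_sub(1);
        if *count == 0 {
            self.should_send.store(false, Ordering::SeqCst);
        }
    }
}

fn main() {
    let gate = ClientGate::new();
    assert!(!gate.should_send());
    gate.client_connected();
    assert!(gate.should_send());
    gate.client_disconnected();
    assert!(!gate.should_send());
}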

View File

@ -0,0 +1,11 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
<!-- next-header -->
## [Unreleased] - ReleaseDate
### Added
- Effective birth of the crate.

View File

@ -1,6 +1,6 @@
[package]
name = "metrics-macros"
version = "0.1.0-alpha.3"
version = "0.1.0-alpha.5"
authors = ["Toby Lawrence <toby@nuclearfurnace.com>"]
edition = "2018"
@ -25,3 +25,6 @@ proc-macro2 = "1.0"
proc-macro-hack = "0.5"
lazy_static = "1.4"
regex = "1.3"
[dev-dependencies]
syn = { version = "1.0", features = ["full"] }

View File

@ -3,48 +3,35 @@ extern crate proc_macro;
use self::proc_macro::TokenStream;
use lazy_static::lazy_static;
use proc_macro2::Span;
use proc_macro_hack::proc_macro_hack;
use quote::{format_ident, quote, ToTokens};
use regex::Regex;
use syn::parse::discouraged::Speculative;
use syn::parse::{Error, Parse, ParseStream, Result};
use syn::{parse_macro_input, Expr, LitStr, Token};
#[cfg(test)]
mod tests;
enum Key {
NotScoped(LitStr),
Scoped(LitStr),
}
impl Key {
pub fn span(&self) -> Span {
match self {
Key::Scoped(s) => s.span(),
Key::NotScoped(s) => s.span(),
}
}
}
enum Labels {
Existing(Expr),
Inline(Vec<(LitStr, Expr)>),
}
struct WithoutExpression {
key: Key,
key: LitStr,
labels: Option<Labels>,
}
struct WithExpression {
key: Key,
key: LitStr,
op_value: Expr,
labels: Option<Labels>,
}
struct Registration {
key: Key,
key: LitStr,
unit: Option<Expr>,
description: Option<LitStr>,
labels: Option<Labels>,
}
@ -79,30 +66,78 @@ impl Parse for Registration {
fn parse(mut input: ParseStream) -> Result<Self> {
let key = read_key(&mut input)?;
// We accept three possible parameters: unit, description, and labels.
//
// If our first parameter is a literal string, we either have the description and no labels,
// or a description and labels. Peek at the trailing token after the description to see if
// we need to keep parsing.
// This may or may not be the start of labels, if the description has been omitted, so
// we hold on to it until we can make sure nothing else is behind it, or if it's a full
// fledged set of labels.
let (description, labels) = if input.peek(Token![,]) && input.peek3(Token![=>]) {
let (unit, description, labels) = if input.peek(Token![,]) && input.peek3(Token![=>]) {
// We have a ", <something> =>" pattern, which can only be labels, so we have no
// description.
// unit or description.
let labels = parse_labels(&mut input)?;
(None, labels)
(None, None, labels)
} else if input.peek(Token![,]) && input.peek2(LitStr) {
// We already know we're not working with labels only, and if we have ", <literal
// string>" then we have to at least have a description, possibly with labels.
input.parse::<Token![,]>()?;
let description = input.parse::<LitStr>().ok();
let labels = parse_labels(&mut input)?;
(description, labels)
} else {
// We might have labels passed as an expression.
(None, description, labels)
} else if input.peek(Token![,]) {
// We may or may not have anything left to parse here, but it could also be any
// combination of unit + description and/or labels.
//
// We speculatively try and parse an expression from the buffer, and see if we can match
// it to the qualified name of the Unit enum. We run all of the other normal parsing
// after that for description and labels.
let forked = input.fork();
forked.parse::<Token![,]>()?;
let unit = if let Ok(Expr::Path(path)) = forked.parse::<Expr>() {
let qname = path
.path
.segments
.iter()
.map(|x| x.ident.to_string())
.collect::<Vec<_>>()
.join("::");
if qname.starts_with("metrics::Unit") || qname.starts_with("Unit") {
Some(Expr::Path(path))
} else {
None
}
} else {
None
};
// If we succeeded, advance the main parse stream up to where the fork left off.
if unit.is_some() {
input.advance_to(&forked);
}
// We still have to check for a possible description.
let description =
if input.peek(Token![,]) && input.peek2(LitStr) && !input.peek3(Token![=>]) {
input.parse::<Token![,]>()?;
input.parse::<LitStr>().ok()
} else {
None
};
let labels = parse_labels(&mut input)?;
(None, labels)
(unit, description, labels)
} else {
(None, None, None)
};
Ok(Registration {
key,
unit,
description,
labels,
})
@ -113,33 +148,36 @@ impl Parse for Registration {
pub fn register_counter(input: TokenStream) -> TokenStream {
let Registration {
key,
unit,
description,
labels,
} = parse_macro_input!(input as Registration);
get_expanded_registration("counter", key, description, labels).into()
get_expanded_registration("counter", key, unit, description, labels).into()
}
#[proc_macro_hack]
pub fn register_gauge(input: TokenStream) -> TokenStream {
let Registration {
key,
unit,
description,
labels,
} = parse_macro_input!(input as Registration);
get_expanded_registration("gauge", key, description, labels).into()
get_expanded_registration("gauge", key, unit, description, labels).into()
}
#[proc_macro_hack]
pub fn register_histogram(input: TokenStream) -> TokenStream {
let Registration {
key,
unit,
description,
labels,
} = parse_macro_input!(input as Registration);
get_expanded_registration("histogram", key, description, labels).into()
get_expanded_registration("histogram", key, unit, description, labels).into()
}
#[proc_macro_hack]
@ -186,12 +224,18 @@ pub fn histogram(input: TokenStream) -> TokenStream {
fn get_expanded_registration(
metric_type: &str,
key: Key,
name: LitStr,
unit: Option<Expr>,
description: Option<LitStr>,
labels: Option<Labels>,
) -> proc_macro2::TokenStream {
let register_ident = format_ident!("register_{}", metric_type);
let key = key_to_quoted(key, labels);
let key = key_to_quoted(labels);
let unit = match unit {
Some(e) => quote! { Some(#e) },
None => quote! { None },
};
let description = match description {
Some(s) => quote! { Some(#s) },
@ -200,11 +244,12 @@ fn get_expanded_registration(
quote! {
{
static METRIC_NAME: [metrics::SharedString; 1] = [metrics::SharedString::const_str(#name)];
// Only do this work if there's a recorder installed.
if let Some(recorder) = metrics::try_recorder() {
// Registrations are fairly rare, don't attempt to cache here
// and just use an owned ref.
recorder.#register_ident(metrics::Key::Owned(#key), #description);
recorder.#register_ident(metrics::Key::Owned(#key), #unit, #description);
}
}
}
@ -213,47 +258,70 @@ fn get_expanded_registration(
fn get_expanded_callsite<V>(
metric_type: &str,
op_type: &str,
key: Key,
name: LitStr,
labels: Option<Labels>,
op_values: V,
) -> proc_macro2::TokenStream
where
V: ToTokens,
{
let use_fast_path = can_use_fast_path(&labels);
let key = key_to_quoted(key, labels);
// We use a helper method for histogram values to coerce into u64, but otherwise,
// just pass through whatever the caller gave us.
let op_values = if metric_type == "histogram" {
quote! {
metrics::__into_u64(#op_values)
}
quote! { metrics::__into_u64(#op_values) }
} else {
quote! { #op_values }
};
let op_ident = format_ident!("{}_{}", op_type, metric_type);
let use_fast_path = can_use_fast_path(&labels);
if use_fast_path {
// We're on the fast path here, so we'll build our key, statically cache it,
// and use a borrowed reference to it for this and future operations.
let statics = match labels {
Some(Labels::Inline(pairs)) => {
let labels = pairs
.into_iter()
.map(|(key, val)| quote! { metrics::Label::from_static_parts(#key, #val) })
.collect::<Vec<_>>();
let labels_len = labels.len();
let labels_len = quote! { #labels_len };
quote! {
static METRIC_NAME: [metrics::SharedString; 1] = [metrics::SharedString::const_str(#name)];
static METRIC_LABELS: [metrics::Label; #labels_len] = [#(#labels),*];
static METRIC_KEY: metrics::KeyData =
metrics::KeyData::from_static_parts(&METRIC_NAME, &METRIC_LABELS);
}
}
None => {
quote! {
static METRIC_NAME: [metrics::SharedString; 1] = [metrics::SharedString::const_str(#name)];
static METRIC_KEY: metrics::KeyData =
metrics::KeyData::from_static_name(&METRIC_NAME);
}
}
_ => unreachable!("use_fast_path == true, but found expression-based labels"),
};
quote! {
{
static CACHED_KEY: metrics::OnceKeyData = metrics::OnceKeyData::new();
#statics
// Only do this work if there's a recorder installed.
if let Some(recorder) = metrics::try_recorder() {
// Initialize our fast path.
let key = CACHED_KEY.get_or_init(|| { #key });
recorder.#op_ident(metrics::Key::Borrowed(&key), #op_values);
recorder.#op_ident(metrics::Key::Borrowed(&METRIC_KEY), #op_values);
}
}
}
} else {
// We're on the slow path, so basically we register every single time.
//
// Recorders are expected to deduplicate any duplicate registrations.
// We're on the slow path, so we allocate, womp.
let key = key_to_quoted(labels);
quote! {
{
static METRIC_NAME: [metrics::SharedString; 1] = [metrics::SharedString::const_str(#name)];
// Only do this work if there's a recorder installed.
if let Some(recorder) = metrics::try_recorder() {
recorder.#op_ident(metrics::Key::Owned(#key), #op_values);
@ -273,20 +341,9 @@ fn can_use_fast_path(labels: &Option<Labels>) -> bool {
}
}
fn read_key(input: &mut ParseStream) -> Result<Key> {
let key = if let Ok(_) = input.parse::<Token![<]>() {
let s = input.parse::<LitStr>()?;
input.parse::<Token![>]>()?;
Key::Scoped(s)
} else {
let s = input.parse::<LitStr>()?;
Key::NotScoped(s)
};
let inner = match key {
Key::Scoped(ref s) => s.value(),
Key::NotScoped(ref s) => s.value(),
};
fn read_key(input: &mut ParseStream) -> Result<LitStr> {
let key = input.parse::<LitStr>()?;
let inner = key.value();
lazy_static! {
static ref RE: Regex = Regex::new("^[a-zA-Z][a-zA-Z0-9_:\\.]*$").unwrap();
@ -301,32 +358,19 @@ fn read_key(input: &mut ParseStream) -> Result<Key> {
Ok(key)
}
fn quote_key_name(key: Key) -> proc_macro2::TokenStream {
match key {
Key::NotScoped(s) => {
quote! { #s }
}
Key::Scoped(s) => {
quote! {
format!("{}.{}", std::module_path!().replace("::", "."), #s)
}
}
}
}
fn key_to_quoted(key: Key, labels: Option<Labels>) -> proc_macro2::TokenStream {
let name = quote_key_name(key);
fn key_to_quoted(labels: Option<Labels>) -> proc_macro2::TokenStream {
match labels {
None => quote! { metrics::KeyData::from_name(#name) },
None => quote! { metrics::KeyData::from_static_name(&METRIC_NAME) },
Some(labels) => match labels {
Labels::Inline(pairs) => {
let labels = pairs
.into_iter()
.map(|(key, val)| quote! { metrics::Label::new(#key, #val) });
quote! { metrics::KeyData::from_name_and_labels(#name, vec![#(#labels),*]) }
quote! {
metrics::KeyData::from_parts(&METRIC_NAME[..], vec![#(#labels),*])
}
}
Labels::Existing(e) => quote! { metrics::KeyData::from_name_and_labels(#name, #e) },
Labels::Existing(e) => quote! { metrics::KeyData::from_parts(&METRIC_NAME[..], #e) },
},
}
}
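
For orientation, here is a hedged sketch of roughly what the fast-path expansion above generates for a call such as `counter!("requests_total", 1, "service" => "http")`; the static names mirror the ones emitted by `get_expanded_callsite`, and the exact token layout is illustrative rather than authoritative.

// Hypothetical expansion sketch (illustrative only, not the exact emitted tokens).
{
    static METRIC_NAME: [metrics::SharedString; 1] =
        [metrics::SharedString::const_str("requests_total")];
    static METRIC_LABELS: [metrics::Label; 1] =
        [metrics::Label::from_static_parts("service", "http")];
    static METRIC_KEY: metrics::KeyData =
        metrics::KeyData::from_static_parts(&METRIC_NAME, &METRIC_LABELS);
    // Only do any work if a recorder is installed.
    if let Some(recorder) = metrics::try_recorder() {
        recorder.increment_counter(metrics::Key::Borrowed(&METRIC_KEY), 1);
    }
}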

View File

@ -1,35 +1,21 @@
use syn::parse_quote;
use syn::{Expr, ExprPath};
use super::*;
#[test]
fn test_quote_key_name_scoped() {
let stream = quote_key_name(Key::Scoped(parse_quote! {"qwerty"}));
let expected =
"format ! (\"{}.{}\" , std :: module_path ! () . replace (\"::\" , \".\") , \"qwerty\")";
assert_eq!(stream.to_string(), expected);
}
#[test]
fn test_quote_key_name_not_scoped() {
let stream = quote_key_name(Key::NotScoped(parse_quote! {"qwerty"}));
let expected = "\"qwerty\"";
assert_eq!(stream.to_string(), expected);
}
#[test]
fn test_get_expanded_registration() {
let stream = get_expanded_registration(
"mytype",
Key::NotScoped(parse_quote! {"mykeyname"}),
None,
None,
);
// Basic registration.
let stream =
get_expanded_registration("mytype", parse_quote! { "mykeyname" }, None, None, None);
let expected = concat!(
"{ if let Some (recorder) = metrics :: try_recorder () { ",
"{ ",
"static METRIC_NAME : [metrics :: SharedString ; 1] = [metrics :: SharedString :: const_str (\"mykeyname\")] ; ",
"if let Some (recorder) = metrics :: try_recorder () { ",
"recorder . register_mytype (",
"metrics :: Key :: Owned (metrics :: KeyData :: from_name (\"mykeyname\")) , ",
"metrics :: Key :: Owned (metrics :: KeyData :: from_static_name (& METRIC_NAME)) , ",
"None , ",
"None",
") ; ",
"} }",
@ -38,47 +24,175 @@ fn test_get_expanded_registration() {
assert_eq!(stream.to_string(), expected);
}
/// If there are no dynamic labels - generate an invocation with caching.
#[test]
fn test_get_expanded_callsite_fast_path() {
fn test_get_expanded_registration_with_unit() {
// Now with unit.
let units: ExprPath = parse_quote! { metrics::Unit::Nanoseconds };
let stream = get_expanded_registration(
"mytype",
parse_quote! { "mykeyname" },
Some(Expr::Path(units)),
None,
None,
);
let expected = concat!(
"{ ",
"static METRIC_NAME : [metrics :: SharedString ; 1] = [metrics :: SharedString :: const_str (\"mykeyname\")] ; ",
"if let Some (recorder) = metrics :: try_recorder () { ",
"recorder . register_mytype (",
"metrics :: Key :: Owned (metrics :: KeyData :: from_static_name (& METRIC_NAME)) , ",
"Some (metrics :: Unit :: Nanoseconds) , ",
"None",
") ; ",
"} }",
);
assert_eq!(stream.to_string(), expected);
}
#[test]
fn test_get_expanded_registration_with_description() {
// And with description.
let stream = get_expanded_registration(
"mytype",
parse_quote! { "mykeyname" },
None,
Some(parse_quote! { "flerkin" }),
None,
);
let expected = concat!(
"{ ",
"static METRIC_NAME : [metrics :: SharedString ; 1] = [metrics :: SharedString :: const_str (\"mykeyname\")] ; ",
"if let Some (recorder) = metrics :: try_recorder () { ",
"recorder . register_mytype (",
"metrics :: Key :: Owned (metrics :: KeyData :: from_static_name (& METRIC_NAME)) , ",
"None , ",
"Some (\"flerkin\")",
") ; ",
"} }",
);
assert_eq!(stream.to_string(), expected);
}
#[test]
fn test_get_expanded_registration_with_unit_and_description() {
// And with unit and description.
let units: ExprPath = parse_quote! { metrics::Unit::Nanoseconds };
let stream = get_expanded_registration(
"mytype",
parse_quote! { "mykeyname" },
Some(Expr::Path(units)),
Some(parse_quote! { "flerkin" }),
None,
);
let expected = concat!(
"{ ",
"static METRIC_NAME : [metrics :: SharedString ; 1] = [metrics :: SharedString :: const_str (\"mykeyname\")] ; ",
"if let Some (recorder) = metrics :: try_recorder () { ",
"recorder . register_mytype (",
"metrics :: Key :: Owned (metrics :: KeyData :: from_static_name (& METRIC_NAME)) , ",
"Some (metrics :: Unit :: Nanoseconds) , ",
"Some (\"flerkin\")",
") ; ",
"} }",
);
assert_eq!(stream.to_string(), expected);
}
#[test]
fn test_get_expanded_callsite_fast_path_no_labels() {
let stream = get_expanded_callsite(
"mytype",
"myop",
Key::NotScoped(parse_quote! {"mykeyname"}),
parse_quote! {"mykeyname"},
None,
quote! { 1 },
);
let expected = concat!(
"{ ",
"static CACHED_KEY : metrics :: OnceKeyData = metrics :: OnceKeyData :: new () ; ",
"static METRIC_NAME : [metrics :: SharedString ; 1] = [metrics :: SharedString :: const_str (\"mykeyname\")] ; ",
"static METRIC_KEY : metrics :: KeyData = metrics :: KeyData :: from_static_name (& METRIC_NAME) ; ",
"if let Some (recorder) = metrics :: try_recorder () { ",
"let key = CACHED_KEY . get_or_init (|| { ",
"metrics :: KeyData :: from_name (\"mykeyname\") ",
"}) ; ",
"recorder . myop_mytype (metrics :: Key :: Borrowed (& key) , 1) ; ",
"recorder . myop_mytype (metrics :: Key :: Borrowed (& METRIC_KEY) , 1) ; ",
"} }",
);
assert_eq!(stream.to_string(), expected);
}
#[test]
fn test_get_expanded_callsite_fast_path_static_labels() {
let labels = Labels::Inline(vec![(parse_quote! { "key1" }, parse_quote! { "value1" })]);
let stream = get_expanded_callsite(
"mytype",
"myop",
parse_quote! {"mykeyname"},
Some(labels),
quote! { 1 },
);
let expected = concat!(
"{ ",
"static METRIC_NAME : [metrics :: SharedString ; 1] = [metrics :: SharedString :: const_str (\"mykeyname\")] ; ",
"static METRIC_LABELS : [metrics :: Label ; 1usize] = [metrics :: Label :: from_static_parts (\"key1\" , \"value1\")] ; ",
"static METRIC_KEY : metrics :: KeyData = metrics :: KeyData :: from_static_parts (& METRIC_NAME , & METRIC_LABELS) ; ",
"if let Some (recorder) = metrics :: try_recorder () { ",
"recorder . myop_mytype (metrics :: Key :: Borrowed (& METRIC_KEY) , 1) ; ",
"} ",
"}",
);
assert_eq!(stream.to_string(), expected);
}
#[test]
fn test_get_expanded_callsite_fast_path_dynamic_labels() {
let labels = Labels::Inline(vec![(parse_quote! { "key1" }, parse_quote! { &value1 })]);
let stream = get_expanded_callsite(
"mytype",
"myop",
parse_quote! {"mykeyname"},
Some(labels),
quote! { 1 },
);
let expected = concat!(
"{ ",
"static METRIC_NAME : [metrics :: SharedString ; 1] = [metrics :: SharedString :: const_str (\"mykeyname\")] ; ",
"if let Some (recorder) = metrics :: try_recorder () { ",
"recorder . myop_mytype (metrics :: Key :: Owned (",
"metrics :: KeyData :: from_parts (& METRIC_NAME [..] , vec ! [metrics :: Label :: new (\"key1\" , & value1)])",
") , 1) ; ",
"} ",
"}",
);
assert_eq!(stream.to_string(), expected);
}
/// If there are dynamic labels - generate a direct invocation.
#[test]
fn test_get_expanded_callsite_regular_path() {
let stream = get_expanded_callsite(
"mytype",
"myop",
Key::NotScoped(parse_quote! {"mykeyname"}),
parse_quote! {"mykeyname"},
Some(Labels::Existing(parse_quote! { mylabels })),
quote! { 1 },
);
let expected = concat!(
"{ ",
"static METRIC_NAME : [metrics :: SharedString ; 1] = [metrics :: SharedString :: const_str (\"mykeyname\")] ; ",
"if let Some (recorder) = metrics :: try_recorder () { ",
"recorder . myop_mytype (",
"metrics :: Key :: Owned (metrics :: KeyData :: from_name_and_labels (\"mykeyname\" , mylabels)) , ",
"metrics :: Key :: Owned (metrics :: KeyData :: from_parts (& METRIC_NAME [..] , mylabels)) , ",
"1",
") ; ",
"} }",
@ -89,18 +203,17 @@ fn test_get_expanded_callsite_regular_path() {
#[test]
fn test_key_to_quoted_no_labels() {
let stream = key_to_quoted(Key::NotScoped(parse_quote! {"mykeyname"}), None);
let expected = "metrics :: KeyData :: from_name (\"mykeyname\")";
let stream = key_to_quoted(None);
let expected = "metrics :: KeyData :: from_static_name (& METRIC_NAME)";
assert_eq!(stream.to_string(), expected);
}
#[test]
fn test_key_to_quoted_existing_labels() {
let stream = key_to_quoted(
Key::NotScoped(parse_quote! {"mykeyname"}),
Some(Labels::Existing(Expr::Path(parse_quote! { mylabels }))),
);
let expected = "metrics :: KeyData :: from_name_and_labels (\"mykeyname\" , mylabels)";
let stream = key_to_quoted(Some(Labels::Existing(Expr::Path(
parse_quote! { mylabels },
))));
let expected = "metrics :: KeyData :: from_parts (& METRIC_NAME [..] , mylabels)";
assert_eq!(stream.to_string(), expected);
}
@ -108,15 +221,12 @@ fn test_key_to_quoted_existing_labels() {
/// Key).
#[test]
fn test_key_to_quoted_inline_labels() {
let stream = key_to_quoted(
Key::NotScoped(parse_quote! {"mykeyname"}),
Some(Labels::Inline(vec![
(parse_quote! {"mylabel1"}, parse_quote! { mylabel1 }),
(parse_quote! {"mylabel2"}, parse_quote! { "mylabel2" }),
])),
);
let stream = key_to_quoted(Some(Labels::Inline(vec![
(parse_quote! {"mylabel1"}, parse_quote! { mylabel1 }),
(parse_quote! {"mylabel2"}, parse_quote! { "mylabel2" }),
])));
let expected = concat!(
"metrics :: KeyData :: from_name_and_labels (\"mykeyname\" , vec ! [",
"metrics :: KeyData :: from_parts (& METRIC_NAME [..] , vec ! [",
"metrics :: Label :: new (\"mylabel1\" , mylabel1) , ",
"metrics :: Label :: new (\"mylabel2\" , \"mylabel2\")",
"])"
@ -126,13 +236,7 @@ fn test_key_to_quoted_inline_labels() {
#[test]
fn test_key_to_quoted_inline_labels_empty() {
let stream = key_to_quoted(
Key::NotScoped(parse_quote! {"mykeyname"}),
Some(Labels::Inline(vec![])),
);
let expected = concat!(
"metrics :: KeyData :: from_name_and_labels (\"mykeyname\" , vec ! [",
"])"
);
let stream = key_to_quoted(Some(Labels::Inline(vec![])));
let expected = concat!("metrics :: KeyData :: from_parts (& METRIC_NAME [..] , vec ! [])");
assert_eq!(stream.to_string(), expected);
}

metrics-observer/.gitignore vendored Normal file
View File

@ -0,0 +1,4 @@
/target
**/*.rs.bk
Cargo.lock
/.vscode

View File

@ -0,0 +1,11 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
<!-- next-header -->
## [Unreleased] - ReleaseDate
### Added
- Effective birth of the crate.

View File

@ -9,7 +9,7 @@ license = "MIT"
[dependencies]
getopts = "0.2"
bytes = "0.5"
crossbeam-channel = "0.4"
crossbeam-channel = "0.5"
prost = "0.6"
prost-types = "0.6"
tui = "0.12"

View File

@ -4,6 +4,22 @@ import "google/protobuf/timestamp.proto";
package event.proto;
message Metadata {
string name = 1;
enum MetricType {
COUNTER = 0;
GAUGE = 1;
HISTOGRAM = 2;
}
MetricType metric_type = 2;
oneof unit {
string unit_value = 3;
}
oneof description {
string description_value = 4;
}
}
message Metric {
string name = 1;
google.protobuf.Timestamp timestamp = 2;
@ -26,3 +42,10 @@ message Gauge {
message Histogram {
uint64 value = 1;
}
message Event {
oneof event {
Metadata metadata = 1;
Metric metric = 2;
}
}

View File

@ -2,7 +2,7 @@ use std::io;
use std::thread;
use std::time::Duration;
use crossbeam_channel::{bounded, Receiver, TrySendError, RecvTimeoutError};
use crossbeam_channel::{bounded, Receiver, RecvTimeoutError, TrySendError};
use termion::event::Key;
use termion::input::TermRead;
@ -37,4 +37,4 @@ impl InputEvents {
Err(e) => Err(e),
}
}
}
}

View File

@ -1,21 +1,27 @@
use std::fmt;
use std::num::FpCategory;
use std::time::Duration;
use std::{error::Error, io};
use chrono::Local;
use metrics::Unit;
use termion::{event::Key, input::MouseTerminal, raw::IntoRawMode, screen::AlternateScreen};
use tui::{
backend::TermionBackend,
layout::{Constraint, Direction, Layout},
style::{Color, Modifier, Style},
text::{Span, Spans},
widgets::{Block, Borders, Paragraph, Wrap, List, ListItem},
widgets::{Block, Borders, List, ListItem, Paragraph, Wrap},
Terminal,
};
mod input;
use self::input::InputEvents;
mod metrics;
use self::metrics::{ClientState, MetricData};
// Module name/crate name collision that we have to deal with.
#[path = "metrics.rs"]
mod metrics_inner;
use self::metrics_inner::{ClientState, MetricData};
mod selector;
use self::selector::Selector;
@ -28,7 +34,7 @@ fn main() -> Result<(), Box<dyn Error>> {
let mut terminal = Terminal::new(backend)?;
let mut events = InputEvents::new();
let client = metrics::Client::new("127.0.0.1:5000".to_string());
let client = metrics_inner::Client::new("127.0.0.1:5000".to_string());
let mut selector = Selector::new();
loop {
@ -36,12 +42,7 @@ fn main() -> Result<(), Box<dyn Error>> {
let chunks = Layout::default()
.direction(Direction::Vertical)
.margin(1)
.constraints(
[
Constraint::Length(4),
Constraint::Percentage(90)
].as_ref()
)
.constraints([Constraint::Length(4), Constraint::Percentage(90)].as_ref())
.split(f.size());
let current_dt = Local::now().format(" (%Y/%m/%d %I:%M:%S %p)").to_string();
@ -58,7 +59,7 @@ fn main() -> Result<(), Box<dyn Error>> {
}
Spans::from(spans)
},
}
ClientState::Connected => Spans::from(vec![
Span::raw("state: "),
Span::styled("connected", Style::default().fg(Color::Green)),
@ -67,7 +68,10 @@ fn main() -> Result<(), Box<dyn Error>> {
let header_block = Block::default()
.title(vec![
Span::styled("metrics-observer", Style::default().add_modifier(Modifier::BOLD)),
Span::styled(
"metrics-observer",
Style::default().add_modifier(Modifier::BOLD),
),
Span::raw(current_dt),
])
.borders(Borders::ALL);
@ -87,42 +91,55 @@ fn main() -> Result<(), Box<dyn Error>> {
// Knock 5 off the line width to account for 3-character highlight symbol + borders.
let line_width = chunks[1].width.saturating_sub(6) as usize;
let items = client.with_metrics(|metrics| {
let mut items = Vec::new();
for (key, value) in metrics.iter() {
let inner_key = key.key();
let name = inner_key.name();
let labels = inner_key.labels().map(|label| format!("{} = {}", label.key(), label.value())).collect::<Vec<_>>();
let display_name = if labels.is_empty() {
name.to_string()
} else {
format!("{} [{}]", name, labels.join(", "))
};
let mut items = Vec::new();
let metrics = client.get_metrics();
for (key, value, unit, _desc) in metrics {
let inner_key = key.key();
let name = inner_key.name();
let labels = inner_key
.labels()
.map(|label| format!("{} = {}", label.key(), label.value()))
.collect::<Vec<_>>();
let display_name = if labels.is_empty() {
name.to_string()
} else {
format!("{} [{}]", name, labels.join(", "))
};
let display_value = match value {
MetricData::Counter(value) => format!("total: {}", value),
MetricData::Gauge(value) => format!("current: {}", value),
MetricData::Histogram(value) => {
let min = value.min();
let max = value.max();
let p50 = value.value_at_quantile(0.5);
let p99 = value.value_at_quantile(0.99);
let p999 = value.value_at_quantile(0.999);
let display_value = match value {
MetricData::Counter(value) => {
format!("total: {}", u64_to_displayable(value, unit))
}
MetricData::Gauge(value) => {
format!("current: {}", f64_to_displayable(value, unit))
}
MetricData::Histogram(value) => {
let min = value.min();
let max = value.max();
let p50 = value.value_at_quantile(0.5);
let p99 = value.value_at_quantile(0.99);
let p999 = value.value_at_quantile(0.999);
format!("min: {} p50: {} p99: {} p999: {} max: {}",
min, p50, p99, p999, max)
},
};
format!(
"min: {} p50: {} p99: {} p999: {} max: {}",
u64_to_displayable(min, unit.clone()),
u64_to_displayable(p50, unit.clone()),
u64_to_displayable(p99, unit.clone()),
u64_to_displayable(p999, unit.clone()),
u64_to_displayable(max, unit),
)
}
};
let name_length = display_name.chars().count();
let value_length = display_value.chars().count();
let space = line_width.saturating_sub(name_length).saturating_sub(value_length);
let name_length = display_name.chars().count();
let value_length = display_value.chars().count();
let space = line_width
.saturating_sub(name_length)
.saturating_sub(value_length);
let display = format!("{}{}{}", display_name, " ".repeat(space), display_value);
items.push(ListItem::new(display));
}
items
});
let display = format!("{}{}{}", display_name, " ".repeat(space), display_value);
items.push(ListItem::new(display));
}
selector.set_length(items.len());
let metrics_block = Block::default()
@ -132,7 +149,7 @@ fn main() -> Result<(), Box<dyn Error>> {
let metrics = List::new(items)
.block(metrics_block)
.highlight_symbol(">> ");
f.render_stateful_widget(metrics, chunks[1], selector.state());
})?;
@ -145,10 +162,203 @@ fn main() -> Result<(), Box<dyn Error>> {
Key::Down => selector.next(),
Key::PageUp => selector.top(),
Key::PageDown => selector.bottom(),
_ => {},
_ => {}
}
}
}
Ok(())
}
}
fn u64_to_displayable(value: u64, unit: Option<Unit>) -> String {
let unit = match unit {
None => return value.to_string(),
Some(inner) => inner,
};
if unit.is_time_based() {
return u64_time_to_displayable(value, unit);
}
let label = unit.as_canonical_label();
format!("{}{}", value, label)
}
fn f64_to_displayable(value: f64, unit: Option<Unit>) -> String {
let unit = match unit {
None => return value.to_string(),
Some(inner) => inner,
};
if unit.is_time_based() {
return f64_time_to_displayable(value, unit);
}
let label = unit.as_canonical_label();
format!("{}{}", value, label)
}
fn u64_time_to_displayable(value: u64, unit: Unit) -> String {
let dur = match unit {
Unit::Nanoseconds => Duration::from_nanos(value),
Unit::Microseconds => Duration::from_micros(value),
Unit::Milliseconds => Duration::from_millis(value),
Unit::Seconds => Duration::from_secs(value),
// If it's not a time-based unit, then just format the value plainly.
_ => return value.to_string(),
};
format!("{:?}", TruncatedDuration(dur))
}
fn f64_time_to_displayable(value: f64, unit: Unit) -> String {
// Calculate how much we need to scale the value by, since `Duration::from_secs_f64` expects the
// value to be expressed in seconds, even though the f64 can still carry subsecond precision in
// its fractional digits.
let scaling_factor = match unit {
Unit::Nanoseconds => Some(1_000_000_000.0),
Unit::Microseconds => Some(1_000_000.0),
Unit::Milliseconds => Some(1_000.0),
Unit::Seconds => None,
// If it's not a time-based unit, then just format the value plainly.
_ => return value.to_string(),
};
let adjusted = match scaling_factor {
Some(factor) => value / factor,
None => value,
};
let sign = if adjusted < 0.0 { "-" } else { "" };
let normalized = adjusted.abs();
if !normalized.is_normal() && normalized.classify() != FpCategory::Zero {
// We need a normalized number, but unlike `is_normal`, `Duration` is fine with a value that
// is at zero, so we just exclude that here.
return value.to_string();
}
let dur = Duration::from_secs_f64(normalized);
format!("{}{:?}", sign, TruncatedDuration(dur))
}
struct TruncatedDuration(Duration);
impl fmt::Debug for TruncatedDuration {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
/// Formats a floating point number in decimal notation.
///
/// The number is given as the `integer_part` and a fractional part.
/// The value of the fractional part is `fractional_part / (10 * divisor)`. So
/// `integer_part` = 3, `fractional_part` = 12 and `divisor` = 100
/// represents the number `3.012`. Trailing zeros are omitted.
///
/// `divisor` must not be above 100_000_000. It should also be a power
/// of 10; anything else doesn't make sense. `fractional_part` has
/// to be less than `10 * divisor`!
fn fmt_decimal(
f: &mut fmt::Formatter<'_>,
mut integer_part: u64,
mut fractional_part: u32,
mut divisor: u32,
precision: usize,
) -> fmt::Result {
// Encode the fractional part into a temporary buffer. The buffer
// only needs to hold 9 elements, because `fractional_part` has to
// be smaller than 10^9. The buffer is prefilled with '0' digits
// to simplify the code below.
let mut buf = [b'0'; 9];
let precision = if precision > 9 { 9 } else { precision };
// The next digit is written at this position
let mut pos = 0;
// We keep writing digits into the buffer while there are non-zero
// digits left and we haven't written enough digits yet.
while fractional_part > 0 && pos < precision {
// Write new digit into the buffer
buf[pos] = b'0' + (fractional_part / divisor) as u8;
fractional_part %= divisor;
divisor /= 10;
pos += 1;
}
// If a precision < 9 was specified, there may be some non-zero
// digits left that weren't written into the buffer. In that case we
// need to perform rounding to match the semantics of printing
// normal floating point numbers. However, we only need to do work
// when rounding up. This happens if the first digit of the
// remaining ones is >= 5.
if fractional_part > 0 && fractional_part >= divisor * 5 {
// Round up the number contained in the buffer. We go through
// the buffer backwards and keep track of the carry.
let mut rev_pos = pos;
let mut carry = true;
while carry && rev_pos > 0 {
rev_pos -= 1;
// If the digit in the buffer is not '9', we just need to
// increment it and can stop then (since we don't have a
// carry anymore). Otherwise, we set it to '0' (overflow)
// and continue.
if buf[rev_pos] < b'9' {
buf[rev_pos] += 1;
carry = false;
} else {
buf[rev_pos] = b'0';
}
}
// If we still have the carry bit set, that means that we set
// the whole buffer to '0's and need to increment the integer
// part.
if carry {
integer_part += 1;
}
}
// If we haven't emitted a single fractional digit and the precision
// wasn't set to a non-zero value, we don't print the decimal point.
if pos == 0 {
write!(f, "{}", integer_part)
} else {
// SAFETY: We are only writing ASCII digits into the buffer and it was
// initialized with '0's, so it contains valid UTF8.
let s = unsafe { std::str::from_utf8_unchecked(&buf[..pos]) };
let s = s.trim_end_matches('0');
write!(f, "{}.{}", integer_part, s)
}
}
// Print leading '+' sign if requested
if f.sign_plus() {
write!(f, "+")?;
}
let secs = self.0.as_secs();
let sub_nanos = self.0.subsec_nanos();
let nanos = self.0.as_nanos();
if secs > 0 {
fmt_decimal(f, secs, sub_nanos, 100_000_000, 3)?;
f.write_str("s")
} else if nanos >= 1_000_000 {
fmt_decimal(
f,
nanos as u64 / 1_000_000,
(nanos % 1_000_000) as u32,
100_000,
2,
)?;
f.write_str("ms")
} else if nanos >= 1_000 {
fmt_decimal(f, nanos as u64 / 1_000, (nanos % 1_000) as u32, 100, 1)?;
f.write_str("µs")
} else {
fmt_decimal(f, nanos as u64, 0, 1, 0)?;
f.write_str("ns")
}
}
}
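
As a rough sanity check on the display helpers above, here is a hedged sketch of a unit test that could sit in this module (it assumes `u64_to_displayable` is in scope); the expected strings follow from the truncation rules: up to three fractional digits for seconds, two for milliseconds, with trailing zeros trimmed.

#[cfg(test)]
mod display_tests {
    use super::*;
    use metrics::Unit;

    #[test]
    fn truncated_time_formatting() {
        // 1,234,567 ns falls in the millisecond branch: two fractional digits, trailing zeros trimmed.
        assert_eq!(u64_to_displayable(1_234_567, Some(Unit::Nanoseconds)), "1.23ms");
        // 999 ns stays in the nanosecond branch with no fractional digits.
        assert_eq!(u64_to_displayable(999, Some(Unit::Nanoseconds)), "999ns");
        // 2.5 s hits the seconds branch; only the significant fractional digits are kept.
        assert_eq!(u64_to_displayable(2_500_000_000, Some(Unit::Nanoseconds)), "2.5s");
        // Without a unit, the raw value is printed as-is.
        assert_eq!(u64_to_displayable(42, None), "42");
    }
}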

View File

@ -1,28 +1,35 @@
use std::collections::HashMap;
use std::io::Read;
use std::net::TcpStream;
use std::time::Duration;
use std::thread;
use std::net::ToSocketAddrs;
use std::sync::{Arc, Mutex, RwLock};
use std::io::Read;
use std::thread;
use std::time::Duration;
use bytes::{BufMut, BytesMut};
use prost::Message;
use hdrhistogram::Histogram;
use prost::Message;
use metrics::{Label, KeyData};
use metrics::{KeyData, Label, Unit};
use metrics_util::{CompositeKey, MetricKind};
mod proto {
include!(concat!(env!("OUT_DIR"), "/event.proto.rs"));
}
use self::proto::{
event::Event,
metadata::{Description as DescriptionMetadata, MetricType, Unit as UnitMetadata},
Event as EventWrapper,
};
#[derive(Clone)]
pub enum ClientState {
Disconnected(Option<String>),
Connected,
}
#[derive(Clone)]
pub enum MetricData {
Counter(u64),
Gauge(f64),
@ -32,6 +39,7 @@ pub enum MetricData {
pub struct Client {
state: Arc<Mutex<ClientState>>,
metrics: Arc<RwLock<HashMap<CompositeKey, MetricData>>>,
metadata: Arc<RwLock<HashMap<(MetricKind, String), (Option<Unit>, Option<String>)>>>,
handle: thread::JoinHandle<()>,
}
@ -39,11 +47,13 @@ impl Client {
pub fn new(addr: String) -> Client {
let state = Arc::new(Mutex::new(ClientState::Disconnected(None)));
let metrics = Arc::new(RwLock::new(HashMap::new()));
let metadata = Arc::new(RwLock::new(HashMap::new()));
let handle = {
let state = state.clone();
let metrics = metrics.clone();
let metadata = metadata.clone();
thread::spawn(move || {
let mut runner = Runner::new(addr, state, metrics);
let mut runner = Runner::new(addr, state, metrics, metadata);
runner.run();
})
};
@ -51,6 +61,7 @@ impl Client {
Client {
state,
metrics,
metadata,
handle,
}
}
@ -59,12 +70,22 @@ impl Client {
self.state.lock().unwrap().clone()
}
pub fn with_metrics<F, T>(&self, f: F) -> T
where
F: FnOnce(&HashMap<CompositeKey, MetricData>) -> T,
{
let handle = self.metrics.read().unwrap();
f(&handle)
pub fn get_metrics(&self) -> Vec<(CompositeKey, MetricData, Option<Unit>, Option<String>)> {
let metrics = self.metrics.read().unwrap();
let metadata = self.metadata.read().unwrap();
metrics
.iter()
.map(|(k, v)| {
let metakey = (k.kind(), k.key().name().to_string());
let (unit, desc) = match metadata.get(&metakey) {
Some((unit, desc)) => (unit.clone(), desc.clone()),
None => (None, None),
};
(k.clone(), v.clone(), unit, desc)
})
.collect()
}
}
@ -79,6 +100,7 @@ struct Runner {
addr: String,
client_state: Arc<Mutex<ClientState>>,
metrics: Arc<RwLock<HashMap<CompositeKey, MetricData>>>,
metadata: Arc<RwLock<HashMap<(MetricKind, String), (Option<Unit>, Option<String>)>>>,
}
impl Runner {
@ -86,12 +108,14 @@ impl Runner {
addr: String,
state: Arc<Mutex<ClientState>>,
metrics: Arc<RwLock<HashMap<CompositeKey, MetricData>>>,
metadata: Arc<RwLock<HashMap<(MetricKind, String), (Option<Unit>, Option<String>)>>>,
) -> Runner {
Runner {
state: RunnerState::Disconnected,
addr,
client_state: state,
metrics,
metadata,
}
}
@ -104,38 +128,47 @@ impl Runner {
let mut state = self.client_state.lock().unwrap();
*state = ClientState::Disconnected(None);
}
// Try to connect to our target and transition into Connected.
let addr = match self.addr.to_socket_addrs() {
Ok(mut addrs) => match addrs.next() {
Some(addr) => addr,
None => {
let mut state = self.client_state.lock().unwrap();
*state = ClientState::Disconnected(Some("failed to resolve specified host".to_string()));
*state = ClientState::Disconnected(Some(
"failed to resolve specified host".to_string(),
));
break;
}
}
},
Err(_) => {
let mut state = self.client_state.lock().unwrap();
*state = ClientState::Disconnected(Some("failed to resolve specified host".to_string()));
*state = ClientState::Disconnected(Some(
"failed to resolve specified host".to_string(),
));
break;
}
};
match TcpStream::connect_timeout(&addr, Duration::from_secs(3)) {
Ok(stream) => RunnerState::Connected(stream),
Err(_) => {
RunnerState::ErrorBackoff("error while connecting", Duration::from_secs(3))
}
Err(_) => RunnerState::ErrorBackoff(
"error while connecting",
Duration::from_secs(3),
),
}
},
}
RunnerState::ErrorBackoff(msg, dur) => {
{
let mut state = self.client_state.lock().unwrap();
*state = ClientState::Disconnected(Some(format!("{}, retrying in {} seconds...", msg, dur.as_secs())));
*state = ClientState::Disconnected(Some(format!(
"{}, retrying in {} seconds...",
msg,
dur.as_secs()
)));
}
thread::sleep(dur);
RunnerState::Disconnected
},
}
RunnerState::Connected(ref mut stream) => {
{
let mut state = self.client_state.lock().unwrap();
@ -144,53 +177,108 @@ impl Runner {
let mut buf = BytesMut::new();
let mut rbuf = [0u8; 1024];
loop {
match stream.read(&mut rbuf[..]) {
Ok(0) => break,
Ok(n) => buf.put_slice(&rbuf[..n]),
Err(e) => eprintln!("read error: {:?}", e),
};
match proto::Metric::decode_length_delimited(&mut buf) {
Err(e) => eprintln!("decode error: {:?}", e),
Ok(msg) => {
let mut labels_raw = msg.labels.into_iter().collect::<Vec<_>>();
labels_raw.sort_by(|a, b| a.0.cmp(&b.0));
let labels = labels_raw.into_iter().map(|(k, v)| Label::new(k, v)).collect::<Vec<_>>();
let key_data: KeyData = (msg.name, labels).into();
match msg.value.expect("no metric value") {
proto::metric::Value::Counter(value) => {
let key = CompositeKey::new(MetricKind::Counter, key_data.into());
let mut metrics = self.metrics.write().unwrap();
let counter = metrics.entry(key).or_insert_with(|| MetricData::Counter(0));
if let MetricData::Counter(inner) = counter {
*inner += value.value;
}
},
proto::metric::Value::Gauge(value) => {
let key = CompositeKey::new(MetricKind::Gauge, key_data.into());
let mut metrics = self.metrics.write().unwrap();
let gauge = metrics.entry(key).or_insert_with(|| MetricData::Gauge(0.0));
if let MetricData::Gauge(inner) = gauge {
*inner = value.value;
}
},
proto::metric::Value::Histogram(value) => {
let key = CompositeKey::new(MetricKind::Histogram, key_data.into());
let mut metrics = self.metrics.write().unwrap();
let histogram = metrics.entry(key).or_insert_with(|| {
let histogram = Histogram::new(3).expect("failed to create histogram");
MetricData::Histogram(histogram)
});
let event = match EventWrapper::decode_length_delimited(&mut buf) {
Err(e) => {
eprintln!("decode error: {:?}", e);
continue;
}
Ok(event) => event,
};
if let MetricData::Histogram(inner) = histogram {
inner.record(value.value).expect("failed to record value to histogram");
}
},
if let Some(event) = event.event {
match event {
Event::Metadata(metadata) => {
let metric_type = MetricType::from_i32(metadata.metric_type)
.expect("unknown metric type over wire");
let metric_type = match metric_type {
MetricType::Counter => MetricKind::Counter,
MetricType::Gauge => MetricKind::Gauge,
MetricType::Histogram => MetricKind::Histogram,
};
let key = (metric_type, metadata.name);
let mut mmap = self
.metadata
.write()
.expect("failed to get metadata write lock");
let entry = mmap.entry(key).or_insert((None, None));
let (uentry, dentry) = entry;
*uentry = metadata
.unit
.map(|u| match u {
UnitMetadata::UnitValue(us) => us,
})
.and_then(|s| Unit::from_str(s.as_str()));
*dentry = metadata.description.map(|d| match d {
DescriptionMetadata::DescriptionValue(ds) => ds,
});
}
},
Event::Metric(metric) => {
let mut labels_raw =
metric.labels.into_iter().collect::<Vec<_>>();
labels_raw.sort_by(|a, b| a.0.cmp(&b.0));
let labels = labels_raw
.into_iter()
.map(|(k, v)| Label::new(k, v))
.collect::<Vec<_>>();
let key_data: KeyData = (metric.name, labels).into();
match metric.value.expect("no metric value") {
proto::metric::Value::Counter(value) => {
let key = CompositeKey::new(
MetricKind::Counter,
key_data.into(),
);
let mut metrics = self.metrics.write().unwrap();
let counter = metrics
.entry(key)
.or_insert_with(|| MetricData::Counter(0));
if let MetricData::Counter(inner) = counter {
*inner += value.value;
}
}
proto::metric::Value::Gauge(value) => {
let key = CompositeKey::new(
MetricKind::Gauge,
key_data.into(),
);
let mut metrics = self.metrics.write().unwrap();
let gauge = metrics
.entry(key)
.or_insert_with(|| MetricData::Gauge(0.0));
if let MetricData::Gauge(inner) = gauge {
*inner = value.value;
}
}
proto::metric::Value::Histogram(value) => {
let key = CompositeKey::new(
MetricKind::Histogram,
key_data.into(),
);
let mut metrics = self.metrics.write().unwrap();
let histogram =
metrics.entry(key).or_insert_with(|| {
let histogram = Histogram::new(3)
.expect("failed to create histogram");
MetricData::Histogram(histogram)
});
if let MetricData::Histogram(inner) = histogram {
inner
.record(value.value)
.expect("failed to record value to histogram");
}
}
}
}
}
}
}
@ -200,4 +288,4 @@ impl Runner {
self.state = next;
}
}
}
}

View File

@ -55,4 +55,4 @@ impl Selector {
};
self.1.select(Some(i));
}
}
}

View File

@ -1,6 +1,6 @@
[package]
name = "metrics-tracing-context"
version = "0.1.0-alpha.1"
version = "0.1.0-alpha.3"
authors = ["MOZGIII <mike-n@narod.ru>"]
edition = "2018"
@ -22,6 +22,10 @@ bench = false
name = "visit"
harness = false
[[bench]]
name = "layer"
harness = false
[dependencies]
metrics = { version = "0.13.0-alpha.1", path = "../metrics", features = ["std"] }
metrics-util = { version = "0.4.0-alpha.1", path = "../metrics-util" }

View File

@ -0,0 +1,48 @@
use criterion::{criterion_group, criterion_main, Benchmark, Criterion};
use metrics::{Key, KeyData, Label, NoopRecorder, Recorder, SharedString};
use metrics_tracing_context::{MetricsLayer, TracingContextLayer};
use metrics_util::layers::Layer;
use tracing::{
dispatcher::{with_default, Dispatch},
span, Level,
};
use tracing_subscriber::{layer::SubscriberExt, Registry};
fn layer_benchmark(c: &mut Criterion) {
c.bench(
"layer",
Benchmark::new("all/enhance_key", |b| {
let subscriber = Registry::default().with(MetricsLayer::new());
let dispatcher = Dispatch::new(subscriber);
with_default(&dispatcher, || {
let user = "ferris";
let email = "ferris@rust-lang.org";
let span = span!(Level::TRACE, "login", user, user.email = email);
let _guard = span.enter();
let tracing_layer = TracingContextLayer::all();
let recorder = tracing_layer.layer(NoopRecorder);
static KEY_NAME: [SharedString; 1] = [SharedString::const_str("key")];
static KEY_LABELS: [Label; 1] = [Label::from_static_parts("foo", "bar")];
static KEY_DATA: KeyData = KeyData::from_static_parts(&KEY_NAME, &KEY_LABELS);
b.iter(|| {
recorder.increment_counter(Key::Borrowed(&KEY_DATA), 1);
})
})
})
.with_function("noop recorder overhead (increment_counter)", |b| {
let recorder = NoopRecorder;
static KEY_NAME: [SharedString; 1] = [SharedString::const_str("key")];
static KEY_LABELS: [Label; 1] = [Label::from_static_parts("foo", "bar")];
static KEY_DATA: KeyData = KeyData::from_static_parts(&KEY_NAME, &KEY_LABELS);
b.iter(|| {
recorder.increment_counter(Key::Borrowed(&KEY_DATA), 1);
})
}),
);
}
criterion_group!(benches, layer_benchmark);
criterion_main!(benches);

View File

@ -52,10 +52,10 @@
//! the following labels:
//! - `service=login_service`
//! - `user=ferris`
#![deny(missing_docs)]
#![cfg_attr(docsrs, feature(doc_cfg), deny(broken_intra_doc_links))]
use metrics::{Key, KeyData, Label, Recorder};
use metrics::{Key, KeyData, Label, Recorder, Unit};
use metrics_util::layers::Layer;
use tracing::Span;
@ -65,7 +65,7 @@ mod tracing_integration;
pub use label_filter::LabelFilter;
pub use tracing_integration::{Labels, MetricsLayer, SpanExt};
/// [`TracingContextLayer`] provides an implementation of a [`metrics::Layer`]
/// [`TracingContextLayer`] provides an implementation of a [`Layer`][metrics_util::layers::Layer]
/// for [`TracingContext`].
pub struct TracingContextLayer<F> {
label_filter: F,
@ -101,8 +101,7 @@ where
}
}
/// [`TracingContext`] is a [`metrics::Recorder`] that injects labels from the
/// [`tracing::Span`]s.
/// [`TracingContext`] is a [`metrics::Recorder`] that injects labels from [`tracing::Span`]s.
pub struct TracingContext<R, F> {
inner: R,
label_filter: F,
@ -127,7 +126,7 @@ where
fn enhance_key(&self, key: Key) -> Key {
let (name, mut labels) = key.into_owned().into_parts();
self.enhance_labels(&mut labels);
KeyData::from_name_and_labels(name, labels).into()
KeyData::from_parts(name, labels).into()
}
}
@ -136,16 +135,16 @@ where
R: Recorder,
F: LabelFilter,
{
fn register_counter(&self, key: Key, description: Option<&'static str>) {
self.inner.register_counter(key, description)
fn register_counter(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
self.inner.register_counter(key, unit, description)
}
fn register_gauge(&self, key: Key, description: Option<&'static str>) {
self.inner.register_gauge(key, description)
fn register_gauge(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
self.inner.register_gauge(key, unit, description)
}
fn register_histogram(&self, key: Key, description: Option<&'static str>) {
self.inner.register_histogram(key, description)
fn register_histogram(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
self.inner.register_histogram(key, unit, description)
}
fn increment_counter(&self, key: Key, value: u64) {

View File

@ -20,7 +20,7 @@ impl Visit for Labels {
}
fn record_bool(&mut self, field: &Field, value: bool) {
let label = Label::new(field.name(), if value { "true" } else { "false " });
let label = Label::from_static_parts(field.name(), if value { "true" } else { "false" });
self.0.push(label);
}

View File

@ -1,5 +1,3 @@
use std::collections::HashSet;
use metrics::{counter, KeyData, Label};
use metrics_tracing_context::{LabelFilter, MetricsLayer, TracingContextLayer};
use metrics_util::{layers::Layer, DebugValue, DebuggingRecorder, MetricKind, Snapshotter};
@ -54,7 +52,7 @@ fn test_basic_functionality() {
snapshot,
vec![(
MetricKind::Counter,
KeyData::from_name_and_labels(
KeyData::from_parts(
"login_attempts",
vec![
Label::new("service", "login_service"),
@ -63,6 +61,8 @@ fn test_basic_functionality() {
],
)
.into(),
None,
None,
DebugValue::Counter(1),
)]
)
@ -89,14 +89,13 @@ fn test_macro_forms() {
"service" => "login_service", "node_name" => node_name.clone());
let snapshot = snapshotter.snapshot();
let snapshot: HashSet<_> = snapshot.into_iter().collect();
assert_eq!(
snapshot,
vec![
(
MetricKind::Counter,
KeyData::from_name_and_labels(
KeyData::from_parts(
"login_attempts_no_labels",
vec![
Label::new("user", "ferris"),
@ -104,11 +103,13 @@ fn test_macro_forms() {
],
)
.into(),
None,
None,
DebugValue::Counter(1),
),
(
MetricKind::Counter,
KeyData::from_name_and_labels(
KeyData::from_parts(
"login_attempts_static_labels",
vec![
Label::new("service", "login_service"),
@ -117,11 +118,13 @@ fn test_macro_forms() {
],
)
.into(),
None,
None,
DebugValue::Counter(1),
),
(
MetricKind::Counter,
KeyData::from_name_and_labels(
KeyData::from_parts(
"login_attempts_dynamic_labels",
vec![
Label::new("node_name", "localhost"),
@ -130,11 +133,13 @@ fn test_macro_forms() {
],
)
.into(),
None,
None,
DebugValue::Counter(1),
),
(
MetricKind::Counter,
KeyData::from_name_and_labels(
KeyData::from_parts(
"login_attempts_static_and_dynamic_labels",
vec![
Label::new("service", "login_service"),
@ -144,11 +149,11 @@ fn test_macro_forms() {
],
)
.into(),
None,
None,
DebugValue::Counter(1),
),
]
.into_iter()
.collect()
)
}
@ -168,6 +173,8 @@ fn test_no_labels() {
vec![(
MetricKind::Counter,
KeyData::from_name("login_attempts").into(),
None,
None,
DebugValue::Counter(1),
)]
)
@ -211,14 +218,13 @@ fn test_multiple_paths_to_the_same_callsite() {
path2();
let snapshot = snapshotter.snapshot();
let snapshot: HashSet<_> = snapshot.into_iter().collect();
assert_eq!(
snapshot,
vec![
(
MetricKind::Counter,
KeyData::from_name_and_labels(
KeyData::from_parts(
"my_counter",
vec![
Label::new("shared_field", "path1"),
@ -227,11 +233,13 @@ fn test_multiple_paths_to_the_same_callsite() {
],
)
.into(),
None,
None,
DebugValue::Counter(1),
),
(
MetricKind::Counter,
KeyData::from_name_and_labels(
KeyData::from_parts(
"my_counter",
vec![
Label::new("shared_field", "path2"),
@ -240,11 +248,11 @@ fn test_multiple_paths_to_the_same_callsite() {
],
)
.into(),
None,
None,
DebugValue::Counter(1),
)
]
.into_iter()
.collect()
)
}
@ -282,13 +290,12 @@ fn test_nested_spans() {
outer();
let snapshot = snapshotter.snapshot();
let snapshot: HashSet<_> = snapshot.into_iter().collect();
assert_eq!(
snapshot,
vec![(
MetricKind::Counter,
KeyData::from_name_and_labels(
KeyData::from_parts(
"my_counter",
vec![
Label::new("shared_field", "inner"),
@ -300,10 +307,10 @@ fn test_nested_spans() {
],
)
.into(),
None,
None,
DebugValue::Counter(1),
),]
.into_iter()
.collect()
)]
)
}
@ -333,7 +340,7 @@ fn test_label_filtering() {
snapshot,
vec![(
MetricKind::Counter,
KeyData::from_name_and_labels(
KeyData::from_parts(
"login_attempts",
vec![
Label::new("service", "login_service"),
@ -341,6 +348,8 @@ fn test_label_filtering() {
],
)
.into(),
None,
None,
DebugValue::Counter(1),
)]
)

View File

@ -7,6 +7,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
<!-- next-header -->
## [Unreleased] - ReleaseDate
### Removed
- Removed `StreamingIntegers` as we no longer use it, and `compressed_vec` is a better option.
## [0.3.1] - 2019-11-21
### Changed

View File

@ -1,6 +1,6 @@
[package]
name = "metrics-util"
version = "0.4.0-alpha.4"
version = "0.4.0-alpha.6"
authors = ["Toby Lawrence <toby@nuclearfurnace.com>"]
edition = "2018"
@ -27,19 +27,22 @@ name = "registry"
harness = false
[[bench]]
name = "streaming_integers"
name = "prefix"
harness = false
[[bench]]
name = "filter"
harness = false
[dependencies]
metrics = { version = "0.13.0-alpha.1", path = "../metrics", features = ["std"] }
crossbeam-epoch = "0.8"
crossbeam-utils = "0.7"
serde = "1.0"
arc-swap = "0.4"
atomic-shim = "0.1"
parking_lot = "0.11"
crossbeam-epoch = { version = "0.9", optional = true }
crossbeam-utils = { version = "0.8", default-features = false }
arc-swap = { version = "1.0", optional = true }
atomic-shim = { version = "0.1", optional = true }
aho-corasick = { version = "0.7", optional = true }
dashmap = "3"
dashmap = { version = "3", optional = true }
indexmap = { version = "1.6", optional = true }
[dev-dependencies]
criterion = "0.3"
@ -48,6 +51,6 @@ rand = { version = "0.7", features = ["small_rng"] }
rand_distr = "0.3"
[features]
default = ["std", "layer-filter"]
std = []
default = ["std"]
std = ["arc-swap", "atomic-shim", "crossbeam-epoch", "dashmap", "indexmap"]
layer-filter = ["aho-corasick"]
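
Since `layer-filter` is no longer part of the default feature set, a downstream crate that still wants the filter layer now has to opt in explicitly; a hedged sketch of such a dependency entry (version taken from this manifest):

[dependencies]
metrics-util = { version = "0.4.0-alpha.6", features = ["layer-filter"] }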

View File

@ -0,0 +1,61 @@
use criterion::{criterion_group, criterion_main, Benchmark, Criterion};
use metrics::{Key, KeyData, Label, NoopRecorder, Recorder, SharedString};
use metrics_util::layers::{FilterLayer, Layer};
fn layer_benchmark(c: &mut Criterion) {
c.bench(
"filter",
Benchmark::new("match", |b| {
let patterns = vec!["tokio"];
let filter_layer = FilterLayer::from_patterns(patterns);
let recorder = filter_layer.layer(NoopRecorder);
static KEY_NAME: [SharedString; 1] = [SharedString::const_str("tokio.foo")];
static KEY_LABELS: [Label; 1] = [Label::from_static_parts("foo", "bar")];
static KEY_DATA: KeyData = KeyData::from_static_parts(&KEY_NAME, &KEY_LABELS);
b.iter(|| {
recorder.increment_counter(Key::Borrowed(&KEY_DATA), 1);
})
})
.with_function("no match", |b| {
let patterns = vec!["tokio"];
let filter_layer = FilterLayer::from_patterns(patterns);
let recorder = filter_layer.layer(NoopRecorder);
static KEY_NAME: [SharedString; 1] = [SharedString::const_str("hyper.foo")];
static KEY_LABELS: [Label; 1] = [Label::from_static_parts("foo", "bar")];
static KEY_DATA: KeyData = KeyData::from_static_parts(&KEY_NAME, &KEY_LABELS);
b.iter(|| {
recorder.increment_counter(Key::Borrowed(&KEY_DATA), 1);
})
})
.with_function("deep match", |b| {
let patterns = vec!["tokio"];
let filter_layer = FilterLayer::from_patterns(patterns);
let recorder = filter_layer.layer(NoopRecorder);
static KEY_NAME: [SharedString; 2] = [
SharedString::const_str("prefix"),
SharedString::const_str("tokio.foo"),
];
static KEY_LABELS: [Label; 1] = [Label::from_static_parts("foo", "bar")];
static KEY_DATA: KeyData = KeyData::from_static_parts(&KEY_NAME, &KEY_LABELS);
b.iter(|| {
recorder.increment_counter(Key::Borrowed(&KEY_DATA), 1);
})
})
.with_function("noop recorder overhead (increment_counter)", |b| {
let recorder = NoopRecorder;
static KEY_NAME: [SharedString; 1] = [SharedString::const_str("tokio.foo")];
static KEY_LABELS: [Label; 1] = [Label::from_static_parts("foo", "bar")];
static KEY_DATA: KeyData = KeyData::from_static_parts(&KEY_NAME, &KEY_LABELS);
b.iter(|| {
recorder.increment_counter(Key::Borrowed(&KEY_DATA), 1);
})
}),
);
}
criterion_group!(benches, layer_benchmark);
criterion_main!(benches);

View File

@ -0,0 +1,33 @@
use criterion::{criterion_group, criterion_main, Benchmark, Criterion};
use metrics::{Key, KeyData, Label, NoopRecorder, Recorder, SharedString};
use metrics_util::layers::{Layer, PrefixLayer};
fn layer_benchmark(c: &mut Criterion) {
c.bench(
"prefix",
Benchmark::new("basic", |b| {
let prefix_layer = PrefixLayer::new("prefix");
let recorder = prefix_layer.layer(NoopRecorder);
static KEY_NAME: [SharedString; 1] = [SharedString::const_str("simple_key")];
static KEY_LABELS: [Label; 1] = [Label::from_static_parts("foo", "bar")];
static KEY_DATA: KeyData = KeyData::from_static_parts(&KEY_NAME, &KEY_LABELS);
b.iter(|| {
recorder.increment_counter(Key::Borrowed(&KEY_DATA), 1);
})
})
.with_function("noop recorder overhead (increment_counter)", |b| {
let recorder = NoopRecorder;
static KEY_NAME: [SharedString; 1] = [SharedString::const_str("simple_key")];
static KEY_LABELS: [Label; 1] = [Label::from_static_parts("foo", "bar")];
static KEY_DATA: KeyData = KeyData::from_static_parts(&KEY_NAME, &KEY_LABELS);
b.iter(|| {
recorder.increment_counter(Key::Borrowed(&KEY_DATA), 1);
})
}),
);
}
criterion_group!(benches, layer_benchmark);
criterion_main!(benches);

View File

@ -1,5 +1,5 @@
use criterion::{criterion_group, criterion_main, BatchSize, Benchmark, Criterion};
use metrics::{Key, KeyData, Label, OnceKeyData};
use metrics::{Key, KeyData, Label, SharedString};
use metrics_util::Registry;
fn registry_benchmark(c: &mut Criterion) {
@ -7,22 +7,22 @@ fn registry_benchmark(c: &mut Criterion) {
"registry",
Benchmark::new("cached op (basic)", |b| {
let registry: Registry<Key, ()> = Registry::new();
static KEY_DATA: OnceKeyData = OnceKeyData::new();
static KEY_NAME: [SharedString; 1] = [SharedString::const_str("simple_key")];
static KEY_DATA: KeyData = KeyData::from_static_name(&KEY_NAME);
b.iter(|| {
let key = Key::Borrowed(KEY_DATA.get_or_init(|| KeyData::from_name("simple_key")));
let key = Key::Borrowed(&KEY_DATA);
registry.op(key, |_| (), || ())
})
})
.with_function("cached op (labels)", |b| {
let registry: Registry<Key, ()> = Registry::new();
static KEY_DATA: OnceKeyData = OnceKeyData::new();
static KEY_NAME: [SharedString; 1] = [SharedString::const_str("simple_key")];
static KEY_LABELS: [Label; 1] = [Label::from_static_parts("type", "http")];
static KEY_DATA: KeyData = KeyData::from_static_parts(&KEY_NAME, &KEY_LABELS);
b.iter(|| {
let key = Key::Borrowed(KEY_DATA.get_or_init(|| {
let labels = vec![Label::new("type", "http")];
KeyData::from_name_and_labels("simple_key", labels)
}));
let key = Key::Borrowed(&KEY_DATA);
registry.op(key, |_| (), || ())
})
})
@ -64,7 +64,20 @@ fn registry_benchmark(c: &mut Criterion) {
b.iter(|| {
let key = "simple_key";
let labels = vec![Label::new("type", "http")];
KeyData::from_name_and_labels(key, labels)
KeyData::from_parts(key, labels)
})
})
.with_function("const key data overhead (basic)", |b| {
b.iter(|| {
static KEY_NAME: [SharedString; 1] = [SharedString::const_str("simple_key")];
KeyData::from_static_name(&KEY_NAME)
})
})
.with_function("const key data overhead (labels)", |b| {
b.iter(|| {
static KEY_NAME: [SharedString; 1] = [SharedString::const_str("simple_key")];
static LABELS: [Label; 1] = [Label::from_static_parts("type", "http")];
KeyData::from_static_parts(&KEY_NAME, &LABELS)
})
})
.with_function("owned key overhead (basic)", |b| {
@ -77,29 +90,19 @@ fn registry_benchmark(c: &mut Criterion) {
b.iter(|| {
let key = "simple_key";
let labels = vec![Label::new("type", "http")];
Key::Owned(KeyData::from_name_and_labels(key, labels))
Key::Owned(KeyData::from_parts(key, labels))
})
})
.with_function("cached key overhead (basic)", |b| {
static KEY_DATA: OnceKeyData = OnceKeyData::new();
b.iter(|| {
let key_data = KEY_DATA.get_or_init(|| {
let key = "simple_key";
KeyData::from_name(key)
});
Key::Borrowed(key_data)
})
static KEY_NAME: [SharedString; 1] = [SharedString::const_str("simple_key")];
static KEY_DATA: KeyData = KeyData::from_static_name(&KEY_NAME);
b.iter(|| Key::Borrowed(&KEY_DATA))
})
.with_function("cached key overhead (labels)", |b| {
static KEY_DATA: OnceKeyData = OnceKeyData::new();
b.iter(|| {
let key_data = KEY_DATA.get_or_init(|| {
let key = "simple_key";
let labels = vec![Label::new("type", "http")];
KeyData::from_name_and_labels(key, labels)
});
Key::Borrowed(key_data)
})
static KEY_NAME: [SharedString; 1] = [SharedString::const_str("simple_key")];
static KEY_LABELS: [Label; 1] = [Label::from_static_parts("type", "http")];
static KEY_DATA: KeyData = KeyData::from_static_parts(&KEY_NAME, &KEY_LABELS);
b.iter(|| Key::Borrowed(&KEY_DATA))
}),
);
}

View File

@ -1,96 +0,0 @@
use criterion::{criterion_group, criterion_main, Benchmark, Criterion, Throughput};
use lazy_static::lazy_static;
use metrics_util::StreamingIntegers;
use rand::{distributions::Distribution, rngs::SmallRng, SeedableRng};
use rand_distr::Gamma;
use std::time::Duration;
lazy_static! {
static ref NORMAL_SMALL: Vec<u64> = get_gamma_distribution(100, Duration::from_millis(200));
static ref NORMAL_MEDIUM: Vec<u64> = get_gamma_distribution(10000, Duration::from_millis(200));
static ref NORMAL_LARGE: Vec<u64> = get_gamma_distribution(1000000, Duration::from_millis(200));
static ref LINEAR_SMALL: Vec<u64> = get_linear_distribution(100);
static ref LINEAR_MEDIUM: Vec<u64> = get_linear_distribution(10000);
static ref LINEAR_LARGE: Vec<u64> = get_linear_distribution(1000000);
}
fn get_gamma_distribution(len: usize, upper_bound: Duration) -> Vec<u64> {
// Start with a seeded RNG so that we predictably regenerate our data.
let mut rng = SmallRng::seed_from_u64(len as u64);
// This Gamma distribution gets us pretty close to a typical web server response time
// distribution where there's a big peak down low, and a long tail that drops off sharply.
let gamma = Gamma::new(1.75, 1.0).expect("failed to create gamma distribution");
// Scale all the values by 22 million to simulate a lower bound of 22ms (but in
// nanoseconds) for all generated values.
gamma
.sample_iter(&mut rng)
.map(|n| n * upper_bound.as_nanos() as f64)
.map(|n| n as u64)
.take(len)
.collect::<Vec<u64>>()
}
fn get_linear_distribution(len: usize) -> Vec<u64> {
let mut values = Vec::new();
for i in 0..len as u64 {
values.push(i);
}
values
}
macro_rules! define_basic_benches {
($c:ident, $name:expr, $input:ident) => {
$c.bench(
$name,
Benchmark::new("compress", |b| {
b.iter_with_large_drop(|| {
let mut si = StreamingIntegers::new();
si.compress(&$input);
si
})
})
.with_function("decompress", |b| {
let mut si = StreamingIntegers::new();
si.compress(&$input);
b.iter_with_large_drop(move || si.decompress())
})
.with_function("decompress + sum", |b| {
let mut si = StreamingIntegers::new();
si.compress(&$input);
b.iter_with_large_drop(move || {
let total: u64 = si.decompress().iter().sum();
total
})
})
.with_function("decompress_with + sum", |b| {
let mut si = StreamingIntegers::new();
si.compress(&$input);
b.iter(move || {
let mut total = 0;
si.decompress_with(|batch| {
let batch_total: u64 = batch.iter().sum();
total += batch_total;
});
});
})
.throughput(Throughput::Elements($input.len() as u64)),
)
};
}
fn streaming_integer_benchmark(c: &mut Criterion) {
define_basic_benches!(c, "normal small", NORMAL_SMALL);
define_basic_benches!(c, "normal medium", NORMAL_MEDIUM);
define_basic_benches!(c, "normal large", NORMAL_LARGE);
define_basic_benches!(c, "linear small", LINEAR_SMALL);
define_basic_benches!(c, "linear medium", LINEAR_MEDIUM);
define_basic_benches!(c, "linear large", LINEAR_LARGE);
}
criterion_group!(benches, streaming_integer_benchmark);
criterion_main!(benches);

View File

@ -5,14 +5,19 @@ use std::{
sync::atomic::{AtomicUsize, Ordering},
};
const BLOCK_SIZE: usize = 512;
#[cfg(target_pointer_width = "16")]
const BLOCK_SIZE: usize = 16;
#[cfg(target_pointer_width = "32")]
const BLOCK_SIZE: usize = 32;
#[cfg(target_pointer_width = "64")]
const BLOCK_SIZE: usize = 64;
/// Discrete chunk of values with atomic read/write access.
struct Block<T> {
// Write index.
write: AtomicUsize,
// Read index.
// Read bitmap.
read: AtomicUsize,
// The individual slots.
@ -35,7 +40,7 @@ impl<T> Block<T> {
/// Gets the current length of this block.
pub fn len(&self) -> usize {
self.read.load(Ordering::Acquire)
self.read.load(Ordering::Acquire).trailing_ones() as usize
}
/// Gets a slice of the data written to this block.
@ -71,7 +76,7 @@ impl<T> Block<T> {
}
// Scoot our read index forward.
self.read.fetch_add(1, Ordering::AcqRel);
self.read.fetch_or(1 << index, Ordering::AcqRel);
Ok(())
}
@ -112,7 +117,7 @@ pub struct AtomicBucket<T> {
impl<T> AtomicBucket<T> {
/// Creates a new, empty bucket.
pub const fn new() -> Self {
pub fn new() -> Self {
AtomicBucket {
tail: Atomic::null(),
}
@ -324,6 +329,7 @@ mod tests {
let result = block.push(42);
assert!(result.is_ok());
assert_eq!(block.len(), 1);
let data = block.data();
assert_eq!(data.len(), 1);
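
The switch from a read counter to a read bitmap means `len` now reports only the contiguous prefix of fully written slots: a writer publishes slot `i` with `fetch_or(1 << i)`, and `trailing_ones` stops counting at the first slot that has not been published yet. A small hedged sketch of that invariant, independent of the `Block` type:

// Hedged sketch: the bitmap invariant behind `Block::len`, using a plain AtomicUsize.
use std::sync::atomic::{AtomicUsize, Ordering};

fn bitmap_demo() {
    let read = AtomicUsize::new(0);
    // Slot 0 and slot 2 are published, but slot 1 is still being written.
    read.fetch_or(1 << 0, Ordering::AcqRel);
    read.fetch_or(1 << 2, Ordering::AcqRel);
    // Only the contiguous prefix counts as readable, so the length is 1, not 2.
    assert_eq!(read.load(Ordering::Acquire).trailing_ones() as usize, 1);
    // Once slot 1 lands, the prefix extends across all three slots.
    read.fetch_or(1 << 1, Ordering::AcqRel);
    assert_eq!(read.load(Ordering::Acquire).trailing_ones() as usize, 3);
}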

View File

@ -1,8 +1,21 @@
use std::{hash::Hash, hash::Hasher, sync::Arc};
use core::hash::{Hash, Hasher};
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use crate::{handle::Handle, registry::Registry};
use metrics::{Key, Recorder};
use indexmap::IndexMap;
use metrics::{Key, Recorder, Unit};
type UnitMap = Arc<Mutex<HashMap<DifferentiatedKey, Unit>>>;
type DescriptionMap = Arc<Mutex<HashMap<DifferentiatedKey, &'static str>>>;
type Snapshot = Vec<(
MetricKind,
Key,
Option<Unit>,
Option<&'static str>,
DebugValue,
)>;
/// Metric kinds.
#[derive(Debug, Eq, PartialEq, Hash, Clone, Copy, Ord, PartialOrd)]
@ -60,23 +73,62 @@ impl Hash for DebugValue {
/// Captures point-in-time snapshots of `DebuggingRecorder`.
pub struct Snapshotter {
registry: Arc<Registry<DifferentiatedKey, Handle>>,
metrics: Option<Arc<Mutex<IndexMap<DifferentiatedKey, ()>>>>,
units: UnitMap,
descs: DescriptionMap,
}
impl Snapshotter {
/// Takes a snapshot of the recorder.
pub fn snapshot(&self) -> Vec<(MetricKind, Key, DebugValue)> {
let mut metrics = Vec::new();
pub fn snapshot(&self) -> Snapshot {
let mut snapshot = Vec::new();
let handles = self.registry.get_handles();
for (dkey, handle) in handles {
let collect_metric = |dkey: DifferentiatedKey,
handle: &Handle,
units: &UnitMap,
descs: &DescriptionMap,
snapshot: &mut Snapshot| {
let unit = units
.lock()
.expect("units lock poisoned")
.get(&dkey)
.cloned();
let desc = descs
.lock()
.expect("descriptions lock poisoned")
.get(&dkey)
.cloned();
let (kind, key) = dkey.into_parts();
let value = match kind {
MetricKind::Counter => DebugValue::Counter(handle.read_counter()),
MetricKind::Gauge => DebugValue::Gauge(handle.read_gauge()),
MetricKind::Histogram => DebugValue::Histogram(handle.read_histogram()),
};
metrics.push((kind, key, value));
snapshot.push((kind, key, unit, desc, value));
};
match &self.metrics {
Some(inner) => {
let metrics = {
let metrics = inner.lock().expect("metrics lock poisoned");
metrics.clone()
};
for (dk, _) in metrics.into_iter() {
if let Some(h) = handles.get(&dk) {
collect_metric(dk, h, &self.units, &self.descs, &mut snapshot);
}
}
}
None => {
for (dk, h) in handles.into_iter() {
collect_metric(dk, &h, &self.units, &self.descs, &mut snapshot);
}
}
}
metrics
snapshot
}
}
@ -86,13 +138,34 @@ impl Snapshotter {
/// to the raw values.
pub struct DebuggingRecorder {
registry: Arc<Registry<DifferentiatedKey, Handle>>,
metrics: Option<Arc<Mutex<IndexMap<DifferentiatedKey, ()>>>>,
units: Arc<Mutex<HashMap<DifferentiatedKey, Unit>>>,
descs: Arc<Mutex<HashMap<DifferentiatedKey, &'static str>>>,
}
impl DebuggingRecorder {
/// Creates a new `DebuggingRecorder`.
pub fn new() -> DebuggingRecorder {
Self::with_ordering(true)
}
/// Creates a new `DebuggingRecorder` with ordering enabled or disabled.
///
/// When ordering is enabled, any snapshotter derived from this recorder will iterate the
/// collected metrics in order of when the metric was first observed. If ordering is disabled,
/// then the iteration order is undefined.
pub fn with_ordering(ordered: bool) -> Self {
let metrics = if ordered {
Some(Arc::new(Mutex::new(IndexMap::new())))
} else {
None
};
DebuggingRecorder {
registry: Arc::new(Registry::new()),
metrics,
units: Arc::new(Mutex::new(HashMap::new())),
descs: Arc::new(Mutex::new(HashMap::new())),
}
}
@ -100,6 +173,34 @@ impl DebuggingRecorder {
pub fn snapshotter(&self) -> Snapshotter {
Snapshotter {
registry: self.registry.clone(),
metrics: self.metrics.clone(),
units: self.units.clone(),
descs: self.descs.clone(),
}
}
fn register_metric(&self, rkey: DifferentiatedKey) {
if let Some(metrics) = &self.metrics {
let mut metrics = metrics.lock().expect("metrics lock poisoned");
let _ = metrics.entry(rkey.clone()).or_insert(());
}
}
fn insert_unit_description(
&self,
rkey: DifferentiatedKey,
unit: Option<Unit>,
desc: Option<&'static str>,
) {
if let Some(unit) = unit {
let mut units = self.units.lock().expect("units lock poisoned");
let uentry = units.entry(rkey.clone()).or_insert_with(|| unit.clone());
*uentry = unit;
}
if let Some(desc) = desc {
let mut descs = self.descs.lock().expect("description lock poisoned");
let dentry = descs.entry(rkey).or_insert_with(|| desc);
*dentry = desc;
}
}
@ -110,23 +211,30 @@ impl DebuggingRecorder {
}
impl Recorder for DebuggingRecorder {
fn register_counter(&self, key: Key, _description: Option<&'static str>) {
fn register_counter(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
let rkey = DifferentiatedKey(MetricKind::Counter, key);
self.register_metric(rkey.clone());
self.insert_unit_description(rkey.clone(), unit, description);
self.registry.op(rkey, |_| {}, || Handle::counter())
}
fn register_gauge(&self, key: Key, _description: Option<&'static str>) {
fn register_gauge(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
let rkey = DifferentiatedKey(MetricKind::Gauge, key);
self.register_metric(rkey.clone());
self.insert_unit_description(rkey.clone(), unit, description);
self.registry.op(rkey, |_| {}, || Handle::gauge())
}
fn register_histogram(&self, key: Key, _description: Option<&'static str>) {
fn register_histogram(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
let rkey = DifferentiatedKey(MetricKind::Histogram, key);
self.register_metric(rkey.clone());
self.insert_unit_description(rkey.clone(), unit, description);
self.registry.op(rkey, |_| {}, || Handle::histogram())
}
fn increment_counter(&self, key: Key, value: u64) {
let rkey = DifferentiatedKey(MetricKind::Counter, key);
self.register_metric(rkey.clone());
self.registry.op(
rkey,
|handle| handle.increment_counter(value),
@ -136,6 +244,7 @@ impl Recorder for DebuggingRecorder {
fn update_gauge(&self, key: Key, value: f64) {
let rkey = DifferentiatedKey(MetricKind::Gauge, key);
self.register_metric(rkey.clone());
self.registry.op(
rkey,
|handle| handle.update_gauge(value),
@ -145,6 +254,7 @@ impl Recorder for DebuggingRecorder {
fn record_histogram(&self, key: Key, value: u64) {
let rkey = DifferentiatedKey(MetricKind::Histogram, key);
self.register_metric(rkey.clone());
self.registry.op(
rkey,
|handle| handle.record_histogram(value),
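To make the expanded snapshot tuple concrete, here is a minimal sketch of how a caller might exercise the updated `DebuggingRecorder`. It assumes the `metrics_util::DebuggingRecorder` and `metrics::{Key, Recorder, Unit}` exports used elsewhere in this change, and that `snapshot()` now yields `(kind, key, unit, description, value)` tuples as shown above; it is an illustration, not code from the commit itself.

```rust
use metrics::{Key, Recorder, Unit};
use metrics_util::DebuggingRecorder;

fn main() {
    // Ordered by default: snapshots iterate metrics in first-observed order.
    let recorder = DebuggingRecorder::new();
    let snapshotter = recorder.snapshotter();

    // Register with the new unit/description arguments, then record a value.
    recorder.register_counter(
        Key::Owned("tokio.loops".into()),
        Some(Unit::Count),
        Some("number of loop iterations"),
    );
    recorder.increment_counter(Key::Owned("tokio.loops".into()), 1);

    // Each snapshot entry now carries the unit and description alongside the value.
    let entries = snapshotter.snapshot();
    assert_eq!(entries.len(), 1);
    for (_kind, key, unit, desc, _value) in entries {
        assert_eq!("tokio.loops", key.name().to_string());
        assert_eq!(Some(Unit::Count), unit);
        assert_eq!(Some("number of loop iterations"), desc);
    }
}
```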

View File

@ -1,4 +1,4 @@
use metrics::{Key, Recorder};
use metrics::{Key, Recorder, Unit};
/// Fans out metrics to multiple recorders.
pub struct Fanout {
@ -6,21 +6,21 @@ pub struct Fanout {
}
impl Recorder for Fanout {
fn register_counter(&self, key: Key, description: Option<&'static str>) {
fn register_counter(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
for recorder in &self.recorders {
recorder.register_counter(key.clone(), description);
recorder.register_counter(key.clone(), unit.clone(), description);
}
}
fn register_gauge(&self, key: Key, description: Option<&'static str>) {
fn register_gauge(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
for recorder in &self.recorders {
recorder.register_gauge(key.clone(), description);
recorder.register_gauge(key.clone(), unit.clone(), description);
}
}
fn register_histogram(&self, key: Key, description: Option<&'static str>) {
fn register_histogram(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
for recorder in &self.recorders {
recorder.register_histogram(key.clone(), description);
recorder.register_histogram(key.clone(), unit.clone(), description);
}
}
@ -73,7 +73,7 @@ impl FanoutBuilder {
mod tests {
use super::FanoutBuilder;
use crate::debugging::DebuggingRecorder;
use metrics::{Key, Recorder};
use metrics::{Key, Recorder, Unit};
#[test]
fn test_basic_functionality() {
@ -91,8 +91,18 @@ mod tests {
assert_eq!(before1.len(), 0);
assert_eq!(before2.len(), 0);
fanout.register_counter(Key::Owned("tokio.loops".into()), None);
fanout.register_gauge(Key::Owned("hyper.sent_bytes".into()), None);
let ud = &[(Unit::Count, "counter desc"), (Unit::Bytes, "gauge desc")];
fanout.register_counter(
Key::Owned("tokio.loops".into()),
Some(ud[0].0.clone()),
Some(ud[0].1),
);
fanout.register_gauge(
Key::Owned("hyper.sent_bytes".into()),
Some(ud[1].0.clone()),
Some(ud[1].1),
);
fanout.increment_counter(Key::Owned("tokio.loops".into()), 47);
fanout.update_gauge(Key::Owned("hyper.sent_bytes".into()), 12.0);
@ -101,11 +111,21 @@ mod tests {
assert_eq!(after1.len(), 2);
assert_eq!(after2.len(), 2);
let after = after1.into_iter().zip(after2).collect::<Vec<_>>();
let after = after1
.into_iter()
.zip(after2)
.enumerate()
.collect::<Vec<_>>();
for ((_, k1, v1), (_, k2, v2)) in after {
for (i, ((_, k1, u1, d1, v1), (_, k2, u2, d2, v2))) in after {
assert_eq!(k1, k2);
assert_eq!(u1, u2);
assert_eq!(d1, d2);
assert_eq!(v1, v2);
assert_eq!(Some(ud[i].0.clone()), u1);
assert_eq!(Some(ud[i].0.clone()), u2);
assert_eq!(Some(ud[i].1), d1);
assert_eq!(Some(ud[i].1), d2);
}
}
}
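As the fanout changes above show, the new `unit` argument is an owned `Option<Unit>`, so a recorder that forwards registrations to several inner recorders has to clone it per target. The following is a rough sketch of that pattern built only on the `Recorder` trait signatures shown in this diff; the `Vec<Box<dyn Recorder>>` storage is an assumption and this is not the crate's actual `Fanout` implementation.

```rust
use metrics::{Key, Recorder, Unit};

struct Tee {
    targets: Vec<Box<dyn Recorder>>,
}

impl Recorder for Tee {
    fn register_counter(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
        for target in &self.targets {
            // `Key` and `Unit` are cloned so every target gets its own copy.
            target.register_counter(key.clone(), unit.clone(), description);
        }
    }

    fn register_gauge(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
        for target in &self.targets {
            target.register_gauge(key.clone(), unit.clone(), description);
        }
    }

    fn register_histogram(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
        for target in &self.targets {
            target.register_histogram(key.clone(), unit.clone(), description);
        }
    }

    fn increment_counter(&self, key: Key, value: u64) {
        for target in &self.targets {
            target.increment_counter(key.clone(), value);
        }
    }

    fn update_gauge(&self, key: Key, value: f64) {
        for target in &self.targets {
            target.update_gauge(key.clone(), value);
        }
    }

    fn record_histogram(&self, key: Key, value: u64) {
        for target in &self.targets {
            target.record_histogram(key.clone(), value);
        }
    }
}
```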

View File

@ -1,6 +1,6 @@
use crate::layers::Layer;
use aho_corasick::{AhoCorasick, AhoCorasickBuilder};
use metrics::{Key, Recorder};
use metrics::{Key, Recorder, Unit};
/// Filters and discards metrics matching certain name patterns.
///
@ -13,30 +13,32 @@ pub struct Filter<R> {
impl<R> Filter<R> {
fn should_filter(&self, key: &Key) -> bool {
self.automaton.is_match(key.name().as_ref())
key.name()
.parts()
.any(|s| self.automaton.is_match(s.as_ref()))
}
}
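The change above means a pattern now matches if it matches any single part of the composed name, not just the joined string. A rough standalone illustration of that predicate, using the `metrics::NameParts` export shown later in this change and plain substring matching in place of the Aho-Corasick automaton:

```rust
use metrics::NameParts;

/// Returns true if any part of the name contains one of the patterns.
/// (The real layer uses an Aho-Corasick automaton; substring search is used
/// here only to keep the sketch dependency-free.)
fn should_filter(name: &NameParts, patterns: &[&str]) -> bool {
    name.parts()
        .any(|part| patterns.iter().any(|p| part.contains(*p)))
}

fn main() {
    // "tokio" appears as its own part, so this name is filtered out.
    let name = NameParts::from_name("hyper").append("tokio").append("sent_bytes");
    assert!(should_filter(&name, &["tokio", "bb8"]));

    // No part of this name matches either pattern.
    let name = NameParts::from_name("hyper.sent_bytes");
    assert!(!should_filter(&name, &["tokio", "bb8"]));
}
```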
impl<R: Recorder> Recorder for Filter<R> {
fn register_counter(&self, key: Key, description: Option<&'static str>) {
fn register_counter(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
if self.should_filter(&key) {
return;
}
self.inner.register_counter(key, description)
self.inner.register_counter(key, unit, description)
}
fn register_gauge(&self, key: Key, description: Option<&'static str>) {
fn register_gauge(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
if self.should_filter(&key) {
return;
}
self.inner.register_gauge(key, description)
self.inner.register_gauge(key, unit, description)
}
fn register_histogram(&self, key: Key, description: Option<&'static str>) {
fn register_histogram(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
if self.should_filter(&key) {
return;
}
self.inner.register_histogram(key, description)
self.inner.register_histogram(key, unit, description)
}
fn increment_counter(&self, key: Key, value: u64) {
@ -75,11 +77,14 @@ impl FilterLayer {
/// Creates a `FilterLayer` from an existing set of patterns.
pub fn from_patterns<P, I>(patterns: P) -> Self
where
P: Iterator<Item = I>,
P: IntoIterator<Item = I>,
I: AsRef<str>,
{
FilterLayer {
patterns: patterns.map(|s| s.as_ref().to_string()).collect(),
patterns: patterns
.into_iter()
.map(|s| s.as_ref().to_string())
.collect(),
case_insensitive: false,
use_dfa: true,
}
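Relaxing the bound from `Iterator` to `IntoIterator` lets callers pass collections directly instead of calling `.iter()` first, which is what the updated test below relies on. A hedged sketch of the call-site difference, assuming the `metrics_util::layers::FilterLayer` path used in the module docs:

```rust
use metrics_util::layers::FilterLayer;

fn main() {
    // Previously the caller had to produce an iterator explicitly:
    let _old_style = FilterLayer::from_patterns(["tokio", "bb8"].iter());

    // With `P: IntoIterator`, arrays, slices, and Vecs work as-is:
    let _from_slice = FilterLayer::from_patterns(&["tokio", "bb8"]);
    let _from_vec = FilterLayer::from_patterns(vec![String::from("tokio")]);
}
```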
@ -135,30 +140,65 @@ mod tests {
use super::FilterLayer;
use crate::debugging::DebuggingRecorder;
use crate::layers::Layer;
use metrics::{Key, Recorder};
use metrics::{Key, Recorder, Unit};
#[test]
fn test_basic_functionality() {
let patterns = &["tokio", "bb8"];
let recorder = DebuggingRecorder::new();
let snapshotter = recorder.snapshotter();
let filter = FilterLayer::from_patterns(patterns.iter());
let filter = FilterLayer::from_patterns(patterns);
let layered = filter.layer(recorder);
let before = snapshotter.snapshot();
assert_eq!(before.len(), 0);
layered.register_counter(Key::Owned("tokio.loops".into()), None);
layered.register_gauge(Key::Owned("hyper.sent_bytes".into()), None);
layered.register_histogram(Key::Owned("hyper.recv_bytes".into()), None);
layered.register_counter(Key::Owned("bb8.conns".into()), None);
layered.register_gauge(Key::Owned("hyper.tokio.sent_bytes".into()), None);
let ud = &[
(Unit::Count, "counter desc"),
(Unit::Bytes, "gauge desc"),
(Unit::Bytes, "histogram desc"),
(Unit::Count, "counter desc"),
(Unit::Bytes, "gauge desc"),
];
layered.register_counter(
Key::Owned("tokio.loops".into()),
Some(ud[0].0.clone()),
Some(ud[0].1),
);
layered.register_gauge(
Key::Owned("hyper.sent_bytes".into()),
Some(ud[1].0.clone()),
Some(ud[1].1),
);
layered.register_histogram(
Key::Owned("hyper.tokio.sent_bytes".into()),
Some(ud[2].0.clone()),
Some(ud[2].1),
);
layered.register_counter(
Key::Owned("bb8.conns".into()),
Some(ud[3].0.clone()),
Some(ud[3].1),
);
layered.register_gauge(
Key::Owned("hyper.recv_bytes".into()),
Some(ud[4].0.clone()),
Some(ud[4].1),
);
let after = snapshotter.snapshot();
assert_eq!(after.len(), 2);
for (_kind, key, _value) in &after {
assert!(!key.name().contains("tokio") && !key.name().contains("bb8"));
for (_kind, key, unit, desc, _value) in after {
assert!(
!key.name().to_string().contains("tokio")
&& !key.name().to_string().contains("bb8")
);
// We cheat here since we're not comparing one-to-one with the source data,
// but we know which metrics are going to make it through so we can hard code.
assert_eq!(Some(Unit::Bytes), unit);
assert!(!desc.unwrap().is_empty() && desc.unwrap() == "gauge desc");
}
}
@ -174,19 +214,19 @@ mod tests {
let before = snapshotter.snapshot();
assert_eq!(before.len(), 0);
layered.register_counter(Key::Owned("tokiO.loops".into()), None);
layered.register_gauge(Key::Owned("hyper.sent_bytes".into()), None);
layered.register_histogram(Key::Owned("hyper.recv_bytes".into()), None);
layered.register_counter(Key::Owned("bb8.conns".into()), None);
layered.register_counter(Key::Owned("Bb8.conns_closed".into()), None);
layered.register_counter(Key::Owned("tokiO.loops".into()), None, None);
layered.register_gauge(Key::Owned("hyper.sent_bytes".into()), None, None);
layered.register_histogram(Key::Owned("hyper.recv_bytes".into()), None, None);
layered.register_counter(Key::Owned("bb8.conns".into()), None, None);
layered.register_counter(Key::Owned("Bb8.conns_closed".into()), None, None);
let after = snapshotter.snapshot();
assert_eq!(after.len(), 2);
for (_kind, key, _value) in &after {
for (_kind, key, _unit, _desc, _value) in &after {
assert!(
!key.name().to_lowercase().contains("tokio")
&& !key.name().to_lowercase().contains("bb8")
!key.name().to_string().to_lowercase().contains("tokio")
&& !key.name().to_string().to_lowercase().contains("bb8")
);
}
}

View File

@ -1,4 +1,4 @@
//! Layers are composable helpers that can be "layered" on top of an existing `Recorder` to enhancne
//! Layers are composable helpers that can be "layered" on top of an existing `Recorder` to enhance
//! or alter its behavior as desired, without having to change the recorder implementation itself.
//!
//! As well, [`Stack`] can be used to easily compose multiple layers together and provides a
@ -8,7 +8,7 @@
//! Here's an example of a layer that filters out all metrics that start with a specific string:
//!
//! ```rust
//! # use metrics::{Key, Recorder};
//! # use metrics::{Key, Recorder, Unit};
//! # use metrics_util::DebuggingRecorder;
//! # use metrics_util::layers::{Layer, Stack, PrefixLayer};
//! // A simple layer that denies any metrics that have "stairway" or "heaven" in their name.
@ -17,30 +17,35 @@
//!
//! impl<R> StairwayDeny<R> {
//! fn is_invalid_key(&self, key: &Key) -> bool {
//! key.name().contains("stairway") || key.name().contains("heaven")
//! for part in key.name().parts() {
//! if part.contains("stairway") || part.contains("heaven") {
//! return true
//! }
//! }
//! false
//! }
//! }
//!
//! impl<R: Recorder> Recorder for StairwayDeny<R> {
//! fn register_counter(&self, key: Key, description: Option<&'static str>) {
//! fn register_counter(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
//! if self.is_invalid_key(&key) {
//! return;
//! }
//! self.0.register_counter(key, description)
//! self.0.register_counter(key, unit, description)
//! }
//!
//! fn register_gauge(&self, key: Key, description: Option<&'static str>) {
//! fn register_gauge(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
//! if self.is_invalid_key(&key) {
//! return;
//! }
//! self.0.register_gauge(key, description)
//! self.0.register_gauge(key, unit, description)
//! }
//!
//! fn register_histogram(&self, key: Key, description: Option<&'static str>) {
//! fn register_histogram(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
//! if self.is_invalid_key(&key) {
//! return;
//! }
//! self.0.register_histogram(key, description)
//! self.0.register_histogram(key, unit, description)
//! }
//!
//! fn increment_counter(&self, key: Key, value: u64) {
@ -102,7 +107,7 @@
//! .expect("failed to install stack");
//! # }
//! ```
use metrics::{Key, Recorder};
use metrics::{Key, Recorder, Unit};
#[cfg(feature = "std")]
use metrics::SetRecorderError;
@ -155,16 +160,16 @@ impl<R: Recorder + 'static> Stack<R> {
}
impl<R: Recorder> Recorder for Stack<R> {
fn register_counter(&self, key: Key, description: Option<&'static str>) {
self.inner.register_counter(key, description)
fn register_counter(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
self.inner.register_counter(key, unit, description);
}
fn register_gauge(&self, key: Key, description: Option<&'static str>) {
self.inner.register_gauge(key, description)
fn register_gauge(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
self.inner.register_gauge(key, unit, description);
}
fn register_histogram(&self, key: Key, description: Option<&'static str>) {
self.inner.register_histogram(key, description)
fn register_histogram(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
self.inner.register_histogram(key, unit, description);
}
fn increment_counter(&self, key: Key, value: u64) {

View File

@ -1,36 +1,34 @@
use crate::layers::Layer;
use metrics::{Key, Recorder};
use metrics::{Key, Recorder, SharedString, Unit};
/// Applies a prefix to every metric key.
///
/// Keys will be prefixed in the format of `<prefix>.<remaining>`.
pub struct Prefix<R> {
prefix: String,
prefix: SharedString,
inner: R,
}
impl<R> Prefix<R> {
fn prefix_key(&self, key: Key) -> Key {
key.into_owned()
.map_name(|old| format!("{}.{}", self.prefix, old))
.into()
key.into_owned().prepend_name(self.prefix.clone()).into()
}
}
impl<R: Recorder> Recorder for Prefix<R> {
fn register_counter(&self, key: Key, description: Option<&'static str>) {
fn register_counter(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
let new_key = self.prefix_key(key);
self.inner.register_counter(new_key, description)
self.inner.register_counter(new_key, unit, description)
}
fn register_gauge(&self, key: Key, description: Option<&'static str>) {
fn register_gauge(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
let new_key = self.prefix_key(key);
self.inner.register_gauge(new_key, description)
self.inner.register_gauge(new_key, unit, description)
}
fn register_histogram(&self, key: Key, description: Option<&'static str>) {
fn register_histogram(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
let new_key = self.prefix_key(key);
self.inner.register_histogram(new_key, description)
self.inner.register_histogram(new_key, unit, description)
}
fn increment_counter(&self, key: Key, value: u64) {
@ -52,12 +50,12 @@ impl<R: Recorder> Recorder for Prefix<R> {
/// A layer for applying a prefix to every metric key.
///
/// More information on the behavior of the layer can be found in [`Prefix`].
pub struct PrefixLayer(String);
pub struct PrefixLayer(&'static str);
impl PrefixLayer {
/// Creates a new `PrefixLayer` based on the given prefix.
pub fn new<S: Into<String>>(prefix: S) -> PrefixLayer {
PrefixLayer(prefix.into())
PrefixLayer(Box::leak(prefix.into().into_boxed_str()))
}
}
@ -66,7 +64,7 @@ impl<R> Layer<R> for PrefixLayer {
fn layer(&self, inner: R) -> Self::Output {
Prefix {
prefix: self.0.clone(),
prefix: self.0.into(),
inner,
}
}
@ -77,7 +75,7 @@ mod tests {
use super::PrefixLayer;
use crate::debugging::DebuggingRecorder;
use crate::layers::Layer;
use metrics::{KeyData, Recorder};
use metrics::{KeyData, Recorder, Unit};
#[test]
fn test_basic_functionality() {
@ -89,15 +87,35 @@ mod tests {
let before = snapshotter.snapshot();
assert_eq!(before.len(), 0);
layered.register_counter(KeyData::from_name("counter_metric").into(), None);
layered.register_gauge(KeyData::from_name("gauge_metric").into(), None);
layered.register_histogram(KeyData::from_name("histogram_metric").into(), None);
let ud = &[
(Unit::Nanoseconds, "counter desc"),
(Unit::Microseconds, "gauge desc"),
(Unit::Milliseconds, "histogram desc"),
];
layered.register_counter(
KeyData::from_name("counter_metric").into(),
Some(ud[0].0.clone()),
Some(ud[0].1),
);
layered.register_gauge(
KeyData::from_name("gauge_metric").into(),
Some(ud[1].0.clone()),
Some(ud[1].1),
);
layered.register_histogram(
KeyData::from_name("histogram_metric").into(),
Some(ud[2].0.clone()),
Some(ud[2].1),
);
let after = snapshotter.snapshot();
assert_eq!(after.len(), 3);
for (_kind, key, _value) in &after {
assert!(key.name().starts_with("testing"));
for (i, (_kind, key, unit, desc, _value)) in after.iter().enumerate() {
assert!(key.name().to_string().starts_with("testing"));
assert_eq!(&Some(ud[i].0.clone()), unit);
assert_eq!(&Some(ud[i].1), desc);
}
}
}
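A brief note on the `PrefixLayer` change above: the prefix is now stored as a `&'static str` (leaked once via `Box::leak` at construction) so that `Prefix` can hold it as a borrowed `SharedString` and prepend it as a name part without reallocating per key. A minimal usage sketch, assuming the exports used in the tests and module docs (`metrics_util::DebuggingRecorder`, `metrics_util::layers::{Layer, PrefixLayer}`, and `metrics::{KeyData, Recorder}`):

```rust
use metrics::{KeyData, Recorder};
use metrics_util::layers::{Layer, PrefixLayer};
use metrics_util::DebuggingRecorder;

fn main() {
    let recorder = DebuggingRecorder::new();
    let snapshotter = recorder.snapshotter();

    // "testing" is leaked once inside PrefixLayer::new and shared by every Prefix.
    let layered = PrefixLayer::new("testing").layer(recorder);

    layered.register_counter(KeyData::from_name("counter_metric").into(), None, None);

    // Every key now starts with the prefix part, e.g. "testing.counter_metric".
    for (_kind, key, _unit, _desc, _value) in snapshotter.snapshot() {
        assert!(key.name().to_string().starts_with("testing"));
    }
}
```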

View File

@ -1,24 +1,29 @@
//! Helper types and functions used within the metrics ecosystem.
#![deny(missing_docs)]
#![cfg_attr(docsrs, feature(doc_cfg), deny(broken_intra_doc_links))]
#![cfg_attr(not(feature = "std"), no_std)]
#[cfg(feature = "std")]
mod bucket;
#[cfg(feature = "std")]
pub use bucket::AtomicBucket;
#[cfg(feature = "std")]
mod debugging;
#[cfg(feature = "std")]
pub use debugging::{DebugValue, DebuggingRecorder, MetricKind, Snapshotter};
#[cfg(feature = "std")]
mod handle;
#[cfg(feature = "std")]
pub use handle::Handle;
mod streaming;
pub use streaming::StreamingIntegers;
mod quantile;
pub use quantile::{parse_quantiles, Quantile};
mod tree;
pub use tree::{Integer, MetricsTree};
#[cfg(feature = "std")]
mod registry;
#[cfg(feature = "std")]
pub use registry::Registry;
mod key;

View File

@ -1,6 +1,6 @@
use core::hash::Hash;
use dashmap::DashMap;
use std::collections::HashMap;
use std::hash::Hash;
/// A high-performance metric registry.
///

View File

@ -1,295 +0,0 @@
use std::slice;
/// A compressed set of integers.
///
/// For some workloads, working with a large set of integers can require an outsized amount of
/// memory for numbers that are very similar. This data structure takes chunks of integers and
/// compresses them by using delta encoding and variable-byte encoding.
///
/// Delta encoding tracks the difference between successive integers: if you have 1000000 and
/// 1000001, the difference between the two is only 1. Coupled with variable-byte encoding, we can
/// compress those two numbers within 4 bytes, where normally they would require a minimum of 8
/// bytes if they were 32-bit integers, or 16 bytes if they were 64-bit integers. Over large runs
/// of integers where the delta is relatively small compared to the original value, the compression
/// savings add up quickly.
///
/// The original integers can be decompressed and collected, or can be decompressed on-the-fly
/// while passing them to a given function, allowing callers to observe the integers without
/// allocating the entire size of the decompressed set.
///
/// # Performance
/// As this is a scalar implementation, performance depends heavily on not only the input size, but
/// also the delta between values, as well as whether or not the decompressed values are being
/// collected or used on-the-fly.
///
/// Bigger deltas between values mean longer variable-byte sizes, which are harder for the CPU to
/// predict. As the linear benchmarks show, things are much faster when the delta between values
/// is minimal.
///
/// These figures were generated on a 2015 Macbook Pro (Core i7, 2.2GHz base/3.7GHz turbo).
///
/// | | compress (1) | decompress (2) | decompress/sum (3) | decompress_with/sum (4) |
/// |------------------------|--------------|----------------|--------------------|-------------------------|
/// | normal, 100 values | 94 Melem/s | 76 Melem/s | 71 Melem/s | 126 Melem/s |
/// | normal, 10000 values | 92 Melem/s | 85 Melem/s | 109 Melem/s | 109 Melem/s |
/// | normal, 1000000 values | 86 Melem/s | 79 Melem/s | 68 Melem/s | 110 Melem/s |
/// | linear, 100 values | 334 Melem/s | 109 Melem/s | 110 Melem/s | 297 Melem/s |
/// | linear, 10000 values | 654 Melem/s | 174 Melem/s | 374 Melem/s | 390 Melem/s |
/// | linear, 1000000 values | 703 Melem/s | 180 Melem/s | 132 Melem/s | 392 Melem/s |
///
/// The normal values consist of an approximation of real nanosecond-based timing measurements
/// of a web service. The linear values are simply sequential integers ranging from 0 to the
/// configured size of the test run.
///
/// Operations:
/// 1. simply compress the input set, no decompression
/// 2. decompress the entire compressed set into a single vector
/// 3. same as #2 but sum all of the original values at the end
/// 4. use `decompress_with` to sum the numbers incrementally
#[derive(Debug, Default, Clone)]
pub struct StreamingIntegers {
inner: Vec<u8>,
len: usize,
last: Option<i64>,
}
impl StreamingIntegers {
/// Creates a new, empty streaming set.
pub fn new() -> Self {
Default::default()
}
/// Returns the number of elements in the set, also referred to as its 'length'.
pub fn len(&self) -> usize {
self.len
}
/// Returns `true` if the set contains no elements.
pub fn is_empty(&self) -> bool {
self.len == 0
}
/// Compresses a slice of integers, and adds them to the set.
pub fn compress(&mut self, src: &[u64]) {
let src_len = src.len();
if src_len == 0 {
return;
}
self.len += src_len;
// Technically, 64-bit integers can take up to 10 bytes when encoded as variable integers
// if they're at the maximum size, so we need to properly allocate here. As we directly
// operate on a mutable slice of the inner buffer below, we _can't_ afford to lazily
// allocate or guess at the resulting compression, otherwise we'll get a panic at runtime
// for bounds checks.
//
// TODO: we should try and add some heuristic here, because we're potentially
// overallocating by a lot when we plan for the worst case scenario
self.inner.reserve(src_len * 10);
let mut buf_idx = self.inner.len();
let buf_cap = self.inner.capacity();
let mut buf = unsafe {
let buf_ptr = self.inner.as_mut_ptr();
slice::from_raw_parts_mut(buf_ptr, buf_cap)
};
// If we have no last value, then the very first integer we write is the full value and not
// a delta value.
let mut src_idx = 0;
if self.last.is_none() {
let first = src[src_idx] as i64;
self.last = Some(first);
let zigzag = zigzag_encode(first);
buf_idx = vbyte_encode(zigzag, &mut buf, buf_idx);
src_idx += 1;
}
// Set up for our actual compression run.
let mut last = self.last.unwrap();
while src_idx < src_len {
let value = src[src_idx] as i64;
let diff = value - last;
let zigzag = zigzag_encode(diff);
buf_idx = vbyte_encode(zigzag, &mut buf, buf_idx);
last = value;
src_idx += 1;
}
unsafe {
self.inner.set_len(buf_idx);
}
self.last = Some(last);
}
/// Decompresses all of the integers written to the set.
///
/// Returns a vector with all of the original values. For larger sets of integers, this can be
/// slow due to the allocation required. Consider [decompress_with] to incrementally iterate
/// the decompressed set in smaller chunks.
///
/// [decompress_with]: StreamingIntegers::decompress_with
pub fn decompress(&self) -> Vec<u64> {
let mut values = Vec::new();
let mut buf_idx = 0;
let buf_len = self.inner.len();
let buf = self.inner.as_slice();
let mut last = 0;
while buf_idx < buf_len {
let (value, new_idx) = vbyte_decode(&buf, buf_idx);
buf_idx = new_idx;
let delta = zigzag_decode(value);
let original = last + delta;
last = original;
values.push(original as u64);
}
values
}
/// Decompresses all of the integers written to the set, invoking `f` for each batch.
///
/// During decompression, values are batched internally until a limit is reached, and then `f`
/// is called with a reference to the batch. This leads to minimal allocation to decompress
/// the entire set, for use cases where the values can be observed incrementally without issue.
pub fn decompress_with<F>(&self, mut f: F)
where
F: FnMut(&[u64]),
{
let mut values = Vec::with_capacity(1024);
let mut buf_idx = 0;
let buf_len = self.inner.len();
let buf = self.inner.as_slice();
let mut last = 0;
while buf_idx < buf_len {
let (value, new_idx) = vbyte_decode(&buf, buf_idx);
buf_idx = new_idx;
let delta = zigzag_decode(value);
let original = last + delta;
last = original;
values.push(original as u64);
if values.len() == values.capacity() {
f(&values);
values.clear();
}
}
if !values.is_empty() {
f(&values);
}
}
}
#[inline]
fn zigzag_encode(input: i64) -> u64 {
((input << 1) ^ (input >> 63)) as u64
}
#[inline]
fn zigzag_decode(input: u64) -> i64 {
((input >> 1) as i64) ^ (-((input & 1) as i64))
}
#[inline]
fn vbyte_encode(mut input: u64, buf: &mut [u8], mut buf_idx: usize) -> usize {
while input >= 128 {
buf[buf_idx] = 0x80 as u8 | (input as u8 & 0x7F);
buf_idx += 1;
input >>= 7;
}
buf[buf_idx] = input as u8;
buf_idx + 1
}
#[inline]
fn vbyte_decode(buf: &[u8], mut buf_idx: usize) -> (u64, usize) {
let mut tmp = 0;
let mut factor = 0;
loop {
tmp |= u64::from(buf[buf_idx] & 0x7F) << (7 * factor);
if buf[buf_idx] & 0x80 != 0x80 {
return (tmp, buf_idx + 1);
}
buf_idx += 1;
factor += 1;
}
}
#[cfg(test)]
mod tests {
use super::StreamingIntegers;
#[test]
fn test_streaming_integers_new() {
let si = StreamingIntegers::new();
let decompressed = si.decompress();
assert_eq!(decompressed.len(), 0);
}
#[test]
fn test_streaming_integers_single_block() {
let mut si = StreamingIntegers::new();
let decompressed = si.decompress();
assert_eq!(decompressed.len(), 0);
let values = vec![8, 6, 7, 5, 3, 0, 9];
si.compress(&values);
let decompressed = si.decompress();
assert_eq!(decompressed, values);
}
#[test]
fn test_streaming_integers_multiple_blocks() {
let mut si = StreamingIntegers::new();
let decompressed = si.decompress();
assert_eq!(decompressed.len(), 0);
let values = vec![8, 6, 7, 5, 3, 0, 9];
si.compress(&values);
let values2 = vec![6, 6, 6];
si.compress(&values2);
let values3 = vec![];
si.compress(&values3);
let values4 = vec![6, 6, 6, 7, 7, 7, 8, 8, 8];
si.compress(&values4);
let total = vec![values, values2, values3, values4]
.into_iter()
.flatten()
.collect::<Vec<_>>();
let decompressed = si.decompress();
assert_eq!(decompressed, total);
}
#[test]
fn test_streaming_integers_empty_block() {
let mut si = StreamingIntegers::new();
let decompressed = si.decompress();
assert_eq!(decompressed.len(), 0);
let values = vec![];
si.compress(&values);
let decompressed = si.decompress();
assert_eq!(decompressed.len(), 0);
}
}
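To ground the delta/zigzag/variable-byte scheme described in the `StreamingIntegers` docs above, here is a small self-contained sketch encoding the pair 1,000,000 and 1,000,001 mentioned there; it restates the same `zigzag_encode`/`vbyte_encode` logic rather than relying on any crate export.

```rust
fn zigzag_encode(input: i64) -> u64 {
    ((input << 1) ^ (input >> 63)) as u64
}

fn vbyte_encode(mut input: u64, buf: &mut Vec<u8>) {
    while input >= 128 {
        buf.push(0x80 | (input as u8 & 0x7F));
        input >>= 7;
    }
    buf.push(input as u8);
}

fn main() {
    let values: [i64; 2] = [1_000_000, 1_000_001];
    let mut buf = Vec::new();

    // The first value is stored in full, later values as deltas from their predecessor.
    vbyte_encode(zigzag_encode(values[0]), &mut buf);
    vbyte_encode(zigzag_encode(values[1] - values[0]), &mut buf);

    // 1_000_000 zigzags to 2_000_000, which needs three variable-length bytes, and the
    // delta of 1 zigzags to 2, which fits in one byte: 4 bytes total versus 16 bytes
    // for two raw u64 values.
    assert_eq!(buf.len(), 4);
}
```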

View File

@ -1,122 +0,0 @@
use serde::ser::{Serialize, Serializer};
use std::collections::HashMap;
/// An integer metric value.
pub enum Integer {
/// A signed value.
Signed(i64),
/// An unsigned value.
Unsigned(u64),
}
impl From<i64> for Integer {
fn from(i: i64) -> Integer {
Integer::Signed(i)
}
}
impl From<u64> for Integer {
fn from(i: u64) -> Integer {
Integer::Unsigned(i)
}
}
enum TreeEntry {
Value(Integer),
Nested(MetricsTree),
}
impl Serialize for TreeEntry {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
match self {
TreeEntry::Value(value) => match value {
Integer::Signed(i) => serializer.serialize_i64(*i),
Integer::Unsigned(i) => serializer.serialize_u64(*i),
},
TreeEntry::Nested(tree) => tree.serialize(serializer),
}
}
}
/// A tree-structured metrics container.
///
/// Used for building a tree structure out of scoped metrics, where each level in the tree
/// represents a nested scope.
#[derive(Default)]
pub struct MetricsTree {
contents: HashMap<String, TreeEntry>,
}
impl MetricsTree {
/// Inserts a single value into the tree.
pub fn insert_value<V: Into<Integer>>(
&mut self,
mut levels: Vec<String>,
key: String,
value: V,
) {
match levels.len() {
0 => {
self.contents.insert(key, TreeEntry::Value(value.into()));
}
_ => {
let name = levels.remove(0);
let inner = self
.contents
.entry(name)
.or_insert_with(|| TreeEntry::Nested(MetricsTree::default()));
if let TreeEntry::Nested(tree) = inner {
tree.insert_value(levels, key, value);
}
}
}
}
/// Inserts multiple values into the tree.
pub fn insert_values<V: Into<Integer>>(
&mut self,
mut levels: Vec<String>,
values: Vec<(String, V)>,
) {
match levels.len() {
0 => {
for v in values.into_iter() {
self.contents.insert(v.0, TreeEntry::Value(v.1.into()));
}
}
_ => {
let name = levels.remove(0);
let inner = self
.contents
.entry(name)
.or_insert_with(|| TreeEntry::Nested(MetricsTree::default()));
if let TreeEntry::Nested(tree) = inner {
tree.insert_values(levels, values);
}
}
}
}
/// Clears all entries in the tree.
pub fn clear(&mut self) {
self.contents.clear();
}
}
impl Serialize for MetricsTree {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let mut sorted = self.contents.iter().collect::<Vec<_>>();
sorted.sort_by_key(|p| p.0);
serializer.collect_map(sorted)
}
}
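For reference, the (now removed) `MetricsTree` was driven by passing the scope levels as a `Vec<String>`: a value inserted under levels `["server", "http"]` with key `"requests"` ends up nested two maps deep when serialized. A hedged sketch of that usage, assuming the old `metrics_util::MetricsTree` export and that a serde serializer such as `serde_json` is available:

```rust
use metrics_util::MetricsTree; // removed by this change; shown for historical context

fn main() {
    let mut tree = MetricsTree::default();

    // Nested scope: {"server": {"http": {"requests": 42}}}
    tree.insert_value(
        vec!["server".to_string(), "http".to_string()],
        "requests".to_string(),
        42u64,
    );

    // Top-level value: {"uptime_seconds": 123}
    tree.insert_value(Vec::new(), "uptime_seconds".to_string(), 123u64);

    // `MetricsTree` implements `Serialize`, so any serde serializer works here.
    let json = serde_json::to_string(&tree).expect("serialization failed");
    println!("{}", json);
}
```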

View File

@ -7,15 +7,17 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
<!-- next-header -->
## [Unreleased] - ReleaseDate
### Added
- Support for specifying the unit of a measurement during registration. ([#107](https://github.com/metrics-rs/metrics/pull/107))
## [0.12.1] - 2019-11-21
### Changed
- Cost for macros dropped to almost zero when no recorder is installed. (#55)
- Cost for macros dropped to almost zero when no recorder is installed. ([#55](https://github.com/metrics-rs/metrics/pull/55))
## [0.12.0] - 2019-10-18
### Changed
- Improved documentation. (#44, #45, #46)
- Renamed `Recorder::record_counter` to `increment_counter` and `Recorder::record_gauge` to `update_gauge`. (#47)
- Renamed `Recorder::record_counter` to `increment_counter` and `Recorder::record_gauge` to `update_gauge`. ([#47](https://github.com/metrics-rs/metrics/pull/47))
## [0.11.1] - 2019-08-09
### Changed

View File

@ -1,6 +1,6 @@
[package]
name = "metrics"
version = "0.13.0-alpha.5"
version = "0.13.0-alpha.8"
authors = ["Toby Lawrence <toby@nuclearfurnace.com>"]
edition = "2018"
@ -27,10 +27,15 @@ bench = false
name = "macros"
harness = false
[[bench]]
name = "key"
harness = false
[dependencies]
beef = "0.4"
metrics-macros = { version = "0.1.0-alpha.1", path = "../metrics-macros" }
proc-macro-hack = "0.5"
once_cell = "1"
sharded-slab = "0.1"
[dev-dependencies]
log = "0.4"


metrics/benches/key.rs Normal file
View File

@ -0,0 +1,28 @@
use criterion::{criterion_group, criterion_main, Benchmark, Criterion};
use metrics::{NameParts, SharedString};
fn key_benchmark(c: &mut Criterion) {
c.bench(
"key",
Benchmark::new("name_parts/to_string", |b| {
static NAME_PARTS: [SharedString; 2] = [
SharedString::const_str("part1"),
SharedString::const_str("part2"),
];
let name = NameParts::from_static_names(&NAME_PARTS);
b.iter(|| name.to_string())
})
.with_function("name_parts/Display::to_string", |b| {
static NAME_PARTS: [SharedString; 2] = [
SharedString::const_str("part1"),
SharedString::const_str("part2"),
];
let name = NameParts::from_static_names(&NAME_PARTS);
b.iter(|| std::fmt::Display::to_string(&name))
}),
);
}
criterion_group!(benches, key_benchmark);
criterion_main!(benches);

View File

@ -3,15 +3,22 @@ extern crate criterion;
use criterion::{Benchmark, Criterion};
use metrics::{counter, Key, Recorder};
use metrics::{counter, Key, Recorder, Unit};
use rand::{thread_rng, Rng};
#[derive(Default)]
struct TestRecorder;
impl Recorder for TestRecorder {
fn register_counter(&self, _key: Key, _description: Option<&'static str>) {}
fn register_gauge(&self, _key: Key, _description: Option<&'static str>) {}
fn register_histogram(&self, _key: Key, _description: Option<&'static str>) {}
fn register_counter(&self, _key: Key, _unit: Option<Unit>, _description: Option<&'static str>) {
}
fn register_gauge(&self, _key: Key, _unit: Option<Unit>, _description: Option<&'static str>) {}
fn register_histogram(
&self,
_key: Key,
_unit: Option<Unit>,
_description: Option<&'static str>,
) {
}
fn increment_counter(&self, _key: Key, _value: u64) {}
fn update_gauge(&self, _key: Key, _value: f64) {}
fn record_histogram(&self, _key: Key, _value: u64) {}
@ -26,11 +33,13 @@ fn macro_benchmark(c: &mut Criterion) {
c.bench(
"macros",
Benchmark::new("uninitialized/no_labels", |b| {
metrics::clear_recorder();
b.iter(|| {
counter!("counter_bench", 42);
})
})
.with_function("uninitialized/with_static_labels", |b| {
metrics::clear_recorder();
b.iter(|| {
counter!("counter_bench", 42, "request" => "http", "svc" => "admin");
})

View File

@ -1,30 +1,37 @@
use metrics::{counter, gauge, histogram, increment, Key, Recorder};
#[allow(dead_code)]
static RECORDER: PrintRecorder = PrintRecorder;
//! This example is part unit test and part demonstration.
//!
//! We show all of the registration macros, as well as all of the "emission" macros, the ones you
//! would actually call to update a metric.
//!
//! We demonstrate the various permutations of values that can be passed in the macro calls, all of
//! which are documented in detail for the respective macro.
use metrics::{
counter, gauge, histogram, increment, register_counter, register_gauge, register_histogram,
Key, Recorder, Unit,
};
#[derive(Default)]
struct PrintRecorder;
impl Recorder for PrintRecorder {
fn register_counter(&self, key: Key, description: Option<&'static str>) {
fn register_counter(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
println!(
"(counter) registered key {} with description {:?}",
key, description
"(counter) registered key {} with unit {:?} and description {:?}",
key, unit, description
);
}
fn register_gauge(&self, key: Key, description: Option<&'static str>) {
fn register_gauge(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
println!(
"(gauge) registered key {} with description {:?}",
key, description
"(gauge) registered key {} with unit {:?} and description {:?}",
key, unit, description
);
}
fn register_histogram(&self, key: Key, description: Option<&'static str>) {
fn register_histogram(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>) {
println!(
"(histogram) registered key {} with description {:?}",
key, description
"(histogram) registered key {} with unit {:?} and description {:?}",
key, unit, description
);
}
@ -41,36 +48,51 @@ impl Recorder for PrintRecorder {
}
}
#[cfg(feature = "std")]
fn init_print_logger() {
let recorder = PrintRecorder::default();
metrics::set_boxed_recorder(Box::new(recorder)).unwrap()
}
#[cfg(not(feature = "std"))]
fn init_print_logger() {
metrics::set_recorder(&RECORDER).unwrap()
}
fn main() {
let server_name = "web03".to_string();
init_print_logger();
for _ in 0..3 {
increment!("requests_processed");
increment!("requests_processed", "request_type" => "admin");
}
let common_labels = &[("listener", "frontend")];
// Go through registration:
register_counter!("requests_processed", "number of requests processed");
register_counter!("bytes_sent", Unit::Bytes);
register_gauge!("connection_count", common_labels);
register_histogram!(
"svc.execution_time",
Unit::Milliseconds,
"execution time of request handler"
);
register_gauge!("unused_gauge", "service" => "backend");
register_histogram!("unused_histogram", Unit::Seconds, "unused histo", "service" => "middleware");
// All the supported permutations of `increment!`:
increment!("requests_processed");
increment!("requests_processed", "request_type" => "admin");
increment!("requests_processed", "request_type" => "admin", "server" => server_name.clone());
counter!("requests_processed", 1);
counter!("requests_processed", 1, "request_type" => "admin");
counter!("requests_processed", 1, "request_type" => "admin", "server" => server_name.clone());
increment!("requests_processed", common_labels);
// All the supported permutations of `counter!`:
counter!("bytes_sent", 64);
counter!("bytes_sent", 64, "listener" => "frontend");
counter!("bytes_sent", 64, "listener" => "frontend", "server" => server_name.clone());
counter!("bytes_sent", 64, common_labels);
// All the supported permutations of `gauge!`:
gauge!("connection_count", 300.0);
gauge!("connection_count", 300.0, "listener" => "frontend");
gauge!("connection_count", 300.0, "listener" => "frontend", "server" => server_name.clone());
histogram!("service.execution_time", 70);
histogram!("service.execution_time", 70, "type" => "users");
histogram!("service.execution_time", 70, "type" => "users", "server" => server_name.clone());
histogram!(<"service.execution_time">, 70);
histogram!(<"service.execution_time">, 70, "type" => "users");
histogram!(<"service.execution_time">, 70, "type" => "users", "server" => server_name.clone());
gauge!("connection_count", 300.0, common_labels);
// All the supported permutations of `histogram!`:
histogram!("svc.execution_time", 70);
histogram!("svc.execution_time", 70, "type" => "users");
histogram!("svc.execution_time", 70, "type" => "users", "server" => server_name.clone());
histogram!("svc.execution_time", 70, common_labels);
}

metrics/examples/sizes.rs Normal file
View File

@ -0,0 +1,22 @@
//! This example is purely for development.
use metrics::{Key, KeyData, Label, NameParts, SharedString};
use std::borrow::Cow;
fn main() {
println!("KeyData: {} bytes", std::mem::size_of::<KeyData>());
println!("Key: {} bytes", std::mem::size_of::<Key>());
println!("NameParts: {} bytes", std::mem::size_of::<NameParts>());
println!("Label: {} bytes", std::mem::size_of::<Label>());
println!(
"Cow<'static, [Label]>: {} bytes",
std::mem::size_of::<Cow<'static, [Label]>>()
);
println!(
"Vec<SharedString>: {} bytes",
std::mem::size_of::<Vec<SharedString>>()
);
println!(
"[Option<SharedString>; 2]: {} bytes",
std::mem::size_of::<[Option<SharedString>; 2]>()
);
}

View File

@ -1,11 +1,194 @@
use std::borrow::Cow;
use crate::cow::Cow;
/// An allocation-optimized string.
///
/// We specify `ScopedString` to attempt to get the best of both worlds: flexibility to provide a
/// We specify `SharedString` to attempt to get the best of both worlds: flexibility to provide a
/// static or dynamic (owned) string, while retaining the performance benefits of being able to
/// take ownership of owned strings and borrows of completely static strings.
pub type ScopedString = Cow<'static, str>;
///
/// `SharedString` can be converted from either `&'static str` or `String`, and provides a method,
/// `const_str`, for constructing a `SharedString` from a `&'static str` in a `const` fashion.
pub type SharedString = Cow<'static, str>;
/// Units for a given metric.
///
/// While metrics do not necessarily need to be tied to a particular unit to be recorded, some
/// downstream systems natively support defining units and so they can be specified during registration.
#[derive(Clone, Debug, PartialEq)]
pub enum Unit {
/// Count.
Count,
/// Percentage.
Percent,
/// Seconds.
///
/// One second is equal to 1000 milliseconds.
Seconds,
/// Milliseconds.
///
/// One millisecond is equal to 1000 microseconds.
Milliseconds,
/// Microseconds.
///
/// One microsecond is equal to 1000 nanoseconds.
Microseconds,
/// Nanoseconds.
Nanoseconds,
/// Tebibytes.
///
/// One tebibyte is equal to 1024 gigibytes.
Tebibytes,
/// Gigibytes.
///
/// One gigibyte is equal to 1024 mebibytes.
Gigibytes,
/// Mebibytes.
///
/// One mebibyte is equal to 1024 kibibytes.
Mebibytes,
/// Kibibytes.
///
/// One kibibyte is equal to 1024 bytes.
Kibibytes,
/// Bytes.
Bytes,
/// Terabits per second.
///
/// One terabit is equal to 1000 gigabits.
TerabitsPerSecond,
/// Gigabits per second.
///
/// One gigabit is equal to 1000 megabits.
GigabitsPerSecond,
/// Megabits per second.
///
/// One megabit is equal to 1000 kilobits.
MegabitsPerSecond,
/// Kilobits per second.
///
/// One kilobit is equal to 1000 bits.
KilobitsPerSecond,
/// Bits per second.
BitsPerSecond,
/// Count per second.
CountPerSecond,
}
impl Unit {
/// Gets the string form of this `Unit`.
pub fn as_str(&self) -> &str {
match self {
Unit::Count => "count",
Unit::Percent => "percent",
Unit::Seconds => "seconds",
Unit::Milliseconds => "milliseconds",
Unit::Microseconds => "microseconds",
Unit::Nanoseconds => "nanoseconds",
Unit::Tebibytes => "tebibytes",
Unit::Gigibytes => "gigibytes",
Unit::Mebibytes => "mebibytes",
Unit::Kibibytes => "kibibytes",
Unit::Bytes => "bytes",
Unit::TerabitsPerSecond => "terabits_per_second",
Unit::GigabitsPerSecond => "gigabits_per_second",
Unit::MegabitsPerSecond => "megabits_per_second",
Unit::KilobitsPerSecond => "kilobits_per_second",
Unit::BitsPerSecond => "bits_per_second",
Unit::CountPerSecond => "count_per_second",
}
}
/// Gets the canonical string label for the given unit.
///
/// For example, the canonical label for `Seconds` would be `s`, while for `Nanoseconds`,
/// it would be `ns`.
///
/// Not all units have a meaningful display label and so some may be empty.
pub fn as_canonical_label(&self) -> &str {
match self {
Unit::Count => "",
Unit::Percent => "%",
Unit::Seconds => "s",
Unit::Milliseconds => "ms",
Unit::Microseconds => "us",
Unit::Nanoseconds => "ns",
Unit::Tebibytes => "TiB",
Unit::Gigibytes => "GiB",
Unit::Mebibytes => "MiB",
Unit::Kibibytes => "KiB",
Unit::Bytes => "B",
Unit::TerabitsPerSecond => "Tbps",
Unit::GigabitsPerSecond => "Gbps",
Unit::MegabitsPerSecond => "Mbps",
Unit::KilobitsPerSecond => "kbps",
Unit::BitsPerSecond => "bps",
Unit::CountPerSecond => "/s",
}
}
/// Converts the string representation of a unit back into `Unit` if possible.
///
/// The value passed here should match the output of [`Unit::as_str`].
pub fn from_str(s: &str) -> Option<Unit> {
match s {
"count" => Some(Unit::Count),
"percent" => Some(Unit::Percent),
"seconds" => Some(Unit::Seconds),
"milliseconds" => Some(Unit::Milliseconds),
"microseconds" => Some(Unit::Microseconds),
"nanoseconds" => Some(Unit::Nanoseconds),
"tebibytes" => Some(Unit::Tebibytes),
"gigibytes" => Some(Unit::Gigibytes),
"mebibytes" => Some(Unit::Mebibytes),
"kibibytes" => Some(Unit::Kibibytes),
"bytes" => Some(Unit::Bytes),
"terabits_per_second" => Some(Unit::TerabitsPerSecond),
"gigabits_per_second" => Some(Unit::GigabitsPerSecond),
"megabits_per_second" => Some(Unit::MegabitsPerSecond),
"kilobits_per_second" => Some(Unit::KilobitsPerSecond),
"bits_per_second" => Some(Unit::BitsPerSecond),
"count_per_second" => Some(Unit::CountPerSecond),
_ => None,
}
}
/// Whether or not this unit relates to the measurement of time.
pub fn is_time_based(&self) -> bool {
match self {
Unit::Seconds | Unit::Milliseconds | Unit::Microseconds | Unit::Nanoseconds => true,
_ => false,
}
}
/// Whether or not this unit relates to the measurement of data.
pub fn is_data_based(&self) -> bool {
match self {
Unit::Tebibytes
| Unit::Gigibytes
| Unit::Mebibytes
| Unit::Kibibytes
| Unit::Bytes
| Unit::TerabitsPerSecond
| Unit::GigabitsPerSecond
| Unit::MegabitsPerSecond
| Unit::KilobitsPerSecond
| Unit::BitsPerSecond => true,
_ => false,
}
}
/// Whether or not this unit relates to the measurement of data rates.
pub fn is_data_rate_based(&self) -> bool {
match self {
Unit::TerabitsPerSecond
| Unit::GigabitsPerSecond
| Unit::MegabitsPerSecond
| Unit::KilobitsPerSecond
| Unit::BitsPerSecond => true,
_ => false,
}
}
}
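Since `Unit` carries both a machine-readable name and a short display label, exporters can round-trip the string form and pick the label for rendering. A small sketch exercising the methods defined above (the `metrics::Unit` export is the one used throughout this change):

```rust
use metrics::Unit;

fn main() {
    let unit = Unit::Milliseconds;

    // Machine-readable name and its round-trip through `from_str`.
    assert_eq!("milliseconds", unit.as_str());
    assert_eq!(Some(Unit::Milliseconds), Unit::from_str("milliseconds"));

    // Short label suitable for display, e.g. suffixing a rendered value.
    assert_eq!("ms", unit.as_canonical_label());

    // Classification helpers for exporters that treat time/data units specially.
    assert!(unit.is_time_based());
    assert!(!Unit::Bytes.is_time_based());
    assert!(Unit::Bytes.is_data_based());
}
```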
/// An object which can be converted into a `u64` representation.
///
@ -22,7 +205,7 @@ impl IntoU64 for u64 {
}
}
impl IntoU64 for std::time::Duration {
impl IntoU64 for core::time::Duration {
fn into_u64(self) -> u64 {
self.as_nanos() as u64
}
@ -33,3 +216,37 @@ impl IntoU64 for std::time::Duration {
pub fn __into_u64<V: IntoU64>(value: V) -> u64 {
value.into_u64()
}
#[cfg(test)]
mod tests {
use super::Unit;
#[test]
fn test_unit_conversions() {
let all_variants = vec![
Unit::Count,
Unit::Percent,
Unit::Seconds,
Unit::Milliseconds,
Unit::Microseconds,
Unit::Nanoseconds,
Unit::Tebibytes,
Unit::Gigibytes,
Unit::Mebibytes,
Unit::Kibibytes,
Unit::Bytes,
Unit::TerabitsPerSecond,
Unit::GigabitsPerSecond,
Unit::MegabitsPerSecond,
Unit::KilobitsPerSecond,
Unit::BitsPerSecond,
Unit::CountPerSecond,
];
for variant in all_variants {
let s = variant.as_str();
let parsed = Unit::from_str(s);
assert_eq!(Some(variant), parsed);
}
}
}

metrics/src/cow.rs Normal file
View File

@ -0,0 +1,515 @@
use crate::label::Label;
use alloc::borrow::Borrow;
use alloc::string::String;
use alloc::vec::Vec;
use core::cmp::Ordering;
use core::fmt;
use core::hash::{Hash, Hasher};
use core::marker::PhantomData;
use core::mem::ManuallyDrop;
use core::ptr::{slice_from_raw_parts, NonNull};
/// A clone-on-write smart pointer with an optimized memory layout.
pub struct Cow<'a, T: Cowable + ?Sized + 'a> {
/// Pointer to data.
ptr: NonNull<T::Pointer>,
/// Pointer metadata: length and capacity.
meta: Metadata,
/// Lifetime marker.
marker: PhantomData<&'a T>,
}
impl<T> Cow<'_, T>
where
T: Cowable + ?Sized,
{
#[inline]
pub fn owned(val: T::Owned) -> Self {
let (ptr, meta) = T::owned_into_parts(val);
Cow {
ptr,
meta,
marker: PhantomData,
}
}
}
impl<'a, T> Cow<'a, T>
where
T: Cowable + ?Sized,
{
#[inline]
pub fn borrowed(val: &'a T) -> Self {
let (ptr, meta) = T::ref_into_parts(val);
Cow {
ptr,
meta,
marker: PhantomData,
}
}
#[inline]
pub fn into_owned(self) -> T::Owned {
let cow = ManuallyDrop::new(self);
if cow.is_borrowed() {
unsafe { T::clone_from_parts(cow.ptr, &cow.meta) }
} else {
unsafe { T::owned_from_parts(cow.ptr, &cow.meta) }
}
}
#[inline]
pub fn is_borrowed(&self) -> bool {
self.meta.capacity() == 0
}
#[inline]
pub fn is_owned(&self) -> bool {
self.meta.capacity() != 0
}
#[inline]
fn borrow(&self) -> &T {
unsafe { &*T::ref_from_parts(self.ptr, &self.meta) }
}
}
// Implementations of constant functions for creating `Cow` via static strings, static string
// slices, and static label slices.
impl<'a> Cow<'a, str> {
pub const fn const_str(val: &'a str) -> Self {
Cow {
// We are casting *const T to *mut T, however for all borrowed values
// this raw pointer is only ever dereferenced back to &T.
ptr: unsafe { NonNull::new_unchecked(val.as_ptr() as *mut u8) },
meta: Metadata::from_ref(val.len()),
marker: PhantomData,
}
}
}
impl<'a> Cow<'a, [Cow<'static, str>]> {
pub const fn const_slice(val: &'a [Cow<'static, str>]) -> Self {
Cow {
ptr: unsafe { NonNull::new_unchecked(val.as_ptr() as *mut Cow<'static, str>) },
meta: Metadata::from_ref(val.len()),
marker: PhantomData,
}
}
}
impl<'a> Cow<'a, [Label]> {
pub const fn const_slice(val: &'a [Label]) -> Self {
Cow {
ptr: unsafe { NonNull::new_unchecked(val.as_ptr() as *mut Label) },
meta: Metadata::from_ref(val.len()),
marker: PhantomData,
}
}
}
impl<T> Hash for Cow<'_, T>
where
T: Hash + Cowable + ?Sized,
{
#[inline]
fn hash<H: Hasher>(&self, state: &mut H) {
self.borrow().hash(state)
}
}
impl<'a, T> Default for Cow<'a, T>
where
T: Cowable + ?Sized,
&'a T: Default,
{
#[inline]
fn default() -> Self {
Cow::borrowed(Default::default())
}
}
impl<T> Eq for Cow<'_, T> where T: Eq + Cowable + ?Sized {}
impl<A, B> PartialOrd<Cow<'_, B>> for Cow<'_, A>
where
A: Cowable + ?Sized + PartialOrd<B>,
B: Cowable + ?Sized,
{
#[inline]
fn partial_cmp(&self, other: &Cow<'_, B>) -> Option<Ordering> {
PartialOrd::partial_cmp(self.borrow(), other.borrow())
}
}
impl<T> Ord for Cow<'_, T>
where
T: Ord + Cowable + ?Sized,
{
#[inline]
fn cmp(&self, other: &Self) -> Ordering {
Ord::cmp(self.borrow(), other.borrow())
}
}
impl<'a, T> From<&'a T> for Cow<'a, T>
where
T: Cowable + ?Sized,
{
#[inline]
fn from(val: &'a T) -> Self {
Cow::borrowed(val)
}
}
impl From<String> for Cow<'_, str> {
#[inline]
fn from(s: String) -> Self {
Cow::owned(s)
}
}
impl From<Vec<Label>> for Cow<'_, [Label]> {
#[inline]
fn from(v: Vec<Label>) -> Self {
Cow::owned(v)
}
}
impl From<Vec<Cow<'static, str>>> for Cow<'_, [Cow<'static, str>]> {
#[inline]
fn from(v: Vec<Cow<'static, str>>) -> Self {
Cow::owned(v)
}
}
impl<T> Drop for Cow<'_, T>
where
T: Cowable + ?Sized,
{
#[inline]
fn drop(&mut self) {
if self.is_owned() {
unsafe { T::owned_from_parts(self.ptr, &self.meta) };
}
}
}
impl<'a, T> Clone for Cow<'a, T>
where
T: Cowable + ?Sized,
{
#[inline]
fn clone(&self) -> Self {
if self.is_owned() {
// Gotta clone the actual inner value.
Cow::owned(unsafe { T::clone_from_parts(self.ptr, &self.meta) })
} else {
Cow { ..*self }
}
}
}
impl<T> core::ops::Deref for Cow<'_, T>
where
T: Cowable + ?Sized,
{
type Target = T;
#[inline]
fn deref(&self) -> &T {
self.borrow()
}
}
impl<T> AsRef<T> for Cow<'_, T>
where
T: Cowable + ?Sized,
{
#[inline]
fn as_ref(&self) -> &T {
self.borrow()
}
}
impl<T> Borrow<T> for Cow<'_, T>
where
T: Cowable + ?Sized,
{
#[inline]
fn borrow(&self) -> &T {
self.borrow()
}
}
impl<A, B> PartialEq<Cow<'_, B>> for Cow<'_, A>
where
A: Cowable + ?Sized,
B: Cowable + ?Sized,
A: PartialEq<B>,
{
fn eq(&self, other: &Cow<B>) -> bool {
self.borrow() == other.borrow()
}
}
impl<T> fmt::Debug for Cow<'_, T>
where
T: Cowable + fmt::Debug + ?Sized,
{
#[inline]
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
self.borrow().fmt(f)
}
}
impl<T> fmt::Display for Cow<'_, T>
where
T: Cowable + fmt::Display + ?Sized,
{
#[inline]
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
self.borrow().fmt(f)
}
}
unsafe impl<T: Cowable + Sync + ?Sized> Sync for Cow<'_, T> {}
unsafe impl<T: Cowable + Send + ?Sized> Send for Cow<'_, T> {}
/// Helper trait required by `Cow<T>` to extract capacity of owned
/// variant of `T`, and manage conversions.
///
/// This can only be implemented on types that meet the following requirements:
///
/// + `T::Owned` has a `capacity`, which is an extra word that is absent in `T`.
/// + `T::Owned` with `capacity` of `0` does not allocate memory.
/// + `T::Owned` can be reconstructed from `*mut T` borrowed out of it, plus capacity.
pub unsafe trait Cowable {
type Pointer;
type Owned;
fn ref_into_parts(&self) -> (NonNull<Self::Pointer>, Metadata);
fn owned_into_parts(owned: Self::Owned) -> (NonNull<Self::Pointer>, Metadata);
unsafe fn ref_from_parts(ptr: NonNull<Self::Pointer>, metadata: &Metadata) -> *const Self;
unsafe fn owned_from_parts(ptr: NonNull<Self::Pointer>, metadata: &Metadata) -> Self::Owned;
unsafe fn clone_from_parts(ptr: NonNull<Self::Pointer>, metadata: &Metadata) -> Self::Owned;
}
unsafe impl Cowable for str {
type Pointer = u8;
type Owned = String;
#[inline]
fn ref_into_parts(&self) -> (NonNull<u8>, Metadata) {
// A note on soundness:
//
// We are casting *const T to *mut T, however for all borrowed values
// this raw pointer is only ever dereferenced back to &T.
let ptr = unsafe { NonNull::new_unchecked(self.as_ptr() as *mut _) };
let metadata = Metadata::from_ref(self.len());
(ptr, metadata)
}
#[inline]
unsafe fn ref_from_parts(ptr: NonNull<u8>, metadata: &Metadata) -> *const str {
slice_from_raw_parts(ptr.as_ptr(), metadata.length()) as *const _
}
#[inline]
fn owned_into_parts(owned: String) -> (NonNull<u8>, Metadata) {
let mut owned = ManuallyDrop::new(owned);
let ptr = unsafe { NonNull::new_unchecked(owned.as_mut_ptr()) };
let metadata = Metadata::from_owned(owned.len(), owned.capacity());
(ptr, metadata)
}
#[inline]
unsafe fn owned_from_parts(ptr: NonNull<u8>, metadata: &Metadata) -> String {
String::from_utf8_unchecked(Vec::from_raw_parts(
ptr.as_ptr(),
metadata.length(),
metadata.capacity(),
))
}
#[inline]
unsafe fn clone_from_parts(ptr: NonNull<u8>, metadata: &Metadata) -> Self::Owned {
let str = Self::ref_from_parts(ptr, metadata);
str.as_ref().unwrap().to_string()
}
}
unsafe impl<'a> Cowable for [Cow<'a, str>] {
type Pointer = Cow<'a, str>;
type Owned = Vec<Cow<'a, str>>;
#[inline]
fn ref_into_parts(&self) -> (NonNull<Cow<'a, str>>, Metadata) {
// A note on soundness:
//
// We are casting *const T to *mut T, however for all borrowed values
// this raw pointer is only ever dereferenced back to &T.
let ptr = unsafe { NonNull::new_unchecked(self.as_ptr() as *mut _) };
let metadata = Metadata::from_ref(self.len());
(ptr, metadata)
}
#[inline]
unsafe fn ref_from_parts(
ptr: NonNull<Cow<'a, str>>,
metadata: &Metadata,
) -> *const [Cow<'a, str>] {
slice_from_raw_parts(ptr.as_ptr(), metadata.length())
}
#[inline]
fn owned_into_parts(owned: Vec<Cow<'a, str>>) -> (NonNull<Cow<'a, str>>, Metadata) {
let mut owned = ManuallyDrop::new(owned);
let ptr = unsafe { NonNull::new_unchecked(owned.as_mut_ptr()) };
let metadata = Metadata::from_owned(owned.len(), owned.capacity());
(ptr, metadata)
}
#[inline]
unsafe fn owned_from_parts(
ptr: NonNull<Cow<'a, str>>,
metadata: &Metadata,
) -> Vec<Cow<'a, str>> {
Vec::from_raw_parts(ptr.as_ptr(), metadata.length(), metadata.capacity())
}
#[inline]
unsafe fn clone_from_parts(ptr: NonNull<Cow<'a, str>>, metadata: &Metadata) -> Self::Owned {
let ptr = Self::ref_from_parts(ptr, metadata);
let xs = ptr.as_ref().unwrap();
let mut owned = Vec::with_capacity(xs.len() + 1);
owned.extend_from_slice(xs);
owned
}
}
unsafe impl Cowable for [Label] {
type Pointer = Label;
type Owned = Vec<Label>;
#[inline]
fn ref_into_parts(&self) -> (NonNull<Label>, Metadata) {
// A note on soundness:
//
// We are casting *const T to *mut T, however for all borrowed values
// this raw pointer is only ever dereferenced back to &T.
let ptr = unsafe { NonNull::new_unchecked(self.as_ptr() as *mut _) };
let metadata = Metadata::from_ref(self.len());
(ptr, metadata)
}
#[inline]
unsafe fn ref_from_parts(ptr: NonNull<Label>, metadata: &Metadata) -> *const [Label] {
slice_from_raw_parts(ptr.as_ptr(), metadata.length())
}
#[inline]
fn owned_into_parts(owned: Vec<Label>) -> (NonNull<Label>, Metadata) {
let mut owned = ManuallyDrop::new(owned);
let ptr = unsafe { NonNull::new_unchecked(owned.as_mut_ptr()) };
let metadata = Metadata::from_owned(owned.len(), owned.capacity());
(ptr, metadata)
}
#[inline]
unsafe fn owned_from_parts(ptr: NonNull<Label>, metadata: &Metadata) -> Vec<Label> {
Vec::from_raw_parts(ptr.as_ptr(), metadata.length(), metadata.capacity())
}
#[inline]
unsafe fn clone_from_parts(ptr: NonNull<Label>, metadata: &Metadata) -> Self::Owned {
let xs = Self::ref_from_parts(ptr, metadata);
xs.as_ref().unwrap().to_vec()
}
}
#[derive(Clone, Copy, PartialEq, Eq)]
pub struct Metadata(usize, usize);
impl Metadata {
#[inline]
fn length(&self) -> usize {
self.0
}
#[inline]
fn capacity(&self) -> usize {
self.1
}
pub const fn from_ref(len: usize) -> Metadata {
Metadata(len, 0)
}
pub const fn from_owned(len: usize, capacity: usize) -> Metadata {
Metadata(len, capacity)
}
pub const fn borrowed() -> Metadata {
Metadata(0, 0)
}
pub const fn owned() -> Metadata {
Metadata(0, 1)
}
}
/*
This can be enabled again when we have a way to do panics/asserts in stable Rust,
since const panicking is behind a feature flag at the moment.
const MASK_LO: usize = u32::MAX as usize;
const MASK_HI: usize = !MASK_LO;
#[cfg(target_pointer_width = "64")]
impl Metadata {
#[inline]
fn length(&self) -> usize {
self.0 & MASK_LO
}
#[inline]
fn capacity(&self) -> usize {
self.0 & MASK_HI
}
pub const fn from_ref(len: usize) -> Metadata {
if len & MASK_HI != 0 {
panic!("Cow: length out of bounds for referenced value");
}
Metadata(len)
}
pub const fn from_owned(len: usize, capacity: usize) -> Metadata {
if len & MASK_HI != 0 {
panic!("Cow: length out of bounds for owned value");
}
if capacity & MASK_HI != 0 {
panic!("Cow: capacity out of bounds for owned value");
}
Metadata((capacity & MASK_LO) << 32 | len & MASK_LO)
}
pub const fn borrowed() -> Metadata {
Metadata(0)
}
pub const fn owned() -> Metadata {
Metadata(1 << 32)
}
}*/
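The key property of this custom `Cow` is that borrowed versus owned is encoded entirely in the metadata word (a capacity of `0` means borrowed), which is what makes the `const` constructors possible. A small sketch of how the constructors behave; the `cow` module is crate-internal, so this assumes code living inside the `metrics` crate (for example, a unit test in the module itself):

```rust
// Inside the `metrics` crate, e.g. in a unit test for the `cow` module.
use crate::cow::Cow;

fn demo() {
    // `const_str` works in statics because it only stores a pointer plus length.
    static NAME: Cow<'static, str> = Cow::const_str("requests");
    assert!(NAME.is_borrowed());

    // Borrowed values never allocate; their capacity stays 0.
    let borrowed = Cow::borrowed("requests");
    assert!(borrowed.is_borrowed());

    // Owned values keep the String's capacity, so they are freed on drop.
    let owned: Cow<'_, str> = Cow::owned(String::from("requests"));
    assert!(owned.is_owned());

    // `into_owned` clones for borrowed data and simply unwraps for owned data.
    let s: String = owned.into_owned();
    assert_eq!("requests", s);
}
```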

View File

@ -1,44 +1,151 @@
use crate::{IntoLabels, Label, ScopedString};
use std::{
use crate::{cow::Cow, IntoLabels, Label, SharedString};
use alloc::{string::String, vec::Vec};
use core::{
fmt,
hash::{Hash, Hasher},
ops,
slice::Iter,
};
/// A metric key data.
const NO_LABELS: [Label; 0] = [];
/// Parts comprising a metric name.
#[derive(PartialEq, Eq, Hash, Clone, Debug)]
pub struct NameParts(Cow<'static, [SharedString]>);
impl NameParts {
/// Creates a [`NameParts`] from the given name.
pub fn from_name<N: Into<SharedString>>(name: N) -> Self {
NameParts(Cow::owned(vec![name.into()]))
}
/// Creates a [`NameParts`] from the given static name.
pub const fn from_static_names(names: &'static [SharedString]) -> Self {
NameParts(Cow::<'static, [SharedString]>::const_slice(names))
}
/// Appends a name part.
pub fn append<S: Into<SharedString>>(self, part: S) -> Self {
let mut parts = self.0.into_owned();
parts.push(part.into());
NameParts(Cow::owned(parts))
}
/// Prepends a name part.
pub fn prepend<S: Into<SharedString>>(self, part: S) -> Self {
let mut parts = self.0.into_owned();
parts.insert(0, part.into());
NameParts(Cow::owned(parts))
}
/// Gets a reference to the parts for this name.
pub fn parts(&self) -> Iter<'_, SharedString> {
self.0.iter()
}
/// Renders the name parts as a dot-delimited string.
pub fn to_string(&self) -> String {
// It's faster to allocate the string by hand instead of collecting the parts and joining
// them, or deferring to Display::to_string, or anything else. This may change in the
// future, or benefit from some sort of string pooling, but otherwise, this seemingly
// suboptimal approach -- oh no, a single allocation! :P -- works pretty well overall.
let mut first = true;
let mut s = String::with_capacity(16);
for p in self.0.iter() {
if !first {
s.push_str(".");
}
first = false;
s.push_str(p.as_ref());
}
s
}
}
impl From<String> for NameParts {
fn from(name: String) -> NameParts {
NameParts::from_name(name)
}
}
impl From<&'static str> for NameParts {
fn from(name: &'static str) -> NameParts {
NameParts::from_name(name)
}
}
impl From<&'static [SharedString]> for NameParts {
fn from(names: &'static [SharedString]) -> NameParts {
NameParts::from_static_names(names)
}
}
impl fmt::Display for NameParts {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
let s = self.to_string();
f.write_str(s.as_str())?;
Ok(())
}
}
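Putting `NameParts` together: names can be built from a single string or composed part by part, and `to_string` joins the parts with dots when rendering. A short sketch using the constructors above, assuming the `metrics::{NameParts, SharedString}` exports used in the `sizes` example and the `key` benchmark:

```rust
use metrics::{NameParts, SharedString};

// Static name parts can be declared once and shared without allocation.
static STATIC_NAME: [SharedString; 2] = [
    SharedString::const_str("hyper"),
    SharedString::const_str("sent_bytes"),
];

fn main() {
    // Built dynamically, then extended on either end.
    let name = NameParts::from_name("sent_bytes")
        .prepend("hyper")
        .append("total");
    assert_eq!("hyper.sent_bytes.total", name.to_string());

    // Built from a static slice of parts.
    let name = NameParts::from_static_names(&STATIC_NAME);
    assert_eq!(2, name.parts().count());
    assert_eq!("hyper.sent_bytes", name.to_string());
}
```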
/// Inner representation of [`Key`].
///
/// A key data always includes a name, but can optionally include multiple
/// labels used to further describe the metric.
/// While [`Key`] is the type that users will interact with via [`Recorder`][crate::Recorder],
/// [`KeyData`] is responsible for the actual storage of the name and label data.
#[derive(PartialEq, Eq, Hash, Clone, Debug)]
pub struct KeyData {
name: ScopedString,
labels: Vec<Label>,
// TODO: once const slicing is possible on stable, we could likely use `beef` for both of these
name_parts: NameParts,
labels: Cow<'static, [Label]>,
}
impl KeyData {
/// Creates a `KeyData` from a name.
/// Creates a [`KeyData`] from a name.
pub fn from_name<N>(name: N) -> Self
where
N: Into<ScopedString>,
{
Self::from_name_and_labels(name, Vec::new())
}
/// Creates a `KeyData` from a name and vector of `Label`s.
pub fn from_name_and_labels<N, L>(name: N, labels: L) -> Self
where
N: Into<ScopedString>,
L: IntoLabels,
N: Into<SharedString>,
{
Self {
name: name.into(),
labels: labels.into_labels(),
name_parts: NameParts::from_name(name),
labels: Cow::owned(Vec::new()),
}
}
/// Name of this key.
pub fn name(&self) -> &ScopedString {
&self.name
/// Creates a [`KeyData`] from a name and set of labels.
pub fn from_parts<N, L>(name: N, labels: L) -> Self
where
N: Into<NameParts>,
L: IntoLabels,
{
Self {
name_parts: name.into(),
labels: Cow::owned(labels.into_labels()),
}
}
/// Creates a [`KeyData`] from a static name.
///
/// This function is `const`, so it can be used in a static context.
pub const fn from_static_name(name_parts: &'static [SharedString]) -> Self {
Self::from_static_parts(name_parts, &NO_LABELS)
}
/// Creates a [`KeyData`] from a static name and static set of labels.
///
/// This function is `const`, so it can be used in a static context.
pub const fn from_static_parts(
name_parts: &'static [SharedString],
labels: &'static [Label],
) -> Self {
Self {
name_parts: NameParts::from_static_names(name_parts),
labels: Cow::<[Label]>::const_slice(labels),
}
}
/// Name parts of this key.
pub fn name(&self) -> &NameParts {
&self.name_parts
}
/// Labels of this key, if they exist.
@ -46,45 +153,54 @@ impl KeyData {
self.labels.iter()
}
/// Map the name of this key to a new name, based on `f`.
///
/// The value returned by `f` becomes the new name of the key.
pub fn map_name<F>(mut self, f: F) -> Self
where
F: Fn(ScopedString) -> String,
{
let new_name = f(self.name);
self.name = new_name.into();
self
/// Appends a part to the name.
pub fn append_name<S: Into<SharedString>>(self, part: S) -> Self {
let name_parts = self.name_parts.append(part);
Self {
name_parts,
labels: self.labels,
}
}
/// Consumes this `Key`, returning the name and any labels.
pub fn into_parts(self) -> (ScopedString, Vec<Label>) {
(self.name, self.labels)
/// Prepends a part to the name.
pub fn prepend_name<S: Into<SharedString>>(self, part: S) -> Self {
let name_parts = self.name_parts.prepend(part);
Self {
name_parts,
labels: self.labels,
}
}
/// Returns a clone of this key with some additional labels.
/// Consumes this [`KeyData`], returning the name parts and any labels.
pub fn into_parts(self) -> (NameParts, Vec<Label>) {
(self.name_parts.clone(), self.labels.into_owned())
}
/// Clones this [`KeyData`], and expands the existing set of labels.
pub fn with_extra_labels(&self, extra_labels: Vec<Label>) -> Self {
if extra_labels.is_empty() {
return self.clone();
}
let name = self.name.clone();
let mut labels = self.labels.clone();
let name_parts = self.name_parts.clone();
let mut labels = self.labels.clone().into_owned();
labels.extend(extra_labels);
Self { name, labels }
Self {
name_parts,
labels: labels.into(),
}
}
}
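
A sketch of how the new `const` constructors might be used at a callsite to avoid per-emission allocations; the names here are illustrative, and it assumes `Key`, `KeyData`, `Label`, and `SharedString` are re-exported from the crate root:

```rust
use metrics::{Key, KeyData, Label, Recorder, SharedString};

// Built entirely at compile time; no allocation when the key is used.
static NAME: [SharedString; 2] = [
    SharedString::const_str("http"),
    SharedString::const_str("requests_total"),
];
static LABELS: [Label; 1] = [Label::from_static_parts("service", "http")];
static REQUESTS: KeyData = KeyData::from_static_parts(&NAME, &LABELS);

fn record(recorder: &dyn Recorder) {
    // Borrowing the static key avoids cloning name parts and labels on every emission.
    recorder.increment_counter(Key::Borrowed(&REQUESTS), 1);
}
```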
impl fmt::Display for KeyData {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
if self.labels.is_empty() {
write!(f, "KeyData({})", self.name)
write!(f, "KeyData({})", self.name_parts)
} else {
write!(f, "KeyData({}, [", self.name)?;
write!(f, "KeyData({}, [", self.name_parts)?;
let mut first = true;
for label in &self.labels {
for label in self.labels.as_ref() {
if first {
write!(f, "{} = {}", label.0, label.1)?;
first = false;
@ -111,38 +227,40 @@ impl From<&'static str> for KeyData {
impl<N, L> From<(N, L)> for KeyData
where
N: Into<ScopedString>,
N: Into<SharedString>,
L: IntoLabels,
{
fn from(parts: (N, L)) -> Self {
Self::from_name_and_labels(parts.0, parts.1)
Self {
name_parts: NameParts::from_name(parts.0),
labels: Cow::owned(parts.1.into_labels()),
}
}
}
/// Represents the identifier of a metric.
/// A metric identifier.
///
/// [`Key`] holds either an owned or static reference variant of [`KeyData`].
/// While [`KeyData`] holds the actual name and label data for a metric, [`Key`] works similarly to
/// [`std::borrow::Cow`] in that we can either hold an owned version of the key data, or a static
/// reference to key data initialized elsewhere.
///
/// This allows for flexibility in the ways that [`KeyData`] can be passed around
/// and reused, enabling performance improvements in specific situations.
/// This allows for flexibility in the ways that [`KeyData`] can be passed around and reused, which
/// allows us to enable performance optimizations in specific circumstances.
#[derive(Debug, Clone)]
pub enum Key {
/// A statically borrowed [`KeyData`].
///
/// If you are capable of keeping a static [`KeyData`] around, this variant can
/// be used to reduce allocations and improve performance.
///
/// The reference is read-only, so you can't modify the underlying data.
/// If you are capable of keeping a static [`KeyData`] around, this variant can be used to
/// reduce allocations and improve performance.
Borrowed(&'static KeyData),
/// An owned [`KeyData`].
///
/// Useful when you need to modify a borrowed [`KeyData`] in-flight, or when
/// there's no way to keep around a static [`KeyData`] reference.
/// Useful when you need to modify a borrowed [`KeyData`] in-flight, or when there's no way to
/// keep around a static [`KeyData`] reference.
Owned(KeyData),
}
impl PartialEq for Key {
/// We deliberately hide the differences between the containment types.
fn eq(&self, other: &Self) -> bool {
self.as_ref() == other.as_ref()
}
@ -162,7 +280,7 @@ impl Hash for Key {
impl Key {
/// Converts any kind of [`Key`] into an owned [`KeyData`].
///
/// Owned variant returned as is, borrowed variant is cloned.
/// If this key is owned, the value is returned as-is; otherwise, the contents are cloned.
pub fn into_owned(self) -> KeyData {
match self {
Self::Borrowed(val) => val.clone(),
@ -171,7 +289,7 @@ impl Key {
}
}
impl std::ops::Deref for Key {
impl ops::Deref for Key {
type Target = KeyData;
#[must_use]
@ -202,12 +320,6 @@ impl fmt::Display for Key {
}
}
// Here we don't provide generic `From` impls
// (i.e. `impl <T: Into<KeyData>> From<T> for Key`) because the decision whether
// to construct the owned or borrowed ref is important for performance, and
// we want users of this type to explicitly make this decision rather than rely
// on the magic of `.into()`.
impl From<KeyData> for Key {
fn from(key_data: KeyData) -> Self {
Self::Owned(key_data)
@ -220,44 +332,39 @@ impl From<&'static KeyData> for Key {
}
}
/// A type to simplify management of the static `KeyData`.
///
/// Allows for an efficient caching of the `KeyData` at the callsites.
pub type OnceKeyData = once_cell::sync::OnceCell<KeyData>;
#[cfg(test)]
mod tests {
use super::{Key, KeyData, OnceKeyData};
use crate::Label;
use super::{Key, KeyData};
use crate::{Label, SharedString};
use std::collections::HashMap;
static BORROWED_BASIC: OnceKeyData = OnceKeyData::new();
static BORROWED_LABELS: OnceKeyData = OnceKeyData::new();
static BORROWED_NAME: [SharedString; 1] = [SharedString::const_str("name")];
static FOOBAR_NAME: [SharedString; 1] = [SharedString::const_str("foobar")];
static BORROWED_BASIC: KeyData = KeyData::from_static_name(&BORROWED_NAME);
static LABELS: [Label; 1] = [Label::from_static_parts("key", "value")];
static BORROWED_LABELS: KeyData = KeyData::from_static_parts(&BORROWED_NAME, &LABELS);
#[test]
fn test_keydata_eq_and_hash() {
let mut keys = HashMap::new();
let owned_basic = KeyData::from_name("name");
let borrowed_basic = BORROWED_BASIC.get_or_init(|| KeyData::from_name("name"));
assert_eq!(&owned_basic, borrowed_basic);
assert_eq!(&owned_basic, &BORROWED_BASIC);
let previous = keys.insert(owned_basic, 42);
assert!(previous.is_none());
let previous = keys.get(&borrowed_basic);
let previous = keys.get(&BORROWED_BASIC);
assert_eq!(previous, Some(&42));
let labels = vec![Label::new("key", "value")];
let owned_labels = KeyData::from_name_and_labels("name", labels.clone());
let borrowed_labels =
BORROWED_LABELS.get_or_init(|| KeyData::from_name_and_labels("name", labels.clone()));
assert_eq!(&owned_labels, borrowed_labels);
let labels = LABELS.to_vec();
let owned_labels = KeyData::from_parts(&BORROWED_NAME[..], labels);
assert_eq!(&owned_labels, &BORROWED_LABELS);
let previous = keys.insert(owned_labels, 43);
assert!(previous.is_none());
let previous = keys.get(&borrowed_labels);
let previous = keys.get(&BORROWED_LABELS);
assert_eq!(previous, Some(&43));
}
@ -265,8 +372,8 @@ mod tests {
fn test_key_eq_and_hash() {
let mut keys = HashMap::new();
let owned_basic = Key::from(KeyData::from_name("name"));
let borrowed_basic = Key::from(BORROWED_BASIC.get_or_init(|| KeyData::from_name("name")));
let owned_basic: Key = KeyData::from_name("name").into();
let borrowed_basic: Key = Key::from(&BORROWED_BASIC);
assert_eq!(owned_basic, borrowed_basic);
let previous = keys.insert(owned_basic, 42);
@ -275,11 +382,9 @@ mod tests {
let previous = keys.get(&borrowed_basic);
assert_eq!(previous, Some(&42));
let labels = vec![Label::new("key", "value")];
let owned_labels = Key::from(KeyData::from_name_and_labels("name", labels.clone()));
let borrowed_labels = Key::from(
BORROWED_LABELS.get_or_init(|| KeyData::from_name_and_labels("name", labels.clone())),
);
let labels = LABELS.to_vec();
let owned_labels = Key::from(KeyData::from_parts(&BORROWED_NAME[..], labels));
let borrowed_labels = Key::from(&BORROWED_LABELS);
assert_eq!(owned_labels, borrowed_labels);
let previous = keys.insert(owned_labels, 43);
@ -295,19 +400,19 @@ mod tests {
let result1 = key1.to_string();
assert_eq!(result1, "KeyData(foobar)");
let key2 = KeyData::from_name_and_labels("foobar", vec![Label::new("system", "http")]);
let key2 = KeyData::from_parts(&FOOBAR_NAME[..], vec![Label::new("system", "http")]);
let result2 = key2.to_string();
assert_eq!(result2, "KeyData(foobar, [system = http])");
let key3 = KeyData::from_name_and_labels(
"foobar",
let key3 = KeyData::from_parts(
&FOOBAR_NAME[..],
vec![Label::new("system", "http"), Label::new("user", "joe")],
);
let result3 = key3.to_string();
assert_eq!(result3, "KeyData(foobar, [system = http, user = joe])");
let key4 = KeyData::from_name_and_labels(
"foobar",
let key4 = KeyData::from_parts(
&FOOBAR_NAME[..],
vec![
Label::new("black", "black"),
Label::new("lives", "lives"),
@ -326,27 +431,26 @@ mod tests {
let owned_a = KeyData::from_name("a");
let owned_b = KeyData::from_name("b");
static STATIC_A: OnceKeyData = OnceKeyData::new();
static STATIC_B: OnceKeyData = OnceKeyData::new();
let borrowed_a = STATIC_A.get_or_init(|| owned_a.clone());
let borrowed_b = STATIC_B.get_or_init(|| owned_b.clone());
static A_NAME: [SharedString; 1] = [SharedString::const_str("a")];
static STATIC_A: KeyData = KeyData::from_static_name(&A_NAME);
static B_NAME: [SharedString; 1] = [SharedString::const_str("b")];
static STATIC_B: KeyData = KeyData::from_static_name(&B_NAME);
assert_eq!(Key::Owned(owned_a.clone()), Key::Owned(owned_a.clone()));
assert_eq!(Key::Owned(owned_b.clone()), Key::Owned(owned_b.clone()));
assert_eq!(Key::Borrowed(borrowed_a), Key::Borrowed(borrowed_a));
assert_eq!(Key::Borrowed(borrowed_b), Key::Borrowed(borrowed_b));
assert_eq!(Key::Borrowed(&STATIC_A), Key::Borrowed(&STATIC_A));
assert_eq!(Key::Borrowed(&STATIC_B), Key::Borrowed(&STATIC_B));
assert_eq!(Key::Owned(owned_a.clone()), Key::Borrowed(borrowed_a));
assert_eq!(Key::Owned(owned_b.clone()), Key::Borrowed(borrowed_b));
assert_eq!(Key::Owned(owned_a.clone()), Key::Borrowed(&STATIC_A));
assert_eq!(Key::Owned(owned_b.clone()), Key::Borrowed(&STATIC_B));
assert_eq!(Key::Borrowed(borrowed_a), Key::Owned(owned_a.clone()));
assert_eq!(Key::Borrowed(borrowed_b), Key::Owned(owned_b.clone()));
assert_eq!(Key::Borrowed(&STATIC_A), Key::Owned(owned_a.clone()));
assert_eq!(Key::Borrowed(&STATIC_B), Key::Owned(owned_b.clone()));
assert_ne!(Key::Owned(owned_a.clone()), Key::Owned(owned_b.clone()),);
assert_ne!(Key::Borrowed(borrowed_a), Key::Borrowed(borrowed_b));
assert_ne!(Key::Owned(owned_a.clone()), Key::Borrowed(borrowed_b));
assert_ne!(Key::Owned(owned_b.clone()), Key::Borrowed(borrowed_a));
assert_ne!(Key::Borrowed(&STATIC_A), Key::Borrowed(&STATIC_B));
assert_ne!(Key::Owned(owned_a.clone()), Key::Borrowed(&STATIC_B));
assert_ne!(Key::Owned(owned_b.clone()), Key::Borrowed(&STATIC_A));
}
}

View File

@ -1,48 +1,63 @@
use crate::ScopedString;
use crate::SharedString;
use alloc::vec::Vec;
/// A key/value pair used to further describe a metric.
/// Metadata for a metric key in the form of a key/value pair.
///
/// Metrics are always defined by a name, but can optionally be assigned "labels", which are
/// key/value pairs that provide metadata about the key. Labels are typically used for
/// differentiating the context of when and where a metric is emitted.
///
/// For example, in a web service, you might wish to label metrics with the user ID responsible for
/// the request currently being processed, or the request path being processed. Another example may
/// be that if you were running a piece of code gated behind a feature toggle, you may wish to
/// include a label in metrics to indicate whether or not the feature toggle was enabled.
#[derive(PartialEq, Eq, Hash, Clone, Debug)]
pub struct Label(pub(crate) ScopedString, pub(crate) ScopedString);
pub struct Label(pub(crate) SharedString, pub(crate) SharedString);
impl Label {
/// Creates a `Label` from a key and value.
/// Creates a [`Label`] from a key and value.
pub fn new<K, V>(key: K, value: V) -> Self
where
K: Into<ScopedString>,
V: Into<ScopedString>,
K: Into<SharedString>,
V: Into<SharedString>,
{
Label(key.into(), value.into())
}
/// The key of this label.
/// Creates a [`Label`] from a static key and value.
pub const fn from_static_parts(key: &'static str, value: &'static str) -> Self {
Label(SharedString::const_str(key), SharedString::const_str(value))
}
/// Key of this label.
pub fn key(&self) -> &str {
self.0.as_ref()
}
/// The value of this label.
/// Value of this label.
pub fn value(&self) -> &str {
self.1.as_ref()
}
/// Consumes this `Label`, returning the key and value.
pub fn into_parts(self) -> (ScopedString, ScopedString) {
/// Consumes this [`Label`], returning the key and value.
pub fn into_parts(self) -> (SharedString, SharedString) {
(self.0, self.1)
}
}
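
For instance, labels can be built either from owned values at runtime or entirely at compile time, a sketch based on the constructors above (names are illustrative):

```rust
use metrics::Label;

// Entirely static, usable in `static`/`const` contexts.
static FEATURE: Label = Label::from_static_parts("feature", "new_checkout");

fn main() {
    // Owned values, e.g. derived from the current request.
    let user_label = Label::new("user_id", 42.to_string());
    assert_eq!(user_label.key(), "user_id");
    assert_eq!(user_label.value(), "42");
    assert_eq!(FEATURE.key(), "feature");
}
```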
impl<K, V> From<&(K, V)> for Label
where
K: Into<ScopedString> + Clone,
V: Into<ScopedString> + Clone,
K: Into<SharedString> + Clone,
V: Into<SharedString> + Clone,
{
fn from(pair: &(K, V)) -> Label {
Label::new(pair.0.clone(), pair.1.clone())
}
}
/// A value that can be converted to `Label`s.
/// A value that can be converted to a vector of [`Label`]s.
pub trait IntoLabels {
/// Consumes this value, turning it into a vector of `Label`s.
/// Consumes this value, turning it into a vector of [`Label`]s.
fn into_labels(self) -> Vec<Label>;
}

View File

@ -4,17 +4,41 @@
//! implementation. Libraries can use the metrics API provided by this crate, and the consumer of
//! those libraries can choose the metrics implementation that is most suitable for its use case.
//!
//! If no metrics implementation is selected, the facade falls back to a "noop" implementation that
//! ignores all metrics. The overhead in this case is very small - an atomic load and comparison.
//! # Overview
//! `metrics` exposes two main concepts: emitting a metric, and recording it.
//!
//! # Use
//! The basic use of the facade crate is through the three metrics macros: [`counter!`], [`gauge!`],
//! and [`histogram!`]. These macros correspond to updating a counter, updating a gauge,
//! and updating a histogram.
//! ## Emission
//! Metrics are emitted by utilizing the registration or emission macros. There is a macro for
//! registering and emitting each fundamental metric type:
//! - [`register_counter!`], [`increment!`], and [`counter!`] for counters
//! - [`register_gauge!`] and [`gauge!`] for gauges
//! - [`register_histogram!`] and [`histogram!`] for histograms
//!
//! In order to register or emit a metric, you need a way to record these events, which is where
//! [`Recorder`] comes into play.
//!
//! ## Recording
//! The [`Recorder`] trait defines the interface between the registration/emission macros, and
//! exporters, which is how we refer to concrete implementations of [`Recorder`]. The trait defines
//! what the exporters are doing -- recording -- but ultimately exporters are sending data from your
//! application to somewhere else: whether it be a third-party service or logging via standard out.
//! It's "exporting" the metric data somewhere else besides your application.
//!
//! Each metric type is usually reserved for a specific type of use case, whether it be tracking a
//! single value or allowing the summation of multiple values, and the respective macros elaborate
//! more on the usage and invariants provided by each.
//!
//! # Getting Started
//!
//! ## In libraries
//! Libraries should link only to the `metrics` crate, and use the provided macros to record
//! whatever metrics will be useful to downstream consumers.
//! Libraries need only include the `metrics` crate to emit metrics. When an executable installs a
//! recorder, all included crates which emit metrics will send their metrics to that recorder,
//! which allows library authors to seamlessly emit their own metrics without knowing or caring which
//! exporter implementation is chosen, or even if one is installed.
//!
//! In cases where no global recorder is installed, a "noop" recorder lives in its place, which has
//! an incredibly low overhead: an atomic load and comparison. Libraries can safely instrument
//! their code without fear of ruining baseline performance.
//!
//! ### Examples
//!
@ -38,39 +62,49 @@
//!
//! ## In executables
//!
//! Executables should choose a metrics implementation and initialize it early in the runtime of
//! the program. Metrics implementations will typically include a function to do this. Any
//! metrics recorded before the implementation is initialized will be ignored.
//! Executables, which themselves can emit their own metrics, are intended to install a global
//! recorder so that metrics can actually be recorded and exported somewhere.
//!
//! The executable itself may use the `metrics` crate to record metrics as well.
//! Initialization of the global recorder isn't required for macros to function, but any metrics
//! emitted before a global recorder is installed will not be recorded, so early initialization is
//! recommended when possible.
//!
//! ### Warning
//!
//! The metrics system may only be initialized once.
//!
//! # Available metrics implementations
//! For most use cases, you'll be using an off-the-shelf exporter implementation that hooks up to an
//! existing metrics collection system, or interacts with the existing systems/processes that you use.
//!
//! * Native recorders:
//! * [metrics-exporter-tcp] - outputs metrics to clients over TCP
//! * [metrics-exporter-prometheus] - serves a Prometheus scrape endpoint
//! Out of the box, some exporter implementations are available for you to use:
//!
//! # Implementing a Recorder
//! * [metrics-exporter-tcp] - outputs metrics to clients over TCP
//! * [metrics-exporter-prometheus] - serves a Prometheus scrape endpoint
//!
//! Recorders implement the [`Recorder`] trait. Here's a basic example which writes the
//! metrics in text form via the `log` crate.
//! You can also implement your own recorder if a suitable one doesn't already exist.
//!
//! # Development
//!
//! The primary interface with `metrics` is the [`Recorder`] trait, so the examples below focus on
//! implementing it, along with relevant implementation notes.
//!
//! ## Implementing and installing a basic recorder
//!
//! Here's a basic example which writes metrics in text form via the `log` crate.
//!
//! ```rust
//! use log::info;
//! use metrics::{Key, Recorder};
//! use metrics::{Key, Recorder, Unit};
//! use metrics::SetRecorderError;
//!
//! struct LogRecorder;
//!
//! impl Recorder for LogRecorder {
//! fn register_counter(&self, key: Key, _description: Option<&'static str>) {}
//! fn register_counter(&self, key: Key, _unit: Option<Unit>, _description: Option<&'static str>) {}
//!
//! fn register_gauge(&self, key: Key, _description: Option<&'static str>) {}
//! fn register_gauge(&self, key: Key, _unit: Option<Unit>, _description: Option<&'static str>) {}
//!
//! fn register_histogram(&self, key: Key, _description: Option<&'static str>) {}
//! fn register_histogram(&self, key: Key, _unit: Option<Unit>, _description: Option<&'static str>) {}
//!
//! fn increment_counter(&self, key: Key, value: u64) {
//! info!("counter '{}' -> {}", key, value);
@ -84,24 +118,9 @@
//! info!("histogram '{}' -> {}", key, value);
//! }
//! }
//! # fn main() {}
//! ```
//!
//! Recorders are installed by calling the [`set_recorder`] function. Recorders should provide a
//! function that wraps the creation and installation of the recorder:
//!
//! ```rust
//! # use metrics::{Recorder, Key};
//! # struct LogRecorder;
//! # impl Recorder for LogRecorder {
//! # fn register_counter(&self, _key: Key, _description: Option<&'static str>) {}
//! # fn register_gauge(&self, _key: Key, _description: Option<&'static str>) {}
//! # fn register_histogram(&self, _key: Key, _description: Option<&'static str>) {}
//! # fn increment_counter(&self, _key: Key, _value: u64) {}
//! # fn update_gauge(&self, _key: Key, _value: f64) {}
//! # fn record_histogram(&self, _key: Key, _value: u64) {}
//! # }
//! use metrics::SetRecorderError;
//! // Recorders are installed by calling the [`set_recorder`] function. Recorders should provide a
//! // function that wraps the creation and installation of the recorder:
//!
//! static RECORDER: LogRecorder = LogRecorder;
//!
@ -110,42 +129,102 @@
//! }
//! # fn main() {}
//! ```
//! ## Keys
//!
//! # Use with `std`
//! All metrics are, in essence, the combination of a metric type and metric identifier, such as a
//! histogram called "response_latency". You could conceivably have multiple metrics with the same
//! name, so long as they are of different types.
//!
//! `set_recorder` requires you to provide a `&'static Recorder`, which can be hard to
//! obtain if your recorder depends on some runtime configuration. The `set_boxed_recorder`
//! function is available with the `std` Cargo feature. It is identical to `set_recorder` except
//! that it takes a `Box<Recorder>` rather than a `&'static Recorder`:
//! As the types are enforced/limited by the [`Recorder`] trait itself, the remaining piece is the
//! identifier, which we handle by using [`Key`].
//!
//! ```rust
//! # use metrics::{Recorder, Key};
//! # struct LogRecorder;
//! # impl Recorder for LogRecorder {
//! # fn register_counter(&self, _key: Key, _description: Option<&'static str>) {}
//! # fn register_gauge(&self, _key: Key, _description: Option<&'static str>) {}
//! # fn register_histogram(&self, _key: Key, _description: Option<&'static str>) {}
//! # fn increment_counter(&self, _key: Key, _value: u64) {}
//! # fn update_gauge(&self, _key: Key, _value: f64) {}
//! # fn record_histogram(&self, _key: Key, _value: u64) {}
//! # }
//! use metrics::SetRecorderError;
//! [`Key`] itself is a wrapper for [`KeyData`], which holds not only the name of a metric, but
//! potentially holds labels for it as well. The name of a metric must always be a literal string.
//! Each label is a key/value pair, where both components are strings as well.
//!
//! # #[cfg(feature = "std")]
//! pub fn init() -> Result<(), SetRecorderError> {
//! metrics::set_boxed_recorder(Box::new(LogRecorder))
//! }
//! # fn main() {}
//! ```
//! Internally, `metrics` uses a clone-on-write "smart pointer" for these values to optimize cases
//! where the values are static strings, which can provide significant performance benefits. These
//! smart pointers can also hold owned `String` values, though, so users can mix and match static
//! strings and owned strings for labels without issue. Metric names, as mentioned above, are always
//! static strings.
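
For example, a key might combine a static name with labels whose values are owned `String`s built at runtime. A sketch, assuming the `KeyData`/`Label` API shown earlier and that `Vec<Label>` implements `IntoLabels`:

```rust
use metrics::{Key, KeyData, Label};

fn main() {
    let user_id = 42;
    let key: Key = KeyData::from_parts(
        "requests_total",                                    // static name
        vec![
            Label::new("service", "http"),                   // static label value
            Label::new("user", format!("user-{}", user_id)), // owned label value
        ],
    )
    .into();
    println!("{}", key);
}
```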
//!
//! Two [`Key`] objects can be checked for equality and considered to point to the same metric if
//! they are equal. Equality checks both the name of the key and the labels of a key. Labels are
//! _not_ sorted prior to checking for equality, but insertion order is maintained, so any [`Key`]
//! constructed from the same set of labels in the same order should be equal.
//!
//! It is an implementation detail if a recorder wishes to do a deeper equality check that ignores
//! the order of labels, but practically speaking, metric emission, and thus labels, should be
//! fixed in ordering in nearly all cases, and so it isn't typically a problem.
//!
//! ## Registration
//!
//! Recorders must handle the "registration" of a metric.
//!
//! In practice, registration solves two potential problems: providing metadata for a metric, and
//! creating an entry for a metric even though it has not been emitted yet.
//!
//! Callers may wish to provide a human-readable description of what the metric is, or provide the
//! units the metric uses. Additionally, users may wish to register their metrics so that they
//! show up in the output of the installed exporter even if the metrics have yet to be emitted.
//! This allows callers to ensure the metrics output is stable, or allows them to expose all of the
//! potential metrics a system has to offer, again, even if they have not all yet been emitted.
//!
//! As you can see from the trait, the registration methods treat the metadata as optional, and
//! the macros allow users to mix and match whichever fields they want to provide.
//!
//! When a metric is registered, the expectation is that it will show up in output with a default
//! value, so, for example, a counter should be initialized to zero, a histogram would have no
//! values, and so on.
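
For instance, registering a counter up front lets an exporter show it at zero before the error path ever runs. A sketch using the macro forms documented below (metric name and description are illustrative):

```rust
use metrics::{increment, register_counter};

fn init_metrics() {
    // Appears in exporter output immediately, with a default value of zero,
    // even if the error path below never runs.
    register_counter!("connection_errors_total", "Total number of failed connection attempts");
}

fn on_connection_error() {
    increment!("connection_errors_total");
}
```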
//!
//! ## Emission
//!
//! Likewise, recorders must handle the emission of metrics as well.
//!
//! Comparatively speaking, emission is not too different from registration: you have access to the
//! same [`Key`] as well as the value being emitted.
//!
//! For recorders which temporarily buffer or hold on to values before exporting, a typical approach
//! would be to utilize atomic variables for the storage. For counters and gauges, this can be done
//! simply by using types like [`AtomicU64`](std::sync::atomic::AtomicU64). For histograms, this can be
//! slightly tricky as you must hold on to all of the distinct values. In our helper crate,
//! [`metrics-util`][metrics-util], we've provided a type called [`AtomicBucket`][AtomicBucket]. For
//! exporters that will want to get all of the current values in a batch, while clearing the bucket so
//! that values aren't processed again, [AtomicBucket] provides a simple interface to do so, as well as
//! optimized performance on both the insertion and read side.
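
A minimal sketch of that storage pattern for a counter, using a plain `AtomicU64`; the handle type and method names here are illustrative, not part of the crate:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Per-counter storage an exporter might keep: emission just bumps an atomic,
// and the exporter drains the value on its own schedule.
struct CounterHandle(AtomicU64);

impl CounterHandle {
    fn new() -> Self {
        CounterHandle(AtomicU64::new(0))
    }

    fn increment(&self, value: u64) {
        self.0.fetch_add(value, Ordering::Relaxed);
    }

    fn drain(&self) -> u64 {
        // Take the accumulated value and reset to zero for the next export interval.
        self.0.swap(0, Ordering::Relaxed)
    }
}

fn main() {
    let handle = CounterHandle::new();
    handle.increment(3);
    handle.increment(4);
    assert_eq!(handle.drain(), 7);
    assert_eq!(handle.drain(), 0);
}
```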
//!
//! ## Installing recorders
//!
//! In order to actually use an exporter, it must be installed as the "global" recorder. This is a
//! static recorder that the registration and emission macros refer to behind the scenes. `metrics`
//! provides a few methods to do so: [`set_recorder`], [`set_boxed_recorder`], and [`set_recorder_racy`].
//!
//! Primarily, you'll use [`set_boxed_recorder`] to pass a boxed version of the exporter to be
//! installed. This is due to the fact that most exporters won't be able to be constructed
//! statically. If you could construct your exporter statically, though, then you could instead
//! choose [`set_recorder`].
//!
//! Similarly, [`set_recorder_racy`] takes a static reference, but is also not thread safe, and
//! should only be used on platforms which do not support atomic operations, such as embedded
//! environments.
//!
//! [metrics-exporter-tcp]: https://docs.rs/metrics-exporter-tcp
//! [metrics-exporter-prometheus]: https://docs.rs/metrics-exporter-prometheus
//! [metrics-util]: https://docs.rs/metrics-util
//! [AtomicBucket]: https://docs.rs/metrics-util/0.4.0-alpha.6/metrics_util/struct.AtomicBucket.html
#![deny(missing_docs)]
#![cfg_attr(not(feature = "std"), no_std)]
#![cfg_attr(docsrs, feature(doc_cfg), deny(broken_intra_doc_links))]
extern crate alloc;
use proc_macro_hack::proc_macro_hack;
mod common;
pub use self::common::*;
mod cow;
mod key;
pub use self::key::*;
@ -157,39 +236,40 @@ pub use self::recorder::*;
/// Registers a counter.
///
/// Counters represent a single value that can only be incremented over time, or reset to zero.
/// Counters represent a single monotonic value, which means the value can only be incremented, not
/// decremented, and always starts out with an initial value of zero.
///
/// Metrics can be registered with an optional description. Whether or not the installed recorder
/// does anything with the description is implementation defined. Labels can also be specified
/// when registering a metric.
///
/// Counters, when registered, start at zero.
///
/// # Scoped versus unscoped
/// Metrics can be unscoped or scoped, where the scoping is derived by the current module the call
/// is taking place in. This scope is used as a prefix to the provided metric name.
/// Metrics can be registered with an optional unit and description. Whether or not the installed
/// recorder does anything with the description is implementation defined. Labels can also be
/// specified when registering a metric.
///
/// # Example
/// ```
/// # use metrics::register_counter;
/// # use metrics::Unit;
/// # fn main() {
/// // A regular, unscoped counter:
/// // A basic counter:
/// register_counter!("some_metric_name");
///
/// // A scoped counter. This inherits a scope derived by the current module:
/// register_counter!(<"some_metric_name">);
/// // Providing a unit for a counter:
/// register_counter!("some_metric_name", Unit::Bytes);
///
/// // Providing a description for a counter:
/// register_counter!("some_metric_name", "number of woopsy daisies");
/// register_counter!("some_metric_name", "total number of bytes");
///
/// // Specifying labels:
/// register_counter!("some_metric_name", "service" => "http");
///
/// // And all combined:
/// register_counter!("some_metric_name", "number of woopsy daisies", "service" => "http");
/// register_counter!(<"some_metric_name">, "number of woopsy daisies", "service" => "http");
/// // We can combine the units, description, and labels arbitrarily:
/// register_counter!("some_metric_name", Unit::Bytes, "total number of bytes");
/// register_counter!("some_metric_name", Unit::Bytes, "service" => "http");
/// register_counter!("some_metric_name", "total number of bytes", "service" => "http");
///
/// // And just for an alternative form of passing labels:
/// // And all combined:
/// register_counter!("some_metric_name", Unit::Bytes, "number of woopsy daisies", "service" => "http");
///
/// // We can also pass labels by giving a vector or slice of key/value pairs. In this scenario,
/// // a unit or description can still be passed in their respective positions:
/// let dynamic_val = "woo";
/// let labels = [("dynamic_key", format!("{}!", dynamic_val))];
/// register_counter!("some_metric_name", &labels);
@ -200,39 +280,40 @@ pub use metrics_macros::register_counter;
/// Registers a gauge.
///
/// Gauges represent a single value that can go up or down over time.
/// Gauges represent a single value that can go up or down over time, and always starts out with an
/// initial value of zero.
///
/// Metrics can be registered with an optional description. Whether or not the installed recorder
/// does anything with the description is implementation defined. Labels can also be specified
/// when registering a metric.
///
/// Gauges, when registered, start at zero.
///
/// # Scoped versus unscoped
/// Metrics can be unscoped or scoped, where the scoping is derived by the current module the call
/// is taking place in. This scope is used as a prefix to the provided metric name.
/// Metrics can be registered with an optional unit and description. Whether or not the installed
/// recorder does anything with the description is implementation defined. Labels can also be
/// specified when registering a metric.
///
/// # Example
/// ```
/// # use metrics::register_gauge;
/// # use metrics::Unit;
/// # fn main() {
/// // A regular, unscoped gauge:
/// // A basic gauge:
/// register_gauge!("some_metric_name");
///
/// // A scoped gauge. This inherits a scope derived by the current module:
/// register_gauge!(<"some_metric_name">);
/// // Providing a unit for a gauge:
/// register_gauge!("some_metric_name", Unit::Bytes);
///
/// // Providing a description for a gauge:
/// register_gauge!("some_metric_name", "number of woopsy daisies");
/// register_gauge!("some_metric_name", "total number of bytes");
///
/// // Specifying labels:
/// register_gauge!("some_metric_name", "service" => "http");
///
/// // And all combined:
/// register_gauge!("some_metric_name", "number of woopsy daisies", "service" => "http");
/// register_gauge!(<"some_metric_name">, "number of woopsy daisies", "service" => "http");
/// // We can combine the units, description, and labels arbitrarily:
/// register_gauge!("some_metric_name", Unit::Bytes, "total number of bytes");
/// register_gauge!("some_metric_name", Unit::Bytes, "service" => "http");
/// register_gauge!("some_metric_name", "total number of bytes", "service" => "http");
///
/// // And just for an alternative form of passing labels:
/// // And all combined:
/// register_gauge!("some_metric_name", Unit::Bytes, "total number of bytes", "service" => "http");
///
/// // We can also pass labels by giving a vector or slice of key/value pairs. In this scenario,
/// // a unit or description can still be passed in their respective positions:
/// let dynamic_val = "woo";
/// let labels = [("dynamic_key", format!("{}!", dynamic_val))];
/// register_gauge!("some_metric_name", &labels);
@ -243,39 +324,40 @@ pub use metrics_macros::register_gauge;
/// Records a histogram.
///
/// Histograms measure the distribution of values for a given set of measurements.
/// Histograms measure the distribution of values for a given set of measurements, and start with no
/// initial values.
///
/// Metrics can be registered with an optional description. Whether or not the installed recorder
/// does anything with the description is implementation defined. Labels can also be specified
/// when registering a metric.
///
/// Histograms, when registered, start at zero.
///
/// # Scoped versus unscoped
/// Metrics can be unscoped or scoped, where the scoping is derived by the current module the call
/// is taking place in. This scope is used as a prefix to the provided metric name.
/// Metrics can be registered with an optional unit and description. Whether or not the installed
/// recorder does anything with the description is implementation defined. Labels can also be
/// specified when registering a metric.
///
/// # Example
/// ```
/// # use metrics::register_histogram;
/// # use metrics::Unit;
/// # fn main() {
/// // A regular, unscoped histogram:
/// // A basic histogram:
/// register_histogram!("some_metric_name");
///
/// // A scoped histogram. This inherits a scope derived by the current module:
/// register_histogram!(<"some_metric_name">);
/// // Providing a unit for a histogram:
/// register_histogram!("some_metric_name", Unit::Nanoseconds);
///
/// // Providing a description for a histogram:
/// register_histogram!("some_metric_name", "number of woopsy daisies");
/// register_histogram!("some_metric_name", "request handler duration");
///
/// // Specifying labels:
/// register_histogram!("some_metric_name", "service" => "http");
///
/// // And all combined:
/// register_histogram!("some_metric_name", "number of woopsy daisies", "service" => "http");
/// register_histogram!(<"some_metric_name">, "number of woopsy daisies", "service" => "http");
/// // We can combine the units, description, and labels arbitrarily:
/// register_histogram!("some_metric_name", Unit::Nanoseconds, "request handler duration");
/// register_histogram!("some_metric_name", Unit::Nanoseconds, "service" => "http");
/// register_histogram!("some_metric_name", "request handler duration", "service" => "http");
///
/// // And just for an alternative form of passing labels:
/// // And all combined:
/// register_histogram!("some_metric_name", Unit::Nanoseconds, "request handler duration", "service" => "http");
///
/// // We can also pass labels by giving a vector or slice of key/value pairs. In this scenario,
/// // a unit or description can still be passed in their respective positions:
/// let dynamic_val = "woo";
/// let labels = [("dynamic_key", format!("{}!", dynamic_val))];
/// register_histogram!("some_metric_name", &labels);
@ -284,32 +366,22 @@ pub use metrics_macros::register_gauge;
#[proc_macro_hack]
pub use metrics_macros::register_histogram;
/// Increments a counter.
/// Increments a counter by one.
///
/// Counters represent a single value that can only be incremented over time, or reset to zero.
///
/// # Scoped versus unscoped
/// Metrics can be unscoped or scoped, where the scoping is derived by the current module the call
/// is taking place in. This scope is used as a prefix to the provided metric name.
/// Counters represent a single monotonic value, which means the value can only be incremented, not
/// decremented, and always starts out with an initial value of zero.
///
/// # Example
/// ```
/// # use metrics::increment;
/// # fn main() {
/// // A regular, unscoped increment:
/// // A basic increment:
/// increment!("some_metric_name");
///
/// // A scoped increment. This inherits a scope derived by the current module:
/// increment!(<"some_metric_name">);
///
/// // Specifying labels:
/// increment!("some_metric_name", "service" => "http");
///
/// // And all combined:
/// increment!("some_metric_name", "service" => "http");
/// increment!(<"some_metric_name">, "service" => "http");
///
/// // And just for an alternative form of passing labels:
/// // We can also pass labels by giving a vector or slice of key/value pairs:
/// let dynamic_val = "woo";
/// let labels = [("dynamic_key", format!("{}!", dynamic_val))];
/// increment!("some_metric_name", &labels);
@ -320,30 +392,20 @@ pub use metrics_macros::increment;
/// Increments a counter.
///
/// Counters represent a single value that can only be incremented over time, or reset to zero.
///
/// # Scoped versus unscoped
/// Metrics can be unscoped or scoped, where the scoping is derived by the current module the call
/// is taking place in. This scope is used as a prefix to the provided metric name.
/// Counters represent a single monotonic value, which means the value can only be incremented, not
/// decremented, and always starts out with an initial value of zero.
///
/// # Example
/// ```
/// # use metrics::counter;
/// # fn main() {
/// // A regular, unscoped counter:
/// // A basic counter:
/// counter!("some_metric_name", 12);
///
/// // A scoped counter. This inherits a scope derived by the current module:
/// counter!(<"some_metric_name">, 12);
///
/// // Specifying labels:
/// counter!("some_metric_name", 12, "service" => "http");
///
/// // And all combined:
/// counter!("some_metric_name", 12, "service" => "http");
/// counter!(<"some_metric_name">, 12, "service" => "http");
///
/// // And just for an alternative form of passing labels:
/// // We can also pass labels by giving a vector or slice of key/value pairs:
/// let dynamic_val = "woo";
/// let labels = [("dynamic_key", format!("{}!", dynamic_val))];
/// counter!("some_metric_name", 12, &labels);
@ -354,30 +416,20 @@ pub use metrics_macros::counter;
/// Updates a gauge.
///
/// Gauges represent a single value that can go up or down over time.
///
/// # Scoped versus unscoped
/// Metrics can be unscoped or scoped, where the scoping is derived by the current module the call
/// is taking place in. This scope is used as a prefix to the provided metric name.
/// Gauges represent a single value that can go up or down over time, and always starts out with an
/// initial value of zero.
///
/// # Example
/// ```
/// # use metrics::gauge;
/// # fn main() {
/// // A regular, unscoped gauge:
/// // A basic gauge:
/// gauge!("some_metric_name", 42.2222);
///
/// // A scoped gauge. This inherits a scope derived by the current module:
/// gauge!(<"some_metric_name">, 33.3333);
///
/// // Specifying labels:
/// gauge!("some_metric_name", 66.6666, "service" => "http");
///
/// // And all combined:
/// gauge!("some_metric_name", 55.5555, "service" => "http");
/// gauge!(<"some_metric_name">, 11.1111, "service" => "http");
///
/// // And just for an alternative form of passing labels:
/// // We can also pass labels by giving a vector or slice of key/value pairs:
/// let dynamic_val = "woo";
/// let labels = [("dynamic_key", format!("{}!", dynamic_val))];
/// gauge!("some_metric_name", 42.42, &labels);
@ -388,11 +440,8 @@ pub use metrics_macros::gauge;
/// Records a histogram.
///
/// Histograms measure the distribution of values for a given set of measurements.
///
/// # Scoped versus unscoped
/// Metrics can be unscoped or scoped, where the scoping is derived by the current module the call
/// is taking place in. This scope is used as a prefix to the provided metric name.
/// Histograms measure the distribution of values for a given set of measurements, and start with no
/// initial values.
///
/// # Implicit conversions
/// Histograms are represented as `u64` values, but often come from another source, such as a time
@ -407,25 +456,17 @@ pub use metrics_macros::gauge;
/// # use metrics::histogram;
/// # use std::time::Duration;
/// # fn main() {
/// // A regular, unscoped histogram:
/// // A basic histogram:
/// histogram!("some_metric_name", 34);
///
/// // An implicit conversion from `Duration`:
/// let d = Duration::from_millis(17);
/// histogram!("some_metric_name", d);
///
/// // A scoped histogram. This inherits a scope derived by the current module:
/// histogram!(<"some_metric_name">, 38);
/// histogram!(<"some_metric_name">, d);
///
/// // Specifying labels:
/// histogram!("some_metric_name", 38, "service" => "http");
///
/// // And all combined:
/// histogram!("some_metric_name", d, "service" => "http");
/// histogram!(<"some_metric_name">, 57, "service" => "http");
///
/// // And just for an alternative form of passing labels:
/// // We can also pass labels by giving a vector or slice of key/value pairs:
/// let dynamic_val = "woo";
/// let labels = [("dynamic_key", format!("{}!", dynamic_val))];
/// histogram!("some_metric_name", 1337, &labels);

View File

@ -1,6 +1,6 @@
use crate::Key;
use std::fmt;
use std::sync::atomic::{AtomicUsize, Ordering};
use crate::{Key, Unit};
use core::fmt;
use core::sync::atomic::{AtomicUsize, Ordering};
static mut RECORDER: &'static dyn Recorder = &NoopRecorder;
static STATE: AtomicUsize = AtomicUsize::new(0);
@ -12,28 +12,34 @@ const INITIALIZED: usize = 2;
static SET_RECORDER_ERROR: &str =
"attempted to set a recorder after the metrics system was already initialized";
/// A value that records metrics behind the facade.
/// A trait for registering and recording metrics.
///
/// This is the core trait that allows interoperability between exporter implementations and the
/// macros provided by `metrics`.
pub trait Recorder {
/// Registers a counter.
///
/// Callers may provide a description of the counter being registered. Whether or not a metric
/// can be reregistered to provide a description, if one was already passed or not, as well as
/// how descriptions are used by the underlying recorder, is an implementation detail.
fn register_counter(&self, key: Key, description: Option<&'static str>);
/// Callers may provide the unit or a description of the counter being registered. Whether or
/// not a metric can be reregistered to provide a unit/description, if one was already passed
/// or not, as well as how units/descriptions are used by the underlying recorder, is an
/// implementation detail.
fn register_counter(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>);
/// Registers a gauge.
///
/// Callers may provide a description of the counter being registered. Whether or not a metric
/// can be reregistered to provide a description, if one was already passed or not, as well as
/// how descriptions are used by the underlying recorder, is an implementation detail.
fn register_gauge(&self, key: Key, description: Option<&'static str>);
/// Callers may provide the unit or a description of the gauge being registered. Whether or
/// not a metric can be reregistered to provide a unit/description, if one was already passed
/// or not, as well as how units/descriptions are used by the underlying recorder, is an
/// implementation detail.
fn register_gauge(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>);
/// Registers a histogram.
///
/// Callers may provide a description of the counter being registered. Whether or not a metric
/// can be reregistered to provide a description, if one was already passed or not, as well as
/// how descriptions are used by the underlying recorder, is an implementation detail.
fn register_histogram(&self, key: Key, description: Option<&'static str>);
/// Callers may provide the unit or a description of the histogram being registered. Whether or
/// not a metric can be reregistered to provide a unit/description, if one was already passed
/// or not, as well as how units/descriptions are used by the underlying recorder, is an
/// implementation detail.
fn register_histogram(&self, key: Key, unit: Option<Unit>, description: Option<&'static str>);
/// Increments a counter.
fn increment_counter(&self, key: Key, value: u64);
@ -42,18 +48,26 @@ pub trait Recorder {
fn update_gauge(&self, key: Key, value: f64);
/// Records a histogram.
///
/// The value can be any value that implements [`IntoU64`]. By default, `metrics` provides an
/// implementation for both `u64` itself as well as [`Duration`](std::time::Duration).
fn record_histogram(&self, key: Key, value: u64);
}
struct NoopRecorder;
/// A no-op recorder.
///
/// Used as the default recorder when one has not been installed yet. Useful for acting as the root
/// recorder when testing layers.
pub struct NoopRecorder;
impl Recorder for NoopRecorder {
fn register_counter(&self, _key: Key, _description: Option<&'static str>) {}
fn register_gauge(&self, _key: Key, _description: Option<&'static str>) {}
fn register_histogram(&self, _key: Key, _description: Option<&'static str>) {}
fn register_counter(&self, _key: Key, _unit: Option<Unit>, _description: Option<&'static str>) {
}
fn register_gauge(&self, _key: Key, _unit: Option<Unit>, _description: Option<&'static str>) {}
fn register_histogram(
&self,
_key: Key,
_unit: Option<Unit>,
_description: Option<&'static str>,
) {
}
fn increment_counter(&self, _key: Key, _value: u64) {}
fn update_gauge(&self, _key: Key, _value: f64) {}
fn record_histogram(&self, _key: Key, _value: u64) {}
@ -87,6 +101,7 @@ pub fn set_recorder(recorder: &'static dyn Recorder) -> Result<(), SetRecorderEr
///
/// An error is returned if a recorder has already been set.
#[cfg(all(feature = "std", atomic_cas))]
#[cfg_attr(docsrs, doc(cfg(feature = "std")))]
pub fn set_boxed_recorder(recorder: Box<dyn Recorder>) -> Result<(), SetRecorderError> {
set_recorder_inner(|| unsafe { &*Box::into_raw(recorder) })
}
@ -186,7 +201,7 @@ pub fn recorder() -> &'static dyn Recorder {
/// If a recorder has not been set, returns `None`.
pub fn try_recorder() -> Option<&'static dyn Recorder> {
unsafe {
if STATE.load(Ordering::SeqCst) != INITIALIZED {
if STATE.load(Ordering::Relaxed) != INITIALIZED {
None
} else {
Some(RECORDER)

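For callers driving a recorder by hand rather than through the macros, `try_recorder` makes the uninitialized case explicit. A sketch, assuming the `KeyData` API from earlier:

```rust
use metrics::{Key, KeyData};

fn main() {
    // Skip the work entirely when no recorder has been installed.
    if let Some(recorder) = metrics::try_recorder() {
        let key: Key = KeyData::from_name("manual_counter").into();
        recorder.increment_counter(key, 1);
    }
}
```
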
View File

@ -1,4 +1,11 @@
[build]
command = """
curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain nightly --profile minimal \
&& source $HOME/.cargo/env \
&& RUSTDOCFLAGS=\"--cfg docsrs\" cargo +nightly doc --no-deps
"""
publish = "target/doc"
[[redirects]]
from = "/"
to = "https://docs.rs/metrics"
status = 302
to = "/metrics"