Description
A note for the community
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Use Cases
I currently have a need to get system uptime from within Vector, to act differently after a period of time.
It would also be useful to understand load internally, so that during periods of higher system load the system, or specific stanzas within it, could fork or abort less-prioritized streams or data dynamically.
In short: giving the system some understanding of its own conditions allows behaviors to fork dynamically to suit moment-to-moment requirements. One outcome is better utilization of resources, where some paths can handle more "work" when other paths are less busy. Access to basic process metrics just seems like a good thing overall.
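For illustration, a minimal VRL sketch of this kind of self-examination, assuming the prometheus_value function proposed below; the 300-second threshold and the .priority field are hypothetical:
uptime, err = prometheus_value("vector_uptime_seconds")
if err == null && uptime < 300 && .priority == "low" {
  # Still warming up; shed the less-prioritized stream.
  abort
}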
Attempted Solutions
There are really horrible ways of feeding Prometheus data back in through a source and then loading it into a memory enrichment table, but that seems like a lot of unnecessary overhead when all that is typically desired is a small subset of the metric space, without the memory cost.
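For reference, the lookup side of that workaround; get_enrichment_table_record is an existing VRL function, but the internal_metrics table name and row schema here are made up for the sketch:
row, err = get_enrichment_table_record("internal_metrics", { "name": "vector_uptime_seconds" })
uptime = if err == null { row.value } else { null }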
Proposal
I'd like to use Prometheus values in some of my VRL logic, so the Vector process can "self-examine".
I would suggest something like:
prometheus_value(<metric_name>[,<tag>=<value>,...])
If more than one metric matches the given metric name and tag list, an error is produced.
Example: given a set of metrics that looks like this (from curl against the Prometheus exporter port):
vector_source_lag_time_seconds_bucket{component_id="vector_metrics",component_kind="source",component_type="internal_metrics",host="tel410.ams",le="+Inf"} 35769276 1751031858622
vector_source_lag_time_seconds_bucket{component_id="vector_metrics",component_kind="source",component_type="internal_metrics",host="tel410.ams",le="0.015625"} 35768541 1751031858622
vector_source_lag_time_seconds_bucket{component_id="vector_metrics",component_kind="source",component_type="internal_metrics",host="tel410.ams",le="0.03125"} 35769276 1751031858622
vector_source_lag_time_seconds_bucket{component_id="vector_metrics",component_kind="source",component_type="internal_metrics",host="tel410.ams",le="0.0625"} 35769276 1751031858622
vector_source_lag_time_seconds_bucket{component_id="vector_metrics",component_kind="source",component_type="internal_metrics",host="tel410.ams",le="0.125"} 35769276 1751031858622
vector_source_lag_time_seconds_bucket{component_id="vector_metrics",component_kind="source",component_type="internal_metrics",host="tel410.ams",le="0.25"} 35769276 1751031858622
vector_source_lag_time_seconds_bucket{component_id="vector_metrics",component_kind="source",component_type="internal_metrics",host="tel410.ams",le="0.5"} 35769276 1751031858622
vector_source_lag_time_seconds_bucket{component_id="vector_metrics",component_kind="source",component_type="internal_metrics",host="tel410.ams",le="1"} 35769276 1751031858622
vector_source_lag_time_seconds_bucket{component_id="vector_metrics",component_kind="source",component_type="internal_metrics",host="tel410.ams",le="1024"} 35769276 1751031858622
vector_source_lag_time_seconds_bucket{component_id="vector_metrics",component_kind="source",component_type="internal_metrics",host="tel410.ams",le="128"} 35769276 1751031858622
vector_source_lag_time_seconds_bucket{component_id="vector_metrics",component_kind="source",component_type="internal_metrics",host="tel410.ams",le="16"} 35769276 1751031858622
vector_source_lag_time_seconds_bucket{component_id="vector_metrics",component_kind="source",component_type="internal_metrics",host="tel410.ams",le="2"} 35769276 1751031858622
vector_source_lag_time_seconds_bucket{component_id="vector_metrics",component_kind="source",component_type="internal_metrics",host="tel410.ams",le="2048"} 35769276 1751031858622
vector_source_lag_time_seconds_bucket{component_id="vector_metrics",component_kind="source",component_type="internal_metrics",host="tel410.ams",le="256"} 35769276 1751031858622
vector_source_lag_time_seconds_bucket{component_id="vector_metrics",component_kind="source",component_type="internal_metrics",host="tel410.ams",le="32"} 35769276 1751031858622
vector_source_lag_time_seconds_bucket{component_id="vector_metrics",component_kind="source",component_type="internal_metrics",host="tel410.ams",le="4"} 35769276 1751031858622
vector_source_lag_time_seconds_bucket{component_id="vector_metrics",component_kind="source",component_type="internal_metrics",host="tel410.ams",le="4096"} 35769276 1751031858622
vector_source_lag_time_seconds_bucket{component_id="vector_metrics",component_kind="source",component_type="internal_metrics",host="tel410.ams",le="512"} 35769276 1751031858622
vector_source_lag_time_seconds_bucket{component_id="vector_metrics",component_kind="source",component_type="internal_metrics",host="tel410.ams",le="64"} 35769276 1751031858622
vector_source_lag_time_seconds_bucket{component_id="vector_metrics",component_kind="source",component_type="internal_metrics",host="tel410.ams",le="8"} 35769276 1751031858622
then:
test = prometheus_value("vector_source_lag_time_seconds_bucket","le"="8")
would result in:
"test": 35769276
but
test = prometheus_value("vector_source_lag_time_seconds_bucket","component_id"="vector_metrics")
would result in an error, since those tags do not narrow the result down to a single metric.
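Since the function is fallible, normal VRL error handling would apply; a sketch, again using the proposed syntax:
value, err = prometheus_value("vector_source_lag_time_seconds_bucket", "component_id"="vector_metrics")
if err != null {
  log(err, level: "warn")
}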
Additional example:
given the Prometheus metric:
vector_uptime_seconds{host="tel410.ams"} 48560 1751031858622
then
test = prometheus_value("vector_uptime_seconds")
results in
"test": 48560
Version
vector 0.48.0 (x86_64-unknown-linux-gnu 150208a 2025-06-26 05:08:48.904691970)