diff --git a/README.md b/README.md
index 37af63b6684..7a0e1ae9804 100644
--- a/README.md
+++ b/README.md
@@ -10,7 +10,7 @@ For questions and feature requests, visit the [discussion forum](https://discuss
 
 ## Getting Started
 
-To get started with APM, see our [Quick start guide](https://www.elastic.co/guide/en/apm/get-started/current/install-and-run.html).
+To get started with APM, see our [Quick start guide](https://www.elastic.co/guide/en/apm/guide/current/apm-quick-start.html).
 
 ## APM Server Development
 
@@ -144,4 +144,13 @@ When building images for testing pre-release versions, we recommend setting `SNA
 
 ## Documentation
 
-[Documentation](https://www.elastic.co/guide/en/apm/server/current/index.html) for the APM Server can be found in the `docs` folder.
+Documentation for the APM Server can be found in the [APM guide](https://www.elastic.co/guide/en/apm/guide/8.5/index.html). Most documentation files live in the [elastic/observability-docs](https://github.com/elastic/observability-docs) repo's [`docs/en/apm-server/` directory](https://github.com/elastic/observability-docs/tree/8.5/docs/en/apm-server).
+
+However, the following content lives in this repo:
+
+* The **changelog** page listing all release notes is in [`CHANGELOG.asciidoc`](/CHANGELOG.asciidoc).
+* Each minor version's **release notes** are documented in individual files in the [`changelogs/`](/changelogs/) directory.
+* A list of all **breaking changes** is documented in [`changelogs/all-breaking-changes.asciidoc`](/changelogs/all-breaking-changes.asciidoc).
+* **Sample data sets** that are injected into the docs are in the [`docs/data/`](/docs/data/) directory.
+* **Specifications** that are injected into the docs are in the [`docs/spec/`](/docs/spec/) directory.
+
diff --git a/RELEASES.md b/RELEASES.md
index 8d5652e3b03..bd31363caaf 100644
--- a/RELEASES.md
+++ b/RELEASES.md
@@ -62,7 +62,7 @@
 
 ## When compatibility between Agents & Server changes
 
-* Update the [agent/server compatibility matrix](https://github.com/elastic/apm-server/blob/main/docs/guide/agent-server-compatibility.asciidoc).
+* Update the [agent/server compatibility matrix](https://github.com/elastic/observability-docs/blob/main/docs/en/observability/apm/agent-server-compatibility.asciidoc) in the elastic/observability-docs repo.
 
 ## Templates
 
diff --git a/docs/README.md b/docs/README.md
new file mode 100644
index 00000000000..4094f8f90f6
--- /dev/null
+++ b/docs/README.md
@@ -0,0 +1,10 @@
+> [!IMPORTANT]
+> Most documentation source files have moved to the [elastic/observability-docs](https://github.com/elastic/observability-docs) repo ([`docs/en/apm-server/`](https://github.com/elastic/observability-docs/tree/8.5/docs/en/apm-server)).
+>
+> However, the following content still lives in this repo:
+>
+> * The **changelog** page listing all release notes is in [`CHANGELOG.asciidoc`](/CHANGELOG.asciidoc).
+> * Each minor version's **release notes** are documented in individual files in the [`changelogs/`](/changelogs/) directory.
+> * A list of all **breaking changes** is documented in [`changelogs/all-breaking-changes.asciidoc`](/changelogs/all-breaking-changes.asciidoc).
+> * **Sample data sets** that are injected into the docs are in the [`docs/data/`](/docs/data/) directory.
+> * **Specifications** that are injected into the docs are in the [`docs/spec/`](/docs/spec/) directory.
diff --git a/docs/agent-server-compatibility.asciidoc b/docs/agent-server-compatibility.asciidoc deleted file mode 100644 index 33bb9141863..00000000000 --- a/docs/agent-server-compatibility.asciidoc +++ /dev/null @@ -1,46 +0,0 @@ -[[agent-server-compatibility]] -=== {apm-agent} compatibility - -The chart below outlines the compatibility between different versions of the APM agents and the APM integration/server. - -[options="header"] -|==== -|Language |{apm-agent} version |APM integration/server version -// Go -.1+|**Go agent** -|`1.x` |≥ `6.5` - -// iOS -.1+|**iOS agent** -|`0.x` |≥ `7.14` - -// Java -.1+|**Java agent** -|`1.x`|≥ `6.5` - -// .NET -.1+|**.NET agent** -|`1.x` |≥ `6.5` - -// Node -.1+|**Node.js agent** -|`3.x` |≥ `6.6` - -// PHP -.1+|**PHP agent** -|`1.x` |≥ `7.0` - -// Python -.1+|**Python agent** -|`6.x` |≥ `6.6` - -// Ruby -.2+|**Ruby agent** -|`3.x` |≥ `6.5` -|`4.x` |≥ `6.5` - -// RUM -.2+|**JavaScript RUM agent** -|`4.x` |≥ `6.5` -|`5.x` |≥ `7.0` -|==== diff --git a/docs/api-config.asciidoc b/docs/api-config.asciidoc deleted file mode 100644 index 7bf1331d14e..00000000000 --- a/docs/api-config.asciidoc +++ /dev/null @@ -1,91 +0,0 @@ -[[api-config]] -=== Agent configuration API - -APM Server exposes an API endpoint that allows agents to query the server for configuration changes. -More information on this feature is available in {kibana-ref}/agent-configuration.html[{apm-agent} configuration in {kib}]. - -[float] -[[api-config-endpoint]] -=== Agent configuration endpoint - -The Agent configuration endpoint accepts both `HTTP GET` and `HTTP POST` requests. -If <> or a <> is configured, requests to this endpoint must be authenticated. - -[float] -[[api-config-api-get]] -==== HTTP GET - -`service.name` is a required query string parameter. - -[source,bash] ------------------------------------------------------------- -http(s)://{hostname}:{port}/config/v1/agents?service.name=SERVICE_NAME ------------------------------------------------------------- - -[float] -[[api-config-api-post]] -==== HTTP POST - -Encode parameters as a JSON object in the body. -`service.name` is a required parameter. 
- -[source,bash] ------------------------------------------------------------- -http(s)://{hostname}:{port}/config/v1/agents -{ - "service": { - "name": "test-service", - "environment": "all" - }, - "CAPTURE_BODY": "off" -} ------------------------------------------------------------- - -[float] -[[api-config-api-response]] -==== Responses - -* Successful - `200` -* {kib} endpoint is disabled - `403` -* {kib} is unreachable - `503` - -[float] -[[api-config-api-example]] -==== Example request - -Example Agent configuration `GET` request including the service name "test-service": - -["source","sh",subs="attributes"] ---------------------------------------------------------------------------- -curl -i http://127.0.0.1:8200/config/v1/agents?service.name=test-service ---------------------------------------------------------------------------- - -Example Agent configuration `POST` request including the service name "test-service": - -["source","sh",subs="attributes"] ---------------------------------------------------------------------------- -curl -X POST http://127.0.0.1:8200/config/v1/agents \ - -H "Authorization: Bearer secret_token" \ - -H 'content-type: application/json' \ - -d '{"service": {"name": "test-service"}}' ---------------------------------------------------------------------------- - -[float] -[[api-config-api-ex-response]] -==== Example response - -["source","sh",subs="attributes"] ---------------------------------------------------------------------------- -HTTP/1.1 200 OK -Cache-Control: max-age=30, must-revalidate -Content-Type: application/json -Etag: "7b23d63c448a863fa" -Date: Mon, 24 Feb 2020 20:53:07 GMT -Content-Length: 98 - -{ - "capture_body": "off", - "transaction_max_spans": "500", - "transaction_sample_rate": "0.3" -} ---------------------------------------------------------------------------- diff --git a/docs/api-error.asciidoc b/docs/api-error.asciidoc deleted file mode 100644 index 22e3c9da4cb..00000000000 --- a/docs/api-error.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -[[api-error]] -==== Errors - -An error or a logged error message captured by an agent occurring in a monitored service. - -[float] -[[api-error-schema]] -==== Error Schema - -APM Server uses JSON Schema to validate requests. The specification for errors is defined on -{github_repo_link}/docs/spec/v2/error.json[GitHub] and included below: - -[source,json] ----- -include::./spec/v2/error.json[] ----- diff --git a/docs/api-event-example.asciidoc b/docs/api-event-example.asciidoc deleted file mode 100644 index 9297e327c78..00000000000 --- a/docs/api-event-example.asciidoc +++ /dev/null @@ -1,9 +0,0 @@ -[[api-event-example]] -==== Example request body - -A request body example containing one event for all currently supported event types. - -[source,json] ----- -include::./data/intake-api/generated/events.ndjson[] ----- diff --git a/docs/api-events.asciidoc b/docs/api-events.asciidoc deleted file mode 100644 index 71159ab05a1..00000000000 --- a/docs/api-events.asciidoc +++ /dev/null @@ -1,123 +0,0 @@ -[[api-events]] -=== Events intake API - -NOTE: Most users do not need to interact directly with the events intake API. - -The events intake API is what we call the internal protocol that APM agents use to talk to the APM Server. -Agents communicate with the Server by sending events -- captured pieces of information -- in an HTTP request. -Events can be: - -* Transactions -* Spans -* Errors -* Metrics - -Each event is sent as its own line in the HTTP request body. 
-This is known as http://ndjson.org[newline delimited JSON (NDJSON)]. - -With NDJSON, agents can open an HTTP POST request and use chunked encoding to stream events to the APM Server -as soon as they are recorded in the agent. -This makes it simple for agents to serialize each event to a stream of newline delimited JSON. -The APM Server also treats the HTTP body as a compressed stream and thus reads and handles each event independently. - -See the <> to learn more about the different types of events. - -[[api-events-endpoint]] -[float] -=== Endpoint - -Send an `HTTP POST` request to the APM Server `intake/v2/events` endpoint: - -[source,bash] ------------------------------------------------------------- -http(s)://{hostname}:{port}/intake/v2/events ------------------------------------------------------------- - -For <> send an `HTTP POST` request to the APM Server `intake/v2/rum/events` endpoint instead: - -[source,bash] ------------------------------------------------------------- -http(s)://{hostname}:{port}/intake/v2/rum/events ------------------------------------------------------------- - -[[api-events-response]] -[float] -=== Response - -On success, the server will respond with a 202 Accepted status code and no body. - -Keep in mind that events can succeed and fail independently of each other. Only if all events succeed does the server respond with a 202. - -[[api-events-errors]] -[float] -=== Errors - -There are two types of errors that the APM Server may return to an agent: - -* Event related errors (typically validation errors) -* Non-event related errors - -The APM Server processes events one after the other. -If an error is encountered while processing an event, -the error encountered as well as the document causing the error are added to an internal array. -The APM Server will only save 5 event related errors. -If it encounters more than 5 event related errors, -the additional errors will not be returned to agent. -Once all events have been processed, -the error response is sent. - -Some errors, not relating to specific events, -may terminate the request immediately. -For example: IP rate limit reached, wrong metadata, etc. -If at any point one of these errors is encountered, -it is added to the internal array and immediately returned. - -An example error response might look something like this: - -[source,json] ------------------------------------------------------------- -{ - "errors": [ - { - "message": "", <1> - "document": "" <2> - },{ - "message": "", - "document": "" - },{ - "message": "", - "document": "" - },{ - "message": "too many requests" <3> - }, - ], - "accepted": 2320 <4> -} ------------------------------------------------------------- - -<1> An event related error -<2> The document causing the error -<3> An immediately returning non-event related error -<4> The number of accepted events - -If you're developing an agent, these errors can be useful for debugging. 
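
To experiment with the endpoint directly, here is a minimal request sketched with `curl` — assuming a server running locally on the default port with no secret token or API key configured. The first NDJSON line of the body is the required `metadata` object (trimmed to its required fields), and the second line carries a single transaction event:

[source,sh]
----
# Line 1 of the body is the metadata object; line 2 is one transaction event.
curl -i http://127.0.0.1:8200/intake/v2/events \
  -H 'Content-Type: application/x-ndjson' \
  --data-binary $'{"metadata": {"service": {"name": "test-service", "agent": {"name": "manual", "version": "0.0.1"}}}}\n{"transaction": {"id": "4340a8e0df1906ec", "trace_id": "0acd456789abcdef0123456789abcdef", "type": "request", "name": "GET /", "duration": 32.5, "span_count": {"started": 0}}}'
----

On success the server responds with `202 Accepted` and an empty body, per the response semantics described above.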
-
-[[api-events-schema-definition]]
-[float]
-=== Event API Schemas
-
-The APM Server uses a collection of JSON Schemas for validating requests to the intake API:
-
-* <<api-metadata>>
-* <<api-transaction>>
-* <<api-span>>
-* <<api-error>>
-* <<api-metricset>>
-* <<api-event-example>>
-
-include::./api-metadata.asciidoc[]
-include::./api-transaction.asciidoc[]
-include::./api-span.asciidoc[]
-include::./api-error.asciidoc[]
-include::./api-metricset.asciidoc[]
-include::./api-event-example.asciidoc[]
diff --git a/docs/api-info.asciidoc b/docs/api-info.asciidoc
deleted file mode 100644
index 517b90fa039..00000000000
--- a/docs/api-info.asciidoc
+++ /dev/null
@@ -1,39 +0,0 @@
-[[api-info]]
-=== Server information API
-
-The APM Server exposes an API endpoint to query general server information.
-This lightweight endpoint is useful as a server up/down health check.
-
-[float]
-[[api-info-endpoint]]
-=== Server Information endpoint
-
-Send an `HTTP GET` request to the server information endpoint:
-
-[source,bash]
-------------------------------------------------------------
-http(s)://{hostname}:{port}/
-------------------------------------------------------------
-
-This endpoint always returns an HTTP 200.
-
-If <> or a <> is configured, requests to this endpoint must be authenticated.
-
-[float]
-[[api-info-examples]]
-==== Example
-
-Example APM Server information request:
-
-["source","sh",subs="attributes"]
----------------------------------------------------------------------------
-curl -X GET http://127.0.0.1:8200/ \
-  -H "Authorization: Bearer secret_token"
-
-{
-  "build_date": "2021-12-18T19:59:06Z",
-  "build_sha": "24fe620eeff5a19e2133c940c7e5ce1ceddb1445",
-  "publish_ready": true,
-  "version": "{version}"
-}
---------------------------------------------------------------------------- 
diff --git a/docs/api-metadata.asciidoc b/docs/api-metadata.asciidoc
deleted file mode 100644
index 04addb0c512..00000000000
--- a/docs/api-metadata.asciidoc
+++ /dev/null
@@ -1,66 +0,0 @@
-[[api-metadata]]
-==== Metadata
-
-Every new connection to the APM Server starts with a `metadata` stanza.
-This provides general metadata concerning the other objects in the stream.
-
-Rather than send this metadata information from the agent multiple times,
-the APM Server hangs on to this information and applies it to other objects in the stream as necessary.
-
-TIP: Metadata is stored under `context` when viewing documents in {es}.
-
-* <<api-kubernetes-data>>
-* <<api-metadata-schema>>
-
-[[api-kubernetes-data]]
-[float]
-==== Kubernetes data
-
-APM agents automatically read Kubernetes data and send it to the APM Server.
-In most instances, agents are able to read this data from inside the container.
-If this is not the case, or if you wish to override this data, you can set environment variables for the agents to read.
-These environment variables are set via the Kubernetes https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-pod-fields-as-values-for-environment-variables[Downward API].
-Here's how you would add the environment variables to your Kubernetes pod spec: - -[source,yaml] ----- - - name: KUBERNETES_NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - - name: KUBERNETES_POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: KUBERNETES_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - - name: KUBERNETES_POD_UID - valueFrom: - fieldRef: - fieldPath: metadata.uid ----- - -The table below maps these environment variables to the APM metadata event field: - -[options="header"] -|===== -|Environment variable |Metadata field name -| `KUBERNETES_NODE_NAME` |system.kubernetes.node.name -| `KUBERNETES_POD_NAME` |system.kubernetes.pod.name -| `KUBERNETES_NAMESPACE` |system.kubernetes.namespace -| `KUBERNETES_POD_UID` |system.kubernetes.pod.uid -|===== - -[[api-metadata-schema]] -[float] -==== Metadata Schema - -APM Server uses JSON Schema to validate requests. The specification for metadata is defined on -{github_repo_link}/docs/spec/v2/metadata.json[GitHub] and included below: - -[source,json] ----- -include::./spec/v2/metadata.json[] ----- \ No newline at end of file diff --git a/docs/api-metricset.asciidoc b/docs/api-metricset.asciidoc deleted file mode 100644 index d59ea85d460..00000000000 --- a/docs/api-metricset.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -[[api-metricset]] -==== Metrics - -Metrics contain application metric data captured by an {apm-agent}. - -[[api-metricset-schema]] -[float] -==== Metric Schema - -APM Server uses JSON Schema to validate requests. The specification for metrics is defined on -{github_repo_link}/docs/spec/v2/metricset.json[GitHub] and included below: - -[source,json] ----- -include::./spec/v2/metricset.json[] ----- diff --git a/docs/api-span.asciidoc b/docs/api-span.asciidoc deleted file mode 100644 index 96d75d31d75..00000000000 --- a/docs/api-span.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -[[api-span]] -==== Spans - -Spans are events captured by an agent occurring in a monitored service. - -[[api-span-schema]] -[float] -==== Span Schema - -APM Server uses JSON Schema to validate requests. The specification for spans is defined on -{github_repo_link}/docs/spec/v2/span.json[GitHub] and included below: - -[source,json] ----- -include::./spec/v2/span.json[] ----- diff --git a/docs/api-transaction.asciidoc b/docs/api-transaction.asciidoc deleted file mode 100644 index 758496ebcb2..00000000000 --- a/docs/api-transaction.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -[[api-transaction]] -==== Transactions - -Transactions are events corresponding to an incoming request or similar task occurring in a monitored service. - -[[api-transaction-schema]] -[float] -==== Transaction Schema - -APM Server uses JSON Schema to validate requests. 
The specification for transactions is defined on
-{github_repo_link}/docs/spec/v2/transaction.json[GitHub] and included below:
-
-[source,json]
-----
-include::./spec/v2/transaction.json[]
-----
diff --git a/docs/api.asciidoc b/docs/api.asciidoc
deleted file mode 100644
index 8afaa83f25d..00000000000
--- a/docs/api.asciidoc
+++ /dev/null
@@ -1,12 +0,0 @@
-[[api]]
-== API
-
-The APM Server exposes endpoints for:
-
-* <<api-events>>
-* <<api-config>>
-* <<api-info>>
-
-include::./api-events.asciidoc[]
-include::./api-config.asciidoc[]
-include::./api-info.asciidoc[]
diff --git a/docs/apm-breaking.asciidoc b/docs/apm-breaking.asciidoc
deleted file mode 100644
index e06351bbdf5..00000000000
--- a/docs/apm-breaking.asciidoc
+++ /dev/null
@@ -1,203 +0,0 @@
-:issue: https://github.com/elastic/apm-server/issues/
-:pull: https://github.com/elastic/apm-server/pull/
-
-[[apm-breaking]]
-=== Breaking Changes
-
-// These tagged regions are required for the stack-docs repo includes
-// tag::83-bc[]
-// end::83-bc[]
-// tag::notable-v8-breaking-changes[]
-// end::notable-v8-breaking-changes[]
-
-This section describes the breaking changes and deprecations introduced in this release
-and previous minor versions.
-
-[float]
-[[breaking-changes-8.2]]
-=== 8.2
-
-// tag::82-bc[]
-The following breaking changes are introduced in APM version 8.2.0:
-
-[float]
-==== APM Server now emits events with `event.duration`
-
-APM Server no longer emits events with a `transaction.duration.us` or `span.duration.us`.
-Instead, events are emitted with an `event.duration`.
-An ingest pipeline sets the legacy `.duration.us` field and removes the `event.duration`.
-
-This change will impact users who are not using APM Server's {es} output or the packaged ingest pipeline.
-For details, see https://github.com/elastic/apm-server/pull/7261[PR #7261].
-
-[float]
-==== Removed `observer.version_major`
-
-The field `observer.version_major` is non-standard and existed only for the APM UI to filter out legacy docs (versions <7.0).
-This check is no longer performed, so the field has been removed.
-
-For details, see https://github.com/elastic/apm-server/pull/7399[PR #7399].
-
-[float]
-==== APM Server no longer ships with System V init scripts or the go-daemon wrapper
-
-As of version 8.1.0, all Linux distributions supported by APM Server support systemd.
-As a result, APM Server no longer ships with System V init scripts or the go-daemon wrapper; use systemd instead.
-
-For details, see https://github.com/elastic/apm-server/pull/7576[PR #7576].
-
-[float]
-==== Deprecated 32-bit architectures
-
-APM Server support for 32-bit architectures has been deprecated and will be removed in a future release.
-// end::82-bc[]
-
-[float]
-[[breaking-changes-8.1]]
-=== 8.1
-
-// tag::81-bc[]
-There are no breaking changes in APM.
-// end::81-bc[]
-
-[float]
-[[breaking-changes-8.0]]
-=== 8.0
-
-// tag::80-bc[]
-The following breaking changes are introduced in APM version 8.0.
-
-[float]
-==== Indices are now managed by {fleet}
-
-All index management has been removed from APM Server;
-{fleet} is now entirely responsible for setting up index templates, index lifecycle policies,
-and index pipelines.
-
-As a part of this change, the following settings have been removed:
-
-* `apm-server.ilm.*`
-* `apm-server.register.ingest.pipeline.*`
-* `setup.*`
-
-[float]
-==== Data streams by default
-
-APM Server now only writes to well-defined data streams;
-writing to classic indices is no longer supported.
- -As a part of this change, the following settings have been removed: - -* `apm-server.data_streams.enabled` -* `output.elasticsearch.index` -* `output.elasticsearch.indices` -* `output.elasticsearch.pipeline` -* `output.elasticsearch.pipelines` - -[float] -==== New {es} output - -APM Server has a new {es} output implementation with defaults that should be sufficient for -most use cases; it should no longer be necessary to manually tune the output -of APM Server. - -As a part of this change, the following settings have been removed: - -* `output.elasticsearch.bulk_max_size` -* `output.elasticsearch.worker` -* `queue.*` - -[float] -==== New source map upload endpoint - -The source map upload endpoint has been removed from APM Server. -Source maps should now be uploaded directly to {kib} instead. - -[float] -==== Legacy Jaeger endpoints have been removed - -The legacy Jaeger gRPC and HTTP endpoints have been removed from APM Server. - -As a part of this change, the following settings have been removed: - -* `apm-server.jaeger` - -[float] -==== Homebrew no longer supported - -APM Server no longer supports installation via Homebrew. - -[float] -==== All removed and changed settings - -Below is a list of all **removed settings** (in alphabetical order) for -users upgrading a standalone (legacy) APM Server to {stack} version 8.0. - -[source,yml] ----- -apm-server.data_streams.enabled -apm-server.ilm.* -apm-server.jaeger -apm-server.register.ingest.pipeline.* -apm-server.sampling.keep_unsampled -output.elasticsearch.bulk_max_size -output.elasticsearch.index -output.elasticsearch.indices -output.elasticsearch.pipeline -output.elasticsearch.pipelines -output.elasticsearch.worker -queue.* -setup.* ----- - -Below is a list of **renamed settings** (in alphabetical order) for -users upgrading a standalone (legacy) APM Server to {stack} version 8.0. - -[source,yml] ----- -previous setting --> new setting - -apm-server.api_key --> apm-server.auth.api_key -apm-server.instrumentation --> instrumentation -apm-server.rum.allowed_service --> apm-server.auth.anonymous.allow_service -apm-server.rum.event_rate --> apm-server.auth.anonymous.rate_limit -apm-server.secret_token --> apm-server.auth.secret_token ----- - -[float] -==== Supported {ecloud} settings - -Below is a list of all **supported settings** (in alphabetical order) for -users upgrading an {ecloud} standalone (legacy) cluster to {stack} version 8.0. -Any previously supported settings not listed below will be removed when upgrading. 
-
-[source,yml]
-----
-apm-server.agent.config.cache.expiration
-apm-server.aggregation.transactions.*
-apm-server.auth.anonymous.allow_agent
-apm-server.auth.anonymous.allow_service
-apm-server.auth.anonymous.rate_limit.event_limit
-apm-server.auth.anonymous.rate_limit.ip_limit
-apm-server.auth.api_key.enabled
-apm-server.auth.api_key.limit
-apm-server.capture_personal_data
-apm-server.default_service_environment
-apm-server.max_event_size
-apm-server.rum.allow_headers
-apm-server.rum.allow_origins
-apm-server.rum.enabled
-apm-server.rum.exclude_from_grouping
-apm-server.rum.library_pattern
-apm-server.rum.source_mapping.enabled
-apm-server.rum.source_mapping.cache.expiration
-logging.level
-logging.selectors
-logging.metrics.enabled
-logging.metrics.period
-max_procs
-output.elasticsearch.flush_bytes
-output.elasticsearch.flush_interval
-----
-
-// end::80-bc[]
diff --git a/docs/apm-components.asciidoc b/docs/apm-components.asciidoc
deleted file mode 100644
index 55d86a06472..00000000000
--- a/docs/apm-components.asciidoc
+++ /dev/null
@@ -1,86 +0,0 @@
-[[apm-components]]
-== Components and documentation
-
-****
-There are two ways to install, run, and manage Elastic APM:
-
-* With the Elastic APM integration
-* With the standalone (legacy) APM Server binary
-
-This documentation focuses on option one: the **Elastic APM integration**.
-For standalone APM Server (legacy) documentation, please see the <>
-and <>.
-****
-
-Elastic APM consists of four components: *APM agents*, the *Elastic APM integration*, *{es}*, and *{kib}*.
-Generally, there are two ways that these four components can work together:
-
-APM agents on edge machines send data to a centrally hosted APM integration:
-
-image::./images/apm-architecture.png[Architecture of Elastic APM]
-
-Or, APM agents and the APM integration live on edge machines and enroll via a centrally hosted {agent}:
-
-image::./images/apm-architecture-two.png[Architecture of Elastic APM option two]
-
-Read on to learn more about each of these components!
-
-[float]
-=== APM Agents
-
-APM agents are open source libraries written in the same language as your service.
-You may only need one, or you might use all of them.
-You install them into your service as you would install any other library.
-They instrument your code and collect performance data and errors at runtime.
-This data is buffered for a short period and sent on to APM Server.
-
-Each agent has its own documentation:
-
-* {apm-go-ref-v}/introduction.html[Go agent]
-* {apm-ios-ref-v}/intro.html[iOS agent]
-* {apm-java-ref-v}/intro.html[Java agent]
-* {apm-dotnet-ref-v}/intro.html[.NET agent]
-* {apm-node-ref-v}/intro.html[Node.js agent]
-* {apm-php-ref-v}/intro.html[PHP agent]
-* {apm-py-ref-v}/getting-started.html[Python agent]
-* {apm-ruby-ref-v}/introduction.html[Ruby agent]
-* {apm-rum-ref-v}/intro.html[JavaScript Real User Monitoring (RUM) agent]
-
-[float]
-[[apm-integration]]
-=== Elastic APM integration
-
-The APM integration receives performance data from your APM agents,
-validates and processes it, and then transforms the data into {es} documents.
-Removing this logic from APM agents helps keep them light, prevents certain security risks,
-and improves compatibility across the {stack}.
-
-The Elastic integration runs on {fleet-guide}[{agent}]. {agent} is a single, unified way to add monitoring for logs,
-metrics, traces, and other types of data to each host.
-A single agent makes it easier and faster to deploy monitoring across your infrastructure.
-The agent's single, unified policy makes it easier to add integrations for new data sources. - -[float] -=== {es} - -{ref}/index.html[{es}] is a highly scalable free and open full-text search and analytics engine. -It allows you to store, search, and analyze large volumes of data quickly and in near real time. -{es} is used to store APM performance metrics and make use of its aggregations. - -[float] -=== {kib} {apm-app} - -{kibana-ref}/index.html[{kib}] is a free and open analytics and visualization platform designed to work with {es}. -You use {kib} to search, view, and interact with data stored in {es}. - -Since application performance monitoring is all about visualizing data and detecting bottlenecks, -it's crucial you understand how to use the {kibana-ref}/xpack-apm.html[{apm-app}] in {kib}. -The following sections will help you get started: - -* {apm-app-ref}/apm-ui.html[Set up] -* {apm-app-ref}/apm-getting-started.html[Get started] -* {apm-app-ref}/apm-how-to.html[How-to guides] - -APM also has built-in integrations with {ml-cap}. To learn more about this feature, -or the {anomaly-detect} feature that's built on top of it, -refer to {kibana-ref}/machine-learning-integration.html[{ml-cap} integration]. diff --git a/docs/apm-data-security.asciidoc b/docs/apm-data-security.asciidoc deleted file mode 100644 index 5a4fc84a68e..00000000000 --- a/docs/apm-data-security.asciidoc +++ /dev/null @@ -1,579 +0,0 @@ -[[apm-data-security]] -=== Data security - -When setting up Elastic APM, it's essential to review all captured data carefully to ensure -it doesn't contain sensitive information like passwords, credit card numbers, or health data. -In addition, you may wish to filter out other identifiable information, like IP addresses, user agent information, -or form field data. - -Depending on the type of data, we offer several different ways to filter, manipulate, -or obfuscate sensitive information during or before ingestion: - -* <> -* <> - -In addition to utilizing filters, you should regularly review the <> table to ensure -sensitive data is not being ingested. If it is, it's possible to remove or redact it. -See <> for more information. - -[float] -[[built-in-data-filters]] -==== Built-in data filters - -// tag::data-filters[] -Built-in data filters allow you to filter or turn off ingestion of the following types of data: - -[options="header"] -|==== -|Data type |Common sensitive data -|<> |Passwords, credit card numbers, authorization, etc. -|<> |Passwords, credit card numbers, etc. -|<> |Client IP address and user agent. -|<> |URLs visited, click events, user browser errors, resources used, etc. -|<> |Sensitive user or business information -|==== -// end::data-filters[] - -[float] -[[custom-data-filters]] -==== Custom filters - -// tag::custom-filters[] -Custom filters allow you to filter or redact other types of APM data on ingestion: - -|==== -// |<> | Applied at ingestion time. -// All agents and fields are supported. Data leaves the instrumented service. -// There are no performance overhead implications on the instrumented service. - -|<> | Not supported by all agents. -Data is sanitized before leaving the instrumented service. 
-Potential overhead implications on the instrumented service -|==== -// end::custom-filters[] - -[float] -[[sensitive-fields]] -==== Sensitive fields - -You should review the following fields regularly to ensure sensitive data is not being captured: - -[options="header"] -|==== -| Field | Description | Remedy -| `client.ip` | The client IP address, as forwarded by proxy. | <> -| `http.request.body.original` | The body of the monitored HTTP request. | <> -| `http.request.headers` | The canonical headers of the monitored HTTP request. | <> -| `http.request.socket.remote_address` | The address of the last proxy or end-user (if no proxy). | <> -| `http.response.headers` | The canonical headers of the monitored HTTP response. | <> -| `process.args` | Process arguments. | <> -| `span.db.statement` | Database statement. | <> -| `stacktrace.vars` | A flat mapping of local variables captured in the stack frame | <> -| `url.query` | The query string of the request, e.g. `?pass=hunter2`. | <> -| `user.*` | Logged-in user information. | <> -| `user_agent.*` | Device and version making the network request. | <> -|==== - -// **************************************************************** - -[[filtering]] -==== Built-in data filters - -include::./apm-data-security.asciidoc[tag=data-filters] - -[discrete] -[[filters-http-header]] -==== HTTP headers - -By default, APM agents capture HTTP request and response headers (including cookies). -Most Elastic APM agents provide the ability to sanitize HTTP header fields, -including cookies and `application/x-www-form-urlencoded` data (POST form fields). -Query string and captured request bodies, like `application/json` data, are not sanitized. - -The default list of sanitized fields attempts to target common field names for data relating to -passwords, credit card numbers, authorization, etc., but can be customized to fit your data. -This sensitive data never leaves the instrumented service. - -This setting supports {kibana-ref}/agent-configuration.html[Central configuration], -which means the list of sanitized fields can be updated without needing to redeploy your services: - -* Go: {apm-go-ref-v}/configuration.html#config-sanitize-field-names[`ELASTIC_APM_SANITIZE_FIELD_NAMES`] -* Java: {apm-java-ref-v}/config-core.html#config-sanitize-field-names[`sanitize_field_names`] -* .NET: {apm-dotnet-ref-v}/config-core.html#config-sanitize-field-names[`sanitizeFieldNames`] -* Node.js: {apm-node-ref-v}/configuration.html#sanitize-field-names[`sanitizeFieldNames`] -// * PHP: {apm-php-ref-v}[``] -* Python: {apm-py-ref-v}/configuration.html#config-sanitize-field-names[`sanitize_field_names`] -* Ruby: {apm-ruby-ref-v}/configuration.html#config-sanitize-field-names[`sanitize_field_names`] - -Alternatively, you can completely disable the capturing of HTTP headers. 
-This setting also supports {kibana-ref}/agent-configuration.html[Central configuration]: - -* Go: {apm-go-ref-v}/configuration.html#config-capture-headers[`ELASTIC_APM_CAPTURE_HEADERS`] -* Java: {apm-java-ref-v}/config-core.html#config-capture-headers[`capture_headers`] -* .NET: {apm-dotnet-ref-v}/config-http.html#config-capture-headers[`CaptureHeaders`] -* Node.js: {apm-node-ref-v}/configuration.html#capture-headers[`captureHeaders`] -// * PHP: {apm-php-ref-v}[``] -* Python: {apm-py-ref-v}/configuration.html#config-capture-headers[`capture_headers`] -* Ruby: {apm-ruby-ref-v}/configuration.html#config-capture-headers[`capture_headers`] - -[discrete] -[[filters-http-body]] -==== HTTP bodies - -By default, the body of HTTP requests is not recorded. -Request bodies often contain sensitive data like passwords or credit card numbers, -so use care when enabling this feature. - -This setting supports {kibana-ref}/agent-configuration.html[Central configuration], -which means the list of sanitized fields can be updated without needing to redeploy your services: - -* Go: {apm-go-ref-v}/configuration.html#config-capture-body[`ELASTIC_APM_CAPTURE_BODY`] -* Java: {apm-java-ref-v}/config-core.html#config-capture-body[`capture_body`] -* .NET: {apm-dotnet-ref-v}/config-http.html#config-capture-body[`CaptureBody`] -* Node.js: {apm-node-ref-v}//configuration.html#capture-body[`captureBody`] -// * PHP: {apm-php-ref-v}[``] -* Python: {apm-py-ref-v}/configuration.html#config-capture-body[`capture_body`] -* Ruby: {apm-ruby-ref-v}/configuration.html#config-capture-body[`capture_body`] - -[discrete] -[[filters-personal-data]] -==== Personal data - -By default, the APM Server captures some personal data associated with trace events: - -* `client.ip`: The client's IP address. Typically derived from the HTTP headers of incoming requests. -`client.ip` is also used in conjunction with the {ref}/geoip-processor.html[`geoip` processor] to assign -geographical information to trace events. To learn more about how `client.ip` is derived, -see <>. -* `user_agent`: User agent data, including the client operating system, device name, vendor, and version. - -The capturing of this data can be turned off by setting -**Capture personal data** to `false`. - -[discrete] -[[filters-real-user-data]] -==== Real user monitoring data - -Protecting user data is important. -For that reason, individual RUM instrumentations can be disabled in the RUM agent with the -{apm-rum-ref-v}/configuration.html#disable-instrumentations[`disableInstrumentations`] configuration variable. -Disabled instrumentations produce no spans or transactions. - -[options="header"] -|==== -|Disable |Configuration value -|HTTP requests |`fetch` and `xmlhttprequest` -|Page load metrics including static resources |`page-load` -|JavaScript errors on the browser |`error` -|User click events including URLs visited, mouse clicks, and navigation events |`eventtarget` -|Single page application route changes |`history` -|==== - -[discrete] -[[filters-database-statements]] -==== Database statements - -For SQL databases, APM agents do not capture the parameters of prepared statements. -Note that Elastic APM currently does not make an effort to strip parameters of regular statements. -Not using prepared statements makes your code vulnerable to SQL injection attacks, -so be sure to use prepared statements. - -For non-SQL data stores, such as {es} or MongoDB, -Elastic APM captures the full statement for queries. -For inserts or updates, the full document is not stored. 
-To filter or obfuscate data in non-SQL database statements, -or to remove the statement entirely, -you can set up an ingest node pipeline. - -[discrete] -[[filters-agent-specific]] -==== Agent-specific options - -Certain agents offer additional filtering and obfuscating options: - -**Agent configuration options** - -* (Node.js) Remove errors raised by the server-side process: -disable with {apm-node-ref-v}/configuration.html#capture-exceptions[captureExceptions]. - -* (Java) Remove process arguments from transactions: -disabled by default with {apm-java-ref-v}/config-reporter.html#config-include-process-args[`include_process_args`]. - -// **************************************************************** - -[[custom-filter]] -==== Custom filters - -include::./apm-data-security.asciidoc[tag=custom-filters] - -// [discrete] -// [[filters-ingest-pipeline]] -// ==== Create an ingest node pipeline filter - -// Ingest node pipelines specify a series of processors that transform data in a specific way. -// Transformation happens prior to indexing--inflicting no performance overhead on the monitored application. -// Pipelines are a flexible and easy way to filter or obfuscate Elastic APM data. - -// WARNING: Changes to the default APM pipeline do not persist through upgrades. -// You will need to re-add custom ingest pipelines after every upgrade. - -// **Example** - -// Say you decide to <> -// but quickly notice that sensitive information is being collected in the -// `http.request.body.original` field: - -// [source,json] -// ---- -// { -// "email": "test@abc.com", -// "password": "hunter2" -// } -// ---- - -// To obfuscate the passwords stored in the request body, -// you can use a series of {ref}/processors.html[ingest processors]. -// To start, create a pipeline with a simple description and an empty array of processors: - -// [source,json] -// ---- -// { -// "pipeline": { -// "description": "redact http.request.body.original.password", -// "processors": [] <1> -// } -// } -// ---- -// <1> The processors defined below will go in this array - -// Add the first processor to the processors array. -// Because the agent captures the request body as a string, use the -// {ref}/json-processor.html[JSON processor] to convert the original field value into a structured JSON object. 
-// Save this JSON object in a new field: - -// [source,json] -// ---- -// { -// "json": { -// "field": "http.request.body.original", -// "target_field": "http.request.body.original_json", -// "ignore_failure": true -// } -// } -// ---- - -// If `body.original_json` is not `null`, redact the `password` with the {ref}/set-processor.html[set processor], -// by setting the value of `body.original_json.password` to `"redacted"`: - -// [source,json] -// ---- -// { -// "set": { -// "field": "http.request.body.original_json.password", -// "value": "redacted", -// "if": "ctx?.http?.request?.body?.original_json != null" -// } -// } -// ---- - -// Use the {ref}/convert-processor.html[convert processor] to convert the JSON value of `body.original_json` to a string and set it as the `body.original` value: - -// [source,json] -// ---- -// { -// "convert": { -// "field": "http.request.body.original_json", -// "target_field": "http.request.body.original", -// "type": "string", -// "if": "ctx?.http?.request?.body?.original_json != null", -// "ignore_failure": true -// } -// } -// ---- - -// Finally, use the {ref}/remove-processor.html[remove processor] to remove the `body.original_json` field: - -// [source,json] -// ---- -// { -// "remove": { -// "field": "http.request.body.original", -// "if": "ctx?.http?.request?.body?.original_json != null", -// "ignore_failure": true -// } -// } -// ---- - -// Now that the pipeline has been defined, -// use the {ref}/put-pipeline-api.html[create or update pipeline API] to register the new pipeline in {es}. -// Name the pipeline `apm_redacted_body_password`: - -// [source,console] -// ---- -// PUT _ingest/pipeline/apm_redacted_body_password -// { -// "description": "redact http.request.body.original.password", -// "processors": [ -// { -// "json": { -// "field": "http.request.body.original", -// "target_field": "http.request.body.original_json", -// "ignore_failure": true -// } -// }, -// { -// "set": { -// "field": "http.request.body.original_json.password", -// "value": "redacted", -// "if": "ctx?.http?.request?.body?.original_json != null" -// } -// }, -// { -// "convert": { -// "field": "http.request.body.original_json", -// "target_field": "http.request.body.original", -// "type": "string", -// "if": "ctx?.http?.request?.body?.original_json != null", -// "ignore_failure": true -// } -// }, -// { -// "remove": { -// "field": "http.request.body.original_json", -// "if": "ctx?.http?.request?.body?.original_json != null", -// "ignore_failure": true -// } -// } -// ] -// } -// ---- - -// Prior to enabling this new pipeline, you can test it with the {ref}/simulate-pipeline-api.html[simulate pipeline API]. -// This API allows you to run multiple documents through a pipeline to ensure it is working correctly. 
- -// The request below simulates running three different documents through the pipeline: - -// [source,console] -// ---- -// POST _ingest/pipeline/apm_redacted_body_password/_simulate -// { -// "docs": [ -// { -// "_source": { <1> -// "http": { -// "request": { -// "body": { -// "original": """{"email": "test@abc.com", "password": "hunter2"}""" -// } -// } -// } -// } -// }, -// { -// "_source": { <2> -// "some-other-field": true -// } -// }, -// { -// "_source": { <3> -// "http": { -// "request": { -// "body": { -// "original": """["invalid json" """ -// } -// } -// } -// } -// } -// ] -// } -// ---- -// <1> This document features the same sensitive data from the original example above -// <2> This document only contains an unrelated field -// <3> This document contains invalid JSON - -// The API response should be similar to this: - -// [source,json] -// ---- -// { -// "docs" : [ -// { -// "doc" : { -// "_source" : { -// "http" : { -// "request" : { -// "body" : { -// "original" : { -// "password" : "redacted", -// "email" : "test@abc.com" -// } -// } -// } -// } -// } -// } -// }, -// { -// "doc" : { -// "_source" : { -// "nobody" : true -// } -// } -// }, -// { -// "doc" : { -// "_source" : { -// "http" : { -// "request" : { -// "body" : { -// "original" : """["invalid json" """ -// } -// } -// } -// } -// } -// } -// ] -// } -// ---- - -// As expected, only the first simulated document has a redacted password field. -// All other documents are unaffected. - -// The final step in this process is to add the newly created `apm_redacted_body_password` pipeline -// to the default APM pipeline. - -// Pipelines are defined per data stream, and APM data streams are dependent on the type of data being ingested: - -// include::./data-streams.asciidoc[tag=traces-data-streams] -// include::./data-streams.asciidoc[tag=metrics-data-streams] -// include::./data-streams.asciidoc[tag=logs-data-streams] - -// In this example, we're interested in redacting application tracing data, -// so we'll edit the `traces-apm-` ingest pipeline. - -// Navigate to **Stack Management** > **Index Management** > **Data Streams**. -// Search for the `traces-apm` data stream and select it. -// In the flyover panel, select the **Index template** for this data stream. -// Select **Settings** and find the name of the `default_pipeline`; in this example, it's +traces-apm-{version}+. - -// Navigate to **Stack Management** > **Index Pipelines** and search for +traces-apm-{version}+. -// Select **Manage** > **Edit**. - -// WARNING: Do not edit any of the predefined pipeline steps or you may break the APM UI. - -// At the end of the pipeline, select **Add a processor**. -// Select the `Pipeline` processor, and use the Pipeline name we created in the previous step, `apm_redacted_body_password`. -// Select **Add** and finally, **Save pipeline**. - -// That's it! Passwords will now be redacted from your APM HTTP body data. -// Don't forget to re-add your custom pipelines after each version upgrade. - -[discrete] -[[filters-in-agent]] -==== APM agent filters - -Some APM agents offer a way to manipulate or drop APM events _before_ they are sent to the APM Server. -Please see the relevant agent's documentation for more information and examples: - -// * Go: {apm-go-ref-v}/[] -// * Java: {apm-java-ref-v}/[] -* .NET: {apm-dotnet-ref-v}/public-api.html#filter-api[Filter API]. -* Node.js: {apm-node-ref-v}/agent-api.html#apm-add-filter[`addFilter()`]. 
-// * PHP: {apm-php-ref-v}[]
-* Python: {apm-py-ref-v}/sanitizing-data.html[custom processors].
-* Ruby: {apm-ruby-ref-v}/api.html#api-agent-add-filter[`add_filter()`].
-
-// ****************************************************************
-
-[[data-security-delete]]
-==== Delete sensitive data
-
-If you accidentally ingest sensitive data, follow these steps to remove or redact the offending data:
-
-. Stop collecting the sensitive data.
-Use the *remedy* column of the <<sensitive-fields>> table to determine how to stop collecting
-the offending data.
-
-. Delete or redact the ingested data. With data collection fixed, you can now delete or redact the offending data:
-+
-* <<redact-field-data>>
-* <<delete-doc-data>>
-
-[float]
-[[redact-field-data]]
-===== Redact specific fields
-
-To redact sensitive data in a specific field, use the {ref}/docs-update-by-query.html[update by query API].
-
-For example, the following query removes the `client.ip` address
-from APM documents in the `logs-apm.error-default` data stream:
-
-[source, console]
-----
-POST /logs-apm.error-default/_update_by_query
-{
-  "query": {
-    "exists": {
-      "field": "client.ip"
-    }
-  },
-  "script": {
-    "source": "ctx._source.client.ip = params.redacted",
-    "params": {
-      "redacted": "[redacted]"
-    }
-  }
-}
-----
-
-Or, perhaps you only want to redact IP addresses from European users:
-
-[source, console]
-----
-POST /logs-apm.error-default/_update_by_query
-{
-  "query": {
-    "term": {
-      "client.geo.continent_name": {
-        "value": "Europe"
-      }
-    }
-  },
-  "script": {
-    "source": "ctx._source.client.ip = params.redacted",
-    "params": {
-      "redacted": "[redacted]"
-    }
-  }
-}
-----
-
-See {ref}/docs-update-by-query.html[update by query API] for more information and examples.
-
-[float]
-[[delete-doc-data]]
-===== Delete {es} documents
-
-WARNING: This will permanently delete your data.
-You should test your queries with the {ref}/search-search.html[search API] prior to deleting data.
-
-To delete an {es} document,
-you can use the {ref}/docs-delete-by-query.html[delete by query API].
-
-For example, to delete all documents in the `apm-traces-*` data stream with a `user.email` value, run the following query:
-
-[source, console]
-----
-POST /apm-traces-*/_delete_by_query
-{
-  "query": {
-    "exists": {
-      "field": "user.email"
-    }
-  }
-}
-----
-
-See {ref}/docs-delete-by-query.html[delete by query API] for more information and examples.
diff --git a/docs/apm-distributed-tracing.asciidoc b/docs/apm-distributed-tracing.asciidoc
deleted file mode 100644
index b7cad3a9bfc..00000000000
--- a/docs/apm-distributed-tracing.asciidoc
+++ /dev/null
@@ -1,131 +0,0 @@
-[[apm-distributed-tracing]]
-=== Distributed tracing
-
-A `trace` is a group of <> and <> with a common root.
-Each `trace` tracks the entirety of a single request.
-When a `trace` travels through multiple services, as is common in a microservice architecture,
-it is known as a distributed trace.
-
-[float]
-[[why-distributed-tracing]]
-=== Why is distributed tracing important?
-
-Distributed tracing enables you to analyze performance throughout your microservice architecture
-by tracing the entirety of a request -- from the initial web request on your front-end service
-all the way to database queries made on your back-end services.
-
-Tracking requests as they propagate through your services provides an end-to-end picture of
-where your application is spending time, where errors are occurring, and where bottlenecks are forming.
-Distributed tracing eliminates individual service's data silos and reveals what's happening outside of -service borders. - -For supported technologies, distributed tracing works out-of-the-box, with no additional configuration required. - -[float] -[[how-distributed-tracing]] -=== How distributed tracing works - -Distributed tracing works by injecting a custom `traceparent` HTTP header into outgoing requests. -This header includes information, like `trace-id`, which is used to identify the current trace, -and `parent-id`, which is used to identify the parent of the current span on incoming requests -or the current span on an outgoing request. - -When a service is working on a request, it checks for the existence of this HTTP header. -If it's missing, the service starts a new trace. -If it exists, the service ensures the current action is added as a child of the existing trace, -and continues to propagate the trace. - -[float] -[[trace-propagation]] -==== Trace propagation examples - -In this example, Elastic's Ruby agent communicates with Elastic's Java agent. -Both support the `traceparent` header, and trace data is successfully propagated. - -// lint ignore traceparent -image::./images/dt-trace-ex1.png[How traceparent propagation works] - -In this example, Elastic's Ruby agent communicates with OpenTelemetry's Java agent. -Both support the `traceparent` header, and trace data is successfully propagated. - -// lint ignore traceparent -image::./images/dt-trace-ex2.png[How traceparent propagation works] - -In this example, the trace meets a piece of middleware that doesn't propagate the `traceparent` header. -The distributed trace ends and any further communication will result in a new trace. - -// lint ignore traceparent -image::./images/dt-trace-ex3.png[How traceparent propagation works] - - -[float] -[[w3c-tracecontext-spec]] -==== W3C Trace Context specification - -All Elastic agents now support the official W3C Trace Context specification and `traceparent` header. -See the table below for the minimum required agent version: - -[options="header"] -|==== -|Agent name |Agent Version -|**Go Agent**| ≥`1.6` -|**Java Agent**| ≥`1.14` -|**.NET Agent**| ≥`1.3` -|**Node.js Agent**| ≥`3.4` -|**PHP Agent**| ≥`1.0` -|**Python Agent**| ≥`5.4` -|**Ruby Agent**| ≥`3.5` -|**RUM Agent**| ≥`5.0` -|==== - -NOTE: Older Elastic agents use a unique `elastic-apm-traceparent` header. -For backward-compatibility purposes, new versions of Elastic agents still support this header. - -[float] -[[visualize-distributed-tracing]] -=== Visualize distributed tracing - -The {apm-app}'s timeline visualization provides a visual deep-dive into each of your application's traces: - -[role="screenshot"] -image::./images/apm-distributed-tracing.png[Distributed tracing in the APM UI] - -[float] -[[manual-distributed-tracing]] -=== Manual distributed tracing - -Elastic agents automatically propagate distributed tracing context for supported technologies. -If your service communicates over a different, unsupported protocol, -you can manually propagate distributed tracing context from a sending service to a receiving service -with each agent's API. - -[float] -[[distributed-tracing-outgoing]] -==== Add the `traceparent` header to outgoing requests - -Sending services must add the `traceparent` header to outgoing requests. 
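
Concretely, before the agent-specific examples below, here is a sketch of what this looks like on the wire. The header value has the form `version-trace_id-parent_id-flags`; the IDs shown are the sample values from the W3C Trace Context specification, and the URL is a hypothetical downstream service:

[source,sh]
----
# "00" = version, then a 32-hex-digit trace id, a 16-hex-digit parent span id,
# and "01" for the sampled flag.
curl http://backend.example.com/checkout \
  -H 'traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01'
----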
- --- -include::./shared/distributed-trace-send/distributed-trace-send-widget.asciidoc[] --- - -[float] -[[distributed-tracing-incoming]] -==== Parse the `traceparent` header on incoming requests - -Receiving services must parse the incoming `traceparent` header, -and start a new transaction or span as a child of the received context. - --- -include::./shared/distributed-trace-receive/distributed-trace-receive-widget.asciidoc[] --- - -[float] -[[distributed-tracing-rum]] -=== Distributed tracing with RUM - -Some additional setup may be required to correlate requests correctly with the Real User Monitoring (RUM) agent. - -See the {apm-rum-ref}/distributed-tracing-guide.html[RUM distributed tracing guide] -for information on enabling cross-origin requests, setting up server configuration, -and working with dynamically-generated HTML. diff --git a/docs/apm-input-settings.asciidoc b/docs/apm-input-settings.asciidoc deleted file mode 100644 index 6066adcbbc2..00000000000 --- a/docs/apm-input-settings.asciidoc +++ /dev/null @@ -1,545 +0,0 @@ -// tag::NAME-setting[] -| -[id="input-{input-type}-NAME-setting"] -`NAME` - -| (TYPE) DESCRIPTION. - -*Default:* `DEFAULT` - -OPTIONAL INFO AND EXAMPLE -// end::NAME-setting[] - -// ============================================================================= - -// These settings are shared across the docs for multiple inputs. Copy and use -// the above template to add a shared setting. Replace values in all caps. -// Use an include statement // to pull the tagged region into your source file: -// include::input-shared-settings.asciidoc[tag=NAME-setting] - -// tag::host-setting[] -| -[id="input-{input-type}-host-setting"] -Host - -| (text) Defines the host and port the server is listening on. - -Use `"unix:/path/to.sock"` to listen on a Unix domain socket. - -*Default:* `localhost:8200` -// end::host-setting[] - -// ============================================================================= - -// tag::url-setting[] -| -[id="input-{input-type}-url-setting"] -URL - -| The publicly reachable server URL. For deployments on {ecloud} or ECK, the default is unchangeable. -// end::url-setting[] - -// ============================================================================= - -// tag::max_header_bytes-setting[] -| -[id="input-{input-type}-max_header_bytes-setting"] -Maximum size of a request's header - -| (int) Maximum permitted size of a request's header accepted by the server to be processed (in Bytes). - -*Default:* `1048576` Bytes -// end::max_header_bytes-setting[] - -// ============================================================================= - -// tag::idle_timeout-setting[] -| -[id="input-{input-type}-idle_timeout-setting"] -Idle time before underlying connection is closed - -| (text) Maximum amount of time to wait for the next incoming request before underlying connection is closed. - -*Default:* `45s` (45 seconds) -// end::idle_timeout-setting[] - -// ============================================================================= - -// tag::read_timeout-setting[] -| -[id="input-{input-type}-read_timeout-setting"] -Maximum duration for reading an entire request - -| (text) Maximum permitted duration for reading an entire request. 
- -*Default:* `3600s` (3600 seconds) -// end::read_timeout-setting[] - -// ============================================================================= - -// tag::shutdown_timeout-setting[] -| -[id="input-{input-type}-shutdown_timeout-setting"] -Maximum duration before releasing resources when shutting down - -| (text) Maximum duration in seconds before releasing resources when shutting down the server. - -*Default:* `30s` (30 seconds) -// end::shutdown_timeout-setting[] - -// ============================================================================= - -// tag::write_timeout-setting[] -| -[id="input-{input-type}-write_timeout-setting"] -Maximum duration for writing a response - -| (text) Maximum permitted duration for writing a response. - -*Default:* `30s` (30 seconds) -// end::write_timeout-setting[] - -// ============================================================================= - -// tag::max_event_bytes-setting[] -| -[id="input-{input-type}-max_event_bytes-setting"] -Maximum size per event - -| (int) Maximum permitted size of an event accepted by the server to be processed (in Bytes). - -*Default:* `307200` Bytes -// end::max_event_bytes-setting[] - -// ============================================================================= - -// tag::max_connections-setting[] -| -[id="input-{input-type}-max_connections-setting"] -Simultaneously accepted connections - -| (int) Maximum number of TCP connections to accept simultaneously. `0` means unlimited. - -*Default:* `0` (unlimited) -// end::max_connections-setting[] - -// ============================================================================= - -// tag::response_headers-setting[] -| -[id="input-{input-type}-response_headers-setting"] -Custom HTTP response headers - -| (text) Custom HTTP headers to add to HTTP responses. Useful for security policy compliance. - -// end::response_headers-setting[] - -// ============================================================================= - -// tag::capture_personal_data-setting[] -| -[id="input-{input-type}-capture_personal_data-setting"] -Capture personal data - -| (bool) Capture personal data such as IP or User Agent. -If true, APM Server captures the IP of the instrumented service and its User Agent if any. - -*Default:* `true` -// end::capture_personal_data-setting[] - -// ============================================================================= - -// tag::default_service_environment-setting[] -| -[id="input-{input-type}-default_service_environment-setting"] -Default Service Environment - -| (text) The default service environment for events without a defined service environment. - -*Default:* none - -// end::default_service_environment-setting[] - -// ============================================================================= - -// tag::golang_xpvar-setting[] -| -[id="input-{input-type}-golang_xpvar-setting"] -Enable APM Server Golang expvar support - -| (bool) When set to `true`, the server exposes https://golang.org/pkg/expvar/[Golang expvar] under `/debug/vars`. - -*Default:* `false` - -// end::golang_xpvar-setting[] - -// ============================================================================= - -// tag::enable_rum-setting[] -| -[id="input-{input-type}-enable_rum-setting"] -Enable RUM - -| (bool) Enables and disables Real User Monitoring (RUM). 
- -*Default:* `false` (disabled) -// end::enable_rum-setting[] - -// ============================================================================= - -// tag::rum_allow_origins-setting[] -| -[id="input-{input-type}-rum_allow_origins-setting"] -Allowed Origins - -| (text) A list of permitted origins for RUM support. -User-agents send an Origin header that will be validated against this list. -This is done automatically by modern browsers as part of the https://www.w3.org/TR/cors/[CORS specification]. -An origin is made of a protocol scheme, host and port, without the URL path. - -*Default:* `["*"]` (allows everything) -// end::rum_allow_origins-setting[] - -// ============================================================================= - -// tag::rum_allow_headers-setting[] -| -[id="input-{input-type}-rum_allow_headers-setting"] -Access-Control-Allow-Headers - -| (text) By default, HTTP requests made from the RUM agent to the APM integration are limited in the HTTP headers they are allowed to have. -If any other headers are added, the request will be rejected by the browser due to Cross-Origin Resource Sharing (CORS) restrictions. -If you need to add extra headers to these requests, use this configuration to allow additional headers. - -The default list of values includes `"Content-Type"`, `"Content-Encoding"`, and `"Accept"`. -Configured values are appended to the default list and used as the value for the -`Access-Control-Allow-Headers` header. -// end::rum_allow_headers-setting[] - -// ============================================================================= - -// tag::rum_response_headers-setting[] -| -[id="input-{input-type}-rum_response_headers-setting"] -Custom HTTP response headers - -| (text) Custom HTTP headers to add to RUM responses. For example, for security policy compliance. Headers set here are in addition to those set in the "Custom HTTP response headers", but only apply to RUM responses. - -*Default:* none -// end::rum_response_headers-setting[] - -// ============================================================================= - -// tag::rum_library_frame_pattern-setting[] -| -[id="input-{input-type}-rum_library_frame_pattern-setting"] -Library Frame Pattern - -| (text) RegExp to be matched against a stack trace frame's `file_name` and `abs_path` attributes. -If the RegExp matches, the stack trace frame is considered to be a library frame. -When source mapping is applied, the `error.culprit` is set to reflect the _function_ and the _filename_ -of the first non-library frame. -This aims to provide an entry point for identifying issues. - -*Default:* `"node_modules\|bower_components\|~"` -// end::rum_library_frame_pattern-setting[] - -// ============================================================================= - -// tag::rum_exclude_from_grouping-setting[] -| -[id="input-{input-type}-rum_exclude_from_grouping-setting"] -Exclude from grouping - -| (text) RegExp to be matched against a stack trace frame's `file_name`. -If the RegExp matches, the stack trace frame is excluded from being used for calculating error groups. - -*Default:* `"^/webpack"` (excludes stack trace frames that have a filename starting with `/webpack`) -// end::rum_exclude_from_grouping-setting[] - -// ============================================================================= - -// tag::tls_enabled-setting[] -| -[id="input-{input-type}-tls_enabled-setting"] -Enable TLS - -| (bool) Enable TLS. 
- -*Default:* `false` -// end::tls_enabled-setting[] - -// ============================================================================= - -// tag::tls_certificate-setting[] -| -[id="input-{input-type}-tls_certificate-setting"] -File path to server certificate - -| (text) The path to the file containing the certificate for server authentication. Required when TLS is enabled. - -*Default:* none -// end::tls_certificate-setting[] - -// ============================================================================= - -// tag::tls_key-setting[] -| -[id="input-{input-type}-tls_key-setting"] -File path to server certificate key - -| (text) The path to the file containing the server certificate key. Required when TLS is enabled. - -*Default:* none -// end::tls_key-setting[] - -// ============================================================================= - -// tag::tls_supported_protocols-setting[] -| -[id="input-{input-type}-tls_supported_protocols-setting"] -Supported protocol versions - -| (array) A list of allowed TLS protocol versions. - -*Default:* `["TLSv1.1", "TLSv1.2", "TLSv1.3"]` -// end::tls_supported_protocols-setting[] - -// ============================================================================= - -// tag::tls_cipher_suites-setting[] -| -[id="input-{input-type}-tls_cipher_suites-setting"] -Cipher suites for TLS connections - -| (text) The list of cipher suites to use. The first entry has the highest priority. -If this option is omitted, the Go crypto library’s https://golang.org/pkg/crypto/tls/[default suites] are used (recommended). -Note that TLS 1.3 cipher suites are not individually configurable in Go, so they are not included in this list. -// end::tls_cipher_suites-setting[] - -// ============================================================================= - -// tag::tls_curve_types-setting[] -| -[id="input-{input-type}-tls_curve_types-setting"] -Curve types for ECDHE-based cipher suites - -| (text) The list of curve types for ECDHE (Elliptic Curve Diffie-Hellman ephemeral key exchange). - -*Default:* none -// end::tls_curve_types-setting[] - -// ============================================================================= - -// tag::api_key_enabled-setting[] -| -[id="input-{input-type}-api_key_enabled-setting"] -API key for agent authentication - -| (bool) Enable or disable API key authorization between APM Server and APM agents. - -*Default:* `false` (disabled) -// end::api_key_enabled-setting[] - -// ============================================================================= - -// tag::api_key_limit-setting[] -| -[id="input-{input-type}-api_key_limit-setting"] -Number of keys - -| (int) Each unique API key triggers one request to {es}. -This setting restricts the number of unique API keys allowed per minute. -The minimum value for this setting should be the number of API keys configured in your monitored services. - -*Default:* `100` -// end::api_key_limit-setting[] - -// ============================================================================= - -// tag::secret_token-setting[] -| -[id="input-{input-type}-secret_token-setting"] -Secret token - -| (text) Authorization token for sending APM data. -The same token must also be set in each {apm-agent}. -This token is not used for RUM endpoints.
- -*Default:* No secret token set -// end::secret_token-setting[] - -// ============================================================================= - -// tag::anonymous_enabled-setting[] -| -[id="input-{input-type}-anonymous_enabled-setting"] -Anonymous Agent access - -| (bool) Enable or disable anonymous authentication. RUM agents do not support authentication, so disabling anonymous access will effectively disable RUM agents. - -*Default:* `true` (enabled) -// end::anonymous_enabled-setting[] - -// ============================================================================= - -// tag::anonymous_allow_agent-setting[] -| -[id="input-{input-type}-anonymous_allow_agent-setting"] -Allowed Anonymous agents - -| (array) A list of permitted {apm-agent} names for anonymous authentication. -Names in this list must match the agent's `agent.name`. - -*Default:* `[rum-js, js-base, iOS/swift]` (only RUM and iOS/Swift agent events are accepted) -// end::anonymous_allow_agent-setting[] - -// ============================================================================= - -// tag::anonymous_allow_service-setting[] -| -[id="input-{input-type}-anonymous_allow_service-setting"] -Allowed Anonymous services - -| (array) A list of permitted service names for anonymous authentication. -Names in this list must match the agent's `service.name`. -This can be used to limit the number of service-specific indices or data streams created. - -*Default:* Not set (any service name is accepted) -// end::anonymous_allow_service-setting[] - -// ============================================================================= - -// tag::anonymous_rate_limit_ip_limit-setting[] -| -[id="input-{input-type}-anonymous_rate_limit_ip_limit-setting"] -Anonymous Rate limit (IP limit) - -| (int) The number of unique IP addresses to track in a least recently used (LRU) cache. -IP addresses in the cache will be rate limited according to the `anonymous_rate_limit_event_limit` setting. -Consider increasing this default if your application has many concurrent clients. - -*Default:* `10000` -// end::anonymous_rate_limit_ip_limit-setting[] - -// ============================================================================= - -// tag::anonymous_rate_limit_event_limit-setting[] -| -[id="input-{input-type}-anonymous_rate_limit_event_limit-setting"] -Anonymous Event rate limit (event limit) - -| (int) The maximum amount of events allowed to be sent to the APM Server anonymous auth endpoint per IP per second. - -*Default:* `10` -// end::anonymous_rate_limit_event_limit-setting[] - -// ============================================================================= - -// tag::tail_sampling_enabled-setting[] -| -[id="input-{input-type}-tail_sampling_enabled"] -Enable Tail-based sampling - -| (bool) Enable and disable tail-based sampling. - -*Default:* `false` -// end::tail_sampling_enabled-setting[] - -// ============================================================================= - -// tag::tail_sampling_interval-setting[] -| -[id="input-{input-type}-tail_sampling_interval"] -Interval - -| (duration) Synchronization interval for multiple APM Servers. -Should be in the order of tens of seconds or low minutes. - -*Default:* `1m` -// end::tail_sampling_interval-setting[] - -// ============================================================================= - -// tag::tail_sampling_policies-setting[] -| -[id="input-{input-type}-tail_sampling_policies"] -Policies - -| (`[]policy`) Criteria used to match a root transaction to a sample rate. 
-Order is important; the first policy on the list that an event matches is the winner. -Each policy list must conclude with a default policy that only specifies a sample rate. -The default policy is used to catch remaining trace events that don’t match a stricter policy. - -Required when tail-based sampling is enabled. - -// end::tail_sampling_policies-setting[] - -// ============================================================================= - -// tag::sample_rate-setting[] -| -[id="input-{input-type}-sample_rate"] -Sample rate - -`sample_rate` - -| (int) The sample rate to apply to trace events matching this policy. -Required in each policy. - -// end::sample_rate-setting[] - -// ============================================================================= - -// tag::trace_name-setting[] -| -[id="input-{input-type}-trace_name"] -Trace name - -`trace.name` - -| (string) The trace name for events to match a policy. - -// end::trace_name-setting[] - -// ============================================================================= - -// tag::trace_outcome-setting[] -| -[id="input-{input-type}-trace_outcome"] -Trace outcome - -`trace.outcome` - -| (string) The trace outcome for events to match a policy. -Trace outcome can be `success`, `failure`, or `unknown`. - -// end::trace_outcome-setting[] - -// ============================================================================= - -// tag::service_name-setting[] -| -[id="input-{input-type}-service_name"] -Service name - -`service.name` - -| (string) The service name for events to match a policy. - -// end::service_name-setting[] - -// ============================================================================= - -// tag::service_env-setting[] -| -[id="input-{input-type}-service_env"] -Service Environment - -`service.environment` - -| (string) The service environment for events to match a policy. - -// end::service_env-setting[] - -// ============================================================================= diff --git a/docs/apm-overview.asciidoc b/docs/apm-overview.asciidoc deleted file mode 100644 index 299726c529c..00000000000 --- a/docs/apm-overview.asciidoc +++ /dev/null @@ -1,27 +0,0 @@ -[[apm-overview]] -== Free and open application performance monitoring - -++++ -What is APM? -++++ - -Elastic APM is an application performance monitoring system built on the {stack}. -It allows you to monitor software services and applications in real-time, by -collecting detailed performance information on response time for incoming requests, -database queries, calls to caches, external HTTP requests, and more. -This makes it easy to pinpoint and fix performance problems quickly. - -Elastic APM also automatically collects unhandled errors and exceptions. -Errors are grouped based primarily on the stack trace, -so you can identify new errors as they appear and keep an eye on how many times specific errors happen. - -Metrics are another vital source of information when debugging production systems. -Elastic APM agents automatically pick up basic host-level metrics and agent-specific metrics, -like JVM metrics in the Java Agent, and Go runtime metrics in the Go Agent. - -[float] -=== Give Elastic APM a try - -Learn more about the <> that make up Elastic APM -// , -// or jump right into the <>. 
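To make the tail-based sampling policy settings described earlier concrete, here is a minimal sketch of a policy list. It uses the YAML layout of the standalone `apm-server.yml` configuration (an assumption for illustration; the APM integration exposes the same fields through its settings UI), and the service name, environment, and rates are hypothetical:

[source,yaml]
----
apm-server:
  sampling:
    tail:
      enabled: true
      interval: 1m
      policies:
        # Most specific policy first: keep every failed trace from a
        # hypothetical checkout service.
        - service.name: checkout
          trace.outcome: failure
          sample_rate: 1.0
        # Keep 20% of the remaining production traces.
        - service.environment: production
          sample_rate: 0.2
        # Required catch-all default: specifies only a sample rate.
        - sample_rate: 0.1
----

As the setting descriptions state, the first matching policy wins, so the catch-all default must be the last entry in the list.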
diff --git a/docs/apm-quick-start.asciidoc b/docs/apm-quick-start.asciidoc deleted file mode 100644 index 8b6c1e20e4f..00000000000 --- a/docs/apm-quick-start.asciidoc +++ /dev/null @@ -1,9 +0,0 @@ -[[apm-quick-start]] -== Quick start - -// * Point to EA APT/YUM -// * Point to EA for running on Docker -// * Point to EA for directory layout -// * Point to EA for systemd - -include::{obs-repo-dir}/observability/ingest-traces.asciidoc[tag=apm-quick-start] diff --git a/docs/apm-rum.asciidoc b/docs/apm-rum.asciidoc deleted file mode 100644 index 7ca2368f49c..00000000000 --- a/docs/apm-rum.asciidoc +++ /dev/null @@ -1,12 +0,0 @@ -[[apm-rum]] -=== Real User Monitoring (RUM) -Real User Monitoring captures user interaction with clients such as web browsers. -The {apm-rum-ref-v}[JavaScript Agent] is Elastic’s RUM Agent. -// To use it you need to {apm-server-ref-v}/configuration-rum.html[enable RUM support] in the APM Server. - -Unlike Elastic APM backend agents which monitor requests and responses, -the RUM JavaScript agent monitors the real user experience and interaction within your client-side application. -The RUM JavaScript agent is also framework-agnostic, which means it can be used with any front-end JavaScript application. - -You will be able to measure metrics such as "Time to First Byte", `domInteractive`, -and `domComplete` which helps you discover performance issues within your client-side application as well as issues that relate to the latency of your server-side application. diff --git a/docs/apm-tune-elasticsearch.asciidoc b/docs/apm-tune-elasticsearch.asciidoc deleted file mode 100644 index 518fbdec244..00000000000 --- a/docs/apm-tune-elasticsearch.asciidoc +++ /dev/null @@ -1,22 +0,0 @@ -[[apm-tune-elasticsearch]] -=== Tune {es} for data ingestion - -++++ -Tune {es} -++++ - -The {es} Reference provides insight on tuning {es}. - -{ref}/tune-for-indexing-speed.html[Tune for indexing speed] provides information on: - -* Refresh interval -* Disabling swapping -* Optimizing file system cache -* Considerations regarding faster hardware -* Setting the indexing buffer size - -{ref}/tune-for-disk-usage.html[Tune for disk usage] provides information on: - -* Disabling unneeded features -* Shard size -* Shrink index diff --git a/docs/aws-lambda-extension.asciidoc b/docs/aws-lambda-extension.asciidoc deleted file mode 100644 index 53457f38e63..00000000000 --- a/docs/aws-lambda-extension.asciidoc +++ /dev/null @@ -1,14 +0,0 @@ -[[monitoring-aws-lambda]] -= Monitoring AWS Lambda Functions - -Elastic APM lets you monitor your AWS Lambda functions. -The natural integration of <> into your AWS Lambda functions provides insights into the functions' execution and runtime behavior as well as their relationships and dependencies to other services. - -To get started with the setup of Elastic APM for your Lambda functions, checkout the language-specific guides: - -* {apm-node-ref}/lambda.html[Quick Start with APM on AWS Lambda - Node.js] -* {apm-py-ref}/lambda-support.html[Quick Start with APM on AWS Lambda - Python] -* {apm-java-ref}/aws-lambda.html[Quick Start with APM on AWS Lambda - Java] - -Or, see the {apm-lambda-ref}/aws-lambda-arch.html[architecture guide] to learn more about how the extension works, -performance impacts, and more. 
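As a concrete illustration of the first indexing-speed tip referenced above (the refresh interval), the sketch below raises `index.refresh_interval` on an APM data stream. The data stream name `traces-apm-default` assumes the APM integration's default namespace, and `30s` is an arbitrary illustrative value, not a recommendation:

[source,console]
----
PUT /traces-apm-default/_settings
{
  "index": {
    "refresh_interval": "30s"
  }
}
----

Because the request targets a data stream name, {es} applies the setting to all of the stream's current backing indices; to carry it forward to future backing indices, add it to the matching `@custom` component template instead.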
diff --git a/docs/common-problems.asciidoc b/docs/common-problems.asciidoc deleted file mode 100644 index 5c6274b3ab9..00000000000 --- a/docs/common-problems.asciidoc +++ /dev/null @@ -1,221 +0,0 @@ -[[common-problems]] -=== Common problems - -This section describes common problems for users running {agent} and the APM integration. -If you're using the standalone (legacy) APM Server binary, see -<> instead. - -* <> -* <> -* <> -* <> -* <> - -[float] -[[no-data-indexed]] -=== No data is indexed - -If no data shows up in {es}, first make sure that your APM components are properly connected. - -**Is {agent} healthy?** - -In {kib} open **{fleet}** and find the host that is running the APM integration; -confirm that its status is **Healthy**. -If it isn't, check the {agent} logs to diagnose potential causes. -See {fleet-guide}/view-elastic-agent-status.html[view {agent} status] to learn more. - -**Is APM Server happy?** - -In {kib}, open **{fleet}** and select the host that is running the APM integration. -Open the **Logs** tab and select the `elastic_agent.apm_server` dataset. -Look for any APM Server errors that could help diagnose the problem. - -**Can the {apm-agent} connect to APM Server?** - -To determine if the {apm-agent} can connect to the APM Server, send requests to the instrumented service and look for lines -containing `[request]` in the APM Server logs. - -If no requests are logged, confirm that: - -. SSL isn't <>. -. The host is correct. For example, if you're using Docker, make sure the server binds to the right interface (for example, set -`apm-server.host = 0.0.0.0:8200` to match any IP) and set the `SERVER_URL` setting in the {apm-agent} accordingly. - -If you see requests coming through the APM Server but they are not accepted (a response code other than `202`), -see <> to narrow down the possible causes. - -**Instrumentation gaps** - -APM agents provide auto-instrumentation for many popular frameworks and libraries. -If the {apm-agent} is not auto-instrumenting something that you were expecting, data won't be sent to the {stack}. -Reference the relevant {apm-agents-ref}/index.html[{apm-agent} documentation] for details on what is automatically instrumented. - -[float] -[[common-response-codes]] -=== APM Server response codes - -[[bad-request]] -[float] -==== HTTP 400: Data decoding error / Data validation error - -The most likely cause for this error is using incompatible versions of {apm-agent} and APM Server. -See the <> to verify compatibility. - -[[event-too-large]] -[float] -==== HTTP 400: Event too large - -APM agents communicate with APM Server by sending events in an HTTP request. Each event is sent as its own line in the HTTP request body. If events are too large, you should consider increasing the <> -setting in the APM integration, and adjusting relevant settings in the agent. - -[[unauthorized]] -[float] -==== HTTP 401: Invalid token - -Either the <> in the request header doesn't match the secret token configured in the APM integration, -or the <> is invalid. - -[[forbidden]] -[float] -==== HTTP 403: Forbidden request - -Either you are sending requests to a <> endpoint without RUM enabled, or a request -is coming from an origin not specified in the APM integration settings. -See the <> setting for more information. - -[[request-timed-out]] -[float] -==== HTTP 503: Request timed out waiting to be processed - -This happens when APM Server exceeds the maximum number of requests that it can process concurrently.
-To alleviate this problem, you can try to reduce the sample rate and/or reduce the amount of collected stack trace information. -See <> for more information. - -Another option is to increase processing power. -This can be done by either migrating your {agent} to a more powerful machine -or adding more APM Server instances. - -[float] -[[common-ssl-problems]] -=== Common SSL-related problems - -* <> -* <> -* <> -* <> -* <> - - -[float] -[[ssl-client-fails]] -==== SSL client fails to connect - -The target host might be unreachable or the certificate may not be valid. -To fix this problem: - -. Make sure that the APM Server process on the target host is running and you can connect to it. -Try to ping the target host to verify that you can reach it from the host running the {apm-agent}. -Then use either `nc` or `telnet` to make sure that the port is available. For example: -+ -[source,shell] ----- -ping <hostname or IP> -telnet <hostname or IP> 8200 ----- - -. Verify that the certificate is valid and that the hostname and IP match. -. Use OpenSSL to test connectivity to the target server and diagnose problems. -See the https://www.openssl.org/docs/manmaster/man1/openssl-s_client.html[OpenSSL documentation] for more info. - -[float] -[[cannot-validate-certificate]] -==== x509: cannot validate certificate for <IP address> because it doesn't contain any IP SANs - -This happens because your certificate is only valid for the hostname present in the Subject field. -To resolve this problem, try one of these solutions: - -* Create a DNS entry for the hostname, mapping it to the server's IP. -* Create an entry in `/etc/hosts` for the hostname. Or, on Windows, add an entry to -`C:\Windows\System32\drivers\etc\hosts`. -* Re-create the server certificate and add a Subject Alternative Name (SAN) for the IP address of the server. This makes the -server's certificate valid for both the hostname and the IP address. - -[float] -[[getsockopt-no-route-to-host]] -==== getsockopt: no route to host - -This is not an SSL problem. It's a networking problem. Make sure the two hosts can communicate. - -[float] -[[getsockopt-connection-refused]] -==== getsockopt: connection refused - -This is not an SSL problem. Make sure that APM Server is running and that there is no firewall blocking the traffic. - -[float] -[[target-machine-refused-connection]] -==== No connection could be made because the target machine actively refused it - -A firewall is refusing the connection. Check if a firewall is blocking the traffic on the client, the network, or the -destination host. - -[[io-timeout]] -[float] -=== I/O Timeout - -I/O Timeouts can occur when your timeout settings across the stack are not configured correctly, -especially when using a load balancer. - -You may see an error like the one below in the {apm-agent} logs, and/or a similar error on the APM Server side: - -[source,logs] ----- -[ElasticAPM] APM Server responded with an error: -"read tcp 123.34.22.313:8200->123.34.22.40:41602: i/o timeout" ----- - -To fix this, ensure timeouts are incrementing from the {apm-agent}, -through your load balancer, to the APM Server. - -By default, the agent timeouts are set at 10 seconds, and the server timeout is set at 3600 seconds. -Your load balancer should be set somewhere between these numbers. - -For example: - -[source,txt] ----- -APM agent --> Load Balancer --> APM Server - 10s 15s 3600s ----- - -The APM Server timeout can be configured by updating the -<>. - -[[server-es-down]] -[float] -=== What happens when APM Server or {es} is down?
- -APM Server does not have an internal queue to buffer requests, -but instead leverages an HTTP request timeout to act as back-pressure. -If {es} goes down, the APM Server will eventually deny incoming requests. -Both the APM Server and {apm-agent}(s) will issue logs accordingly. - -If either {es} or the APM Server goes down, -some APM agents have internal queues or buffers that will temporarily store data. -As a general rule of thumb, queues fill up quickly. Assume data will be lost if APM Server or {es} goes down. - -Adjusting {apm-agent} queues/buffers can increase the agent's overhead, so use caution when updating default values. - -* **Go agent** - Circular buffer with configurable size: -{apm-go-ref}/configuration.html#config-api-buffer-size[`ELASTIC_APM_BUFFER_SIZE`]. -// * **iOS agent** - -* **Java agent** - Internal buffer with configurable size: -{apm-java-ref}/config-reporter.html#config-max-queue-size[`max_queue_size`]. -* **Node.js agent** - No internal queue. Data is lost. -* **PHP agent** - No internal queue. Data is lost. -* **Python agent** - Internal {apm-py-ref}/tuning-and-overhead.html#tuning-queue[Transaction queue] -with configurable size and time between flushes. -* **Ruby agent** - Internal queue with configurable size: -{apm-ruby-ref}/configuration.html#config-api-buffer-size[`api_buffer_size`]. -* **RUM agent** - No internal queue. Data is lost. -* **.NET agent** - No internal queue. Data is lost. diff --git a/docs/cross-cluster-search.asciidoc b/docs/cross-cluster-search.asciidoc deleted file mode 100644 index 8ae95da9b53..00000000000 --- a/docs/cross-cluster-search.asciidoc +++ /dev/null @@ -1,46 +0,0 @@ -[[cross-cluster-search]] -=== Cross-cluster search - -Elastic APM uses {es}'s cross-cluster search functionality. -Cross-cluster search lets you run a single search request against one or more -{ref}/modules-remote-clusters.html[remote clusters] -- -making it easy to search APM data across multiple sources. -This means you can also have deployments per data type, making sizing and scaling more predictable, -and allowing for better performance while managing multiple observability use cases. - -[float] -[[set-up-cross-cluster-search]] -==== Set up cross-cluster search - -*Step 1. Set up remote clusters.* - -If you're using the Hosted {ess}, see {cloud}/ec-enable-ccs.html[Enable cross-cluster search]. - -// lint ignore elasticsearch -You can add remote clusters directly in {kib}, under *Management* > *Elasticsearch* > *Remote clusters*. -All you need is a name for the remote cluster and the seed node(s). -Remember the names of your remote clusters; you'll need them in step two. -See {ref}/ccr-getting-started.html[managing remote clusters] for detailed information on the setup process. - -Alternatively, you can {ref}/modules-remote-clusters.html#configuring-remote-clusters[configure remote clusters] -in {es}'s `elasticsearch.yml` file. - -*Step 2. Edit the default {apm-app} {data-sources}.* - -{apm-app} {data-sources} determine which clusters and indices to display data from. -{data-sources-cap} follow this convention: `<cluster name>:<pattern>`. - -To display data from all remote clusters and the local cluster, -duplicate and prepend the defaults with `*:`. -For example, the default {data-source} for Error indices is `logs-apm*,apm*`. -To add all remote clusters, change this to `*:logs-apm*,*:apm*,logs-apm*,apm*`. - -You can also specify certain clusters to display data from, for example, -`cluster-one:logs-apm*,cluster-one:apm*,logs-apm*,apm*`.
- -There are two ways to edit the default {data-source}: - -* In the {apm-app} -- Navigate to *APM* > *Settings* > *Indices*, and change all `xpack.apm.indices.*` values to -include remote clusters. -* In `kibana.yml` -- Update the {kibana-ref}/apm-settings-kb.html[`xpack.apm.indices.*`] configuration values to -include remote clusters. diff --git a/docs/custom-index-template.asciidoc b/docs/custom-index-template.asciidoc deleted file mode 100644 index 42699aede2d..00000000000 --- a/docs/custom-index-template.asciidoc +++ /dev/null @@ -1,80 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -// This content is reused in the Legacy ILM documentation -// ids look like this -// [id="name-name{append-legacy}"] -////////////////////////////////////////////////////////////////////////// - -[[custom-index-template]] -=== View the {es} index template - -:append-legacy: -// tag::index-template-integration[] - -Index templates are used to configure the backing indices of data streams as they are created. -These index templates are composed of multiple component templates--reusable building blocks -that configure index mappings, settings, and aliases. - -The default APM index templates can be viewed in {kib}. -Navigate to **{stack-manage-app}** > **Index Management** > **Index Templates**, and search for `apm`. -Select any of the APM index templates to view their relevant component templates. - -[discrete] -[id="index-template-view{append-legacy}"] -=== Edit the {es} index template - -WARNING: Custom index mappings may conflict with the mappings defined by the APM integration -and may break the APM integration and {apm-app} in {kib}. -Do not change or customize any default mappings. - -When you install the APM integration, {fleet} creates a default `@custom` component template for each data stream. -You can edit this `@custom` component template to customize your {es} indices. - -First, determine which <> you'd like to edit. -Then, open {kib} and navigate to **{stack-manage-app}** > **Index Management** > **Component Templates**. - -Custom component templates are named following this pattern: `<data stream name>@custom`. -Search for the name of the data stream, like `traces-apm`, and select its custom component template. -In this example, that'd be `traces-apm@custom`. -Then click **Manage** > **Edit**. - -Add any custom index settings, metadata, or mappings. -For example, you may want to... - -* Customize the index lifecycle policy applied to a data stream. -See <> for a walk-through. - -* Change the number of {ref}/scalability.html[shards] per index. -Specify the number of primary shards in the **index settings**: -+ -[source,json] ----- -{ - "settings": { - "number_of_shards": "4" - } -} ----- - -* Change the number of {ref}/docs-replication.html[replicas] per index. -Specify the number of replica shards in the **index settings**: -+ -[source,json] ----- -{ - "index": { - "number_of_replicas": "2" - } -} ----- - -Changes to component templates are not applied retroactively to existing indices. -For changes to take effect, you must create a new write index for the data stream. -This can be done with the {es} {ref}/indices-rollover-index.html[Rollover API].
-For example, to roll over the `traces-apm-default` data stream, run: - -[source,console] ----- -POST /traces-apm-default/_rollover/ ----- - -// end::index-template-integration[] diff --git a/docs/data-model.asciidoc b/docs/data-model.asciidoc deleted file mode 100644 index e9055e8c683..00000000000 --- a/docs/data-model.asciidoc +++ /dev/null @@ -1,491 +0,0 @@ -:span-name-type-sheet: https://docs.google.com/spreadsheets/d/1SmWeX5AeqUcayrArUauS_CxGgsjwRgMYH4ZY8yQsMhQ/edit#gid=644582948 -:span-spec: https://github.com/elastic/apm/blob/main/tests/agents/json-specs/span_types.json - -[[data-model]] -== Data Model - -Elastic APM agents capture different types of information from within their instrumented applications. -These are known as events, and can be `spans`, `transactions`, `errors`, or `metrics`. - -* <> -* <> -* <> -* <> - -Events can contain additional <> which further enriches your data. - -[[data-model-spans]] -=== Spans - -*Spans* contain information about the execution of a specific code path. -They measure from the start to the end of an activity, -and they can have a parent/child relationship with other spans. - -Agents automatically instrument a variety of libraries to capture these spans from within your application, -but you can also use the Agent API for custom instrumentation of specific code paths. - -Among other things, spans can contain: - -* A `transaction.id` attribute that refers to its parent <>. -* A `parent.id` attribute that refers to its parent span or transaction. -* Its start time and duration. -* A `name`, `type`, `subtype`, and `action`—see the {span-name-type-sheet}[span name/type alignment] -sheet for span name patterns and examples by {apm-agent}. -In addition, some APM agents test against a public {span-spec}[span type/subtype spec]. -* An optional `stack trace`. Stack traces consist of stack frames, -which represent a function call on the call stack. -They include attributes like function name, file name and path, line number, etc. - -TIP: Most agents limit keyword fields, like `span.id`, to 1024 characters, -and non-keyword fields, like `span.start.us`, to 10,000 characters. - -[float] -[[data-model-dropped-spans]] -==== Dropped spans - -For performance reasons, APM agents can choose to sample or omit spans purposefully. -This can be useful in preventing edge cases, like long-running transactions with over 100 spans, -that would otherwise overload both the Agent and the APM Server. -When this occurs, the {apm-app} will display the number of spans dropped. - -To configure the number of spans recorded per transaction, see the relevant Agent documentation: - -* Go: {apm-go-ref-v}/configuration.html#config-transaction-max-spans[`ELASTIC_APM_TRANSACTION_MAX_SPANS`] -* iOS: _Not yet supported_ -* Java: {apm-java-ref-v}/config-core.html#config-transaction-max-spans[`transaction_max_spans`] -* .NET: {apm-dotnet-ref-v}/config-core.html#config-transaction-max-spans[`TransactionMaxSpans`] -* Node.js: {apm-node-ref-v}/configuration.html#transaction-max-spans[`transactionMaxSpans`] -* PHP: {apm-php-ref-v}/configuration-reference.html#config-transaction-max-spans[`transaction_max_spans`] -* Python: {apm-py-ref-v}/configuration.html#config-transaction-max-spans[`transaction_max_spans`] -* Ruby: {apm-ruby-ref-v}/configuration.html#config-transaction-max-spans[`transaction_max_spans`] - -[float] -[[data-model-missing-spans]] -==== Missing spans - -Agents stream spans to the APM Server separately from their transactions. 
-Because of this, unforeseen errors may cause spans to go missing. -Agents know how many spans a transaction should have; -if the number of expected spans does not equal the number of spans received by the APM Server, -the {apm-app} will calculate the difference and display a message. - -[float] -==== Data streams - -Spans are stored with transactions in the following data streams: - -include::./data-streams.asciidoc[tag=traces-data-streams] - -See <> to learn more. - -[float] -==== Example span document - -This example shows what span documents can look like when indexed in {es}. - -[%collapsible] -.Expand {es} document -==== -[source,json] ----- -include::./data/elasticsearch/generated/spans.json[] ----- -==== - -[[data-model-transactions]] -=== Transactions - -*Transactions* are a special kind of <> that have additional attributes associated with them. -They describe an event captured by an Elastic {apm-agent} instrumenting a service. -You can think of transactions as the highest level of work you’re measuring within a service. -As an example, a transaction might be a: - -* Request to your server -* Batch job -* Background job -* Custom transaction type - -Agents decide whether to sample transactions or not, -and provide settings to control sampling behavior. -If sampled, the <> of a transaction are sent and stored as separate documents. -Within one transaction there can be 0, 1, or many spans captured. - -A transaction contains: - -* The timestamp of the event -* A unique id, type, and name -* Data about the environment in which the event is recorded: -** Service - environment, framework, language, etc. -** Host - architecture, hostname, IP, etc. -** Process - args, PID, PPID, etc. -** URL - full, domain, port, query, etc. -** <> - (if supplied) email, ID, username, etc. -* Other relevant information depending on the agent. Example: The JavaScript RUM agent captures transaction marks, -which are points in time relative to the start of the transaction with some label. - -In addition, agents provide options for users to capture custom <>. -Metadata can be indexed - <>, or not-indexed - <>. - -Transactions are grouped by their `type` and `name` in the APM UI's -{kibana-ref}/transactions.html[Transaction overview]. -If you're using a supported framework, APM agents will automatically handle the naming for you. -If you're not, or if you wish to override the default, -all agents have API methods to manually set the `type` and `name`. - -* `type` should be a keyword of specific relevance in the service's domain, -e.g. `request`, `backgroundjob`, etc. -* `name` should be a generic designation of a transaction in the scope of a single service, -e.g. `GET /users/:id`, `UsersController#show`, etc. - -TIP: Most agents limit keyword fields (e.g. `labels`) to 1024 characters, -non-keyword fields (e.g. `span.db.statement`) to 10,000 characters. - -[float] -==== Data streams - -Transactions are stored with spans in the following data streams: - -include::./data-streams.asciidoc[tag=traces-data-streams] - -See <> to learn more. - -[float] -==== Example transaction document - -This example shows what transaction documents can look like when indexed in {es}. - -[%collapsible] -.Expand {es} document -==== -[source,json] ----- -include::./data/elasticsearch/generated/transactions.json[] ----- -==== - -[[data-model-errors]] -=== Errors - -An error event contains at least -information about the original `exception` that occurred -or about a `log` created when the exception occurred. 
-For simplicity, errors are represented by a unique ID. - -An Error contains: - -* Both the captured `exception` and the captured `log` of an error can contain a `stack trace`, -which is helpful for debugging. -* The `culprit` of an error indicates where it originated. -* An error might relate to the <> during which it happened, -via the `transaction.id`. -* Data about the environment in which the event is recorded: -** Service - environment, framework, language, etc. -** Host - architecture, hostname, IP, etc. -** Process - args, PID, PPID, etc. -** URL - full, domain, port, query, etc. -** <> - (if supplied) email, ID, username, etc. - -In addition, agents provide options for users to capture custom <>. -Metadata can be indexed - <>, or not-indexed - <>. - -TIP: Most agents limit keyword fields (e.g. `error.id`) to 1024 characters, -non-keyword fields (e.g. `error.exception.message`) to 10,000 characters. - -Errors are stored in error indices. - -[float] -==== Data streams - -Errors are stored in the following data streams: - -include::./data-streams.asciidoc[tag=logs-data-streams] - -See <> to learn more. - -[float] -==== Example error document - -This example shows what error documents can look like when indexed in {es}. - -[%collapsible] -.Expand {es} document -==== -[source,json] ----- -include::./data/elasticsearch/generated/errors.json[] ----- -==== - -[[data-model-metrics]] -=== Metrics - -**Metrics** measure the state of a system by gathering information on a regular interval. There are two types of APM metrics: - -* **System metrics**: Basic infrastructure and application metrics. -* **Calculated metrics**: Aggregated trace event metrics used to power visualizations in the {apm-app}. - -[float] -==== System metrics - -APM agents automatically pick up basic host-level metrics, -including system and process-level CPU and memory metrics. -Agent specific metrics are also available, -like {apm-java-ref-v}/metrics.html[JVM metrics] in the Java Agent, -and {apm-go-ref-v}/metrics.html[Go runtime] metrics in the Go Agent. - -Infrastructure and application metrics are important sources of information when debugging production systems, -which is why we've made it easy to filter metrics for specific hosts or containers in the {kib} {kibana-ref}/metrics.html[metrics overview]. - -Metrics have the `processor.event` property set to `metric`. - -TIP: Most agents limit keyword fields (e.g. `processor.event`) to 1024 characters, -non-keyword fields (e.g. `system.memory.total`) to 10,000 characters. - -Metrics are stored in metric indices. - -For a full list of tracked metrics, see the relevant agent documentation: - -* {apm-go-ref-v}/metrics.html[Go] -* {apm-java-ref-v}/metrics.html[Java] -* {apm-node-ref-v}/metrics.html[Node.js] -* {apm-py-ref-v}/metrics.html[Python] -* {apm-ruby-ref-v}/metrics.html[Ruby] - -[float] -==== Calculated metrics - -APM agents and APM Server calculate metrics from trace events to power visualizations in the {apm-app}. -These metrics are described below. - -[float] -===== Breakdown metrics - -To power the {apm-app-ref}/transactions.html[Time spent by span type] graph, -agents collect summarized metrics about the timings of spans and transactions, -broken down by span type. - -*`span.self_time.count`* and *`span.self_time.sum.us`*:: -+ --- -These metrics measure the "self-time" for a span type, and optional subtype, -within a transaction group. 
Together these metrics can be used to calculate -the average duration and percentage of time spent on each type of operation -within a transaction group. - -These metric documents can be identified by searching for `metricset.name: span_breakdown`. - -You can filter and group by these dimensions: - -* `transaction.name`: The name of the enclosing transaction group, for example `GET /` -* `transaction.type`: The type of the enclosing transaction, for example `request` -* `span.type`: The type of the span, for example `app`, `template` or `db` -* `span.subtype`: The sub-type of the span, for example `mysql` (optional) --- - -[float] -===== Transaction metrics - -To power {kibana-ref}/xpack-apm.html[{apm-app}] visualizations, -APM Server aggregates transaction events into latency distribution metrics. - -*`transaction.duration.histogram`*:: -+ --- -This metric measures the latency distribution of transaction groups, -used to power visualizations and analytics in Elastic APM. - -These metric documents can be identified by searching for `metricset.name: transaction`. - -You can filter and group by these dimensions (some of which are optional, for example `container.id`): - -* `transaction.name`: The name of the transaction, for example `GET /` -* `transaction.type`: The type of the transaction, for example `request` -* `transaction.result`: The result of the transaction, for example `HTTP 2xx` -* `transaction.root`: A boolean flag indicating whether the transaction is the root of a trace -* `event.outcome`: The outcome of the transaction, for example `success` -* `agent.name`: The name of the {apm-agent} that instrumented the transaction, for example `java` -* `service.name`: The name of the service that served the transaction -* `service.version`: The version of the service that served the transaction -* `service.node.name`: The name of the service instance that served the transaction -* `service.environment`: The environment of the service that served the transaction -* `service.language.name`: The language name of the service that served the transaction, for example `Go` -* `service.language.version`: The language version of the service that served the transaction -* `service.runtime.name`: The runtime name of the service that served the transaction, for example `jRuby` -* `service.runtime.version`: The runtime version that served the transaction -* `host.hostname`: The hostname of the service that served the transaction -* `host.os.platform`: The platform name of the service that served the transaction, for example `linux` -* `container.id`: The container ID of the service that served the transaction -* `kubernetes.pod.name`: The name of the Kubernetes pod running the service that served the transaction -* `cloud.provider`: The cloud provider hosting the service instance that served the transaction -* `cloud.region`: The cloud region hosting the service instance that served the transaction -* `cloud.availability_zone`: The cloud availability zone hosting the service instance that served the transaction -* `cloud.account.id`: The cloud account id of the service that served the transaction -* `cloud.account.name`: The cloud account name of the service that served the transaction -* `cloud.machine.type`: The cloud machine type or instance type of the service that served the transaction -* `cloud.project.id`: The cloud project identifier of the service that served the transaction -* `cloud.project.name`: The cloud project name of the service that served the transaction -* `cloud.service.name`: 
The cloud service name of the service that served the transaction -* `faas.coldstart`: Whether the _serverless_ service that served the transaction had a cold start -* `faas.trigger.type`: The trigger type that the lambda function was executed by of the service that served the transaction -* `faas.id`: The unique identifier of the invoked serverless function -* `faas.name`: The name of the lambda function -* `faas.version`: The version of the lambda function --- - -The `@timestamp` field of these documents holds the start of the aggregation interval. - -[float] -===== Service-destination metrics - -To power {kibana-ref}/xpack-apm.html[{apm-app}] visualizations, -APM Server aggregates span events into service-destination metrics. - -*`span.destination.service.response_time.count`* and *`span.destination.service.response_time.sum.us`*:: -+ --- -These metrics measure the count and total duration of requests from one service to another service. -These are used to calculate the throughput and latency of requests to backend services such as databases in -{kibana-ref}/service-maps.html[Service maps]. - -These metric documents can be identified by searching for `metricset.name: service_destination`. - -You can filter and group by these dimensions: - -* `span.destination.service.resource`: The destination service resource, for example `mysql` -* `event.outcome`: The outcome of the operation, for example `success` -* `agent.name`: The name of the {apm-agent} that instrumented the operation, for example `java` -* `service.name`: The name of the service that made the request -* `service.environment`: The environment of the service that made the request --- - -The `@timestamp` field of these documents holds the start of the aggregation interval. - -[float] -==== Data streams - -Metrics are stored in the following data streams: - -include::./data-streams.asciidoc[tag=metrics-data-streams] - -See <> to learn more. - -[float] -==== Example metric document - -This example shows what metric documents can look like when indexed in {es}. - -[%collapsible] -.Expand {es} document -==== - -This example contains JVM metrics produced by the {apm-java-agent}. -and contains two related metrics: `jvm.gc.time` and `jvm.gc.count`. These are accompanied by various fields describing -the environment in which the metrics were captured: service name, host name, Kubernetes pod UID, container ID, process ID, and more. -These fields make it possible to search and aggregate across various dimensions, such as by service, host, and Kubernetes pod. - -[source,json] ----- -include::./data/elasticsearch/metricset.json[] ----- -==== - -// This heading is linked to from the APM UI section in Kibana -[[data-model-metadata]] -=== Metadata - -Metadata can enrich your events and make application performance monitoring even more useful. -Let's explore the different types of metadata that Elastic APM offers. - -[float] -[[data-model-labels]] -==== Labels - -Labels add *indexed* information to transactions, spans, and errors. -Indexed means the data is searchable and aggregatable in {es}. -Add additional key-value pairs to define multiple labels. - -* Indexed: Yes -* {es} type: {ref}/object.html[object] -* {es} field: `labels` -* Applies to: <> | <> | <> - -Label values can be a string, boolean, or number, although some agents only support string values at this time. -Because labels for a given key, regardless of agent used, are stored in the same place in {es}, -all label values of a given key must have the same data type. 
-Multiple data types per key will throw an exception, for example: `{foo: bar}` and `{foo: 42}` is not allowed. - -IMPORTANT: Avoid defining too many user-specified labels. -Defining too many unique fields in an index is a condition that can lead to a -{ref}/mapping.html#mapping-limit-settings[mapping explosion]. - -[float] -===== Agent API reference - -* Go: {apm-go-ref-v}/api.html#context-set-label[`SetLabel`] -* Java: {apm-java-ref-v}/public-api.html#api-transaction-add-tag[`setLabel`] -* .NET: {apm-dotnet-ref-v}/public-api.html#api-transaction-tags[`Labels`] -* Node.js: {apm-node-ref-v}/agent-api.html#apm-set-label[`setLabel`] | {apm-node-ref-v}/agent-api.html#apm-add-labels[`addLabels`] -* PHP: {apm-php-ref}/public-api.html#api-transaction-interface-set-label[`Transaction` `setLabel`] | {apm-php-ref}/public-api.html#api-span-interface-set-label[`Span` `setLabel`] -* Python: {apm-py-ref-v}/api.html#api-label[`elasticapm.label()`] -* Ruby: {apm-ruby-ref-v}/api.html#api-agent-set-label[`set_label`] -* Rum: {apm-rum-ref-v}/agent-api.html#apm-add-labels[`addLabels`] - -[float] -[[data-model-custom]] -==== Custom context - -Custom context adds *non-indexed*, -custom contextual information to transactions and errors. -Non-indexed means the data is not searchable or aggregatable in {es}, -and you cannot build dashboards on top of the data. -This also means you don't have to worry about {ref}/mapping.html#mapping-limit-settings[mapping explosions], -as these fields are not added to the mapping. - -Non-indexed information is useful for providing contextual information to help you -quickly debug performance issues or errors. - -* Indexed: No -* {es} type: {ref}/object.html[object] -* {es} fields: `transaction.custom` | `error.custom` -* Applies to: <> | <> - -IMPORTANT: Setting a circular object, a large object, or a non JSON serializable object can lead to errors. - -[float] -===== Agent API reference - -* Go: {apm-go-ref-v}/api.html#context-set-custom[`SetCustom`] -* iOS: _coming soon_ -* Java: {apm-java-ref-v}/public-api.html#api-transaction-add-custom-context[`addCustomContext`] -* .NET: _coming soon_ -* Node.js: {apm-node-ref-v}/agent-api.html#apm-set-custom-context[`setCustomContext`] -* PHP: _coming soon_ -* Python: {apm-py-ref-v}/api.html#api-set-custom-context[`set_custom_context`] -* Ruby: {apm-ruby-ref-v}/api.html#api-agent-set-custom-context[`set_custom_context`] -* Rum: {apm-rum-ref-v}/agent-api.html#apm-set-custom-context[`setCustomContext`] - -[float] -[[data-model-user]] -==== User context - -User context adds *indexed* user information to transactions and errors. -Indexed means the data is searchable and aggregatable in {es}. 
- -* Indexed: Yes -* {es} type: {ref}/keyword.html[keyword] -* {es} fields: `user.email` | `user.name` | `user.id` -* Applies to: <> | <> - -[float] -===== Agent API reference - -* Go: {apm-go-ref-v}/api.html#context-set-username[`SetUsername`] | {apm-go-ref-v}/api.html#context-set-user-id[`SetUserID`] | -{apm-go-ref-v}/api.html#context-set-user-email[`SetUserEmail`] -* iOS: _coming soon_ -* Java: {apm-java-ref-v}/public-api.html#api-transaction-set-user[`setUser`] -* .NET: _coming soon_ -* Node.js: {apm-node-ref-v}/agent-api.html#apm-set-user-context[`setUserContext`] -* PHP: _coming soon_ -* Python: {apm-py-ref-v}/api.html#api-set-user-context[`set_user_context`] -* Ruby: {apm-ruby-ref-v}/api.html#api-agent-set-user[`set_user`] -* Rum: {apm-rum-ref-v}/agent-api.html#apm-set-user-context[`setUserContext`] diff --git a/docs/data-streams.asciidoc b/docs/data-streams.asciidoc deleted file mode 100644 index 0e23bd2a900..00000000000 --- a/docs/data-streams.asciidoc +++ /dev/null @@ -1,90 +0,0 @@ -[[apm-data-streams]] -=== Data streams - -**** -{agent} uses data streams to store append-only time series data across multiple indices. -Data streams are well-suited for logs, metrics, traces, and other continuously generated data, -and offer a host of benefits over other indexing strategies: - -* Reduced number of fields per index -* More granular data control -* Flexible naming scheme -* Fewer ingest permissions required - -See the {fleet-guide}/data-streams.html[{fleet} and {agent} Guide] to learn more. -**** - -[discrete] -[[apm-data-streams-naming-scheme]] -=== Data stream naming scheme - -APM data follows the `<type>-<dataset>-<namespace>` naming scheme. -The `type` and `dataset` are predefined by the APM integration, -but the `namespace` is your opportunity to customize how different types of data are stored in {es}. -There is no recommendation for what to use as your namespace--it is intentionally flexible. -For example, you might create namespaces for each of your environments, -like `dev`, `prod`, `production`, etc. -Or, you might create namespaces that correspond to strategic business units within your organization. - -[discrete] -[[apm-data-streams-list]] -=== APM data streams - -By type, the APM data streams are: - -Traces:: -Traces are composed of {apm-guide-ref}/data-model.html[spans and transactions]. -Traces are stored in the following data streams: -+ -// tag::traces-data-streams[] -- Application traces: `traces-apm-<namespace>` -- RUM and iOS agent application traces: `traces-apm.rum-<namespace>` -// end::traces-data-streams[] - - -Metrics:: -Metrics include application-based metrics and basic system metrics. -Metrics are stored in the following data streams: -+ -// tag::metrics-data-streams[] -- APM internal metrics: `metrics-apm.internal-<namespace>` -- APM profiling metrics: `metrics-apm.profiling-<namespace>` -- Application metrics: `metrics-apm.app.<service.name>-<namespace>` -// end::metrics-data-streams[] -+ -Application metrics include the instrumented service's name--defined in each {apm-agent}'s -configuration--in the data stream name. -Service names therefore must follow certain index naming rules. -+ -[%collapsible] -.Service name rules -==== -* Service names are case-insensitive and must be unique. -For example, you cannot have a service named `Foo` and another named `foo`. -* Special characters will be removed from service names and replaced with underscores (`_`).
-Special characters include: -+ -[source,text] ----- -'\\', '/', '*', '?', '"', '<', '>', '|', ' ', ',', '#', ':', '-' ----- -==== - - -Logs:: -Logs include application error events and application logs. -Logs are stored in the following data streams: -+ -// tag::logs-data-streams[] -- APM error/exception logging: `logs-apm.error-<namespace>` -// end::logs-data-streams[] - -[discrete] -[[apm-data-streams-next]] -=== What's next? - -* Data streams define not only how data is stored in {es}, but also how data is retained over time. -See <> to learn how to create your own data retention policies. - -* See <> for information on APM storage and processing costs, -processing and performance, and other index management features. diff --git a/docs/features.asciidoc b/docs/features.asciidoc deleted file mode 100644 index 180091ba3e6..00000000000 --- a/docs/features.asciidoc +++ /dev/null @@ -1,34 +0,0 @@ -[[features]] -== Elastic APM features - -++++ -Features -++++ - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -include::./apm-data-security.asciidoc[] - -include::./apm-distributed-tracing.asciidoc[] - -include::./apm-rum.asciidoc[] - -include::./sampling.asciidoc[] - -include::./open-telemetry.asciidoc[] - -include::./log-correlation.asciidoc[] - -include::./cross-cluster-search.asciidoc[] - -include::./span-compression.asciidoc[] - -include::./aws-lambda-extension.asciidoc[leveloffset=+2] diff --git a/docs/guide/index.asciidoc b/docs/guide/index.asciidoc deleted file mode 100644 index a11f42b5c0e..00000000000 --- a/docs/guide/index.asciidoc +++ /dev/null @@ -1,4 +0,0 @@ -// This file exists to keep the current build working. -// Delete this file when the APM Overview is no longer built in main and 7.16 - -include::../legacy/guide/index.asciidoc[] diff --git a/docs/how-to.asciidoc b/docs/how-to.asciidoc deleted file mode 100644 index 4afc717db56..00000000000 --- a/docs/how-to.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -[[how-to-guides]] -== How-to guides - -Learn how to perform common APM configuration and management tasks. - -* <> -* <> -* <> -* <> -* <> - -include::./source-map-how-to.asciidoc[] - -include::./jaeger-integration.asciidoc[] - -include::./monitor.asciidoc[] - -include::./ingest-pipelines.asciidoc[] - -include::./custom-index-template.asciidoc[] diff --git a/docs/ilm-how-to.asciidoc b/docs/ilm-how-to.asciidoc deleted file mode 100644 index cd0f0d7c572..00000000000 --- a/docs/ilm-how-to.asciidoc +++ /dev/null @@ -1,173 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -// This content is reused in the Legacy ILM documentation -////////////////////////////////////////////////////////////////////////// - -[[ilm-how-to]] -=== {ilm-cap} - -:append-legacy: -// tag::ilm-integration[] - -Index lifecycle policies allow you to automate the -lifecycle of your APM indices as they grow and age. -A default policy is applied to each APM data stream, -but can be customized depending on your business needs. - -See {ref}/index-lifecycle-management.html[{ilm-init}: Manage the index lifecycle] to learn more. - -[discrete] -[id="index-lifecycle-policies-default{append-legacy}"] -=== Default policies - -The table below describes the default index lifecycle policy applied to each APM data stream. -Each policy includes a rollover and delete definition: - -* **Rollover**: Using rollover indices prevents a single index from growing too large and optimizes indexing and search performance. Rollover, i.e., writing to a new index, occurs after either an age or size metric is met.
-* **Delete**: The delete phase permanently removes the index after a time threshold is met.
-
-[cols="1,1,1",options="header"]
-|===
-|Data stream
-|Rollover after
-|Delete after
-
-|`traces-apm`
-|30 days / 50 GB
-|10 days
-
-|`traces-apm.rum`
-|30 days / 50 GB
-|90 days
-
-|`metrics-apm.profiling`
-|30 days / 50 GB
-|10 days
-
-|`metrics-apm.internal`
-|30 days / 50 GB
-|90 days
-
-|`metrics-apm.app`
-|30 days / 50 GB
-|90 days
-
-|`logs-apm.error`
-|30 days / 50 GB
-|10 days
-
-|===
-
-The APM index lifecycle policies can be viewed in {kib}.
-Navigate to *{stack-manage-app}* / *Index Lifecycle Management*, and search for `apm`.
-
-TIP: Default {ilm-init} policies can change between minor versions.
-This is not considered a breaking change as index management should continually improve and adapt to new features.
-
-[discrete]
-[id="data-streams-custom-policy{append-legacy}"]
-=== Configure a custom index lifecycle policy
-
-When the APM integration is installed, {fleet} creates a default `*@custom` component template for each data stream.
-The easiest way to configure a custom index lifecycle policy per data stream is to edit this template.
-
-This tutorial explains how to apply a custom index lifecycle policy to the `traces-apm` data stream.
-
-[discrete]
-[id="data-streams-custom-one{append-legacy}"]
-=== Step 1: View data streams
-
-The **Data Streams** view in {kib} shows you the data streams,
-index templates, and index lifecycle policies associated with a given integration.
-
-. Navigate to **{stack-manage-app}** > **Index Management** > **Data Streams**.
-. Search for `traces-apm` to see all data streams associated with APM trace data.
-. In this example, I only have one data stream because I'm only using the `default` namespace.
-You may have more if your setup includes multiple namespaces.
-+
-[role="screenshot"]
-image::images/data-stream-overview.png[Data streams info]
-
-[discrete]
-[id="data-streams-custom-two{append-legacy}"]
-=== Step 2: Create an index lifecycle policy
-
-. Navigate to **{stack-manage-app}** > **Index Lifecycle Policies**.
-. Click **Create policy**.
-
-Name your new policy. For this tutorial, I've chosen `custom-traces-apm-policy`.
-Customize the policy to your liking, and when you're done, click **Save policy**.
-
-[discrete]
-[id="data-streams-custom-three{append-legacy}"]
-=== Step 3: Apply the index lifecycle policy
-
-To apply your new index lifecycle policy to the `traces-apm-*` data stream,
-edit the `@custom` component template.
-
-. Click on the **Component Template** tab and search for `traces-apm`.
-. Select the `traces-apm@custom` template and click **Manage** > **Edit**.
-. Under **Index settings**, set the {ilm-init} policy name created in the previous step:
-+
-[source,json]
-----
-{
-  "lifecycle": {
-    "name": "custom-traces-apm-policy"
-  }
-}
-----
-. Continue to **Review** and ensure your request looks similar to the image below.
-If it does, click **Create component template**.
-+
-[role="screenshot"]
-image::images/create-component-template.png[Create component template]
-
-[discrete]
-[id="data-streams-custom-four{append-legacy}"]
-=== Step 4: Roll over the data stream (optional)
-
-To confirm that the data stream is now using the new index template and {ilm-init} policy,
-you can either repeat <>, or navigate to **{dev-tools-app}** and run the following:
-
-[source,bash]
-----
-GET /_data_stream/traces-apm-default <1>
-----
-<1> The name of the data stream we've been working with, appended with your namespace
-
-The result should include the following:
-
-[source,json]
-----
-{
-  "data_streams" : [
-    {
-      ...
-      "template" : "traces-apm-default", <1>
-      "ilm_policy" : "custom-traces-apm-policy", <2>
-      ...
-    }
-  ]
-}
-----
-<1> The name of the index template in use; its `@custom` component template was edited in step three
-<2> The name of the custom {ilm-init} policy created in step two
-
-New {ilm-init} policies only take effect when new indices are created,
-so you must either wait for a rollover to occur (usually after 30 days or when the index size reaches 50 GB),
-or force a rollover using the {ref}/indices-rollover-index.html[{es} rollover API]:
-
-[source,bash]
-----
-POST /traces-apm-default/_rollover/
-----
-
-[discrete]
-[id="data-streams-custom-policy-namespace{append-legacy}"]
-=== Namespace-level index lifecycle policies
-
-It is also possible to create more granular index lifecycle policies that apply to individual namespaces.
-This process is similar to the above tutorial, but includes cloning and modifying the existing index template to use
-a new `*@custom` component template.
-
-// end::ilm-integration[]
\ No newline at end of file
diff --git a/docs/images/agent-settings-migration.png b/docs/images/agent-settings-migration.png
deleted file mode 100644
index a1f1c12c124..00000000000
Binary files a/docs/images/agent-settings-migration.png and /dev/null differ
diff --git a/docs/images/api-key-copy.png b/docs/images/api-key-copy.png
deleted file mode 100644
index d47fc7cd2de..00000000000
Binary files a/docs/images/api-key-copy.png and /dev/null differ
diff --git a/docs/images/apm-architecture-cloud.png b/docs/images/apm-architecture-cloud.png
deleted file mode 100644
index 6bc7001fb9f..00000000000
Binary files a/docs/images/apm-architecture-cloud.png and /dev/null differ
diff --git a/docs/images/apm-architecture-diy.png b/docs/images/apm-architecture-diy.png
deleted file mode 100644
index d4e96466081..00000000000
Binary files a/docs/images/apm-architecture-diy.png and /dev/null differ
diff --git a/docs/images/apm-architecture-two.png b/docs/images/apm-architecture-two.png
deleted file mode 100644
index 6f9473bdcf0..00000000000
Binary files a/docs/images/apm-architecture-two.png and /dev/null differ
diff --git a/docs/images/apm-architecture.png b/docs/images/apm-architecture.png
deleted file mode 100644
index 372ea225586..00000000000
Binary files a/docs/images/apm-architecture.png and /dev/null differ
diff --git a/docs/images/apm-distributed-tracing.png b/docs/images/apm-distributed-tracing.png
deleted file mode 100644
index 7d51e273f9d..00000000000
Binary files a/docs/images/apm-distributed-tracing.png and /dev/null differ
diff --git a/docs/images/apm-ui-api-key.png b/docs/images/apm-ui-api-key.png
deleted file mode 100644
index d24161cc0e2..00000000000
Binary files a/docs/images/apm-ui-api-key.png and /dev/null differ
diff --git a/docs/images/assets.png b/docs/images/assets.png
deleted file mode 100644
index d3a8e6ea61a..00000000000
Binary files a/docs/images/assets.png and
/dev/null differ diff --git a/docs/images/config-layer.png b/docs/images/config-layer.png deleted file mode 100644 index ec6c045d347..00000000000 Binary files a/docs/images/config-layer.png and /dev/null differ diff --git a/docs/images/create-component-template.png b/docs/images/create-component-template.png deleted file mode 100644 index cd9c18a19a4..00000000000 Binary files a/docs/images/create-component-template.png and /dev/null differ diff --git a/docs/images/data-flow.png b/docs/images/data-flow.png deleted file mode 100644 index 294ff7597d4..00000000000 Binary files a/docs/images/data-flow.png and /dev/null differ diff --git a/docs/images/data-stream-overview.png b/docs/images/data-stream-overview.png deleted file mode 100644 index 503661862de..00000000000 Binary files a/docs/images/data-stream-overview.png and /dev/null differ diff --git a/docs/images/dt-sampling-example-1.png b/docs/images/dt-sampling-example-1.png deleted file mode 100644 index a3def0c7bfa..00000000000 Binary files a/docs/images/dt-sampling-example-1.png and /dev/null differ diff --git a/docs/images/dt-sampling-example-2.png b/docs/images/dt-sampling-example-2.png deleted file mode 100644 index d7f87bcd891..00000000000 Binary files a/docs/images/dt-sampling-example-2.png and /dev/null differ diff --git a/docs/images/dt-sampling-example-3.png b/docs/images/dt-sampling-example-3.png deleted file mode 100644 index a0045705a0c..00000000000 Binary files a/docs/images/dt-sampling-example-3.png and /dev/null differ diff --git a/docs/images/dt-trace-ex1.png b/docs/images/dt-trace-ex1.png deleted file mode 100644 index ca97955ee8b..00000000000 Binary files a/docs/images/dt-trace-ex1.png and /dev/null differ diff --git a/docs/images/dt-trace-ex2.png b/docs/images/dt-trace-ex2.png deleted file mode 100644 index 3df0827f586..00000000000 Binary files a/docs/images/dt-trace-ex2.png and /dev/null differ diff --git a/docs/images/dt-trace-ex3.png b/docs/images/dt-trace-ex3.png deleted file mode 100644 index 1bb666b030a..00000000000 Binary files a/docs/images/dt-trace-ex3.png and /dev/null differ diff --git a/docs/images/layers.png b/docs/images/layers.png deleted file mode 100644 index a8c508a1c74..00000000000 Binary files a/docs/images/layers.png and /dev/null differ diff --git a/docs/images/scale-apm.png b/docs/images/scale-apm.png deleted file mode 100644 index 5792ba4680a..00000000000 Binary files a/docs/images/scale-apm.png and /dev/null differ diff --git a/docs/images/schema-agent.png b/docs/images/schema-agent.png deleted file mode 100644 index 8e65de97cfb..00000000000 Binary files a/docs/images/schema-agent.png and /dev/null differ diff --git a/docs/images/server-api-key-create.png b/docs/images/server-api-key-create.png deleted file mode 100644 index d21c440b19a..00000000000 Binary files a/docs/images/server-api-key-create.png and /dev/null differ diff --git a/docs/images/source-map-after.png b/docs/images/source-map-after.png deleted file mode 100644 index feec9e7c231..00000000000 Binary files a/docs/images/source-map-after.png and /dev/null differ diff --git a/docs/images/source-map-before.png b/docs/images/source-map-before.png deleted file mode 100644 index a92baef141e..00000000000 Binary files a/docs/images/source-map-before.png and /dev/null differ diff --git a/docs/index.asciidoc b/docs/index.asciidoc deleted file mode 100644 index 0513156cb20..00000000000 --- a/docs/index.asciidoc +++ /dev/null @@ -1,4 +0,0 @@ -// This file exists to keep the current build working. 
-// Delete this file when the APM Server Reference is no longer built in main and 7.16
-
-include::legacy/index.asciidoc[]
diff --git a/docs/ingest-pipelines.asciidoc b/docs/ingest-pipelines.asciidoc
deleted file mode 100644
index 96d55ca03a9..00000000000
--- a/docs/ingest-pipelines.asciidoc
+++ /dev/null
@@ -1,31 +0,0 @@
-//////////////////////////////////////////////////////////////////////////
-// This content is reused in the Legacy ingest pipeline
-//////////////////////////////////////////////////////////////////////////
-
-[[ingest-pipelines]]
-=== Parse data using ingest pipelines
-
-:append-legacy:
-// tag::ingest-pipelines[]
-
-Ingest pipelines preprocess and enrich APM documents before indexing them.
-For example, a pipeline might define one processor that removes a field, and another that renames a field.
-This can be useful for ensuring data security by removing or obfuscating sensitive information.
-See <> for more on this topic.
-
-The default APM pipelines are defined in index templates that {fleet} loads into {es}.
-{es} then uses the index pattern in these index templates to match pipelines to APM data streams.
-
-[discrete]
-[id="view-edit-default-pipelines{append-legacy}"]
-=== View ingest pipelines
-
-To view or edit the default pipelines in {kib},
-select **{stack-manage-app}** > **Ingest Pipelines**.
-Search for `apm`.
-
-It is not currently possible to edit or add pipelines that persist through upgrades.
-
-See {ref}/ingest.html[ingest node pipelines] for more information.
-
-// end::ingest-pipelines[]
\ No newline at end of file
diff --git a/docs/input-apm.asciidoc b/docs/input-apm.asciidoc
deleted file mode 100644
index 0a0cf3ed3e5..00000000000
--- a/docs/input-apm.asciidoc
+++ /dev/null
@@ -1,108 +0,0 @@
-:input-type: apm
-
-[[input-apm]]
-== APM input settings
-
-++++
-Input settings
-++++
-
-Configure and customize APM integration settings directly in {kib}:
-
-// tag::edit-integration-settings[]
-. Open {kib} and navigate to **{fleet}**.
-. Under the **Agent policies** tab, select the policy you would like to configure.
-. Find the Elastic APM integration and select **Actions** > **Edit integration**.
-// end::edit-integration-settings[]
-
-[float]
-[[apm-input-general-settings]]
-=== General settings
-
-[cols="2*
diff --git a/docs/jaeger-integration.asciidoc b/docs/jaeger-integration.asciidoc
deleted file mode 100644
--- a/docs/jaeger-integration.asciidoc
+++ /dev/null
-[[jaeger-integration]]
-=== Integrate with Jaeger
-
-++++
-Integrate with Jaeger
-++++
-
-Elastic APM integrates with https://www.jaegertracing.io/[Jaeger], an open-source, distributed tracing system.
-This integration allows users with an existing Jaeger setup to switch from the default Jaeger backend
-to the {stack}.
-Best of all, no instrumentation changes are needed in your application code.
-
-[float]
-[[jaeger-architecture]]
-=== Supported architecture
-
-Jaeger architecture supports different data formats and transport protocols
-that define how data can be sent to a collector. Elastic APM, as a Jaeger collector,
-supports communication with *Jaeger agents* via gRPC.
-
-* The APM integration serves Jaeger gRPC over the same host and port as the Elastic {apm-agent} protocol.
-
-* The APM integration gRPC endpoint supports TLS. If SSL is configured,
-SSL settings will automatically be applied to the APM integration's Jaeger gRPC endpoint.
-
-* The gRPC endpoint supports probabilistic sampling.
-Sampling decisions can be configured <> with {apm-agent} central configuration, or <> in each Jaeger client.
-
-See the https://www.jaegertracing.io/docs/1.27/architecture[Jaeger docs]
-for more information on Jaeger architecture.
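-
-For example, a Jaeger agent can point its gRPC reporter at the APM integration's endpoint.
-This is a sketch: the flag below is the standard Jaeger agent gRPC reporter option, and the
-host is a placeholder for wherever your APM Server is reachable:
-
-[source,bash]
-----
-jaeger-agent --reporter.grpc.host-port=apm-server-host:8200
-----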
-
-[float]
-[[get-started-jaeger]]
-=== Get started
-
-Connect your preexisting Jaeger setup to Elastic APM in three steps:
-
-* <>
-* <>
-* <>
-
-IMPORTANT: There are <> to this integration.
-
-[float]
-[[configure-agent-client-jaeger]]
-==== Configure Jaeger agents
-
-The APM integration serves Jaeger gRPC over the same host and port as the Elastic {apm-agent} protocol.
-
-include::./shared/jaeger/jaeger-widget.asciidoc[]
-
-[float]
-[[configure-sampling-jaeger]]
-==== Configure sampling
-
-The APM integration supports probabilistic sampling, which can be used to reduce the amount of data that your agents collect and send.
-Probabilistic sampling makes a random sampling decision based on the configured sampling value.
-For example, a value of `.2` means that 20% of traces will be sampled.
-
-There are two different ways to configure the sampling rate of your Jaeger agents:
-
-* <>
-* <>
-
-[float]
-[[configure-sampling-central-jaeger]]
-===== {apm-agent} central configuration (default)
-
-Central sampling, with {apm-agent} central configuration,
-allows Jaeger clients to poll APM Server for the sampling rate.
-This means sample rates can be configured on the fly, on a per-service and per-environment basis.
-See {kibana-ref}/agent-configuration.html[Central configuration] to learn more.
-
-[float]
-[[configure-sampling-local-jaeger]]
-===== Local sampling in each Jaeger client
-
-If you don't have access to the {apm-app},
-you'll need to change the Jaeger client's `sampler.type` and `sampler.param`.
-This enables you to set the sampling configuration locally in each Jaeger client.
-See the official https://www.jaegertracing.io/docs/1.27/sampling/[Jaeger sampling documentation]
-for more information.
-
-[float]
-[[configure-start-jaeger]]
-==== Start sending data
-
-That's it! Data sent from Jaeger clients to the APM Server can now be viewed in the {apm-app}.
-
-[float]
-[[caveats-jaeger]]
-=== Caveats
-
-There are some limitations and differences between Elastic APM and Jaeger that you should be aware of.
-
-*Jaeger integration limitations:*
-
-* Because Jaeger has its own trace context header, and does not currently support W3C trace context headers,
-it is not possible to mix and match the use of Elastic's APM agents and Jaeger's clients.
-* Elastic APM only supports probabilistic sampling.
-
-*Differences between APM Agents and Jaeger Clients:*
-
-* Jaeger clients only send trace data.
-APM agents support a larger number of features, like
-multiple types of metrics and application breakdown charts.
-When using Jaeger, features like these will not be available in the {apm-app}.
-* Elastic APM's <> is different from Jaeger's.
-For Jaeger trace data to work with Elastic's data model, we rely on spans being tagged with the appropriate
-https://github.com/opentracing/specification/blob/master/semantic_conventions.md[`span.kind`].
-** Server Jaeger spans are mapped to Elastic APM <>.
-** Client Jaeger spans are mapped to Elastic APM <> -- unless the span is the root, in which case it is mapped to an Elastic APM <>.
diff --git a/docs/legacy/agent-configuration.asciidoc b/docs/legacy/agent-configuration.asciidoc
deleted file mode 100644
index 60edf109183..00000000000
--- a/docs/legacy/agent-configuration.asciidoc
+++ /dev/null
@@ -1,103 +0,0 @@
-[[agent-configuration-api]]
-== Agent configuration API
-
-++++
-Agent configuration
-++++
-
-IMPORTANT: {deprecation-notice-api}
-If you've already upgraded, see <>.
- -APM Server exposes an API endpoint that allows agents to query the server for configuration changes. -More information on this feature is available in {kibana-ref}/agent-configuration.html[APM Agent configuration in {kib}]. - -Starting with release 7.14, agent configuration can be declared directly within -`apm-server.yml`. Requests to the endpoint are unchanged; `apm-server` responds -directly without querying {kib} for the agent configuration. Refer to the -example in `apm-server.yml` under Agent Configuration. - -[[agent-config-endpoint]] -[float] -=== Agent configuration endpoint - -The Agent configuration endpoint accepts both `HTTP GET` and `HTTP POST` requests. -If an <> or <> has been configured, it will also apply to this endpoint. - -[[agent-config-api-get]] -[float] -==== HTTP GET - -`service.name` is a required query string parameter. - -[source,bash] ------------------------------------------------------------- -http(s)://{hostname}:{port}/config/v1/agents?service.name=SERVICE_NAME ------------------------------------------------------------- - -[[agent-config-api-post]] -[float] -==== HTTP POST - -Encode parameters as a JSON object in the body. -`service.name` is a required parameter. - -[source,bash] ------------------------------------------------------------- -http(s)://{hostname}:{port}/config/v1/agents -{ - "service": { - "name": "test-service", - "environment": "all" - }, - "CAPTURE_BODY": "off" -} ------------------------------------------------------------- - -[[agent-config-api-response]] -[float] -==== Responses - -* Successful - `200` -* {kib} endpoint is disabled - `403` -* {kib} is unreachable - `503` - -[[agent-config-api-example]] -[float] -==== Example request - -Example Agent configuration `GET` request including the service name "test-service": - -["source","sh",subs="attributes"] ---------------------------------------------------------------------------- -curl -i http://127.0.0.1:8200/config/v1/agents?service.name=test-service ---------------------------------------------------------------------------- - -Example Agent configuration `POST` request including the service name "test-service": - -["source","sh",subs="attributes"] ---------------------------------------------------------------------------- -curl -X POST http://127.0.0.1:8200/config/v1/agents \ - -H "Authorization: Bearer secret_token" \ - -H 'content-type: application/json' \ - -d '{"service": {"name": "test-service"}}' ---------------------------------------------------------------------------- - -[[agent-config-api-ex-response]] -[float] -==== Example response - -["source","sh",subs="attributes"] ---------------------------------------------------------------------------- -HTTP/1.1 200 OK -Cache-Control: max-age=30, must-revalidate -Content-Type: application/json -Etag: "7b23d63c448a863fa" -Date: Mon, 24 Feb 2020 20:53:07 GMT -Content-Length: 98 - -{ - "capture_body": "off", - "transaction_max_spans": "500", - "transaction_sample_rate": "0.3" -} ---------------------------------------------------------------------------- diff --git a/docs/legacy/api-keys.asciidoc b/docs/legacy/api-keys.asciidoc deleted file mode 100644 index cab1e68dceb..00000000000 --- a/docs/legacy/api-keys.asciidoc +++ /dev/null @@ -1,152 +0,0 @@ -[role="xpack"] -[[beats-api-keys]] -== Grant access using API keys - -IMPORTANT: {deprecation-notice-config} -If you've already upgraded, see <>. - -Instead of using usernames and passwords, you can use API keys to grant -access to {es} resources. 
You can set API keys to expire at a certain time,
-and you can explicitly invalidate them. Any user with the `manage_api_key`
-or `manage_own_api_key` cluster privilege can create API keys.
-
-{beatname_uc} instances typically send both collected data and monitoring
-information to {es}. If you are sending both to the same cluster, you can use the same
-API key. For different clusters, you need to use an API key per cluster.
-
-NOTE: For security reasons, we recommend using a unique API key per {beatname_uc} instance.
-You can create as many API keys per user as necessary.
-
-[float]
-[[beats-api-key-publish]]
-=== Create an API key for writing events
-
-In {kib}, navigate to **{stack-manage-app}** > **API keys** and click **Create API key**.
-
-[role="screenshot"]
-image::images/server-api-key-create.png[API key creation]
-
-Enter a name for your API key and select **Restrict privileges**.
-In the role descriptors box, assign the appropriate privileges to the new API key. For example:
-
-[source,json,subs="attributes,callouts"]
-----
-{
-    "{beat_default_index_prefix}_writer": {
-        "index": [
-            {
-                "names": ["{beat_default_index_prefix}-*"],
-                "privileges": ["create_index", "create_doc"]
-            },
-            {
-                "names": ["{beat_default_index_prefix}-*sourcemap"],
-                "privileges": ["read"]
-            }
-        ]
-    }
-}
-----
-
-NOTE: This example only provides privileges for **writing data**.
-See <> for additional privileges and information.
-
-To set an expiration date for the API key, select **Expire after time**
-and input the lifetime of the API key in days.
-
-Click **Create API key**. In the dropdown, switch to **{beats}** and copy the API key.
-
-You can now use this API key in your +{beatname_lc}.yml+ configuration file:
-
-["source","yml",subs="attributes"]
---------------------
-output.elasticsearch:
-  api_key: TiNAGG4BaaMdaH1tRfuU:KnR6yE41RrSowb0kQ0HWoA <1>
---------------------
-<1> Format is `id:api_key` (as shown in the {beats} dropdown)
-
-[float]
-[[beats-api-key-monitor]]
-=== Create an API key for monitoring
-
-In {kib}, navigate to **{stack-manage-app}** > **API keys** and click **Create API key**.
-
-[role="screenshot"]
-image::images/server-api-key-create.png[API key creation]
-
-Enter a name for your API key and select **Restrict privileges**.
-In the role descriptors box, assign the appropriate privileges to the new API key.
-For example:
-
-[source,json,subs="attributes,callouts"]
-----
-{
-    "{beat_default_index_prefix}_monitoring": {
-        "index": [
-            {
-                "names": [".monitoring-beats-*"],
-                "privileges": ["create_index", "create_doc"]
-            }
-        ]
-    }
-}
-----
-
-NOTE: This example only provides privileges for **publishing monitoring data**.
-See <> for additional privileges and information.
-
-To set an expiration date for the API key, select **Expire after time**
-and input the lifetime of the API key in days.
-
-Click **Create API key**. In the dropdown, switch to **{beats}** and copy the API key.
-
-You can now use this API key in your +{beatname_lc}.yml+ configuration file like this:
-
-["source","yml",subs="attributes"]
---------------------
-monitoring.elasticsearch:
-  api_key: TiNAGG4BaaMdaH1tRfuU:KnR6yE41RrSowb0kQ0HWoA <1>
---------------------
-<1> Format is `id:api_key` (as shown in the {beats} dropdown)
-
-[float]
-[[beats-api-key-es]]
-=== Create an API key with {es} APIs
-
-You can also use {es}'s {ref}/security-api-create-api-key.html[Create API key API] to create a new API key.
-For example:
-
-[source,console,subs="attributes,callouts"]
------------------------------------------------------------
-POST /_security/api_key
-{
-  "name": "{beat_default_index_prefix}_host001", <1>
-  "role_descriptors": {
-    "{beat_default_index_prefix}_writer": { <2>
-      "index": [
-        {
-          "names": ["{beat_default_index_prefix}-*"],
-          "privileges": ["create_index", "create_doc"]
-        },
-        {
-          "names": ["{beat_default_index_prefix}-*sourcemap"],
-          "privileges": ["read"]
-        }
-      ]
-    }
-  }
-}
------------------------------------------------------------
-<1> Name of the API key
-<2> Granted privileges, see <>
-
-See the {ref}/security-api-create-api-key.html[Create API key] reference for more information.
-
-[[learn-more-api-keys]]
-[float]
-=== Learn more about API keys
-
-See the {es} API key documentation for more information:
-
-* {ref}/security-api-create-api-key.html[Create API key]
-* {ref}/security-api-get-api-key.html[Get API key information]
-* {ref}/security-api-invalidate-api-key.html[Invalidate API key]
diff --git a/docs/legacy/breaking-changes.asciidoc b/docs/legacy/breaking-changes.asciidoc
deleted file mode 100644
index 06630ab2480..00000000000
--- a/docs/legacy/breaking-changes.asciidoc
+++ /dev/null
@@ -1,132 +0,0 @@
-:issue: https://github.com/elastic/apm-server/issues/
-:pull: https://github.com/elastic/apm-server/pull/
-
-[[breaking-changes]]
-== Breaking Changes
-APM Server is built on top of {beats-ref}/index.html[libbeat].
-As such, any breaking change in libbeat is also considered to be a breaking change in APM Server.
-
-[float]
-=== 7.15
-
-The following breaking changes were introduced in 7.15:
-
-- `network.connection_type` is now `network.connection.type` {pull}5671[5671]
-- `transaction.page` and `error.page` no longer recorded {pull}5872[5872]
-- experimental:["This breaking change applies to the experimental tail-based sampling feature."] `apm-server.sampling.tail` now requires `apm-server.data_streams.enabled` {pull}5952[5952]
-- beta:["This breaking change applies to the beta APM integration."] The `traces-sampled-*` data stream is now `traces-apm.sampled-*` {pull}5952[5952]
-
-[float]
-=== 7.14
-There are no breaking changes in APM Server.
-
-[float]
-=== 7.13
-There are no breaking changes in APM Server.
-
-[float]
-=== 7.12
-
-There are three breaking changes to be aware of;
-these changes only impact users ingesting data with
-{apm-server-ref-v}/jaeger.html[Jaeger clients].
-
-* Leading zeros are no longer removed from Jaeger client trace/span IDs.
-+
---
-This change ensures distributed tracing continues to work across platforms by creating
-consistent, full trace/span IDs from Jaeger clients, Elastic APM agents,
-and OpenTelemetry SDKs.
---
-
-* Jaeger spans will now have a type of "app" where they previously were "custom".
-+
---
-If the Jaeger span type is not inferred, it will now be "app".
-This aligns with the OpenTelemetry Collector exporter
-and improves the functionality of the _time spent by span type_ charts in the {apm-app}.
---
-
-* Jaeger spans may now have a more accurate outcome of "unknown".
-+
---
-Previously, a "success" outcome was assumed when a span didn't fail.
-The new default assigns "unknown", and only sets an outcome of "success" or "failure" when
-the outcome is explicitly known.
-This change aligns with Elastic APM agents and the OpenTelemetry Collector exporter.
---
-
-[float]
-=== 7.11
-There are no breaking changes in APM Server.
-
-[float]
-=== 7.10
-There are no breaking changes in APM Server.
-
-[float]
-=== 7.9
-There are no breaking changes in APM Server.
-
-[float]
-=== 7.8
-There are no breaking changes in APM Server.
-
-[float]
-=== 7.7
-There are no breaking changes in APM Server.
-However, a previously hardcoded feature is now configurable.
-Failing to follow these {apm-guide-7x}/upgrading-to-77.html[upgrade steps] will result in increased span metadata ingestion when upgrading to version 7.7.
-
-[float]
-=== 7.6
-There are no breaking changes in APM Server.
-
-[float]
-=== 7.5
-The following breaking changes have been introduced in 7.5:
-
-* Introduced dedicated `apm-server.ilm.setup.*` flags.
-This means you can now customize {ilm-init} behavior from within the APM Server configuration.
-As a side effect, `setup.template.*` settings will be ignored for {ilm-init} related templates per event type.
-See {apm-server-ref}/ilm.html[set up {ilm-init}] for more information.
-
-* By default, {ilm-init} policies will no longer be versioned.
-All event types will switch to the new default policy: rollover after 30 days or when reaching a size of 50 GB.
-See {apm-server-ref}/ilm.html[default policy] for more information.
-
-* To make use of all the new features introduced in 7.5,
-you must ensure you are using version 7.5+ of APM Server and version 7.5+ of {kib}.
-
-[float]
-=== 7.0
-The following breaking changes have been introduced in 7.0:
-
-* Removed deprecated Intake v1 API endpoints.
-Upgrade agents to a version that supports APM Server ≥ 6.5.
-{apm-guide-ref}/breaking-7.0.0.html#breaking-remove-v1[More information].
-* Moved fields in {es} to be compliant with the Elastic Common Schema (ECS).
-{apm-guide-ref}/breaking-7.0.0.html#breaking-ecs[More information and changed fields].
-
-[float]
-=== 6.5
-There are no breaking changes in APM Server.
-Advanced users may find the {apm-guide-7x}/upgrading-to-65.html[upgrading to 6.5 guide] useful.
-
-[float]
-=== 6.4
-The following breaking changes have been introduced in 6.4:
-
-* Indexing the `onboarding` document in its own index by default.
-
-[float]
-=== 6.3
-The following breaking changes have been introduced in 6.3:
-
-* Indexing events in separate indices by default.
-* {beats-ref-63}/breaking-changes-6.3.html[Breaking changes in libbeat]
-
-[float]
-=== 6.2
-
-APM Server is now GA (generally available).
diff --git a/docs/legacy/common-problems.asciidoc b/docs/legacy/common-problems.asciidoc
deleted file mode 100644
index 3852abb9145..00000000000
--- a/docs/legacy/common-problems.asciidoc
+++ /dev/null
@@ -1,354 +0,0 @@
-[[common-problems-legacy]]
-== Common problems
-
-IMPORTANT: {deprecation-notice-data}
-
-This section describes common problems you might encounter with APM Server.
-
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-
-[[no-data-indexed-legacy]]
-[float]
-=== No data is indexed
-
-If no data shows up in {es}, first check that the APM components are properly connected.
-
-To ensure that the APM Server configuration is valid and that it can connect to the configured output ({es} by default),
-run the following commands:
-
-["source","sh"]
------------------------------------------------------------
-apm-server test config
-apm-server test output
------------------------------------------------------------
-
-To see if the agent can connect to the APM Server, send requests to the instrumented service and look for lines
-containing `[request]` in the APM Server logs.
-
-If no requests are logged, it might be that SSL is <> or that the host is wrong.
-In particular, if you are using Docker, make sure to bind to the right interface (for example, set
-`apm-server.host = 0.0.0.0:8200` to match any IP) and set the `SERVER_URL` setting in the agent accordingly.
-
-If you see requests coming through the APM Server but they are not accepted (response code other than `202`), use
-the response code to narrow down the possible causes (see the sections below).
-
-Another reason for data not showing up is that the agent is not auto-instrumenting something you were expecting. Check
-the {apm-agents-ref}/index.html[agent documentation] for details on what is automatically instrumented.
-
-APM Server currently relies on {es} to create indices that do not exist.
-As a result, {es} must be configured to allow {ref}/docs-index_.html#index-creation[automatic index creation] for APM indices.
-
-[[data-indexed-no-apm-legacy]]
-[float]
-=== Data is indexed but doesn't appear in the APM UI
-
-The {apm-app} relies on index mappings to query and display data.
-If your APM data isn't showing up in the {apm-app}, but is elsewhere in {kib}, like the Discover app,
-you may have a missing index mapping.
-
-You can determine if a field was mapped correctly with the `_mapping` API.
-For example, run the following command in the {kib} {kibana-ref}/console-kibana.html[console].
-This will display the field data type of the `service.name` field.
-
-[source,curl]
-----
-GET apm-*/_mapping/field/service.name
-----
-
-If the `mapping.name.type` is `"text"`, your APM indices were not set up correctly.
-
-[source,yml]
-----
-"mappings" : {
-  "service.name" : {
-    "full_name" : "service.name",
-    "mapping" : {
-      "name" : {
-        "type" : "text", <1>
-        "fields" : {
-          "keyword" : {
-            "type" : "keyword",
-            "ignore_above" : 256
-          }
-        }
-      }
-    }
-  }
-}
-----
-<1> The `service.name` `mapping.name.type` would be `"keyword"` if this field had been set up correctly.
-
-To fix this problem, you must delete and recreate your APM indices as index templates cannot be applied retroactively.
-
-. Stop your APM Server(s) so they are not writing any new documents.
-
-. Delete your existing `apm-*` indices.
-In the {kib} console, run:
-+
-[source,curl]
-----
-DELETE apm-*
-----
-+
-Alternatively, you can use the {ref}/index-mgmt.html[Index Management] page in {kib}.
-Select all `apm-*` indices and navigate to **Manage Indices** > **Delete Indices**.
-
-. Starting in version 8.0.0, {fleet} uses the APM integration to set up and manage APM index templates.
-Install the APM integration by following these steps:
-+
---
-include::./getting-started-apm-server.asciidoc[tag=install-apm-integration]
---
-
-. Start APM Server.
-
-. Verify the correct index templates were installed. In the {kib} console, run:
-+
-[source,curl]
-----
-GET _template/apm-*
-----
-+
-Alternatively, you can use the {ref}/index-mgmt.html[**Index Management**] page in {kib}.
-On the **Index Templates** tab, search for `apm` under **Legacy Index Templates**.
-
-
-[[bad-request-legacy]]
-[float]
-=== HTTP 400: Data decoding error / Data validation error
-
-The most likely cause for this error is using incompatible versions of {apm-agent} and APM Server.
-See the {apm-overview-ref-v}/agent-server-compatibility.html[agent/server compatibility matrix] for more information.
-
-[[event-too-large-legacy]]
-[float]
-=== HTTP 400: Event too large
-
-APM agents communicate with the APM server by sending events in an HTTP request. Each event is sent as its own line in the HTTP request body.
If events are too large, you should consider increasing the <>
-setting in the APM Server and adjusting relevant settings in the agent.
-
-[[unauthorized-legacy]]
-[float]
-=== HTTP 401: Invalid token
-
-The <> in the request header doesn't match the one configured in the APM Server.
-
-[[forbidden-legacy]]
-[float]
-=== HTTP 403: Forbidden request
-
-Either you are sending requests to a <> endpoint without RUM enabled, or a request
-is coming from an origin not specified in `apm-server.rum.allow_origins`. See the <>.
-
-[[request-timed-out-legacy]]
-[float]
-=== HTTP 503: Request timed out waiting to be processed
-
-This happens when APM Server exceeds the maximum number of requests that it can process concurrently.
-
-To alleviate this problem, you can try to:
-
-* <>
-* <>
-* <>
-* <>
-
-[float]
-[[ssl-client-fails-legacy]]
-=== SSL client fails to connect
-
-The target host might be unreachable or the certificate may not be valid. To resolve your issue:
-
-* Make sure that the APM Server process on the target host is running and you can connect to it.
-First, try to ping the target host to verify that you can reach it from the host running {beatname_uc}.
-Then use either `nc` or `telnet` to make sure that the port is available. For example:
-+
-[source,shell]
----------------------------------------------------------------------
-ping <hostname or IP>
-telnet <hostname or IP> 5044
----------------------------------------------------------------------
-
-* Verify that the certificate is valid and that the hostname and IP match.
-
-* Use OpenSSL to test connectivity to the target server and diagnose problems.
-See the https://www.openssl.org/docs/manmaster/apps/s_client.html[OpenSSL documentation] for more info.
-
-[float]
-==== Common SSL-Related Errors and Resolutions
-
-Here are some common errors and ways to fix them:
-
-* <>
-* <>
-* <>
-* <>
-
-[float]
-[[cannot-validate-certificate-legacy]]
-===== x509: cannot validate certificate for <IP address> because it doesn't contain any IP SANs
-
-This happens because your certificate is only valid for the hostname present in the Subject field.
-
-To resolve this problem, try one of these solutions:
-
-* Create a DNS entry for the hostname, mapping it to the server's IP.
-* Create an entry in `/etc/hosts` for the hostname. Or, on Windows, add an entry to
-`C:\Windows\System32\drivers\etc\hosts`.
-* Re-create the server certificate and add a Subject Alternative Name (SAN) for the IP address of the server. This makes the
-server's certificate valid for both the hostname and the IP address.
-
-[float]
-[[getsockopt-no-route-to-host-legacy]]
-===== getsockopt: no route to host
-
-This is not an SSL problem. It's a networking problem. Make sure the two hosts can communicate.
-
-[float]
-[[getsockopt-connection-refused-legacy]]
-===== getsockopt: connection refused
-
-This is not an SSL problem. Make sure that {ls} is running and that there is no firewall blocking the traffic.
-
-[float]
-[[target-machine-refused-connection-legacy]]
-===== No connection could be made because the target machine actively refused it
-
-A firewall is refusing the connection. Check if a firewall is blocking the traffic on the client, the network, or the
-destination host.
-
-[[field-limit-exceeded-legacy]]
-[float]
-=== Field limit exceeded
-
-When adding too many distinct tag keys on a transaction or span,
-you risk creating a link:{ref}/mapping.html#mapping-limit-settings[mapping explosion].
-
-For example,
-you should avoid using user-specified data,
-like URL parameters,
-as a tag key.
-Likewise,
-using the current timestamp or a user ID as a tag key is not a good idea.
-However,
-tag *values* with a high cardinality are not a problem.
-Just try to keep the number of distinct tag keys to a minimum.
-
-The symptom of a mapping explosion is that transactions and spans are no longer indexed after a certain time.
-Usually,
-on the next day,
-the spans and transactions will be indexed again because a new index is created each day.
-But as soon as the field limit is reached,
-indexing stops again.
-
-In the agent logs,
-you won't see any sign of failure, as the APM Server asynchronously sends the data it received from the agents to {es}.
-However,
-the APM Server and {es} log a warning like this:
-
-[source,logs]
-----
-{\"type\":\"illegal_argument_exception\",\"reason\":\"Limit of total fields [1000] in index [apm-7.0.0-transaction-2017.05.30] has been exceeded\"}
-----
-
-[[io-timeout-legacy]]
-[float]
-=== I/O Timeout
-
-I/O timeouts can occur when your timeout settings across the stack are not configured correctly,
-especially when using a load balancer.
-
-You may see an error like the one below in the agent logs, and/or a similar error on the APM Server side:
-
-[source,logs]
----------------------------------------------------------------------
-[ElasticAPM] APM Server responded with an error:
-"read tcp 123.34.22.313:8200->123.34.22.40:41602: i/o timeout"
---------------------------------------------------------------------

-To fix this, ensure timeouts increase at each hop, from the {apm-agents-ref}[{apm-agent}],
-through your load balancer, to the <>.
-
-By default, the agent timeouts are set at 10 seconds, and the server timeout is set at 30 seconds.
-Your load balancer should be set somewhere between these numbers.
-
-For example:
-
-[source,txt]
---------------------------------------------------------------------
-APM agent --> Load Balancer --> APM Server
-    10s            15s              30s
---------------------------------------------------------------------
-
-[[server-es-down-legacy]]
-[float]
-=== What happens when APM Server or {es} is down?
-
-*If {es} is down*
-
-APM Server does not have an internal queue to buffer requests,
-but instead leverages an HTTP request timeout to act as back-pressure.
-If {es} goes down, the APM Server will eventually deny incoming requests.
-Both the APM Server and {apm-agent}(s) will issue logs accordingly.
-
-*If APM Server is down*
-
-Some agents have internal queues or buffers that will temporarily store data if the APM Server goes down.
-As a general rule of thumb, queues fill up quickly. Assume data will be lost if APM Server goes down.
-Adjusting these queues/buffers can increase the agent's overhead, so use caution when updating default values
-(see the sketch after this list).
-
-* **Go agent** - Circular buffer with configurable size:
-{apm-go-ref}/configuration.html#config-api-buffer-size[`ELASTIC_APM_BUFFER_SIZE`].
-// * **iOS agent** - ??
-* **Java agent** - Internal buffer with configurable size:
-{apm-java-ref}/config-reporter.html#config-max-queue-size[`max_queue_size`].
-* **Node.js agent** - No internal queue. Data is lost.
-* **PHP agent** - No internal queue. Data is lost.
-* **Python agent** - Internal {apm-py-ref}/tuning-and-overhead.html#tuning-queue[Transaction queue]
-with configurable size and time between flushes.
-* **Ruby agent** - Internal queue with configurable size:
-{apm-ruby-ref}/configuration.html#config-api-buffer-size[`api_buffer_size`].
-* **RUM agent** - No internal queue. Data is lost.
-* **.NET agent** - No internal queue. Data is lost.
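-
-For example, a sketch of raising the Go agent's buffer through the environment variable named in
-the Go agent documentation linked above. The value is purely illustrative; check that documentation
-for the accepted size format and the default:
-
-["source","sh"]
-----
-# Illustrative only: let the Go agent buffer more data while APM Server is down.
-export ELASTIC_APM_BUFFER_SIZE=100MB
-----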
-
-[[central-config-troubleshooting-legacy]]
-[float]
-=== `/api/apm/settings/agent-configuration/search` errors
-
-If you're instrumenting and starting a lot of services at the same time
-or using a very large number of service or environment names,
-you may see the following APM Server logs related to {apm-agent} central configuration:
-
-* `.../api/apm/settings/agent-configuration/search: context canceled`
-* `.../api/apm/settings/agent-configuration/search: net/http: TLS handshake timeout`
-
-There are two possible causes:
-
-1. {kib} is overwhelmed by the number of requests coming from APM Server.
-2. {es} can't reply quickly enough to {kib}.
-
-For cause #1, try one or more of the following:
-
-* Increase the <> setting.
-* Increase the <>.
-* Increase {kib}'s resources so that it is able to manage more requests.
-* If you're not using APM central configuration, disable it with <>.
-Central configuration can also be disabled at the {apm-agent} level.
-
-For cause #2, investigate why {es} is not responding in a timely manner.
-{kib}'s queries to {es} are simple, so it may just be that {es} is unhealthy.
-If that's not the problem, you may need to use {ref}/index-modules-slowlog.html[Search Slow Log] to investigate your {es} logs.
-
-To avoid this problem entirely,
-we recommend <>.
diff --git a/docs/legacy/config-ownership.asciidoc b/docs/legacy/config-ownership.asciidoc
deleted file mode 100644
index ebd3ccfcb96..00000000000
--- a/docs/legacy/config-ownership.asciidoc
+++ /dev/null
@@ -1,44 +0,0 @@
-[float]
-[[config-file-ownership]]
-==== Configuration file ownership
-
-On systems with POSIX file permissions,
-the {beatname_uc} configuration file is subject to ownership and file permission checks.
-These checks prevent unauthorized users from providing or modifying configurations that are run by {beatname_uc}.
-
-When installed via an RPM or DEB package,
-the configuration file at +/etc/{beatname_lc}/{beatname_lc}.yml+ will be owned by +{beatname_lc}+,
-and have file permissions of `0600` (`-rw-------`).
-
-{beatname_uc} will only start if the configuration file is owned by the user running the process,
-or by running as root with configuration ownership set to `root:root`.
-
-You may encounter the following errors if your configuration file fails these checks:
-
-["source", "systemd", subs="attributes"]
------
-Exiting: error loading config file: config file ("/etc/{beatname_lc}/{beatname_lc}.yml")
-must be owned by the user identifier (uid=1000) or root
------
-
-To correct this problem, you can change the ownership of the configuration file with:
-+chown {beatname_lc}:{beatname_lc} /etc/{beatname_lc}/{beatname_lc}.yml+.
-
-You can also make root the config owner, although this is not recommended:
-+sudo chown root:root /etc/{beatname_lc}/{beatname_lc}.yml+.
-
-["source", "systemd", subs="attributes"]
------
-Exiting: error loading config file: config file ("/etc/{beatname_lc}/{beatname_lc}.yml")
-can only be writable by the owner but the permissions are "-rw-rw-r--"
-(to fix the permissions use: 'chmod go-w /etc/{beatname_lc}/{beatname_lc}.yml')
------
-
-To correct this problem, use +chmod go-w /etc/{beatname_lc}/{beatname_lc}.yml+ to
-remove write privileges from anyone other than the owner.
-
-[float]
-===== Disabling strict permission checks
-
-You can disable strict permission checks from the command line by using
-`--strict.perms=false`, but we strongly encourage you to leave the checks enabled.
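-
-For example, a sketch of starting the server in the foreground with the check disabled;
-this is only appropriate for local debugging, not for production:
-
-["source", "sh", subs="attributes"]
------
-./{beatname_lc} -e --strict.perms=false
------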
diff --git a/docs/legacy/configuration-anonymous.asciidoc b/docs/legacy/configuration-anonymous.asciidoc deleted file mode 100644 index cc3d54b6630..00000000000 --- a/docs/legacy/configuration-anonymous.asciidoc +++ /dev/null @@ -1,125 +0,0 @@ -[[configuration-anonymous]] -== Anonymous auth configuration options - -++++ -Anonymous authentication -++++ - -IMPORTANT: {deprecation-notice-config} -If you're using {fleet} and the Elastic APM integration, please see <> instead. - -Elastic APM agents can send unauthenticated (anonymous) events to the APM Server. -This is useful for agents that run on clients, like the Real User Monitoring (RUM) agent running in a browser, -or the iOS/Swift agent running in a user application. - -Example configuration: - -["source","yaml"] ----- -apm-server.auth.anonymous.enabled: true -apm-server.auth.anonymous.allow_agent: [rum-js] -apm-server.auth.anonymous.allow_service: [my_service_name] -apm-server.auth.anonymous.rate_limit.event_limit: 300 -apm-server.auth.anonymous.rate_limit.ip_limit: 1000 ----- - -[float] -[[config-auth-anon-rum]] -=== Real User Monitoring (RUM) - -Anonymous authentication must be enabled to collect RUM data. -For this reason, anonymous auth will be enabled automatically if <> -is set to `true`, and <> is not explicitly defined. - -See <> for additional RUM configuration options. - -[float] -[[config-auth-anon-mitigating]] -=== Mitigating malicious requests - -There are a few configuration variables that can mitigate the impact of malicious requests to an -unauthenticated APM Server endpoint. - -Use the <> and <> configs to ensure that the -`agent.name` and `service.name` of each incoming request match a specified list. - -Additionally, the APM Server can rate-limit unauthenticated requests based on the client IP address -(`client.ip`) of the request with <>. -This allows you to specify the maximum number of requests allowed per unique IP address, per second. - -[float] -[[config-auth-anon-client-ip]] -==== Deriving an incoming request's `client.ip` address - -The remote IP address of an incoming request might be different -from the end-user's actual IP address, for example, because of a proxy. For this reason, -the APM Server attempts to derive the IP address of an incoming request from HTTP headers. -The supported headers are parsed in the following order: - -1. `Forwarded` -2. `X-Real-Ip` -3. `X-Forwarded-For` - -If none of these headers are present, the remote address for the incoming request is used. - -[float] -[[config-auth-anon-client-ip-concerns]] -==== Using a reverse proxy or load balancer - -HTTP headers are easily modified; -it's possible for anyone to spoof the derived `client.ip` value by changing or setting, -for example, the value of the `X-Forwarded-For` header. -For this reason, if any of your clients are not trusted, -we recommend setting up a reverse proxy or load balancer in front of the APM Server. - -Using a proxy allows you to clear any existing IP-forwarding HTTP headers, -and replace them with one set by the proxy. -This prevents malicious users from cycling spoofed IP addresses to bypass the -APM Server's rate limiting feature. - -[float] -[[config-auth-anon]] -=== Configuration reference - -Specify the following options in the `apm-server.auth.anonymous` section of the `apm-server.yml` config file: - -[float] -[[config-auth-anon-enabled]] -==== `enabled` - -Enable or disable anonymous authentication. 
- -Default: `false` (disabled) - -[float] -[[config-auth-anon-allow-agent]] -==== `allow_agent` -A list of permitted {apm-agent} names for anonymous authentication. -Names in this list must match the agent's `agent.name`. - -Default: `[rum-js, js-base]` (only RUM agent events are accepted) - -[float] -[[config-auth-anon-allow-service]] -==== `allow_service` -A list of permitted service names for anonymous authentication. -Names in this list must match the agent's `service.name`. -This can be used to limit the number of service-specific indices or data streams created. - -Default: Not set (any service name is accepted) - -[float] -[[config-auth-anon-ip-limit]] -==== `rate_limit.ip_limit` -The number of unique IP addresses to track in an LRU cache. -IP addresses in the cache will be rate limited according to the <> setting. -Consider increasing this default if your application has many concurrent clients. - -Default: `1000` - -[float] -[[config-auth-anon-event-limit]] -==== `rate_limit.event_limit` -The maximum number of events allowed per second, per agent IP address. - -Default: `300` diff --git a/docs/legacy/configuration-process.asciidoc b/docs/legacy/configuration-process.asciidoc deleted file mode 100644 index f96953616bf..00000000000 --- a/docs/legacy/configuration-process.asciidoc +++ /dev/null @@ -1,159 +0,0 @@ -[[configuration-process]] -== General configuration options - -IMPORTANT: {deprecation-notice-config} -If you've already upgraded, please see <> instead. - -Example config file: - -["source","yaml"] ----- -apm-server: - host: "localhost:8200" - rum: - enabled: true - -output: - elasticsearch: - hosts: ElasticsearchAddress:9200 - -max_procs: 4 ----- - -NOTE: If you are using an X-Pack secured version of {stack}, -you need to specify credentials in the config file before you run the commands that set up and start APM Server. -For example: - -[source,yaml] ----- -output.elasticsearch: - hosts: ["ElasticsearchAddress:9200"] - username: "elastic" - password: "elastic" ----- - -[float] -[[configuration-apm-server]] -=== Configuration options: `apm-server.*` - -[[host]] -[float] -==== `host` -Defines the host and port the server is listening on. -Use `"unix:/path/to.sock"` to listen on a Unix domain socket. -Defaults to 'localhost:8200'. - -[[max_header_size]] -[float] -==== `max_header_size` -Maximum permitted size of a request's header accepted by the server to be processed (in Bytes). -Defaults to 1048576 Bytes (1 MB). - -[[idle_timeout]] -[float] -==== `idle_timeout` -Maximum amount of time to wait for the next incoming request before underlying connection is closed. -Defaults to 45 seconds. - -[[read_timeout]] -[float] -==== `read_timeout` -Maximum permitted duration for reading an entire request. -Defaults to 30 seconds. - -[[write_timeout]] -[float] -==== `write_timeout` -Maximum permitted duration for writing a response. -Defaults to 30 seconds. - -[[shutdown_timeout]] -[float] -==== `shutdown_timeout` -Maximum duration in seconds before releasing resources when shutting down the server. -Defaults to 5 seconds. - -[[max_event_size]] -[float] -==== `max_event_size` -Maximum permitted size of an event accepted by the server to be processed (in Bytes). -Defaults to 307200 Bytes. - -[float] -[[configuration-other]] -=== Configuration options: general - -[[max_connections]] -[float] -==== `max_connections` -Maximum number of TCP connections to accept simultaneously. -Default value is 0, which means _unlimited_. 
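-
-For example, a sketch of these options in `apm-server.yml`; the values shown are simply the
-documented defaults made explicit:
-
-["source","yaml"]
-----
-apm-server:
-  host: "localhost:8200"
-  max_event_size: 307200
-  max_connections: 0
-----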
-
-[[config-secret-token]]
-[float]
-==== `auth.secret_token`
-Authorization token for sending data to the APM server.
-If a token is set, the agents must send it in the following format:
-`Authorization: Bearer <secret-token>`.
-The token is not used for RUM endpoints. By default, no authorization token is set.
-
-We recommend using a secret token in combination with SSL enabled.
-Read more about <> and the <>.
-
-[[config-secret-token-legacy]]
-[float]
-==== `secret_token`
-
-deprecated::[7.14.0, Replaced by `auth.secret_token`. See <>]
-
-In versions prior to 7.14.0, secret token authorization was known as `apm-server.secret_token`. In 7.14.0 this was renamed `apm-server.auth.secret_token`.
-The old configuration will continue to work until 8.0.0, and the new configuration will take precedence.
-
-[[capture_personal_data]]
-[float]
-==== `capture_personal_data`
-If true,
-APM Server captures the IP of the instrumented service and its user agent, if any.
-Enabled by default.
-
-[[default_service_environment]]
-[float]
-==== `default_service_environment`
-Sets the default service environment to associate with data and requests received from agents which have no service environment defined.
-
-[[expvar.enabled]]
-[float]
-==== `expvar.enabled`
-When set to true, APM Server exposes https://golang.org/pkg/expvar/[golang expvar].
-Disabled by default.
-
-[[expvar.url]]
-[float]
-==== `expvar.url`
-Configure the URL to expose expvar.
-Defaults to `debug/vars`.
-
-[[instrumentation.enabled]]
-[float]
-==== `instrumentation.enabled`
-Enables self instrumentation of the APM Server itself.
-Disabled by default.
-
-[float]
-=== Configuration options: `max_procs`
-
-[[max_procs]]
-[float]
-==== `max_procs`
-Sets the maximum number of CPUs that can be executing simultaneously.
-The default is the number of logical CPUs available in the system.
-
-[float]
-=== Configuration options: `data_streams`
-
-[[data_streams.wait_for_integration]]
-[float]
-==== `wait_for_integration`
-Wait for the `apm` {fleet} integration to be installed by {kib}. Requires either <>
-or the <> to be configured.
-Defaults to true.
diff --git a/docs/legacy/configuration-rum.asciidoc b/docs/legacy/configuration-rum.asciidoc
deleted file mode 100644
index badb2670473..00000000000
--- a/docs/legacy/configuration-rum.asciidoc
+++ /dev/null
@@ -1,170 +0,0 @@
-[[configuration-rum]]
-== Configure Real User Monitoring (RUM)
-
-++++
-Real User Monitoring (RUM)
-++++
-
-IMPORTANT: {deprecation-notice-config}
-If you're using {fleet} and the Elastic APM integration, please see <> instead.
-
-The {apm-rum-ref-v}/index.html[Real User Monitoring (RUM) agent] captures user interactions with clients such as web browsers.
-These interactions are sent as events to the APM Server.
-Because the RUM agent runs on the client side, the connection between agent and server is unauthenticated.
-As a security precaution, RUM is therefore disabled by default.
-To enable it, set `apm-server.rum.enabled` to `true` in your APM Server configuration file.
-
-In addition, if APM Server is deployed in an origin different from the page’s origin,
-you will need to configure {apm-rum-ref-v}/configuring-cors.html[Cross-Origin Resource Sharing (CORS)] in the Agent.
- -Example config with RUM enabled: - -["source","yaml"] ----- -apm-server.rum.enabled: true -apm-server.auth.anonymous.rate_limit.event_limit: 300 -apm-server.auth.anonymous.rate_limit.ip_limit: 1000 -apm-server.auth.anonymous.allow_service: [your_service_name] -apm-server.rum.allow_origins: ['*'] -apm-server.rum.allow_headers: ["header1", "header2"] -apm-server.rum.library_pattern: "node_modules|bower_components|~" -apm-server.rum.exclude_from_grouping: "^/webpack" -apm-server.rum.source_mapping.enabled: true -apm-server.rum.source_mapping.cache.expiration: 5m ----- - -[float] -[[enable-rum-support]] -=== Configuration reference - -Specify the following options in the `apm-server.rum` section of the `apm-server.yml` config file: - -[[rum-enable]] -[float] -==== `enabled` -To enable RUM support, set `apm-server.rum.enabled` to `true`. -By default this is disabled. - -NOTE: Enabling RUM support automatically enables <>. -Anonymous access is required as the RUM agent runs in end users' browsers. - -[float] -[[event_rate.limit]] -==== `event_rate.limit` - -deprecated::[7.15.0, Replaced by <>.] - -The maximum number of events allowed per second, per agent IP address. - -Default: `300` - -[float] -==== `event_rate.lru_size` - -deprecated::[7.15.0, Replaced by <>.] - -The number of unique IP addresses to track in an LRU cache. -IP addresses in the cache will be rate limited according to the <> setting. -Consider increasing this default if your site has many concurrent clients. - -Default: `1000` - -[float] -[[rum-allow-service-names]] -==== `allow_service_names` - -deprecated::[7.15.0, Replaced by <>.] -A list of permitted service names for RUM support. -Names in this list must match the agent's `service.name`. -This can be set to restrict RUM events to those with one of a set of known service names, -in order to limit the number of service-specific indices or data streams created. - -Default: Not set (any service name is accepted) - -[float] -[[rum-allow-origins]] -==== `allow_origins` -A list of permitted origins for RUM support. -User-agents send an Origin header that will be validated against this list. -This is done automatically by modern browsers as part of the https://www.w3.org/TR/cors/[CORS specification]. -An origin is made of a protocol scheme, host and port, without the URL path. - -Default: `['*']` (allows everything) - -[float] -[[rum-allow-headers]] -==== `allow_headers` -HTTP requests made from the RUM agent to the APM Server are limited in the HTTP headers they are allowed to have. -If any other headers are added, the request will be rejected by the browser due to Cross-Origin Resource Sharing (CORS) restrictions. -Use this setting to allow additional headers. -The default list of allowed headers includes "Content-Type", "Content-Encoding", and "Accept"; -custom values configured here are appended to the default list and used as the value for the `Access-Control-Allow-Headers` header. - -Default: `[]` - -[float] -[[rum-response-headers]] -==== `response_headers` -Custom HTTP headers to add to RUM responses. -This can be useful for security policy compliance. - -Values set for the same key will be concatenated. - -Default: Not set - -[float] -[[rum-library-pattern]] -==== `library_pattern` -RegExp to be matched against a stack trace frame's `file_name` and `abs_path` attributes. -If the RegExp matches, the stack trace frame is considered to be a library frame. 
-When source mapping is applied, the `error.culprit` is set to reflect the _function_ and the _filename_ -of the first non library frame. -This aims to provide an entry point for identifying issues. - -Default: `"node_modules|bower_components|~"` - -[float] -==== `exclude_from_grouping` -RegExp to be matched against a stack trace frame's `file_name`. -If the RegExp matches, the stack trace frame is excluded from being used for calculating error groups. - -Default: `"^/webpack"` (excludes stack trace frames that have a filename starting with `/webpack`) - -[[config-sourcemapping-enabled]] -[float] -==== `source_mapping.enabled` -Used to enable/disable <> for RUM events. -When enabled, the APM Server needs additional privileges to read source maps. -See <> for more details. - -Default: `true` - -[[config-sourcemapping-elasticsearch]] -[float] -==== `source_mapping.elasticsearch` -Configure the {es} source map retrieval location, taking the same options as <>. -This must be set when using an output other than {es}, and that output is writing to {es}. -Otherwise leave this section empty. - -[[rum-sourcemap-cache]] -[float] -==== `source_mapping.cache.expiration` -If a source map has been uploaded to the APM Server, -<> is automatically applied to documents sent to the RUM endpoint. -Source maps are fetched from {es} and then kept in an in-memory cache for the configured time. -Values configured without a time unit are treated as seconds. - -Default: `5m` (5 minutes) - -[float] -==== `source_mapping.index_pattern` -Previous versions of APM Server stored source maps in `apm-%{[observer.version]}-sourcemap` indices. -Search source maps stored in an older version with this setting. - -Default: `"apm-*-sourcemap*"` - -[float] -=== Ingest pipelines - -The default APM Server pipeline includes processors that enrich RUM data prior to indexing in {es}. -See <> for details on how to locate, edit, or disable this preprocessing. diff --git a/docs/legacy/configure-kibana-endpoint.asciidoc b/docs/legacy/configure-kibana-endpoint.asciidoc deleted file mode 100644 index f5f7635266b..00000000000 --- a/docs/legacy/configure-kibana-endpoint.asciidoc +++ /dev/null @@ -1,128 +0,0 @@ -[[setup-kibana-endpoint]] -== Configure the {kib} endpoint - -++++ -{kib} endpoint -++++ - -IMPORTANT: {deprecation-notice-config} - -Configuring the {kib} endpoint is required for -{kibana-ref}/agent-configuration.html[APM Agent configuration in {kib}]. -You configure the endpoint in the `apm-server.kibana` section of the -+{beatname_lc}.yml+ config file. - -Here's a sample configuration: - -[source,yaml] ----- -apm-server.kibana.enabled: true -apm-server.kibana.host: "http://localhost:5601" ----- - -[float] -=== Considerations - -* If your setup uses a <> for Agent/Server communication, -the same token is used to secure this endpoint. -* It's important to still set relevant defaults locally in each Agent's configuration. -If APM Server is unreachable, slow to respond, returns an error, etc., -defaults set in the agent will apply according to their precedence. -* APM Server needs sufficient {kib} privileges to manage central configuration. -See <> for a list of required privileges. - -[float] -=== {kib} endpoint configuration options - -You can specify the following options in the `apm-server.kibana` section of the -+{beatname_lc}.yml+ config file: - -[float] -[[kibana-enabled]] -==== `apm-server.kibana.enabled` - -Defaults to `false`. Must be `true` to use APM Agent configuration. 
-

[float]
[[kibana-host]]
==== `apm-server.kibana.host`

The {kib} host that APM Server will communicate with. The default is
`127.0.0.1:5601`. The value of `host` can be a `URL` or `IP:PORT`. For example: `http://192.15.3.2`, `192.15.3.2:5601`, or `http://192.15.3.2:6701/path`. If no
port is specified, `5601` is used.

NOTE: When a node is defined as an `IP:PORT`, the _scheme_ and _path_ are taken
from the <> and
<> config options.

IPv6 addresses must be defined using the following format:
`https://[2001:db8::1]:5601`.

[float]
[[kibana-protocol-option]]
==== `apm-server.kibana.protocol`

The name of the protocol {kib} is reachable on. The options are: `http` or
`https`. The default is `http`. However, if you specify a URL for host, the
value of `protocol` is overridden by whatever scheme you specify in the URL.

Example config:

[source,yaml]
----
apm-server.kibana.host: "192.0.2.255:5601"
apm-server.kibana.protocol: "http"
apm-server.kibana.path: /kibana
----


[float]
==== `apm-server.kibana.username`

The basic authentication username for connecting to {kib}.

[float]
==== `apm-server.kibana.password`

The basic authentication password for connecting to {kib}.

[float]
[[kibana-path-option]]
==== `apm-server.kibana.path`

An HTTP path prefix that is prepended to the HTTP API calls. This is useful for
cases where {kib} listens behind an HTTP reverse proxy that exports the API
under a custom prefix.

[float]
==== `apm-server.kibana.ssl.enabled`

Enables {beatname_uc} to use SSL settings when connecting to {kib} via HTTPS.
If you configure {beatname_uc} to connect over HTTPS, this setting defaults to
`true` and {beatname_uc} uses the default SSL settings.

Example configuration:

[source,yaml]
----
apm-server.kibana.host: "https://192.0.2.255:5601"
apm-server.kibana.ssl.enabled: true
apm-server.kibana.ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
apm-server.kibana.ssl.certificate: "/etc/pki/client/cert.pem"
apm-server.kibana.ssl.key: "/etc/pki/client/cert.key"
----

For information on the additional SSL configuration options,
see <>.

[float]
=== Agent configuration options

You can specify the following options in the `apm-server.agent.config` section of the
+{beatname_lc}.yml+ config file:

[float]
==== `agent.config.cache.expiration`

When using APM Agent configuration, information fetched from {kib} is cached in memory.
This setting specifies the time before cache key expiration. Defaults to 30 seconds.
diff --git a/docs/legacy/configuring-ingest.asciidoc b/docs/legacy/configuring-ingest.asciidoc deleted file mode 100644 index fe62a3c6117..00000000000 --- a/docs/legacy/configuring-ingest.asciidoc +++ /dev/null @@ -1,9 +0,0 @@
[[configuring-ingest-node]]
== Parse data using ingest node pipelines

deprecated::[7.16.0,Users should now use the <>. See <> if you've already upgraded.]

// Appends `-legacy` to each section's ID so that they are different from the APM integration IDs
:append-legacy: -legacy

include::../ingest-pipelines.asciidoc[tag=ingest-pipelines]
diff --git a/docs/legacy/configuring-output-after.asciidoc b/docs/legacy/configuring-output-after.asciidoc deleted file mode 100644 index 36ffabd711a..00000000000 --- a/docs/legacy/configuring-output-after.asciidoc +++ /dev/null @@ -1,31 +0,0 @@
[[sourcemap-output]]

[float]
=== Source maps

Source maps can be uploaded through all outputs but must eventually be stored in {es}.
-When using outputs other than {es}, `source_mapping.elasticsearch` must be set for source maps to be applied. -Be sure to update `source_mapping.index_pattern` if source maps are stored in the non-default location. -See <> for more details. - -[[libbeat-configuration-fields]] -[float] -=== `fields` - -Fields are optional tags that can be added to the documents that APM Server outputs. -They are defined at the top-level in your configuration file, and will apply to any configured output. -Fields can be scalar values, arrays, dictionaries, or any nested combination of these. -By default, the fields that you specify here will be grouped under a `fields` sub-dictionary in the output document. - -Example using the {es} output: - -[source,yaml] ------------------------------------------------------------------------------- -fields: {project: "myproject", instance-id: "574734885120952459"} -#-------------------------- Elasticsearch output -------------------------- -output.elasticsearch: - hosts: ["localhost:9200"] ------------------------------------------------------------------------------- - -To store the custom fields as top-level fields, set the `fields_under_root` option to true. -This is not recommended as when new fields are added to APM documents backward compatibility cannot be ensured. diff --git a/docs/legacy/configuring.asciidoc b/docs/legacy/configuring.asciidoc deleted file mode 100644 index 1379a7f11ec..00000000000 --- a/docs/legacy/configuring.asciidoc +++ /dev/null @@ -1,80 +0,0 @@ -[[configuring-howto-apm-server]] -= Configure APM Server - -++++ -Configure -++++ - -IMPORTANT: {deprecation-notice-config} -If you've already upgraded, please see <> instead. - -include::{libbeat-dir}/shared/configuring-intro.asciidoc[] - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -include::./configuration-process.asciidoc[] - -include::./configuration-anonymous.asciidoc[] - -include::{libbeat-dir}/shared-instrumentation.asciidoc[] - -include::./jaeger-reference.asciidoc[] - -ifndef::no_kerberos[] -include::{libbeat-dir}/shared-kerberos-config.asciidoc[] -endif::[] - -include::./configure-kibana-endpoint.asciidoc[] - -include::{libbeat-dir}/loggingconfig.asciidoc[] - -:no-redis-output: -include::{libbeat-dir}/outputconfig.asciidoc[] - -include::{libbeat-dir}/shared-path-config.asciidoc[] - -include::./configuration-rum.asciidoc[] - -// BEGIN SSL SECTION -------------------------------------------- -[[configuration-ssl-landing]] -== SSL/TLS settings - -IMPORTANT: {deprecation-notice-config} -If you've already upgraded, please see <> instead. - -SSL/TLS is available for: - -* <> (APM Agents) -* <> that support SSL, like {es}, {ls}, or Kafka. - -Additional information on getting started with SSL/TLS is available in <>. - -// The leveloffset attribute pushes all headings in the included document down by -// the specified number of levels. It is required here because the shared Beats -// documentation was created as a level 1 heading. In the APM book, this level -// would break the DTD. Using leveloffset +1, we can include this file here. -// It's important to reset the level heading after including a file. 
-:leveloffset: +1 -include::{libbeat-dir}/shared-ssl-config.asciidoc[] -:leveloffset: -1 - -include::ssl-input-settings.asciidoc[] -// END SSL SECTION -------------------------------------------- - -include::./transaction-metrics.asciidoc[] - -:standalone: -include::{libbeat-dir}/shared-env-vars.asciidoc[] -:standalone!: diff --git a/docs/legacy/copied-from-beats/docs/command-reference.asciidoc b/docs/legacy/copied-from-beats/docs/command-reference.asciidoc deleted file mode 100644 index ae30db07277..00000000000 --- a/docs/legacy/copied-from-beats/docs/command-reference.asciidoc +++ /dev/null @@ -1,1066 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. -//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc -//// Use the following include to pull this content into a doc file: -//// include::../../libbeat/docs/command-reference.asciidoc[] -////////////////////////////////////////////////////////////////////////// - - -// These attributes are used to resolve short descriptions -// tag::attributes[] - -:global-flags: Also see <>. - -:deploy-command-short-desc: Deploys the specified function to your serverless environment - -:apikey-command-short-desc: Manage API Keys for communication between APM agents and server. - -ifndef::serverless[] -ifndef::no_dashboards[] -:export-command-short-desc: Exports the configuration, index template, {ilm-init} policy, or a dashboard to stdout -endif::no_dashboards[] - -ifdef::no_dashboards[] -:export-command-short-desc: Exports the configuration, index template, or {ilm-init} policy to stdout -endif::no_dashboards[] -endif::serverless[] - -ifdef::serverless[] -:export-command-short-desc: Exports the configuration, index template, or {cloudformation-ref} template to stdout -endif::serverless[] - -:help-command-short-desc: Shows help for any command -:keystore-command-short-desc: Manages the <> -:modules-command-short-desc: Manages configured modules -:package-command-short-desc: Packages the configuration and executable into a zip file -:remove-command-short-desc: Removes the specified function from your serverless environment -:run-command-short-desc: Runs {beatname_uc}. 
This command is used by default if you start {beatname_uc} without specifying a command - -ifdef::has_ml_jobs[] -:setup-command-short-desc: Sets up the initial environment, including the index template, {ilm-init} policy and write alias, {kib} dashboards (when available), and {ml} jobs (when available) -endif::[] - -ifdef::no_dashboards[] -:setup-command-short-desc: Sets up the initial environment, including the ES index template, and {ilm-init} policy and write alias -endif::no_dashboards[] - -ifndef::has_ml_jobs,no_dashboards[] -:setup-command-short-desc: Sets up the initial environment, including the index template, {ilm-init} policy and write alias, and {kib} dashboards (when available) -endif::[] - -:update-command-short-desc: Updates the specified function -:test-command-short-desc: Tests the configuration -:version-command-short-desc: Shows information about the current version - -// end::attributes[] - -[[command-line-options]] -=== {beatname_uc} command reference - -++++ -Command reference -++++ - -IMPORTANT: {deprecation-notice-config} - -ifndef::no_dashboards[] -{beatname_uc} provides a command-line interface for starting {beatname_uc} and -performing common tasks, like testing configuration files and loading dashboards. -endif::no_dashboards[] - -ifdef::no_dashboards[] -{beatname_uc} provides a command-line interface for starting {beatname_uc} and -performing common tasks, like testing configuration files. -endif::no_dashboards[] - -The command-line also supports <> -for controlling global behaviors. - -ifeval::["{beatname_lc}"!="winlogbeat"] -[TIP] -========================= -Use `sudo` to run the following commands if: - -* the config file is owned by `root`, or -* {beatname_uc} is configured to capture data that requires `root` access - -========================= -endif::[] - -Some of the features described here require an Elastic license. For -more information, see https://www.elastic.co/subscriptions and -{kibana-ref}/managing-licenses.html[License Management]. - - -[options="header"] -|======================= -|Commands | -ifeval::["{beatname_lc}"=="functionbeat"] -|<> | {deploy-command-short-desc}. -endif::[] -ifdef::apm-server[] -|<> |{apikey-command-short-desc}. -endif::[] -|<> |{export-command-short-desc}. -|<> |{help-command-short-desc}. -ifndef::serverless[] -|<> |{keystore-command-short-desc}. -endif::[] -ifeval::["{beatname_lc}"=="functionbeat"] -|<> |{package-command-short-desc}. -|<> |{remove-command-short-desc}. -endif::[] -ifdef::has_modules_command[] -|<> |{modules-command-short-desc}. -endif::[] -ifndef::serverless[] -|<> |{run-command-short-desc}. -endif::[] -ifndef::apm-server[] -|<> |{setup-command-short-desc}. -endif::apm-server[] -|<> |{test-command-short-desc}. -ifeval::["{beatname_lc}"=="functionbeat"] -|<> |{update-command-short-desc}. -endif::[] -|<> |{version-command-short-desc}. -|======================= - -Also see <>. - -ifdef::apm-server[] -[float] -[[apikey-command]] -==== `apikey` command - -experimental::[] - -Communication between APM agents and APM Server now supports sending an -<>. -APM Server provides an `apikey` command that can create, verify, invalidate, -and show information about API Keys for agent/server communication. -Most operations require the `manage_own_api_key` cluster privilege, -and you must ensure that `apm-server.api_key` or `output.elasticsearch` are configured appropriately. 
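
As an illustration, a minimal +{beatname_lc}.yml+ sketch that turns on API key authorization might look as follows (the `limit` value mirrors what we believe the shipped default to be and is not a recommendation):

["source","yaml"]
----
apm-server.api_key.enabled: true # allow agents to authenticate with API keys
apm-server.api_key.limit: 100    # assumed default: unique API keys accepted per minute
----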
- -*SYNOPSIS* - -["source","sh",subs="attributes"] ----- -{beatname_lc} apikey SUBCOMMAND [FLAGS] ----- - -*`SUBCOMMAND`* - -// tag::apikey-subcommands[] -*`create`*:: -Create an API Key with the specified privilege(s). No required flags. -+ -The user requesting to create an API Key needs to have APM privileges used by the APM Server. -A superuser, by default, has these privileges. For other users, -you can create them. See <> for required privileges. - -*`info`*:: -Query API Key(s). `--id` or `--name` required. - -*`invalidate`*:: -Invalidate API Key(s). `--id` or `--name` required. - -*`verify`*:: -Check if a credentials string has the given privilege(s). - `--credentials` required. -// end::apikey-subcommands[] - -*FLAGS* - -*`--agent-config`*:: -Required for agents to read configuration remotely. Valid with the `create` and `verify` subcommands. -When used with `create`, gives the `config_agent:read` privilege to the created key. -When used with `verify`, asks for the `config_agent:read` privilege. - -*`--credentials CREDS`*:: -Required for the `verify` subcommand. Specifies the credentials for which to check privileges. -Credentials are the base64 encoded representation of the API key's `id:api_key`. - -*`--expiration TIME`*:: -When used with `create`, specifies the expiration for the key, e.g., "1d" (default never). - -*`--id ID`*:: -ID of the API key. Valid with the `info` and `invalidate` subcommands. -When used with `info`, queries the specified ID. -When used with `invalidate`, deletes the specified ID. - -*`--ingest`*:: -Required for ingesting events. Valid with the `create` and `verify` subcommands. -When used with `create`, gives the `event:write` privilege to the created key. -When used with `verify`, asks for the `event:write` privilege. - -*`--json`*:: -Prints the output of the command as JSON. -Valid with all `apikey` subcommands. - -*`--name NAME`*:: -Name of the API key(s). Valid with the `create`, `info`, and `invalidate` subcommands. -When used with `create`, specifies the name of the API key to be created (default: "apm-key"). -When used with `info`, specifies the API key to query (multiple matches are possible). -When used with `invalidate`, specifies the API key to delete (multiple matches are possible). - -*`--sourcemap`*:: -Required for uploading source maps. Valid with the `create` and `verify` subcommands. -When used with `create`, gives the `sourcemap:write` privilege to the created key. -When used with `verify`, asks for the `sourcemap:write` privilege. - -*`--valid-only`*:: -When used with `info`, only returns valid API Keys (not expired or invalidated). - -{global-flags} - -*EXAMPLES* - -["source","sh",subs="attributes"] ------ -{beatname_lc} apikey create --ingest --agent-config --name example-001 -{beatname_lc} apikey info --name example-001 --valid-only -{beatname_lc} apikey invalidate --name example-001 ------ - -For more information, see <>. - -endif::[] - -ifeval::["{beatname_lc}"=="functionbeat"] -[[deploy-command]] -==== `deploy` command - -{deploy-command-short-desc}. Before deploying functions, make sure the user has -the credentials required by your cloud service provider. - -*SYNOPSIS* - -["source","sh",subs="attributes"] ----- -{beatname_lc} deploy FUNCTION_NAME [FLAGS] ----- - -*`FUNCTION_NAME`*:: -Specifies the name of the function to deploy. - -*FLAGS* - -*`-h, --help`*:: -Shows help for the `deploy` command. 
-

{global-flags}

*EXAMPLES*

["source","sh",subs="attributes"]
-----
{beatname_lc} deploy cloudwatch
{beatname_lc} deploy sqs
-----
endif::[]

[float]
[[export-command]]
==== `export` command

ifndef::serverless[]
ifndef::no_dashboards[]
{export-command-short-desc}. You can use this
command to quickly view your configuration, see the contents of the index
template and the {ilm-init} policy, or export a dashboard from {kib}.
endif::no_dashboards[]

ifdef::no_dashboards[]
{export-command-short-desc}. You can use this
command to quickly view your configuration or see the contents of the index
template or the {ilm-init} policy.
endif::no_dashboards[]
endif::serverless[]

ifdef::serverless[]
{export-command-short-desc}. You can use this
command to quickly view your configuration, see the contents of the index
template and the {ilm-init} policy, or export a CloudFormation template.
endif::serverless[]

*SYNOPSIS*

["source","sh",subs="attributes"]
----
{beatname_lc} export SUBCOMMAND [FLAGS]
----

*`SUBCOMMAND`*

*`config`*::
Exports the current configuration to stdout. If you use the `-c` flag, this
command exports the configuration that's defined in the specified file.

ifndef::no_dashboards[]
[[dashboard-subcommand]]*`dashboard`*::
Exports a dashboard. You can use this option to store a dashboard on disk in a
module and load it automatically. For example, to export the dashboard to a JSON
file, run:
+
["source","shell",subs="attributes"]
----
{beatname_lc} export dashboard --id="DASHBOARD_ID" > dashboard.json
----
+
To find the `DASHBOARD_ID`, look at the URL for the dashboard in {kib}. By
default, `export dashboard` writes the dashboard to stdout. The example shows
how to write the dashboard to a JSON file so that you can import it later. The
JSON file will contain the dashboard with all visualizations and searches. You
must load the index pattern separately for {beatname_uc}.
+
To load the dashboard, copy the generated `dashboard.json` file into the
`kibana/6/dashboard` directory of {beatname_uc}, and run
+{beatname_lc} setup --dashboards+ to import the dashboard.
+
If {kib} is not running on `localhost:5601`, you must also adjust the
{beatname_uc} configuration under `setup.kibana`.
endif::no_dashboards[]

[[template-subcommand]]*`template`*::
Exports the index template to stdout. You can specify the `--es.version` and
`--index` flags to further define what gets exported. Furthermore, you can export
the template to a file instead of `stdout` by defining a directory via `--dir`.

[[ilm-policy-subcommand]]
*`ilm-policy`*::
Exports the {ilm} policy to stdout. You can specify the
`--es.version` and a `--dir` to which the policy should be exported as a
file rather than exporting to `stdout`.

ifdef::serverless[]
[[function-subcommand]]*`function` FUNCTION_NAME*::
Exports an {cloudformation-ref} template to stdout.
endif::serverless[]

*FLAGS*

*`--es.version VERSION`*::
When used with <>, exports an index
template that is compatible with the specified version.
When used with <>, exports the {ilm-init} policy
if the specified ES version is enabled for {ilm-init}.

*`-h, --help`*::
Shows help for the `export` command.

*`--index BASE_NAME`*::
When used with <>, sets the base name to use for
the index template. If this flag is not specified, the default base name is
+{beatname_lc}+.
- -*`--dir DIRNAME`*:: -Define a directory to which the template and {ilm-init} policy should be exported to -as files instead of printing them to `stdout`. - -ifndef::no_dashboards[] -*`--id DASHBOARD_ID`*:: -When used with <>, specifies the dashboard ID. -endif::no_dashboards[] - -{global-flags} - -*EXAMPLES* - -ifndef::serverless[] -ifndef::no_dashboards[] -["source","sh",subs="attributes"] ------ -{beatname_lc} export config -{beatname_lc} export template --es.version {version} --index myindexname -{beatname_lc} export dashboard --id="a7b35890-8baa-11e8-9676-ef67484126fb" > dashboard.json ------ -endif::no_dashboards[] - -ifdef::no_dashboards[] -["source","sh",subs="attributes"] ------ -{beatname_lc} export config -{beatname_lc} export template --es.version {version} --index myindexname ------ -endif::no_dashboards[] -endif::serverless[] - -ifdef::serverless[] -["source","sh",subs="attributes"] ------ -{beatname_lc} export config -{beatname_lc} export template --es.version {version} --index myindexname -{beatname_lc} export function cloudwatch ------ -endif::serverless[] - -[float] -[[help-command]] -==== `help` command - -{help-command-short-desc}. -ifndef::serverless[] -If no command is specified, shows help for the `run` command. -endif::[] - -*SYNOPSIS* - -["source","sh",subs="attributes"] ----- -{beatname_lc} help COMMAND_NAME [FLAGS] ----- - - -*`COMMAND_NAME`*:: -Specifies the name of the command to show help for. - -*FLAGS* - -*`-h, --help`*:: Shows help for the `help` command. - -{global-flags} - -*EXAMPLE* - -["source","sh",subs="attributes"] ------ -{beatname_lc} help export ------ - -ifndef::serverless[] -[float] -[[keystore-command]] -==== `keystore` command - -{keystore-command-short-desc}. - -*SYNOPSIS* - -["source","sh",subs="attributes"] ----- -{beatname_lc} keystore SUBCOMMAND [FLAGS] ----- - -*`SUBCOMMAND`* - -*`add KEY`*:: -Adds the specified key to the keystore. Use the `--force` flag to overwrite an -existing key. Use the `--stdin` flag to pass the value through `stdin`. - -*`create`*:: -Creates a keystore to hold secrets. Use the `--force` flag to overwrite the -existing keystore. - -*`list`*:: -Lists the keys in the keystore. - -*`remove KEY`*:: -Removes the specified key from the keystore. - -*FLAGS* - -*`--force`*:: -Valid with the `add` and `create` subcommands. When used with `add`, overwrites -the specified key. When used with `create`, overwrites the keystore. - -*`--stdin`*:: -When used with `add`, uses the stdin as the source of the key's value. - -*`-h, --help`*:: -Shows help for the `keystore` command. - - -{global-flags} - -*EXAMPLES* - -["source","sh",subs="attributes"] ------ -{beatname_lc} keystore create -{beatname_lc} keystore add ES_PWD -{beatname_lc} keystore remove ES_PWD -{beatname_lc} keystore list ------ - -See <> for more examples. - -endif::[] - -ifeval::["{beatname_lc}"=="functionbeat"] -[float] -[[package-command]] -==== `package` command - -{package-command-short-desc}. - -*SYNOPSIS* - -["source","sh",subs="attributes"] ----- -{beatname_lc} package [FLAGS] ----- - -*FLAGS* - -*`-h, --help`*:: -Shows help for the `package` command. - -*`-o, --output`*:: -Specifies the full path pattern to use when creating the packages. - -{global-flags} - -*EXAMPLES* - -["source","sh",subs="attributes"] ------ -{beatname_lc} package --output /path/to/folder/package-{{.Provider}}.zip ------ - -[[remove-command]] -==== `remove` command - -{remove-command-short-desc}. 
Before removing functions, make sure the user has -the credentials required by your cloud service provider. - -*SYNOPSIS* - -["source","sh",subs="attributes"] ----- -{beatname_lc} remove FUNCTION_NAME [FLAGS] ----- - -*`FUNCTION_NAME`*:: -Specifies the name of the function to remove. - -*FLAGS* - -*`-h, --help`*:: -Shows help for the `remove` command. - -{global-flags} - -*EXAMPLES* - -["source","sh",subs="attributes"] ------ -{beatname_lc} remove cloudwatch -{beatname_lc} remove sqs ------ -endif::[] - -ifdef::has_modules_command[] -[[modules-command]] -==== `modules` command - -{modules-command-short-desc}. You can use this command to enable and disable -specific module configurations defined in the `modules.d` directory. The -changes you make with this command are persisted and used for subsequent -runs of {beatname_uc}. - -To see which modules are enabled and disabled, run the `list` subcommand. - -*SYNOPSIS* - -["source","sh",subs="attributes"] ----- -{beatname_lc} modules SUBCOMMAND [FLAGS] ----- - - -*`SUBCOMMAND`* - -*`disable MODULE_LIST`*:: -Disables the modules specified in the space-separated list. - -*`enable MODULE_LIST`*:: -Enables the modules specified in the space-separated list. - -*`list`*:: -Lists the modules that are currently enabled and disabled. - - -*FLAGS* - -*`-h, --help`*:: -Shows help for the `export` command. - - -{global-flags} - -*EXAMPLES* - -ifeval::["{beatname_lc}"=="filebeat"] -["source","sh",subs="attributes"] ------ -{beatname_lc} modules list -{beatname_lc} modules enable apache2 auditd mysql ------ -endif::[] - -ifeval::["{beatname_lc}"=="metricbeat"] -["source","sh",subs="attributes"] ------ -{beatname_lc} modules list -{beatname_lc} modules enable apache nginx system ------ -endif::[] -endif::[] - -ifndef::serverless[] -[float] -[[run-command]] -==== `run` command - -{run-command-short-desc}. - -*SYNOPSIS* - -["source","sh",subs="attributes"] ------ -{beatname_lc} run [FLAGS] ------ - -Or: - -["source","sh",subs="attributes"] ------ -{beatname_lc} [FLAGS] ------ - -*FLAGS* - -ifeval::["{beatname_lc}"=="packetbeat"] -*`-I, --I FILE`*:: -Reads packet data from the specified file instead of reading packets from the -network. This option is useful only for testing {beatname_uc}. -+ -["source","sh",subs="attributes"] ------ -{beatname_lc} run -I ~/pcaps/network_traffic.pcap ------ -endif::[] - -*`-N, --N`*:: Disables publishing for testing purposes. -ifndef::no_file_output[] -This option disables all outputs except the <>. -endif::[] - -ifeval::["{beatname_lc}"=="packetbeat"] -*`-O, --O`*:: -Read packets one by one by pressing _Enter_ after each. This option is useful -only for testing {beatname_uc}. -endif::[] - -*`--cpuprofile FILE`*:: -Writes CPU profile data to the specified file. This option is useful for -troubleshooting {beatname_uc}. - -ifeval::["{beatname_lc}"=="packetbeat"] -*`-devices`*:: -Prints the list of devices that are available for sniffing and then exits. -endif::[] - -ifeval::["{beatname_lc}"=="packetbeat"] -*`-dump FILE`*:: -Writes all captured packets to the specified file. This option is useful for -troubleshooting {beatname_uc}. -endif::[] - -*`-h, --help`*:: -Shows help for the `run` command. - -*`--httpprof [HOST]:PORT`*:: -Starts an HTTP server for profiling. This option is useful for troubleshooting -and profiling {beatname_uc}. - -ifeval::["{beatname_lc}"=="packetbeat"] -*`-l N`*:: -Reads the pcap file `N` number of times. The default is 1. Use this option in -combination with the `-I` option. For an infinite loop, use _0_. 
The `-l` -option is useful only for testing {beatname_uc}. -endif::[] - -*`--memprofile FILE`*:: -Writes memory profile data to the specified output file. This option is useful -for troubleshooting {beatname_uc}. - -ifeval::["{beatname_lc}"=="filebeat"] -*`--modules MODULE_LIST`*:: -Specifies a comma-separated list of modules to run. For example: -+ -["source","sh",subs="attributes"] ------ -{beatname_lc} run --modules nginx,mysql,system ------ -+ -Rather than specifying the list of modules every time you run {beatname_uc}, -you can use the <> command to enable and disable -specific modules. Then when you run {beatname_uc}, it will run any modules -that are enabled. -endif::[] - -ifeval::["{beatname_lc}"=="filebeat"] -*`--once`*:: -When the `--once` flag is used, {beatname_uc} starts all configured harvesters -and inputs, and runs each input until the harvesters are closed. If you set the -`--once` flag, you should also set `close_eof` so the harvester is closed when -the end of the file is reached. By default harvesters are closed after -`close_inactive` is reached. -endif::[] - -*`--system.hostfs MOUNT_POINT`*:: - -Specifies the mount point of the host's file system for use in monitoring a host. - - -ifeval::["{beatname_lc}"=="packetbeat"] -*`-t`*:: -Reads packets from the pcap file as fast as possible without sleeping. Use this -option in combination with the `-I` option. The `-t` option is useful only for -testing Packetbeat. -endif::[] - -{global-flags} - -*EXAMPLE* - -["source","sh",subs="attributes"] ------ -{beatname_lc} run -e ------ - -Or: - -["source","sh",subs="attributes"] ------ -{beatname_lc} -e ------ -endif::[] - -ifndef::apm-server[] -[float] -[[setup-command]] -==== `setup` command - -{setup-command-short-desc} - -* The index template ensures that fields are mapped correctly in {es}. -If {ilm} is enabled it also ensures that the defined {ilm-init} policy -and write alias are connected to the indices matching the index template. -The {ilm-init} policy takes care of the lifecycle of an index, when to do a rollover, -when to move an index from the hot phase to the next phase, etc. - -ifndef::no_dashboards[] -* The {kib} dashboards make it easier for you to visualize {beatname_uc} data -in {kib}. -endif::no_dashboards[] - -ifdef::has_ml_jobs[] -* The {ml} jobs contain the configuration information and metadata -necessary to analyze data for anomalies. -endif::[] - -This command sets up the environment without actually running -{beatname_uc} and ingesting data. - -*SYNOPSIS* - -// tag::setup-command-tag[] -["source","sh",subs="attributes"] ----- -{beatname_lc} setup [FLAGS] ----- - - -*FLAGS* - -ifndef::no_dashboards[] -*`--dashboards`*:: -Sets up the {kib} dashboards (when available). This option loads the dashboards -from the {beatname_uc} package. For more options, such as loading customized -dashboards, see {beatsdevguide}/import-dashboards.html[Importing Existing Beat -Dashboards] in the _{beats} Developer Guide_. -endif::no_dashboards[] - -*`-h, --help`*:: -Shows help for the `setup` command. - -ifeval::["{beatname_lc}"=="filebeat"] -*`--modules MODULE_LIST`*:: -Specifies a comma-separated list of modules. Use this flag to avoid errors when -there are no modules defined in the +{beatname_lc}.yml+ file. - -*`--pipelines`*:: -Sets up ingest pipelines for configured filesets. {beatname_uc} looks for -enabled modules in the +{beatname_lc}.yml+ file. If you used the -<> command to enable modules in the `modules.d` -directory, also specify the `--modules` flag. 
-endif::[] - -*`--index-management`*:: -Sets up components related to {es} index management including -template, {ilm-init} policy, and write alias (if supported and configured). - -ifdef::apm-server[] -*`--pipelines`*:: -Registers the <> definitions set in `ingest/pipeline/definition.json`. -endif::apm-server[] - -*`--template`*:: -deprecated:[7.2] -Sets up the index template only. -It is recommended to use `--index-management` instead. - -*`--ilm-policy`*:: -deprecated:[7.2] -Sets up the {ilm} policy. -It is recommended to use `--index-management` instead. - -{global-flags} - -*EXAMPLES* - -ifeval::["{beatname_lc}"=="filebeat"] -["source","sh",subs="attributes"] ------ -{beatname_lc} setup --dashboards -{beatname_lc} setup --pipelines -{beatname_lc} setup --pipelines --modules system,nginx,mysql <1> -{beatname_lc} setup --index-management ------ -<1> If you used the <> command to enable modules in -the `modules.d` directory, also specify the `--modules` flag to indicate which -modules to load pipelines for. -endif::[] - -ifeval::["{beatname_lc}"!="filebeat"] - -ifndef::no_dashboards[] -["source","sh",subs="attributes"] ------ -{beatname_lc} setup --dashboards -{beatname_lc} setup --index-management ------ -endif::no_dashboards[] - -ifndef::apm-server[] -ifdef::no_dashboards[] -["source","sh",subs="attributes"] ------ -{beatname_lc} setup --index-management ------ -endif::no_dashboards[] -endif::apm-server[] - -ifdef::apm-server[] -["source","sh",subs="attributes"] ------ -{beatname_lc} setup --index-management -{beatname_lc} setup --pipelines ------ -endif::apm-server[] - -endif::[] -// end::setup-command-tag[] - -endif::apm-server[] - -[float] -[[test-command]] -==== `test` command - -{test-command-short-desc}. - -*SYNOPSIS* - -["source","sh",subs="attributes"] ----- -{beatname_lc} test SUBCOMMAND [FLAGS] ----- - -*`SUBCOMMAND`* - -*`config`*:: -Tests the configuration settings. - -ifeval::["{beatname_lc}"=="metricbeat"] -*`modules [MODULE_NAME] [METRICSET_NAME]`*:: -Tests module settings for all configured modules. When you run this command, -{beatname_uc} does a test run that applies the current settings, retrieves the -metrics, and shows them as output. To test the settings for a specific module, -specify `MODULE_NAME`. To test the settings for a specific metricset in the -module, also specify `METRICSET_NAME`. -endif::[] - -*`output`*:: -Tests that {beatname_uc} can connect to the output by using the -current settings. - -*FLAGS* - -*`-h, --help`*:: Shows help for the `test` command. - -{global-flags} - -ifeval::["{beatname_lc}"!="metricbeat"] -*EXAMPLE* - -["source","sh",subs="attributes"] ------ -{beatname_lc} test config ------ -endif::[] - -ifeval::["{beatname_lc}"=="metricbeat"] -*EXAMPLES* - -["source","sh",subs="attributes"] ------ -{beatname_lc} test config -{beatname_lc} test modules system cpu ------ -endif::[] - -ifeval::["{beatname_lc}"=="functionbeat"] -[[update-command]] -==== `update` command - -{update-command-short-desc}. Before updating functions, make sure the user has -the credentials required by your cloud service provider. - -*SYNOPSIS* - -["source","sh",subs="attributes"] ----- -{beatname_lc} update FUNCTION_NAME [FLAGS] ----- - -*`FUNCTION_NAME`*:: -Specifies the name of the function to update. - -*FLAGS* - -*`-h, --help`*:: -Shows help for the `update` command. 
- -{global-flags} - -*EXAMPLES* - -["source","sh",subs="attributes"] ------ -{beatname_lc} update cloudwatch -{beatname_lc} update sqs ------ -endif::[] - -[float] -[[version-command]] -==== `version` command - -{version-command-short-desc}. - -*SYNOPSIS* - -["source","sh",subs="attributes"] ----- -{beatname_lc} version [FLAGS] ----- - - -*FLAGS* - -*`-h, --help`*:: Shows help for the `version` command. - -{global-flags} - -*EXAMPLE* - -["source","sh",subs="attributes"] ------ -{beatname_lc} version ------ - - -[float] -[[global-flags]] -=== Global flags - -These global flags are available whenever you run {beatname_uc}. - -*`-E, --E "SETTING_NAME=VALUE"`*:: -Overrides a specific configuration setting. You can specify multiple overrides. -For example: -+ -["source","sh",subs="attributes"] ----------------------------------------------------------------------- -{beatname_lc} -E "name=mybeat" -E "output.elasticsearch.hosts=['http://myhost:9200']" ----------------------------------------------------------------------- -+ -This setting is applied to the currently running {beatname_uc} process. -The {beatname_uc} configuration file is not changed. - -ifeval::["{beatname_lc}"=="filebeat"] -*`-M, --M "VAR_NAME=VALUE"`*:: Overrides the default configuration for a -{beatname_uc} module. You can specify multiple variable overrides. For example: -+ -["source","sh",subs="attributes"] ----------------------------------------------------------------------- -{beatname_lc} -modules=nginx -M "nginx.access.var.paths=['/var/log/nginx/access.log*']" -M "nginx.access.var.pipeline=no_plugins" ----------------------------------------------------------------------- -endif::[] - -*`-c, --c FILE`*:: -Specifies the configuration file to use for {beatname_uc}. The file you specify -here is relative to `path.config`. If the `-c` flag is not specified, the -default config file, +{beatname_lc}.yml+, is used. - -*`-d, --d SELECTORS`*:: -Enables debugging for the specified selectors. For the selectors, you can -specify a comma-separated -list of components, or you can use `-d "*"` to enable debugging for all -components. For example, `-d "publisher"` displays all the publisher-related -messages. - -*`-e, --e`*:: -Logs to stderr and disables syslog/file output. - -*`-environment`*:: -For logging purposes, specifies the environment that {beatname_uc} is running in. -This setting is used to select a default log output when no log output is configured. -Supported values are: `systemd`, `container`, `macos_service`, and `windows_service`. -If `systemd` or `container` is specified, {beatname_uc} will log to stdout and stderr -by default. - -*`--path.config`*:: -Sets the path for configuration files. See the <> section for -details. - -*`--path.data`*:: -Sets the path for data files. See the <> section for details. - -*`--path.home`*:: -Sets the path for miscellaneous files. See the <> section for -details. - -*`--path.logs`*:: -Sets the path for log files. See the <> section for details. - -*`--strict.perms`*:: -Sets strict permission checking on configuration files. The default is `-strict.perms=true`. -ifndef::apm-server[] -See {beats-ref}/config-file-permissions.html[Config file ownership and permissions] -for more information. -endif::[] -ifdef::apm-server[] -See <> for more information. -endif::[] - -*`-v, --v`*:: -Logs INFO-level messages. 
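
*EXAMPLE*

As an illustration, several global flags can be combined when troubleshooting in the foreground (a hypothetical invocation; the output host and data path are placeholders):

["source","sh",subs="attributes"]
-----
{beatname_lc} run -e -d "publisher" -E "output.elasticsearch.hosts=['http://localhost:9200']" --path.data /tmp/{beatname_lc}-data
-----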
diff --git a/docs/legacy/copied-from-beats/docs/debugging.asciidoc b/docs/legacy/copied-from-beats/docs/debugging.asciidoc deleted file mode 100644 index ee564f6eb4b..00000000000 --- a/docs/legacy/copied-from-beats/docs/debugging.asciidoc +++ /dev/null @@ -1,46 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. -//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc. -//// Use the following include to pull this content into a doc file: -//// include::../../libbeat/docs/debugging.asciidoc[] -////////////////////////////////////////////////////////////////////////// - -IMPORTANT: {deprecation-notice-data} - -By default, {beatname_uc} sends all its output to syslog. When you run {beatname_uc} in -the foreground, you can use the `-e` command line flag to redirect the output to -standard error instead. For example: - -["source","sh",subs="attributes"] ------------------------------------------------ -{beatname_lc} -e ------------------------------------------------ - -The default configuration file is {beatname_lc}.yml (the location of the file varies by -platform). You can use a different configuration file by specifying the `-c` flag. For example: - -["source","sh",subs="attributes"] ------------------------------------------------------------- -{beatname_lc} -e -c my{beatname_lc}config.yml ------------------------------------------------------------- - -You can increase the verbosity of debug messages by enabling one or more debug -selectors. For example, to view publisher-related messages, start {beatname_uc} -with the `publisher` selector: - -["source","sh",subs="attributes"] ------------------------------------------------------------- -{beatname_lc} -e -d "publisher" ------------------------------------------------------------- - -If you want all the debugging output (fair warning, it's quite a lot), you can -use `*`, like this: - -["source","sh",subs="attributes"] ------------------------------------------------------------- -{beatname_lc} -e -d "*" ------------------------------------------------------------- diff --git a/docs/legacy/copied-from-beats/docs/getting-help.asciidoc b/docs/legacy/copied-from-beats/docs/getting-help.asciidoc deleted file mode 100644 index 5dbbec5165c..00000000000 --- a/docs/legacy/copied-from-beats/docs/getting-help.asciidoc +++ /dev/null @@ -1,26 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. -//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc. 
-//// Use the following include to pull this content into a doc file: -//// include::../../libbeat/docs/getting-help.asciidoc[] -////////////////////////////////////////////////////////////////////////// - -IMPORTANT: {deprecation-notice-data} - -Start by searching the https://discuss.elastic.co/c/{discuss_forum}[{beatname_uc} discussion forum] for your issue. If you can't find a resolution, open a new issue or add a comment to an existing one. Make sure you provide the following information, and we'll help -you troubleshoot the problem: - -* {beatname_uc} version -* Operating System -* Configuration -* Any supporting information, such as debugging output, that will help us diagnose your -problem. See <> for more details. - -If you're sure you found a bug, you can open a ticket on -https://github.com/elastic/{github_repo_name}/issues?state=open[GitHub]. Note, however, -that we close GitHub issues containing questions or requests for help if they -don't indicate the presence of a bug. diff --git a/docs/legacy/copied-from-beats/docs/howto/load-index-templates.asciidoc b/docs/legacy/copied-from-beats/docs/howto/load-index-templates.asciidoc deleted file mode 100644 index 5610e913995..00000000000 --- a/docs/legacy/copied-from-beats/docs/howto/load-index-templates.asciidoc +++ /dev/null @@ -1,10 +0,0 @@ -[id="{beatname_lc}-template"] -== View the {es} index template - -// Appends `-legacy` to each section's ID so that they are different from the APM integration IDs -:append-legacy: -legacy - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, please see <> instead. - -include::../../../../custom-index-template.asciidoc[tag=index-template-integration] diff --git a/docs/legacy/copied-from-beats/docs/https.asciidoc b/docs/legacy/copied-from-beats/docs/https.asciidoc deleted file mode 100644 index 3b2969f1f17..00000000000 --- a/docs/legacy/copied-from-beats/docs/https.asciidoc +++ /dev/null @@ -1,166 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. -//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc. -//// Use the following include to pull this content into a doc file: -//// include::../../libbeat/docs/https.asciidoc[] -//// This content is structured to be included as a whole file. -////////////////////////////////////////////////////////////////////////// - -[role="xpack"] -[[securing-communication-elasticsearch]] -== Secure communication with {es} - -IMPORTANT: {deprecation-notice-config} - -When sending data to a secured cluster through the `elasticsearch` -output, {beatname_uc} can use any of the following authentication methods: - -* Basic authentication credentials (username and password). -* Token-based API authentication. -* A client certificate. - -Authentication is specified in the {beatname_uc} configuration file: - -* To use *basic authentication*, specify the `username` and `password` settings under `output.elasticsearch`. 
-For example: -+ --- -["source","yaml",subs="attributes,callouts"] ----------------------------------------------------------------------- -output.elasticsearch: - hosts: ["https://myEShost:9200"] - username: "{beat_default_index_prefix}_writer" <1> - password: "{pwd}" <2> ----------------------------------------------------------------------- -<1> This user needs the privileges required to publish events to {es}. -To create a user like this, see <>. -<2> This example shows a hard-coded password, but you should store sensitive -values -ifndef::serverless[] -in the <>. -endif::[] -ifdef::serverless[] -in environment variables. -endif::[] --- - -* To use token-based *API key authentication*, specify the `api_key` under `output.elasticsearch`. -For example: -+ --- -["source","yaml",subs="attributes,callouts"] ----------------------------------------------------------------------- -output.elasticsearch: - hosts: ["https://myEShost:9200"] - api_key: "KnR6yE41RrSowb0kQ0HWoA" <1> ----------------------------------------------------------------------- -<1> This API key must have the privileges required to publish events to {es}. -To create an API key like this, see <>. --- - -[[beats-tls]] -* To use *Public Key Infrastructure (PKI) certificates* to authenticate users, -specify the `certificate` and `key` settings under `output.elasticsearch`. -For example: -+ --- -["source","yaml",subs="attributes,callouts"] ----------------------------------------------------------------------- -output.elasticsearch: - hosts: ["https://myEShost:9200"] - ssl.certificate: "/etc/pki/client/cert.pem" <1> - ssl.key: "/etc/pki/client/cert.key" <2> ----------------------------------------------------------------------- -<1> The path to the certificate for SSL client authentication -<2> The client certificate key --- -+ -These settings assume that the -distinguished name (DN) in the certificate is mapped to the appropriate roles in -the `role_mapping.yml` file on each node in the {es} cluster. For more -information, see {ref}/mapping-roles.html#mapping-roles-file[Using role -mapping files]. -+ -By default, {beatname_uc} uses the list of trusted certificate authorities (CA) from the -operating system where {beatname_uc} is running. If the certificate authority that signed your node certificates -is not in the host system's trusted certificate authorities list, you need -to add the path to the `.pem` file that contains your CA's certificate to the -{beatname_uc} configuration. This will configure {beatname_uc} to use a specific list of -CA certificates instead of the default list from the OS. -+ -Here is an example configuration: -+ --- -["source","yaml",subs="attributes,callouts"] ----------------------------------------------------------------------- -output.elasticsearch: - hosts: ["https://myEShost:9200"] - ssl.certificate_authorities: <1> - - /etc/pki/my_root_ca.pem - - /etc/pki/my_other_ca.pem - ssl.certificate: "/etc/pki/client.pem" <2> - ssl.key: "/etc/pki/key.pem" <3> ----------------------------------------------------------------------- -<1> Specify the path to the local `.pem` file that contains your Certificate -Authority's certificate. This is needed if you use your own CA to sign your node certificates. -<2> The path to the certificate for SSL client authentication -<3> The client certificate key --- -+ -NOTE: For any given connection, the SSL/TLS certificates must have a subject -that matches the value specified for `hosts`, or the SSL handshake fails. 
-For example, if you specify `hosts: ["foobar:9200"]`, the certificate MUST -include `foobar` in the subject (`CN=foobar`) or as a subject alternative name -(SAN). Make sure the hostname resolves to the correct IP address. If no DNS is available, then -you can associate the IP address with your hostname in `/etc/hosts` -(on Unix) or `C:\Windows\System32\drivers\etc\hosts` (on Windows). - -ifndef::no_dashboards[] -[role="xpack"] -[float] -[[securing-communication-kibana]] -=== Secure communication with the {kib} endpoint - -If you've configured the <>, -you can also specify credentials for authenticating with {kib} under `kibana.setup`. -If no credentials are specified, {kib} will use the configured authentication method -in the {es} output. - -For example, specify a unique username and password to connect to {kib} like this: - --- -["source","yaml",subs="attributes,callouts"] ----- -setup.kibana: - host: "mykibanahost:5601" - username: "{beat_default_index_prefix}_kib_setup" <1> - password: "{pwd}" <2> ----- -<1> This user needs privileges required to set up dashboards -<2> This example shows a hard-coded password, but you should store sensitive -values -ifndef::serverless[] -in the <>. -endif::[] -ifdef::serverless[] -in environment variables. -endif::[] -endif::no_dashboards[] --- - -[role="xpack"] -[float] -[[securing-communication-learn-more]] -=== Learn more about secure communication - -More information on sending data to a secured cluster is available in the configuration reference: - -* <> -* <> -ifndef::no_dashboards[] -* <> -endif::no_dashboards[] diff --git a/docs/legacy/copied-from-beats/docs/keystore.asciidoc b/docs/legacy/copied-from-beats/docs/keystore.asciidoc deleted file mode 100644 index bb94d88e3e8..00000000000 --- a/docs/legacy/copied-from-beats/docs/keystore.asciidoc +++ /dev/null @@ -1,124 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. -//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc -//// Use the following include to pull this content into a doc file: -//// include::../../libbeat/docs/keystore.asciidoc[] -////////////////////////////////////////////////////////////////////////// - -[[keystore]] -=== Secrets keystore for secure settings - -IMPORTANT: {deprecation-notice-installation} - -++++ -Secrets keystore -++++ - -When you configure {beatname_uc}, you might need to specify sensitive settings, -such as passwords. Rather than relying on file system permissions to protect -these values, you can use the {beatname_uc} keystore to securely store secret -values for use in configuration settings. - -After adding a key and its secret value to the keystore, you can use the key in -place of the secret value when you configure sensitive settings. - -The syntax for referencing keys is identical to the syntax for environment -variables: - -`${KEY}` - -Where KEY is the name of the key. 
- -For example, imagine that the keystore contains a key called `ES_PWD` with the -value `yourelasticsearchpassword`: - -* In the configuration file, use `output.elasticsearch.password: "${ES_PWD}"` -* On the command line, use: `-E "output.elasticsearch.password=\${ES_PWD}"` - -When {beatname_uc} unpacks the configuration, it resolves keys before resolving -environment variables and other variables. - -Notice that the {beatname_uc} keystore differs from the {es} keystore. -Whereas the {es} keystore lets you store `elasticsearch.yml` values by -name, the {beatname_uc} keystore lets you specify arbitrary names that you can -reference in the {beatname_uc} configuration. - -To create and manage keys, use the `keystore` command. See the -<> for the full command syntax, including -optional flags. - -NOTE: The `keystore` command must be run by the same user who will run -{beatname_uc}. - -[float] -[[creating-keystore]] -=== Create a keystore - -To create a secrets keystore, use: - -["source","sh",subs="attributes"] ----------------------------------------------------------------- -{beatname_lc} keystore create ----------------------------------------------------------------- - - -{beatname_uc} creates the keystore in the directory defined by the `path.data` -configuration setting. - -[float] -[[add-keys-to-keystore]] -=== Add keys - -To store sensitive values, such as authentication credentials for {es}, -use the `keystore add` command: - -["source","sh",subs="attributes"] ----------------------------------------------------------------- -{beatname_lc} keystore add ES_PWD ----------------------------------------------------------------- - - -When prompted, enter a value for the key. - -To overwrite an existing key's value, use the `--force` flag: - -["source","sh",subs="attributes"] ----------------------------------------------------------------- -{beatname_lc} keystore add ES_PWD --force ----------------------------------------------------------------- - -To pass the value through stdin, use the `--stdin` flag. You can also use -`--force`: - -["source","sh",subs="attributes"] ----------------------------------------------------------------- -cat /file/containing/setting/value | {beatname_lc} keystore add ES_PWD --stdin --force ----------------------------------------------------------------- - - -[float] -[[list-settings]] -=== List keys - -To list the keys defined in the keystore, use: - -["source","sh",subs="attributes"] ----------------------------------------------------------------- -{beatname_lc} keystore list ----------------------------------------------------------------- - - -[float] -[[remove-settings]] -=== Remove keys - -To remove a key from the keystore, use: - -["source","sh",subs="attributes"] ----------------------------------------------------------------- -{beatname_lc} keystore remove ES_PWD ----------------------------------------------------------------- diff --git a/docs/legacy/copied-from-beats/docs/loggingconfig.asciidoc b/docs/legacy/copied-from-beats/docs/loggingconfig.asciidoc deleted file mode 100644 index a018135dd54..00000000000 --- a/docs/legacy/copied-from-beats/docs/loggingconfig.asciidoc +++ /dev/null @@ -1,299 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. 
When using cross references, make sure that the cross
-//// references resolve correctly for any files that include this one.
-//// Use the appropriate variables defined in the index.asciidoc file to
-//// resolve Beat names: beatname_uc and beatname_lc
-//// Use the following include to pull this content into a doc file:
-//// include::../../libbeat/docs/loggingconfig.asciidoc[]
-//// Make sure this content appears below a level 2 heading.
-//////////////////////////////////////////////////////////////////////////
-
-[[configuration-logging]]
-== Configure logging
-
-++++
-Logging
-++++
-
-IMPORTANT: {deprecation-notice-config}
-
-The `logging` section of the +{beatname_lc}.yml+ config file contains options
-for configuring the logging output.
-ifndef::serverless[]
-The logging system can write logs to the syslog or rotate log files. If logging
-is not explicitly configured, the file output is used.
-
-ifndef::win_only[]
-["source","yaml",subs="attributes"]
-----
-logging.level: info
-logging.to_files: true
-logging.files:
-  path: /var/log/{beatname_lc}
-  name: {beatname_lc}
-  keepfiles: 7
-  permissions: 0640
-----
-endif::win_only[]
-
-ifdef::win_only[]
-["source","yaml",subs="attributes"]
-----
-logging.level: info
-logging.to_files: true
-logging.files:
-  path: C:{backslash}ProgramData{backslash}{beatname_lc}{backslash}Logs
-  name: {beatname_lc}
-  keepfiles: 7
-  permissions: 0640
-----
-endif::win_only[]
-
-TIP: In addition to setting logging options in the config file, you can modify
-the logging output configuration from the command line. See
-<>.
-
-ifndef::win_only[]
-WARNING: When {beatname_uc} is running on a Linux system with systemd, by default
-it uses the `-e` command line option, which makes it write all logging output
-to stderr so it can be captured by journald. Other outputs are disabled. See
-<> to learn more and find out how to change this behavior.
-endif::win_only[]
-endif::serverless[]
-
-ifdef::serverless[]
-For example, the following options configure {beatname_uc} to log all the debug
-messages related to event publishing:
-
-["source","yaml",subs="attributes"]
-----
-logging.level: debug
-logging.selectors: ["publisher"]
-----
-
-The logs generated by {beatname_uc} are written to the CloudWatch log group for
-the function running on Amazon Web Services (AWS). To view the logs, go to the
-monitoring area of the AWS Lambda console and view the CloudWatch log group
-for the function.
-
-// TODO: When we add support for other cloud providers, we will need to modify
-// this statement and possibly have a different attribute for each provider to
-// show the correct text.
-endif::serverless[]
-
-[float]
-=== Configuration options
-
-You can specify the following options in the `logging` section of the
-+{beatname_lc}.yml+ config file:
-
-ifndef::serverless[]
-[float]
-==== `logging.to_stderr`
-
-When true, writes all logging output to standard error output. This is
-equivalent to using the `-e` command line option.
-
-[float]
-==== `logging.to_syslog`
-
-When true, writes all logging output to the syslog.
-
-NOTE: This option is not supported on Windows.
-
-[float]
-==== `logging.to_eventlog`
-
-When true, writes all logging output to the Windows Event Log.
-
-[float]
-==== `logging.to_files`
-
-When true, writes all logging output to files. The log files are automatically
-rotated when the log file size limit is reached.
-
-NOTE: {beatname_uc} only creates a log file if there is logging output.
For -example, if you set the log <> to `error` and there are no -errors, there will be no log file in the directory specified for logs. -endif::serverless[] - -[float] -[[level]] -==== `logging.level` - -Minimum log level. One of `debug`, `info`, `warning`, or `error`. The default -log level is `info`. - -`debug`:: Logs debug messages, including a detailed printout of all events -flushed. Also logs informational messages, warnings, errors, and -critical errors. When the log level is `debug`, you can specify a list of -<> to display debug messages for specific components. If -no selectors are specified, the `*` selector is used to display debug messages -for all components. - -`info`:: Logs informational messages, including the number of events that are -published. Also logs any warnings, errors, or critical errors. - -`warning`:: Logs warnings, errors, and critical errors. - -`error`:: Logs errors and critical errors. - -[float] -[[selectors]] -==== `logging.selectors` - -The list of debugging-only selector tags used by different {beatname_uc} components. -Use `*` to enable debug output for all components. Use `publisher` to display -debug messages related to event publishing. - -[TIP] -===== -The list of available selectors may change between releases, so avoid creating -tests that depend on specific selectors. - -To see which selectors are available, run {beatname_uc} in debug mode -(set `logging.level: debug` in the configuration). The selector name appears -after the log level and is enclosed in brackets. -===== - -To configure multiple selectors, use the following {beats-ref}/config-file-format.html[YAML list syntax]: -["source","yaml",subs="attributes"] ----- -logging.selectors: [ harvester, input ] ----- - -ifndef::serverless[] -To override selectors at the command line, use the `-d` global flag (`-d` also -sets the debug log level). For more information, see <>. -endif::serverless[] - -[float] -==== `logging.metrics.enabled` - -By default, {beatname_uc} periodically logs its internal metrics that have -changed in the last period. For each metric that changed, the delta from the -value at the beginning of the period is logged. Also, the total values for all -non-zero internal metrics are logged on shutdown. Set this to false to disable -this behavior. The default is true. - -Here is an example log line: - -[source,shell] ----------------------------------------------------------------------------------------------------------------------------------------------------- -2017-12-17T19:17:42.667-0500 INFO [metrics] log/log.go:110 Non-zero metrics in the last 30s: beat.info.uptime.ms=30004 beat.memstats.gc_next=5046416 ----------------------------------------------------------------------------------------------------------------------------------------------------- - -Note that we currently offer no backwards compatible guarantees for the internal -metrics and for this reason they are also not documented. - -[float] -==== `logging.metrics.period` - -The period after which to log the internal metrics. The default is `30s`. - -ifndef::serverless[] -[float] -==== `logging.files.path` - -The directory that log files are written to. The default is the logs path. See -the <> section for details. - -[float] -==== `logging.files.name` - -The name of the file that logs are written to. The default is '{beatname_lc}'. - -[float] -==== `logging.files.rotateeverybytes` - -The maximum size of a log file. If the limit is reached, a new log file is -generated. 
The default size limit is 10485760 (10 MB).
-
-[float]
-==== `logging.files.keepfiles`
-
-The number of most recent rotated log files to keep on disk. Older files are
-deleted during log rotation. The default value is 7. The `keepfiles` option has
-to be in the range of 2 to 1024 files.
-
-[float]
-==== `logging.files.permissions`
-
-The permissions mask to apply when rotating log files. The default value is
-0600. The `permissions` option must be a valid Unix-style file permissions mask
-expressed in octal notation. In Go, numbers in octal notation must start with
-'0'.
-
-The most permissive mask allowed is 0640. If a higher permissions mask is
-specified via this setting, it will be subject to an umask of 0027.
-
-Examples:
-
-* 0640: give read and write access to the file owner, and read access to members of the group associated with the file.
-* 0600: give read and write access to the file owner, and no access to all others.
-
-[float]
-==== `logging.files.interval`
-
-Enable log file rotation on time intervals in addition to size-based rotation.
-Intervals must be at least `1s`. Values of `1m`, `1h`, `24h`, `7*24h`, `30*24h`, and `365*24h`
-are boundary-aligned with minutes, hours, days, weeks, months, and years as
-reported by the local system clock. All other intervals are calculated from the
-Unix epoch. Defaults to disabled.
-endif::serverless[]
-
-[float]
-==== `logging.files.rotateonstartup`
-
-If the log file already exists on startup, immediately rotate it and start
-writing to a new file instead of appending to the existing one. Defaults to
-true.
-
-[float]
-==== `logging.json`
-
-When true, logs messages in JSON format. The default is false.
-
-[float]
-==== `logging.ecs`
-
-When true, logs messages with minimal required Elastic Common Schema (ECS)
-information.
-
-ifndef::serverless[]
-[float]
-==== `logging.files.redirect_stderr` experimental[]
-
-When true, diagnostic messages printed to {beatname_uc}'s standard error output
-will also be logged to the log file. This can be helpful in situations where
-{beatname_uc} terminates unexpectedly because an error has been detected by
-Go's runtime but diagnostic information is not present in the log file.
-This feature is only available when logging to files (`logging.to_files` is true).
-Disabled by default.
-endif::serverless[]
-
-[float]
-=== Logging format
-
-The logging format is generally the same for each logging output. The one
-exception is with the syslog output where the timestamp is not included in the
-message because syslog adds its own timestamp.
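Before turning to the format details, the file-related options described above can be pulled together into a single configuration sketch (the values shown are the documented defaults plus an illustrative daily interval, not recommendations):

["source","yaml",subs="attributes"]
----
logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/{beatname_lc}   # directory the log files are written to
  name: {beatname_lc}            # base name of the log file
  rotateeverybytes: 10485760     # rotate when the file reaches 10 MB (the default)
  keepfiles: 7                   # keep the 7 most recent rotated files (range: 2 to 1024)
  permissions: 0600              # mask applied when rotating (the default)
  interval: 24h                  # additionally rotate daily, aligned to day boundaries
  rotateonstartup: true          # rotate any pre-existing log file at startup (the default)
----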
-
-Each log message consists of the following parts:
-
-* Timestamp in ISO8601 format
-* Level
-* Logger name contained in brackets (Optional)
-* File name and line number of the caller
-* Message
-* Structured data encoded in JSON (Optional)
-
-Below are some samples:
-
-`2017-12-17T18:54:16.241-0500 INFO logp/core_test.go:13 unnamed global logger`
-
-`2017-12-17T18:54:16.242-0500 INFO [example] logp/core_test.go:16 some message`
-
-`2017-12-17T18:54:16.242-0500 INFO [example] logp/core_test.go:19 some message {"x": 1}`
diff --git a/docs/legacy/copied-from-beats/docs/monitoring/monitoring-beats.asciidoc b/docs/legacy/copied-from-beats/docs/monitoring/monitoring-beats.asciidoc
deleted file mode 100644
index 7a4638e0c43..00000000000
--- a/docs/legacy/copied-from-beats/docs/monitoring/monitoring-beats.asciidoc
+++ /dev/null
@@ -1,40 +0,0 @@
-[role="xpack"]
-[[monitoring]]
-= Monitor {beatname_uc}
-
-++++
-Monitor
-++++
-
-IMPORTANT: {deprecation-notice-monitor}
-
-You can use the {stack} {monitor-features} to gain insight into the health of
-ifndef::apm-server[]
-{beatname_uc} instances running in your environment.
-endif::[]
-ifdef::apm-server[]
-{beatname_uc}.
-endif::[]
-
-To monitor {beatname_uc}, make sure monitoring is enabled on your {es} cluster,
-then configure the method used to collect {beatname_uc} metrics. You can use one
-of the following methods:
-
-* <> - Internal
-collectors send monitoring data directly to your monitoring cluster.
-ifndef::serverless[]
-* <> -
-{metricbeat} collects monitoring data from your {beatname_uc} instance
-and sends it directly to your monitoring cluster.
-endif::[]
-
-//Commenting out this link temporarily until the general monitoring docs can be
-//updated.
-//To learn about monitoring in general, see
-//{ref}/monitor-elasticsearch-cluster.html[Monitor a cluster].
-
-include::monitoring-internal-collection.asciidoc[]
-
-ifndef::serverless[]
-include::monitoring-metricbeat.asciidoc[]
-endif::[]
diff --git a/docs/legacy/copied-from-beats/docs/monitoring/monitoring-internal-collection.asciidoc b/docs/legacy/copied-from-beats/docs/monitoring/monitoring-internal-collection.asciidoc
deleted file mode 100644
index 14c53505209..00000000000
--- a/docs/legacy/copied-from-beats/docs/monitoring/monitoring-internal-collection.asciidoc
+++ /dev/null
@@ -1,127 +0,0 @@
-//////////////////////////////////////////////////////////////////////////
-//// This content is shared by all Elastic Beats. Make sure you keep the
-//// descriptions here generic enough to work for all Beats that include
-//// this file. When using cross references, make sure that the cross
-//// references resolve correctly for any files that include this one.
-//// Use the appropriate variables defined in the index.asciidoc file to
-//// resolve Beat names: beatname_uc and beatname_lc.
-//// Use the following include to pull this content into a doc file:
-//// include::../../libbeat/docs/monitoring/monitoring-internal-collection.asciidoc[]
-//////////////////////////////////////////////////////////////////////////
-
-[role="xpack"]
-[[monitoring-internal-collection]]
-== Use internal collection to send monitoring data
-++++
-Use internal collection
-++++
-
-IMPORTANT: {deprecation-notice-monitor}
-
-Use internal collectors to send {beats} monitoring data directly to your
-monitoring cluster.
-ifndef::serverless[]
-Or as an alternative to internal collection, use
-<>.
The benefit of using internal collection -instead of {metricbeat} is that you have fewer pieces of software to install -and maintain. -endif::[] - -//Commenting out this link temporarily until the general monitoring docs can be -//updated. -//To learn about monitoring in general, see -//{ref}/monitor-elasticsearch-cluster.html[Monitor a cluster]. - -. Create an API key or user that has appropriate authority to send system-level monitoring -data to {es}. For example, you can use the built-in +{beat_monitoring_user}+ user or -assign the built-in +{beat_monitoring_user}+ role to another user. For more -information on the required privileges, see <>. -For more information on how to use API keys, see <>. - -. Add the `monitoring` settings in the {beatname_uc} configuration file. If you -configured the {es} output and want to send {beatname_uc} monitoring events to -the same {es} cluster, specify the following minimal configuration: -+ -["source","yml",subs="attributes"] --------------------- -monitoring: - enabled: true - elasticsearch: - api_key: id:api_key <1> - username: {beat_monitoring_user} - password: somepassword --------------------- -<1> Specify one of `api_key` or `username`/`password`. -+ -If you want to send monitoring events to an https://cloud.elastic.co/[{ecloud}] -monitoring cluster, you can use two simpler settings. When defined, these settings -overwrite settings from other parts in the configuration. For example: -+ -[source,yaml] --------------------- -monitoring: - enabled: true - cloud.id: 'staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRjZWM2ZjI2MWE3NGJmMjRjZTMzYmI4ODExYjg0Mjk0ZiRjNmMyY2E2ZDA0MjI0OWFmMGNjN2Q3YTllOTYyNTc0Mw==' - cloud.auth: 'elastic:{pwd}' --------------------- -+ -If you -ifndef::no-output-logstash[] -configured a different output, such as {ls} or you -endif::[] -want to send {beatname_uc} monitoring events to a separate {es} cluster -(referred to as the _monitoring cluster_), you must specify additional -configuration options. For example: -+ -["source","yml",subs="attributes"] --------------------- -monitoring: - enabled: true - cluster_uuid: PRODUCTION_ES_CLUSTER_UUID <1> - elasticsearch: - hosts: ["https://example.com:9200", "https://example2.com:9200"] <2> - api_key: id:api_key <3> - username: {beat_monitoring_user} - password: somepassword --------------------- -<1> This setting identifies the {es} cluster under which the -monitoring data for this {beatname_uc} instance will appear in the -{stack-monitor-app} UI. To get a cluster's `cluster_uuid`, -call the `GET /` API against that cluster. -<2> This setting identifies the hosts and port numbers of {es} nodes -that are part of the monitoring cluster. -<3> Specify one of `api_key` or `username`/`password`. -+ -If you want to use PKI authentication to send monitoring events to -{es}, you must specify a different set of configuration options. For -example: -+ -[source,yaml] --------------------- -monitoring: - enabled: true - cluster_uuid: PRODUCTION_ES_CLUSTER_UUID - elasticsearch: - hosts: ["https://example.com:9200", "https://example2.com:9200"] - username: "" - ssl.certificate_authorities: ["/etc/pki/root/ca.pem"] - ssl.certificate: "/etc/pki/client/cert.pem" - ssl.key: "/etc/pki/client/cert.key" --------------------- -+ -You must specify the `username` as `""` explicitly so that -the username from the client certificate (`CN`) is used. See -<> for more information about SSL settings. - -ifndef::serverless[] -. Start {beatname_uc}. -endif::[] - -ifdef::serverless[] -. Deploy {beatname_uc}. -endif::[] - -. 
{kibana-ref}/monitoring-data.html[View the monitoring data in {kib}].
-
-
-include::shared-monitor-config.asciidoc[]
diff --git a/docs/legacy/copied-from-beats/docs/monitoring/monitoring-metricbeat.asciidoc b/docs/legacy/copied-from-beats/docs/monitoring/monitoring-metricbeat.asciidoc
deleted file mode 100644
index 0386e1f25f7..00000000000
--- a/docs/legacy/copied-from-beats/docs/monitoring/monitoring-metricbeat.asciidoc
+++ /dev/null
@@ -1,290 +0,0 @@
-[role="xpack"]
-[[monitoring-metricbeat-collection]]
-== Use {metricbeat} to send monitoring data
-[subs="attributes"]
-++++
-Use {metricbeat} collection
-++++
-
-IMPORTANT: {deprecation-notice-monitor}
-
-In 7.3 and later, you can use {metricbeat} to collect data about {beatname_uc}
-and ship it to the monitoring cluster. The benefit of using {metricbeat} instead
-of internal collection is that the monitoring agent remains active even if the
-{beatname_uc} instance dies.
-
-ifeval::["{beatname_lc}"=="metricbeat"]
-Because you'll be using {metricbeat} to _monitor_ {beatname_uc}, you'll need to
-run two instances of {beatname_uc}: a main instance that collects metrics from
-the system and services running on the server, and a second instance that
-collects metrics from {beatname_uc} only. Using a separate instance as a
-monitoring agent allows you to send monitoring data to a dedicated monitoring
-cluster. If the main agent goes down, the monitoring agent remains active.
-
-If you're running {beatname_uc} as a service, this approach requires extra work
-because you need to run two instances of the same installed service
-concurrently. If you don't want to run two instances concurrently, use
-<> instead of using
-{metricbeat}.
-endif::[]
-
-//Commenting out this link temporarily until the general monitoring docs can be
-//updated.
-//To learn about monitoring in general, see
-//{ref}/monitor-elasticsearch-cluster.html[Monitor a cluster].
-
-//NOTE: The tagged regions are re-used in the Stack Overview.
-
-To collect and ship monitoring data:
-
-. <>
-
-. <>
-
-[float]
-[[configure-shipper]]
-=== Configure the shipper you want to monitor
-
-. Enable the HTTP endpoint to allow external collection of monitoring data:
-+
---
-// tag::enable-http-endpoint[]
-Add the following setting in the {beatname_uc} configuration file
-(+{beatname_lc}.yml+):
-
-[source,yaml]
-----------------------------------
-http.enabled: true
-----------------------------------
-
-By default, metrics are exposed on port 5066. If you need to monitor multiple
-{beats} shippers running on the same server, set `http.port` to expose metrics
-for each shipper on a different port number:
-
-[source,yaml]
-----------------------------------
-http.port: 5067
-----------------------------------
-// end::enable-http-endpoint[]
---
-
-. Disable the default collection of {beatname_uc} monitoring metrics. +
-+
---
-// tag::disable-beat-collection[]
-Add the following setting in the {beatname_uc} configuration file
-(+{beatname_lc}.yml+):
-
-[source,yaml]
-----------------------------------
-monitoring.enabled: false
-----------------------------------
-// end::disable-beat-collection[]
-
-For more information, see
-<>.
---
-
-. Configure host (optional). +
-+
---
-// tag::set-http-host[]
-If you intend to collect metrics using {metricbeat} installed on another server, you need to bind {beatname_uc} to the host's IP address:
-
-[source,yaml]
-----------------------------------
-http.host: xxx.xxx.xxx.xxx
-----------------------------------
-// end::set-http-host[]
---
-
-. 
Configure cluster UUID (optional). +
-+
---
-// tag::set-cluster-uuid[]
-To see the {beats} monitoring section in {kib} when you have a cluster, you need to associate {beatname_uc} with the cluster UUID:
-
-[source,yaml]
-----------------------------------
-monitoring.cluster_uuid: "cluster-uuid"
-----------------------------------
-// end::set-cluster-uuid[]
---
-
-ifndef::serverless[]
-. Start {beatname_uc}.
-endif::[]
-
-[float]
-[[configure-metricbeat]]
-=== Install and configure {metricbeat} to collect monitoring data
-
-ifeval::["{beatname_lc}"!="metricbeat"]
-. Install {metricbeat} on the same server as {beatname_uc}. To learn how, see
-{metricbeat-ref}/metricbeat-installation-configuration.html[Get started with {metricbeat}].
-If you already have {metricbeat} installed on the server, skip this step.
-endif::[]
-ifeval::["{beatname_lc}"=="metricbeat"]
-. The next step depends on how you want to run {metricbeat}:
-* If you're running as a service and want to run a separate monitoring instance,
-take the steps required for your environment to run two instances of
-{metricbeat} as a service. The steps for doing this vary by platform and are
-beyond the scope of this documentation.
-* If you're running the binary directly in the foreground and want to run a
-separate monitoring instance, install {metricbeat} to a different path. If
-necessary, set `path.config`, `path.data`, and `path.log` to point to the
-correct directories. See <> for the default locations.
-endif::[]
-
-. Enable the `beat-xpack` module in {metricbeat}. +
-+
---
-// tag::enable-beat-module[]
-For example, to enable the default configuration in the `modules.d` directory,
-run the following command, using the correct command syntax for your OS:
-
-["source","sh",subs="attributes,callouts"]
-----------------------------------------------------------------------
-metricbeat modules enable beat-xpack
-----------------------------------------------------------------------
-
-For more information, see
-{metricbeat-ref}/configuration-metricbeat.html[Configure modules] and
-{metricbeat-ref}/metricbeat-module-beat.html[beat module].
-// end::enable-beat-module[]
---
-
-. Configure the `beat-xpack` module in {metricbeat}. +
-+
---
-// tag::configure-beat-module[]
-The `modules.d/beat-xpack.yml` file contains the following settings:
-
-[source,yaml]
-----------------------------------
-- module: beat
-  metricsets:
-    - stats
-    - state
-  period: 10s
-  hosts: ["http://localhost:5066"]
-  #username: "user"
-  #password: "secret"
-  xpack.enabled: true
-----------------------------------
-
-Set the `hosts`, `username`, and `password` settings as required by your
-environment. For other module settings, it's recommended that you accept the
-defaults.
-
-By default, the module collects {beatname_uc} monitoring data from
-`localhost:5066`. If you exposed the metrics on a different host or port when
-you enabled the HTTP endpoint, update the `hosts` setting.
-
-To monitor multiple
-ifndef::apm-server[]
-{beats} agents,
-endif::[]
-ifdef::apm-server[]
-APM Server instances,
-endif::[]
-specify a list of hosts, for example:
-
-[source,yaml]
-----------------------------------
-hosts: ["http://localhost:5066","http://localhost:5067","http://localhost:5068"]
-----------------------------------
-
-If you configured {beatname_uc} to use encrypted communications, you must access
-it via HTTPS. For example, use a `hosts` setting like `https://localhost:5066`.
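In that case, the module configuration might look like the following sketch (the CA path is a placeholder for your environment):

[source,yaml]
----
- module: beat
  metricsets:
    - stats
    - state
  period: 10s
  hosts: ["https://localhost:5066"]
  # Trust the CA that signed the HTTP endpoint's certificate.
  ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  xpack.enabled: true
----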
-
-// end::configure-beat-module[]
-
-// tag::remote-monitoring-user[]
-If the Elastic {security-features} are enabled, you must also provide a user
-ID and password so that {metricbeat} can collect metrics successfully:
-
-.. Create a user on the {es} cluster that has the
-`remote_monitoring_collector` {ref}/built-in-roles.html[built-in role].
-Alternatively, if it's available in your environment, use the
-`remote_monitoring_user` {ref}/built-in-users.html[built-in user].
-
-.. Add the `username` and `password` settings to the beat module configuration
-file.
-// end::remote-monitoring-user[]
---
-
-. Optional: Disable the system module in {metricbeat}.
-+
---
-// tag::disable-system-module[]
-By default, the {metricbeat-ref}/metricbeat-module-system.html[system module] is
-enabled. The information it collects, however, is not shown on the
-*{stack-monitor-app}* page in {kib}. Unless you want to use that information for
-other purposes, run the following command:
-
-["source","sh",subs="attributes,callouts"]
-----------------------------------------------------------------------
-metricbeat modules disable system
-----------------------------------------------------------------------
-// end::disable-system-module[]
---
-
-. Identify where to send the monitoring data. +
-+
---
-TIP: In production environments, we strongly recommend using a separate cluster
-(referred to as the _monitoring cluster_) to store the data. Using a separate
-monitoring cluster prevents production cluster outages from impacting your
-ability to access your monitoring data. It also prevents monitoring activities
-from impacting the performance of your production cluster.
-
-For example, specify the {es} output information in the {metricbeat}
-configuration file (`metricbeat.yml`):
-
-[source,yaml]
-----------------------------------
-output.elasticsearch:
-  # Array of hosts to connect to.
-  hosts: ["http://es-mon-1:9200", "http://es-mon-2:9200"] <1>
-
-  # Optional protocol and basic auth credentials.
-  #protocol: "https"
-  #api_key: "id:api_key" <2>
-  #username: "elastic"
-  #password: "changeme"
-----------------------------------
-<1> In this example, the data is stored on a monitoring cluster with nodes
-`es-mon-1` and `es-mon-2`.
-<2> Specify one of `api_key` or `username`/`password`.
-
-If you configured the monitoring cluster to use encrypted communications, you
-must access it via HTTPS. For example, use a `hosts` setting like
-`https://es-mon-1:9200`.
-
-IMPORTANT: The {es} {monitor-features} use ingest pipelines; therefore, the
-cluster that stores the monitoring data must have at least one ingest node.
-
-If the {es} {security-features} are enabled on the monitoring cluster, you
-must provide a valid user ID and password so that {metricbeat} can send metrics
-successfully:
-
-.. Create a user on the monitoring cluster that has the
-`remote_monitoring_agent` {ref}/built-in-roles.html[built-in role].
-Alternatively, if it's available in your environment, use the
-`remote_monitoring_user` {ref}/built-in-users.html[built-in user].
-+
-TIP: If you're using {ilm}, the remote monitoring user
-requires additional privileges to create and read indices. For more
-information, see <>.
-
-.. Add the `username` and `password` settings to the {es} output information in
-the {metricbeat} configuration file.
-
-For more information about these configuration options, see
-{metricbeat-ref}/elasticsearch-output.html[Configure the {es} output].
---
-
-. 
{metricbeat-ref}/metricbeat-starting.html[Start {metricbeat}] to begin -collecting monitoring data. - -. {kibana-ref}/monitoring-data.html[View the monitoring data in {kib}]. diff --git a/docs/legacy/copied-from-beats/docs/monitoring/shared-monitor-config.asciidoc b/docs/legacy/copied-from-beats/docs/monitoring/shared-monitor-config.asciidoc deleted file mode 100644 index 447d7ebc6d4..00000000000 --- a/docs/legacy/copied-from-beats/docs/monitoring/shared-monitor-config.asciidoc +++ /dev/null @@ -1,130 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. -//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc. -//// Use the following include to pull this content into a doc file: -//// include::../../libbeat/docs/monitoring/shared-monitor-config.asciidoc[] -//// Make sure this content appears below a level 2 heading. -////////////////////////////////////////////////////////////////////////// - -[role="xpack"] -[[configuration-monitor]] -=== Settings for internal collection - -IMPORTANT: {deprecation-notice-monitor} - -Use the following settings to configure internal collection when you are not -using {metricbeat} to collect monitoring data. - -You specify these settings in the X-Pack monitoring section of the -+{beatname_lc}.yml+ config file: - -==== `monitoring.enabled` - -The `monitoring.enabled` config is a boolean setting to enable or disable {monitoring}. -If set to `true`, monitoring is enabled. - -The default value is `false`. - -==== `monitoring.elasticsearch` - -The {es} instances that you want to ship your {beatname_uc} metrics to. This -configuration option contains the following fields: - -===== `api_key` - -The detail of the API key to be used to send monitoring information to {es}. -See <> for more information. - -===== `bulk_max_size` - -The maximum number of metrics to bulk in a single {es} bulk API index request. -The default is `50`. For more information, see <>. - -===== `backoff.init` - -The number of seconds to wait before trying to reconnect to {es} after -a network error. After waiting `backoff.init` seconds, {beatname_uc} tries to -reconnect. If the attempt fails, the backoff timer is increased exponentially up -to `backoff.max`. After a successful connection, the backoff timer is reset. The -default is `1s`. - -===== `backoff.max` - -The maximum number of seconds to wait before attempting to connect to -{es} after a network error. The default is `60s`. - -===== `compression_level` - -The gzip compression level. Setting this value to `0` disables compression. The -compression level must be in the range of `1` (best speed) to `9` (best -compression). The default value is `0`. Increasing the compression level -reduces the network usage but increases the CPU usage. - -===== `headers` - -Custom HTTP headers to add to each request. For more information, see -<>. - -===== `hosts` - -The list of {es} nodes to connect to. Monitoring metrics are distributed to -these nodes in round robin order. For more information, see -<>. - -===== `max_retries` - -The number of times to retry sending the monitoring metrics after a failure. 
-
After the specified number of retries, the metrics are typically dropped. The
-default value is `3`. For more information, see <>.
-
-===== `parameters`
-
-Dictionary of HTTP parameters to pass within the URL with index operations.
-
-===== `password`
-
-The password that {beatname_uc} uses to authenticate with the {es} instances for
-shipping monitoring data.
-
-===== `metrics.period`
-
-The time interval (in seconds) when metrics are sent to the {es} cluster. A new
-snapshot of {beatname_uc} metrics is generated and scheduled for publishing each
-period. The default value is `10s`.
-
-===== `state.period`
-
-The time interval (in seconds) when state information is sent to the {es} cluster. A new
-snapshot of {beatname_uc} state is generated and scheduled for publishing each
-period. The default value is `60s`.
-
-===== `protocol`
-
-The name of the protocol to use when connecting to the {es} cluster. The options
-are: `http` or `https`. The default is `http`. If you specify a URL for `hosts`,
-however, the value of protocol is overridden by the scheme you specify in the URL.
-
-===== `proxy_url`
-
-The URL of the proxy to use when connecting to the {es} cluster. For more
-information, see <>.
-
-===== `timeout`
-
-The HTTP request timeout in seconds for the {es} request. The default is `90`.
-
-===== `ssl`
-
-Configuration options for Transport Layer Security (TLS) or Secure Sockets Layer
-(SSL) parameters like the certificate authority (CA) to use for HTTPS-based
-connections. If the `ssl` section is missing, the host CAs are used for
-HTTPS connections to {es}. For more information, see <>.
-
-===== `username`
-
-The user ID that {beatname_uc} uses to authenticate with the {es} instances for
-shipping monitoring data.
diff --git a/docs/legacy/copied-from-beats/docs/output-cloud.asciidoc b/docs/legacy/copied-from-beats/docs/output-cloud.asciidoc
deleted file mode 100644
index 380672c14ed..00000000000
--- a/docs/legacy/copied-from-beats/docs/output-cloud.asciidoc
+++ /dev/null
@@ -1,53 +0,0 @@
-[[configure-cloud-id]]
-=== Configure the output for {ess} on {ecloud}
-
-[subs="attributes"]
-++++
-{ess}
-++++
-
-IMPORTANT: {deprecation-notice-config}
-
-ifdef::apm-server[]
-NOTE: This page refers to using a separate instance of APM Server with an existing
-{ess-product}[{ess} deployment].
-If you want to use APM on {ess}, see:
-{cloud}/ec-create-deployment.html[Create your deployment] and
-{cloud}/ec-manage-apm-settings.html[Add APM user settings].
-endif::apm-server[]
-
-{beatname_uc} comes with two settings that simplify the output configuration
-when used together with {ess-product}[{ess}]. When defined,
-these settings overwrite settings from other parts of the configuration.
-
-Example:
-
-["source","yaml",subs="attributes"]
------------------------------------------------------------------------------
-cloud.id: "staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRjZWM2ZjI2MWE3NGJmMjRjZTMzYmI4ODExYjg0Mjk0ZiRjNmMyY2E2ZDA0MjI0OWFmMGNjN2Q3YTllOTYyNTc0Mw=="
-cloud.auth: "elastic:{pwd}"
------------------------------------------------------------------------------
-
-These settings can also be specified at the command line, like this:
-
-
-["source","sh",subs="attributes"]
------------------------------------------------------------------------------
-{beatname_lc} -e -E cloud.id="" -E cloud.auth=""
------------------------------------------------------------------------------
-
-
-==== `cloud.id`
-
-The Cloud ID, which can be found in the {ess} web console, is used by
-{beatname_uc} to resolve the {es} and {kib} URLs. This setting
-overwrites the `output.elasticsearch.hosts` and `setup.kibana.host` settings.
-
-NOTE: The base64 encoded `cloud.id` found in the {ess} web console does not explicitly specify a port. This means that {beatname_uc} will default to using port 443 when using `cloud.id`, not the commonly configured cloud endpoint port 9243.
-
-==== `cloud.auth`
-
-When specified, `cloud.auth` overwrites the `output.elasticsearch.username` and
-`output.elasticsearch.password` settings. Because the {kib} settings inherit
-the username and password from the {es} output, this can also be used
-to set the `setup.kibana.username` and `setup.kibana.password` options.
diff --git a/docs/legacy/copied-from-beats/docs/outputconfig.asciidoc b/docs/legacy/copied-from-beats/docs/outputconfig.asciidoc
deleted file mode 100644
index 2580b4f4509..00000000000
--- a/docs/legacy/copied-from-beats/docs/outputconfig.asciidoc
+++ /dev/null
@@ -1,37 +0,0 @@
-//////////////////////////////////////////////////////////////////////////
-//// This content is shared by all Elastic Beats. Make sure you keep the
-//// descriptions here generic enough to work for all Beats that include
-//// this file. When using cross references, make sure that the cross
-//// references resolve correctly for any files that include this one.
-//// Use the appropriate variables defined in the index.asciidoc file to
-//// resolve Beat names: beatname_uc and beatname_lc.
-//// Use the following include to pull this content into a doc file:
-//// include::../../libbeat/docs/outputconfig.asciidoc[]
-//// Make sure this content appears below a level 2 heading.
-//////////////////////////////////////////////////////////////////////////
-
-
-[[configuring-output]]
-== Configure the output
-
-++++
-Output
-++++
-
-IMPORTANT: {deprecation-notice-config}
-
-You configure {beatname_uc} to write to a specific output by setting options
-in the Outputs section of the +{beatname_lc}.yml+ config file. Only a single
-output may be defined.
-
-The following topics describe how to configure each supported output. If you've
-secured the {stack}, also read <> for more about
-security-related configuration options.
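As a point of reference before the per-output topics that follow, a minimal {es} output might look like this sketch (the host and credentials are illustrative placeholders):

[source,yaml]
----
output.elasticsearch:
  # Only a single output may be defined; list one or more nodes to publish to.
  hosts: ["https://localhost:9200"]
  username: "elastic"
  password: "changeme"
----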
- -include::outputs-list.asciidoc[tag=outputs-list] - -ifdef::beat-specific-output-config[] -include::{beat-specific-output-config}[] -endif::[] - -include::outputs-list.asciidoc[tag=outputs-include] diff --git a/docs/legacy/copied-from-beats/docs/outputs-list.asciidoc b/docs/legacy/copied-from-beats/docs/outputs-list.asciidoc deleted file mode 100644 index 4181c10f64f..00000000000 --- a/docs/legacy/copied-from-beats/docs/outputs-list.asciidoc +++ /dev/null @@ -1,87 +0,0 @@ -// TODO: Create script that generates this file. Conditional coding needs to -// be preserved. - -//# tag::outputs-list[] - -ifndef::no_cloud_id[] -* <> -endif::[] -ifndef::no_es_output[] -* <> -endif::[] -ifndef::no_ls_output[] -* <> -endif::[] -ifndef::no_kafka_output[] -* <> -endif::[] -ifndef::no_redis_output[] -* <> -endif::[] -ifndef::no_file_output[] -* <> -endif::[] -ifndef::no_console_output[] -* <> -endif::[] - -//# end::outputs-list[] - -//# tag::outputs-include[] -ifndef::no_cloud_id[] -ifdef::requires_xpack[] -[role="xpack"] -endif::[] -include::output-cloud.asciidoc[] -endif::[] - -ifndef::no_es_output[] -ifdef::requires_xpack[] -[role="xpack"] -endif::[] -include::{libbeat-outputs-dir}/elasticsearch/docs/elasticsearch.asciidoc[] -endif::[] - -ifndef::no_ls_output[] -ifdef::requires_xpack[] -[role="xpack"] -endif::[] -include::{libbeat-outputs-dir}/logstash/docs/logstash.asciidoc[] -endif::[] - -ifndef::no_kafka_output[] -ifdef::requires_xpack[] -[role="xpack"] -endif::[] -include::{libbeat-outputs-dir}/kafka/docs/kafka.asciidoc[] -endif::[] - -ifndef::no_redis_output[] -ifdef::requires_xpack[] -[role="xpack"] -endif::[] -include::{libbeat-outputs-dir}/redis/docs/redis.asciidoc[] -endif::[] - -ifndef::no_file_output[] -ifdef::requires_xpack[] -[role="xpack"] -endif::[] -include::{libbeat-outputs-dir}/fileout/docs/fileout.asciidoc[] -endif::[] - -ifndef::no_console_output[] -ifdef::requires_xpack[] -[role="xpack"] -endif::[] -include::{libbeat-outputs-dir}/console/docs/console.asciidoc[] -endif::[] - -ifndef::no_codec[] -ifdef::requires_xpack[] -[role="xpack"] -endif::[] -include::{libbeat-outputs-dir}/codec/docs/codec.asciidoc[] -endif::[] - -//# end::outputs-include[] diff --git a/docs/legacy/copied-from-beats/docs/repositories.asciidoc b/docs/legacy/copied-from-beats/docs/repositories.asciidoc deleted file mode 100644 index 5b61d434308..00000000000 --- a/docs/legacy/copied-from-beats/docs/repositories.asciidoc +++ /dev/null @@ -1,172 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. -//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc. -//// Use the following include to pull this content into a doc file: -//// include::../../libbeat/docs/setup-repositories.asciidoc[] -////////////////////////////////////////////////////////////////////////// - -[[setup-repositories]] -=== Repositories for APT and YUM - -IMPORTANT: {deprecation-notice-installation} - -We have repositories available for APT and YUM-based distributions. Note that we -provide binary packages, but no source packages. 
- -We use the PGP key https://pgp.mit.edu/pks/lookup?op=vindex&search=0xD27D666CD88E42B4[D88E42B4], -{es} Signing Key, with fingerprint - - 4609 5ACC 8548 582C 1A26 99A9 D27D 666C D88E 42B4 - -to sign all our packages. It is available from https://pgp.mit.edu. - -[float] -==== APT - -ifeval::["{release-state}"=="unreleased"] - -Version {version} of {repo} has not yet been released. - -endif::[] - -ifeval::["{release-state}"!="unreleased"] - -To add the {repo} repository for APT: - -. Download and install the Public Signing Key: -+ -[source,sh] --------------------------------------------------- -wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add - --------------------------------------------------- - -. You may need to install the `apt-transport-https` package on Debian before proceeding: -+ -[source,sh] --------------------------------------------------- -sudo apt-get install apt-transport-https --------------------------------------------------- - -ifeval::["{release-state}"=="prerelease"] -. Save the repository definition to +/etc/apt/sources.list.d/elastic-{major-version}-prerelease.list+: -+ -["source","sh",subs="attributes"] --------------------------------------------------- -echo "deb https://artifacts.elastic.co/packages/{major-version}-prerelease/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-{major-version}-prerelease.list --------------------------------------------------- -+ -endif::[] - -ifeval::["{release-state}"=="released"] -. Save the repository definition to +/etc/apt/sources.list.d/elastic-{major-version}.list+: -+ -["source","sh",subs="attributes"] --------------------------------------------------- -echo "deb https://artifacts.elastic.co/packages/{major-version}/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-{major-version}.list --------------------------------------------------- -+ -endif::[] -[WARNING] -================================================== -To add the Elastic repository, make sure that you use the `echo` method shown -in the example. Do not use `add-apt-repository` because it will add a `deb-src` -entry, but we do not provide a source package. - -If you have added the `deb-src` entry by mistake, you will see an error like -the following: - -["source","txt",subs="attributes"] ----- -Unable to find expected entry 'main/source/Sources' in Release file (Wrong sources.list entry or malformed file) ----- - -Simply delete the `deb-src` entry from the `/etc/apt/sources.list` file, and the installation should work as expected. -================================================== - -. Run `apt-get update`, and the repository is ready for use. For example, you can -install {beatname_uc} by running: -+ -["source","sh",subs="attributes"] --------------------------------------------------- -sudo apt-get update && sudo apt-get install {beatname_pkg} --------------------------------------------------- - -. To configure {beatname_uc} to start automatically during boot, run: -+ -["source","sh",subs="attributes"] --------------------------------------------------- -sudo systemctl enable {beatname_pkg} --------------------------------------------------- - -endif::[] - -[float] -==== YUM - -ifeval::["{release-state}"=="unreleased"] - -Version {version} of {repo} has not yet been released. - -endif::[] - -ifeval::["{release-state}"!="unreleased"] - -To add the {repo} repository for YUM: - -. 
Download and install the public signing key: -+ -[source,sh] --------------------------------------------------- -sudo rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch --------------------------------------------------- - -. Create a file with a `.repo` extension (for example, `elastic.repo`) in -your `/etc/yum.repos.d/` directory and add the following lines: -+ -ifeval::["{release-state}"=="prerelease"] -["source","sh",subs="attributes"] --------------------------------------------------- -[elastic-{major-version}-prerelease] -name=Elastic repository for {major-version} prerelease packages -baseurl=https://artifacts.elastic.co/packages/{major-version}-prerelease/yum -gpgcheck=1 -gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch -enabled=1 -autorefresh=1 -type=rpm-md --------------------------------------------------- -endif::[] -ifeval::["{release-state}"=="released"] -["source","sh",subs="attributes"] --------------------------------------------------- -[elastic-{major-version}] -name=Elastic repository for {major-version} packages -baseurl=https://artifacts.elastic.co/packages/{major-version}/yum -gpgcheck=1 -gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch -enabled=1 -autorefresh=1 -type=rpm-md --------------------------------------------------- -endif::[] -+ -Your repository is ready to use. For example, you can install {beatname_uc} by -running: -+ -["source","sh",subs="attributes"] --------------------------------------------------- -sudo yum install {beatname_pkg} --------------------------------------------------- - -. To configure {beatname_uc} to start automatically during boot, run: -+ -["source","sh",subs="attributes"] --------------------------------------------------- -sudo systemctl enable {beatname_pkg} --------------------------------------------------- - -endif::[] diff --git a/docs/legacy/copied-from-beats/docs/security/linux-seccomp.asciidoc b/docs/legacy/copied-from-beats/docs/security/linux-seccomp.asciidoc deleted file mode 100644 index 96773aa8ddd..00000000000 --- a/docs/legacy/copied-from-beats/docs/security/linux-seccomp.asciidoc +++ /dev/null @@ -1,95 +0,0 @@ -[[linux-seccomp]] -== Use Linux Secure Computing Mode (seccomp) - -IMPORTANT: {deprecation-notice-config} - -beta[] - -On Linux 3.17 and later, {beatname_uc} can take advantage of secure computing -mode, also known as seccomp. Seccomp restricts the system calls that a process -can issue. Specifically {beatname_uc} can load a seccomp BPF filter at process -start-up that drops the privileges to invoke specific system calls. Once a -filter is loaded by the process it cannot be removed. - -The kernel exposes a large number of system calls that are not used by -{beatname_uc}. By installing a seccomp filter, you can limit the total kernel -surface exposed to {beatname_uc} (principle of least privilege). This minimizes -the impact of unknown vulnerabilities that might be found in the process. - -The filter is expressed as a Berkeley Packet Filter (BPF) program. The BPF -program is generated based on a policy defined by {beatname_uc}. The policy -can be customized through configuration as well. - -A seccomp policy is architecture specific due to the fact that system calls vary -by architecture. {beatname_uc} includes an allowlist seccomp policy for the -AMD64 and 386 architectures. You can view those policies -https://github.com/elastic/beats/tree/{branch}/libbeat/common/seccomp[here]. 
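Before customizing a policy, it can be worth confirming that the running kernel was built with seccomp support. One way to check is to inspect the kernel build configuration (a sketch; the config file location varies by distribution):

[source,sh]
----
# Look for the seccomp build flags of the running kernel.
grep -E 'CONFIG_SECCOMP(_FILTER)?=' /boot/config-$(uname -r)
# A kernel with seccomp BPF filter support reports:
#   CONFIG_SECCOMP=y
#   CONFIG_SECCOMP_FILTER=y
----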
- -[float] -[[seccomp-policy-config]] -=== Seccomp Policy Configuration - -The seccomp policy can be customized through the configuration policy. This is -an example blocklist policy that prohibits `execve`, `execveat`, `fork`, and -`vfork` syscalls. - -[source,yaml] ----- -seccomp: - default_action: allow <1> - syscalls: - - action: errno <2> - names: <3> - - execve - - execveat - - fork - - vfork ----- -<1> If the system call being invoked by the process does not match one of the -names below then it will be allowed. -<2> If the system call being invoked matches one of the names below then an -error will be returned to caller. This is known as a blocklist policy. -<3> These are system calls being prohibited. - -These are the configuration options for a seccomp policy. - -*`enabled`*:: On Linux, this option is enabled by default. To disable seccomp -filter loading, set this option to `false`. - -*`default_action`*:: The default action to take when none of the defined system -calls match. See <> for the full list of -values. This is required. - -*`syscalls`*:: Each object in this list must contain an `action` and a list of -system call `names`. The list must contain at least one item. - -*`names`*:: A list of system call names. The system call name must exist for -the runtime architecture, otherwise an error will be logged and the filter will -not be installed. At least one system call must be defined. - -[[seccomp-policy-config-action]] -*`action`*:: The action to take when any of the system calls listed in `names` -is executed. This is required. These are the available action values. The -actions that are available depend on the kernel version. - -- `errno` - The system call will return `EPERM` (permission denied) to the - caller. -- `trace` - The kernel will notify a `ptrace` tracer. If no tracer is present - then the system call fails with `ENOSYS` (function not implemented). -- `trap` - The kernel will send a `SIGSYS` signal to the calling thread and not - execute the system call. The Go runtime will exit. -- `kill_thread` - The kernel will immediately terminate the thread. Other - threads will continue to execute. -- `kill_process` - The kernel will terminate the process. Available in Linux - 4.14 and later. -- `log` - The kernel will log the system call before executing it. Available in - Linux 4.14 and later. (This does not go to the Beat's log.) -- `allow` - The kernel will allow the system call to execute. - -[float] -=== {auditbeat} Reports Seccomp Violations - -You can use {auditbeat} to report any seccomp violations that occur on the system. -The kernel generates an event for each violation and {auditbeat} reports the -event. The `event.action` value will be `violated-seccomp-policy` and the event -will contain information about the process and system call. diff --git a/docs/legacy/copied-from-beats/docs/shared-directory-layout.asciidoc b/docs/legacy/copied-from-beats/docs/shared-directory-layout.asciidoc deleted file mode 100644 index 1545ccaebcf..00000000000 --- a/docs/legacy/copied-from-beats/docs/shared-directory-layout.asciidoc +++ /dev/null @@ -1,114 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. 
-//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc. -//// Use the following include to pull this content into a doc file: -//// include::../../libbeat/docs/shared-directory-layout.asciidoc[] -////////////////////////////////////////////////////////////////////////// - -[[directory-layout]] -=== Directory layout - -// lint disable usr - -IMPORTANT: {deprecation-notice-installation} - -The directory layout of an installation is as follows: - -[cols="> in the configuration file. -endif::serverless[] - -[float] -==== Default paths - -{beatname_uc} uses the following default paths unless you explicitly change them. - -ifdef::deb_os,rpm_os[] -[float] -===== deb and rpm -[cols=" <2> --------------------------------------------- -endif::[] - -ifeval::["{beatname_lc}"=="metricbeat"] -["source", "sh", subs="attributes"] --------------------------------------------- -docker run \ -{dockerimage} \ -setup -E setup.kibana.host=kibana:5601 \ --E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> --------------------------------------------- -endif::[] - -ifeval::["{beatname_lc}"=="heartbeat"] -["source", "sh", subs="attributes"] --------------------------------------------- -docker run \ -{dockerimage} \ -setup -E setup.kibana.host=kibana:5601 \ --E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> --------------------------------------------- -endif::[] - -ifeval::["{beatname_lc}"=="journalbeat"] -["source", "sh", subs="attributes"] --------------------------------------------- -docker run \ -{dockerimage} \ -setup -E setup.kibana.host=kibana:5601 \ --E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> --------------------------------------------- -endif::[] - -ifeval::["{beatname_lc}"=="packetbeat"] -["source", "sh", subs="attributes"] --------------------------------------------- -docker run \ ---cap-add=NET_ADMIN \ -{dockerimage} \ -setup -E setup.kibana.host=kibana:5601 \ --E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> --------------------------------------------- -endif::[] - -ifeval::["{beatname_lc}"=="auditbeat"] -["source", "sh", subs="attributes"] --------------------------------------------- -docker run \ - --cap-add="AUDIT_CONTROL" \ - --cap-add="AUDIT_READ" \ - {dockerimage} \ - setup -E setup.kibana.host=kibana:5601 \ - -E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> --------------------------------------------- -endif::[] - -<1> Substitute your {kib} and {es} hosts and ports. -<2> If you are using the hosted {ess} in {ecloud}, replace -the `-E output.elasticsearch.hosts` line with the Cloud ID and elastic password -using this syntax: - -[source,shell] --------------------------------------------- --E cloud.id= \ --E cloud.auth=elastic: --------------------------------------------- - -endif::apm-server[] - -[float] -==== Configure {beatname_uc} on Docker - -The Docker image provides several methods for configuring {beatname_uc}. The -conventional approach is to provide a configuration file via a volume mount, but -it's also possible to create a custom image with your -configuration included. 
- -[float] -===== Example configuration file - -Download this example configuration file as a starting point: - -["source","sh",subs="attributes,callouts"] ------------------------------------------------- -curl -L -O {dockerconfig} ------------------------------------------------- - -[float] -===== Volume-mounted configuration - -One way to configure {beatname_uc} on Docker is to provide +{beatname_lc}.docker.yml+ via a volume mount. -With +docker run+, the volume mount can be specified like this. - -ifeval::["{beatname_lc}"=="filebeat"] -["source", "sh", subs="attributes"] --------------------------------------------- -docker run -d \ - --name={beatname_lc} \ - --user=root \ - --volume="$(pwd)/{beatname_lc}.docker.yml:/usr/share/{beatname_lc}/{beatname_lc}.yml:ro" \ - --volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \ - --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \ - {dockerimage} {beatname_lc} -e -strict.perms=false \ - -E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> --------------------------------------------- -endif::[] - -ifeval::["{beatname_lc}"=="journalbeat"] -Make sure you include the path to the host's journal. The path might be -`/var/log/journal` or `/run/log/journal`. - -["source", "sh", subs="attributes"] --------------------------------------------- -sudo docker run -d \ - --name={beatname_lc} \ - --user=root \ - --volume="/var/log/journal:/var/log/journal" \ - --volume="/etc/machine-id:/etc/machine-id" \ - --volume="/run/systemd:/run/systemd" \ - --volume="/etc/hostname:/etc/hostname:ro" \ - {dockerimage} {beatname_lc} -e -strict.perms=false \ - -E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> --------------------------------------------- -endif::[] - -ifeval::["{beatname_lc}"=="metricbeat"] -["source", "sh", subs="attributes"] --------------------------------------------- -docker run -d \ - --name={beatname_lc} \ - --user=root \ - --volume="$(pwd)/{beatname_lc}.docker.yml:/usr/share/{beatname_lc}/{beatname_lc}.yml:ro" \ - --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \ - --volume="/sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro" \ - --volume="/proc:/hostfs/proc:ro" \ - --volume="/:/hostfs:ro" \ - {dockerimage} {beatname_lc} -e \ - -E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> --------------------------------------------- -endif::[] - -ifeval::["{beatname_lc}"=="packetbeat"] -["source", "sh", subs="attributes"] --------------------------------------------- -docker run -d \ - --name={beatname_lc} \ - --user={beatname_lc} \ - --volume="$(pwd)/{beatname_lc}.docker.yml:/usr/share/{beatname_lc}/{beatname_lc}.yml:ro" \ - --cap-add="NET_RAW" \ - --cap-add="NET_ADMIN" \ - --network=host \ - {dockerimage} \ - --strict.perms=false -e \ - -E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> --------------------------------------------- -endif::[] - -ifeval::["{beatname_lc}"=="auditbeat"] -["source", "sh", subs="attributes"] --------------------------------------------- -docker run -d \ - --name={beatname_lc} \ - --user=root \ - --volume="$(pwd)/{beatname_lc}.docker.yml:/usr/share/{beatname_lc}/{beatname_lc}.yml:ro" \ - --cap-add="AUDIT_CONTROL" \ - --cap-add="AUDIT_READ" \ - --pid=host \ - {dockerimage} -e \ - --strict.perms=false \ - -E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> --------------------------------------------- -endif::[] - -ifeval::["{beatname_lc}"=="heartbeat"] -["source", "sh", subs="attributes"] --------------------------------------------- -docker run -d \ - 
--name={beatname_lc} \ - --user={beatname_lc} \ - --volume="$(pwd)/{beatname_lc}.docker.yml:/usr/share/{beatname_lc}/{beatname_lc}.yml:ro" \ - {dockerimage} \ - --strict.perms=false -e \ - -E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> --------------------------------------------- -endif::[] - -ifeval::["{beatname_lc}"=="apm-server"] -["source", "sh", subs="attributes"] --------------------------------------------- -docker run -d \ - -p 8200:8200 \ - --name={beatname_lc} \ - --user={beatname_lc} \ - --volume="$(pwd)/{beatname_lc}.docker.yml:/usr/share/{beatname_lc}/{beatname_lc}.yml:ro" \ - {dockerimage} \ - --strict.perms=false -e \ - -E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> --------------------------------------------- -endif::[] - -<1> Substitute your {es} hosts and ports. -<2> If you are using the hosted {ess} in {ecloud}, replace -the `-E output.elasticsearch.hosts` line with the Cloud ID and elastic password -using the syntax shown earlier. - -[float] -===== Customize your configuration - -ifdef::has_docker_label_ex[] -The +{beatname_lc}.docker.yml+ file you downloaded earlier is configured to deploy {beats} modules based on the Docker labels applied to your containers. See <> for more details. Add labels to your application Docker containers, and they will be picked up by the {beats} autodiscover feature when they are deployed. Here is an example command for an Apache HTTP Server container with labels to configure the {filebeat} and {metricbeat} modules for the Apache HTTP Server: - -["source", "sh", subs="attributes"] --------------------------------------------- -docker run \ - --label co.elastic.logs/module=apache2 \ - --label co.elastic.logs/fileset.stdout=access \ - --label co.elastic.logs/fileset.stderr=error \ - --label co.elastic.metrics/module=apache \ - --label co.elastic.metrics/metricsets=status \ - --label co.elastic.metrics/hosts='${data.host}:${data.port}' \ - --detach=true \ - --name my-apache-app \ - -p 8080:80 \ - httpd:2.4 --------------------------------------------- -endif::[] - -ifndef::has_docker_label_ex[] -The +{beatname_lc}.docker.yml+ downloaded earlier should be customized for your environment. See <> for more details. Edit the configuration file and customize it to match your environment then re-deploy your {beatname_uc} container. -endif::[] - -[float] -===== Custom image configuration - -It's possible to embed your {beatname_uc} configuration in a custom image. 
-Here is an example Dockerfile to achieve this: - -ifeval::["{beatname_lc}"!="auditbeat"] - -["source", "dockerfile", subs="attributes"] --------------------------------------------- -FROM {dockerimage} -COPY {beatname_lc}.yml /usr/share/{beatname_lc}/{beatname_lc}.yml -USER root -RUN chown root:{beatname_lc} /usr/share/{beatname_lc}/{beatname_lc}.yml -USER {beatname_lc} --------------------------------------------- - -endif::[] - -ifeval::["{beatname_lc}"=="auditbeat"] - -["source", "dockerfile", subs="attributes"] --------------------------------------------- -FROM {dockerimage} -COPY {beatname_lc}.yml /usr/share/{beatname_lc}/{beatname_lc}.yml --------------------------------------------- - -endif::[] diff --git a/docs/legacy/copied-from-beats/docs/shared-env-vars.asciidoc b/docs/legacy/copied-from-beats/docs/shared-env-vars.asciidoc deleted file mode 100644 index 92bdeca5e13..00000000000 --- a/docs/legacy/copied-from-beats/docs/shared-env-vars.asciidoc +++ /dev/null @@ -1,113 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. -//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc. -//// Use the following include to pull this content into a doc file: -//// :standalone: -//// include::../../libbeat/docs/shared-env-vars.asciidoc[] -//// Specify :standalone: when this file is pulled into and index. When -//// the file is embedded in another file, do no specify :standalone: -////////////////////////////////////////////////////////////////////////// - -ifdef::standalone[] - -[[using-environ-vars]] -== Use environment variables in the configuration - -endif::[] - -IMPORTANT: {deprecation-notice-config} -If you're using {fleet} and the Elastic APM integration, please see the {fleet-guide}[{fleet} User Guide] instead. - -You can use environment variable references in the config file to -set values that need to be configurable during deployment. To do this, use: - -`${VAR}` - -Where `VAR` is the name of the environment variable. - -Each variable reference is replaced at startup by the value of the environment -variable. The replacement is case-sensitive and occurs before the YAML file is -parsed. References to undefined variables are replaced by empty strings unless -you specify a default value or custom error text. - -To specify a default value, use: - -`${VAR:default_value}` - -Where `default_value` is the value to use if the environment variable is -undefined. - -To specify custom error text, use: - -`${VAR:?error_text}` - -Where `error_text` is custom text that will be prepended to the error -message if the environment variable cannot be expanded. - -If you need to use a literal `${` in your configuration file then you can write -`$${` to escape the expansion. - -After changing the value of an environment variable, you need to restart -{beatname_uc} to pick up the new value. - -[NOTE] -================================== -You can also specify environment variables when you override a config -setting from the command line by using the `-E` option. 
For example: - -`-E name=${NAME}` - -================================== - -[float] -=== Examples - -Here are some examples of configurations that use environment variables -and what each configuration looks like after replacement: - -[options="header"] -|================================== -|Config source |Environment setting |Config after replacement -|`name: ${NAME}` |`export NAME=elastic` |`name: elastic` -|`name: ${NAME}` |no setting |`name:` -|`name: ${NAME:beats}` |no setting |`name: beats` -|`name: ${NAME:beats}` |`export NAME=elastic` |`name: elastic` -|`name: ${NAME:?You need to set the NAME environment variable}` |no setting | None. Returns an error message that's prepended with the custom text. -|`name: ${NAME:?You need to set the NAME environment variable}` |`export NAME=elastic` | `name: elastic` -|================================== - -[float] -=== Specify complex objects in environment variables - -You can specify complex objects, such as lists or dictionaries, in environment -variables by using a JSON-like syntax. - -As with JSON, dictionaries and lists are constructed using `{}` and `[]`. But -unlike JSON, the syntax allows for trailing commas and slightly different string -quotation rules. Strings can be unquoted, single-quoted, or double-quoted, as a -convenience for simple settings and to make it easier for you to mix quotation -usage in the shell. Arrays at the top-level do not require brackets (`[]`). - -For example, the following environment variable is set to a list: - -[source,yaml] -------------------------------------------------------------------------------- -ES_HOSTS="10.45.3.2:9220,10.45.3.1:9230" -------------------------------------------------------------------------------- - -You can reference this variable in the config file: - -[source,yaml] -------------------------------------------------------------------------------- -output.elasticsearch: - hosts: '${ES_HOSTS}' -------------------------------------------------------------------------------- - -When {beatname_uc} loads the config file, it resolves the environment variable and -replaces it with the specified list before reading the `hosts` setting. - -NOTE: Do not use double-quotes (`"`) to wrap regular expressions, or the backslash (`\`) will be interpreted as an escape character. diff --git a/docs/legacy/copied-from-beats/docs/shared-instrumentation.asciidoc b/docs/legacy/copied-from-beats/docs/shared-instrumentation.asciidoc deleted file mode 100644 index cac7084cb48..00000000000 --- a/docs/legacy/copied-from-beats/docs/shared-instrumentation.asciidoc +++ /dev/null @@ -1,93 +0,0 @@ -[[configuration-instrumentation]] -== Configure APM instrumentation - -++++ -Instrumentation -++++ - -IMPORTANT: {deprecation-notice-config} - -Libbeat uses the Elastic APM Go Agent to instrument its publishing pipeline. -Currently, only the {es} output is instrumented. -To gain insight into the performance of {beatname_uc}, you can enable this instrumentation and send trace data to APM Server. - -Example configuration with instrumentation enabled: - -["source","yaml"] ----- -instrumentation: - enabled: true - environment: production - hosts: - - "http://localhost:8200" - api_key: L5ER6FEvjkmlfalBealQ3f3fLqf03fazfOV ----- - -[float] -=== Configuration options - -You can specify the following options in the `instrumentation` section of the +{beatname_lc}.yml+ config file: - -[float] -==== `enabled` - -Set to `true` to enable instrumentation of {beatname_uc}. -Defaults to `false`. 
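-
-During development, this setting can also be flipped at startup with the `-E`
-CLI flag instead of editing the config file. This is a minimal sketch, not a
-recommended deployment method; it relies only on the `-E` override behavior
-described earlier:
-
-["source","sh",subs="attributes"]
-----
-# Run in the foreground (-e logs to stderr) with instrumentation
-# enabled for this run only.
-{beatname_lc} -e -E instrumentation.enabled=true
-----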
-
-[float]
-==== `environment`
-
-Set the environment in which {beatname_uc} is running, for example, `staging`, `production`, or `dev`.
-Environments can be filtered in the {kibana-ref}/xpack-apm.html[{apm-app}].
-
-[float]
-==== `hosts`
-
-The {apm-server-ref-v}/getting-started-apm-server.html[APM Server] hosts to report instrumentation data to.
-Defaults to `http://localhost:8200`.
-
-[float]
-==== `api_key`
-
-{apm-server-ref-v}/api-key.html[API key] used to secure communication with the APM Server(s).
-If `api_key` is set then `secret_token` will be ignored.
-
-[float]
-==== `secret_token`
-
-{apm-server-ref-v}/secret-token.html[Secret token] used to secure communication with the APM Server(s).
-
-[float]
-==== `profiling.cpu.enabled`
-
-Set to `true` to enable CPU profiling, where profile samples are recorded as events.
-
-This feature is experimental.
-
-[float]
-==== `profiling.cpu.interval`
-
-Configure the CPU profiling interval. Defaults to `60s`.
-
-This feature is experimental.
-
-[float]
-==== `profiling.cpu.duration`
-
-Configure the CPU profiling duration. Defaults to `10s`.
-
-This feature is experimental.
-
-[float]
-==== `profiling.heap.enabled`
-
-Set to `true` to enable heap profiling.
-
-This feature is experimental.
-
-[float]
-==== `profiling.heap.interval`
-
-Configure the heap profiling interval. Defaults to `60s`.
-
-This feature is experimental.
diff --git a/docs/legacy/copied-from-beats/docs/shared-kerberos-config.asciidoc b/docs/legacy/copied-from-beats/docs/shared-kerberos-config.asciidoc
deleted file mode 100644
index e05dd1ea7d0..00000000000
--- a/docs/legacy/copied-from-beats/docs/shared-kerberos-config.asciidoc
+++ /dev/null
@@ -1,91 +0,0 @@
-[[configuration-kerberos]]
-== Configure Kerberos
-
-++++
-Kerberos
-++++
-
-IMPORTANT: {deprecation-notice-config}
-
-You can specify Kerberos options with any output or input that supports Kerberos, like {es}.
-
-The following encryption types are supported:
-
-// lint ignore
-* aes128-cts-hmac-sha1-96
-* aes128-cts-hmac-sha256-128
-* aes256-cts-hmac-sha1-96
-* aes256-cts-hmac-sha384-192
-* des3-cbc-sha1-kd
-* rc4-hmac
-
-Example output config with Kerberos password-based authentication:
-
-[source,yaml]
-----
-output.elasticsearch.hosts: ["http://my-elasticsearch.elastic.co:9200"]
-output.elasticsearch.kerberos.auth_type: password
-output.elasticsearch.kerberos.username: "elastic"
-output.elasticsearch.kerberos.password: "changeme"
-output.elasticsearch.kerberos.config_path: "/etc/krb5.conf"
-output.elasticsearch.kerberos.realm: "ELASTIC.CO"
-----
-
-The service principal name for the {es} instance is constructed from these options.
-With this configuration, it will be `HTTP/my-elasticsearch.elastic.co@ELASTIC.CO`.
-
-[float]
-=== Configuration options
-
-You can specify the following options in the `kerberos` section of the +{beatname_lc}.yml+ config file:
-
-[float]
-==== `enabled`
-
-The `enabled` setting can be used to disable the `kerberos` configuration by setting
-it to `false`. The default value is `true`.
-
-NOTE: Kerberos settings are disabled if either `enabled` is set to `false` or the
-`kerberos` section is missing.
-
-[float]
-==== `auth_type`
-
-There are two options to authenticate with the Kerberos KDC: `password` and `keytab`.
-
-`password` expects the principal name and its password. When choosing `keytab`, you
-have to specify a principal name and a path to a keytab. The keytab must contain
-the keys of the selected principal. Otherwise, authentication will fail.
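-
-For illustration, a keytab-based variant of the password example above might look
-like the following sketch. The keytab path is hypothetical; the `keytab` setting
-and the other options are described below:
-
-[source,yaml]
-----
-output.elasticsearch.hosts: ["http://my-elasticsearch.elastic.co:9200"]
-output.elasticsearch.kerberos.auth_type: keytab
-output.elasticsearch.kerberos.username: "elastic"
-# Hypothetical path; the keytab must contain the keys of the "elastic" principal.
-output.elasticsearch.kerberos.keytab: "/etc/security/elastic.keytab"
-output.elasticsearch.kerberos.config_path: "/etc/krb5.conf"
-output.elasticsearch.kerberos.realm: "ELASTIC.CO"
-----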
- -[float] -==== `config_path` - -You need to set the path to the `krb5.conf`, so +{beatname_lc} can find the Kerberos KDC to -retrieve a ticket. - -[float] -==== `username` - -Name of the principal used to connect to the output. - -[float] -==== `password` - -If you configured `password` for `auth_type`, you have to provide a password -for the selected principal. - -[float] -==== `keytab` - -If you configured `keytab` for `auth_type`, you have to provide the path to the -keytab of the selected principal. - -[float] -==== `service_name` - -This option can only be configured for Kafka. It is the name of the Kafka service, usually `kafka`. - -[float] -==== `realm` - -Name of the realm where the output resides. diff --git a/docs/legacy/copied-from-beats/docs/shared-path-config.asciidoc b/docs/legacy/copied-from-beats/docs/shared-path-config.asciidoc deleted file mode 100644 index 9ec0860e004..00000000000 --- a/docs/legacy/copied-from-beats/docs/shared-path-config.asciidoc +++ /dev/null @@ -1,127 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. -//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc. -//// Use the following include to pull this content into a doc file: -//// include::../../libbeat/docs/shared-path-config.asciidoc[] -//// Make sure this content appears below a level 2 heading. -////////////////////////////////////////////////////////////////////////// - -[[configuration-path]] -== Configure project paths - -++++ -Project paths -++++ - -IMPORTANT: {deprecation-notice-config} - -The `path` section of the +{beatname_lc}.yml+ config file contains configuration -options that define where {beatname_uc} looks for its files. For example, {beatname_uc} -looks for the {es} template file in the configuration path and writes -log files in the logs path. -ifdef::has_registry[] -{beatname_uc} looks for its registry files in the data path. -endif::[] - -Please see the <> section for more details. - -Here is an example configuration: - -[source,yaml] ------------------------------------------------------------------------------- -path.home: /usr/share/beat -path.config: /etc/beat -path.data: /var/lib/beat -path.logs: /var/log/ ------------------------------------------------------------------------------- - -Note that it is possible to override these options by using command line flags. - -[float] -=== Configuration options - -You can specify the following options in the `path` section of the +{beatname_lc}.yml+ config file: - -[float] -==== `home` - -The home path for the {beatname_uc} installation. This is the default base path for all -other path settings and for miscellaneous files that come with the distribution (for example, the -sample dashboards). If not set by a CLI flag or in the configuration file, the default -for the home path is the location of the {beatname_uc} binary. - -Example: - -[source,yaml] ------------------------------------------------------------------------------- -path.home: /usr/share/beats ------------------------------------------------------------------------------- - -[float] -==== `config` - -The configuration path for the {beatname_uc} installation. 
This is the default base path -for configuration files, including the main YAML configuration file and the -{es} template file. If not set by a CLI flag or in the configuration file, the default for the -configuration path is the home path. - -Example: - -[source,yaml] ------------------------------------------------------------------------------- -path.config: /usr/share/beats/config ------------------------------------------------------------------------------- - -[float] -==== `data` - -The data path for the {beatname_uc} installation. This is the default base path for all -the files in which {beatname_uc} needs to store its data. If not set by a CLI -flag or in the configuration file, the default for the data path is a `data` -subdirectory inside the home path. - - -Example: - -[source,yaml] ------------------------------------------------------------------------------- -path.data: /var/lib/beats ------------------------------------------------------------------------------- - -TIP: When running multiple {beatname_uc} instances on the same host, make sure they -each have a distinct `path.data` value. - -[float] -==== `logs` - -The logs path for a {beatname_uc} installation. This is the default location for {beatname_uc}'s -log files. If not set by a CLI flag or in the configuration file, the default -for the logs path is a `logs` subdirectory inside the home path. - -Example: - -[source,yaml] ------------------------------------------------------------------------------- -path.logs: /var/log/beats ------------------------------------------------------------------------------- - -[float] -==== `system.hostfs` - -Specifies the mount point of the host's file system for use in monitoring a host. -This can either be set in the config, or with the `--system.hostfs` CLI flag. This is used for cgroup self-monitoring. -ifeval::["{beatname_lc}"=="metricbeat"] -This is also used by the system module to read files from `/proc` and `/sys`. -endif::[] - - -Example: - -[source,yaml] ------------------------------------------------------------------------------- -system.hostfs: /mount/rootfs ------------------------------------------------------------------------------- diff --git a/docs/legacy/copied-from-beats/docs/shared-securing-beat.asciidoc b/docs/legacy/copied-from-beats/docs/shared-securing-beat.asciidoc deleted file mode 100644 index bdad0cb0bf4..00000000000 --- a/docs/legacy/copied-from-beats/docs/shared-securing-beat.asciidoc +++ /dev/null @@ -1,79 +0,0 @@ -[id="securing-{beatname_lc}"] -= Secure {beatname_uc} - -++++ -Secure -++++ - -IMPORTANT: {deprecation-notice-config} -If you're using {fleet} and the Elastic APM integration, please see <> instead. - -The following topics provide information about securing the {beatname_uc} -process and connecting to a cluster that has {security-features} enabled. - -You can use role-based access control and optionally, API keys to grant {beatname_uc} users access to -secured resources. - -* <> -* <>. - -After privileged users have been created, use authentication to connect to a secured Elastic cluster. - -* <> -ifndef::no-output-logstash[] -* <> -endif::[] - -ifdef::apm-server[] -For secure communication between APM Server and APM Agents, see <>. -endif::[] - -ifndef::serverless[] -ifndef::win_only[] -On Linux, {beatname_uc} can take advantage of secure computing mode to restrict the -system calls that a process can issue. 
- -* <> -endif::[] -endif::[] - -// APM HTTPS information -ifdef::beat-specific-security[] -include::{beat-specific-security}[] -endif::[] - - - -ifdef::apm-server[] -// APM privileges -include::{docdir}/legacy/feature-roles.asciidoc[] -// APM API keys -include::{docdir}/legacy/api-keys.asciidoc[] -endif::[] - -ifndef::apm-server[] -// Beat privileges -include::./security/users.asciidoc[] -// Beat API keys -include::./security/api-keys.asciidoc[] -endif::[] - -// APM Agent security -ifdef::apm-server[] -include::{docdir}/legacy/secure-communication-agents.asciidoc[] -endif::[] - -// Elasticsearch security -include::./https.asciidoc[] - -// Logstash security -ifndef::no-output-logstash[] -include::./shared-ssl-logstash-config.asciidoc[] -endif::[] - -// Linux Seccomp -ifndef::serverless[] -ifndef::win_only[] -include::./security/linux-seccomp.asciidoc[] -endif::[] -endif::[] diff --git a/docs/legacy/copied-from-beats/docs/shared-ssl-config.asciidoc b/docs/legacy/copied-from-beats/docs/shared-ssl-config.asciidoc deleted file mode 100644 index 2f66aa47077..00000000000 --- a/docs/legacy/copied-from-beats/docs/shared-ssl-config.asciidoc +++ /dev/null @@ -1,575 +0,0 @@ -[[configuration-ssl]] -ifndef::apm-server[] -== Configure SSL - -++++ -SSL -++++ -endif::apm-server[] -ifdef::apm-server[] -== SSL output settings - -IMPORTANT: {deprecation-notice-config} -If you're using {fleet} and the Elastic APM integration, please see the {fleet-guide}[{fleet} User Guide] instead. - -You can specify SSL options with any output that supports SSL, like {es}, {ls}, or Kafka. -endif::[] - -ifndef::apm-server[] -You can specify SSL options when you configure: - -* <> that support SSL -ifndef::no_dashboards[] -* the <> -endif::[] -ifeval::["{beatname_lc}"=="heartbeat"] -* <> that support SSL -endif::[] -ifeval::["{beatname_lc}"=="metricbeat"] -* <> that define the host as an HTTP URL -endif::[] -endif::[] - -Example output config with SSL enabled: - -[source,yaml] ----- -output.elasticsearch.hosts: ["https://192.168.1.42:9200"] -output.elasticsearch.ssl.certificate_authorities: ["/etc/pki/root/ca.pem"] -output.elasticsearch.ssl.certificate: "/etc/pki/client/cert.pem" -output.elasticsearch.ssl.key: "/etc/pki/client/cert.key" ----- - -ifndef::no-output-logstash[] -Also see <>. 
-endif::[] - -ifndef::no_kibana[] -Example {kib} endpoint config with SSL enabled: - -[source,yaml] ----- -setup.kibana.host: "https://192.0.2.255:5601" -setup.kibana.ssl.enabled: true -setup.kibana.ssl.certificate_authorities: ["/etc/pki/root/ca.pem"] -setup.kibana.ssl.certificate: "/etc/pki/client/cert.pem" -setup.kibana.ssl.key: "/etc/pki/client/cert.key" ----- -endif::no_kibana[] - -ifeval::["{beatname_lc}"=="heartbeat"] -Example monitor with SSL enabled: - -[source,yaml] -------------------------------------------------------------------------------- -heartbeat.monitors: -- type: tcp - schedule: '@every 5s' - hosts: ["myhost"] - ports: [80, 9200, 5044] - ssl: - certificate_authorities: ['/etc/ca.crt'] - supported_protocols: ["TLSv1.0", "TLSv1.1", "TLSv1.2"] -------------------------------------------------------------------------------- -endif::[] - -ifeval::["{beatname_lc}"=="metricbeat"] -Example module with SSL enabled: - -[source,yaml] ----- -- module: http - namespace: "myservice" - enabled: true - period: 10s - hosts: ["https://localhost"] - path: "/stats" - headers: - Authorization: "Bearer test123" - ssl.verification_mode: "none" ----- -endif::[] - -There are a number of SSL configuration options available to you: - -* <> -* <> -* <> - -[discrete] -[[ssl-common-config]] -=== Common configuration options - -Common SSL configuration options can be used in both client and server configurations. -You can specify the following options in the `ssl` section of each subsystem that -supports SSL. - -[float] -[[enabled]] -==== `enabled` - -To disable SSL configuration, set the value to `false`. The default value is `true`. - -[NOTE] -===== -SSL settings are disabled if either `enabled` is set to `false` or the -`ssl` section is missing. -===== - -[float] -[[supported-protocols]] -==== `supported_protocols` - -List of allowed SSL/TLS versions. If SSL/TLS server decides for protocol versions -not configured, the connection will be dropped during or after the handshake. The -setting is a list of allowed protocol versions: -`SSLv3`, `TLSv1` for TLS version 1.0, `TLSv1.0`, `TLSv1.1`, `TLSv1.2`, and -`TLSv1.3`. - -The default value is `[TLSv1.1, TLSv1.2, TLSv1.3]`. - -[float] -[[cipher-suites]] -==== `cipher_suites` - -The list of cipher suites to use. The first entry has the highest priority. -If this option is omitted, the Go crypto library's https://golang.org/pkg/crypto/tls/[default suites] -are used (recommended). Note that TLS 1.3 cipher suites are not -individually configurable in Go, so they are not included in this list. - -// tag::cipher_suites[] -The following cipher suites are available: - -// lint disable -[options="header"] -|=== -| Cypher | Notes -| ECDHE-ECDSA-AES-128-CBC-SHA | -| ECDHE-ECDSA-AES-128-CBC-SHA256 | TLS 1.2 only. Disabled by default. -| ECDHE-ECDSA-AES-128-GCM-SHA256 | TLS 1.2 only. -| ECDHE-ECDSA-AES-256-CBC-SHA | -| ECDHE-ECDSA-AES-256-GCM-SHA384 | TLS 1.2 only. -| ECDHE-ECDSA-CHACHA20-POLY1305 | TLS 1.2 only. -| ECDHE-ECDSA-RC4-128-SHA | Disabled by default. RC4 not recommended. -| ECDHE-RSA-3DES-CBC3-SHA | -| ECDHE-RSA-AES-128-CBC-SHA | -| ECDHE-RSA-AES-128-CBC-SHA256 | TLS 1.2 only. Disabled by default. -| ECDHE-RSA-AES-128-GCM-SHA256 | TLS 1.2 only. -| ECDHE-RSA-AES-256-CBC-SHA | -| ECDHE-RSA-AES-256-GCM-SHA384 | TLS 1.2 only. -| ECDHE-RSA-CHACHA20-POLY1205 | TLS 1.2 only. -| ECDHE-RSA-RC4-128-SHA | Disabled by default. RC4 not recommended. -| RSA-3DES-CBC3-SHA | -| RSA-AES-128-CBC-SHA | -| RSA-AES-128-CBC-SHA256 | TLS 1.2 only. Disabled by default. 
-| RSA-AES-128-GCM-SHA256 | TLS 1.2 only. -| RSA-AES-256-CBC-SHA | -| RSA-AES-256-GCM-SHA384 | TLS 1.2 only. -| RSA-RC4-128-SHA | Disabled by default. RC4 not recommended. -|=== -// lint enable - -Here is a list of acronyms used in defining the cipher suites: - -* 3DES: - Cipher suites using triple DES - -* AES-128/256: - Cipher suites using AES with 128/256-bit keys. - -* CBC: - Cipher using Cipher Block Chaining as block cipher mode. - -* ECDHE: - Cipher suites using Elliptic Curve Diffie-Hellman (DH) ephemeral key exchange. - -* ECDSA: - Cipher suites using Elliptic Curve Digital Signature Algorithm for authentication. - -* GCM: - Galois/Counter mode is used for symmetric key cryptography. - -* RC4: - Cipher suites using RC4. - -* RSA: - Cipher suites using RSA. - -* SHA, SHA256, SHA384: - Cipher suites using SHA-1, SHA-256 or SHA-384. -// end::cipher_suites[] - -[float] -[[curve-types]] -==== `curve_types` - -The list of curve types for ECDHE (Elliptic Curve Diffie-Hellman ephemeral key exchange). - -The following elliptic curve types are available: - -* P-256 -* P-384 -* P-521 -* X25519 - -[float] -[[ca-sha256]] -==== `ca_sha256` - -This configures a certificate pin that you can use to ensure that a specific certificate is part of the verified chain. - -The pin is a base64 encoded string of the SHA-256 of the certificate. - -NOTE: This check is not a replacement for the normal SSL validation, but it adds additional validation. -If this option is used with `verification_mode` set to `none`, the check will always fail because -it will not receive any verified chains. - -[discrete] -[[ssl-client-config]] -=== Client configuration options - -You can specify the following options in the `ssl` section of each subsystem that -supports SSL. - -[float] -[[client-certificate-authorities]] -==== `certificate_authorities` - -The list of root certificates for verifications is required. If `certificate_authorities` is empty or not set, the -system keystore is used. If `certificate_authorities` is self-signed, the host system -needs to trust that CA cert as well. 
- -By default you can specify a list of files that +{beatname_lc}+ will read, but you -can also embed a certificate directly in the `YAML` configuration: - -[source,yaml] ----- -certificate_authorities: - - | - -----BEGIN CERTIFICATE----- - MIIDCjCCAfKgAwIBAgITJ706Mu2wJlKckpIvkWxEHvEyijANBgkqhkiG9w0BAQsF - ADAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwIBcNMTkwNzIyMTkyOTA0WhgPMjExOTA2 - MjgxOTI5MDRaMBQxEjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEB - BQADggEPADCCAQoCggEBANce58Y/JykI58iyOXpxGfw0/gMvF0hUQAcUrSMxEO6n - fZRA49b4OV4SwWmA3395uL2eB2NB8y8qdQ9muXUdPBWE4l9rMZ6gmfu90N5B5uEl - 94NcfBfYOKi1fJQ9i7WKhTjlRkMCgBkWPkUokvBZFRt8RtF7zI77BSEorHGQCk9t - /D7BS0GJyfVEhftbWcFEAG3VRcoMhF7kUzYwp+qESoriFRYLeDWv68ZOvG7eoWnP - PsvZStEVEimjvK5NSESEQa9xWyJOmlOKXhkdymtcUd/nXnx6UTCFgnkgzSdTWV41 - CI6B6aJ9svCTI2QuoIq2HxX/ix7OvW1huVmcyHVxyUECAwEAAaNTMFEwHQYDVR0O - BBYEFPwN1OceFGm9v6ux8G+DZ3TUDYxqMB8GA1UdIwQYMBaAFPwN1OceFGm9v6ux - 8G+DZ3TUDYxqMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAG5D - 874A4YI7YUwOVsVAdbWtgp1d0zKcPRR+r2OdSbTAV5/gcS3jgBJ3i1BN34JuDVFw - 3DeJSYT3nxy2Y56lLnxDeF8CUTUtVQx3CuGkRg1ouGAHpO/6OqOhwLLorEmxi7tA - H2O8mtT0poX5AnOAhzVy7QW0D/k4WaoLyckM5hUa6RtvgvLxOwA0U+VGurCDoctu - 8F4QOgTAWyh8EZIwaKCliFRSynDpv3JTUwtfZkxo6K6nce1RhCWFAsMvDZL8Dgc0 - yvgJ38BRsFOtkRuAGSf6ZUwTO8JJRRIFnpUzXflAnGivK9M13D5GEQMmIl6U9Pvk - sxSmbIUfc2SGJGCJD4I= - -----END CERTIFICATE----- ----- - -[float] -[[client-certificate]] -==== `certificate: "/etc/pki/client/cert.pem"` - -The path to the certificate for SSL client authentication is only required if -`client_authentication` is specified. If the certificate -is not specified, client authentication is not available. The connection -might fail if the server requests client authentication. If the SSL server does not -require client authentication, the certificate will be loaded, but not requested or used -by the server. - -When this option is configured, the <> option is also required. -The certificate option support embedding of the certificate: - -[source,yaml] ----- -certificate: | - -----BEGIN CERTIFICATE----- - MIIDCjCCAfKgAwIBAgITJ706Mu2wJlKckpIvkWxEHvEyijANBgkqhkiG9w0BAQsF - ADAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwIBcNMTkwNzIyMTkyOTA0WhgPMjExOTA2 - MjgxOTI5MDRaMBQxEjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEB - BQADggEPADCCAQoCggEBANce58Y/JykI58iyOXpxGfw0/gMvF0hUQAcUrSMxEO6n - fZRA49b4OV4SwWmA3395uL2eB2NB8y8qdQ9muXUdPBWE4l9rMZ6gmfu90N5B5uEl - 94NcfBfYOKi1fJQ9i7WKhTjlRkMCgBkWPkUokvBZFRt8RtF7zI77BSEorHGQCk9t - /D7BS0GJyfVEhftbWcFEAG3VRcoMhF7kUzYwp+qESoriFRYLeDWv68ZOvG7eoWnP - PsvZStEVEimjvK5NSESEQa9xWyJOmlOKXhkdymtcUd/nXnx6UTCFgnkgzSdTWV41 - CI6B6aJ9svCTI2QuoIq2HxX/ix7OvW1huVmcyHVxyUECAwEAAaNTMFEwHQYDVR0O - BBYEFPwN1OceFGm9v6ux8G+DZ3TUDYxqMB8GA1UdIwQYMBaAFPwN1OceFGm9v6ux - 8G+DZ3TUDYxqMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAG5D - 874A4YI7YUwOVsVAdbWtgp1d0zKcPRR+r2OdSbTAV5/gcS3jgBJ3i1BN34JuDVFw - 3DeJSYT3nxy2Y56lLnxDeF8CUTUtVQx3CuGkRg1ouGAHpO/6OqOhwLLorEmxi7tA - H2O8mtT0poX5AnOAhzVy7QW0D/k4WaoLyckM5hUa6RtvgvLxOwA0U+VGurCDoctu - 8F4QOgTAWyh8EZIwaKCliFRSynDpv3JTUwtfZkxo6K6nce1RhCWFAsMvDZL8Dgc0 - yvgJ38BRsFOtkRuAGSf6ZUwTO8JJRRIFnpUzXflAnGivK9M13D5GEQMmIl6U9Pvk - sxSmbIUfc2SGJGCJD4I= - -----END CERTIFICATE----- ----- - -[float] -[[client-key]] -==== `key: "/etc/pki/client/cert.key"` - -The client certificate key used for client authentication and is only required -if `client_authentication` is configured. 
The key option support embedding of the private key: - -[source,yaml] ----- -key: | - -----BEGIN PRIVATE KEY----- - MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDXHufGPycpCOfI - sjl6cRn8NP4DLxdIVEAHFK0jMRDup32UQOPW+DleEsFpgN9/ebi9ngdjQfMvKnUP - Zrl1HTwVhOJfazGeoJn7vdDeQebhJfeDXHwX2DiotXyUPYu1ioU45UZDAoAZFj5F - KJLwWRUbfEbRe8yO+wUhKKxxkApPbfw+wUtBicn1RIX7W1nBRABt1UXKDIRe5FM2 - MKfqhEqK4hUWC3g1r+vGTrxu3qFpzz7L2UrRFRIpo7yuTUhEhEGvcVsiTppTil4Z - HcprXFHf5158elEwhYJ5IM0nU1leNQiOgemifbLwkyNkLqCKth8V/4sezr1tYblZ - nMh1cclBAgMBAAECggEBAKdP5jyOicqknoG9/G564RcDsDyRt64NuO7I6hBg7SZx - Jn7UKWDdFuFP/RYtoabn6QOxkVVlydp5Typ3Xu7zmfOyss479Q/HIXxmmbkD0Kp0 - eRm2KN3y0b6FySsS40KDRjKGQCuGGlNotW3crMw6vOvvsLTlcKgUHF054UVCHoK/ - Piz7igkDU7NjvJeha53vXL4hIjb10UtJNaGPxIyFLYRZdRPyyBJX7Yt3w8dgz8WM - epOPu0dq3bUrY3WQXcxKZo6sQjE1h7kdl4TNji5jaFlvD01Y8LnyG0oThOzf0tve - Gaw+kuy17gTGZGMIfGVcdeb+SlioXMAAfOps+mNIwTECgYEA/gTO8W0hgYpOQJzn - BpWkic3LAoBXWNpvsQkkC3uba8Fcps7iiEzotXGfwYcb5Ewf5O3Lrz1EwLj7GTW8 - VNhB3gb7bGOvuwI/6vYk2/dwo84bwW9qRWP5hqPhNZ2AWl8kxmZgHns6WTTxpkRU - zrfZ5eUrBDWjRU2R8uppgRImsxMCgYEA2MxuL/C/Ko0d7XsSX1kM4JHJiGpQDvb5 - GUrlKjP/qVyUysNF92B9xAZZHxxfPWpdfGGBynhw7X6s+YeIoxTzFPZVV9hlkpAA - 5igma0n8ZpZEqzttjVdpOQZK8o/Oni/Q2S10WGftQOOGw5Is8+LY30XnLvHBJhO7 - TKMurJ4KCNsCgYAe5TDSVmaj3dGEtFC5EUxQ4nHVnQyCpxa8npL+vor5wSvmsfUF - hO0s3GQE4sz2qHecnXuPldEd66HGwC1m2GKygYDk/v7prO1fQ47aHi9aDQB9N3Li - e7Vmtdn3bm+lDjtn0h3Qt0YygWj+wwLZnazn9EaWHXv9OuEMfYxVgYKpdwKBgEze - Zy8+WDm5IWRjn8cI5wT1DBT/RPWZYgcyxABrwXmGZwdhp3wnzU/kxFLAl5BKF22T - kRZ+D+RVZvVutebE9c937BiilJkb0AXLNJwT9pdVLnHcN2LHHHronUhV7vetkop+ - kGMMLlY0lkLfoGq1AxpfSbIea9KZam6o6VKxEnPDAoGAFDCJm+ZtsJK9nE5GEMav - NHy+PwkYsHhbrPl4dgStTNXLenJLIJ+Ke0Pcld4ZPfYdSyu/Tv4rNswZBNpNsW9K - 0NwJlyMBfayoPNcJKXrH/csJY7hbKviAHr1eYy9/8OL0dHf85FV+9uY5YndLcsDc - nygO9KTJuUiBrLr0AHEnqko= - -----END PRIVATE KEY----- ----- - -[float] -[[client-key-passphrase]] -==== `key_passphrase` - -The passphrase used to decrypt an encrypted key stored in the configured `key` file. - - -[float] -[[client-verification-mode]] -==== `verification_mode` - -Controls the verification of server certificates. Valid values are: - -`full`:: -Verifies that the provided certificate is signed by a trusted -authority (CA) and also verifies that the server's hostname (or IP address) -matches the names identified within the certificate. - -`strict`:: -Verifies that the provided certificate is signed by a trusted -authority (CA) and also verifies that the server's hostname (or IP address) -matches the names identified within the certificate. If the Subject Alternative -Name is empty, it returns an error. - -`certificate`:: -Verifies that the provided certificate is signed by a -trusted authority (CA), but does not perform any hostname verification. - -`none`:: -Performs _no verification_ of the server's certificate. This -mode disables many of the security benefits of SSL/TLS and should only be used -after cautious consideration. It is primarily intended as a temporary -diagnostic mechanism when attempting to resolve TLS errors; its use in -production environments is strongly discouraged. -+ -The default value is `full`. - -[discrete] -[[ssl-server-config]] -=== Server configuration options - -You can specify the following options in the `ssl` section of each subsystem that -supports SSL. - -[float] -[[server-certificate-authorities]] -==== `certificate_authorities` - -The list of root certificates for client verifications is only required if -`client_authentication` is configured. 
If `certificate_authorities` is empty or not set, and -`client_authentication` is configured, the system keystore is used. - -If `certificate_authorities` is self-signed, the host system needs to trust that CA cert as well. -By default you can specify a list of files that +{beatname_lc}+ will read, but you can also embed a certificate -directly in the `YAML` configuration: - -[source,yaml] ----- -certificate_authorities: - - | - -----BEGIN CERTIFICATE----- - MIIDCjCCAfKgAwIBAgITJ706Mu2wJlKckpIvkWxEHvEyijANBgkqhkiG9w0BAQsF - ADAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwIBcNMTkwNzIyMTkyOTA0WhgPMjExOTA2 - MjgxOTI5MDRaMBQxEjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEB - BQADggEPADCCAQoCggEBANce58Y/JykI58iyOXpxGfw0/gMvF0hUQAcUrSMxEO6n - fZRA49b4OV4SwWmA3395uL2eB2NB8y8qdQ9muXUdPBWE4l9rMZ6gmfu90N5B5uEl - 94NcfBfYOKi1fJQ9i7WKhTjlRkMCgBkWPkUokvBZFRt8RtF7zI77BSEorHGQCk9t - /D7BS0GJyfVEhftbWcFEAG3VRcoMhF7kUzYwp+qESoriFRYLeDWv68ZOvG7eoWnP - PsvZStEVEimjvK5NSESEQa9xWyJOmlOKXhkdymtcUd/nXnx6UTCFgnkgzSdTWV41 - CI6B6aJ9svCTI2QuoIq2HxX/ix7OvW1huVmcyHVxyUECAwEAAaNTMFEwHQYDVR0O - BBYEFPwN1OceFGm9v6ux8G+DZ3TUDYxqMB8GA1UdIwQYMBaAFPwN1OceFGm9v6ux - 8G+DZ3TUDYxqMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAG5D - 874A4YI7YUwOVsVAdbWtgp1d0zKcPRR+r2OdSbTAV5/gcS3jgBJ3i1BN34JuDVFw - 3DeJSYT3nxy2Y56lLnxDeF8CUTUtVQx3CuGkRg1ouGAHpO/6OqOhwLLorEmxi7tA - H2O8mtT0poX5AnOAhzVy7QW0D/k4WaoLyckM5hUa6RtvgvLxOwA0U+VGurCDoctu - 8F4QOgTAWyh8EZIwaKCliFRSynDpv3JTUwtfZkxo6K6nce1RhCWFAsMvDZL8Dgc0 - yvgJ38BRsFOtkRuAGSf6ZUwTO8JJRRIFnpUzXflAnGivK9M13D5GEQMmIl6U9Pvk - sxSmbIUfc2SGJGCJD4I= - -----END CERTIFICATE----- ----- - -[float] -[[server-certificate]] -==== `certificate: "/etc/pki/server/cert.pem"` - -For server authentication, the path to the SSL authentication certificate must -be specified for TLS. If the certificate is not specified, startup will fail. - -When this option is configured, the <> option is also required. -The certificate option support embedding of the certificate: - -[source,yaml] ----- -certificate: | - -----BEGIN CERTIFICATE----- - MIIDCjCCAfKgAwIBAgITJ706Mu2wJlKckpIvkWxEHvEyijANBgkqhkiG9w0BAQsF - ADAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwIBcNMTkwNzIyMTkyOTA0WhgPMjExOTA2 - MjgxOTI5MDRaMBQxEjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEB - BQADggEPADCCAQoCggEBANce58Y/JykI58iyOXpxGfw0/gMvF0hUQAcUrSMxEO6n - fZRA49b4OV4SwWmA3395uL2eB2NB8y8qdQ9muXUdPBWE4l9rMZ6gmfu90N5B5uEl - 94NcfBfYOKi1fJQ9i7WKhTjlRkMCgBkWPkUokvBZFRt8RtF7zI77BSEorHGQCk9t - /D7BS0GJyfVEhftbWcFEAG3VRcoMhF7kUzYwp+qESoriFRYLeDWv68ZOvG7eoWnP - PsvZStEVEimjvK5NSESEQa9xWyJOmlOKXhkdymtcUd/nXnx6UTCFgnkgzSdTWV41 - CI6B6aJ9svCTI2QuoIq2HxX/ix7OvW1huVmcyHVxyUECAwEAAaNTMFEwHQYDVR0O - BBYEFPwN1OceFGm9v6ux8G+DZ3TUDYxqMB8GA1UdIwQYMBaAFPwN1OceFGm9v6ux - 8G+DZ3TUDYxqMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAG5D - 874A4YI7YUwOVsVAdbWtgp1d0zKcPRR+r2OdSbTAV5/gcS3jgBJ3i1BN34JuDVFw - 3DeJSYT3nxy2Y56lLnxDeF8CUTUtVQx3CuGkRg1ouGAHpO/6OqOhwLLorEmxi7tA - H2O8mtT0poX5AnOAhzVy7QW0D/k4WaoLyckM5hUa6RtvgvLxOwA0U+VGurCDoctu - 8F4QOgTAWyh8EZIwaKCliFRSynDpv3JTUwtfZkxo6K6nce1RhCWFAsMvDZL8Dgc0 - yvgJ38BRsFOtkRuAGSf6ZUwTO8JJRRIFnpUzXflAnGivK9M13D5GEQMmIl6U9Pvk - sxSmbIUfc2SGJGCJD4I= - -----END CERTIFICATE----- ----- - -[float] -[[server-key]] -==== `key: "/etc/pki/server/cert.key"` - -The server certificate key used for authentication is required. 
-The `key` option supports embedding of the private key:
-
-[source,yaml]
-----
-key: |
-  -----BEGIN PRIVATE KEY-----
-  MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDXHufGPycpCOfI
-  sjl6cRn8NP4DLxdIVEAHFK0jMRDup32UQOPW+DleEsFpgN9/ebi9ngdjQfMvKnUP
-  Zrl1HTwVhOJfazGeoJn7vdDeQebhJfeDXHwX2DiotXyUPYu1ioU45UZDAoAZFj5F
-  KJLwWRUbfEbRe8yO+wUhKKxxkApPbfw+wUtBicn1RIX7W1nBRABt1UXKDIRe5FM2
-  MKfqhEqK4hUWC3g1r+vGTrxu3qFpzz7L2UrRFRIpo7yuTUhEhEGvcVsiTppTil4Z
-  HcprXFHf5158elEwhYJ5IM0nU1leNQiOgemifbLwkyNkLqCKth8V/4sezr1tYblZ
-  nMh1cclBAgMBAAECggEBAKdP5jyOicqknoG9/G564RcDsDyRt64NuO7I6hBg7SZx
-  Jn7UKWDdFuFP/RYtoabn6QOxkVVlydp5Typ3Xu7zmfOyss479Q/HIXxmmbkD0Kp0
-  eRm2KN3y0b6FySsS40KDRjKGQCuGGlNotW3crMw6vOvvsLTlcKgUHF054UVCHoK/
-  Piz7igkDU7NjvJeha53vXL4hIjb10UtJNaGPxIyFLYRZdRPyyBJX7Yt3w8dgz8WM
-  epOPu0dq3bUrY3WQXcxKZo6sQjE1h7kdl4TNji5jaFlvD01Y8LnyG0oThOzf0tve
-  Gaw+kuy17gTGZGMIfGVcdeb+SlioXMAAfOps+mNIwTECgYEA/gTO8W0hgYpOQJzn
-  BpWkic3LAoBXWNpvsQkkC3uba8Fcps7iiEzotXGfwYcb5Ewf5O3Lrz1EwLj7GTW8
-  VNhB3gb7bGOvuwI/6vYk2/dwo84bwW9qRWP5hqPhNZ2AWl8kxmZgHns6WTTxpkRU
-  zrfZ5eUrBDWjRU2R8uppgRImsxMCgYEA2MxuL/C/Ko0d7XsSX1kM4JHJiGpQDvb5
-  GUrlKjP/qVyUysNF92B9xAZZHxxfPWpdfGGBynhw7X6s+YeIoxTzFPZVV9hlkpAA
-  5igma0n8ZpZEqzttjVdpOQZK8o/Oni/Q2S10WGftQOOGw5Is8+LY30XnLvHBJhO7
-  TKMurJ4KCNsCgYAe5TDSVmaj3dGEtFC5EUxQ4nHVnQyCpxa8npL+vor5wSvmsfUF
-  hO0s3GQE4sz2qHecnXuPldEd66HGwC1m2GKygYDk/v7prO1fQ47aHi9aDQB9N3Li
-  e7Vmtdn3bm+lDjtn0h3Qt0YygWj+wwLZnazn9EaWHXv9OuEMfYxVgYKpdwKBgEze
-  Zy8+WDm5IWRjn8cI5wT1DBT/RPWZYgcyxABrwXmGZwdhp3wnzU/kxFLAl5BKF22T
-  kRZ+D+RVZvVutebE9c937BiilJkb0AXLNJwT9pdVLnHcN2LHHHronUhV7vetkop+
-  kGMMLlY0lkLfoGq1AxpfSbIea9KZam6o6VKxEnPDAoGAFDCJm+ZtsJK9nE5GEMav
-  NHy+PwkYsHhbrPl4dgStTNXLenJLIJ+Ke0Pcld4ZPfYdSyu/Tv4rNswZBNpNsW9K
-  0NwJlyMBfayoPNcJKXrH/csJY7hbKviAHr1eYy9/8OL0dHf85FV+9uY5YndLcsDc
-  nygO9KTJuUiBrLr0AHEnqko=
-  -----END PRIVATE KEY-----
-----
-
-[float]
-[[server-key-passphrase]]
-==== `key_passphrase`
-
-The passphrase is used to decrypt an encrypted key stored in the configured `key` file.
-
-[float]
-[[server-verification-mode]]
-==== `verification_mode`
-
-Controls the verification of client certificates. Valid values are:
-
-`full`::
-Verifies that the provided certificate is signed by a trusted
-authority (CA) and also verifies that the server's hostname (or IP address)
-matches the names identified within the certificate.
-
-`strict`::
-Verifies that the provided certificate is signed by a trusted
-authority (CA) and also verifies that the server's hostname (or IP address)
-matches the names identified within the certificate. If the Subject Alternative
-Name is empty, it returns an error.
-
-`certificate`::
-Verifies that the provided certificate is signed by a
-trusted authority (CA), but does not perform any hostname verification.
-
-`none`::
-Performs _no verification_ of the client's certificate. This
-mode disables many of the security benefits of SSL/TLS and should only be used
-after cautious consideration. It is primarily intended as a temporary
-diagnostic mechanism when attempting to resolve TLS errors; its use in
-production environments is strongly discouraged.
-+
-The default value is `full`.
-
-[float]
-[[server-renegotiation]]
-==== `renegotiation`
-
-This configures what types of TLS renegotiation are supported. The valid options
-are:
-
-`never`::
-Disables renegotiation.
-
-`once`::
-Allows a remote server to request renegotiation once per connection.
-
-`freely`::
-Allows a remote server to request renegotiation repeatedly.
-+
-The default value is `never`.
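-
-As a brief illustration, the server-side options above can be combined into a
-minimal TLS listener configuration. This sketch assumes an APM Server context,
-where the server's TLS settings live under the `apm-server.ssl` section; the
-certificate paths are the placeholder paths used throughout this page:
-
-[source,yaml]
-----
-apm-server.ssl.enabled: true
-apm-server.ssl.certificate: "/etc/pki/server/cert.pem"
-apm-server.ssl.key: "/etc/pki/server/cert.key"
-----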
- -ifeval::["{beatname_lc}" == "filebeat"] -[float] -[[server-client-renegotiation]] -==== `client_authentication` - -The type of client authentication mode. When `certificate_authorities` is set, it -defaults to `required`. Otherwise, it defaults to `none`. - -The valid options are: - -`none`:: -Disables client authentication. - -`optional`:: -When a client certificate is supplied, the server will verify it. - -`required`:: -Will require clients to provide a valid certificate. -endif::[] diff --git a/docs/legacy/copied-from-beats/docs/shared-ssl-logstash-config.asciidoc b/docs/legacy/copied-from-beats/docs/shared-ssl-logstash-config.asciidoc deleted file mode 100644 index f6ae9294868..00000000000 --- a/docs/legacy/copied-from-beats/docs/shared-ssl-logstash-config.asciidoc +++ /dev/null @@ -1,146 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. -//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc. -//// Use the following include to pull this content into a doc file: -//// include::../../libbeat/docs/shared-ssl-logstash-config.asciidoc[] -////////////////////////////////////////////////////////////////////////// - -[role="xpack"] -[[configuring-ssl-logstash]] -== Secure communication with {ls} - -IMPORTANT: {deprecation-notice-config} - -You can use SSL mutual authentication to secure connections between {beatname_uc} and {ls}. This ensures that -{beatname_uc} sends encrypted data to trusted {ls} servers only, and that the {ls} server receives data from -trusted {beatname_uc} clients only. - -To use SSL mutual authentication: - -. Create a certificate authority (CA) and use it to sign the certificates that you plan to use for -{beatname_uc} and {ls}. Creating a correct SSL/TLS infrastructure is outside the scope of this -document. There are many online resources available that describe how to create certificates. -+ -TIP: If you are using {security-features}, you can use the -{ref}/certutil.html[`elasticsearch-certutil` tool] to generate certificates. - -. Configure {beatname_uc} to use SSL. In the +{beatname_lc}.yml+ config file, specify the following settings under -`ssl`: -+ -* `certificate_authorities`: Configures {beatname_uc} to trust any certificates signed by the specified CA. If -`certificate_authorities` is empty or not set, the trusted certificate authorities of the host system are used. - -* `certificate` and `key`: Specifies the certificate and key that {beatname_uc} uses to authenticate with -{ls}. -+ -For example: -+ -[source,yaml] ------------------------------------------------------------------------------- -output.logstash: - hosts: ["logs.mycompany.com:5044"] - ssl.certificate_authorities: ["/etc/ca.crt"] - ssl.certificate: "/etc/client.crt" - ssl.key: "/etc/client.key" ------------------------------------------------------------------------------- -+ -For more information about these configuration options, see <>. - -. Configure {ls} to use SSL. In the {ls} config file, specify the following settings for the https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html[{beats} input plugin for {ls}]: -+ -* `ssl`: When set to true, enables {ls} to use SSL/TLS. 
-* `ssl_certificate_authorities`: Configures {ls} to trust any certificates signed by the specified CA. -* `ssl_certificate` and `ssl_key`: Specify the certificate and key that {ls} uses to authenticate with the client. -* `ssl_verify_mode`: Specifies whether the {ls} server verifies the client certificate against the CA. You -need to specify either `peer` or `force_peer` to make the server ask for the certificate and validate it. If you -specify `force_peer`, and {beatname_uc} doesn't provide a certificate, the {ls} connection will be closed. If you choose not to use {ref}/certutil.html[`certutil`], the certificates that you obtain must allow for both `clientAuth` and `serverAuth` if the extended key usage extension is present. -+ -For example: -+ -[source,json] ------------------------------------------------------------------------------- -input { - beats { - port => 5044 - ssl => true - ssl_certificate_authorities => ["/etc/ca.crt"] - ssl_certificate => "/etc/server.crt" - ssl_key => "/etc/server.key" - ssl_verify_mode => "force_peer" - } -} ------------------------------------------------------------------------------- -+ -For more information about these options, see the -https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html[documentation for the {beats} input plugin]. - -[float] -[[testing-ssl-logstash]] -=== Validate the {ls} server's certificate - -Before running {beatname_uc}, you should validate the {ls} server's certificate. You can use `curl` to validate the certificate even though the protocol used to communicate with {ls} is not based on HTTP. For example: - -[source,shell] ------------------------------------------------------------------------------- -curl -v --cacert ca.crt https://logs.mycompany.com:5044 ------------------------------------------------------------------------------- - -If the test is successful, you'll receive an empty response error: - -[source,shell] ------------------------------------------------------------------------------- -* Rebuilt URL to: https://logs.mycompany.com:5044/ -* Trying 192.168.99.100... -* Connected to logs.mycompany.com (192.168.99.100) port 5044 (#0) -* TLS 1.2 connection using TLS_DHE_RSA_WITH_AES_256_CBC_SHA -* Server certificate: logs.mycompany.com -* Server certificate: mycompany.com -> GET / HTTP/1.1 -> Host: logs.mycompany.com:5044 -> User-Agent: curl/7.43.0 -> Accept: */* -> -* Empty reply from server -* Connection #0 to host logs.mycompany.com left intact -curl: (52) Empty reply from server ------------------------------------------------------------------------------- - -The following example uses the IP address rather than the hostname to validate the certificate: - -[source,shell] ------------------------------------------------------------------------------- -curl -v --cacert ca.crt https://192.168.99.100:5044 ------------------------------------------------------------------------------- - -Validation for this test fails because the certificate is not valid for the specified IP address. It's only valid for the `logs.mycompany.com`, the hostname that appears in the Subject field of the certificate. - -[source,shell] ------------------------------------------------------------------------------- -* Rebuilt URL to: https://192.168.99.100:5044/ -* Trying 192.168.99.100... -* Connected to 192.168.99.100 (192.168.99.100) port 5044 (#0) -* WARNING: using IP address, SNI is being disabled by the OS. 
-* SSL: certificate verification failed (result: 5)
-* Closing connection 0
-curl: (51) SSL: certificate verification failed (result: 5)
-------------------------------------------------------------------------------
-
-See the <> for info about resolving this issue.
-
-[float]
-=== Test the {beatname_uc} to {ls} connection
-
-If you have {beatname_uc} running as a service, first stop the service. Then test your setup by running {beatname_uc} in
-the foreground so you can quickly see any errors that occur:
-
-["source","sh",subs="attributes,callouts"]
-------------------------------------------------------------------------------
-{beatname_lc} -c {beatname_lc}.yml -e -v
-------------------------------------------------------------------------------
-
-Any errors will be printed to the console. See the <> for info about
-resolving common errors.
diff --git a/docs/legacy/copied-from-beats/docs/shared-systemd.asciidoc b/docs/legacy/copied-from-beats/docs/shared-systemd.asciidoc
deleted file mode 100644
index b6d649f2965..00000000000
--- a/docs/legacy/copied-from-beats/docs/shared-systemd.asciidoc
+++ /dev/null
@@ -1,107 +0,0 @@
-[[running-with-systemd]]
-=== {beatname_uc} and systemd
-
-IMPORTANT: {deprecation-notice-config}
-
-The DEB and RPM packages include a service unit for Linux systems with
-systemd. On these systems, you can manage {beatname_uc} by using the usual
-systemd commands.
-
-ifdef::apm-server[]
-We recommend running the {beatname_pkg} process as a non-root user; this is
-the default setup for {beatname_uc}'s DEB and RPM installations.
-endif::apm-server[]
-
-[float]
-==== Start and stop {beatname_uc}
-
-Use `systemctl` to start or stop {beatname_uc}:
-
-["source", "sh", subs="attributes"]
-------------------------------------------------
-sudo systemctl start {beatname_pkg}
-------------------------------------------------
-
-["source", "sh", subs="attributes"]
-------------------------------------------------
-sudo systemctl stop {beatname_pkg}
-------------------------------------------------
-
-By default, the {beatname_uc} service starts automatically when the system
-boots. To enable or disable auto start, use:
-
-["source", "sh", subs="attributes"]
-------------------------------------------------
-sudo systemctl enable {beatname_pkg}
-------------------------------------------------
-
-["source", "sh", subs="attributes"]
-------------------------------------------------
-sudo systemctl disable {beatname_pkg}
-------------------------------------------------
-
-[float]
-==== {beatname_uc} status and logs
-
-To get the service status, use `systemctl`:
-
-["source", "sh", subs="attributes"]
-------------------------------------------------
-systemctl status {beatname_pkg}
-------------------------------------------------
-
-Logs are stored by default in journald. To view the logs, use `journalctl`:
-
-["source", "sh", subs="attributes"]
-------------------------------------------------
-journalctl -u {beatname_pkg}.service
-------------------------------------------------
-
-[float]
-=== Customize systemd unit for {beatname_uc}
-
-The systemd service unit file includes environment variables, such as
-`BEAT_LOG_OPTS`, that you can override to change the default options.
-
-To override these variables, create a drop-in unit file in the
-+/etc/systemd/system/{beatname_pkg}.service.d+ directory.
- -For example a file with the following content placed in -+/etc/systemd/system/{beatname_pkg}.service.d/debug.conf+ -would override `BEAT_LOG_OPTS` to enable debug for {es} output. - -["source", "systemd", subs="attributes"] ------------------------------------------------- -[Service] -Environment="BEAT_LOG_OPTS=-d elasticsearch" ------------------------------------------------- - -To apply your changes, reload the systemd configuration and restart -the service: - -["source", "sh", subs="attributes"] ------------------------------------------------- -systemctl daemon-reload -systemctl restart {beatname_pkg} ------------------------------------------------- - -NOTE: It is recommended that you use a configuration management tool to -include drop-in unit files. If you need to add a drop-in manually, use -+systemctl edit {beatname_pkg}.service+. - -ifdef::apm-server[] -include::{docdir}/legacy/config-ownership.asciidoc[] -endif::apm-server[] diff --git a/docs/legacy/copied-from-beats/docs/shared/configuring-intro.asciidoc b/docs/legacy/copied-from-beats/docs/shared/configuring-intro.asciidoc deleted file mode 100644 index 82812c34bd1..00000000000 --- a/docs/legacy/copied-from-beats/docs/shared/configuring-intro.asciidoc +++ /dev/null @@ -1,19 +0,0 @@ - -ifndef::apm-server[] -TIP: To get started quickly, read <<{beatname_lc}-installation-configuration>>. -endif::[] - -To configure {beatname_uc}, edit the configuration file. The default -configuration file is called +{beatname_lc}.yml+. The location of the file -varies by platform. To locate the file, see <>. - -ifndef::apm-server[] -There’s also a full example configuration file called +{beatname_lc}.reference.yml+ -that shows all non-deprecated options. -endif::[] - -TIP: See the -{beats-ref}/config-file-format.html[Config File Format] for more about the -structure of the config file. - -The following topics describe how to configure {beatname_uc}: diff --git a/docs/legacy/copied-from-beats/outputs/codec/docs/codec.asciidoc b/docs/legacy/copied-from-beats/outputs/codec/docs/codec.asciidoc deleted file mode 100644 index 2f95ad7a9d3..00000000000 --- a/docs/legacy/copied-from-beats/outputs/codec/docs/codec.asciidoc +++ /dev/null @@ -1,33 +0,0 @@ -[[configuration-output-codec]] -=== Change the output codec - -IMPORTANT: {deprecation-notice-config} - -For outputs that do not require a specific encoding, you can change the encoding -by using the codec configuration. You can specify either the `json` or `format` -codec. By default the `json` codec is used. - -*`json.pretty`*: If `pretty` is set to true, events will be nicely formatted. The default is false. - -*`json.escape_html`*: If `escape_html` is set to true, HTML symbols will be escaped in strings. The default is false. - -Example configuration that uses the `json` codec with pretty printing enabled to write events to the console: - -[source,yaml] ------------------------------------------------------------------------------- -output.console: - codec.json: - pretty: true - escape_html: false ------------------------------------------------------------------------------- - -*`format.string`*: Configurable format string used to create a custom formatted message. 
-
-Example configuration that uses the `format` codec to print the event's timestamp and `message` field to the console:
-
-[source,yaml]
-------------------------------------------------------------------------------
-output.console:
-  codec.format:
-    string: '%{[@timestamp]} %{[message]}'
-------------------------------------------------------------------------------
diff --git a/docs/legacy/copied-from-beats/outputs/console/docs/console.asciidoc b/docs/legacy/copied-from-beats/outputs/console/docs/console.asciidoc
deleted file mode 100644
index be6e0adac51..00000000000
--- a/docs/legacy/copied-from-beats/outputs/console/docs/console.asciidoc
+++ /dev/null
@@ -1,55 +0,0 @@
-[[console-output]]
-=== Configure the Console output
-
-++++
-Console
-++++
-
-IMPORTANT: {deprecation-notice-config}
-
-The Console output writes events in JSON format to stdout.
-
-WARNING: The Console output should be used only for debugging issues as it can produce a large amount of logging data.
-
-To use this output, edit the {beatname_uc} configuration file to disable the {es}
-output by commenting it out, and enable the console output by adding `output.console`.
-
-Example configuration:
-
-[source,yaml]
-------------------------------------------------------------------------------
-output.console:
-  pretty: true
-------------------------------------------------------------------------------
-
-==== Configuration options
-
-You can specify the following `output.console` options in the +{beatname_lc}.yml+ config file:
-
-===== `enabled`
-
-The `enabled` config is a boolean setting to enable or disable the output. If set
-to `false`, the output is disabled.
-
-The default value is `true`.
-
-===== `pretty`
-
-If `pretty` is set to `true`, events written to stdout will be nicely formatted. The default is `false`.
-
-===== `codec`
-
-Output codec configuration. If the `codec` section is missing, events will be JSON encoded using the `pretty` option.
-
-See <> for more information.
-
-===== `bulk_max_size`
-
-The maximum number of events to buffer internally during publishing. The default is 2048.
-
-Specifying a larger batch size may add latency and buffering during publishing;
-however, for the Console output, this setting does not affect how events are published.
-
-Setting `bulk_max_size` to values less than or equal to 0 disables the
-splitting of batches. When splitting is disabled, the queue decides on the
-number of events to be contained in a batch.
diff --git a/docs/legacy/copied-from-beats/outputs/elasticsearch/docs/elasticsearch.asciidoc b/docs/legacy/copied-from-beats/outputs/elasticsearch/docs/elasticsearch.asciidoc
deleted file mode 100644
index 7f8b6f27a90..00000000000
--- a/docs/legacy/copied-from-beats/outputs/elasticsearch/docs/elasticsearch.asciidoc
+++ /dev/null
@@ -1,475 +0,0 @@
-[[elasticsearch-output]]
-=== Configure the {es} output
-
-++++
-{es}
-++++
-
-IMPORTANT: {deprecation-notice-config}
-
-The {es} output sends events directly to {es} using the {es} HTTP API.
-
-Example configuration:
-
-["source","yaml",subs="attributes"]
-----
-output.elasticsearch:
-  hosts: ["https://myEShost:9200"] <1>
-----
-<1> To enable SSL, add `https` to all URLs defined under __hosts__.
-
-When sending data to a secured cluster through the `elasticsearch`
-output, {beatname_uc} can use any of the following authentication methods:
-
-* Basic authentication credentials (username and password).
-* Token-based (API key) authentication.
-* Public Key Infrastructure (PKI) certificates.
- -*Basic authentication:* - -["source","yaml",subs="attributes,callouts"] ----- -output.elasticsearch: - hosts: ["https://myEShost:9200"] - username: "{beat_default_index_prefix}_writer" - password: "{pwd}" ----- - -*API key authentication:* - -["source","yaml",subs="attributes,callouts"] ----- -output.elasticsearch: - hosts: ["https://myEShost:9200"] - api_key: "KnR6yE41RrSowb0kQ0HWoA" ----- - -*PKI certificate authentication:* - -["source","yaml",subs="attributes,callouts"] ----- -output.elasticsearch: - hosts: ["https://myEShost:9200"] - ssl.certificate: "/etc/pki/client/cert.pem" - ssl.key: "/etc/pki/client/cert.key" ----- - -See <> for details on each authentication method. - -==== Compatibility - -This output works with all compatible versions of {es}. See the -https://www.elastic.co/support/matrix#matrix_compatibility[Elastic Support -Matrix]. - -==== Configuration options - -You can specify the following options in the `elasticsearch` section of the +{beatname_lc}.yml+ config file: - -===== `enabled` - -The enabled config is a boolean setting to enable or disable the output. If set -to `false`, the output is disabled. - -The default value is `true`. - - -[[hosts-option]] -===== `hosts` - -The list of {es} nodes to connect to. The events are distributed to -these nodes in round robin order. If one node becomes unreachable, the event is -automatically sent to another node. Each {es} node can be defined as a `URL` or `IP:PORT`. -For example: `http://192.15.3.2`, `https://es.found.io:9230` or `192.24.3.2:9300`. -If no port is specified, `9200` is used. - -NOTE: When a node is defined as an `IP:PORT`, the _scheme_ and _path_ are taken from the -<> and <> config options. - -[source,yaml] ------------------------------------------------------------------------------- -output.elasticsearch: - hosts: ["10.45.3.2:9220", "10.45.3.1:9230"] <1> - protocol: https - path: /elasticsearch ------------------------------------------------------------------------------- - -In the previous example, the {es} nodes are available at `https://10.45.3.2:9220/elasticsearch` and -`https://10.45.3.1:9230/elasticsearch`. - -===== `compression_level` - -The gzip compression level. Setting this value to `0` disables compression. -The compression level must be in the range of `1` (best speed) to `9` (best compression). - -Increasing the compression level will reduce the network usage but will increase the CPU usage. - -The default value is `0`. - -===== `escape_html` - -Configure escaping of HTML in strings. Set to `true` to enable escaping. - -The default value is `false`. - -===== `api_key` - -Instead of using a username and password, you can use API keys to secure communication -with {es}. The value must be the ID of the API key and the API key joined by a colon: `id:api_key`. - -See <> for more information. - -===== `username` - -The basic authentication username for connecting to {es}. - -This user needs the privileges required to publish events to {es}. -To create a user like this, see <>. - -===== `password` - -The basic authentication password for connecting to {es}. - -===== `parameters` - -Dictionary of HTTP parameters to pass within the URL with index operations. - -[[protocol-option]] -===== `protocol` - -The name of the protocol {es} is reachable on. The options are: -`http` or `https`. The default is `http`. However, if you specify a URL for -<>, the value of `protocol` is overridden by whatever scheme you -specify in the URL. 
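-
-As an illustrative sketch only, several of the connection-level options above
-can be combined under a single `output.elasticsearch` section. The values here
-are placeholders; each option is documented above:
-
-["source","yaml",subs="attributes"]
-------------------------------------------------------------------------------
-output.elasticsearch:
-  hosts: ["10.45.3.2:9220"]
-  protocol: https
-  # Levels 1 (best speed) to 9 (best compression); 0 disables compression.
-  compression_level: 5
-  escape_html: false
-  username: "{beat_default_index_prefix}_writer"
-  password: "{pwd}"
-------------------------------------------------------------------------------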
- -[[path-option]] -===== `path` - -An HTTP path prefix that is prepended to the HTTP API calls. This is useful for -the cases where {es} listens behind an HTTP reverse proxy that exports -the API under a custom prefix. - -===== `headers` - -Custom HTTP headers to add to each request created by the {es} output. -Example: - -[source,yaml] ------------------------------------------------------------------------------- -output.elasticsearch.headers: - X-My-Header: Header contents ------------------------------------------------------------------------------- - -It is possible to specify multiple header values for the same header -name by separating them with a comma. - -===== `proxy_url` - -The URL of the proxy to use when connecting to the {es} servers. The -value may be either a complete URL or a "host[:port]", in which case the "http" -scheme is assumed. If a value is not specified through the configuration file -then proxy environment variables are used. See the -https://golang.org/pkg/net/http/#ProxyFromEnvironment[Go documentation] -for more information about the environment variables. - -// output.elasticsearch.index has been removed from APM Server -ifndef::apm-server[] - -[[index-option-es]] -===== `index` - -The index name to write events to when you're using daily indices. The default is -+"{beatname_lc}-%{[{beat_version_key}]}-%{+yyyy.MM.dd}"+, for example, -+"{beatname_lc}-{version}-{localdate}"+. If you change this setting, you also -need to configure the `setup.template.name` and `setup.template.pattern` options -(see <>). - -ifndef::no_dashboards[] -If you are using the pre-built {kib} -dashboards, you also need to set the `setup.dashboards.index` option (see -<>). -endif::no_dashboards[] - -ifndef::no_ilm[] -When <> is enabled, the default `index` is -+"{beatname_lc}-%{[{beat_version_key}]}-%{+yyyy.MM.dd}-%{index_num}"+, for example, -+"{beatname_lc}-{version}-{localdate}-000001"+. Custom `index` settings are ignored -when {ilm-init} is enabled. If you’re sending events to a cluster that supports index -lifecycle management, see <> to learn how to change the index name. -endif::no_ilm[] - -You can set the index dynamically by using a format string to access any event -field. For example, this configuration uses a custom field, `fields.log_type`, -to set the index: - -["source","yaml",subs="attributes"] ------------------------------------------------------------------------------- -output.elasticsearch: - hosts: ["http://localhost:9200"] - index: "%{[fields.log_type]}-%{[{beat_version_key}]}-%{+yyyy.MM.dd}" <1> ------------------------------------------------------------------------------- - -<1> We recommend including +{beat_version_key}+ in the name to avoid mapping issues -when you upgrade. - -With this configuration, all events with `log_type: normal` are sent to an -index named +normal-{version}-{localdate}+, and all events with -`log_type: critical` are sent to an index named -+critical-{version}-{localdate}+. - -TIP: To learn how to add custom fields to events, see the -<> option. - -See the <> setting for other ways to set the index -dynamically. -endif::apm-server[] - -// output.elasticsearch.indices has been removed from APM Server -ifndef::apm-server[] - -[[indices-option-es]] -===== `indices` - -An array of index selector rules. Each rule specifies the index to use for -events that match the rule. During publishing, {beatname_uc} uses the first -matching rule in the array. Rules can contain conditionals, format string-based -fields, and name mappings. 
If the `indices` setting is missing or no rule -matches, the <> setting is used. - -ifndef::no_ilm[] -Similar to `index`, defining custom `indices` will disable <>. -endif::no_ilm[] - -Rule settings: - -*`index`*:: The index format string to use. If this string contains field -references, such as `%{[fields.name]}`, the fields must exist, or the rule fails. - -*`mappings`*:: A dictionary that takes the value returned by `index` and maps it -to a new name. - -*`default`*:: The default string value to use if `mappings` does not find a -match. - -*`when`*:: A condition that must succeed in order to execute the current rule. -ifndef::no-processors[] -All the <> supported by processors are also supported -here. -endif::no-processors[] - -The following example sets the index based on whether the `message` field -contains the specified string: - -["source","yaml",subs="attributes"] ------------------------------------------------------------------------------- -output.elasticsearch: - hosts: ["http://localhost:9200"] - indices: - - index: "warning-%{[{beat_version_key}]}-%{+yyyy.MM.dd}" - when.contains: - message: "WARN" - - index: "error-%{[{beat_version_key}]}-%{+yyyy.MM.dd}" - when.contains: - message: "ERR" ------------------------------------------------------------------------------- - - -This configuration results in indices named +warning-{version}-{localdate}+ -and +error-{version}-{localdate}+ (plus the default index if no matches are -found). - -The following example sets the index by taking the name returned by the `index` -format string and mapping it to a new name that's used for the index: - -["source","yaml"] ------------------------------------------------------------------------------- -output.elasticsearch: - hosts: ["http://localhost:9200"] - indices: - - index: "%{[fields.log_type]}" - mappings: - critical: "sev1" - normal: "sev2" - default: "sev3" ------------------------------------------------------------------------------- - - -This configuration results in indices named `sev1`, `sev2`, and `sev3`. - -The `mappings` setting simplifies the configuration, but is limited to string -values. You cannot specify format strings within the mapping pairs. -endif::apm-server[] - -ifndef::no_ilm[] -[[ilm-es]] -===== `ilm` - -Configuration options for {ilm}. - -See <> for more information. -endif::no_ilm[] - -ifndef::no-pipeline[] -[[pipeline-option-es]] -===== `pipeline` - -A format string value that specifies the ingest node pipeline to write events to. - -["source","yaml"] ------------------------------------------------------------------------------- -output.elasticsearch: - hosts: ["http://localhost:9200"] - pipeline: my_pipeline_id ------------------------------------------------------------------------------- - -For more information, see <>. - -You can set the ingest node pipeline dynamically by using a format string to -access any event field. For example, this configuration uses a custom field, -`fields.log_type`, to set the pipeline for each event: - -["source","yaml",subs="attributes"] ------------------------------------------------------------------------------- -output.elasticsearch: - hosts: ["http://localhost:9200"] - pipeline: "%{[fields.log_type]}_pipeline" ------------------------------------------------------------------------------- - -With this configuration, all events with `log_type: normal` are sent to a pipeline -named `normal_pipeline`, and all events with `log_type: critical` are sent to a -pipeline named `critical_pipeline`. 
- -TIP: To learn how to add custom fields to events, see the -<> option. - -See the <> setting for other ways to set the -ingest node pipeline dynamically. - -[[pipelines-option-es]] -===== `pipelines` - -An array of pipeline selector rules. Each rule specifies the ingest node -pipeline to use for events that match the rule. During publishing, {beatname_uc} -uses the first matching rule in the array. Rules can contain conditionals, -format string-based fields, and name mappings. If the `pipelines` setting is -missing or no rule matches, the <> setting is -used. - -Rule settings: - -*`pipeline`*:: The pipeline format string to use. If this string contains field -references, such as `%{[fields.name]}`, the fields must exist, or the rule -fails. - -*`mappings`*:: A dictionary that takes the value returned by `pipeline` and maps -it to a new name. - -*`default`*:: The default string value to use if `mappings` does not find a -match. - -*`when`*:: A condition that must succeed in order to execute the current rule. -ifndef::no-processors[] -All the <> supported by processors are also supported -here. -endif::no-processors[] - -The following example sends events to a specific pipeline based on whether the -`message` field contains the specified string: - -["source","yaml"] ------------------------------------------------------------------------------- -output.elasticsearch: - hosts: ["http://localhost:9200"] - pipelines: - - pipeline: "warning_pipeline" - when.contains: - message: "WARN" - - pipeline: "error_pipeline" - when.contains: - message: "ERR" ------------------------------------------------------------------------------- - - -The following example sets the pipeline by taking the name returned by the -`pipeline` format string and mapping it to a new name that's used for the -pipeline: - -["source","yaml"] ------------------------------------------------------------------------------- -output.elasticsearch: - hosts: ["http://localhost:9200"] - pipelines: - - pipeline: "%{[fields.log_type]}" - mappings: - critical: "sev1_pipeline" - normal: "sev2_pipeline" - default: "sev3_pipeline" ------------------------------------------------------------------------------- - - -With this configuration, all events with `log_type: critical` are sent to -`sev1_pipeline`, all events with `log_type: normal` are sent to a -`sev2_pipeline`, and all other events are sent to `sev3_pipeline`. - -For more information about ingest node pipelines, see -<>. - -endif::[] - -===== `max_retries` - -ifdef::ignores_max_retries[] -{beatname_uc} ignores the `max_retries` setting and retries indefinitely. -endif::[] - -ifndef::ignores_max_retries[] -The number of times to retry publishing an event after a publishing failure. -After the specified number of retries, the events are typically dropped. - -Set `max_retries` to a value less than 0 to retry until all events are published. - -The default is 3. -endif::[] - -===== `flush_bytes` - -The bulk request size threshold, in bytes, before flushing to {es}. -The value must have a suffix, e.g. `"1MB"`. The default is `5MB`. - -===== `flush_interval` - -The maximum duration to accumulate events for a bulk request before being flushed to {es}. -The value must have a duration suffix, e.g. `"5s"`. The default is `1s`. - -===== `backoff.init` - -The number of seconds to wait before trying to reconnect to {es} after -a network error. After waiting `backoff.init` seconds, {beatname_uc} tries to -reconnect. 
If the attempt fails, the backoff timer is increased exponentially up -to `backoff.max`. After a successful connection, the backoff timer is reset. The -default is `1s`. - - -===== `backoff.max` - -The maximum number of seconds to wait before attempting to connect to -{es} after a network error. The default is `60s`. - -===== `timeout` - -The HTTP request timeout in seconds for the {es} request. The default is 90. - -===== `ssl` - -Configuration options for SSL parameters like the certificate authority to use -for HTTPS-based connections. If the `ssl` section is missing, the host CAs are used for HTTPS connections to -{es}. - -See the <> guide -or <> for more information. - -===== `kerberos` - -Configuration options for Kerberos authentication. - -See <> for more information. diff --git a/docs/legacy/copied-from-beats/outputs/fileout/docs/fileout.asciidoc b/docs/legacy/copied-from-beats/outputs/fileout/docs/fileout.asciidoc deleted file mode 100644 index 11c2e23dacc..00000000000 --- a/docs/legacy/copied-from-beats/outputs/fileout/docs/fileout.asciidoc +++ /dev/null @@ -1,75 +0,0 @@ -[[file-output]] -=== Configure the File output - -++++ -File -++++ - -IMPORTANT: {deprecation-notice-config} - -The File output dumps the transactions into a file where each transaction is in a JSON format. -Currently, this output is used for testing, but it can be used as input for -{ls}. - -To use this output, edit the {beatname_uc} configuration file to disable the {es} -output by commenting it out, and enable the file output by adding `output.file`. - -Example configuration: - -["source","yaml",subs="attributes"] ------------------------------------------------------------------------------- -output.file: - path: "/tmp/{beatname_lc}" - filename: {beatname_lc} - #rotate_every_kb: 10000 - #number_of_files: 7 - #permissions: 0600 - #rotate_on_startup: true ------------------------------------------------------------------------------- - -==== Configuration options - -You can specify the following `output.file` options in the +{beatname_lc}.yml+ config file: - -===== `enabled` - -The enabled config is a boolean setting to enable or disable the output. If set -to false, the output is disabled. - -The default value is `true`. - -[[path]] -===== `path` - -The path to the directory where the generated files will be saved. This option is -mandatory. - -===== `filename` - -The name of the generated files. The default is set to the Beat name. For example, the files -generated by default for {beatname_uc} would be "{beatname_lc}", "{beatname_lc}.1", "{beatname_lc}.2", and so on. - -===== `rotate_every_kb` - -The maximum size in kilobytes of each file. When this size is reached, the files are -rotated. The default value is 10240 KB. - -===== `number_of_files` - -The maximum number of files to save under <>. When this number of files is reached, the -oldest file is deleted, and the rest of the files are shifted from last to first. -The number of files must be between 2 and 1024. The default is 7. - -===== `permissions` - -Permissions to use for file creation. The default is 0600. - -===== `rotate_on_startup` - -If the output file already exists on startup, immediately rotate it and start writing to a new file instead of appending to the existing one. Defaults to true. - -===== `codec` - -Output codec configuration. If the `codec` section is missing, events will be JSON encoded. - -See <> for more information. 
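-
-Tying the options above together, a minimal sketch of a file output that
-rotates files and writes line-oriented output instead of the default JSON
-encoding might look like this (the path, rotation values, and format string
-are illustrative only):
-
-["source","yaml",subs="attributes"]
-------------------------------------------------------------------------------
-output.file:
-  path: "/tmp/{beatname_lc}"          # mandatory target directory
-  filename: {beatname_lc}             # rotated as name, name.1, name.2, ...
-  rotate_every_kb: 5000               # rotate once a file reaches ~5 MB
-  number_of_files: 7                  # keep at most 7 files before deleting the oldest
-  codec.format:
-    string: '%{[@timestamp]} %{[message]}'
-------------------------------------------------------------------------------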
diff --git a/docs/legacy/copied-from-beats/outputs/kafka/docs/kafka.asciidoc b/docs/legacy/copied-from-beats/outputs/kafka/docs/kafka.asciidoc deleted file mode 100644 index 51c0ad1b83c..00000000000 --- a/docs/legacy/copied-from-beats/outputs/kafka/docs/kafka.asciidoc +++ /dev/null @@ -1,335 +0,0 @@ -[[kafka-output]] -=== Configure the Kafka output - -++++ -Kafka -++++ - -IMPORTANT: {deprecation-notice-config} - -The Kafka output sends events to Apache Kafka. - -To use this output, edit the {beatname_uc} configuration file to disable the {es} -output by commenting it out, and enable the Kafka output by uncommenting the -Kafka section. - -Example configuration: - -[source,yaml] ------------------------------------------------------------------------------- -output.kafka: - # initial brokers for reading cluster metadata - hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"] - - # message topic selection + partitioning - topic: '%{[fields.log_topic]}' - partition.round_robin: - reachable_only: false - - required_acks: 1 - compression: gzip - max_message_bytes: 1000000 ------------------------------------------------------------------------------- - -NOTE: Events bigger than <> will be dropped. To avoid this problem, make sure {beatname_uc} does not generate events bigger than <>. - -[[kafka-compatibility]] -==== Compatibility - -This output works with all Kafka versions in between 0.11 and 2.2.2. Older versions -might work as well, but are not supported. - -==== Configuration options - -You can specify the following options in the `kafka` section of the +{beatname_lc}.yml+ config file: - -===== `enabled` - -The `enabled` config is a boolean setting to enable or disable the output. If set -to false, the output is disabled. - -ifndef::apm-server[] -The default value is `true`. -endif::[] -ifdef::apm-server[] -The default value is `false`. -endif::[] - -===== `hosts` - -The list of Kafka broker addresses from where to fetch the cluster metadata. -The cluster metadata contain the actual Kafka brokers events are published to. - -===== `version` - -Kafka version {beatname_lc} is assumed to run against. Defaults to 1.0.0. - -Event timestamps will be added, if version 0.10.0.0+ is enabled. - -Valid values are all Kafka releases in between `0.8.2.0` and `2.0.0`. - -See <> for information on supported versions. - -===== `username` - -The username for connecting to Kafka. If username is configured, the password -must be configured as well. - -===== `password` - -The password for connecting to Kafka. - -===== `sasl.mechanism` - -beta[] - -The SASL mechanism to use when connecting to Kafka. It can be one of: - -* `PLAIN` for SASL/PLAIN. -* `SCRAM-SHA-256` for SCRAM-SHA-256. -* `SCRAM-SHA-512` for SCRAM-SHA-512. - -If `sasl.mechanism` is not set, `PLAIN` is used if `username` and `password` -are provided. Otherwise, SASL authentication is disabled. - -To use `GSSAPI` mechanism to authenticate with Kerberos, you must leave this -field empty, and use the <> options. - - -[[topic-option-kafka]] -===== `topic` - -The Kafka topic used for produced events. - -You can set the topic dynamically by using a format string to access any -event field. For example, this configuration uses a custom field, -`fields.log_topic`, to set the topic for each event: - -[source,yaml] ------ -topic: '%{[fields.log_topic]}' ------ - -TIP: To learn how to add custom fields to events, see the -<> option. - -See the <> setting for other ways to set the -topic dynamically. 
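-
-The SASL settings described above combine with the broker list and topic
-options. A hedged sketch (broker addresses, credentials, and topic name are
-placeholders):
-
-[source,yaml]
-------------------------------------------------------------------------------
-output.kafka:
-  hosts: ["kafka1:9092", "kafka2:9092"]
-  username: "apm_writer"            # if username is set, password is required
-  password: "changeme"
-  sasl.mechanism: "SCRAM-SHA-256"   # PLAIN would be used if this were unset
-  topic: "apm"
-------------------------------------------------------------------------------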
-
-[[topics-option-kafka]]
-===== `topics`
-
-An array of topic selector rules. Each rule specifies the `topic` to use for
-events that match the rule. During publishing, {beatname_uc} sets the `topic`
-for each event based on the first matching rule in the array. Rules
-can contain conditionals, format string-based fields, and name mappings. If the
-`topics` setting is missing or no rule matches, the
-<> field is used.
-
-Rule settings:
-
-*`topic`*:: The topic format string to use. If this string contains field
-references, such as `%{[fields.name]}`, the fields must exist, or the rule
-fails.
-
-*`mappings`*:: A dictionary that takes the value returned by `topic` and maps it
-to a new name.
-
-*`default`*:: The default string value to use if `mappings` does not find a
-match.
-
-*`when`*:: A condition that must succeed in order to execute the current rule.
-ifndef::no-processors[]
-All the <> supported by processors are also supported
-here.
-endif::no-processors[]
-
-The following example sets the topic based on whether the message field contains
-the specified string:
-
-["source","yaml",subs="attributes"]
-------------------------------------------------------------------------------
-output.kafka:
-  hosts: ["localhost:9092"]
-  topic: "logs-%{[agent.version]}"
-  topics:
-    - topic: "critical-%{[agent.version]}"
-      when.contains:
-        message: "CRITICAL"
-    - topic: "error-%{[agent.version]}"
-      when.contains:
-        message: "ERR"
-------------------------------------------------------------------------------
-
-
-This configuration results in topics named +critical-{version}+,
-+error-{version}+, and +logs-{version}+.
-
-===== `key`
-
-Optional formatted string specifying the Kafka event key. If configured, the
-event key can be extracted from the event using a format string.
-
-See the Kafka documentation for the implications of a particular choice of key;
-by default, the key is chosen by the Kafka cluster.
-
-===== `partition`
-
-Kafka output broker event partitioning strategy. Must be one of `random`,
-`round_robin`, or `hash`. By default, the `hash` partitioner is used.
-
-*`random.group_events`*: Sets the number of events to be published to the same
-  partition, before the partitioner selects a new partition at random. The
-  default value is 1, meaning a new partition is picked randomly after each event.
-
-*`round_robin.group_events`*: Sets the number of events to be published to the
-  same partition, before the partitioner selects the next partition. The default
-  value is 1, meaning the next partition is selected after each event.
-
-*`hash.hash`*: List of fields to compute the partitioning hash value from.
-  If no field is configured, the event's `key` value is used.
-
-*`hash.random`*: Randomly distribute events if no hash or key value can be computed.
-
-All partitioners will try to publish events to all partitions by default. If a
-partition's leader becomes unreachable for the beat, the output might block. All
-partitioners support setting `reachable_only` to override this
-behavior. If `reachable_only` is set to `true`, events will be published to
-available partitions only.
-
-NOTE: Publishing to a subset of available partitions potentially increases resource usage because events may become unevenly distributed.
-
-===== `client_id`
-
-The configurable client ID used for logging, debugging, and auditing purposes. The default is "beats".
-
-===== `worker`
-
-The number of concurrent load-balanced Kafka output workers.
-
-===== `codec`
-
-Output codec configuration. If the `codec` section is missing, events will be JSON encoded.
-
-See <> for more information.
-
-===== `metadata`
-
-Kafka metadata update settings. The metadata contains information about
-brokers, topics, partitions, and the active leaders to use for publishing.
-
-*`refresh_frequency`*:: Metadata refresh interval. Defaults to 10 minutes.
-
-*`full`*:: Strategy to use when fetching metadata. When this option is `true`, the client maintains
-a full set of metadata for all the available topics; if it is set to `false`, the client only refreshes the
-metadata for the configured topics. The default is false.
-
-*`retry.max`*:: Total number of metadata update retries when the cluster is in the middle of a leader election. The default is 3.
-
-*`retry.backoff`*:: Waiting time between retries during leader elections. Default is `250ms`.
-
-===== `max_retries`
-
-ifdef::ignores_max_retries[]
-{beatname_uc} ignores the `max_retries` setting and retries indefinitely.
-endif::[]
-
-ifndef::ignores_max_retries[]
-The number of times to retry publishing an event after a publishing failure.
-After the specified number of retries, the events are typically dropped.
-
-Set `max_retries` to a value less than 0 to retry until all events are published.
-
-The default is 3.
-endif::[]
-
-===== `backoff.init`
-
-The number of seconds to wait before trying to republish to Kafka
-after a network error. After waiting `backoff.init` seconds, {beatname_uc}
-tries to republish. If the attempt fails, the backoff timer is increased
-exponentially up to `backoff.max`. After a successful publish, the backoff
-timer is reset. The default is `1s`.
-
-===== `backoff.max`
-
-The maximum number of seconds to wait before attempting to republish to
-Kafka after a network error. The default is `60s`.
-
-===== `bulk_max_size`
-
-The maximum number of events to bulk in a single Kafka request. The default is 2048.
-
-===== `bulk_flush_frequency`
-
-Duration to wait before sending a bulk Kafka request. 0 is no delay. The default is 0.
-
-===== `timeout`
-
-The number of seconds to wait for responses from the Kafka brokers before timing
-out. The default is 30 (seconds).
-
-===== `broker_timeout`
-
-The maximum duration a broker will wait for the number of required ACKs. The default is `10s`.
-
-===== `channel_buffer_size`
-
-The number of messages buffered in the output pipeline, per Kafka broker. The default is 256.
-
-===== `keep_alive`
-
-The keep-alive period for an active network connection. If `0s`, keep-alives are disabled. The default is `0s`.
-
-===== `compression`
-
-Sets the output compression codec. Must be one of `none`, `snappy`, `lz4`, or `gzip`. The default is `gzip`.
-
-[IMPORTANT]
-.Known issue with Azure Event Hub for Kafka
-====
-When targeting Azure Event Hub for Kafka, set `compression` to `none` as the provided codecs are not supported.
-====
-
-===== `compression_level`
-
-Sets the compression level used by gzip. Setting this value to 0 disables compression.
-The compression level must be in the range of 1 (best speed) to 9 (best compression).
-
-Increasing the compression level will reduce the network usage but will increase the CPU usage.
-
-The default value is 4.
-
-[[kafka-max_message_bytes]]
-===== `max_message_bytes`
-
-The maximum permitted size of JSON-encoded messages. Bigger messages will be dropped. The default value is 1000000 (bytes). This value should be equal to or less than the broker's `message.max.bytes`.
-
-===== `required_acks`
-
-The ACK reliability level required from the broker.
0=no response, 1=wait for local commit, -1=wait for all replicas to commit. The default is 1. - -Note: If set to 0, no ACKs are returned by Kafka. Messages might be lost silently on error. - -===== `enable_krb5_fast` - -beta[] - -Enable Kerberos FAST authentication. This may conflict with some Active Directory installations. It is separate from the standard Kerberos settings because this flag only applies to the Kafka output. The default is `false`. - -===== `ssl` - -Configuration options for SSL parameters like the root CA for Kafka connections. - The Kafka host keystore should be created with the -`-keyalg RSA` argument to ensure it uses a cipher supported by -https://github.com/Shopify/sarama/wiki/Frequently-Asked-Questions#why-cant-sarama-connect-to-my-kafka-cluster-using-ssl[{filebeat}'s Kafka library]. -See <> for more information. - -[[kerberos-option-kafka]] -===== `kerberos` - -beta[] - -Configuration options for Kerberos authentication. - -See <> for more information. diff --git a/docs/legacy/copied-from-beats/outputs/logstash/docs/logstash.asciidoc b/docs/legacy/copied-from-beats/outputs/logstash/docs/logstash.asciidoc deleted file mode 100644 index 5b179552fa8..00000000000 --- a/docs/legacy/copied-from-beats/outputs/logstash/docs/logstash.asciidoc +++ /dev/null @@ -1,394 +0,0 @@ -[[logstash-output]] -=== Configure the {ls} output - -++++ -{ls} -++++ - -IMPORTANT: {deprecation-notice-config} - -The {ls} output sends events directly to {ls} by using the lumberjack -protocol, which runs over TCP. {ls} allows for additional processing and routing of -generated events. - -// tag::shared-logstash-config[] - -[IMPORTANT] -.Prerequisite -To send events to {ls}, you also need to create a {ls} configuration pipeline -that listens for incoming {beats} connections and indexes the received events into -{es}. For more information, see -{logstash-ref}/getting-started-with-logstash.html[Getting Started with {ls}]. -Also see the documentation for the -{logstash-ref}/plugins-inputs-beats.html[{beats} input] and -{logstash-ref}/plugins-outputs-elasticsearch.html[{es} output] plugins. - -If you want to use {ls} to perform additional processing on the data collected by -{beatname_uc}, you need to configure {beatname_uc} to use {ls}. - -To do this, edit the {beatname_uc} configuration file to disable the {es} -output by commenting it out and enable the {ls} output by uncommenting the -{ls} section: - -[source,yaml] ------------------------------------------------------------------------------- -output.logstash: - hosts: ["127.0.0.1:5044"] ------------------------------------------------------------------------------- - -The `hosts` option specifies the {ls} server and the port (`5044`) where {ls} is configured to listen for incoming -{beats} connections. - -ifeval::["{beatname_lc}"=="filebeat"] -Want to use <> with {ls}? You need to do -some extra setup. For more information, see -{logstash-ref}/filebeat-modules.html[Working with {beatname_uc} modules]. -endif::[] - -// end::shared-logstash-config[] - -==== Accessing metadata fields - -Every event sent to {ls} contains the following metadata fields that you can -use in {ls} for indexing and filtering: - -ifndef::apm-server[] -["source","json",subs="attributes"] ------------------------------------------------------------------------------- -{ - ... 
- "@metadata": { <1> - "beat": "{beat_default_index_prefix}", <2> - "version": "{version}" <3> - } -} ------------------------------------------------------------------------------- -<1> {beatname_uc} uses the `@metadata` field to send metadata to {ls}. See the -{logstash-ref}/event-dependent-configuration.html#metadata[{ls} documentation] -for more about the `@metadata` field. -<2> The default is {beat_default_index_prefix}. To change this value, set the -<> option in the {beatname_uc} config file. -<3> The current version of {beatname_uc}. - -You can access this metadata from within the {ls} config file to set values -dynamically based on the contents of the metadata. -endif::[] - -ifdef::apm-server[] -["source","json",subs="attributes"] ------------------------------------------------------------------------------- -{ - ... - "@metadata": { <1> - "beat": "{beat_default_index_prefix}", <2> - "pipeline":"apm", <3> - "version": "{version}" <4> - } -} ------------------------------------------------------------------------------- -<1> {beatname_uc} uses the `@metadata` field to send metadata to {ls}. See the -{logstash-ref}/event-dependent-configuration.html#metadata[{ls} documentation] -for more about the `@metadata` field. -<2> The default is {beat_default_index_prefix}. To change this value, set the -<> option in the {beatname_uc} config file. -<3> The default pipeline configuration: `apm`. Additional pipelines can be enabled -with a {logstash-ref}/use-ingest-pipelines.html[{ls} pipeline config]. -<4> The current version of {beatname_uc}. - -In addition to metadata, {beatname_uc} provides the `processor.event` field, which -can be used to separate {apm-overview-ref-v}/apm-data-model.html[event types] into different indices. -endif::[] - -ifndef::apm-server[] -For example, the following {ls} configuration file tells -{ls} to use the index reported by {beatname_uc} for indexing events -into {es}: - -[source,logstash] ------------------------------------------------------------------------------- - -input { - beats { - port => 5044 - } -} - -output { - elasticsearch { - hosts => ["http://localhost:9200"] - index => "%{[@metadata][beat]}-%{[@metadata][version]}" <1> - } -} ------------------------------------------------------------------------------- -<1> `%{[@metadata][beat]}` sets the first part of the index name to the value -of the `beat` metadata field and `%{[@metadata][version]}` sets the second part to -the Beat's version. For example: -+{beat_default_index_prefix}-{version}+. -endif::[] - -ifdef::apm-server[] -For example, the following {ls} configuration file tells -{ls} to use the index and event types reported by {beatname_uc} for indexing events -into {es}: - -[source,logstash] ------- -input { - beats { - port => 5044 - } -} - -filter { - if [@metadata][beat] == "apm" { - if [processor][event] == "sourcemap" { - mutate { - add_field => { "[@metadata][index]" => "%{[@metadata][beat]}-%{[@metadata][version]}-%{[processor][event]}" } <1> - } - } else { - mutate { - add_field => { "[@metadata][index]" => "%{[@metadata][beat]}-%{[@metadata][version]}-%{[processor][event]}-%{+yyyy.MM.dd}" } <2> - } - } - } -} - -output { - elasticsearch { - hosts => ["http://localhost:9200"] - index => "%{[@metadata][index]}" - } -} ------- -<1> Creates a new field named `@metadata.index`. -`%{[@metadata][beat]}` sets the first part of the index name to the value of the `metadata.beat` field. -`%{[@metadata][version]}` sets the second part to {beatname_uc}'s version. 
-`%{[processor][event]}` sets the final part based on the APM event type. -For example: +{beat_default_index_prefix}-{version}-sourcemap+. -<2> In addition to the above rules, this pattern appends a date to the `index` name so {ls} creates a new index each day. -For example: +{beat_default_index_prefix}-{version}-transaction-{sample_date_0}+. -endif::[] - -Events indexed into {es} with the {ls} configuration shown here -will be similar to events directly indexed by {beatname_uc} into {es}. - -ifndef::apm-server[] -NOTE: If {ilm-init} is not being used, set `index` to `%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}` instead so {ls} creates an index per day, based on the `@timestamp` value of the events coming from {beats}. -endif::[] - -ifdef::apm-server[] -==== {ls} and {ilm-init} - -When used with {apm-server-ref}/ilm.html[{ilm-cap}], {ls} does not need to create a new index each day. -Here's a sample {ls} configuration file that would accomplish this: - -[source,logstash] ------- -input { - beats { - port => 5044 - } -} - -output { - elasticsearch { - hosts => ["http://localhost:9200"] - index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{[processor][event]}" <1> - } -} ------- -<1> Outputs documents to an index: -`%{[@metadata][beat]}` sets the first part of the index name to the value of the `metadata.beat` field. -`%{[@metadata][version]}` sets the second part to {beatname_uc}'s version. -`%{[processor][event]}` sets the final part based on the APM event type. -For example: +{beat_default_index_prefix}-{version}-sourcemap+. -endif::[] - -==== Compatibility - -This output works with all compatible versions of {ls}. See the -https://www.elastic.co/support/matrix#matrix_compatibility[Elastic Support -Matrix]. - -==== Configuration options - -You can specify the following options in the `logstash` section of the -+{beatname_lc}.yml+ config file: - -===== `enabled` - -The enabled config is a boolean setting to enable or disable the output. If set -to false, the output is disabled. - -ifndef::apm-server[] -The default value is `true`. -endif::[] -ifdef::apm-server[] -The default value is `false`. -endif::[] - -[[hosts]] -===== `hosts` - -The list of known {ls} servers to connect to. If load balancing is disabled, but -multiple hosts are configured, one host is selected randomly (there is no precedence). -If one host becomes unreachable, another one is selected randomly. - -All entries in this list can contain a port number. The default port number 5044 will be used if no number is given. - -===== `compression_level` - -The gzip compression level. Setting this value to 0 disables compression. -The compression level must be in the range of 1 (best speed) to 9 (best compression). - -Increasing the compression level will reduce the network usage but will increase the CPU usage. - -The default value is 3. - -===== `escape_html` - -Configure escaping of HTML in strings. Set to `true` to enable escaping. - -The default value is `false`. - -===== `worker` - -The number of workers per configured host publishing events to {ls}. This -is best used with load balancing mode enabled. Example: If you have 2 hosts and -3 workers, in total 6 workers are started (3 for each host). - -[[loadbalance]] -===== `loadbalance` - -If set to true and multiple {ls} hosts are configured, the output plugin -load balances published events onto all {ls} hosts. 
If set to false,
-the output plugin sends all events to only one host (determined at random) and
-will switch to another host if the selected one becomes unresponsive. The default value is false.
-
-["source","yaml",subs="attributes"]
-------------------------------------------------------------------------------
-output.logstash:
-  hosts: ["localhost:5044", "localhost:5045"]
-  loadbalance: true
-  index: {beatname_lc}
-------------------------------------------------------------------------------
-
-===== `ttl`
-
-Time to live for a connection to {ls} after which the connection will be re-established.
-Useful when {ls} hosts represent load balancers. Since the connections to {ls} hosts
-are sticky, operating behind load balancers can lead to uneven load distribution between the instances.
-Specifying a TTL on the connection allows you to achieve equal connection distribution between the
-instances. Specifying a TTL of 0 will disable this feature.
-
-The default value is 0.
-
-NOTE: The "ttl" option is not yet supported on an asynchronous {ls} client (one with the "pipelining" option set).
-
-===== `pipelining`
-
-Configures the number of batches to be sent asynchronously to {ls} while waiting
-for ACK from {ls}. The output only becomes blocking once the number of `pipelining`
-batches have been written. Pipelining is disabled if a value of 0 is
-configured. The default value is 2.
-
-===== `proxy_url`
-
-The URL of the SOCKS5 proxy to use when connecting to the {ls} servers. The
-value must be a URL with a scheme of `socks5://`. The protocol used to
-communicate with {ls} is not based on HTTP, so a web proxy cannot be used.
-
-If the SOCKS5 proxy server requires client authentication, then a username and
-password can be embedded in the URL as shown in the example.
-
-When using a proxy, hostnames are resolved on the proxy server instead of on the
-client. You can change this behavior by setting the
-<> option.
-
-["source","yaml",subs="attributes"]
-------------------------------------------------------------------------------
-output.logstash:
-  hosts: ["remote-host:5044"]
-  proxy_url: socks5://user:password@socks5-proxy:2233
-------------------------------------------------------------------------------
-
-[[logstash-proxy-use-local-resolver]]
-===== `proxy_use_local_resolver`
-
-The `proxy_use_local_resolver` option determines if {ls} hostnames are
-resolved locally when using a proxy. The default value is false, which means
-that when a proxy is used, name resolution occurs on the proxy server.
-
-[[logstash-index]]
-===== `index`
-
-The index root name to write events to. The default is the Beat name. For
-example +"{beat_default_index_prefix}"+ generates +"[{beat_default_index_prefix}-]{version}-YYYY.MM.DD"+
-indices (for example, +"{beat_default_index_prefix}-{version}-2017.04.26"+).
-
-NOTE: This parameter's value will be assigned to the `metadata.beat` field. It
-can then be accessed in {ls}'s output section as `%{[@metadata][beat]}`.
-
-===== `ssl`
-
-Configuration options for SSL parameters like the root CA for {ls} connections. See
-<> for more information. To use SSL, you must also configure the
-https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html[{beats} input plugin for {ls}] to use SSL/TLS.
-
-===== `timeout`
-
-The number of seconds to wait for responses from the {ls} server before timing out. The default is 30 (seconds).
-
-===== `max_retries`
-
-ifdef::ignores_max_retries[]
-{beatname_uc} ignores the `max_retries` setting and retries indefinitely.
-endif::[] - -ifndef::ignores_max_retries[] -The number of times to retry publishing an event after a publishing failure. -After the specified number of retries, the events are typically dropped. - -Set `max_retries` to a value less than 0 to retry until all events are published. - -The default is 3. -endif::[] - -===== `bulk_max_size` - -The maximum number of events to bulk in a single {ls} request. The default is 2048. - -If the Beat sends single events, the events are collected into batches. If the Beat publishes -a large batch of events (larger than the value specified by `bulk_max_size`), the batch is -split. - -Specifying a larger batch size can improve performance by lowering the overhead of sending events. -However big batch sizes can also increase processing times, which might result in -API errors, killed connections, timed-out publishing requests, and, ultimately, lower -throughput. - -Setting `bulk_max_size` to values less than or equal to 0 disables the -splitting of batches. When splitting is disabled, the queue decides on the -number of events to be contained in a batch. - - -===== `slow_start` - -If enabled, only a subset of events in a batch of events is transferred per transaction. -The number of events to be sent increases up to `bulk_max_size` if no error is encountered. -On error, the number of events per transaction is reduced again. - -The default is `false`. - -===== `backoff.init` - -The number of seconds to wait before trying to reconnect to {ls} after -a network error. After waiting `backoff.init` seconds, {beatname_uc} tries to -reconnect. If the attempt fails, the backoff timer is increased exponentially up -to `backoff.max`. After a successful connection, the backoff timer is reset. The -default is `1s`. - -===== `backoff.max` - -The maximum number of seconds to wait before attempting to connect to -{ls} after a network error. The default is `60s`. diff --git a/docs/legacy/copied-from-beats/outputs/redis/docs/redis.asciidoc b/docs/legacy/copied-from-beats/outputs/redis/docs/redis.asciidoc deleted file mode 100644 index e99fc468ce2..00000000000 --- a/docs/legacy/copied-from-beats/outputs/redis/docs/redis.asciidoc +++ /dev/null @@ -1,240 +0,0 @@ -[[redis-output]] -=== Configure the Redis output - -++++ -Redis -++++ - -IMPORTANT: {deprecation-notice-config} - -The Redis output inserts the events into a Redis list or a Redis channel. -This output plugin is compatible with -the https://www.elastic.co/guide/en/logstash/current/plugins-inputs-redis.html[Redis input plugin] for {ls}. - -To use this output, edit the {beatname_uc} configuration file to disable the {es} -output by commenting it out, and enable the Redis output by adding `output.redis`. - -Example configuration: - -["source","yaml",subs="attributes"] ------------------------------------------------------------------------------- -output.redis: - hosts: ["localhost"] - password: "my_password" - key: "{beatname_lc}" - db: 0 - timeout: 5 ------------------------------------------------------------------------------- - -==== Compatibility - -This output is expected to work with all Redis versions between 3.2.4 and 5.0.8. Other versions might work as well, -but are not supported. - -==== Configuration options - -You can specify the following `output.redis` options in the +{beatname_lc}.yml+ config file: - -===== `enabled` - -The enabled config is a boolean setting to enable or disable the output. If set -to false, the output is disabled. - -The default value is `true`. 
-
-===== `hosts`
-
-The list of Redis servers to connect to. If load balancing is enabled, the events are
-distributed to the servers in the list. If one server becomes unreachable, the events are
-distributed to the reachable servers only. You can define each Redis server by specifying
-`HOST` or `HOST:PORT`, for example `"192.15.3.2"` or `"test.redis.io:12345"`, or as a
-`URL`, for example `redis://localhost:6379` or `rediss://localhost:6379`. If you
-don't specify a port number, the value configured by `port` is used.
-URLs can include a server-specific password. For example: `redis://:password@localhost:6379`.
-The `redis` scheme will disable the `ssl` settings for the host, while `rediss`
-will enforce TLS. If `rediss` is specified and no `ssl` settings are
-configured, the output uses the system certificate store.
-
-===== `index`
-
-The index name added to the event metadata for use by {ls}. The default is "{beatname_lc}".
-
-[[key-option-redis]]
-===== `key`
-
-The name of the Redis list or channel the events are published to. If not
-configured, the value of the `index` setting is used.
-
-You can set the key dynamically by using a format string to access any event
-field. For example, this configuration uses a custom field, `fields.list`, to
-set the Redis list key. If `fields.list` is missing, `fallback` is used:
-
-["source","yaml"]
-------------------------------------------------------------------------------
-output.redis:
-  hosts: ["localhost"]
-  key: "%{[fields.list]:fallback}"
-------------------------------------------------------------------------------
-
-
-TIP: To learn how to add custom fields to events, see the
-<> option.
-
-See the <> setting for other ways to set the key
-dynamically.
-
-[[keys-option-redis]]
-===== `keys`
-
-An array of key selector rules. Each rule specifies the `key` to use for events
-that match the rule. During publishing, {beatname_uc} uses the first matching
-rule in the array. Rules can contain conditionals, format string-based fields,
-and name mappings. If the `keys` setting is missing or no rule matches, the
-<> setting is used.
-
-Rule settings:
-
-*`key`*:: The key format string to use. If this string contains field
-references, such as `%{[fields.name]}`, the fields must exist, or the rule
-fails.
-
-*`mappings`*:: A dictionary that takes the value returned by `key` and maps it to
-a new name.
-
-*`default`*:: The default string value to use if `mappings` does not find a match.
-
-*`when`*:: A condition that must succeed in order to execute the current rule.
-ifndef::no-processors[]
-All the <> supported by processors are also supported
-here.
-endif::no-processors[]
-
-Example `keys` settings:
-
-["source","yaml"]
-------------------------------------------------------------------------------
-output.redis:
-  hosts: ["localhost"]
-  key: "default_list"
-  keys:
-    - key: "info_list"   # send to info_list if `message` field contains INFO
-      when.contains:
-        message: "INFO"
-    - key: "debug_list"  # send to debug_list if `message` field contains DEBUG
-      when.contains:
-        message: "DEBUG"
-    - key: "%{[fields.list]}"
-      mappings:
-        http: "frontend_list"
-        nginx: "frontend_list"
-        mysql: "backend_list"
-------------------------------------------------------------------------------
-
-===== `password`
-
-The password to authenticate with. The default is no authentication.
-
-===== `db`
-
-The Redis database number where the events are published. The default is 0.
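-
-Putting several of the options above together, a hedged sketch (the host,
-password, key name, and database number are placeholders):
-
-["source","yaml"]
-------------------------------------------------------------------------------
-output.redis:
-  hosts: ["rediss://redis.example.com:6379"]  # rediss:// enforces TLS
-  password: "my_password"
-  key: "apm"                                  # list or channel to publish to
-  db: 1                                       # non-default logical database
-  timeout: 5
-------------------------------------------------------------------------------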
-
-===== `datatype`
-
-The Redis data type to use for publishing events. If the data type is `list`, the
-Redis `RPUSH` command is used and all events are added to the list with the key defined under `key`.
-If the data type is `channel`, the Redis `PUBLISH` command is used, which means that all events
-are pushed to the pub/sub mechanism of Redis. The name of the channel is the one defined under `key`.
-The default value is `list`.
-
-===== `codec`
-
-Output codec configuration. If the `codec` section is missing, events will be JSON encoded.
-
-See <> for more information.
-
-===== `worker`
-
-The number of workers to use for each host configured to publish events to Redis. Use this setting along with the
-`loadbalance` option. For example, if you have 2 hosts and 3 workers, in total 6 workers are started (3 for each host).
-
-===== `loadbalance`
-
-If set to true and multiple hosts or workers are configured, the output plugin load balances published events onto all
-Redis hosts. If set to false, the output plugin sends all events to only one host (determined at random) and will switch
-to another host if the currently selected one becomes unreachable. The default value is true.
-
-===== `timeout`
-
-The Redis connection timeout in seconds. The default is 5 seconds.
-
-===== `backoff.init`
-
-The number of seconds to wait before trying to reconnect to Redis after
-a network error. After waiting `backoff.init` seconds, {beatname_uc} tries to
-reconnect. If the attempt fails, the backoff timer is increased exponentially up
-to `backoff.max`. After a successful connection, the backoff timer is reset. The
-default is `1s`.
-
-===== `backoff.max`
-
-The maximum number of seconds to wait before attempting to connect to
-Redis after a network error. The default is `60s`.
-
-===== `max_retries`
-
-ifdef::ignores_max_retries[]
-{beatname_uc} ignores the `max_retries` setting and retries indefinitely.
-endif::[]
-
-ifndef::ignores_max_retries[]
-The number of times to retry publishing an event after a publishing failure.
-After the specified number of retries, the events are typically dropped.
-
-Set `max_retries` to a value less than 0 to retry until all events are published.
-
-The default is 3.
-endif::[]
-
-
-===== `bulk_max_size`
-
-The maximum number of events to bulk in a single Redis request or pipeline. The default is 2048.
-
-If the Beat sends single events, the events are collected into batches. If the
-Beat publishes a large batch of events (larger than the value specified by
-`bulk_max_size`), the batch is split.
-
-Specifying a larger batch size can improve performance by lowering the overhead
-of sending events. However, big batch sizes can also increase processing times,
-which might result in API errors, killed connections, timed-out publishing
-requests, and, ultimately, lower throughput.
-
-Setting `bulk_max_size` to values less than or equal to 0 disables the
-splitting of batches. When splitting is disabled, the queue decides on the
-number of events to be contained in a batch.
-
-===== `ssl`
-
-Configuration options for SSL parameters like the root CA for Redis connections
-guarded by SSL proxies (for example https://www.stunnel.org[stunnel]). See
-<> for more information.
-
-===== `proxy_url`
-
-The URL of the SOCKS5 proxy to use when connecting to the Redis servers. The
-value must be a URL with a scheme of `socks5://`. You cannot use a web proxy
-because the protocol used to communicate with Redis is not based on HTTP.
- -If the SOCKS5 proxy server requires client authentication, you can embed -a username and password in the URL. - -When using a proxy, hostnames are resolved on the proxy server instead of on the -client. You can change this behavior by setting the -<> option. - -[[redis-proxy-use-local-resolver]] -===== `proxy_use_local_resolver` - -This option determines whether Redis hostnames are resolved locally when using a proxy. -The default value is false, which means that name resolution occurs on the proxy server. diff --git a/docs/legacy/data-ingestion.asciidoc b/docs/legacy/data-ingestion.asciidoc deleted file mode 100644 index e376728fece..00000000000 --- a/docs/legacy/data-ingestion.asciidoc +++ /dev/null @@ -1,85 +0,0 @@ -[[tune-data-ingestion]] -== Tune data ingestion - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, please see <> instead. - -This section explains how to adapt data ingestion according to your needs. - -* <> -* <> - - -[[tune-apm-server]] -=== Tune APM Server - -++++ -APM Server -++++ - -IMPORTANT: {deprecation-notice-data} - -* <> -* <> -* <> - -[[add-apm-server-instances]] -[float] -==== Add APM Server instances - -If the APM Server cannot process data quickly enough, -you will see request timeouts. - -One way to solve this problem is to increase processing power. -This can be done by either migrating your APM Server to a more powerful machine -or adding more APM Server instances. -Having several instances will also increase <>. - -[[reduce-payload-size]] -[float] -==== Reduce the payload size - -Large payloads may result in request timeouts. -You can reduce the payload size by decreasing the flush interval in the agents. -This will cause agents to send smaller and more frequent requests. - -Optionally you can also <> or <>. - -Read more in the {apm-agents-ref}/index.html[agents documentation]. - -[[adjust-event-rate]] -[float] -==== Adjust anonymous auth rate limit - -Agents make use of long running requests and flush as many events over a single request as possible. -Thus, the rate limiter for anonymous authentication is bound to the number of _events_ sent per second, per IP. - -If the event rate limit is hit while events on an established request are sent, the request is not immediately terminated. The intake of events is only throttled to <>, which means that events are queued and processed slower. Only when the allowed buffer queue is also full, does the request get terminated with a `429 - rate limit exceeded` HTTP response. If an agent tries to establish a new request, but the rate limit is already hit, a `429` will be sent immediately. - -Increasing the <> default value will help avoid `rate limit exceeded` errors. - -[[tune-es]] -=== Tune {es} - -++++ -{es} -++++ - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, please see <> instead. - -The {es} reference provides insight on tuning {es}. 
- -{ref}/tune-for-indexing-speed.html[Tune for indexing speed] provides information on: - -* Refresh interval -* Disabling swapping -* Optimizing file system cache -* Considerations regarding faster hardware -* Setting the indexing buffer size - -{ref}/tune-for-disk-usage.html[Tune for disk usage] provides information on: - -* Disabling unneeded features -* Shard size -* Shrink index diff --git a/docs/legacy/error-api.asciidoc b/docs/legacy/error-api.asciidoc deleted file mode 100644 index 776f2de1bc0..00000000000 --- a/docs/legacy/error-api.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -[[error-api]] -=== Errors - -An error or a logged error message captured by an agent occurring in a monitored service. - -[float] -[[error-schema]] -==== Error Schema - -APM Server uses JSON Schema to validate requests. The specification for errors is defined on -{github_repo_link}/docs/spec/v2/error.json[GitHub] and included below: - -[source,json] ----- -include::../spec/v2/error.json[] ----- diff --git a/docs/legacy/error-indices.asciidoc b/docs/legacy/error-indices.asciidoc deleted file mode 100644 index 4b7a74ea5db..00000000000 --- a/docs/legacy/error-indices.asciidoc +++ /dev/null @@ -1,13 +0,0 @@ -[[error-indices]] -== Example error documents - -++++ -Error documents -++++ - -This example shows what error documents can look like when indexed in {es}: - -[source,json] ----- -include::../data/elasticsearch/generated/errors.json[] ----- diff --git a/docs/legacy/events-api.asciidoc b/docs/legacy/events-api.asciidoc deleted file mode 100644 index 7add657c429..00000000000 --- a/docs/legacy/events-api.asciidoc +++ /dev/null @@ -1,130 +0,0 @@ -[[events-api]] -== Events Intake API - -++++ -Events intake -++++ - -IMPORTANT: {deprecation-notice-api} -If you've already upgraded, see <>. - -NOTE: Most users do not need to interact directly with the events intake API. - -The events intake API is what we call the internal protocol that APM agents use to talk to the APM Server. -Agents communicate with the Server by sending events -- captured pieces of information -- in an HTTP request. -Events can be: - -* Transactions -* Spans -* Errors -* Metrics - -Each event is sent as its own line in the HTTP request body. -This is known as http://ndjson.org[newline delimited JSON (NDJSON)]. - -With NDJSON, agents can open an HTTP POST request and use chunked encoding to stream events to the APM Server -as soon as they are recorded in the agent. -This makes it simple for agents to serialize each event to a stream of newline delimited JSON. -The APM Server also treats the HTTP body as a compressed stream and thus reads and handles each event independently. - -See the {apm-overview-ref-v}/apm-data-model.html[APM Data Model] to learn more about the different types of events. - -[[events-api-endpoint]] -[float] -=== Endpoint - -Send an `HTTP POST` request to the APM Server `intake/v2/events` endpoint: - -[source,bash] ------------------------------------------------------------- -http(s)://{hostname}:{port}/intake/v2/events ------------------------------------------------------------- - -For <> send an `HTTP POST` request to the APM Server `intake/v2/rum/events` endpoint instead: - -[source,bash] ------------------------------------------------------------- -http(s)://{hostname}:{port}/intake/v2/rum/events ------------------------------------------------------------- - -[[events-api-response]] -[float] -=== Response - -On success, the server will respond with a 202 Accepted status code and no body. 
-
-Keep in mind that events can succeed and fail independently of each other. Only if all events succeed does the server respond with a 202.
-
-[[events-api-errors]]
-[float]
-=== Errors
-
-There are two types of errors that the APM Server may return to an agent:
-
-* Event-related errors (typically validation errors)
-* Non-event-related errors
-
-The APM Server processes events one after the other.
-If an error is encountered while processing an event,
-the error encountered as well as the document causing the error are added to an internal array.
-The APM Server will only save 5 event-related errors.
-If it encounters more than 5 event-related errors,
-the additional errors will not be returned to the agent.
-Once all events have been processed,
-the error response is sent.
-
-Some errors, not relating to specific events,
-may terminate the request immediately.
-For example: IP rate limit reached, wrong metadata, etc.
-If at any point one of these errors is encountered,
-it is added to the internal array and immediately returned.
-
-An example error response might look something like this:
-
-[source,json]
-------------------------------------------------------------
-{
-  "errors": [
-    {
-      "message": "", <1>
-      "document": "" <2>
-    },{
-      "message": "",
-      "document": ""
-    },{
-      "message": "",
-      "document": ""
-    },{
-      "message": "too many requests" <3>
-    }
-  ],
-  "accepted": 2320 <4>
-}
-------------------------------------------------------------
-
-<1> An event-related error
-<2> The document causing the error
-<3> An immediately returned non-event-related error
-<4> The number of accepted events
-
-If you're developing an agent, these errors can be useful for debugging.
-
-[[events-api-schema-definition]]
-[float]
-=== Event API Schemas
-
-The APM Server uses a collection of JSON Schemas for validating requests to the intake API:
-
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-
-include::./metadata-api.asciidoc[]
-include::./transaction-api.asciidoc[]
-include::./span-api.asciidoc[]
-include::./error-api.asciidoc[]
-include::./metricset-api.asciidoc[]
-include::./example-intake-events.asciidoc[]
diff --git a/docs/legacy/example-intake-events.asciidoc b/docs/legacy/example-intake-events.asciidoc
deleted file mode 100644
index f7731ae9b3f..00000000000
--- a/docs/legacy/example-intake-events.asciidoc
+++ /dev/null
@@ -1,9 +0,0 @@
-[[example-intake-events]]
-=== Example Request Body
-
-A request body example containing one event for each currently supported event type.
-
-[source,json]
-----
-include::../data/intake-api/generated/events.ndjson[]
-----
diff --git a/docs/legacy/exploring-es-data.asciidoc b/docs/legacy/exploring-es-data.asciidoc
deleted file mode 100644
index 8b7c015668d..00000000000
--- a/docs/legacy/exploring-es-data.asciidoc
+++ /dev/null
@@ -1,97 +0,0 @@
-[[exploring-es-data]]
-= Explore data in {es}
-
-IMPORTANT: {deprecation-notice-data}
-If you've already upgraded, see <>.
-
-Elastic APM stores data for each {apm-overview-ref-v}/apm-data-model.html[event type]
-in separate indices. By default, <> is enabled and event data is stored using the following index naming patterns:
-
-["source","text"]
-----
-apm-%{[version]}-transaction-000001
-apm-%{[version]}-span-000001
-apm-%{[version]}-error-000001
-apm-%{[version]}-metric-000001
-apm-%{[version]}-sourcemap
-----
-
-If you've disabled {ilm-init} and are instead using daily indices, the default index naming pattern is:
-
-["source","text"]
-----
-apm-%{[version]}-transaction-%{+yyyy.MM.dd}
-apm-%{[version]}-span-%{+yyyy.MM.dd}
-apm-%{[version]}-error-%{+yyyy.MM.dd}
-apm-%{[version]}-metric-%{+yyyy.MM.dd}
-apm-%{[version]}-sourcemap
-----
-
-TIP: If your APM data is stored in a different format, you may be using an outdated `apm-server.yml` file. Update your `apm-server.yml` file to take advantage of the new index format.
-
-[float]
-[[sample-apm-document]]
-== Sample APM documents
-
-Sample documents for each of the APM event types are available on these pages:
-
-* <>
-* <>
-* <>
-* <>
-* <>
-
-[float]
-[[elasticsearch-query-examples]]
-== {es} query examples
-
-The following examples enable you to interact with {es}'s REST API.
-One possible way to do this is using {kib}'s
-{kibana-ref}/console-kibana.html[{dev-tools-app} console].
-
-Indices, templates, and index-level operations can also be managed via {kib}'s
-{kibana-ref}/managing-indices.html[Index management] panel.
-
-To see an overview of existing indices, run:
-["source","sh"]
-----
-GET _cat/indices/apm*
-----
-// CONSOLE
-
-To query all documents collected with a specific APM Server version:
-["source","sh",subs="attributes"]
-----
-GET apm-{version}-*/_search
-----
-// CONSOLE
-
-To query a specific event type, for example, transactions:
-["source","sh",subs="attributes"]
-----
-GET apm-*transaction-*/_search
-----
-// CONSOLE
-
-If you are interested in the _settings_ and _mappings_ of the Elastic APM indices,
-first run a query to find template names:
-
-["source","sh"]
-----
-GET _cat/templates/apm*
-----
-// CONSOLE
-
-Then, retrieve the specific template you are interested in:
-["source","sh"]
-----
-GET /_template/your-template-name
-----
-// CONSOLE
-
-
-include::./transaction-indices.asciidoc[]
-include::./span-indices.asciidoc[]
-include::./error-indices.asciidoc[]
-include::./metricset-indices.asciidoc[]
-include::./sourcemap-indices.asciidoc[]
diff --git a/docs/legacy/feature-roles.asciidoc b/docs/legacy/feature-roles.asciidoc
deleted file mode 100644
index f035ae87cce..00000000000
--- a/docs/legacy/feature-roles.asciidoc
+++ /dev/null
@@ -1,364 +0,0 @@
-[role="xpack"]
-[[feature-roles]]
-== Grant users access to secured resources
-
-IMPORTANT: {deprecation-notice-config}
-
-You can use role-based access control to grant users access to secured
-resources. The roles that you set up depend on your organization's security
-requirements and the minimum privileges required to use specific features.
-
-Typically, you need to create the following separate roles:
-
-* <>: To publish events collected by {beatname_uc}.
-* <>: One for sending monitoring
-information, and another for viewing it.
-* <>: To create and manage API keys.
-* <>: To view
-APM Agent central configurations.
-
-{es-security-features} provides {ref}/built-in-roles.html[built-in roles] that grant a
-subset of the privileges needed by APM users.
-When possible, assign users the built-in roles to minimize the effect of future changes on your security strategy.
-
-If no built-in role is available, you can assign users the privileges needed to accomplish a specific task.
-In general, there are three types of privileges you'll work with:
-
-* **{es} cluster privileges**: Manage the actions a user can perform against your cluster.
-* **{es} index privileges**: Control access to the data in specific indices in your cluster.
-* **{kib} space privileges**: Grant users write or read access to features and apps within {kib}.
-
-////
-*********************************** ***********************************
-*********************************** ***********************************
-////
-
-[[privileges-to-publish-events]]
-=== Grant privileges and roles needed for writing events
-
-++++
-Create a _writer_ user
-++++
-
-IMPORTANT: {deprecation-notice-config}
-
-APM users that publish events to {es} need privileges to write to APM data streams.
-
-[float]
-==== General writer role
-
-To grant an APM user the required privileges for writing events to {es}:
-
-. Create a *general writer role*, called something like `apm_writer`,
-that has the following privileges:
-+
-[options="header"]
-|====
-|Type | Privilege | Purpose
-
-|Index
-|`auto_configure` on `traces-apm*`, `logs-apm*`, and `metrics-apm*` indices
-|Permits auto-creation of indices and data streams
-
-|Index
-|`create_doc` on `traces-apm*`, `logs-apm*`, and `metrics-apm*` indices
-|Write events into {es}
-|====
-
-. If <> is enabled, additional privileges are required to read source maps.
-See {kibana-ref}/rum-sourcemap-api.html[RUM source map API] for more details.
-Assign these extra privileges to the *general writer role*.
-
-. Assign the *general writer role* to users who need to publish APM data.
-
-////
-*********************************** ***********************************
-*********************************** ***********************************
-////
-
-[[privileges-to-publish-monitoring]]
-=== Grant privileges and roles needed for monitoring
-
-++++
-Create a _monitoring_ user
-++++
-
-IMPORTANT: {deprecation-notice-config}
-
-{es-security-features} provides built-in users and roles for publishing and viewing monitoring data.
-The privileges and roles needed to publish monitoring data
-depend on the method used to collect that data.
-
-* <>
-** <>
-** <>
-* <>
-
-[float]
-[[privileges-to-publish-monitoring-write]]
-==== Publish monitoring data
-
-[IMPORTANT]
-====
-**{ecloud} users:** This section does not apply to our
-https://www.elastic.co/cloud/elasticsearch-service[hosted {ess}].
-Monitoring on {ecloud} is enabled by clicking the *Enable* button in the *Monitoring* panel.
-====
-
-[float]
-[[privileges-to-publish-monitoring-internal]]
-===== Internal collection
-
-If you're using <> to
-collect metrics about {beatname_uc}, {security-features} provides
-the +{beat_monitoring_user}+ {ref}/built-in-users.html[built-in user] and
-+{beat_monitoring_user}+ {ref}/built-in-roles.html[built-in role] to send
-monitoring information. You can use the built-in user if it's available in your
-environment, create a user who has the built-in role assigned,
-or create a user and manually assign the privileges needed to send monitoring
-information.
-
-If you use the built-in +{beat_monitoring_user}+ user,
-make sure you set the password before using it.
-
-If you don't use the +{beat_monitoring_user}+ user:
-
---
-.
Create a *monitoring role*, called something like -+{beat_default_index_prefix}_monitoring_writer+, that has the following privileges: -+ -[options="header"] -|==== -|Type | Privilege | Purpose - -|Index -|`create_index` on `.monitoring-beats-*` indices -|Create monitoring indices in {es} - -|Index -|`create_doc` on `.monitoring-beats-*` indices -|Write monitoring events into {es} -|==== -+ -. Assign the *monitoring role* to users who need to write monitoring data to {es}. --- - -[float] -[[privileges-to-publish-monitoring-metricbeat]] -===== {metricbeat} collection - -NOTE: When using {metricbeat} to collect metrics, -no roles or users need to be created with APM Server. -See <> -for complete details on setting up {metricbeat} collection. - -If you're <> to collect -metrics about {beatname_uc}, {security-features} provides the `remote_monitoring_user` -{ref}/built-in-users.html[built-in user], and the `remote_monitoring_collector` -and `remote_monitoring_agent` {ref}/built-in-roles.html[built-in roles] for -collecting and sending monitoring information. You can use the built-in user, if -it's available in your environment, or create a user who has the privileges -needed to collect and send monitoring information. - -If you use the built-in `remote_monitoring_user` user, -make sure you set the password before using it. - -If you don't use the `remote_monitoring_user` user: - --- -. Create a *monitoring user* on the production cluster who will collect and send monitoring -information. Assign the following roles to the *monitoring user*: -+ -[options="header"] -|==== -|Role | Purpose - -|`remote_monitoring_collector` -|Collect monitoring metrics from {beatname_uc} - -|`remote_monitoring_agent` -|Send monitoring data to the monitoring cluster -|==== --- - -[float] -[[privileges-to-publish-monitoring-view]] -==== View monitoring data - -To grant users the required privileges for viewing monitoring data: - -. Create a *monitoring role*, called something like -+{beat_default_index_prefix}_monitoring_viewer+, that has the following privileges: -+ -[options="header"] -|==== -|Type | Privilege | Purpose - -| Spaces -|`Read` on Stack monitoring -|Read-only access to the {stack-monitor-app} feature in {kib}. - -| Spaces -|`Read` on Dashboards -|Read-only access to the Dashboards feature in {kib}. -|==== -+ -. Assign the *monitoring role*, along with the following built-in roles, to users who -need to view monitoring data for {beatname_uc}: -+ -[options="header"] -|==== -|Role | Purpose - -|`monitoring_user` -|Grants access to monitoring indices for {beatname_uc} -|==== - -//// -*********************************** *********************************** -*********************************** *********************************** -//// - -[[privileges-api-key]] -=== Grant privileges and roles needed for API key management - -++++ -Create an _API key_ user -++++ - -IMPORTANT: {deprecation-notice-config} - -You can configure <> to authorize requests to APM Server. -To create an APM Server user with the required privileges for creating and managing API keys: - -. Create an **API key role**, called something like `apm_api_key`, -that has the following `cluster` level privileges: -+ -[options="header"] -|==== -| Privilege | Purpose - -|`manage_own_api_key` -|Allow {beatname_uc} to create, retrieve, and invalidate API keys -|==== - -. 
Depending on what the **API key role** will be used for,
-also assign the appropriate `apm` application-level privileges:
-+
-* To **receive Agent configuration**, assign `config_agent:read`.
-* To **ingest agent data**, assign `event:write`.
-* To **upload source maps**, assign `sourcemap:write`.
-
-. Assign the **API key role** to users who need to create and manage API keys.
-Users with this role can only create API keys that have the same or lower access rights.
-
-[float]
-[[privileges-api-key-example]]
-=== Example API key role
-
-The following example assigns the required cluster privileges,
-and the ingest agent data `apm` API key application privileges, to a role named `apm_api_key`:
-
-[source,kibana]
-----
-PUT _security/role/apm_api_key <1>
-{
-  "cluster": [
-    "manage_own_api_key" <2>
-  ],
-  "applications": [
-    {
-      "application": "apm",
-      "privileges": [
-        "event:write" <3>
-      ],
-      "resources": [
-        "*"
-      ]
-    }
-  ]
-}
-----
-<1> `apm_api_key` is the name of the role we're assigning these privileges to. Any name can be used.
-<2> Required cluster privileges.
-<3> Required for API keys that will be used to ingest agent events.
-
-
-////
-*********************************** ***********************************
-*********************************** ***********************************
-////
-
-[[privileges-agent-central-config]]
-=== Grant privileges and roles needed for APM Agent central configuration
-
-++++
-Create a _central config_ user
-++++
-
-IMPORTANT: {deprecation-notice-config}
-
-[[privileges-agent-central-config-server]]
-==== APM Server central configuration management
-
-APM Server acts as a proxy between your APM agents and the {apm-app}.
-The {apm-app} communicates any changed settings to APM Server so that your agents only need to poll the Server
-to determine which central configuration settings have changed.
-
-To grant an APM Server user the required privileges for managing central configuration,
-assign the user the following privileges:
-
-[options="header"]
-|====
-|Type | Privilege | Purpose
-
-| Spaces
-|`Read` on {beat_kib_app}
-|Allow {beatname_uc} to manage central configurations via the {beat_kib_app}
-|====
-
-TIP: Looking for the privileges and roles needed to use central configuration from the {apm-app} or the {apm-app} API?
-See {kibana-ref}/apm-app-central-config-user.html[{apm-app} central configuration user].
-
-////
-*********************************** ***********************************
-*********************************** ***********************************
-////
-
-// [[privileges-create-api-keys]]
-// === Grant privileges and roles needed to create APM Server API keys
-
-// ++++
-// Create an _APM API key_ user
-// ++++
-
-// CONTENT
-
-////
-*********************************** ***********************************
-*********************************** ***********************************
-////
-
-[[more-security-roles]]
-=== Additional APM users and roles
-
-IMPORTANT: {deprecation-notice-config}
-
-In addition to the {beatname_uc} users described in this documentation,
-you'll likely need to create users for other APM tasks:
-
-* An {kibana-ref}/apm-app-reader.html[APM reader], for {kib} users who need to view the {apm-app},
-or create and edit visualizations that access +{beat_default_index_prefix}-*+ data.
-* Various {kibana-ref}/apm-app-api-user.html[{apm-app} API users],
-for interacting with the APIs exposed by the {apm-app}.
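Tying the earlier sections together, the *general writer role* described under writing events can be created the same way as the example API key role above. The following is a sketch only; `apm_writer` is the example name suggested in that section, and the index patterns may need adjusting for your deployment:

[source,kibana]
----
PUT _security/role/apm_writer
{
  "indices": [
    {
      "names": [ "traces-apm*", "logs-apm*", "metrics-apm*" ],
      "privileges": [ "auto_configure", "create_doc" ] <1>
    }
  ]
}
----
<1> The two index privileges from the general writer role table: `auto_configure` permits auto-creation of indices and data streams, and `create_doc` permits writing events into {es}.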
- -[float] -[[learn-more-security]] -=== Learn more about users and roles - -Want to learn more about creating users and roles? See -{ref}/secure-cluster.html[Secure a cluster]. Also see: - -* {ref}/security-privileges.html[Security privileges] for a description of -available privileges -* {ref}/built-in-roles.html[Built-in roles] for a description of roles that -you can assign to users diff --git a/docs/legacy/field-name-changes.asciidoc b/docs/legacy/field-name-changes.asciidoc deleted file mode 100644 index b56c4e52723..00000000000 --- a/docs/legacy/field-name-changes.asciidoc +++ /dev/null @@ -1,61 +0,0 @@ -[frame="topbot",options="header"] -|====================== -|Old Field|New Field -|`beat.hostname` |`observer.hostname` -|`beat.name` |`observer.type` -|`beat.version` |`observer.version` -|`context.custom` |`error.custom` -|`context.db.instance` |`span.db.instance` -|`context.db.statement` |`span.db.statement` -|`context.db.type` |`span.db.type` -|`context.db.user` |`span.db.user.name` -|`context.http.method` |`span.http.method` -|`context.http.status_code` |`span.http.response.status_code` -|`context.http.url` |`span.http.url.original` -|`context.process.argv` |`process.args` -|`context.process.pid` |`process.pid` -|`context.process.ppid` |`process.ppid` -|`context.process.title` |`process.title` -|`context.request.body` |`http.request.body.original` -|`context.request.cookies` |`http.request.cookies` -|`context.request.env` |`http.request.env` -|`context.request.headers` |`http.request.headers` -|`context.request.http_version` |`http.version` -|`context.request.method` |`http.request.method` -|`context.request.socket` |`http.request.socket` -|`context.request.url.full` |`url.full` -|`context.request.url.hash` |`url.fragment` -|`context.request.url.hostname` |`url.domain` -|`context.request.url.pathname` |`url.path` -|`context.request.url.port` |`url.port` -|`context.request.url.protocol` |`url.scheme` -|`context.request.url.raw` |`url.original` -|`context.request.url.search` |`url.query` -|`context.response.finished` |`http.response.finished` -|`context.response.headers.content-type` |`http.response.headers.content-type` -|`context.response.headers_sent` |`http.response.headers_sent` -|`context.response.status_code` |`http.response.status_code` -|`context.service.agent.name` |`agent.name` -|`context.service.agent.version` |`agent.version` -|`context.service.environment` |`service.environment` -|`context.service.framework.name` |`service.framework.name` -|`context.service.framework.version` |`service.framework.version` -|`context.service.language.name` |`service.language.name` -|`context.service.language.version` |`service.language.version` -|`context.service.name` |`service.name` -|`context.service.runtime.name` |`service.runtime.name` -|`context.service.runtime.version` |`service.runtime.version` -|`context.service.version` |`service.version` -|`context.system.architecture` |`host.architecture` -|`context.system.hostname` |`host.hostname` -|`context.system.ip` |`host.ip` -|`context.system.platform` |`host.os.platform` -|`context.tags` |`labels` -|`context.user.email` |`user.email` -|`context.user.id` |`user.id` -|`context.user.ip` |`client.ip` -|`context.user.user-agent` |`user_agent.original` -|`context.user.username` |`user.name` -|`listening` |`observer.listening` -|====================== - diff --git a/docs/legacy/fields.asciidoc b/docs/legacy/fields.asciidoc deleted file mode 100644 index 9d0bd5fec9e..00000000000 --- a/docs/legacy/fields.asciidoc +++ /dev/null @@ -1,22956 
+0,0 @@ - -//// -This file is generated! See _meta/fields.yml and scripts/generate_fields_docs.py -//// - -[[exported-fields]] -= Exported fields - - -This document describes the fields that are exported by Apm-Server. They are -grouped in the following categories: - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - - -[[exported-fields-apm-application-metrics]] -== APM Application Metrics fields - -APM application metrics. - - -*`histogram`*:: -+ --- -type: histogram - --- - -[[exported-fields-apm-error]] -== APM Error fields - -Error-specific data for APM - - -*`processor.name`*:: -+ --- -Processor name. - -type: keyword - --- - -*`processor.event`*:: -+ --- -Processor event. - -type: keyword - --- - - -*`timestamp.us`*:: -+ --- -Timestamp of the event in microseconds since Unix epoch. - - -type: long - --- - -*`message`*:: -+ --- -The original error message. - -type: text - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== url - -A complete Url, with scheme, host and path. - - - -*`url.scheme`*:: -+ --- -The protocol of the request, e.g. "https:". - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.full`*:: -+ --- -The full, possibly agent-assembled URL of the request, e.g https://example.com:443/search?q=elasticsearch#top. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.domain`*:: -+ --- -The hostname of the request, e.g. "example.com". - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.port`*:: -+ --- -The port of the request, e.g. 443. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.path`*:: -+ --- -The path of the request, e.g. "/search". - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.query`*:: -+ --- -The query string of the request, e.g. "q=elasticsearch". - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.fragment`*:: -+ --- -A fragment specifying a location in a web page , e.g. "top". - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`http.version`*:: -+ --- -The http version of the request leading to this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`http.request.method`*:: -+ --- -The http method of the request leading to this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`http.request.headers`*:: -+ --- -The canonical headers of the monitored HTTP request. - - -type: object - -Object is not enabled. - --- - -*`http.request.referrer`*:: -+ --- -Referrer for this HTTP request. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`http.response.status_code`*:: -+ --- -The status code of the HTTP response. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`http.response.finished`*:: -+ --- -Used by the Node agent to indicate when in the response life cycle an error has occurred. - - -type: boolean - --- - -*`http.response.headers`*:: -+ --- -The canonical headers of the monitored HTTP response. - - -type: object - -Object is not enabled. - --- - -*`labels`*:: -+ --- -A flat mapping of user-defined labels with string, boolean or number values. - - -type: object - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== service - -Service fields. - - - -*`service.name`*:: -+ --- -Immutable name of the service emitting this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.version`*:: -+ --- -Version of the service emitting this event. 
- - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.environment`*:: -+ --- -Service environment. - - -type: keyword - --- - - -*`service.node.name`*:: -+ --- -Unique meaningful name of the service node. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`service.language.name`*:: -+ --- -Name of the programming language used. - - -type: keyword - --- - -*`service.language.version`*:: -+ --- -Version of the programming language used. - - -type: keyword - --- - - -*`service.runtime.name`*:: -+ --- -Name of the runtime used. - - -type: keyword - --- - -*`service.runtime.version`*:: -+ --- -Version of the runtime used. - - -type: keyword - --- - - -*`service.framework.name`*:: -+ --- -Name of the framework used. - - -type: keyword - --- - -*`service.framework.version`*:: -+ --- -Version of the framework used. - - -type: keyword - --- - - -*`transaction.id`*:: -+ --- -The transaction ID. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`transaction.sampled`*:: -+ --- -Transactions that are 'sampled' will include all available information. Transactions that are not sampled will not have spans or context. - - -type: boolean - --- - -*`transaction.type`*:: -+ --- -Keyword of specific relevance in the service's domain (eg. 'request', 'backgroundjob', etc) - - -type: keyword - --- - -*`transaction.name`*:: -+ --- -Generic designation of a transaction in the scope of a single service (eg. 'GET /users/:id'). - - -type: keyword - --- - -*`transaction.name.text`*:: -+ --- -type: text - --- - - -*`trace.id`*:: -+ --- -The ID of the trace to which the event belongs to. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`parent.id`*:: -+ --- -The ID of the parent event. - - -type: keyword - --- - - -*`agent.name`*:: -+ --- -Name of the agent used. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.version`*:: -+ --- -Version of the agent used. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.ephemeral_id`*:: -+ --- -The Ephemeral ID identifies a running process. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== container - -Container fields are used for meta information about the specific container that is the source of information. These fields help correlate data based containers from any runtime. - - - -*`container.id`*:: -+ --- -Unique container id. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== kubernetes - -Kubernetes metadata reported by agents - - - -*`kubernetes.namespace`*:: -+ --- -Kubernetes namespace - - -type: keyword - --- - - -*`kubernetes.node.name`*:: -+ --- -Kubernetes node name - - -type: keyword - --- - - -*`kubernetes.pod.name`*:: -+ --- -Kubernetes pod name - - -type: keyword - --- - -*`kubernetes.pod.uid`*:: -+ --- -Kubernetes Pod UID - - -type: keyword - --- - -[float] -=== network - -Optional network fields - - - -[float] -=== connection - -Network connection details - - - -*`network.connection.type`*:: -+ --- -Network connection type, eg. "wifi", "cell" - - -type: keyword - --- - -*`network.connection.subtype`*:: -+ --- -Detailed network connection sub-type, e.g. "LTE", "CDMA" - - -type: keyword - --- - -[float] -=== carrier - -Network operator - - - -*`network.carrier.name`*:: -+ --- -Carrier name, eg. Vodafone, T-Mobile, etc. 
- - -type: keyword - --- - -*`network.carrier.mcc`*:: -+ --- -Mobile country code - - -type: keyword - --- - -*`network.carrier.mnc`*:: -+ --- -Mobile network code - - -type: keyword - --- - -*`network.carrier.icc`*:: -+ --- -ISO country code, eg. US - - -type: keyword - --- - -[float] -=== host - -Optional host fields. - - - -*`host.architecture`*:: -+ --- -The architecture of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.hostname`*:: -+ --- -The hostname of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.name`*:: -+ --- -Name of the host the event was recorded on. It can contain same information as host.hostname or a name specified by the user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.ip`*:: -+ --- -IP of the host that records the event. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== os - -The OS fields contain information about the operating system. - - - -*`host.os.platform`*:: -+ --- -The platform of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== process - -Information pertaining to the running process where the data was collected - - - -*`process.args`*:: -+ --- -Process arguments. May be filtered to protect sensitive information. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pid`*:: -+ --- -Numeric process ID of the service process. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.ppid`*:: -+ --- -Numeric ID of the service's parent process. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.title`*:: -+ --- -Service process title. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`observer.listening`*:: -+ --- -Address the server is listening on. - - -type: keyword - --- - -*`observer.hostname`*:: -+ --- -Hostname of the APM Server. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.version`*:: -+ --- -APM Server version. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.type`*:: -+ --- -The type will be set to `apm-server`. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.id`*:: -+ --- -Unique identifier of the APM Server. - - -type: keyword - --- - -*`observer.ephemeral_id`*:: -+ --- -Ephemeral identifier of the APM Server. - - -type: keyword - --- - - -*`user.name`*:: -+ --- -The username of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.domain`*:: -+ --- -Domain of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.id`*:: -+ --- -Identifier of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.email`*:: -+ --- -Email of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`client.domain`*:: -+ --- -Client domain. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.ip`*:: -+ --- -IP address of the client of a recorded event. This is typically obtained from a request's X-Forwarded-For or the X-Real-IP header or falls back to a given configuration for remote address. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.port`*:: -+ --- -Port of the client. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`source.domain`*:: -+ --- -Source domain. 
- - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.ip`*:: -+ --- -IP address of the source of a recorded event. This is typically obtained from a request's X-Forwarded-For or the X-Real-IP header or falls back to a given configuration for remote address. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.port`*:: -+ --- -Port of the source. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== destination - -Destination fields describe details about the destination of a packet/event. -Destination fields are usually populated in conjunction with source fields. - - -*`destination.address`*:: -+ --- -Some event destination addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.ip`*:: -+ --- -IP addess of the destination. Can be one of multiple IPv4 or IPv6 addresses. - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.port`*:: -+ --- -Port of the destination. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== user_agent - -The user_agent fields normally come from a browser request. They often show up in web service logs coming from the parsed user agent string. - - - -*`user_agent.original`*:: -+ --- -Unparsed version of the user_agent. - - -type: keyword - -example: Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.original.text`*:: -+ --- -Software agent acting in behalf of a user, eg. a web browser / OS combination. - - -type: text - --- - -*`user_agent.name`*:: -+ --- -Name of the user agent. - - -type: keyword - -example: Safari - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.version`*:: -+ --- -Version of the user agent. - - -type: keyword - -example: 12.0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== device - -Information concerning the device. - - - -*`user_agent.device.name`*:: -+ --- -Name of the device. - - -type: keyword - -example: iPhone - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== os - -The OS fields contain information about the operating system. - - - -*`user_agent.os.platform`*:: -+ --- -Operating system platform (such centos, ubuntu, windows). - - -type: keyword - -example: darwin - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.name`*:: -+ --- -Operating system name, without the version. - - -type: keyword - -example: Mac OS X - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.full`*:: -+ --- -Operating system name, including the version or code name. - - -type: keyword - -example: Mac OS Mojave - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.family`*:: -+ --- -OS family (such as redhat, debian, freebsd, windows). - - -type: keyword - -example: debian - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.version`*:: -+ --- -Operating system version as a raw string. - - -type: keyword - -example: 10.14.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.kernel`*:: -+ --- -Operating system kernel version as a raw string. - - -type: keyword - -example: 4.4.0-112-generic - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -[float] -=== cloud - -Cloud metadata reported by agents - - - - -*`cloud.account.id`*:: -+ --- -Cloud account ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.account.name`*:: -+ --- -Cloud account name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.availability_zone`*:: -+ --- -Cloud availability zone name - -type: keyword - -example: us-east1-a - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.instance.id`*:: -+ --- -Cloud instance/machine ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.instance.name`*:: -+ --- -Cloud instance/machine name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.machine.type`*:: -+ --- -Cloud instance/machine type - -type: keyword - -example: t2.medium - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.project.id`*:: -+ --- -Cloud project ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.project.name`*:: -+ --- -Cloud project name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.provider`*:: -+ --- -Cloud provider name - -type: keyword - -example: gcp - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.region`*:: -+ --- -Cloud region name - -type: keyword - -example: us-east1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.service.name`*:: -+ --- -Cloud service name, intended to distinguish services running on different platforms within a provider. - - -type: keyword - --- - -[float] -=== error - -Data captured by an agent representing an event occurring in a monitored service. - - - -*`error.id`*:: -+ --- -The ID of the error. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`error.culprit`*:: -+ --- -Function call which was the primary perpetrator of this event. - -type: keyword - --- - -*`error.grouping_key`*:: -+ --- -Hash of select properties of the logged error for grouping purposes. - - -type: keyword - --- - -*`error.grouping_name`*:: -+ --- -Name to associate with an error group. Errors belonging to the same group (same grouping_key) may have differing values for grouping_name. Consumers may choose one arbitrarily. - - -type: keyword - --- - -[float] -=== exception - -Information about the originally thrown error. - - - -*`error.exception.code`*:: -+ --- -The error code set when the error happened, e.g. database error code. - -type: keyword - --- - -*`error.exception.message`*:: -+ --- -The original error message. - -type: text - --- - -*`error.exception.module`*:: -+ --- -The module namespace of the original error. - -type: keyword - --- - -*`error.exception.type`*:: -+ --- -The type of the original error, e.g. the Java exception class name. - -type: keyword - --- - -*`error.exception.handled`*:: -+ --- -Indicator whether the error was caught somewhere in the code or not. - -type: boolean - --- - -[float] -=== log - -Additional information added by logging the error. - - - -*`error.log.level`*:: -+ --- -The severity of the record. - -type: keyword - --- - -*`error.log.logger_name`*:: -+ --- -The name of the logger instance used. - -type: keyword - --- - -*`error.log.message`*:: -+ --- -The additionally logged error message. - -type: text - --- - -*`error.log.param_message`*:: -+ --- -A parametrized message. E.g. 'Could not connect to %s'. The property message is still required, and should be equal to the param_message, but with placeholders replaced. In some situations the param_message is used to group errors together. 
- - -type: keyword - --- - -[[exported-fields-apm-profile]] -== APM Profile fields - -Profiling-specific data for APM. - - -*`processor.name`*:: -+ --- -Processor name. - -type: keyword - --- - -*`processor.event`*:: -+ --- -Processor event. - -type: keyword - --- - - -*`timestamp.us`*:: -+ --- -Timestamp of the event in microseconds since Unix epoch. - - -type: long - --- - -*`labels`*:: -+ --- -A flat mapping of user-defined labels with string, boolean or number values. - - -type: object - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== service - -Service fields. - - - -*`service.name`*:: -+ --- -Immutable name of the service emitting this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.version`*:: -+ --- -Version of the service emitting this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.environment`*:: -+ --- -Service environment. - - -type: keyword - --- - - -*`service.node.name`*:: -+ --- -Unique meaningful name of the service node. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`service.language.name`*:: -+ --- -Name of the programming language used. - - -type: keyword - --- - -*`service.language.version`*:: -+ --- -Version of the programming language used. - - -type: keyword - --- - - -*`service.runtime.name`*:: -+ --- -Name of the runtime used. - - -type: keyword - --- - -*`service.runtime.version`*:: -+ --- -Version of the runtime used. - - -type: keyword - --- - - -*`service.framework.name`*:: -+ --- -Name of the framework used. - - -type: keyword - --- - -*`service.framework.version`*:: -+ --- -Version of the framework used. - - -type: keyword - --- - - -*`agent.name`*:: -+ --- -Name of the agent used. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.version`*:: -+ --- -Version of the agent used. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.ephemeral_id`*:: -+ --- -The Ephemeral ID identifies a running process. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== container - -Container fields are used for meta information about the specific container that is the source of information. These fields help correlate data based containers from any runtime. - - - -*`container.id`*:: -+ --- -Unique container id. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== network - -Optional network fields - - - -[float] -=== connection - -Network connection details - - - -*`network.connection.type`*:: -+ --- -Network connection type, eg. "wifi", "cell" - - -type: keyword - --- - -*`network.connection.subtype`*:: -+ --- -Detailed network connection sub-type, e.g. "LTE", "CDMA" - - -type: keyword - --- - -[float] -=== carrier - -Network operator - - - -*`network.carrier.name`*:: -+ --- -Carrier name, eg. Vodafone, T-Mobile, etc. - - -type: keyword - --- - -*`network.carrier.mcc`*:: -+ --- -Mobile country code - - -type: keyword - --- - -*`network.carrier.mnc`*:: -+ --- -Mobile network code - - -type: keyword - --- - -*`network.carrier.icc`*:: -+ --- -ISO country code, eg. 
US - - -type: keyword - --- - -[float] -=== kubernetes - -Kubernetes metadata reported by agents - - - -*`kubernetes.namespace`*:: -+ --- -Kubernetes namespace - - -type: keyword - --- - - -*`kubernetes.node.name`*:: -+ --- -Kubernetes node name - - -type: keyword - --- - - -*`kubernetes.pod.name`*:: -+ --- -Kubernetes pod name - - -type: keyword - --- - -*`kubernetes.pod.uid`*:: -+ --- -Kubernetes Pod UID - - -type: keyword - --- - -[float] -=== host - -Optional host fields. - - - -*`host.architecture`*:: -+ --- -The architecture of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.hostname`*:: -+ --- -The hostname of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.name`*:: -+ --- -Name of the host the event was recorded on. It can contain same information as host.hostname or a name specified by the user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.ip`*:: -+ --- -IP of the host that records the event. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== os - -The OS fields contain information about the operating system. - - - -*`host.os.platform`*:: -+ --- -The platform of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== process - -Information pertaining to the running process where the data was collected - - - -*`process.args`*:: -+ --- -Process arguments. May be filtered to protect sensitive information. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pid`*:: -+ --- -Numeric process ID of the service process. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.ppid`*:: -+ --- -Numeric ID of the service's parent process. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.title`*:: -+ --- -Service process title. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`observer.listening`*:: -+ --- -Address the server is listening on. - - -type: keyword - --- - -*`observer.hostname`*:: -+ --- -Hostname of the APM Server. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.version`*:: -+ --- -APM Server version. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.type`*:: -+ --- -The type will be set to `apm-server`. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.id`*:: -+ --- -Unique identifier of the APM Server. - - -type: keyword - --- - -*`observer.ephemeral_id`*:: -+ --- -Ephemeral identifier of the APM Server. - - -type: keyword - --- - - -*`user.name`*:: -+ --- -The username of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.id`*:: -+ --- -Identifier of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.email`*:: -+ --- -Email of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`client.domain`*:: -+ --- -Client domain. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.ip`*:: -+ --- -IP address of the client of a recorded event. This is typically obtained from a request's X-Forwarded-For or the X-Real-IP header or falls back to a given configuration for remote address. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.port`*:: -+ --- -Port of the client. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - - -*`source.domain`*:: -+ --- -Source domain. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.ip`*:: -+ --- -IP address of the source of a recorded event. This is typically obtained from a request's X-Forwarded-For or the X-Real-IP header or falls back to a given configuration for remote address. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.port`*:: -+ --- -Port of the source. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== destination - -Destination fields describe details about the destination of a packet/event. -Destination fields are usually populated in conjunction with source fields. - - -*`destination.address`*:: -+ --- -Some event destination addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.ip`*:: -+ --- -IP addess of the destination. Can be one of multiple IPv4 or IPv6 addresses. - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.port`*:: -+ --- -Port of the destination. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== user_agent - -The user_agent fields normally come from a browser request. They often show up in web service logs coming from the parsed user agent string. - - - -*`user_agent.original`*:: -+ --- -Unparsed version of the user_agent. - - -type: keyword - -example: Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.original.text`*:: -+ --- -Software agent acting in behalf of a user, eg. a web browser / OS combination. - - -type: text - --- - -*`user_agent.name`*:: -+ --- -Name of the user agent. - - -type: keyword - -example: Safari - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.version`*:: -+ --- -Version of the user agent. - - -type: keyword - -example: 12.0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== device - -Information concerning the device. - - - -*`user_agent.device.name`*:: -+ --- -Name of the device. - - -type: keyword - -example: iPhone - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== os - -The OS fields contain information about the operating system. - - - -*`user_agent.os.platform`*:: -+ --- -Operating system platform (such centos, ubuntu, windows). - - -type: keyword - -example: darwin - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.name`*:: -+ --- -Operating system name, without the version. - - -type: keyword - -example: Mac OS X - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.full`*:: -+ --- -Operating system name, including the version or code name. - - -type: keyword - -example: Mac OS Mojave - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.family`*:: -+ --- -OS family (such as redhat, debian, freebsd, windows). - - -type: keyword - -example: debian - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.version`*:: -+ --- -Operating system version as a raw string. - - -type: keyword - -example: 10.14.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.kernel`*:: -+ --- -Operating system kernel version as a raw string. - - -type: keyword - -example: 4.4.0-112-generic - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -[float] -=== cloud - -Cloud metadata reported by agents - - - - -*`cloud.account.id`*:: -+ --- -Cloud account ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.account.name`*:: -+ --- -Cloud account name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.availability_zone`*:: -+ --- -Cloud availability zone name - -type: keyword - -example: us-east1-a - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.instance.id`*:: -+ --- -Cloud instance/machine ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.instance.name`*:: -+ --- -Cloud instance/machine name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.machine.type`*:: -+ --- -Cloud instance/machine type - -type: keyword - -example: t2.medium - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.project.id`*:: -+ --- -Cloud project ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.project.name`*:: -+ --- -Cloud project name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.provider`*:: -+ --- -Cloud provider name - -type: keyword - -example: gcp - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.region`*:: -+ --- -Cloud region name - -type: keyword - -example: us-east1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.service.name`*:: -+ --- -Cloud service name, intended to distinguish services running on different platforms within a provider. - - -type: keyword - --- - - -*`profile.id`*:: -+ --- -Unique ID for the profile. All samples within a profile will have the same profile ID. - - -type: keyword - --- - -*`profile.duration`*:: -+ --- -Duration of the profile, in nanoseconds. All samples within a profile will have the same duration. To aggregate durations, you should first group by the profile ID. - - -type: long - --- - - -*`profile.cpu.ns`*:: -+ --- -Amount of CPU time profiled, in nanoseconds. - - -type: long - --- - - -*`profile.wall.us`*:: -+ --- -Amount of wall time profiled, in microseconds. - - -type: long - --- - - -*`profile.samples.count`*:: -+ --- -Number of profile samples for the profiling period. - - -type: long - --- - - -*`profile.alloc_objects.count`*:: -+ --- -Number of objects allocated since the process started. - - -type: long - --- - - -*`profile.alloc_space.bytes`*:: -+ --- -Amount of memory allocated, in bytes, since the process started. - - -type: long - --- - - -*`profile.inuse_objects.count`*:: -+ --- -Number of objects allocated and currently in use. - - -type: long - --- - - -*`profile.inuse_space.bytes`*:: -+ --- -Amount of memory allocated, in bytes, and currently in use. - - -type: long - --- - - -*`profile.top.id`*:: -+ --- -Unique ID for the top stack frame in the context of its callers. - - -type: keyword - --- - -*`profile.top.function`*:: -+ --- -Function name for the top stack frame. - - -type: keyword - --- - -*`profile.top.filename`*:: -+ --- -Source code filename for the top stack frame. - - -type: keyword - --- - -*`profile.top.line`*:: -+ --- -Source code line number for the top stack frame. - - -type: long - --- - - -*`profile.stack.id`*:: -+ --- -Unique ID for a stack frame in the context of its callers. - - -type: keyword - --- - -*`profile.stack.function`*:: -+ --- -Function name for a stack frame. - - -type: keyword - --- - -*`profile.stack.filename`*:: -+ --- -Source code filename for a stack frame. - - -type: keyword - --- - -*`profile.stack.line`*:: -+ --- -Source code line number for a stack frame. 
- - -type: long - --- - -[[exported-fields-apm-sourcemap]] -== APM Sourcemap fields - -Sourcemap files enriched with metadata - - - -[float] -=== service - -Service fields. - - - -*`sourcemap.service.name`*:: -+ --- -The name of the service this sourcemap belongs to. - - -type: keyword - --- - -*`sourcemap.service.version`*:: -+ --- -Service version. - - -type: keyword - --- - -*`sourcemap.bundle_filepath`*:: -+ --- -Location of the sourcemap relative to the file requesting it. - - -type: keyword - --- - -[[exported-fields-apm-span]] -== APM Span fields - -Span-specific data for APM. - - -*`processor.name`*:: -+ --- -Processor name. - -type: keyword - --- - -*`processor.event`*:: -+ --- -Processor event. - -type: keyword - --- - - -*`timestamp.us`*:: -+ --- -Timestamp of the event in microseconds since Unix epoch. - - -type: long - --- - -*`labels`*:: -+ --- -A flat mapping of user-defined labels with string, boolean or number values. - - -type: object - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== service - -Service fields. - - - -*`service.name`*:: -+ --- -Immutable name of the service emitting this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.version`*:: -+ --- -Version of the service emitting this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.environment`*:: -+ --- -Service environment. - - -type: keyword - --- - - -*`service.node.name`*:: -+ --- -Unique meaningful name of the service node. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`service.language.name`*:: -+ --- -Name of the programming language used. - - -type: keyword - --- - -*`service.language.version`*:: -+ --- -Version of the programming language used. - - -type: keyword - --- - - -*`service.runtime.name`*:: -+ --- -Name of the runtime used. - - -type: keyword - --- - -*`service.runtime.version`*:: -+ --- -Version of the runtime used. - - -type: keyword - --- - - -*`service.framework.name`*:: -+ --- -Name of the framework used. - - -type: keyword - --- - -*`service.framework.version`*:: -+ --- -Version of the framework used. - - -type: keyword - --- - - -*`transaction.id`*:: -+ --- -The transaction ID. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`transaction.sampled`*:: -+ --- -Transactions that are 'sampled' will include all available information. Transactions that are not sampled will not have spans or context. - - -type: boolean - --- - -*`transaction.type`*:: -+ --- -Keyword of specific relevance in the service's domain (eg. 'request', 'backgroundjob', etc) - - -type: keyword - --- - -*`transaction.name`*:: -+ --- -Generic designation of a transaction in the scope of a single service (eg. 'GET /users/:id'). - - -type: keyword - --- - -*`transaction.name.text`*:: -+ --- -type: text - --- - - -*`trace.id`*:: -+ --- -The ID of the trace to which the event belongs to. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`parent.id`*:: -+ --- -The ID of the parent event. - - -type: keyword - --- - - -*`agent.name`*:: -+ --- -Name of the agent used. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.version`*:: -+ --- -Version of the agent used. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.ephemeral_id`*:: -+ --- -The Ephemeral ID identifies a running process. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -[float] -=== container - -Container fields are used for meta information about the specific container that is the source of information. These fields help correlate data based containers from any runtime. - - - -*`container.id`*:: -+ --- -Unique container id. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== kubernetes - -Kubernetes metadata reported by agents - - - -*`kubernetes.namespace`*:: -+ --- -Kubernetes namespace - - -type: keyword - --- - - -*`kubernetes.node.name`*:: -+ --- -Kubernetes node name - - -type: keyword - --- - - -*`kubernetes.pod.name`*:: -+ --- -Kubernetes pod name - - -type: keyword - --- - -*`kubernetes.pod.uid`*:: -+ --- -Kubernetes Pod UID - - -type: keyword - --- - -[float] -=== network - -Optional network fields - - - -[float] -=== connection - -Network connection details - - - -*`network.connection.type`*:: -+ --- -Network connection type, eg. "wifi", "cell" - - -type: keyword - --- - -*`network.connection.subtype`*:: -+ --- -Detailed network connection sub-type, e.g. "LTE", "CDMA" - - -type: keyword - --- - -[float] -=== carrier - -Network operator - - - -*`network.carrier.name`*:: -+ --- -Carrier name, eg. Vodafone, T-Mobile, etc. - - -type: keyword - --- - -*`network.carrier.mcc`*:: -+ --- -Mobile country code - - -type: keyword - --- - -*`network.carrier.mnc`*:: -+ --- -Mobile network code - - -type: keyword - --- - -*`network.carrier.icc`*:: -+ --- -ISO country code, eg. US - - -type: keyword - --- - -[float] -=== host - -Optional host fields. - - - -*`host.architecture`*:: -+ --- -The architecture of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.hostname`*:: -+ --- -The hostname of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.name`*:: -+ --- -Name of the host the event was recorded on. It can contain same information as host.hostname or a name specified by the user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.ip`*:: -+ --- -IP of the host that records the event. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== os - -The OS fields contain information about the operating system. - - - -*`host.os.platform`*:: -+ --- -The platform of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== process - -Information pertaining to the running process where the data was collected - - - -*`process.args`*:: -+ --- -Process arguments. May be filtered to protect sensitive information. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pid`*:: -+ --- -Numeric process ID of the service process. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.ppid`*:: -+ --- -Numeric ID of the service's parent process. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.title`*:: -+ --- -Service process title. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`observer.listening`*:: -+ --- -Address the server is listening on. - - -type: keyword - --- - -*`observer.hostname`*:: -+ --- -Hostname of the APM Server. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.version`*:: -+ --- -APM Server version. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.type`*:: -+ --- -The type will be set to `apm-server`. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`observer.id`*:: -+ --- -Unique identifier of the APM Server. - - -type: keyword - --- - -*`observer.ephemeral_id`*:: -+ --- -Ephemeral identifier of the APM Server. - - -type: keyword - --- - - -*`user.name`*:: -+ --- -The username of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.domain`*:: -+ --- -Domain of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.id`*:: -+ --- -Identifier of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.email`*:: -+ --- -Email of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`client.domain`*:: -+ --- -Client domain. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.ip`*:: -+ --- -IP address of the client of a recorded event. This is typically obtained from a request's X-Forwarded-For or the X-Real-IP header or falls back to a given configuration for remote address. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.port`*:: -+ --- -Port of the client. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`source.domain`*:: -+ --- -Source domain. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.ip`*:: -+ --- -IP address of the source of a recorded event. This is typically obtained from a request's X-Forwarded-For or the X-Real-IP header or falls back to a given configuration for remote address. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.port`*:: -+ --- -Port of the source. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== destination - -Destination fields describe details about the destination of a packet/event. -Destination fields are usually populated in conjunction with source fields. - - -*`destination.address`*:: -+ --- -Some event destination addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.ip`*:: -+ --- -IP addess of the destination. Can be one of multiple IPv4 or IPv6 addresses. - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.port`*:: -+ --- -Port of the destination. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== user_agent - -The user_agent fields normally come from a browser request. They often show up in web service logs coming from the parsed user agent string. - - - -*`user_agent.original`*:: -+ --- -Unparsed version of the user_agent. - - -type: keyword - -example: Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.original.text`*:: -+ --- -Software agent acting in behalf of a user, eg. a web browser / OS combination. - - -type: text - --- - -*`user_agent.name`*:: -+ --- -Name of the user agent. - - -type: keyword - -example: Safari - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.version`*:: -+ --- -Version of the user agent. - - -type: keyword - -example: 12.0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== device - -Information concerning the device. - - - -*`user_agent.device.name`*:: -+ --- -Name of the device. 
- - -type: keyword - -example: iPhone - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== os - -The OS fields contain information about the operating system. - - - -*`user_agent.os.platform`*:: -+ --- -Operating system platform (such centos, ubuntu, windows). - - -type: keyword - -example: darwin - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.name`*:: -+ --- -Operating system name, without the version. - - -type: keyword - -example: Mac OS X - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.full`*:: -+ --- -Operating system name, including the version or code name. - - -type: keyword - -example: Mac OS Mojave - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.family`*:: -+ --- -OS family (such as redhat, debian, freebsd, windows). - - -type: keyword - -example: debian - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.version`*:: -+ --- -Operating system version as a raw string. - - -type: keyword - -example: 10.14.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.kernel`*:: -+ --- -Operating system kernel version as a raw string. - - -type: keyword - -example: 4.4.0-112-generic - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== cloud - -Cloud metadata reported by agents - - - - -*`cloud.account.id`*:: -+ --- -Cloud account ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.account.name`*:: -+ --- -Cloud account name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.availability_zone`*:: -+ --- -Cloud availability zone name - -type: keyword - -example: us-east1-a - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.instance.id`*:: -+ --- -Cloud instance/machine ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.instance.name`*:: -+ --- -Cloud instance/machine name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.machine.type`*:: -+ --- -Cloud instance/machine type - -type: keyword - -example: t2.medium - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.project.id`*:: -+ --- -Cloud project ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.project.name`*:: -+ --- -Cloud project name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.provider`*:: -+ --- -Cloud provider name - -type: keyword - -example: gcp - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.region`*:: -+ --- -Cloud region name - -type: keyword - -example: us-east1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.service.name`*:: -+ --- -Cloud service name, intended to distinguish services running on different platforms within a provider. - - -type: keyword - --- - - -*`event.outcome`*:: -+ --- -`event.outcome` simply denotes whether the event represents a success or a failure from the perspective of the entity that produced the event. - - -type: keyword - -example: success - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`child.id`*:: -+ --- -The ID(s) of the child event(s). - - -type: keyword - --- - - -*`span.type`*:: -+ --- -Keyword of specific relevance in the service's domain (eg: 'db.postgresql.query', 'template.erb', 'cache', etc). - - -type: keyword - --- - -*`span.subtype`*:: -+ --- -A further sub-division of the type (e.g. postgresql, elasticsearch) - - -type: keyword - --- - -*`span.id`*:: -+ --- -The ID of the span stored as hex encoded string. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`span.name`*:: -+ --- -Generic designation of a span in the scope of a transaction. 
- - -type: keyword - --- - -*`span.action`*:: -+ --- -The specific kind of event within the sub-type represented by the span (e.g. query, connect) - - -type: keyword - --- - - -*`span.start.us`*:: -+ --- -Offset relative to the transaction's timestamp identifying the start of the span, in microseconds. - - -type: long - --- - - -*`span.duration.us`*:: -+ --- -Duration of the span, in microseconds. - - -type: long - --- - -*`span.sync`*:: -+ --- -Indicates whether the span was executed synchronously or asynchronously. - - -type: boolean - --- - - -*`span.db.link`*:: -+ --- -Database link. - - -type: keyword - --- - -*`span.db.rows_affected`*:: -+ --- -Number of rows affected by the database statement. - - -type: long - --- - - -[float] -=== service - -Destination service context - - -*`span.destination.service.type`*:: -+ --- -Type of the destination service (e.g. 'db', 'elasticsearch'). Should typically be the same as span.type. DEPRECATED: this field will be removed in a future release - - -type: keyword - --- - -*`span.destination.service.name`*:: -+ --- -Identifier for the destination service (e.g. 'http://elastic.co', 'elasticsearch', 'rabbitmq') DEPRECATED: this field will be removed in a future release - - -type: keyword - --- - -*`span.destination.service.resource`*:: -+ --- -Identifier for the destination service resource being operated on (e.g. 'http://elastic.co:80', 'elasticsearch', 'rabbitmq/queue_name') - - -type: keyword - --- - - - -*`span.message.queue.name`*:: -+ --- -Name of the message queue or topic where the message is published or received. - - -type: keyword - --- - - -*`span.message.age.ms`*:: -+ --- -Age of a message in milliseconds. - - -type: long - --- - - -*`span.composite.count`*:: -+ --- -Number of compressed spans the composite span represents. - - -type: long - --- - - -*`span.composite.sum.us`*:: -+ --- -Sum of the durations of the compressed spans, in microseconds. - - -type: long - --- - -*`span.composite.compression_strategy`*:: -+ --- -The compression strategy that was used. - - -type: keyword - --- - -[[exported-fields-apm-span-metrics-xpack]] -== APM Span Metrics fields - -APM span metrics are used for showing rate of requests and latency between instrumented services. - - - -*`metricset.period`*:: -+ --- -Current data collection period for this event in milliseconds. - -type: long - --- - - - -*`span.destination.service.response_time.count`*:: -+ --- -Number of aggregated outgoing requests. - -type: long - --- - -*`span.destination.service.response_time.sum.us`*:: -+ --- -Aggregated duration of outgoing requests, in microseconds. - -type: long - --- - -[[exported-fields-apm-transaction]] -== APM Transaction fields - -Transaction-specific data for APM - - -*`processor.name`*:: -+ --- -Processor name. - -type: keyword - --- - -*`processor.event`*:: -+ --- -Processor event. - -type: keyword - --- - - -*`timestamp.us`*:: -+ --- -Timestamp of the event in microseconds since Unix epoch. - - -type: long - --- - -[float] -=== url - -A complete Url, with scheme, host and path. - - - -*`url.scheme`*:: -+ --- -The protocol of the request, e.g. "https:". - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.full`*:: -+ --- -The full, possibly agent-assembled URL of the request, e.g https://example.com:443/search?q=elasticsearch#top. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.domain`*:: -+ --- -The hostname of the request, e.g. "example.com". - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. 
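The `span.composite.*` fields listed earlier in this section summarize span compression: rather than reporting many short, similar spans individually, an agent can fold them into a single composite span. The sketch below shows how those three fields could be derived from a run of compressible spans. The `exact_match`/`same_kind` strategy names follow the agent span-compression behavior, but treat the code as illustrative only — real agents compress incrementally and apply duration thresholds:

[source,python]
----
from dataclasses import dataclass

@dataclass
class Span:
    name: str
    duration_us: int

def composite_fields(spans):
    """Summarize a run of compressible spans into composite-span fields.

    A sketch of the idea only, not any agent's implementation.
    """
    strategy = "exact_match" if len({s.name for s in spans}) == 1 else "same_kind"
    return {
        "span.composite.count": len(spans),
        "span.composite.sum.us": sum(s.duration_us for s in spans),
        "span.composite.compression_strategy": strategy,
    }

# Two repetitions of the identical query compress with the "exact_match" strategy.
print(composite_fields([Span("SELECT FROM users", 120), Span("SELECT FROM users", 95)]))
----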
- --- - -*`url.port`*:: -+ --- -The port of the request, e.g. 443. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.path`*:: -+ --- -The path of the request, e.g. "/search". - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.query`*:: -+ --- -The query string of the request, e.g. "q=elasticsearch". - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.fragment`*:: -+ --- -A fragment specifying a location in a web page , e.g. "top". - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`http.version`*:: -+ --- -The http version of the request leading to this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`http.request.method`*:: -+ --- -The http method of the request leading to this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`http.request.headers`*:: -+ --- -The canonical headers of the monitored HTTP request. - - -type: object - -Object is not enabled. - --- - -*`http.request.referrer`*:: -+ --- -Referrer for this HTTP request. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`http.response.status_code`*:: -+ --- -The status code of the HTTP response. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`http.response.finished`*:: -+ --- -Used by the Node agent to indicate when in the response life cycle an error has occurred. - - -type: boolean - --- - -*`http.response.headers`*:: -+ --- -The canonical headers of the monitored HTTP response. - - -type: object - -Object is not enabled. - --- - -*`labels`*:: -+ --- -A flat mapping of user-defined labels with string, boolean or number values. - - -type: object - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== faas - -Function as a service fields. - - - -*`faas.execution`*:: -+ --- -Request ID of the function invocation. - - -type: keyword - --- - -*`faas.coldstart`*:: -+ --- -Boolean indicating whether the function invocation was a coldstart or not. - - -type: boolean - --- - -*`faas.trigger.type`*:: -+ --- -The trigger type. - - -type: keyword - --- - -*`faas.trigger.request_id`*:: -+ --- -The ID of the origin trigger request. - - -type: keyword - --- - -[float] -=== service - -Service fields. - - - -*`service.id`*:: -+ --- -Immutable id of the service emitting this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.name`*:: -+ --- -Immutable name of the service emitting this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.version`*:: -+ --- -Version of the service emitting this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.environment`*:: -+ --- -Service environment. - - -type: keyword - --- - - -*`service.node.name`*:: -+ --- -Unique meaningful name of the service node. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`service.language.name`*:: -+ --- -Name of the programming language used. - - -type: keyword - --- - -*`service.language.version`*:: -+ --- -Version of the programming language used. - - -type: keyword - --- - - -*`service.runtime.name`*:: -+ --- -Name of the runtime used. - - -type: keyword - --- - -*`service.runtime.version`*:: -+ --- -Version of the runtime used. - - -type: keyword - --- - - -*`service.framework.name`*:: -+ --- -Name of the framework used. - - -type: keyword - --- - -*`service.framework.version`*:: -+ --- -Version of the framework used. 
- - -type: keyword - --- - - -*`service.origin.id`*:: -+ --- -Immutable id of the service emitting this event. - - -type: keyword - --- - -*`service.origin.name`*:: -+ --- -Immutable name of the service emitting this event. - - -type: keyword - --- - -*`service.origin.version`*:: -+ --- -The version of the service the data was collected from. - - -type: keyword - --- - - -*`session.id`*:: -+ --- -The ID of the session to which the event belongs. - - -type: keyword - --- - -*`session.sequence`*:: -+ --- -The sequence number of the event within the session to which the event belongs. - - -type: long - --- - - - -*`transaction.duration.us`*:: -+ --- -Total duration of this transaction, in microseconds. - - -type: long - --- - -*`transaction.result`*:: -+ --- -The result of the transaction. HTTP status code for HTTP-related transactions. - - -type: keyword - --- - -*`transaction.marks`*:: -+ --- -A user-defined mapping of groups of marks in milliseconds. - - -type: object - --- - -*`transaction.marks.*.*`*:: -+ --- -A user-defined mapping of groups of marks in milliseconds. - - -type: object - --- - - -*`transaction.experience.cls`*:: -+ --- -The Cumulative Layout Shift metric - -type: scaled_float - --- - -*`transaction.experience.fid`*:: -+ --- -The First Input Delay metric - -type: scaled_float - --- - -*`transaction.experience.tbt`*:: -+ --- -The Total Blocking Time metric - -type: scaled_float - --- - -[float] -=== longtask - -Longtask duration/count metrics - - -*`transaction.experience.longtask.count`*:: -+ --- -The total number of longtasks - -type: long - --- - -*`transaction.experience.longtask.sum`*:: -+ --- -The sum of longtask durations - -type: scaled_float - --- - -*`transaction.experience.longtask.max`*:: -+ --- -The max longtask duration - -type: scaled_float - --- - - -*`transaction.span_count.dropped`*:: -+ --- -The total number of dropped spans for this transaction. - -type: long - --- - - - -*`transaction.message.queue.name`*:: -+ --- -Name of the message queue or topic where the message is published or received. - - -type: keyword - --- - - -*`transaction.message.age.ms`*:: -+ --- -Age of a message in milliseconds. - - -type: long - --- - - -*`trace.id`*:: -+ --- -The ID of the trace to which the event belongs. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`parent.id`*:: -+ --- -The ID of the parent event. - - -type: keyword - --- - - -*`agent.name`*:: -+ --- -Name of the agent used. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.version`*:: -+ --- -Version of the agent used. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.ephemeral_id`*:: -+ --- -The Ephemeral ID identifies a running process. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== container - -Container fields are used for meta information about the specific container that is the source of information. These fields help correlate data based on containers from any runtime. - - - -*`container.id`*:: -+ --- -Unique container id. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field.
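As the `transaction.marks.*.*` mapping above suggests, `transaction.marks` is a user-defined two-level mapping: named groups of named marks, each value a millisecond offset from the start of the transaction. A hypothetical payload — group and mark names here are examples (the RUM agent, for instance, reports navigation-timing marks this way), not a fixed schema:

[source,python]
----
# Hypothetical `transaction.marks` value: {group: {mark: offset_in_ms}}.
marks = {
    "navigationTiming": {
        "domInteractive": 117,
        "domComplete": 250,
    },
    "agent": {
        "firstContentfulPaint": 132,
    },
}
----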
- --- - -[float] -=== kubernetes - -Kubernetes metadata reported by agents - - - -*`kubernetes.namespace`*:: -+ --- -Kubernetes namespace - - -type: keyword - --- - - -*`kubernetes.node.name`*:: -+ --- -Kubernetes node name - - -type: keyword - --- - - -*`kubernetes.pod.name`*:: -+ --- -Kubernetes pod name - - -type: keyword - --- - -*`kubernetes.pod.uid`*:: -+ --- -Kubernetes Pod UID - - -type: keyword - --- - -[float] -=== network - -Optional network fields - - - -[float] -=== connection - -Network connection details - - - -*`network.connection.type`*:: -+ --- -Network connection type, eg. "wifi", "cell" - - -type: keyword - --- - -*`network.connection.subtype`*:: -+ --- -Detailed network connection sub-type, e.g. "LTE", "CDMA" - - -type: keyword - --- - -[float] -=== carrier - -Network operator - - - -*`network.carrier.name`*:: -+ --- -Carrier name, eg. Vodafone, T-Mobile, etc. - - -type: keyword - --- - -*`network.carrier.mcc`*:: -+ --- -Mobile country code - - -type: keyword - --- - -*`network.carrier.mnc`*:: -+ --- -Mobile network code - - -type: keyword - --- - -*`network.carrier.icc`*:: -+ --- -ISO country code, eg. US - - -type: keyword - --- - -[float] -=== host - -Optional host fields. - - - -*`host.architecture`*:: -+ --- -The architecture of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.hostname`*:: -+ --- -The hostname of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.name`*:: -+ --- -Name of the host the event was recorded on. It can contain same information as host.hostname or a name specified by the user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.ip`*:: -+ --- -IP of the host that records the event. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== os - -The OS fields contain information about the operating system. - - - -*`host.os.platform`*:: -+ --- -The platform of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== process - -Information pertaining to the running process where the data was collected - - - -*`process.args`*:: -+ --- -Process arguments. May be filtered to protect sensitive information. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pid`*:: -+ --- -Numeric process ID of the service process. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.ppid`*:: -+ --- -Numeric ID of the service's parent process. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.title`*:: -+ --- -Service process title. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`observer.listening`*:: -+ --- -Address the server is listening on. - - -type: keyword - --- - -*`observer.hostname`*:: -+ --- -Hostname of the APM Server. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.version`*:: -+ --- -APM Server version. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.type`*:: -+ --- -The type will be set to `apm-server`. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.id`*:: -+ --- -Unique identifier of the APM Server. - - -type: keyword - --- - -*`observer.ephemeral_id`*:: -+ --- -Ephemeral identifier of the APM Server. - - -type: keyword - --- - - -*`user.domain`*:: -+ --- -The domain of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`user.name`*:: -+ --- -The username of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.id`*:: -+ --- -Identifier of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.email`*:: -+ --- -Email of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`client.domain`*:: -+ --- -Client domain. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.ip`*:: -+ --- -IP address of the client of a recorded event. This is typically obtained from a request's X-Forwarded-For or the X-Real-IP header or falls back to a given configuration for remote address. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.port`*:: -+ --- -Port of the client. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`source.domain`*:: -+ --- -Source domain. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.ip`*:: -+ --- -IP address of the source of a recorded event. This is typically obtained from a request's X-Forwarded-For or the X-Real-IP header or falls back to a given configuration for remote address. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.port`*:: -+ --- -Port of the source. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== destination - -Destination fields describe details about the destination of a packet/event. -Destination fields are usually populated in conjunction with source fields. - - -*`destination.address`*:: -+ --- -Some event destination addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.ip`*:: -+ --- -IP addess of the destination. Can be one of multiple IPv4 or IPv6 addresses. - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.port`*:: -+ --- -Port of the destination. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== user_agent - -The user_agent fields normally come from a browser request. They often show up in web service logs coming from the parsed user agent string. - - - -*`user_agent.original`*:: -+ --- -Unparsed version of the user_agent. - - -type: keyword - -example: Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.original.text`*:: -+ --- -Software agent acting in behalf of a user, eg. a web browser / OS combination. - - -type: text - --- - -*`user_agent.name`*:: -+ --- -Name of the user agent. - - -type: keyword - -example: Safari - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.version`*:: -+ --- -Version of the user agent. - - -type: keyword - -example: 12.0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== device - -Information concerning the device. - - - -*`user_agent.device.name`*:: -+ --- -Name of the device. - - -type: keyword - -example: iPhone - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== os - -The OS fields contain information about the operating system. - - - -*`user_agent.os.platform`*:: -+ --- -Operating system platform (such centos, ubuntu, windows). - - -type: keyword - -example: darwin - -{yes-icon} {ecs-ref}[ECS] field. 
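The `destination.address` guidance above (shared by `client.address` and `source.address`) — store the raw value, then duplicate it into `.ip` or `.domain` depending on what it is — is mechanical enough to sketch. A minimal example using only the standard library; unix sockets and `host:port` forms would need extra handling:

[source,python]
----
import ipaddress

def split_address(raw):
    """Keep the raw value in `.address` and duplicate it to `.ip` or
    `.domain`, per the field descriptions above. A sketch only."""
    fields = {"destination.address": raw}
    try:
        ipaddress.ip_address(raw)  # raises ValueError if not an IPv4/IPv6 literal
        fields["destination.ip"] = raw
    except ValueError:
        fields["destination.domain"] = raw
    return fields

print(split_address("10.12.0.3"))   # duplicated into destination.ip
print(split_address("elastic.co"))  # duplicated into destination.domain
----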
- --- - -*`user_agent.os.name`*:: -+ --- -Operating system name, without the version. - - -type: keyword - -example: Mac OS X - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.full`*:: -+ --- -Operating system name, including the version or code name. - - -type: keyword - -example: Mac OS Mojave - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.family`*:: -+ --- -OS family (such as redhat, debian, freebsd, windows). - - -type: keyword - -example: debian - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.version`*:: -+ --- -Operating system version as a raw string. - - -type: keyword - -example: 10.14.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.kernel`*:: -+ --- -Operating system kernel version as a raw string. - - -type: keyword - -example: 4.4.0-112-generic - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== cloud - -Cloud metadata reported by agents - - - - -*`cloud.account.id`*:: -+ --- -Cloud account ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.account.name`*:: -+ --- -Cloud account name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.availability_zone`*:: -+ --- -Cloud availability zone name - -type: keyword - -example: us-east1-a - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.instance.id`*:: -+ --- -Cloud instance/machine ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.instance.name`*:: -+ --- -Cloud instance/machine name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.machine.type`*:: -+ --- -Cloud instance/machine type - -type: keyword - -example: t2.medium - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.origin.account.id`*:: -+ --- -The cloud account or organization id used to identify different entities in a multi-tenant environment. - - -type: keyword - --- - -*`cloud.origin.provider`*:: -+ --- -Name of the cloud provider. - - -type: keyword - --- - -*`cloud.origin.region`*:: -+ --- -Region in which this host, resource, or service is located. - - -type: keyword - --- - -*`cloud.origin.service.name`*:: -+ --- -The cloud service name is intended to distinguish services running on different platforms within a provider. - - -type: keyword - --- - - -*`cloud.project.id`*:: -+ --- -Cloud project ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.project.name`*:: -+ --- -Cloud project name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.provider`*:: -+ --- -Cloud provider name - -type: keyword - -example: gcp - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.region`*:: -+ --- -Cloud region name - -type: keyword - -example: us-east1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.service.name`*:: -+ --- -Cloud service name, intended to distinguish services running on different platforms within a provider. - - -type: keyword - --- - - -*`event.outcome`*:: -+ --- -`event.outcome` simply denotes whether the event represents a success or a failure from the perspective of the entity that produced the event. - - -type: keyword - -example: success - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[[exported-fields-apm-transaction-metrics]] -== APM Transaction Metrics fields - -APM transaction metrics, and transaction metrics-specific properties, such as transaction.root. - - - -*`processor.name`*:: -+ --- -Processor name. - -type: keyword - --- - -*`processor.event`*:: -+ --- -Processor event. 
- -type: keyword - --- - -*`timeseries.instance`*:: -+ --- -Time series instance ID - -type: keyword - --- - - -*`timestamp.us`*:: -+ --- -Timestamp of the event in microseconds since Unix epoch. - - -type: long - --- - -*`labels`*:: -+ --- -A flat mapping of user-defined labels with string, boolean or number values. - - -type: object - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`metricset.name`*:: -+ --- -Name of the set of metrics. - - -type: keyword - -example: transaction - --- - -[float] -=== service - -Service fields. - - - -*`service.name`*:: -+ --- -Immutable name of the service emitting this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.version`*:: -+ --- -Version of the service emitting this event. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.environment`*:: -+ --- -Service environment. - - -type: keyword - --- - - -*`service.node.name`*:: -+ --- -Unique meaningful name of the service node. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`service.language.name`*:: -+ --- -Name of the programming language used. - - -type: keyword - --- - -*`service.language.version`*:: -+ --- -Version of the programming language used. - - -type: keyword - --- - - -*`service.runtime.name`*:: -+ --- -Name of the runtime used. - - -type: keyword - --- - -*`service.runtime.version`*:: -+ --- -Version of the runtime used. - - -type: keyword - --- - - -*`service.framework.name`*:: -+ --- -Name of the framework used. - - -type: keyword - --- - -*`service.framework.version`*:: -+ --- -Version of the framework used. - - -type: keyword - --- - - -*`transaction.id`*:: -+ --- -The transaction ID. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`transaction.sampled`*:: -+ --- -Transactions that are 'sampled' will include all available information. Transactions that are not sampled will not have spans or context. - - -type: boolean - --- - -*`transaction.type`*:: -+ --- -Keyword of specific relevance in the service's domain (eg. 'request', 'backgroundjob', etc) - - -type: keyword - --- - -*`transaction.name`*:: -+ --- -Generic designation of a transaction in the scope of a single service (eg. 'GET /users/:id'). - - -type: keyword - --- - -*`transaction.name.text`*:: -+ --- -type: text - --- - -[float] -=== self_time - -Portion of the transaction's duration where no direct child was running - - - -*`transaction.self_time.count`*:: -+ --- -Number of aggregated transactions. - -type: long - --- - - -*`transaction.self_time.sum.us`*:: -+ --- -Aggregated transaction duration, excluding the time periods where a direct child was running, in microseconds. - - -type: long - --- - - -*`transaction.root`*:: -+ --- -Identifies metrics for root transactions. This can be used for calculating metrics for traces. - - -type: boolean - --- - -*`transaction.result`*:: -+ --- -The result of the transaction. HTTP status code for HTTP-related transactions. - - -type: keyword - --- - - -*`span.type`*:: -+ --- -Keyword of specific relevance in the service's domain (eg: 'db.postgresql.query', 'template.erb', 'cache', etc). - - -type: keyword - --- - -*`span.subtype`*:: -+ --- -A further sub-division of the type (e.g. postgresql, elasticsearch) - - -type: keyword - --- - -[float] -=== self_time - -Portion of the span's duration where no direct child was running - - - -*`span.self_time.count`*:: -+ --- -Number of aggregated spans. 
- -type: long - --- - - -*`span.self_time.sum.us`*:: -+ --- -Aggregated span duration, excluding the time periods where a direct child was running, in microseconds. - - -type: long - --- - - -[float] -=== service - -Destination service context - - -*`span.destination.service.resource`*:: -+ --- -Identifier for the destination service resource being operated on (e.g. 'http://elastic.co:80', 'elasticsearch', 'rabbitmq/queue_name') - - -type: keyword - --- - - -*`agent.name`*:: -+ --- -Name of the agent used. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.version`*:: -+ --- -Version of the agent used. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.ephemeral_id`*:: -+ --- -The Ephemeral ID identifies a running process. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== container - -Container fields are used for meta information about the specific container that is the source of information. These fields help correlate data based containers from any runtime. - - - -*`container.id`*:: -+ --- -Unique container id. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== kubernetes - -Kubernetes metadata reported by agents - - - -*`kubernetes.namespace`*:: -+ --- -Kubernetes namespace - - -type: keyword - --- - - -*`kubernetes.node.name`*:: -+ --- -Kubernetes node name - - -type: keyword - --- - - -*`kubernetes.pod.name`*:: -+ --- -Kubernetes pod name - - -type: keyword - --- - -*`kubernetes.pod.uid`*:: -+ --- -Kubernetes Pod UID - - -type: keyword - --- - -[float] -=== network - -Optional network fields - - - -[float] -=== connection - -Network connection details - - - -*`network.connection.type`*:: -+ --- -Network connection type, eg. "wifi", "cell" - - -type: keyword - --- - -*`network.connection.subtype`*:: -+ --- -Detailed network connection sub-type, e.g. "LTE", "CDMA" - - -type: keyword - --- - -[float] -=== carrier - -Network operator - - - -*`network.carrier.name`*:: -+ --- -Carrier name, eg. Vodafone, T-Mobile, etc. - - -type: keyword - --- - -*`network.carrier.mcc`*:: -+ --- -Mobile country code - - -type: keyword - --- - -*`network.carrier.mnc`*:: -+ --- -Mobile network code - - -type: keyword - --- - -*`network.carrier.icc`*:: -+ --- -ISO country code, eg. US - - -type: keyword - --- - -[float] -=== host - -Optional host fields. - - - -*`host.architecture`*:: -+ --- -The architecture of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.hostname`*:: -+ --- -The hostname of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.name`*:: -+ --- -Name of the host the event was recorded on. It can contain same information as host.hostname or a name specified by the user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.ip`*:: -+ --- -IP of the host that records the event. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== os - -The OS fields contain information about the operating system. - - - -*`host.os.platform`*:: -+ --- -The platform of the host the event was recorded on. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== process - -Information pertaining to the running process where the data was collected - - - -*`process.args`*:: -+ --- -Process arguments. May be filtered to protect sensitive information. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. 
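The `transaction.self_time.sum.us` and `span.self_time.sum.us` fields earlier in this section exclude any time during which a direct child was running. A sketch of that computation, assuming children are supplied as `(start_us, duration_us)` pairs; overlapping children are merged so shared time is only subtracted once:

[source,python]
----
def self_time_us(start_us, duration_us, children):
    """Parent duration minus the time covered by its direct children.

    Children are (start_us, duration_us) pairs; intervals are clipped to
    the parent and merged. Illustrates the `*.self_time` idea only.
    """
    end = start_us + duration_us
    covered, cursor = 0, start_us
    for s, d in sorted(children):
        s, e = max(s, cursor), min(s + d, end)  # clip and skip already-counted time
        if s < e:
            covered += e - s
            cursor = e
    return duration_us - covered

# Children at 100..300us and 250..500us overlap; self time is 1000 - 400 = 600us.
print(self_time_us(0, 1000, [(100, 200), (250, 250)]))
----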
- --- - -*`process.pid`*:: -+ --- -Numeric process ID of the service process. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.ppid`*:: -+ --- -Numeric ID of the service's parent process. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.title`*:: -+ --- -Service process title. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`observer.listening`*:: -+ --- -Address the server is listening on. - - -type: keyword - --- - -*`observer.hostname`*:: -+ --- -Hostname of the APM Server. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.version`*:: -+ --- -APM Server version. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.type`*:: -+ --- -The type will be set to `apm-server`. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.id`*:: -+ --- -Unique identifier of the APM Server. - - -type: keyword - --- - -*`observer.ephemeral_id`*:: -+ --- -Ephemeral identifier of the APM Server. - - -type: keyword - --- - - -*`user.name`*:: -+ --- -The username of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.id`*:: -+ --- -Identifier of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.email`*:: -+ --- -Email of the logged in user. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`client.domain`*:: -+ --- -Client domain. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.ip`*:: -+ --- -IP address of the client of a recorded event. This is typically obtained from a request's X-Forwarded-For or the X-Real-IP header or falls back to a given configuration for remote address. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.port`*:: -+ --- -Port of the client. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`source.domain`*:: -+ --- -Source domain. - - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.ip`*:: -+ --- -IP address of the source of a recorded event. This is typically obtained from a request's X-Forwarded-For or the X-Real-IP header or falls back to a given configuration for remote address. - - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.port`*:: -+ --- -Port of the source. - - -type: long - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== destination - -Destination fields describe details about the destination of a packet/event. -Destination fields are usually populated in conjunction with source fields. - - -*`destination.address`*:: -+ --- -Some event destination addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.ip`*:: -+ --- -IP addess of the destination. Can be one of multiple IPv4 or IPv6 addresses. - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.port`*:: -+ --- -Port of the destination. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== user_agent - -The user_agent fields normally come from a browser request. They often show up in web service logs coming from the parsed user agent string. - - - -*`user_agent.original`*:: -+ --- -Unparsed version of the user_agent. 
- - -type: keyword - -example: Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.original.text`*:: -+ --- -Software agent acting in behalf of a user, eg. a web browser / OS combination. - - -type: text - --- - -*`user_agent.name`*:: -+ --- -Name of the user agent. - - -type: keyword - -example: Safari - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.version`*:: -+ --- -Version of the user agent. - - -type: keyword - -example: 12.0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== device - -Information concerning the device. - - - -*`user_agent.device.name`*:: -+ --- -Name of the device. - - -type: keyword - -example: iPhone - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== os - -The OS fields contain information about the operating system. - - - -*`user_agent.os.platform`*:: -+ --- -Operating system platform (such centos, ubuntu, windows). - - -type: keyword - -example: darwin - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.name`*:: -+ --- -Operating system name, without the version. - - -type: keyword - -example: Mac OS X - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.full`*:: -+ --- -Operating system name, including the version or code name. - - -type: keyword - -example: Mac OS Mojave - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.family`*:: -+ --- -OS family (such as redhat, debian, freebsd, windows). - - -type: keyword - -example: debian - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.version`*:: -+ --- -Operating system version as a raw string. - - -type: keyword - -example: 10.14.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.kernel`*:: -+ --- -Operating system kernel version as a raw string. - - -type: keyword - -example: 4.4.0-112-generic - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== cloud - -Cloud metadata reported by agents - - - - -*`cloud.account.id`*:: -+ --- -Cloud account ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.account.name`*:: -+ --- -Cloud account name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.availability_zone`*:: -+ --- -Cloud availability zone name - -type: keyword - -example: us-east1-a - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.instance.id`*:: -+ --- -Cloud instance/machine ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.instance.name`*:: -+ --- -Cloud instance/machine name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.machine.type`*:: -+ --- -Cloud instance/machine type - -type: keyword - -example: t2.medium - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.project.id`*:: -+ --- -Cloud project ID - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.project.name`*:: -+ --- -Cloud project name - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.provider`*:: -+ --- -Cloud provider name - -type: keyword - -example: gcp - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.region`*:: -+ --- -Cloud region name - -type: keyword - -example: us-east1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - - -*`cloud.service.name`*:: -+ --- -Cloud service name, intended to distinguish services running on different platforms within a provider. 
- - -type: keyword - --- - - -*`event.outcome`*:: -+ --- -`event.outcome` simply denotes whether the event represents a success or a failure from the perspective of the entity that produced the event. - - -type: keyword - -example: success - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[[exported-fields-apm-transaction-metrics-xpack]] -== APM Transaction Metrics fields - -APM transaction metrics, and transaction metrics-specific properties, requiring licensed features such as the histogram field type. - - - - - -*`transaction.duration.histogram`*:: -+ --- -Pre-aggregated histogram of transaction durations. - - -type: histogram - --- - -[[exported-fields-beat-common]] -== Beat fields - -Contains common beat fields available in all event types. - - - -*`agent.hostname`*:: -+ --- -Deprecated - use agent.name or agent.id to identify an agent. - - -type: alias - -alias to: agent.name - --- - -*`beat.timezone`*:: -+ --- -type: alias - -alias to: event.timezone - --- - -*`fields`*:: -+ --- -Contains user configurable fields. - - -type: object - --- - -*`beat.name`*:: -+ --- -type: alias - -alias to: host.name - --- - -*`beat.hostname`*:: -+ --- -type: alias - -alias to: agent.name - --- - -*`timeseries.instance`*:: -+ --- -Time series instance id - -type: keyword - --- - -[[exported-fields-cloud]] -== Cloud provider metadata fields - -Metadata from cloud providers added by the add_cloud_metadata processor. - - - -*`cloud.image.id`*:: -+ --- -Image ID for the cloud instance. - - -example: ami-abcd1234 - --- - -*`meta.cloud.provider`*:: -+ --- -type: alias - -alias to: cloud.provider - --- - -*`meta.cloud.instance_id`*:: -+ --- -type: alias - -alias to: cloud.instance.id - --- - -*`meta.cloud.instance_name`*:: -+ --- -type: alias - -alias to: cloud.instance.name - --- - -*`meta.cloud.machine_type`*:: -+ --- -type: alias - -alias to: cloud.machine.type - --- - -*`meta.cloud.availability_zone`*:: -+ --- -type: alias - -alias to: cloud.availability_zone - --- - -*`meta.cloud.project_id`*:: -+ --- -type: alias - -alias to: cloud.project.id - --- - -*`meta.cloud.region`*:: -+ --- -type: alias - -alias to: cloud.region - --- - -[[exported-fields-docker-processor]] -== Docker fields - -Docker stats collected from Docker. - - - - -*`docker.container.id`*:: -+ --- -type: alias - -alias to: container.id - --- - -*`docker.container.image`*:: -+ --- -type: alias - -alias to: container.image.name - --- - -*`docker.container.name`*:: -+ --- -type: alias - -alias to: container.name - --- - -*`docker.container.labels`*:: -+ --- -Image labels. - - -type: object - --- - -[[exported-fields-ecs]] -== ECS fields - - -This section defines Elastic Common Schema (ECS) fields—a common set of fields -to be used when storing event data in {es}. - -This is an exhaustive list, and fields listed here are not necessarily used by {beatname_uc}. -The goal of ECS is to enable and encourage users of {es} to normalize their event data, -so that they can better analyze, visualize, and correlate the data represented in their events. - -See the {ecs-ref}[ECS reference] for more information. - -*`@timestamp`*:: -+ --- -Date/time when the event originated. -This is the date/time extracted from the event, typically representing when the event was generated by the source. -If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline. -Required field for all events. 
- -type: date - -example: 2016-05-23T08:05:34.853Z - -required: True - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`labels`*:: -+ --- -Custom key/value pairs. -Can be used to add meta information to events. Should not contain nested objects. All values are stored as keyword. -Example: `docker` and `k8s` labels. - -type: object - -example: {"application": "foo-bar", "env": "production"} - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`message`*:: -+ --- -For log events the message field contains the log message, optimized for viewing in a log viewer. -For structured logs without an original message field, other fields can be concatenated to form a human-readable summary of the event. -If multiple messages exist, they can be combined into one message. - -type: match_only_text - -example: Hello World - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tags`*:: -+ --- -List of keywords used to tag each event. - -type: keyword - -example: ["production", "env2"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== agent - -The agent fields contain the data about the software entity, if any, that collects, detects, or observes events on a host, or takes measurements on a host. -Examples include Beats. Agents may also run on observers. ECS agent.* fields shall be populated with details of the agent running on the host or observer where the event happened or the measurement was taken. - - -*`agent.build.original`*:: -+ --- -Extended build information for the agent. -This field is intended to contain any build information that a data source may provide, no specific formatting is required. - -type: keyword - -example: metricbeat version 7.6.0 (amd64), libbeat 7.6.0 [6a23e8f8f30f5001ba344e4e54d8d9cb82cb107c built 2020-02-05 23:10:10 +0000 UTC] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.ephemeral_id`*:: -+ --- -Ephemeral identifier of this agent (if one exists). -This id normally changes across restarts, but `agent.id` does not. - -type: keyword - -example: 8a4f500f - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.id`*:: -+ --- -Unique identifier of this agent (if one exists). -Example: For Beats this would be beat.id. - -type: keyword - -example: 8a4f500d - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.name`*:: -+ --- -Custom name of the agent. -This is a name that can be given to an agent. This can be helpful if for example two Filebeat instances are running on the same host but a human readable separation is needed on which Filebeat instance data is coming from. -If no name is given, the name is often left empty. - -type: keyword - -example: foo - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.type`*:: -+ --- -Type of the agent. -The agent type always stays the same and should be given by the agent used. In case of Filebeat the agent would always be Filebeat also if two Filebeat instances are run on the same machine. - -type: keyword - -example: filebeat - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`agent.version`*:: -+ --- -Version of the agent. - -type: keyword - -example: 6.0.0-rc2 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== as - -An autonomous system (AS) is a collection of connected Internet Protocol (IP) routing prefixes under the control of one or more network operators on behalf of a single administrative entity or domain that presents a common, clearly defined routing policy to the internet. - - -*`as.number`*:: -+ --- -Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. 
- -type: long - -example: 15169 - --- - -*`as.organization.name`*:: -+ --- -Organization name. - -type: keyword - -example: Google LLC - --- - -*`as.organization.name.text`*:: -+ --- -type: match_only_text - --- - -[float] -=== client - -A client is defined as the initiator of a network connection for events regarding sessions, connections, or bidirectional flow records. -For TCP events, the client is the initiator of the TCP connection that sends the SYN packet(s). For other protocols, the client is generally the initiator or requestor in the network transaction. Some systems use the term "originator" to refer the client in TCP connections. The client fields describe details about the system acting as the client in the network event. Client fields are usually populated in conjunction with server fields. Client fields are generally not populated for packet-level events. -Client / server representations can add semantic context to an exchange, which is helpful to visualize the data in certain situations. If your context falls in that category, you should still ensure that source and destination are filled appropriately. - - -*`client.address`*:: -+ --- -Some event client addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. -Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.as.number`*:: -+ --- -Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. - -type: long - -example: 15169 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.as.organization.name`*:: -+ --- -Organization name. - -type: keyword - -example: Google LLC - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.as.organization.name.text`*:: -+ --- -type: match_only_text - --- - -*`client.bytes`*:: -+ --- -Bytes sent from the client to the server. - -type: long - -example: 184 - -format: bytes - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.domain`*:: -+ --- -Client domain. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.geo.city_name`*:: -+ --- -City name. - -type: keyword - -example: Montreal - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.geo.continent_code`*:: -+ --- -Two-letter code representing continent's name. - -type: keyword - -example: NA - --- - -*`client.geo.continent_name`*:: -+ --- -Name of the continent. - -type: keyword - -example: North America - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.geo.country_iso_code`*:: -+ --- -Country ISO code. - -type: keyword - -example: CA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.geo.country_name`*:: -+ --- -Country name. - -type: keyword - -example: Canada - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.geo.location`*:: -+ --- -Longitude and latitude. - -type: geo_point - -example: { "lon": -73.614830, "lat": 45.505918 } - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.geo.name`*:: -+ --- -User-defined description of a location, at the level of granularity they care about. -Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. -Not typically used in automated geolocation. - -type: keyword - -example: boston-dc - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.geo.postal_code`*:: -+ --- -Postal code associated with the location. 
-Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. - -type: keyword - -example: 94040 - --- - -*`client.geo.region_iso_code`*:: -+ --- -Region ISO code. - -type: keyword - -example: CA-QC - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.geo.region_name`*:: -+ --- -Region name. - -type: keyword - -example: Quebec - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.geo.timezone`*:: -+ --- -The time zone of the location, such as IANA time zone name. - -type: keyword - -example: America/Argentina/Buenos_Aires - --- - -*`client.ip`*:: -+ --- -IP address of the client (IPv4 or IPv6). - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.mac`*:: -+ --- -MAC address of the client. -The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. - -type: keyword - -example: 00-00-5E-00-53-23 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.nat.ip`*:: -+ --- -Translated IP of source based NAT sessions (e.g. internal client to internet). -Typically connections traversing load balancers, firewalls, or routers. - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.nat.port`*:: -+ --- -Translated port of source based NAT sessions (e.g. internal client to internet). -Typically connections traversing load balancers, firewalls, or routers. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.packets`*:: -+ --- -Packets sent from the client to the server. - -type: long - -example: 12 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.port`*:: -+ --- -Port of the client. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.registered_domain`*:: -+ --- -The highest registered client domain, stripped of the subdomain. -For example, the registered domain for "foo.example.com" is "example.com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". - -type: keyword - -example: example.com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.subdomain`*:: -+ --- -The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. -For example the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. - -type: keyword - -example: east - --- - -*`client.top_level_domain`*:: -+ --- -The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". - -type: keyword - -example: co.uk - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.user.domain`*:: -+ --- -Name of the directory the user is a member of. 
-For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.user.email`*:: -+ --- -User email address. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.user.full_name`*:: -+ --- -User's full name, if available. - -type: keyword - -example: Albert Einstein - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.user.full_name.text`*:: -+ --- -type: match_only_text - --- - -*`client.user.group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.user.group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.user.group.name`*:: -+ --- -Name of the group. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.user.hash`*:: -+ --- -Unique user hash to correlate information for a user in anonymized form. -Useful if `user.id` or `user.name` contain confidential information and cannot be used. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.user.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - -example: S-1-5-21-202424912787-2692429404-2351956786-1000 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.user.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: a.einstein - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`client.user.name.text`*:: -+ --- -type: match_only_text - --- - -*`client.user.roles`*:: -+ --- -Array of user roles at the time of the event. - -type: keyword - -example: ["kibana_admin", "reporting_user"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== cloud - -Fields related to the cloud or infrastructure the events are coming from. - - -*`cloud.account.id`*:: -+ --- -The cloud account or organization id used to identify different entities in a multi-tenant environment. -Examples: AWS account id, Google Cloud ORG Id, or other unique identifier. - -type: keyword - -example: 666777888999 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.account.name`*:: -+ --- -The cloud account name or alias used to identify different entities in a multi-tenant environment. -Examples: AWS account name, Google Cloud ORG display name. - -type: keyword - -example: elastic-dev - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.availability_zone`*:: -+ --- -Availability zone in which this host, resource, or service is located. - -type: keyword - -example: us-east-1c - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.instance.id`*:: -+ --- -Instance ID of the host machine. - -type: keyword - -example: i-1234567890abcdef0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.instance.name`*:: -+ --- -Instance name of the host machine. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.machine.type`*:: -+ --- -Machine type of the host machine. - -type: keyword - -example: t2.medium - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.project.id`*:: -+ --- -The cloud project identifier. -Examples: Google Cloud Project id, Azure Project id. - -type: keyword - -example: my-project - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.project.name`*:: -+ --- -The cloud project name. -Examples: Google Cloud Project name, Azure Project name. - -type: keyword - -example: my project - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.provider`*:: -+ --- -Name of the cloud provider. 
Example values are aws, azure, gcp, or digitalocean. - -type: keyword - -example: aws - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.region`*:: -+ --- -Region in which this host, resource, or service is located. - -type: keyword - -example: us-east-1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`cloud.service.name`*:: -+ --- -The cloud service name is intended to distinguish services running on different platforms within a provider, eg AWS EC2 vs Lambda, GCP GCE vs App Engine, Azure VM vs App Server. -Examples: app engine, app service, cloud run, fargate, lambda. - -type: keyword - -example: lambda - --- - -[float] -=== code_signature - -These fields contain information about binary code signatures. - - -*`code_signature.digest_algorithm`*:: -+ --- -The hashing algorithm used to sign the process. -This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. - -type: keyword - -example: sha256 - --- - -*`code_signature.exists`*:: -+ --- -Boolean to capture if a signature is present. - -type: boolean - -example: true - --- - -*`code_signature.signing_id`*:: -+ --- -The identifier used to sign the process. -This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. - -type: keyword - -example: com.apple.xpc.proxy - --- - -*`code_signature.status`*:: -+ --- -Additional information about the certificate status. -This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. - -type: keyword - -example: ERROR_UNTRUSTED_ROOT - --- - -*`code_signature.subject_name`*:: -+ --- -Subject name of the code signer - -type: keyword - -example: Microsoft Corporation - --- - -*`code_signature.team_id`*:: -+ --- -The team identifier used to sign the process. -This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. - -type: keyword - -example: EQHXZ8M8AV - --- - -*`code_signature.timestamp`*:: -+ --- -Date and time when the code signature was generated and signed. - -type: date - -example: 2021-01-01T12:10:30Z - --- - -*`code_signature.trusted`*:: -+ --- -Stores the trust status of the certificate chain. -Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. - -type: boolean - -example: true - --- - -*`code_signature.valid`*:: -+ --- -Boolean to capture if the digital signature is verified against the binary content. -Leave unpopulated if a certificate was unchecked. - -type: boolean - -example: true - --- - -[float] -=== container - -Container fields are used for meta information about the specific container that is the source of information. -These fields help correlate data based containers from any runtime. - - -*`container.id`*:: -+ --- -Unique container id. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`container.image.name`*:: -+ --- -Name of the image the container was built on. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`container.image.tag`*:: -+ --- -Container image tags. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`container.labels`*:: -+ --- -Image labels. - -type: object - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`container.name`*:: -+ --- -Container name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`container.runtime`*:: -+ --- -Runtime managing this container. - -type: keyword - -example: docker - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== data_stream - -The data_stream fields take part in defining the new data stream naming scheme. -In the new data stream naming scheme the values of the data stream fields combine to form the name of the actual data stream in the following manner: `{data_stream.type}-{data_stream.dataset}-{data_stream.namespace}`. This means the fields can only contain characters that are valid as part of names of data streams. More details about this can be found in this https://www.elastic.co/blog/an-introduction-to-the-elastic-data-stream-naming-scheme[blog post]. -An Elasticsearch data stream consists of one or more backing indices, and a data stream name forms part of the backing indices names. Due to this convention, data streams must also follow index naming restrictions. For example, data stream names cannot include `\`, `/`, `*`, `?`, `"`, `<`, `>`, `|`, ` ` (space character), `,`, or `#`. Please see the Elasticsearch reference for additional https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html#indices-create-api-path-params[restrictions]. - - -*`data_stream.dataset`*:: -+ --- -The field can contain anything that makes sense to signify the source of the data. -Examples include `nginx.access`, `prometheus`, `endpoint` etc. For data streams that otherwise fit, but that do not have a dataset set, we use the value "generic" for the dataset value. `event.dataset` should have the same value as `data_stream.dataset`. -Beyond the Elasticsearch data stream naming criteria noted above, the `dataset` value has additional restrictions: - * Must not contain `-` - * No longer than 100 characters - -type: constant_keyword - -example: nginx.access - --- - -*`data_stream.namespace`*:: -+ --- -A user-defined namespace. Namespaces are useful to allow grouping of data. -Many users already organize their indices this way, and the data stream naming scheme now provides this best practice as a default. Many users will populate this field with `default`. If no value is used, it falls back to `default`. -Beyond the Elasticsearch index naming criteria noted above, the `namespace` value has additional restrictions: - * Must not contain `-` - * No longer than 100 characters - -type: constant_keyword - -example: production - --- - -*`data_stream.type`*:: -+ --- -An overarching type for the data stream. -Currently allowed values are "logs" and "metrics". We expect to also add "traces" and "synthetics" in the near future. - -type: constant_keyword - -example: logs - --- - -[float] -=== destination - -Destination fields capture details about the receiver of a network exchange/packet. These fields are populated from a network event, packet, or other event containing details of a network transaction. -Destination fields are usually populated in conjunction with source fields. The source and destination fields are considered the baseline and should always be filled if an event contains source and destination details from a network transaction. If the event also contains identification of the client and server roles, then the client and server fields should also be populated. - - -*`destination.address`*:: -+ --- -Some event destination addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field.
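Stepping back to the data stream naming scheme described above: it is mechanical enough to sketch in code. A minimal Go illustration that checks only the two restrictions quoted for `dataset` and `namespace`; `buildDataStreamName` is a hypothetical helper, not code from this repository.

[source,go]
----
package main

import (
	"fmt"
	"strings"
)

// buildDataStreamName composes {data_stream.type}-{data_stream.dataset}-{data_stream.namespace}.
// It enforces only the restrictions quoted above (no '-', at most 100
// characters); the type value is assumed to be "logs" or "metrics".
func buildDataStreamName(typ, dataset, namespace string) (string, error) {
	for _, part := range []string{dataset, namespace} {
		if strings.Contains(part, "-") {
			return "", fmt.Errorf("%q must not contain '-'", part)
		}
		if len(part) > 100 {
			return "", fmt.Errorf("%q is longer than 100 characters", part)
		}
	}
	return fmt.Sprintf("%s-%s-%s", typ, dataset, namespace), nil
}

func main() {
	name, err := buildDataStreamName("logs", "nginx.access", "production")
	if err != nil {
		panic(err)
	}
	fmt.Println(name) // logs-nginx.access-production
}
----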
-Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.as.number`*:: -+ --- -Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. - -type: long - -example: 15169 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.as.organization.name`*:: -+ --- -Organization name. - -type: keyword - -example: Google LLC - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.as.organization.name.text`*:: -+ --- -type: match_only_text - --- - -*`destination.bytes`*:: -+ --- -Bytes sent from the destination to the source. - -type: long - -example: 184 - -format: bytes - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.domain`*:: -+ --- -Destination domain. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.geo.city_name`*:: -+ --- -City name. - -type: keyword - -example: Montreal - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.geo.continent_code`*:: -+ --- -Two-letter code representing continent's name. - -type: keyword - -example: NA - --- - -*`destination.geo.continent_name`*:: -+ --- -Name of the continent. - -type: keyword - -example: North America - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.geo.country_iso_code`*:: -+ --- -Country ISO code. - -type: keyword - -example: CA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.geo.country_name`*:: -+ --- -Country name. - -type: keyword - -example: Canada - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.geo.location`*:: -+ --- -Longitude and latitude. - -type: geo_point - -example: { "lon": -73.614830, "lat": 45.505918 } - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.geo.name`*:: -+ --- -User-defined description of a location, at the level of granularity they care about. -Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. -Not typically used in automated geolocation. - -type: keyword - -example: boston-dc - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.geo.postal_code`*:: -+ --- -Postal code associated with the location. -Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. - -type: keyword - -example: 94040 - --- - -*`destination.geo.region_iso_code`*:: -+ --- -Region ISO code. - -type: keyword - -example: CA-QC - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.geo.region_name`*:: -+ --- -Region name. - -type: keyword - -example: Quebec - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.geo.timezone`*:: -+ --- -The time zone of the location, such as IANA time zone name. - -type: keyword - -example: America/Argentina/Buenos_Aires - --- - -*`destination.ip`*:: -+ --- -IP address of the destination (IPv4 or IPv6). - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.mac`*:: -+ --- -MAC address of the destination. -The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. - -type: keyword - -example: 00-00-5E-00-53-23 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.nat.ip`*:: -+ --- -Translated ip of destination based NAT sessions (e.g. internet to private DMZ) -Typically used with load balancers, firewalls, or routers. 
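The `.address` convention described for `destination.address` above can be illustrated with a short Go sketch: keep the raw value, then duplicate it into `.ip` or `.domain` depending on what it parses as. `splitAddress` is a hypothetical helper and ignores the unix-socket case for brevity.

[source,go]
----
package main

import (
	"fmt"
	"net"
)

// splitAddress stores the raw address and duplicates it into
// destination.ip or destination.domain, per the rule quoted above.
func splitAddress(raw string) map[string]string {
	fields := map[string]string{"destination.address": raw}
	if net.ParseIP(raw) != nil {
		fields["destination.ip"] = raw
	} else {
		fields["destination.domain"] = raw
	}
	return fields
}

func main() {
	fmt.Println(splitAddress("10.10.10.10"))     // duplicated into destination.ip
	fmt.Println(splitAddress("www.example.com")) // duplicated into destination.domain
}
----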
- -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.nat.port`*:: -+ --- -Port the source session is translated to by the NAT device. -Typically used with load balancers, firewalls, or routers. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.packets`*:: -+ --- -Packets sent from the destination to the source. - -type: long - -example: 12 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.port`*:: -+ --- -Port of the destination. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.registered_domain`*:: -+ --- -The highest registered destination domain, stripped of the subdomain. -For example, the registered domain for "foo.example.com" is "example.com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". - -type: keyword - -example: example.com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.subdomain`*:: -+ --- -The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. -For example, the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. - -type: keyword - -example: east - --- - -*`destination.top_level_domain`*:: -+ --- -The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". - -type: keyword - -example: co.uk - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.user.domain`*:: -+ --- -Name of the directory the user is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.user.email`*:: -+ --- -User email address. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.user.full_name`*:: -+ --- -User's full name, if available. - -type: keyword - -example: Albert Einstein - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.user.full_name.text`*:: -+ --- -type: match_only_text - --- - -*`destination.user.group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.user.group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.user.group.name`*:: -+ --- -Name of the group. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.user.hash`*:: -+ --- -Unique user hash to correlate information for a user in anonymized form. -Useful if `user.id` or `user.name` contain confidential information and cannot be used. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field.
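As the `registered_domain`, `subdomain`, and `top_level_domain` descriptions above recommend, these values should be derived from the public suffix list rather than by counting labels. A sketch using the `golang.org/x/net/publicsuffix` package; error handling is elided, and the subdomain logic follows the "all names except the host name" rule quoted above.

[source,go]
----
package main

import (
	"fmt"
	"strings"

	"golang.org/x/net/publicsuffix"
)

func main() {
	domain := "www.east.mydomain.co.uk"

	// destination.top_level_domain: the effective TLD ("co.uk").
	etld, _ := publicsuffix.PublicSuffix(domain)

	// destination.registered_domain: eTLD plus one label ("mydomain.co.uk").
	registered, err := publicsuffix.EffectiveTLDPlusOne(domain)
	if err != nil {
		panic(err)
	}

	// destination.subdomain: the labels below the registered domain,
	// excluding the leftmost (host name) label, hence "east".
	rest := strings.TrimSuffix(domain, "."+registered) // "www.east"
	subdomain := ""
	if i := strings.Index(rest, "."); i >= 0 {
		subdomain = rest[i+1:]
	}

	fmt.Println(registered, etld, subdomain) // mydomain.co.uk co.uk east
}
----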
- --- - -*`destination.user.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - -example: S-1-5-21-202424912787-2692429404-2351956786-1000 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.user.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: a.einstein - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`destination.user.name.text`*:: -+ --- -type: match_only_text - --- - -*`destination.user.roles`*:: -+ --- -Array of user roles at the time of the event. - -type: keyword - -example: ["kibana_admin", "reporting_user"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== dll - -These fields contain information about code libraries dynamically loaded into processes. - -Many operating systems refer to "shared code libraries" with different names, but this field set refers to all of the following: -* Dynamic-link library (`.dll`) commonly used on Windows -* Shared Object (`.so`) commonly used on Unix-like operating systems -* Dynamic library (`.dylib`) commonly used on macOS - - -*`dll.code_signature.digest_algorithm`*:: -+ --- -The hashing algorithm used to sign the process. -This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. - -type: keyword - -example: sha256 - --- - -*`dll.code_signature.exists`*:: -+ --- -Boolean to capture if a signature is present. - -type: boolean - -example: true - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.code_signature.signing_id`*:: -+ --- -The identifier used to sign the process. -This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. - -type: keyword - -example: com.apple.xpc.proxy - --- - -*`dll.code_signature.status`*:: -+ --- -Additional information about the certificate status. -This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. - -type: keyword - -example: ERROR_UNTRUSTED_ROOT - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.code_signature.subject_name`*:: -+ --- -Subject name of the code signer - -type: keyword - -example: Microsoft Corporation - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.code_signature.team_id`*:: -+ --- -The team identifier used to sign the process. -This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. - -type: keyword - -example: EQHXZ8M8AV - --- - -*`dll.code_signature.timestamp`*:: -+ --- -Date and time when the code signature was generated and signed. - -type: date - -example: 2021-01-01T12:10:30Z - --- - -*`dll.code_signature.trusted`*:: -+ --- -Stores the trust status of the certificate chain. -Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. - -type: boolean - -example: true - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.code_signature.valid`*:: -+ --- -Boolean to capture if the digital signature is verified against the binary content. -Leave unpopulated if a certificate was unchecked. - -type: boolean - -example: true - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.hash.md5`*:: -+ --- -MD5 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.hash.sha1`*:: -+ --- -SHA1 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.hash.sha256`*:: -+ --- -SHA256 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`dll.hash.sha512`*:: -+ --- -SHA512 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.hash.ssdeep`*:: -+ --- -SSDEEP hash. - -type: keyword - --- - -*`dll.name`*:: -+ --- -Name of the library. -This generally maps to the name of the file on disk. - -type: keyword - -example: kernel32.dll - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.path`*:: -+ --- -Full file path of the library. - -type: keyword - -example: C:\Windows\System32\kernel32.dll - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.pe.architecture`*:: -+ --- -CPU architecture target for the file. - -type: keyword - -example: x64 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.pe.company`*:: -+ --- -Internal company name of the file, provided at compile-time. - -type: keyword - -example: Microsoft Corporation - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.pe.description`*:: -+ --- -Internal description of the file, provided at compile-time. - -type: keyword - -example: Paint - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.pe.file_version`*:: -+ --- -Internal version of the file, provided at compile-time. - -type: keyword - -example: 6.3.9600.17415 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.pe.imphash`*:: -+ --- -A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. -Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html. - -type: keyword - -example: 0c6803c4e922103c4dca5963aad36ddf - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.pe.original_file_name`*:: -+ --- -Internal name of the file, provided at compile-time. - -type: keyword - -example: MSPAINT.EXE - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dll.pe.product`*:: -+ --- -Internal product name of the file, provided at compile-time. - -type: keyword - -example: Microsoft® Windows® Operating System - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== dns - -Fields describing DNS queries and answers. -DNS events should either represent a single DNS query prior to getting answers (`dns.type:query`) or they should represent a full exchange and contain the query details as well as all of the answers that were provided for this query (`dns.type:answer`). - - -*`dns.answers`*:: -+ --- -An array containing an object for each answer section returned by the server. -The main keys that should be present in these objects are defined by ECS. Records that have more information may contain more keys than what ECS defines. -Not all DNS data sources give all details about DNS answers. At minimum, answer objects must contain the `data` key. If more information is available, map as much of it to ECS as possible, and add any additional fields to the answer objects as custom fields. - -type: object - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.answers.class`*:: -+ --- -The class of DNS data contained in this resource record. - -type: keyword - -example: IN - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.answers.data`*:: -+ --- -The data describing the resource. -The meaning of this data depends on the type and class of the resource record. - -type: keyword - -example: 10.10.10.10 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.answers.name`*:: -+ --- -The domain name to which this resource record pertains. 
-If a chain of CNAME is being resolved, each answer's `name` should be the one that corresponds with the answer's `data`. It should not simply be the original `question.name` repeated. - -type: keyword - -example: www.example.com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.answers.ttl`*:: -+ --- -The time interval in seconds that this resource record may be cached before it should be discarded. Zero values mean that the data should not be cached. - -type: long - -example: 180 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.answers.type`*:: -+ --- -The type of data contained in this resource record. - -type: keyword - -example: CNAME - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.header_flags`*:: -+ --- -Array of 2 letter DNS header flags. -Expected values are: AA, TC, RD, RA, AD, CD, DO. - -type: keyword - -example: ["RD", "RA"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.id`*:: -+ --- -The DNS packet identifier assigned by the program that generated the query. The identifier is copied to the response. - -type: keyword - -example: 62111 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.op_code`*:: -+ --- -The DNS operation code that specifies the kind of query in the message. This value is set by the originator of a query and copied into the response. - -type: keyword - -example: QUERY - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.question.class`*:: -+ --- -The class of records being queried. - -type: keyword - -example: IN - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.question.name`*:: -+ --- -The name being queried. -If the name field contains non-printable characters (below 32 or above 126), those characters should be represented as escaped base 10 integers (\DDD). Back slashes and quotes should be escaped. Tabs, carriage returns, and line feeds should be converted to \t, \r, and \n respectively. - -type: keyword - -example: www.example.com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.question.registered_domain`*:: -+ --- -The highest registered domain, stripped of the subdomain. -For example, the registered domain for "foo.example.com" is "example.com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". - -type: keyword - -example: example.com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.question.subdomain`*:: -+ --- -The subdomain is all of the labels under the registered_domain. -If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. - -type: keyword - -example: www - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.question.top_level_domain`*:: -+ --- -The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". - -type: keyword - -example: co.uk - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.question.type`*:: -+ --- -The type of record being queried. - -type: keyword - -example: AAAA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.resolved_ip`*:: -+ --- -Array containing all IPs seen in `answers.data`. 
-The `answers` array can be difficult to use, because of the variety of data formats it can contain. Extracting all IP addresses seen in there to `dns.resolved_ip` makes it possible to index them as IP addresses, and makes them easier to visualize and query for. - -type: ip - -example: ["10.10.10.10", "10.10.10.11"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.response_code`*:: -+ --- -The DNS response code. - -type: keyword - -example: NOERROR - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`dns.type`*:: -+ --- -The type of DNS event captured, query or answer. -If your source of DNS events only gives you DNS queries, you should only create dns events of type `dns.type:query`. -If your source of DNS events gives you answers as well, you should create one event per query (optionally as soon as the query is seen). And a second event containing all query details as well as an array of answers. - -type: keyword - -example: answer - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== ecs - -Meta-information specific to ECS. - - -*`ecs.version`*:: -+ --- -ECS version this event conforms to. `ecs.version` is a required field and must exist in all events. -When querying across multiple indices -- which may conform to slightly different ECS versions -- this field lets integrations adjust to the schema version of the events. - -type: keyword - -example: 1.0.0 - -required: True - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== elf - -These fields contain Linux Executable Linkable Format (ELF) metadata. - - -*`elf.architecture`*:: -+ --- -Machine architecture of the ELF file. - -type: keyword - -example: x86-64 - --- - -*`elf.byte_order`*:: -+ --- -Byte sequence of ELF file. - -type: keyword - -example: Little Endian - --- - -*`elf.cpu_type`*:: -+ --- -CPU type of the ELF file. - -type: keyword - -example: Intel - --- - -*`elf.creation_date`*:: -+ --- -Extracted when possible from the file's metadata. Indicates when it was built or compiled. It can also be faked by malware creators. - -type: date - --- - -*`elf.exports`*:: -+ --- -List of exported element names and types. - -type: flattened - --- - -*`elf.header.abi_version`*:: -+ --- -Version of the ELF Application Binary Interface (ABI). - -type: keyword - --- - -*`elf.header.class`*:: -+ --- -Header class of the ELF file. - -type: keyword - --- - -*`elf.header.data`*:: -+ --- -Data table of the ELF header. - -type: keyword - --- - -*`elf.header.entrypoint`*:: -+ --- -Header entrypoint of the ELF file. - -type: long - -format: string - --- - -*`elf.header.object_version`*:: -+ --- -"0x1" for original ELF files. - -type: keyword - --- - -*`elf.header.os_abi`*:: -+ --- -Application Binary Interface (ABI) of the Linux OS. - -type: keyword - --- - -*`elf.header.type`*:: -+ --- -Header type of the ELF file. - -type: keyword - --- - -*`elf.header.version`*:: -+ --- -Version of the ELF header. - -type: keyword - --- - -*`elf.imports`*:: -+ --- -List of imported element names and types. - -type: flattened - --- - -*`elf.sections`*:: -+ --- -An array containing an object for each section of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. - -type: nested - --- - -*`elf.sections.chi2`*:: -+ --- -Chi-square probability distribution of the section. - -type: long - -format: number - --- - -*`elf.sections.entropy`*:: -+ --- -Shannon entropy calculation from the section. - -type: long - -format: number - --- - -*`elf.sections.flags`*:: -+ --- -ELF Section List flags. 
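Circling back to `dns.resolved_ip` above: extracting the IP literals out of `dns.answers` is a small loop. A Go sketch with an illustrative `Answer` struct, not a real type from this repository.

[source,go]
----
package main

import (
	"fmt"
	"net"
)

// Answer mirrors the shape of one dns.answers object, keeping only
// the keys used here; the names are illustrative.
type Answer struct {
	Type string
	Data string
}

// resolvedIPs collects every answers[].data value that parses as an
// IP address, which is what dns.resolved_ip is meant to hold.
func resolvedIPs(answers []Answer) []string {
	var ips []string
	for _, a := range answers {
		if net.ParseIP(a.Data) != nil {
			ips = append(ips, a.Data)
		}
	}
	return ips
}

func main() {
	answers := []Answer{
		{Type: "CNAME", Data: "www.example.net"},
		{Type: "A", Data: "10.10.10.10"},
		{Type: "A", Data: "10.10.10.11"},
	}
	fmt.Println(resolvedIPs(answers)) // [10.10.10.10 10.10.10.11]
}
----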
- -type: keyword - --- - -*`elf.sections.name`*:: -+ --- -ELF Section List name. - -type: keyword - --- - -*`elf.sections.physical_offset`*:: -+ --- -ELF Section List offset. - -type: keyword - --- - -*`elf.sections.physical_size`*:: -+ --- -ELF Section List physical size. - -type: long - -format: bytes - --- - -*`elf.sections.type`*:: -+ --- -ELF Section List type. - -type: keyword - --- - -*`elf.sections.virtual_address`*:: -+ --- -ELF Section List virtual address. - -type: long - -format: string - --- - -*`elf.sections.virtual_size`*:: -+ --- -ELF Section List virtual size. - -type: long - -format: string - --- - -*`elf.segments`*:: -+ --- -An array containing an object for each segment of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. - -type: nested - --- - -*`elf.segments.sections`*:: -+ --- -ELF object segment sections. - -type: keyword - --- - -*`elf.segments.type`*:: -+ --- -ELF object segment type. - -type: keyword - --- - -*`elf.shared_libraries`*:: -+ --- -List of shared libraries used by this ELF object. - -type: keyword - --- - -*`elf.telfhash`*:: -+ --- -telfhash symbol hash for ELF file. - -type: keyword - --- - -[float] -=== error - -These fields can represent errors of any kind. -Use them for errors that happen while fetching events or in cases where the event itself contains an error. - - -*`error.code`*:: -+ --- -Error code describing the error. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`error.id`*:: -+ --- -Unique identifier for the error. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`error.message`*:: -+ --- -Error message. - -type: match_only_text - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`error.stack_trace`*:: -+ --- -The stack trace of this error in plain text. - -type: wildcard - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`error.stack_trace.text`*:: -+ --- -type: match_only_text - --- - -*`error.type`*:: -+ --- -The type of the error, for example the class name of the exception. - -type: keyword - -example: java.lang.NullPointerException - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== event - -The event fields are used for context information about the log or metric event itself. -A log is defined as an event containing details of something that happened. Log events must include the time at which the thing happened. Examples of log events include a process starting on a host, a network packet being sent from a source to a destination, or a network connection between a client and a server being initiated or closed. A metric is defined as an event containing one or more numerical measurements and the time at which the measurement was taken. Examples of metric events include memory pressure measured on a host and device temperature. See the `event.kind` definition in this section for additional details about metric and state events. - - -*`event.action`*:: -+ --- -The action captured by the event. -This describes the information in the event. It is more specific than `event.category`. Examples are `group-add`, `process-started`, `file-created`. The value is normally defined by the implementer. - -type: keyword - -example: user-password-change - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.agent_id_status`*:: -+ --- -Agents are normally responsible for populating the `agent.id` field value. 
If the system receiving events is capable of validating the value based on authentication information for the client, then this field can be used to reflect the outcome of that validation. -For example, if the agent's connection is authenticated with mTLS and the client cert contains the ID of the agent to which the cert was issued, then the `agent.id` value in events can be checked against the certificate. If the values match then `event.agent_id_status: verified` is added to the event, otherwise one of the other allowed values should be used. -If no validation is performed then the field should be omitted. -The allowed values are: -`verified` - The `agent.id` field value matches the expected value obtained from auth metadata. -`mismatch` - The `agent.id` field value does not match the expected value obtained from auth metadata. -`missing` - There was no `agent.id` field in the event to validate. -`auth_metadata_missing` - There was no auth metadata or it was missing information about the agent ID. - -type: keyword - -example: verified - --- - -*`event.category`*:: -+ --- -This is one of four ECS Categorization Fields, and indicates the second level in the ECS category hierarchy. -`event.category` represents the "big buckets" of ECS categories. For example, filtering on `event.category:process` yields all events relating to process activity. This field is closely related to `event.type`, which is used as a subcategory. -This field is an array. This will allow proper categorization of some events that fall in multiple categories. - -type: keyword - -example: authentication - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.code`*:: -+ --- -Identification code for this event, if one exists. -Some event sources use event codes to identify messages unambiguously, regardless of message language or wording adjustments over time. An example of this is the Windows Event ID. - -type: keyword - -example: 4648 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.created`*:: -+ --- -event.created contains the date/time when the event was first read by an agent, or by your pipeline. -This field is distinct from @timestamp in that @timestamp typically contains the time extracted from the original event. -In most situations, these two timestamps will be slightly different. The difference can be used to calculate the delay between your source generating an event, and the time when your agent first processed it. This can be used to monitor your agent's or pipeline's ability to keep up with your event source. -In case the two timestamps are identical, @timestamp should be used. - -type: date - -example: 2016-05-23T08:05:34.857Z - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.dataset`*:: -+ --- -Name of the dataset. -If an event source publishes more than one type of log or events (e.g. access log, error log), the dataset is used to specify which one the event comes from. -It's recommended but not required to start the dataset name with the module name, followed by a dot, then the dataset name. - -type: keyword - -example: apache.access - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.duration`*:: -+ --- -Duration of the event in nanoseconds. -If event.start and event.end are known, this value should be the difference between the end and start time. - -type: long - -format: duration - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.end`*:: -+ --- -event.end contains the date when the event ended or when the activity was last observed. - -type: date - -{yes-icon} {ecs-ref}[ECS] field.
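The `event.agent_id_status` values enumerated above reduce to a small decision table. A hedged Go sketch, assuming the agent ID asserted by auth metadata (for example an mTLS client certificate) has already been extracted; `agentIDStatus` is a hypothetical helper.

[source,go]
----
package main

import "fmt"

// agentIDStatus maps the validation outcome to the four allowed
// event.agent_id_status values quoted above.
func agentIDStatus(eventAgentID, authAgentID string, haveAuth bool) string {
	switch {
	case !haveAuth:
		return "auth_metadata_missing"
	case eventAgentID == "":
		return "missing"
	case eventAgentID == authAgentID:
		return "verified"
	default:
		return "mismatch"
	}
}

func main() {
	fmt.Println(agentIDStatus("agent-1", "agent-1", true)) // verified
	fmt.Println(agentIDStatus("agent-2", "agent-1", true)) // mismatch
}
----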
- --- - -*`event.hash`*:: -+ --- -Hash (perhaps logstash fingerprint) of raw field to be able to demonstrate log integrity. - -type: keyword - -example: 123456789012345678901234567890ABCD - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.id`*:: -+ --- -Unique ID to describe the event. - -type: keyword - -example: 8a4f500d - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.ingested`*:: -+ --- -Timestamp when an event arrived in the central data store. -This is different from `@timestamp`, which is when the event originally occurred. It's also different from `event.created`, which is meant to capture the first time an agent saw the event. -In normal conditions, assuming no tampering, the timestamps should chronologically look like this: `@timestamp` < `event.created` < `event.ingested`. - -type: date - -example: 2016-05-23T08:05:35.101Z - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.kind`*:: -+ --- -This is one of four ECS Categorization Fields, and indicates the highest level in the ECS category hierarchy. -`event.kind` gives high-level information about what type of information the event contains, without being specific to the contents of the event. For example, values of this field distinguish alert events from metric events. -The value of this field can be used to inform how these kinds of events should be handled. They may warrant different retention, different access control, and it may also help understand whether the data is coming in at a regular interval or not. - -type: keyword - -example: alert - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.module`*:: -+ --- -Name of the module this data is coming from. -If your monitoring agent supports the concept of modules or plugins to process events of a given source (e.g. Apache logs), `event.module` should contain the name of this module. - -type: keyword - -example: apache - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.original`*:: -+ --- -Raw text message of entire event. Used to demonstrate log integrity or where the full log message (before splitting it up in multiple parts) may be required, e.g. for reindex. -This field is not indexed and doc_values are disabled. It cannot be searched, but it can be retrieved from `_source`. If users wish to override this and index this field, please see `Field data types` in the `Elasticsearch Reference`. - -type: keyword - -example: Sep 19 08:26:10 host CEF:0|Security| threatmanager|1.0|100| worm successfully stopped|10|src=10.0.0.1 dst=2.1.2.2spt=1232 - -{yes-icon} {ecs-ref}[ECS] field. - -Field is not indexed. - --- - -*`event.outcome`*:: -+ --- -This is one of four ECS Categorization Fields, and indicates the lowest level in the ECS category hierarchy. -`event.outcome` simply denotes whether the event represents a success or a failure from the perspective of the entity that produced the event. -Note that when a single transaction is described in multiple events, each event may populate different values of `event.outcome`, according to their perspective. -Also note that in the case of a compound event (a single event that contains multiple logical events), this field should be populated with the value that best captures the overall success or failure from the perspective of the event producer. -Further note that not all events will have an associated outcome. For example, this field is generally not populated for metric events, events with `event.type:info`, or any events for which an outcome does not make logical sense.
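The relationship between `@timestamp`, `event.created`, and `event.ingested` described above can be checked mechanically. A minimal Go sketch using the example values from these field definitions:

[source,go]
----
package main

import (
	"fmt"
	"time"
)

// Illustrates the expected ordering @timestamp <= event.created <=
// event.ingested and the delays derivable from the three timestamps.
func main() {
	ts, _ := time.Parse(time.RFC3339, "2016-05-23T08:05:34.857Z")       // @timestamp
	created, _ := time.Parse(time.RFC3339, "2016-05-23T08:05:34.857Z")  // event.created
	ingested, _ := time.Parse(time.RFC3339, "2016-05-23T08:05:35.101Z") // event.ingested

	// Delay between the source generating the event and the agent first
	// processing it (zero here, since the example values coincide).
	fmt.Println("agent delay:", created.Sub(ts))

	// Additional delay before the event reached the central data store.
	fmt.Println("ingest delay:", ingested.Sub(created))

	ordered := !created.Before(ts) && !ingested.Before(created)
	fmt.Println("chronological:", ordered) // true
}
----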
- -type: keyword - -example: success - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.provider`*:: -+ --- -Source of the event. -Event transports such as Syslog or the Windows Event Log typically mention the source of an event. It can be the name of the software that generated the event (e.g. Sysmon, httpd), or of a subsystem of the operating system (kernel, Microsoft-Windows-Security-Auditing). - -type: keyword - -example: kernel - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.reason`*:: -+ --- -Reason why this event happened, according to the source. -This describes the why of a particular action or outcome captured in the event. Where `event.action` captures the action from the event, `event.reason` describes why that action was taken. For example, a web proxy with an `event.action` which denied the request may also populate `event.reason` with the reason why (e.g. `blocked site`). - -type: keyword - -example: Terminated an unexpected process - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.reference`*:: -+ --- -Reference URL linking to additional information about this event. -This URL links to a static definition of this event. Alert events, indicated by `event.kind:alert`, are a common use case for this field. - -type: keyword - -example: https://system.example.com/event/#0001234 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.risk_score`*:: -+ --- -Risk score or priority of the event (e.g. security solutions). Use your system's original value here. - -type: float - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.risk_score_norm`*:: -+ --- -Normalized risk score or priority of the event, on a scale of 0 to 100. -This is mainly useful if you use more than one system that assigns risk scores, and you want to see a normalized value across all systems. - -type: float - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.sequence`*:: -+ --- -Sequence number of the event. -The sequence number is a value published by some event sources, to make the exact ordering of events unambiguous, regardless of the timestamp precision. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.severity`*:: -+ --- -The numeric severity of the event according to your event source. -What the different severity values mean can be different between sources and use cases. It's up to the implementer to make sure severities are consistent across events from the same source. -The Syslog severity belongs in `log.syslog.severity.code`. `event.severity` is meant to represent the severity according to the event source (e.g. firewall, IDS). If the event source does not publish its own severity, you may optionally copy the `log.syslog.severity.code` to `event.severity`. - -type: long - -example: 7 - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.start`*:: -+ --- -event.start contains the date when the event started or when the activity was first observed. - -type: date - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.timezone`*:: -+ --- -This field should be populated when the event's timestamp does not include timezone information already (e.g. default Syslog timestamps). It's optional otherwise. -Acceptable timezone formats are: a canonical ID (e.g. "Europe/Amsterdam"), abbreviated (e.g. "EST") or an HH:mm differential (e.g. "-05:00"). - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.type`*:: -+ --- -This is one of four ECS Categorization Fields, and indicates the third level in the ECS category hierarchy. 
-`event.type` represents a categorization "sub-bucket" that, when used along with the `event.category` field values, enables filtering events down to a level appropriate for single visualization. -This field is an array. This will allow proper categorization of some events that fall in multiple event types. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`event.url`*:: -+ --- -URL linking to an external system to continue investigation of this event. -This URL links to another system where in-depth investigation of the specific occurrence of this event can take place. Alert events, indicated by `event.kind:alert`, are a common use case for this field. - -type: keyword - -example: https://mysystem.example.com/alert/5271dedb-f5b0-4218-87f0-4ac4870a38fe - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== file - -A file is defined as a set of information that has been created on, or has existed on a filesystem. -File objects can be associated with host events, network events, and/or file events (e.g., those produced by File Integrity Monitoring [FIM] products or services). File fields provide details about the affected file associated with the event or metric. - - -*`file.accessed`*:: -+ --- -Last time the file was accessed. -Note that not all filesystems keep track of access time. - -type: date - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.attributes`*:: -+ --- -Array of file attributes. -Attributes names will vary by platform. Here's a non-exhaustive list of values that are expected in this field: archive, compressed, directory, encrypted, execute, hidden, read, readonly, system, write. - -type: keyword - -example: ["readonly", "system"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.code_signature.digest_algorithm`*:: -+ --- -The hashing algorithm used to sign the process. -This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. - -type: keyword - -example: sha256 - --- - -*`file.code_signature.exists`*:: -+ --- -Boolean to capture if a signature is present. - -type: boolean - -example: true - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.code_signature.signing_id`*:: -+ --- -The identifier used to sign the process. -This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. - -type: keyword - -example: com.apple.xpc.proxy - --- - -*`file.code_signature.status`*:: -+ --- -Additional information about the certificate status. -This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. - -type: keyword - -example: ERROR_UNTRUSTED_ROOT - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.code_signature.subject_name`*:: -+ --- -Subject name of the code signer - -type: keyword - -example: Microsoft Corporation - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.code_signature.team_id`*:: -+ --- -The team identifier used to sign the process. -This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. - -type: keyword - -example: EQHXZ8M8AV - --- - -*`file.code_signature.timestamp`*:: -+ --- -Date and time when the code signature was generated and signed. - -type: date - -example: 2021-01-01T12:10:30Z - --- - -*`file.code_signature.trusted`*:: -+ --- -Stores the trust status of the certificate chain. 
-Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. - -type: boolean - -example: true - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.code_signature.valid`*:: -+ --- -Boolean to capture if the digital signature is verified against the binary content. -Leave unpopulated if a certificate was unchecked. - -type: boolean - -example: true - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.created`*:: -+ --- -File creation time. -Note that not all filesystems store the creation time. - -type: date - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.ctime`*:: -+ --- -Last time the file attributes or metadata changed. -Note that changes to the file content will update `mtime`. This implies `ctime` will be adjusted at the same time, since `mtime` is an attribute of the file. - -type: date - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.device`*:: -+ --- -Device that is the source of the file. - -type: keyword - -example: sda - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.directory`*:: -+ --- -Directory where the file is located. It should include the drive letter, when appropriate. - -type: keyword - -example: /home/alice - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.drive_letter`*:: -+ --- -Drive letter where the file is located. This field is only relevant on Windows. -The value should be uppercase, and not include the colon. - -type: keyword - -example: C - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.elf.architecture`*:: -+ --- -Machine architecture of the ELF file. - -type: keyword - -example: x86-64 - --- - -*`file.elf.byte_order`*:: -+ --- -Byte sequence of ELF file. - -type: keyword - -example: Little Endian - --- - -*`file.elf.cpu_type`*:: -+ --- -CPU type of the ELF file. - -type: keyword - -example: Intel - --- - -*`file.elf.creation_date`*:: -+ --- -Extracted when possible from the file's metadata. Indicates when it was built or compiled. It can also be faked by malware creators. - -type: date - --- - -*`file.elf.exports`*:: -+ --- -List of exported element names and types. - -type: flattened - --- - -*`file.elf.header.abi_version`*:: -+ --- -Version of the ELF Application Binary Interface (ABI). - -type: keyword - --- - -*`file.elf.header.class`*:: -+ --- -Header class of the ELF file. - -type: keyword - --- - -*`file.elf.header.data`*:: -+ --- -Data table of the ELF header. - -type: keyword - --- - -*`file.elf.header.entrypoint`*:: -+ --- -Header entrypoint of the ELF file. - -type: long - -format: string - --- - -*`file.elf.header.object_version`*:: -+ --- -"0x1" for original ELF files. - -type: keyword - --- - -*`file.elf.header.os_abi`*:: -+ --- -Application Binary Interface (ABI) of the Linux OS. - -type: keyword - --- - -*`file.elf.header.type`*:: -+ --- -Header type of the ELF file. - -type: keyword - --- - -*`file.elf.header.version`*:: -+ --- -Version of the ELF header. - -type: keyword - --- - -*`file.elf.imports`*:: -+ --- -List of imported element names and types. - -type: flattened - --- - -*`file.elf.sections`*:: -+ --- -An array containing an object for each section of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. - -type: nested - --- - -*`file.elf.sections.chi2`*:: -+ --- -Chi-square probability distribution of the section. - -type: long - -format: number - --- - -*`file.elf.sections.entropy`*:: -+ --- -Shannon entropy calculation from the section. 
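For the `*.sections.entropy` fields in this section, the value is the Shannon entropy of the section's bytes, H = -sum(p_i * log2(p_i)) over the byte frequencies. A self-contained Go illustration of the formula, not the tool that populates the field:

[source,go]
----
package main

import (
	"fmt"
	"math"
)

// shannonEntropy returns the Shannon entropy, in bits per byte, of the
// given data: H = -sum(p_i * log2(p_i)) over observed byte frequencies.
func shannonEntropy(data []byte) float64 {
	if len(data) == 0 {
		return 0
	}
	var counts [256]int
	for _, b := range data {
		counts[b]++
	}
	var h float64
	n := float64(len(data))
	for _, c := range counts {
		if c == 0 {
			continue
		}
		p := float64(c) / n
		h -= p * math.Log2(p)
	}
	return h
}

func main() {
	fmt.Println(shannonEntropy([]byte("aaaa")))     // 0 (a single repeated byte)
	fmt.Println(shannonEntropy([]byte("abcdabcd"))) // 2 (four equally likely bytes)
}
----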
- -type: long - -format: number - --- - -*`file.elf.sections.flags`*:: -+ --- -ELF Section List flags. - -type: keyword - --- - -*`file.elf.sections.name`*:: -+ --- -ELF Section List name. - -type: keyword - --- - -*`file.elf.sections.physical_offset`*:: -+ --- -ELF Section List offset. - -type: keyword - --- - -*`file.elf.sections.physical_size`*:: -+ --- -ELF Section List physical size. - -type: long - -format: bytes - --- - -*`file.elf.sections.type`*:: -+ --- -ELF Section List type. - -type: keyword - --- - -*`file.elf.sections.virtual_address`*:: -+ --- -ELF Section List virtual address. - -type: long - -format: string - --- - -*`file.elf.sections.virtual_size`*:: -+ --- -ELF Section List virtual size. - -type: long - -format: string - --- - -*`file.elf.segments`*:: -+ --- -An array containing an object for each segment of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. - -type: nested - --- - -*`file.elf.segments.sections`*:: -+ --- -ELF object segment sections. - -type: keyword - --- - -*`file.elf.segments.type`*:: -+ --- -ELF object segment type. - -type: keyword - --- - -*`file.elf.shared_libraries`*:: -+ --- -List of shared libraries used by this ELF object. - -type: keyword - --- - -*`file.elf.telfhash`*:: -+ --- -telfhash symbol hash for ELF file. - -type: keyword - --- - -*`file.extension`*:: -+ --- -File extension, excluding the leading dot. -Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz"). - -type: keyword - -example: png - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.fork_name`*:: -+ --- -A fork is additional data associated with a filesystem object. -On Linux, a resource fork is used to store additional data with a filesystem object. A file always has at least one fork for the data portion, and additional forks may exist. -On NTFS, this is analogous to an Alternate Data Stream (ADS), and the default data stream for a file is just called $DATA. Zone.Identifier is commonly used by Windows to track contents downloaded from the Internet. An ADS is typically of the form: `C:\path\to\filename.extension:some_fork_name`, and `some_fork_name` is the value that should populate `fork_name`. `filename.extension` should populate `file.name`, and `extension` should populate `file.extension`. The full path, `file.path`, will include the fork name. - -type: keyword - -example: Zone.Identifier - --- - -*`file.gid`*:: -+ --- -Primary group ID (GID) of the file. - -type: keyword - -example: 1001 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.group`*:: -+ --- -Primary group name of the file. - -type: keyword - -example: alice - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.hash.md5`*:: -+ --- -MD5 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.hash.sha1`*:: -+ --- -SHA1 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.hash.sha256`*:: -+ --- -SHA256 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.hash.sha512`*:: -+ --- -SHA512 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.hash.ssdeep`*:: -+ --- -SSDEEP hash. - -type: keyword - --- - -*`file.inode`*:: -+ --- -Inode representing the file in the filesystem. - -type: keyword - -example: 256383 - -{yes-icon} {ecs-ref}[ECS] field.
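The multi-extension rule for `file.extension` above maps directly onto the standard library. A tiny Go sketch; `fileExtension` is a hypothetical helper.

[source,go]
----
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// fileExtension applies the file.extension rule quoted above: exclude
// the leading dot and, for multi-extension names, keep only the last.
func fileExtension(name string) string {
	return strings.TrimPrefix(filepath.Ext(name), ".")
}

func main() {
	fmt.Println(fileExtension("example.png"))    // png
	fmt.Println(fileExtension("example.tar.gz")) // gz, not tar.gz
}
----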
- --- - -*`file.mime_type`*:: -+ --- -MIME type should identify the format of the file or stream of bytes using https://www.iana.org/assignments/media-types/media-types.xhtml[IANA official types], where possible. When more than one type is applicable, the most specific type should be used. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.mode`*:: -+ --- -Mode of the file in octal representation. - -type: keyword - -example: 0640 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.mtime`*:: -+ --- -Last time the file content was modified. - -type: date - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.name`*:: -+ --- -Name of the file including the extension, without the directory. - -type: keyword - -example: example.png - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.owner`*:: -+ --- -File owner's username. - -type: keyword - -example: alice - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.path`*:: -+ --- -Full path to the file, including the file name. It should include the drive letter, when appropriate. - -type: keyword - -example: /home/alice/example.png - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.path.text`*:: -+ --- -type: match_only_text - --- - -*`file.pe.architecture`*:: -+ --- -CPU architecture target for the file. - -type: keyword - -example: x64 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.pe.company`*:: -+ --- -Internal company name of the file, provided at compile-time. - -type: keyword - -example: Microsoft Corporation - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.pe.description`*:: -+ --- -Internal description of the file, provided at compile-time. - -type: keyword - -example: Paint - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.pe.file_version`*:: -+ --- -Internal version of the file, provided at compile-time. - -type: keyword - -example: 6.3.9600.17415 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.pe.imphash`*:: -+ --- -A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. -Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html. - -type: keyword - -example: 0c6803c4e922103c4dca5963aad36ddf - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.pe.original_file_name`*:: -+ --- -Internal name of the file, provided at compile-time. - -type: keyword - -example: MSPAINT.EXE - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.pe.product`*:: -+ --- -Internal product name of the file, provided at compile-time. - -type: keyword - -example: Microsoft® Windows® Operating System - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.size`*:: -+ --- -File size in bytes. -Only relevant when `file.type` is "file". - -type: long - -example: 16384 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.target_path`*:: -+ --- -Target path for symlinks. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.target_path.text`*:: -+ --- -type: match_only_text - --- - -*`file.type`*:: -+ --- -File type (file, dir, or symlink). - -type: keyword - -example: file - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.uid`*:: -+ --- -The user ID (UID) or security identifier (SID) of the file owner. - -type: keyword - -example: 1001 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.alternative_names`*:: -+ --- -List of subject alternative names (SAN). 
Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. - -type: keyword - -example: *.elastic.co - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.issuer.common_name`*:: -+ --- -List of common name (CN) of issuing certificate authority. - -type: keyword - -example: Example SHA2 High Assurance Server CA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.issuer.country`*:: -+ --- -List of country (C) codes - -type: keyword - -example: US - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.issuer.distinguished_name`*:: -+ --- -Distinguished name (DN) of issuing certificate authority. - -type: keyword - -example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.issuer.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: Mountain View - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.issuer.organization`*:: -+ --- -List of organizations (O) of issuing certificate authority. - -type: keyword - -example: Example Inc - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.issuer.organizational_unit`*:: -+ --- -List of organizational units (OU) of issuing certificate authority. - -type: keyword - -example: www.example.com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.issuer.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.not_after`*:: -+ --- -Time at which the certificate is no longer considered valid. - -type: date - -example: 2020-07-16 03:15:39+00:00 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.not_before`*:: -+ --- -Time at which the certificate is first considered valid. - -type: date - -example: 2019-08-16 01:40:25+00:00 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.public_key_algorithm`*:: -+ --- -Algorithm used to generate the public key. - -type: keyword - -example: RSA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.public_key_curve`*:: -+ --- -The curve used by the elliptic curve public key algorithm. This is algorithm specific. - -type: keyword - -example: nistp521 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.public_key_exponent`*:: -+ --- -Exponent used to derive the public key. This is algorithm specific. - -type: long - -example: 65537 - -{yes-icon} {ecs-ref}[ECS] field. - -Field is not indexed. - --- - -*`file.x509.public_key_size`*:: -+ --- -The size of the public key space in bits. - -type: long - -example: 2048 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.serial_number`*:: -+ --- -Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. - -type: keyword - -example: 55FBB9C7DEBF09809D12CCAA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.signature_algorithm`*:: -+ --- -Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353. - -type: keyword - -example: SHA256-RSA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.subject.common_name`*:: -+ --- -List of common names (CN) of subject. - -type: keyword - -example: shared.global.example.net - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`file.x509.subject.country`*:: -+ --- -List of country (C) code - -type: keyword - -example: US - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.subject.distinguished_name`*:: -+ --- -Distinguished name (DN) of the certificate subject entity. - -type: keyword - -example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.subject.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: San Francisco - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.subject.organization`*:: -+ --- -List of organizations (O) of subject. - -type: keyword - -example: Example, Inc. - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.subject.organizational_unit`*:: -+ --- -List of organizational units (OU) of subject. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.subject.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`file.x509.version_number`*:: -+ --- -Version of x509 format. - -type: keyword - -example: 3 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== geo - -Geo fields can carry data about a specific location related to an event. -This geolocation information can be derived from techniques such as Geo IP, or be user-supplied. - - -*`geo.city_name`*:: -+ --- -City name. - -type: keyword - -example: Montreal - --- - -*`geo.continent_code`*:: -+ --- -Two-letter code representing continent's name. - -type: keyword - -example: NA - --- - -*`geo.continent_name`*:: -+ --- -Name of the continent. - -type: keyword - -example: North America - --- - -*`geo.country_iso_code`*:: -+ --- -Country ISO code. - -type: keyword - -example: CA - --- - -*`geo.country_name`*:: -+ --- -Country name. - -type: keyword - -example: Canada - --- - -*`geo.location`*:: -+ --- -Longitude and latitude. - -type: geo_point - -example: { "lon": -73.614830, "lat": 45.505918 } - --- - -*`geo.name`*:: -+ --- -User-defined description of a location, at the level of granularity they care about. -Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. -Not typically used in automated geolocation. - -type: keyword - -example: boston-dc - --- - -*`geo.postal_code`*:: -+ --- -Postal code associated with the location. -Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. - -type: keyword - -example: 94040 - --- - -*`geo.region_iso_code`*:: -+ --- -Region ISO code. - -type: keyword - -example: CA-QC - --- - -*`geo.region_name`*:: -+ --- -Region name. - -type: keyword - -example: Quebec - --- - -*`geo.timezone`*:: -+ --- -The time zone of the location, such as IANA time zone name. - -type: keyword - -example: America/Argentina/Buenos_Aires - --- - -[float] -=== group - -The group fields are meant to represent groups that are relevant to the event. - - -*`group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`group.name`*:: -+ --- -Name of the group. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. 
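Ahead of the `hash` fieldset that follows: populating these fields is a matter of hashing the raw bytes and keying the result by algorithm name, using the predefined names for common algorithms and snake_case names such as `sha3_512` otherwise. A minimal Go sketch with an illustrative field map:

[source,go]
----
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Computes one of the predefined hashes and stores it under its
// hash.* field name; the map shape is illustrative only.
func main() {
	content := []byte("hello world")
	sum := sha256.Sum256(content)
	fields := map[string]string{
		"hash.sha256": hex.EncodeToString(sum[:]),
	}
	fmt.Println(fields)
}
----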
- --- - -[float] -=== hash - -The hash fields represent different bitwise hash algorithms and their values. -Field names for common hashes (e.g. MD5, SHA1) are predefined. Add fields for other hashes by lowercasing the hash algorithm name and using underscore separators as appropriate (snake case, e.g. sha3_512). -Note that this fieldset is used for common hashes that may be computed over a range of generic bytes. Entity-specific hashes such as ja3 or imphash are placed in the fieldsets to which they relate (tls and pe, respectively). - - -*`hash.md5`*:: -+ --- -MD5 hash. - -type: keyword - --- - -*`hash.sha1`*:: -+ --- -SHA1 hash. - -type: keyword - --- - -*`hash.sha256`*:: -+ --- -SHA256 hash. - -type: keyword - --- - -*`hash.sha512`*:: -+ --- -SHA512 hash. - -type: keyword - --- - -*`hash.ssdeep`*:: -+ --- -SSDEEP hash. - -type: keyword - --- - -[float] -=== host - -A host is defined as a general computing instance. -ECS host.* fields should be populated with details about the host on which the event happened, or from which the measurement was taken. Host types include hardware, virtual machines, Docker containers, and Kubernetes nodes. - - -*`host.architecture`*:: -+ --- -Operating system architecture. - -type: keyword - -example: x86_64 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.cpu.usage`*:: -+ --- -Percent CPU used which is normalized by the number of CPU cores and it ranges from 0 to 1. -Scaling factor: 1000. -For example: For a two core host, this value should be the average of the two cores, between 0 and 1. - -type: scaled_float - --- - -*`host.disk.read.bytes`*:: -+ --- -The total number of bytes (gauge) read successfully (aggregated from all disks) since the last metric collection. - -type: long - --- - -*`host.disk.write.bytes`*:: -+ --- -The total number of bytes (gauge) written successfully (aggregated from all disks) since the last metric collection. - -type: long - --- - -*`host.domain`*:: -+ --- -Name of the domain of which the host is a member. -For example, on Windows this could be the host's Active Directory domain or NetBIOS domain name. For Linux this could be the domain of the host's LDAP provider. - -type: keyword - -example: CONTOSO - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.geo.city_name`*:: -+ --- -City name. - -type: keyword - -example: Montreal - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.geo.continent_code`*:: -+ --- -Two-letter code representing continent's name. - -type: keyword - -example: NA - --- - -*`host.geo.continent_name`*:: -+ --- -Name of the continent. - -type: keyword - -example: North America - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.geo.country_iso_code`*:: -+ --- -Country ISO code. - -type: keyword - -example: CA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.geo.country_name`*:: -+ --- -Country name. - -type: keyword - -example: Canada - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.geo.location`*:: -+ --- -Longitude and latitude. - -type: geo_point - -example: { "lon": -73.614830, "lat": 45.505918 } - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.geo.name`*:: -+ --- -User-defined description of a location, at the level of granularity they care about. -Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. -Not typically used in automated geolocation. - -type: keyword - -example: boston-dc - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.geo.postal_code`*:: -+ --- -Postal code associated with the location. 
-Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. - -type: keyword - -example: 94040 - --- - -*`host.geo.region_iso_code`*:: -+ --- -Region ISO code. - -type: keyword - -example: CA-QC - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.geo.region_name`*:: -+ --- -Region name. - -type: keyword - -example: Quebec - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.geo.timezone`*:: -+ --- -The time zone of the location, such as IANA time zone name. - -type: keyword - -example: America/Argentina/Buenos_Aires - --- - -*`host.hostname`*:: -+ --- -Hostname of the host. -It normally contains what the `hostname` command returns on the host machine. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.id`*:: -+ --- -Unique host id. -As hostname is not always unique, use values that are meaningful in your environment. -Example: The current usage of `beat.name`. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.ip`*:: -+ --- -Host ip addresses. - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.mac`*:: -+ --- -Host MAC addresses. -The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. - -type: keyword - -example: ["00-00-5E-00-53-23", "00-00-5E-00-53-24"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.name`*:: -+ --- -Name of the host. -It can contain what `hostname` returns on Unix systems, the fully qualified domain name, or a name specified by the user. The sender decides which value to use. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.network.egress.bytes`*:: -+ --- -The number of bytes (gauge) sent out on all network interfaces by the host since the last metric collection. - -type: long - --- - -*`host.network.egress.packets`*:: -+ --- -The number of packets (gauge) sent out on all network interfaces by the host since the last metric collection. - -type: long - --- - -*`host.network.ingress.bytes`*:: -+ --- -The number of bytes received (gauge) on all network interfaces by the host since the last metric collection. - -type: long - --- - -*`host.network.ingress.packets`*:: -+ --- -The number of packets (gauge) received on all network interfaces by the host since the last metric collection. - -type: long - --- - -*`host.os.family`*:: -+ --- -OS family (such as redhat, debian, freebsd, windows). - -type: keyword - -example: debian - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.os.full`*:: -+ --- -Operating system name, including the version or code name. - -type: keyword - -example: Mac OS Mojave - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.os.full.text`*:: -+ --- -type: match_only_text - --- - -*`host.os.kernel`*:: -+ --- -Operating system kernel version as a raw string. - -type: keyword - -example: 4.4.0-112-generic - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.os.name`*:: -+ --- -Operating system name, without the version. - -type: keyword - -example: Mac OS X - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.os.name.text`*:: -+ --- -type: match_only_text - --- - -*`host.os.platform`*:: -+ --- -Operating system platform (such centos, ubuntu, windows). - -type: keyword - -example: darwin - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`host.os.type`*:: -+ --- -Use the `os.type` field to categorize the operating system into one of the broad commercial families. -One of these following values should be used (lowercase): linux, macos, unix, windows. -If the OS you're dealing with is not in the list, the field should not be populated. Please let us know by opening an issue with ECS, to propose its addition. - -type: keyword - -example: macos - --- - -*`host.os.version`*:: -+ --- -Operating system version as a raw string. - -type: keyword - -example: 10.14.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.type`*:: -+ --- -Type of host. -For Cloud providers this can be the machine type like `t2.medium`. If vm, this could be the container, for example, or other information meaningful in your environment. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.uptime`*:: -+ --- -Seconds the host has been up. - -type: long - -example: 1325 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.user.domain`*:: -+ --- -Name of the directory the user is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.user.email`*:: -+ --- -User email address. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.user.full_name`*:: -+ --- -User's full name, if available. - -type: keyword - -example: Albert Einstein - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.user.full_name.text`*:: -+ --- -type: match_only_text - --- - -*`host.user.group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.user.group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.user.group.name`*:: -+ --- -Name of the group. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.user.hash`*:: -+ --- -Unique user hash to correlate information for a user in anonymized form. -Useful if `user.id` or `user.name` contain confidential information and cannot be used. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.user.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - -example: S-1-5-21-202424912787-2692429404-2351956786-1000 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.user.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: a.einstein - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`host.user.name.text`*:: -+ --- -type: match_only_text - --- - -*`host.user.roles`*:: -+ --- -Array of user roles at the time of the event. - -type: keyword - -example: ["kibana_admin", "reporting_user"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== http - -Fields related to HTTP activity. Use the `url` field set to store the url of the request. - - -*`http.request.body.bytes`*:: -+ --- -Size in bytes of the request body. - -type: long - -example: 887 - -format: bytes - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`http.request.body.content`*:: -+ --- -The full HTTP request body. - -type: wildcard - -example: Hello world - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`http.request.body.content.text`*:: -+ --- -type: match_only_text - --- - -*`http.request.bytes`*:: -+ --- -Total size in bytes of the request (body and headers). - -type: long - -example: 1437 - -format: bytes - -{yes-icon} {ecs-ref}[ECS] field. 
-
---
-
-*`http.request.id`*::
-+
---
-A unique identifier for each HTTP request to correlate logs between clients and servers in transactions.
-The id may be contained in a non-standard HTTP header, such as `X-Request-ID` or `X-Correlation-ID`.
-
-type: keyword
-
-example: 123e4567-e89b-12d3-a456-426614174000
-
---
-
-*`http.request.method`*::
-+
---
-HTTP request method.
-Prior to ECS 1.6.0 the following guidance was provided:
-"The field value must be normalized to lowercase for querying."
-As of ECS 1.6.0, the guidance is deprecated because the original case of the method may be useful in anomaly detection. Original case will be mandated in ECS 2.0.0.
-
-type: keyword
-
-example: GET, POST, PUT, PoST
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`http.request.mime_type`*::
-+
---
-Mime type of the body of the request.
-This value must only be populated based on the content of the request body, not on the `Content-Type` header. Comparing the mime type of a request with the request's Content-Type header can be helpful in detecting threats or misconfigured clients.
-
-type: keyword
-
-example: image/gif
-
---
-
-*`http.request.referrer`*::
-+
---
-Referrer for this HTTP request.
-
-type: keyword
-
-example: https://blog.example.com/
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`http.response.body.bytes`*::
-+
---
-Size in bytes of the response body.
-
-type: long
-
-example: 887
-
-format: bytes
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`http.response.body.content`*::
-+
---
-The full HTTP response body.
-
-type: wildcard
-
-example: Hello world
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`http.response.body.content.text`*::
-+
---
-type: match_only_text
-
---
-
-*`http.response.bytes`*::
-+
---
-Total size in bytes of the response (body and headers).
-
-type: long
-
-example: 1437
-
-format: bytes
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`http.response.mime_type`*::
-+
---
-Mime type of the body of the response.
-This value must only be populated based on the content of the response body, not on the `Content-Type` header. Comparing the mime type of a response with the response's Content-Type header can be helpful in detecting misconfigured servers.
-
-type: keyword
-
-example: image/gif
-
---
-
-*`http.response.status_code`*::
-+
---
-HTTP response status code.
-
-type: long
-
-example: 404
-
-format: string
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`http.version`*::
-+
---
-HTTP version.
-
-type: keyword
-
-example: 1.1
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
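-
-To make the nesting of these fields concrete, here is a hypothetical document fragment using the http.* fields above (an illustrative sketch; the values are invented, not taken from a real event):
-
-[source,python]
-------------------------------------------------------------
-# Hypothetical ECS-style document using the http.* fields above.
-event = {
-    "http": {
-        "version": "1.1",
-        "request": {
-            "id": "123e4567-e89b-12d3-a456-426614174000",
-            "method": "GET",
-            "referrer": "https://blog.example.com/",
-        },
-        "response": {
-            "status_code": 404,
-            "mime_type": "image/gif",
-            "bytes": 1437,           # body and headers
-            "body": {"bytes": 887},  # body only
-        },
-    },
-}
-
-# The total size always includes the body size.
-assert event["http"]["response"]["bytes"] >= event["http"]["response"]["body"]["bytes"]
-------------------------------------------------------------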
-
-[float]
-=== interface
-
-The interface fields are used to record ingress and egress interface information when reported by an observer (e.g. firewall, router, load balancer) in the context of the observer handling a network connection. In the case of a single observer interface (e.g. network sensor on a span port) only the observer.ingress information should be populated.
-
-
-*`interface.alias`*::
-+
---
-Interface alias as reported by the system, typically used in firewall implementations for logical interface naming, e.g. inside, outside, or dmz.
-
-type: keyword
-
-example: outside
-
---
-
-*`interface.id`*::
-+
---
-Interface ID as reported by an observer (typically SNMP interface ID).
-
-type: keyword
-
-example: 10
-
---
-
-*`interface.name`*::
-+
---
-Interface name as reported by the system.
-
-type: keyword
-
-example: eth0
-
---
-
-[float]
-=== log
-
-Details about the event's logging mechanism or logging transport.
-The log.* fields are typically populated with details about the logging mechanism used to create and/or transport the event. For example, syslog details belong under `log.syslog.*`.
-The details specific to your event source are typically not logged under `log.*`, but rather in `event.*` or in other ECS fields.
-
-
-*`log.file.path`*::
-+
---
-Full path to the log file this event came from, including the file name. It should include the drive letter, when appropriate.
-If the event wasn't read from a log file, do not populate this field.
-
-type: keyword
-
-example: /var/log/fun-times.log
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`log.level`*::
-+
---
-Original log level of the log event.
-If the source of the event provides a log level or textual severity, this is the one that goes in `log.level`. If your source doesn't specify one, you may put your event transport's severity here (e.g. Syslog severity).
-Some examples are `warn`, `err`, `i`, `informational`.
-
-type: keyword
-
-example: error
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`log.logger`*::
-+
---
-The name of the logger inside an application. This is usually the name of the class which initialized the logger, or can be a custom name.
-
-type: keyword
-
-example: org.elasticsearch.bootstrap.Bootstrap
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`log.origin.file.line`*::
-+
---
-The line number of the file containing the source code which originated the log event.
-
-type: integer
-
-example: 42
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`log.origin.file.name`*::
-+
---
-The name of the file containing the source code which originated the log event.
-Note that this field is not meant to capture the log file. The correct field to capture the log file is `log.file.path`.
-
-type: keyword
-
-example: Bootstrap.java
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`log.origin.function`*::
-+
---
-The name of the function or method which originated the log event.
-
-type: keyword
-
-example: init
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`log.original`*::
-+
---
-Deprecated for removal in the next major version release. This field is superseded by `event.original`.
-This is the original log message and contains the full log message before it is split into multiple parts.
-In contrast to the `message` field, which can contain an extracted part of the log message, this field contains the original, full log message. It may already have some modifications applied, such as encoding or removal of newlines, to clean up the log message.
-This field is not indexed and doc_values are disabled, so it can't be queried, but the value can be retrieved from `_source`.
-
-type: keyword
-
-example: Sep 19 08:26:10 localhost My log
-
-{yes-icon} {ecs-ref}[ECS] field.
-
-Field is not indexed.
-
---
-
-*`log.syslog`*::
-+
---
-The Syslog metadata of the event, if the event was transmitted via Syslog. Please see RFCs 5424 or 3164.
-
-type: object
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`log.syslog.facility.code`*::
-+
---
-The Syslog numeric facility of the log event, if available.
-According to RFCs 5424 and 3164, this value should be an integer between 0 and 23.
-
-type: long
-
-example: 23
-
-format: string
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`log.syslog.facility.name`*::
-+
---
-The Syslog text-based facility of the log event, if available.
-
-type: keyword
-
-example: local7
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`log.syslog.priority`*::
-+
---
-Syslog numeric priority of the event, if available.
-According to RFCs 5424 and 3164, the priority is 8 * facility + severity. This number is therefore expected to contain a value between 0 and 191.
-
-type: long
-
-example: 135
-
-format: string
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
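-
-As a quick sanity check of the formula above (an illustrative snippet added for this reference, not part of the ECS definitions), the example value `135` decomposes into facility `16` (local0) and severity `7` (debug):
-
-[source,python]
-------------------------------------------------------------
-# Compose and decompose a syslog priority per RFCs 5424 and 3164.
-def priority(facility: int, severity: int) -> int:
-    return 8 * facility + severity
-
-def decompose(prio: int) -> tuple:
-    return divmod(prio, 8)  # -> (facility, severity)
-
-assert priority(16, 7) == 135       # local0.debug, the example above
-assert decompose(135) == (16, 7)
-assert priority(23, 7) == 191       # the maximum possible priority
-------------------------------------------------------------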
-
-*`log.syslog.severity.code`*::
-+
---
-The Syslog numeric severity of the log event, if available.
-If the event source publishing via Syslog provides a different numeric severity value (e.g. firewall, IDS), your source's numeric severity should go to `event.severity`. If the event source does not specify a distinct severity, you can optionally copy the Syslog severity to `event.severity`.
-
-type: long
-
-example: 3
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`log.syslog.severity.name`*::
-+
---
-The Syslog text-based severity of the log event, if available.
-If the event source publishing via Syslog provides a different severity value (e.g. firewall, IDS), your source's text severity should go to `log.level`. If the event source does not specify a distinct severity, you can optionally copy the Syslog severity to `log.level`.
-
-type: keyword
-
-example: Error
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-[float]
-=== network
-
-The network is defined as the communication path over which a host or network event happens.
-The network.* fields should be populated with details about the network activity associated with an event.
-
-
-*`network.application`*::
-+
---
-A name given to an application level protocol. This can be arbitrarily assigned for things like microservices, but can also apply to things like skype, icq, facebook, twitter. This would be used in situations where the vendor or service can be decoded such as from the source/dest IP owners, ports, or wire format.
-The field value must be normalized to lowercase for querying. See the documentation section "Implementing ECS".
-
-type: keyword
-
-example: aim
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`network.bytes`*::
-+
---
-Total bytes transferred in both directions.
-If `source.bytes` and `destination.bytes` are known, `network.bytes` is their sum.
-
-type: long
-
-example: 368
-
-format: bytes
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`network.community_id`*::
-+
---
-A hash of source and destination IPs and ports, as well as the protocol used in a communication. This is a tool-agnostic standard to identify flows.
-Learn more at https://github.com/corelight/community-id-spec.
-
-type: keyword
-
-example: 1:hO+sN4H+MG5MY/8hIrXPqc4ZQz0=
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
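-
-Version 1 of the Community ID is short enough to sketch from the spec (an illustrative, IPv4 TCP/UDP-only sketch; the spec also covers IPv6 and ICMP, and production code should prefer a maintained implementation):
-
-[source,python]
-------------------------------------------------------------
-import base64, hashlib, socket, struct
-
-def community_id_v1(saddr, daddr, sport, dport, proto=6, seed=0):
-    src, dst = socket.inet_aton(saddr), socket.inet_aton(daddr)
-    # Order the endpoints so both directions of a flow hash identically.
-    if (src, sport) > (dst, dport):
-        src, dst, sport, dport = dst, src, dport, sport
-    # seed (2 bytes BE), addresses, protocol, zero pad byte, ports.
-    data = struct.pack("!H", seed) + src + dst + struct.pack("!BBHH", proto, 0, sport, dport)
-    return "1:" + base64.b64encode(hashlib.sha1(data).digest()).decode()
-
-print(community_id_v1("10.0.0.1", "10.0.0.2", 51234, 443))  # proto 6 = TCP
-------------------------------------------------------------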
-
-*`network.direction`*::
-+
---
-Direction of the network traffic.
-Recommended values are:
-  * ingress
-  * egress
-  * inbound
-  * outbound
-  * internal
-  * external
-  * unknown
-
-When mapping events from a host-based monitoring context, populate this field from the host's point of view, using the values "ingress" or "egress".
-When mapping events from a network or perimeter-based monitoring context, populate this field from the point of view of the network perimeter, using the values "inbound", "outbound", "internal" or "external".
-Note that "internal" is not crossing perimeter boundaries, and is meant to describe communication between two hosts within the perimeter. Note also that "external" is meant to describe traffic between two hosts that are external to the perimeter. This could for example be useful for ISPs or VPN service providers.
-
-type: keyword
-
-example: inbound
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`network.forwarded_ip`*::
-+
---
-Host IP address when the source IP address is the proxy.
-
-type: ip
-
-example: 192.1.1.2
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`network.iana_number`*::
-+
---
-IANA Protocol Number (https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml). Standardized list of protocols. This aligns well with NetFlow and sFlow related logs which use the IANA Protocol Number.
-
-type: keyword
-
-example: 6
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`network.inner`*::
-+
---
-Network.inner fields are added in addition to network.vlan fields to describe the innermost VLAN when q-in-q VLAN tagging is present. Allowed fields include vlan.id and vlan.name. Inner vlan fields are typically used when sending traffic with multiple 802.1q encapsulations to a network sensor (e.g. Zeek, Wireshark).
-
-type: object
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`network.inner.vlan.id`*::
-+
---
-VLAN ID as reported by the observer.
-
-type: keyword
-
-example: 10
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`network.inner.vlan.name`*::
-+
---
-Optional VLAN name as reported by the observer.
-
-type: keyword
-
-example: outside
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`network.name`*::
-+
---
-Name given by operators to sections of their network.
-
-type: keyword
-
-example: Guest Wifi
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`network.packets`*::
-+
---
-Total packets transferred in both directions.
-If `source.packets` and `destination.packets` are known, `network.packets` is their sum.
-
-type: long
-
-example: 24
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`network.protocol`*::
-+
---
-L7 network protocol name, e.g. http, lumberjack.
-The field value must be normalized to lowercase for querying. See the documentation section "Implementing ECS".
-
-type: keyword
-
-example: http
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`network.transport`*::
-+
---
-Same as network.iana_number, but instead using the keyword name of the transport layer (udp, tcp, ipv6-icmp, etc.).
-The field value must be normalized to lowercase for querying. See the documentation section "Implementing ECS".
-
-type: keyword
-
-example: tcp
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`network.type`*::
-+
---
-In the OSI model this would be the network layer: ipv4, ipv6, ipsec, pim, etc.
-The field value must be normalized to lowercase for querying. See the documentation section "Implementing ECS".
-
-type: keyword
-
-example: ipv4
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`network.vlan.id`*::
-+
---
-VLAN ID as reported by the observer.
-
-type: keyword
-
-example: 10
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`network.vlan.name`*::
-+
---
-Optional VLAN name as reported by the observer.
-
-type: keyword
-
-example: outside
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-[float]
-=== observer
-
-An observer is defined as a special network, security, or application device used to detect, observe, or create network, security, or application-related events and metrics.
-This could be a custom hardware appliance or a server that has been configured to run special network, security, or application software. Examples include firewalls, web proxies, intrusion detection/prevention systems, network monitoring sensors, web application firewalls, data loss prevention systems, and APM servers. The observer.* fields shall be populated with details of the system, if any, that detects, observes and/or creates a network, security, or application event or metric.
Message queues and ETL components used in processing events or metrics are not considered observers in ECS. - - -*`observer.egress`*:: -+ --- -Observer.egress holds information like interface number and name, vlan, and zone information to classify egress traffic. Single armed monitoring such as a network sensor on a span port should only use observer.ingress to categorize traffic. - -type: object - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.egress.interface.alias`*:: -+ --- -Interface alias as reported by the system, typically used in firewall implementations for e.g. inside, outside, or dmz logical interface naming. - -type: keyword - -example: outside - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.egress.interface.id`*:: -+ --- -Interface ID as reported by an observer (typically SNMP interface ID). - -type: keyword - -example: 10 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.egress.interface.name`*:: -+ --- -Interface name as reported by the system. - -type: keyword - -example: eth0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.egress.vlan.id`*:: -+ --- -VLAN ID as reported by the observer. - -type: keyword - -example: 10 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.egress.vlan.name`*:: -+ --- -Optional VLAN name as reported by the observer. - -type: keyword - -example: outside - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.egress.zone`*:: -+ --- -Network zone of outbound traffic as reported by the observer to categorize the destination area of egress traffic, e.g. Internal, External, DMZ, HR, Legal, etc. - -type: keyword - -example: Public_Internet - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.geo.city_name`*:: -+ --- -City name. - -type: keyword - -example: Montreal - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.geo.continent_code`*:: -+ --- -Two-letter code representing continent's name. - -type: keyword - -example: NA - --- - -*`observer.geo.continent_name`*:: -+ --- -Name of the continent. - -type: keyword - -example: North America - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.geo.country_iso_code`*:: -+ --- -Country ISO code. - -type: keyword - -example: CA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.geo.country_name`*:: -+ --- -Country name. - -type: keyword - -example: Canada - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.geo.location`*:: -+ --- -Longitude and latitude. - -type: geo_point - -example: { "lon": -73.614830, "lat": 45.505918 } - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.geo.name`*:: -+ --- -User-defined description of a location, at the level of granularity they care about. -Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. -Not typically used in automated geolocation. - -type: keyword - -example: boston-dc - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.geo.postal_code`*:: -+ --- -Postal code associated with the location. -Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. - -type: keyword - -example: 94040 - --- - -*`observer.geo.region_iso_code`*:: -+ --- -Region ISO code. - -type: keyword - -example: CA-QC - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.geo.region_name`*:: -+ --- -Region name. - -type: keyword - -example: Quebec - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.geo.timezone`*:: -+ --- -The time zone of the location, such as IANA time zone name. 
- -type: keyword - -example: America/Argentina/Buenos_Aires - --- - -*`observer.hostname`*:: -+ --- -Hostname of the observer. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.ingress`*:: -+ --- -Observer.ingress holds information like interface number and name, vlan, and zone information to classify ingress traffic. Single armed monitoring such as a network sensor on a span port should only use observer.ingress to categorize traffic. - -type: object - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.ingress.interface.alias`*:: -+ --- -Interface alias as reported by the system, typically used in firewall implementations for e.g. inside, outside, or dmz logical interface naming. - -type: keyword - -example: outside - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.ingress.interface.id`*:: -+ --- -Interface ID as reported by an observer (typically SNMP interface ID). - -type: keyword - -example: 10 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.ingress.interface.name`*:: -+ --- -Interface name as reported by the system. - -type: keyword - -example: eth0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.ingress.vlan.id`*:: -+ --- -VLAN ID as reported by the observer. - -type: keyword - -example: 10 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.ingress.vlan.name`*:: -+ --- -Optional VLAN name as reported by the observer. - -type: keyword - -example: outside - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.ingress.zone`*:: -+ --- -Network zone of incoming traffic as reported by the observer to categorize the source area of ingress traffic. e.g. internal, External, DMZ, HR, Legal, etc. - -type: keyword - -example: DMZ - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.ip`*:: -+ --- -IP addresses of the observer. - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.mac`*:: -+ --- -MAC addresses of the observer. -The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. - -type: keyword - -example: ["00-00-5E-00-53-23", "00-00-5E-00-53-24"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.name`*:: -+ --- -Custom name of the observer. -This is a name that can be given to an observer. This can be helpful for example if multiple firewalls of the same model are used in an organization. -If no custom name is needed, the field can be left empty. - -type: keyword - -example: 1_proxySG - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.os.family`*:: -+ --- -OS family (such as redhat, debian, freebsd, windows). - -type: keyword - -example: debian - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.os.full`*:: -+ --- -Operating system name, including the version or code name. - -type: keyword - -example: Mac OS Mojave - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.os.full.text`*:: -+ --- -type: match_only_text - --- - -*`observer.os.kernel`*:: -+ --- -Operating system kernel version as a raw string. - -type: keyword - -example: 4.4.0-112-generic - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.os.name`*:: -+ --- -Operating system name, without the version. - -type: keyword - -example: Mac OS X - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.os.name.text`*:: -+ --- -type: match_only_text - --- - -*`observer.os.platform`*:: -+ --- -Operating system platform (such centos, ubuntu, windows). 
- -type: keyword - -example: darwin - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.os.type`*:: -+ --- -Use the `os.type` field to categorize the operating system into one of the broad commercial families. -One of these following values should be used (lowercase): linux, macos, unix, windows. -If the OS you're dealing with is not in the list, the field should not be populated. Please let us know by opening an issue with ECS, to propose its addition. - -type: keyword - -example: macos - --- - -*`observer.os.version`*:: -+ --- -Operating system version as a raw string. - -type: keyword - -example: 10.14.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.product`*:: -+ --- -The product name of the observer. - -type: keyword - -example: s200 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.serial_number`*:: -+ --- -Observer serial number. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.type`*:: -+ --- -The type of the observer the data is coming from. -There is no predefined list of observer types. Some examples are `forwarder`, `firewall`, `ids`, `ips`, `proxy`, `poller`, `sensor`, `APM server`. - -type: keyword - -example: firewall - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.vendor`*:: -+ --- -Vendor name of the observer. - -type: keyword - -example: Symantec - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`observer.version`*:: -+ --- -Observer version. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== orchestrator - -Fields that describe the resources which container orchestrators manage or act upon. - - -*`orchestrator.api_version`*:: -+ --- -API version being used to carry out the action - -type: keyword - -example: v1beta1 - --- - -*`orchestrator.cluster.name`*:: -+ --- -Name of the cluster. - -type: keyword - --- - -*`orchestrator.cluster.url`*:: -+ --- -URL of the API used to manage the cluster. - -type: keyword - --- - -*`orchestrator.cluster.version`*:: -+ --- -The version of the cluster. - -type: keyword - --- - -*`orchestrator.namespace`*:: -+ --- -Namespace in which the action is taking place. - -type: keyword - -example: kube-system - --- - -*`orchestrator.organization`*:: -+ --- -Organization affected by the event (for multi-tenant orchestrator setups). - -type: keyword - -example: elastic - --- - -*`orchestrator.resource.name`*:: -+ --- -Name of the resource being acted upon. - -type: keyword - -example: test-pod-cdcws - --- - -*`orchestrator.resource.type`*:: -+ --- -Type of resource being acted upon. - -type: keyword - -example: service - --- - -*`orchestrator.type`*:: -+ --- -Orchestrator cluster type (e.g. kubernetes, nomad or cloudfoundry). - -type: keyword - -example: kubernetes - --- - -[float] -=== organization - -The organization fields enrich data with information about the company or entity the data is associated with. -These fields help you arrange or filter data stored in an index by one or multiple organizations. - - -*`organization.id`*:: -+ --- -Unique identifier for the organization. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`organization.name`*:: -+ --- -Organization name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`organization.name.text`*:: -+ --- -type: match_only_text - --- - -[float] -=== os - -The OS fields contain information about the operating system. - - -*`os.family`*:: -+ --- -OS family (such as redhat, debian, freebsd, windows). 
- -type: keyword - -example: debian - --- - -*`os.full`*:: -+ --- -Operating system name, including the version or code name. - -type: keyword - -example: Mac OS Mojave - --- - -*`os.full.text`*:: -+ --- -type: match_only_text - --- - -*`os.kernel`*:: -+ --- -Operating system kernel version as a raw string. - -type: keyword - -example: 4.4.0-112-generic - --- - -*`os.name`*:: -+ --- -Operating system name, without the version. - -type: keyword - -example: Mac OS X - --- - -*`os.name.text`*:: -+ --- -type: match_only_text - --- - -*`os.platform`*:: -+ --- -Operating system platform (such centos, ubuntu, windows). - -type: keyword - -example: darwin - --- - -*`os.type`*:: -+ --- -Use the `os.type` field to categorize the operating system into one of the broad commercial families. -One of these following values should be used (lowercase): linux, macos, unix, windows. -If the OS you're dealing with is not in the list, the field should not be populated. Please let us know by opening an issue with ECS, to propose its addition. - -type: keyword - -example: macos - --- - -*`os.version`*:: -+ --- -Operating system version as a raw string. - -type: keyword - -example: 10.14.1 - --- - -[float] -=== package - -These fields contain information about an installed software package. It contains general information about a package, such as name, version or size. It also contains installation details, such as time or location. - - -*`package.architecture`*:: -+ --- -Package architecture. - -type: keyword - -example: x86_64 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`package.build_version`*:: -+ --- -Additional information about the build version of the installed package. -For example use the commit SHA of a non-released package. - -type: keyword - -example: 36f4f7e89dd61b0988b12ee000b98966867710cd - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`package.checksum`*:: -+ --- -Checksum of the installed package for verification. - -type: keyword - -example: 68b329da9893e34099c7d8ad5cb9c940 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`package.description`*:: -+ --- -Description of the package. - -type: keyword - -example: Open source programming language to build simple/reliable/efficient software. - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`package.install_scope`*:: -+ --- -Indicating how the package was installed, e.g. user-local, global. - -type: keyword - -example: global - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`package.installed`*:: -+ --- -Time when package was installed. - -type: date - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`package.license`*:: -+ --- -License under which the package was released. -Use a short name, e.g. the license identifier from SPDX License List where possible (https://spdx.org/licenses/). - -type: keyword - -example: Apache License 2.0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`package.name`*:: -+ --- -Package name - -type: keyword - -example: go - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`package.path`*:: -+ --- -Path where the package is installed. - -type: keyword - -example: /usr/local/Cellar/go/1.12.9/ - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`package.reference`*:: -+ --- -Home page or reference URL of the software in this package, if available. - -type: keyword - -example: https://golang.org - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`package.size`*:: -+ --- -Package size in bytes. - -type: long - -example: 62231 - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`package.type`*:: -+ --- -Type of package. 
-This should contain the package file type, rather than the package manager name. Examples: rpm, dpkg, brew, npm, gem, nupkg, jar. - -type: keyword - -example: rpm - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`package.version`*:: -+ --- -Package version - -type: keyword - -example: 1.12.9 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== pe - -These fields contain Windows Portable Executable (PE) metadata. - - -*`pe.architecture`*:: -+ --- -CPU architecture target for the file. - -type: keyword - -example: x64 - --- - -*`pe.company`*:: -+ --- -Internal company name of the file, provided at compile-time. - -type: keyword - -example: Microsoft Corporation - --- - -*`pe.description`*:: -+ --- -Internal description of the file, provided at compile-time. - -type: keyword - -example: Paint - --- - -*`pe.file_version`*:: -+ --- -Internal version of the file, provided at compile-time. - -type: keyword - -example: 6.3.9600.17415 - --- - -*`pe.imphash`*:: -+ --- -A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. -Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html. - -type: keyword - -example: 0c6803c4e922103c4dca5963aad36ddf - --- - -*`pe.original_file_name`*:: -+ --- -Internal name of the file, provided at compile-time. - -type: keyword - -example: MSPAINT.EXE - --- - -*`pe.product`*:: -+ --- -Internal product name of the file, provided at compile-time. - -type: keyword - -example: Microsoft® Windows® Operating System - --- - -[float] -=== process - -These fields contain information about a process. -These fields can help you correlate metrics information with a process id/name from a log message. The `process.pid` often stays in the metric itself and is copied to the global field for correlation. - - -*`process.args`*:: -+ --- -Array of process arguments, starting with the absolute path to the executable. -May be filtered to protect sensitive information. - -type: keyword - -example: ["/usr/bin/ssh", "-l", "user", "10.0.0.16"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.args_count`*:: -+ --- -Length of the process.args array. -This field can be useful for querying or performing bucket analysis on how many arguments were provided to start a process. More arguments may be an indication of suspicious activity. - -type: long - -example: 4 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.code_signature.digest_algorithm`*:: -+ --- -The hashing algorithm used to sign the process. -This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. - -type: keyword - -example: sha256 - --- - -*`process.code_signature.exists`*:: -+ --- -Boolean to capture if a signature is present. - -type: boolean - -example: true - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.code_signature.signing_id`*:: -+ --- -The identifier used to sign the process. -This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. - -type: keyword - -example: com.apple.xpc.proxy - --- - -*`process.code_signature.status`*:: -+ --- -Additional information about the certificate status. -This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. 
- -type: keyword - -example: ERROR_UNTRUSTED_ROOT - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.code_signature.subject_name`*:: -+ --- -Subject name of the code signer - -type: keyword - -example: Microsoft Corporation - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.code_signature.team_id`*:: -+ --- -The team identifier used to sign the process. -This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. - -type: keyword - -example: EQHXZ8M8AV - --- - -*`process.code_signature.timestamp`*:: -+ --- -Date and time when the code signature was generated and signed. - -type: date - -example: 2021-01-01T12:10:30Z - --- - -*`process.code_signature.trusted`*:: -+ --- -Stores the trust status of the certificate chain. -Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. - -type: boolean - -example: true - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.code_signature.valid`*:: -+ --- -Boolean to capture if the digital signature is verified against the binary content. -Leave unpopulated if a certificate was unchecked. - -type: boolean - -example: true - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.command_line`*:: -+ --- -Full command line that started the process, including the absolute path to the executable, and all arguments. -Some arguments may be filtered to protect sensitive information. - -type: wildcard - -example: /usr/bin/ssh -l user 10.0.0.16 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.command_line.text`*:: -+ --- -type: match_only_text - --- - -*`process.elf.architecture`*:: -+ --- -Machine architecture of the ELF file. - -type: keyword - -example: x86-64 - --- - -*`process.elf.byte_order`*:: -+ --- -Byte sequence of ELF file. - -type: keyword - -example: Little Endian - --- - -*`process.elf.cpu_type`*:: -+ --- -CPU type of the ELF file. - -type: keyword - -example: Intel - --- - -*`process.elf.creation_date`*:: -+ --- -Extracted when possible from the file's metadata. Indicates when it was built or compiled. It can also be faked by malware creators. - -type: date - --- - -*`process.elf.exports`*:: -+ --- -List of exported element names and types. - -type: flattened - --- - -*`process.elf.header.abi_version`*:: -+ --- -Version of the ELF Application Binary Interface (ABI). - -type: keyword - --- - -*`process.elf.header.class`*:: -+ --- -Header class of the ELF file. - -type: keyword - --- - -*`process.elf.header.data`*:: -+ --- -Data table of the ELF header. - -type: keyword - --- - -*`process.elf.header.entrypoint`*:: -+ --- -Header entrypoint of the ELF file. - -type: long - -format: string - --- - -*`process.elf.header.object_version`*:: -+ --- -"0x1" for original ELF files. - -type: keyword - --- - -*`process.elf.header.os_abi`*:: -+ --- -Application Binary Interface (ABI) of the Linux OS. - -type: keyword - --- - -*`process.elf.header.type`*:: -+ --- -Header type of the ELF file. - -type: keyword - --- - -*`process.elf.header.version`*:: -+ --- -Version of the ELF header. - -type: keyword - --- - -*`process.elf.imports`*:: -+ --- -List of imported element names and types. - -type: flattened - --- - -*`process.elf.sections`*:: -+ --- -An array containing an object for each section of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. 
- -type: nested - --- - -*`process.elf.sections.chi2`*:: -+ --- -Chi-square probability distribution of the section. - -type: long - -format: number - --- - -*`process.elf.sections.entropy`*:: -+ --- -Shannon entropy calculation from the section. - -type: long - -format: number - --- - -*`process.elf.sections.flags`*:: -+ --- -ELF Section List flags. - -type: keyword - --- - -*`process.elf.sections.name`*:: -+ --- -ELF Section List name. - -type: keyword - --- - -*`process.elf.sections.physical_offset`*:: -+ --- -ELF Section List offset. - -type: keyword - --- - -*`process.elf.sections.physical_size`*:: -+ --- -ELF Section List physical size. - -type: long - -format: bytes - --- - -*`process.elf.sections.type`*:: -+ --- -ELF Section List type. - -type: keyword - --- - -*`process.elf.sections.virtual_address`*:: -+ --- -ELF Section List virtual address. - -type: long - -format: string - --- - -*`process.elf.sections.virtual_size`*:: -+ --- -ELF Section List virtual size. - -type: long - -format: string - --- - -*`process.elf.segments`*:: -+ --- -An array containing an object for each segment of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. - -type: nested - --- - -*`process.elf.segments.sections`*:: -+ --- -ELF object segment sections. - -type: keyword - --- - -*`process.elf.segments.type`*:: -+ --- -ELF object segment type. - -type: keyword - --- - -*`process.elf.shared_libraries`*:: -+ --- -List of shared libraries used by this ELF object. - -type: keyword - --- - -*`process.elf.telfhash`*:: -+ --- -telfhash symbol hash for ELF file. - -type: keyword - --- - -*`process.end`*:: -+ --- -The time the process ended. - -type: date - -example: 2016-05-23T08:05:34.853Z - --- - -*`process.entity_id`*:: -+ --- -Unique identifier for the process. -The implementation of this is specified by the data source, but some examples of what could be used here are a process-generated UUID, Sysmon Process GUIDs, or a hash of some uniquely identifying components of a process. -Constructing a globally unique identifier is a common practice to mitigate PID reuse as well as to identify a specific process over time, across multiple monitored hosts. - -type: keyword - -example: c2c455d9f99375d - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.executable`*:: -+ --- -Absolute path to the process executable. - -type: keyword - -example: /usr/bin/ssh - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.executable.text`*:: -+ --- -type: match_only_text - --- - -*`process.exit_code`*:: -+ --- -The exit code of the process, if this is a termination event. -The field should be absent if there is no exit code for the event (e.g. process start). - -type: long - -example: 137 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.hash.md5`*:: -+ --- -MD5 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.hash.sha1`*:: -+ --- -SHA1 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.hash.sha256`*:: -+ --- -SHA256 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.hash.sha512`*:: -+ --- -SHA512 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.hash.ssdeep`*:: -+ --- -SSDEEP hash. - -type: keyword - --- - -*`process.name`*:: -+ --- -Process name. -Sometimes called program name or similar. - -type: keyword - -example: ssh - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`process.name.text`*:: -+ --- -type: match_only_text - --- - -*`process.parent.args`*:: -+ --- -Array of process arguments, starting with the absolute path to the executable. -May be filtered to protect sensitive information. - -type: keyword - -example: ["/usr/bin/ssh", "-l", "user", "10.0.0.16"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.args_count`*:: -+ --- -Length of the process.args array. -This field can be useful for querying or performing bucket analysis on how many arguments were provided to start a process. More arguments may be an indication of suspicious activity. - -type: long - -example: 4 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.code_signature.digest_algorithm`*:: -+ --- -The hashing algorithm used to sign the process. -This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. - -type: keyword - -example: sha256 - --- - -*`process.parent.code_signature.exists`*:: -+ --- -Boolean to capture if a signature is present. - -type: boolean - -example: true - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.code_signature.signing_id`*:: -+ --- -The identifier used to sign the process. -This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. - -type: keyword - -example: com.apple.xpc.proxy - --- - -*`process.parent.code_signature.status`*:: -+ --- -Additional information about the certificate status. -This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. - -type: keyword - -example: ERROR_UNTRUSTED_ROOT - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.code_signature.subject_name`*:: -+ --- -Subject name of the code signer - -type: keyword - -example: Microsoft Corporation - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.code_signature.team_id`*:: -+ --- -The team identifier used to sign the process. -This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. - -type: keyword - -example: EQHXZ8M8AV - --- - -*`process.parent.code_signature.timestamp`*:: -+ --- -Date and time when the code signature was generated and signed. - -type: date - -example: 2021-01-01T12:10:30Z - --- - -*`process.parent.code_signature.trusted`*:: -+ --- -Stores the trust status of the certificate chain. -Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. - -type: boolean - -example: true - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.code_signature.valid`*:: -+ --- -Boolean to capture if the digital signature is verified against the binary content. -Leave unpopulated if a certificate was unchecked. - -type: boolean - -example: true - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.command_line`*:: -+ --- -Full command line that started the process, including the absolute path to the executable, and all arguments. -Some arguments may be filtered to protect sensitive information. - -type: wildcard - -example: /usr/bin/ssh -l user 10.0.0.16 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.command_line.text`*:: -+ --- -type: match_only_text - --- - -*`process.parent.elf.architecture`*:: -+ --- -Machine architecture of the ELF file. 
- -type: keyword - -example: x86-64 - --- - -*`process.parent.elf.byte_order`*:: -+ --- -Byte sequence of ELF file. - -type: keyword - -example: Little Endian - --- - -*`process.parent.elf.cpu_type`*:: -+ --- -CPU type of the ELF file. - -type: keyword - -example: Intel - --- - -*`process.parent.elf.creation_date`*:: -+ --- -Extracted when possible from the file's metadata. Indicates when it was built or compiled. It can also be faked by malware creators. - -type: date - --- - -*`process.parent.elf.exports`*:: -+ --- -List of exported element names and types. - -type: flattened - --- - -*`process.parent.elf.header.abi_version`*:: -+ --- -Version of the ELF Application Binary Interface (ABI). - -type: keyword - --- - -*`process.parent.elf.header.class`*:: -+ --- -Header class of the ELF file. - -type: keyword - --- - -*`process.parent.elf.header.data`*:: -+ --- -Data table of the ELF header. - -type: keyword - --- - -*`process.parent.elf.header.entrypoint`*:: -+ --- -Header entrypoint of the ELF file. - -type: long - -format: string - --- - -*`process.parent.elf.header.object_version`*:: -+ --- -"0x1" for original ELF files. - -type: keyword - --- - -*`process.parent.elf.header.os_abi`*:: -+ --- -Application Binary Interface (ABI) of the Linux OS. - -type: keyword - --- - -*`process.parent.elf.header.type`*:: -+ --- -Header type of the ELF file. - -type: keyword - --- - -*`process.parent.elf.header.version`*:: -+ --- -Version of the ELF header. - -type: keyword - --- - -*`process.parent.elf.imports`*:: -+ --- -List of imported element names and types. - -type: flattened - --- - -*`process.parent.elf.sections`*:: -+ --- -An array containing an object for each section of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. - -type: nested - --- - -*`process.parent.elf.sections.chi2`*:: -+ --- -Chi-square probability distribution of the section. - -type: long - -format: number - --- - -*`process.parent.elf.sections.entropy`*:: -+ --- -Shannon entropy calculation from the section. - -type: long - -format: number - --- - -*`process.parent.elf.sections.flags`*:: -+ --- -ELF Section List flags. - -type: keyword - --- - -*`process.parent.elf.sections.name`*:: -+ --- -ELF Section List name. - -type: keyword - --- - -*`process.parent.elf.sections.physical_offset`*:: -+ --- -ELF Section List offset. - -type: keyword - --- - -*`process.parent.elf.sections.physical_size`*:: -+ --- -ELF Section List physical size. - -type: long - -format: bytes - --- - -*`process.parent.elf.sections.type`*:: -+ --- -ELF Section List type. - -type: keyword - --- - -*`process.parent.elf.sections.virtual_address`*:: -+ --- -ELF Section List virtual address. - -type: long - -format: string - --- - -*`process.parent.elf.sections.virtual_size`*:: -+ --- -ELF Section List virtual size. - -type: long - -format: string - --- - -*`process.parent.elf.segments`*:: -+ --- -An array containing an object for each segment of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. - -type: nested - --- - -*`process.parent.elf.segments.sections`*:: -+ --- -ELF object segment sections. - -type: keyword - --- - -*`process.parent.elf.segments.type`*:: -+ --- -ELF object segment type. - -type: keyword - --- - -*`process.parent.elf.shared_libraries`*:: -+ --- -List of shared libraries used by this ELF object. - -type: keyword - --- - -*`process.parent.elf.telfhash`*:: -+ --- -telfhash symbol hash for ELF file. 
- -type: keyword - --- - -*`process.parent.end`*:: -+ --- -The time the process ended. - -type: date - -example: 2016-05-23T08:05:34.853Z - --- - -*`process.parent.entity_id`*:: -+ --- -Unique identifier for the process. -The implementation of this is specified by the data source, but some examples of what could be used here are a process-generated UUID, Sysmon Process GUIDs, or a hash of some uniquely identifying components of a process. -Constructing a globally unique identifier is a common practice to mitigate PID reuse as well as to identify a specific process over time, across multiple monitored hosts. - -type: keyword - -example: c2c455d9f99375d - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.executable`*:: -+ --- -Absolute path to the process executable. - -type: keyword - -example: /usr/bin/ssh - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.executable.text`*:: -+ --- -type: match_only_text - --- - -*`process.parent.exit_code`*:: -+ --- -The exit code of the process, if this is a termination event. -The field should be absent if there is no exit code for the event (e.g. process start). - -type: long - -example: 137 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.hash.md5`*:: -+ --- -MD5 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.hash.sha1`*:: -+ --- -SHA1 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.hash.sha256`*:: -+ --- -SHA256 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.hash.sha512`*:: -+ --- -SHA512 hash. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.hash.ssdeep`*:: -+ --- -SSDEEP hash. - -type: keyword - --- - -*`process.parent.name`*:: -+ --- -Process name. -Sometimes called program name or similar. - -type: keyword - -example: ssh - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.name.text`*:: -+ --- -type: match_only_text - --- - -*`process.parent.pe.architecture`*:: -+ --- -CPU architecture target for the file. - -type: keyword - -example: x64 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.pe.company`*:: -+ --- -Internal company name of the file, provided at compile-time. - -type: keyword - -example: Microsoft Corporation - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.pe.description`*:: -+ --- -Internal description of the file, provided at compile-time. - -type: keyword - -example: Paint - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.pe.file_version`*:: -+ --- -Internal version of the file, provided at compile-time. - -type: keyword - -example: 6.3.9600.17415 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.pe.imphash`*:: -+ --- -A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. -Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html. - -type: keyword - -example: 0c6803c4e922103c4dca5963aad36ddf - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.pe.original_file_name`*:: -+ --- -Internal name of the file, provided at compile-time. - -type: keyword - -example: MSPAINT.EXE - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.pe.product`*:: -+ --- -Internal product name of the file, provided at compile-time. 
- -type: keyword - -example: Microsoft® Windows® Operating System - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.pgid`*:: -+ --- -Identifier of the group of processes the process belongs to. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.pid`*:: -+ --- -Process id. - -type: long - -example: 4242 - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.ppid`*:: -+ --- -Parent process' pid. - -type: long - -example: 4241 - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.start`*:: -+ --- -The time the process started. - -type: date - -example: 2016-05-23T08:05:34.853Z - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.thread.id`*:: -+ --- -Thread ID. - -type: long - -example: 4242 - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.thread.name`*:: -+ --- -Thread name. - -type: keyword - -example: thread-0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.title`*:: -+ --- -Process title. -The proctitle, sometimes the same as the process name. It can also be different: for example, a browser setting its title to the web page currently opened. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.title.text`*:: -+ --- -type: match_only_text - --- - -*`process.parent.uptime`*:: -+ --- -Seconds the process has been up. - -type: long - -example: 1325 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.working_directory`*:: -+ --- -The working directory of the process. - -type: keyword - -example: /home/alice - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.parent.working_directory.text`*:: -+ --- -type: match_only_text - --- - -*`process.pe.architecture`*:: -+ --- -CPU architecture target for the file. - -type: keyword - -example: x64 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pe.company`*:: -+ --- -Internal company name of the file, provided at compile-time. - -type: keyword - -example: Microsoft Corporation - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pe.description`*:: -+ --- -Internal description of the file, provided at compile-time. - -type: keyword - -example: Paint - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pe.file_version`*:: -+ --- -Internal version of the file, provided at compile-time. - -type: keyword - -example: 6.3.9600.17415 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pe.imphash`*:: -+ --- -A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. -Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html. - -type: keyword - -example: 0c6803c4e922103c4dca5963aad36ddf - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pe.original_file_name`*:: -+ --- -Internal name of the file, provided at compile-time. - -type: keyword - -example: MSPAINT.EXE - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pe.product`*:: -+ --- -Internal product name of the file, provided at compile-time. - -type: keyword - -example: Microsoft® Windows® Operating System - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pgid`*:: -+ --- -Identifier of the group of processes the process belongs to. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.pid`*:: -+ --- -Process id.
- -type: long - -example: 4242 - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.ppid`*:: -+ --- -Parent process' pid. - -type: long - -example: 4241 - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.start`*:: -+ --- -The time the process started. - -type: date - -example: 2016-05-23T08:05:34.853Z - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.thread.id`*:: -+ --- -Thread ID. - -type: long - -example: 4242 - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.thread.name`*:: -+ --- -Thread name. - -type: keyword - -example: thread-0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.title`*:: -+ --- -Process title. -The proctitle, sometimes the same as the process name. It can also be different: for example, a browser setting its title to the web page currently opened. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.title.text`*:: -+ --- -type: match_only_text - --- - -*`process.uptime`*:: -+ --- -Seconds the process has been up. - -type: long - -example: 1325 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.working_directory`*:: -+ --- -The working directory of the process. - -type: keyword - -example: /home/alice - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`process.working_directory.text`*:: -+ --- -type: match_only_text - --- - -[float] -=== registry - -Fields related to Windows Registry operations. - - -*`registry.data.bytes`*:: -+ --- -Original bytes written with base64 encoding. -For Windows registry operations, such as SetValueEx and RegQueryValueEx, this corresponds to the data pointed by `lp_data`. This is optional but provides better recoverability and should be populated for REG_BINARY encoded values. - -type: keyword - -example: ZQBuAC0AVQBTAAAAZQBuAAAAAAA= - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`registry.data.strings`*:: -+ --- -Content when writing string types. -Populated as an array when writing string data to the registry. For single string registry types (REG_SZ, REG_EXPAND_SZ), this should be an array with one string. For sequences of string with REG_MULTI_SZ, this array will be variable length. For numeric data, such as REG_DWORD and REG_QWORD, this should be populated with the decimal representation (e.g `"1"`). - -type: wildcard - -example: ["C:\rta\red_ttp\bin\myapp.exe"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`registry.data.type`*:: -+ --- -Standard registry type for encoding contents - -type: keyword - -example: REG_SZ - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`registry.hive`*:: -+ --- -Abbreviated name for the hive. - -type: keyword - -example: HKLM - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`registry.key`*:: -+ --- -Hive-relative path of keys. - -type: keyword - -example: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`registry.path`*:: -+ --- -Full path, including hive, key and value - -type: keyword - -example: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe\Debugger - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`registry.value`*:: -+ --- -Name of the value written. - -type: keyword - -example: Debugger - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== related - -This field set is meant to facilitate pivoting around a piece of data. -Some pieces of information can be seen in many places in an ECS event.
To facilitate searching for them, store an array of all seen values to their corresponding field in `related.`. -A concrete example is IP addresses, which can be under host, observer, source, destination, client, server, and network.forwarded_ip. If you append all IPs to `related.ip`, you can then search for a given IP trivially, no matter where it appeared, by querying `related.ip:192.0.2.15`. - - -*`related.hash`*:: -+ --- -All the hashes seen on your event. Populating this field, then using it to search for hashes can help in situations where you're unsure what the hash algorithm is (and therefore which key name to search). - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`related.hosts`*:: -+ --- -All hostnames or other host identifiers seen on your event. Example identifiers include FQDNs, domain names, workstation names, or aliases. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`related.ip`*:: -+ --- -All of the IPs seen on your event. - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`related.user`*:: -+ --- -All the user names or other user identifiers seen on the event. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== rule - -Rule fields are used to capture the specifics of any observer or agent rules that generate alerts or other notable events. -Examples of data sources that would populate the rule fields include: network admission control platforms, network or host IDS/IPS, network firewalls, web application firewalls, url filters, endpoint detection and response (EDR) systems, etc. - - -*`rule.author`*:: -+ --- -Name, organization, or pseudonym of the author or authors who created the rule used to generate this event. - -type: keyword - -example: ["Star-Lord"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`rule.category`*:: -+ --- -A categorization value keyword used by the entity using the rule for detection of this event. - -type: keyword - -example: Attempted Information Leak - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`rule.description`*:: -+ --- -The description of the rule generating the event. - -type: keyword - -example: Block requests to public DNS over HTTPS / TLS protocols - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`rule.id`*:: -+ --- -A rule ID that is unique within the scope of an agent, observer, or other entity using the rule for detection of this event. - -type: keyword - -example: 101 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`rule.license`*:: -+ --- -Name of the license under which the rule used to generate this event is made available. - -type: keyword - -example: Apache 2.0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`rule.name`*:: -+ --- -The name of the rule or signature generating the event. - -type: keyword - -example: BLOCK_DNS_over_TLS - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`rule.reference`*:: -+ --- -Reference URL to additional information about the rule used to generate this event. -The URL can point to the vendor's documentation about the rule. If that's not available, it can also be a link to a more general page describing this type of alert. - -type: keyword - -example: https://en.wikipedia.org/wiki/DNS_over_TLS - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`rule.ruleset`*:: -+ --- -Name of the ruleset, policy, group, or parent category in which the rule used to generate this event is a member. - -type: keyword - -example: Standard_Protocol_Filters - -{yes-icon} {ecs-ref}[ECS] field. 
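(Editorial aside on the `related.*` field set above: the point of those fields is that a single query matches an event no matter which original field carried the value. A hedged sketch of an ingest-side helper that gathers every IP on an event into `related.ip`; the helper name and the flattened-key representation are illustrative assumptions, not Elastic APIs.)

[source,python]
----
def populate_related_ip(event: dict) -> dict:
    """Append every IP seen on the event to related.ip, deduplicated, in order."""
    ip_fields = (
        "host.ip", "observer.ip", "source.ip", "destination.ip",
        "client.ip", "server.ip", "network.forwarded_ip",
    )
    related = []
    for field in ip_fields:
        value = event.get(field)
        if value is not None and value not in related:
            related.append(value)
    if related:
        event["related.ip"] = related
    return event

event = populate_related_ip({"source.ip": "192.0.2.15", "destination.ip": "198.51.100.7"})
print(event["related.ip"])  # ['192.0.2.15', '198.51.100.7']
# The event now matches the single query related.ip:192.0.2.15, wherever the IP appeared.
----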
- --- - -*`rule.uuid`*:: -+ --- -A rule ID that is unique within the scope of a set or group of agents, observers, or other entities using the rule for detection of this event. - -type: keyword - -example: 1100110011 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`rule.version`*:: -+ --- -The version / revision of the rule being used for analysis. - -type: keyword - -example: 1.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== server - -A Server is defined as the responder in a network connection for events regarding sessions, connections, or bidirectional flow records. -For TCP events, the server is the receiver of the initial SYN packet(s) of the TCP connection. For other protocols, the server is generally the responder in the network transaction. Some systems actually use the term "responder" to refer to the server in TCP connections. The server fields describe details about the system acting as the server in the network event. Server fields are usually populated in conjunction with client fields. Server fields are generally not populated for packet-level events. -Client / server representations can add semantic context to an exchange, which is helpful to visualize the data in certain situations. If your context falls in that category, you should still ensure that source and destination are filled appropriately. - - -*`server.address`*:: -+ --- -Some event server addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. -Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.as.number`*:: -+ --- -Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. - -type: long - -example: 15169 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.as.organization.name`*:: -+ --- -Organization name. - -type: keyword - -example: Google LLC - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.as.organization.name.text`*:: -+ --- -type: match_only_text - --- - -*`server.bytes`*:: -+ --- -Bytes sent from the server to the client. - -type: long - -example: 184 - -format: bytes - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.domain`*:: -+ --- -Server domain. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.geo.city_name`*:: -+ --- -City name. - -type: keyword - -example: Montreal - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.geo.continent_code`*:: -+ --- -Two-letter code representing continent's name. - -type: keyword - -example: NA - --- - -*`server.geo.continent_name`*:: -+ --- -Name of the continent. - -type: keyword - -example: North America - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.geo.country_iso_code`*:: -+ --- -Country ISO code. - -type: keyword - -example: CA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.geo.country_name`*:: -+ --- -Country name. - -type: keyword - -example: Canada - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.geo.location`*:: -+ --- -Longitude and latitude. - -type: geo_point - -example: { "lon": -73.614830, "lat": 45.505918 } - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.geo.name`*:: -+ --- -User-defined description of a location, at the level of granularity they care about. -Could be the name of their data centers, the floor number, if this describes a local physical entity, city names.
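(Editorial aside: the `server.address` entry above asks producers to keep the raw value in `.address` and duplicate it into `.ip` or `.domain`, depending on what it is. A minimal sketch of that routing using the standard library's `ipaddress` module; the function name and the crude unix-socket guard are illustrative assumptions.)

[source,python]
----
import ipaddress

def route_server_address(event: dict, raw: str) -> dict:
    """Store the raw value in server.address, then duplicate it to .ip or .domain."""
    event["server.address"] = raw
    try:
        ipaddress.ip_address(raw)        # raises ValueError for anything that isn't an IP
        event["server.ip"] = raw
    except ValueError:
        if not raw.startswith("/"):      # crude guard: leave unix socket paths in .address only
            event["server.domain"] = raw
    return event

print(route_server_address({}, "203.0.113.10"))     # ... 'server.ip': '203.0.113.10'
print(route_server_address({}, "api.example.com"))  # ... 'server.domain': 'api.example.com'
----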
-Not typically used in automated geolocation. - -type: keyword - -example: boston-dc - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.geo.postal_code`*:: -+ --- -Postal code associated with the location. -Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. - -type: keyword - -example: 94040 - --- - -*`server.geo.region_iso_code`*:: -+ --- -Region ISO code. - -type: keyword - -example: CA-QC - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.geo.region_name`*:: -+ --- -Region name. - -type: keyword - -example: Quebec - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.geo.timezone`*:: -+ --- -The time zone of the location, such as IANA time zone name. - -type: keyword - -example: America/Argentina/Buenos_Aires - --- - -*`server.ip`*:: -+ --- -IP address of the server (IPv4 or IPv6). - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.mac`*:: -+ --- -MAC address of the server. -The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. - -type: keyword - -example: 00-00-5E-00-53-23 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.nat.ip`*:: -+ --- -Translated ip of destination based NAT sessions (e.g. internet to private DMZ) -Typically used with load balancers, firewalls, or routers. - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.nat.port`*:: -+ --- -Translated port of destination based NAT sessions (e.g. internet to private DMZ) -Typically used with load balancers, firewalls, or routers. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.packets`*:: -+ --- -Packets sent from the server to the client. - -type: long - -example: 12 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.port`*:: -+ --- -Port of the server. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.registered_domain`*:: -+ --- -The highest registered server domain, stripped of the subdomain. -For example, the registered domain for "foo.example.com" is "example.com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". - -type: keyword - -example: example.com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.subdomain`*:: -+ --- -The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. -For example, the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. - -type: keyword - -example: east - --- - -*`server.top_level_domain`*:: -+ --- -The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org).
Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". - -type: keyword - -example: co.uk - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.user.domain`*:: -+ --- -Name of the directory the user is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.user.email`*:: -+ --- -User email address. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.user.full_name`*:: -+ --- -User's full name, if available. - -type: keyword - -example: Albert Einstein - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.user.full_name.text`*:: -+ --- -type: match_only_text - --- - -*`server.user.group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.user.group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.user.group.name`*:: -+ --- -Name of the group. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.user.hash`*:: -+ --- -Unique user hash to correlate information for a user in anonymized form. -Useful if `user.id` or `user.name` contain confidential information and cannot be used. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.user.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - -example: S-1-5-21-202424912787-2692429404-2351956786-1000 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.user.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: a.einstein - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`server.user.name.text`*:: -+ --- -type: match_only_text - --- - -*`server.user.roles`*:: -+ --- -Array of user roles at the time of the event. - -type: keyword - -example: ["kibana_admin", "reporting_user"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== service - -The service fields describe the service for or from which the data was collected. -These fields help you find and correlate logs for a specific service and version. - - -*`service.address`*:: -+ --- -Address where data about this service was collected from. -This should be a URI, network address (ipv4:port or [ipv6]:port) or a resource path (sockets). - -type: keyword - -example: 172.26.0.2:5432 - --- - -*`service.environment`*:: -+ --- -Identifies the environment where the service is running. -If the same service runs in different environments (production, staging, QA, development, etc.), the environment can identify other instances of the same service. Can also group services and applications from the same environment. - -type: keyword - -example: production - --- - -*`service.ephemeral_id`*:: -+ --- -Ephemeral identifier of this service (if one exists). -This id normally changes across restarts, but `service.id` does not. - -type: keyword - -example: 8a4f500f - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.id`*:: -+ --- -Unique identifier of the running service. If the service is comprised of many nodes, the `service.id` should be the same for all nodes. -This id should uniquely identify the service. This makes it possible to correlate logs and metrics for one specific service, no matter which particular node emitted the event. 
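(Editorial aside: the `registered_domain` and `top_level_domain` descriptions above both stress that only a public-suffix-list lookup handles multi-label suffixes like "co.uk" correctly. A toy illustration of the longest-suffix-match idea, using a deliberately tiny suffix set; a real implementation would load the full list from http://publicsuffix.org.)

[source,python]
----
# Deliberately tiny stand-in for the public suffix list.
SUFFIXES = {"com", "org", "uk", "co.uk"}

def split_domain(fqdn: str):
    """Return (top_level_domain, registered_domain) via longest-suffix match."""
    labels = fqdn.lower().rstrip(".").split(".")
    for i in range(len(labels)):                 # candidate suffixes, longest first
        suffix = ".".join(labels[i:])
        if suffix in SUFFIXES:
            registered = ".".join(labels[max(i - 1, 0):])
            return suffix, registered
    return None, None

print(split_domain("foo.example.com"))     # ('com', 'example.com')
print(split_domain("www.mydomain.co.uk"))  # ('co.uk', 'mydomain.co.uk'), not ('uk', 'co.uk')
----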
-Note that if you need to see the events from one specific host of the service, you should filter on that `host.name` or `host.id` instead. - -type: keyword - -example: d37e5ebfe0ae6c4972dbe9f0174a1637bb8247f6 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.name`*:: -+ --- -Name of the service data is collected from. -The name of the service is normally user given. This allows for distributed services that run on multiple hosts to correlate the related instances based on the name. -In the case of Elasticsearch the `service.name` could contain the cluster name. For Beats the `service.name` is by default a copy of the `service.type` field if no name is specified. - -type: keyword - -example: elasticsearch-metrics - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.node.name`*:: -+ --- -Name of a service node. -This allows for two nodes of the same service running on the same host to be differentiated. Therefore, `service.node.name` should typically be unique across nodes of a given service. -In the case of Elasticsearch, the `service.node.name` could contain the unique node name within the Elasticsearch cluster. In cases where the service doesn't have the concept of a node name, the host name or container name can be used to distinguish running instances that make up this service. If those do not provide uniqueness (e.g. multiple instances of the service running on the same host) - the node name can be manually set. - -type: keyword - -example: instance-0000000016 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.state`*:: -+ --- -Current state of the service. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.type`*:: -+ --- -The type of the service data is collected from. -The type can be used to group and correlate logs and metrics from one service type. -Example: If logs or metrics are collected from Elasticsearch, `service.type` would be `elasticsearch`. - -type: keyword - -example: elasticsearch - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`service.version`*:: -+ --- -Version of the service the data was collected from. -This allows you to look at a data set only for a specific version of a service. - -type: keyword - -example: 3.2.4 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== source - -Source fields capture details about the sender of a network exchange/packet. These fields are populated from a network event, packet, or other event containing details of a network transaction. -Source fields are usually populated in conjunction with destination fields. The source and destination fields are considered the baseline and should always be filled if an event contains source and destination details from a network transaction. If the event also contains identification of the client and server roles, then the client and server fields should also be populated. - - -*`source.address`*:: -+ --- -Some event source addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the `.address` field. -Then it should be duplicated to `.ip` or `.domain`, depending on which one it is. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.as.number`*:: -+ --- -Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. - -type: long - -example: 15169 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.as.organization.name`*:: -+ --- -Organization name.
- -type: keyword - -example: Google LLC - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.as.organization.name.text`*:: -+ --- -type: match_only_text - --- - -*`source.bytes`*:: -+ --- -Bytes sent from the source to the destination. - -type: long - -example: 184 - -format: bytes - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.domain`*:: -+ --- -Source domain. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.geo.city_name`*:: -+ --- -City name. - -type: keyword - -example: Montreal - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.geo.continent_code`*:: -+ --- -Two-letter code representing continent's name. - -type: keyword - -example: NA - --- - -*`source.geo.continent_name`*:: -+ --- -Name of the continent. - -type: keyword - -example: North America - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.geo.country_iso_code`*:: -+ --- -Country ISO code. - -type: keyword - -example: CA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.geo.country_name`*:: -+ --- -Country name. - -type: keyword - -example: Canada - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.geo.location`*:: -+ --- -Longitude and latitude. - -type: geo_point - -example: { "lon": -73.614830, "lat": 45.505918 } - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.geo.name`*:: -+ --- -User-defined description of a location, at the level of granularity they care about. -Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. -Not typically used in automated geolocation. - -type: keyword - -example: boston-dc - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.geo.postal_code`*:: -+ --- -Postal code associated with the location. -Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. - -type: keyword - -example: 94040 - --- - -*`source.geo.region_iso_code`*:: -+ --- -Region ISO code. - -type: keyword - -example: CA-QC - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.geo.region_name`*:: -+ --- -Region name. - -type: keyword - -example: Quebec - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.geo.timezone`*:: -+ --- -The time zone of the location, such as IANA time zone name. - -type: keyword - -example: America/Argentina/Buenos_Aires - --- - -*`source.ip`*:: -+ --- -IP address of the source (IPv4 or IPv6). - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.mac`*:: -+ --- -MAC address of the source. -The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. - -type: keyword - -example: 00-00-5E-00-53-23 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.nat.ip`*:: -+ --- -Translated ip of source based NAT sessions (e.g. internal client to internet) -Typically connections traversing load balancers, firewalls, or routers. - -type: ip - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.nat.port`*:: -+ --- -Translated port of source based NAT sessions. (e.g. internal client to internet) -Typically used with load balancers, firewalls, or routers. - -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.packets`*:: -+ --- -Packets sent from the source to the destination. - -type: long - -example: 12 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.port`*:: -+ --- -Port of the source. 
- -type: long - -format: string - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.registered_domain`*:: -+ --- -The highest registered source domain, stripped of the subdomain. -For example, the registered domain for "foo.example.com" is "example.com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". - -type: keyword - -example: example.com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.subdomain`*:: -+ --- -The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. -For example, the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. - -type: keyword - -example: east - --- - -*`source.top_level_domain`*:: -+ --- -The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". - -type: keyword - -example: co.uk - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.user.domain`*:: -+ --- -Name of the directory the user is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.user.email`*:: -+ --- -User email address. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.user.full_name`*:: -+ --- -User's full name, if available. - -type: keyword - -example: Albert Einstein - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.user.full_name.text`*:: -+ --- -type: match_only_text - --- - -*`source.user.group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.user.group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.user.group.name`*:: -+ --- -Name of the group. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.user.hash`*:: -+ --- -Unique user hash to correlate information for a user in anonymized form. -Useful if `user.id` or `user.name` contain confidential information and cannot be used. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.user.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - -example: S-1-5-21-202424912787-2692429404-2351956786-1000 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.user.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: a.einstein - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`source.user.name.text`*:: -+ --- -type: match_only_text - --- - -*`source.user.roles`*:: -+ --- -Array of user roles at the time of the event. - -type: keyword - -example: ["kibana_admin", "reporting_user"] - -{yes-icon} {ecs-ref}[ECS] field.
- --- - -[float] -=== threat - -Fields to classify events and alerts according to a threat taxonomy such as the MITRE ATT&CK® framework. -These fields are for users to classify alerts from all of their sources (e.g. IDS, NGFW, etc.) within a common taxonomy. The threat.tactic.* fields are meant to capture the high level category of the threat (e.g. "impact"). The threat.technique.* fields are meant to capture which kind of approach is used by this detected threat, to accomplish the goal (e.g. "endpoint denial of service"). - - -*`threat.enrichments`*:: -+ --- -A list of associated indicator objects enriching the event, and the context of that association/enrichment. - -type: nested - --- - -*`threat.enrichments.indicator`*:: -+ --- -Object containing associated indicators enriching the event. - -type: object - --- - -*`threat.enrichments.indicator.as.number`*:: -+ --- -Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. - -type: long - -example: 15169 - --- - -*`threat.enrichments.indicator.as.organization.name`*:: -+ --- -Organization name. - -type: keyword - -example: Google LLC - --- - -*`threat.enrichments.indicator.as.organization.name.text`*:: -+ --- -type: match_only_text - --- - -*`threat.enrichments.indicator.confidence`*:: -+ --- -Identifies the confidence rating assigned by the provider using STIX confidence scales. Expected values: - * Not Specified, None, Low, Medium, High - * 0-10 - * Admiralty Scale (1-6) - * DNI Scale (5-95) - * WEP Scale (Impossible - Certain) - -type: keyword - -example: High - --- - -*`threat.enrichments.indicator.description`*:: -+ --- -Describes the type of action conducted by the threat. - -type: keyword - -example: IP x.x.x.x was observed delivering the Angler EK. - --- - -*`threat.enrichments.indicator.email.address`*:: -+ --- -Identifies a threat indicator as an email address (irrespective of direction). - -type: keyword - -example: phish@example.com - --- - -*`threat.enrichments.indicator.file.accessed`*:: -+ --- -Last time the file was accessed. -Note that not all filesystems keep track of access time. - -type: date - --- - -*`threat.enrichments.indicator.file.attributes`*:: -+ --- -Array of file attributes. -Attribute names will vary by platform. Here's a non-exhaustive list of values that are expected in this field: archive, compressed, directory, encrypted, execute, hidden, read, readonly, system, write. - -type: keyword - -example: ["readonly", "system"] - --- - -*`threat.enrichments.indicator.file.code_signature.digest_algorithm`*:: -+ --- -The hashing algorithm used to sign the process. -This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. - -type: keyword - -example: sha256 - --- - -*`threat.enrichments.indicator.file.code_signature.exists`*:: -+ --- -Boolean to capture if a signature is present. - -type: boolean - -example: true - --- - -*`threat.enrichments.indicator.file.code_signature.signing_id`*:: -+ --- -The identifier used to sign the process. -This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. - -type: keyword - -example: com.apple.xpc.proxy - --- - -*`threat.enrichments.indicator.file.code_signature.status`*:: -+ --- -Additional information about the certificate status. -This is useful for logging cryptographic errors with the certificate validity or trust status.
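(Editorial aside: since `threat.enrichments` above is a `nested` type, each enrichment travels as its own object pairing an `indicator` with the `matched.*` context defined later in this section. A sketch of the implied document shape; all values are invented placeholders.)

[source,python]
----
# Implied shape of a nested enrichment entry (values are invented placeholders).
event = {
    "threat": {
        "enrichments": [
            {
                "indicator": {
                    "type": "ipv4-addr",
                    "ip": "192.0.2.15",
                    "confidence": "High",
                    "provider": "example_feed",
                },
                "matched": {
                    "atomic": "192.0.2.15",
                    "field": "source.ip",
                    "type": "indicator_match_rule",
                },
            }
        ]
    }
}
print(event["threat"]["enrichments"][0]["matched"]["field"])  # source.ip
----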
Leave unpopulated if the validity or trust of the certificate was unchecked. - -type: keyword - -example: ERROR_UNTRUSTED_ROOT - --- - -*`threat.enrichments.indicator.file.code_signature.subject_name`*:: -+ --- -Subject name of the code signer - -type: keyword - -example: Microsoft Corporation - --- - -*`threat.enrichments.indicator.file.code_signature.team_id`*:: -+ --- -The team identifier used to sign the process. -This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. - -type: keyword - -example: EQHXZ8M8AV - --- - -*`threat.enrichments.indicator.file.code_signature.timestamp`*:: -+ --- -Date and time when the code signature was generated and signed. - -type: date - -example: 2021-01-01T12:10:30Z - --- - -*`threat.enrichments.indicator.file.code_signature.trusted`*:: -+ --- -Stores the trust status of the certificate chain. -Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. - -type: boolean - -example: true - --- - -*`threat.enrichments.indicator.file.code_signature.valid`*:: -+ --- -Boolean to capture if the digital signature is verified against the binary content. -Leave unpopulated if a certificate was unchecked. - -type: boolean - -example: true - --- - -*`threat.enrichments.indicator.file.created`*:: -+ --- -File creation time. -Note that not all filesystems store the creation time. - -type: date - --- - -*`threat.enrichments.indicator.file.ctime`*:: -+ --- -Last time the file attributes or metadata changed. -Note that changes to the file content will update `mtime`. This implies `ctime` will be adjusted at the same time, since `mtime` is an attribute of the file. - -type: date - --- - -*`threat.enrichments.indicator.file.device`*:: -+ --- -Device that is the source of the file. - -type: keyword - -example: sda - --- - -*`threat.enrichments.indicator.file.directory`*:: -+ --- -Directory where the file is located. It should include the drive letter, when appropriate. - -type: keyword - -example: /home/alice - --- - -*`threat.enrichments.indicator.file.drive_letter`*:: -+ --- -Drive letter where the file is located. This field is only relevant on Windows. -The value should be uppercase, and not include the colon. - -type: keyword - -example: C - --- - -*`threat.enrichments.indicator.file.elf.architecture`*:: -+ --- -Machine architecture of the ELF file. - -type: keyword - -example: x86-64 - --- - -*`threat.enrichments.indicator.file.elf.byte_order`*:: -+ --- -Byte sequence of ELF file. - -type: keyword - -example: Little Endian - --- - -*`threat.enrichments.indicator.file.elf.cpu_type`*:: -+ --- -CPU type of the ELF file. - -type: keyword - -example: Intel - --- - -*`threat.enrichments.indicator.file.elf.creation_date`*:: -+ --- -Extracted when possible from the file's metadata. Indicates when it was built or compiled. It can also be faked by malware creators. - -type: date - --- - -*`threat.enrichments.indicator.file.elf.exports`*:: -+ --- -List of exported element names and types. - -type: flattened - --- - -*`threat.enrichments.indicator.file.elf.header.abi_version`*:: -+ --- -Version of the ELF Application Binary Interface (ABI). - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.header.class`*:: -+ --- -Header class of the ELF file. - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.header.data`*:: -+ --- -Data table of the ELF header. 
- -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.header.entrypoint`*:: -+ --- -Header entrypoint of the ELF file. - -type: long - -format: string - --- - -*`threat.enrichments.indicator.file.elf.header.object_version`*:: -+ --- -"0x1" for original ELF files. - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.header.os_abi`*:: -+ --- -Application Binary Interface (ABI) of the Linux OS. - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.header.type`*:: -+ --- -Header type of the ELF file. - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.header.version`*:: -+ --- -Version of the ELF header. - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.imports`*:: -+ --- -List of imported element names and types. - -type: flattened - --- - -*`threat.enrichments.indicator.file.elf.sections`*:: -+ --- -An array containing an object for each section of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. - -type: nested - --- - -*`threat.enrichments.indicator.file.elf.sections.chi2`*:: -+ --- -Chi-square probability distribution of the section. - -type: long - -format: number - --- - -*`threat.enrichments.indicator.file.elf.sections.entropy`*:: -+ --- -Shannon entropy calculation from the section. - -type: long - -format: number - --- - -*`threat.enrichments.indicator.file.elf.sections.flags`*:: -+ --- -ELF Section List flags. - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.sections.name`*:: -+ --- -ELF Section List name. - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.sections.physical_offset`*:: -+ --- -ELF Section List offset. - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.sections.physical_size`*:: -+ --- -ELF Section List physical size. - -type: long - -format: bytes - --- - -*`threat.enrichments.indicator.file.elf.sections.type`*:: -+ --- -ELF Section List type. - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.sections.virtual_address`*:: -+ --- -ELF Section List virtual address. - -type: long - -format: string - --- - -*`threat.enrichments.indicator.file.elf.sections.virtual_size`*:: -+ --- -ELF Section List virtual size. - -type: long - -format: string - --- - -*`threat.enrichments.indicator.file.elf.segments`*:: -+ --- -An array containing an object for each segment of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. - -type: nested - --- - -*`threat.enrichments.indicator.file.elf.segments.sections`*:: -+ --- -ELF object segment sections. - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.segments.type`*:: -+ --- -ELF object segment type. - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.shared_libraries`*:: -+ --- -List of shared libraries used by this ELF object. - -type: keyword - --- - -*`threat.enrichments.indicator.file.elf.telfhash`*:: -+ --- -telfhash symbol hash for ELF file. - -type: keyword - --- - -*`threat.enrichments.indicator.file.extension`*:: -+ --- -File extension, excluding the leading dot. -Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz"). - -type: keyword - -example: png - --- - -*`threat.enrichments.indicator.file.fork_name`*:: -+ --- -A fork is additional data associated with a filesystem object. -On Linux, a resource fork is used to store additional data with a filesystem object. 
A file always has at least one fork for the data portion, and additional forks may exist. -On NTFS, this is analogous to an Alternate Data Stream (ADS), and the default data stream for a file is just called $DATA. Zone.Identifier is commonly used by Windows to track contents downloaded from the Internet. An ADS is typically of the form: `C:\path\to\filename.extension:some_fork_name`, and `some_fork_name` is the value that should populate `fork_name`. `filename.extension` should populate `file.name`, and `extension` should populate `file.extension`. The full path, `file.path`, will include the fork name. - -type: keyword - -example: Zone.Identifier - --- - -*`threat.enrichments.indicator.file.gid`*:: -+ --- -Primary group ID (GID) of the file. - -type: keyword - -example: 1001 - --- - -*`threat.enrichments.indicator.file.group`*:: -+ --- -Primary group name of the file. - -type: keyword - -example: alice - --- - -*`threat.enrichments.indicator.file.hash.md5`*:: -+ --- -MD5 hash. - -type: keyword - --- - -*`threat.enrichments.indicator.file.hash.sha1`*:: -+ --- -SHA1 hash. - -type: keyword - --- - -*`threat.enrichments.indicator.file.hash.sha256`*:: -+ --- -SHA256 hash. - -type: keyword - --- - -*`threat.enrichments.indicator.file.hash.sha512`*:: -+ --- -SHA512 hash. - -type: keyword - --- - -*`threat.enrichments.indicator.file.hash.ssdeep`*:: -+ --- -SSDEEP hash. - -type: keyword - --- - -*`threat.enrichments.indicator.file.inode`*:: -+ --- -Inode representing the file in the filesystem. - -type: keyword - -example: 256383 - --- - -*`threat.enrichments.indicator.file.mime_type`*:: -+ --- -MIME type should identify the format of the file or stream of bytes using https://www.iana.org/assignments/media-types/media-types.xhtml[IANA official types], where possible. When more than one type is applicable, the most specific type should be used. - -type: keyword - --- - -*`threat.enrichments.indicator.file.mode`*:: -+ --- -Mode of the file in octal representation. - -type: keyword - -example: 0640 - --- - -*`threat.enrichments.indicator.file.mtime`*:: -+ --- -Last time the file content was modified. - -type: date - --- - -*`threat.enrichments.indicator.file.name`*:: -+ --- -Name of the file including the extension, without the directory. - -type: keyword - -example: example.png - --- - -*`threat.enrichments.indicator.file.owner`*:: -+ --- -File owner's username. - -type: keyword - -example: alice - --- - -*`threat.enrichments.indicator.file.path`*:: -+ --- -Full path to the file, including the file name. It should include the drive letter, when appropriate. - -type: keyword - -example: /home/alice/example.png - --- - -*`threat.enrichments.indicator.file.path.text`*:: -+ --- -type: match_only_text - --- - -*`threat.enrichments.indicator.file.pe.architecture`*:: -+ --- -CPU architecture target for the file. - -type: keyword - -example: x64 - --- - -*`threat.enrichments.indicator.file.pe.company`*:: -+ --- -Internal company name of the file, provided at compile-time. - -type: keyword - -example: Microsoft Corporation - --- - -*`threat.enrichments.indicator.file.pe.description`*:: -+ --- -Internal description of the file, provided at compile-time. - -type: keyword - -example: Paint - --- - -*`threat.enrichments.indicator.file.pe.file_version`*:: -+ --- -Internal version of the file, provided at compile-time. - -type: keyword - -example: 6.3.9600.17415 - --- - -*`threat.enrichments.indicator.file.pe.imphash`*:: -+ --- -A hash of the imports in a PE file.
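(Editorial aside: the `file.hash.*` entries above, md5 through sha512, can all be produced in one pass over the file with the standard library; `ssdeep` and the PE `imphash` require third-party tooling and are omitted. A minimal sketch; the helper name is illustrative.)

[source,python]
----
import hashlib

def file_hashes(path: str) -> dict:
    """One pass over the file, filling the md5/sha1/sha256/sha512 hash fields."""
    digests = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256", "sha512")}
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):  # stream; avoid loading whole file
            for digest in digests.values():
                digest.update(chunk)
    return {f"file.hash.{name}": d.hexdigest() for name, d in digests.items()}

print(file_hashes(__file__))  # hash this script itself as a self-contained demo
----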
An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. -Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html. - -type: keyword - -example: 0c6803c4e922103c4dca5963aad36ddf - --- - -*`threat.enrichments.indicator.file.pe.original_file_name`*:: -+ --- -Internal name of the file, provided at compile-time. - -type: keyword - -example: MSPAINT.EXE - --- - -*`threat.enrichments.indicator.file.pe.product`*:: -+ --- -Internal product name of the file, provided at compile-time. - -type: keyword - -example: Microsoft® Windows® Operating System - --- - -*`threat.enrichments.indicator.file.size`*:: -+ --- -File size in bytes. -Only relevant when `file.type` is "file". - -type: long - -example: 16384 - --- - -*`threat.enrichments.indicator.file.target_path`*:: -+ --- -Target path for symlinks. - -type: keyword - --- - -*`threat.enrichments.indicator.file.target_path.text`*:: -+ --- -type: match_only_text - --- - -*`threat.enrichments.indicator.file.type`*:: -+ --- -File type (file, dir, or symlink). - -type: keyword - -example: file - --- - -*`threat.enrichments.indicator.file.uid`*:: -+ --- -The user ID (UID) or security identifier (SID) of the file owner. - -type: keyword - -example: 1001 - --- - -*`threat.enrichments.indicator.first_seen`*:: -+ --- -The date and time when intelligence source first reported sighting this indicator. - -type: date - -example: 2020-11-05T17:25:47.000Z - --- - -*`threat.enrichments.indicator.geo.city_name`*:: -+ --- -City name. - -type: keyword - -example: Montreal - --- - -*`threat.enrichments.indicator.geo.continent_code`*:: -+ --- -Two-letter code representing continent's name. - -type: keyword - -example: NA - --- - -*`threat.enrichments.indicator.geo.continent_name`*:: -+ --- -Name of the continent. - -type: keyword - -example: North America - --- - -*`threat.enrichments.indicator.geo.country_iso_code`*:: -+ --- -Country ISO code. - -type: keyword - -example: CA - --- - -*`threat.enrichments.indicator.geo.country_name`*:: -+ --- -Country name. - -type: keyword - -example: Canada - --- - -*`threat.enrichments.indicator.geo.location`*:: -+ --- -Longitude and latitude. - -type: geo_point - -example: { "lon": -73.614830, "lat": 45.505918 } - --- - -*`threat.enrichments.indicator.geo.name`*:: -+ --- -User-defined description of a location, at the level of granularity they care about. -Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. -Not typically used in automated geolocation. - -type: keyword - -example: boston-dc - --- - -*`threat.enrichments.indicator.geo.postal_code`*:: -+ --- -Postal code associated with the location. -Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. - -type: keyword - -example: 94040 - --- - -*`threat.enrichments.indicator.geo.region_iso_code`*:: -+ --- -Region ISO code. - -type: keyword - -example: CA-QC - --- - -*`threat.enrichments.indicator.geo.region_name`*:: -+ --- -Region name. - -type: keyword - -example: Quebec - --- - -*`threat.enrichments.indicator.geo.timezone`*:: -+ --- -The time zone of the location, such as IANA time zone name. 
- -type: keyword - -example: America/Argentina/Buenos_Aires - --- - -*`threat.enrichments.indicator.ip`*:: -+ --- -Identifies a threat indicator as an IP address (irrespective of direction). - -type: ip - -example: 1.2.3.4 - --- - -*`threat.enrichments.indicator.last_seen`*:: -+ --- -The date and time when intelligence source last reported sighting this indicator. - -type: date - -example: 2020-11-05T17:25:47.000Z - --- - -*`threat.enrichments.indicator.marking.tlp`*:: -+ --- -Traffic Light Protocol sharing markings. Recommended values are: - * WHITE - * GREEN - * AMBER - * RED - -type: keyword - -example: White - --- - -*`threat.enrichments.indicator.modified_at`*:: -+ --- -The date and time when intelligence source last modified information for this indicator. - -type: date - -example: 2020-11-05T17:25:47.000Z - --- - -*`threat.enrichments.indicator.port`*:: -+ --- -Identifies a threat indicator as a port number (irrespective of direction). - -type: long - -example: 443 - --- - -*`threat.enrichments.indicator.provider`*:: -+ --- -The name of the indicator's provider. - -type: keyword - -example: lrz_urlhaus - --- - -*`threat.enrichments.indicator.reference`*:: -+ --- -Reference URL linking to additional information about this indicator. - -type: keyword - -example: https://system.example.com/indicator/0001234 - --- - -*`threat.enrichments.indicator.registry.data.bytes`*:: -+ --- -Original bytes written with base64 encoding. -For Windows registry operations, such as SetValueEx and RegQueryValueEx, this corresponds to the data pointed by `lp_data`. This is optional but provides better recoverability and should be populated for REG_BINARY encoded values. - -type: keyword - -example: ZQBuAC0AVQBTAAAAZQBuAAAAAAA= - --- - -*`threat.enrichments.indicator.registry.data.strings`*:: -+ --- -Content when writing string types. -Populated as an array when writing string data to the registry. For single string registry types (REG_SZ, REG_EXPAND_SZ), this should be an array with one string. For sequences of string with REG_MULTI_SZ, this array will be variable length. For numeric data, such as REG_DWORD and REG_QWORD, this should be populated with the decimal representation (e.g `"1"`). - -type: wildcard - -example: ["C:\rta\red_ttp\bin\myapp.exe"] - --- - -*`threat.enrichments.indicator.registry.data.type`*:: -+ --- -Standard registry type for encoding contents - -type: keyword - -example: REG_SZ - --- - -*`threat.enrichments.indicator.registry.hive`*:: -+ --- -Abbreviated name for the hive. - -type: keyword - -example: HKLM - --- - -*`threat.enrichments.indicator.registry.key`*:: -+ --- -Hive-relative path of keys. - -type: keyword - -example: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe - --- - -*`threat.enrichments.indicator.registry.path`*:: -+ --- -Full path, including hive, key and value - -type: keyword - -example: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe\Debugger - --- - -*`threat.enrichments.indicator.registry.value`*:: -+ --- -Name of the value written. - -type: keyword - -example: Debugger - --- - -*`threat.enrichments.indicator.scanner_stats`*:: -+ --- -Count of AV/EDR vendors that successfully detected malicious file or URL. - -type: long - -example: 4 - --- - -*`threat.enrichments.indicator.sightings`*:: -+ --- -Number of times this indicator was observed conducting threat activity. 
- -type: long - -example: 20 - --- - -*`threat.enrichments.indicator.type`*:: -+ --- -Type of indicator as represented by Cyber Observable in STIX 2.0. Recommended values: - * autonomous-system - * artifact - * directory - * domain-name - * email-addr - * file - * ipv4-addr - * ipv6-addr - * mac-addr - * mutex - * port - * process - * software - * url - * user-account - * windows-registry-key - * x509-certificate - -type: keyword - -example: ipv4-addr - --- - -*`threat.enrichments.indicator.url.domain`*:: -+ --- -Domain of the url, such as "www.elastic.co". -In some cases a URL may refer to an IP and/or port directly, without a domain name. In this case, the IP address would go to the `domain` field. -If the URL contains a literal IPv6 address enclosed by `[` and `]` (IETF RFC 2732), the `[` and `]` characters should also be captured in the `domain` field. - -type: keyword - -example: www.elastic.co - --- - -*`threat.enrichments.indicator.url.extension`*:: -+ --- -The field contains the file extension from the original request url, excluding the leading dot. -The file extension is only set if it exists, as not every url has a file extension. -The leading period must not be included. For example, the value must be "png", not ".png". -Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz"). - -type: keyword - -example: png - --- - -*`threat.enrichments.indicator.url.fragment`*:: -+ --- -Portion of the url after the `#`, such as "top". -The `#` is not part of the fragment. - -type: keyword - --- - -*`threat.enrichments.indicator.url.full`*:: -+ --- -If full URLs are important to your use case, they should be stored in `url.full`, whether this field is reconstructed or present in the event source. - -type: wildcard - -example: https://www.elastic.co:443/search?q=elasticsearch#top - --- - -*`threat.enrichments.indicator.url.full.text`*:: -+ --- -type: match_only_text - --- - -*`threat.enrichments.indicator.url.original`*:: -+ --- -Unmodified original url as seen in the event source. -Note that in network monitoring, the observed URL may be a full URL, whereas in access logs, the URL is often just represented as a path. -This field is meant to represent the URL as it was observed, complete or not. - -type: wildcard - -example: https://www.elastic.co:443/search?q=elasticsearch#top or /search?q=elasticsearch - --- - -*`threat.enrichments.indicator.url.original.text`*:: -+ --- -type: match_only_text - --- - -*`threat.enrichments.indicator.url.password`*:: -+ --- -Password of the request. - -type: keyword - --- - -*`threat.enrichments.indicator.url.path`*:: -+ --- -Path of the request, such as "/search". - -type: wildcard - --- - -*`threat.enrichments.indicator.url.port`*:: -+ --- -Port of the request, such as 443. - -type: long - -example: 443 - -format: string - --- - -*`threat.enrichments.indicator.url.query`*:: -+ --- -The query field describes the query string of the request, such as "q=elasticsearch". -The `?` is excluded from the query string. If a URL contains no `?`, there is no query field. If there is a `?` but no query, the query field exists with an empty string. The `exists` query can be used to differentiate between the two cases. - -type: keyword - --- - -*`threat.enrichments.indicator.url.registered_domain`*:: -+ --- -The highest registered url domain, stripped of the subdomain. -For example, the registered domain for "foo.example.com" is "example.com". 
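(Editorial aside: most of the `url.*` sub-fields above map directly onto the output of the standard library's URL parser. A sketch using the section's own example URL; note that `urlsplit` already strips the `:`, `?`, and `#` separators, matching the field definitions.)

[source,python]
----
from urllib.parse import urlsplit

def url_fields(full: str) -> dict:
    """Decompose a full URL into the url.* sub-fields described above."""
    parts = urlsplit(full)
    return {
        "url.full": full,
        "url.scheme": parts.scheme,      # "https", ':' excluded
        "url.domain": parts.hostname,    # "www.elastic.co"
        "url.port": parts.port,          # 443
        "url.path": parts.path,          # "/search"
        "url.query": parts.query,        # "q=elasticsearch", '?' excluded
        "url.fragment": parts.fragment,  # "top", '#' excluded
    }

print(url_fields("https://www.elastic.co:443/search?q=elasticsearch#top"))
----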
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". - -type: keyword - -example: example.com - --- - -*`threat.enrichments.indicator.url.scheme`*:: -+ --- -Scheme of the request, such as "https". -Note: The `:` is not part of the scheme. - -type: keyword - -example: https - --- - -*`threat.enrichments.indicator.url.subdomain`*:: -+ --- -The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. -For example, the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. - -type: keyword - -example: east - --- - -*`threat.enrichments.indicator.url.top_level_domain`*:: -+ --- -The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". - -type: keyword - -example: co.uk - --- - -*`threat.enrichments.indicator.url.username`*:: -+ --- -Username of the request. - -type: keyword - --- - -*`threat.enrichments.indicator.x509.alternative_names`*:: -+ --- -List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. - -type: keyword - -example: *.elastic.co - --- - -*`threat.enrichments.indicator.x509.issuer.common_name`*:: -+ --- -List of common name (CN) of issuing certificate authority. - -type: keyword - -example: Example SHA2 High Assurance Server CA - --- - -*`threat.enrichments.indicator.x509.issuer.country`*:: -+ --- -List of country (C) codes - -type: keyword - -example: US - --- - -*`threat.enrichments.indicator.x509.issuer.distinguished_name`*:: -+ --- -Distinguished name (DN) of issuing certificate authority. - -type: keyword - -example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA - --- - -*`threat.enrichments.indicator.x509.issuer.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: Mountain View - --- - -*`threat.enrichments.indicator.x509.issuer.organization`*:: -+ --- -List of organizations (O) of issuing certificate authority. - -type: keyword - -example: Example Inc - --- - -*`threat.enrichments.indicator.x509.issuer.organizational_unit`*:: -+ --- -List of organizational units (OU) of issuing certificate authority. - -type: keyword - -example: www.example.com - --- - -*`threat.enrichments.indicator.x509.issuer.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - --- - -*`threat.enrichments.indicator.x509.not_after`*:: -+ --- -Time at which the certificate is no longer considered valid. - -type: date - -example: 2020-07-16 03:15:39+00:00 - --- - -*`threat.enrichments.indicator.x509.not_before`*:: -+ --- -Time at which the certificate is first considered valid.
- -type: date - -example: 2019-08-16 01:40:25+00:00 - --- - -*`threat.enrichments.indicator.x509.public_key_algorithm`*:: -+ --- -Algorithm used to generate the public key. - -type: keyword - -example: RSA - --- - -*`threat.enrichments.indicator.x509.public_key_curve`*:: -+ --- -The curve used by the elliptic curve public key algorithm. This is algorithm specific. - -type: keyword - -example: nistp521 - --- - -*`threat.enrichments.indicator.x509.public_key_exponent`*:: -+ --- -Exponent used to derive the public key. This is algorithm specific. - -type: long - -example: 65537 - -Field is not indexed. - --- - -*`threat.enrichments.indicator.x509.public_key_size`*:: -+ --- -The size of the public key space in bits. - -type: long - -example: 2048 - --- - -*`threat.enrichments.indicator.x509.serial_number`*:: -+ --- -Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. - -type: keyword - -example: 55FBB9C7DEBF09809D12CCAA - --- - -*`threat.enrichments.indicator.x509.signature_algorithm`*:: -+ --- -Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353. - -type: keyword - -example: SHA256-RSA - --- - -*`threat.enrichments.indicator.x509.subject.common_name`*:: -+ --- -List of common names (CN) of subject. - -type: keyword - -example: shared.global.example.net - --- - -*`threat.enrichments.indicator.x509.subject.country`*:: -+ --- -List of country (C) code - -type: keyword - -example: US - --- - -*`threat.enrichments.indicator.x509.subject.distinguished_name`*:: -+ --- -Distinguished name (DN) of the certificate subject entity. - -type: keyword - -example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net - --- - -*`threat.enrichments.indicator.x509.subject.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: San Francisco - --- - -*`threat.enrichments.indicator.x509.subject.organization`*:: -+ --- -List of organizations (O) of subject. - -type: keyword - -example: Example, Inc. - --- - -*`threat.enrichments.indicator.x509.subject.organizational_unit`*:: -+ --- -List of organizational units (OU) of subject. - -type: keyword - --- - -*`threat.enrichments.indicator.x509.subject.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - --- - -*`threat.enrichments.indicator.x509.version_number`*:: -+ --- -Version of x509 format. - -type: keyword - -example: 3 - --- - -*`threat.enrichments.matched.atomic`*:: -+ --- -Identifies the atomic indicator value that matched a local environment endpoint or network event. - -type: keyword - -example: bad-domain.com - --- - -*`threat.enrichments.matched.field`*:: -+ --- -Identifies the field of the atomic indicator that matched a local environment endpoint or network event. - -type: keyword - -example: file.hash.sha256 - --- - -*`threat.enrichments.matched.id`*:: -+ --- -Identifies the _id of the indicator document enriching the event. - -type: keyword - -example: ff93aee5-86a1-4a61-b0e6-0cdc313d01b5 - --- - -*`threat.enrichments.matched.index`*:: -+ --- -Identifies the _index of the indicator document enriching the event. 
-
-type: keyword
-
-example: filebeat-8.0.0-2021.05.23-000011
-
---
-
-*`threat.enrichments.matched.type`*::
-+
---
-Identifies the type of match that caused the event to be enriched with the given indicator.
-
-type: keyword
-
-example: indicator_match_rule
-
---
-
-*`threat.framework`*::
-+
---
-Name of the threat framework used to further categorize and classify the tactic and technique of the reported threat. Framework classification can be provided by detecting systems, evaluated at ingest time, or retrospectively tagged to events.
-
-type: keyword
-
-example: MITRE ATT&CK
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`threat.group.alias`*::
-+
---
-The alias(es) of the group for a set of related intrusion activity that are tracked by a common name in the security community.
-While not required, you can use a MITRE ATT&CK® group alias(es).
-
-type: keyword
-
-example: [ "Magecart Group 6" ]
-
---
-
-*`threat.group.id`*::
-+
---
-The id of the group for a set of related intrusion activity that are tracked by a common name in the security community.
-While not required, you can use a MITRE ATT&CK® group id.
-
-type: keyword
-
-example: G0037
-
---
-
-*`threat.group.name`*::
-+
---
-The name of the group for a set of related intrusion activity that are tracked by a common name in the security community.
-While not required, you can use a MITRE ATT&CK® group name.
-
-type: keyword
-
-example: FIN6
-
---
-
-*`threat.group.reference`*::
-+
---
-The reference URL of the group for a set of related intrusion activity that are tracked by a common name in the security community.
-While not required, you can use a MITRE ATT&CK® group reference URL.
-
-type: keyword
-
-example: https://attack.mitre.org/groups/G0037/
-
---
-
-*`threat.indicator.as.number`*::
-+
---
-Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet.
-
-type: long
-
-example: 15169
-
---
-
-*`threat.indicator.as.organization.name`*::
-+
---
-Organization name.
-
-type: keyword
-
-example: Google LLC
-
---
-
-*`threat.indicator.as.organization.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.indicator.confidence`*::
-+
---
-Identifies the confidence rating assigned by the provider using STIX confidence scales.
-Recommended values:
- * Not Specified, None, Low, Medium, High
- * 0-10
- * Admiralty Scale (1-6)
- * DNI Scale (5-95)
- * WEP Scale (Impossible - Certain)
-
-type: keyword
-
-example: High
-
---
-
-*`threat.indicator.description`*::
-+
---
-Describes the type of action conducted by the threat.
-
-type: keyword
-
-example: IP x.x.x.x was observed delivering the Angler EK.
-
---
-
-*`threat.indicator.email.address`*::
-+
---
-Identifies a threat indicator as an email address (irrespective of direction).
-
-type: keyword
-
-example: phish@example.com
-
---
-
-*`threat.indicator.file.accessed`*::
-+
---
-Last time the file was accessed.
-Note that not all filesystems keep track of access time.
-
-type: date
-
---
-
-*`threat.indicator.file.attributes`*::
-+
---
-Array of file attributes.
-Attribute names will vary by platform. Here's a non-exhaustive list of values that are expected in this field: archive, compressed, directory, encrypted, execute, hidden, read, readonly, system, write.
-
-type: keyword
-
-example: ["readonly", "system"]
-
---
-
-*`threat.indicator.file.code_signature.digest_algorithm`*::
-+
---
-The hashing algorithm used to sign the process.
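The `threat.enrichments.matched.*` fields above record how an indicator match was made. As a minimal sketch of where they sit on an enriched event (the document shape and the hash value are illustrative; the field names and the id/index/type values come from the examples in this reference):

[source,python]
----
# Illustrative only: an event enriched by an indicator match rule.
enriched_event = {
    "file": {"hash": {"sha256": "0c415b..."}},  # hypothetical matching value
    "threat": {
        "enrichments": [
            {
                "indicator": {"type": "ipv4-addr", "provider": "lrz_urlhaus"},
                "matched": {
                    "atomic": "0c415b...",                         # the indicator value that matched
                    "field": "file.hash.sha256",                   # the event field it matched on
                    "id": "ff93aee5-86a1-4a61-b0e6-0cdc313d01b5",  # _id of the indicator document
                    "index": "filebeat-8.0.0-2021.05.23-000011",   # _index of the indicator document
                    "type": "indicator_match_rule",                # what produced the enrichment
                },
            }
        ]
    },
}
----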
-This value can distinguish signatures when a file is signed multiple times by the same signer but with a different digest algorithm. - -type: keyword - -example: sha256 - --- - -*`threat.indicator.file.code_signature.exists`*:: -+ --- -Boolean to capture if a signature is present. - -type: boolean - -example: true - --- - -*`threat.indicator.file.code_signature.signing_id`*:: -+ --- -The identifier used to sign the process. -This is used to identify the application manufactured by a software vendor. The field is relevant to Apple *OS only. - -type: keyword - -example: com.apple.xpc.proxy - --- - -*`threat.indicator.file.code_signature.status`*:: -+ --- -Additional information about the certificate status. -This is useful for logging cryptographic errors with the certificate validity or trust status. Leave unpopulated if the validity or trust of the certificate was unchecked. - -type: keyword - -example: ERROR_UNTRUSTED_ROOT - --- - -*`threat.indicator.file.code_signature.subject_name`*:: -+ --- -Subject name of the code signer - -type: keyword - -example: Microsoft Corporation - --- - -*`threat.indicator.file.code_signature.team_id`*:: -+ --- -The team identifier used to sign the process. -This is used to identify the team or vendor of a software product. The field is relevant to Apple *OS only. - -type: keyword - -example: EQHXZ8M8AV - --- - -*`threat.indicator.file.code_signature.timestamp`*:: -+ --- -Date and time when the code signature was generated and signed. - -type: date - -example: 2021-01-01T12:10:30Z - --- - -*`threat.indicator.file.code_signature.trusted`*:: -+ --- -Stores the trust status of the certificate chain. -Validating the trust of the certificate chain may be complicated, and this field should only be populated by tools that actively check the status. - -type: boolean - -example: true - --- - -*`threat.indicator.file.code_signature.valid`*:: -+ --- -Boolean to capture if the digital signature is verified against the binary content. -Leave unpopulated if a certificate was unchecked. - -type: boolean - -example: true - --- - -*`threat.indicator.file.created`*:: -+ --- -File creation time. -Note that not all filesystems store the creation time. - -type: date - --- - -*`threat.indicator.file.ctime`*:: -+ --- -Last time the file attributes or metadata changed. -Note that changes to the file content will update `mtime`. This implies `ctime` will be adjusted at the same time, since `mtime` is an attribute of the file. - -type: date - --- - -*`threat.indicator.file.device`*:: -+ --- -Device that is the source of the file. - -type: keyword - -example: sda - --- - -*`threat.indicator.file.directory`*:: -+ --- -Directory where the file is located. It should include the drive letter, when appropriate. - -type: keyword - -example: /home/alice - --- - -*`threat.indicator.file.drive_letter`*:: -+ --- -Drive letter where the file is located. This field is only relevant on Windows. -The value should be uppercase, and not include the colon. - -type: keyword - -example: C - --- - -*`threat.indicator.file.elf.architecture`*:: -+ --- -Machine architecture of the ELF file. - -type: keyword - -example: x86-64 - --- - -*`threat.indicator.file.elf.byte_order`*:: -+ --- -Byte sequence of ELF file. - -type: keyword - -example: Little Endian - --- - -*`threat.indicator.file.elf.cpu_type`*:: -+ --- -CPU type of the ELF file. - -type: keyword - -example: Intel - --- - -*`threat.indicator.file.elf.creation_date`*:: -+ --- -Extracted when possible from the file's metadata. 
Indicates when it was built or compiled. It can also be faked by malware creators. - -type: date - --- - -*`threat.indicator.file.elf.exports`*:: -+ --- -List of exported element names and types. - -type: flattened - --- - -*`threat.indicator.file.elf.header.abi_version`*:: -+ --- -Version of the ELF Application Binary Interface (ABI). - -type: keyword - --- - -*`threat.indicator.file.elf.header.class`*:: -+ --- -Header class of the ELF file. - -type: keyword - --- - -*`threat.indicator.file.elf.header.data`*:: -+ --- -Data table of the ELF header. - -type: keyword - --- - -*`threat.indicator.file.elf.header.entrypoint`*:: -+ --- -Header entrypoint of the ELF file. - -type: long - -format: string - --- - -*`threat.indicator.file.elf.header.object_version`*:: -+ --- -"0x1" for original ELF files. - -type: keyword - --- - -*`threat.indicator.file.elf.header.os_abi`*:: -+ --- -Application Binary Interface (ABI) of the Linux OS. - -type: keyword - --- - -*`threat.indicator.file.elf.header.type`*:: -+ --- -Header type of the ELF file. - -type: keyword - --- - -*`threat.indicator.file.elf.header.version`*:: -+ --- -Version of the ELF header. - -type: keyword - --- - -*`threat.indicator.file.elf.imports`*:: -+ --- -List of imported element names and types. - -type: flattened - --- - -*`threat.indicator.file.elf.sections`*:: -+ --- -An array containing an object for each section of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.sections.*`. - -type: nested - --- - -*`threat.indicator.file.elf.sections.chi2`*:: -+ --- -Chi-square probability distribution of the section. - -type: long - -format: number - --- - -*`threat.indicator.file.elf.sections.entropy`*:: -+ --- -Shannon entropy calculation from the section. - -type: long - -format: number - --- - -*`threat.indicator.file.elf.sections.flags`*:: -+ --- -ELF Section List flags. - -type: keyword - --- - -*`threat.indicator.file.elf.sections.name`*:: -+ --- -ELF Section List name. - -type: keyword - --- - -*`threat.indicator.file.elf.sections.physical_offset`*:: -+ --- -ELF Section List offset. - -type: keyword - --- - -*`threat.indicator.file.elf.sections.physical_size`*:: -+ --- -ELF Section List physical size. - -type: long - -format: bytes - --- - -*`threat.indicator.file.elf.sections.type`*:: -+ --- -ELF Section List type. - -type: keyword - --- - -*`threat.indicator.file.elf.sections.virtual_address`*:: -+ --- -ELF Section List virtual address. - -type: long - -format: string - --- - -*`threat.indicator.file.elf.sections.virtual_size`*:: -+ --- -ELF Section List virtual size. - -type: long - -format: string - --- - -*`threat.indicator.file.elf.segments`*:: -+ --- -An array containing an object for each segment of the ELF file. -The keys that should be present in these objects are defined by sub-fields underneath `elf.segments.*`. - -type: nested - --- - -*`threat.indicator.file.elf.segments.sections`*:: -+ --- -ELF object segment sections. - -type: keyword - --- - -*`threat.indicator.file.elf.segments.type`*:: -+ --- -ELF object segment type. - -type: keyword - --- - -*`threat.indicator.file.elf.shared_libraries`*:: -+ --- -List of shared libraries used by this ELF object. - -type: keyword - --- - -*`threat.indicator.file.elf.telfhash`*:: -+ --- -telfhash symbol hash for ELF file. - -type: keyword - --- - -*`threat.indicator.file.extension`*:: -+ --- -File extension, excluding the leading dot. 
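`elf.sections.entropy` above is a Shannon entropy figure computed over a section's raw bytes, commonly used to flag packed or encrypted sections. A minimal sketch of the standard calculation (the helper name is ours):

[source,python]
----
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte, ranging from 0.0 to 8.0."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in Counter(data).values())

# A section drawing uniformly on all 256 byte values scores the maximum:
assert shannon_entropy(bytes(range(256))) == 8.0
----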
-Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz").
-
-type: keyword
-
-example: png
-
---
-
-*`threat.indicator.file.fork_name`*::
-+
---
-A fork is additional data associated with a filesystem object.
-On Linux, a resource fork is used to store additional data with a filesystem object. A file always has at least one fork for the data portion, and additional forks may exist.
-On NTFS, this is analogous to an Alternate Data Stream (ADS), and the default data stream for a file is just called $DATA. Zone.Identifier is commonly used by Windows to track contents downloaded from the Internet. An ADS is typically of the form: `C:\path\to\filename.extension:some_fork_name`, and `some_fork_name` is the value that should populate `fork_name`. `filename.extension` should populate `file.name`, and `extension` should populate `file.extension`. The full path, `file.path`, will include the fork name.
-
-type: keyword
-
-example: Zone.Identifier
-
---
-
-*`threat.indicator.file.gid`*::
-+
---
-Primary group ID (GID) of the file.
-
-type: keyword
-
-example: 1001
-
---
-
-*`threat.indicator.file.group`*::
-+
---
-Primary group name of the file.
-
-type: keyword
-
-example: alice
-
---
-
-*`threat.indicator.file.hash.md5`*::
-+
---
-MD5 hash.
-
-type: keyword
-
---
-
-*`threat.indicator.file.hash.sha1`*::
-+
---
-SHA1 hash.
-
-type: keyword
-
---
-
-*`threat.indicator.file.hash.sha256`*::
-+
---
-SHA256 hash.
-
-type: keyword
-
---
-
-*`threat.indicator.file.hash.sha512`*::
-+
---
-SHA512 hash.
-
-type: keyword
-
---
-
-*`threat.indicator.file.hash.ssdeep`*::
-+
---
-SSDEEP hash.
-
-type: keyword
-
---
-
-*`threat.indicator.file.inode`*::
-+
---
-Inode representing the file in the filesystem.
-
-type: keyword
-
-example: 256383
-
---
-
-*`threat.indicator.file.mime_type`*::
-+
---
-MIME type should identify the format of the file or stream of bytes using https://www.iana.org/assignments/media-types/media-types.xhtml[IANA official types], where possible. When more than one type is applicable, the most specific type should be used.
-
-type: keyword
-
---
-
-*`threat.indicator.file.mode`*::
-+
---
-Mode of the file in octal representation.
-
-type: keyword
-
-example: 0640
-
---
-
-*`threat.indicator.file.mtime`*::
-+
---
-Last time the file content was modified.
-
-type: date
-
---
-
-*`threat.indicator.file.name`*::
-+
---
-Name of the file including the extension, without the directory.
-
-type: keyword
-
-example: example.png
-
---
-
-*`threat.indicator.file.owner`*::
-+
---
-File owner's username.
-
-type: keyword
-
-example: alice
-
---
-
-*`threat.indicator.file.path`*::
-+
---
-Full path to the file, including the file name. It should include the drive letter, when appropriate.
-
-type: keyword
-
-example: /home/alice/example.png
-
---
-
-*`threat.indicator.file.path.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.indicator.file.pe.architecture`*::
-+
---
-CPU architecture target for the file.
-
-type: keyword
-
-example: x64
-
---
-
-*`threat.indicator.file.pe.company`*::
-+
---
-Internal company name of the file, provided at compile-time.
-
-type: keyword
-
-example: Microsoft Corporation
-
---
-
-*`threat.indicator.file.pe.description`*::
-+
---
-Internal description of the file, provided at compile-time.
-
-type: keyword
-
-example: Paint
-
---
-
-*`threat.indicator.file.pe.file_version`*::
-+
---
-Internal version of the file, provided at compile-time.
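The `file.hash.*` fields above are plain digests of the file contents. A sketch of filling them with the standard library (`ssdeep` is a fuzzy hash that needs a third-party package, so it is left out):

[source,python]
----
import hashlib

def indicator_file_hashes(path: str) -> dict:
    """Digest values for the threat.indicator.file.hash.* fields."""
    digests = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256", "sha512")}
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # stream; don't load the whole file
            for d in digests.values():
                d.update(chunk)
    return {name: d.hexdigest() for name, d in digests.items()}
----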
- -type: keyword - -example: 6.3.9600.17415 - --- - -*`threat.indicator.file.pe.imphash`*:: -+ --- -A hash of the imports in a PE file. An imphash -- or import hash -- can be used to fingerprint binaries even after recompilation or other code-level transformations have occurred, which would change more traditional hash values. -Learn more at https://www.fireeye.com/blog/threat-research/2014/01/tracking-malware-import-hashing.html. - -type: keyword - -example: 0c6803c4e922103c4dca5963aad36ddf - --- - -*`threat.indicator.file.pe.original_file_name`*:: -+ --- -Internal name of the file, provided at compile-time. - -type: keyword - -example: MSPAINT.EXE - --- - -*`threat.indicator.file.pe.product`*:: -+ --- -Internal product name of the file, provided at compile-time. - -type: keyword - -example: Microsoft® Windows® Operating System - --- - -*`threat.indicator.file.size`*:: -+ --- -File size in bytes. -Only relevant when `file.type` is "file". - -type: long - -example: 16384 - --- - -*`threat.indicator.file.target_path`*:: -+ --- -Target path for symlinks. - -type: keyword - --- - -*`threat.indicator.file.target_path.text`*:: -+ --- -type: match_only_text - --- - -*`threat.indicator.file.type`*:: -+ --- -File type (file, dir, or symlink). - -type: keyword - -example: file - --- - -*`threat.indicator.file.uid`*:: -+ --- -The user ID (UID) or security identifier (SID) of the file owner. - -type: keyword - -example: 1001 - --- - -*`threat.indicator.first_seen`*:: -+ --- -The date and time when intelligence source first reported sighting this indicator. - -type: date - -example: 2020-11-05T17:25:47.000Z - --- - -*`threat.indicator.geo.city_name`*:: -+ --- -City name. - -type: keyword - -example: Montreal - --- - -*`threat.indicator.geo.continent_code`*:: -+ --- -Two-letter code representing continent's name. - -type: keyword - -example: NA - --- - -*`threat.indicator.geo.continent_name`*:: -+ --- -Name of the continent. - -type: keyword - -example: North America - --- - -*`threat.indicator.geo.country_iso_code`*:: -+ --- -Country ISO code. - -type: keyword - -example: CA - --- - -*`threat.indicator.geo.country_name`*:: -+ --- -Country name. - -type: keyword - -example: Canada - --- - -*`threat.indicator.geo.location`*:: -+ --- -Longitude and latitude. - -type: geo_point - -example: { "lon": -73.614830, "lat": 45.505918 } - --- - -*`threat.indicator.geo.name`*:: -+ --- -User-defined description of a location, at the level of granularity they care about. -Could be the name of their data centers, the floor number, if this describes a local physical entity, city names. -Not typically used in automated geolocation. - -type: keyword - -example: boston-dc - --- - -*`threat.indicator.geo.postal_code`*:: -+ --- -Postal code associated with the location. -Values appropriate for this field may also be known as a postcode or ZIP code and will vary widely from country to country. - -type: keyword - -example: 94040 - --- - -*`threat.indicator.geo.region_iso_code`*:: -+ --- -Region ISO code. - -type: keyword - -example: CA-QC - --- - -*`threat.indicator.geo.region_name`*:: -+ --- -Region name. - -type: keyword - -example: Quebec - --- - -*`threat.indicator.geo.timezone`*:: -+ --- -The time zone of the location, such as IANA time zone name. - -type: keyword - -example: America/Argentina/Buenos_Aires - --- - -*`threat.indicator.ip`*:: -+ --- -Identifies a threat indicator as an IP address (irrespective of direction). 
- -type: ip - -example: 1.2.3.4 - --- - -*`threat.indicator.last_seen`*:: -+ --- -The date and time when intelligence source last reported sighting this indicator. - -type: date - -example: 2020-11-05T17:25:47.000Z - --- - -*`threat.indicator.marking.tlp`*:: -+ --- -Traffic Light Protocol sharing markings. -Recommended values are: - * WHITE - * GREEN - * AMBER - * RED - -type: keyword - -example: WHITE - --- - -*`threat.indicator.modified_at`*:: -+ --- -The date and time when intelligence source last modified information for this indicator. - -type: date - -example: 2020-11-05T17:25:47.000Z - --- - -*`threat.indicator.port`*:: -+ --- -Identifies a threat indicator as a port number (irrespective of direction). - -type: long - -example: 443 - --- - -*`threat.indicator.provider`*:: -+ --- -The name of the indicator's provider. - -type: keyword - -example: lrz_urlhaus - --- - -*`threat.indicator.reference`*:: -+ --- -Reference URL linking to additional information about this indicator. - -type: keyword - -example: https://system.example.com/indicator/0001234 - --- - -*`threat.indicator.registry.data.bytes`*:: -+ --- -Original bytes written with base64 encoding. -For Windows registry operations, such as SetValueEx and RegQueryValueEx, this corresponds to the data pointed by `lp_data`. This is optional but provides better recoverability and should be populated for REG_BINARY encoded values. - -type: keyword - -example: ZQBuAC0AVQBTAAAAZQBuAAAAAAA= - --- - -*`threat.indicator.registry.data.strings`*:: -+ --- -Content when writing string types. -Populated as an array when writing string data to the registry. For single string registry types (REG_SZ, REG_EXPAND_SZ), this should be an array with one string. For sequences of string with REG_MULTI_SZ, this array will be variable length. For numeric data, such as REG_DWORD and REG_QWORD, this should be populated with the decimal representation (e.g `"1"`). - -type: wildcard - -example: ["C:\rta\red_ttp\bin\myapp.exe"] - --- - -*`threat.indicator.registry.data.type`*:: -+ --- -Standard registry type for encoding contents - -type: keyword - -example: REG_SZ - --- - -*`threat.indicator.registry.hive`*:: -+ --- -Abbreviated name for the hive. - -type: keyword - -example: HKLM - --- - -*`threat.indicator.registry.key`*:: -+ --- -Hive-relative path of keys. - -type: keyword - -example: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe - --- - -*`threat.indicator.registry.path`*:: -+ --- -Full path, including hive, key and value - -type: keyword - -example: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe\Debugger - --- - -*`threat.indicator.registry.value`*:: -+ --- -Name of the value written. - -type: keyword - -example: Debugger - --- - -*`threat.indicator.scanner_stats`*:: -+ --- -Count of AV/EDR vendors that successfully detected malicious file or URL. - -type: long - -example: 4 - --- - -*`threat.indicator.sightings`*:: -+ --- -Number of times this indicator was observed conducting threat activity. - -type: long - -example: 20 - --- - -*`threat.indicator.type`*:: -+ --- -Type of indicator as represented by Cyber Observable in STIX 2.0. 
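The `registry.data.bytes` example above is base64 over the raw value bytes, and for the string registry types those bytes are UTF-16LE with NUL separators (the REG_MULTI_SZ-style splitting below is an assumption about that layout). Decoding the documented example recovers the `registry.data.strings` view of the same data:

[source,python]
----
import base64

raw = base64.b64decode("ZQBuAC0AVQBTAAAAZQBuAAAAAAA=")
text = raw.decode("utf-16-le")                  # Windows registry strings are UTF-16LE
strings = [s for s in text.split("\x00") if s]  # NUL-separated, NUL-terminated
print(strings)  # ['en-US', 'en']
----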
-Recommended values: - * autonomous-system - * artifact - * directory - * domain-name - * email-addr - * file - * ipv4-addr - * ipv6-addr - * mac-addr - * mutex - * port - * process - * software - * url - * user-account - * windows-registry-key - * x509-certificate - -type: keyword - -example: ipv4-addr - --- - -*`threat.indicator.url.domain`*:: -+ --- -Domain of the url, such as "www.elastic.co". -In some cases a URL may refer to an IP and/or port directly, without a domain name. In this case, the IP address would go to the `domain` field. -If the URL contains a literal IPv6 address enclosed by `[` and `]` (IETF RFC 2732), the `[` and `]` characters should also be captured in the `domain` field. - -type: keyword - -example: www.elastic.co - --- - -*`threat.indicator.url.extension`*:: -+ --- -The field contains the file extension from the original request url, excluding the leading dot. -The file extension is only set if it exists, as not every url has a file extension. -The leading period must not be included. For example, the value must be "png", not ".png". -Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz"). - -type: keyword - -example: png - --- - -*`threat.indicator.url.fragment`*:: -+ --- -Portion of the url after the `#`, such as "top". -The `#` is not part of the fragment. - -type: keyword - --- - -*`threat.indicator.url.full`*:: -+ --- -If full URLs are important to your use case, they should be stored in `url.full`, whether this field is reconstructed or present in the event source. - -type: wildcard - -example: https://www.elastic.co:443/search?q=elasticsearch#top - --- - -*`threat.indicator.url.full.text`*:: -+ --- -type: match_only_text - --- - -*`threat.indicator.url.original`*:: -+ --- -Unmodified original url as seen in the event source. -Note that in network monitoring, the observed URL may be a full URL, whereas in access logs, the URL is often just represented as a path. -This field is meant to represent the URL as it was observed, complete or not. - -type: wildcard - -example: https://www.elastic.co:443/search?q=elasticsearch#top or /search?q=elasticsearch - --- - -*`threat.indicator.url.original.text`*:: -+ --- -type: match_only_text - --- - -*`threat.indicator.url.password`*:: -+ --- -Password of the request. - -type: keyword - --- - -*`threat.indicator.url.path`*:: -+ --- -Path of the request, such as "/search". - -type: wildcard - --- - -*`threat.indicator.url.port`*:: -+ --- -Port of the request, such as 443. - -type: long - -example: 443 - -format: string - --- - -*`threat.indicator.url.query`*:: -+ --- -The query field describes the query string of the request, such as "q=elasticsearch". -The `?` is excluded from the query string. If a URL contains no `?`, there is no query field. If there is a `?` but no query, the query field exists with an empty string. The `exists` query can be used to differentiate between the two cases. - -type: keyword - --- - -*`threat.indicator.url.registered_domain`*:: -+ --- -The highest registered url domain, stripped of the subdomain. -For example, the registered domain for "foo.example.com" is "example.com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". - -type: keyword - -example: example.com - --- - -*`threat.indicator.url.scheme`*:: -+ --- -Scheme of the request, such as "https". 
-Note: The `:` is not part of the scheme.
-
-type: keyword
-
-example: https
-
---
-
-*`threat.indicator.url.subdomain`*::
-+
---
-The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain.
-For example, the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period.
-
-type: keyword
-
-example: east
-
---
-
-*`threat.indicator.url.top_level_domain`*::
-+
---
-The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com".
-This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk".
-
-type: keyword
-
-example: co.uk
-
---
-
-*`threat.indicator.url.username`*::
-+
---
-Username of the request.
-
-type: keyword
-
---
-
-*`threat.indicator.x509.alternative_names`*::
-+
---
-List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses.
-
-type: keyword
-
-example: *.elastic.co
-
---
-
-*`threat.indicator.x509.issuer.common_name`*::
-+
---
-List of common name (CN) of issuing certificate authority.
-
-type: keyword
-
-example: Example SHA2 High Assurance Server CA
-
---
-
-*`threat.indicator.x509.issuer.country`*::
-+
---
-List of country (C) codes
-
-type: keyword
-
-example: US
-
---
-
-*`threat.indicator.x509.issuer.distinguished_name`*::
-+
---
-Distinguished name (DN) of issuing certificate authority.
-
-type: keyword
-
-example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA
-
---
-
-*`threat.indicator.x509.issuer.locality`*::
-+
---
-List of locality names (L)
-
-type: keyword
-
-example: Mountain View
-
---
-
-*`threat.indicator.x509.issuer.organization`*::
-+
---
-List of organizations (O) of issuing certificate authority.
-
-type: keyword
-
-example: Example Inc
-
---
-
-*`threat.indicator.x509.issuer.organizational_unit`*::
-+
---
-List of organizational units (OU) of issuing certificate authority.
-
-type: keyword
-
-example: www.example.com
-
---
-
-*`threat.indicator.x509.issuer.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P)
-
-type: keyword
-
-example: California
-
---
-
-*`threat.indicator.x509.not_after`*::
-+
---
-Time at which the certificate is no longer considered valid.
-
-type: date
-
-example: 2020-07-16 03:15:39+00:00
-
---
-
-*`threat.indicator.x509.not_before`*::
-+
---
-Time at which the certificate is first considered valid.
-
-type: date
-
-example: 2019-08-16 01:40:25+00:00
-
---
-
-*`threat.indicator.x509.public_key_algorithm`*::
-+
---
-Algorithm used to generate the public key.
-
-type: keyword
-
-example: RSA
-
---
-
-*`threat.indicator.x509.public_key_curve`*::
-+
---
-The curve used by the elliptic curve public key algorithm. This is algorithm specific.
-
-type: keyword
-
-example: nistp521
-
---
-
-*`threat.indicator.x509.public_key_exponent`*::
-+
---
-Exponent used to derive the public key. This is algorithm specific.
-
-type: long
-
-example: 65537
-
-Field is not indexed.
-
---
-
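Stepping back to the `url.*` sub-fields above: most of them fall out of ordinary URL parsing, and only `registered_domain`, `subdomain`, and `top_level_domain` need a public suffix list, as their descriptions note. A standard-library sketch of the parseable part (suffix-list handling is omitted; third-party packages such as `tldextract` cover it, which is an assumption about tooling, not part of this field set):

[source,python]
----
from urllib.parse import urlsplit

original = "https://user:secret@www.elastic.co:443/search?q=elasticsearch#top"
parts = urlsplit(original)

url_fields = {
    "original": original,
    "scheme": parts.scheme,      # "https" -- no trailing ':'
    "username": parts.username,  # "user"
    "password": parts.password,  # "secret"
    "domain": parts.hostname,    # "www.elastic.co"
    "port": parts.port,          # 443
    "path": parts.path,          # "/search"
    "query": parts.query,        # "q=elasticsearch" -- no leading '?'
    "fragment": parts.fragment,  # "top" -- no leading '#'
}
----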
-*`threat.indicator.x509.public_key_size`*::
-+
---
-The size of the public key space in bits.
-
-type: long
-
-example: 2048
-
---
-
-*`threat.indicator.x509.serial_number`*::
-+
---
-Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters.
-
-type: keyword
-
-example: 55FBB9C7DEBF09809D12CCAA
-
---
-
-*`threat.indicator.x509.signature_algorithm`*::
-+
---
-Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353.
-
-type: keyword
-
-example: SHA256-RSA
-
---
-
-*`threat.indicator.x509.subject.common_name`*::
-+
---
-List of common names (CN) of subject.
-
-type: keyword
-
-example: shared.global.example.net
-
---
-
-*`threat.indicator.x509.subject.country`*::
-+
---
-List of country (C) codes
-
-type: keyword
-
-example: US
-
---
-
-*`threat.indicator.x509.subject.distinguished_name`*::
-+
---
-Distinguished name (DN) of the certificate subject entity.
-
-type: keyword
-
-example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net
-
---
-
-*`threat.indicator.x509.subject.locality`*::
-+
---
-List of locality names (L)
-
-type: keyword
-
-example: San Francisco
-
---
-
-*`threat.indicator.x509.subject.organization`*::
-+
---
-List of organizations (O) of subject.
-
-type: keyword
-
-example: Example, Inc.
-
---
-
-*`threat.indicator.x509.subject.organizational_unit`*::
-+
---
-List of organizational units (OU) of subject.
-
-type: keyword
-
---
-
-*`threat.indicator.x509.subject.state_or_province`*::
-+
---
-List of state or province names (ST, S, or P)
-
-type: keyword
-
-example: California
-
---
-
-*`threat.indicator.x509.version_number`*::
-+
---
-Version of x509 format.
-
-type: keyword
-
-example: 3
-
---
-
-*`threat.software.alias`*::
-+
---
-The alias(es) of the software for a set of related intrusion activity that are tracked by a common name in the security community.
-While not required, you can use a MITRE ATT&CK® associated software description.
-
-type: keyword
-
-example: [ "X-Agent" ]
-
---
-
-*`threat.software.id`*::
-+
---
-The id of the software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®.
-While not required, you can use a MITRE ATT&CK® software id.
-
-type: keyword
-
-example: S0552
-
---
-
-*`threat.software.name`*::
-+
---
-The name of the software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®.
-While not required, you can use a MITRE ATT&CK® software name.
-
-type: keyword
-
-example: AdFind
-
---
-
-*`threat.software.platforms`*::
-+
---
-The platforms of the software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®.
-Recommended values:
- * AWS
- * Azure
- * Azure AD
- * GCP
- * Linux
- * macOS
- * Network
- * Office 365
- * SaaS
- * Windows
-
-While not required, you can use MITRE ATT&CK® software platforms.
-
-type: keyword
-
-example: [ "Windows" ]
-
---
-
-*`threat.software.reference`*::
-+
---
-The reference URL of the software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®.
-While not required, you can use a MITRE ATT&CK® software reference URL.
-
-type: keyword
-
-example: https://attack.mitre.org/software/S0552/
-
---
-
-*`threat.software.type`*::
-+
---
-The type of software used by this threat to conduct behavior commonly modeled using MITRE ATT&CK®.
-Recommended values:
- * Malware
- * Tool
-
-While not required, you can use a MITRE ATT&CK® software type.
-
-type: keyword
-
-example: Tool
-
---
-
-*`threat.tactic.id`*::
-+
---
-The id of the tactic used by this threat. You can use a MITRE ATT&CK® tactic, for example (see https://attack.mitre.org/tactics/TA0002/).
-
-type: keyword
-
-example: TA0002
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`threat.tactic.name`*::
-+
---
-Name of the type of tactic used by this threat. You can use a MITRE ATT&CK® tactic, for example (see https://attack.mitre.org/tactics/TA0002/).
-
-type: keyword
-
-example: Execution
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`threat.tactic.reference`*::
-+
---
-The reference URL of the tactic used by this threat. You can use a MITRE ATT&CK® tactic, for example (see https://attack.mitre.org/tactics/TA0002/).
-
-type: keyword
-
-example: https://attack.mitre.org/tactics/TA0002/
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`threat.technique.id`*::
-+
---
-The id of the technique used by this threat. You can use a MITRE ATT&CK® technique, for example (see https://attack.mitre.org/techniques/T1059/).
-
-type: keyword
-
-example: T1059
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`threat.technique.name`*::
-+
---
-The name of the technique used by this threat. You can use a MITRE ATT&CK® technique, for example (see https://attack.mitre.org/techniques/T1059/).
-
-type: keyword
-
-example: Command and Scripting Interpreter
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`threat.technique.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.technique.reference`*::
-+
---
-The reference URL of the technique used by this threat. You can use a MITRE ATT&CK® technique, for example (see https://attack.mitre.org/techniques/T1059/).
-
-type: keyword
-
-example: https://attack.mitre.org/techniques/T1059/
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`threat.technique.subtechnique.id`*::
-+
---
-The full id of the subtechnique used by this threat. You can use a MITRE ATT&CK® subtechnique, for example (see https://attack.mitre.org/techniques/T1059/001/).
-
-type: keyword
-
-example: T1059.001
-
---
-
-*`threat.technique.subtechnique.name`*::
-+
---
-The name of the subtechnique used by this threat. You can use a MITRE ATT&CK® subtechnique, for example (see https://attack.mitre.org/techniques/T1059/001/).
-
-type: keyword
-
-example: PowerShell
-
---
-
-*`threat.technique.subtechnique.name.text`*::
-+
---
-type: match_only_text
-
---
-
-*`threat.technique.subtechnique.reference`*::
-+
---
-The reference URL of the subtechnique used by this threat. You can use a MITRE ATT&CK® subtechnique, for example (see https://attack.mitre.org/techniques/T1059/001/).
-
-type: keyword
-
-example: https://attack.mitre.org/techniques/T1059/001/
-
---
-
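The tactic, technique, and subtechnique ids above follow a regular MITRE ATT&CK® shape (`TA####`, `T####`, `T####.###`), and the reference URLs are derivable from the ids. A small illustrative sketch of that derivation (the helper is ours, not part of this field set):

[source,python]
----
import re

def attack_reference(identifier: str) -> str:
    """Build a MITRE ATT&CK reference URL from a tactic/technique/subtechnique id."""
    if re.fullmatch(r"TA\d{4}", identifier):                 # tactic, e.g. TA0002
        return f"https://attack.mitre.org/tactics/{identifier}/"
    m = re.fullmatch(r"(T\d{4})(?:\.(\d{3}))?", identifier)  # technique or subtechnique
    if not m:
        raise ValueError(f"not an ATT&CK id: {identifier}")
    technique, sub = m.groups()
    path = f"{technique}/{sub}" if sub else technique
    return f"https://attack.mitre.org/techniques/{path}/"

assert attack_reference("T1059.001") == "https://attack.mitre.org/techniques/T1059/001/"
----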
-[float]
-=== tls
-
-Fields related to a TLS connection. These fields focus on the TLS protocol itself and intentionally avoid in-depth analysis of the related x.509 certificate files.
-
-
-*`tls.cipher`*::
-+
---
-String indicating the cipher used during the current connection.
-
-type: keyword
-
-example: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`tls.client.certificate`*::
-+
---
-PEM-encoded stand-alone certificate offered by the client. This is usually mutually-exclusive of `client.certificate_chain` since this value also exists in that list.
-
-type: keyword
-
-example: MII...
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`tls.client.certificate_chain`*::
-+
---
-Array of PEM-encoded certificates that make up the certificate chain offered by the client. This is usually mutually-exclusive of `client.certificate` since that value should be the first certificate in the chain.
-
-type: keyword
-
-example: ["MII...", "MII..."]
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`tls.client.hash.md5`*::
-+
---
-Certificate fingerprint using the MD5 digest of DER-encoded version of certificate offered by the client. For consistency with other hash values, this value should be formatted as an uppercase hash.
-
-type: keyword
-
-example: 0F76C7F2C55BFD7D8E8B8F4BFBF0C9EC
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`tls.client.hash.sha1`*::
-+
---
-Certificate fingerprint using the SHA1 digest of DER-encoded version of certificate offered by the client. For consistency with other hash values, this value should be formatted as an uppercase hash.
-
-type: keyword
-
-example: 9E393D93138888D288266C2D915214D1D1CCEB2A
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`tls.client.hash.sha256`*::
-+
---
-Certificate fingerprint using the SHA256 digest of DER-encoded version of certificate offered by the client. For consistency with other hash values, this value should be formatted as an uppercase hash.
-
-type: keyword
-
-example: 0687F666A054EF17A08E2F2162EAB4CBC0D265E1D7875BE74BF3C712CA92DAF0
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`tls.client.issuer`*::
-+
---
-Distinguished name of subject of the issuer of the x.509 certificate presented by the client.
-
-type: keyword
-
-example: CN=Example Root CA, OU=Infrastructure Team, DC=example, DC=com
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`tls.client.ja3`*::
-+
---
-A hash that identifies clients based on how they perform an SSL/TLS handshake.
-
-type: keyword
-
-example: d4e5b18d6b55c71272893221c96ba240
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`tls.client.not_after`*::
-+
---
-Date/Time indicating when client certificate is no longer considered valid.
-
-type: date
-
-example: 2021-01-01T00:00:00.000Z
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`tls.client.not_before`*::
-+
---
-Date/Time indicating when client certificate is first considered valid.
-
-type: date
-
-example: 1970-01-01T00:00:00.000Z
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`tls.client.server_name`*::
-+
---
-Also called an SNI, this tells the server the hostname to which the client is attempting to connect. When this value is available, it should be copied to `destination.domain`.
-
-type: keyword
-
-example: www.elastic.co
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`tls.client.subject`*::
-+
---
-Distinguished name of subject of the x.509 certificate presented by the client.
-
-type: keyword
-
-example: CN=myclient, OU=Documentation Team, DC=example, DC=com
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`tls.client.supported_ciphers`*::
-+
---
-Array of ciphers offered by the client during the client hello.
-
-type: keyword
-
-example: ["TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "..."]
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`tls.client.x509.alternative_names`*::
-+
---
-List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses.
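`tls.client.ja3` above is, per the JA3 convention, an MD5 over a comma- and dash-joined summary of the client hello (TLS version, ciphers, extensions, elliptic curves, and point formats). A sketch of just the final hashing step, assuming that summary string has already been pulled from a capture:

[source,python]
----
import hashlib

# Assumed already extracted from the client hello:
# SSLVersion,Ciphers,Extensions,EllipticCurves,EllipticCurvePointFormats
ja3_string = "771,4865-4866-4867,0-23-65281,29-23-24,0"
ja3 = hashlib.md5(ja3_string.encode("ascii")).hexdigest()  # 32 hex chars, as in the example above
----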
- -type: keyword - -example: *.elastic.co - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.issuer.common_name`*:: -+ --- -List of common name (CN) of issuing certificate authority. - -type: keyword - -example: Example SHA2 High Assurance Server CA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.issuer.country`*:: -+ --- -List of country (C) codes - -type: keyword - -example: US - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.issuer.distinguished_name`*:: -+ --- -Distinguished name (DN) of issuing certificate authority. - -type: keyword - -example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.issuer.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: Mountain View - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.issuer.organization`*:: -+ --- -List of organizations (O) of issuing certificate authority. - -type: keyword - -example: Example Inc - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.issuer.organizational_unit`*:: -+ --- -List of organizational units (OU) of issuing certificate authority. - -type: keyword - -example: www.example.com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.issuer.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.not_after`*:: -+ --- -Time at which the certificate is no longer considered valid. - -type: date - -example: 2020-07-16 03:15:39+00:00 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.not_before`*:: -+ --- -Time at which the certificate is first considered valid. - -type: date - -example: 2019-08-16 01:40:25+00:00 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.public_key_algorithm`*:: -+ --- -Algorithm used to generate the public key. - -type: keyword - -example: RSA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.public_key_curve`*:: -+ --- -The curve used by the elliptic curve public key algorithm. This is algorithm specific. - -type: keyword - -example: nistp521 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.public_key_exponent`*:: -+ --- -Exponent used to derive the public key. This is algorithm specific. - -type: long - -example: 65537 - -{yes-icon} {ecs-ref}[ECS] field. - -Field is not indexed. - --- - -*`tls.client.x509.public_key_size`*:: -+ --- -The size of the public key space in bits. - -type: long - -example: 2048 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.serial_number`*:: -+ --- -Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. - -type: keyword - -example: 55FBB9C7DEBF09809D12CCAA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.signature_algorithm`*:: -+ --- -Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353. - -type: keyword - -example: SHA256-RSA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.subject.common_name`*:: -+ --- -List of common names (CN) of subject. - -type: keyword - -example: shared.global.example.net - -{yes-icon} {ecs-ref}[ECS] field. 
- --- - -*`tls.client.x509.subject.country`*:: -+ --- -List of country (C) code - -type: keyword - -example: US - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.subject.distinguished_name`*:: -+ --- -Distinguished name (DN) of the certificate subject entity. - -type: keyword - -example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.subject.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: San Francisco - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.subject.organization`*:: -+ --- -List of organizations (O) of subject. - -type: keyword - -example: Example, Inc. - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.subject.organizational_unit`*:: -+ --- -List of organizational units (OU) of subject. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.subject.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.client.x509.version_number`*:: -+ --- -Version of x509 format. - -type: keyword - -example: 3 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.curve`*:: -+ --- -String indicating the curve used for the given cipher, when applicable. - -type: keyword - -example: secp256r1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.established`*:: -+ --- -Boolean flag indicating if the TLS negotiation was successful and transitioned to an encrypted tunnel. - -type: boolean - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.next_protocol`*:: -+ --- -String indicating the protocol being tunneled. Per the values in the IANA registry (https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#alpn-protocol-ids), this string should be lower case. - -type: keyword - -example: http/1.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.resumed`*:: -+ --- -Boolean flag indicating if this TLS connection was resumed from an existing TLS negotiation. - -type: boolean - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.certificate`*:: -+ --- -PEM-encoded stand-alone certificate offered by the server. This is usually mutually-exclusive of `server.certificate_chain` since this value also exists in that list. - -type: keyword - -example: MII... - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.certificate_chain`*:: -+ --- -Array of PEM-encoded certificates that make up the certificate chain offered by the server. This is usually mutually-exclusive of `server.certificate` since that value should be the first certificate in the chain. - -type: keyword - -example: ["MII...", "MII..."] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.hash.md5`*:: -+ --- -Certificate fingerprint using the MD5 digest of DER-encoded version of certificate offered by the server. For consistency with other hash values, this value should be formatted as an uppercase hash. - -type: keyword - -example: 0F76C7F2C55BFD7D8E8B8F4BFBF0C9EC - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.hash.sha1`*:: -+ --- -Certificate fingerprint using the SHA1 digest of DER-encoded version of certificate offered by the server. For consistency with other hash values, this value should be formatted as an uppercase hash. - -type: keyword - -example: 9E393D93138888D288266C2D915214D1D1CCEB2A - -{yes-icon} {ecs-ref}[ECS] field. 
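The `tls.*.hash.*` fields here are fingerprints over the DER-encoded certificate, uppercased for consistency. A sketch, assuming the DER bytes are already in hand:

[source,python]
----
import hashlib

def certificate_fingerprints(der_bytes: bytes) -> dict:
    """Uppercase hex fingerprints of a DER-encoded certificate (tls.*.hash.*)."""
    return {
        algo: hashlib.new(algo, der_bytes).hexdigest().upper()
        for algo in ("md5", "sha1", "sha256")
    }
----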
- --- - -*`tls.server.hash.sha256`*:: -+ --- -Certificate fingerprint using the SHA256 digest of DER-encoded version of certificate offered by the server. For consistency with other hash values, this value should be formatted as an uppercase hash. - -type: keyword - -example: 0687F666A054EF17A08E2F2162EAB4CBC0D265E1D7875BE74BF3C712CA92DAF0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.issuer`*:: -+ --- -Subject of the issuer of the x.509 certificate presented by the server. - -type: keyword - -example: CN=Example Root CA, OU=Infrastructure Team, DC=example, DC=com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.ja3s`*:: -+ --- -A hash that identifies servers based on how they perform an SSL/TLS handshake. - -type: keyword - -example: 394441ab65754e2207b1e1b457b3641d - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.not_after`*:: -+ --- -Timestamp indicating when server certificate is no longer considered valid. - -type: date - -example: 2021-01-01T00:00:00.000Z - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.not_before`*:: -+ --- -Timestamp indicating when server certificate is first considered valid. - -type: date - -example: 1970-01-01T00:00:00.000Z - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.subject`*:: -+ --- -Subject of the x.509 certificate presented by the server. - -type: keyword - -example: CN=www.example.com, OU=Infrastructure Team, DC=example, DC=com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.alternative_names`*:: -+ --- -List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. - -type: keyword - -example: *.elastic.co - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.issuer.common_name`*:: -+ --- -List of common name (CN) of issuing certificate authority. - -type: keyword - -example: Example SHA2 High Assurance Server CA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.issuer.country`*:: -+ --- -List of country (C) codes - -type: keyword - -example: US - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.issuer.distinguished_name`*:: -+ --- -Distinguished name (DN) of issuing certificate authority. - -type: keyword - -example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.issuer.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: Mountain View - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.issuer.organization`*:: -+ --- -List of organizations (O) of issuing certificate authority. - -type: keyword - -example: Example Inc - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.issuer.organizational_unit`*:: -+ --- -List of organizational units (OU) of issuing certificate authority. - -type: keyword - -example: www.example.com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.issuer.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.not_after`*:: -+ --- -Time at which the certificate is no longer considered valid. - -type: date - -example: 2020-07-16 03:15:39+00:00 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.not_before`*:: -+ --- -Time at which the certificate is first considered valid. 
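Filling the `tls.server.x509.*` fields from an actual certificate is mechanical once it is parsed. A sketch using the third-party `cryptography` package (an assumed dependency; only a few of the fields are shown):

[source,python]
----
from cryptography import x509

def server_x509_fields(pem_bytes: bytes) -> dict:
    cert = x509.load_pem_x509_certificate(pem_bytes)
    return {
        "issuer.distinguished_name": cert.issuer.rfc4514_string(),
        "subject.distinguished_name": cert.subject.rfc4514_string(),
        "not_before": cert.not_valid_before,       # datetime
        "not_after": cert.not_valid_after,         # datetime
        # No colons, uppercase, as the serial_number description asks:
        "serial_number": format(cert.serial_number, "X"),
        "version_number": cert.version.value + 1,  # x509.Version.v3.value == 2
    }
----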
- -type: date - -example: 2019-08-16 01:40:25+00:00 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.public_key_algorithm`*:: -+ --- -Algorithm used to generate the public key. - -type: keyword - -example: RSA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.public_key_curve`*:: -+ --- -The curve used by the elliptic curve public key algorithm. This is algorithm specific. - -type: keyword - -example: nistp521 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.public_key_exponent`*:: -+ --- -Exponent used to derive the public key. This is algorithm specific. - -type: long - -example: 65537 - -{yes-icon} {ecs-ref}[ECS] field. - -Field is not indexed. - --- - -*`tls.server.x509.public_key_size`*:: -+ --- -The size of the public key space in bits. - -type: long - -example: 2048 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.serial_number`*:: -+ --- -Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. - -type: keyword - -example: 55FBB9C7DEBF09809D12CCAA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.signature_algorithm`*:: -+ --- -Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353. - -type: keyword - -example: SHA256-RSA - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.subject.common_name`*:: -+ --- -List of common names (CN) of subject. - -type: keyword - -example: shared.global.example.net - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.subject.country`*:: -+ --- -List of country (C) code - -type: keyword - -example: US - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.subject.distinguished_name`*:: -+ --- -Distinguished name (DN) of the certificate subject entity. - -type: keyword - -example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.subject.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: San Francisco - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.subject.organization`*:: -+ --- -List of organizations (O) of subject. - -type: keyword - -example: Example, Inc. - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.subject.organizational_unit`*:: -+ --- -List of organizational units (OU) of subject. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.subject.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.server.x509.version_number`*:: -+ --- -Version of x509 format. - -type: keyword - -example: 3 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.version`*:: -+ --- -Numeric part of the version parsed from the original string. - -type: keyword - -example: 1.2 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`tls.version_protocol`*:: -+ --- -Normalized lowercase protocol name parsed from original string. - -type: keyword - -example: tls - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`span.id`*:: -+ --- -Unique identifier of the span within the scope of its trace. -A span represents an operation within a transaction, such as a request to another service, or a database query. 
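The id examples in this group (a 32-hex-character `trace.id`, 16-hex-character `span.id`/`transaction.id`) line up with the W3C Trace Context `traceparent` header. A sketch of pulling them out of such a header (the header format is an assumption about the transport, not part of this field set):

[source,python]
----
# W3C trace context layout: version-traceid-parentid-flags
traceparent = "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"

version, trace_id, parent_id, flags = traceparent.split("-")
assert len(trace_id) == 32 and len(parent_id) == 16

event = {
    "trace": {"id": trace_id},         # groups every event of one distributed request
    "transaction": {"id": parent_id},  # or span.id, depending on the event kind
}
----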
-
-type: keyword
-
-example: 3ff9a8981b7ccd5a
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`trace.id`*::
-+
---
-Unique identifier of the trace.
-A trace groups multiple events like transactions that belong together. For example, a user request handled by multiple inter-connected services.
-
-type: keyword
-
-example: 4bf92f3577b34da6a3ce929d0e0e4736
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`transaction.id`*::
-+
---
-Unique identifier of the transaction within the scope of its trace.
-A transaction is the highest level of work measured within a service, such as a request to a server.
-
-type: keyword
-
-example: 00f067aa0ba902b7
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-[float]
-=== url
-
-URL fields provide support for complete or partial URLs, and support breaking them down into scheme, domain, path, and so on.
-
-
-*`url.domain`*::
-+
---
-Domain of the url, such as "www.elastic.co".
-In some cases a URL may refer to an IP and/or port directly, without a domain name. In this case, the IP address would go to the `domain` field.
-If the URL contains a literal IPv6 address enclosed by `[` and `]` (IETF RFC 2732), the `[` and `]` characters should also be captured in the `domain` field.
-
-type: keyword
-
-example: www.elastic.co
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`url.extension`*::
-+
---
-The field contains the file extension from the original request url, excluding the leading dot.
-The file extension is only set if it exists, as not every url has a file extension.
-The leading period must not be included. For example, the value must be "png", not ".png".
-Note that when the file name has multiple extensions (example.tar.gz), only the last one should be captured ("gz", not "tar.gz").
-
-type: keyword
-
-example: png
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`url.fragment`*::
-+
---
-Portion of the url after the `#`, such as "top".
-The `#` is not part of the fragment.
-
-type: keyword
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`url.full`*::
-+
---
-If full URLs are important to your use case, they should be stored in `url.full`, whether this field is reconstructed or present in the event source.
-
-type: wildcard
-
-example: https://www.elastic.co:443/search?q=elasticsearch#top
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`url.full.text`*::
-+
---
-type: match_only_text
-
---
-
-*`url.original`*::
-+
---
-Unmodified original url as seen in the event source.
-Note that in network monitoring, the observed URL may be a full URL, whereas in access logs, the URL is often just represented as a path.
-This field is meant to represent the URL as it was observed, complete or not.
-
-type: wildcard
-
-example: https://www.elastic.co:443/search?q=elasticsearch#top or /search?q=elasticsearch
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`url.original.text`*::
-+
---
-type: match_only_text
-
---
-
-*`url.password`*::
-+
---
-Password of the request.
-
-type: keyword
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`url.path`*::
-+
---
-Path of the request, such as "/search".
-
-type: wildcard
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`url.port`*::
-+
---
-Port of the request, such as 443.
-
-type: long
-
-example: 443
-
-format: string
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
-*`url.query`*::
-+
---
-The query field describes the query string of the request, such as "q=elasticsearch".
-The `?` is excluded from the query string. If a URL contains no `?`, there is no query field. If there is a `?` but no query, the query field exists with an empty string. The `exists` query can be used to differentiate between the two cases.
-
-type: keyword
-
-{yes-icon} {ecs-ref}[ECS] field.
-
---
-
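The `url.query` description above notes that the `exists` query separates "no `?` at all" from "a `?` with an empty query". A sketch of both checks as Elasticsearch query DSL bodies written as Python dicts (client wiring is assumed and not shown):

[source,python]
----
# The field is missing entirely (the URL had no '?'):
no_query = {"query": {"bool": {"must_not": {"exists": {"field": "url.query"}}}}}

# The field exists but holds an empty string (a '?' with nothing after it):
empty_query = {"query": {"term": {"url.query": ""}}}
----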
The `exists` query can be used to differentiate between the two cases. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.registered_domain`*:: -+ --- -The highest registered url domain, stripped of the subdomain. -For example, the registered domain for "foo.example.com" is "example.com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last two labels will not work well for TLDs such as "co.uk". - -type: keyword - -example: example.com - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.scheme`*:: -+ --- -Scheme of the request, such as "https". -Note: The `:` is not part of the scheme. - -type: keyword - -example: https - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.subdomain`*:: -+ --- -The subdomain portion of a fully qualified domain name includes all of the names except the host name under the registered_domain. In a partially qualified domain, or if the qualification level of the full name cannot be determined, subdomain contains all of the names below the registered domain. -For example, the subdomain portion of "www.east.mydomain.co.uk" is "east". If the domain has multiple levels of subdomain, such as "sub2.sub1.example.com", the subdomain field should contain "sub2.sub1", with no trailing period. - -type: keyword - -example: east - --- - -*`url.top_level_domain`*:: -+ --- -The effective top level domain (eTLD), also known as the domain suffix, is the last part of the domain name. For example, the top level domain for example.com is "com". -This value can be determined precisely with a list like the public suffix list (http://publicsuffix.org). Trying to approximate this by simply taking the last label will not work well for effective TLDs such as "co.uk". - -type: keyword - -example: co.uk - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`url.username`*:: -+ --- -Username of the request. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== user - -The user fields describe information about the user that is relevant to the event. -Fields can have one entry or multiple entries. If a user has more than one id, provide an array that includes all of them. - - -*`user.changes.domain`*:: -+ --- -Name of the directory the user is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`user.changes.email`*:: -+ --- -User email address. - -type: keyword - --- - -*`user.changes.full_name`*:: -+ --- -User's full name, if available. - -type: keyword - -example: Albert Einstein - --- - -*`user.changes.full_name.text`*:: -+ --- -type: match_only_text - --- - -*`user.changes.group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`user.changes.group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - --- - -*`user.changes.group.name`*:: -+ --- -Name of the group. - -type: keyword - --- - -*`user.changes.hash`*:: -+ --- -Unique user hash to correlate information for a user in anonymized form. -Useful if `user.id` or `user.name` contain confidential information and cannot be used. - -type: keyword - --- - -*`user.changes.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - -example: S-1-5-21-202424912787-2692429404-2351956786-1000 - --- - -*`user.changes.name`*:: -+ --- -Short name or login of the user.
- -type: keyword - -example: a.einstein - --- - -*`user.changes.name.text`*:: -+ --- -type: match_only_text - --- - -*`user.changes.roles`*:: -+ --- -Array of user roles at the time of the event. - -type: keyword - -example: ["kibana_admin", "reporting_user"] - --- - -*`user.domain`*:: -+ --- -Name of the directory the user is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.effective.domain`*:: -+ --- -Name of the directory the user is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`user.effective.email`*:: -+ --- -User email address. - -type: keyword - --- - -*`user.effective.full_name`*:: -+ --- -User's full name, if available. - -type: keyword - -example: Albert Einstein - --- - -*`user.effective.full_name.text`*:: -+ --- -type: match_only_text - --- - -*`user.effective.group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`user.effective.group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - --- - -*`user.effective.group.name`*:: -+ --- -Name of the group. - -type: keyword - --- - -*`user.effective.hash`*:: -+ --- -Unique user hash to correlate information for a user in anonymized form. -Useful if `user.id` or `user.name` contain confidential information and cannot be used. - -type: keyword - --- - -*`user.effective.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - -example: S-1-5-21-202424912787-2692429404-2351956786-1000 - --- - -*`user.effective.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: a.einstein - --- - -*`user.effective.name.text`*:: -+ --- -type: match_only_text - --- - -*`user.effective.roles`*:: -+ --- -Array of user roles at the time of the event. - -type: keyword - -example: ["kibana_admin", "reporting_user"] - --- - -*`user.email`*:: -+ --- -User email address. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.full_name`*:: -+ --- -User's full name, if available. - -type: keyword - -example: Albert Einstein - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.full_name.text`*:: -+ --- -type: match_only_text - --- - -*`user.group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.group.name`*:: -+ --- -Name of the group. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.hash`*:: -+ --- -Unique user hash to correlate information for a user in anonymized form. -Useful if `user.id` or `user.name` contain confidential information and cannot be used. - -type: keyword - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - -example: S-1-5-21-202424912787-2692429404-2351956786-1000 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: a.einstein - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.name.text`*:: -+ --- -type: match_only_text - --- - -*`user.roles`*:: -+ --- -Array of user roles at the time of the event. 
- -type: keyword - -example: ["kibana_admin", "reporting_user"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user.target.domain`*:: -+ --- -Name of the directory the user is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`user.target.email`*:: -+ --- -User email address. - -type: keyword - --- - -*`user.target.full_name`*:: -+ --- -User's full name, if available. - -type: keyword - -example: Albert Einstein - --- - -*`user.target.full_name.text`*:: -+ --- -type: match_only_text - --- - -*`user.target.group.domain`*:: -+ --- -Name of the directory the group is a member of. -For example, an LDAP or Active Directory domain name. - -type: keyword - --- - -*`user.target.group.id`*:: -+ --- -Unique identifier for the group on the system/platform. - -type: keyword - --- - -*`user.target.group.name`*:: -+ --- -Name of the group. - -type: keyword - --- - -*`user.target.hash`*:: -+ --- -Unique user hash to correlate information for a user in anonymized form. -Useful if `user.id` or `user.name` contain confidential information and cannot be used. - -type: keyword - --- - -*`user.target.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - -example: S-1-5-21-202424912787-2692429404-2351956786-1000 - --- - -*`user.target.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: a.einstein - --- - -*`user.target.name.text`*:: -+ --- -type: match_only_text - --- - -*`user.target.roles`*:: -+ --- -Array of user roles at the time of the event. - -type: keyword - -example: ["kibana_admin", "reporting_user"] - --- - -[float] -=== user_agent - -The user_agent fields normally come from a browser request. -They often show up in web service logs coming from the parsed user agent string. - - -*`user_agent.device.name`*:: -+ --- -Name of the device. - -type: keyword - -example: iPhone - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.name`*:: -+ --- -Name of the user agent. - -type: keyword - -example: Safari - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.original`*:: -+ --- -Unparsed user_agent string. - -type: keyword - -example: Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.original.text`*:: -+ --- -type: match_only_text - --- - -*`user_agent.os.family`*:: -+ --- -OS family (such as redhat, debian, freebsd, windows). - -type: keyword - -example: debian - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.full`*:: -+ --- -Operating system name, including the version or code name. - -type: keyword - -example: Mac OS Mojave - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.full.text`*:: -+ --- -type: match_only_text - --- - -*`user_agent.os.kernel`*:: -+ --- -Operating system kernel version as a raw string. - -type: keyword - -example: 4.4.0-112-generic - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.name`*:: -+ --- -Operating system name, without the version. - -type: keyword - -example: Mac OS X - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.name.text`*:: -+ --- -type: match_only_text - --- - -*`user_agent.os.platform`*:: -+ --- -Operating system platform (such centos, ubuntu, windows). - -type: keyword - -example: darwin - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.os.type`*:: -+ --- -Use the `os.type` field to categorize the operating system into one of the broad commercial families. 
-One of the following values should be used (lowercase): linux, macos, unix, windows. -If the OS you're dealing with is not in the list, the field should not be populated. Please let us know by opening an issue with ECS to propose its addition. - -type: keyword - -example: macos - --- - -*`user_agent.os.version`*:: -+ --- -Operating system version as a raw string. - -type: keyword - -example: 10.14.1 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`user_agent.version`*:: -+ --- -Version of the user agent. - -type: keyword - -example: 12.0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== vlan - -The VLAN fields are used to identify 802.1q tag(s) of a packet, as well as ingress and egress VLAN associations of an observer in relation to a specific packet or connection. -Network.vlan fields are used to record a single VLAN tag, or the outer tag in the case of q-in-q encapsulations, for a packet or connection as observed, typically provided by a network sensor (e.g. Zeek, Wireshark) passively reporting on traffic. -Network.inner VLAN fields are used to report inner q-in-q 802.1q tags (multiple 802.1q encapsulations) as observed, typically provided by a network sensor (e.g. Zeek, Wireshark) passively reporting on traffic. Network.inner VLAN fields should only be used in addition to network.vlan fields to indicate q-in-q tagging. -Observer.ingress and observer.egress VLAN values are used to record observer specific information when observer events contain discrete ingress and egress VLAN information, typically provided by firewalls, routers, or load balancers. - - -*`vlan.id`*:: -+ --- -VLAN ID as reported by the observer. - -type: keyword - -example: 10 - --- - -*`vlan.name`*:: -+ --- -Optional VLAN name as reported by the observer. - -type: keyword - -example: outside - --- - -[float] -=== vulnerability - -The vulnerability fields describe information about a vulnerability that is relevant to an event. - - -*`vulnerability.category`*:: -+ --- -The type of system or architecture that the vulnerability affects. These may be platform-specific (for example, Debian or SUSE) or general (for example, Database or Firewall). For example (https://qualysguard.qualys.com/qwebhelp/fo_portal/knowledgebase/vulnerability_categories.htm[Qualys vulnerability categories]) -This field must be an array. - -type: keyword - -example: ["Firewall"] - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`vulnerability.classification`*:: -+ --- -The classification of the vulnerability scoring system. For example (https://www.first.org/cvss/) - -type: keyword - -example: CVSS - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`vulnerability.description`*:: -+ --- -The description of the vulnerability that provides additional context of the vulnerability. For example (https://cve.mitre.org/about/faqs.html#cve_entry_descriptions_created[Common Vulnerabilities and Exposure CVE description]) - -type: keyword - -example: In macOS before 2.12.6, there is a vulnerability in the RPC... - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`vulnerability.description.text`*:: -+ --- -type: match_only_text - --- - -*`vulnerability.enumeration`*:: -+ --- -The type of identifier used for this vulnerability. For example (https://cve.mitre.org/about/) - -type: keyword - -example: CVE - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`vulnerability.id`*:: -+ --- -The identification (ID) is the number portion of a vulnerability entry. It includes a unique identification number for the vulnerability.
For example (https://cve.mitre.org/about/faqs.html#what_is_cve_id[Common Vulnerabilities and Exposure CVE ID]) - -type: keyword - -example: CVE-2019-00001 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`vulnerability.reference`*:: -+ --- -A resource that provides additional information, context, and mitigations for the identified vulnerability. - -type: keyword - -example: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6111 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`vulnerability.report_id`*:: -+ --- -The report or scan identification number. - -type: keyword - -example: 20191018.0001 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`vulnerability.scanner.vendor`*:: -+ --- -The name of the vulnerability scanner vendor. - -type: keyword - -example: Tenable - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`vulnerability.score.base`*:: -+ --- -Scores can range from 0.0 to 10.0, with 10.0 being the most severe. -Base scores cover an assessment for exploitability metrics (attack vector, complexity, privileges, and user interaction), impact metrics (confidentiality, integrity, and availability), and scope. For example (https://www.first.org/cvss/specification-document) - -type: float - -example: 5.5 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`vulnerability.score.environmental`*:: -+ --- -Scores can range from 0.0 to 10.0, with 10.0 being the most severe. -Environmental scores cover an assessment for any modified Base metrics, confidentiality, integrity, and availability requirements. For example (https://www.first.org/cvss/specification-document) - -type: float - -example: 5.5 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`vulnerability.score.temporal`*:: -+ --- -Scores can range from 0.0 to 10.0, with 10.0 being the most severe. -Temporal scores cover an assessment for code maturity, remediation level, and confidence. For example (https://www.first.org/cvss/specification-document) - -type: float - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`vulnerability.score.version`*:: -+ --- -The National Vulnerability Database (NVD) provides qualitative severity rankings of "Low", "Medium", and "High" for CVSS v2.0 base score ranges in addition to the severity ratings for CVSS v3.0 as they are defined in the CVSS v3.0 specification. -CVSS is owned and managed by FIRST.Org, Inc. (FIRST), a US-based non-profit organization, whose mission is to help computer security incident response teams across the world. For example (https://nvd.nist.gov/vuln-metrics/cvss) - -type: keyword - -example: 2.0 - -{yes-icon} {ecs-ref}[ECS] field. - --- - -*`vulnerability.severity`*:: -+ --- -The severity of the vulnerability can help with metrics and internal prioritization regarding remediation. For example (https://nvd.nist.gov/vuln-metrics/cvss) - -type: keyword - -example: Critical - -{yes-icon} {ecs-ref}[ECS] field. - --- - -[float] -=== x509 - -This implements the common core fields for x509 certificates. This information is likely logged with TLS sessions, digital signatures found in executable binaries, S/MIME information in email bodies, or analysis of files on disk. -When the certificate relates to a file, use the fields at `file.x509`. When hashes of the DER-encoded certificate are available, the `hash` data set should be populated as well (e.g. `file.hash.sha256`). -Events that contain certificate information about network connections should use the x509 fields under the relevant TLS fields: `tls.server.x509` and/or `tls.client.x509`.
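Because these certificate fields are indexed (except where noted as not indexed), they can be queried directly in {es}. The following is an illustrative sketch only, not part of the original field reference: it assumes a local {es} instance on `localhost:9200` and the default `apm-*` indices, and uses the `tls.server.x509.public_key_size` field documented above to surface TLS sessions whose server certificate carries a public key smaller than 2048 bits.

[source,sh]
----
# Illustrative query only. Assumes a local {es} on localhost:9200
# and the default apm-* indices.
curl -s -H 'Content-Type: application/json' 'http://localhost:9200/apm-*/_search' -d '
{
  "query": {
    "range": { "tls.server.x509.public_key_size": { "lt": 2048 } }
  },
  "_source": [
    "tls.server.x509.subject.common_name",
    "tls.server.x509.serial_number"
  ]
}'
----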
- - -*`x509.alternative_names`*:: -+ --- -List of subject alternative names (SAN). Name types vary by certificate authority and certificate type but commonly contain IP addresses, DNS names (and wildcards), and email addresses. - -type: keyword - -example: *.elastic.co - --- - -*`x509.issuer.common_name`*:: -+ --- -List of common name (CN) of issuing certificate authority. - -type: keyword - -example: Example SHA2 High Assurance Server CA - --- - -*`x509.issuer.country`*:: -+ --- -List of country (C) codes - -type: keyword - -example: US - --- - -*`x509.issuer.distinguished_name`*:: -+ --- -Distinguished name (DN) of issuing certificate authority. - -type: keyword - -example: C=US, O=Example Inc, OU=www.example.com, CN=Example SHA2 High Assurance Server CA - --- - -*`x509.issuer.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: Mountain View - --- - -*`x509.issuer.organization`*:: -+ --- -List of organizations (O) of issuing certificate authority. - -type: keyword - -example: Example Inc - --- - -*`x509.issuer.organizational_unit`*:: -+ --- -List of organizational units (OU) of issuing certificate authority. - -type: keyword - -example: www.example.com - --- - -*`x509.issuer.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - --- - -*`x509.not_after`*:: -+ --- -Time at which the certificate is no longer considered valid. - -type: date - -example: 2020-07-16 03:15:39+00:00 - --- - -*`x509.not_before`*:: -+ --- -Time at which the certificate is first considered valid. - -type: date - -example: 2019-08-16 01:40:25+00:00 - --- - -*`x509.public_key_algorithm`*:: -+ --- -Algorithm used to generate the public key. - -type: keyword - -example: RSA - --- - -*`x509.public_key_curve`*:: -+ --- -The curve used by the elliptic curve public key algorithm. This is algorithm specific. - -type: keyword - -example: nistp521 - --- - -*`x509.public_key_exponent`*:: -+ --- -Exponent used to derive the public key. This is algorithm specific. - -type: long - -example: 65537 - -Field is not indexed. - --- - -*`x509.public_key_size`*:: -+ --- -The size of the public key space in bits. - -type: long - -example: 2048 - --- - -*`x509.serial_number`*:: -+ --- -Unique serial number issued by the certificate authority. For consistency, if this value is alphanumeric, it should be formatted without colons and uppercase characters. - -type: keyword - -example: 55FBB9C7DEBF09809D12CCAA - --- - -*`x509.signature_algorithm`*:: -+ --- -Identifier for certificate signature algorithm. We recommend using names found in Go Lang Crypto library. See https://github.com/golang/go/blob/go1.14/src/crypto/x509/x509.go#L337-L353. - -type: keyword - -example: SHA256-RSA - --- - -*`x509.subject.common_name`*:: -+ --- -List of common names (CN) of subject. - -type: keyword - -example: shared.global.example.net - --- - -*`x509.subject.country`*:: -+ --- -List of country (C) code - -type: keyword - -example: US - --- - -*`x509.subject.distinguished_name`*:: -+ --- -Distinguished name (DN) of the certificate subject entity. - -type: keyword - -example: C=US, ST=California, L=San Francisco, O=Example, Inc., CN=shared.global.example.net - --- - -*`x509.subject.locality`*:: -+ --- -List of locality names (L) - -type: keyword - -example: San Francisco - --- - -*`x509.subject.organization`*:: -+ --- -List of organizations (O) of subject. - -type: keyword - -example: Example, Inc. 
- --- - -*`x509.subject.organizational_unit`*:: -+ --- -List of organizational units (OU) of subject. - -type: keyword - --- - -*`x509.subject.state_or_province`*:: -+ --- -List of state or province names (ST, S, or P) - -type: keyword - -example: California - --- - -*`x509.version_number`*:: -+ --- -Version of x509 format. - -type: keyword - -example: 3 - --- - -[[exported-fields-host-processor]] -== Host fields - -Info collected for the host machine. - - - - -*`host.containerized`*:: -+ --- -If the host is a container. - - -type: boolean - --- - -*`host.os.build`*:: -+ --- -OS build information. - - -type: keyword - -example: 18D109 - --- - -*`host.os.codename`*:: -+ --- -OS codename, if any. - - -type: keyword - -example: stretch - --- - -[[exported-fields-kubernetes-processor]] -== Kubernetes fields - -Kubernetes metadata added by the kubernetes processor - - - - -*`kubernetes.pod.name`*:: -+ --- -Kubernetes pod name - - -type: keyword - --- - -*`kubernetes.pod.uid`*:: -+ --- -Kubernetes Pod UID - - -type: keyword - --- - -*`kubernetes.pod.ip`*:: -+ --- -Kubernetes Pod IP - - -type: ip - --- - - -*`kubernetes.namespace.name`*:: -+ --- -Kubernetes namespace name - - -type: keyword - --- - -*`kubernetes.namespace.uuid`*:: -+ --- -Kubernetes namespace uuid - - -type: keyword - --- - -*`kubernetes.namespace.labels.*`*:: -+ --- -Kubernetes namespace labels map - - -type: object - --- - -*`kubernetes.namespace.annotations.*`*:: -+ --- -Kubernetes namespace annotations map - - -type: object - --- - -*`kubernetes.node.name`*:: -+ --- -Kubernetes node name - - -type: keyword - --- - -*`kubernetes.node.hostname`*:: -+ --- -Kubernetes hostname as reported by the node’s kernel - - -type: keyword - --- - -*`kubernetes.labels.*`*:: -+ --- -Kubernetes labels map - - -type: object - --- - -*`kubernetes.annotations.*`*:: -+ --- -Kubernetes annotations map - - -type: object - --- - -*`kubernetes.selectors.*`*:: -+ --- -Kubernetes selectors map - - -type: object - --- - -*`kubernetes.replicaset.name`*:: -+ --- -Kubernetes replicaset name - - -type: keyword - --- - -*`kubernetes.deployment.name`*:: -+ --- -Kubernetes deployment name - - -type: keyword - --- - -*`kubernetes.statefulset.name`*:: -+ --- -Kubernetes statefulset name - - -type: keyword - --- - -*`kubernetes.container.name`*:: -+ --- -Kubernetes container name (different than the name from the runtime) - - -type: keyword - --- - -[[exported-fields-process]] -== Process fields - -Process metadata fields - - - - -*`process.exe`*:: -+ --- -type: alias - -alias to: process.executable - --- - -[float] -=== owner - -Process owner information. - - -*`process.owner.id`*:: -+ --- -Unique identifier of the user. - -type: keyword - --- - -*`process.owner.name`*:: -+ --- -Short name or login of the user. - -type: keyword - -example: albert - --- - -*`process.owner.name.text`*:: -+ --- -type: text - --- - -[[exported-fields-system]] -== System Metrics fields - -System status metrics, like CPU and memory usage, that are collected from the operating system. - - - -[float] -=== system - -`system` contains local system metrics. - - - -[float] -=== cpu - -`cpu` contains local CPU stats. - - - -*`system.cpu.total.norm.pct`*:: -+ --- -The percentage of CPU time spent by the process since the last event. This value is normalized by the number of CPU cores and it ranges from 0 to 100%. - - -type: scaled_float - -format: percent - --- - -[float] -=== memory - -`memory` contains local memory stats. - - - -*`system.memory.total`*:: -+ --- -Total memory. 
- - -type: long - -format: bytes - --- - -[float] -=== actual - -Actual memory used and free. - - - -*`system.memory.actual.free`*:: -+ --- -Actual free memory in bytes. It is calculated based on the OS. On Linux it consists of the free memory plus caches and buffers. On OSX it is a sum of free memory and the inactive memory. On Windows, it is equal to `system.memory.free`. - - -type: long - -format: bytes - --- - -[float] -=== process - -`process` contains process metadata, CPU metrics, and memory metrics. - - - -[float] -=== cpu - -`cpu` contains local CPU stats. - - - -*`system.process.cpu.total.norm.pct`*:: -+ --- -The percentage of CPU time spent by the process since the last event. This value is normalized by the number of CPU cores and it ranges from 0 to 100%. - - -type: scaled_float - -format: percent - --- - -[float] -=== memory - -Memory-specific statistics per process. - - -*`system.process.memory.size`*:: -+ --- -The total virtual memory the process has. - - -type: long - -format: bytes - --- - -*`system.process.memory.rss.bytes`*:: -+ --- -The Resident Set Size. The amount of memory the process occupied in main memory (RAM). - - -type: long - -format: bytes - --- - -[float] -=== cgroup - -Metrics and limits for the cgroup, collected by APM agents on Linux. - - -[float] -=== cpu - -CPU-specific cgroup metrics and limits. - - -*`system.process.cgroup.cpu.id`*:: -+ --- -ID for the current cgroup CPU. - -type: keyword - --- - -[float] -=== cfs - -Completely Fair Scheduler (CFS) cgroup metrics. - - -*`system.process.cgroup.cpu.cfs.period.us`*:: -+ --- -CFS period in microseconds. - -type: long - --- - -*`system.process.cgroup.cpu.cfs.quota.us`*:: -+ --- -CFS quota in microseconds. - -type: long - --- - -*`system.process.cgroup.cpu.stats.periods`*:: -+ --- -Number of periods seen by the CPU. - -type: long - --- - -*`system.process.cgroup.cpu.stats.throttled.periods`*:: -+ --- -Number of throttled periods seen by the CPU. - -type: long - --- - -*`system.process.cgroup.cpu.stats.throttled.ns`*:: -+ --- -Nanoseconds spent throttled seen by the CPU. - -type: long - --- - -[float] -=== cpuacct - -CPU Accounting-specific cgroup metrics and limits. - - -*`system.process.cgroup.cpuacct.id`*:: -+ --- -ID for the current cgroup CPU. - -type: keyword - --- - -*`system.process.cgroup.cpuacct.total.ns`*:: -+ --- -Total CPU time for the current cgroup CPU in nanoseconds. - -type: long - --- - -[float] -=== memory - -Memory-specific cgroup metrics and limits. - - -*`system.process.cgroup.memory.mem.limit.bytes`*:: -+ --- -Memory limit for the current cgroup slice. - -type: long - -format: bytes - --- - -*`system.process.cgroup.memory.mem.usage.bytes`*:: -+ --- -Memory usage by the current cgroup slice. - -type: long - -format: bytes - --- diff --git a/docs/legacy/getting-started-apm-server.asciidoc b/docs/legacy/getting-started-apm-server.asciidoc deleted file mode 100644 index c4c84a55e08..00000000000 --- a/docs/legacy/getting-started-apm-server.asciidoc +++ /dev/null @@ -1,344 +0,0 @@ -[[getting-started-apm-server]] -== Getting started with APM Server - -++++ -Get started -++++ - -IMPORTANT: {deprecation-notice-installation} - -IMPORTANT: Starting in version 8.0.0, {fleet} uses the APM integration to set up and manage APM index templates, -{ilm-init} policies, and ingest pipelines. APM Server will only send data to {es} _after_ the APM integration has been installed. - -The easiest way to get started with Elastic APM is by using our -{ess-product}[hosted {es} Service] on {ecloud}. 
-The {es} Service is available on AWS, GCP, and Azure, -and automatically configures APM Server to work with {es} and {kib}. - -[float] -=== Hosted {es} Service - -Skip managing your own {es}, {kib}, and APM Server by using our -{ess-product}[hosted {es} Service] on -{ecloud}. - -image::images/apm-architecture-cloud.png[Install Elastic APM on cloud] - -{ess-trial}[Try out the {es} Service for free], -then see {cloud}/ec-manage-apm-settings.html[Add APM user settings] for information on how to configure Elastic APM. - -[float] -=== Install and manage the stack yourself - -If you'd rather install the stack yourself, first see the https://www.elastic.co/support/matrix[Elastic Support Matrix] for information about supported operating systems and product compatibility. - -You'll need: - -* *{es}* for storing and indexing data. -* *{kib}* for visualizing with the APM UI. - -We recommend you use the same version of {es}, {kib}, and APM Server. - -image::images/apm-architecture-diy.png[Install Elastic APM yourself] - -See {stack-ref}/installing-elastic-stack.html[Installing the {stack}] -for more information about installing these products. -After installing the {stack}, read the following topics to learn how to install, -configure, and start APM Server: - -* <> -* <> -* <> -* <> - -// ******************************************************* -// STEP 1 -// ******************************************************* - -[[installing]] -=== Step 1: Install - -IMPORTANT: {deprecation-notice-installation} - -NOTE: *Before you begin*: If you haven't installed the {stack}, do that now. -See {stack-ref}/installing-elastic-stack.html[Learn how to install the -{stack} on your own hardware]. - -To download and install {beatname_uc}, use the commands below that work with your system. -If you use `apt` or `yum`, you can <> -to update to the newest version more easily. - -ifeval::["{release-state}"!="unreleased"] -See our https://www.elastic.co/downloads/apm[download page] -for other installation options, such as 32-bit images. -endif::[] - -[[deb]] -*deb:* - -ifeval::["{release-state}"=="unreleased"] - -Version {version} of APM Server has not yet been released. - -endif::[] - -ifeval::["{release-state}"!="unreleased"] - -["source","sh",subs="attributes"] ----------------------------------------------------------------------- -curl -L -O {downloads}/apm-server-{version}-amd64.deb -sudo dpkg -i apm-server-{version}-amd64.deb ----------------------------------------------------------------------- - -endif::[] - -[[rpm]] -*RPM:* - -ifeval::["{release-state}"=="unreleased"] - -Version {version} of APM Server has not yet been released. - -endif::[] - -ifeval::["{release-state}"!="unreleased"] - -["source","sh",subs="attributes"] ----------------------------------------------------------------------- -curl -L -O {downloads}/apm-server-{version}-x86_64.rpm -sudo rpm -vi apm-server-{version}-x86_64.rpm ----------------------------------------------------------------------- - -endif::[] - -[[linux]] -*Other Linux:* - -ifeval::["{release-state}"=="unreleased"] - -Version {version} of APM Server has not yet been released. 
- -endif::[] - -ifeval::["{release-state}"!="unreleased"] - -["source","sh",subs="attributes"] ------------------------------------------------- -curl -L -O {downloads}/apm-server-{version}-linux-x86_64.tar.gz -tar xzvf apm-server-{version}-linux-x86_64.tar.gz ------------------------------------------------- -endif::[] - -[[mac]] -*Mac:* - -ifeval::["{release-state}"=="unreleased"] - -Version {version} of APM Server has not yet been released. - -endif::[] - -ifeval::["{release-state}"!="unreleased"] - -["source","sh",subs="attributes"] ------------------------------------------------- -curl -L -O {downloads}/apm-server-{version}-darwin-x86_64.tar.gz -tar xzvf apm-server-{version}-darwin-x86_64.tar.gz ------------------------------------------------- - -endif::[] - -[[installing-on-windows]] -*Windows:* - -ifeval::["{release-state}"=="unreleased"] - -Version {version} of APM Server has not yet been released. - -endif::[] - -ifeval::["{release-state}"!="unreleased"] - -. Download the APM Server Windows zip file from the -https://www.elastic.co/downloads/apm/apm-server[downloads page]. - -. Extract the contents of the zip file into `C:\Program Files`. - -. Rename the `apm-server--windows` directory to `APM-Server`. - -. Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select *Run As Administrator*). -If you are running Windows XP, you may need to download and install PowerShell. - -. From the PowerShell prompt, run the following commands to install APM Server as a Windows service: -+ -[source,shell] ----------------------------------------------------------------------- -PS > cd 'C:\Program Files\APM-Server' -PS C:\Program Files\APM-Server> .\install-service-apm-server.ps1 ----------------------------------------------------------------------- - -NOTE: If script execution is disabled on your system, -you need to set the execution policy for the current session to allow the script to run. -For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-apm-server.ps1`. - -endif::[] - -[[docker]] -*Docker:* - -See <> for deploying Docker containers. - -// ******************************************************* -// STEP 2 -// ******************************************************* - -[[apm-server-configuration]] -=== Step 2: Set up and configure - -IMPORTANT: {deprecation-notice-installation} - -[float] -==== {ecloud} - -If you're running APM in Elastic cloud, -see {cloud}/ec-manage-apm-settings.html[Add APM user settings] for information on how to configure Elastic APM. - -[float] -==== Self installation - -// This content is reused in the upgrading guide -// tag::why-apm-integration[] -Starting in version 8.0.0, {fleet} uses the APM integration to set up and manage APM index templates, -{ilm-init} policies, and ingest pipelines. APM Server will only send data to {es} _after_ the APM integration has been installed. -// end::why-apm-integration[] - -[float] -===== Install the APM integration - -// This content is reused in the upgrading guide -// tag::install-apm-integration[] -NOTE: An internet connection is required to install the APM integration. -If your environment has network traffic restrictions, there are ways to work around this requirement. -See {fleet-guide}/air-gapped.html[Air-gapped environments] for more information. - -// lint ignore elastic-agent -. Open {kib} and select **Add integrations** > **Elastic APM**. -. Click **APM integration**. -. Click **Add Elastic APM**. -. Click **Save and continue**. -. 
Click **Add Elastic Agent later**. You do not need to run an {agent} to complete the setup. -// end::install-apm-integration[] - -[float] -===== Configure APM - -Configure APM by editing the `apm-server.yml` configuration file. -The location of this file varies by platform--see the <> for help locating it. - -A minimal configuration file might look like this: - -[source,yaml] ----- -apm-server: - host: "localhost:8200" <1> -output.elasticsearch: - hosts: ["localhost:9200"] <2> - username: "elastic" <3> - password: "changeme" <4> ----- -<1> The `host:port` APM Server listens on. -<2> The {es} `host:port` to connect to. -<3> This example uses basic authentication. -The user provided here needs the privileges required to publish events to {es}. -To create a dedicated user for this role, see <>. -<4> We've hard-coded the password here, -but you should store sensitive values in the <>. - -All available configuration options are outlined in -{apm-server-ref-v}/configuring-howto-apm-server.html[configuring APM Server]. - -// ******************************************************* -// STEP 3 -// ******************************************************* - -[[apm-server-starting]] -=== Step 3: Start - -IMPORTANT: {deprecation-notice-installation} - -In a production environment, you would put APM Server on its own machines, -similar to how you run {es}. -You _can_ run it on the same machines as {es}, but this is not recommended, -as the processes will be competing for resources. - -To start APM Server, run: - -[source,bash] ----------------------------------- -./apm-server -e ----------------------------------- - -NOTE: The `-e` <> enables logging to stderr and disables syslog/file output. -Remove this flag if you've enabled logging in the configuration file. -For Linux systems, see <>. - -You should see APM Server start up. -It will try to connect to {es} on localhost port `9200` and expose an API to agents on port `8200`. -You can change the defaults in `apm-server.yml` or by supplying a different address on the command line: - -[source,bash] ----------------------------------- -./apm-server -e -E output.elasticsearch.hosts=ElasticsearchAddress:9200 -E apm-server.host=localhost:8200 ----------------------------------- - -[float] -[[running-deb-rpm]] -==== Debian Package / RPM - -For Debian package and RPM installations, we recommend the `apm-server` process runs as a non-root user. -Therefore, these installation methods create an `apm-server` user which you can use to start the process. -In addition, {beatname_uc} will only start if the configuration file is -<>. - -To start the APM Server in this case, run: - -[source,bash] ----------------------------------- -sudo -u apm-server apm-server [] ----------------------------------- - -By default, APM Server loads its configuration file from `/etc/apm-server/apm-server.yml`. -See the <<_deb_and_rpm,deb & rpm default paths>> for a full directory layout. - -// ******************************************************* -// STEP 4 -// ******************************************************* - -[[next-steps]] -=== Step 4: Next steps - -IMPORTANT: {deprecation-notice-installation} - -// Use a tagged region to pull APM Agent information from the APM Overview -If you haven't already, you can now install APM Agents in your services! 
- -* {apm-go-ref-v}/introduction.html[Go agent] -* {apm-ios-ref-v}/intro.html[iOS agent] -* {apm-java-ref-v}/intro.html[Java agent] -* {apm-dotnet-ref-v}/intro.html[.NET agent] -* {apm-node-ref-v}/intro.html[Node.js agent] -* {apm-php-ref-v}/intro.html[PHP agent] -* {apm-py-ref-v}/getting-started.html[Python agent] -* {apm-ruby-ref-v}/introduction.html[Ruby agent] -* {apm-rum-ref-v}/intro.html[JavaScript Real User Monitoring (RUM) agent] - -Once you have at least one {apm-agent} sending data to APM Server, -you can start visualizing your data in the {kibana-ref}/xpack-apm.html[{apm-app}]. - -If you're migrating from Jaeger, see <>. - -// Shared APM & YUM -include::{libbeat-dir}/repositories.asciidoc[] - -// Shared docker -include::{libbeat-dir}/shared-docker.asciidoc[] diff --git a/docs/legacy/guide/apm-breaking-changes.asciidoc b/docs/legacy/guide/apm-breaking-changes.asciidoc deleted file mode 100644 index 269eb5e722c..00000000000 --- a/docs/legacy/guide/apm-breaking-changes.asciidoc +++ /dev/null @@ -1,285 +0,0 @@ -:issue: https://github.com/elastic/apm-server/issues/ -:pull: https://github.com/elastic/apm-server/pull/ - -[[apm-breaking-changes]] -== Breaking changes - -This section discusses the changes that you need to be aware of when migrating your application from one version of APM to another. - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -Also see {observability-guide}/whats-new.html[What's new in {observability} {minor-version}]. - -//NOTE: The notable-breaking-changes tagged regions are re-used in the -//Installation and Upgrade Guide - -// tag::notable-v8-breaking-changes[] -// end::notable-v8-breaking-changes[] -// tag::716-bc[] -// end::716-bc[] - -// tag::715-bc[] -[[breaking-7.15.0]] -=== 7.15.0 APM Breaking changes - -The following breaking changes were introduced in 7.15: - -- `network.connection_type` is now `network.connection.type` {pull}5671[5671] -- `transaction.page` and `error.page` no longer recorded {pull}5872[5872] -- experimental:["This breaking change applies to the experimental tail-based sampling feature."] `apm-server.sampling.tail` now requires `apm-server.data_streams.enabled` {pull}5952[5952] -- beta:["This breaking change applies to the beta APM integration."] The `traces-sampled-*` data stream is now `traces-apm.sampled-*` {pull}5952[5952] - -// end::715-bc[] - -[[breaking-7.14.0]] -=== 7.14.0 APM Breaking changes - -// tag::714-bc[] -No breaking changes. -// end::714-bc[] - -[[breaking-7.13.0]] -=== 7.13.0 APM Breaking changes - -// tag::713-bc[] -No breaking changes. -// end::713-bc[] - -[[breaking-7.12.0]] -=== 7.12.0 APM Breaking changes - -// tag::712-bc[] -There are three breaking changes to be aware of; -these changes only impact users ingesting data with -{apm-server-ref-v}/jaeger.html[Jaeger clients]. - -* Leading `0s` are no longer removed from Jaeger client trace/span ids. -+ --- -This change ensures distributed tracing continues to work across platforms by creating -consistent, full trace/span IDs from Jaeger clients, Elastic APM agents, -and OpenTelemetry SDKs. --- - -* Jaeger spans will now have a type of "app" where they previously were "custom". -+ --- -If the Jaeger span type is not inferred, it will now be "app". -This aligns with the OpenTelemetry Collector exporter -and improves the functionality of the _time spent by span type_ charts in the {apm-app}. --- - -* Jaeger spans may now have a more accurate outcome of "unknown". 
-+ -- -Previously, a "success" outcome was assumed when a span didn't fail. -The new default assigns "unknown", and only sets an outcome of "success" or "failure" when -the outcome is explicitly known. -This change aligns with Elastic APM agents and the OpenTelemetry Collector exporter. --- -// end::712-bc[] - -[[breaking-7.11.0]] -=== 7.11.0 APM Breaking changes - -// tag::notable-breaking-changes[] -No breaking changes. -// end::notable-breaking-changes[] - -[[breaking-7.10.0]] -=== 7.10.0 APM Breaking changes - -// tag::notable-breaking-changes[] -No breaking changes. -// end::notable-breaking-changes[] - -[[breaking-7.9.0]] -=== 7.9.0 APM Breaking changes - -// tag::notable-v79-breaking-changes[] -No breaking changes. -// end::notable-v79-breaking-changes[] - -[[breaking-7.8.0]] -=== 7.8.0 APM Breaking changes - -// tag::notable-v78-breaking-changes[] -No breaking changes. -// end::notable-v78-breaking-changes[] - -[[breaking-7.7.0]] -=== 7.7.0 APM Breaking changes - -// tag::notable-v77-breaking-changes[] -There are no breaking changes in APM Server. -However, a previously hardcoded feature is now configurable. -Failing to follow these {apm-guide-7x}/upgrading-to-77.html[upgrade steps] will result in increased span metadata ingestion when upgrading to version 7.7. -// end::notable-v77-breaking-changes[] - -[[breaking-7.6.0]] -=== 7.6.0 APM Breaking changes - -// tag::notable-v76-breaking-changes[] -No breaking changes. -// end::notable-v76-breaking-changes[] - -[[breaking-7.5.0]] -=== 7.5.0 APM Breaking changes - -// tag::notable-v75-breaking-changes[] - -APM Server:: -+ -* Introduced dedicated `apm-server.ilm.setup.*` flags. -This means you can now customize {ilm-init} behavior from within the APM Server configuration. -As a side effect, `setup.template.*` settings will be ignored for {ilm-init} related templates per event type. -See {apm-server-ref}/ilm.html[set up {ilm-init}] for more information. -+ -* By default, {ilm-init} policies will no longer be versioned. -All event types will switch to the new default policy: rollover after 30 days or when reaching a size of 50 GB. -See {apm-server-ref}/ilm.html[default policy] for more information. - -APM:: -+ -* To make use of all the new features introduced in 7.5, -you must ensure you are using version 7.5+ of APM Server and version 7.5+ of {kib}. - -// end::notable-v75-breaking-changes[] - -[[breaking-7.4.0]] -=== 7.4.0 APM Breaking changes - -// tag::notable-v74-breaking-changes[] -No breaking changes. -// end::notable-v74-breaking-changes[] - -[[breaking-7.3.0]] -=== 7.3.0 APM Breaking changes - -No breaking changes. - -[[breaking-7.2.0]] -=== 7.2.0 APM Breaking changes - -No breaking changes. - -[[breaking-7.1.0]] -=== 7.1.0 APM Breaking changes - -No breaking changes. - -[[breaking-7.0.0]] -=== 7.0.0 APM Breaking changes - -APM Server:: -+ -[[breaking-remove-v1]] -**Removed deprecated Intake v1 API endpoints.** Before upgrading APM Server, -ensure all APM agents are upgraded to a version that supports APM Server ≥ 6.5. -View the {apm-overview-ref-v}/agent-server-compatibility.html[agent/server compatibility matrix] -to determine if your agent versions are compatible. -+ -[[breaking-ecs]] -**Moved fields in {es} to be compliant with the Elastic Common Schema (ECS).** -APM has aligned with the field names defined in the -https://github.com/elastic/ecs[Elastic Common Schema (ECS)]. -Utilizing this common schema will allow for easier data correlation within {es}.
-+ -See the ECS field changes table for full details on which fields have changed. - -APM UI:: -+ -[[breaking-new-endpoints]] -**Moved to new data endpoints.** -When you upgrade to 7.x, -data in indices created prior to 7.0 will not automatically appear in the APM UI. -We offer a {kib} Migration Assistant (in the {kib} Management section) to help you migrate your data. -The migration assistant will re-index your older data in the new ECS format. - -[float] -[[ecs-compliance]] -==== Elastic Common Schema field changes - -include::../field-name-changes.asciidoc[] - -[[breaking-6.8.0]] -=== 6.8.0 APM Breaking changes - -No breaking changes. - -[[breaking-6.7.0]] -=== 6.7.0 APM Breaking changes - -No breaking changes. - -[[breaking-6.6.0]] -=== 6.6.0 APM Breaking changes - -No breaking changes. - -[[breaking-6.5.0]] -=== 6.5.0 APM Breaking changes - -No breaking changes. - -[[breaking-6.4.0]] -=== 6.4.0 APM Breaking changes - -We previously split APM data into separate indices (transaction, span, error, etc.). -In 6.4, the APM {kib} UI started to leverage those separate indices for queries. - -If you only update {kib} but run an older version of APM Server, you will not be able to see any APM data by default. -To fix this, use the {kibana-ref}/apm-settings-kb.html[{kib} APM settings] to specify the location of the APM index: -["source","sh"] ------------------------------------------------------------- -apm_oss.errorIndices: apm-* -apm_oss.spanIndices: apm-* -apm_oss.transactionIndices: apm-* -apm_oss.onboardingIndices: apm-* ------------------------------------------------------------- - -If you are upgrading APM Server from an older version, you might need to refresh your APM index pattern for certain APM UI features to work. -Also be sure to add the new config options to `apm-server.yml` if you keep your existing configuration file: -["source","sh"] ------------------------------------------------------------- -output.elasticsearch: - indices: - - index: "apm-%{[observer.version]}-sourcemap" - when.contains: - processor.event: "sourcemap" - - index: "apm-%{[observer.version]}-error-%{+yyyy.MM.dd}" - when.contains: - processor.event: "error" - - index: "apm-%{[observer.version]}-transaction-%{+yyyy.MM.dd}" - when.contains: - processor.event: "transaction" - - index: "apm-%{[observer.version]}-span-%{+yyyy.MM.dd}" - when.contains: - processor.event: "span" - - index: "apm-%{[observer.version]}-metric-%{+yyyy.MM.dd}" - when.contains: - processor.event: "metric" - - index: "apm-%{[observer.version]}-onboarding-%{+yyyy.MM.dd}" - when.contains: - processor.event: "onboarding" ------------------------------------------------------------- diff --git a/docs/legacy/guide/apm-data-model.asciidoc b/docs/legacy/guide/apm-data-model.asciidoc deleted file mode 100644 index 5ce016a238a..00000000000 --- a/docs/legacy/guide/apm-data-model.asciidoc +++ /dev/null @@ -1,295 +0,0 @@ -[[apm-data-model]] -== Data Model - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -Elastic APM agents capture different types of information from within their instrumented applications. -These are known as events, and can be `spans`, `transactions`, `errors`, or `metrics`. - -* <> -* <> -* <> -* <> - -Events can contain additional <> which further enriches your data. - -[[transaction-spans]] -=== Spans - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -*Spans* contain information about the execution of a specific code path.
-They measure from the start to the end of an activity, -and they can have a parent/child relationship with other spans. - -Agents automatically instrument a variety of libraries to capture these spans from within your application, -but you can also use the Agent API for custom instrumentation of specific code paths. - -Among other things, spans can contain: - -* A `transaction.id` attribute that refers to its parent <>. -* A `parent.id` attribute that refers to its parent span or transaction. -* Its start time and duration. -* A `name`. -* A `type`, `subtype`, and `action`. -* An optional `stack trace`. Stack traces consist of stack frames, -which represent a function call on the call stack. -They include attributes like function name, file name and path, line number, etc. - -TIP: Most agents limit keyword fields, like `span.id`, to 1024 characters, -and non-keyword fields, like `span.start.us`, to 10,000 characters. - -Spans are stored in {apm-server-ref-v}/span-indices.html[span indices]. -This storage is separate from {apm-server-ref-v}/transaction-indices.html[transaction indices] by default. - -[float] -[[dropped-spans]] -==== Dropped spans - -For performance reasons, APM agents can choose to sample or omit spans purposefully. -This can be useful in preventing edge cases, like long-running transactions with over 100 spans, -that would otherwise overload both the Agent and the APM Server. -When this occurs, the {apm-app} will display the number of spans dropped. - -To configure the number of spans recorded per transaction, see the relevant Agent documentation: - -* Go: {apm-go-ref-v}/configuration.html#config-transaction-max-spans[`ELASTIC_APM_TRANSACTION_MAX_SPANS`] -* iOS: _Not yet supported_ -* Java: {apm-java-ref-v}/config-core.html#config-transaction-max-spans[`transaction_max_spans`] -* .NET: {apm-dotnet-ref-v}/config-core.html#config-transaction-max-spans[`TransactionMaxSpans`] -* Node.js: {apm-node-ref-v}/configuration.html#transaction-max-spans[`transactionMaxSpans`] -* PHP: {apm-php-ref-v}/configuration-reference.html#config-transaction-max-spans[`transaction_max_spans`] -* Python: {apm-py-ref-v}/configuration.html#config-transaction-max-spans[`transaction_max_spans`] -* Ruby: {apm-ruby-ref-v}/configuration.html#config-transaction-max-spans[`transaction_max_spans`] - -[float] -[[missing-spans]] -==== Missing spans - -Agents stream spans to the APM Server separately from their transactions. -Because of this, unforeseen errors may cause spans to go missing. -Agents know how many spans a transaction should have; -if the number of expected spans does not equal the number of spans received by the APM Server, -the {apm-app} will calculate the difference and display a message. - -[[transactions]] -=== Transactions - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -*Transactions* are a special kind of <> that have additional attributes associated with them. -They describe an event captured by an Elastic {apm-agent} instrumenting a service. -You can think of transactions as the highest level of work you’re measuring within a service. -As an example, a transaction might be a: - -* Request to your server -* Batch job -* Background job -* Custom transaction type - -Agents decide whether to sample transactions or not, -and provide settings to control sampling behavior. -If sampled, the <> of a transaction are sent and stored as separate documents. -Within one transaction there can be 0, 1, or many spans captured. 
- -A transaction contains: - -* The timestamp of the event -* A unique id, type, and name -* Data about the environment in which the event is recorded: -** Service - environment, framework, language, etc. -** Host - architecture, hostname, IP, etc. -** Process - args, PID, PPID, etc. -** URL - full, domain, port, query, etc. -** <> - (if supplied) email, ID, username, etc. -* Other relevant information depending on the agent. Example: The JavaScript RUM agent captures transaction marks, -which are points in time relative to the start of the transaction with some label. - -In addition, agents provide options for users to capture custom <>. -Metadata can be indexed - <>, or not-indexed - <>. - -Transactions are grouped by their `type` and `name` in the APM UI's -{kibana-ref}/transactions.html[Transaction overview]. -If you're using a supported framework, APM agents will automatically handle the naming for you. -If you're not, or if you wish to override the default, -all agents have API methods to manually set the `type` and `name`. - -* `type` should be a keyword of specific relevance in the service's domain, -e.g. `request`, `backgroundjob`, etc. -* `name` should be a generic designation of a transaction in the scope of a single service, -e.g. `GET /users/:id`, `UsersController#show`, etc. - -TIP: Most agents limit keyword fields (e.g. `labels`) to 1024 characters, -non-keyword fields (e.g. `span.db.statement`) to 10,000 characters. - -Transactions are stored in {apm-server-ref-v}/transaction-indices.html[transaction indices]. - -[[errors]] -=== Errors - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -An error event contains at least -information about the original `exception` that occurred -or about a `log` created when the exception occurred. -For simplicity, errors are represented by a unique ID. - -An Error contains: - -* Both the captured `exception` and the captured `log` of an error can contain a `stack trace`, -which is helpful for debugging. -* The `culprit` of an error indicates where it originated. -* An error might relate to the <> during which it happened, -via the `transaction.id`. -* Data about the environment in which the event is recorded: -** Service - environment, framework, language, etc. -** Host - architecture, hostname, IP, etc. -** Process - args, PID, PPID, etc. -** URL - full, domain, port, query, etc. -** <> - (if supplied) email, ID, username, etc. - -In addition, agents provide options for users to capture custom <>. -Metadata can be indexed - <>, or not-indexed - <>. - -TIP: Most agents limit keyword fields (e.g. `error.id`) to 1024 characters, -non-keyword fields (e.g. `error.exception.message`) to 10,000 characters. - -Errors are stored in {apm-server-ref-v}/error-indices.html[error indices]. - -[[metrics]] -=== Metrics - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -APM agents automatically pick up basic host-level metrics, -including system and process-level CPU and memory metrics. -Agent specific metrics are also available, -like {apm-java-ref-v}/metrics.html[JVM metrics] in the Java Agent, -and {apm-go-ref-v}/metrics.html[Go runtime] metrics in the Go Agent. - -Infrastructure and application metrics are important sources of information when debugging production systems, -which is why we've made it easy to filter metrics for specific hosts or containers in the {kib} {kibana-ref}/metrics.html[metrics overview]. - -Metrics have the `processor.event` property set to `metric`. 
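Because `processor.event` is an indexed keyword field, metric documents are easy to isolate with a simple term query. The following is a minimal sketch, not taken from the original docs, assuming a local {es} instance on `localhost:9200` and the default `apm-*` indices:

[source,sh]
----
# Illustrative query only: fetch one metric document.
# Assumes a local {es} on localhost:9200 and the default apm-* indices.
curl -s -H 'Content-Type: application/json' 'http://localhost:9200/apm-*/_search' -d '
{
  "query": { "term": { "processor.event": "metric" } },
  "size": 1
}'
----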
- -TIP: Most agents limit keyword fields (e.g. `processor.event`) to 1024 characters, -non-keyword fields (e.g. `system.memory.total`) to 10,000 characters. - -Metrics are stored in {apm-server-ref-v}/metricset-indices.html[metric indices]. - -For a full list of tracked metrics, see the relevant agent documentation: - -* {apm-go-ref-v}/metrics.html[Go] -* {apm-java-ref-v}/metrics.html[Java] -* {apm-node-ref-v}/metrics.html[Node.js] -* {apm-py-ref-v}/metrics.html[Python] -* {apm-ruby-ref-v}/metrics.html[Ruby] - -// This heading is linked to from the APM UI section in Kibana -[[metadata]] -=== Metadata - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -Metadata can enrich your events and make application performance monitoring even more useful. -Let's explore the different types of metadata that Elastic APM offers. - -[float] -[[labels-fields]] -==== Labels - -Labels add *indexed* information to transactions, spans, and errors. -Indexed means the data is searchable and aggregatable in {es}. -Add additional key-value pairs to define multiple labels. - -* Indexed: Yes -* {es} type: {ref}/object.html[object] -* {es} field: `labels` -* Applies to: <> | <> | <> - -Label values can be a string, boolean, or number, although some agents only support string values at this time. -Because labels for a given key, regardless of agent used, are stored in the same place in {es}, -all label values of a given key must have the same data type. -Multiple data types per key will throw an exception; for example, `{foo: bar}` and `{foo: 42}` together are not allowed. - -IMPORTANT: Avoid defining too many user-specified labels. -Defining too many unique fields in an index is a condition that can lead to a -{ref}/mapping.html#mapping-limit-settings[mapping explosion]. - -[float] -===== Agent API reference - -* Go: {apm-go-ref-v}/api.html#context-set-label[`SetLabel`] -* Java: {apm-java-ref-v}/public-api.html#api-transaction-add-tag[`setLabel`] -* .NET: {apm-dotnet-ref-v}/public-api.html#api-transaction-tags[`Labels`] -* Node.js: {apm-node-ref-v}/agent-api.html#apm-set-label[`setLabel`] | {apm-node-ref-v}/agent-api.html#apm-add-labels[`addLabels`] -* PHP: {apm-php-ref}/public-api.html#api-transaction-interface-set-label[`Transaction` `setLabel`] | {apm-php-ref}/public-api.html#api-span-interface-set-label[`Span` `setLabel`] -* Python: {apm-py-ref-v}/api.html#api-label[`elasticapm.label()`] -* Ruby: {apm-ruby-ref-v}/api.html#api-agent-set-label[`set_label`] -* Rum: {apm-rum-ref-v}/agent-api.html#apm-add-labels[`addLabels`] - -[float] -[[custom-fields]] -==== Custom context - -Custom context adds *non-indexed*, -custom contextual information to transactions and errors. -Non-indexed means the data is not searchable or aggregatable in {es}, -and you cannot build dashboards on top of the data. -This also means you don't have to worry about {ref}/mapping.html#mapping-limit-settings[mapping explosions], -as these fields are not added to the mapping. - -Non-indexed information is useful for providing contextual information to help you -quickly debug performance issues or errors. - -* Indexed: No -* {es} type: {ref}/object.html[object] -* {es} fields: `transaction.custom` | `error.custom` -* Applies to: <> | <> - -IMPORTANT: Setting a circular object, a large object, or a non-JSON-serializable object can lead to errors.
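The practical difference between the two metadata types shows up at query time: indexed labels can be filtered on directly, while `transaction.custom` and `error.custom` cannot be searched at all. The following is a minimal sketch, assuming a local {es} instance on `localhost:9200`, the default `apm-*` indices, and a hypothetical label key `foo`:

[source,sh]
----
# Illustrative query only; the label key "foo" is hypothetical.
# An equivalent query on transaction.custom.* would match nothing,
# because custom context is not indexed.
curl -s -H 'Content-Type: application/json' 'http://localhost:9200/apm-*/_search' -d '
{
  "query": { "term": { "labels.foo": "bar" } }
}'
----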
- -[float] -===== Agent API reference - -* Go: {apm-go-ref-v}/api.html#context-set-custom[`SetCustom`] -* iOS: _coming soon_ -* Java: {apm-java-ref-v}/public-api.html#api-transaction-add-custom-context[`addCustomContext`] -* .NET: _coming soon_ -* Node.js: {apm-node-ref-v}/agent-api.html#apm-set-custom-context[`setCustomContext`] -* PHP: _coming soon_ -* Python: {apm-py-ref-v}/api.html#api-set-custom-context[`set_custom_context`] -* Ruby: {apm-ruby-ref-v}/api.html#api-agent-set-custom-context[`set_custom_context`] -* Rum: {apm-rum-ref-v}/agent-api.html#apm-set-custom-context[`setCustomContext`] - -[float] -[[user-fields]] -==== User context - -User context adds *indexed* user information to transactions and errors. -Indexed means the data is searchable and aggregatable in {es}. - -* Indexed: Yes -* {es} type: {ref}/keyword.html[keyword] -* {es} fields: `user.email` | `user.name` | `user.id` -* Applies to: <> | <> - -[float] -===== Agent API reference - -* Go: {apm-go-ref-v}/api.html#context-set-username[`SetUsername`] | {apm-go-ref-v}/api.html#context-set-user-id[`SetUserID`] | -{apm-go-ref-v}/api.html#context-set-user-email[`SetUserEmail`] -* iOS: _coming soon_ -* Java: {apm-java-ref-v}/public-api.html#api-transaction-set-user[`setUser`] -* .NET _coming soon_ -* Node.js: {apm-node-ref-v}/agent-api.html#apm-set-user-context[`setUserContext`] -* PHP: _coming soon_ -* Python: {apm-py-ref-v}/api.html#api-set-user-context[`set_user_context`] -* Ruby: {apm-ruby-ref-v}/api.html#api-agent-set-user[`set_user`] -* Rum: {apm-rum-ref-v}/agent-api.html#apm-set-user-context[`setUserContext`] diff --git a/docs/legacy/guide/apm-doc-directory.asciidoc b/docs/legacy/guide/apm-doc-directory.asciidoc deleted file mode 100644 index a5390e97a36..00000000000 --- a/docs/legacy/guide/apm-doc-directory.asciidoc +++ /dev/null @@ -1,73 +0,0 @@ -[[components]] -== Components and documentation - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -Elastic APM consists of four components: *APM agents*, *APM Server*, *{es}*, and *{kib}*. - -image::./images/apm-architecture-cloud.png[Architecture of Elastic APM] - -[float] -=== APM Agents - -APM agents are open source libraries written in the same language as your service. -You may only need one, or you might use all of them. -You install them into your service as you would install any other library. -They instrument your code and collect performance data and errors at runtime. -This data is buffered for a short period and sent on to APM Server. - -Each agent has its own documentation: - -* {apm-go-ref-v}/introduction.html[Go agent] -* {apm-ios-ref-v}/intro.html[iOS agent] -* {apm-java-ref-v}/intro.html[Java agent] -* {apm-dotnet-ref-v}/intro.html[.NET agent] -* {apm-node-ref-v}/intro.html[Node.js agent] -* {apm-php-ref-v}/intro.html[PHP agent] -* {apm-py-ref-v}/getting-started.html[Python agent] -* {apm-ruby-ref-v}/introduction.html[Ruby agent] -* {apm-rum-ref-v}/intro.html[JavaScript Real User Monitoring (RUM) agent] - -[float] -=== APM Server - -APM Server is a free and open application that receives performance data from your APM agents. -It's a {apm-server-ref-v}/overview.html#why-separate-component[separate component by design], -which helps keep the agents light, prevents certain security risks, and improves compatibility across the {stack}. 
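Under the hood, agents ship these events to the server's events intake API as newline-delimited JSON. The Python sketch below illustrates the shape of a minimal request; the endpoint, token, and every field value are placeholder assumptions, and real agents construct far richer payloads:

[source,python]
----
import json
import urllib.request

# Schematic intake (v2) payload: one metadata object, then event objects,
# newline-delimited. All values here are illustrative.
events = [
    {"metadata": {"service": {"name": "checkout",
                              "agent": {"name": "python", "version": "6.0.0"}}}},
    {"transaction": {"id": "945254c567a5417e",
                     "trace_id": "0af7651916cd43dd8448eb211c80319c",
                     "name": "GET /users", "type": "request",
                     "duration": 32.5, "span_count": {"started": 0}}},
]
body = "\n".join(json.dumps(e) for e in events).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:8200/intake/v2/events",
    data=body,
    headers={"Content-Type": "application/x-ndjson",
             "Authorization": "Bearer an_apm_secret_token"},
)
urllib.request.urlopen(req)  # APM Server replies 202 Accepted on success
----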
- -After the APM Server has validated and processed events from the APM agents, -the server transforms the data into {es} documents and stores them in corresponding -{apm-server-ref-v}/exploring-es-data.html[{es} indices]. -In a matter of seconds, you can start viewing your application performance data in the {kib} {apm-app}. - -The {apm-server-ref-v}/index.html[APM Server reference] provides everything you need when it comes to working with the server. -Here you can learn more about {apm-server-ref-v}/getting-started-apm-server.html[installation], -{apm-server-ref-v}/configuring-howto-apm-server.html[configuration], -{apm-server-ref-v}/securing-apm-server.html[security], -{apm-server-ref-v}/monitoring.html[monitoring], and more. - -[float] -=== {es} - -{ref}/index.html[{es}] is a highly scalable free and open full-text search and analytics engine. -It allows you to store, search, and analyze large volumes of data quickly and in near real time. -{es} is used to store APM performance metrics and make use of its aggregations. - -[float] -=== {kib} {apm-app} - -{kibana-ref}/index.html[{kib}] is a free and open analytics and visualization platform designed to work with {es}. -You use {kib} to search, view, and interact with data stored in {es}. - -Since application performance monitoring is all about visualizing data and detecting bottlenecks, -it's crucial you understand how to use the {kibana-ref}/xpack-apm.html[{apm-app}] in {kib}. -The following sections will help you get started: - -* {apm-app-ref}/apm-ui.html[Set up] -* {apm-app-ref}/apm-getting-started.html[Get started] -* {apm-app-ref}/apm-how-to.html[How-to guides] - -APM also has built-in integrations with {ml-cap}. To learn more about this feature, -or the {anomaly-detect} feature that's built on top of it, -refer to {kibana-ref}/machine-learning-integration.html[{ml-cap} integration]. diff --git a/docs/legacy/guide/cross-cluster-search.asciidoc b/docs/legacy/guide/cross-cluster-search.asciidoc deleted file mode 100644 index 0bc955be510..00000000000 --- a/docs/legacy/guide/cross-cluster-search.asciidoc +++ /dev/null @@ -1,49 +0,0 @@ -[[apm-cross-cluster-search]] -=== Cross-cluster search - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -Elastic APM utilizes {es}'s cross-cluster search functionality. -Cross-cluster search lets you run a single search request against one or more -{ref}/modules-remote-clusters.html[remote clusters] -- -making it easy to search APM data across multiple sources. -This means you can also have deployments per data type, making sizing and scaling more predictable, -and allowing for better performance while managing multiple observability use cases. - -[float] -[[set-up-ccs]] -==== Set up cross-cluster search - -*Step 1. Set up remote clusters.* - -If you're using the Hosted {es} Service, see {cloud}/ec-enable-ccs.html[Enable cross-cluster search]. - -// lint ignore elasticsearch -You can add remote clusters directly in {kib}, under *Management* > *Elasticsearch* > *Remote clusters*. -All you need is a name for the remote cluster and the seed node(s). -Remember the names of your remote clusters, you'll need them in step two. -See {ref}/ccr-getting-started.html[managing remote clusters] for detailed information on the setup process. - -Alternatively, you can {ref}/modules-remote-clusters.html#configuring-remote-clusters[configure remote clusters] -in {es}'s `elasticsearch.yml` file. - -*Step 2. 
Edit the default {apm-app} index pattern.* - -{apm-app} {data-sources} determine which clusters and indices to display data from. -{data-sources-cap} follow this convention: `:`. - -To display data from all remote clusters and the local cluster, -duplicate and prepend the defaults with `*:`. -For example, the default {data-source} for Error indices is `logs-apm*,apm*`. -To add all remote clusters, change this to `*:logs-apm*,*:apm*,logs-apm*,apm*` - -You can also specify certain clusters to display data from, for example, -`cluster-one:logs-apm*,cluster-one:apm*,logs-apm*,apm*`. - -There are two ways to edit the default {data-source}: - -* In the {apm-app} -- Navigate to *APM* > *Settings* > *Indices*, and change all `xpack.apm.indices.*` values to -include remote clusters. -* In `kibana.yml` -- Update the {kibana-ref}/apm-settings-kb.html[`xpack.apm.indices.*`] configuration values to -include remote clusters. diff --git a/docs/legacy/guide/data-security.asciidoc b/docs/legacy/guide/data-security.asciidoc deleted file mode 100644 index 784bcf9ba8f..00000000000 --- a/docs/legacy/guide/data-security.asciidoc +++ /dev/null @@ -1,461 +0,0 @@ -[[data-security]] -=== Data security - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -When setting up Elastic APM, it's essential to review all captured data carefully to ensure -it does not contain sensitive information. -When it does, we offer several different ways to filter, manipulate, or obfuscate this data. - -**Built-in data filters** - -Elastic APM provides built-in support for filtering the following types of data: - -[options="header"] -|==== -|Data type |Common sensitive data -|<> |Passwords, credit card numbers, authorization, etc. -|<> |Passwords, credit card numbers, etc. -|<> |Client IP address and user agent. -|<> |URLs visited, click events, user browser errors, resources used, etc. -|<> |Sensitive user or business information -|==== - -**Custom filters** - -There are two ways to filter other types APM data: - -|==== -|<> | Applied at ingestion time. -All agents and fields are supported. Data leaves the instrumented service. -There are no performance overhead implications on the instrumented service. - -|<> | Not supported by all agents. -Data is sanitized before leaving the instrumented service. -Potential overhead implications on the instrumented service -|==== - -[discrete] -[[built-in-filtering]] -=== Built-in data filtering - -Elastic APM provides built-in support for filtering or obfuscating the following types of data. - -[discrete] -[[filter-http-header]] -==== HTTP headers - -By default, APM agents capture HTTP request and response headers (including cookies). -Most Elastic APM agents provide the ability to sanitize HTTP header fields, -including cookies and `application/x-www-form-urlencoded` data (POST form fields). -Query string and captured request bodies, like `application/json` data, are not sanitized. - -The default list of sanitized fields attempts to target common field names for data relating to -passwords, credit card numbers, authorization, etc., but can be customized to fit your data. -This sensitive data never leaves the instrumented service. 
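For example, with the Python agent you might extend the list when creating the client. A sketch (the service name, URL, and patterns are illustrative; note that in most agents overriding this option replaces the default list, so re-add any defaults you still rely on):

[source,python]
----
import elasticapm

# Values matching these glob patterns are redacted before the event
# leaves the process. Patterns shown here are illustrative.
client = elasticapm.Client(
    service_name="checkout",
    server_url="http://localhost:8200",
    sanitize_field_names=["password", "passwd", "*token*", "*session*", "card_number"],
)
----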
This setting supports {kibana-ref}/agent-configuration.html[Central configuration],
which means the list of sanitized fields can be updated without needing to redeploy your services:

* Go: {apm-go-ref-v}/configuration.html#config-sanitize-field-names[`ELASTIC_APM_SANITIZE_FIELD_NAMES`]
* Java: {apm-java-ref-v}/config-core.html#config-sanitize-field-names[`sanitize_field_names`]
* .NET: {apm-dotnet-ref-v}/config-core.html#config-sanitize-field-names[`sanitizeFieldNames`]
* Node.js: {apm-node-ref-v}/configuration.html#sanitize-field-names[`sanitizeFieldNames`]
// * PHP: {apm-php-ref-v}[``]
* Python: {apm-py-ref-v}/configuration.html#config-sanitize-field-names[`sanitize_field_names`]
* Ruby: {apm-ruby-ref-v}/configuration.html#config-sanitize-field-names[`sanitize_field_names`]

Alternatively, you can completely disable the capturing of HTTP headers.
This setting also supports {kibana-ref}/agent-configuration.html[Central configuration]:

* Go: {apm-go-ref-v}/configuration.html#config-capture-headers[`ELASTIC_APM_CAPTURE_HEADERS`]
* Java: {apm-java-ref-v}/config-core.html#config-capture-headers[`capture_headers`]
* .NET: {apm-dotnet-ref-v}/config-http.html#config-capture-headers[`CaptureHeaders`]
* Node.js: {apm-node-ref-v}/configuration.html#capture-headers[`captureHeaders`]
// * PHP: {apm-php-ref-v}[``]
* Python: {apm-py-ref-v}/configuration.html#config-capture-headers[`capture_headers`]
* Ruby: {apm-ruby-ref-v}/configuration.html#config-capture-headers[`capture_headers`]

[discrete]
[[filter-http-body]]
==== HTTP bodies

By default, the body of HTTP requests is not recorded.
Request bodies often contain sensitive data like passwords or credit card numbers,
so use care when enabling this feature.

This setting supports {kibana-ref}/agent-configuration.html[Central configuration],
which means it can be updated without needing to redeploy your services:

* Go: {apm-go-ref-v}/configuration.html#config-capture-body[`ELASTIC_APM_CAPTURE_BODY`]
* Java: {apm-java-ref-v}/config-core.html#config-capture-body[`capture_body`]
* .NET: {apm-dotnet-ref-v}/config-http.html#config-capture-body[`CaptureBody`]
* Node.js: {apm-node-ref-v}/configuration.html#capture-body[`captureBody`]
// * PHP: {apm-php-ref-v}[``]
* Python: {apm-py-ref-v}/configuration.html#config-capture-body[`capture_body`]
* Ruby: {apm-ruby-ref-v}/configuration.html#config-capture-body[`capture_body`]

[discrete]
[[filter-personal-data]]
==== Personal data

By default, the APM Server captures some personal data associated with trace events:

* `client.ip`: The client's IP address. Typically derived from the HTTP headers of incoming requests.
`client.ip` is also used in conjunction with the {ref}/geoip-processor.html[`geoip` processor] to assign
geographical information to trace events. To learn more about how `client.ip` is derived,
see <>.
* `user_agent`: User agent data, including the client operating system, device name, vendor, and version.

The capturing of this data can be turned off by setting
<<capture_personal_data,`capture_personal_data`>> to `false`.

[discrete]
[[filter-real-user-data]]
==== Real user monitoring data

Protecting user data is important.
For that reason, individual RUM instrumentations can be disabled in the RUM agent with the
{apm-rum-ref-v}/configuration.html#disable-instrumentations[`disableInstrumentations`] configuration variable.
Disabled instrumentations produce no spans or transactions.
[options="header"]
|====
|Disable |Configuration value
|HTTP requests |`fetch` and `xmlhttprequest`
|Page load metrics including static resources |`page-load`
|JavaScript errors on the browser |`error`
|User click events including URLs visited, mouse clicks, and navigation events |`eventtarget`
|Single page application route changes |`history`
|====

[discrete]
[[filter-database-statements]]
==== Database statements

For SQL databases, APM agents do not capture the parameters of prepared statements.
Note that Elastic APM currently does not make an effort to strip parameters of regular statements.
Not using prepared statements makes your code vulnerable to SQL injection attacks,
so be sure to use prepared statements.

For non-SQL data stores, such as {es} or MongoDB,
Elastic APM captures the full statement for queries.
For inserts or updates, the full document is not stored.
To filter or obfuscate data in non-SQL database statements,
or to remove the statement entirely,
you can set up an ingest node pipeline.

[discrete]
[[filter-agent-specific]]
==== Agent-specific options

Certain agents offer additional filtering and obfuscating options:

**Agent configuration options**

* (Node.js) Remove errors raised by the server-side process:
Disable with {apm-node-ref-v}/configuration.html#capture-exceptions[captureExceptions].

* (Java) Remove process arguments from transactions:
disabled by default with {apm-java-ref-v}/config-reporter.html#config-include-process-args[`include_process_args`].

[discrete]
[[custom-filters]]
=== Custom filters

There are two ways to filter or obfuscate other types of APM data:

* <>
* <>

[discrete]
[[filter-ingest-pipeline]]
==== Create an ingest node pipeline filter

Ingest node pipelines specify a series of processors that transform data in a specific way.
Transformation happens prior to indexing, so it inflicts no performance overhead on the monitored application.
Pipelines are a flexible and easy way to filter or obfuscate Elastic APM data.

**Example**

Say you decide to <>,
but quickly notice that sensitive information is being collected in the
`http.request.body.original` field:

[source,json]
----
{
  "email": "test@abc.com",
  "password": "hunter2"
}
----

To obfuscate the passwords stored in the request body,
use a series of {ref}/processors.html[ingest processors].
To start, create a pipeline with a simple description and an empty array of processors:

[source,json]
----
{
  "pipeline": {
    "description": "redact http.request.body.original.password",
    "processors": [] <1>
  }
}
----
<1> The processors defined below will go in this array

Add the first processor to the processors array.
Because the agent captures the request body as a string, use the
{ref}/json-processor.html[JSON processor] to convert the original field value into a structured JSON object.
Save this JSON object in a new field:

[source,json]
----
{
  "json": {
    "field": "http.request.body.original",
    "target_field": "http.request.body.original_json",
    "ignore_failure": true
  }
}
----

If `body.original_json` is not `null`, redact the `password` with the {ref}/set-processor.html[set processor],
by setting the value of `body.original_json.password` to `"redacted"`:

[source,json]
----
{
  "set": {
    "field": "http.request.body.original_json.password",
    "value": "redacted",
    "if": "ctx?.http?.request?.body?.original_json != null"
  }
}
----

Use the {ref}/convert-processor.html[convert processor] to convert the JSON value of `body.original_json` to a string and set it as the `body.original` value:

[source,json]
----
{
  "convert": {
    "field": "http.request.body.original_json",
    "target_field": "http.request.body.original",
    "type": "string",
    "if": "ctx?.http?.request?.body?.original_json != null",
    "ignore_failure": true
  }
}
----

Finally, use the {ref}/remove-processor.html[remove processor] to remove the now-redundant `body.original_json` field:

[source,json]
----
{
  "remove": {
    "field": "http.request.body.original_json",
    "if": "ctx?.http?.request?.body?.original_json != null",
    "ignore_failure": true
  }
}
----

Now that the pipeline has been defined,
use the {ref}/put-pipeline-api.html[create or update pipeline API] to register the new pipeline in {es}.
Name the pipeline `apm_redacted_body_password`:

[source,console]
----
PUT _ingest/pipeline/apm_redacted_body_password
{
  "description": "redact http.request.body.original.password",
  "processors": [
    {
      "json": {
        "field": "http.request.body.original",
        "target_field": "http.request.body.original_json",
        "ignore_failure": true
      }
    },
    {
      "set": {
        "field": "http.request.body.original_json.password",
        "value": "redacted",
        "if": "ctx?.http?.request?.body?.original_json != null"
      }
    },
    {
      "convert": {
        "field": "http.request.body.original_json",
        "target_field": "http.request.body.original",
        "type": "string",
        "if": "ctx?.http?.request?.body?.original_json != null",
        "ignore_failure": true
      }
    },
    {
      "remove": {
        "field": "http.request.body.original_json",
        "if": "ctx?.http?.request?.body?.original_json != null",
        "ignore_failure": true
      }
    }
  ]
}
----

To make sure the `apm_redacted_body_password` pipeline works correctly,
test it with the {ref}/simulate-pipeline-api.html[simulate pipeline API],
which runs multiple documents through the pipeline so you can verify the results.
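If you manage pipelines programmatically, the same two calls are available through the {es} language clients. A sketch with the Python client (8.x assumed; endpoint illustrative), reusing the processors defined above:

[source,python]
----
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # endpoint is illustrative

# The four processors built up in this section.
processors = [
    {"json": {"field": "http.request.body.original",
              "target_field": "http.request.body.original_json",
              "ignore_failure": True}},
    {"set": {"field": "http.request.body.original_json.password",
             "value": "redacted",
             "if": "ctx?.http?.request?.body?.original_json != null"}},
    {"convert": {"field": "http.request.body.original_json",
                 "target_field": "http.request.body.original",
                 "type": "string",
                 "if": "ctx?.http?.request?.body?.original_json != null",
                 "ignore_failure": True}},
    {"remove": {"field": "http.request.body.original_json",
                "if": "ctx?.http?.request?.body?.original_json != null",
                "ignore_failure": True}},
]

es.ingest.put_pipeline(
    id="apm_redacted_body_password",
    description="redact http.request.body.original.password",
    processors=processors,
)

# Run one sample document through the pipeline to verify the redaction.
doc = {"_source": {"http": {"request": {"body": {
    "original": '{"email": "test@abc.com", "password": "hunter2"}'}}}}}
print(es.ingest.simulate(id="apm_redacted_body_password", docs=[doc]))
----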
The request below simulates running three different documents through the pipeline:

[source,console]
----
POST _ingest/pipeline/apm_redacted_body_password/_simulate
{
  "docs": [
    {
      "_source": { <1>
        "http": {
          "request": {
            "body": {
              "original": """{"email": "test@abc.com", "password": "hunter2"}"""
            }
          }
        }
      }
    },
    {
      "_source": { <2>
        "some-other-field": true
      }
    },
    {
      "_source": { <3>
        "http": {
          "request": {
            "body": {
              "original": """["invalid json" """
            }
          }
        }
      }
    }
  ]
}
----
<1> This document features the same sensitive data from the original example above
<2> This document only contains an unrelated field
<3> This document contains invalid JSON

The API response should be similar to this:

[source,json]
----
{
  "docs" : [
    {
      "doc" : {
        "_source" : {
          "http" : {
            "request" : {
              "body" : {
                "original" : {
                  "password" : "redacted",
                  "email" : "test@abc.com"
                }
              }
            }
          }
        }
      }
    },
    {
      "doc" : {
        "_source" : {
          "some-other-field" : true
        }
      }
    },
    {
      "doc" : {
        "_source" : {
          "http" : {
            "request" : {
              "body" : {
                "original" : """["invalid json" """
              }
            }
          }
        }
      }
    }
  ]
}
----

As you can see, only the first simulated document has a redacted password field.
As expected, all other documents are unaffected.

The final step in this process is to add the newly created `apm_redacted_body_password` pipeline
to the default `apm` pipeline. This ensures that all APM data ingested into {es} runs through the pipeline.

Get the current list of `apm` pipelines:

[source,console]
----
GET _ingest/pipeline/apm
----

Append the newly created pipeline to the end of the processors array and register the `apm` pipeline.
Your request will look similar to this:

[source,console]
----
{
  "apm" : {
    "processors" : [
      {
        "pipeline" : {
          "name" : "apm_user_agent"
        }
      },
      {
        "pipeline" : {
          "name" : "apm_user_geo"
        }
      },
      {
        "pipeline": {
          "name": "apm_redacted_body_password"
        }
      }
    ],
    "description" : "Default enrichment for APM events"
  }
}
----

That's it! Sit back and relax: passwords have been redacted from your APM HTTP body data.

TIP: See {apm-server-ref-v}/configuring-ingest-node.html[parse data using ingest node pipelines]
to learn more about the default `apm` pipeline.

[discrete]
[[filter-in-agent]]
==== {apm-agent} filters

Some APM agents offer a way to manipulate or drop APM events _before_ they are sent to the APM Server.
Please see the relevant agent's documentation for more information and examples:

// * Go: {apm-go-ref-v}/[]
// * Java: {apm-java-ref-v}/[]
* .NET: {apm-dotnet-ref-v}/public-api.html#filter-api[Filter API].
* Node.js: {apm-node-ref-v}/agent-api.html#apm-add-filter[`addFilter()`].
// * PHP: {apm-php-ref-v}[]
* Python: {apm-py-ref-v}/sanitizing-data.html[custom processors].
// * Ruby: {apm-ruby-ref-v}/[]

diff --git a/docs/legacy/guide/distributed-tracing.asciidoc b/docs/legacy/guide/distributed-tracing.asciidoc
deleted file mode 100644
index d3364839fde..00000000000
--- a/docs/legacy/guide/distributed-tracing.asciidoc
+++ /dev/null
@@ -1,125 +0,0 @@
[[distributed-tracing]]
=== Distributed tracing

IMPORTANT: {deprecation-notice-data}
If you've already upgraded, see <>.

A `trace` is a group of <> and <> with a common root.
Each `trace` tracks the entirety of a single request.
When a `trace` travels through multiple services, as is common in a microservice architecture,
it is known as a distributed trace.
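Concretely, every document in a trace carries the same `trace.id`, and child events point at their parent through `parent.id`. The following is purely illustrative (all field values are made up):

[source,python]
----
# Illustrative only: documents in one distributed trace share trace.id,
# and each child references its parent via parent.id.
trace_id = "0af7651916cd43dd8448eb211c80319c"

web_tx  = {"trace.id": trace_id, "transaction.id": "aaaa111122223333"}
api_tx  = {"trace.id": trace_id, "transaction.id": "bbbb444455556666",
           "parent.id": "aaaa111122223333"}
db_span = {"trace.id": trace_id, "span.id": "cccc777788889999",
           "parent.id": "bbbb444455556666"}
----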
- -[float] -=== Why is distributed tracing important? - -Distributed tracing enables you to analyze performance throughout your microservice architecture -by tracing the entirety of a request -- from the initial web request on your front-end service -all the way to database queries made on your back-end services. - -Tracking requests as they propagate through your services provides an end-to-end picture of -where your application is spending time, where errors are occurring, and where bottlenecks are forming. -Distributed tracing eliminates individual service's data silos and reveals what's happening outside of -service borders. - -For supported technologies, distributed tracing works out-of-the-box, with no additional configuration required. - -[float] -=== How distributed tracing works - -Distributed tracing works by injecting a custom `traceparent` HTTP header into outgoing requests. -This header includes information, like `trace-id`, which is used to identify the current trace, -and `parent-id`, which is used to identify the parent of the current span on incoming requests -or the current span on an outgoing request. - -When a service is working on a request, it checks for the existence of this HTTP header. -If it's missing, the service starts a new trace. -If it exists, the service ensures the current action is added as a child of the existing trace, -and continues to propagate the trace. - -[float] -==== Trace propagation examples - -In this example, Elastic's Ruby agent communicates with Elastic's Java agent. -Both support the `traceparent` header, and trace data is successfully propagated. - -// lint ignore traceparent -image::./images/dt-trace-ex1.png[How traceparent propagation works] - -In this example, Elastic's Ruby agent communicates with OpenTelemetry's Java agent. -Both support the `traceparent` header, and trace data is successfully propagated. - -// lint ignore traceparent -image::./images/dt-trace-ex2.png[How traceparent propagation works] - -In this example, the trace meets a piece of middleware that doesn't propagate the `traceparent` header. -The distributed trace ends and any further communication will result in a new trace. - -// lint ignore traceparent -image::./images/dt-trace-ex3.png[How traceparent propagation works] - - -[float] -[[w3c-tracecontext]] -==== W3C Trace Context specification - -All Elastic agents now support the official W3C Trace Context specification and `traceparent` header. -See the table below for the minimum required agent version: - -[options="header"] -|==== -|Agent name |Agent Version -|**Go Agent**| ≥`1.6` -|**Java Agent**| ≥`1.14` -|**.NET Agent**| ≥`1.3` -|**Node.js Agent**| ≥`3.4` -|**Python Agent**| ≥`5.4` -|**Ruby Agent**| ≥`3.5` -|**RUM Agent**| ≥`5.0` -|==== - -NOTE: Older Elastic agents use a unique `elastic-apm-traceparent` header. -For backward-compatibility purposes, new versions of Elastic agents still support this header. - -[float] -=== Visualize distributed tracing - -The {apm-app}'s timeline visualization provides a visual deep-dive into each of your application's traces: - -[role="screenshot"] -image::./images/apm-distributed-tracing.png[Distributed tracing in the APM UI] - -[float] -=== Manual distributed tracing - -Elastic agents automatically propagate distributed tracing context for supported technologies. -If your service communicates over a different, unsupported protocol, -you can manually propagate distributed tracing context from a sending service to a receiving service -with each agent's API. 
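For reference, here's a sketch (plain Python, not an agent API) of how a `traceparent` value is composed under the W3C format; in practice, the agent APIs below generate and parse this header for you:

[source,python]
----
import secrets

def make_traceparent(trace_id=None, parent_id=None, sampled=True):
    """Compose a W3C traceparent value: <version>-<trace-id>-<parent-id>-<flags>."""
    trace_id = trace_id or secrets.token_hex(16)   # 16 random bytes, hex encoded
    parent_id = parent_id or secrets.token_hex(8)  # 8 random bytes, hex encoded
    return f"00-{trace_id}-{parent_id}-{'01' if sampled else '00'}"

print(make_traceparent())
# e.g. 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
----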
- -[float] -==== Add the `traceparent` header to outgoing requests - -Sending services must add the `traceparent` header to outgoing requests. - --- -include::../../shared/distributed-trace-send/distributed-trace-send-widget.asciidoc[] --- - -[float] -==== Parse the `traceparent` header on incoming requests - -Receiving services must parse the incoming `traceparent` header, -and start a new transaction or span as a child of the received context. - --- -include::../../shared/distributed-trace-receive/distributed-trace-receive-widget.asciidoc[] --- - -[float] -=== Distributed tracing with RUM - -Some additional setup may be required to correlate requests correctly with the Real User Monitoring (RUM) agent. - -See the {apm-rum-ref}/distributed-tracing-guide.html[RUM distributed tracing guide] -for information on enabling cross-origin requests, setting up server configuration, -and working with dynamically-generated HTML. diff --git a/docs/legacy/guide/docker-compose.yml b/docs/legacy/guide/docker-compose.yml deleted file mode 100644 index 454e9886bb1..00000000000 --- a/docs/legacy/guide/docker-compose.yml +++ /dev/null @@ -1,75 +0,0 @@ -version: '2.2' -services: - apm-server: - image: docker.elastic.co/apm/apm-server:{VERSION} - depends_on: - elasticsearch: - condition: service_healthy - kibana: - condition: service_healthy - cap_add: ["CHOWN", "DAC_OVERRIDE", "SETGID", "SETUID"] - cap_drop: ["ALL"] - ports: - - 8200:8200 - networks: - - elastic - command: > - apm-server -e - -E apm-server.rum.enabled=true - -E setup.kibana.host=kibana:5601 - -E setup.template.settings.index.number_of_replicas=0 - -E apm-server.kibana.enabled=true - -E apm-server.kibana.host=kibana:5601 - -E output.elasticsearch.hosts=["elasticsearch:9200"] - healthcheck: - interval: 10s - retries: 12 - test: curl --write-out 'HTTP %{http_code}' --fail --silent --output /dev/null http://localhost:8200/ - - elasticsearch: - image: docker.elastic.co/elasticsearch/elasticsearch:{VERSION} - environment: - - bootstrap.memory_lock=true - - cluster.name=docker-cluster - - cluster.routing.allocation.disk.threshold_enabled=false - - discovery.type=single-node - - ES_JAVA_OPTS=-XX:UseAVX=2 -Xms1g -Xmx1g - ulimits: - memlock: - hard: -1 - soft: -1 - volumes: - - esdata:/usr/share/elasticsearch/data - ports: - - 9200:9200 - networks: - - elastic - healthcheck: - interval: 20s - retries: 10 - test: curl -s http://localhost:9200/_cluster/health | grep -vq '"status":"red"' - - kibana: - image: docker.elastic.co/kibana/kibana:{VERSION} - depends_on: - elasticsearch: - condition: service_healthy - environment: - ELASTICSEARCH_URL: http://elasticsearch:9200 - ELASTICSEARCH_HOSTS: http://elasticsearch:9200 - ports: - - 5601:5601 - networks: - - elastic - healthcheck: - interval: 10s - retries: 20 - test: curl --write-out 'HTTP %{http_code}' --fail --silent --output /dev/null http://localhost:5601/api/status - -volumes: - esdata: - driver: local - -networks: - elastic: - driver: bridge \ No newline at end of file diff --git a/docs/legacy/guide/features.asciidoc b/docs/legacy/guide/features.asciidoc deleted file mode 100644 index 8a2dcf39174..00000000000 --- a/docs/legacy/guide/features.asciidoc +++ /dev/null @@ -1,34 +0,0 @@ -[[apm-features]] -== Elastic APM features - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. 
- -++++ -Features -++++ - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -include::./data-security.asciidoc[] - -include::./distributed-tracing.asciidoc[] - -include::./rum.asciidoc[] - -include::./trace-sampling.asciidoc[] - -include::./opentracing.asciidoc[] - -include::./opentelemetry-elastic.asciidoc[] - -include::./obs-integrations.asciidoc[] - -include::./cross-cluster-search.asciidoc[] \ No newline at end of file diff --git a/docs/legacy/guide/images/7.7-apm-agent-configuration.png b/docs/legacy/guide/images/7.7-apm-agent-configuration.png deleted file mode 100644 index ded0553219a..00000000000 Binary files a/docs/legacy/guide/images/7.7-apm-agent-configuration.png and /dev/null differ diff --git a/docs/legacy/guide/images/7.7-apm-alert.png b/docs/legacy/guide/images/7.7-apm-alert.png deleted file mode 100644 index 4cee7214637..00000000000 Binary files a/docs/legacy/guide/images/7.7-apm-alert.png and /dev/null differ diff --git a/docs/legacy/guide/images/7.7-service-maps-java.png b/docs/legacy/guide/images/7.7-service-maps-java.png deleted file mode 100644 index e1a42f4c76e..00000000000 Binary files a/docs/legacy/guide/images/7.7-service-maps-java.png and /dev/null differ diff --git a/docs/legacy/guide/images/7.8-service-map-anomaly.png b/docs/legacy/guide/images/7.8-service-map-anomaly.png deleted file mode 100644 index b661e8f09d1..00000000000 Binary files a/docs/legacy/guide/images/7.8-service-map-anomaly.png and /dev/null differ diff --git a/docs/legacy/guide/images/apm-architecture-cloud.png b/docs/legacy/guide/images/apm-architecture-cloud.png deleted file mode 100644 index 6bc7001fb9f..00000000000 Binary files a/docs/legacy/guide/images/apm-architecture-cloud.png and /dev/null differ diff --git a/docs/legacy/guide/images/apm-architecture-diy.png b/docs/legacy/guide/images/apm-architecture-diy.png deleted file mode 100644 index d4e96466081..00000000000 Binary files a/docs/legacy/guide/images/apm-architecture-diy.png and /dev/null differ diff --git a/docs/legacy/guide/images/apm-distributed-tracing.png b/docs/legacy/guide/images/apm-distributed-tracing.png deleted file mode 100644 index 7d51e273f9d..00000000000 Binary files a/docs/legacy/guide/images/apm-distributed-tracing.png and /dev/null differ diff --git a/docs/legacy/guide/images/apm-highlight-breakdown-charts.png b/docs/legacy/guide/images/apm-highlight-breakdown-charts.png deleted file mode 100644 index cbe6eb4bbe3..00000000000 Binary files a/docs/legacy/guide/images/apm-highlight-breakdown-charts.png and /dev/null differ diff --git a/docs/legacy/guide/images/apm-highlight-rum-maps.png b/docs/legacy/guide/images/apm-highlight-rum-maps.png deleted file mode 100644 index f3992fac4a8..00000000000 Binary files a/docs/legacy/guide/images/apm-highlight-rum-maps.png and /dev/null differ diff --git a/docs/legacy/guide/images/apm-highlight-sample-rate.png b/docs/legacy/guide/images/apm-highlight-sample-rate.png deleted file mode 100644 index 11c63d0dcfb..00000000000 Binary files a/docs/legacy/guide/images/apm-highlight-sample-rate.png and /dev/null differ diff --git a/docs/legacy/guide/images/apm-settings-kib.png b/docs/legacy/guide/images/apm-settings-kib.png deleted file mode 100644 index 876f135da93..00000000000 Binary files a/docs/legacy/guide/images/apm-settings-kib.png and /dev/null differ diff --git a/docs/legacy/guide/images/apm-transactions-overview.png b/docs/legacy/guide/images/apm-transactions-overview.png deleted file mode 100644 index c3c10fcb35e..00000000000 Binary files 
a/docs/legacy/guide/images/apm-transactions-overview.png and /dev/null differ diff --git a/docs/legacy/guide/images/breakdown-release-notes.png b/docs/legacy/guide/images/breakdown-release-notes.png deleted file mode 100644 index afca76a7632..00000000000 Binary files a/docs/legacy/guide/images/breakdown-release-notes.png and /dev/null differ diff --git a/docs/legacy/guide/images/chained-exceptions.png b/docs/legacy/guide/images/chained-exceptions.png deleted file mode 100644 index e187defe5a0..00000000000 Binary files a/docs/legacy/guide/images/chained-exceptions.png and /dev/null differ diff --git a/docs/legacy/guide/images/dt-sampling-example.png b/docs/legacy/guide/images/dt-sampling-example.png deleted file mode 100644 index 015b7c67e7f..00000000000 Binary files a/docs/legacy/guide/images/dt-sampling-example.png and /dev/null differ diff --git a/docs/legacy/guide/images/dt-trace-ex1.png b/docs/legacy/guide/images/dt-trace-ex1.png deleted file mode 100644 index ca97955ee8b..00000000000 Binary files a/docs/legacy/guide/images/dt-trace-ex1.png and /dev/null differ diff --git a/docs/legacy/guide/images/dt-trace-ex2.png b/docs/legacy/guide/images/dt-trace-ex2.png deleted file mode 100644 index 3df0827f586..00000000000 Binary files a/docs/legacy/guide/images/dt-trace-ex2.png and /dev/null differ diff --git a/docs/legacy/guide/images/dt-trace-ex3.png b/docs/legacy/guide/images/dt-trace-ex3.png deleted file mode 100644 index 1bb666b030a..00000000000 Binary files a/docs/legacy/guide/images/dt-trace-ex3.png and /dev/null differ diff --git a/docs/legacy/guide/images/ecommerce-dashboard.png b/docs/legacy/guide/images/ecommerce-dashboard.png deleted file mode 100644 index f68dc3cc568..00000000000 Binary files a/docs/legacy/guide/images/ecommerce-dashboard.png and /dev/null differ diff --git a/docs/legacy/guide/images/geo-location.jpg b/docs/legacy/guide/images/geo-location.jpg deleted file mode 100644 index 5b80e1e7a8f..00000000000 Binary files a/docs/legacy/guide/images/geo-location.jpg and /dev/null differ diff --git a/docs/legacy/guide/images/java-kafka.png b/docs/legacy/guide/images/java-kafka.png deleted file mode 100644 index b568e3592e9..00000000000 Binary files a/docs/legacy/guide/images/java-kafka.png and /dev/null differ diff --git a/docs/legacy/guide/images/java-metadata.png b/docs/legacy/guide/images/java-metadata.png deleted file mode 100644 index f7d28526f43..00000000000 Binary files a/docs/legacy/guide/images/java-metadata.png and /dev/null differ diff --git a/docs/legacy/guide/images/jvm-release-notes.png b/docs/legacy/guide/images/jvm-release-notes.png deleted file mode 100644 index ffeab27e102..00000000000 Binary files a/docs/legacy/guide/images/jvm-release-notes.png and /dev/null differ diff --git a/docs/legacy/guide/images/kibana-geo-data.png b/docs/legacy/guide/images/kibana-geo-data.png deleted file mode 100644 index a80faefed97..00000000000 Binary files a/docs/legacy/guide/images/kibana-geo-data.png and /dev/null differ diff --git a/docs/legacy/guide/images/open-telemetry-elastic-arch.png b/docs/legacy/guide/images/open-telemetry-elastic-arch.png deleted file mode 100644 index 7530deb3618..00000000000 Binary files a/docs/legacy/guide/images/open-telemetry-elastic-arch.png and /dev/null differ diff --git a/docs/legacy/guide/images/open-telemetry-exporter-arch.png b/docs/legacy/guide/images/open-telemetry-exporter-arch.png deleted file mode 100644 index 4499d65ec6b..00000000000 Binary files a/docs/legacy/guide/images/open-telemetry-exporter-arch.png and /dev/null differ diff 
--git a/docs/legacy/guide/images/open-telemetry-protocol-arch.png b/docs/legacy/guide/images/open-telemetry-protocol-arch.png deleted file mode 100644 index 31a382ad393..00000000000 Binary files a/docs/legacy/guide/images/open-telemetry-protocol-arch.png and /dev/null differ diff --git a/docs/legacy/guide/images/remote-config-release-notes.png b/docs/legacy/guide/images/remote-config-release-notes.png deleted file mode 100644 index 19e52a203be..00000000000 Binary files a/docs/legacy/guide/images/remote-config-release-notes.png and /dev/null differ diff --git a/docs/legacy/guide/images/siem-apm-integration.png b/docs/legacy/guide/images/siem-apm-integration.png deleted file mode 100644 index ef217bcbad2..00000000000 Binary files a/docs/legacy/guide/images/siem-apm-integration.png and /dev/null differ diff --git a/docs/legacy/guide/images/structured-filters.jpg b/docs/legacy/guide/images/structured-filters.jpg deleted file mode 100644 index c454707025d..00000000000 Binary files a/docs/legacy/guide/images/structured-filters.jpg and /dev/null differ diff --git a/docs/legacy/guide/index.asciidoc b/docs/legacy/guide/index.asciidoc deleted file mode 100644 index ce78e0cfd51..00000000000 --- a/docs/legacy/guide/index.asciidoc +++ /dev/null @@ -1,38 +0,0 @@ -include::../../version.asciidoc[] -include::{asciidoc-dir}/../../shared/attributes.asciidoc[] - -:apm-ref-all: https://www.elastic.co/guide/en/apm/get-started/ - -ifndef::apm-integration-docs[] -[[gettting-started]] -= APM Overview -endif::[] - -ifdef::apm-integration-docs[] -// Overwrite links to the APM Overview and APM Server Ref. Point to APM Guide instead. -:apm-overview-ref-v: {apm-guide-ref} -:apm-guide-ref: {apm-guide-ref} -:apm-server-ref-v: {apm-guide-ref} -:apm-server-ref: {apm-guide-ref} - -[[legacy-apm-overview]] -= Legacy APM Overview - -include::./overview.asciidoc[] -endif::[] - -include::./apm-doc-directory.asciidoc[] - -include::./install-and-run.asciidoc[] - -include::./quick-start-overview.asciidoc[] - -include::./apm-data-model.asciidoc[] - -include::./features.asciidoc[] - -include::./troubleshooting.asciidoc[] - -include::./apm-breaking-changes.asciidoc[] - -include::./redirects.asciidoc[] diff --git a/docs/legacy/guide/install-and-run.asciidoc b/docs/legacy/guide/install-and-run.asciidoc deleted file mode 100644 index f045766d889..00000000000 --- a/docs/legacy/guide/install-and-run.asciidoc +++ /dev/null @@ -1,102 +0,0 @@ -[[install-and-run]] -== Quick start guide - -IMPORTANT: {deprecation-notice-installation} - -This guide describes how to get started quickly with Elastic APM. You’ll learn how to: - -* Spin up {es}, {kib}, and APM Server on {ess} -* Install APM agents -* Basic configuration options -* Visualize your APM data in {kib} - -[float] -[[before-installation]] -=== Step 1: Spin up the {stack} - -include::../tab-widgets/spin-up-stack-widget.asciidoc[] - -[float] -[[agents]] -=== Step 2: Install APM agents - -// This tagged region is reused in the Observability docs. -// tag::apm-agent[] -APM agents are written in the same language as your service. -To monitor a new service, you must install the agent and configure it with a service name, APM Server URL, and Secret token or API key. - -[[choose-service-name]] -* *Service name*: Service names are used to differentiate data from each of your services. -Elastic APM includes the service name field on every document that it saves in {es}. 
-If you change the service name after using Elastic APM, -you will see the old service name and the new service name as two separate services. -Make sure you choose a good service name before you get started. -+ -The service name can only contain alphanumeric characters, -spaces, underscores, and dashes (must match `^[a-zA-Z0-9 _-]+$`). - -* *APM Server URL*: The host and port that APM Server listens for events on. - -* *Secret token or API key*: Authentication method for Agent/Server communication. -See {apm-server-ref-v}/secure-communication-agents.html[secure communication with APM Agents] to learn more. - -Select your service's language for installation instructions: -// end::apm-agent[] - --- -include::../tab-widgets/install-agents-widget.asciidoc[] --- - -TIP: Check the {apm-overview-ref-v}/agent-server-compatibility.html[Agent/Server compatibility matrix] to ensure you're using agents that are compatible with your version of {es}. - - -[float] -[[configure-apm]] -=== Step 3: Advanced configuration (optional) - -// This tagged region is reused in the Observability docs. -// tag::configure-agents[] -There are many different ways to tweak and tune the Elastic APM ecosystem to your needs. - -*Configure APM agents* - -APM agents have a number of configuration options that allow you to fine tune things like -environment names, sampling rates, instrumentations, metrics, and more. -Broadly speaking, there are two ways to configure APM agents: -// end::configure-agents[] - -include::../tab-widgets/configure-agent-widget.asciidoc[] - -*Configure APM Server* - -include::../tab-widgets/configure-server-widget.asciidoc[] - -[float] -[[visualize-kibana]] -=== Step 4: Visualize in {kib} - -The {apm-app} in {kib} allows you to monitor your software services and applications in real-time; -visualize detailed performance information on your services, identify and analyze errors, -and monitor host-level and agent-specific metrics like JVM and Go runtime metrics. - -To open the {apm-app}: - -. Launch {kib}: -+ --- -include::../../shared/open-kibana/open-kibana-widget.asciidoc[] --- - -. In the side navigation, under *{observability}*, select *APM*. - -[float] -[[what-next]] -=== What's next? - -Now that you have APM data streaming into {ES}, -head over to the {kibana-ref}/xpack-apm.html[{apm-app} reference] to learn more about what you can -do with {kib}'s {apm-app}. - -// Need to add more here -// Get a deeper understanding by learning about [[concepts]] -// Learn how to do things with [[how-to guides]] \ No newline at end of file diff --git a/docs/legacy/guide/obs-integrations.asciidoc b/docs/legacy/guide/obs-integrations.asciidoc deleted file mode 100644 index f7c7a52936b..00000000000 --- a/docs/legacy/guide/obs-integrations.asciidoc +++ /dev/null @@ -1,190 +0,0 @@ -[[observability-integrations]] -=== {observability} integrations - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -Elastic APM supports integrations with other observability solutions. - -// remove float tag once other integrations are added -[float] -[[apm-logging-integration]] -==== Logging integration - -Many applications use logging frameworks to help record, format, and append an application's logs. -Elastic APM now offers a way to make your application logs even more useful, -by integrating with the most popular logging frameworks in their respective languages. 
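As a concrete example, here's a sketch using the Python agent's standard-library `logging` integration; the format string, which mirrors the unstructured-log example later in this section, is an assumption of convenience:

[source,python]
----
import logging

from elasticapm.handlers.logging import LoggingFilter

console = logging.StreamHandler()
console.addFilter(LoggingFilter())  # exposes elasticapm_* fields on each record
logging.basicConfig(
    handlers=[console],
    level=logging.INFO,
    format=("%(asctime)s %(levelname)s %(message)s "
            "| elasticapm transaction.id=%(elasticapm_transaction_id)s "
            "trace.id=%(elasticapm_trace_id)s span.id=%(elasticapm_span_id)s"),
)
----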
Injecting trace information into your logs allows you to explore them in the {observability-guide}/monitor-logs.html[{logs-app}],
then jump straight into the corresponding APM traces -- all while preserving the trace context.

To get started:

. Enable log correlation
. Add APM identifiers to your logs
. Ingest your logs into {es}

[float]
===== Enable log correlation

// temporary attribute for ECS 1.1
// Remove after 7.4 release
:ecs-ref: https://www.elastic.co/guide/en/ecs/1.1

Some Agents require you to first enable log correlation in the Agent.
This is done with a configuration variable, and is different for each Agent.
See the relevant https://www.elastic.co/guide/en/apm/agent/index.html[Agent documentation] for further information.

// Not enough of the Agent docs are ready yet.
// Commenting these out and will replace when ready.
// * *Java*: {apm-java-ref-v}/config-logging.html#config-enable-log-correlation[`enable_log_correlation`]
// * *.NET*: {apm-dotnet-ref-v}/[]
// * *Node.js*: {apm-node-ref-v}/[]
// * *Python*: {apm-py-ref-v}/[]
// * *Ruby*: {apm-ruby-ref-v}/[]
// * *Rum*: {apm-rum-ref-v}/[]

[float]
===== Add APM identifiers to your logs

Once log correlation is enabled,
you must ensure your logs contain APM identifiers.
In some supported frameworks, this is already done for you.
In other scenarios, like for unstructured logs,
you'll need to add APM identifiers to your logs in an easy-to-parse manner.

The identifiers we're interested in are: {ecs-ref}/ecs-tracing.html[`trace.id`] and
{ecs-ref}/ecs-tracing.html[`transaction.id`]. Certain Agents also support the `span.id` field.

The process for adding these fields will differ based on the Agent you're using, the logging framework,
and the type and structure of your logs.

See the relevant https://www.elastic.co/guide/en/apm/agent/index.html[Agent documentation] to learn more.

// Not enough of the Agent docs have been backported yet.
// Commenting these out and will replace when ready.
// * *Go*: {apm-go-ref-v}/supported-tech.html#supported-tech-logging[Logging frameworks]
// * *Java*: {apm-java-ref-v}/[] NOT merged yet https://github.com/elastic/apm-agent-java/pull/854
// * *.NET*: {apm-dotnet-ref-v}/[]
// * *Node.js*: {apm-node-ref-v}/[]
// * *Python*: {apm-py-ref-v}/[]
// * *Ruby*: {apm-ruby-ref-v}/[] Not backported yet https://www.elastic.co/guide/en/apm/agent/ruby/master/log-correlation.html
// * *Rum*: {apm-rum-ref-v}/[]

[float]
===== Ingest your logs into {es}

Once your logs contain the appropriate identifiers (fields), you need to ingest them into {es}.
Luckily, we've got a tool for that -- {filebeat} is Elastic's log shipper.
The {filebeat-ref}/filebeat-installation-configuration.html[{filebeat} quick start]
guide will walk you through the setup process.

Because logging frameworks and formats vary greatly between different programming languages,
there is no one-size-fits-all approach for ingesting your logs into {es}.
The following tips should get you going in the right direction:

**Download {filebeat}**

There are many ways to download and get started with {filebeat}.
Read the {filebeat-ref}/filebeat-installation-configuration.html[{filebeat} quick start] guide to determine which is best for you.

**Configure {filebeat}**

Modify the {filebeat-ref}/configuring-howto-filebeat.html[`filebeat.yml`] configuration file to your needs.
Here are some recommendations:

* Set `filebeat.inputs` to point to the source of your logs
* Point {filebeat} to the same {stack} that is receiving your APM data
** If you're using Elastic Cloud, set `cloud.id` and `cloud.auth`.
** If you're using a manual setup, use `output.elasticsearch.hosts`.

[source,yml]
----
filebeat.inputs:
- type: log <1>
  paths: <2>
    - /var/log/*.log
cloud.id: "staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRjZWMNjN2Q3YTllOTYyNTc0Mw==" <3>
cloud.auth: "elastic:YOUR_PASSWORD" <4>
----
<1> Configures the `log` input
<2> Path(s) that must be crawled to fetch the log lines
<3> Used to resolve the {es} and {kib} URLs for {ecloud}
<4> Authorization token for {ecloud}

**JSON logs**

For JSON logs you can use the {filebeat-ref}/filebeat-input-log.html[`log` input] to read lines from log files.
Here's what a sample configuration might look like:

[source,yml]
----
filebeat.inputs:
- type: log
  paths:
    - /var/log/*.log
  json.keys_under_root: true <1>
  json.add_error_key: true <2>
  json.message_key: message <3>
----
<1> `true` copies JSON keys to the top level in the output document
<2> Tells {filebeat} to add an `error.message` and `error.type: json` key in case of JSON unmarshalling errors
<3> Specifies the JSON key on which to apply line filtering and multiline settings

**Parsing unstructured logs**

Consider the following log that is decorated with the `transaction.id` and `trace.id` fields:

[source,log]
----
2019-09-18 21:29:49,525 - django.server - ERROR - "GET / HTTP/1.1" 500 27 | elasticapm transaction.id=fcfbbe447b9b6b5a trace.id=f965f4cc5b59bdc62ae349004eece70c span.id=None
----

All that's needed now is an {filebeat-ref}/configuring-ingest-node.html[ingest node processor] to preprocess your logs and
extract these structured fields before they are indexed in {es}.
To do this, you'd need to create a pipeline that uses {es}'s {ref}/grok-processor.html[Grok Processor].
Here's an example:

[source, json]
----
PUT _ingest/pipeline/log-correlation
{
  "description": "Parses the log correlation IDs out of the raw plain-text log",
  "processors": [
    {
      "grok": {
        "field": "message", <1>
        "patterns": ["%{GREEDYDATA:message} \\| elasticapm transaction.id=%{DATA:transaction.id} trace.id=%{DATA:trace.id} span.id=%{GREEDYDATA:span.id}"] <2>
      }
    }
  ]
}
----
<1> The field to use for grok expression parsing
<2> An ordered list of grok expressions to match and extract named captures with:
`%{DATA:transaction.id}` captures the value of `transaction.id`,
`%{DATA:trace.id}` captures the value of `trace.id`, and
`%{GREEDYDATA:span.id}` captures the value of `span.id`.
Note that the literal `|` separator is escaped so it isn't treated as regular-expression alternation.

NOTE: Depending on how you've added APM data to your logs,
you may need to tweak this grok pattern for it to work with your setup.
In addition, it's possible to extract more structure out of your logs.
Make sure to follow the {ecs-ref}/ecs-field-reference.html[Elastic Common Schema]
when defining which fields you are storing in {es}.

Then, configure {filebeat} to use the processor in `filebeat.yml`:

[source, json]
----
output.elasticsearch:
  pipeline: "log-correlation"
----

If your logs contain messages that span multiple lines of text (common in Java stack traces),
you'll also need to configure {filebeat-ref}/multiline-examples.html[multiline settings].

The following example shows how to configure {filebeat} to handle a multiline message where the first line of the message begins with a bracket ([).
- -[source,yml] ----- -multiline.pattern: '^\[' -multiline.negate: true -multiline.match: after ----- diff --git a/docs/legacy/guide/opentelemetry-elastic.asciidoc b/docs/legacy/guide/opentelemetry-elastic.asciidoc deleted file mode 100644 index 073dda96db0..00000000000 --- a/docs/legacy/guide/opentelemetry-elastic.asciidoc +++ /dev/null @@ -1,495 +0,0 @@ -[[open-telemetry-elastic]] -=== OpenTelemetry integration - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -:ot-spec: https://github.com/open-telemetry/opentelemetry-specification/blob/master/README.md -:ot-grpc: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp.md#otlpgrpc -:ot-http: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp.md#otlphttp -:ot-contrib: https://github.com/open-telemetry/opentelemetry-collector-contrib -:ot-resource: {ot-contrib}/tree/main/processor/resourceprocessor -:ot-attr: {ot-contrib}/blob/main/processor/attributesprocessor -:ot-repo: https://github.com/open-telemetry/opentelemetry-collector -:ot-pipelines: https://opentelemetry.io/docs/collector/configuration/#service -:ot-extension: {ot-repo}/blob/master/extension/README.md -:ot-scaling: {ot-repo}/blob/master/docs/performance.md - -:ot-collector: https://opentelemetry.io/docs/collector/getting-started/ -:ot-dockerhub: https://hub.docker.com/r/otel/opentelemetry-collector-contrib - -https://opentelemetry.io/docs/concepts/what-is-opentelemetry/[OpenTelemetry] is a set -of APIs, SDKs, tooling, and integrations that enable the capture and management of -telemetry data from your services for greater observability. For more information about the -OpenTelemetry project, see the {ot-spec}[spec]. - -Elastic OpenTelemetry integrations allow you to reuse your existing OpenTelemetry -instrumentation to quickly analyze distributed traces and metrics to help you monitor -business KPIs and technical components with the {stack}. - -[float] -[[open-telemetry-elastic-protocol]] -==== APM Server native support of OpenTelemetry protocol - -Elastic APM Server natively supports the OpenTelemetry protocol. -This means trace data and metrics collected from your applications and infrastructure can -be sent directly to Elastic APM Server using the OpenTelemetry protocol. - -image::./images/open-telemetry-protocol-arch.png[OpenTelemetry Elastic architecture diagram] - -[float] -[[instrument-apps-apm-server]] -====== Instrument applications - -To export traces and metrics to APM Server, ensure that you have instrumented your services and applications -with the OpenTelemetry API, SDK, or both. For example, if you are a Java developer, you need to instrument your Java app using the -https://github.com/open-telemetry/opentelemetry-java-instrumentation[OpenTelemetry agent for Java]. - -By defining the following environment variables, you can configure the OTLP endpoint so that the OpenTelemetry agent communicates with -APM Server. 
- -[source,bash] ----- -export OTEL_RESOURCE_ATTRIBUTES=service.name=checkoutService,service.version=1.1,deployment.environment=production -export OTEL_EXPORTER_OTLP_ENDPOINT=https://apm_server_url:8200 -export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer an_apm_secret_token" -java -javaagent:/path/to/opentelemetry-javaagent-all.jar \ - -classpath lib/*:classes/ \ - com.mycompany.checkout.CheckoutServiceServer ----- - -|=== - -| `OTEL_RESOURCE_ATTRIBUTES` | Fields that describe the service and the environment that the service runs in. See <> for more information. - -| `OTEL_EXPORTER_OTLP_ENDPOINT` | APM Server URL. The host and port that APM Server listens for events on. - -| `OTEL_EXPORTER_OTLP_HEADERS` | Authorization header that includes the Elastic APM Secret token or API key: `"Authorization=Bearer an_apm_secret_token"` or `"Authorization=ApiKey an_api_key"`. - -For information on how to format an API key, see our {apm-server-ref-v}/api-key.html[API key] docs. - -Please note the required space between `Bearer` and `an_apm_secret_token`, and `APIKey` and `an_api_key`. - -| `OTEL_EXPORTER_OTLP_CERTIFICATE` | The trusted certificate used to verify the TLS credentials of the client. (optional) - -|=== - -You are now ready to collect traces and <> before <> -and <> in {kib}. - -[float] -[[open-telemetry-collector]] -===== Connect OpenTelemetry Collector instances - -Using the OpenTelemetry collector instances in your architecture, you can connect them to Elastic {observability} using the OTLP exporter. - -[source,yaml] ----- -receivers: <1> - # ... - otlp: - -processors: <2> - # ... - memory_limiter: - check_interval: 1s - limit_mib: 2000 - batch: - -exporters: - logging: - loglevel: warn <3> - otlp/elastic: <4> - # Elastic APM server https endpoint without the "https://" prefix - endpoint: "${ELASTIC_APM_SERVER_ENDPOINT}" <5> <7> - headers: - # Elastic APM Server secret token - Authorization: "Bearer ${ELASTIC_APM_SECRET_TOKEN}" <6> <7> - -service: - pipelines: - traces: - receivers: [otlp] - exporters: [logging, otlp/elastic] - metrics: - receivers: [otlp] - exporters: [logging, otlp/elastic] - logs: <8> - receivers: [otlp] - exporters: [logging, otlp/elastic] ----- -<1> The receivers, such as -the https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver/otlpreceiver[OTLP receiver], that forward data emitted by APM agents or the https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver/hostmetricsreceiver[host metrics receiver]. -<2> We recommend using the https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/batchprocessor/README.md[Batch processor] and also suggest using the https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/memorylimiter/README.md[memory limiter processor]. For more information, see https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/README.md#recommended-processors[Recommended processors]. -<3> The https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/loggingexporter[logging exporter] is helpful for troubleshooting and supports various logging levels: `debug`, `info`, `warn`, and `error`. -<4> Elastic {observability} endpoint configuration. -APM Server supports a ProtoBuf payload via both the OTLP protocol over gRPC transport {ot-grpc}[(OTLP/gRPC)] -and the OTLP protocol over HTTP transport {ot-http}[(OTLP/HTTP)]. 
-To learn more about these exporters, see the OpenTelemetry Collector documentation: -https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlphttpexporter[OTLP/HTTP Exporter] or -https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlpexporter[OTLP/gRPC exporter]. -<5> Hostname and port of the APM Server endpoint. For example, `elastic-apm-server:8200`. -<6> Credential for Elastic APM {apm-server-ref-v}/secret-token.html[secret token authorization] (`Authorization: "Bearer a_secret_token"`) or {apm-server-ref-v}/api-key.html[API key authorization] (`Authorization: "ApiKey an_api_key"`). -<7> Environment-specific configuration parameters can be conveniently passed in as environment variables documented https://opentelemetry.io/docs/collector/configuration/#configuration-environment-variables[here] (e.g. `ELASTIC_APM_SERVER_ENDPOINT` and `ELASTIC_APM_SECRET_TOKEN`). -<8> To send OpenTelemetry logs to {stack} version 8.0+, declare a `logs` pipeline. - -You're now ready to export traces and metrics from your services and applications. - -[float] -[[open-telemetry-elastic-metrics]] -==== Collect metrics - -IMPORTANT: When collecting metrics, please note that the https://www.javadoc.io/doc/io.opentelemetry/opentelemetry-api/latest/io/opentelemetry/api/metrics/DoubleValueRecorder.html[`DoubleValueRecorder`] -and https://www.javadoc.io/doc/io.opentelemetry/opentelemetry-api/latest/io/opentelemetry/api/metrics/LongValueObserver.html[`LongValueRecorder`] metrics are not yet supported. - -Here's an example of how to capture business metrics from a Java application. - -[source,java] ----- -// initialize metric -Meter meter = GlobalMetricsProvider.getMeter("my-frontend"); -DoubleCounter orderValueCounter = meter.doubleCounterBuilder("order_value").build(); - -public void createOrder(HttpServletRequest request) { - - // create order in the database - ... - // increment business metrics for monitoring - orderValueCounter.add(orderPrice); -} ----- - -See the https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md[Open Telemetry Metrics API] -for more information. - -[float] -[[open-telemetry-elastic-verify]] -===== Verify OpenTelemetry metrics data - -Use *Discover* to validate that metrics are successfully reported to {kib}. - -. Launch {kib}: -+ --- -include::../../shared/open-kibana/open-kibana-widget.asciidoc[] --- - -. Open the main menu, then click *Discover*. -. Select `apm-*` as your index pattern. -. Filter the data to only show documents with metrics: `processor.name :"metric"` -. Narrow your search with a known OpenTelemetry field. For example, if you have an `order_value` field, add `order_value: *` to your search to return -only OpenTelemetry metrics documents. - -[float] -[[open-telemetry-elastic-kibana]] -===== Visualize in {kib} - -TSVB within {kib} is the recommended visualization for OpenTelemetry metrics. TSVB is a time series data visualizer that allows you to use the -{es} aggregation framework's full power. With TSVB, you can combine an infinite number of aggregations to display complex data. - -// lint ignore ecommerce -In this example eCommerce OpenTelemetry dashboard, there are four visualizations: sales, order count, product cache, and system load. The dashboard provides us with business -KPI metrics, along with performance-related metrics. 
-

[role="screenshot"]
image::./images/ecommerce-dashboard.png[OpenTelemetry visualizations]

Let's look at how this dashboard was created, specifically the Sales USD and System load visualizations.

. Open the main menu, then click *Dashboard*.
. Click *Create dashboard*.
. Click *Save*, enter the name of your dashboard, and then click *Save* again.
. Let's add a Sales USD visualization. Click *Edit*.
. Click *Create new* and then select *TSVB*.
. For the label name, enter Sales USD, and then select the following:
+
* Aggregation: `Counter Rate`.
* Field: `order_sum`.
* Scale: `auto`.
* Group by: `Everything`.
. Click *Save*, enter Sales USD as the visualization name, and then click *Save and return*.
. Now let's create a visualization of load averages on the system. Click *Create new*.
. Select *TSVB*.
. Select the following:
+
* Aggregation: `Average`.
* Field: `system.cpu.load_average.1m`.
* Group by: `Terms`.
* By: `host.ip`.
* Top: `10`.
* Order by: `Doc Count (default)`.
* Direction: `Descending`.
. Click *Save*, enter System load per host IP as the visualization name, and then click *Save and return*.
+
Both visualizations are now displayed on your custom dashboard.

IMPORTANT: By default, Discover shows data for the last 15 minutes. If you have a time-based index
and no data displays, you might need to increase the time range.

[float]
[[open-telemetry-aws-lambda-elastic]]
==== AWS Lambda Support

AWS Lambda functions can be instrumented with OpenTelemetry and monitored with Elastic {observability}.

To get started, follow the official AWS Distro for OpenTelemetry Lambda https://aws-otel.github.io/docs/getting-started/lambda[getting started documentation] and configure the OpenTelemetry Collector to output traces and metrics to your Elastic cluster.

[float]
[[open-telemetry-aws-lambda-elastic-java]]
===== Instrumenting AWS Lambda Java functions

NOTE: For a better startup time, we recommend using SDK-based instrumentation, i.e. manual instrumentation of the code, rather than auto instrumentation.

To instrument AWS Lambda Java functions, follow the official https://aws-otel.github.io/docs/getting-started/lambda/lambda-java[AWS Distro for OpenTelemetry Lambda Support For Java].

Noteworthy configuration elements:

* AWS Lambda Java functions should implement the `com.amazonaws.services.lambda.runtime.RequestHandler` interface:
+
[source,java]
----
public class ExampleRequestHandler
        implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent event, Context context) {
        // add your code ...
        return new APIGatewayProxyResponseEvent().withStatusCode(200);
    }
}
----

* When using SDK-based instrumentation, frameworks you want visibility into must be instrumented manually.
** The example below instruments https://square.github.io/okhttp/4.x/okhttp/okhttp3/-ok-http-client/[OkHttpClient] with the OpenTelemetry instrumentation library https://search.maven.org/artifact/io.opentelemetry.instrumentation/opentelemetry-okhttp-3.0/1.3.1-alpha/jar[io.opentelemetry.instrumentation:opentelemetry-okhttp-3.0:1.3.1-alpha]:
+
[source,java]
----
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.instrumentation.okhttp.v3_0.OkHttpTracing;
import okhttp3.OkHttpClient;

OkHttpClient httpClient = new OkHttpClient.Builder()
    .addInterceptor(OkHttpTracing.create(GlobalOpenTelemetry.get()).newInterceptor())
    .build();
----

* The configuration of the OpenTelemetry Collector, with the definition of the Elastic {observability} endpoint, can be added to the root directory of the Lambda binaries (e.g.
defined in `src/main/resources/opentelemetry-collector.yaml`)
+
[source,yaml]
----
# Copy opentelemetry-collector.yaml into the root directory of the Lambda function
# Set an environment variable 'OPENTELEMETRY_COLLECTOR_CONFIG_FILE' to '/var/task/opentelemetry-collector.yaml'
receivers:
  otlp:
    protocols:
      http:
      grpc:

exporters:
  logging:
    loglevel: debug
  otlp/elastic:
    # Elastic APM Server https endpoint without the "https://" prefix
    endpoint: "${ELASTIC_OTLP_ENDPOINT}" <1>
    headers:
      # Elastic APM Server secret token
      Authorization: "Bearer ${ELASTIC_OTLP_TOKEN}" <1>

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [logging, otlp/elastic]
    metrics:
      receivers: [otlp]
      exporters: [logging, otlp/elastic]
    logs:
      receivers: [otlp]
      exporters: [logging, otlp/elastic]
----
<1> Environment-specific configuration parameters can be conveniently passed in as environment variables: `ELASTIC_OTLP_ENDPOINT` and `ELASTIC_OTLP_TOKEN`.

* Configure the AWS Lambda Java function with:
** https://docs.aws.amazon.com/lambda/latest/dg/API_Layer.html[Function
layer]: The latest https://aws-otel.github.io/docs/getting-started/lambda/lambda-java[AWS
Lambda layer for OpenTelemetry] (e.g. `arn:aws:lambda:eu-west-1:901920570463:layer:aws-otel-java-wrapper-ver-1-2-0:1`)
** https://docs.aws.amazon.com/lambda/latest/dg/API_TracingConfig.html[`TracingConfig` / Mode] set to `PassThrough`
** https://docs.aws.amazon.com/lambda/latest/dg/API_FunctionConfiguration.html[`FunctionConfiguration` / Timeout] set to more than 10 seconds to support the longer cold start inherent to the Lambda Java Runtime
** Export the environment variables:
*** `AWS_LAMBDA_EXEC_WRAPPER="/opt/otel-proxy-handler"` for wrapping handlers proxied through the API Gateway (see https://aws-otel.github.io/docs/getting-started/lambda/lambda-java#enable-auto-instrumentation-for-your-lambda-function[here])
*** `OTEL_PROPAGATORS="tracecontext, baggage"` to override the default setting, which also enables X-Ray headers and causes interference between OpenTelemetry and X-Ray
*** `OPENTELEMETRY_COLLECTOR_CONFIG_FILE="/var/task/opentelemetry-collector.yaml"` to specify the path to your OpenTelemetry Collector configuration

[float]
[[open-telemetry-aws-lambda-elastic-java-terraform]]
===== Instrumenting AWS Lambda Java functions with Terraform

We recommend using an infrastructure as code solution like Terraform or Ansible to manage the configuration of your AWS Lambda functions.

Here is an example of an AWS Lambda Java function managed with Terraform and the https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_function[AWS Provider / Lambda Functions]:

* Sample Terraform code: https://github.com/cyrille-leclerc/my-serverless-shopping-cart/tree/main/checkout-function/deploy
* Note that the Terraform code to manage the HTTP API Gateway (https://github.com/cyrille-leclerc/my-serverless-shopping-cart/tree/main/utils/terraform/api-gateway-proxy[here]) is copied from the official OpenTelemetry Lambda sample https://github.com/open-telemetry/opentelemetry-lambda/tree/e72467a085a2a6e57af133032f85ac5b8bbbb8d1/utils[here]

[float]
[[open-telemetry-aws-lambda-elastic-nodejs]]
===== Instrumenting AWS Lambda Node.js functions

NOTE: For a better startup time, we recommend using SDK-based (manual) instrumentation of the code rather than auto instrumentation.
-

To instrument AWS Lambda Node.js functions, see https://aws-otel.github.io/docs/getting-started/lambda/lambda-js[AWS Distro for OpenTelemetry Lambda Support For JavaScript].

The configuration of the OpenTelemetry Collector, with the definition of the Elastic {observability} endpoint, can be added to the root directory of the Lambda binaries: `src/main/resources/opentelemetry-collector.yaml`.

[source,yaml]
----
# Copy opentelemetry-collector.yaml into the root directory of the Lambda function
# Set an environment variable 'OPENTELEMETRY_COLLECTOR_CONFIG_FILE' to '/var/task/opentelemetry-collector.yaml'
receivers:
  otlp:
    protocols:
      http:
      grpc:

exporters:
  logging:
    loglevel: debug
  otlp/elastic:
    # Elastic APM Server https endpoint without the "https://" prefix
    endpoint: "${ELASTIC_OTLP_ENDPOINT}" <1>
    headers:
      # Elastic APM Server secret token
      Authorization: "Bearer ${ELASTIC_OTLP_TOKEN}" <1>

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [logging, otlp/elastic]
    metrics:
      receivers: [otlp]
      exporters: [logging, otlp/elastic]
    logs:
      receivers: [otlp]
      exporters: [logging, otlp/elastic]
----
<1> Environment-specific configuration parameters can be conveniently passed in as environment variables: `ELASTIC_OTLP_ENDPOINT` and `ELASTIC_OTLP_TOKEN`.

Configure the AWS Lambda Node.js function:

* https://docs.aws.amazon.com/lambda/latest/dg/API_Layer.html[Function
layer]: The latest https://aws-otel.github.io/docs/getting-started/lambda/lambda-js[AWS
Lambda layer for OpenTelemetry]. For example, `arn:aws:lambda:eu-west-1:901920570463:layer:aws-otel-nodejs-ver-0-23-0:1`
* https://docs.aws.amazon.com/lambda/latest/dg/API_TracingConfig.html[`TracingConfig` / Mode] set to `PassThrough`
* https://docs.aws.amazon.com/lambda/latest/dg/API_FunctionConfiguration.html[`FunctionConfiguration` / Timeout] set to more than 10 seconds to support the cold start of the Lambda JavaScript Runtime
* Export the environment variables:
** `AWS_LAMBDA_EXEC_WRAPPER="/opt/otel-handler"` for wrapping handlers proxied through the API Gateway. See https://aws-otel.github.io/docs/getting-started/lambda/lambda-js#enable-auto-instrumentation-for-your-lambda-function[enable auto-instrumentation for your Lambda function].
** `OTEL_PROPAGATORS="tracecontext"` to override the default setting, which also enables X-Ray headers and causes interference between OpenTelemetry and X-Ray
** `OPENTELEMETRY_COLLECTOR_CONFIG_FILE="/var/task/opentelemetry-collector.yaml"` to specify the path to your OpenTelemetry Collector configuration
** `OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:55681/v1/traces"` must be set until https://github.com/open-telemetry/opentelemetry-js/pull/2331[PR #2331] is merged and released.
** `OTEL_TRACES_SAMPLER="AlwaysOn"` defines the sampler strategy to use if one is not sent from the caller. Note that `AlwaysOn` can potentially create a very large amount of data, so in production set an appropriate sampling configuration, as per the https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/sdk.md#sampling[specification].

[float]
[[open-telemetry-aws-lambda-elastic-nodejs-terraform]]
===== Instrumenting AWS Lambda Node.js functions with Terraform

To manage the configuration of your AWS Lambda functions, we recommend using an infrastructure as code solution like Terraform or Ansible.
- -Here is an example of AWS Lambda Node.js function managed with Terraform and the https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_function[AWS Provider / Lambda Functions]: - -* https://github.com/michaelhyatt/terraform-aws-nodejs-api-worker-otel/tree/v0.23[Sample Terraform code] - -[float] -[[elastic-open-telemetry-resource-attributes]] -==== Resource attributes - -A resource attribute is a key/value pair containing information about the entity producing telemetry. -Resource attributes are mapped to Elastic Common Schema (ECS) fields like `service.*`, `cloud.*`, `process.*`, etc. -These fields describe the service and the environment that the service runs in. - -The examples below set the Elastic (ECS) `service.environment` field for the resource, i.e. service, that is producing trace events. -Note that Elastic maps the OpenTelemetry `deployment.environment` field to -the ECS `service.environment` field on ingestion. - -**OpenTelemetry agent** - -Use the `OTEL_RESOURCE_ATTRIBUTES` environment variable to pass resource attributes at process invocation. - -[source,bash] ----- -export OTEL_RESOURCE_ATTRIBUTES=deployment.environment=production ----- - -**OpenTelemetry collector** - -Use the {ot-resource}[resource processor] to set or apply changes to resource attributes. - -[source,yaml] ----- -... -processors: - resource: - attributes: - - key: deployment.environment - action: insert - value: production -... ----- - -[TIP] --- -Need to add event attributes instead? -Use attributes--not to be confused with resource attributes--to add data to span, log, or metric events. -Attributes can be added as a part of the OpenTelemetry instrumentation process or with the {ot-attr}[attributes processor]. --- - -[float] -[[elastic-open-telemetry-proxy-apm]] -==== Proxy requests to APM Server - -APM Server supports both the {ot-grpc}[(OTLP/gRPC)] and {ot-http}[(OTLP/HTTP)] protocol on the same port as Elastic APM agent requests. For ease of setup, we recommend using OTLP/HTTP when proxying or load balancing requests to the APM Server. - -If you use the OTLP/gRPC protocol, requests to the APM Server must use either HTTP/2 over TLS or HTTP/2 Cleartext (H2C). No matter which protocol is used, OTLP/gRPC requests will have the header: `"Content-Type: application/grpc"`. - -When using a layer 7 (L7) proxy like AWS ALB, requests must be proxied in a way that ensures requests to the APM Server follow the rules outlined above. For example, with ALB you can create rules to select an alternative backend protocol based on the headers of requests coming into ALB. In this example, you'd select the gRPC protocol when the `"Content-Type: application/grpc"` header exists on a request. - -For more information on how to configure an AWS ALB to support gRPC, see this AWS blog post: -https://aws.amazon.com/blogs/aws/new-application-load-balancer-support-for-end-to-end-http-2-and-grpc/[Application Load Balancer Support for End-to-End HTTP/2 and gRPC]. - -For more information on how APM Server services gRPC requests, see -https://github.com/elastic/apm-server/blob/main/dev_docs/otel.md#muxing-grpc-and-http11[Muxing gRPC and HTTP/1.1]. - -[float] -[[elastic-open-telemetry-known-limitations]] -==== Limitations - -[float] -[[elastic-open-telemetry-traces-limitations]] -===== OpenTelemetry traces - -* Traces of applications using `messaging` semantics might be wrongly displayed as `transactions` in the APM UI, while they should be considered `spans`. 
https://github.com/elastic/apm-server/issues/7001[#7001] -* Inability to see Stack traces in spans -* Inability in APM views to view the "Time Spent by Span Type" https://github.com/elastic/apm-server/issues/5747[#5747] -* Metrics derived from traces (throughput, latency, and errors) are not accurate when traces are sampled before being ingested by Elastic {observability} (for example, by an OpenTelemetry Collector or OpenTelemetry {apm-agent} or SDK) https://github.com/elastic/apm/issues/472[#472] - -[float] -[[elastic-open-telemetry-metrics-limitations]] -===== OpenTelemetry metrics - -* Inability to see host metrics in Elastic Metrics Infrastructure view when using the OpenTelemetry Collector host metrics receiver https://github.com/elastic/apm-server/issues/5310[#5310] - -[float] -[[elastic-open-telemetry-logs-limitations]] -===== OpenTelemetry logs - -* OpenTelemetry logs are not yet supported https://github.com/elastic/apm-server/issues/5491[#5491] - -[float] -[[elastic-open-telemetry-otlp-limitations]] -===== OpenTelemetry Line Protocol (OTLP) - -APM Server supports both the {ot-grpc}[(OTLP/gRPC)] and {ot-http}[(OTLP/HTTP)] protocol with ProtoBuf payload. -APM Server does not yet support JSON Encoding for OTLP/HTTP. - -[float] -[[elastic-open-telemetry-collector-exporter]] -===== OpenTelemetry Collector exporter for Elastic - -The https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/elasticexporter#legacy-opentelemetry-collector-exporter-for-elastic[OpenTelemetry Collector exporter for Elastic] -was deprecated in 7.13 and replaced by the native support of the OpenTelemetry Line Protocol in -Elastic {observability} (OTLP). To learn more, see -https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/elasticexporter#migration[migration]. diff --git a/docs/legacy/guide/opentracing.asciidoc b/docs/legacy/guide/opentracing.asciidoc deleted file mode 100644 index de90fa8917c..00000000000 --- a/docs/legacy/guide/opentracing.asciidoc +++ /dev/null @@ -1,24 +0,0 @@ -[[opentracing]] -=== OpenTracing bridge - -IMPORTANT: {deprecation-notice-data} - -Most Elastic APM agents have https://opentracing.io/[OpenTracing] compatible bridges. - -The OpenTracing bridge allows you to create Elastic APM <> and <> using the OpenTracing API. -This means you can reuse your existing OpenTracing instrumentation to quickly and easily begin using Elastic APM. - -[float] -==== Agent specific details - -Not all features of the OpenTracing API are supported, and there are some Elastic APM-specific tags you should be aware of. Please see the relevant Agent documentation for more detailed information: - -* {apm-go-ref-v}/opentracing.html[Go agent] -* {apm-java-ref-v}/opentracing-bridge.html[Java agent] -* {apm-node-ref-v}/opentracing.html[Node.js agent] -// * {apm-py-ref-v}/opentelemetry-bridge.html[Python agent] -* https://www.elastic.co/guide/en/apm/agent/python/6.x/opentelemetry-bridge.html[Python agent] -* {apm-ruby-ref-v}/opentracing.html[Ruby agent] -* {apm-rum-ref-v}/opentracing.html[JavaScript Real User Monitoring (RUM) agent] - -Additionally, the iOS agent can utilize the https://github.com/open-telemetry/opentelemetry-swift/tree/main/Sources/Importers/OpenTracingShim[`opentelemetry-swift/OpenTracingShim`]. 
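
As a concrete illustration of the bridge, here is a minimal sketch using the Java agent's OpenTracing bridge. It assumes the `co.elastic.apm:apm-opentracing` dependency is on the classpath, and the class and operation names are illustrative only:

[source,java]
----
import co.elastic.apm.opentracing.ElasticApmTracer;
import io.opentracing.Scope;
import io.opentracing.Span;
import io.opentracing.Tracer;

public class CheckoutJob {

    // The bridge implements the standard io.opentracing.Tracer interface,
    // so existing OpenTracing instrumentation keeps working unchanged.
    private final Tracer tracer = new ElasticApmTracer();

    public void run() {
        // A root OpenTracing span is reported as an Elastic APM transaction.
        Span span = tracer.buildSpan("checkout-job").start();
        try (Scope scope = tracer.activateSpan(span)) {
            // business logic ...
        } finally {
            span.finish();
        }
    }
}
----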
diff --git a/docs/legacy/guide/overview.asciidoc b/docs/legacy/guide/overview.asciidoc deleted file mode 100644 index 7dbb95308d6..00000000000 --- a/docs/legacy/guide/overview.asciidoc +++ /dev/null @@ -1,31 +0,0 @@ -**** -There are two ways to install, run, and manage Elastic APM: - -* With the Elastic APM integration -* With the standalone (legacy) APM Server binary - -This documentation focuses on option two: the **standalone (legacy) APM Server binary**. -{deprecation-notice-installation} -**** - -Elastic APM is an application performance monitoring system built on the {stack}. -It allows you to monitor software services and applications in real-time, by -collecting detailed performance information on response time for incoming requests, -database queries, calls to caches, external HTTP requests, and more. -This makes it easy to pinpoint and fix performance problems quickly. - -Elastic APM also automatically collects unhandled errors and exceptions. -Errors are grouped based primarily on the stack trace, -so you can identify new errors as they appear and keep an eye on how many times specific errors happen. - -Metrics are another vital source of information when debugging production systems. -Elastic APM agents automatically pick up basic host-level metrics and agent-specific metrics, -like JVM metrics in the Java Agent, and Go runtime metrics in the Go Agent. - -[float] -== Give Elastic APM a try - -Learn more about the <> that make up Elastic APM, -or jump right into the <>. - -NOTE: These docs will indiscriminately use the word "service" for both services and applications. \ No newline at end of file diff --git a/docs/legacy/guide/quick-start-overview.asciidoc b/docs/legacy/guide/quick-start-overview.asciidoc deleted file mode 100644 index 7f8e29a2fec..00000000000 --- a/docs/legacy/guide/quick-start-overview.asciidoc +++ /dev/null @@ -1,59 +0,0 @@ - -[[quick-start-overview]] -=== Quick start development environment - -IMPORTANT: {deprecation-notice-installation} - -// This tagged region is reused in the Observability docs. -// tag::dev-environment[] -ifeval::["{release-state}"=="unreleased"] - -Version {version} of APM Server has not yet been released. - -endif::[] - -ifeval::["{release-state}"!="unreleased"] - -If you're just looking for a quick way to try out Elastic APM, you can easily get started with Docker. -Just follow the steps below. - -**Create a docker-compose.yml file** - -The https://www.docker.elastic.co/[Elastic Docker registry] contains Docker images for all of the products -in the {stack}. -You can use Docker compose to easily get the default distributions of {es}, {kib}, -and APM Server up and running in Docker. - -Create a `docker-compose.yml` file and copy and paste in the following: - -["source","yaml",subs="attributes"] --------------------------------------------- -include::./docker-compose.yml[] --------------------------------------------- - -**Compose** - -Run `docker-compose up`. -Compose will download the official docker containers and start {es}, {kib}, and APM Server. - -**Install Agents** - -When Compose finishes, navigate to http://localhost:5601/app/kibana#/home/tutorial/apm. -Complete steps 4-6 to configure your application to collect and report APM data. - -**Visualize** - -Use the {apm-app} at http://localhost:5601/app/apm to visualize your application performance data! - -When you're done, `ctrl+c` will stop all of the containers. 
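
For reference, a minimal compose file for this setup might look like the sketch below. This is illustrative only: the image versions, ports, and `-E` settings are assumptions, not the exact file included earlier in this guide.

[source,yaml]
----
version: "2.4"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      # Single-node cluster for local experimentation only
      - discovery.type=single-node
    ports:
      - "9200:9200"

  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.0
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch

  apm-server:
    image: docker.elastic.co/apm/apm-server:7.17.0
    command: >
      apm-server -e
      -E output.elasticsearch.hosts=["http://elasticsearch:9200"]
      -E apm-server.kibana.enabled=true
      -E apm-server.kibana.host=http://kibana:5601
    ports:
      - "8200:8200"
    depends_on:
      - elasticsearch
      - kibana
----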
- -**Advanced Docker usage** - -If you're interested in learning more about all of the APM features available, -or running the Elastic stack on Docker in a production environment, see the following documentation: - -* {apm-server-ref-v}/running-on-docker.html[Running APM Server on Docker] -* {ref}/docker.html#docker-compose-file[Running {es} and {kib} on Docker] - -endif::[] -// end::dev-environment[] diff --git a/docs/legacy/guide/redirects.asciidoc b/docs/legacy/guide/redirects.asciidoc deleted file mode 100644 index 4c2a140b801..00000000000 --- a/docs/legacy/guide/redirects.asciidoc +++ /dev/null @@ -1,60 +0,0 @@ -ifndef::apm-integration-docs[] -["appendix",role="exclude",id="redirects"] -= Deleted pages -endif::[] - -ifdef::apm-integration-docs[] -["appendix",role="exclude",id="legacy-apm-redirects"] -= Deleted pages -endif::[] - -The following pages do not exist. They may have moved, been deleted, or have not been created yet. - -[role="exclude",id="go-compatibility"] -=== Go Agent Compatibility - -This page has moved. Please see <>. - -[role="exclude",id="java-compatibility"] -=== Java Agent Compatibility - -This page has moved. Please see <>. - -[role="exclude",id="dotnet-compatibility"] -=== .NET Agent Compatibility - -This page has moved. Please see <>. - -[role="exclude",id="nodejs-compatibility"] -=== Node.js Agent Compatibility - -This page has moved. Please see <>. - -[role="exclude",id="python-compatibility"] -=== Python Agent Compatibility - -This page has moved. Please see <>. - -[role="exclude",id="ruby-compatibility"] -=== Ruby Agent Compatibility - -This page has moved. Please see <>. - -[role="exclude",id="rum-compatibility"] -=== RUM Agent Compatibility - -This page has moved. Please see <>. - -[role="exclude",id="apm-release-notes"] -=== APM release highlights - -This page has moved. -Please see {observability-guide}/whats-new.html[What's new in {observability} {minor-version}]. - -Please see <>. - -[role="exclude",id="whats-new"] -=== What's new in APM {minor-version} - -This page has moved. -Please see {observability-guide}/whats-new.html[What's new in {observability} {minor-version}]. diff --git a/docs/legacy/guide/rum.asciidoc b/docs/legacy/guide/rum.asciidoc deleted file mode 100644 index 0cbfd7ec13c..00000000000 --- a/docs/legacy/guide/rum.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -[[rum]] -=== Real User Monitoring (RUM) - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -Real User Monitoring captures user interaction with clients such as web browsers. -The {apm-rum-ref-v}[JavaScript Agent] is Elastic’s RUM Agent. -To use it you need to {apm-server-ref-v}/configuration-rum.html[enable RUM support] in the APM Server. - -Unlike Elastic APM backend agents which monitor requests and responses, -the RUM JavaScript agent monitors the real user experience and interaction within your client-side application. -The RUM JavaScript agent is also framework-agnostic, which means it can be used with any front-end JavaScript application. - -You will be able to measure metrics such as "Time to First Byte", `domInteractive`, -and `domComplete` which helps you discover performance issues within your client-side application as well as issues that relate to the latency of your server-side application. 
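
For reference, enabling RUM support in the legacy APM Server comes down to one setting in `apm-server.yml`. A minimal sketch follows; the `allow_origins` value is an illustrative assumption:

[source,yaml]
----
apm-server:
  rum:
    enabled: true
    # Restrict which origins may send RUM events (defaults to ['*'])
    allow_origins: ['https://shop.example.com']
----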
\ No newline at end of file diff --git a/docs/legacy/guide/trace-sampling.asciidoc b/docs/legacy/guide/trace-sampling.asciidoc deleted file mode 100644 index 5a1395b918f..00000000000 --- a/docs/legacy/guide/trace-sampling.asciidoc +++ /dev/null @@ -1,114 +0,0 @@ -[[trace-sampling]] -=== Transaction sampling - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -Elastic APM supports head-based, probability sampling. -_Head-based_ means the sampling decision for each trace is made when that trace is initiated. -_Probability sampling_ means that each trace has a defined and equal probability of being sampled. - -For example, a sampling value of `.2` indicates a transaction sample rate of `20%`. -This means that only `20%` of traces will send and retain all of their associated information. -The remaining traces will drop contextual information to reduce the transfer and storage size of the trace. - -TIP: The APM integration supports both head-based and tail-based sampling. -Learn more <>. - -[float] -==== Why sample? - -Distributed tracing can generate a substantial amount of data, -and storage can be a concern for users running `100%` sampling -- especially as they scale. - -The goal of probability sampling is to provide you with a representative set of data that allows -you to make statistical inferences about the entire group of data. -In other words, in most cases, you can still find anomalous patterns in your applications, detect outages, track errors, -and lower MTTR, even when sampling at less than `100%`. - -[float] -==== What data is sampled? - -A sampled trace retains all data associated with it. - -Non-sampled traces drop <> data. -Spans contain more granular information about what is happening within a transaction, -like external requests or database calls. -Spans also contain contextual information and labels. - -Regardless of the sampling decision, all traces retain transaction and error data. -This means the following data will always accurately reflect *all* of your application's requests, regardless of the configured sampling rate: - -* Transaction duration and transactions per minute -* Transaction breakdown metrics -* Errors, error occurrence, and error rate - -// To turn off the sending of all data, including transaction and error data, set `active` to `false`. - -[float] -==== Sample rates - -What's the best sampling rate? Unfortunately, there isn't one. -Sampling is dependent on your data, the throughput of your application, data retention policies, and other factors. -A sampling rate from `.1%` to `100%` would all be considered normal. -You may even decide to have a unique sample rate per service -- for example, if a certain service -experiences considerably more or less traffic than another. - -// Regardless, cost conscious customers are likely to be fine with a lower sample rate. - -[float] -==== Sampling with distributed tracing - -The initiating service makes the sampling decision in a distributed trace, -and all downstream services respect that decision. - -In each example below, `Service A` initiates four transactions. -In the first example, `Service A` samples at `.5` (`50%`). In the second, `Service A` samples at `1` (`100%`). -Each subsequent service respects the initial sampling decision, regardless of their configured sample rate. 
-The result is a sampling percentage that matches the initiating service: - -image::./images/dt-sampling-example.png[How sampling impacts distributed tracing] - -[float] -==== {apm-app} implications - -Because the transaction sample rate is respected by downstream services, -the {apm-app} always knows which transactions have and haven't been sampled. -This prevents the app from showing broken traces. -In addition, because transaction and error data is never sampled, -you can always expect metrics and errors to be accurately reflected in the {apm-app}. - -*Service maps* - -Service maps rely on distributed traces to draw connections between services. -A minimum required version of APM agents is required for Service maps to work. -See {kibana-ref}/service-maps.html[Service maps] for more information. - -// Follow-up: Add link from https://www.elastic.co/guide/en/kibana/current/service-maps.html#service-maps-how -// to this page. - -[float] -==== Adjust the sample rate - -There are three ways to adjust the transaction sample rate of your APM agents: - -Dynamic:: -The transaction sample rate can be changed dynamically (no redeployment necessary) on a per-service and per-environment -basis with {kibana-ref}/agent-configuration.html[APM Agent Configuration] in {kib}. - -{kib} API:: -APM Agent configuration exposes an API that can be used to programmatically change -your agents' sampling rate. -An example is provided in the {kibana-ref}/agent-config-api.html[Agent configuration API reference]. - -Configuration:: -Each agent provides a configuration value used to set the transaction sample rate. -See the relevant agent's documentation for more details: - -* Go: {apm-go-ref-v}/configuration.html#config-transaction-sample-rate[`ELASTIC_APM_TRANSACTION_SAMPLE_RATE`] -* Java: {apm-java-ref-v}/config-core.html#config-transaction-sample-rate[`transaction_sample_rate`] -* .NET: {apm-dotnet-ref-v}/config-core.html#config-transaction-sample-rate[`TransactionSampleRate`] -* Node.js: {apm-node-ref-v}/configuration.html#transaction-sample-rate[`transactionSampleRate`] -* PHP: {apm-php-ref-v}/configuration-reference.html#config-transaction-sample-rate[`transaction_sample_rate`] -* Python: {apm-py-ref-v}/configuration.html#config-transaction-sample-rate[`transaction_sample_rate`] -* Ruby: {apm-ruby-ref-v}/configuration.html#config-transaction-sample-rate[`transaction_sample_rate`] \ No newline at end of file diff --git a/docs/legacy/guide/troubleshooting.asciidoc b/docs/legacy/guide/troubleshooting.asciidoc deleted file mode 100644 index d9a0dee05b7..00000000000 --- a/docs/legacy/guide/troubleshooting.asciidoc +++ /dev/null @@ -1,37 +0,0 @@ -[[troubleshooting-guide]] -== Troubleshooting - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, see <>. - -If you run into trouble, there are three places you can look for help. 
- -[float] -=== Troubleshooting documentation - -The APM Server, {apm-app}, and each {apm-agent} has a troubleshooting guide: - -* {apm-server-ref-v}/troubleshooting.html[APM Server troubleshooting] -* {kibana-ref}/troubleshooting.html[{apm-app} troubleshooting] -* {apm-dotnet-ref-v}/troubleshooting.html[.NET agent troubleshooting] -* {apm-go-ref-v}/troubleshooting.html[Go agent troubleshooting] -* {apm-ios-ref-v}/troubleshooting.html[iOS agent troubleshooting] -* {apm-java-ref-v}/trouble-shooting.html[Java agent troubleshooting] -* {apm-node-ref-v}/troubleshooting.html[Node.js agent troubleshooting] -* {apm-php-ref-v}/troubleshooting.html[PHP agent troubleshooting] -* {apm-py-ref-v}/troubleshooting.html[Python agent troubleshooting] -* {apm-ruby-ref-v}/debugging.html[Ruby agent troubleshooting] -* {apm-rum-ref-v}/troubleshooting.html[RUM troubleshooting] - -[float] -=== Elastic Support - -We offer a support experience unlike any other. -Our team of professionals 'speak human and code' and love making your day. -https://www.elastic.co/subscriptions[Learn more about subscriptions]. - -[float] -=== Discussion forum - -For additional questions and feature requests, -visit our https://discuss.elastic.co/c/apm[discussion forum]. diff --git a/docs/legacy/high-availability.asciidoc b/docs/legacy/high-availability.asciidoc deleted file mode 100644 index 8ca589a68fe..00000000000 --- a/docs/legacy/high-availability.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -[[high-availability]] -=== High Availability - -IMPORTANT: {deprecation-notice-installation} - -To achieve high availability -you can place multiple instances of APM Server behind a regular HTTP load balancer, -for example HAProxy or Nginx. - -The endpoint `/` always returns an `HTTP 200`. -You can configure your load balancer to send HTTP requests to this endpoint -to determine if an APM Server is running. -See <> for more information on that endpoint. - -In case of temporal issues, like unavailable {es} or a sudden high workload, -APM Server does not have an internal queue to buffer requests, -but instead leverages an HTTP request timeout to act as back-pressure. - -If {es} goes down, the APM Server will eventually deny incoming requests. -Both the APM Server and {apm-agent}(s) will issue logs accordingly. diff --git a/docs/legacy/howto.asciidoc b/docs/legacy/howto.asciidoc deleted file mode 100644 index 68651422676..00000000000 --- a/docs/legacy/howto.asciidoc +++ /dev/null @@ -1,29 +0,0 @@ -[[howto-guides]] -= How-to guides - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, please see <> instead. - -Learn how to perform common {beatname_uc} configuration and management tasks. - -* <> -* <> -* <> -* <<{beatname_lc}-template>> -* <> -* <> -* <> - -include::./sourcemaps.asciidoc[] - -include::./ilm.asciidoc[] - -include::./jaeger-support.asciidoc[] - -include::{libbeat-dir}/howto/load-index-templates.asciidoc[] - -include::./storage-management.asciidoc[] - -include::./configuring-ingest.asciidoc[] - -include::./data-ingestion.asciidoc[] diff --git a/docs/legacy/ilm.asciidoc b/docs/legacy/ilm.asciidoc deleted file mode 100644 index 81cdddb5eb3..00000000000 --- a/docs/legacy/ilm.asciidoc +++ /dev/null @@ -1,10 +0,0 @@ -[[ilm]] -== Custom {ilm} - -// Appends `-legacy` to each section's ID so that they are different from the APM integration IDs -:append-legacy: -legacy - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, please see <> instead. 
- -include::../ilm-how-to.asciidoc[tag=ilm-integration] diff --git a/docs/legacy/index.asciidoc b/docs/legacy/index.asciidoc deleted file mode 100644 index 62a2715de16..00000000000 --- a/docs/legacy/index.asciidoc +++ /dev/null @@ -1,91 +0,0 @@ -// Remove these two include statements when the APM Server Reference is removed from the build -include::../version.asciidoc[] -include::{asciidoc-dir}/../../shared/attributes.asciidoc[] - -:libbeat-dir: {docdir}/legacy/copied-from-beats/docs -:libbeat-outputs-dir: {docdir}/legacy/copied-from-beats/outputs -:version: {apm_server_version} -:beatname_lc: apm-server -:beatname_uc: APM Server -:beatname_pkg: {beatname_lc} -:beat_kib_app: APM app -:beat_monitoring_user: apm_system -:beat_monitoring_user_version: 6.5.0 -:beat_monitoring_version: 6.5 -:beat_default_index_prefix: apm -:access_role: {beat_default_index_prefix}_user -:beat_version_key: observer.version -:dockerimage: docker.elastic.co/apm/{beatname_lc}:{version} -:dockergithub: https://github.com/elastic/apm-server-docker/tree/{doc-branch} -:dockerconfig: https://raw.githubusercontent.com/elastic/apm-server/{doc-branch}/apm-server.docker.yml -:discuss_forum: apm -:github_repo_name: apm-server -:sample_date_0: 2019.10.20 -:sample_date_1: 2019.10.21 -:sample_date_2: 2019.10.22 -:repo: apm-server -:no_kibana: -:no_ilm: -:no-pipeline: -:no-processors: -:no-indices-rules: -:no_dashboards: -:apm-server: -:deb_os: -:rpm_os: -:mac_os: -:docker_platform: -:win_os: -:linux_os: -:apm-package-dir: {docdir}/legacy/apm-package - -:github_repo_link: https://github.com/elastic/apm-server/blob/v{version} -ifeval::["{version}" == "8.0.0"] -:github_repo_link: https://github.com/elastic/apm-server/blob/main -endif::[] - -:downloads: https://artifacts.elastic.co/downloads/apm-server - -ifndef::apm-integration-docs[] -[[apm-server]] -= APM Server Reference -endif::[] - -ifdef::apm-integration-docs[] -// Overwrite links to the APM Overview and APM Server Ref. Point to APM Guide instead. -:apm-overview-ref-v: {apm-guide-ref} -:apm-guide-ref: {apm-guide-ref} -:apm-server-ref-v: {apm-guide-ref} -:apm-server-ref: {apm-guide-ref} - -[[overview]] -= Legacy APM Server Reference - -include::./overview.asciidoc[] -endif::[] - -include::./getting-started-apm-server.asciidoc[] - -include::./setting-up-and-running.asciidoc[] - -include::./howto.asciidoc[leveloffset=+1] - -:beat-specific-output-config: {docdir}/legacy/configuring-output-after.asciidoc -include::./configuring.asciidoc[leveloffset=+1] - -:beat-specific-security: {docdir}/legacy/security.asciidoc -include::{libbeat-dir}/shared-securing-beat.asciidoc[leveloffset=+1] - -include::{libbeat-dir}/monitoring/monitoring-beats.asciidoc[leveloffset=+1] - -include::./intake-api.asciidoc[leveloffset=+1] - -include::./exploring-es-data.asciidoc[leveloffset=+1] - -include::./fields.asciidoc[leveloffset=+1] - -include::./troubleshooting.asciidoc[leveloffset=+1] - -include::./breaking-changes.asciidoc[leveloffset=+1] - -include::./redirects.asciidoc[] diff --git a/docs/legacy/intake-api.asciidoc b/docs/legacy/intake-api.asciidoc deleted file mode 100644 index 6ff004f3988..00000000000 --- a/docs/legacy/intake-api.asciidoc +++ /dev/null @@ -1,17 +0,0 @@ -[[intake-api]] -= API - -IMPORTANT: {deprecation-notice-api} -If you've already upgraded, see <>. 
- -The APM Server exposes endpoints for: - -* <> -* <> -* <> -* <> - -include::./events-api.asciidoc[] -include::./sourcemap-api.asciidoc[] -include::./agent-configuration.asciidoc[] -include::./server-info.asciidoc[] diff --git a/docs/legacy/jaeger-reference.asciidoc b/docs/legacy/jaeger-reference.asciidoc deleted file mode 100644 index 794c16bf54f..00000000000 --- a/docs/legacy/jaeger-reference.asciidoc +++ /dev/null @@ -1,62 +0,0 @@ -[[jaeger-reference]] -== Configure Jaeger - -++++ -Jaeger -++++ - -IMPORTANT: {deprecation-notice-config} -If you've already upgraded, please see <> instead. - -// this content is reused in the how-to guides -// tag::jaeger-intro[] -Elastic APM integrates with https://www.jaegertracing.io/[Jaeger], an open-source, distributed tracing system. -This integration allows users with an existing Jaeger setup to switch from the default Jaeger backend, -to the {stack} -- transform data with APM Server, store data in {es}, and visualize traces in the {kib} {apm-app}. -Best of all, no instrumentation changes are needed in your application code. -// end::jaeger-intro[] - -Ready to get started? See the <> guide. - -[float] -[[jaeger-supported]] -=== Supported architecture - -Jaeger architecture supports different data formats and transport protocols -that define how data can be sent to a collector. Elastic APM, as a Jaeger collector, -supports communication with *Jaeger agents* via gRPC. - -* APM Server serves Jaeger gRPC over the same <> as the Elastic {apm-agent} protocol. - -* The APM Server gRPC endpoint supports TLS. If `apm-server.ssl` is configured, -SSL settings will automatically be applied to APM Server's Jaeger gRPC endpoint. - -* The gRPC endpoint supports probabilistic sampling. -Sampling decisions can be configured <> with APM Agent central configuration, or <> in each Jaeger client. - -See the https://www.jaegertracing.io/docs/1.22/architecture[Jaeger docs] -for more information on Jaeger architecture. - -[float] -[[jaeger-caveats]] -=== Caveats - -There are some limitations and differences between Elastic APM and Jaeger that you should be aware of. - -*Jaeger integration limitations:* - -* Because Jaeger has its own trace context header, and does not currently support W3C trace context headers, -it is not possible to mix and match the use of Elastic's APM agents and Jaeger's clients. -* Elastic APM only supports probabilistic sampling. - -*Differences between APM Agents and Jaeger Clients:* - -* Jaeger clients only sends trace data. -APM agents support a larger number of features, like -multiple types of metrics, and application breakdown charts. -When using Jaeger, features like this will not be available in the {apm-app}. -* Elastic APM's {apm-overview-ref-v}/apm-data-model.html[data model] is different than Jaegers. -For Jaeger trace data to work with Elastic's data model, we rely on spans being tagged with the appropriate -https://github.com/opentracing/specification/blob/master/semantic_conventions.md[`span.kind`]. -** Server Jaeger spans are mapped to Elastic APM {apm-overview-ref-v}/transactions.html[transactions]. -** Client Jaeger spans are mapped to Elastic APM {apm-overview-ref-v}/transaction-spans.html[spans] -- unless the span is the root, in which case it is mapped to an Elastic APM {apm-overview-ref-v}/transactions.html[transaction]. 
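
To illustrate this mapping, here is a sketch of how spans are tagged with `span.kind` using the OpenTracing API that Jaeger clients build on. The operation names are illustrative, and `tracer` is assumed to be the application's configured Jaeger tracer:

[source,java]
----
import io.opentracing.Span;
import io.opentracing.Tracer;
import io.opentracing.tag.Tags;

public class SpanKindExample {

    static void handleRequest(Tracer tracer) {
        // span.kind=server: mapped to an Elastic APM transaction
        Span serverSpan = tracer.buildSpan("GET /api/orders")
                .withTag(Tags.SPAN_KIND.getKey(), Tags.SPAN_KIND_SERVER)
                .start();

        // span.kind=client (non-root): mapped to an Elastic APM span
        Span clientSpan = tracer.buildSpan("SELECT FROM orders")
                .asChildOf(serverSpan)
                .withTag(Tags.SPAN_KIND.getKey(), Tags.SPAN_KIND_CLIENT)
                .start();

        clientSpan.finish();
        serverSpan.finish();
    }
}
----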
diff --git a/docs/legacy/jaeger-support.asciidoc b/docs/legacy/jaeger-support.asciidoc deleted file mode 100644 index 137f74942f4..00000000000 --- a/docs/legacy/jaeger-support.asciidoc +++ /dev/null @@ -1,70 +0,0 @@ -[[jaeger]] -== Jaeger integration - -++++ -Integrate with Jaeger -++++ - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, please see <> instead. - -include::./jaeger-reference.asciidoc[tag=jaeger-intro] - -[float] -[[jaeger-get-started]] -==== Get started - -Connect your preexisting Jaeger setup to Elastic APM in three steps: - -* <> -* <> -* <> - -IMPORTANT: There are <> to this integration. - -[float] -[[jaeger-configure-agent-client]] -==== Configure Jaeger agents - -APM Server serves Jaeger gRPC over the same <> as the Elastic {apm-agent} protocol. - -include::./tab-widgets/jaeger-widget.asciidoc[] - -[float] -[[jaeger-configure-sampling]] -==== Configure Sampling - -APM Server supports probabilistic sampling, which can be used to reduce the amount of data that your agents collect and send. -Probabilistic sampling makes a random sampling decision based on the configured sampling value. -For example, a value of `.2` means that 20% of traces will be sampled. - -There are two different ways to configure the sampling rate of your Jaeger agents: - -* <> -* <> - -[float] -[[jaeger-configure-sampling-central]] -===== APM Agent central configuration (default) - -Central sampling, with APM Agent central configuration, -allows Jaeger clients to poll APM Server for the sampling rate. -This means sample rates can be configured on the fly, on a per-service and per-environment basis. - -include::./tab-widgets/jaeger-sampling-widget.asciidoc[] - -[float] -[[jaeger-configure-sampling-local]] -===== Local sampling in each Jaeger client - -If you don't have access to the {apm-app}, -you'll need to change the Jaeger client's `sampler.type` and `sampler.param`. -This enables you to set the sampling configuration locally in each Jaeger client. -See the official https://www.jaegertracing.io/docs/1.22/sampling/[Jaeger sampling documentation] -for more information. - -[float] -[[jaeger-configure-start]] -==== Start sending span data - -That's it! Data sent from Jaeger clients to the APM Server can now be viewed in the {apm-app}. diff --git a/docs/legacy/metadata-api.asciidoc b/docs/legacy/metadata-api.asciidoc deleted file mode 100644 index 45c69529ed8..00000000000 --- a/docs/legacy/metadata-api.asciidoc +++ /dev/null @@ -1,66 +0,0 @@ -[[metadata-api]] -=== Metadata - -Every new connection to the APM Server starts with a `metadata` stanza. -This provides general metadata concerning the other objects in the stream. - -Rather than send this metadata information from the agent multiple times, -the APM Server hangs on to this information and applies it to other objects in the stream as necessary. - -TIP: Metadata is stored under `context` when viewing documents in {es}. - -* <> -* <> - -[[kubernetes-data]] -[float] -==== Kubernetes data - -APM agents automatically read Kubernetes data and send it to the APM Server. -In most instances, agents are able to read this data from inside the container. -If this is not the case, or if you wish to override this data, you can set environment variables for the agents to read. -These environment variable are set via the Kubernetes https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-pod-fields-as-values-for-environment-variables[Downward API]. 
-Here's how you would add the environment variables to your Kubernetes pod spec: - -[source,yaml] ----- - - name: KUBERNETES_NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - - name: KUBERNETES_POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: KUBERNETES_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - - name: KUBERNETES_POD_UID - valueFrom: - fieldRef: - fieldPath: metadata.uid ----- - -The table below maps these environment variables to the APM metadata event field: - -[options="header"] -|===== -|Environment variable |Metadata field name -| `KUBERNETES_NODE_NAME` |system.kubernetes.node.name -| `KUBERNETES_POD_NAME` |system.kubernetes.pod.name -| `KUBERNETES_NAMESPACE` |system.kubernetes.namespace -| `KUBERNETES_POD_UID` |system.kubernetes.pod.uid -|===== - -[[metadata-schema]] -[float] -==== Metadata Schema - -APM Server uses JSON Schema to validate requests. The specification for metadata is defined on -{github_repo_link}/docs/spec/v2/metadata.json[GitHub] and included below: - -[source,json] ----- -include::../spec/v2/metadata.json[] ----- \ No newline at end of file diff --git a/docs/legacy/metricset-api.asciidoc b/docs/legacy/metricset-api.asciidoc deleted file mode 100644 index 4a42e2e4b3a..00000000000 --- a/docs/legacy/metricset-api.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -[[metricset-api]] -=== Metrics - -Metrics contain application metric data captured by an {apm-agent}. - -[[metricset-schema]] -[float] -==== Metric Schema - -APM Server uses JSON Schema to validate requests. The specification for metrics is defined on -{github_repo_link}/docs/spec/v2/metricset.json[GitHub] and included below: - -[source,json] ----- -include::../spec/v2/metricset.json[] ----- diff --git a/docs/legacy/metricset-indices.asciidoc b/docs/legacy/metricset-indices.asciidoc deleted file mode 100644 index c6e15023917..00000000000 --- a/docs/legacy/metricset-indices.asciidoc +++ /dev/null @@ -1,140 +0,0 @@ -[[metricset-indices]] -== Metrics documents - -++++ -Metrics documents -++++ - -APM Server stores application metrics sent by agents as documents in {es}. -Metric documents contain a timestamp, one or more metric fields, -and non-numerical fields describing the resource to which the metrics pertain. - -// lint ignore g1 -For example, the {apm-java-agent} produces {apm-java-ref-v}/metrics.html#metrics-jvm[JVM-specific metrics]. -This includes garbage collection metrics (`jvm.gc.count`, `jvm.gc.time`) which are related to a specific memory manager, -such as "G1 Young Generation", identified by the field `labels.name`. -See <> for an example document containing these metrics. - -Metric documents can be identified by searching for `processor.event: metric`. - -[float] -[[internal-metrics]] -=== APM-defined metrics - -The APM Agents and APM Server also calculate metrics from trace events, used to power various features of Elastic APM. -These metrics are described below. - -[float] -[[breakdown-metrics-fields]] -==== Breakdown metrics - -To power the {apm-app-ref}/transactions.html[Time spent by span type] graph, -agents collect summarized metrics about the timings of spans and transactions, -broken down by span type. - -*`span.self_time.count`* and *`span.self_time.sum.us`*:: -+ --- -These metrics measure the "self-time" for a span type, and optional subtype, -within a transaction group. Together these metrics can be used to calculate -the average duration and percentage of time spent on each type of operation -within a transaction group. 
- -These metric documents can be identified by searching for `metricset.name: span_breakdown`. - -You can filter and group by these dimensions: - -* `transaction.name`: The name of the enclosing transaction group, for example `GET /` -* `transaction.type`: The type of the enclosing transaction, for example `request` -* `span.type`: The type of the span, for example `app`, `template` or `db` -* `span.subtype`: The sub-type of the span, for example `mysql` (optional) --- - -[float] -==== Transaction metrics - -To power {kibana-ref}/xpack-apm.html[{apm-app}] visualizations, -APM Server aggregates transaction events into latency distribution metrics. - -*`transaction.duration.histogram`*:: -+ --- -This metric measures the latency distribution of transaction groups, -used to power visualizations and analytics in Elastic APM. - -These metric documents can be identified by searching for `metricset.name: transaction`. - -You can filter and group by these dimensions (some of which are optional, for example `container.id`): - -* `transaction.name`: The name of the transaction, for example `GET /` -* `transaction.type`: The type of the transaction, for example `request` -* `transaction.result`: The result of the transaction, for example `HTTP 2xx` -* `transaction.root`: A boolean flag indicating whether the transaction is the root of a trace -* `event.outcome`: The outcome of the transaction, for example `success` -* `agent.name`: The name of the {apm-agent} that instrumented the transaction, for example `java` -* `service.name`: The name of the service that served the transaction -* `service.version`: The version of the service that served the transaction -* `service.node.name`: The name of the service instance that served the transaction -* `service.environment`: The environment of the service that served the transaction -* `service.language.name`: The language name of the service that served the transaction, for example `Go` -* `service.language.version`: The language version of the service that served the transaction -* `service.runtime.name`: The runtime name of the service that served the transaction, for example `jRuby` -* `service.runtime.version`: The runtime version that served the transaction -* `host.hostname`: The hostname of the service that served the transaction -* `host.os.platform`: The platform name of the service that served the transaction, for example `linux` -* `container.id`: The container ID of the service that served the transaction -* `kubernetes.pod.name`: The name of the Kubernetes pod running the service that served the transaction -* `cloud.provider`: The cloud provider hosting the service instance that served the transaction -* `cloud.region`: The cloud region hosting the service instance that served the transaction -* `cloud.availability_zone`: The cloud availability zone hosting the service instance that served the transaction -* `cloud.account.id`: The cloud account id of the service that served the transaction -* `cloud.account.name`: The cloud account name of the service that served the transaction -* `cloud.machine.type`: The cloud machine type or instance type of the service that served the transaction -* `cloud.project.id`: The cloud project identifier of the service that served the transaction -* `cloud.project.name`: The cloud project name of the service that served the transaction -* `cloud.service.name`: The cloud service name of the service that served the transaction -* `faas.coldstart`: Whether the _serverless_ service that served the transaction had a 
cold start. -* `faas.trigger.type`: The trigger type that the function / lambda was executed by of the service that served the transaction --- - -The `@timestamp` field of these documents holds the start of the aggregation interval. - -[float] -==== Service-destination metrics - -To power {kibana-ref}/xpack-apm.html[{apm-app}] visualizations, -APM Server aggregates span events into "service destination" metrics. - -*`span.destination.service.response_time.count`* and *`span.destination.service.response_time.sum.us`*:: -+ --- -These metrics measure the count and total duration of requests from one service to another service. -These are used to calculate the throughput and latency of requests to backend services such as databases in -{kibana-ref}/service-maps.html[Service maps]. - -These metric documents can be identified by searching for `metricset.name: service_destination`. - -You can filter and group by these dimensions: - -* `span.destination.service.resource`: The destination service resource, for example `mysql` -* `event.outcome`: The outcome of the operation, for example `success` -* `agent.name`: The name of the {apm-agent} that instrumented the operation, for example `java` -* `service.name`: The name of the service that made the request -* `service.environment`: The environment of the service that made the request --- - -The `@timestamp` field of these documents holds the start of the aggregation interval. - -[float] -[[example-metric-document]] -=== Example metric document - -Below is an example of a metric document as stored in {es}, containing JVM metrics produced by the {apm-java-agent}. -The document contains two related metrics: `jvm.gc.time` and `jvm.gc.count`. These are accompanied by various fields describing -the environment in which the metrics were captured: service name, host name, Kubernetes pod UID, container ID, process ID, and more. -These fields make it possible to search and aggregate across various dimensions, such as by service, host, and Kubernetes pod. - -[source,json] ----- -include::../data/elasticsearch/metricset.json[] ----- diff --git a/docs/legacy/overview.asciidoc b/docs/legacy/overview.asciidoc deleted file mode 100644 index 2ac4cc023af..00000000000 --- a/docs/legacy/overview.asciidoc +++ /dev/null @@ -1,34 +0,0 @@ -**** -There are two ways to install, run, and manage Elastic APM: - -* With the Elastic APM integration -* With the standalone (legacy) APM Server binary - -This documentation focuses on option two: the **standalone (legacy) APM Server binary**. -{deprecation-notice-installation} -**** - -The APM Server receives data from APM agents and transforms them into {es} documents. -It does this by exposing an HTTP server endpoint to which agents stream the APM data they collect. -After the APM Server has validated and processed events from the APM agents, -the server transforms the data into {es} documents and stores them in corresponding {es} indices. - -The APM Server works in conjunction with {apm-agents-ref}/index.html[APM agents], {ref}/index.html[{es}], and {kibana-ref}/index.html[{kib}]. Please view the {apm-overview-ref-v}/index.html[APM Overview] for details on how these components work together. - -NOTE: APM Server is built with the {beats-ref}[{beats}] framework and leverages its functionality. - -[float] -[[why-separate-component]] -=== Why is APM Server a separate component? - -The APM Server is a separate component for the following reasons: - -* It helps to keep the agents as light as possible. 
-* Since the APM Server is a stateless separate component, it can be scaled independently. -* Data is collected in browsers for Real User Monitoring. - APM Server prevents these browsers from interacting directly with {es} (which poses a security risk). -* APM Server controls the amount of data flowing into {es}. -* In cases where {es} becomes unresponsive, -APM Server can buffer data temporarily without adding overhead to the agents. -* Acts as a middleware for source mapping for JavaScript in the browser. -* Provides a JSON API for agents to use and thereby improves compatibility across different versions of agents and the {stack}. diff --git a/docs/legacy/redirects.asciidoc b/docs/legacy/redirects.asciidoc deleted file mode 100644 index b80598ae6bf..00000000000 --- a/docs/legacy/redirects.asciidoc +++ /dev/null @@ -1,264 +0,0 @@ -["appendix",role="exclude",id="redirects"] -= Deleted pages - -The following pages have moved or been deleted. - -// Event Types - -[role="exclude",id="event-types"] -=== Event types - -This page has moved. Please see {apm-overview-ref-v}/apm-data-model.html[APM data model]. - -// [role="exclude",id="errors"] -// === Errors - -// This page has moved. Please see {apm-overview-ref-v}/errors.html[Errors]. - -// [role="exclude",id="transactions"] -// === Transactions - -// This page has moved. Please see {apm-overview-ref-v}/transactions.html[Transactions]. - -// [role="exclude",id="transactions-spans"] -// === Spans - -// This page has moved. Please see {apm-overview-ref-v}/transaction-spans.html[Spans]. - -// Error API - -[role="exclude",id="error-endpoint"] -=== Error endpoint - -The error endpoint has been deprecated. Instead, see <>. - -[role="exclude",id="error-schema-definition"] -=== Error schema definition - -The error schema has moved. Please see <>. - -[role="exclude",id="error-api-examples"] -=== Error API examples - -The error API examples have moved. Please see <>. - -[role="exclude",id="error-payload-schema"] -=== Error payload schema - -This schema has changed. Please see <>. - -[role="exclude",id="error-service-schema"] -=== Error service schema - -This schema has changed. Please see <>. - -[role="exclude",id="error-system-schema"] -=== Error system schema - -This schema has changed. Please see <>. - -[role="exclude",id="error-context-schema"] -=== Error context schema - -This schema has changed. Please see <>. - -[role="exclude",id="error-stacktraceframe-schema"] -=== Error stack trace frame schema - -This schema has changed. Please see <>. - -[role="exclude",id="payload-with-error"] -=== Payload with error - -This is no longer helpful. Please see <>. - -[role="exclude",id="payload-with-minimal-exception"] -=== Payload with minimal exception - -This is no longer helpful. Please see <>. - -[role="exclude",id="payload-with-minimal-log"] -=== Payload with minimal log - -This is no longer helpful. Please see <>. - -// Transaction API - -[role="exclude",id="transaction-endpoint"] -=== Transaction endpoint - -The transaction endpoint has been deprecated. Instead, see <>. - -[role="exclude",id="transaction-schema-definition"] -=== Transaction schema definition - -The transaction schema has moved. Please see <>. - -[role="exclude",id="transaction-api-examples"] -=== Transaction API examples - -The transaction API examples have moved. Please see <>. - -[role="exclude",id="transaction-span-schema"] -=== Transaction span schema - -This schema has changed. Please see <>. 
- -[role="exclude",id="transaction-payload-schema"] -=== Transaction payload schema - -This schema has changed. Please see <>. - -[role="exclude",id="transaction-service-schema"] -=== Transaction service schema - -This schema has changed. Please see <>. - -[role="exclude",id="transaction-system-schema"] -=== Transaction system schema - -This schema has changed. Please see <>. - -[role="exclude",id="transaction-context-schema"] -=== Transaction context schema - -This schema has changed. Please see <>. - -[role="exclude",id="transaction-stacktraceframe-schema"] -=== Transaction stack trace frame schema - -This schema has changed. Please see <>. - -[role="exclude",id="transaction-request-schema"] -=== Transaction request schema - -This schema has changed. Please see <>. - -[role="exclude",id="transaction-user-schema"] -=== Transaction user schema - -This schema has changed. Please see <>. - -[role="exclude",id="payload-with-transactions"] -=== Payload with transactions - -This is no longer helpful. Please see <>. - -[role="exclude",id="payload-with-minimal-transaction"] -=== Payload with minimal transaction - -This is no longer helpful. Please see <>. - -[role="exclude",id="payload-with-minimal-span"] -=== Payload with minimal span - -This is no longer helpful. Please see <>. - -[role="exclude",id="example-intakev2-events"] -=== Example Request Body - -This page has moved. Please see <>. - -// V1 intake API - -[role="exclude",id="request-too-large"] -=== HTTP 413: Request body too large - -This error can no longer occur. Please see <> for an updated overview of potential issues. - -[role="exclude",id="configuration-v1-api"] -=== Configuration options: v1 Intake API - -Intake API v1 is no longer supported. Please see <> for current configuration options. - -[role="exclude",id="max_unzipped_size"] -=== `max_unzipped_size` - -This configuration option is no longer supported. Please see <> for current configuration options. - -[role="exclude",id="concurrent_requests"] -=== `concurrent_requests` - -This configuration option is no longer supported. Please see <> for current configuration options. - -[role="exclude",id="metrics.enabled"] -=== `metrics.enabled` - -This configuration option is no longer supported. Please see <> for current configuration options. - -[role="exclude",id="max_request_queue_time"] -=== `max_request_queue_time` - -This configuration option is no longer supported. Please see <> for current configuration options. - -[role="exclude",id="configuration-v2-api"] -=== Configuration options: v2 Intake API - -This section has moved. Please see <> for current configuration options. - -[role="exclude",id="configuration-rum-v1"] -=== `configuration-rum-v1` - -This configuration option is no longer supported. Please see <> for current configuration options. - -[role="exclude",id="rate_limit_v1"] -=== `rate_limit_v1` - -This configuration option is no longer supported. Please see <> for current configuration options. - -[role="exclude",id="configuration-rum-v2"] -=== `configuration-rum-v2` - -This section has moved. Please see <> for current configuration options. - -[role="exclude",id="configuration-rum-general"] -=== Configuration options: general - -This section has moved. Please see <> for current configuration options. - -[role="exclude",id="use-v1-and-v2"] -=== Tuning APM Server using both v1 and v2 intake API - -This section has moved. Please see <> for how to tune APM Server. 
- -// Dashboards - -[role="exclude",id="load-dashboards-logstash"] -=== Load dashboards via Logstash - -Loading dashboards from APM Server is no longer supported. Please see the {kibana-ref}/xpack-apm.html[{kib} APM UI] documentation. - -[role="exclude",id="url-option"] -=== setup.dashboards.url - -Loading dashboards from APM Server is no longer supported. Please see the {kibana-ref}/xpack-apm.html[{kib} APM UI] documentation. - -[role="exclude",id="file-option"] -=== setup.dashboards.file - -Loading dashboards from APM Server is no longer supported. Please see the {kibana-ref}/xpack-apm.html[{kib} APM UI] documentation. - -[role="exclude",id="load-kibana-dashboards"] -=== Dashboards - -Loading {kib} dashboards from APM Server is no longer supported. -Please use the {kibana-ref}/xpack-apm.html[{kib} APM UI] instead. -As an alternative, a small number of dashboards and visualizations are available in the -https://github.com/elastic/apm-contrib/tree/main/kibana[apm-contrib] repository. - -// [role="exclude",id="rum"] -// === Rum - -// This section has moved. Please see <>. - -ifndef::apm-integration-docs[] -[role="exclude",id="api-key"] -=== API keys - -This section has moved. See <>. - -[role="exclude",id="secret-token"] -=== Secret token - -This section has moved. See <>. -endif::[] diff --git a/docs/legacy/secure-communication-agents.asciidoc b/docs/legacy/secure-communication-agents.asciidoc deleted file mode 100644 index 40a5ad6ebc9..00000000000 --- a/docs/legacy/secure-communication-agents.asciidoc +++ /dev/null @@ -1,641 +0,0 @@ -[[secure-communication-agents]] -== Secure communication with APM agents - -IMPORTANT: {deprecation-notice-config} -If you've already upgraded, see <>. - -Communication between APM agents and APM Server can be both encrypted and authenticated. -Encryption is achievable through <>. - -Authentication can be achieved in two main ways: - -* <> -* <> - -Both options can be enabled at the same time, -allowing Elastic APM agents to choose whichever mechanism they support. -In addition, since both mechanisms involve sending a secret as plain text, -they should be used in combination with SSL/TLS encryption. - -As soon as authenticated communication is enabled, requests without a valid token or API key will be denied by APM Server. -An exception to this rule can be configured with <>, -which is useful for APM agents running on the client side, like the Real User Monitoring (RUM) agent. - -There is a less straightforward and more restrictive way to authenticate clients through -<>, which is currently a mainstream option only -for the RUM agent (through the browser) and the Jaeger agent. - -[[ssl-setup]] -=== SSL/TLS communication - -IMPORTANT: {deprecation-notice-config} -If you've already upgraded, see <>. - -// Use the shared ssl short description -include::./ssl-input.asciidoc[] - -[[api-key-legacy]] -=== API keys - -IMPORTANT: {deprecation-notice-config} -If you've already upgraded, see <>. - -NOTE: API keys are sent as plain-text, -so they only provide security when used in combination with <>. -They are not applicable for agents running on clients, like the RUM agent, -as there is no way to prevent them from being publicly exposed. - -Configure API keys to authorize requests to the APM Server. -To enable API key authorization, set `apm-server.auth.api_key.enabled` to `true`. - -There are multiple, unique privileges you can assign to each API key.
-API keys can have one or more of these privileges: - -* *Agent configuration* (`config_agent:read`): Required for agents to read -{kibana-ref}/agent-configuration.html[Agent configuration remotely]. -* *Ingest* (`event:write`): Required for ingesting Agent events. -* *Source map* (`sourcemap:write`): Required for <>. - -To secure the communication between APM Agents and the APM Server with API keys, -make sure <> is enabled, then complete these steps: - -. <> -. <> -. <> -. <> - -[[configure-api-key]] -[float] -=== Enable and configure API keys - -API keys are disabled by default. Enable and configure this feature in the `apm-server.auth.api_key` -section of the +{beatname_lc}.yml+ configuration file. - -At a minimum, you must enable API keys, -and should set a limit on the number of unique API keys that APM Server allows per minute. -Here's an example `apm-server.auth.api_key` config using 50 unique API keys: - -[source,yaml] ----- -apm-server.auth.api_key.enabled: true <1> -apm-server.auth.api_key.limit: 50 <2> ----- -<1> Enables API keys -<2> Restricts the number of unique API keys that {es} allows each minute. -This value should be the number of unique API keys configured in your monitored services. - -All other configuration options are described in <>. - -[[create-apikey-user]] -[float] -=== Create an API key user in {kib} - -API keys can only have the same or lower access rights than the user that creates them. -Instead of using a superuser account to create API keys, you can create a role with the minimum required -privileges. - -The user creating an {apm-agent} API key must have at least the `manage_own_api_key` cluster privilege -and the APM application-level privileges that it wishes to grant. -The example below uses the {kib} {kibana-ref}/role-management-api.html[role management API] -to create a role named `apm_agent_key_role`. - -[source,js] ----- -POST /_security/role/apm_agent_key_role -{ - "cluster": ["manage_own_api_key"], - "applications": [{ - "application": "apm", - "privileges": ["event:write", "config_agent:read", "sourcemap:write"], - "resources": ["*"] - }] -} ----- - -Assign the newly created `apm_agent_key_role` role to any user that wishes to create {apm-agent} API keys. - -[[create-api-key]] -[float] -=== Create an API key - -Using a superuser account, or a user with the role created in the previous step, -open {kib}, navigate to **{stack-manage-app}** > **API keys** and click **Create API key**. - -Enter a name for your API key and select **Restrict privileges**. -In the role descriptors box, copy and paste the following JSON. -This example creates an API key with privileges for ingesting APM events, -reading agent central configuration, and uploading a source map: - -[source,json] ----- -{ - "apm": { - "applications": [ - { - "application": "apm", - "privileges": ["sourcemap:write", "event:write", "config_agent:read"], <1> - "resources": ["*"] - } - ] - } -} ----- -<1> This example adds all three API privileges to the new API key. -Privileges are described <>. Remove any privileges that you do not need. - -To set an expiration date for the API key, select **Expire after time** -and input the lifetime of the API key in days. - -Click **Create API key** and then copy the Base64 encoded API key. -You will need this for the next step, and you will not be able to view it again.
- -[role="screenshot"] -image::images/api-key-copy.png[API key copy base64] - -[[set-api-key]] -[float] -=== Set the API key in your APM agents - -You can now apply your newly created API keys in the configuration of each of your APM agents. -See the relevant agent documentation for additional information: - -// Not relevant for RUM and iOS -* *Go agent*: {apm-go-ref}/configuration.html#config-api-key[`ELASTIC_APM_API_KEY`] -* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-api-key[`ApiKey`] -* *Java agent*: {apm-java-ref}/config-reporter.html#config-api-key[`api_key`] -* *Node.js agent*: {apm-node-ref}/configuration.html#api-key[`apiKey`] -* *PHP agent*: {apm-php-ref-v}/configuration-reference.html#config-api-key[`api_key`] -* *Python agent*: {apm-py-ref}/configuration.html#config-api-key[`api_key`] -* *Ruby agent*: {apm-ruby-ref}/configuration.html#config-api-key[`api_key`] - -[[configure-api-key-alternative]] -[float] -=== Alternate API key creation methods - -API keys can also be created and validated outside of {kib}: - -* <> -* <> - -[[create-api-key-workflow-apm-server]] -[float] -==== APM Server API key workflow - -APM Server provides a command line interface for creating, retrieving, invalidating, and verifying API keys. -Keys created using this method can only be used for communication with APM Server. - -[[create-api-key-subcommands]] -[float] -===== `apikey` subcommands - -include::{libbeat-dir}/command-reference.asciidoc[tag=apikey-subcommands] - -[[create-api-key-privileges]] -[float] -===== Privileges - -If privileges are not specified at creation time, the created key will have all privileges. - -* `--agent-config` grants the `config_agent:read` privilege -* `--ingest` grants the `event:write` privilege -* `--sourcemap` grants the `sourcemap:write` privilege - -[[create-api-key-workflow]] -[float] -===== Create an API key - -Create an API key with the `create` subcommand. - -The following example creates an API key with a `name` of `java-001`, -and gives the "agent configuration" and "ingest" privileges. - -["source","sh",subs="attributes"] ------ -{beatname_lc} apikey create --ingest --agent-config --name java-001 ------ - -The response will look similar to this: - -[source,console-result] --------------------------------------------------- -Name ........... java-001 -Expiration ..... never -Id ............. qT4tz28B1g59zC3uAXfW -API Key ........ rH55zKd5QT6wvs3UbbkxOA (won't be shown again) -Credentials .... cVQ0dHoyOEIxZzU5ekMzdUFYZlc6ckg1NXpLZDVRVDZ3dnMzVWJia3hPQQ== (won't be shown again) --------------------------------------------------- - -You should always verify the privileges of an API key after creating it. -Verification can be done using the `verify` subcommand. - -The following example verifies that the `java-001` API key has the "agent configuration" and "ingest" privileges. - -["source","sh",subs="attributes"] ------ -{beatname_lc} apikey verify --agent-config --ingest --credentials cVQ0dHoyOEIxZzU5ekMzdUFYZlc6ckg1NXpLZDVRVDZ3dnMzVWJia3hPQQ== ------ - -If the API key has the requested privileges, the response will look similar to this: - -[source,console-result] --------------------------------------------------- -Authorized for privilege "event:write"...: Yes -Authorized for privilege "config_agent:read"...: Yes --------------------------------------------------- - -To invalidate an API key, use the `invalidate` subcommand. -Due to {es} caching, there may be a delay between when this subcommand is executed and when it takes effect. 
- -The following example invalidates the `java-001` API key. - -["source","sh",subs="attributes"] ------ -{beatname_lc} apikey invalidate --name java-001 ------ - -The response will look similar to this: - -[source,console-result] --------------------------------------------------- -Invalidated keys ... qT4tz28B1g59zC3uAXfW -Error count ........ 0 --------------------------------------------------- - -A full list of `apikey` subcommands and flags is available in the <>. - -[[create-api-key-workflow-es]] -[float] -==== {es} API key workflow - -It is also possible to create API keys using the {es} -{ref}/security-api-create-api-key.html[create API key API]. - -This example creates an API key named `java-002`: - -[source,kibana] ----- -POST /_security/api_key -{ - "name": "java-002", <1> - "expiration": "1d", <2> - "role_descriptors": { - "apm": { - "applications": [ - { - "application": "apm", - "privileges": ["sourcemap:write", "event:write", "config_agent:read"], <3> - "resources": ["*"] - } - ] - } - } -} ----- -<1> The name of the API key -<2> The expiration time of the API key -<3> Any assigned privileges - -The response will look similar to this: - -[source,console-result] ----- -{ - "id" : "GnrUT3QB7yZbSNxKET6d", - "name" : "java-002", - "expiration" : 1599153532262, - "api_key" : "RhHKisTmQ1aPCHC_TPwOvw" -} ----- - -The `credential` string, which is what agents use to communicate with APM Server, -is a base64 encoded representation of the API key's `id:api_key`. -It can be created like this: - -[source,console-result] --------------------------------------------------- -echo -n GnrUT3QB7yZbSNxKET6d:RhHKisTmQ1aPCHC_TPwOvw | base64 --------------------------------------------------- - -You can verify your API key has been base64-encoded correctly with the -{ref}/security-api-authenticate.html[Authenticate API]: - -["source","sh",subs="attributes"] ------ -curl -H "Authorization: ApiKey R0gzRWIzUUI3eVpiU054S3pYSy06bXQyQWl4TlZUeEcyUjd4cUZDS0NlUQ==" localhost:9200/_security/_authenticate ------ - -If the API key has been encoded correctly, you'll see a response similar to the following: - -[source,console-result] ----- -{ - "username":"1325298603", - "roles":[], - "full_name":null, - "email":null, - "metadata":{ - "saml_nameid_format":"urn:oasis:names:tc:SAML:2.0:nameid-format:transient", - "saml(http://saml.elastic-cloud.com/attributes/principal)":[ - "1325298603" - ], - "saml_roles":[ - "superuser" - ], - "saml_principal":[ - "1325298603" - ], - "saml_nameid":"_7b0ab93bbdbc21d825edf7dca9879bd8d44c0be2", - "saml(http://saml.elastic-cloud.com/attributes/roles)":[ - "superuser" - ] - }, - "enabled":true, - "authentication_realm":{ - "name":"_es_api_key", - "type":"_es_api_key" - }, - "lookup_realm":{ - "name":"_es_api_key", - "type":"_es_api_key" - } -} ----- - -You can then use the APM Server CLI to verify that the API key has the requested privileges: - -["source","sh",subs="attributes"] ------ -{beatname_lc} apikey verify --credentials R25yVVQzUUI3eVpiU054S0VUNmQ6UmhIS2lzVG1RMWFQQ0hDX1RQd092dw== ------ - -If the API key has the requested privileges, the response will look similar to this: - -[source,console-result] ----- -Authorized for privilege "config_agent:read"...: Yes -Authorized for privilege "event:write"...: Yes -Authorized for privilege "sourcemap:write"...: Yes ----- - -[float] -[[api-key-settings]] -=== API key configuration options - -[float] -[[api-key-auth-settings]] -==== `auth.api_key.*` configuration options - -You can specify the following options in the 
`apm-server.auth.api_key.*` section of the -+{beatname_lc}.yml+ configuration file. -They apply to API key communication between the APM Server and APM Agents. - -NOTE: These settings are different from the API key settings used for {es} output and monitoring. - -[float] -===== `enabled` - -Enable API key authorization by setting `enabled` to `true`. -By default, `enabled` is set to `false`, and API key support is disabled. - -TIP: Not using Elastic APM agents? -When enabled, third-party APM agents must include a valid API key in the following format: -`Authorization: ApiKey `. The key must be the base64 encoded representation of the API key's `id:api_key`. - -[float] -===== `limit` - -Each unique API key triggers one request to {es}. -This setting restricts the number of unique API keys that are allowed per minute. -The minimum value for this setting should be the number of API keys configured in your monitored services. -The default `limit` is `100`. - -[float] -==== `auth.api_key.elasticsearch.*` configuration options - -All of the `auth.api_key.elasticsearch.*` configurations are optional. -If none are set, configuration settings from the `apm-server.output` section will be reused. - -[float] -===== `elasticsearch.hosts` - -API keys are fetched from {es}. -This configuration needs to point to a secured {es} cluster that is able to serve API key requests. - - -[float] -===== `elasticsearch.protocol` - -The name of the protocol {es} is reachable on. -The options are: `http` or `https`. The default is `http`. -If nothing is configured, configuration settings from the `output` section will be reused. - -[float] -===== `elasticsearch.path` - -An optional HTTP path prefix that is prepended to the HTTP API calls. -If nothing is configured, configuration settings from the `output` section will be reused. - -[float] -===== `elasticsearch.proxy_url` - -The URL of the proxy to use when connecting to the {es} servers. -The value may be either a complete URL or a "host[:port]", in which case the "http" scheme is assumed. -If nothing is configured, configuration settings from the `output` section will be reused. - -[float] -===== `elasticsearch.timeout` - -The HTTP request timeout in seconds for the {es} request. -If nothing is configured, configuration settings from the `output` section will be reused. - -[float] -==== `auth.api_key.elasticsearch.ssl.*` configuration options - -SSL is off by default. Set `elasticsearch.protocol` to `https` if you want to enable `https`. - -[float] -===== `elasticsearch.ssl.enabled` - -Enable custom SSL settings. -Set to false to ignore custom SSL settings for secure communication. - -[float] -===== `elasticsearch.ssl.verification_mode` - -Configure SSL verification mode. -If `none` is configured, all server hosts and certificates will be accepted. -In this mode, SSL-based connections are susceptible to man-in-the-middle attacks. -**Use only for testing**. Default is `full`. - -[float] -===== `elasticsearch.ssl.supported_protocols` - -List of supported/valid TLS versions. -By default, all TLS versions from 1.0 to 1.2 are enabled. - -[float] -===== `elasticsearch.ssl.certificate_authorities` - -List of root certificates for HTTPS server verifications. - -[float] -===== `elasticsearch.ssl.certificate` - -The path to the certificate for SSL client authentication. - -[float] -===== `elasticsearch.ssl.key` - -The client certificate key used for client authentication. -This option is required if `certificate` is specified.
- -[float] -===== `elasticsearch.ssl.key_passphrase` - -An optional passphrase used to decrypt an encrypted key stored in the configured key file. -It is recommended to use the provided keystore instead of entering the passphrase in plain text. - -[float] -===== `elasticsearch.ssl.cipher_suites` - -The list of cipher suites to use. The first entry has the highest priority. -If this option is omitted, the Go crypto library’s default suites are used (recommended). - -[float] -===== `elasticsearch.ssl.curve_types` - -The list of curve types for ECDHE (Elliptic Curve Diffie-Hellman ephemeral key exchange). - -[float] -===== `elasticsearch.ssl.renegotiation` - -Configure what types of renegotiation are supported. -Valid options are `never`, `once`, and `freely`. Default is `never`. - -* `never` - Disables renegotiation. -* `once` - Allows a remote server to request renegotiation once per connection. -* `freely` - Allows a remote server to repeatedly request renegotiation. - -[float] -[[api-key-settings-legacy]] -==== `api_key.*` configuration options - -deprecated::[7.14.0, Replaced by `auth.api_key.*`. See <>] - -In versions prior to 7.14.0, API Key authorization was known as `apm-server.api_key`. In 7.14.0 this was renamed `apm-server.auth.api_key`. -The old configuration will continue to work until 8.0.0, and the new configuration will take precedence. - -[[secret-token-legacy]] -=== Secret token - -IMPORTANT: {deprecation-notice-config} -If you've already upgraded, see <>. - -You can configure a secret token to authorize requests to the APM Server. -This ensures that only your agents are able to send data to your APM servers. -Both the agents and the APM servers have to be configured with the same secret token. - -NOTE: Secret tokens are sent as plain-text, -so they only provide security when used in combination with <>. - -To secure the communication between APM agents and the APM Server with a secret token: - -. Make sure <> is enabled -. <> -. <> - -NOTE: Secret tokens are not applicable for the RUM Agent, -as there is no way to prevent them from being publicly exposed. - -[[set-secret-token]] -[float] -=== Set a secret token - -**APM Server configuration** - -// lint ignore fleet -NOTE: {ess} and {ece} deployments provision a secret token when the deployment is created. -The secret token can be found and reset in the {ecloud} console under **Deployments** -- **APM & Fleet**. - -Here's how you set the secret token in APM Server: - -[source,yaml] ----- -apm-server.auth.secret_token: ----- - -We recommend saving the token in the APM Server <>. - -IMPORTANT: Secret tokens are not applicable for the RUM agent, -as there is no way to prevent them from being publicly exposed. 
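
Once the token is set, agents must send it with every request in an `Authorization: Bearer` header. As a quick sanity check (a sketch only; the host, port, and token value are illustrative), you can query the server information endpoint with and without the token and compare the responses:

[source,sh]
----
# With the configured secret token: the response includes full server details.
curl -H "Authorization: Bearer your_secret_token" http://localhost:8200/

# Without the token: the endpoint still returns HTTP 200,
# but the server details are withheld.
curl http://localhost:8200/
----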
- -**Agent-specific configuration** - -Each agent has a configuration for setting the value of the secret token: - -* *Go agent*: {apm-go-ref}/configuration.html#config-secret-token[`ELASTIC_APM_SECRET_TOKEN`] -* *iOS agent*: {apm-ios-ref-v}/configuration.html#secretToken[`secretToken`] -* *Java agent*: {apm-java-ref}/config-reporter.html#config-secret-token[`secret_token`] -* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-secret-token[`ELASTIC_APM_SECRET_TOKEN`] -* *Node.js agent*: {apm-node-ref}/configuration.html#secret-token[`Secret Token`] -* *PHP agent*: {apm-php-ref-v}/configuration-reference.html#config-secret-token[`secret_token`] -* *Python agent*: {apm-py-ref}/configuration.html#config-secret-token[`secret_token`] -* *Ruby agent*: {apm-ruby-ref}/configuration.html#config-secret-token[`secret_token`] - -[[https-in-agents]] -[float] -=== HTTPS communication in APM agents - -To enable secure communication in your agents, you need to update the configured server URL to use `HTTPS` instead of `HTTP`. - -* *Go agent*: {apm-go-ref}/configuration.html#config-server-url[`ELASTIC_APM_SERVER_URL`] -* *Java agent*: {apm-java-ref}/config-reporter.html#config-server-urls[`server_urls`] -* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-server-url[`ServerUrl`] -* *Node.js agent*: {apm-node-ref}/configuration.html#server-url[`serverUrl`] -* *PHP agent*: {apm-php-ref-v}/configuration-reference.html#config-server-url[`server_url`] -* *Python agent*: {apm-py-ref}/[`server_url`] -* *Ruby agent*: {apm-ruby-ref}/configuration.html#config-server-url[`server_url`] - -Some agents also allow you to specify a custom certificate authority for connecting to APM Server. - -* *Go agent*: certificate pinning through {apm-go-ref}/configuration.html#config-server-cert[`ELASTIC_APM_SERVER_CERT`] - -* *Python agent*: certificate pinning through {apm-py-ref}/configuration.html#config-server-cert[`server_cert`] -* *Ruby agent*: certificate pinning through {apm-ruby-ref}/configuration.html#config-ssl-ca-cert[`server_ca_cert`] -* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-server-cert[`ServerCert`] -* *Node.js agent*: custom CA setting through {apm-node-ref}/configuration.html#server-ca-cert-file[`serverCaCertFile`] -* *Java agent*: adding the certificate to the JVM `trustStore`. -See {apm-java-ref}/ssl-configuration.html#ssl-server-authentication[APM Server authentication] for more details. - -Agents that don't allow you to specify a custom certificate will allow you to -disable verification of the SSL certificate. -This ensures encryption, but does not verify that you are sending data to the correct APM Server. - -* *Go agent*: {apm-go-ref}/configuration.html#config-verify-server-cert[`ELASTIC_APM_VERIFY_SERVER_CERT`] -* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-verify-server-cert[`VerifyServerCert`] -* *Java agent*: {apm-java-ref}/config-reporter.html#config-verify-server-cert[`verify_server_cert`] -* *PHP agent*: {apm-php-ref-v}/configuration-reference.html#config-verify-server-cert[`verify_server_cert`] -* *Python agent*: {apm-py-ref}/configuration.html#config-verify-server-cert[`verify_server_cert`] -* *Ruby agent*: {apm-ruby-ref}/configuration.html#config-verify-server-cert[`verify_server_cert`] -* *Node.js agent*: {apm-node-ref}/configuration.html#validate-server-cert[`verifyServerCert`] - -[[secure-communication-unauthenticated]] -=== Anonymous authentication - -IMPORTANT: {deprecation-notice-config} -If you've already upgraded, see <>.
- -Elastic APM agents can send unauthenticated (anonymous) events to the APM Server. -This is useful for agents that run on clients, like the Real User Monitoring (RUM) agent running in a browser, -or the iOS/Swift agent running in a user application. -Incoming requests are considered anonymous if no authentication token can be extracted from the request. -By default, these anonymous requests are rejected and an authentication error is returned. - -Anonymous authentication must be enabled to collect RUM data. -To enable anonymous access, set either <> or -<> to `true`. - -Because anyone can send anonymous events to the APM Server, -additional configuration variables are available to rate limit the number of anonymous events the APM Server processes; -throughput is equal to the `rate_limit.ip_limit` times the `rate_limit.event_limit`. - -See <> for a complete list of options and a sample configuration file. diff --git a/docs/legacy/security.asciidoc b/docs/legacy/security.asciidoc deleted file mode 100644 index 57b6a8767a3..00000000000 --- a/docs/legacy/security.asciidoc +++ /dev/null @@ -1,9 +0,0 @@ -A reference of all available <> is also available. - -[float] -[[security-overview]] -== Security Overview - -APM Server exposes an HTTP endpoint, and as with anything that opens ports on your servers, -you should be careful about who can connect to it. -Firewall rules are recommended to ensure only authorized systems can connect. diff --git a/docs/legacy/server-info.asciidoc b/docs/legacy/server-info.asciidoc deleted file mode 100644 index f734a05e22e..00000000000 --- a/docs/legacy/server-info.asciidoc +++ /dev/null @@ -1,45 +0,0 @@ -[[server-info]] -== Server Information API - -++++ -Server information -++++ - -IMPORTANT: {deprecation-notice-api} -If you've already upgraded, see <>. - -The APM Server exposes an API endpoint to query general server information. -This lightweight endpoint is useful as a server up/down health check. - -[[server-info-endpoint]] -[float] -=== Server Information endpoint -Send an `HTTP GET` request to the server information endpoint: - -[source,bash] ------------------------------------------------------------- -http(s)://{hostname}:{port}/ ------------------------------------------------------------- - -This endpoint always returns an HTTP 200. - -If an <> or <> is set, only requests including <> will receive server details. - -[[server-info-examples]] -[float] -==== Example - -Example APM Server information request: - -["source","sh",subs="attributes"] ---------------------------------------------------------------------------- -curl -X GET http://127.0.0.1:8200/ \ - -H "Authorization: Bearer secret_token" - -{ - "build_date": "2021-12-18T19:59:06Z", - "build_sha": "24fe620eeff5a19e2133c940c7e5ce1ceddb1445", - "publish_ready": true, - "version": "{version}" -} ---------------------------------------------------------------------------- diff --git a/docs/legacy/setting-up-and-running.asciidoc b/docs/legacy/setting-up-and-running.asciidoc deleted file mode 100644 index aaedceea193..00000000000 --- a/docs/legacy/setting-up-and-running.asciidoc +++ /dev/null @@ -1,30 +0,0 @@ - -[[setting-up-and-running]] -== Set up APM Server - -++++ -Set up -++++ - -IMPORTANT: {deprecation-notice-installation} - -Before reading this section, see the <> -for basic installation and running instructions.
- -This section includes additional information on how to set up and run APM Server, including: - -* <> -* <> -* <> -* <> -* <> - -include::{libbeat-dir}/shared-directory-layout.asciidoc[] - -include::{libbeat-dir}/keystore.asciidoc[] - -include::{libbeat-dir}/command-reference.asciidoc[] - -include::./high-availability.asciidoc[] - -include::{libbeat-dir}/shared-systemd.asciidoc[] diff --git a/docs/legacy/sourcemap-api.asciidoc b/docs/legacy/sourcemap-api.asciidoc deleted file mode 100644 index 2c54bf48d55..00000000000 --- a/docs/legacy/sourcemap-api.asciidoc +++ /dev/null @@ -1,88 +0,0 @@ -[[sourcemap-api]] -== Source map upload API - -++++ -Source map upload -++++ - -IMPORTANT: {deprecation-notice-api} -If you've already upgraded, see <>. - -IMPORTANT: You must <> in the APM Server for this endpoint to work. - -The APM Server exposes an API endpoint to upload source maps for real user monitoring (RUM). -See the <> guide to get started. - -If you're using the <>, -you must use the {kib} {kibana-ref}/rum-sourcemap-api.html[source map upload API] instead. - -[[sourcemap-endpoint]] -[float] -=== Upload endpoint -Send an `HTTP POST` request with the `Content-Type` header set to `multipart/form-data` to the source map endpoint: - -[source,bash] ------------------------------------------------------------- -http(s)://{hostname}:{port}/assets/v1/sourcemaps ------------------------------------------------------------- - -[[sourcemap-request-fields]] -[float] -==== Request Fields -The request must include the fields needed to correctly identify the source map later on: - -* `service_name` -* `service_version` -* `sourcemap` - must follow the https://docs.google.com/document/d/1U1RGAehQwRypUTovF1KRlpiOFze0b-_2gc6fAH0KY0k[Source map revision 3 proposal] -spec and be attached as a `file upload`. -* `bundle_filepath` - the absolute path of the final bundle as it is used in the web application - -You can configure an <> or <> to restrict source map uploads. - -[float] -[[sourcemap-apply]] -==== How source maps are applied - -APM Server attempts to find the correct source map for each `stack trace frame` in an event. -To do this, it tries the following: - -* Compare the event's `service.name` with the source map's `service_name` -* Compare the event's `service.version` with the source map's `service_version` -* Compare the stack trace frame's `abs_path` with the source map's `bundle_filepath` - -While comparing the stack trace frame's `abs_path` with the source map's `bundle_filepath`, the search logic prioritizes a full `abs_path` match: -[source,console] ---------------------------------------------------------------------------- -{ - "sourcemap.bundle_filepath": "http://localhost/static/js/bundle.js" -} ---------------------------------------------------------------------------- - -If there is no full match, it also accepts source maps that match only the URL's path (without the host). -[source,console] ---------------------------------------------------------------------------- -{ - "sourcemap.bundle_filepath": "/static/js/bundle.js" -} ---------------------------------------------------------------------------- - -If a source map is found, the `stack trace frame` attributes `filename`, `function`, `line number`, and `column number` are overwritten, -and `abs_path` is https://golang.org/pkg/path/#Clean[cleaned] to be the shortest path name equivalent to the given path name. -If multiple source maps are found, -the one with the latest upload timestamp is used.
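
To make the selection rules above concrete, here is a minimal sketch of the matching logic in JavaScript. It is an illustration only, not APM Server's actual implementation (which is written in Go), and the `upload_timestamp` field name used for tie-breaking is an assumption:

[source,js]
----
// Sketch of the source map selection described above.
// `sourcemaps` is the list of previously uploaded source maps.
function findSourceMap (event, frame, sourcemaps) {
  var candidates = sourcemaps.filter(function (sm) {
    return sm.service_name === event.service.name &&
      sm.service_version === event.service.version
  })

  // Prefer a full match on the bundle file path.
  var matches = candidates.filter(function (sm) {
    return sm.bundle_filepath === frame.abs_path
  })

  // Otherwise, fall back to matching only the URL's path (without the host).
  if (matches.length === 0) {
    var path = new URL(frame.abs_path).pathname
    matches = candidates.filter(function (sm) {
      return sm.bundle_filepath === path
    })
  }

  // If multiple source maps match, the latest upload wins.
  matches.sort(function (a, b) { return b.upload_timestamp - a.upload_timestamp })
  return matches[0] // undefined if no source map matched
}
----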
- -[[sourcemap-api-examples]] -[float] -==== Example - -Example source map request including an optional <> "mysecret": - -["source","sh",subs="attributes"] ---------------------------------------------------------------------------- -curl -X POST http://127.0.0.1:8200/assets/v1/sourcemaps \ - -H "Authorization: Bearer mysecret" \ - -F service_name="test-service" \ - -F service_version="1.0" \ - -F bundle_filepath="http://localhost/static/js/bundle.js" \ - -F sourcemap=@bundle.js.map ---------------------------------------------------------------------------- diff --git a/docs/legacy/sourcemap-indices.asciidoc b/docs/legacy/sourcemap-indices.asciidoc deleted file mode 100644 index ed1dfc0a683..00000000000 --- a/docs/legacy/sourcemap-indices.asciidoc +++ /dev/null @@ -1,13 +0,0 @@ -[[sourcemap-indices]] -== Example source map document - -++++ -Source map document -++++ - -This example shows what a source map document can look like when indexed in {es}: - -[source,json] ----- -include::../data/intake-api/generated/sourcemap/bundle.js.map[] ----- diff --git a/docs/legacy/sourcemaps.asciidoc b/docs/legacy/sourcemaps.asciidoc deleted file mode 100644 index 4e3ae6b96d2..00000000000 --- a/docs/legacy/sourcemaps.asciidoc +++ /dev/null @@ -1,183 +0,0 @@ -[[sourcemaps]] -== How to apply source maps to error stack traces when using minified bundles - -++++ -Create and upload source maps (RUM) -++++ - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, please see <> instead. - -Minifying JavaScript bundles in production is a common practice; -it can greatly improve load times and reduce the network latency of your applications. -The problem with minifying code is that it can be hard to debug. - -For best results, uploading source maps should become a part of your deployment procedure, -and not something you only do when you see unhelpful errors. -That's because uploading source maps after errors happen won't make old errors magically readable — -errors must occur again before source mapping can be applied. - -Here's an example of an exception stack trace in the {apm-app} when using minified code. -As you can see, it's not very helpful. - -[role="screenshot"] -image::images/source-map-before.png[{apm-app} without source mapping] - -With a source map, minified files are mapped back to the original source code, -allowing you to maintain the speed advantage of minified code, -without losing the ability to quickly and easily debug your application. -Here's the same example as before, but with a source map uploaded and applied: - -[role="screenshot"] -image::images/source-map-after.png[{apm-app} with source mapping] - -Follow the steps below to enable source mapping of your error stack traces in the {apm-app}: - -* <> -* <> -* <> -* <> - -[float] -[[sourcemap-rum-initialize]] -=== Initialize the RUM Agent - -Set the service name and version of your application when initializing the RUM Agent. -To make uploading subsequent source maps easier, the `serviceVersion` you choose might be the -`version` from your `package.json`. For example: - -[source,js] ----- -import { init as initApm } from '@elastic/apm-rum' -const serviceVersion = require("./package.json").version - -const apm = initApm({ - serviceName: 'myService', - serviceVersion: serviceVersion -}) ----- - -Or, `serviceVersion` could be a git commit reference.
For example: - -[source,js] ----- -const git = require('git-rev-sync') -const serviceVersion = git.short() ----- - -It can also be any other unique string that indicates a specific version of your application. -The APM integration uses the service name and version to match the correct source map file to each stack trace. - -[float] -[role="child_attributes"] -[[sourcemap-rum-generate]] -=== Generate a source map - -To be compatible with Elastic APM, source maps must follow the -https://sourcemaps.info/spec.html[source map revision 3 proposal spec]. - -Source maps can be generated and configured in many different ways. -For example, Parcel automatically generates source maps by default. -If you're using webpack, some configuration may be needed to generate a source map: - -[source,js] ----- -const webpack = require('webpack') -const serviceVersion = require("./package.json").version <1> -const TerserPlugin = require('terser-webpack-plugin'); -module.exports = { - entry: 'app.js', - output: { - filename: 'app.min.js', - path: './dist' - }, - devtool: 'source-map', - plugins: [ - new webpack.DefinePlugin({'serviceVersion': JSON.stringify(serviceVersion)}), - new TerserPlugin({ - sourceMap: true - }) - ] -} ----- -<1> If you're using a different method of defining `serviceVersion`, you can set it here. - -[float] -[[sourcemap-rum-configure]] -=== Configure the {kib} endpoint in APM Server - -include::./tab-widgets/kibana-endpoint-widget.asciidoc[] - -[float] -[[sourcemap-rum-upload]] -=== Upload the source map to {kib} - -{kib} exposes a {kibana-ref}/rum-sourcemap-api.html[source map endpoint] for uploading source maps. -Source maps can be uploaded as a string, or as a file upload. - -Let's look at two different ways to upload a source map: curl and a custom application. -Each example includes the four fields necessary for APM Server to later map minified code to its source: - -* `service_name` - Should match the `serviceName` from step one -* `service_version` - Should match the `serviceVersion` from step one -* `bundle_filepath` - The absolute path of the final bundle as used in the web application -* `sourcemap` - The location of the source map. -If you have multiple source maps, you'll need to upload each individually. - -[float] -[[sourcemap-curl]] -==== Upload via curl - -Here’s an example curl request that uploads the source map file created in the previous step. -This request uses an API key for authentication. - -[source,console] ----- -SERVICEVERSION=`node -e "console.log(require('./package.json').version);"` && \ <1> -curl -X POST "http://localhost:5601/api/apm/sourcemaps" \ --H 'Content-Type: multipart/form-data' \ --H 'kbn-xsrf: true' \ --H "Authorization: ApiKey ${YOUR_API_KEY}" \ <2> --F 'service_name="foo"' \ --F "service_version=\"$SERVICEVERSION\"" \ --F 'bundle_filepath="/test/e2e/general-usecase/bundle.js.map"' \ --F 'sourcemap=@./dist/app.min.js.map' ----- -<1> This example uses the version from `package.json` -<2> The API key used here needs to have appropriate {kibana-ref}/rum-sourcemap-api.html[privileges] - -[float] -[[sourcemap-custom-app]] -==== Upload via a custom app - -To ensure uploading source maps becomes a part of your deployment process, -consider automating the process with a custom application.
-Here's an example Node.js application that uploads the source map file created in the previous step: - -[source,js] ----- -console.log('Uploading sourcemaps!') -var fs = require('fs') -var request = require('request') -var filepath = './dist/app.min.js.map' -var headers = { - 'kbn-xsrf': 'true', - 'Authorization': 'ApiKey ' + process.env.YOUR_API_KEY // Your base64-encoded API key -} -var formData = { - service_name: 'service-name', - service_version: require('./package.json').version, // Or use 'git-rev-sync' for the git commit hash - bundle_filepath: 'http://localhost/app.min.js', - sourcemap: fs.createReadStream(filepath) // The multipart Content-Type header is set automatically -} -request.post({url: 'http://localhost:5601/api/apm/sourcemaps', headers: headers, formData: formData}, function (err, resp, body) { - if (err) { - console.log('Error while uploading sourcemaps!', err) - } else { - console.log('Sourcemaps uploaded!') - } -}) ----- - -That's it! New exception stack traces should now be correctly mapped to your source code. -Don't forget to enable RUM support in the APM integration if you haven't already. diff --git a/docs/legacy/span-api.asciidoc b/docs/legacy/span-api.asciidoc deleted file mode 100644 index 3b42cf25fed..00000000000 --- a/docs/legacy/span-api.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -[[span-api]] -=== Spans - -Spans are events, captured by an agent, that occur in a monitored service. - -[[span-schema]] -[float] -==== Span Schema - -APM Server uses JSON Schema to validate requests. The specification for spans is defined on -{github_repo_link}/docs/spec/v2/span.json[GitHub] and included below: - -[source,json] ----- -include::../spec/v2/span.json[] ----- diff --git a/docs/legacy/span-indices.asciidoc b/docs/legacy/span-indices.asciidoc deleted file mode 100644 index 4a82cebf1fe..00000000000 --- a/docs/legacy/span-indices.asciidoc +++ /dev/null @@ -1,13 +0,0 @@ -[[span-indices]] -== Example span documents - -++++ -Span documents -++++ - -This example shows what span documents can look like when indexed in {es}: - -[source,json] ----- -include::../data/elasticsearch/generated/spans.json[] ----- diff --git a/docs/legacy/ssl-input-settings.asciidoc b/docs/legacy/ssl-input-settings.asciidoc deleted file mode 100644 index 11fed09e222..00000000000 --- a/docs/legacy/ssl-input-settings.asciidoc +++ /dev/null @@ -1,79 +0,0 @@ -[[agent-server-ssl]] -=== SSL input settings - -IMPORTANT: {deprecation-notice-config} -If you've already upgraded, please see <> instead. - -You can specify the following options in the `apm-server.ssl` section of the +{beatname_lc}.yml+ config file. -They apply to SSL/TLS communication between the APM Server and APM Agents. - -[float] -==== `enabled` - -The `enabled` setting can be used to enable the SSL configuration by setting -it to `true`. The default value is `false`. - -[float] -==== `certificate` - -The path to the file containing the certificate for Server authentication. -Required if `apm-server.ssl.enabled` is `true`. - -[float] -==== `key` - -The path to the file containing the Server certificate key. -Required if `apm-server.ssl.enabled` is `true`. - -[float] -==== `key_passphrase` - -The passphrase used to decrypt an encrypted key stored in the configured `key` file. -We recommend saving the `key_passphrase` in the APM Server <>. - -[float] -==== `supported_protocols` - -This setting is a list of allowed protocol versions: -`SSLv3`, `TLSv1.0`, `TLSv1.1`, `TLSv1.2` and `TLSv1.3`. We do not recommend using `SSLv3` or `TLSv1.0`. -The default value is `[TLSv1.1, TLSv1.2, TLSv1.3]`. - -[float] -==== `cipher_suites` - -The list of cipher suites to use.
The first entry has the highest priority. -If this option is omitted, the Go crypto library's https://golang.org/pkg/crypto/tls/[default suites] -are used (recommended). Note that TLS 1.3 cipher suites are not -individually configurable in Go, so they are not included in this list. - -include::{libbeat-dir}/shared-ssl-config.asciidoc[tag=cipher_suites] - -[float] -==== `curve_types` - -The list of curve types for ECDHE (Elliptic Curve Diffie-Hellman ephemeral key exchange). - -[float] -==== `certificate_authorities` - -The list of root certificates for verifying client certificates. -If `certificate_authorities` is empty or not set, the trusted certificate authorities of the host system are used. -If `certificate_authorities` is set, `client_authentication` will be automatically set to `required`. -Sending client certificates is currently only supported by the RUM agent through the browser, -the Java agent (see {apm-java-ref-v}/ssl-configuration.html[Agent certificate authentication]), -and the Jaeger agent. - -[float] -==== `client_authentication` - -This configures what types of client authentication are supported. The valid options -are `none`, `optional`, and `required`. The default is `none`. -If `certificate_authorities` has been specified, this setting will automatically change to `required`. -This option only needs to be configured when the agent is expected to provide a client certificate. -Sending client certificates is currently only supported by the RUM agent through the browser, -the Java agent (see {apm-java-ref-v}/ssl-configuration.html[Agent certificate authentication]), -and the Jaeger agent. - -* `none` - Disables client authentication. -* `optional` - When a client certificate is given, the server will verify it. -* `required` - Requires clients to provide a valid certificate. diff --git a/docs/legacy/ssl-input.asciidoc b/docs/legacy/ssl-input.asciidoc deleted file mode 100644 index f6ad9cda7ef..00000000000 --- a/docs/legacy/ssl-input.asciidoc +++ /dev/null @@ -1,74 +0,0 @@ -SSL/TLS is disabled by default. Besides enabling it, you also need to provide a certificate and a corresponding -private key. - -The following is a basic APM Server SSL config with secure communication enabled. -This will make APM Server serve HTTPS requests instead of HTTP. - -[source,yaml] ----- -apm-server.ssl.enabled: true -apm-server.ssl.certificate: "/path/to/apm-server.crt" -apm-server.ssl.key: "/path/to/apm-server.key" ----- - -A full list of configuration options is available in <>. - -The certificate and private key can be issued by a trusted certificate authority (CA) -or <>. - -NOTE: When using a self-signed (or custom CA) certificate, communication from APM Agents will require -additional settings due to <>. - -[[self-signed-cert]] -==== Creating a self-signed certificate - -The {es} distribution offers the `certutil` tool for the creation of self-signed certificates: - -1. Create a CA: `./bin/elasticsearch-certutil ca --pem`. You'll be prompted to enter the desired -location of the output zip archive containing the certificate and the private key. -2. Extract the contents of the CA archive. -3. Create the self-signed certificate: `./bin/elasticsearch-certutil cert --ca-cert <ca-path>/ca.crt --ca-key <ca-path>/ca.key --pem --name localhost` -4. Extract the certificate and key from the resulting zip archive.
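
Put together, the sequence looks roughly like this. This is a sketch only: `certutil` prompts for output locations, and the archive names and extraction paths below are illustrative:

[source,sh]
----
# 1. Create a CA (you'll be prompted for the output location; ca.zip assumed here).
./bin/elasticsearch-certutil ca --pem

# 2. Extract the CA certificate and private key.
unzip ca.zip                    # yields ca/ca.crt and ca/ca.key

# 3. Create the self-signed certificate, signed by the CA from step 1.
./bin/elasticsearch-certutil cert --ca-cert ca/ca.crt --ca-key ca/ca.key --pem --name localhost

# 4. Extract the certificate and key from the resulting zip archive.
unzip certificate-bundle.zip    # yields localhost/localhost.crt and localhost/localhost.key
----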
- -[[ssl-server-authentication]] -==== Server certificate authentication - -By default, when SSL is enabled for APM Server inbound communication, agents will verify the identity -of the APM Server by authenticating its certificate. - -When the APM server uses a certificate that is not chained to a publicly-trusted certificate -(e.g. self-signed), additional settings will be required on the agent side: - -* *Go agent*: certificate pinning through {apm-go-ref}/configuration.html#config-server-cert[`ELASTIC_APM_SERVER_CERT`] -* *Python agent*: certificate pinning through {apm-py-ref}/configuration.html#config-server-cert[`server_cert`] -* *Ruby agent*: certificate pinning through {apm-ruby-ref}/configuration.html#config-ssl-ca-cert[`server_ca_cert`] -* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-server-cert[`ServerCert`] -* *Node.js agent*: custom CA setting through {apm-node-ref}/configuration.html#server-ca-cert-file[`serverCaCertFile`] -* *Java agent*: adding the certificate to the JVM `trustStore`. -See {apm-java-ref}/ssl-configuration.html#ssl-server-authentication[APM Server authentication] for more details. - -Disabling APM Server authentication is not recommended; -however, it is possible through agent configuration: - -* *Go agent*: {apm-go-ref}/configuration.html#config-verify-server-cert[`ELASTIC_APM_VERIFY_SERVER_CERT`] -* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-verify-server-cert[`VerifyServerCert`] -* *Java agent*: {apm-java-ref}/config-reporter.html#config-verify-server-cert[`verify_server_cert`] -* *PHP agent*: {apm-php-ref-v}/configuration-reference.html#config-verify-server-cert[`verify_server_cert`] -* *Python agent*: {apm-py-ref}/configuration.html#config-verify-server-cert[`verify_server_cert`] -* *Ruby agent*: {apm-ruby-ref}/configuration.html#config-verify-server-cert[`verify_server_cert`] -* *Node.js agent*: {apm-node-ref}/configuration.html#validate-server-cert[`verifyServerCert`] - -[[ssl-client-authentication]] -==== Client certificate authentication - -By default, the APM Server does not require agents to provide a certificate for authentication. -This can be changed through the `ssl.client_authentication` configuration. - -There is no dedicated support for SSL/TLS client certificate authentication in Elastic's backend agents, -so setting it up may require some additional effort. For example, see -{apm-java-ref}/ssl-configuration.html#ssl-client-authentication[Java Agent authentication]. - -If agents are authenticating themselves using a certificate that cannot be authenticated through known -CAs (e.g. self-signed certificates), use `ssl.certificate_authorities` to set a custom CA. -This will automatically modify the `ssl.client_authentication` configuration to require authentication. diff --git a/docs/legacy/storage-management.asciidoc b/docs/legacy/storage-management.asciidoc deleted file mode 100644 index 844b9d51fae..00000000000 --- a/docs/legacy/storage-management.asciidoc +++ /dev/null @@ -1,299 +0,0 @@ -[[storage-management]] -== Storage Management - -++++ -Manage Storage -++++ - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, please see <> instead. - -* <> -* <> -* <> -* <> -* <> - -[[sizing-guide]] -=== Storage and sizing guide - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, please see <> instead. - -APM processing and storage costs are largely dominated by transactions, spans, and stack frames.
- -* {apm-overview-ref-v}/transactions.html[*Transactions*] describe events captured by an Elastic {apm-agent} instrumenting a service. -They are the highest level of work being measured within a service. -* {apm-overview-ref-v}/transaction-spans.html[*Spans*] belong to transactions. They measure from the start to end of an activity, -and contain information about a specific code path that has been executed. -* *Stack frames* belong to spans. Stack frames represent a function call on the call stack, -and include attributes like function name, file name and path, line number, etc. -Stack frames can heavily influence the size of a span. - -[float] -[[typical-transactions]] -==== Typical transactions - -Due to the high variability of APM data, it's difficult to classify a transaction as typical. -Regardless, this guide will attempt to classify transactions as _Small_, _Medium_, or _Large_, -and make recommendations based on those classifications. - -The size of a transaction depends on the language, agent settings, and what services the agent instruments. -For instance, an agent auto-instrumenting a service with a popular tech stack -(web framework, database, caching library, etc.) is more likely to generate bigger transactions. - -In addition, all agents support manual instrumentation. -How little or how much you use these APIs will also impact what a typical transaction looks like. - -If your sampling rate is very small, transactions will be the dominant storage cost. - -Here's a speculative reference: - -[options="header"] -|======================================================================= -|Transaction size |Number of Spans |Number of stack frames -|_Small_ |5-10 |5-10 -|_Medium_ |15-20 |15-20 -|_Large_ |30-40 |30-40 -|======================================================================= - -There will always be transaction outliers with hundreds of spans or stack frames, but those are very rare. -Small transactions are the most common. - -[float] -[[typical-storage]] -==== Typical storage - -Consider the following typical storage reference. -These numbers do not account for {es} compression. - -* 1 unsampled transaction is **~1 KB** -* 1 span with 10 stack frames is **~4 KB** -* 1 span with 50 stack frames is **~20 KB** -* 1 transaction with 10 spans, each with 10 stack frames is **~50 KB** -* 1 transaction with 25 spans, each with 25 stack frames, is **250-300 KB** -* 100 transactions with 10 spans, each with 10 stack frames, sampled at 90% is **600 KB** - -APM data compresses quite well, so the storage cost in {es} will be considerably less: - -* Indexing 100 unsampled transactions per second for 1 hour results in 360,000 documents. These documents use around **50 MB** of disk space. -* Indexing 10 transactions per second for 1 hour, each transaction with 10 spans, each span with 10 stack frames, results in 396,000 documents. These documents use around **200 MB** of disk space. -* Indexing 25 transactions per second for 1 hour, each transaction with 25 spans, each span with 25 stack frames, results in 2,340,000 documents. These documents use around **1.2 GB** of disk space. - -NOTE: These examples were indexing the same data over and over with minimal variation. Because of that, the observed compression ratios of 80-90% are somewhat optimistic. - -[[processing-performance]] -=== Processing and performance - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, please see <> instead.
- -APM Server performance depends on a number of factors: memory and CPU available, -network latency, transaction sizes, workload patterns, -agent and server settings, versions, and protocol. - -Let's look at a simple example that makes the following assumptions: - -* The load is generated in the same region where APM Server and {es} are deployed. -* We're using the default settings in {ecloud}. -* A small number of agents are reporting. - -This leaves us with relevant variables like payload and instance sizes. -See the table below for approximations. -As a reminder, events are -{apm-overview-ref-v}/transactions.html[transactions] and -{apm-overview-ref-v}/transaction-spans.html[spans]. - -[options="header"] -|======================================================================= -|Transaction/Instance |512 MB Instance |2 GB Instance |8 GB Instance -|Small transactions - -_5 spans with 5 stack frames each_ |600 events/second |1200 events/second |4800 events/second -|Medium transactions - -_15 spans with 15 stack frames each_ |300 events/second |600 events/second |2400 events/second -|Large transactions - -_30 spans with 30 stack frames each_ |150 events/second |300 events/second |1400 events/second -|======================================================================= - -In other words, a 512 MB instance can process ~3 MB per second, -while an 8 GB instance can process ~20 MB per second. - -APM Server is CPU bound, so it scales better from 2 GB to 8 GB than it does from 512 MB to 2 GB. -This is because larger instance types in {ecloud} come with much more computing power. - -Don't forget that the APM Server is stateless. -Multiple running instances do not need to know about each other. -This means that with a properly sized {es} instance, APM Server scales out linearly. - -NOTE: RUM deserves special consideration. The RUM agent runs in browsers, and there can be many thousands of them reporting to an APM Server with highly variable network latency. - -[[reduce-storage]] -=== Reduce storage - -IMPORTANT: {deprecation-notice-data} -If you've already upgraded, please see <> instead. - -The amount of storage for APM data depends on several factors: -the number of services you are instrumenting, how much traffic the services see, agent and server settings, -and the length of time you store your data. - -[float] -[[reduce-sample-rate]] -==== Reduce the sample rate - -The transaction sample rate directly influences the number of documents (more precisely, spans) to be indexed. -It is the easiest way to reduce storage. - -The transaction sample rate is a configuration setting of each agent. -Reducing it does not affect the collection of metrics such as _Transactions per second_. - -[float] -[[reduce-stacktrace]] -==== Reduce collected stack trace information - -Elastic APM agents collect `stacktrace` information under certain circumstances. -This can be very helpful in identifying issues in your code, -but it also comes with overhead at collection time and increases storage usage. - -Stack trace collection settings are managed in each agent. - -[float] -[[delete-data]] -==== Delete data - -You might want to keep data only for a defined time period. -This might mean deleting old documents periodically, -deleting data collected for specific services or customers, -or deleting specific indices.
-
-Depending on your use case,
-you can delete data periodically with <>,
-{curator-ref-current}[Curator], the {ref}/docs-delete-by-query.html[Delete By Query API],
-or in the {kibana-ref}/managing-indices.html[{kib} Index Management UI].
-
-[float]
-[[delete-data-ilm]]
-===== Delete data with {ilm-init}
-
-Index lifecycle management ({ilm-init}) enables you to automate how you want to manage your indices over time.
-You can base actions on factors such as shard size and performance requirements.
-See <> to learn more.
-
-[float]
-[[delete-data-periodically]]
-===== Delete data periodically
-
-To delete data periodically, you can use {curator-ref-current}[Curator] and set up a cron job to run it.
-
-By default, APM indices have the pattern `apm-%{[observer.version]}-{type}-%{+yyyy.MM.dd}`.
-With the curator command-line interface you can, for instance, see all your existing indices:
-
-["source","sh",subs="attributes"]
-------------------------------------------------------------
-curator_cli --host localhost show_indices --filter_list '[{"filtertype":"pattern","kind":"prefix","value":"apm-"}]'
-
-apm-{version}-error-{sample_date_0}
-apm-{version}-error-{sample_date_1}
-apm-{version}-error-{sample_date_2}
-apm-{version}-sourcemap
-apm-{version}-span-{sample_date_0}
-apm-{version}-span-{sample_date_1}
-apm-{version}-span-{sample_date_2}
-apm-{version}-transaction-{sample_date_0}
-apm-{version}-transaction-{sample_date_1}
-apm-{version}-transaction-{sample_date_2}
------------------------------------------------------------- 
-
-And then delete any span indices older than 1 day:
-
-["source","sh",subs="attributes"]
-------------------------------------------------------------
-curator_cli --host localhost delete_indices --filter_list '[{"filtertype":"pattern","kind":"prefix","value":"apm-{version}-span-"}, {"filtertype":"age","source":"name","timestring":"%Y.%m.%d","unit":"days","unit_count":1,"direction":"older"}]'
-
-INFO Deleting selected indices: [apm-{version}-span-{sample_date_0}, apm-{version}-span-{sample_date_1}]
-INFO ---deleting index apm-{version}-span-{sample_date_0}
-INFO ---deleting index apm-{version}-span-{sample_date_1}
-INFO "delete_indices" action completed.
------------------------------------------------------------- 
-
-[float]
-[[delete-data-by-query]]
-===== Delete data matching a query
-
-You can delete all APM documents matching a specific query.
-For example, to delete all documents with a given `service.name`, use the following request:
-
-["source","console"]
-------------------------------------------------------------
-POST /apm-*/_delete_by_query
-{
-  "query": {
-    "term": {
-      "service.name": {
-        "value": "old-service-name"
-      }
-    }
-  }
-}
------------------------------------------------------------- 
-
-See {ref}/docs-delete-by-query.html[delete by query] for further information on this topic.
-
-[float]
-[[delete-data-kibana]]
-===== Delete data via {kib} Index Management UI
-
-Select the indices you want to delete, and click **Manage indices** to see the available actions.
-Then click **delete indices**.
-
-[[manage-indices-kibana]]
-=== Manage indices via {kib}
-
-IMPORTANT: {deprecation-notice-data}
-If you've already upgraded, please see <> instead.
-
-The {kib} UI for {kibana-ref}/managing-indices.html[managing indices] allows you to view indices,
-index settings, mappings, document counts, used storage per index, and much more.
-You can also perform management operations, like deleting indices directly via the {kib} UI.
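If you prefer the API to the UI, deleting a specific index is a single request. As a sketch using this guide's sample index names (adjust the pattern to your own deployment):

["source","console",subs="attributes"]
----
DELETE /apm-{version}-span-{sample_date_0}
----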
-Finally, the UI supports applying bulk operations on several indices at once.
-
-[[update-existing-data]]
-=== Update existing data
-
-IMPORTANT: {deprecation-notice-data}
-If you've already upgraded, please see <> instead.
-
-You might want to update documents that are already indexed.
-For example, your service name might have been set incorrectly.
-
-To do this, you can use the {ref}/docs-update-by-query.html[Update By Query API].
-
-[float]
-[[update-data-rename-a-service]]
-==== Rename a service
-
-To rename a service, send the following request:
-
-["source","sh"]
-------------------------------------------------------------
-POST /apm-*/_update_by_query
-{
-  "query": {
-    "term": {
-      "service.name": {
-        "value": "old-service-name"
-      }
-    }
-  },
-  "script": {
-    "source": "ctx._source.service.name = 'new-service-name'",
-    "lang": "painless"
-  }
-}
------------------------------------------------------------- 
-// CONSOLE
-
-TIP: Remember to also change the service name in the {apm-agents-ref}/index.html[{apm-agent} configuration].
diff --git a/docs/legacy/tab-widgets/configure-agent-widget.asciidoc b/docs/legacy/tab-widgets/configure-agent-widget.asciidoc
deleted file mode 100644
index 0936b939643..00000000000
--- a/docs/legacy/tab-widgets/configure-agent-widget.asciidoc
+++ /dev/null
@@ -1,40 +0,0 @@
-++++
-
-++++ - -include::configure-agent.asciidoc[tag=central-config] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/legacy/tab-widgets/configure-agent.asciidoc b/docs/legacy/tab-widgets/configure-agent.asciidoc deleted file mode 100644 index b952872c54a..00000000000 --- a/docs/legacy/tab-widgets/configure-agent.asciidoc +++ /dev/null @@ -1,21 +0,0 @@ -// tag::central-config[] -Central configuration allows you to fine-tune your agent configuration from within the {apm-app}. -Changes are automatically propagated to your APM agents, and there’s no need to redeploy. - -A select number of configuration options are supported. -See {apm-app-ref}/agent-configuration.html[Agent configuration in {kib}] -for more information and a configuration reference. -// end::central-config[] - -// tag::reg-config[] -For a full list of agent configuration options, see the relevant agent reference: - -* {apm-go-ref-v}/configuration.html[Go Agent configuration] -* {apm-ios-ref-v}/configuration.html[iOS Agent configuration] -* {apm-java-ref-v}/configuration.html[Java Agent configuration] -* {apm-dotnet-ref-v}/configuration.html[.NET Agent configuration] -* {apm-node-ref}/configuring-the-agent.html[Node.js Agent configuration] -* {apm-py-ref-v}/configuration.html[Python Agent configuration] -* {apm-ruby-ref-v}/configuration.html[Ruby Agent configuration] -* {apm-rum-ref-v}/configuration.html[RUM Agent configuration] -// end::reg-config[] diff --git a/docs/legacy/tab-widgets/configure-server-widget.asciidoc b/docs/legacy/tab-widgets/configure-server-widget.asciidoc deleted file mode 100644 index bcbfdf2d9c5..00000000000 --- a/docs/legacy/tab-widgets/configure-server-widget.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ -++++ -
-++++ - -include::configure-server.asciidoc[tag=ess] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/legacy/tab-widgets/configure-server.asciidoc b/docs/legacy/tab-widgets/configure-server.asciidoc deleted file mode 100644 index 4cc546d0601..00000000000 --- a/docs/legacy/tab-widgets/configure-server.asciidoc +++ /dev/null @@ -1,19 +0,0 @@ -// tag::ess[] - -If you're running APM Server in Elastic cloud, you can configure your own user settings right in the {es} Service Console. -Any changes are automatically appended to the `apm-server.yml` configuration file for your instance. - -Full details are available in the {cloud}/ec-manage-apm-settings.html[APM user settings] documentation. - -// end::ess[] - -// tag::self-managed[] - -If you've installed APM Server yourself, you can edit the `apm-server.yml` configuration file to make changes. -More information is available in {apm-guide-ref}/configuring-howto-apm-server.html[configuring APM Server]. - -Don't forget to also read about -{apm-guide-ref}/securing-apm-server.html[securing APM Server], and -{apm-guide-ref}/monitoring.html[monitoring APM Server]. - -// end::self-managed[] diff --git a/docs/legacy/tab-widgets/install-agents-widget.asciidoc b/docs/legacy/tab-widgets/install-agents-widget.asciidoc deleted file mode 100644 index 9a165850c0e..00000000000 --- a/docs/legacy/tab-widgets/install-agents-widget.asciidoc +++ /dev/null @@ -1,168 +0,0 @@ -// The Java agent defaults to visible. -// Change with `aria-selected="false"` and `hidden=""` -++++ -
-++++ - -include::install-agents.asciidoc[tag=java] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/legacy/tab-widgets/install-agents.asciidoc b/docs/legacy/tab-widgets/install-agents.asciidoc deleted file mode 100644 index 5c74b7124b2..00000000000 --- a/docs/legacy/tab-widgets/install-agents.asciidoc +++ /dev/null @@ -1,578 +0,0 @@ -// tag::go[] -*Install the agent* - -Install the {apm-agent} packages for Go. - -[source,go] ----- -go get go.elastic.co/apm ----- - -*Configure the agent* - -Agents are libraries that run inside of your application process. -APM services are created programmatically based on the executable file name, or the `ELASTIC_APM_SERVICE_NAME` environment variable. - -[source,go] ----- -# Initialize using environment variables: - -# Set the service name. Allowed characters: a-z, A-Z, 0-9, -, _, and space. -# If ELASTIC_APM_SERVICE_NAME is not specified, the executable name will be used. -export ELASTIC_APM_SERVICE_NAME= - -# Set custom APM Server URL. Default: http://localhost:8200. -export ELASTIC_APM_SERVER_URL= - -# Use if APM Server requires a token -export ELASTIC_APM_SECRET_TOKEN= ----- - -*Instrument your application* - -Instrument your Go application by using one of the provided instrumentation modules or by using the tracer API directly. - -[source,go] ----- -import ( - "net/http" - - "go.elastic.co/apm/module/apmhttp" -) - -func main() { - mux := http.NewServeMux() - ... - http.ListenAndServe(":8080", apmhttp.Wrap(mux)) -} ----- - -*Learn more in the agent reference* - -* {apm-go-ref-v}/supported-tech.html[Supported technologies] -* {apm-go-ref-v}/configuration.html[Advanced configuration] -* {apm-go-ref-v}/getting-started.html[Detailed guide to instrumenting Go source code] -// end::go[] - -// *************************************************** -// *************************************************** - -// tag::ios[] - -experimental::[] - -*Add the agent dependency to your project* - -Add the Elastic APM iOS Agent as a -https://developer.apple.com/documentation/swift_packages/adding_package_dependencies_to_your_app[package dependency] -to your Xcode project or your `Package.swift`: - -[source,swift,linenums,highlight=2;10] ----- -Package( - dependencies:[ - .package(name: "iOSAgent", url: "git@github.com:elastic/apm-agent-ios.git", .branch("main")), - ], - targets:[ - .target( - name: "MyApp", - dependencies: [ - .product(name: "iOSAgent", package: "iOSAgent") - ] - ), -]) ----- - -*Initialize the agent* - -If you're using `SwiftUI` to build your app, add the following to `App.swift`: - -[source,swift,linenums,swift,highlight=2;7..12] ----- -import SwiftUI -import iOSAgent - -@main -struct MyApp: App { - init() { - var config = AgentConfiguration() - config.collectorAddress = "127.0.0.1" <1> - config.collectorPort = 8200 <2> - config.collectorTLS = false <3> - config.secretToken = "" <4> - Agent.start(with: config) - } - var body: some Scene { - WindowGroup { - ContentView() - } - } -} ----- -<1> APM Server URL or IP address -<2> APM Server port number -<3> Enable TLS for Open telemetry exporters -<4> Set secret token for APM server connection - -If you're not using `SwiftUI`, you can add the same thing to your `AppDelegate` file: - -`AppDelegate.swift` -[source,swift,linenums,highlight=2;9..14] ----- -import UIKit -import iOSAgent -@main -class AppDelegate: UIResponder, UIApplicationDelegate { - func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) 
-> Bool { - var config = AgentConfiguration() - config.collectorAddress = "127.0.0.1" <1> - config.collectorPort = 8200 <2> - config.collectorTLS = false <3> - config.secretToken = "" <4> - Agent.start(with: config) - return true - } -} ----- -<1> APM Server URL or IP address -<2> APM Server port number -<3> Enable TLS for Open telemetry exporters -<4> Set secret token for APM server connection - -// end::ios[] - -// *************************************************** -// *************************************************** - -// tag::java[] - -*Download the {apm-agent}* - -Download the agent jar from http://search.maven.org/#search%7Cga%7C1%7Ca%3Aelastic-apm-agent[Maven Central]. -Do not add the agent as a dependency to your application. - -*Start your application with the `javaagent` flag* - -Add the `-javaagent` flag and configure the agent with system properties. - -* Set required service name -* Set custom APM Server URL (default: http://localhost:8200) -* Set the base package of your application - -[source,java] ----- -java -javaagent:/path/to/elastic-apm-agent-.jar \ - -Delastic.apm.service_name=my-application \ - -Delastic.apm.server_urls=http://localhost:8200 \ - -Delastic.apm.secret_token= \ - -Delastic.apm.application_packages=org.example \ - -jar my-application.jar ----- - -*Learn more in the agent reference* - -* {apm-java-ref-v}/supported-technologies-details.html[Supported technologies] -* {apm-java-ref-v}/configuration.html[Advanced configuration] -// end::java[] - -// *************************************************** -// *************************************************** - -// tag::net[] -*Download the {apm-agent}* - -Add the agent packages from https://www.nuget.org/packages?q=Elastic.apm[NuGet] to your .NET application. -There are multiple NuGet packages available for different use cases. - -For an ASP.NET Core application with Entity Framework Core, download the -https://www.nuget.org/packages/Elastic.Apm.NetCoreAll[Elastic.Apm.NetCoreAll] package. -This package will automatically add every agent component to your application. - -To minimize the number of dependencies, you can use the -https://www.nuget.org/packages/Elastic.Apm.AspNetCore[Elastic.Apm.AspNetCore] package for just ASP.NET Core monitoring, or the -https://www.nuget.org/packages/Elastic.Apm.EntityFrameworkCore[Elastic.Apm.EfCore] package for just Entity Framework Core monitoring. - -If you only want to use the public agent API for manual instrumentation, use the -https://www.nuget.org/packages/Elastic.Apm[Elastic.Apm] package. - -*Add the agent to the application* - -For an ASP.NET Core application with the `Elastic.Apm.NetCoreAll` package, -call the `UseAllElasticApm` method in the `Configure` method within the `Startup.cs` file: - -[source,dotnet] ----- -public class Startup -{ - public void Configure(IApplicationBuilder app, IHostingEnvironment env) - { - app.UseAllElasticApm(Configuration); - //…rest of the method - } - //…rest of the class -} ----- - -Passing an `IConfiguration` instance is optional and by doing so, -the agent will read config settings through this `IConfiguration` instance, for example, -from the `appsettings.json` file: - -[source,json] ----- -{ - "ElasticApm": { - "SecretToken": "", - "ServerUrls": "http://localhost:8200", //Set custom APM Server URL (default: http://localhost:8200) - "ServiceName" : "MyApp", //allowed characters: a-z, A-Z, 0-9, -, _, and space. 
Default is the entry assembly of the application - } -} ----- - -If you don’t pass an `IConfiguration` instance to the agent, for example, in a non-ASP.NET Core application, -you can configure the agent with environment variables. -See the agent reference for more information. - -*Learn more in the agent reference* - -* {apm-dotnet-ref-v}/supported-technologies.html[Supported technologies] -* {apm-dotnet-ref-v}/configuration.html[Advanced configuration] -// end::net[] - -// *************************************************** -// *************************************************** - -// tag::node[] -*Install the {apm-agent}* - -Install the {apm-agent} for Node.js as a dependency to your application. - -[source,js] ----- -npm install elastic-apm-node --save ----- - -*Configure the agent* - -Agents are libraries that run inside of your application process. APM services are created programmatically based on the `serviceName`. -This agent supports a variety of frameworks but can also be used with your custom stack. - -[source,js] ----- -// Add this to the VERY top of the first file loaded in your app -var apm = require('elastic-apm-node').start({ - // Override service name from package.json - // Allowed characters: a-z, A-Z, 0-9, -, _, and space - serviceName: '', - - // Use if APM Server requires a token - secretToken: '', - - // Set custom APM Server URL (default: http://localhost:8200) - serverUrl: '' -}) ----- - -*Learn more in the agent reference* - -* {apm-node-ref-v}/supported-technologies.html[Supported technologies] -* {apm-node-ref-v}/advanced-setup.html[Babel/ES Modules] -* {apm-node-ref-v}/configuring-the-agent.html[Advanced configuration] - -// end::node[] - -// *************************************************** -// *************************************************** - -// tag::php[] - -*Install the agent* - -Install the PHP agent using one of the https://github.com/elastic/apm-agent-php/releases[published packages]. - -To use the RPM Package (RHEL/CentOS and Fedora): - -[source,php] ----- -rpm -ivh .rpm ----- - -To use the DEB package (Debian and Ubuntu): - -[source,php] ----- -dpkg -i .deb ----- - -To use the APK package (Alpine): - -[source,php] ----- -apk add --allow-untrusted .apk ----- - -If you can’t find your distribution, -you can install the agent by {apm-php-ref-v}/setup.html[building it from the source]. - -*Configure the agent* - -Configure your agent inside of the `php.ini` file: - -[source,ini] ----- -elastic_apm.server_url=http://localhost:8200 -elastic_apm.secret_token=SECRET_TOKEN -elastic_apm.service_name="My-service" ----- - -*Learn more in the agent reference* - -* {apm-php-ref-v}/supported-technologies.html[Supported technologies] -* {apm-php-ref-v}/configuration.html[Configuration] - -// end::php[] - -// *************************************************** -// *************************************************** - -// tag::python[] -Django:: -+ -*Install the {apm-agent}* -+ -Install the {apm-agent} for Python as a dependency. -+ -[source,python] ----- -$ pip install elastic-apm ----- -+ -*Configure the agent* -+ -Agents are libraries that run inside of your application process. -APM services are created programmatically based on the `SERVICE_NAME`. -+ -[source,python] ----- -# Add the agent to the installed apps -INSTALLED_APPS = ( - 'elasticapm.contrib.django', - # ... -) - -ELASTIC_APM = { - # Set required service name. 
Allowed characters: - # a-z, A-Z, 0-9, -, _, and space - 'SERVICE_NAME': '', - - # Use if APM Server requires a token - 'SECRET_TOKEN': '', - - # Set custom APM Server URL (default: http://localhost:8200) - 'SERVER_URL': '', -} - -# To send performance metrics, add our tracing middleware: -MIDDLEWARE = ( - 'elasticapm.contrib.django.middleware.TracingMiddleware', - #... -) ----- - -Flask:: -+ -*Install the {apm-agent}* -+ -Install the {apm-agent} for Python as a dependency. -+ -[source,python] ----- -$ pip install elastic-apm[flask] ----- -+ -*Configure the agent* -+ -Agents are libraries that run inside of your application process. -APM services are created programmatically based on the `SERVICE_NAME`. -+ -[source,python] ----- -# initialize using environment variables -from elasticapm.contrib.flask import ElasticAPM -app = Flask(__name__) -apm = ElasticAPM(app) - -# or configure to use ELASTIC_APM in your application settings -from elasticapm.contrib.flask import ElasticAPM -app.config['ELASTIC_APM'] = { - # Set required service name. Allowed characters: - # a-z, A-Z, 0-9, -, _, and space - 'SERVICE_NAME': '', - - # Use if APM Server requires a token - 'SECRET_TOKEN': '', - - # Set custom APM Server URL (default: http://localhost:8200) - 'SERVER_URL': '', -} - -apm = ElasticAPM(app) ----- - -*Learn more in the agent reference* - -* {apm-py-ref-v}/supported-technologies.html[Supported technologies] -* {apm-py-ref-v}/configuration.html[Advanced configuration] - -// end::python[] - -// *************************************************** -// *************************************************** - -// tag::ruby[] -*Install the {apm-agent}* - -Add the agent to your Gemfile. - -[source,ruby] ----- -gem 'elastic-apm' ----- -*Configure the agent* - -Ruby on Rails:: -+ -APM is automatically started when your app boots. -Configure the agent by creating the config file `config/elastic_apm.yml`: -+ -[source,ruby] ----- -# config/elastic_apm.yml: - -# Set service name - allowed characters: a-z, A-Z, 0-9, -, _ and space -# Defaults to the name of your Rails app -service_name: 'my-service' - -# Use if APM Server requires a token -secret_token: '' - -# Set custom APM Server URL (default: http://localhost:8200) -server_url: 'http://localhost:8200' ----- - -Rack:: -+ -For Rack or a compatible framework, like Sinatra, include the middleware in your app and start the agent. -+ -[source,ruby] ----- -# config.ru - require 'sinatra/base' - - class MySinatraApp < Sinatra::Base - use ElasticAPM::Middleware - - # ... - end - - ElasticAPM.start( - app: MySinatraApp, # required - config_file: '' # optional, defaults to config/elastic_apm.yml - ) - - run MySinatraApp - - at_exit { ElasticAPM.stop } ----- -+ -*Create a config file* -+ -Create a config file config/elastic_apm.yml: -+ -[source,ruby] ----- -# config/elastic_apm.yml: - -# Set service name - allowed characters: a-z, A-Z, 0-9, -, _ and space -# Defaults to the name of your Rack app's class. 
-service_name: 'my-service'
-
-# Use if APM Server requires a token
-secret_token: ''
-
-# Set custom APM Server URL (default: http://localhost:8200)
-server_url: 'http://localhost:8200'
----
-
-*Learn more in the agent reference*
-
-* {apm-ruby-ref-v}/supported-technologies.html[Supported technologies]
-* {apm-ruby-ref-v}/configuration.html[Advanced configuration]
-
-// end::ruby[]
-
-// ***************************************************
-// ***************************************************
-
-// tag::rum[]
-*Enable Real User Monitoring support in APM Server*
-
-APM Server disables RUM support by default.
-To enable it, set `apm-server.rum.enabled: true` in your APM Server configuration file.
-
-*Set up the agent*
-
-Once RUM support is enabled, you can set up the RUM agent.
-There are two ways to do this: add the agent as a dependency,
-or set it up with `<script>` tags.
-
-*Learn more in the agent reference*
-
-* {apm-rum-ref-v}/supported-technologies.html[Supported technologies]
-* {apm-rum-ref-v}/configuration.html[Advanced configuration]
-
-// end::rum[]
diff --git a/docs/legacy/tab-widgets/jaeger-sampling-widget.asciidoc b/docs/legacy/tab-widgets/jaeger-sampling-widget.asciidoc
deleted file mode 100644
index cf41515e53d..00000000000
--- a/docs/legacy/tab-widgets/jaeger-sampling-widget.asciidoc
+++ /dev/null
@@ -1,40 +0,0 @@
-++++
-
-++++ - -include::jaeger-sampling.asciidoc[tag=ess] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/legacy/tab-widgets/jaeger-sampling.asciidoc b/docs/legacy/tab-widgets/jaeger-sampling.asciidoc deleted file mode 100644 index d7133193bb8..00000000000 --- a/docs/legacy/tab-widgets/jaeger-sampling.asciidoc +++ /dev/null @@ -1,14 +0,0 @@ -// tag::ess[] -Visit the {kibana-ref}/agent-configuration.html[Agent configuration] page in the {apm-app} to add a new sampling rate. - -// end::ess[] - -// tag::self-managed[] -APM Agent central configuration requires the <> to be configured. -To enable the {kib} endpoint, set <> to `true`, -and point <> at the {kib} host that APM Server will communicate with. - -Once configured, -visit the {kibana-ref}/agent-configuration.html[Agent configuration] page in the {apm-app} to add a new sampling rate. - -// end::self-managed[] diff --git a/docs/legacy/tab-widgets/jaeger-widget.asciidoc b/docs/legacy/tab-widgets/jaeger-widget.asciidoc deleted file mode 100644 index 5902738ca38..00000000000 --- a/docs/legacy/tab-widgets/jaeger-widget.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ -++++ -
-++++ - -include::jaeger.asciidoc[tag=ess] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/legacy/tab-widgets/jaeger.asciidoc b/docs/legacy/tab-widgets/jaeger.asciidoc deleted file mode 100644 index ad2cabe34cb..00000000000 --- a/docs/legacy/tab-widgets/jaeger.asciidoc +++ /dev/null @@ -1,58 +0,0 @@ -// tag::ess[] -. Log into {ess-console}[{ecloud}] and select your deployment. -Copy your APM endpoint and APM Server secret token; you'll need these in the next step. - -. Configure APM Server as a collector for your Jaeger agents. -+ -As of this writing, the Jaeger agent binary offers the following CLI flags, -which can be used to enable TLS, output to {ecloud}, and set the APM Server secret token: -+ -[source,terminal] ----- ---reporter.grpc.tls.enabled=true ---reporter.grpc.host-port= ---agent.tags="elastic-apm-auth=Bearer " ----- - -TIP: For the equivalent environment variables, -change all letters to upper-case and replace punctuation with underscores (`_`). -See the https://www.jaegertracing.io/docs/1.22/cli/[Jaeger CLI flags documentation] for more information. - -// end::ess[] - -// tag::self-managed[] -. Configure APM Server as a collector for your Jaeger agents. -+ -As of this writing, the Jaeger agent binary offers the `--reporter.grpc.host-port` CLI flag. -Use this to define the <> that APM Server is listening on: -+ -[source,terminal] ----- ---reporter.grpc.host-port= ----- - -. (Optional) Enable encryption -+ -When <> is enabled in APM Server, Jaeger agents must also enable TLS communication: -+ -[source,terminal] ----- ---reporter.grpc.tls.enabled=true ----- - -. (Optional) Enable token-based authorization -+ -A <> or <> can be used to ensure only authorized -Jaeger agents can send data to the APM Server. -When enabled, use an agent level tag to authorize Jaeger agent communication with the APM Server: -+ -[source,terminal] ----- ---agent.tags="elastic-apm-auth=Bearer " ----- - -TIP: For the equivalent environment variables, -change all letters to upper-case and replace punctuation with underscores (`_`). -See the https://www.jaegertracing.io/docs/1.22/cli/[Jaeger CLI flags documentation] for more information. - -// end::self-managed[] diff --git a/docs/legacy/tab-widgets/kibana-endpoint-widget.asciidoc b/docs/legacy/tab-widgets/kibana-endpoint-widget.asciidoc deleted file mode 100644 index 4f9231c32bb..00000000000 --- a/docs/legacy/tab-widgets/kibana-endpoint-widget.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ -++++ -
-++++ - -include::kibana-endpoint.asciidoc[tag=ess] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/legacy/tab-widgets/kibana-endpoint.asciidoc b/docs/legacy/tab-widgets/kibana-endpoint.asciidoc deleted file mode 100644 index 0d149221233..00000000000 --- a/docs/legacy/tab-widgets/kibana-endpoint.asciidoc +++ /dev/null @@ -1,22 +0,0 @@ -// tag::ess[] - -The {kib} endpoint is automatically enabled and configured in {ecloud}. - -// end::ess[] - -// tag::self-managed[] - -Enable and configure the {kib} endpoint in the `apm-server.kibana` section of the `apm-server.yml` -config file. A basic configuration might look like this: - -[source,yml] ----- -apm-server.kibana.enabled: true -apm-server.kibana.host: "http://localhost:5601" -apm-server.kibana.username: "user" -apm-server.kibana.password: "pass" ----- - -See <> for a full list of configuration options. - -// end::self-managed[] diff --git a/docs/legacy/tab-widgets/spin-up-stack-widget.asciidoc b/docs/legacy/tab-widgets/spin-up-stack-widget.asciidoc deleted file mode 100644 index 6e913212257..00000000000 --- a/docs/legacy/tab-widgets/spin-up-stack-widget.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ -++++ -
-++++ - -include::spin-up-stack.asciidoc[tag=ess] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/legacy/tab-widgets/spin-up-stack.asciidoc b/docs/legacy/tab-widgets/spin-up-stack.asciidoc deleted file mode 100644 index 2c65a88d027..00000000000 --- a/docs/legacy/tab-widgets/spin-up-stack.asciidoc +++ /dev/null @@ -1,51 +0,0 @@ -// tag::ess[] -There's no faster way to get started with Elastic APM than with our hosted {ess} on {ecloud}. -{ess} is available on AWS, GCP, and Azure, -and automatically configures APM Server to work with {es} and {kib}: - -. {ess-trial}[Get a free trial]. - -. Log into {ess-console}[{ecloud}]. - -. Click *Create deployment*. - -. Select *Elastic {observability}* and give your deployment a name. - -. Click *Create deployment* and copy the password for the `elastic` user. - -. Select *APM* from the menu on the left and make note of the APM endpoint and APM Server secret token. -You'll need these in step two. - -// end::ess[] - -// tag::self-managed[] -To install and run {es} and {kib}, see {stack-ref}/installing-elastic-stack.html[Installing the {stack}]. - -Next, install, set up, and run APM Server: - -. {apm-server-ref-v}/installing.html[Install APM Server]. -. {apm-server-ref-v}/apm-server-configuration.html[Set up APM Server] -. {apm-server-ref-v}/setting-up-and-running.html[Start APM Server]. - -Use the config file if you need to change the default configuration that APM Server uses to connect to {es}, -or if you need to specify credentials: - -* {apm-server-ref-v}/configuring-howto-apm-server.html[Configuring APM Server] -** {apm-server-ref-v}/configuration-process.html[General configuration options] -** {apm-server-ref-v}/configuring-output.html[Configure the {es} output] - -[[secure-api-access]] -If you change the listen address from `localhost` to something that is accessible from outside of the machine, -we recommend setting up firewall rules to ensure that only your own systems can access the API. -Alternatively, -you can use a {apm-server-ref-v}/securing-apm-server.html[TLS and a secret token or API key]. - -If you have APM Server running on the same host as your service, -you can configure it to listen on a Unix domain socket. - -[[more-information]] -TIP: For detailed instructions on how to install and secure APM Server in your server environment, -including details on how to run APM Server in a highly available environment, -please see the full {apm-server-ref-v}/index.html[APM Server documentation]. - -// end::self-managed[] diff --git a/docs/legacy/transaction-api.asciidoc b/docs/legacy/transaction-api.asciidoc deleted file mode 100644 index 95691741677..00000000000 --- a/docs/legacy/transaction-api.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -[[transaction-api]] -=== Transactions - -Transactions are events corresponding to an incoming request or similar task occurring in a monitored service. - -[[transaction-schema]] -[float] -==== Transaction Schema - -APM Server uses JSON Schema to validate requests. 
The specification for transactions is defined on -{github_repo_link}/docs/spec/v2/transaction.json[GitHub] and included below: - -[source,json] ----- -include::../spec/v2/transaction.json[] ----- diff --git a/docs/legacy/transaction-indices.asciidoc b/docs/legacy/transaction-indices.asciidoc deleted file mode 100644 index e5db23db4e7..00000000000 --- a/docs/legacy/transaction-indices.asciidoc +++ /dev/null @@ -1,13 +0,0 @@ -[[transaction-indices]] -== Example transaction documents - -++++ -Transaction documents -++++ - -This example shows what transaction documents can look like when indexed in {es}: - -[source,json] ----- -include::../data/elasticsearch/generated/transactions.json[] ----- diff --git a/docs/legacy/transaction-metrics.asciidoc b/docs/legacy/transaction-metrics.asciidoc deleted file mode 100644 index fac93112550..00000000000 --- a/docs/legacy/transaction-metrics.asciidoc +++ /dev/null @@ -1,55 +0,0 @@ -[x-pack] -[[transaction-metrics]] -== Configure transaction metrics - -++++ -Transaction metrics -++++ - -IMPORTANT: {deprecation-notice-config} - -{beatname_uc} produces transaction histogram metrics that are used to power the {apm-app}. -Shifting this responsibility from {apm-app} to APM Server removes the need to store unsampled transactions, reducing storage costs. - -Example config file: - -["source","yaml"] ----- -apm-server: - aggregation: - transactions: - interval: 1m ----- - -[float] -[[configuration-aggregation]] -=== Configuration options: `apm-server.aggregation.transactions.*` - -[[transactions-interval]] -[float] -==== `interval` - -Controls the frequency of metrics publication. - -Default: `1m`. - -[[transactions-max_groups]] -[float] -==== `max_groups` - -Maximum number of transaction groups to keep track of. -Once exceeded, APM Server devolves into recording a metrics document for each transaction that is not in one -of the transaction groups being tracked. - -Default: `10000`. - -[[transactions-hdrhistogram_significant_figures]] -[float] -==== `hdrhistogram_significant_figures` - -The fixed, worst-case percentage error (specified as a number of significant digits) -to maintain for recorded metrics. -Supported values are `1` through `5`. -See {ref}/search-aggregations-metrics-percentile-aggregation.html#_hdr_histogram_2[HDR histogram] for more information. - -Default: `2`. 
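Putting these options together, here's a sketch of a config that tunes all three `apm-server.aggregation.transactions.*` settings at once (the values are illustrative, not recommendations):

["source","yaml"]
----
apm-server:
  aggregation:
    transactions:
      interval: 1m
      max_groups: 5000
      hdrhistogram_significant_figures: 3
----

Roughly speaking, a shorter `interval` publishes metrics more often at the cost of more documents, and more `hdrhistogram_significant_figures` trade memory for percentile accuracy.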
diff --git a/docs/legacy/troubleshooting.asciidoc b/docs/legacy/troubleshooting.asciidoc
deleted file mode 100644
index 0529b803b84..00000000000
--- a/docs/legacy/troubleshooting.asciidoc
+++ /dev/null
@@ -1,45 +0,0 @@
-[[troubleshooting]]
-= Troubleshoot
-
-IMPORTANT: {deprecation-notice-data}
-
-If you have issues installing or running APM Server,
-read the following tips:
-
-* <>
-* <>
-* <>
-
-Other sections in the documentation may also be helpful:
-
-* <>
-* <>
-* <>
-* <>
-* {apm-overview-ref-v}/agent-server-compatibility.html[Agent/Server compatibility matrix]
-
-If your issue is potentially related to other components of the APM ecosystem,
-don't forget to check the relevant troubleshooting guides:
-
-* {kibana-ref}/troubleshooting.html[{apm-app} troubleshooting]
-* {apm-dotnet-ref-v}/troubleshooting.html[.NET agent troubleshooting]
-* {apm-go-ref-v}/troubleshooting.html[Go agent troubleshooting]
-* {apm-ios-ref-v}/troubleshooting.html[iOS agent troubleshooting]
-* {apm-java-ref-v}/trouble-shooting.html[Java agent troubleshooting]
-* {apm-node-ref-v}/troubleshooting.html[Node.js agent troubleshooting]
-* {apm-php-ref-v}/troubleshooting.html[PHP agent troubleshooting]
-* {apm-py-ref-v}/troubleshooting.html[Python agent troubleshooting]
-* {apm-ruby-ref-v}/debugging.html[Ruby agent troubleshooting]
-* {apm-rum-ref-v}/troubleshooting.html[RUM troubleshooting]
-
-include::common-problems.asciidoc[]
-
-[[enable-apm-server-debugging]]
-== Debug
-
-include::{libbeat-dir}/debugging.asciidoc[]
-
-[[getting-help]]
-== Get help
-
-include::{libbeat-dir}/getting-help.asciidoc[]
\ No newline at end of file
diff --git a/docs/log-correlation.asciidoc b/docs/log-correlation.asciidoc
deleted file mode 100644
index 3680673da48..00000000000
--- a/docs/log-correlation.asciidoc
+++ /dev/null
@@ -1,176 +0,0 @@
-[[log-correlation]]
-=== Logging integration
-
-Many applications use logging frameworks to help record, format, and append an application's logs.
-Elastic APM now offers a way to make your application logs even more useful,
-by integrating with the most popular logging frameworks in their respective languages.
-This means you can easily inject trace information into your logs,
-allowing you to explore logs in the {observability-guide}/monitor-logs.html[{logs-app}],
-then jump straight into the corresponding APM traces -- all while preserving the trace context.
-
-To get started:
-
-. Enable log correlation
-. Add APM identifiers to your logs
-. Ingest your logs into {es}
-
-[float]
-==== Enable log correlation
-
-Some agents require you to first enable log correlation in the agent.
-This is done with a configuration variable, and is different for each agent.
-See the relevant https://www.elastic.co/guide/en/apm/agent/index.html[agent documentation] for further information.
-
-// Not enough of the Agent docs are ready yet.
-// Commenting these out and will replace when ready.
-// * *Java*: {apm-java-ref-v}/config-logging.html#config-enable-log-correlation[`enable_log_correlation`]
-// * *.NET*: {apm-dotnet-ref-v}/[]
-// * *Node.js*: {apm-node-ref-v}/[]
-// * *Python*: {apm-py-ref-v}/[]
-// * *Ruby*: {apm-ruby-ref-v}/[]
-// * *Rum*: {apm-rum-ref-v}/[]
-
-[float]
-==== Add APM identifiers to your logs
-
-Once log correlation is enabled,
-you must ensure your logs contain APM identifiers.
-In some supported frameworks, this is already done for you.
-In other scenarios, like for unstructured logs,
-you'll need to add APM identifiers to your logs in an easy-to-parse manner.
-
-The identifiers we're interested in are: {ecs-ref}/ecs-tracing.html[`trace.id`] and
-{ecs-ref}/ecs-tracing.html[`transaction.id`]. Certain agents also support the `span.id` field.
-
-The process for adding these fields differs based on the agent you're using, the logging framework,
-and the type and structure of your logs.
-
-See the relevant https://www.elastic.co/guide/en/apm/agent/index.html[agent documentation] to learn more.
-
-// Not enough of the Agent docs have been backported yet.
-// Commenting these out and will replace when ready.
-// * *Go*: {apm-go-ref-v}/supported-tech.html#supported-tech-logging[Logging frameworks]
-// * *Java*: {apm-java-ref-v}/[] NOT merged yet https://github.com/elastic/apm-agent-java/pull/854
-// * *.NET*: {apm-dotnet-ref-v}/[]
-// * *Node.js*: {apm-node-ref-v}/[]
-// * *Python*: {apm-py-ref-v}/[]
-// * *Ruby*: {apm-ruby-ref-v}/[] Not backported yet https://www.elastic.co/guide/en/apm/agent/ruby/master/log-correlation.html
-// * *Rum*: {apm-rum-ref-v}/[]
-
-[float]
-==== Ingest your logs into {es}
-
-Once your logs contain the appropriate identifiers (fields), you need to ingest them into {es}.
-Luckily, we've got a tool for that -- {filebeat} is Elastic's log shipper.
-The {filebeat-ref}/filebeat-installation-configuration.html[{filebeat} quick start]
-guide will walk you through the setup process.
-
-Because logging frameworks and formats vary greatly between different programming languages,
-there is no one-size-fits-all approach for ingesting your logs into {es}.
-The following tips should get you going in the right direction:
-
-**Download {filebeat}**
-
-There are many ways to download and get started with {filebeat}.
-Read the {filebeat-ref}/filebeat-installation-configuration.html[{filebeat} quick start] guide to determine which is best for you.
-
-**Configure {filebeat}**
-
-Modify the {filebeat-ref}/configuring-howto-filebeat.html[`filebeat.yml`] configuration file to your needs.
-Here are some recommendations:
-
-* Set `filebeat.inputs` to point to the source of your logs
-* Point {filebeat} to the same {stack} that is receiving your APM data
-** If you're using {ecloud}, set `cloud.id` and `cloud.auth`.
-** If you're using a manual setup, use `output.elasticsearch.hosts`.
-
-[source,yml]
-----
-filebeat.inputs:
-- type: log <1>
-  paths: <2>
-    - /var/log/*.log
-cloud.id: "staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRjZWMNjN2Q3YTllOTYyNTc0Mw==" <3>
-cloud.auth: "elastic:YOUR_PASSWORD" <4>
-----
-<1> Configures the `log` input
-<2> Path(s) that must be crawled to fetch the log lines
-<3> Used to resolve the {es} and {kib} URLs for {ecloud}
-<4> Authorization token for {ecloud}
-
-**JSON logs**
-
-For JSON logs you can use the {filebeat-ref}/filebeat-input-log.html[`log` input] to read lines from log files.
-Here's what a sample configuration might look like:
-
-[source,yml]
-----
-filebeat.inputs:
-- type: log
-  json.keys_under_root: true <1>
-  json.add_error_key: true <2>
-  json.message_key: message <3>
-----
-<1> `true` copies JSON keys to the top level in the output document
-<2> Tells {filebeat} to add an `error.message` and `error.type: json` key in case of JSON unmarshalling errors
-<3> Specifies the JSON key on which to apply line filtering and multiline settings
-
-**Parsing unstructured logs**
-
-Consider the following log that is decorated with the `transaction.id` and `trace.id` fields:
-
-[source,log]
-----
-2019-09-18 21:29:49,525 - django.server - ERROR - "GET / HTTP/1.1" 500 27 | elasticapm transaction.id=fcfbbe447b9b6b5a trace.id=f965f4cc5b59bdc62ae349004eece70c span.id=None
-----
-
-All that's needed now is an {filebeat-ref}/configuring-ingest-node.html[ingest node processor] to preprocess your logs and
-extract these structured fields before they are indexed in {es}.
-To do this, you'd need to create a pipeline that uses {es}'s {ref}/grok-processor.html[Grok Processor].
-Here's an example:
-
-["source","console"]
-----
-PUT _ingest/pipeline/log-correlation
-{
-  "description": "Parses the log correlation IDs out of the raw plain-text log",
-  "processors": [
-    {
-      "grok": {
-        "field": "message", <1>
-        "patterns": ["%{GREEDYDATA:message} | elasticapm transaction.id=%{DATA:transaction.id} trace.id=%{DATA:trace.id} span.id=%{DATA:span.id}"] <2>
-      }
-    }
-  ]
-}
-----
-<1> The field to use for grok expression parsing
-<2> An ordered list of grok expressions to match and extract named captures with:
-`%{DATA:transaction.id}` captures the value of `transaction.id`,
-`%{DATA:trace.id}` captures the value of `trace.id`, and
-`%{DATA:span.id}` captures the value of `span.id`.
-
-NOTE: Depending on how you've added APM data to your logs,
-you may need to tweak this grok pattern to make it work for your setup.
-In addition, it's possible to extract more structure out of your logs.
-Make sure to follow the {ecs-ref}/ecs-field-reference.html[Elastic Common Schema]
-when defining which fields you are storing in {es}.
-
-Then, configure {filebeat} to use the processor in `filebeat.yml`:
-
-[source,yml]
-----
-output.elasticsearch:
-  pipeline: "log-correlation"
-----
-
-If your logs contain messages that span multiple lines of text (common in Java stack traces),
-you'll also need to configure {filebeat-ref}/multiline-examples.html[multiline settings].
-
-The following example shows how to configure {filebeat} to handle a multiline message where the first line of the message begins with a bracket (`[`).
-
-[source,yml]
-----
-multiline.pattern: '^\['
-multiline.negate: true
-multiline.match: after
-----
diff --git a/docs/manage-storage.asciidoc b/docs/manage-storage.asciidoc
deleted file mode 100644
index 90e97eb0a65..00000000000
--- a/docs/manage-storage.asciidoc
+++ /dev/null
@@ -1,194 +0,0 @@
-[[manage-storage]]
-== Manage storage
-
-{agent} uses <> to store time series data across multiple indices.
-Each data stream ships with a customizable <> that automates data retention as your indices grow and age.
-
-The <> attempts to define a "typical" storage reference for Elastic APM,
-and there are additional settings you can tweak to <>,
-or to <>.
-
-include::./data-streams.asciidoc[]
-
-include::./ilm-how-to.asciidoc[]
-
-[[storage-guide]]
-=== Storage and sizing guide
-
-APM processing and storage costs are largely dominated by transactions, spans, and stack frames.
-
-* <> describe an event captured by an Elastic {apm-agent} instrumenting a service.
-They are the highest level of work being measured within a service.
-* <> belong to transactions. They measure from the start to the end of an activity,
-and contain information about a specific code path that has been executed.
-* *Stack frames* belong to spans. Stack frames represent a function call on the call stack,
-and include attributes like function name, file name and path, line number, etc.
-Stack frames can heavily influence the size of a span.
-
-[float]
-==== Typical transactions
-
-Due to the high variability of APM data, it's difficult to classify a transaction as typical.
-Regardless, this guide will attempt to classify transactions as _Small_, _Medium_, or _Large_,
-and make recommendations based on those classifications.
-
-The size of a transaction depends on the language, agent settings, and what services the agent instruments.
-For instance, an agent auto-instrumenting a service with a popular tech stack
-(web framework, database, caching library, etc.) is more likely to generate bigger transactions.
-
-In addition, all agents support manual instrumentation.
-How much or how little you use these APIs will also impact what a typical transaction looks like.
-
-If your sampling rate is very small, transactions will be the dominant storage cost.
-
-Here's a speculative reference:
-
-[options="header"]
-|=======================================================================
-|Transaction size |Number of spans |Number of stack frames
-|_Small_ |5-10 |5-10
-|_Medium_ |15-20 |15-20
-|_Large_ |30-40 |30-40
-|=======================================================================
-
-There will always be transaction outliers with hundreds of spans or stack frames, but those are very rare.
-Small transactions are the most common.
-
-[float]
-==== Typical storage
-
-Consider the following typical storage reference.
-These numbers do not account for {es} compression.
-
-* 1 unsampled transaction is **~1 KB**
-* 1 span with 10 stack frames is **~4 KB**
-* 1 span with 50 stack frames is **~20 KB**
-* 1 transaction with 10 spans, each with 10 stack frames, is **~50 KB**
-* 1 transaction with 25 spans, each with 25 stack frames, is **250-300 KB**
-* 100 transactions with 10 spans, each with 10 stack frames, sampled at 90%, is **600 KB**
-
-APM data compresses quite well, so the storage cost in {es} will be considerably less:
-
-* Indexing 100 unsampled transactions per second for 1 hour results in 360,000 documents. These documents use around **50 MB** of disk space.
-* Indexing 10 transactions per second for 1 hour, each transaction with 10 spans, each span with 10 stack frames, results in 396,000 documents. These documents use around **200 MB** of disk space.
-* Indexing 25 transactions per second for 1 hour, each transaction with 25 spans, each span with 25 stack frames, results in 2,340,000 documents. These documents use around **1.2 GB** of disk space.
-
-NOTE: These examples indexed the same data over and over with minimal variation. Because of that, the observed compression ratios of 80-90% are somewhat optimistic.
-
-[[reduce-apm-storage]]
-=== Reduce storage
-
-The amount of storage for APM data depends on several factors:
-the number of services you are instrumenting, how much traffic the services see, agent and server settings,
-and the length of time you store your data.
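Before tuning any of these factors, it can help to see where storage is actually going. As a rough sketch (index and data stream names vary by version and setup), the {es} cat indices API lists APM indices sorted by disk use:

["source","console"]
----
GET _cat/indices/*apm*?v=true&h=index,docs.count,store.size&s=store.size:desc
----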
-
-[float]
-==== Reduce the sample rate
-
-The transaction sample rate directly influences the number of documents (more precisely, spans) to be indexed.
-It is the easiest way to reduce storage.
-
-The transaction sample rate is a configuration setting of each agent.
-Reducing it does not affect the collection of metrics such as _Transactions per second_.
-
-[float]
-==== Reduce collected stack trace information
-
-Elastic APM agents collect `stacktrace` information under certain circumstances.
-This can be very helpful in identifying issues in your code,
-but it also comes with an overhead at collection time and increases storage usage.
-
-Stack trace collection settings are managed in each agent.
-
-[float]
-==== Delete data
-
-You might want to only keep data for a defined time period.
-This might mean deleting old documents periodically,
-deleting data collected for specific services or customers,
-or deleting specific indices.
-
-Depending on your use case,
-you can delete data periodically with <>,
-{curator-ref-current}[Curator], the {ref}/docs-delete-by-query.html[Delete By Query API],
-or in the {kibana-ref}/managing-indices.html[{kib} Index Management UI].
-
-[float]
-[[delete-data-with-ilm]]
-===== Delete data with {ilm-init}
-
-Index lifecycle management ({ilm-init}) enables you to automate how you want to manage your indices over time.
-You can base actions on factors such as shard size and performance requirements.
-See <> to learn more.
-
-[float]
-[[delete-data-query]]
-===== Delete data matching a query
-
-You can delete all APM documents matching a specific query.
-For example, to delete all documents with a given `service.name`, use the following request:
-
-["source","console"]
-----
-POST /.ds-*-apm*/_delete_by_query
-{
-  "query": {
-    "term": {
-      "service.name": {
-        "value": "old-service-name"
-      }
-    }
-  }
-}
-----
-
-[float]
-[[delete-data-in-kibana]]
-===== Delete data via {kib} Index Management UI
-
-Select the indices you want to delete, and click **Manage indices** to see the available actions.
-Then click **delete indices**.
-
-[float]
-[[manage-indices-in-kibana]]
-=== Manage indices via {kib}
-
-{kib}'s {ref}/index-mgmt.html[index management] allows you to manage your cluster's
-indices, data streams, index templates, and much more.
-
-[float]
-[[update-data]]
-=== Update existing data
-
-You might want to update documents that are already indexed.
-For example, your service name might have been set incorrectly.
-
-To do this, you can use the {ref}/docs-update-by-query.html[Update By Query API].
-
-[float]
-==== Rename a service
-
-To rename a service, send the following request:
-
-["source","sh"]
-------------------------------------------------------------
-POST /.ds-*-apm*/_update_by_query?expand_wildcards=all
-{
-  "query": {
-    "term": {
-      "service.name": {
-        "value": "current-service-name"
-      }
-    }
-  },
-  "script": {
-    "source": "ctx._source.service.name = 'new-service-name'",
-    "lang": "painless"
-  }
-}
------------------------------------------------------------- 
-// CONSOLE
-
-TIP: Remember to also change the service name in the {apm-agents-ref}/index.html[{apm-agent} configuration].
-
-include::./apm-tune-elasticsearch.asciidoc[]
diff --git a/docs/monitor.asciidoc b/docs/monitor.asciidoc
deleted file mode 100644
index bd6aa3585e9..00000000000
--- a/docs/monitor.asciidoc
+++ /dev/null
@@ -1,214 +0,0 @@
-[[monitor-apm]]
-=== Monitor APM Server
-
-Use the {stack} {monitor-features} to gain insight into the real-time health and performance of APM Server.
-Stack monitoring exposes key metrics, like intake response count, intake error rate, output event rate,
-output failed event rate, and more.
-
-[float]
-[[monitor-apm-cloud]]
-=== Monitor APM running on {ecloud}
-
-{ecloud} manages the installation and configuration of a monitoring agent for you -- so
-all you have to do is flip a switch and watch the data pour in.
-
-* **{ess}** user? See {cloud}/ec-enable-logging-and-monitoring.html[ESS: Enable logging and monitoring].
-* **{ece}** user? See {ece-ref}/ece-enable-logging-and-monitoring.html[ECE: Enable logging and monitoring].
-
-[float]
-[[monitor-apm-self-install]]
-=== Monitor a self-installation of APM
-
-NOTE: This guide assumes you are already ingesting APM data into the {stack}.
-
-In 8.0 and later, you can use {metricbeat} to collect data about APM Server and ship it to a monitoring cluster.
-To collect and ship monitoring data:
-
-. <>
-. <>
-
-[float]
-[[configure-ea-monitoring-data]]
-==== Configure {agent} to send monitoring data
-
-****
-Before you can monitor APM,
-you must have monitoring data for the {es} production cluster.
-To learn how, see {ref}/configuring-metricbeat.html[Collect {es} monitoring data with {metricbeat}].
-Alternatively, open the **{stack-monitor-app}** app in {kib} and follow the in-product guide.
-****
-
-. Enable monitoring of {agent} by adding the following settings to your `elastic-agent.yml` configuration file:
-+
---
-[source,yaml]
-----
-agent.monitoring:
-  http:
-    enabled: true <1>
-    host: localhost <2>
-    port: 6791 <3>
-----
-<1> Enable monitoring
-<2> The host to expose logs/metrics on
-<3> The port to expose logs/metrics on
---
-
-. Stop {agent}
-+
-If {agent} is already running, you must stop it.
-Use the command that works with your system:
-+
---
-include::{obs-repo-dir}/ingest-management/tab-widgets/stop-widget.asciidoc[]
---
-
-. Start {agent}
-+
-Use the command that works with your system:
-+
---
-include::{obs-repo-dir}/ingest-management/tab-widgets/start-widget.asciidoc[]
---
-
-[float]
-[[install-config-metricbeat]]
-==== Install and configure {metricbeat} to collect monitoring data
-
-. Install {metricbeat} on the same server as {agent}. To learn how, see
-{metricbeat-ref}/metricbeat-installation-configuration.html[Get started with {metricbeat}].
-If you already have {metricbeat} installed, skip this step.
-
-. Enable the `beat-xpack` module in {metricbeat}.
-+
---
-For example, to enable the default configuration in the `modules.d` directory,
-run the following command, using the correct command syntax for your OS:
-
-["source","sh",subs="attributes,callouts"]
-----
-metricbeat modules enable beat-xpack
-----
-
-For more information, see
-{metricbeat-ref}/configuration-metricbeat.html[Configure modules] and
-{metricbeat-ref}/metricbeat-module-beat.html[beat module].
---
-
-. Configure the `beat-xpack` module in {metricbeat}.
-+
---
-When complete, your `modules.d/beat-xpack.yml` file should look similar to this:
-
-[source,yaml]
-----
-- module: beat
-  xpack.enabled: true
-  period: 10s
-  hosts: ["http://localhost:6791"]
-  basepath: "/processes/apm-server-default"
-  username: remote_monitoring_user
-  password: your_password
-----
-
-.. Do not change the `module` name or `xpack.enabled` boolean;
-these are required for stack monitoring. We recommend accepting the default `period` for now.
-
-.. Set the `hosts` to match the host:port configured in your `elastic-agent.yml` file.
-In this example, that's `http://localhost:6791`.
-+ -To monitor multiple APM Server instances running in multiple {agent}s, specify a list of hosts, for example: -+ -[source,yaml] ----- -hosts: ["http://localhost:5066","http://localhost:5067","http://localhost:5068"] ----- -+ -If you configured {agent} to use encrypted communications, you must access -it via HTTPS. For example, use a `hosts` setting like `https://localhost:5066`. - -.. APM Server metrics are exposed at `/processes/apm-server-default`. Add this location as the `basepath`. - -.. Set the `username` and `password` settings as required by your -environment. If Elastic {security-features} are enabled, you must provide a username -and password so that {metricbeat} can collect metrics successfully: - -... Create a user on the {es} cluster that has the -`remote_monitoring_collector` {ref}/built-in-roles.html[built-in role]. -Alternatively, if it's available in your environment, use the -`remote_monitoring_user` {ref}/built-in-users.html[built-in user]. - -... Add the `username` and `password` settings to the beat module configuration -file. --- - -. Optional: Disable the system module in the {metricbeat}. -+ --- -By default, the {metricbeat-ref}/metricbeat-module-system.html[system module] is -enabled. The information it collects, however, is not shown on the -*{stack-monitor-app}* page in {kib}. Unless you want to use that information for -other purposes, run the following command: - -["source","sh",subs="attributes,callouts"] ----- -metricbeat modules disable system ----- --- - -. Identify where to send the monitoring data. + -+ --- -TIP: In production environments, you should send your deployment logs and metrics to a dedicated -monitoring deployment (referred to as the _monitoring cluster_). -Monitoring indexes logs and metrics into {es} and these indexes consume storage, memory, -and CPU cycles like any other index. -By using a separate monitoring deployment, you avoid affecting your other production deployments and can -view the logs and metrics even when a production deployment is unavailable. - -For example, specify the {es} output information in the {metricbeat} -configuration file (`metricbeat.yml`): - -[source,yaml] ----- -output.elasticsearch: - # Array of hosts to connect to. - hosts: ["http://es-mon-1:9200", "http://es-mon2:9200"] <1> - - # Optional protocol and basic auth credentials. - #protocol: "https" - #api_key: "id:api_key" <2> - #username: "elastic" - #password: "changeme" ----- -<1> In this example, the data is stored on a monitoring cluster with nodes -`es-mon-1` and `es-mon-2`. -<2> Specify one of `api_key` or `username`/`password`. - -If you configured the monitoring cluster to use encrypted communications, you -must access it via HTTPS. For example, use a `hosts` setting like -`https://es-mon-1:9200`. - -IMPORTANT: The {es} {monitor-features} use ingest pipelines, therefore the -cluster that stores the monitoring data must have at least one ingest node. - -If the {es} {security-features} are enabled on the monitoring cluster, you -must provide a valid user ID and password so that {metricbeat} can send metrics -successfully: - -.. Create a user on the monitoring cluster that has the -`remote_monitoring_agent` {ref}/built-in-roles.html[built-in role]. -Alternatively, if it's available in your environment, use the -`remote_monitoring_user` {ref}/built-in-users.html[built-in user]. - -.. Add the `username` and `password` settings to the {es} output information in -the {metricbeat} configuration file. 
- -For more information about these configuration options, see -{metricbeat-ref}/elasticsearch-output.html[Configure the {es} output]. --- - -. {metricbeat-ref}/metricbeat-starting.html[Start {metricbeat}] to begin -collecting APM monitoring data. - -. {kibana-ref}/monitoring-data.html[View the monitoring data in {kib}]. diff --git a/docs/notices.asciidoc b/docs/notices.asciidoc deleted file mode 100644 index 9fc0ea9ec0a..00000000000 --- a/docs/notices.asciidoc +++ /dev/null @@ -1,15 +0,0 @@ -// For installation, get started, and setup docs -:deprecation-notice-installation: This method of installing APM Server will be deprecated and removed in a future release. Please consider getting started with the <> instead. - -// Generic "running" message -// Usually followed by a link to the corresponding APM integration docs -:deprecation-notice-data: This documentation refers to the standalone (legacy) method of running APM Server. This method of running APM Server will be deprecated and removed in a future release. Please consider <>. - -// For monitoring docs -:deprecation-notice-monitor: This documentation refers to monitoring the standalone (legacy) APM Server. This method of running APM Server will be deprecated and removed in a future release. Please consider <>. - -// For configuration docs -:deprecation-notice-config: This documentation refers to configuring the standalone (legacy) APM Server. This method of running APM Server will be deprecated and removed in a future release. Please consider <>. - -// For API docs -:deprecation-notice-api: This documentation refers to the API of the standalone (legacy) APM Server. This method of running APM Server will be deprecated and removed in a future release. Please consider <>. diff --git a/docs/open-telemetry.asciidoc b/docs/open-telemetry.asciidoc deleted file mode 100644 index a7614a011e8..00000000000 --- a/docs/open-telemetry.asciidoc +++ /dev/null @@ -1,495 +0,0 @@ -[[open-telemetry]] -=== OpenTelemetry integration - -:ot-spec: https://github.com/open-telemetry/opentelemetry-specification/blob/master/README.md -:ot-grpc: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp.md#otlpgrpc -:ot-http: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp.md#otlphttp -:ot-contrib: https://github.com/open-telemetry/opentelemetry-collector-contrib -:ot-resource: {ot-contrib}/tree/main/processor/resourceprocessor -:ot-attr: {ot-contrib}/blob/main/processor/attributesprocessor -:ot-repo: https://github.com/open-telemetry/opentelemetry-collector -:ot-pipelines: https://opentelemetry.io/docs/collector/configuration/#service -:ot-extension: {ot-repo}/blob/master/extension/README.md -:ot-scaling: {ot-repo}/blob/master/docs/performance.md - -:ot-collector: https://opentelemetry.io/docs/collector/getting-started/ -:ot-dockerhub: https://hub.docker.com/r/otel/opentelemetry-collector-contrib - -https://opentelemetry.io/docs/concepts/what-is-opentelemetry/[OpenTelemetry] is a set -of APIs, SDKs, tooling, and integrations that enable the capture and management of -telemetry data from your services for greater observability. For more information about the -OpenTelemetry project, see the {ot-spec}[spec]. - -Elastic OpenTelemetry integrations allow you to reuse your existing OpenTelemetry -instrumentation to quickly analyze distributed traces and metrics to help you monitor -business KPIs and technical components with the {stack}. 
-
-[float]
-[[open-telemetry-native]]
-==== APM Server native support of OpenTelemetry protocol
-
-Elastic APM Server natively supports the OpenTelemetry protocol.
-This means trace data and metrics collected from your applications and infrastructure can
-be sent directly to Elastic APM Server using the OpenTelemetry protocol.
-
-image::./legacy/guide/images/open-telemetry-protocol-arch.png[OpenTelemetry Elastic architecture diagram]
-
-[float]
-[[instrument-apps-otel]]
-===== Instrument applications
-
-To export traces and metrics to APM Server, ensure that you have instrumented your services and applications
-with the OpenTelemetry API, SDK, or both. For example, if you are a Java developer, you need to instrument your Java app using the
-https://github.com/open-telemetry/opentelemetry-java-instrumentation[OpenTelemetry agent for Java].
-
-By defining the following environment variables, you can configure the OTLP endpoint so that the OpenTelemetry agent communicates with
-APM Server.
-
-[source,bash]
-----
-export OTEL_RESOURCE_ATTRIBUTES=service.name=checkoutService,service.version=1.1,deployment.environment=production
-export OTEL_EXPORTER_OTLP_ENDPOINT=https://apm_server_url:8200
-export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer an_apm_secret_token"
-export OTEL_METRICS_EXPORTER="otlp"
-export OTEL_LOGS_EXPORTER="otlp"
-java -javaagent:/path/to/opentelemetry-javaagent-all.jar \
-     -classpath lib/*:classes/ \
-     com.mycompany.checkout.CheckoutServiceServer
-----
-
-|===
-
-| `OTEL_RESOURCE_ATTRIBUTES` | Fields that describe the service and the environment that the service runs in. See <> for more information.
-
-| `OTEL_EXPORTER_OTLP_ENDPOINT` | APM Server URL. The host and port that APM Server listens for events on.
-
-| `OTEL_EXPORTER_OTLP_HEADERS` | Authorization header that includes the Elastic APM Secret token or API key: `"Authorization=Bearer an_apm_secret_token"` or `"Authorization=ApiKey an_api_key"`.
-
-For information on how to format an API key, see our <> docs.
-
-Please note the required space between `Bearer` and `an_apm_secret_token`, and `ApiKey` and `an_api_key`.
-
-| `OTEL_EXPORTER_OTLP_CERTIFICATE` | The trusted certificate used to verify the TLS credentials of the client. (optional)
-
-|===
-
-You are now ready to collect traces and <> before <>
-and <> in {kib}.
-
-[float]
-[[connect-open-telemetry-collector]]
-===== Connect OpenTelemetry Collector instances
-
-Connect your OpenTelemetry collector instances to Elastic {observability} using the OTLP exporter.
-
-[source,yaml]
-----
-receivers: <1>
-  # ...
-  otlp:
-
-processors: <2>
-  # ...
-  memory_limiter:
-    check_interval: 1s
-    limit_mib: 2000
-  batch:
-
-exporters:
-  logging:
-    loglevel: warn <3>
-  otlp/elastic: <4>
-    # Elastic APM server https endpoint without the "https://" prefix
-    endpoint: "${ELASTIC_APM_SERVER_ENDPOINT}" <5> <7>
-    headers:
-      # Elastic APM Server secret token
-      Authorization: "Bearer ${ELASTIC_APM_SECRET_TOKEN}" <6> <7>
-
-service:
-  pipelines:
-    traces:
-      receivers: [otlp]
-      exporters: [logging, otlp/elastic]
-    metrics:
-      receivers: [otlp]
-      exporters: [logging, otlp/elastic]
-    logs: <8>
-      receivers: [otlp]
-      exporters: [logging, otlp/elastic]
-----
-<1> The receivers, such as
-the https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver/otlpreceiver[OTLP receiver], that forward data emitted by APM agents or the https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver/hostmetricsreceiver[host metrics receiver].
-<2> We recommend using the https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/batchprocessor/README.md[Batch processor] and also suggest using the https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/memorylimiter/README.md[memory limiter processor]. For more information, see https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/README.md#recommended-processors[Recommended processors].
-<3> The https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/loggingexporter[logging exporter] is helpful for troubleshooting and supports various logging levels: `debug`, `info`, `warn`, and `error`.
-<4> Elastic {observability} endpoint configuration.
-APM Server supports a ProtoBuf payload via both the OTLP protocol over gRPC transport {ot-grpc}[(OTLP/gRPC)]
-and the OTLP protocol over HTTP transport {ot-http}[(OTLP/HTTP)].
-To learn more about these exporters, see the OpenTelemetry Collector documentation:
-https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlphttpexporter[OTLP/HTTP Exporter] or
-https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlpexporter[OTLP/gRPC exporter].
-<5> Hostname and port of the APM Server endpoint. For example, `elastic-apm-server:8200`.
-<6> Credential for Elastic APM <> (`Authorization: "Bearer a_secret_token"`) or <> (`Authorization: "ApiKey an_api_key"`).
-<7> Environment-specific configuration parameters can be conveniently passed in as environment variables documented https://opentelemetry.io/docs/collector/configuration/#configuration-environment-variables[here] (e.g. `ELASTIC_APM_SERVER_ENDPOINT` and `ELASTIC_APM_SECRET_TOKEN`).
-<8> To send OpenTelemetry logs to {stack} version 8.0+, declare a `logs` pipeline.
-
-You're now ready to export traces and metrics from your services and applications.
-
-[float]
-[[open-telemetry-collect-metrics]]
-==== Collect metrics
-
-IMPORTANT: When collecting metrics, please note that the https://www.javadoc.io/doc/io.opentelemetry/opentelemetry-api/latest/io/opentelemetry/api/metrics/DoubleValueRecorder.html[`DoubleValueRecorder`]
-and https://www.javadoc.io/doc/io.opentelemetry/opentelemetry-api/latest/io/opentelemetry/api/metrics/LongValueRecorder.html[`LongValueRecorder`] metrics are not yet supported.
-
-Here's an example of how to capture business metrics from a Java application.
-
-[source,java]
-----
-// initialize metric
-Meter meter = GlobalMetricsProvider.getMeter("my-frontend");
-DoubleCounter orderValueCounter = meter.doubleCounterBuilder("order_value").build();
-
-public void createOrder(HttpServletRequest request) {
-
-   // create order in the database
-   ...
-   // increment business metrics for monitoring
-   orderValueCounter.add(orderPrice);
-}
-----
-
-See the https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md[Open Telemetry Metrics API]
-for more information.
-
-[float]
-[[open-telemetry-verify-metrics]]
-===== Verify OpenTelemetry metrics data
-
-Use *Discover* to validate that metrics are successfully reported to {kib}.
-
-. Launch {kib}:
-+
---
-include::./shared/open-kibana/open-kibana-widget.asciidoc[]
---
-
-. Open the main menu, then click *Discover*.
-. Select `apm-*` as your index pattern.
-. Filter the data to only show documents with metrics: `processor.name: "metric"`
-. Narrow your search with a known OpenTelemetry field. For example, if you have an `order_value` field, add `order_value: *` to your search to return
-only OpenTelemetry metrics documents.
-
-[float]
-[[open-telemetry-visualize]]
-===== Visualize in {kib}
-
-TSVB within {kib} is the recommended visualization for OpenTelemetry metrics. TSVB is a time series data visualizer that allows you to use the
-{es} aggregation framework's full power. With TSVB, you can combine any number of aggregations to display complex data.
-
-// lint ignore ecommerce
-In this example eCommerce OpenTelemetry dashboard, there are four visualizations: sales, order count, product cache, and system load. The dashboard provides us with business
-KPI metrics, along with performance-related metrics.
-
-
-[role="screenshot"]
-image::./legacy/guide/images/ecommerce-dashboard.png[OpenTelemetry visualizations]
-
-Let's look at how this dashboard was created, specifically the Sales USD and System load visualizations.
-
-. Open the main menu, then click *Dashboard*.
-. Click *Create dashboard*.
-. Click *Save*, enter the name of your dashboard, and then click *Save* again.
-. Let's add a Sales USD visualization. Click *Edit*.
-. Click *Create new* and then select *TSVB*.
-. For the label name, enter Sales USD, and then select the following:
-+
-* Aggregation: `Counter Rate`.
-* Field: `order_sum`.
-* Scale: `auto`.
-* Group by: `Everything`
-. Click *Save*, enter Sales USD as the visualization name, and then click *Save and return*.
-. Now let's create a visualization of load averages on the system. Click *Create new*.
-. Select *TSVB*.
-. Select the following:
-+
-* Aggregation: `Average`.
-* Field: `system.cpu.load_average.1m`.
-* Group by: `Terms`.
-* By: `host.ip`.
-* Top: `10`.
-* Order by: `Doc Count (default)`.
-* Direction: `Descending`.
-. Click *Save*, enter System load per host IP as the visualization name, and then click *Save and return*.
-+
-Both visualizations are now displayed on your custom dashboard.
-
-IMPORTANT: By default, Discover shows data for the last 15 minutes. If you have a time-based index
-and no data displays, you might need to increase the time range.
-
-[float]
-[[open-telemetry-aws-lambda]]
-==== AWS Lambda Support
-
-AWS Lambda functions can be instrumented with OpenTelemetry and monitored with Elastic {observability}.
-
-To get started, follow the official AWS Distro for OpenTelemetry Lambda https://aws-otel.github.io/docs/getting-started/lambda[getting started documentation] and configure the OpenTelemetry Collector to output traces and metrics to your Elastic cluster.
-
-[float]
-[[open-telemetry-aws-lambda-java]]
-===== Instrumenting AWS Lambda Java functions
-
-NOTE: For a better startup time, we recommend using SDK-based instrumentation, i.e. manual instrumentation of the code, rather than auto instrumentation.
-
-To instrument AWS Lambda Java functions, follow the official https://aws-otel.github.io/docs/getting-started/lambda/lambda-java[AWS Distro for OpenTelemetry Lambda Support For Java].
-
-Noteworthy configuration elements:
-
-* AWS Lambda Java functions should implement `com.amazonaws.services.lambda.runtime.RequestHandler`,
-+
-[source,java]
-----
-public class ExampleRequestHandler implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
-    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent event, Context context) {
-        // add your code ...
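-        // Illustrative addition (not part of the original snippet): the handler
-        // must return a response object; a minimal success response might be:
-        return new APIGatewayProxyResponseEvent().withStatusCode(200);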
-    }
-}
-----
-
-* When using SDK-based instrumentation, frameworks you want to gain visibility into should be manually instrumented
-** The below example instruments https://square.github.io/okhttp/4.x/okhttp/okhttp3/-ok-http-client/[OkHttpClient] with the OpenTelemetry instrument https://search.maven.org/artifact/io.opentelemetry.instrumentation/opentelemetry-okhttp-3.0/1.3.1-alpha/jar[io.opentelemetry.instrumentation:opentelemetry-okhttp-3.0:1.3.1-alpha]
-+
-[source,java]
-----
-import io.opentelemetry.instrumentation.okhttp.v3_0.OkHttpTracing;
-
-OkHttpClient httpClient = new OkHttpClient.Builder()
-    .addInterceptor(OkHttpTracing.create(GlobalOpenTelemetry.get()).newInterceptor())
-    .build();
-----
-
-* The configuration of the OpenTelemetry Collector, with the definition of the Elastic {observability} endpoint, can be added to the root directory of the Lambda binaries (e.g. defined in `src/main/resources/opentelemetry-collector.yaml`)
-+
-[source,yaml]
-----
-# Copy opentelemetry-collector.yaml in the root directory of the lambda function
-# Set an environment variable 'OPENTELEMETRY_COLLECTOR_CONFIG_FILE' to '/var/task/opentelemetry-collector.yaml'
-receivers:
-  otlp:
-    protocols:
-      http:
-      grpc:
-
-exporters:
-  logging:
-    loglevel: debug
-  otlp/elastic:
-    # Elastic APM server https endpoint without the "https://" prefix
-    endpoint: "${ELASTIC_OTLP_ENDPOINT}" <1>
-    headers:
-      # Elastic APM Server secret token
-      Authorization: "Bearer ${ELASTIC_OTLP_TOKEN}" <1>
-
-service:
-  pipelines:
-    traces:
-      receivers: [otlp]
-      exporters: [logging, otlp/elastic]
-    metrics:
-      receivers: [otlp]
-      exporters: [logging, otlp/elastic]
-    logs:
-      receivers: [otlp]
-      exporters: [logging, otlp/elastic]
-----
-<1> Environment-specific configuration parameters can be conveniently passed in as environment variables: `ELASTIC_OTLP_ENDPOINT` and `ELASTIC_OTLP_TOKEN`
-
-* Configure the AWS Lambda Java function with:
-** https://docs.aws.amazon.com/lambda/latest/dg/API_Layer.html[Function
-layer]: The latest https://aws-otel.github.io/docs/getting-started/lambda/lambda-java[AWS
-Lambda layer for OpenTelemetry] (e.g. `arn:aws:lambda:eu-west-1:901920570463:layer:aws-otel-java-wrapper-ver-1-2-0:1`)
-** https://docs.aws.amazon.com/lambda/latest/dg/API_TracingConfig.html[`TracingConfig` / Mode] set to `PassThrough`
-** https://docs.aws.amazon.com/lambda/latest/dg/API_FunctionConfiguration.html[`FunctionConfiguration` / Timeout] set to more than 10 seconds to support the longer cold start inherent to the Lambda Java Runtime
-** Export the environment variables:
-*** `AWS_LAMBDA_EXEC_WRAPPER="/opt/otel-proxy-handler"` for wrapping handlers proxied through the API Gateway (see https://aws-otel.github.io/docs/getting-started/lambda/lambda-java#enable-auto-instrumentation-for-your-lambda-function[here])
-*** `OTEL_PROPAGATORS="tracecontext, baggage"` to override the default setting that also enables X-Ray headers, causing interference between OpenTelemetry and X-Ray
-*** `OPENTELEMETRY_COLLECTOR_CONFIG_FILE="/var/task/opentelemetry-collector.yaml"` to specify the path to your OpenTelemetry Collector configuration
-
-[float]
-[[open-telemetry-aws-lambda-java-terraform]]
-===== Instrumenting AWS Lambda Java functions with Terraform
-
-We recommend using an infrastructure as code solution like Terraform or Ansible to manage the configuration of your AWS Lambda functions.
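-
-If you want to experiment before codifying the configuration, the same settings from the list above
-can be applied with the AWS CLI. The following is a sketch only: the function name
-`my-otel-function` is a placeholder, the layer ARN is the example from the list above, and you
-should adjust the region, timeout, and environment values for your setup.
-
-[source,bash]
-----
-aws lambda update-function-configuration \
-  --function-name my-otel-function \
-  --layers arn:aws:lambda:eu-west-1:901920570463:layer:aws-otel-java-wrapper-ver-1-2-0:1 \
-  --tracing-config Mode=PassThrough \
-  --timeout 15 \
-  --environment '{"Variables":{"AWS_LAMBDA_EXEC_WRAPPER":"/opt/otel-proxy-handler","OTEL_PROPAGATORS":"tracecontext, baggage","OPENTELEMETRY_COLLECTOR_CONFIG_FILE":"/var/task/opentelemetry-collector.yaml"}}'
-----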
-
-Here is an example of an AWS Lambda Java function managed with Terraform and the https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_function[AWS Provider / Lambda Functions]:
-
-* Sample Terraform code: https://github.com/cyrille-leclerc/my-serverless-shopping-cart/tree/main/checkout-function/deploy
-* Note that the Terraform code to manage the HTTP API Gateway (https://github.com/cyrille-leclerc/my-serverless-shopping-cart/tree/main/utils/terraform/api-gateway-proxy[here]) is copied from the official OpenTelemetry Lambda sample https://github.com/open-telemetry/opentelemetry-lambda/tree/e72467a085a2a6e57af133032f85ac5b8bbbb8d1/utils[here]
-
-[float]
-[[open-telemetry-aws-lambda-nodejs]]
-===== Instrumenting AWS Lambda Node.js functions
-
-NOTE: For a better startup time, we recommend using SDK-based instrumentation, i.e. manual instrumentation of the code, rather than auto instrumentation.
-
-To instrument AWS Lambda Node.js functions, see https://aws-otel.github.io/docs/getting-started/lambda/lambda-js[AWS Distro for OpenTelemetry Lambda Support For JavaScript].
-
-The configuration of the OpenTelemetry Collector, with the definition of the Elastic {observability} endpoint, can be added to the root directory of the Lambda binaries: `src/main/resources/opentelemetry-collector.yaml`.
-
-[source,yaml]
-----
-# Copy opentelemetry-collector.yaml in the root directory of the lambda function
-# Set an environment variable 'OPENTELEMETRY_COLLECTOR_CONFIG_FILE' to '/var/task/opentelemetry-collector.yaml'
-receivers:
-  otlp:
-    protocols:
-      http:
-      grpc:
-
-exporters:
-  logging:
-    loglevel: debug
-  otlp/elastic:
-    # Elastic APM server https endpoint without the "https://" prefix
-    endpoint: "${ELASTIC_OTLP_ENDPOINT}" <1>
-    headers:
-      # Elastic APM Server secret token
-      Authorization: "Bearer ${ELASTIC_OTLP_TOKEN}" <1>
-
-service:
-  pipelines:
-    traces:
-      receivers: [otlp]
-      exporters: [logging, otlp/elastic]
-    metrics:
-      receivers: [otlp]
-      exporters: [logging, otlp/elastic]
-    logs:
-      receivers: [otlp]
-      exporters: [logging, otlp/elastic]
-----
-<1> Environment-specific configuration parameters can be conveniently passed in as environment variables: `ELASTIC_OTLP_ENDPOINT` and `ELASTIC_OTLP_TOKEN`
-
-Configure the AWS Lambda Node.js function:
-
-* https://docs.aws.amazon.com/lambda/latest/dg/API_Layer.html[Function
-layer]: The latest https://aws-otel.github.io/docs/getting-started/lambda/lambda-js[AWS
-Lambda layer for OpenTelemetry]. For example, `arn:aws:lambda:eu-west-1:901920570463:layer:aws-otel-nodejs-ver-0-23-0:1`
-* https://docs.aws.amazon.com/lambda/latest/dg/API_TracingConfig.html[`TracingConfig` / Mode] set to `PassThrough`
-* https://docs.aws.amazon.com/lambda/latest/dg/API_FunctionConfiguration.html[`FunctionConfiguration` / Timeout] set to more than 10 seconds to support the cold start of the Lambda JavaScript Runtime
-* Export the environment variables:
-** `AWS_LAMBDA_EXEC_WRAPPER="/opt/otel-handler"` for wrapping handlers proxied through the API Gateway. See https://aws-otel.github.io/docs/getting-started/lambda/lambda-js#enable-auto-instrumentation-for-your-lambda-function[enable auto instrumentation for your lambda-function].
-** `OTEL_PROPAGATORS="tracecontext"` to override the default setting that also enables X-Ray headers, causing interference between OpenTelemetry and X-Ray
-** `OPENTELEMETRY_COLLECTOR_CONFIG_FILE="/var/task/opentelemetry-collector.yaml"` to specify the path to your OpenTelemetry Collector configuration.
-** `OTEL_TRACES_SAMPLER="always_on"` defines the required sampler strategy if it is not sent from the caller. Note that `always_on` can potentially create a very large amount of data, so in production set the correct sampling configuration, as per the https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/sdk.md#sampling[specification].
-
-[float]
-[[open-telemetry-aws-lambda-nodejs-terraform]]
-===== Instrumenting AWS Lambda Node.js functions with Terraform
-
-To manage the configuration of your AWS Lambda functions, we recommend using an infrastructure as code solution like Terraform or Ansible.
-
-Here is an example of an AWS Lambda Node.js function managed with Terraform and the https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_function[AWS Provider / Lambda Functions]:
-
-* https://github.com/michaelhyatt/terraform-aws-nodejs-api-worker-otel/tree/v0.24[Sample Terraform code]
-
-[float]
-[[open-telemetry-resource-attributes]]
-==== Resource attributes
-
-A resource attribute is a key/value pair containing information about the entity producing telemetry.
-Resource attributes are mapped to Elastic Common Schema (ECS) fields like `service.*`, `cloud.*`, `process.*`, etc.
-These fields describe the service and the environment that the service runs in.
-
-The examples below set the Elastic (ECS) `service.environment` field for the resource, that is, the service producing trace events.
-Note that Elastic maps the OpenTelemetry `deployment.environment` field to
-the ECS `service.environment` field on ingestion.
-
-**OpenTelemetry agent**
-
-Use the `OTEL_RESOURCE_ATTRIBUTES` environment variable to pass resource attributes at process invocation.
-
-[source,bash]
-----
-export OTEL_RESOURCE_ATTRIBUTES=deployment.environment=production
-----
-
-**OpenTelemetry collector**
-
-Use the {ot-resource}[resource processor] to set or apply changes to resource attributes.
-
-[source,yaml]
-----
-...
-processors:
-  resource:
-    attributes:
-      - key: deployment.environment
-        action: insert
-        value: production
-...
-----
-
-[TIP]
---
-Need to add event attributes instead?
-Use attributes--not to be confused with resource attributes--to add data to span, log, or metric events.
-Attributes can be added as a part of the OpenTelemetry instrumentation process or with the {ot-attr}[attributes processor].
---
-
-[float]
-[[open-telemetry-proxy-apm]]
-==== Proxy requests to APM Server
-
-APM Server supports both the {ot-grpc}[(OTLP/gRPC)] and {ot-http}[(OTLP/HTTP)] protocols on the same port as Elastic APM agent requests. For ease of setup, we recommend using OTLP/HTTP when proxying or load balancing requests to the APM Server.
-
-If you use the OTLP/gRPC protocol, requests to the APM Server must use either HTTP/2 over TLS or HTTP/2 Cleartext (H2C). No matter which protocol is used, OTLP/gRPC requests will have the header: `"Content-Type: application/grpc"`.
-
-When using a layer 7 (L7) proxy like AWS ALB, requests must be proxied in a way that ensures requests to the APM Server follow the rules outlined above. For example, with ALB you can create rules to select an alternative backend protocol based on the headers of requests coming into ALB. In this example, you'd select the gRPC protocol when the `"Content-Type: application/grpc"` header exists on a request.
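-
-To sanity-check the OTLP/HTTP path through your proxy, you can post a ProtoBuf-encoded payload
-directly. This sketch assumes a serialized export request in a local `trace.pb` file and the
-standard OTLP/HTTP path from the specification; remember that APM Server does not accept
-JSON-encoded OTLP/HTTP.
-
-[source,bash]
-----
-curl --verbose \
-  -H "Authorization: Bearer an_apm_secret_token" \
-  -H "Content-Type: application/x-protobuf" \
-  --data-binary @trace.pb \
-  https://apm_server_url:8200/v1/traces
-----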
-
-For more information on how to configure an AWS ALB to support gRPC, see this AWS blog post:
-https://aws.amazon.com/blogs/aws/new-application-load-balancer-support-for-end-to-end-http-2-and-grpc/[Application Load Balancer Support for End-to-End HTTP/2 and gRPC].
-
-For more information on how APM Server services gRPC requests, see
-https://github.com/elastic/apm-server/blob/main/dev_docs/otel.md#muxing-grpc-and-http11[Muxing gRPC and HTTP/1.1].
-
-
-[float]
-[[open-telemetry-known-limitations]]
-==== Limitations
-
-[float]
-[[open-telemetry-traces-limitations]]
-===== OpenTelemetry traces
-
-* Traces of applications using `messaging` semantics might be wrongly displayed as `transactions` in the APM UI, while they should be considered `spans`. https://github.com/elastic/apm-server/issues/7001[#7001]
-* Inability to see stack traces in spans
-* Inability to view "Time Spent by Span Type" in APM views https://github.com/elastic/apm-server/issues/5747[#5747]
-* Metrics derived from traces (throughput, latency, and errors) are not accurate when traces are sampled before being ingested by Elastic {observability} (for example, by an OpenTelemetry Collector or OpenTelemetry {apm-agent} or SDK) https://github.com/elastic/apm/issues/472[#472]
-
-[float]
-[[open-telemetry-metrics-limitations]]
-===== OpenTelemetry metrics
-
-* Inability to see host metrics in the Elastic Metrics Infrastructure view when using the OpenTelemetry Collector host metrics receiver https://github.com/elastic/apm-server/issues/5310[#5310]
-
-[float]
-[[open-telemetry-logs-limitations]]
-===== OpenTelemetry logs
-
-* OpenTelemetry logs are supported in **beta** from 8.0 https://github.com/elastic/apm-server/pull/6768[#6768].
-
-[float]
-[[open-telemetry-otlp-limitations]]
-===== OpenTelemetry protocol (OTLP)
-
-APM Server supports both the {ot-grpc}[(OTLP/gRPC)] and {ot-http}[(OTLP/HTTP)] protocols with a ProtoBuf payload.
-APM Server does not yet support JSON encoding for OTLP/HTTP.
-
-[float]
-[[open-telemetry-collector-exporter]]
-===== OpenTelemetry Collector exporter for Elastic
-
-The https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/elasticexporter#legacy-opentelemetry-collector-exporter-for-elastic[OpenTelemetry Collector exporter for Elastic]
-was deprecated in 7.13 and replaced by the native support of the OpenTelemetry protocol (OTLP) in
-Elastic {observability}. To learn more, see
-https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/elasticexporter#migration[migration].
diff --git a/docs/overview.asciidoc b/docs/overview.asciidoc
deleted file mode 100644
index 472f600d65a..00000000000
--- a/docs/overview.asciidoc
+++ /dev/null
@@ -1,28 +0,0 @@
-[[apm-overview]]
-== Free and open application performance monitoring
-
-++++
-What is APM?
-++++
-
-Elastic APM is an application performance monitoring system built on the {stack}.
-It allows you to monitor software services and applications in real time by
-collecting detailed performance information on response time for incoming requests,
-database queries, calls to caches, external HTTP requests, and more.
-This makes it easy to pinpoint and fix performance problems quickly.
-
-Elastic APM also automatically collects unhandled errors and exceptions.
-Errors are grouped based primarily on the stack trace,
-so you can identify new errors as they appear and keep an eye on how many times specific errors happen.
-
-Metrics are another vital source of information when debugging production systems.
-Elastic APM agents automatically pick up basic host-level metrics and agent-specific metrics,
-like JVM metrics in the Java Agent, and Go runtime metrics in the Go Agent.
-
-[float]
-=== Give Elastic APM a try
-
-Learn more about the <> that make up Elastic APM,
-or jump right into the <>.
-
-NOTE: These docs will indiscriminately use the word "service" for both services and applications.
diff --git a/docs/processing-performance.asciidoc b/docs/processing-performance.asciidoc
deleted file mode 100644
index 8ab84d60f38..00000000000
--- a/docs/processing-performance.asciidoc
+++ /dev/null
@@ -1,44 +0,0 @@
-[[processing-and-performance]]
-=== Processing and performance
-
-APM Server performance depends on a number of factors: memory and CPU available,
-network latency, transaction sizes, workload patterns,
-agent and server settings, versions, and protocol.
-
-Let's look at a simple example that makes the following assumptions:
-
-* The load is generated in the same region as where APM Server and {es} are deployed.
-* We're using the default settings in cloud.
-* A small number of agents are reporting.
-
-This leaves us with relevant variables like payload and instance sizes.
-See the table below for approximations.
-As a reminder, events are
-<> and
-<>.
-
-[options="header"]
-|=======================================================================
-|Transaction/Instance |512 MB Instance |2 GB Instance |8 GB Instance
-|Small transactions
-
-_5 spans with 5 stack frames each_ |600 events/second |1200 events/second |4800 events/second
-|Medium transactions
-
-_15 spans with 15 stack frames each_ |300 events/second |600 events/second |2400 events/second
-|Large transactions
-
-_30 spans with 30 stack frames each_ |150 events/second |300 events/second |1400 events/second
-|=======================================================================
-
-In other words, a 512 MB instance can process ~3 MB per second,
-while an 8 GB instance can process ~20 MB per second.
-
-APM Server is CPU bound, so it scales better from 2 GB to 8 GB than it does from 512 MB to 2 GB.
-This is because larger instance types in {ecloud} come with much more computing power.
-
-Don't forget that the APM Server is stateless.
-Multiple instances can run in parallel without needing to know about each other.
-This means that with a properly sized {es} instance, APM Server scales out linearly.
-
-NOTE: RUM deserves special consideration. The RUM agent runs in browsers, and there can be many thousands of browsers reporting to an APM Server with highly variable network latency.
\ No newline at end of file
diff --git a/docs/release-notes.asciidoc b/docs/release-notes.asciidoc
deleted file mode 100644
index 1069454dccb..00000000000
--- a/docs/release-notes.asciidoc
+++ /dev/null
@@ -1,17 +0,0 @@
-:root-dir: ../
-
-[[release-notes]]
-= Release notes
-:issue: https://github.com/elastic/apm-server/issues/
-:pull: https://github.com/elastic/apm-server/pull/
-
-This section summarizes the changes in each release.
-
-* <>
-* <>
-* <>
-* <>
-
-Looking for a previous version? See the {apm-guide-7x}/release-notes.html[7.x release notes].
-
-include::{root-dir}/CHANGELOG.asciidoc[]
diff --git a/docs/sampling.asciidoc b/docs/sampling.asciidoc
deleted file mode 100644
index 9fc030ea39c..00000000000
--- a/docs/sampling.asciidoc
+++ /dev/null
@@ -1,201 +0,0 @@
-[[sampling]]
-=== Transaction sampling
-
-Distributed tracing can generate a substantial amount of data.
-More data can mean higher costs and more noise.
-Sampling aims to lower the amount of data ingested and the effort required to analyze that data --
-all while still making it easy to find anomalous patterns in your applications, detect outages, track errors,
-and lower mean time to recovery (MTTR).
-
-Elastic APM supports two types of sampling:
-
-* <>
-* <>
-
-[float]
-[[head-based-sampling]]
-==== Head-based sampling
-
-In head-based sampling, the sampling decision for each trace is made when the trace is initiated.
-Each trace has a defined and equal probability of being sampled.
-
-For example, a sampling value of `.2` indicates a transaction sample rate of `20%`.
-This means that only `20%` of traces will send and retain all of their associated information.
-The remaining traces will drop contextual information to reduce the transfer and storage size of the trace.
-
-Head-based sampling is quick and easy to set up.
-Its downside is that it's entirely random -- interesting
-data might be discarded purely due to chance.
-
-See <> to get started.
-
-**Distributed tracing with head-based sampling**
-
-In a distributed trace, the sampling decision is still made when the trace is initiated.
-Each subsequent service respects the initial service's sampling decision, regardless of its configured sample rate;
-the result is a sampling percentage that matches the initiating service.
-
-In this example, `Service A` initiates four transactions and has a sample rate of `.5` (`50%`).
-The sample rates of `Service B` and `Service C` are ignored.
-
-image::./images/dt-sampling-example-1.png[Distributed tracing and head based sampling example one]
-
-In this example, `Service A` initiates four transactions and has a sample rate of `1` (`100%`).
-Again, the sample rates of `Service B` and `Service C` are ignored.
-
-image::./images/dt-sampling-example-2.png[Distributed tracing and head based sampling example two]
-
-[float]
-[[tail-based-sampling]]
-==== Tail-based sampling
-
-In tail-based sampling, the sampling decision for each trace is made after the trace has completed.
-This means all traces will be analyzed against a set of rules, or policies, which will determine the rate at which they are sampled.
-
-Unlike head-based sampling, each trace does not have an equal probability of being sampled.
-Because slower traces are more interesting than faster ones, tail-based sampling uses weighted random sampling -- so
-traces with a longer root transaction duration are more likely to be sampled than traces with a fast root transaction duration.
-
-A downside of tail-based sampling is that it results in more data being sent from APM agents to the APM Server.
-The APM Server will therefore use more CPU, memory, and disk than with head-based sampling.
-However, because the tail-based sampling decision happens in APM Server, there is less data to transfer from APM Server to {es}.
-So running APM Server close to your instrumented services can reduce any increase in transfer costs that tail-based sampling brings.
-
-See <> to get started.
-
-**Distributed tracing with tail-based sampling**
-
-With tail-based sampling, all traces are observed and a sampling decision is only made once a trace completes.
-
-In this example, `Service A` initiates four transactions.
-If our sample rate is `.5` (`50%`) for traces with a `success` outcome,
-and `1` (`100%`) for traces with a `failure` outcome,
-the sampled traces would look something like this:
-
-image::./images/dt-sampling-example-3.png[Distributed tracing and tail based sampling example one]
-
-[float]
-=== Sampled data and visualizations
-
-A sampled trace retains all data associated with it.
-A non-sampled trace drops all <> and <> data^1^.
-Regardless of the sampling decision, all traces retain <> data.
-
-Some visualizations in the {apm-app}, like latency, are powered by aggregated transaction and span <>.
-Metrics are based on sampled traces and weighted by the inverse sampling rate.
-For example, if you sample at 5%, each trace is counted as 20.
-As a result, as the variance of latency increases, or the sampling rate decreases, your level of error will increase.
-
-^1^ Real User Monitoring (RUM) traces are an exception to this rule.
-The {kib} apps that utilize RUM data depend on transaction events,
-so non-sampled RUM traces retain transaction data -- only span data is dropped.
-
-[float]
-=== Sample rates
-
-What's the best sampling rate? Unfortunately, there isn't one.
-Sampling is dependent on your data, the throughput of your application, data retention policies, and other factors.
-Any sampling rate from `.1%` to `100%` could be considered normal.
-You'll likely decide on a unique sample rate for different scenarios.
-Here are some examples:
-
-* Services with considerably more traffic than others might be safe to sample at lower rates
-* Routes that are more important than others might be sampled at higher rates
-* A production service environment might warrant a higher sampling rate than a development environment
-* Failed trace outcomes might be more interesting than successful traces -- thus requiring a higher sample rate
-
-Regardless of the above, cost-conscious customers are likely to be fine with a lower sample rate.
-
-[[configure-head-based-sampling]]
-==== Configure head-based sampling
-
-There are three ways to adjust the head-based sampling rate of your APM agents:
-
-===== Dynamic configuration
-
-The transaction sample rate can be changed dynamically (no redeployment necessary) on a per-service and per-environment
-basis with {kibana-ref}/agent-configuration.html[{apm-agent} Configuration] in {kib}.
-
-===== {kib} API configuration
-
-{apm-agent} configuration exposes an API that can be used to programmatically change
-your agents' sampling rate.
-An example is provided in the {kibana-ref}/agent-config-api.html[Agent configuration API reference].
-
-===== {apm-agent} configuration
-
-Each agent provides a configuration value used to set the transaction sample rate.
-See the relevant agent's documentation for more details:
-
-* Go: {apm-go-ref-v}/configuration.html#config-transaction-sample-rate[`ELASTIC_APM_TRANSACTION_SAMPLE_RATE`]
-* Java: {apm-java-ref-v}/config-core.html#config-transaction-sample-rate[`transaction_sample_rate`]
-* .NET: {apm-dotnet-ref-v}/config-core.html#config-transaction-sample-rate[`TransactionSampleRate`]
-* Node.js: {apm-node-ref-v}/configuration.html#transaction-sample-rate[`transactionSampleRate`]
-* PHP: {apm-php-ref-v}/configuration-reference.html#config-transaction-sample-rate[`transaction_sample_rate`]
-* Python: {apm-py-ref-v}/configuration.html#config-transaction-sample-rate[`transaction_sample_rate`]
-* Ruby: {apm-ruby-ref-v}/configuration.html#config-transaction-sample-rate[`transaction_sample_rate`]
-
-[[configure-tail-based-sampling]]
-==== Configure tail-based sampling
-
-Enable tail-based sampling in the <>.
-When enabled, trace events are mapped to sampling policies.
-Each sampling policy must specify a sample rate, and can optionally specify other conditions.
-All of the policy conditions must be true for a trace event to match it.
-
-Trace events are matched to policies in the order specified.
-Each policy list must conclude with a default policy -- one that only specifies a sample rate.
-This default policy is used to catch remaining trace events that don't match a stricter policy.
-Requiring this default policy ensures that traces are only dropped intentionally.
-If you enable tail-based sampling and send a transaction that does not match any of the policies,
-APM Server will reject the transaction with the error `no matching policy`.
-
-IMPORTANT: As of version `8.3.1`, APM Server implements a default storage limit of 3 GB.
-However, due to how the limit is calculated and enforced, the actual disk space used may still grow slightly
-over the limit.
-
-===== Example configuration
-
-This example defines three tail-based sampling policies:
-
-[source, yml]
-----
-- sample_rate: 1 <1>
-  service.environment: production
-  trace.name: "GET /very_important_route"
-- sample_rate: .01 <2>
-  service.environment: production
-  trace.name: "GET /not_important_route"
-- sample_rate: .1 <3>
-----
-<1> Samples 100% of traces in `production` with the trace name `"GET /very_important_route"`
-<2> Samples 1% of traces in `production` with the trace name `"GET /not_important_route"`
-<3> Default policy to sample all remaining traces at 10%, e.g. traces in a different environment, like `dev`,
-or traces with any other name
-
-===== Configuration reference
-
-:input-type: tbs
-**Top-level tail-based sampling settings:**
-
-// This looks like the root service name/env, trace name/env, and trace outcome
-
-[cols="2*>
-* <>
-* <>
-* <>
-
-As soon as authenticated communication is enabled,
-requests without a valid token or API key will be denied.
-If both API keys and a secret token are enabled, APM agents can choose whichever mechanism they support.
-
-In some use cases, like when an {apm-agent} is running on the client side,
-authentication is not possible. See <> for more information.
-
-[[agent-tls]]
-=== {apm-agent} TLS communication
-
-TLS is disabled by default.
-When TLS is enabled for APM Server inbound communication, agents will verify the identity
-of the APM Server by authenticating its certificate.
-
-Enable TLS in the <>; a certificate and corresponding private key are required.
-The certificate and private key can either be issued by a trusted certificate authority (CA)
-or be <>.
-
-[float]
-[[agent-self-sign]]
-=== Use a self-signed certificate
-
-[float]
-[[agent-self-sign-1]]
-==== Step 1: Create a self-signed certificate
-
-The {es} distribution offers the `certutil` tool for the creation of self-signed certificates:
-
-1. Create a CA: `./bin/elasticsearch-certutil ca --pem`. You'll be prompted to enter the desired
-location of the output zip archive containing the certificate and the private key.
-2. Extract the contents of the CA archive.
-3. Create the self-signed certificate: `./bin/elasticsearch-certutil cert --ca-cert <ca-directory>/ca.crt --ca-key <ca-directory>/ca.key --pem --name localhost`
-4. Extract the certificate and key from the resulting zip archive.
-
-[float]
-[[agent-self-sign-2]]
-==== Step 2: Configure the APM integration
-
-Configure the APM integration to point to the extracted certificate and key.
-
-[float]
-[[agent-self-sign-3]]
-==== Step 3: Configure APM agents
-
-When the APM server uses a certificate that is not chained to a publicly-trusted certificate
-(e.g. self-signed), additional configuration is required in the {apm-agent}:
-
-* *Go agent*: certificate pinning through {apm-go-ref}/configuration.html#config-server-cert[`ELASTIC_APM_SERVER_CERT`]
-* *Python agent*: certificate pinning through {apm-py-ref}/configuration.html#config-server-cert[`server_cert`]
-* *Ruby agent*: certificate pinning through {apm-ruby-ref}/configuration.html#config-ssl-ca-cert[`server_ca_cert`]
-* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-server-cert[`ServerCert`]
-* *Node.js agent*: custom CA setting through {apm-node-ref}/configuration.html#server-ca-cert-file[`serverCaCertFile`]
-* *Java agent*: adding the certificate to the JVM `trustStore`.
-See {apm-java-ref}/ssl-configuration.html#ssl-server-authentication[APM Server authentication] for more details.
-
-We do not recommend disabling {apm-agent} verification of the server's certificate, but it is possible:
-
-* *Go agent*: {apm-go-ref}/configuration.html#config-verify-server-cert[`ELASTIC_APM_VERIFY_SERVER_CERT`]
-* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-verify-server-cert[`VerifyServerCert`]
-* *Java agent*: {apm-java-ref}/config-reporter.html#config-verify-server-cert[`verify_server_cert`]
-* *PHP agent*: {apm-php-ref-v}/configuration-reference.html#config-verify-server-cert[`verify_server_cert`]
-* *Python agent*: {apm-py-ref}/configuration.html#config-verify-server-cert[`verify_server_cert`]
-* *Ruby agent*: {apm-ruby-ref}/configuration.html#config-verify-server-cert[`verify_server_cert`]
-* *Node.js agent*: {apm-node-ref}/configuration.html#validate-server-cert[`verifyServerCert`]
-
-[float]
-[[agent-client-cert]]
-=== Client certificate authentication
-
-APM Server does not require agents to provide a certificate for authentication,
-and there is no dedicated support for SSL/TLS client certificate authentication in Elastic's backend agents.
-
-[[api-key]]
-=== API keys
-
-IMPORTANT: API keys are sent as plain text,
-so they only provide security when used in combination with <>.
-
-Enable API key authorization in the <>.
-When enabled, API keys are used to authorize requests to the APM Server.
-
-You can assign one or more unique privileges to each API key:
-
-* *Agent configuration* (`config_agent:read`): Required for agents to read
-{kibana-ref}/agent-configuration.html[Agent configuration remotely].
-* *Ingest* (`event:write`): Required for ingesting agent events.
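-
-For illustration, this is roughly what an authorized ingest request looks like on the wire,
-assuming an API key with the `event:write` privilege; the Base64-encoded credential and the
-`events.ndjson` payload file are placeholders.
-
-[source,bash]
-----
-curl -X POST https://apm_server_url:8200/intake/v2/events \
-  -H "Authorization: ApiKey BASE64_ENCODED_API_KEY" \
-  -H "Content-Type: application/x-ndjson" \
-  --data-binary @events.ndjson
-----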
-
-To secure the communication between APM agents and the APM Server with API keys,
-make sure <> is enabled, then complete these steps:
-
-. <>
-. <>
-. <>
-. <>
-
-[[enable-api-key]]
-[float]
-=== Enable API keys
-
-Enable API key authorization in the <>.
-You should also set a limit on the number of unique API keys that APM Server allows per minute;
-this value should be the number of unique API keys configured in your monitored services.
-
-[[create-api-key-user]]
-[float]
-=== Create an API key user in {kib}
-
-API keys can only have the same or lower access rights than the user that creates them.
-Instead of using a superuser account to create API keys, you can create a role with the minimum required
-privileges.
-
-The user creating an {apm-agent} API key must have at least the `manage_own_api_key` cluster privilege
-and the APM application-level privileges that they wish to grant.
-In addition, when creating an API key from the {apm-app},
-you'll need the appropriate {kib} Space and Feature privileges.
-
-The example below uses the {kib} {kibana-ref}/role-management-api.html[role management API]
-to create a role named `apm_agent_key_role`.
-
-[source,js]
-----
-POST /_security/role/apm_agent_key_role
-{
-   "cluster": [ "manage_own_api_key" ],
-   "applications": [
-      {
-         "application":"apm",
-         "privileges":[
-            "event:write",
-            "config_agent:read",
-            "sourcemap:write"
-         ],
-         "resources":[ "*" ]
-      },
-      {
-         "application":"kibana-.kibana",
-         "privileges":[ "feature_apm.all" ],
-         "resources":[ "space:default" ] <1>
-      }
-   ]
-}
-----
-<1> This example assigns privileges for the default space.
-
-Assign the newly created `apm_agent_key_role` role to any user that wishes to create {apm-agent} API keys.
-
-[[create-an-api-key]]
-[float]
-=== Create an API key in the {apm-app}
-
-The {apm-app} has a built-in workflow that you can use to easily create and view {apm-agent} API keys.
-Only API keys created in the {apm-app} will show up here.
-
-Using a superuser account, or a user with the role created in the previous step,
-open {kib} and navigate to **{observability}** > **APM** > **Settings** > **Agent keys**.
-Enter a name for your API key and select at least one privilege.
-
-For example, to create an API key that can be used to ingest APM events
-and read agent central configuration, select `config_agent:read` and `event:write`.
-
-IMPORTANT: The `sourcemap:write` privilege is outdated and will be removed in a future release.
-To learn more about the privileges required to upload a source map,
-see the {kibana-ref}/rum-sourcemap-api.html[RUM source map API].
-
-// lint ignore apm-agent
-Click **Create APM Agent key** and copy the Base64 encoded API key.
-You will need this for the next step, and you will not be able to view it again.
-
-[role="screenshot"]
-image::images/apm-ui-api-key.png[{apm-app} API key]
-
-[[agent-api-key]]
-[float]
-=== Set the API key in your APM agents
-
-You can now apply your newly created API keys in the configuration of each of your APM agents.
-See the relevant agent documentation for additional information:
-
-// Not relevant for RUM and iOS
-* *Go agent*: {apm-go-ref}/configuration.html#config-api-key[`ELASTIC_APM_API_KEY`]
-* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-api-key[`ApiKey`]
-* *Java agent*: {apm-java-ref}/config-reporter.html#config-api-key[`api_key`]
-* *Node.js agent*: {apm-node-ref}/configuration.html#api-key[`apiKey`]
-* *PHP agent*: {apm-php-ref-v}/configuration-reference.html#config-api-key[`api_key`]
-* *Python agent*: {apm-py-ref}/configuration.html#config-api-key[`api_key`]
-* *Ruby agent*: {apm-ruby-ref}/configuration.html#config-api-key[`api_key`]
-
-[[secret-token]]
-=== Secret token
-
-IMPORTANT: Secret tokens are sent as plain text,
-so they only provide security when used in combination with <>.
-
-Define a secret token in the <>.
-When defined, secret tokens are used to authorize requests to the APM Server.
-Both the {apm-agent} and APM integration must be configured with the same secret token for the request to be accepted.
-
-To secure the communication between APM agents and the APM Server with a secret token:
-
-. Make sure <> is enabled
-. <>
-. <>
-
-NOTE: Secret tokens are not applicable for the RUM Agent,
-as there is no way to prevent them from being publicly exposed.
-
-[float]
-[[create-secret-token]]
-=== Create a secret token
-
-Create or update a secret token in {fleet}.
-
-include::./input-apm.asciidoc[tag=edit-integration-settings]
-+
-. Navigate to **Agent authorization** > **Secret token** and set the value of your token.
-. Click **Save integration**. The APM Server will restart before the change takes effect.
-
-[[configure-secret-token]]
-[float]
-=== Configure the secret token in your APM agents
-
-Each Elastic {apm-agent} has a configuration option to set the value of the secret token:
-
-* *Go agent*: {apm-go-ref}/configuration.html#config-secret-token[`ELASTIC_APM_SECRET_TOKEN`]
-* *iOS agent*: {apm-ios-ref-v}/configuration.html#secretToken[`secretToken`]
-* *Java agent*: {apm-java-ref}/config-reporter.html#config-secret-token[`secret_token`]
-* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-secret-token[`ELASTIC_APM_SECRET_TOKEN`]
-* *Node.js agent*: {apm-node-ref}/configuration.html#secret-token[`secretToken`]
-* *PHP agent*: {apm-php-ref-v}/configuration-reference.html#config-secret-token[`secret_token`]
-* *Python agent*: {apm-py-ref}/configuration.html#config-secret-token[`secret_token`]
-* *Ruby agent*: {apm-ruby-ref}/configuration.html#config-secret-token[`secret_token`]
-
-In addition to setting the secret token, ensure the configured server URL uses `HTTPS` instead of `HTTP`:
-
-* *Go agent*: {apm-go-ref}/configuration.html#config-server-url[`ELASTIC_APM_SERVER_URL`]
-* *Java agent*: {apm-java-ref}/config-reporter.html#config-server-urls[`server_urls`]
-* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-server-url[`ServerUrl`]
-* *Node.js agent*: {apm-node-ref}/configuration.html#server-url[`serverUrl`]
-* *PHP agent*: {apm-php-ref-v}/configuration-reference.html#config-server-url[`server_url`]
-* *Python agent*: {apm-py-ref}/configuration.html#config-server-url[`server_url`]
-* *Ruby agent*: {apm-ruby-ref}/configuration.html#config-server-url[`server_url`]
-
-
-[[anonymous-auth]]
-=== Anonymous authentication
-
-Elastic APM agents can send unauthenticated (anonymous) events to the APM Server.
-An event is considered to be anonymous if no authentication token can be extracted from the incoming request.
-By default, these anonymous requests are rejected and an authentication error is returned. - -In some cases, however, it makes sense to allow anonymous requests -- for -example, when using the Real User Monitoring (RUM) agent running in a browser, -or the iOS/Swift agent running in a user application, -it is not possible to hide or protect a secret token or API key. -Thus, enabling anonymous authentication is required to ingest client-side APM data. - -[float] -[[anonymous-auth-config]] -=== Configuring anonymous authentication - -There are a few configuration variables that can mitigate the impact of malicious requests to an -unauthenticated APM Server endpoint. - -Use the **Allowed anonymous agents** and **Allowed anonymous services** configs to ensure that the -`agent.name` and `service.name` of each incoming request match a specified list. - -Additionally, the APM Server can rate-limit unauthenticated requests based on the client IP address -(`client.ip`) of the request. -This allows you to specify the maximum number of requests allowed per unique IP address, per second. - -[float] -[[derive-client-ip]] -=== Deriving an incoming request's `client.ip` address - -The remote IP address of an incoming request might be different -from the end-user's actual IP address, for example, because of a proxy. For this reason, -the APM Server attempts to derive the IP address of an incoming request from HTTP headers. -The supported headers are parsed in the following order: - -1. `Forwarded` -2. `X-Real-Ip` -3. `X-Forwarded-For` - -If none of these headers are present, the remote address for the incoming request is used. - -[float] -[[derive-client-ip-concerns]] -==== Using a reverse proxy or load balancer - -HTTP headers are easily modified; -it's possible for anyone to spoof the derived `client.ip` value by changing or setting, -for example, the value of the `X-Forwarded-For` header. -For this reason, if any of your clients are not trusted, -we recommend setting up a reverse proxy or load balancer in front of the APM Server. - -Using a proxy allows you to clear any existing IP-forwarding HTTP headers, -and replace them with one set by the proxy. -This prevents malicious users from cycling spoofed IP addresses to bypass the -APM Server's rate limiting feature. diff --git a/docs/shared/distributed-trace-receive/distributed-trace-receive-widget.asciidoc b/docs/shared/distributed-trace-receive/distributed-trace-receive-widget.asciidoc deleted file mode 100644 index bc0f1f18da8..00000000000 --- a/docs/shared/distributed-trace-receive/distributed-trace-receive-widget.asciidoc +++ /dev/null @@ -1,150 +0,0 @@ -// The Java agent defaults to visible. -// Change with `aria-selected="false"` and `hidden=""` -++++ -
-++++
-
-include::distributed-trace-receive.asciidoc[tag=java]
-
-++++
-++++ \ No newline at end of file diff --git a/docs/shared/distributed-trace-receive/distributed-trace-receive.asciidoc b/docs/shared/distributed-trace-receive/distributed-trace-receive.asciidoc deleted file mode 100644 index 60dcbd1dd11..00000000000 --- a/docs/shared/distributed-trace-receive/distributed-trace-receive.asciidoc +++ /dev/null @@ -1,208 +0,0 @@ -// tag::go[] - -// Need help with this example - -1. Parse the incoming `TraceContext` with -https://godoc.org/go.elastic.co/apm/module/apmhttp#ParseTraceparentHeader[`ParseTraceparentHeader`] or -https://godoc.org/go.elastic.co/apm/module/apmhttp#ParseTracestateHeader[`ParseTracestateHeader`]. - -2. Start a new transaction or span as a child of the incoming transaction with -{apm-go-ref}/api.html#tracer-api-start-transaction-options[`StartTransactionOptions`] or -{apm-go-ref}/api.html#transaction-start-span-options[`StartSpanOptions`]. - -Example: - -[source,go] ----- -// Receive incoming TraceContext -traceContext, _ := apmhttp.ParseTraceparentHeader(r.Header.Get("Traceparent")) <1> -traceContext.State, _ = apmhttp.ParseTracestateHeader(r.Header["Tracestate"]...) <2> - -opts := apm.TransactionOptions{ - TraceContext: traceContext, <3> -} -transaction := apm.DefaultTracer.StartTransactionOptions("GET /", "request", opts) <4> ----- -<1> Parse the `TraceParent` header -<2> Parse the `Tracestate` header -<3> Set the parent trace context -<4> Start a new transaction as a child of the received `TraceContext` - -// end::go[] - -// *************************************************** -// *************************************************** - -// tag::ios[] - -experimental::[] - -_Not applicable._ - -// end::ios[] - -// *************************************************** -// *************************************************** - -// tag::java[] - -1. Create a transaction as a child of the incoming transaction with -{apm-java-ref}/public-api.html#api-transaction-inject-trace-headers[`startTransactionWithRemoteParent()`]. - -2. Start and name the transaction with {apm-java-ref}/public-api.html#api-transaction-activate[`activate()`] -and {apm-java-ref}/public-api.html#api-set-name[`setName()`]. 
- -Example: - -[source,java] ----- -// Hook into a callback provided by the framework that is called on incoming requests -public Response onIncomingRequest(Request request) throws Exception { - // creates a transaction representing the server-side handling of the request - Transaction transaction = ElasticApm.startTransactionWithRemoteParent(request::getHeader, request::getHeaders); <1> - try (final Scope scope = transaction.activate()) { <2> - String name = "a useful name like ClassName#methodName where the request is handled"; - transaction.setName(name); <3> - transaction.setType(Transaction.TYPE_REQUEST); <4> - return request.handle(); - } catch (Exception e) { - transaction.captureException(e); - throw e; - } finally { - transaction.end(); <5> - } -} ----- -<1> Create a transaction as the child of a remote parent -<2> Activate the transaction -<3> Name the transaction -<4> Add a transaction type -<5> Eventually, end the transaction - -// end::java[] - -// *************************************************** -// *************************************************** - -// tag::net[] - -Deserialize the incoming distributed tracing context, and pass it to any of the -{apm-dotnet-ref}/public-api.html#api-start-transaction[`StartTransaction`] or -{apm-dotnet-ref}/public-api.html#convenient-capture-transaction[`CaptureTransaction`] APIs -- -all of which have an optional `DistributedTracingData` parameter. -This will create a new transaction or span as a child of the incoming trace context. - -Example starting a new transaction: - -[source,csharp] ----- -var transaction2 = Agent.Tracer.StartTransaction("Transaction2", "TestTransaction", - DistributedTracingData.TryDeserializeFromString(serializedDistributedTracingData)); ----- - -// end::net[] - -// *************************************************** -// *************************************************** - -// tag::node[] - -1. Decode and store the `traceparent` in the receiving service. - -2. Pass in the `traceparent` as the `childOf` option to manually start a new transaction -as a child of the received `traceparent` with -{apm-node-ref}/agent-api.html#apm-start-transaction[`apm.startTransaction()`]. - -Example receiving a `traceparent` over raw UDP: - -[source,js] ----- -const traceparent = readTraceparentFromUDPPacket() <1> -agent.startTransaction('my-service-b-transaction', { childOf: traceparent }) <2> ----- -<1> Read the `traceparent` from the incoming request. -<2> Use the `traceparent` to initialize a new transaction that is a child of the original `traceparent`. - -// end::node[] - -// *************************************************** -// *************************************************** - -// tag::php[] - -1. Receive the distributed tracing data on the server side. - -2. Begin a new transaction using the agent's public API. For example, use {apm-php-ref-v}/public-api.html#api-elasticapm-class-begin-current-transaction[`ElasticApm::beginCurrentTransaction`] -and pass the received distributed tracing data (serialized as string) as a parameter. -This will create a new transaction as a child of the incoming trace context. - -3. Don't forget to eventually end the transaction on the server side. 
- -Example: - -[source,php] ----- -$receiverTransaction = ElasticApm::beginCurrentTransaction( <1> - 'GET /data-api', - 'data-layer', - /* timestamp */ null, - $distDataAsString <2> -); ----- -<1> Start a new transaction -<2> Pass in the received distributed tracing data (serialized as string) - -Once this new transaction has been created in the receiving service, -you can create child spans, or use any other agent API methods as you typically would. - -// end::php[] - -// *************************************************** -// *************************************************** - -// tag::python[] - -1. Create a `TraceParent` object from a string or HTTP header. - -2. Start a new transaction as a child of the `TraceParent` by passing in a `TraceParent` object. - -Example using HTTP headers: - -[source,python] ----- -parent = elasticapm.trace_parent_from_headers(headers_dict) <1> -client.begin_transaction('processors', trace_parent=parent) <2> ----- -<1> Create a `TraceParent` object from HTTP headers formed as a dictionary -<2> Begin a new transaction as a child of the received `TraceParent` - -TIP: See the {apm-py-ref}/api.html#traceparent-api[`TraceParent` API] for additional examples. -// end::python[] - -// *************************************************** -// *************************************************** - -// tag::ruby[] - -Start a new transaction or span as a child of the incoming transaction or span with -{apm-ruby-ref}/api.html#api-agent-with_transaction[`with_transaction`] or -{apm-ruby-ref}/api.html#api-agent-with_span[`with_span`]. - -Example: - -[source,ruby] ----- -# env being a Rack env -context = ElasticAPM::TraceContext.parse(env: env) <1> - -ElasticAPM.with_transaction("Do things", trace_context: context) do <2> - ElasticAPM.with_span("Do nested thing", trace_context: context) do <3> - end -end ----- -<1> Parse the incoming `TraceContext` -<2> Create a transaction as a child of the incoming `TraceContext` -<3> Create a span as a child of the newly created transaction. `trace_context` is optional here, -as spans are automatically created as a child of their parent's transaction's `TraceContext` when none is passed. - -// end::ruby[] diff --git a/docs/shared/distributed-trace-send/distributed-trace-send-widget.asciidoc b/docs/shared/distributed-trace-send/distributed-trace-send-widget.asciidoc deleted file mode 100644 index 115cf6556ca..00000000000 --- a/docs/shared/distributed-trace-send/distributed-trace-send-widget.asciidoc +++ /dev/null @@ -1,150 +0,0 @@ -// The Java agent defaults to visible. -// Change with `aria-selected="false"` and `hidden=""` -++++ -
-<!-- tab buttons (widget HTML omitted) -->
-++++
-
-include::distributed-trace-send.asciidoc[tag=java]
-
-++++
-<!-- tab panels (widget HTML omitted) -->
-++++ \ No newline at end of file
diff --git a/docs/shared/distributed-trace-send/distributed-trace-send.asciidoc b/docs/shared/distributed-trace-send/distributed-trace-send.asciidoc
deleted file mode 100644
index af3a689f6b0..00000000000
--- a/docs/shared/distributed-trace-send/distributed-trace-send.asciidoc
+++ /dev/null
@@ -1,221 +0,0 @@
-// tag::go[]
-
-1. Start a transaction with
-{apm-go-ref}/api.html#tracer-api-start-transaction[`StartTransaction`] or a span with
-{apm-go-ref}/api.html#transaction-start-span[`StartSpan`].
-
-2. Get the active `TraceContext`.
-
-3. Send the `TraceContext` to the receiving service.
-
-Example:
-
-[source,go]
-----
-transaction := apm.DefaultTracer.StartTransaction("GET /", "request") <1>
-traceContext := transaction.TraceContext() <2>
-
-// Send TraceContext to receiving service
-traceparent := apmhttp.FormatTraceparentHeader(traceContext) <3>
-tracestate := traceContext.State.String()
-----
-<1> Start a transaction
-<2> Get `TraceContext` from the current transaction
-<3> Format the `TraceContext` as a `traceparent` header; the `tracestate` header value comes from `traceContext.State`
-// end::go[]
-
-// ***************************************************
-// ***************************************************
-
-// tag::ios[]
-
-experimental::[]
-
-The agent will automatically inject trace headers into network requests made with `URLSession`, but if you're using a non-standard network library, you may need to inject them manually using the OpenTelemetry APIs:
-
-1. Create a `Setter`
-
-2. Create a `Span` per https://github.com/open-telemetry/opentelemetry-swift/blob/main/Examples/Simple%20Exporter/main.swift#L35[OpenTelemetry standards]
-
-3. Inject the trace context into the header dictionary
-
-4. Follow the procedure of your network library to complete the network request. Make sure to call `span.end()` when the request succeeds or fails.
-
-[source,swift]
-----
-import OpenTelemetryApi
-import OpenTelemetrySdk
-
-struct BasicSetter: Setter { <1>
-    func set(carrier: inout [String: String], key: String, value: String) {
-        carrier[key] = value
-    }
-}
-
-let span : Span = ... <2>
-let setter = BasicSetter()
-let propagator = W3CTraceContextPropagator()
-var headers = [String:String]()
-
-propagator.inject(spanContext: span.context, carrier: &headers, setter:setter) <3>
-
-var request = URLRequest(...)
-request.allHTTPHeaderFields = headers
-... // make network request
-span.end()
----- 
-// end::ios[]
-
-// ***************************************************
-// ***************************************************
-
-// tag::java[]
-
-1. Start a transaction with {apm-java-ref}/public-api.html#api-start-transaction[`startTransaction`],
-or a span with {apm-java-ref}/public-api.html#api-span-start-span[`startSpan`].
-
-2. Inject the `traceparent` header into the request object with
-{apm-java-ref}/public-api.html#api-transaction-inject-trace-headers[`injectTraceHeaders`].
-
-Example of manually instrumenting an RPC framework:
-
-[source,java]
-----
-// Hook into a callback provided by the RPC framework that is called on outgoing requests
-public Response onOutgoingRequest(Request request) throws Exception {
-    Span span = ElasticApm.currentSpan() <1>
-            .startSpan("external", "http", null)
-            .setName(request.getMethod() + " " + request.getHost());
-    try (final Scope scope = span.activate()) {
-        span.injectTraceHeaders((name, value) -> request.addHeader(name, value)); <2>
-        return request.execute();
-    } catch (Exception e) {
-        span.captureException(e);
-        throw e;
-    } finally {
-        span.end(); <3>
-    }
-}
----- 
-<1> Create a span representing an external call
-<2> Inject the `traceparent` header into the request object
-<3> End the span
-
-// end::java[]
-
-// ***************************************************
-// ***************************************************
-
-// tag::net[]
-
-1. Serialize the distributed tracing context of the active transaction or span with
-{apm-dotnet-ref}/public-api.html#api-current-transaction[`CurrentTransaction`] or
-{apm-dotnet-ref}/public-api.html#api-current-span[`CurrentSpan`].
-
-2. Send the serialized context to the receiving service.
-
-Example:
-
-[source,csharp]
-----
-string outgoingDistributedTracingData =
-    (Agent.Tracer.CurrentSpan?.OutgoingDistributedTracingData
-    ?? Agent.Tracer.CurrentTransaction?.OutgoingDistributedTracingData)?.SerializeToString();
-// Now send `outgoingDistributedTracingData` to the receiving service
----- 
-
-// end::net[]
-
-// ***************************************************
-// ***************************************************
-
-// tag::node[]
-
-1. Start a transaction with {apm-node-ref}/agent-api.html#apm-start-transaction[`apm.startTransaction()`],
-or a span with {apm-node-ref}/agent-api.html#apm-start-span[`apm.startSpan()`].
-
-2. Get the serialized `traceparent` string of the started transaction/span with
-{apm-node-ref}/agent-api.html#apm-current-traceparent[`currentTraceparent`].
-
-3. Encode the `traceparent` and send it to the receiving service inside your regular request.
-
-Example using raw UDP to communicate between two services, A and B:
-
-[source,js]
-----
-agent.startTransaction('my-service-a-transaction'); <1>
-const traceparent = agent.currentTraceparent; <2>
-sendMetadata(`traceparent: ${traceparent}\n`); <3>
----- 
-<1> Start a transaction
-<2> Get the current `traceparent`
-<3> Send the `traceparent` to service B
-
-// end::node[]
-
-// ***************************************************
-// ***************************************************
-
-// tag::php[]
-
-1. On the client side (i.e., the side sending the request), get the current distributed tracing context.
-
-2. Serialize the current distributed tracing context to a format supported by the request's transport and send it to the server side (i.e., the side receiving the request).
-
-Example:
-
-[source,php]
-----
-$distDataAsString = ElasticApm::getSerializedCurrentDistributedTracingData(); <1>
----- 
-<1> Get the current distributed tracing data serialized as string
-
-// end::php[]
-
-// ***************************************************
-// ***************************************************
-
-// tag::python[]
-
-1. Start a transaction with {apm-py-ref}/api.html#client-api-begin-transaction[`begin_transaction()`].
-
-2. Get the `trace_parent` of the active transaction.
-
-3. Send the `trace_parent` to the receiving service.
-
-Example:
-
-[source,python]
-----
-client.begin_transaction('new-transaction') <1>
-
-trace_parent_str = elasticapm.get_trace_parent_header() <2>
-
-# Send `trace_parent_str` to another service
----- 
-<1> Start a new transaction
-<2> Get the string representation of the current transaction's `TraceParent` object
-// end::python[]
-
-// ***************************************************
-// ***************************************************
-
-// tag::ruby[]
-
-1. Start a span with {apm-ruby-ref}/api.html#api-agent-with_span[`with_span`].
-
-2. Get the active `TraceContext`.
-
-3. Send the `TraceContext` to the receiving service.
-
-[source,ruby]
-----
-ElasticAPM.with_span "Name" do |span| <1>
-  header = span.trace_context.traceparent.to_header <2>
-  # send the TraceContext Header to a receiving service...
-end
----- 
-<1> Start a span
-<2> Get the `TraceContext`
-
-// end::ruby[]
diff --git a/docs/shared/jaeger/jaeger-widget.asciidoc b/docs/shared/jaeger/jaeger-widget.asciidoc
deleted file mode 100644
index 5902738ca38..00000000000
--- a/docs/shared/jaeger/jaeger-widget.asciidoc
+++ /dev/null
@@ -1,40 +0,0 @@
-++++
-<!-- tab buttons (widget HTML omitted) -->
-++++
-
-include::jaeger.asciidoc[tag=ess]
-
-++++
-<!-- tab panels (widget HTML omitted) -->
-++++ \ No newline at end of file
diff --git a/docs/shared/jaeger/jaeger.asciidoc b/docs/shared/jaeger/jaeger.asciidoc
deleted file mode 100644
index 2e1982b6e2c..00000000000
--- a/docs/shared/jaeger/jaeger.asciidoc
+++ /dev/null
@@ -1,64 +0,0 @@
-// tag::ess[]
-. Log into {ess-console}[{ecloud}] and select your deployment.
-In {kib}, select **Add data**, then search for and select "Elastic APM".
-If the integration is already installed, under the policies tab, select **Actions** > **Edit integration**.
-If the integration has not been installed, select **Add Elastic APM**.
-Copy the URL. If you're using Agent authorization, copy the Secret token as well.
-
-. Configure the APM Integration as a collector for your Jaeger agents.
-+
-As of this writing, the Jaeger agent binary offers the following CLI flags,
-which can be used to enable TLS, output to {ecloud}, and set the APM Integration secret token:
-+
-[source,terminal]
-----
---reporter.grpc.tls.enabled=true
---reporter.grpc.host-port=
---agent.tags="elastic-apm-auth=Bearer "
----- 
-
-TIP: For the equivalent environment variables,
-change all letters to upper-case and replace punctuation with underscores (`_`).
-See the https://www.jaegertracing.io/docs/1.27/cli/[Jaeger CLI flags documentation] for more information.
-
-// end::ess[]
-
-// tag::self-managed[]
-. Configure the APM Integration as a collector for your Jaeger agents.
-In {kib}, select **Add data**, then search for and select "Elastic APM".
-If the integration is already installed, under the policies tab, select **Actions** > **Edit integration**.
-If the integration has not been installed, select **Add Elastic APM**.
-Copy the Host. If you're using Agent authorization, copy the Secret token as well.
-+
-As of this writing, the Jaeger agent binary offers the `--reporter.grpc.host-port` CLI flag.
-Use this to define the host and port that the APM Integration is listening on:
-+
-[source,terminal]
-----
---reporter.grpc.host-port=
----- 
-
-. (Optional) Enable encryption
-+
-When TLS is enabled for the APM Integration, Jaeger agents must also enable TLS communication:
-+
-[source,terminal]
-----
---reporter.grpc.tls.enabled=true
----- 
-
-. (Optional) Enable token-based authorization
-+
-A secret token or API key can be used to ensure only authorized Jaeger agents can send data to the APM Integration.
-When enabled, use an agent-level tag to authorize Jaeger agent communication with the APM Server:
-+
-[source,terminal]
-----
---agent.tags="elastic-apm-auth=Bearer "
----- 
-
-TIP: For the equivalent environment variables,
-change all letters to upper-case and replace punctuation with underscores (`_`).
-See the https://www.jaegertracing.io/docs/1.27/cli/[Jaeger CLI flags documentation] for more information.
-
-// end::self-managed[]
diff --git a/docs/shared/open-kibana/open-kibana-widget.asciidoc b/docs/shared/open-kibana/open-kibana-widget.asciidoc
deleted file mode 100644
index 1947f97b537..00000000000
--- a/docs/shared/open-kibana/open-kibana-widget.asciidoc
+++ /dev/null
@@ -1,40 +0,0 @@
-++++
-<!-- tab buttons (widget HTML omitted) -->
-++++
-
-include::open-kibana.asciidoc[tag=cloud]
-
-++++
-<!-- tab panels (widget HTML omitted) -->
-++++ \ No newline at end of file diff --git a/docs/shared/open-kibana/open-kibana.asciidoc b/docs/shared/open-kibana/open-kibana.asciidoc deleted file mode 100644 index b1665ea5e9e..00000000000 --- a/docs/shared/open-kibana/open-kibana.asciidoc +++ /dev/null @@ -1,10 +0,0 @@ -// tag::cloud[] -. https://cloud.elastic.co/[Log in] to your {ecloud} account. - -. Navigate to the {kib} endpoint in your deployment. -// end::cloud[] - -// tag::self-managed[] -Point your browser to http://localhost:5601[http://localhost:5601], replacing -`localhost` with the name of the {kib} host. -// end::self-managed[] \ No newline at end of file diff --git a/docs/source-map-how-to.asciidoc b/docs/source-map-how-to.asciidoc deleted file mode 100644 index a0bc2a4675b..00000000000 --- a/docs/source-map-how-to.asciidoc +++ /dev/null @@ -1,166 +0,0 @@ -[[source-map-how-to]] -=== Create and upload source maps (RUM) - -Minifying JavaScript bundles in production is a common practice; -it can greatly improve the load time and network latency of your applications. -The problem with minifying code is that it can be hard to debug. - -For best results, uploading source maps should become a part of your deployment procedure, -and not something you only do when you see unhelpful errors. -That's because uploading source maps after errors happen won't make old errors magically readable — -errors must occur again for source mapping to occur. - -Here's an example of an exception stack trace in the {apm-app} when using minified code. -As you can see, it's not very helpful. - -[role="screenshot"] -image::images/source-map-before.png[{apm-app} without source mapping] - -With a source map, minified files are mapped back to the original source code, -allowing you to maintain the speed advantage of minified code, -without losing the ability to quickly and easily debug your application. -Here's the same example as before, but with a source map uploaded and applied: - -[role="screenshot"] -image::images/source-map-after.png[{apm-app} with source mapping] - -Follow the steps below to enable source mapping your error stack traces in the {apm-app}: - -* <> -* <> -* <> - -[float] -[[source-map-rum-initialize]] -=== Initialize the RUM Agent - -Set the service name and version of your application when initializing the RUM Agent. -To make uploading subsequent source maps easier, the `serviceVersion` you choose might be the -`version` from your `package.json`. For example: - -[source,js] ----- -import { init as initApm } from '@elastic/apm-rum' -const serviceVersion = require("./package.json").version - -const apm = initApm({ - serviceName: 'myService', - serviceVersion: serviceVersion -}) ----- - -Or, `serviceVersion` could be a git commit reference. For example: - -[source,js] ----- -const git = require('git-rev-sync') -const serviceVersion = git.short() ----- - -It can also be any other unique string that indicates a specific version of your application. -The APM integration uses the service name and version to match the correct source map file to each stack trace. - -[float] -[[source-map-rum-generate]] -=== Generate a source map - -To be compatible with Elastic APM, source maps must follow the -https://sourcemaps.info/spec.html[source map revision 3 proposal spec]. - -Source maps can be generated and configured in many different ways. -For example, parcel automatically generates source maps by default. 
-If you're using webpack, some configuration may be needed to generate a source map:
-
-[source,js]
-----
-const webpack = require('webpack')
-const path = require('path')
-const serviceVersion = require("./package.json").version <1>
-const TerserPlugin = require('terser-webpack-plugin');
-module.exports = {
-  entry: 'app.js',
-  output: {
-    filename: 'app.min.js',
-    path: path.resolve(__dirname, 'dist') // webpack requires an absolute output path
-  },
-  devtool: 'source-map',
-  plugins: [
-    new webpack.DefinePlugin({'serviceVersion': JSON.stringify(serviceVersion)}),
-    new TerserPlugin({
-      sourceMap: true
-    })
-  ]
-}
----- 
-<1> If you're using a different method of defining `serviceVersion`, you can set it here.
-
-[float]
-[[source-map-rum-upload]]
-=== Upload the source map to {kib}
-
-{kib} exposes a {kibana-ref}/rum-sourcemap-api.html[source map endpoint] for uploading source maps.
-Source maps can be uploaded as a string, or as a file upload.
-Before uploading a source map, ensure that RUM support is enabled in the APM integration.
-
-Let's look at two different ways to upload a source map: curl and a custom application.
-Each example includes the four fields necessary for APM Server to later map minified code to its source:
-
-* `service_name` - Should match the `serviceName` from step one
-* `service_version` - Should match the `serviceVersion` from step one
-* `bundle_filepath` - The absolute path of the final bundle as used in the web application
-* `sourcemap` - The location of the source map.
-If you have multiple source maps, you'll need to upload each individually.
-
-[float]
-[[source-map-curl]]
-==== Upload via curl
-
-Here's an example curl request that uploads the source map file created in the previous step.
-This request uses an API key for authentication.
-
-[source,console]
-----
-SERVICEVERSION=`node -e "console.log(require('./package.json').version);"` && \ <1>
-curl -X POST "http://localhost:5601/api/apm/sourcemaps" \
--H 'Content-Type: multipart/form-data' \
--H 'kbn-xsrf: true' \
--H "Authorization: ApiKey ${YOUR_API_KEY}" \ <2>
--F 'service_name=foo' \
--F "service_version=$SERVICEVERSION" \
--F 'bundle_filepath=/test/e2e/general-usecase/app.min.js' \
--F 'sourcemap=@./dist/app.min.js.map'
----- 
-<1> This example uses the version from `package.json`; note the double quotes so the shell expands `$SERVICEVERSION` and `${YOUR_API_KEY}`
-<2> The API key used here needs to have appropriate {kibana-ref}/rum-sourcemap-api.html[privileges]
-
-[float]
-[[source-map-custom-app]]
-==== Upload via a custom app
-
-To ensure uploading source maps becomes a part of your deployment process,
-consider automating it with a custom application.
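-For instance, if your deploy tooling is Python-based, the upload can be scripted with the `requests` library.
-The following is a minimal sketch, not an official client; the service name, version, bundle path, and API key
-are placeholder values mirroring the curl example above:
-
-[source,python]
-----
-import requests
-
-SERVICE_VERSION = "1.2.3"  # e.g. read from package.json or a git commit hash
-
-with open("./dist/app.min.js.map", "rb") as sourcemap:
-    resp = requests.post(
-        "http://localhost:5601/api/apm/sourcemaps",
-        headers={
-            "kbn-xsrf": "true",
-            "Authorization": "ApiKey YOUR_API_KEY",  # placeholder; needs the privileges noted above
-        },
-        data={
-            "service_name": "foo",
-            "service_version": SERVICE_VERSION,
-            "bundle_filepath": "/test/e2e/general-usecase/app.min.js",
-        },
-        # requests builds the multipart/form-data body (and boundary) itself
-        files={"sourcemap": sourcemap},
-    )
-resp.raise_for_status()
----- 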
-And here's an example Node.js application that uploads the source map file created in the previous step:
-
-[source,js]
-----
-console.log('Uploading sourcemaps!')
-var fs = require('fs')
-var request = require('request')
-var filepath = './dist/app.min.js.map'
-var formData = {
-  service_name: 'service-name',
-  service_version: require('./package.json').version, // Or use 'git-rev-sync' for a git commit hash
-  bundle_filepath: 'http://localhost/app.min.js',
-  sourcemap: fs.createReadStream(filepath)
-}
-request.post({
-  url: 'http://localhost:5601/api/apm/sourcemaps',
-  headers: {
-    'kbn-xsrf': 'true',
-    'Authorization': 'ApiKey YOUR_API_KEY' // needs the same privileges as above
-  },
-  formData: formData // request sets the multipart/form-data content type itself
-}, function (err, resp, body) {
-  if (err) {
-    console.log('Error while uploading sourcemaps!', err)
-  } else {
-    console.log('Sourcemaps uploaded!')
-  }
-})
----- 
diff --git a/docs/span-compression.asciidoc b/docs/span-compression.asciidoc
deleted file mode 100644
index 856606f1465..00000000000
--- a/docs/span-compression.asciidoc
+++ /dev/null
@@ -1,65 +0,0 @@
-[[span-compression]]
-=== Span compression
-
-In some cases, APM agents may collect large amounts of very similar or identical spans in a transaction.
-For example, this can happen if spans are captured inside of a loop, or in unoptimized SQL queries that use multiple queries instead of joins to fetch related data.
-In such cases, the upper limit of spans per transaction (by default, 500 spans) can be reached quickly, causing the agent to stop capturing potentially more relevant spans for a given transaction.
-
-Such repeated, similar spans often aren't very relevant on their own, especially if they are of very short duration.
-They can also clutter the UI, and cause processing and storage overhead.
-
-To address this problem, the APM agents can compress such spans into a single span.
-The compressed span retains most of the original span information, such as the overall duration and the number of spans it represents.
-
-Regardless of the compression strategy, a span is eligible for compression if:
-
-- It has not propagated its trace context.
-- It is an _exit_ span (such as a database query span).
-- Its outcome is not `"failure"`.
-
-
-[float]
-[[span-compression-strategy]]
-=== Compression strategies
-
-The {apm-agent} can select between two strategies to decide if two adjacent spans can be compressed.
-Both strategies have the benefit that only one previous span needs to be kept in memory.
-This is important to ensure that the agent doesn't require large amounts of memory to enable span compression.
-
-[float]
-[[span-compression-same]]
-==== Same-Kind strategy
-
-The agent selects this strategy if two adjacent spans have the same:
-
- * span type
- * span subtype
- * `destination.service.resource` (e.g. database name)
-
-[float]
-[[span-compression-exact]]
-==== Exact-Match strategy
-
-The agent selects this strategy if two adjacent spans have the same:
-
- * span name
- * span type
- * span subtype
- * `destination.service.resource` (e.g. database name)
-
-[float]
-[[span-compression-settings]]
-=== Settings
-
-The agent has configuration settings to define upper duration thresholds for both strategies:
-for the Same-Kind strategy the limit is 5 milliseconds, and for the Exact-Match strategy it is 50 milliseconds.
-Spans with a longer duration are not compressed. Please refer to the agent documentation for specifics.
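-
-To make the thresholds concrete, here is a hypothetical Python agent configuration.
-The two duration settings are the Python agent options listed under Agent support below;
-the values restate the default limits above, and the service name is a placeholder:
-
-[source,python]
-----
-import elasticapm
-
-# A sketch of tuning span-compression thresholds at client creation time.
-# "5ms" and "50ms" restate the default limits described above;
-# "my-service" is a placeholder service name.
-client = elasticapm.Client(
-    service_name="my-service",
-    span_compression_same_kind_max_duration="5ms",
-    span_compression_exact_match_max_duration="50ms",
-)
----- 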
- -[float] -[[span-compression-support]] -=== Agent support - -Support for span compression is available in these agents: - - * Go: {apm-go-ref}/configuration.html#config-span-compression-exact-match-duration[`ELASTIC_APM_SPAN_COMPRESSION_EXACT_MATCH_MAX_DURATION`], {apm-go-ref}/configuration.html#config-span-compression-same-kind-duration[`ELASTIC_APM_SPAN_COMPRESSION_SAME_KIND_MAX_DURATION`] - * Python: {apm-py-ref}/configuration.html#config-span-compression-exact-match-max_duration[`span_compression_exact_match_max_duration`], {apm-py-ref}/configuration.html#config-span-compression-same-kind-max-duration[`span_compression_same_kind_max_duration`] \ No newline at end of file diff --git a/docs/tab-widgets/configure-agent-widget.asciidoc b/docs/tab-widgets/configure-agent-widget.asciidoc deleted file mode 100644 index 0cd4d163f4a..00000000000 --- a/docs/tab-widgets/configure-agent-widget.asciidoc +++ /dev/null @@ -1 +0,0 @@ -// delete after PR in obs-docs repository \ No newline at end of file diff --git a/docs/tab-widgets/configure-server-widget.asciidoc b/docs/tab-widgets/configure-server-widget.asciidoc deleted file mode 100644 index 0cd4d163f4a..00000000000 --- a/docs/tab-widgets/configure-server-widget.asciidoc +++ /dev/null @@ -1 +0,0 @@ -// delete after PR in obs-docs repository \ No newline at end of file diff --git a/docs/tab-widgets/install-agents-widget.asciidoc b/docs/tab-widgets/install-agents-widget.asciidoc deleted file mode 100644 index 0cd4d163f4a..00000000000 --- a/docs/tab-widgets/install-agents-widget.asciidoc +++ /dev/null @@ -1 +0,0 @@ -// delete after PR in obs-docs repository \ No newline at end of file diff --git a/docs/troubleshoot-apm.asciidoc b/docs/troubleshoot-apm.asciidoc deleted file mode 100644 index ae32f63f1da..00000000000 --- a/docs/troubleshoot-apm.asciidoc +++ /dev/null @@ -1,43 +0,0 @@ -[[troubleshoot-apm]] -== Troubleshooting - -This section provides solutions to <> -and <> guidance. -For additional help, see the links below. - -[float] -[[troubleshooting-docs]] -=== Troubleshooting documentation - -{agent}, the {apm-app}, and each {apm-agent} has its own troubleshooting guide: - -* {fleet-guide}/troubleshooting-intro.html[*{fleet} and {agent}* troubleshooting] -* {kibana-ref}/troubleshooting.html[*{apm-app}* troubleshooting] -* {apm-dotnet-ref-v}/troubleshooting.html[*.NET agent* troubleshooting] -* {apm-go-ref-v}/troubleshooting.html[*Go agent* troubleshooting] -* {apm-ios-ref-v}/troubleshooting.html[*iOS agent* troubleshooting] -* {apm-java-ref-v}/trouble-shooting.html[*Java agent* troubleshooting] -* {apm-node-ref-v}/troubleshooting.html[*Node.js agent* troubleshooting] -* {apm-php-ref-v}/troubleshooting.html[*PHP agent* troubleshooting] -* {apm-py-ref-v}/troubleshooting.html[*Python agent* troubleshooting] -* {apm-ruby-ref-v}/debugging.html[*Ruby agent* troubleshooting] -* {apm-rum-ref-v}/troubleshooting.html[*RUM agent* troubleshooting] - -[float] -[[elastic-support]] -=== Elastic Support - -We offer a support experience unlike any other. -Our team of professionals 'speak human and code' and love making your day. -https://www.elastic.co/subscriptions[Learn more about subscriptions]. - -[float] -[[discussion-forum]] -=== Discussion forum - -For additional questions and feature requests, -visit our https://discuss.elastic.co/c/apm[discussion forum]. 
- -include::common-problems.asciidoc[] - -include::processing-performance.asciidoc[] \ No newline at end of file diff --git a/docs/upgrading-to-8.x.asciidoc b/docs/upgrading-to-8.x.asciidoc deleted file mode 100644 index 8137d998321..00000000000 --- a/docs/upgrading-to-8.x.asciidoc +++ /dev/null @@ -1,232 +0,0 @@ -[[upgrading-to-8.x]] -=== Upgrade to version {version} - -This guide explains the upgrade process for version {version}. -For a detailed look at what's new, see: - -* {observability-guide}/whats-new.html[What's new in {observability}] -* {kibana-ref}/whats-new.html[What's new in {kib}] -* {ref}/release-highlights.html[{es} release highlights] - -[float] -=== Notable APM changes - -* All index management has been removed from APM Server; -{fleet} is now entirely responsible for setting up index templates, index lifecycle polices, -and index pipelines. -* APM Server now only writes to well-defined data streams; -writing to classic indices is no longer supported. -* APM Server has a new {es} output implementation with defaults that should be sufficient for -most use cases. - -As a result of the above changes, -a number of index management and index tuning configuration variables have been removed. -See the APM <>, <> for full details. - -[float] -=== Find your upgrade guide - -Starting in version 7.14, there are two ways to run Elastic APM. -Determine which method you're using, then use the links below to find the correct upgrading guide. - -* **Standalone (legacy)**: Users in this mode run and configure the APM Server binary. -This mode has been deprecated and will be removed in a future release. -* **{fleet} and the APM integration**: Users in this mode run and configure {fleet} and the Elastic APM integration. - -**Self-installation (non-{ecloud} users) upgrade guides** - -* <> -* <> - -**{ecloud} upgrade guides** - -* <> -* <> - -// ******************************************************** - -[[upgrade-8.0-self-standalone]] -==== Upgrade a self-installation of APM Server standalone to {version} - -++++ -Self-installation standalone (legacy) -++++ - -This upgrade guide is for the standalone (legacy) method of running APM Server. -Only use this guide if both of the following are true: - -* You have a self-installation of the {stack}, i.e. you're not using {ecloud}. -* You're running the APM Server binary, i.e. you haven't switched to the Elastic APM integration. - -[float] -==== Prerequisites - -. Prior to upgrading to version {version}, {es}, {kib}, -and APM Server must be upgraded to version 7.17. -** To upgrade {es} and {kib}, -see the https://www.elastic.co/guide/en/elastic-stack/7.17/upgrading-elastic-stack.html[{stack} Installation and Upgrade Guide] -** To upgrade APM Server to version 7.17, see -{apm-guide-7x}/upgrading-to-717.html[upgrade to version 7.17]. - -. Review the APM <>, <>, -and {observability} {observability-guide}/whats-new.html[What's new] content. - -[float] -==== Upgrade steps - -. **Upgrade the {stack} to version {version}** -+ -The {stack} ({es} and {kib}) must be upgraded before APM Server. -See the {stack-ref}/upgrading-elastic-stack.html[{stack} Installation and Upgrade Guide] for guidance. - -. **Install the APM integration via the {fleet} UI** -+ -include::legacy/getting-started-apm-server.asciidoc[tag=why-apm-integration] -+ --- -include::legacy/getting-started-apm-server.asciidoc[tag=install-apm-integration] --- - -. **Install the {version} APM Server release** -+ -See <> to find the command that works with your system. 
-+ -[WARNING] -==== -If you install version {version} of APM Server before installing the APM integration, you will see error logs similar to the following. You must go back and install the APM integration before data can be ingested into {es}. - -[source,json] ----- -... -{"log.level":"error","@timestamp":"2022-01-19T10:45:34.923+0800","log.logger":"beater","log.origin":{"file.name":"beater/waitready.go","file.line":62},"message":"precondition 'apm integration installed' failed: error querying Elasticsearch for integration index templates: unexpected HTTP status: 404 Not Found ({\"error\":{\"root_cause\":[{\"type\":\"resource_not_found_exception\",\"reason\":\"index template matching [traces-apm.sampled] not found\"}],\"type\":\"resource_not_found_exception\",\"reason\":\"index template matching [traces-apm.sampled] not found\"},\"status\":404}): to remediate, please install the apm integration: https://ela.st/apm-integration-quickstart","service.name":"apm-server","ecs.version":"1.6.0"} -{"log.level":"error","@timestamp":"2022-01-19T10:45:37.461+0800","log.logger":"beater","log.origin":{"file.name":"beater/waitready.go","file.line":62},"message":"precondition 'apm integration installed' failed: error querying Elasticsearch for integration index templates: unexpected HTTP status: 404 Not Found ({\"error\":{\"root_cause\":[{\"type\":\"resource_not_found_exception\",\"reason\":\"index template matching [logs-apm.error] not found\"}],\"type\":\"resource_not_found_exception\",\"reason\":\"index template matching [logs-apm.error] not found\"},\"status\":404}): to remediate, please install the apm integration: https://ela.st/apm-integration-quickstart","service.name":"apm-server","ecs.version":"1.6.0"} -... ----- -==== - -. **Review your configuration file** -+ -Some settings have been removed or changed. You may need to update your `apm-server.yml` configuration -file prior to starting the APM Server. -See <> for help in locating this file, -and <> for a list of all available configuration options. - -. **Start the APM Server** -+ -To start the APM Server, run: -+ -[source,bash] ----- -./apm-server -e ----- -+ -Additional details are available in <>. - -. **(Optional) Upgrade to the APM integration** -+ -Got time for one more upgrade? -See <>. - -// ******************************************************** - -[[upgrade-8.0-self-integration]] -==== Upgrade a self-installation of the APM integration to {version} - -++++ -Self-installation APM integration -++++ - -This upgrade guide is for the Elastic APM integration. -Only use this guide if both of the following are true: - -* You have a self-installation of the {stack}, i.e. you're not using {ecloud}. -* You have already switched to and are running {fleet} and the Elastic APM integration. - -[float] -==== Prerequisites - -. Prior to upgrading to version {version}, {es}, and {kib} -must be upgraded to version 7.17. To upgrade {es} and {kib}, -see the https://www.elastic.co/guide/en/elastic-stack/7.17/upgrading-elastic-stack.html[{stack} Installation and Upgrade Guide] - -. Review the APM <>, <>, -and {observability} {observability-guide}/whats-new.html[What's new] content. - -[float] -==== Upgrade steps - -. Upgrade the {stack} to version {version}. -+ -The {stack} ({es} and {kib}) must be upgraded before {agent}. -See the {stack-ref}/upgrading-elastic-stack.html[{stack} Installation and Upgrade Guide] for guidance. - -. Upgrade {agent} to version {version}. 
-As a part of this process, the APM integration will automatically upgrade to version {version}.
-+
---
-. In {fleet}, select **Agents**.
-
-. Under **Agents**, click **Upgrade available** to see a list of agents that you can upgrade.
-
-. Choose **Upgrade agent** from the **Actions** menu next to the agent you want to upgrade.
-The **Upgrade agent** option is grayed out when an upgrade is unavailable, or
-the {kib} version is lower than the agent version.
---
-+
-For more details, or for bulk upgrade instructions, see
-{fleet-guide}/upgrade-elastic-agent.html[Upgrade {agent}].
-
-// ********************************************************
-
-[[upgrade-8.0-cloud-standalone]]
-==== Upgrade {ecloud} APM Server standalone to {version}
-
-++++
-{ecloud} standalone (legacy)
-++++
-
-This upgrade guide is for the standalone (legacy) method of running APM Server.
-Only use this guide if both of the following are true:
-
-* You're using {ecloud}.
-* You're using the APM Server binary, i.e. you haven't switched to the Elastic APM integration.
-
-Follow these steps to upgrade:
-
-. Review the APM <>, <>,
-and {observability} {observability-guide}/whats-new.html[What's new] content.
-
-. Upgrade {ecloud} to {version}.
-See https://www.elastic.co/guide/en/cloud/current/ec-upgrade-deployment.html[Upgrade versions] for instructions.
-
-. (Optional) Upgrade to the APM integration.
-Got time for one more upgrade?
-See <>.
-
-// ********************************************************
-
-[[upgrade-8.0-cloud-integration]]
-==== Upgrade {ecloud} with the APM integration to {version}
-
-++++
-{ecloud} APM integration
-++++
-
-This upgrade guide is for the Elastic APM integration.
-Only use this guide if both of the following are true:
-
-* You're using {ecloud}.
-* You have already switched to and are running {fleet} and the Elastic APM integration.
-
-Follow these steps to upgrade:
-
-. Review the APM <>, <>,
-and {observability} {observability-guide}/whats-new.html[What's new] content.
-
-. Upgrade your {ecloud} instance to {version}.
-See https://www.elastic.co/guide/en/cloud/current/ec-upgrade-deployment.html[Upgrade versions] for details.
-The APM integration will automatically be upgraded to version {version} as a part of this process.
-
-
-NOTE: {ece} users require additional TLS setup.
-See {ece-ref}/ece-manage-apm-settings.html[Add APM user settings] for more information.
diff --git a/docs/upgrading-to-integration.asciidoc b/docs/upgrading-to-integration.asciidoc deleted file mode 100644 index 740acb79748..00000000000 --- a/docs/upgrading-to-integration.asciidoc +++ /dev/null @@ -1,214 +0,0 @@ -[[upgrade-to-apm-integration]] -=== Switch to the Elastic APM integration - -The APM integration offers a number of benefits over the standalone method of running APM Server: - -**{fleet}**: - -* A single, unified way to add monitoring for logs, metrics, traces, and other types of data to each host -- install one thing instead of multiple -* Central, unified configuration management -- no need to edit multiple configuration files - -**Data streams**: - -// lint ignore apm- -* Reduced number of fields per index, better space efficiency, and faster queries -* More granular data control -* Errors and metrics data streams are shared with other data sources -- which means better long-term integration with the logs and metrics apps -* Removes template inheritance for {ilm-init} policies and makes use of new {es} index and component templates -* Fixes +resource \'apm-{version}-$type' exists, but it is not an alias+ error - -**APM Integration**: - -* Easier to install APM on edge machines -* Improved source map handling and {apm-agent} configuration management -* Less configuration -* Easier and less error-prone upgrade path -* Zero-downtime configuration changes - -[discrete] -[[apm-arch-upgrade]] -=== APM integration architecture - -The Elastic APM integration consists of four components: *APM agents*, the *Elastic APM integration*, *{es}*, and *{kib}*. -Generally, there are two ways that these four components can work together: - -APM agents on edge machines send data to a centrally hosted APM integration: - -image::./images/apm-architecture.png[Architecture of Elastic APM] - -Or, APM agents and the APM integration live on edge machines and enroll via a centrally hosted {agent}: - -image::./images/apm-architecture-two.png[Architecture of Elastic APM option two] - -NOTE: In order to collect data from RUM and mobile agents, which run in browser and mobile applications, -you must run {agent} centrally. For other applications, such as backend services, -{agent} may be co-located on the edge machine. - -[discrete] -[[apm-integration-upgrade-limitations]] -=== Limitations - -There are some limitations to be aware of: - -* This change cannot be reverted -* Currently, only the {es} output is supported -* APM runs under {agent} which, depending on the installation method, might require root privileges -* An {agent} with the APM integration enabled must be managed by {fleet}. - -[discrete] -=== Make the switch - -Select a guide below to get started. - -* <> -* <> - -// ******************************************************** - -[[apm-integration-upgrade-steps]] -==== Switch a self-installation to the APM integration - -++++ -Switch a self-installation -++++ - -. <> -. <> -. <> -. <> -. <> - -[discrete] -[[apm-integration-upgrade-1]] -==== Upgrade the {stack} - -The {stack} ({es} and {kib}) must be upgraded to version 7.14 or higher. -See the {stack-ref}/upgrading-elastic-stack.html[{stack} Installation and Upgrade Guide] for guidance. - -Review the APM <>, <>, -and {observability} {observability-guide}/whats-new.html[What's new] content for important changes between -your current APM version and this one. - -[discrete] -[[apm-integration-upgrade-2]] -==== Add a {fleet} Server - -{fleet} Server is a component of the {stack} used to centrally manage {agent}s. 
-The APM integration requires a {fleet} Server to be running and accessible to your hosts. -Add a {fleet} Server by following {fleet-guide}/add-a-fleet-server.html[this guide]. - -TIP: If you're upgrading a self-managed deployment of the {stack}, you'll need to enable -{ref}/configuring-stack-security.html[{es} security] and the -{ref}/security-settings.html[API key service]. - -After adding your {fleet} Server host and generating a service token, the in-product help in {kib} -will provide a command to run to start an {agent} as a {fleet} Server. -Commands may require administrator privileges. - -Verify {fleet} Server is running by navigating to **{fleet}** > **Agents** in {kib}. - -[discrete] -[[apm-integration-upgrade-3]] -==== Install a {fleet}-managed {agent} - -NOTE: It's possible to install the Elastic APM integration on the same {agent} that is running the {fleet} Server integration. For this use case, skip this step. - -The {fleet}-managed {agent} will run the Elastic APM integration on your edge nodes, next to your applications. -To install a {fleet}-managed {agent}, follow {fleet-guide}/install-fleet-managed-elastic-agent.html[this guide]. - -[discrete] -[[apm-integration-upgrade-4]] -==== Add the APM integration - -The APM integration receives performance data from your APM agents, -validates and processes it, and then transforms the data into {es} documents. - -To add the APM integration, see <>. -Only complete the linked step (not the entire quick start guide). -If you're adding the APM integration to a {fleet}-managed {agent}, you can use the default policy. -If you're adding the APM integration to the {fleet-server}, use the policy that the {fleet-server} is running on. - -TIP: You'll configure the APM integration in this step. -See <> for a reference of all available settings. -As long as the APM integration is configured with the same secret token or you have API keys enabled on the same host, -no reconfiguration is required in your APM agents. - -[discrete] -[[apm-integration-upgrade-5]] -==== Stop the legacy APM Server - -Once data from upgraded APM agents is visible in the {apm-app}, -it's safe to stop the legacy APM Server process. - -Congratulations -- you now have the latest and greatest in Elastic APM! - -// ******************************************************** - -[[apm-integration-upgrade-steps-ess]] -==== Switch an {ecloud} cluster to the APM integration - -++++ -Switch an {ecloud} cluster -++++ - -. <> -. <> -. <> -. <> - -[discrete] -[[apm-integration-upgrade-ess-1]] -==== Upgrade the {stack} - -Use the {ecloud} console to upgrade the {stack} to version {version}. -See the {cloud}/ec-upgrade-deployment.html[{ess} upgrade guide] for details. - -[discrete] -[[apm-integration-upgrade-ess-2]] -==== Switch to {agent} - -APM data collection will be interrupted while the migration is in progress. -The process of migrating should only take a few minutes. - -With a Superuser account, complete the following steps: - -. In {kib}, navigate to **{observability}** > **APM** > **Settings** > **Schema**. -+ -image::./images/schema-agent.png[switch to {agent}] - -. Click **Switch to {agent}**. -Make a note of the `apm-server.yml` user settings that are incompatible with {agent}. -Check the confirmation box and click **Switch to {agent}**. -+ -image::./images/agent-settings-migration.png[{agent} settings migration] - -{ecloud} will now create a {fleet} Server instance to contain the new APM integration, -and then will shut down the old APM server instance. 
-Within minutes your data should begin appearing in the {apm-app} again. - -[discrete] -[[apm-integration-upgrade-ess-3]] -==== Configure the APM integration - -You can now update settings that were removed during the upgrade. -See <> for a reference of all available settings. - -// lint ignore fleet elastic-cloud -In {kib}, navigate to **Management** > **Fleet**. -Select the **Elastic Cloud Agent Policy**. -Next to the **Elastic APM** integration, select **Actions** > **Edit integration**. - -[discrete] -[[apm-integration-upgrade-ess-4]] -==== Scale APM and {fleet} - -Certain {es} output configuration options are not available with the APM integration. -To ensure data is not lost, you can scale APM and {fleet} up and out. -APM's capacity to process events increases with the instance memory size. - -Go to the {ess-console}[{ecloud} console], select your deployment and click **Edit**. -Here you can edit the number and size of each availability zone. - -image::./images/scale-apm.png[scale APM] - -Congratulations -- you now have the latest and greatest in Elastic APM! diff --git a/docs/upgrading.asciidoc b/docs/upgrading.asciidoc deleted file mode 100644 index 613be6c1ae2..00000000000 --- a/docs/upgrading.asciidoc +++ /dev/null @@ -1,17 +0,0 @@ -[[upgrade]] -== Upgrade - -This guide gives general recommendations for upgrading Elastic APM. - -* <> -* <> -* <> -* <> - -include::./agent-server-compatibility.asciidoc[] - -include::./apm-breaking.asciidoc[] - -include::./upgrading-to-8.x.asciidoc[] - -include::./upgrading-to-integration.asciidoc[] diff --git a/docs/version.asciidoc b/docs/version.asciidoc deleted file mode 100644 index 8aed013c966..00000000000 --- a/docs/version.asciidoc +++ /dev/null @@ -1,20 +0,0 @@ -// doc-branch can be: master, 8.0, 8.1, etc. -:doc-branch: master -:go-version: 1.17.10 -:python: 3.7 -:docker: 1.12 -:docker-compose: 1.11 - -include::{asciidoc-dir}/../../shared/versions/stack/{source_branch}.asciidoc[] - -// Agent link attributes -// Used in conjunction with the stack attributes found here: https://github.com/elastic/docs/tree/7d62a6b66d6e9c96e4dd9a96c3dc7c75ceba0288/shared/versions/stack -:apm-dotnet-ref-v: https://www.elastic.co/guide/en/apm/agent/dotnet/{apm-dotnet-branch} -:apm-go-ref-v: https://www.elastic.co/guide/en/apm/agent/go/{apm-go-branch} -:apm-ios-ref-v: https://www.elastic.co/guide/en/apm/agent/swift/{apm-ios-branch} -:apm-java-ref-v: https://www.elastic.co/guide/en/apm/agent/java/{apm-java-branch} -:apm-node-ref-v: https://www.elastic.co/guide/en/apm/agent/nodejs/{apm-node-branch} -:apm-php-ref-v: https://www.elastic.co/guide/en/apm/agent/php/{apm-php-branch} -:apm-py-ref-v: https://www.elastic.co/guide/en/apm/agent/python/{apm-py-branch} -:apm-ruby-ref-v: https://www.elastic.co/guide/en/apm/agent/ruby/{apm-ruby-branch} -:apm-rum-ref-v: https://www.elastic.co/guide/en/apm/agent/rum-js/{apm-rum-branch}