
Expose model accuracy metrics in tests #600

Merged
2 commits merged on Jun 29, 2022

Conversation

Collaborator

@kaituo kaituo commented Jun 28, 2022

Description

This PR adds an optional flag to print logs during tests and turns the flag on in the CI workflow. The flag is disabled by default. With this in place, we can record model accuracy metrics in GitHub workflow runs and later retrieve them for analysis.

Testing done:

  1. We can turn on/off logs during tests.
  2. The accuracy logs are recorded.

Signed-off-by: Kaituo Li <kaituo@amazon.com>

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.
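As a point of reference, the sketch below shows one way such a test-only switch could be wired: a helper that prints accuracy metrics only when a JVM system property is set, so output stays quiet by default and the CI workflow can opt in. The class and property names (`AccuracyLogger`, `model.accuracy.logs`) are hypothetical, not the PR's actual code.

```java
// Hypothetical sketch, not the PR's implementation: gate accuracy output behind a
// system property so it is off by default and can be enabled in CI, e.g.
//   ./gradlew test -Dmodel.accuracy.logs=true
public final class AccuracyLogger {
    // Assumed property name, for illustration only.
    private static final boolean ENABLED =
        Boolean.parseBoolean(System.getProperty("model.accuracy.logs", "false"));

    private AccuracyLogger() {}

    public static void log(String metric, double value) {
        if (ENABLED) {
            // Plain stdout so the metric lands in the CI build log and can be retrieved later.
            System.out.println("model accuracy " + metric + "=" + value);
        }
    }
}
```

A test would then call `AccuracyLogger.log("precision", precision)`, and the line only appears in runs where the property is set.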

@kaituo kaituo requested review from a team, ohltyler and sean-zheng-amazon June 28, 2022 23:42
@opensearch-trigger-bot opensearch-trigger-bot bot added the backport 2.x and infra labels Jun 28, 2022
@kaituo kaituo added the backport 2.1 and enhancement labels and removed the infra label Jun 28, 2022
ohltyler previously approved these changes Jun 28, 2022
Member

@ohltyler ohltyler left a comment


LGTM, thanks for adding this!

@ohltyler
Member

Can you add this under 'Enhancements' in the 2.1 release notes?

This PR adds an option flag to print logs during tests and turn on the flag in CI workflow. The flag is disabled by default. By doing this, we can record model accuracy metrics in git workflows and later retrieve it for analysis.

Testing done:
1. We can turn on/off logs during tests.
2. The accuracy logs are recorded.

Signed-off-by: Kaituo Li <kaituo@amazon.com>
Signed-off-by: Kaituo Li <kaituo@amazon.com>
@kaituo
Collaborator Author

kaituo commented Jun 29, 2022

Can you add this under 'Enhancements' in the 2.1 release notes?

Added.

@kaituo kaituo requested a review from ohltyler June 29, 2022 00:07
@codecov-commenter

codecov-commenter commented Jun 29, 2022

Codecov Report

Merging #600 (44a3b05) into main (d484f9b) will decrease coverage by 0.19%.
The diff coverage is n/a.

Impacted file tree graph

@@             Coverage Diff              @@
##               main     #600      +/-   ##
============================================
- Coverage     79.21%   79.02%   -0.20%     
+ Complexity     4222     4207      -15     
============================================
  Files           296      296              
  Lines         17686    17686              
  Branches       1880     1880              
============================================
- Hits          14010    13976      -34     
- Misses         2783     2811      +28     
- Partials        893      899       +6     
Flag | Coverage Δ
plugin | 79.02% <ø> (-0.20%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files | Coverage Δ
...java/org/opensearch/ad/task/ADBatchTaskRunner.java | 81.76% <0.00%> (-4.56%) ⬇️
...ain/java/org/opensearch/ad/model/ModelProfile.java | 69.09% <0.00%> (-1.82%) ⬇️
...ava/org/opensearch/ad/task/ADHCBatchTaskCache.java | 90.12% <0.00%> (-1.24%) ⬇️
...ain/java/org/opensearch/ad/task/ADTaskManager.java | 76.67% <0.00%> (-0.46%) ⬇️
...rch/ad/transport/ForwardADTaskTransportAction.java | 97.45% <0.00%> (+3.38%) ⬆️

Collaborator

@sean-zheng-amazon sean-zheng-amazon left a comment


LGTM. You might want to check the other info-level logs just to make sure you don't print overly verbose output in testing.

@kaituo kaituo merged commit f630c8f into opensearch-project:main Jun 29, 2022
opensearch-trigger-bot bot pushed a commit that referenced this pull request Jun 29, 2022
* Expose model accuracy metrics in tests

This PR adds an option flag to print logs during tests and turn on the flag in CI workflow. The flag is disabled by default. By doing this, we can record model accuracy metrics in git workflows and later retrieve it for analysis.

Testing done:
1. We can turn on/off logs during tests.
2. The accuracy logs are recorded.

Signed-off-by: Kaituo Li <kaituo@amazon.com>
(cherry picked from commit f630c8f)
opensearch-trigger-bot bot pushed a commit that referenced this pull request Jun 29, 2022
* Expose model accuracy metrics in tests
ohltyler pushed a commit that referenced this pull request Jun 29, 2022
* Expose model accuracy metrics in tests
ohltyler pushed a commit that referenced this pull request Jun 29, 2022
* Expose model accuracy metrics in tests
kaituo added a commit to kaituo/anomaly-detection-1 that referenced this pull request Nov 7, 2022
This PR adds a HCAD model performance benchmark so that we can compare model performance across versions.

Regarding benchmark data, we randomly generated synthetic data with known anomalies inserted throughout the signal. In particular, these are one/two/four dimensional data where each dimension is a noisy cosine wave. Anomalies are inserted into one dimension with 0.003 probability. Anomalies across each dimension can be independent or dependent.  We have approximately 5000 observations per data set. The data set is generated using the same random seed so the result is comparable across versions.

We also backported opensearch-project#600 so that we can capture the performance data in CI output.

Testing done:
1. added unit tests to run the benchmark.

Signed-off-by: Kaituo Li <kaituo@amazon.com>
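To make the data description above concrete, here is a rough, self-contained sketch of that kind of generator; it is not the benchmark's actual code, and the period, noise level, anomaly magnitude, and class name are assumptions. Each dimension is a noisy cosine wave, an anomaly is injected into a single dimension with probability 0.003 per timestamp, and a fixed seed keeps runs reproducible.

```java
import java.util.Random;

// Illustrative sketch (not the benchmark's code): noisy cosine series with rare
// injected anomalies, reproducible via a fixed random seed.
public class SyntheticCosineData {

    public static double[][] generate(int points, int dimensions, long seed) {
        Random random = new Random(seed);
        double[][] data = new double[points][dimensions];
        for (int t = 0; t < points; t++) {
            for (int d = 0; d < dimensions; d++) {
                double base = Math.cos(2 * Math.PI * t / 50.0); // assumed period of 50 steps
                double noise = random.nextGaussian() * 0.1;     // assumed noise level
                data[t][d] = base + noise;
            }
            if (random.nextDouble() < 0.003) {
                data[t][0] += 5.0; // anomaly injected into one dimension; magnitude is assumed
            }
        }
        return data;
    }

    public static void main(String[] args) {
        // Roughly 5000 observations per data set, as in the description above.
        double[][] twoDimensional = generate(5000, 2, 42L);
        System.out.println("generated " + twoDimensional.length + " points");
    }
}
```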
kaituo added a commit to kaituo/anomaly-detection-1 that referenced this pull request Nov 7, 2022
kaituo added a commit to kaituo/anomaly-detection-1 that referenced this pull request Nov 7, 2022
kaituo added a commit to kaituo/anomaly-detection-1 that referenced this pull request Nov 15, 2022
This PR adds an AD model performance benchmark so that we can compare model performance across versions.

Regarding benchmark data, we randomly generated synthetic data with known anomalies inserted throughout the signal. In particular, these are one/two/four dimensional data where each dimension is a noisy cosine wave. Anomalies are inserted into one dimension with 0.003 probability. Anomalies across each dimension can be independent or dependent. We have approximately 5000 observations per data set. The data set is generated using the same random seed so the result is comparable across versions.

We also backported opensearch-project#600 so that we can capture the performance data in CI output.

Testing done:
* added unit tests to run the benchmark.

Signed-off-by: Kaituo Li <kaituo@amazon.com>
kaituo added a commit to kaituo/anomaly-detection-1 that referenced this pull request Nov 15, 2022
This PR adds an AD model performance benchmark so that we can compare model performance across versions.

For the single stream detector, we refactored tests in DetectionResultEvalutationIT and moved it to SingleStreamModelPerfIT.

For the HCAD detector, we randomly generated synthetic data with known anomalies inserted throughout the signal. In particular, these are one/two/four dimensional data where each dimension is a noisy cosine wave. Anomalies are inserted into one dimension with 0.003 probability. Anomalies across each dimension can be independent or dependent. We have approximately 5000 observations per data set. The data set is generated using the same random seed so the result is comparable across versions.

We also backported opensearch-project#600 so that we can capture the performance data in CI output.

We also fixed opensearch-project#712 by revising the client setup code.

Testing done:
* added unit tests to run the benchmark.

Signed-off-by: Kaituo Li <kaituo@amazon.com>
kaituo added a commit to kaituo/anomaly-detection-1 that referenced this pull request Nov 15, 2022
kaituo added a commit to kaituo/anomaly-detection-1 that referenced this pull request Nov 15, 2022
kaituo added a commit to kaituo/anomaly-detection-1 that referenced this pull request Nov 15, 2022
kaituo added a commit to kaituo/anomaly-detection-1 that referenced this pull request Nov 15, 2022
This PR adds an AD model performance benchmark so that we can compare model performance across versions.  We run the benchmark in separate github workflows since they can be time consuming. For example, it takes 25+ minutes to run HCAD benchmarking alone in 1.1. Also, we print bench-marking results in standard output for recording purpose.

For HCAD, we randomly generated synthetic data with known anomalies inserted throughout the signal. In particular, these are one/two/four dimensional data where each dimension is a noisy cosine wave. Anomalies are inserted into one dimension with 0.003 probability. Anomalies across each dimension can be independent or dependent.  We have approximately 5000 observations per data set. The data set is generated using the same random seed so the result is comparable across versions.

For single stream detectors, we use a curated data set with known anomaly windows.

We also backported opensearch-project#600 so that we can capture the performance data in CI output.

Testing done:
1. added unit tests to run the benchmark.

Signed-off-by: Kaituo Li <kaituo@amazon.com>
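As a companion to the evaluation described above, the sketch below shows one plausible way to score detections against known anomaly windows: precision over individual detections, recall over the labeled windows. It is illustrative only; the class, record, and scoring rule are assumptions rather than the benchmark's actual code.

```java
import java.util.List;

// Illustrative sketch (not the benchmark's evaluation code): score detected anomaly
// timestamps against known anomaly windows. A detection is a true positive if it
// falls inside any labeled window; a window counts as caught if at least one
// detection lands inside it.
public class WindowedPrecisionRecall {

    public record Window(long start, long end) {
        boolean contains(long t) { return t >= start && t <= end; }
    }

    public static double[] score(List<Long> detections, List<Window> windows) {
        long truePositives = detections.stream()
            .filter(t -> windows.stream().anyMatch(w -> w.contains(t)))
            .count();
        long caughtWindows = windows.stream()
            .filter(w -> detections.stream().anyMatch(w::contains))
            .count();
        double precision = detections.isEmpty() ? 0.0 : (double) truePositives / detections.size();
        double recall = windows.isEmpty() ? 0.0 : (double) caughtWindows / windows.size();
        return new double[] { precision, recall };
    }

    public static void main(String[] args) {
        List<Window> windows = List.of(new Window(100, 110), new Window(400, 405));
        List<Long> detections = List.of(105L, 250L, 402L);
        double[] pr = score(detections, windows);
        System.out.println("precision=" + pr[0] + " recall=" + pr[1]);
    }
}
```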
kaituo added a commit to kaituo/anomaly-detection-1 that referenced this pull request Nov 15, 2022
kaituo added a commit to kaituo/anomaly-detection-1 that referenced this pull request Nov 15, 2022
kaituo added a commit to kaituo/anomaly-detection-1 that referenced this pull request Nov 15, 2022
kaituo added a commit to kaituo/anomaly-detection-1 that referenced this pull request Nov 15, 2022
kaituo added a commit to kaituo/anomaly-detection-1 that referenced this pull request Nov 15, 2022
kaituo added a commit to kaituo/anomaly-detection-1 that referenced this pull request Nov 15, 2022
kaituo added a commit to kaituo/anomaly-detection-1 that referenced this pull request Nov 15, 2022
kaituo added a commit to kaituo/anomaly-detection-1 that referenced this pull request Nov 15, 2022
kaituo added a commit to kaituo/anomaly-detection-1 that referenced this pull request Nov 15, 2022
kaituo added a commit to kaituo/anomaly-detection-1 that referenced this pull request Nov 15, 2022
kaituo added a commit to kaituo/anomaly-detection-1 that referenced this pull request Nov 15, 2022
kaituo added a commit that referenced this pull request Nov 17, 2022
* HCAD model performance benchmark
kaituo added a commit that referenced this pull request Nov 17, 2022
kaituo added a commit that referenced this pull request Nov 17, 2022
kaituo added a commit that referenced this pull request Nov 18, 2022
kaituo added a commit that referenced this pull request Nov 18, 2022
kaituo added a commit that referenced this pull request Nov 22, 2022
kaituo added a commit that referenced this pull request Nov 22, 2022
* AD model performance benchmark
kaituo added a commit that referenced this pull request Nov 22, 2022
kaituo added a commit that referenced this pull request Nov 23, 2022
kaituo added a commit to kaituo/anomaly-detection-1 that referenced this pull request Nov 23, 2022
kaituo added a commit that referenced this pull request Dec 1, 2022
kaituo added a commit that referenced this pull request Dec 1, 2022
* AD model performance benchmark