[BUG] Security enabled tests are failing on main branch #712
I think the issue might be that we are using "kibana" as the user-agent instead of "Opensearch-Dashboards". I will test this out, confirm it, and announce it to all plugins currently using "kibana".
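For illustration, here is a minimal sketch of pinning the user-agent on a test request. This is not the actual plugin code: the class, method, and endpoint are hypothetical, and it uses only the JDK's built-in HTTP client rather than the REST client the integ tests actually use.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class UserAgentSketch {
    // Build a request that identifies itself with an explicit user-agent
    // (e.g. "Opensearch-Dashboards" instead of the legacy "kibana").
    // The endpoint below is a placeholder for a local test cluster.
    static HttpRequest withUserAgent(String userAgent) {
        return HttpRequest.newBuilder()
                .uri(URI.create("https://localhost:9200/_cluster/health"))
                .header("User-Agent", userAgent)
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = withUserAgent("Opensearch-Dashboards");
        // Building the request does not send it; we only inspect the header.
        System.out.println(request.headers().firstValue("User-Agent").orElse(""));
    }
}
```

Whatever client is used, the point is the same: the user-agent string is a default header set once at client construction, so every request the tests issue carries it.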
This wasn't enough to fix the issue: running the same commands manually from Postman that the integ tests execute works fine. Additionally, the unauthorized error appears on Java 11, while on Java 17 it is a protocol error, which is currently being fixed in common-utils: https://github.com/opensearch-project/common-utils/pull/302/files
@amitgalitz @jackiehanyang will solve it together with my benchmark PR. The issue is the client setup code in src/test/java/org/opensearch/ad/ODFERestTestCase.java.
Got it, thanks for helping solve this!
This PR adds an AD model performance benchmark so that we can compare model performance across versions. For the single stream detector, we refactored tests in DetectionResultEvalutationIT and moved them to SingleStreamModelPerfIT. For the HCAD detector, we randomly generated synthetic data with known anomalies inserted throughout the signal. In particular, these are one/two/four dimensional data sets where each dimension is a noisy cosine wave. Anomalies are inserted into one dimension with 0.003 probability, and anomalies across dimensions can be independent or dependent. We have approximately 5000 observations per data set. The data set is generated using the same random seed so the results are comparable across versions. We also backported opensearch-project#600 so that we can capture the performance data in CI output, and fixed opensearch-project#712 by revising the client setup code.

Testing done:
* added unit tests to run the benchmark.

Signed-off-by: Kaituo Li <kaituo@amazon.com>
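As a rough illustration of the synthetic data described above, here is a hypothetical one-dimensional sketch. Only the 0.003 anomaly probability, the roughly 5000 observations per data set, and the fixed random seed come from the description; the cosine period, noise level, and anomaly magnitude are assumptions for the example.

```java
import java.util.Random;

public class SyntheticDataSketch {
    // Generate a noisy cosine wave with anomalies injected at the given
    // probability. A fixed seed makes every run produce the same data,
    // so results are comparable across versions.
    static double[] generate(long seed, int points, double anomalyProbability) {
        Random random = new Random(seed);
        double[] data = new double[points];
        for (int i = 0; i < points; i++) {
            double base = Math.cos(2 * Math.PI * i / 50.0); // assumed period of 50 points
            double noise = random.nextGaussian() * 0.1;     // assumed noise level
            double value = base + noise;
            if (random.nextDouble() < anomalyProbability) {
                value += 5.0; // inject a known anomaly (assumed magnitude)
            }
            data[i] = value;
        }
        return data;
    }

    public static void main(String[] args) {
        double[] data = generate(42L, 5000, 0.003);
        System.out.println("generated " + data.length + " observations");
    }
}
```

Because the anomaly positions are drawn during generation, the benchmark knows exactly where the anomalies are and can score the detector's output against that ground truth.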
Fixed in the above PRs.
What is the bug?
All security enabled tests are failing in our current CI: https://github.com/opensearch-project/anomaly-detection/actions/runs/3348855024/jobs/5548340002
How can one reproduce the bug?
Steps to reproduce the behavior:
Follow the steps in this workflow
In short, run Docker from opensearchstaging:3.0.0 and run this command:
./gradlew integTest -Dtests.rest.cluster=localhost:9200 -Dtests.cluster=localhost:9200 -Dtests.clustername="docker-cluster" -Dhttps=true -Duser=admin -Dpassword=admin
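The `-Duser=admin -Dpassword=admin` flags above amount to an HTTP Basic Authorization header on every test request. A minimal sketch of that encoding, using only the JDK (this is illustrative, not the actual ODFERestTestCase setup code):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthSketch {
    // Encode credentials into an HTTP Basic Authorization header value,
    // as a REST client does when given a user and password.
    static String basicAuth(String user, String password) {
        String token = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }

    public static void main(String[] args) {
        // For admin/admin this prints: Basic YWRtaW46YWRtaW4=
        System.out.println(basicAuth("admin", "admin"));
    }
}
```

If the cluster returns 401 Unauthorized even though this header is correct (as manual Postman runs suggest here), the problem is likely elsewhere in the client setup, e.g. how the test client is constructed, rather than in the credentials themselves.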
What is the expected behavior?
All tests pass
What is your host/environment?
Do you have any additional context?
It seems like the issue happens on all integ tests when security is enabled: we are getting unauthorized responses to our requests. This might be a breaking change in either security or OpenSearch core, meaning the way we send requests is out of date and needs to be changed. We will continue investigating and updating this issue, as other plugins might face it as well.