
[BUG] Security enabled tests are failing on main branch #712

Closed
amitgalitz opened this issue Nov 1, 2022 · 5 comments · Fixed by #728
Labels: bug, v3.0.0

Comments

@amitgalitz
Member

What is the bug?
All security-enabled tests are failing in our current CI: https://github.com/opensearch-project/anomaly-detection/actions/runs/3348855024/jobs/5548340002

How can one reproduce the bug?
Steps to reproduce the behavior:
Follow the steps in this workflow

In short, run Docker with the opensearchstaging 3.0.0 image and then run this command: ./gradlew integTest -Dtests.rest.cluster=localhost:9200 -Dtests.cluster=localhost:9200 -Dtests.clustername="docker-cluster" -Dhttps=true -Duser=admin -Dpassword=admin

What is the expected behavior?
All tests pass
What is your host/environment?

  • OS: Linux
  • Version: OpenSearch 3.0.0
  • Plugins: anomaly-detection (AD) and security

Do you have any additional context?

The issue seems to happen in all integ tests when security is enabled: we are getting unauthorized responses to our requests. This might be a breaking change in either security or OpenSearch core, meaning the way we send requests is out of date and needs to be changed. I will continue investigating and updating this issue, as other plugins might face the same problem.

org.opensearch.ad.rest.SecureADRestIT > testGetApiFilterByEnabled FAILED
    org.opensearch.client.ResponseException: method [PUT], host [https://localhost:9200], URI [/_opendistro/_security/api/roles/index_all_access], status line [HTTP/2.0 401 Unauthorized]
    Unauthorized
        at app//org.opensearch.client.RestClient.convertResponse(RestClient.java:384)
        at app//org.opensearch.client.RestClient.performRequest(RestClient.java:354)
        at app//org.opensearch.client.RestClient.performRequest(RestClient.java:329)
        at app//org.opensearch.ad.TestHelpers.makeRequest(TestHelpers.java:213)
        at app//org.opensearch.ad.TestHelpers.makeRequest(TestHelpers.java:186)
        at app//org.opensearch.ad.AnomalyDetectorRestTestCase.createIndexRole(AnomalyDetectorRestTestCase.java:447)
        at app//org.opensearch.ad.rest.SecureADRestIT.setupSecureTests(SecureADRestIT.java:62)

    java.lang.NullPointerException
        at org.opensearch.ad.rest.SecureADRestIT.deleteUserSetup(SecureADRestIT.java:112)
        
@amitgalitz added the bug, untriaged, and v3.0.0 labels Nov 1, 2022
@amitgalitz
Member Author

I think the issue might be that we are using "kibana" as the user-agent instead of "Opensearch-Dashboards". I will test this out, confirm, and announce it to all plugins currently using "kibana".
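
For context, a minimal sketch of how a test request could send an explicit user-agent through the low-level REST client's RequestOptions. The class name, helper name, and endpoint are assumptions for illustration, not the actual test code:

import java.io.IOException;

import org.opensearch.client.Request;
import org.opensearch.client.RequestOptions;
import org.opensearch.client.Response;
import org.opensearch.client.RestClient;

public class UserAgentSketch {
    // Hypothetical helper: issues a GET with an explicit User-Agent header so the
    // security plugin sees "OpenSearch-Dashboards" instead of the legacy "kibana".
    static Response getWithUserAgent(RestClient client, String endpoint) throws IOException {
        Request request = new Request("GET", endpoint);
        RequestOptions.Builder options = RequestOptions.DEFAULT.toBuilder();
        options.addHeader("User-Agent", "OpenSearch-Dashboards");
        request.setOptions(options);
        return client.performRequest(request);
    }
}

Whether the security plugin actually keys off this header is exactly what the follow-up comment below calls into question, so treat this as a probe rather than a fix.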

@amitgalitz
Member Author


This wasn't enough to fix the issue: running the same commands manually from Postman that the integ tests execute works fine. Additionally, the unauthorized error shows up on Java 11, while on Java 17 it is a protocol error, which is currently being fixed in common-utils: https://github.com/opensearch-project/common-utils/pull/302/files

@kaituo
Collaborator

kaituo commented Nov 14, 2022

@amitgalitz @jackiehanyang I will solve it together with my benchmark PR. The issue is the client setup code in src/test/java/org/opensearch/ad/ODFERestTestCase.java:

-            credentialsProvider
-                .setCredentials(
-                    new AuthScope(new HttpHost("localhost", 9200)),
-                    new UsernamePasswordCredentials(userName, password.toCharArray())
-                );
+            final AuthScope anyScope = new AuthScope(null, -1);
+            credentialsProvider.setCredentials(anyScope, new UsernamePasswordCredentials(userName, password.toCharArray()));
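
For illustration, a hedged sketch of what the revised credentials setup might look like, assuming the Apache HttpClient 5 classes that the diff's signatures suggest. Only the wildcard AuthScope comes from the diff above; the method name and surrounding structure are assumptions:

import org.apache.hc.client5.http.auth.AuthScope;
import org.apache.hc.client5.http.auth.UsernamePasswordCredentials;
import org.apache.hc.client5.http.impl.auth.BasicCredentialsProvider;

public class ClientSetupSketch {
    static BasicCredentialsProvider buildCredentialsProvider(String userName, String password) {
        BasicCredentialsProvider credentialsProvider = new BasicCredentialsProvider();
        // An AuthScope of (null, -1) matches any host, port, and scheme, so the same
        // credentials are also offered on the HTTPS endpoint the security plugin exposes.
        // A scope pinned to localhost:9200 apparently stopped matching after the REST
        // client changes and left requests unauthenticated (401 Unauthorized).
        final AuthScope anyScope = new AuthScope(null, -1);
        credentialsProvider
            .setCredentials(anyScope, new UsernamePasswordCredentials(userName, password.toCharArray()));
        return credentialsProvider;
    }
}

The widened scope simply tells HttpClient to present the admin credentials for every target the test client talks to.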

@amitgalitz
Member Author


Got it, thanks for helping solve this!

kaituo added a commit to kaituo/anomaly-detection-1 that referenced this issue Nov 14, 2022
This PR adds an AD model performance benchmark so that we can compare model performance across versions.

For the single stream detector, we refactored the tests in DetectionResultEvalutationIT and moved them to SingleStreamModelPerfIT.

For the HCAD detector, we randomly generated synthetic data with known anomalies inserted throughout the signal. In particular, these are one-, two-, and four-dimensional data sets where each dimension is a noisy cosine wave. Anomalies are inserted into one dimension with probability 0.003. Anomalies across dimensions can be independent or dependent. We have approximately 5,000 observations per data set. Each data set is generated with the same random seed so the results are comparable across versions.

We also backported opensearch-project#600 so that we can capture the performance data in CI output.

We also fixed opensearch-project#712 by revising the client setup code.

Testing done:
* added unit tests to run the benchmark.

Signed-off-by: Kaituo Li <kaituo@amazon.com>
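As an illustration only (this is not the benchmark code: the class name, wave period, noise level, and spike size are assumptions, while the anomaly probability, dimensionality, observation count, and fixed seed come from the description above), a sketch of generating such a noisy cosine signal with injected anomalies:

import java.util.Random;

public class SyntheticCosineSketch {
    /**
     * Generates {@code size} observations of a noisy cosine wave in the given number
     * of dimensions and injects a spike anomaly into the first dimension with the
     * given probability. A fixed seed keeps the data identical across runs.
     */
    static double[][] generate(int size, int dimensions, double anomalyProbability, long seed) {
        Random random = new Random(seed);
        double[][] data = new double[size][dimensions];
        for (int i = 0; i < size; i++) {
            for (int d = 0; d < dimensions; d++) {
                double base = Math.cos(2 * Math.PI * i / 50.0); // periodic signal, assumed period of 50 points
                double noise = random.nextGaussian() * 0.1;     // Gaussian noise
                data[i][d] = base + noise;
            }
            if (random.nextDouble() < anomalyProbability) {
                data[i][0] += 5.0;                              // injected anomaly spike
            }
        }
        return data;
    }

    public static void main(String[] args) {
        double[][] series = generate(5000, 2, 0.003, 42L);
        System.out.println("generated " + series.length + " observations");
    }
}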
kaituo added a commit to kaituo/anomaly-detection-1 that referenced this issue Nov 14, 2022
kaituo added a commit to kaituo/anomaly-detection-1 that referenced this issue Nov 15, 2022
kaituo added a commit to kaituo/anomaly-detection-1 that referenced this issue Nov 15, 2022
@kaituo
Collaborator

kaituo commented Nov 23, 2022

Fixed in the above PRs.

@kaituo kaituo closed this as completed Nov 23, 2022
kaituo added a commit that referenced this issue Dec 1, 2022
* AD model performance benchmark
