
Show line of failure in opa test #961

Closed
aeneasr opened this issue Sep 19, 2018 · 6 comments

aeneasr commented Sep 19, 2018

I typically make more than one assertion in a single test rule:

test_exact_with_condition {
    allow_exact with input as {"resource": "articles:1", "subject": "subjects:1", "action": "actions:1", "context": {"foobar": "the-value-should-be-this"}}
    not allow_exact with input as {"resource": "articles:1", "subject": "subjects:1", "action": "actions:1", "context": {"foobar": "not-the-value-should-be-this"}}
    not allow_exact with input as {"resource": "articles:1", "subject": "subjects:1", "action": "actions:1", "context": {"not-foobar": "the-value-should-be-this"}}
    not allow_exact with input as {"resource": "articles:1", "subject": "subjects:1", "action": "actions:1", "context": {"foobar": 1234}}
    not allow_exact with input as {"resource": "articles:1", "subject": "subjects:1", "action": "actions:1", "context": {}}
    not allow_exact with input as {"resource": "articles:1", "subject": "subjects:1", "action": "actions:1"}
}

While I can see which test fails, I'm not able to find the exact failing line without manually removing lines from the test to see if it still passes (extremely annoying & time-intensive). The opa test result looks something like this, for example:


data.ladon.test_condition_boolean: PASS (1µs)
data.ladon.test_condition_resource_contains: PASS (1.999µs)
data.ladon.test_condition_string_equal: PASS (0s)
data.ladon.test_condition_string_match: PASS (0s)
data.ladon.test_condition_string_pairs_eqal: FAIL (0s)
data.ladon.test_condition_equals_subject: PASS (1.001µs)
data.ladon.test_exact_deny_policy: PASS (0s)
data.ladon.test_exact_deny_overrides: PASS (0s)
data.ladon.test_deny_without_match: PASS (0s)
data.ladon.test_exact_with_condition: FAIL (998ns)
--------------------------------------------------------------------------------
PASS: 8/10
FAIL: 2/10

It would probably save me around 10 minutes of dumb trial & error searching per test if I knew which line the failure occurred at.

@aeneasr aeneasr changed the title Show line of failure in opa run Show line of failure in opa test Sep 19, 2018

aeneasr commented Sep 19, 2018

It's possible to see the location by using --format=json, so the info is there, it's just not being displayed!

@tsandall
Member

Hrmmm. The location info in the JSON output will include the line number of the test rule that failed, not the expression in the rule that failed.

We could get the line number of the expression with tracing though. This could be done in conjunction with #856.

@tsandall
Member

Here's a rough sketch of how we could implement this.

Implementation notes

  1. Add a --show-failure-line / -l flag to the opa test subcommand
  2. Extend tester.Runner with a setter for enabling failure line reporting (e.g., runner.EnableFailureLine(true))
  3. Extend the tester.Result struct with a FailedAt *ast.Expr (omitempty) field that indicates the AST node where the test failed
  4. Inside tester.Runner#runTest, create a new *topdown.BufferTracer like we do for normal tracing. Today, only one tracer can be used at a time, so apply the following precedence: coverage > tracing > failure line tracing.
  5. If the test run failed, find the second-to-last Fail event in the trace and set the FailedAt field to the AST node in the trace event (the node should be an *ast.Expr; if it's not, something is broken)
  6. Extend tester.PrettyReporter#Report to include the line number from the FailedAt.Location field in the output
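The second-to-last-Fail heuristic in step 5 can be sketched in plain Go. The Event type and failedAt helper below are simplified stand-ins for *topdown.Event and the runner logic, not OPA's actual API:

```go
package main

import "fmt"

// Op is a simplified stand-in for topdown's trace operation type.
type Op string

const (
	EvalOp Op = "Eval"
	FailOp Op = "Fail"
)

// Event is a simplified stand-in for *topdown.Event. In OPA, Node would be
// an AST node and the line number would come from its Location field.
type Event struct {
	Op   Op
	Node string // the expression being evaluated
	Row  int    // source line of the expression
}

// failedAt returns the second-to-last Fail event in the trace. The very last
// Fail belongs to the enclosing test query (e.g. data.x.test_p = _); the one
// before it is the expression inside the rule body that actually failed.
func failedAt(trace []Event) (Event, bool) {
	var fails []Event
	for _, e := range trace {
		if e.Op == FailOp {
			fails = append(fails, e)
		}
	}
	if len(fails) < 2 {
		return Event{}, false
	}
	return fails[len(fails)-2], true
}

func main() {
	// Trace for the test_p example below: false on line 6 fails first,
	// then the enclosing query fails.
	trace := []Event{
		{EvalOp, "true", 4},
		{EvalOp, "true", 5},
		{EvalOp, "false", 6},
		{FailOp, "false", 6},
		{FailOp, "data.x.test_p = _", 3},
	}
	if e, ok := failedAt(trace); ok {
		fmt.Printf("FAIL (test.rego:%d)\n", e.Row) // prints "FAIL (test.rego:6)"
	}
}
```

This also answers why it is the second-to-last event: the failing expression's Fail is always followed by one more Fail for the test query itself.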

Example

test.rego:

package x

test_p {
    true
    true
    false
    true
}

Output with -v:

$ opa test test.rego  -v
FAILURES
--------------------------------------------------------------------------------
data.x.test_p: FAIL (810ns)

  Enter data.x.test_p = _
  | Eval data.x.test_p = _
  | Index data.x.test_p = _ (matched 1 rule)
  | Enter test_p = true { true; true; false; true }
  | | Eval true
  | | Eval true
  | | Eval false
  | | Fail false
  | | Redo true
  | | Redo true
  | Fail data.x.test_p = _

SUMMARY
--------------------------------------------------------------------------------
data.x.test_p: FAIL (810ns)
--------------------------------------------------------------------------------
FAIL: 1/1

Output with -l but not -v:

$ opa test test.rego -l
data.x.test_p: FAIL (test.rego:6) (638ns)
--------------------------------------------------------------------------------
FAIL: 1/1

@BenderScript
Contributor

I will take a stab at this one

@BenderScript
Contributor

I have this working, but writing a unit test for this feature is a bit tricky.

In tester/runner_test.go:

I created a new test where I have to set both tracing and EnableFailureLine() to true. The reason is that I need tracing to figure out the line that fails, but at the same time, if only tracing is enabled, I should not present the line.

Moreover, Run() is called from both the CLI and the testing harness, with different code paths and reporting.

func TestRunner_EnableFailureLine(t *testing.T) {

	ctx := context.Background()

	files := map[string]string{
		"/a_test.rego": `package foo
		test_p {
			true
			false
			true
		}`,
	}

	tests := map[[2]string]struct {
		wantErr  bool
		wantFail bool
	}{
		{"data.foo", "test_p"}: {false, true},
	}

	test.WithTempFS(files, func(d string) {
		paths := []string{d}
		modules, store, err := tester.Load(paths, nil)
		if err != nil {
			t.Fatal(err)
		}
		ch, err := tester.NewRunner().EnableFailureLine(true).EnableTracing(true).SetStore(store).Run(ctx, modules)

Do you prefer another approach? BTW, what is the reason the second to last event has the line that fails?

@tsandall
Member

tsandall commented Nov 8, 2018

@repenno I think that the runner can just instantiate its own buffer tracer when line failures are enabled. Per my comment above, line failures (-l) and tracing (-v) will be mutually exclusive UNTIL we add support for multiple tracers.
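The mutual exclusivity boils down to picking a single tracer before a run. A minimal sketch of that selection, using illustrative names rather than OPA's actual API:

```go
package main

import "fmt"

// runnerConfig mirrors the flags discussed in this thread; the names are
// illustrative, not OPA's actual API.
type runnerConfig struct {
	coverage    bool // coverage reporting
	tracing     bool // -v
	failureLine bool // -l
}

// tracerFor picks the single tracer to attach. Only one tracer can be active
// at a time today, so: coverage > tracing > failure-line tracing.
func (c runnerConfig) tracerFor() string {
	switch {
	case c.coverage:
		return "coverage"
	case c.tracing:
		return "verbose"
	case c.failureLine:
		return "failure-line"
	}
	return "none"
}

func main() {
	// -v and -l passed together: -v wins until multiple tracers are supported.
	fmt.Println(runnerConfig{tracing: true, failureLine: true}.tracerFor()) // prints "verbose"
}
```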

tsandall pushed a commit that referenced this issue Jan 3, 2019
Fixes #961
Added a new option to the opa test command (-l)
Verbose (-v) takes precedence over (-l)
Example run:

opa test test.rego -l
data.foo.test_a: FAIL (754ns) (test.rego:4)
data.foo.test_b: FAIL (382ns) (test.rego:8)
-------------------------------------------------------
FAIL: 2/2

Signed-off-by: repenno <rapenno@gmail.com>