
Add trace output to the test mode #856

Closed
mlbiam opened this issue Jul 28, 2018 · 4 comments

Comments


mlbiam commented Jul 28, 2018

When running opa test I don't get any trace output, just the result. To get trace output I currently have to start the opa run command line and enable tracing there. If tracing were available in opa test, it would make for a much quicker development cycle.

tsandall (Member) commented Jul 30, 2018

Let's extend opa test -v to print the trace for failed tests.

For example, without -v OPA only prints failures:

$ opa test .
data.tests.test_not_equal: FAIL (460ns)
--------------------------------------------------------------------------------
PASS: 4/5
FAIL: 1/5

With -v OPA prints all test names:

$ opa test . -v
data.tests.test_single: PASS (565ns)
data.tests.test_multiple: PASS (740ns)
data.tests.test_mismatch_one: PASS (558ns)
data.tests.test_not_equal: FAIL (555ns)
data.tests.test_gt: PASS (647ns)
--------------------------------------------------------------------------------
PASS: 4/5
FAIL: 1/5

Let's extend -v to also print traces for test failures:

FAILURES
--------------------------------------------------------------------------------
data.tests.test_not_equal (FAIL) (555ns)

   <trace output goes here>

SUMMARY
--------------------------------------------------------------------------------
data.tests.test_single: PASS (565ns)
data.tests.test_multiple: PASS (740ns)
data.tests.test_mismatch_one: PASS (558ns)
data.tests.test_not_equal: FAIL (555ns)
data.tests.test_gt: PASS (647ns)
--------------------------------------------------------------------------------
PASS: 4/5
FAIL: 1/5

tsandall (Member) commented

See my previous comment for details on expected behaviour.

The required changes can be made in:

  • cmd/test.go: if verbose is true, instantiate a topdown.BufferTracer and set it on the test runner.
  • tester/reporter.go: update JSONReporter and PrettyReporter to emit trace results.
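The buffer-tracer pattern referenced above can be illustrated with a self-contained Go sketch. This is not OPA's actual code: the Event fields, the Tracer interface shape, and the PrettyTrace helper here are simplified stand-ins for what topdown.BufferTracer and its pretty-printer do (buffer events during evaluation, render them after the run so a reporter can attach them to failed tests).

```go
package main

import (
	"fmt"
	"strings"
)

// Event is a minimal stand-in for a topdown trace event (hypothetical shape).
type Event struct {
	Op   string // e.g. "Enter", "Eval", "Fail"
	Node string // the expression being evaluated
}

// Tracer is the interface the evaluator would call during evaluation.
type Tracer interface {
	Trace(e *Event)
}

// BufferTracer records events so a reporter can print them after the run,
// mirroring the pattern of topdown.BufferTracer.
type BufferTracer struct {
	events []*Event
}

func (b *BufferTracer) Trace(e *Event) {
	b.events = append(b.events, e)
}

// PrettyTrace renders buffered events, indenting nested scopes.
func PrettyTrace(b *BufferTracer) string {
	var sb strings.Builder
	depth := 0
	for _, e := range b.events {
		if e.Op == "Exit" || e.Op == "Fail" {
			depth--
			if depth < 0 {
				depth = 0
			}
		}
		sb.WriteString(strings.Repeat("  ", depth))
		fmt.Fprintf(&sb, "%s %s\n", e.Op, e.Node)
		if e.Op == "Enter" {
			depth++
		}
	}
	return sb.String()
}

func main() {
	tracer := &BufferTracer{}
	// Simulate an evaluation that fails partway through.
	tracer.Trace(&Event{Op: "Enter", Node: "data.tests.test_not_equal"})
	tracer.Trace(&Event{Op: "Eval", Node: "x = 1"})
	tracer.Trace(&Event{Op: "Fail", Node: "x != 1"})
	fmt.Print(PrettyTrace(tracer))
}
```

The key design point is that the tracer only buffers during evaluation; formatting happens afterward, which lets the reporter decide whether to show the trace at all (only for failures, and only under -v).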

tsandall added a commit to tsandall/opa that referenced this issue Oct 2, 2018
Previously the test runner only exposed a single interface to set the
tracer to use during evaluation. This caused problems when we
implemented open-policy-agent#856. These changes refactor the test runner interface to
let the caller enable tracing and coverage separately. For now these
features are mutually exclusive but in the future we could implement a
wrapper that provides support for multiple tracers.

Signed-off-by: Torin Sandall <torinsandall@gmail.com>
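The "tracing and coverage separately, but mutually exclusive for now" design the commit describes can be sketched as follows. The Runner type and method names here are illustrative assumptions, not OPA's actual test-runner API; the point is only that each feature gets its own toggle and an unsupported combination is rejected up front.

```go
package main

import (
	"errors"
	"fmt"
)

// Runner is a simplified stand-in for the test runner: tracing and coverage
// are enabled through separate setters rather than by passing one tracer in.
type Runner struct {
	tracing  bool
	coverage bool
}

func (r *Runner) EnableTracing(on bool) *Runner  { r.tracing = on; return r }
func (r *Runner) EnableCoverage(on bool) *Runner { r.coverage = on; return r }

// Validate rejects the combination that is not supported yet.
func (r *Runner) Validate() error {
	if r.tracing && r.coverage {
		return errors.New("tracing and coverage are mutually exclusive")
	}
	return nil
}

func main() {
	r := (&Runner{}).EnableTracing(true).EnableCoverage(true)
	fmt.Println(r.Validate()) // prints the mutual-exclusion error
}
```

Splitting the toggles like this leaves room for the wrapper the commit mentions: a future multi-tracer could lift the restriction without changing the caller-facing setters.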
tsandall added a commit that referenced this issue Oct 2, 2018
srenatus (Contributor) commented Oct 3, 2018

🎉 I think this is done with #978 and #975, isn't it? 😃

tsandall (Member) commented Oct 3, 2018

Fixed by #975 thanks to @srenatus!

@tsandall tsandall closed this as completed Oct 3, 2018