Integration Tests for the SOM language #128
base: master
Conversation
Another small issue: I have a test that passes, but was expected to fail. Reporting this should be minimal, and does not need to show the full output. Currently, it shows this:
But all that's needed is the short form. Also, it looks like there's something odd with the diff:
Assertions that fail only by being in unspecified/known/unsupported now just show that they passed and are located inside of the tags file.
Signed-off-by: Stefan Marr <git@stefan-marr.de>
…le which can be specified through TEST_EXCEPTIONS. A generic test_tags.yaml exists which passes on all implementations; however, language-specific yaml files may allow more or fewer tests.
…on for the report to be saved to. It includes detailed information about the tests run and the setup for the specific run.
…t a proper error on /Tests folder not found
c830358 to 493648b
…t the yaml files are working correctly
…ter as False but True instead
…t, including testing of the ... functionality for symbolising a gap. Formatted and passes Pylint
IntegrationTests/test_runner.py
Outdated
# ###############################################
# TESTS BELOW THIS ARE TESTING THE RUNNER ITSELF
# ###############################################
It would be good if all this goes into a separate file.
It would also be good to have an easy way to run the integration tests on a SOM VM without getting distracted by the tests of the test runner itself.
IntegrationTests/test_runner.py
Outdated
tests = sorted(tests)

expected_tests = [
    "core-lib/IntegrationTests/test_runner_tests/soms_for_testing/som_test_1.som",
You're already using __file__ above. I am not really sure what the test here is doing, but the hard-coded paths here mean this fails when pytest is run in a directory other than the expected one, which is not great.
Yep, I agree; I hadn't spotted that. This should fix it by using a file path that is generated at runtime.
expected_tests = [
f"{str(test_runner_tests_location)}/soms_for_testing/som_test_1.som",
f"{str(test_runner_tests_location)}/soms_for_testing/som_test_2.som",
f"{str(test_runner_tests_location)}/soms_for_testing/som_test_3.som",
]
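As an illustration only (a hypothetical reconstruction, not taken from the pull request): `test_runner_tests_location` is presumably derived from `__file__`, in which case it could be computed like this:

```python
from pathlib import Path

# Hypothetical sketch: derive the test-data directory from this file's
# location, so test discovery works regardless of pytest's working directory.
test_runner_tests_location = Path(__file__).parent / "test_runner_tests"

expected_tests = [
    f"{test_runner_tests_location}/soms_for_testing/som_test_{i}.som"
    for i in range(1, 4)
]
```

Because the prefix comes from `__file__` rather than a hard-coded repository root, the comparison no longer depends on where pytest was invoked.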
IntegrationTests/test_runner.py
Outdated
if filename:
    with open(filename, "r", encoding="utf-8") as file:
        yaml_file = yaml.safe_load(file)

    if yaml_file is not None:
        if "known_failures" in yaml_file.keys():
            external_vars.known_failures = yaml_file["known_failures"]
            if external_vars.known_failures is None:
                external_vars.known_failures = []
        else:
            external_vars.known_failures = []

        if "failing_as_unspecified" in yaml_file.keys():
            external_vars.failing_as_unspecified = yaml_file[
                "failing_as_unspecified"
            ]
            if external_vars.failing_as_unspecified is None:
                external_vars.failing_as_unspecified = []
        else:
            external_vars.failing_as_unspecified = []

        if "unsupported" in yaml_file.keys():
            external_vars.unsupported = yaml_file["unsupported"]
            if external_vars.unsupported is None:
                external_vars.unsupported = []
        else:
            external_vars.unsupported = []

        if "do_not_run" in yaml_file.keys():
            external_vars.do_not_run = yaml_file["do_not_run"]
            if external_vars.do_not_run is None:
                external_vars.do_not_run = []
        else:
            external_vars.do_not_run = []
    else:
        external_vars.known_failures = []
        external_vars.failing_as_unspecified = []
        external_vars.unsupported = []
        external_vars.do_not_run = []
This code seems to be much more complicated than needed.
Could you not just initialize the various fields to empty lists initially, and, if the file has the key you are looking for, read it?
Or perhaps even better, assuming that yaml_file is simply a dict: https://docs.python.org/3/library/stdtypes.html#dict.get
So, yaml_file.get("unsupported", []) just gives you the value, or [] if the key is not in the file.
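A minimal sketch of that suggestion (the helper name `load_test_exceptions` is hypothetical; the real code assigns to `external_vars` fields instead of returning a dict). Using `dict.get` collapses each if/else pair into one line; the extra `or []` also covers a key that is present but empty, which YAML parses as None:

```python
def load_test_exceptions(tags):
    """Normalize the four exception lists from a parsed YAML dict.

    `tags` may be None (an empty YAML file parses to None) and any key
    may map to None (key present but no entries); `or []` turns both
    cases into an empty list.
    """
    tags = tags or {}
    keys = ("known_failures", "failing_as_unspecified", "unsupported", "do_not_run")
    return {key: tags.get(key) or [] for key in keys}

print(load_test_exceptions(None))
print(load_test_exceptions({"unsupported": ["vector_tests"], "do_not_run": None}))
```

Note that a plain `tags.get(key, [])` would still return None for a present-but-empty key, so `or []` does the work of the original inner None checks.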
And another simplification: you can just use in on the dict; you don't need to look at keys() explicitly:
>>> d = {}
>>> 'foo' in d
False
>>> d['foo'] = 'bar'
>>> 'foo' in d
True
IntegrationTests/test_runner.py
Outdated
def assign_environment_variables(classpath, vm, test_exceptions, generate_report):
    """
    Assign the environment variables needed for execution based on the name
    classpath: String being the name of the variable CLASSPATH
    vm: String being the name of the variable VM
    test_exceptions: String being the name of the variable TEST_EXCEPTIONS
    generate_report: String being the name of the variable GENERATE_REPORT
    """
    # Work out settings for the application (they are labelled REQUIRED or OPTIONAL)
    if classpath not in os.environ:  # REQUIRED
        sys.exit("Please set the CLASSPATH environment variable")

    if vm not in os.environ:  # REQUIRED
        sys.exit("Please set the VM environment variable")

    if test_exceptions in os.environ:  # OPTIONAL
        external_vars.TEST_EXCEPTIONS = os.environ[test_exceptions]

    if generate_report in os.environ:  # OPTIONAL
        # Value is the location
        # Its presence in env variables signifies intent to save
        external_vars.GENERATE_REPORT = os.environ[generate_report]

    external_vars.CLASSPATH = os.environ[classpath]
    external_vars.VM = os.environ[vm]
Hm, I am not sure I see the benefit of having this as a function.
Also, passing in the constants on call seems to make this just harder to figure out.
My initial idea was to convert this to a method call to be able to test the logic; however, I've not thought of a good way of doing so, so it will probably revert to the old implementation.
…ed within IntegrationTests. Adjusted test_test_diccovery to use relative path names rather than hard coded
…st the som tests and with the argument -m tester which will run the test_runner tests located in test_tester
Hm, there's some problem with the current output. Looks like everything is turned lower case:
Output will be converted to lower case if case_sensitive is not set to true in the test file. If case sensitivity is imperative for a test, then it needs to be specified in the test file. The diff just reflects that, unless I am missing something in the output.
This pull request adds integration tests to the SOM standard library. These integration tests require two Python libraries to be installed: pytest and PyYAML.
The integration tests are based on the lang_tests of yksom.
Tests can be configured using environment variables:
EXECUTABLE= Location of the SOM executable
CLASSPATH= Classpath required for running
TEST_EXCEPTIONS= Exceptions yaml file for tests not expected to pass
DEBUG= Boolean for detailed run information on the go
GENERATE_REPORT= File location for a report to be generated at the end of the run
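Putting those variables together, a run might look like the following. This is a hypothetical invocation, not taken from the pull request: every path is a placeholder, and the exact pytest command line is an assumption.

```shell
# Hypothetical invocation; all paths below are placeholders.
EXECUTABLE=/path/to/som-vm \
CLASSPATH=core-lib/Smalltalk \
TEST_EXCEPTIONS=IntegrationTests/test_tags.yaml \
GENERATE_REPORT=report.yaml \
pytest IntegrationTests/
```

Omitting TEST_EXCEPTIONS and GENERATE_REPORT is fine, since both are optional; the two required variables cause the runner to exit with a message if unset.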