Add tests for nf-core subworkflows create-test-yml #2219

Merged, Mar 29, 2023 (11 commits; changes shown from 7 commits)

4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -29,6 +29,10 @@
- Use `nfcore/gitpod:dev` container in the dev branch ([#2196](https://github.com/nf-core/tools/pull/2196))
- Replace requests_mock with responses in test mocks ([#2165](https://github.com/nf-core/tools/pull/2165)).

### Bug fixes, maintenance and tests

- Add tests for `nf-core subworkflows create-test-yml` ([#2219](https://github.com/nf-core/tools/pull/2219))

## [v2.7.2 - Mercury Eagle Patch](https://github.com/nf-core/tools/releases/tag/2.7.2) - [2022-12-19]

### Template
4 changes: 2 additions & 2 deletions README.md
@@ -612,7 +612,7 @@ The graphical interface is organised in groups and within the groups the single

Now you can start to edit the parameter itself. The `ID` of a new parameter should be written in lowercase letters without whitespace. The description is a short free-text explanation of the parameter that appears when you run your pipeline with the `--help` flag. By clicking on the dictionary icon you can add a longer explanation for the parameter page of your pipeline; this usually contains a short paragraph about the parameter settings or about a data source that is used, such as databases or references. If you want to specify conditions for your parameter, such as a file extension, you can use the nut icon to open the settings. This menu depends on the `type` you assigned to your parameter: for integers you can define a min and max value, and for strings a file extension can be specified.

The `type` field is one of the most important points in your pipeline schema, since it defines the datatype of your input and how it will be interpreted. This allows extensive testing prior to starting the pipeline.

The basic datatypes for a pipeline schema are:

- `string`
- `number`
- `integer`
- `boolean`

For the `string` type you have three different options in the settings (nut icon): `enumerated values`, `pattern` and `format`. The first option, `enumerated values`, allows you to specify a list of accepted input values; the list entries have to be separated with a pipe. The `pattern` and `format` settings can depend on each other. The `format` has to be either a directory or a file path. Depending on the selected `format`, specifying the `pattern` setting can be the most efficient and time-saving option, especially for `file paths`. The `number` and `integer` types share the same settings. Similar to `string`, there is an `enumerated values` option, along with the possibility of specifying a `min` and `max` value. For the `boolean` type there are no further settings, and the default value is usually `false`. The `boolean` value can be switched to `true` by adding the flag to the command. This parameter type is often used to skip specific sections of a pipeline.
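
To make the settings described above concrete, here is a minimal sketch of how such parameters could appear in the `properties` block of a `nextflow_schema.json`, written as a Python dictionary and dumped to JSON. The parameter names and values are invented for illustration and are not part of this pull request:

```python
import json

# Hypothetical "properties" entries for a nextflow_schema.json, covering the
# settings described above; parameter names and values are made up.
example_properties = {
    "aligner": {
        "type": "string",
        "description": "Alignment tool to use.",
        "enum": ["bwa", "bowtie2", "star"],  # enumerated values (pipe-separated in the GUI)
    },
    "input": {
        "type": "string",
        "format": "file-path",
        "pattern": r"^\S+\.csv$",  # restrict the accepted file extension
        "description": "Path to the samplesheet.",
    },
    "max_cpus": {
        "type": "integer",
        "minimum": 1,
        "maximum": 64,
        "description": "Maximum number of CPUs per process.",
    },
    "skip_qc": {
        "type": "boolean",
        "default": False,  # switched to true by passing --skip_qc on the command line
        "description": "Skip the QC steps of the pipeline.",
    },
}

print(json.dumps(example_properties, indent=4))
```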

After filling the schema, click on the `Finished` button in the top right corner, this will automatically update your `nextflow_schema.json`. If this is not working, the schema can be copied from the graphical interface and pasted in your `nextflow_schema.json` file.

2 changes: 1 addition & 1 deletion nf_core/subworkflow-template/tests/main.nf
@@ -4,7 +4,7 @@ nextflow.enable.dsl = 2

include { {{ subworkflow_name|upper }} } from '../../../../subworkflows/{{ org }}/{{ subworkflow_dir }}/main.nf'

-workflow test_{{ subworkflow_name }} {
+workflow test_{{ component_name_underscore }} {
{% if has_meta %}
input = [
[ id:'test' ], // meta map
6 changes: 3 additions & 3 deletions nf_core/subworkflows/test_yml_builder.py
@@ -139,7 +139,7 @@ def scrape_workflow_entry_points(self):
if match:
self.entry_points.append(match.group(1))
if len(self.entry_points) == 0:
raise UserWarning("No workflow entry points found in 'self.module_test_main'")
raise UserWarning(f"No workflow entry points found in '{self.subworkflow_test_main}'")

def build_all_tests(self):
"""
@@ -195,7 +195,7 @@ def build_single_test(self, entry_point):
).strip()
ep_test["tags"] = [t.strip() for t in prompt_tags.split(",")]

ep_test["files"] = self.get_md5_sums(entry_point, ep_test["command"])
ep_test["files"] = self.get_md5_sums(ep_test["command"])

return ep_test
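
For orientation, the `ep_test` dictionary returned here has roughly the shape sketched below. Only the `command`, `tags`, `files` and `md5sum` keys are visible in this diff; the `name` and `path` keys and all values are assumptions for illustration:

```python
# Rough sketch of a single test entry built by build_single_test(); the
# "files" list is what get_md5_sums() returns for the results directory.
# Key names other than "command", "tags", "files" and "md5sum" are assumed,
# and all values are invented.
ep_test = {
    "name": "test_subworkflow",
    "command": "nextflow run ./tests/subworkflows/nf-core/test_subworkflow -entry test_test_subworkflow -c ./tests/config/nextflow.config",
    "tags": ["subworkflows", "subworkflows/test_subworkflow"],
    "files": [
        {"path": "output/tool/test.txt", "md5sum": "2191e06b28b5ba82378bcc0672d01786"},
    ],
}
```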

@@ -272,7 +272,7 @@ def create_test_file_dict(self, results_dir, is_repeat=False):

return test_files

-def get_md5_sums(self, entry_point, command, results_dir=None, results_dir_repeat=None):
+def get_md5_sums(self, command, results_dir=None, results_dir_repeat=None):
"""
Recursively go through directories and subdirectories
and generate tuples of (<file_path>, <md5sum>)
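
The docstring above describes the general idea behind `get_md5_sums()`; a minimal, self-contained sketch of such a recursive md5 walk (not the actual implementation in `test_yml_builder.py`) might look like this. Dict entries are used because the tests below index the result by `md5sum`:

```python
import hashlib
import os


def md5_walk_sketch(results_dir):
    """Walk results_dir recursively and return {"path": ..., "md5sum": ...} dicts,
    roughly what get_md5_sums() feeds into the generated test.yml."""
    test_files = []
    for root, _dirs, files in os.walk(results_dir):
        for name in sorted(files):
            file_path = os.path.join(root, name)
            md5 = hashlib.md5()
            with open(file_path, "rb") as fh:
                for chunk in iter(lambda: fh.read(8192), b""):
                    md5.update(chunk)
            test_files.append(
                {"path": os.path.relpath(file_path, results_dir), "md5sum": md5.hexdigest()}
            )
    return test_files
```
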
95 changes: 95 additions & 0 deletions tests/subworkflows/create_test_yml.py
@@ -0,0 +1,95 @@
import os
from unittest import mock

import pytest

import nf_core.subworkflows

from ..utils import with_temporary_folder


@with_temporary_folder
def test_subworkflows_custom_yml_dumper(self, out_dir):
"""Try to create a yml file with the custom yml dumper"""
yml_output_path = os.path.join(out_dir, "test.yml")
    meta_builder = nf_core.subworkflows.SubworkflowTestYmlBuilder(
        subworkflow="test/tool",
        directory=self.pipeline_dir,
        test_yml_output_path=yml_output_path,
        no_prompts=True,
    )
    meta_builder.test_yml_output_path = yml_output_path
    meta_builder.tests = [{"testname": "myname"}]
    meta_builder.print_test_yml()
    assert os.path.isfile(yml_output_path)


@with_temporary_folder
def test_subworkflows_test_file_dict(self, test_file_dir):
"""Create dict of test files and create md5 sums"""
meta_builder = nf_core.subworkflows.SubworkflowTestYmlBuilder(
subworkflow="test/tool",
directory=self.pipeline_dir,
test_yml_output_path="./",
no_prompts=True,
)
with open(os.path.join(test_file_dir, "test_file.txt"), "w") as fh:
fh.write("this line is just for testing")
test_files = meta_builder.create_test_file_dict(test_file_dir)
assert len(test_files) == 1
assert test_files[0]["md5sum"] == "2191e06b28b5ba82378bcc0672d01786"


@with_temporary_folder
def test_subworkflows_create_test_yml_get_md5(self, test_file_dir):
"""Get md5 sums from a dummy output"""
meta_builder = nf_core.subworkflows.SubworkflowTestYmlBuilder(
subworkflow="test/tool",
directory=self.pipeline_dir,
test_yml_output_path="./",
no_prompts=True,
)
with open(os.path.join(test_file_dir, "test_file.txt"), "w") as fh:
fh.write("this line is just for testing")
test_files = meta_builder.get_md5_sums(
command="dummy",
results_dir=test_file_dir,
results_dir_repeat=test_file_dir,
)
assert test_files[0]["md5sum"] == "2191e06b28b5ba82378bcc0672d01786"


def test_subworkflows_create_test_yml_entry_points(self):
"""Test extracting test entry points from a main.nf file"""
subworkflow = "test_subworkflow"
meta_builder = nf_core.subworkflows.SubworkflowTestYmlBuilder(
subworkflow=f"{subworkflow}/test",
directory=self.pipeline_dir,
test_yml_output_path="./",
no_prompts=True,
)
meta_builder.subworkflow_test_main = os.path.join(
self.nfcore_modules, "tests", "subworkflows", "nf-core", subworkflow, "main.nf"
)
meta_builder.scrape_workflow_entry_points()
assert meta_builder.entry_points[0] == f"test_{subworkflow}"


def test_subworkflows_create_test_yml_check_inputs(self):
"""Test the check_inputs() function - raise UserWarning because test.yml exists"""
cwd = os.getcwd()
os.chdir(self.nfcore_modules)
subworkflow = "test_subworkflow"
meta_builder = nf_core.subworkflows.SubworkflowTestYmlBuilder(
subworkflow=f"{subworkflow}",
directory=self.pipeline_dir,
test_yml_output_path="./",
no_prompts=True,
)
meta_builder.subworkflow_test_main = os.path.join(
self.nfcore_modules, "tests", "subworkflows", "nf-core", subworkflow, "main.nf"
)
with pytest.raises(UserWarning) as excinfo:
meta_builder.check_inputs()
os.chdir(cwd)
assert "Test YAML file already exists!" in str(excinfo.value)
7 changes: 7 additions & 0 deletions tests/test_subworkflows.py
@@ -93,6 +93,13 @@ def tearDown(self):
        test_subworkflows_create_nfcore_modules,
        test_subworkflows_create_succeed,
    )
    from .subworkflows.create_test_yml import (
        test_subworkflows_create_test_yml_check_inputs,
        test_subworkflows_create_test_yml_entry_points,
        test_subworkflows_create_test_yml_get_md5,
        test_subworkflows_custom_yml_dumper,
        test_subworkflows_test_file_dict,
    )
    from .subworkflows.info import (
        test_subworkflows_info_in_modules_repo,
        test_subworkflows_info_local,
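
Because these test functions are imported into the test class body (hence the `self` argument), they run as ordinary unittest methods. Assuming pytest and the repository's test dependencies are installed, one way to run just the new tests could be the following sketch:

```python
# Run only the new create-test-yml subworkflow tests via pytest's keyword filter.
import pytest

pytest.main(["tests/test_subworkflows.py", "-k", "create_test_yml or custom_yml_dumper or test_file_dict", "-v"])
```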