When prototyping new Nextflow pipelines I have run into the problem described here: long-running jobs are not cached but re-run, either for no apparent reason or because I refactored some code in the process definition. That is great behavior for production, but sometimes I just want Nextflow to treat all successfully completed jobs as cached and go ahead and run the downstream processes.
Would it be possible to add an "evil" mode where Nextflow reverts to something closer to Snakemake's behavior, for example using only timestamps and the return-code file to decide whether a process result can be treated as cached?
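To make the request concrete, here is a rough sketch of what such a relaxed mode might look like if it were exposed as a per-process directive. The `cache 'lenient'` value and the process shown are illustrative assumptions, not an existing feature being documented here:

```groovy
// Hypothetical sketch: a relaxed cache mode selectable per process.
// 'lenient' would mean: trust input file timestamps plus the task's
// .exitcode file, instead of re-hashing inputs and the script body.
process ALIGN {
    cache 'lenient'   // assumed directive value for this proposal

    input:
    path reads

    output:
    path 'aligned.bam'

    script:
    """
    bwa mem ref.fa ${reads} | samtools sort -o aligned.bam
    """
}
```

With something like this, refactoring the script block (or whitespace/comment changes) would no longer invalidate completed tasks on `-resume`; only a changed timestamp or a missing/non-zero exit-code file would trigger a re-run.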