Removes java options, MAX_RECORDS_IN_RAM and TMP_DIR
cgpu committed Nov 1, 2019
1 parent eb8615d commit eeb3078
Showing 1 changed file with 1 addition and 4 deletions: main.nf
@@ -744,12 +744,9 @@ process MarkDuplicates {
    script:
    markdup_java_options = task.memory.toGiga() > 8 ? params.markdup_java_options : "\"-Xms" + (task.memory.toGiga() / 2).trunc() + "g -Xmx" + (task.memory.toGiga() - 1) + "g\""
    """
-   gatk --java-options ${markdup_java_options} \
-   MarkDuplicates \
-   --MAX_RECORDS_IN_RAM 500000 \
+   gatk MarkDuplicates \
    --INPUT ${idSample}.bam \
    --METRICS_FILE ${idSample}.bam.metrics \
-   --TMP_DIR . \
    --ASSUME_SORT_ORDER coordinate \
    --CREATE_INDEX true \
    --OUTPUT ${idSample}.md.bam
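For reference, the heap-sizing ternary in the `script:` block above can be sketched in Python (a minimal illustration only, not part of the pipeline; the `default_opts` value stands in for `params.markdup_java_options` and is an assumption):

```python
def markdup_java_options(mem_giga, default_opts='"-Xms4000m -Xmx7g"'):
    """Mimic the Groovy ternary: above 8 GiB of task memory, use the
    configured default; otherwise derive -Xms/-Xmx from the allocation."""
    if mem_giga > 8:
        return default_opts
    # Groovy's (mem / 2).trunc() truncates toward zero; int() matches that here
    return '"-Xms{}g -Xmx{}g"'.format(int(mem_giga / 2), mem_giga - 1)

print(markdup_java_options(6))   # small task: derived heap flags
print(markdup_java_options(16))  # large task: configured default
```

Note the derived `-Xmx` leaves 1 GiB of the task's allocation as headroom for non-heap JVM memory.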

1 comment on commit eeb3078

@cgpu (Owner, Author) commented on eeb3078 on Nov 1, 2019


Removed those because I consistently get an out-of-memory error.

In the Somatic Variant Calling pipeline, the MarkDuplicates process looks like this:

process MarkDuplicates {
    tag "$bam_sort"
    container 'broadinstitute/gatk:latest'

    input:
    set val(shared_matched_pair_id), val(unique_subject_id), val(case_control_status), val(name), file(bam_sort) from bam_sort

    output:
    set val(name), file("${name}.bam"), file("${name}.bai"), val(shared_matched_pair_id), val(unique_subject_id), val(case_control_status) into bam_markdup_baserecalibrator, bam_markdup_applybqsr
    file ("${name}.bam.metrics") into markDuplicatesReport

    """
    gatk MarkDuplicates  \
    -I  ${bam_sort} \
    -O ${name}.bam \
    -M ${name}.bam.metrics \
    --CREATE_INDEX true  \
    --READ_NAME_REGEX null 
    """
}
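To sanity-check what Nextflow would interpolate into that script block outside the pipeline, the command can be rendered with a small helper (hypothetical, for illustration; the sample names are made up):

```python
def render_markduplicates_cmd(name, bam_sort):
    """Render the MarkDuplicates command as the process's script block
    would after Nextflow substitutes ${name} and ${bam_sort}."""
    return (
        "gatk MarkDuplicates "
        "-I {bam} -O {name}.bam -M {name}.bam.metrics "
        "--CREATE_INDEX true --READ_NAME_REGEX null"
    ).format(bam=bam_sort, name=name)

print(render_markduplicates_cmd("sampleA", "sampleA.sorted.bam"))
```

Setting `--READ_NAME_REGEX null` disables optical-duplicate detection in MarkDuplicates, which avoids parsing read names and can noticeably speed up the run.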

Many huge BAMs ran successfully in one hour:

🤞 this will work