
Snakemake Getting Checkpoint and Aggregate Function to Work


I'm having an issue getting my Snakemake aggregate rule to work. My goal is to take a given GTF file, look for separate regions within it, and, if any are found, write each region to its own file. Thus I don't know in advance how many output GTF files each input GTF file will create. To solve this problem I'm attempting to use a Snakemake checkpoint.

To do this I wrote a brief script called collapse_gtf_file.py, which takes in a GTF file and generates N files corresponding to the number of individual regions found. So given the file test_regions.gtf with three regions, it would generate test_regions_1.gtf, test_regions_2.gtf, and test_regions_3.gtf respectively.
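For context, a minimal sketch of what such a splitter script could look like (this is an assumption, not the asker's actual collapse_gtf_file.py; here "region" is taken to mean a distinct seqname in column 1, and the function name `split_gtf` is hypothetical):

```python
# Hypothetical sketch of a GTF splitter: group lines by seqname (column 1)
# and write each group to <out_prefix>_<n>.gtf, mirroring the naming scheme
# described above (test_regions_1.gtf, test_regions_2.gtf, ...).
def split_gtf(gtf_path, out_prefix):
    regions = {}  # seqname -> lines, preserving input order
    with open(gtf_path) as fh:
        for line in fh:
            if line.startswith("#") or not line.strip():
                continue  # skip headers and blank lines
            seqname = line.split("\t", 1)[0]
            regions.setdefault(seqname, []).append(line)
    written = []
    for n, lines in enumerate(regions.values(), start=1):
        out_path = f"{out_prefix}_{n}.gtf"
        with open(out_path, "w") as out:
            out.writelines(lines)
        written.append(out_path)
    return written
```

The real script presumably groups by whatever "region" means in this pipeline; the key point is only that the number of outputs is data-dependent, which is what forces the checkpoint.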

After this separation, all GTF files should be converted to FASTA files and then aggregated.

However, I have not been able to get my checkpoint to work. I can get the documentation's example cases to work, yet when I try to build a larger pipeline around this checkpoint it breaks.

So far I've tried following the checkpoint tutorial found here: https://snakemake.readthedocs.io/en/stable/snakefile/rules.html#dynamic-files

sample=config["samples"]
reference=config["reference"]

rule all:
    input:
        expand("string_tie_assembly/{sample}.gtf", sample=sample),
        expand("string_tie_assembly_merged/merged_{sample}.gtf", sample=sample),
        #expand("split_gtf_file/{sample}", sample=sample),
        #expand("lncRNA_fasta/{sample}.fa", sample=sample),
        "combined_fasta/all_fastas_combined.fa",
        "run_CPC2/cpc_calc_output.txt"

rule samtools_sort:
    input:
        "mapped_reads/{sample}.bam"
    output:
        "sorted_reads/{sample}.sorted.bam"
    shell:
        "samtools sort -T sorted_reads/{wildcards.sample} {input} > {output}"

rule samtools_index:
    input:
        "sorted_reads/{sample}.sorted.bam"
    output:
        "sorted_reads/{sample}.sorted.bam.bai"
    shell:
        "samtools index {input}"

rule generate_fastq:
    input:
        "sorted_reads/{sample}.sorted.bam"
    output:
        "fastq_reads/{sample}.fastq"
    shell:
        "samtools fastq {input} > {output}"

rule string_tie_assembly:
    input:
        "sorted_reads/{sample}.sorted.bam"
    output:
        "string_tie_assembly/{sample}.gtf"
    shell:
        "stringtie {input} -f 0.0 -a 0 -m 50 -c 3.0 -o {output}"

rule merge_gtf_file_features:
    input:
        "string_tie_assembly/{sample}.gtf"
    output:
        "string_tie_assembly_merged/merged_{sample}.gtf"
    #prevents errors when there's no sequence
    shell:
        """
        set +e
        stringtie --merge -o {output} -m 25 -c 3.0 {input}
        exitcode=$?
        if [ $exitcode -eq 1 ]
        then
            exit 0
        else
            exit 0
        fi
        """

#This is where the issue seems to arise. Modeled after https://snakemake.readthedocs.io/en/stable/snakefile/rules.html#dynamic-files

checkpoint clustering:
    input:
        "string_tie_assembly_merged/merged_{sample}.gtf"
    output:
        clusters = directory("split_gtf_file/{sample}")
    shell:
        """
        mkdir -p split_gtf_file/{wildcards.sample}
        python collapse_gtf_file.py -gtf {input} -o split_gtf_file/{wildcards.sample}/{wildcards.sample}
        """

rule gtf_to_fasta:
    input:
        "split_gtf_file/{sample}/{sample}_{i}.gtf"
    output:
        "lncRNA_fasta/{sample}/canidate_{sample}_{i}.fa"
    wildcard_constraints:
        i=r"\d+"
    shell:
        "gffread -w {output} -g {reference} {input}"


def aggregate_input(wildcards):
    checkpoint_output = checkpoints.clustering.get(**wildcards).output[0]
    x = expand("lncRNA_fasta/{sample}/canidate_{sample}_{i}.fa",
        sample=wildcards.sample,
        i=glob_wildcards(os.path.join(checkpoint_output, "{i}.fa")).i)
    return x

rule combine_fasta_file:
    input:
        aggregate_input
    output:
        "combined_fasta/all_fastas_combined.fa"
    shell:
        "cat {input} > {output}"


Error Message:

InputFunctionException in combine_fasta_file:
WorkflowError: Missing wildcard values for sample
Wildcards:

This seems to indicate that something is wrong with the way I've used wildcards in the aggregate function above, but I cannot figure out what. Any pointers would be very much appreciated.


Solution

  • The function aggregate_input expects wildcards.sample, but rule combine_fasta_file has no wildcards in its output, so Snakemake has no sample value to pass to the input function. You may want to either introduce the wildcard into that rule's output or refactor the function to use a global variable, if applicable.
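A minimal sketch of the first option, assuming a per-sample combined FASTA is acceptable (the output path combined_fasta/{sample}_combined.fa is a hypothetical naming choice, not from the original pipeline): give combine_fasta_file a {sample} wildcard in its output so Snakemake can fill wildcards.sample when it calls aggregate_input, and let rule all drive the per-sample expansion:

```
# Sketch: per-sample aggregation so wildcards.sample is defined.
# The output name "combined_fasta/{sample}_combined.fa" is hypothetical.
rule combine_fasta_file:
    input:
        aggregate_input
    output:
        "combined_fasta/{sample}_combined.fa"
    shell:
        "cat {input} > {output}"

# rule all then requests one combined file per sample, which is what
# propagates a concrete value for the {sample} wildcard down the DAG:
rule all:
    input:
        expand("combined_fasta/{sample}_combined.fa", sample=sample)
```

If a single file combining every sample is required, the per-sample outputs can be concatenated in one further wildcard-free rule.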