Sbatch fails due to parameters not being provided (though they are) #153
-
Hello,
Executing the following code to run metabat:
I noticed the following in the nohup.out:
I find this strange, as I've already specified this in the parameters. Also, does metaGEM submit the metabat job to the server using sbatch? Something else I noticed after executing this step is that the jobs don't appear to have been submitted: when I check with the showq or uqueue command, the jobs aren't there. Your help would be appreciated as always. Thank you.
Replies: 8 comments 3 replies
-
Hey Young,
Line 427 in 57cfd60
Could you check what version of snakemake you are using? It should be >=5.10.0,<5.31.1. Also, could you show me what your config files look like? It looks like your wildcards are probably not expanding properly based on the dataset folder.
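In case it helps, here is a small sketch for checking whether an installed version falls in that range. The hard-coded version string is just an example; in practice substitute the output of `snakemake --version`.

```shell
# Check that a snakemake version falls in the expected range
# (>=5.10.0, <5.31.1). Requires GNU sort for the -V version sort.
ver="5.31.0"   # example value; use: ver=$(snakemake --version)

in_range() {
    # True if $1 >= $2 and $1 < $3, comparing as version strings.
    lo=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
    hi=$(printf '%s\n%s\n' "$1" "$3" | sort -V | head -n1)
    [ "$lo" = "$2" ] && [ "$hi" = "$1" ] && [ "$1" != "$3" ]
}

if in_range "$ver" "5.10.0" "5.31.1"; then
    echo "snakemake $ver is in the supported range"
else
    echo "snakemake $ver is OUTSIDE the supported range"
fi
```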
-
Hi Francisco,
Apologies for the delayed response. Our server was shut down for maintenance yesterday. The version of snakemake I have at the moment is 5.31.0. Here is my config.yaml:
-
Hi Francisco,
Yes, I have the fastq and the contig files in a subfolder within the dataset/ folder. Within /home/bioinfo_tools/bin/metaGEM, I have the following folders:
-
Hi Francisco,
Thank you for reaching out, and no worries about the delay. It's actually good timing, given that our server here has suffered some technical issues and we are waiting to have it fixed. I'll definitely give this a go and will post updates as soon as they become available. Thanks for your help so far; it is much appreciated.
-
Hi Francisco,
I just tried what you previously recommended. The issue that I described seems to have been resolved, but now I am getting different issues:
In my dataset folder, I have a subfolder that contains the contigs and the reads files. The reads files are both in .fastq.gz format, while the contig file is a nucleotide fasta file (.fna). Should these files be in the dataset folder?
Thank you,
Young
-
Hi Francisco,
I think we are nearly there. I received an sbatch error as shown below:
It appears that the input files are now being read correctly.
Right now this is the command I am executing:
But I saw in the nohup output that there is a total of four jobs. So does the '-j 1' need to be changed to '-j 4'?
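From what I understand, `-j` caps how many jobs may run concurrently rather than the total job count, so all four jobs should still run with `-j 1`, just one at a time. A rough plain-shell analogy of that behavior using `xargs -P`:

```shell
# Like Snakemake's -j, the -P flag of xargs limits how many jobs run at
# once; every job in the list still runs eventually, regardless of the cap.
printf '%s\n' job1 job2 job3 job4 | xargs -P 1 -I{} echo ran {}
```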
-
Hi Francisco,
I think I may have found a source of the sbatch issue:
Is cluster.n supposed to be the number of nodes? If so, I think the -n should be -N or --nodes (in sbatch, -n is the number of tasks, which is already set by --ntasks). There were other similar lines, so I replaced these instances and then executed the code. The job submission was successful (though execution was not, which I will address in a new discussion topic).
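For reference, the distinction as I understand it from the sbatch man page, with a hypothetical `--cluster` template string; the cluster-config field names other than `cluster.n` are made up for illustration:

```shell
# From the sbatch man page:
#   -n, --ntasks : number of tasks
#   -N, --nodes  : number of nodes
# If cluster.n means tasks/cores, the long unambiguous flag would be:
CLUSTER_CMD='sbatch -A {cluster.account} --ntasks {cluster.n} -t {cluster.time}'
echo "$CLUSTER_CMD"
```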
-
Please read the slurm documentation to better understand what each flag means. Of course you can modify the sbatch call to suit your needs, but the way that the sbatch call is configured is to submit …
Hi Young,
Great that we are making some progress!
Just FYI, `metaGEM` looks at the subfolders in the `/dataset` folder in order to expand the wildcards used by Snakemake. `metaGEM` expects that you will start from the raw fastq files, which are taken from each aforementioned subfolder, quality filtered, and stored in subfolders under `/qfiltered`. If you have already generated any results without `metaGEM`, you just need to name and store them as expected by the corresponding Snakefile rule (example below). Next, `metaGEM` looks for files in the `/qfiltered` folder in order to assemble them and deposit the contigs in subfolders under `/assemblies`, and so on for the rest of the pipeline. You get …
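To make that concrete, here is a sketch of the expected layout. The sample names and the read-file naming are illustrative placeholders; the exact filename pattern is defined by the corresponding Snakefile rule.

```shell
# Build a mock of the folder structure described above in a temp dir.
root=$(mktemp -d)
for s in sample1 sample2; do
    mkdir -p "$root/dataset/$s"       # raw paired-end reads, one subfolder per sample
    : > "$root/dataset/$s/${s}_R1.fastq.gz"
    : > "$root/dataset/$s/${s}_R2.fastq.gz"
    mkdir -p "$root/qfiltered/$s"     # quality-filtered reads end up here
    mkdir -p "$root/assemblies/$s"    # assembled contigs end up here
done
find "$root" -mindepth 1 -type d | sed "s|$root/||" | sort
```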