Comparing millions of trimmed reads to a large database

Hi all,

I have a set of reads which I've trimmed down to 21nt based on the sequencing experiment. I'd like to compare these 21nt sequences against a database of 300,000 21nt sequences to annotate each read. I attempted to use bowtie2 by building an index for the database and then mapping the reads against it, but the mapping rate was lower than expected, suggesting that bowtie2's read-mapping approach isn't well suited to this kind of comparison.
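For reference, a minimal sketch of the index-and-map pattern I mean (file names are placeholders, and the --norc and --score-min settings are my guess at how to force exact end-to-end matches, not necessarily what I originally ran):

```python
import subprocess

# Build a bowtie2 index over the 300,000 reference 21-mers (db.fa is a placeholder).
subprocess.run(["bowtie2-build", "db.fa", "db_index"], check=True)

# Map the trimmed 21nt reads. In --end-to-end mode a perfect alignment scores 0,
# so --score-min L,0,0 rejects anything with a mismatch or gap; --norc skips
# reverse-complement alignments.
subprocess.run([
    "bowtie2", "--end-to-end", "--norc",
    "-L", "21",              # seed length no longer than the 21nt reads
    "--score-min", "L,0,0",  # accept only perfect-score (exact) alignments
    "-x", "db_index",
    "-U", "reads.fq",        # placeholder reads file
    "-S", "hits.sam",
], check=True)
```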

Next I tried blastn, but it is far too slow for a comparison at this scale.

Can someone please recommend a tool or approach suited to making this many exact comparisons?
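Since both the reads and the database entries are fixed-length 21-mers, even a plain hash lookup seems like it should work; here is a minimal Python sketch of the kind of exact comparison I mean (file names and the single-line-per-record FASTA layout are assumptions):

```python
# Exact-match sketch: load the 300,000 database 21-mers into a dict,
# then stream the reads and annotate each one by O(1) lookup.

def read_fasta(path):
    """Yield (header, sequence) pairs from a single-line-per-record FASTA."""
    header, seq = None, []
    with open(path) as fh:
        for line in fh:
            line = line.rstrip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(seq)
                header, seq = line[1:], []
            else:
                seq.append(line.upper())
        if header is not None:
            yield header, "".join(seq)

# db.fa and reads.fa are placeholder file names; 300,000 21-mers
# is only a few tens of MB, so the whole database fits in memory.
db = {seq: name for name, seq in read_fasta("db.fa")}

with open("annotations.tsv", "w") as out:
    for read_name, read_seq in read_fasta("reads.fa"):
        hit = db.get(read_seq, "unannotated")
        out.write(f"{read_name}\t{hit}\n")
```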

Thanks


Tags: bowtie2, blastn
