State of sequencing technology in 2010

Dan Koboldt has a very nice recap of the various sequencing technologies presented at last week's Advances in Genome Biology and Technology meeting. I totally agree with his central point:

Something had been bothering me about the sequencing-company presentations this year, and I finally realized what it was. During AGBT 2009, every player was gunning to take over the world. This year it seems like every sequencing platform has a niche in mind.

The recent proliferation of sequencing technologies - each with its own characteristic profile of strengths and weaknesses - has been bewildering, especially given the excessive hype being sprayed around as companies seek to raise venture capital and drown out their competitors. However, I think Dan's right that the market is now openly segmenting as each platform seeks out the applications that best fit its strength/weakness profile.

As one notable example, it's now very clear that the third-generation single-molecule sequencing technology developed by Pacific Biosciences - originally touted as a replacement for second-generation platforms - will be restricted to niche applications for the foreseeable future, thanks to its low yield and high error rate: rapid confirmation of variants discovered by another technology, and supplementing second-gen sequencing in the assembly of novel genomes.

Anyway, if you're interested in how the sequencing field is starting to play out, go and read Dan's post.

Comments

I doubt PacBio will find much use for validating small variants found with other platforms; why would you use a less reliable system to validate a more reliable one?

On the other hand, in addition to being useful in sequencing novel genomes (or cleaning up previously sequenced ones), the very long read technologies -- even with high point substitution & indel rates -- could be very useful for elucidating detailed structural variation in human (both normal and cancer) and other well-studied genomes. It's clear that many of the processes generating structural variation make reference-guided alignment problematic (because the breakpoint regions may have stretches looking nothing like the reference), so you end up doing assembly -- and having long reads will be valuable for that.
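To make that breakpoint problem concrete, here's a minimal sketch in Python (entirely hypothetical toy sequences, with exact substring matching standing in for a real aligner): a read that falls wholly inside novel inserted sequence has nothing to align to in the reference, while a read long enough to span the event anchors in the flanks on both sides.

```python
# Toy sequences (entirely made up) illustrating why breakpoints defeat
# reference-guided alignment: the sample carries an insertion of novel
# sequence that appears nowhere in the reference.
reference = "ACGTACGGTTCAGCAAGGTCCTAAGGCATTCGA"
novel = "TTTTGGGGCCCCAAAA"  # inserted sequence, absent from the reference
sample = reference[:16] + novel + reference[16:]

def maps_to_reference(read: str) -> bool:
    """Crude stand-in for an aligner: exact substring lookup."""
    return read in reference

short_read = sample[18:26]  # falls entirely inside the insertion
long_read = sample[8:40]    # spans flank -> insertion -> flank

print(maps_to_reference(short_read))     # False: nothing to align to
print(maps_to_reference(long_read))      # False as one contiguous hit
print(maps_to_reference(long_read[:8]),  # True: left flank anchors...
      maps_to_reference(long_read[-8:])) # True: ...and so does the right
```

The long read still fails as a single contiguous alignment, but because both of its ends map uniquely it pins down where the novel sequence sits - which is exactly the signal that short reads trapped inside the insertion can't provide without assembly.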

Hi Keith,

I was also skeptical about the applications for validation, but at least one major genome facility is already doing just that. I gather the plan is to pull down fragments spanning a whole set of candidate variants, circularise them, and then do multiple-pass sequencing (i.e. rolling circle) with PacBio. It looks like the errors in PacBio are almost exclusively randomly distributed indels, so if you get five-pass coverage of a given fragment, your error rate will be pretty low - and importantly for validation, the PacBio error mode is entirely orthogonal to the dominant error mode in Illumina or SOLiD.
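As a rough back-of-the-envelope check on that multi-pass logic, here's a toy calculation in Python. It treats each pass as an independent per-base miscall with probability p and takes a simple majority vote; the ~15% single-pass error rate is my illustrative assumption, and real circular-consensus calling has to handle indels via alignment rather than per-base voting, so this is only a sketch of how quickly random errors wash out.

```python
from math import comb

def consensus_error(p: float, passes: int) -> float:
    """Probability a majority-vote consensus is wrong at one base,
    assuming each pass independently miscalls with probability p
    and counting ties as errors to stay conservative."""
    return sum(
        comb(passes, k) * p**k * (1 - p) ** (passes - k)
        for k in range(passes + 1)
        if 2 * k >= passes
    )

# Assumed ~15% raw single-pass error rate, for illustration only.
for n in (1, 3, 5, 10):
    print(f"{n:>2} passes: {consensus_error(0.15, n):.4f}")
```

Under that model the per-base consensus error drops from 15% with one pass to about 6% with three, under 3% with five, and around 1% with ten - consistent with the intuition that five-pass coverage gets you pretty low error on randomly distributed miscalls.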

And yes, you're absolutely right about structural variants; the strobe reads may well prove useful for resolving these. However, I'm holding off on getting too excited about this until I've seen some raw data from the platform.