Sanger Sequencing Turns 40: Retrospectives and Perspectives on DNA Sequencing Technologies

Retrospective: What have we accomplished with DNA sequencing so far?

Sanger wasn’t the first person to attempt sequencing, but before his classic method was invented, the process was painfully slow and cumbersome. Gilbert and Maxam, for example, sequenced 24 bases of the lactose-repressor binding site by copying it into RNA and sequencing that RNA, a project that took a total of 2 years [1]!

Sanger’s method made the process much more efficient. Original Sanger sequencing took a ‘sequencing by synthesis’ approach, setting up four extension reactions, each containing a radioactive label and a different chain-terminating (dideoxy) nucleotide, to identify which base lay at each position along a DNA fragment. When each of those reactions was run out on a gel, it became relatively simple to read off the sequence of the DNA fragment (see Figure 1) [2].
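
If you like to think in code, the logic of reading a chain-termination gel can be captured in a few lines of Python. This is purely a toy illustration (the sequence below is invented, and the chemistry is glossed over entirely), but it shows why four terminator-specific reactions plus fragment lengths are enough to spell out a sequence:

```python
# Toy illustration of the chain-termination logic: four reactions, each
# terminating at one base, are read out by fragment length (shortest band
# at the bottom of the gel). The template sequence is invented.

template = "GATTACAGCT"   # the strand we want to read

# Each "reaction" terminates wherever its dideoxy base is incorporated,
# producing fragments whose lengths mark the positions of that base.
reactions = {
    base: [i + 1 for i, b in enumerate(template) if b == base]   # fragment lengths
    for base in "ACGT"
}

# "Reading the gel": sort all fragments by length and note which lane
# (i.e. which terminator) each band came from.
bands = sorted((length, base) for base, lengths in reactions.items() for length in lengths)
called_sequence = "".join(base for _, base in bands)

print(called_sequence)    # GATTACAGCT, matching the template
assert called_sequence == template
```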


Figure 1: Gel from the paper that originally described Sanger sequencing.

Of course, refinements have been made to the process since then. We now label each of the four chain terminators with a different fluorescent dye, which allows the same readout with only one extension reaction instead of four, greatly simplifying the protocol. Sanger received his second Nobel Prize for his sequencing method in 1980 (well-deserved, considering it is still used today).

An early version of the Human Genome Project (HGP) began not long after, in 1987. The project was created by the United States Department of Energy, which was interested in obtaining a better understanding of the human genome and how to protect it from the effects of radiation. A more formalized version of this project was approved by Congress in 1988, and a five-year plan was submitted in 1990 [3]. The basic protocol for the HGP emerged as follows: large DNA fragments were cloned into bacterial artificial chromosomes (BACs), which were then fragmented, size-selected, and sub-cloned. The purified DNA was used for Sanger sequencing, and the individual reads were then assembled based on the overlaps between them.
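
The ‘assemble by overlap’ step is easier to appreciate with a toy example. The sketch below is a naive greedy merger written in Python, with invented reads and an arbitrary minimum-overlap cutoff; the assemblers actually used during the HGP were far more sophisticated, handling sequencing errors, repeats, and quality scores:

```python
# Toy greedy assembly: repeatedly merge the pair of reads with the longest
# suffix-prefix overlap. Only illustrates the "assemble by overlap" idea;
# reads and the minimum-overlap cutoff are invented.

def overlap(a, b, min_len=3):
    """Length of the longest suffix of `a` that is a prefix of `b` (>= min_len), else 0."""
    for length in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:length]):
            return length
    return 0

def greedy_assemble(reads):
    reads = list(reads)
    while len(reads) > 1:
        best_len, best_i, best_j = 0, None, None
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j and overlap(a, b) > best_len:
                    best_len, best_i, best_j = overlap(a, b), i, j
        if best_len == 0:
            break                                  # no overlaps left to merge
        merged = reads[best_i] + reads[best_j][best_len:]
        reads = [r for k, r in enumerate(reads) if k not in (best_i, best_j)] + [merged]
    return max(reads, key=len)

# Invented reads covering one short locus
print(greedy_assemble(["GATTACAGGT", "ACAGGTTCAA", "GTTCAAGCTA"]))
# GATTACAGGTTCAAGCTA
```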

Given how large the human genome is, and the limitations of Sanger sequencing, it quickly became apparent that more efficient technologies were necessary, and indeed, a significant part of the HGP was dedicated to creating them. Several advancements in both wet-lab protocols and data analysis pipelines were made during this time, including the advent of paired-end sequencing and the automation of quality metrics for base calls.

Due to the relatively short length of the reads produced, the highly repetitive parts of the human genome (such as centromeres, telomeres and other areas of heterochromatin) remained intractable to this sequencing method. Despite this, a draft of the human genome was published in 2001, with a finished sequence following in 2004, all for the low, low cost of $2.7 billion.

Since then, there have been many advancements to the process of DNA sequencing, but one of the most important is multiplexing. Multiplexing involves tagging each sample with its own DNA barcode, which allows us to sequence multiple samples in one reaction, vastly increasing the amount of data we can obtain per sequencing run. Interestingly, the most frequently used next-generation sequencing method today (the Illumina platforms) still relies on the basic principle behind Sanger sequencing (detection of fluorescently labelled nucleotides), combined with multiplexing and a process called bridge amplification, to sequence hundreds of millions of reads per run.
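
Conceptually, demultiplexing is simple: look at the barcode attached to each read and route the read to the matching sample. Here is a minimal Python sketch of that idea; the barcodes, sample names, and one-mismatch tolerance are all invented for illustration, and production demultiplexers handle far more (dual indices, quality filtering, and so on):

```python
# Minimal demultiplexing sketch: assign each read to a sample by its barcode.
# Barcodes, sample names and the one-mismatch allowance are invented.

from collections import defaultdict

SAMPLE_BARCODES = {            # hypothetical 6 nt indices
    "sample_A": "ACGTAC",
    "sample_B": "TGCATG",
    "sample_C": "GATCGA",
}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def assign_sample(index_read, max_mismatches=1):
    """Return the sample whose barcode best matches the index read, or 'undetermined'."""
    best_sample, best_dist = "undetermined", max_mismatches + 1
    for sample, barcode in SAMPLE_BARCODES.items():
        d = hamming(index_read, barcode)
        if d < best_dist:
            best_sample, best_dist = sample, d
    return best_sample

# Toy index reads, e.g. parsed from the headers of a multiplexed FASTQ file
reads_by_sample = defaultdict(list)
for read_id, index in [("read1", "ACGTAC"), ("read2", "TTCTTG"), ("read3", "GATCGA")]:
    reads_by_sample[assign_sample(index)].append(read_id)

print(dict(reads_by_sample))
# {'sample_A': ['read1'], 'undetermined': ['read2'], 'sample_C': ['read3']}
```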


Figure 2: Cost of WGS has decreased faster than we could have imagined.

Rapid advancements in genome sequencing since 2001 have greatly decreased its cost, as you can see in Figure 2 [4]. We are quickly approaching the point where a human genome can be sequenced for less than $1,000.

What are we doing with sequencing today?

Since the creation of next-generation DNA sequencing, scientists have continued to utilize this technology in increasingly complex and exciting ways. RNA-seq, which involves isolating RNA from an organism, converting it into cDNA, and sequencing the resulting cDNA, was invented shortly after next-generation sequencing itself and has since become a staple of the molecular biology and genetics fields. ChIP-seq, Ribo-seq, RIP-seq, and methyl-seq followed and have all become standard experimental protocols as well. In fact, as expertly put by Shendure et al. (2017), ‘DNA sequencers are increasingly to the molecular biologist what a microscope is to the cellular biologist–a basic and essential tool for making measurements. In the long run, this may prove to be the greatest impact of DNA sequencing.’ [5] In my own experience, using these methods in ways that complement each other (like cross-referencing ChIP-seq or Ribo-seq data with RNA-seq data) can produce some of the most exciting scientific discoveries.


Figure 3: Model of the MinION system.

Although Illumina sequencing still reigns supreme on the market, there are some up-and-coming competitors as well. Of great interest is the MinION from Oxford Nanopore Technologies (ONT). The MinION provides something the Illumina platforms lack: the ability to read long stretches of DNA, which is of enormous value when sequencing through highly repetitive regions. The MinION works via a process called nanopore sequencing, in which a voltage is applied across hundreds of small protein pores. At the top of each pore sits an enzyme that processively unwinds DNA and feeds it down through the pore, causing disruptions in the ionic current that can be measured and decoded at the nucleotide level (see Figure 3) [6]. These reads can span thousands of base pairs, orders of magnitude longer than Illumina reads, which greatly simplifies genome assembly. Other options for long-read sequencing include the PacBio systems from Pacific Biosciences.

Like any new technology, there have been setbacks. The accuracy of early MinION flow cells was low compared with Illumina, and so was their output. Although these issues have largely been addressed, the MinION still trails the Illumina platforms in the market, which are seen as more reliable and better characterized. However, the MinION has several advantages that could lead to wider adoption in the future: for one, it literally fits in the palm of your hand, making it far more practical for people like infectious disease researchers, who are in desperate need of sequencing capabilities in remote locales. It’s fast as well; in one example, a researcher in Australia was able to identify antibiotic resistance genes in cultured bacteria in 10 hours [7], a feat that couldn’t have been imagined until very recently. This kind of technology could easily be used in hospitals to help identify appropriate patient treatments, hopefully within a few years.

Although we are not yet able to use sequencing technology routinely for medical treatment, there are a few areas where this is already happening. Detecting Down’s syndrome in a fetus during pregnancy used to require a much more invasive procedure, but with improvements in sequencing technology, new screens have been developed that detect fetal chromosomal abnormalities from cell-free DNA circulating in the maternal blood [8]. Millions of women have already benefitted from this improved screen.

Perspective: What does the future of DNA sequencing hold?

As the Chinese poet Lao Tzu said, ‘Those who have knowledge, don’t predict’, and that’s as true as ever when it comes to DNA sequencing technology. We’re capable today of things we couldn’t even have dreamed of 40 years ago, so who knows where we’ll be in the next 40 years?

But as a scientist, I’ve always enjoyed making educated guesses, so here are a few predictions about what the future might hold.

Clinical applications: I’ve never been a fan of the term personalized medicine, since it implies that one day doctors will be able to design individual treatments for each patient’s specific illness. I find this scenario unlikely (at least in the near future), because even though the cost and time of DNA sequencing have decreased by astonishing amounts, it is still expensive and time-consuming enough that it is unlikely to be of great use for routine clinical applications (to say nothing of the cost and time of developing new drug regimens). However, I have high hopes for the future of precision medicine, particularly in cancer treatment. Although we may never be able to design the perfect drug targeted to one individual’s cancer, we can certainly create drugs that are designed to interact with the mutations we frequently observe in cancers. This could allow for a more individualized drug regimen for patients. Given that cancer is a disease with such extremely wide variation, we will almost certainly need to start taking a more targeted approach to its treatment, and genome sequencing will be of great benefit to us in this regard.

A fully complete human genome: As I mentioned previously, one drawback of Illumina sequencing is that it cannot read across highly repetitive regions, and unfortunately, large swaths of the human genome are highly repetitive. As such, while we have what is very close to a complete human genome, we do not yet have the full telomere-to-telomere sequence. However, with the new long-read technologies that are currently being implemented, the day when this is completed is likely not far off.

A complete tapestry of human genetic variation: Millions of people have already had their genomes sequenced to some degree (I’m one of them! Any others?), and millions more are sure to come. Widespread genome re-sequencing could one day give us a catalog of virtually every genetic variant present in the human population, which could allow for an even greater understanding of the connection between our genetics and specific traits.

Faster and better data analysis: Data analysis is probably the biggest bottleneck we’re currently experiencing when it comes to DNA sequencing. There is what seems like an infinite amount of data out there and, unfortunately, a finite number of people who are capable of and interested in analyzing it. As these technologies mature, new and better data analysis pipelines will be created, speeding up analysis and deepening our understanding of the data. Hopefully, one day even scientists with only moderate technical savvy will be capable of performing their own data analysis.

I’m certain the future of DNA sequencing will also hold things that I can’t even imagine. It’s an amazing time to be a scientist right now, as researchers are continuously discovering new technologies, and finding ways to put our current technologies to even more interesting uses.

What do you think the next big thing in DNA sequencing will be? Tell us in the comments!

RIN Numbers: How they’re calculated, what they mean and why they’re important

High-quality sequencing data is essential for ensuring that your results are reliable and replicable, and obtaining high-quality sequencing data means starting with high-quality material. For RNA-seq, this means using RNA with a high RIN (RNA Integrity Number), a standardized score from 1 to 10 that tells researchers how intact their RNA is, removing individual bias and interpretation from the assessment.

The RIN is a significant improvement over the way RNA integrity was previously assessed: the 28S:18S ratio. Because the 28S rRNA is approximately 5 kb and the 18S rRNA approximately 1.9 kb, and the two are present in roughly equimolar amounts, the ideal 28S:18S mass ratio works out to about 2.7:1, although roughly 2:1 is considered the practical benchmark. However, this measurement relies on the assumption that the integrity of rRNA (a very stable molecule) linearly reflects the quality of the mRNA, which is much less stable and experiences higher turnover [1].
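
As a quick illustration of the arithmetic, here is how the 28S:18S ratio would be computed from electropherogram peak areas. The peak areas and the 2:1 cutoff below are invented for the example, and the RIN algorithm itself is proprietary to Agilent and considers far more than these two peaks:

```python
# Toy 28S:18S calculation from electropherogram peak areas.
# Peak areas are invented; real instruments report these values automatically.

peak_area_28s = 54.0   # arbitrary fluorescence units
peak_area_18s = 26.0

ratio = peak_area_28s / peak_area_18s
print(f"28S:18S = {ratio:.2f}")           # ~2.08, around the 2:1 benchmark

# Informal rule of thumb only (not a standard):
if ratio >= 2.0:
    print("rRNA peaks look intact")
else:
    print("possible degradation; see the RIN discussion below")
```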


Figure 1: Traces of RNA samples with different RIN values. Note the difference between high- and low-quality samples.

Fortunately, Agilent Technologies has developed a better method: the RIN. Agilent’s algorithm calculates the RIN from the entire electrophoretic trace of the RNA sample, not just the rRNA peaks, making it a considerable improvement over the 28S:18S ratio, as you can see in Figure 1 [2].

The importance of RNA integrity for the quality of gene expression data was examined by Chen et al. [3] in 2014 by comparing RNA samples at 4 different RIN values (from 4.5 to 9.4) across 3 different library preparation methods (poly-A selected, rRNA-depleted, and total RNA), for a total of 12 samples. They then calculated, for each library preparation method, the correlation coefficient of gene expression between the highest-quality RNA and the more degraded samples.
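
To make the comparison concrete, here is a minimal sketch of how one might correlate gene expression between a high-RIN and a low-RIN sample. The expression values and column names are invented, and this is not Chen et al.’s actual pipeline, just the general shape of the calculation:

```python
# Correlate gene-level expression in a degraded (low-RIN) sample against the
# highest-quality (high-RIN) sample. Values and column names are invented.

import numpy as np
import pandas as pd

expr = pd.DataFrame({
    "gene":    ["g1", "g2", "g3", "g4", "g5"],
    "rin_9_4": [120.0, 15.0, 0.5, 300.0, 42.0],   # high-quality reference sample
    "rin_4_5": [110.0, 18.0, 0.4, 250.0, 40.0],   # degraded sample
}).set_index("gene")

# log-transform (pseudocount of 1) so a few highly expressed genes don't dominate
log_expr = np.log2(expr + 1)

r = log_expr["rin_9_4"].corr(log_expr["rin_4_5"], method="pearson")
print(f"Pearson r between high- and low-RIN samples: {r:.3f}")
```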


Figure 2: Only poly-A selected RNA library preparations experience a decrease in data quality with a decrease in RIN value.

Fascinatingly, the only library preparation method that showed a significant decrease in the correlation between high-quality and low-quality RNA was poly-A selection. The other two methods still had correlation coefficients greater than 0.95 even at low RINs (see Figure 2) [3]!

Chen et al. theorize that this is because poly-A selection of degraded samples produces an increasingly 3′-biased library, so valuable reads are lost from the data. Because the other methods involve either no treatment or rRNA removal (as opposed to selection), there is considerably less bias in the overall sample.

Even though only the poly-A selected library preparation method seems to suffer from a low RIN, providers still prefer to work with relatively high-quality RNA for all library preparation methods. However, if you do have important samples with lower RINs, it may still be worth discussing your options with a provider directly, and we at Genohub are more than happy to help facilitate those discussions! Please contact us if you have any further questions about sequencing samples with poor RINs.

How mispriming events could be creating artifacts in your library prep (and what you can do to prevent it)

Next-generation sequencing technology has been advancing at an incredibly rapid rate; what started as only genome sequencing now encompasses an impressive array of RNA sequencing techniques as well, from standard RNA-seq to miRNA-seq, Ribo-seq, and HITS-CLIP (high-throughput sequencing of RNA isolated by crosslinking immunoprecipitation). While these technological advances are now widely used (and have been invaluable to the scientific community), they are not fully mature, and we are still learning about the artifacts that can arise and how to combat them; mispriming events are a significant and under-studied contributor to errors in sequencing data.

What is a mispriming event?

Reverse transcription is an important part of any RNA-sequencing technique. The RNA in question is first converted into cDNA, which is then PCR amplified and built into a library (the exact library preparation method varies with the technique you are using). However, the conversion of RNA into cDNA by reverse transcriptase requires a DNA primer to start the process. This primer is complementary to the RNA, binding to it and allowing reverse transcription to take place. A mispriming event occurs when this process starts at a site where the DNA primer is not perfectly complementary to the RNA.

Two recent papers have highlighted how mispriming during reverse transcription can have a considerable impact on library preparation and introduce errors. van Gurp, McIntyre and Verhoeven [1] conducted an RNA-seq experiment focusing on reads that mapped to ERCC spike-ins (artificial RNA fragments of known sequence that are added to RNA-seq experiments as a control). Because the sequence of these spike-ins is already known, detecting mismatches in the reads is relatively straightforward.

Their findings were striking: they found that 1) RNA-to-DNA mispriming events were the leading cause of deviations from the true sequence (as opposed to DNA-to-DNA mispriming events that can occur later in the library preparation process), and 2) these mispriming events are non-random and show specific, predictable patterns. For example, when an RNA-seq read starts with A or T, rA-dC and rU-dC mispriming events are common. At positions 2 through 6, rU-dG and rG-dT mispriming events are also quite common, which lines up with the observation that these are the most stable mismatched pairs [2]. Needless to say, these kinds of mispriming events can cause huge issues for various types of downstream analysis, particularly the identification of SNPs and RNA-editing sites; eliminating these biases will be extremely important for future experiments (Figure 1).
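
Because the ERCC reference sequences are known, tallying these mispriming-style mismatches is conceptually straightforward. The sketch below counts mismatch types by read position for a few invented, ungapped read/reference pairs; a real analysis would of course start from alignments in a BAM file:

```python
# Count mismatch types by read position against a known reference.
# The read/reference pairs are invented and assumed to be ungapped.

from collections import Counter

aligned_pairs = [
    ("AGGCTTACGT", "CGGCTTACGT"),   # mismatch at position 1 (ref A, read C)
    ("TTGACCAGTA", "CTGACCAGTA"),   # mismatch at position 1 (ref T, read C)
    ("AGGCTTACGT", "AGGCTTACGT"),   # perfect match
]

mismatches = Counter()
for ref, read in aligned_pairs:
    for pos, (r, q) in enumerate(zip(ref, read), start=1):
        if r != q:
            mismatches[(pos, f"{r}>{q}")] += 1

for (pos, change), n in sorted(mismatches.items()):
    print(f"position {pos}: {change} x{n}")
# position 1: A>C x1
# position 1: T>C x1
```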


Figure 1: Common base mismatches and their locations [1].

As of right now, we do not have sophisticated methods for eliminating these mispriming events from our datasets. Trimming the first 10 bases of each read will solve the problem, but it also throws out real data along with the artifacts. Given that these mispriming events follow predictable patterns, it is possible that in the future we could devise programs to identify and correct them, or even modify hexamer design to exclude hexamers that result in frequent mispriming.
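
For completeness, here is what that blunt fix looks like in practice: a minimal Python sketch that trims the first 10 bases (and the matching quality scores) from every read in a FASTQ file. The file names are placeholders, and dedicated tools such as cutadapt or Trimmomatic are the usual way to do this:

```python
# Trim the first `n` bases (and quality scores) from each read in a FASTQ file.
# File names are placeholders; this assumes an uncompressed, well-formed FASTQ.

def trim_fastq(in_path, out_path, n=10):
    with open(in_path) as fin, open(out_path, "w") as fout:
        while True:
            record = [fin.readline() for _ in range(4)]   # header, sequence, '+', qualities
            if not record[0]:
                break                                     # end of file
            header, seq, plus, quals = (line.rstrip("\n") for line in record)
            fout.write(f"{header}\n{seq[n:]}\n{plus}\n{quals[n:]}\n")

# trim_fastq("sample_R1.fastq", "sample_R1.trimmed.fastq")
```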

Frustratingly, mispriming can occur even when the priming oligo is quite long. HITS-CLIP has been enormously important in discovering protein-RNA interactions [3]; however, a recent paper by Gillen et al. [4] demonstrated that mispriming by even a long DNA primer can create a significant artifact: read pileups that align to genomic occurrences of the adaptor sequence, making it appear as though protein-RNA interactions occur at those loci.

Part of HITS-CLIP library preparation involves attaching a 3’ RNA adaptor to the protein-bound RNA. A DNA oligo perfectly complementary to this adaptor serves as the primer for converting the RNA into cDNA, and it is this oligo that leads to significant mispriming. Although the primer is long enough to be extremely specific, sequences complementary to only its last 6 nucleotides are still enough to seed a mispriming event, which converts off-target RNAs into cDNAs that eventually get amplified into the library.
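
One simple sanity check follows from this observation: scan the genome for exact matches to the last 6 nucleotides of the RT primer, and treat read pileups sitting at exactly those positions with suspicion. The sketch below does this on a toy ‘genome’ with a hypothetical primer sequence; a real check would use the actual primer, scan both strands, and work chromosome by chromosome:

```python
# Flag genomic positions matching the 3' end of the RT primer, since such sites
# can seed mispriming and show up as spurious pileups. Sequences are invented.

rt_primer = "GTGACTGGAGTTCAGACGTGT"      # hypothetical adaptor-complementary primer
seed = rt_primer[-6:]                     # only the 3' end needs to pair

genome = "TTACGACGTGTCCAGGACGTGTAAACTG"   # toy sequence standing in for a chromosome

hits = [i for i in range(len(genome) - len(seed) + 1) if genome[i:i + len(seed)] == seed]
print(f"'{seed}' occurs at positions {hits}")   # [5, 16] in this toy genome
```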

Gillen et al. analyzed 44 experiments from 17 research groups, and showed that the adaptor sequence was overrepresented by 1.5-fold on average–and sometimes as high as 6-fold (Figure 2)!


Figure 2: Over-representation of DNA primer sequences can be found in multiple datasets from different groups, indicating the possibility of a widespread problem. 

And since only 6 complementary nucleotides are needed to result in a mispriming event, how can we eliminate this artifactual data?

Gillen et al. devised an ingenious and simple method of reducing this artifact: a nested reverse transcription primer (Figure 3). By ‘nested primer’, they mean a primer that is still complementary to the 3’ adaptor but stops 3 nucleotides short of being fully flush with the adaptor’s end. This, combined with a full-length PCR primer (that is, one flush with the adaptor sequence) whose final 3 nucleotides are ‘protected’ (here, ‘protected’ means using phosphorothioate bonds between the last 3 bases of the oligo, which prevents degradation by exonucleases; without this protection, the mispriming artifact is simply shifted 3 bases downstream), is enough to almost completely eliminate mispriming artifacts. The result is significantly improved library quality and increased sensitivity!


Figure 3: A nested reverse transcription primer combined with a protected PCR primer can eliminate sequencing artifacts almost entirely. 

Although we have been working with sequencing technologies for many years now, we still have a lot to discover about hidden artifacts in the data. It’s becoming increasingly important to stay aware of newly discovered biases and to make sure we are doing everything we can to eliminate them from our data.

Have you ever had an experience with sequencing artifacts in your data? Tell us in the comments!

Ribo-Seq: Understanding the effect of translational regulation on protein abundance in the cell

Examining changes in gene expression has become one of the most powerful tools in molecular biology today. However, the correlation between mRNA expression and protein levels is often poor. Thus, being able to identify precisely which transcripts are being actively translated, and the rate at which they are being translated, could be a huge boon to the field and give us more insight into which genes are carried through all the way from the mRNA to the protein level–and Ribo-seq (also known as ribosome profiling) technology gives us just that!

Historic nature of ribosome profiling

Ribo-seq is based on the much older technique of in vitro ribosome footprinting, which stretches back nearly 50 years and was used by Joan Steitz and Marilyn Kozak in important studies mapping the locations of translation initiation [1, 2]. Due to the technological limitations of the time, these experiments were performed in cell-free in vitro translation systems. These days, we can extract actively translating ribosomes from cells and directly observe their locations on the mRNAs they are translating!

Method

So how does this innovative new technique work? The workflow is actually remarkably simple.

  1. We start by flash-freezing the cells and then lysing and harvesting them in the presence of cycloheximide (see the explanation under ‘Drawbacks and complications’).
  2. Next, we treat the lysates with RNase I, which digests the portions of each mRNA not protected by a ribosome.
  3. The ribosomes are then separated using a sucrose cushion and centrifugation at very high speeds.
  4. RNA from the ribosome fraction obtained above is then purified with a miRNeasy kit and gel purified to obtain the 26 – 34 nt region. These are the ribosome footprints (a computational counterpart of this size selection is sketched just after this list).
  5. From there, the RNA is dephosphorylated and a DNA linker is ligated on.
  6. The hybrid molecule is then subjected to reverse transcription into cDNA.
  7. The cDNA is then circularized, PCR amplified, and then used for deep sequencing.
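
As mentioned in step 4, the gel size selection has a computational counterpart: after adapter trimming, reads are typically filtered to the footprint length range before mapping. The read sequences below are invented; the 26 – 34 nt window is the one from step 4:

```python
# Keep only reads in the footprint size range after adapter trimming.
# Read sequences are invented; the length window follows step 4 above.

trimmed_reads = ["ATG" * 10, "ATCG" * 8, "ATCG" * 5, "A" * 28]   # 30, 32, 20 and 28 nt

footprints = [r for r in trimmed_reads if 26 <= len(r) <= 34]
print([len(r) for r in footprints])   # [30, 32, 28]
```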

Ribo-seq vs. RNA-seq

Ribosome profiling as a next-generation sequencing technique was developed quite recently by Nicholas Ingolia and Jonathan Weissman [3, 4]. One of their most interesting findings was that there is a roughly 100-fold range of translation efficiency across the yeast transcriptome, meaning that just because an mRNA is very abundant does not mean it is highly translated. They concluded that translation efficiency, which cannot be measured by RNA-seq experiments, is a significant factor in whether or not a gene makes it all the way from mRNA to protein product.
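
The translation-efficiency calculation itself is simple enough to sketch: divide ribosome-footprint density by mRNA abundance, gene by gene. The numbers below are invented and the normalization is deliberately simplistic (real analyses use length- and depth-normalized values such as RPKM or TPM), but it shows why two equally abundant mRNAs can look completely different through the Ribo-seq lens:

```python
# Toy translation-efficiency (TE) calculation: footprint density / mRNA abundance.
# All values are invented, already-normalized numbers.

import pandas as pd

data = pd.DataFrame({
    "gene":    ["g1", "g2", "g3"],
    "rnaseq":  [500.0, 500.0, 50.0],   # mRNA abundance
    "riboseq": [400.0, 20.0, 45.0],    # ribosome footprint density
}).set_index("gene")

data["translation_efficiency"] = data["riboseq"] / data["rnaseq"]
print(data)
# g1 and g2 have identical mRNA levels but a ~20-fold difference in translation
# efficiency, which is exactly the kind of difference RNA-seq alone cannot see.
```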

Additionally, they looked at the correlation between protein abundance (measured by mass spectrometry) and either Ribo-seq or RNA-seq data. They found that Ribo-seq measurements correlated much more strongly with protein abundance than RNA-seq measurements did (0.60 vs. 0.17), meaning that Ribo-seq can actually be the better measure of gene expression (depending on the type of experiment you’re interested in performing).

Of course, there are still significant advantages to RNA-seq over Ribo-seq–Ribo-seq will not be able to capture the expression of non-coding RNAs, for instance. Additionally, RNA-seq is considerably cheaper and easier to perform as of this moment. However, I believe that we are likely to see a trend towards ribosome profiling as this technique becomes more mature.

What else can we learn from ribosome profiling?

Ribosome profiling has already taught us many new things, including:

  • the discovery that RNAs previously thought to be non-coding due to their short length are in fact translated, and indeed code for short peptides whose exact functions remain unknown [5]
  • the detection of previously unknown translated short upstream ORFs (uORFs), which often possess a non-AUG start codon. These uORFs likely regulate their downstream protein-coding ORFs (as is true for the GCN4 gene) [6], though it remains to be seen whether that holds for all uORFs or whether they have other, currently unknown functions
  • the determination of the approximate translation elongation rate (330 codons per minute)
  • examples of ribosome pausing or stalling at consecutive proline codons in yeast and other eukaryotes [7, 6]

But who knows what else we will learn in the future? This technique can teach us a lot about how gene expression is regulated at the translational level. Additionally, we can learn a lot about how translation affects various disease states, most notably cancer, since cellular stress will very likely affect both translation rate and regulation.

Drawbacks and complications 

While this technique is extremely powerful, there are a few drawbacks. The most prominent among them is that any attempt to lyse and harvest the cells for this procedure changes the ribosome profile, making the technique particularly vulnerable to artefacts. Researchers often attempt to halt translation before harvesting with a 5-minute incubation with cycloheximide, a drug that blocks translation elongation, to prevent ribosome run-off; however, this can cause an enormous increase in ribosome signal at initiation sites, as ribosomes will still initiate translation and begin to pile up.

The best method of combatting these artefacts is to flash-freeze the cells prior to harvesting, lyse them over dry ice, and then continue the protocol in the presence of cycloheximide. This approach should give the best balance between preventing run-off and preventing excessive ribosome accumulation at initiation sites [8].

Conclusions

Our understanding of the mechanisms that regulate translation has been sorely limited by our inability to study it directly. Ribosome profiling now provides a method for us to do just that. We’ve already made huge strides in our understanding of many events in the translation process, including the discovery of hundreds of non-canonical translation initiation sites as well as the realization that not all ‘non-coding’ RNAs are non-coding after all! I expect we’ll continue to see this technique applied to new and innovative questions about translation and its role in the cell as the technology matures.

If you’re interested in Ribo-Seq services, enter your basic project parameters on Genohub and send us a request. We’ll be happy to help.