AGBT 2014 – Summary of Day 1


The first day of the Advances in Genome Biology & Technology (AGBT) meeting kicked off with an introduction by Eric Green, Director of the National Human Genome Research Institute. He announced that this 15th annual meeting was the largest ever, with 850 attendees expected. The opening plenary session certainly did not look like 850 people in attendance, though: Winter Storm Pax wreaked havoc on flights coming in through Atlanta and other hubs, resulting in several speaker and general attendee cancellations.

The plenary session began with scheduled talks by Aviv Regev, Jeanne Lawrence, Wendy Winckler and Valerie Schneider. Jeanne Lawrence couldn’t make it, which was a shame, particularly since she gave a brilliant talk at ASHG on using the single gene XIST to shut down the extra copy of chromosome 21 in Down syndrome. This work was nicely summarized in a publication that came out this summer: Translating dosage compensation to trisomy 21.

Aviv Regev and Wendy Winckler’s talks were subject to a blog/tweet embargo (it was unclear whether Regev’s talk was completely under embargo or only the last half; we’re playing it safe and not discussing that portion here), leaving Valerie Schneider’s presentation as the only one that could be tweeted or written about. This instantly created great angst among those attending the lectures, those stuck in airports en route to AGBT, and those at home waiting for in-depth coverage.

Single-cell sequencing, named “Method of the Year” by Nature Methods, was the basis of the opening lecture. Aviv Regev offered an excellent view of the dendritic cell network based on cyclical perturbations and variation between single cells. The first half of her presentation, titled “Harnessing Variation Between Single Cells to Decipher Intra and Intercellular Circuits in Immune Cells”, was largely covered by her April publication, “Single-cell transcriptomics reveals bimodality in expression and splicing in immune cells”.

The second talk, by Wendy Winckler, was not allowed to be discussed or tweeted; according to Winckler, this was courtesy of Novartis’s communications department. The title of her presentation, “Next Generation Diagnostics for Precision Cancer Medicine”, wasn’t revealing either. To get an idea of what she’s up to and the direction of her lecture, you can read her recent publications.

The final talk, by Valerie Schneider, titled “Taking advantage of GRCh38”, began with an analogy to an unwanted pair of socks received for Christmas that ends up being used and, finally, really liked: “It was time for an update… whether or not it was on your wish list”. We were reminded that centromeres are specialized chromatin structures essential for cell division, but because of their highly repetitive sequence they have not been represented in reference assemblies. Previous versions of the human reference assembly represented each centromere as a 3 Mb gap. The latest assembly, GRCh38, incorporates centromere models generated from whole-genome shotgun reads produced as part of the Venter sequencing project. Since each autosome carries two copies of its centromere, each centromere model represents an average of the two. She concluded her presentation by urging users to switch now: http://www.ncbi.nlm.nih.gov/genome/tools/remap.

After a short break from the talks, the closing reception, sponsored by Roche, began outside. Halfway through, a brief but sudden Florida thundershower sent the entire AGBT community scurrying indoors for shelter. That was okay, though: the conversations simply continued indoors. We’re looking forward to tomorrow morning’s lectures; several of the talks we’ve highlighted below are on the schedule.

 

2014 AGBT Agenda Highlights


The yearly genomics pilgrimage to Marco Island begins next Wednesday, Feb. 12th, and runs through the 15th. The agenda for the 15th Advances in Genome Biology and Technology meeting (AGBT 2014) was released last week, and it’s guaranteed not to disappoint. We expect the following lectures to be the most interesting:

Aviv Regev, Broad Institute of MIT and Harvard
“Harnessing Variation Between Single Cells to Decipher Intra and Intercellular Circuits in Immune Cells”

– Single-cell genomics is becoming an extremely useful tool; we’re eager to hear more on the first day of the meeting.

Jeanne Lawrence, University of Massachusetts Medical School
“Silencing Trisomy 21 for Genome Balance in Down Syndrome Stem Cells”

– Her talk was the most tweeted plenary lecture at ASHG 2013, and it was great. A must-see!

Evan Eichler, University of Washington
“Advances in Sequencing Technology Identify New Mutations, Genes and Pathways Related to Autism”

– Dr. Eichler studies gene duplication and DNA transposition within the human genome. This is going to be a good talk.

Beth Shapiro, University of California, Santa Cruz
“Paleogenomes, Ice-age Megafauna, and Rapid Warming: How Genomes From the Past Can Help Predict the Consequences of the Future Climate Change”

– The paleogenomics session of the recent PAG conference was excellent. We expect this to be an interesting lecture as well.

 Andrea Kohn, University of Florida
“Single-Neuron Semiconductor RNA-seq with Nanofluidic Capture: Toward Genomic Dissection of the Complex Brains and Memory Circuits”

– Lots of good RNA-Seq talks. Dr. Kohn’s single-cell sequencing approach is described here: Single-neuron transcriptome and methylome sequencing for epigenomic analysis of aging, and here: Single-cell semiconductor sequencing.

Stephen Fodor, Cellular Research, Inc.
“Digital Encoding of Cellular mRNAs Enables Precise and Absolute Gene Expression Analysis by Single-Molecule Counting”

– What’s next from the founder of Affymetrix? For a preview, check out their paper that came out this month on molecular indexing for quantitative targeted RNA-Seq.

James Hadfield, Cancer Research UK
“Monitoring Cancer Genome Evolution with Circulating Tumour DNA Exome Sequencing”

– Looks interesting. We’re hoping to get a preview from his blog, CoreGenomics.

Hiroyuki Aburatani, The University of Tokyo
“Single Cell RNA Sequencing Reveals Transition of Cell Populations with Epigenomic Switch in Cell Fate Determination Along Cardiomyocyte Differentiation”

Brian Haas, Broad Institute of MIT and Harvard
“Single Cell Developmental Genomics: Trinity-Enabled Single Cell Transcriptome Study Identifies New Regulators of Salamander Limb Regeneration”

 David Jaffe, Broad Institute of MIT and Harvard
“Assembly of Bacterial Genomes Using Long Nanopore Reads”

– We’re expecting to see some of the first nanopore data here!

W.R. McCombie, Cold Spring Harbor Laboratory
“A Near Perfect de novo Assembly of a Eukaryotic Genome Using Sequence Reads of Greater than 10 Kilobases Generated by the Pacific Biosciences RS II”

Hesaam Esfandyarpour, Genapsys, Inc.
“The GENIUS™ Platform: A Next Generation Sequencing Platform That Exceeds Quality and Cost Goals for Universal Deployment In and Out of Core Laboratory Environments”

– The latest NGS platform: Genapsys just raised $37M in Series B financing.

 Zak Wescoe, University of California, Santa Cruz
“Error Rates for Nanopore Discrimination Among Cytosine and Four Epigenetic Variants Along Individual DNA Strands”

Yaniv Erlich, Whitehead Institute for Biomedical Research
“Genome-Wide Analysis of Expression Short Tandem Repeats”

Carlos Bustamante, Stanford University
“Any Way You Want It: Applications of Whole Genome Capture to Ancient DNA, Metagenomics, and Orthogonal Validation”

Daniel MacArthur, Massachusetts General Hospital
“Functional Annotation at Scale:  Analysis of Genetic Variation From Over 50,000 Human Exomes”

Genohub will be in attendance and tweeting @Genohub. Send us a message and let’s meet up to talk NGS!

Matchmaking for the Life Scientist

Seq-ing NGS data analyst for long walks on the beach (Marco Island). Experience with SNP calling and genome mapping a plus. Must be willing to deal with large data sets and my species diversity. Willing to pay hourly for a short term (one-time) relationship. Also offering to include you as an author on my paper. 

So maybe you haven’t seen a request exactly like this one, but Genohub has! There are typically two main types of people using Genohub.com.

First, there is the researcher who is completely new to next-gen sequencing. He or she may know nothing about sequencing read types or how to analyze data, and needs help getting started. The researcher may have some resources but needs help with sequencing and/or data analysis. These researchers can begin their foray into NGS using our complimentary consultation form: they talk with our scientific staff and get matched with the sequencing service provider or bioinformatician best suited to handle their project.

Then there’s the experienced NGS researcher who has ordered library prep, sequencing or data analysis services before and knows exactly what he or she wants. Often this researcher needs to discuss a few things with a service provider before moving forward. This type of researcher can find service providers based on the NGS instrument or library prep application they need using Genohub’s technology search page, or enter the number of reads or coverage they need using our project-specific search engine. Using either search application, the researcher can find a provider, submit their project and talk directly with the provider.

So the next time you’re ‘seqing’ a match for your next-gen sequencing project, start with Genohub.com.


 

Consider Bias in NGS Library Preparation

Library preparation for next-generation sequencing involves a coordinated series of enzymatic reactions to produce a random, representative collection of adapter-modified DNA fragments within a specific range of fragment sizes. The success of next-generation sequencing is contingent on proper processing of the DNA or RNA sample. However, as you move from isolating nucleic acid to interpreting your sequencing data, assumptions and bias-prone steps can quickly reduce the value of your work. This is especially the case during library preparation.

Where are biases introduced?

The first step in most library preparation is shearing your DNA or RNA into fragments compatible with today’s short-read NGS instruments. Whether you fragment your nucleic acid with a high-divalent-cation buffer, a nuclease, a transposase, or an acoustic or mechanical method, you are invariably cutting at “preferred” positions, resulting in fragments with non-random ends. In DNA-Seq/ChIP-Seq library prep procedures, the next step is end repair, in which 5’ overhangs are filled in and 3’ overhangs removed by a couple of polymerases (or fragments of a polymerase). If you’re starting with variably fragmented ends, this process won’t occur at the same efficiency for all your fragments, and neither will adenylation of those ends (commonly performed after end repair). The next step is typically ligation of an adapter compatible with the instrument you plan to use. Unfortunately, not many studies have examined bias in DNA adapter ligation; however, if you’re keeping up with the papers coming out on RNA adapter ligation bias, there is reason for concern. To complete your library you typically have to amplify it, which can lead to preferences for particular fragments and uneven fragment representation in your data. And that’s just polymerase bias in PCR; we haven’t even mentioned the polymerase biases on board the sequencing platform.
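To make the amplification point concrete, here is a minimal Python sketch (not a production QC tool) of one way to look for GC-related bias after sequencing: bin fragments by GC content and compare observed counts against fragments drawn at random from the reference. The data, function names and bin width are illustrative placeholders.

```python
# A minimal sketch of a GC-bias check: compare the GC profile of
# sequenced fragments against the GC profile of random reference draws.
# Toy inputs; real checks would use millions of aligned fragments.
from collections import Counter

def gc_fraction(seq):
    """Fraction of G/C bases in a sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / max(len(seq), 1)

def gc_histogram(fragments, bin_width=0.05):
    """Count fragments per GC-content bin."""
    hist = Counter()
    for frag in fragments:
        hist[round(gc_fraction(frag) / bin_width) * bin_width] += 1
    return hist

# Placeholder data standing in for sequenced fragments and reference draws.
observed = ["ATGCGC", "GCGCGC", "ATATAT", "GCATAT"]
expected = ["ATGCAT", "GCGCAT", "ATATGC", "GCATGC"]

obs, exp = gc_histogram(observed), gc_histogram(expected)
for gc_bin in sorted(set(obs) | set(exp)):
    # Ratios far from 1.0 in high- or low-GC bins suggest amplification bias.
    ratio = obs.get(gc_bin, 0) / max(exp.get(gc_bin, 0), 1)
    print(f"GC {gc_bin:.2f}: observed/expected = {ratio:.2f}")
```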

All RNA-Seq library prep procedures rely on reverse transcriptases, which often have trouble reading through regions of RNA with secondary structure or particular base compositions. The reverse transcription primer you choose can also bias your results. Follow that with second-strand synthesis and all the steps described above for DNA-Seq, and you have what is potentially the most bias-prone library application of all.

A recent review nicely highlights some of these biases in more detail: Library preparation methods for next-generation sequencing: Tone down the bias.

It’s not all doom and gloom; there are several ways to reduce or measure these biases, for example using randomized bases in small RNA library preparations, eliminating PCR, or using a polymerase that handles GC- and AT-rich regions fairly equally. Eliminating or reducing these biases will be the subject of a subsequent blog post.

Consistency in library prep method

For many applications, as long as all your samples are treated equally, biases introduced during library preparation affect every sample and may be less of a concern. If you’re in the middle of a long project, we highly recommend using the same library prep method or kit for the whole study. Differences in method, enzymes and reaction times between two library prep kits can give you significantly different results. To understand differences in library preparation methodology, see our NGS Library Prep Kit Guide, which describes the kits providers use on Genohub.

If you’d like help navigating issues of library preparation bias, or would like to use our complimentary consultation service, contact us!

 

NextSeq 500 and HiSeq X Ten: New Tech Lowering Cost per Mbp

Jay Flatley’s announcement yesterday certainly changes the calculations for whole genome sequencing. Newer, cheaper optics, fluidics and reagent chemistry have lowered the cost of sequencing and enabled a 300-cycle, 120 Gb run in 30 hours on the NextSeq 500. The HiSeq X Ten (pronounced “X Ten”, not “ten ten”), consisting of ten instruments operated as a set, will generate 18 Tb in 72 hours. The new optics use a two-dye system: cytosine and thymine are each represented by one dye, adenine by both dyes, and guanine by the absence of dye. This allows Illumina to use lower-resolution cameras and half the number of images. The new patterned flow cells with larger clusters use nanowells and are scanned bidirectionally, making optical scanning six times faster than on a HiSeq 2500. New reagent chemistry allows reactions to occur at room temperature, eliminating the need for a bulky chiller and shrinking the instrument to the size of a MiSeq, which has led to the commonly quoted phrase: “HiSeq in a MiSeq”.
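As an illustration of the two-dye logic described above, here is a toy Python sketch of the base mapping. Real base callers work on analog channel intensities with error modeling; this shows only the encoding, under the mapping stated in the text.

```python
# Toy two-channel base calling: each cluster yields a binary
# (red, green) signal pair per cycle, which maps to one base.
TWO_CHANNEL = {
    (True, True):   "A",  # signal in both channels
    (True, False):  "C",  # red channel only
    (False, True):  "T",  # green channel only
    (False, False): "G",  # dark: no label
}

def call_bases(channel_pairs):
    """Decode a list of (red, green) signal pairs into a base string."""
    return "".join(TWO_CHANNEL[pair] for pair in channel_pairs)

print(call_bases([(True, True), (False, False), (True, False), (False, True)]))
# -> "AGCT"
```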

What’s the cost?

The NextSeq 500 will cost $250,000, and HiSeq X Ten systems must be purchased in sets of ten, at $10 million for a full set. According to Illumina, the HiSeq X Ten will yield whole human genome sequences for $1,000 each and will be able to generate around 15,000-20,000 genomes per year. The NextSeq 500 will generate 120 Gb per run, enough for a whole human genome at 30x coverage, at roughly $4,000 per genome.
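For readers checking the arithmetic, here is a small Python sketch of the back-of-the-envelope calculation behind these genomes-per-run figures; the genome size and per-run yields are the approximate numbers quoted above.

```python
# Back-of-the-envelope arithmetic for genomes per run.
GENOME_SIZE_GB = 3.1   # haploid human genome, approximate
TARGET_COVERAGE = 30   # 30x whole genome

def gb_required(coverage, genome_size_gb=GENOME_SIZE_GB):
    """Sequence yield (Gb) needed for a given mean coverage."""
    return coverage * genome_size_gb

def genomes_per_run(run_yield_gb, coverage=TARGET_COVERAGE):
    """How many genomes at the target coverage fit in one run's yield."""
    return run_yield_gb / gb_required(coverage)

print(f"30x genome needs ~{gb_required(30):.0f} Gb")                     # ~93 Gb
print(f"NextSeq 500 (120 Gb/run):  {genomes_per_run(120):.1f} genomes")  # ~1.3
print(f"HiSeq X Ten (18 Tb/72 h):  {genomes_per_run(18000):.0f} genomes per cycle")
```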

Excess capacity

So what will providers do with all this excess capacity? Enter Genohub.com. Genohub’s intelligent sequencing matching engine instantly matches researchers with service providers based on specific project criteria, and facilitates the management of sequencing projects throughout their lifecycle, from selecting orderable sequencing packages to communication, payments and delivery of data. In March, NGS service facilities are going to need to recoup operational costs and convince their institutions they made the right choice dropping $250K on a NextSeq 500 or $10M on a HiSeq X Ten cluster. We estimate that toward the middle of 2014 there will be plenty of NextSeq 500 flow cells needing filling, and a much larger number of whole human genomes needed to feed the HiSeq X Ten. Regulatory issues, data analysis bottlenecks and operational logistics will most likely keep the five HiSeq X Ten clusters fairly quiet in 2014 (Illumina has promised five in 2014; three have already been purchased). Genohub is uniquely positioned to distribute this excess capacity to researchers around the world. Your local institution, or even your country, no longer needs to have one of these instruments on hand (see our post on reasons to outsource NGS services). By using Genohub.com, you have access to sequencing capacity and instruments located throughout the world.

Looking to use the NextSeq 500? After discussions with our current service providers, we expect NextSeq 500 sequencing services to be available on Genohub within three months. We’ve already spoken with one of the announced HiSeq X Ten customers and hope to have that service available on Genohub shortly after delivery. So today, we’re happy to announce that Genohub is taking NextSeq 500 pre-delivery service requests! Send your request through our consultation form, and check back with us in March for regular access to these platforms through our intelligent sequencing search engine.


 

 

Top Next Generation Sequencing Applications

A common question we’re asked is which library preparation applications researchers are most interested in. Providers starting their own core facilities, bioinformaticians writing software for particular pipelines, and others trying to gauge demand for NGS applications all want the answer. Over the last three months we looked at the number of projects initiated on Genohub that included library preparation. Projects are initiated on Genohub through our Shop by Project (https://genohub.com/shop-by-next-gen-sequencing-project/) or Shop by Technology (https://genohub.com/shop-by-next-gen-sequencing-technology/) interfaces. Users enter project information, such as coverage or the number of required reads, and can specify whether they prefer one platform over another. Genohub’s intelligent project matching engine takes this data and displays packages consisting of provider services that match the user’s request. Users who select a package and begin direct communication with the provider are counted as having initiated a project. A summary of the library preparation applications chosen in projects started between 10/2013 and 12/2013 is plotted in Figure 1 (data from projects using our complimentary consultation service is also included).

[Figure 1: Library preparation applications chosen in projects initiated on Genohub, 10/2013-12/2013]

RNA-Seq projects encompass all those starting from total RNA, ribosomal-depleted RNA or poly-A-selected RNA. These applications were the most popular, followed by whole genome sequencing projects. RNA-Seq’s growing versatility as both an expression analysis and de novo assembly tool is the likely reason it accounts for the greatest number of projects on Genohub. Targeted DNA applications were also frequent: exome, 16S V4 and other amplicon-seq projects were the third, fourth and fifth most commonly started project types. While not illustrated in Figure 1, specialized applications such as Methyl-Seq and ChIP-Seq were among the fastest growing.

Since Genohub launched only recently, we expect these numbers to grow significantly, and we’ll keep the community updated with our latest data. If you’re a researcher or service provider with a unique NGS application, we’d like to hear about it! For inquiries or suggestions, please contact us at support@genohub.com.

Choosing the Right NGS Instrument for Your Research

If you’re about to embark on a high-throughput sequencing project, choosing the right sequencing instrument is an important consideration. Perhaps you’re replicating a published study or repeating an experiment from previous work, and the instrument you plan to use is known. If not, the right sequencing instrument should be chosen based on the sequencing goal you are trying to achieve. Instrument features to take into consideration include the number of reads per run, read length, read type (paired- or single-end), error profile, turnaround time and price. Using Genohub’s Shop by Project page, you can enter the number of required reads or the coverage you need and instantly compare instruments, filtering by read length and sorting by turnaround time and price. To get a better idea of the differences between NGS instruments, we’ve generated the following comparison: Table 1.
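As a rough illustration of this kind of coverage-based matching (a simplified sketch, not Genohub’s actual engine), the Python snippet below filters a list of instruments by read length and computes how many runs a project would need. The instrument names and specs are made-up placeholders, not quoted vendor numbers.

```python
# Simplified instrument matching: filter by minimum read length and
# compute runs needed to reach the required yield. Specs are placeholders.
import math

instruments = [
    {"name": "Instrument A", "gb_per_run": 120.0, "read_len": 150},
    {"name": "Instrument B", "gb_per_run": 0.7,   "read_len": 400},
    {"name": "Instrument C", "gb_per_run": 5.0,   "read_len": 300},
]

def matches(required_gb, min_read_len):
    """Yield (instrument, runs needed) for instruments meeting the requirements."""
    for inst in instruments:
        if inst["read_len"] >= min_read_len:
            yield inst["name"], math.ceil(required_gb / inst["gb_per_run"])

required_gb = 30 * 3.1   # 30x human genome, ~93 Gb
for name, runs in matches(required_gb, min_read_len=100):
    print(f"{name}: {runs} run(s) needed")
```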

Certain instruments are ideally suited to specific applications. Illumina instruments are versatile and handle a variety of sequencing applications, including de novo assembly, resequencing, transcriptome analysis, SNP detection and metagenomic studies. The HiSeq and GAIIx instruments are both suited to analyzing large animal or plant genomes, and high-level multiplexing of samples is possible when analyzing species with smaller genomes. While the Illumina MiSeq outputs significantly fewer reads (Table 1), its read lengths are significantly longer, making it ideal for small genomes and for sequencing long variable domains or targeted regions within a genome. The only real limitation of the Illumina platform is its relatively short reads compared to other platforms (Roche 454 and PacBio).

The Ion PGM (Ion Torrent) is ideal for amplicons, small genomes or targeting small regions within a genome; its low throughput suits smaller studies. The Ion Proton, however, is capable of significantly larger outputs (Table 1), making sequencing of transcriptomes, exomes and medium-sized genomes possible.

The PacBio RS/RS II breaks the mold of the short-read high-throughput sequencing instruments by focusing on length. Its reads, averaging ~4.6 kb, are significantly longer than those of other sequencing platforms, making it ideal for sequencing small genomes such as bacteria or viruses. Other advantages include its ability to sequence regions of high G/C content and to determine the status of modified bases (methylation, hydroxymethylation) without the need for chemical conversion during library preparation. The instrument’s low read output prevents it from being useful for assembling medium to large genomes.

The Roche 454 FLX+ is typically used in studies where read length is critical, such as de novo assemblies of microbial genomes, BACs and plastids. Its long read length has made it a favorite of those examining 16S variable regions and other targeted amplicon sequences. The lower output of the FLX and FLX+ instruments makes them less cost-effective for transcriptome or larger-genome studies. Roche has announced that it will stop producing the 454 in 2015 and end servicing in mid-2016.

The SOLiD series of instruments are high-throughput, generating a large number of short reads. De novo sequencing, differential transcript expression and resequencing are all viable applications of the SOLiD platform. The platform’s weakness is its short reads, which make assembly very difficult.

If you’re still not sure which NGS instrument to choose for your work, feel free to contact us for our complimentary sequencing project consultation.

5 Reasons to Consider Outsourcing Your Sequencing Project

According to the US News & World Report [1], there are 257 universities with biological sciences graduate programs in the US alone. Of these, fewer than a hundred [2] have genomics core facilities. If you’re a researcher at an institution without a sequencing core facility, you’ll obviously have to outsource your sequencing projects. We’ve also found that many researchers with access to their own local core facilities often choose to get their sequencing done elsewhere, be it at other core facilities or commercial NGS providers. Here are some of the reasons why:

 

1. Shorter turnaround times

Depending on the volume of projects a core lab is handling at any given time, researchers can face long queues before capacity opens up for their project. Even a facility with plenty of capacity has an incentive to batch, because high-throughput instruments run multiple libraries simultaneously. For example, the popular Illumina HiSeq 2000 uses flow cells with 8 lanes each, and reagent costs are largely independent of how many lanes are filled, so it makes sense for the facility manager to wait until there are enough projects to fill all the lanes before starting a new run. These are some of the factors that can stretch turnaround times to several weeks or even a few months. Especially when faced with tight publication deadlines, it often makes sense for researchers to consider getting their sequencing done at a facility that can offer a shorter turnaround time for their project.
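Here is a quick, hypothetical Python sketch of the batching incentive: if reagent cost is roughly fixed per run, the effective cost per lane falls as more lanes are filled. The numbers are invented for illustration.

```python
# Toy arithmetic behind lane batching: fixed run cost spread over
# however many of the 8 lanes are actually filled.
RUN_COST = 8000.0   # hypothetical fixed reagent cost per 8-lane run
LANES = 8

for filled in (2, 4, 8):
    print(f"{filled}/{LANES} lanes filled -> ${RUN_COST / filled:,.0f} per lane")
```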

 

2. Access to instruments not available locally

Not all sequencing instruments are created equal. Typically the relevant parameters are total output per run, number of reads per run, read length, price per run and run duration. Each sequencer strikes a different balance between these parameters, and no single sequencer is the optimal choice for all projects. Sometimes the right instrument for a project is simply not available at the local facility.

For example, applications such as de novo assembly of novel genomes benefit from longer read lengths, which span more repeats and missing bases, closing gaps and simplifying finishing. Going with an instrument like the PacBio RS, whose average read length is currently ~4,600 bp with some reads up to ~20,000 bp, has become routine for those looking to completely assemble bacterial genomes. Long reads also let you uncover complex structural variation and identify where copy number variations have occurred relative to a reference genome. For transcriptome analysis, they help resolve RNA splicing patterns: a long read has a greater chance of covering the entire transcript, eliminating the need to infer isoforms.

 

3. Bioinformatics capabilities and expertise

When it comes to analysis of sequencing data, there are typically three things to consider:

  1. Expertise to design or select the right analysis pipeline for the given project
  2. Access to and familiarity with the right software tools
  3. Access to sufficient hardware resources, e.g. compute clusters.

It’s fairly common for researchers not to have all three available locally. Most NGS facilities can typically assist with one or more of these, but the vast majority of facilities have their own expertise and specializations. For example, alignment of bisulfite-seq data for methylation analysis differs from alignment of regular DNA sequence: the conversion of unmethylated cytosines to uracil reduces the total amount of information available for aligning sequenced reads against a reference genome. Some approaches align read cytosines to reference cytosines and read thymines to either cytosines or thymines in the reference. An alternative method removes the bias toward alignment of methylated reads, at the cost of degrading the total information available for alignment. You will need to work with a provider who has experience in methylation analysis to determine which approach is most appropriate for your study.
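To illustrate the alternative, fully converted strategy in miniature, here is a toy Python sketch: collapse C to T in both read and reference so conversion status doesn’t affect placement, then call methylation from the original read bases at reference cytosine positions. Real tools handle both strands, mismatches and quality scores; none of that is modeled here.

```python
# Toy "three-letter" bisulfite alignment: C->T collapse removes the
# methylation signal from placement, then original bases call methylation.

def bisulfite_collapse(seq):
    """Collapse cytosines to thymines so conversion doesn't affect matching."""
    return seq.upper().replace("C", "T")

def align_and_call(read, reference):
    """Place the read by exact match in collapsed space, then call C status."""
    pos = bisulfite_collapse(reference).find(bisulfite_collapse(read))
    if pos < 0:
        return None
    calls = []
    for i, ref_base in enumerate(reference[pos:pos + len(read)]):
        if ref_base == "C":
            # C retained in the read => protected (methylated);
            # C read as T => converted (unmethylated).
            calls.append((pos + i, "methylated" if read[i] == "C" else "unmethylated"))
    return pos, calls

reference = "AACGTTCGGA"
read = "ACGTTTG"   # the second reference C appears converted to T
print(align_and_call(read, reference))
# -> (1, [(2, 'methylated'), (6, 'unmethylated')])
```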

On the hardware front: while the E. coli genome, for example, can be assembled in 15 minutes on a desktop computer with 32 GB of RAM, what about an organism that hasn’t been sequenced before? Will you be waiting on your core to set up a de novo pipeline? And if the analysis you need doesn’t fit a cookie-cutter pipeline, does your core facility have the capacity to perform custom analysis services? These are all questions to think about when deciding where to get your NGS project done.

 

4. Library preparation expertise

Properly designed libraries are probably the single most important part of a successful sequencing experiment. Library preparation is a relatively manual process, so experience matters and it will be reflected in your sequencing results. It is important to use facilities that have experience in the library preparation application you are looking for.

Each sequencing service facility usually has a core set of 3-4 library prep applications it is expert in. While almost every service facility you encounter has DNA-Seq and RNA-Seq on its “expert list”, if you’re looking for amplicon sequencing or targeted capture, it’s important that the facility has actually performed this type of work. Outside a core’s de facto 3-4 library prep applications, we recommend spending the time to find a facility experienced with your specific application. Otherwise, you may be just as successful purchasing a kit and making your own libraries.

As a specific example, if you outsource chromatin-immunoprecipitated or microRNA samples to a facility that hasn’t performed ChIP-Seq or small RNA-Seq library prep, chances are you won’t get back anything sequenceable. A fault of current small RNA library methodologies is the adapter dimers that form during preparation; an experienced provider will know to look for these and ensure that half of your reads aren’t wasted on meaningless sequence. ChIP-Seq, or any low-input library preparation methodology, is prone to high duplication rates. Again, an experienced operator will know to perform QC checks and limit PCR cycles so that your reads retain high diversity.
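As a sketch of the two QC checks just mentioned, the following hypothetical Python snippet computes an exact-duplicate fraction from raw reads (a crude proxy for library complexity) and flags reads that begin with the 3’ adapter, the signature of adapter dimers in small RNA preps. The adapter sequence here is a placeholder; use your kit’s actual adapter.

```python
# Two crude read-level QC checks: exact-duplicate fraction and
# adapter-dimer fraction. Real QC tools also consider alignment
# positions, quality scores and partial adapter matches.
from collections import Counter

ADAPTER_3P = "TGGAATTCTCGG"  # placeholder; substitute your kit's adapter

def duplicate_fraction(reads):
    """Fraction of reads that are extra copies of an already-seen sequence."""
    counts = Counter(reads)
    duplicates = sum(n - 1 for n in counts.values())
    return duplicates / max(len(reads), 1)

def adapter_dimer_fraction(reads, adapter=ADAPTER_3P):
    """Fraction of reads starting with the 3' adapter (likely dimers)."""
    return sum(r.startswith(adapter) for r in reads) / max(len(reads), 1)

reads = ["TGGAATTCTCGGAAAA", "ACGTACGTACGTACGT",
         "ACGTACGTACGTACGT", "TTTTGGGGCCCCAAAA"]
print(f"duplicate fraction:     {duplicate_fraction(reads):.2f}")
print(f"adapter-dimer fraction: {adapter_dimer_fraction(reads):.2f}")
```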

 

5. Cost

Needless to say, making the most efficient use of research funds is important. This is especially true given the recent tightening of government funding for life science research, at least in the US and Europe. Everything else being equal, it’s often wise to at least shop around to see whether there are facilities that can carry out your sequencing project at a substantially lower price. This used to be a very time-consuming process, but with Genohub you can now easily make apples-to-apples comparisons of NGS services in just a few clicks.

 

We’re here to help

Our job here at Genohub is to make it easy for researchers to find NGS service providers and to facilitate these collaborations. You can use our project matching page to compare accurate information about NGS services from a wide variety of service providers and easily submit your project. We’d also love to hear from researchers directly and help identify the right sequencing solution and service provider; just fill out our free consultation form or email us at support@genohub.com.

 

References:

[1] US News & World Report, Biological Sciences Graduate Programs

[2] http://omicsmaps.com/

Library Preparation Guide

One of the most important steps in the sequencing process is library construction. With the rapid expansion of next generation sequencing into a growing number of labs, the number of techniques for creating libraries has grown. Heard of GRO-Seq, HITS-CLIP or GBS? These are just a few examples of established techniques that took on a sequencing twist (global run-on, cross-linking immunoprecipitation, and genotyping-by-sequencing, respectively). You could say that the growing throughput of sequencing has enabled these methods, and that these methods in turn are driving the growth of sequencing into more labs. Our aim is to keep track of these techniques and offer brief descriptions of each, along with comparison metrics. We’ve started by writing the Guide to Library Preparation, describing the techniques and kits used by our service providers on Genohub. When searching for library preparation services on Genohub, researchers can see the type of kit a provider uses and click its name for a more detailed description of the technology behind library construction.

As of today, there are a little more than 200 different techniques for constructing libraries, so our guide is certainly a work in progress. If you’ve recently developed a new technique and would like it described in our guide, please contact us with a description of the library protocol. We’re also looking for feedback on the guide’s content and on what you think we should add.

3 Top Factors Researchers Consider When Selecting an NGS Provider

At Genohub, not only do we seek feedback from researchers, our development methodology is almost entirely based on it. We receive this feedback via website forms as well as routine one-on-one conversations with some of the top researchers using next generation sequencing for their projects. Through this data and interaction, certain trends have emerged that may be useful to an NGS provider seeking additional projects. This list is not based on a controlled experiment; however, countless conversations indicate that these factors are extremely important:

  1. Turnaround time – this one is a toss-up when compared with price, but we typically find turnaround time to be among the leading factors in a researcher’s decision to select an NGS provider. We have heard quite a few stories of researchers seeing turnaround times of several months for library prep and sequencing.
  2. Price – while this is one of the biggest factors for researchers, it must be qualified by established trust, which is the next major factor.
  3. Trust – this one is a biggie for many researchers and often a non-starter if not established. Researchers are hesitant to ship their precious samples (e.g., human brain tissue) to an NGS provider for often costly sequencing if they are not confident in the provider’s abilities. Researchers have told us some of the things they look for when building that confidence:
    • Referrals & Reviews – researchers seek out colleagues who have done similar projects and look for recommendations. Word of mouth is one of the biggest methods researchers rely on to select an NGS provider.
    • Publications – providers who are listed in publications involving similar projects.
    • What kind of QC will be run on the sample.
    • Overall experience indicators such as time in business and volume of samples regularly handled.
    • Data and sample security.
    • Location – this factor is considerably important if previous trust is not established. Some researchers have absolutely no problem shipping samples across the globe, while others might physically drive their samples to a local provider to ensure sample integrity.

We would love to hear your feedback on this topic, whether you are an NGS provider or a researcher actively using next generation sequencing. What other decision-driving criteria have you found as a provider, and what other factors are important to you as a researcher?