Next Generation Sequencing Applications in the Clinic Today


Every year, an increasing number of next-generation sequencing (NGS) based diagnostic assays are validated and enter clinical practice, and hundreds more are developed for pre-clinical research purposes. NGS applications being used in the clinic today include pre-implantation genetic screening (PGS) for in vitro fertilization, chromosomal aneuploidy detection, mutation analysis of patient samples, and sequence-driven selection of chemotherapeutics.

Comprehensive assays that identify single base substitutions and fusion events are commonly performed to establish a diagnosis or to help decide which drug treatment is best. While routine implementation of clinical NGS in oncology is still in its infancy, several assays are currently being performed in CLIA-certified environments: amplicon-based gene panels (1, 2), targeted capture-based gene panels (3, 4), full exome and transcriptome sequencing (5, 6), and whole-genome and RNA-seq (7-9). Some patients are even being treated with drugs for off-label indications based on NGS tests (of all chemotherapeutic prescriptions, 33-47% are off-label (10)). As assays are standardized and confirmed on orthogonal platforms, we expect to see the number of these clinical applications increase.

NGS-based pre-implantation genetic screening has significantly changed prenatal testing and screening. Currently, only ~25% of in vitro fertilization (IVF) procedures are successful, a low success rate driven in part by increasing maternal age and the associated rise in chromosomal aneuploidy. PGS is performed to select chromosomally balanced embryos during the IVF process, ensuring only euploid embryos are implanted. This NGS-based assay has been shown to improve implantation success rates and is being used in clinics today.

Non-invasive prenatal testing using cell-free fetal DNA circulating in maternal blood allows for the detection of genetic diseases and common chromosomal aneuploidies such as trisomies 13, 18, and 21. Fetal DNA, comprising about 10% of the cell-free DNA in maternal circulation, becomes detectable 5-10 weeks after conception. The method allows for an early assessment of aneuploidy without the risk of harming the fetus. Sequenom, Verinata Health, Ariosa Diagnostics, and Natera each offer CAP- and CLIA-certified tests, available through OB/GYN offices.

In November 2013, the FDA cleared the Illumina MiSeqDx platform as a class II device along with a cystic fibrosis carrier screening assay. The assay detects 139 variants in the cystic fibrosis transmembrane conductance regulator (CFTR) gene.

These are highlights of assays one could expect to receive if visiting a clinic today. We’d like to hear from you about others in development or those currently in practice.


1)    Committee on a Framework for Developing a New Taxonomy of Disease, National Research Council. Toward Precision Medicine: Building a Knowledge Network for Biomedical Research and a New Taxonomy of Disease (National Academies Press, 2011).

2)    Beadling, C. et al. Combining highly multiplexed PCR with semiconductor-based sequencing for rapid cancer genotyping. J. Mol. Diagn. 15, 171–176 (2013).

3)    Dagher, R. et al. Approval summary: imatinib mesylate in the treatment of metastatic and/or unresectable malignant gastrointestinal stromal tumors. Clin. Cancer Res. 8, 3034–3038 (2002).

4)    Davies, H. et al. Mutations of the BRAF gene in human cancer. Nature 417, 949–954 (2002).

5)    Gnirke, A. et al. Solution hybrid selection with ultra-long oligonucleotides for massively parallel targeted sequencing. Nature Biotech. 27, 182–189 (2009).

6)    Roychowdhury, S. et al. Personalized oncology through integrative high-throughput sequencing: a pilot study. Sci. Transl. Med. 3, 111ra121 (2011).

7)    Wagle, N. et al. High-throughput detection of actionable genomic alterations in clinical tumor samples by targeted, massively parallel sequencing. Cancer Discov. 2, 82–93 (2012).

8)    Matulonis, U. A. et al. High throughput interrogation of somatic mutations in high grade serous cancer of the ovary. PLoS ONE 6, e24433 (2011).

9)    Weiss, G. J. et al. Paired tumor and normal whole genome sequencing of metastatic olfactory neuroblastoma. PLoS ONE 7, e37029 (2012).

10) Conti, R.M. et al. Prevalence of Off-Label Use and Spending in 2010 Among Patent-Protected Chemotherapies in a Population-Based Cohort of Medical Oncologists. J. Clin. Oncol. 31, 1134–1139 (2013).

Beginner’s Guide to Exome Sequencing

Exome Capture Kit Comparison

With the cost of sequencing a whole human genome decreasing (currently $1,550 for 35x coverage), we frequently hear researchers ask, “Why should I only sequence protein-coding genes?”

First, WGS of entire populations is still quite expensive. These types of projects are currently only being performed by large centers or government entities, like Genomics England, a company owned by the UK’s Department of Health, which announced that it would sequence 100,000 whole genomes by 2017. At Genohub’s rate of $1,550/genome, 100,000 genomes would cost $155 million USD. This figure covers sequencing alone and does not take into account labor, data storage, and analysis, which are likely several fold greater.

Second, the exome’s ~180,000 exons comprise less than 2% of the sequence in the human genome but contain 85-90% of all known disease-causing variants. A more focused dataset makes interpretation and analysis a lot easier.

Let’s assume you’ve decided to proceed with exome sequencing. The next step is either to find a service provider to perform your exome capture, sequencing, and analysis, or to do it yourself. Genohub has made it easy to find and directly order sequencing services from providers around the world. Several of our providers offer exome library prep and sequencing services, and if you’re only looking for help with your data analysis, you can contact one of our providers offering exome bioinformatics services. Whether you decide to send your samples to a provider or make libraries yourself, you’ll need to decide on the capture technology to use, the number of reads you’ll need, and the read length most appropriate for your exome-seq project.

There are currently three main capture technologies available: Agilent SureSelect, Illumina Nextera Rapid Capture, and Roche NimbleGen SeqCap EZ Exome. All three are in-solution based and utilize biotinylated DNA or RNA probes (baits) complementary to exons. These probes are added to genomic fragment libraries, and after a period of hybridization, magnetic streptavidin beads are used to pull down and enrich the fragmented exons. The three capture technologies are compared in a detailed table: https://genohub.com/exome-sequencing-library-preparation/. Each kit differs in number of probes, probe length, target region, input DNA requirements, and hybridization time. Researchers planning on exome sequencing should first determine whether the technology they’re considering covers their regions of interest: only 26.2 Mb of targeted bases are common to all three kits, and each technology uniquely covers small portions of the CCDS exome (Chilamakuri et al., 2014).
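To sanity-check kit coverage before ordering, you can intersect your regions of interest with a kit's target BED file. Below is a minimal pure-Python sketch (no pybedtools); the coordinates and file layout are illustrative, and it assumes the kit's target intervals have already been merged so they don't overlap:

```python
def read_bed(path):
    """Parse a BED file into {chrom: [(start, end), ...]} (0-based, half-open)."""
    regions = {}
    with open(path) as fh:
        for line in fh:
            if line.startswith(("track", "browser", "#")) or not line.strip():
                continue
            chrom, start, end = line.split()[:3]
            regions.setdefault(chrom, []).append((int(start), int(end)))
    for intervals in regions.values():
        intervals.sort()
    return regions

def covered_fraction(roi, targets):
    """Fraction of a region of interest (chrom, start, end) covered by kit targets."""
    chrom, start, end = roi
    covered = 0
    for t_start, t_end in targets.get(chrom, []):
        overlap = min(end, t_end) - max(start, t_start)
        if overlap > 0:
            covered += overlap
    return covered / (end - start)

# Toy kit design with two target intervals (coordinates are illustrative only)
kit = {"chr7": [(117119916, 117120201), (117144306, 117144417)]}
print(covered_fraction(("chr7", 117119950, 117120250), kit))
```

For real designs you would run `bedtools intersect` against the vendor's BED file, but the arithmetic is the same.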

Our Exome Guide breaks down the steps you’ll need to determine how much sequencing and what read length is appropriate for your exome capture sequencing project.
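As a rough illustration of that arithmetic, here is a back-of-envelope read calculator; the on-target and duplicate rates are placeholder assumptions that you should replace with figures for your actual kit:

```python
def reads_needed(target_mb, mean_coverage, read_len, on_target=0.7, duplicates=0.1):
    """Back-of-envelope estimate of reads for a capture experiment.
    on_target and duplicates are illustrative defaults, not measured values."""
    usable_fraction = on_target * (1 - duplicates)   # reads that count toward coverage
    bases_needed = target_mb * 1e6 * mean_coverage   # target size x desired depth
    return int(bases_needed / (read_len * usable_fraction))

# ~50 Mb exome target, 100x mean coverage, 100 bp reads:
print(reads_needed(50, 100, 100))
```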

rRNA Depletion / Poly-A Selection Responsible for Coverage Bias in RNA-seq

Using a pool of 1,062 in vitro transcribed (IVT) human cDNA plasmids, a group from the University of Pennsylvania sought to characterize coverage biases in RNA-seq experiments. Their paper, titled “IVT-seq reveals extreme bias in RNA sequencing,” was published last week.

The authors cleverly use a carefully controlled set of IVT cDNA clones whose base composition and expression levels are known. Mixing the IVT set with mouse total RNA, they found >2-fold differences in coverage for 50% of their transcripts, with 10% showing up to 10-fold differences. When IVT cDNA clones are sequenced alone, in the absence of a complex genomic milieu, the authors acknowledge biases arising from random priming, adapter ligation, and amplification, but identify poly-A selection and ribosomal depletion as the main causes of RNA coverage bias. In their analysis, they consider hexamer entropy, GC content, and sequence similarity to rRNA, and measure coverage variability as an indicator of coverage bias, along with depth of coverage as measured by FPKM. They demonstrate a significant correlation between a transcript’s similarity to rRNA and greater coverage differences between libraries that undergo rRNA depletion and those that do not.
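As a toy illustration of the coverage-variability idea (a simplified stand-in for the paper's metrics, not their actual analysis), one can compute the coefficient of variation of per-base depth along a transcript, and the fold difference in mean depth between two library preps:

```python
from statistics import mean, stdev

def coverage_cv(depths):
    """Coefficient of variation of per-base depth: higher = less even coverage."""
    return stdev(depths) / mean(depths)

def fold_change(depth_a, depth_b):
    """Fold difference in mean depth between two preps of the same transcript."""
    hi = max(mean(depth_a), mean(depth_b))
    lo = min(mean(depth_a), mean(depth_b))
    return hi / lo

# Invented per-base depths for one transcript under two library preps
polya    = [12, 15, 40, 90, 80, 30, 10, 5]
ribozero = [20, 22, 25, 28, 26, 24, 21, 20]
print(round(coverage_cv(polya), 2), round(coverage_cv(ribozero), 2))
print(round(fold_change(polya, ribozero), 2))
```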

Overall, their work demonstrates that library preparation introduces significant biases into RNA-seq data, and that carefully controlled synthetic test transcripts allow users to measure this bias accurately. Development of such control sets will allow further refinement of current library preparation practices.


Ask a Bioinformatician

In the last two years, next-gen sequencing instrument output has increased significantly; labs are now sequencing more samples at greater depth than ever before. Demand for the analysis of next generation sequencing data is growing at an arguably even higher rate. To help accommodate this demand, Genohub.com now allows researchers to quickly find and connect directly with service providers who have specific data analysis expertise: https://genohub.com/bioinformatics-services-and-providers/

Whether it’s a simple question about gluing pieces of a pipeline together or a request to have your transcriptome annotated, researchers can quickly choose a bioinformatics provider based on their expertise and post queries and project requests. Services that bioinformaticians offer on Genohub are broken down into primary, secondary and tertiary data analysis:

Primary – Involves the quality analysis of raw sequence data from a sequencing platform. Primary analysis solutions are typically provided by the platform after the sequencing phase is complete, and usually produce a FASTQ file, which combines the sequence data with a Phred quality score for each base.

Secondary – Encompasses sequence alignment, assembly, and variant calling on aligned reads. This analysis is usually resource intensive, requiring a significant amount of data handling and compute power. It typically relies on a set of algorithms that can be automated into a pipeline. While the simplest pipelines can be a matter of gluing together publicly available tools, a certain level of expertise is required to maintain and optimize the analysis flow for a particular project.

Tertiary – Annotation, variant call validation, data aggregation, and sample- or population-based statistical analysis are all components of tertiary data analysis. This type of analysis is typically performed to answer a specific biologically relevant question or to generate a series of new hypotheses that need testing.
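The FASTQ output of primary analysis is easy to inspect directly. A minimal sketch that decodes Phred+33 quality strings and reports the mean quality per read (the record shown is a toy example, not real data):

```python
def phred_scores(quality_string, offset=33):
    """Decode an ASCII quality string into per-base Phred scores."""
    return [ord(c) - offset for c in quality_string]

def parse_fastq(lines):
    """Yield (read_id, sequence, qualities) from FASTQ text, 4 lines per record."""
    it = iter(lines)
    for header in it:
        seq, _, qual = next(it), next(it), next(it)
        yield header.strip()[1:], seq.strip(), phred_scores(qual.strip())

record = ["@read1", "ACGTACGT", "+", "IIIIFFFF"]  # 'I' = Q40, 'F' = Q37
for rid, seq, quals in parse_fastq(record):
    print(rid, sum(quals) / len(quals))
```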

Researchers who need library prep, sequencing, and data analysis services can still search for, find, and begin projects as before using our Shop by Project page. What’s new is that researchers who only need data analysis services can now directly search for and contact a bioinformatics service provider to request a quote: https://genohub.com/bioinformatics-services-and-providers/

Whether you plan on performing a portion of your sequencing data analysis yourself or intend to take on the challenge of putting together your own pipeline, consultation with a seasoned expert saves time and ensures you’re on the way to successfully completing your project. By adding this new service, we’re making it easier to search for and identify the right provider for your analysis requirements.

If you’re a service provider and would like your services to be listed on Genohub, you can sign up for a Service Provider Account or contact us to discuss the screening and approval process.


PEG Precipitation of DNA Libraries – How Ampure or SPRIselect works


One question we’ve been asked, and one that our NGS providers are frequently asked, is how in principle PEG precipitates DNA during next generation sequencing library preparation cleanup. We usually hear the question presented as: how do Agencourt’s Ampure XP or SPRIselect beads precipitate DNA? The answer has to do with the chemical properties of DNA, polyethylene glycol (PEG), the beads being used, and water.

Polystyrene–magnetite beads (Ampure) are coated with a layer of negatively charged carboxyl groups. DNA’s highly charged phosphate backbone makes it polar, allowing it to readily dissolve in water (also polar). When PEG [H-(O-CH2-CH2)n-OH] is added to a DNA solution at saturating conditions, DNA forms large random coils. Adding this hydrophilic molecule with the right concentration of salt (Na+) causes DNA to aggregate and precipitate out of solution from lack of solvation (1, 2). Too much salt and you’ll have a lot of salty DNA; too little results in poor recovery. The Na+ ions shield the negatively charged phosphate backbones, causing DNA strands to stick to each other and to anything else in the near vicinity, including the carboxylated beads. Once you’re ready to elute your DNA and put it back into solution (after size selection or removal of enzymes, nucleotides, etc.), an aqueous solution (TE or water) is added back, fully hydrating the DNA and moving it from an aggregated state back into solution. The negative charge of the carboxyl beads now repels the DNA, allowing you to extract it in the supernatant.

Changing the amounts of PEG and salt can also be used to size select DNA (2). This is a common method in NGS library preparation when the user is interested in fragments of a particular size, and it is often used to replace gel steps in NGS library prep. There is already a wealth of literature on conditions for size selecting DNA; the first article we’ve found that describes this selection is referenced below (3).
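The ratio-dependent behavior can be expressed as a toy model. The mapping from bead ratio to size cutoff below is purely illustrative (real cutoffs depend on the specific bead formulation and must be calibrated empirically), but the double-sided selection logic matches common practice: a low-ratio first addition binds only large fragments (discarded with the beads), then more beads are added to the supernatant to capture the desired range.

```python
def approx_cutoff(bead_ratio):
    """Illustrative only: lower bead (PEG) ratios leave smaller fragments in
    solution, so the minimum size captured rises as the ratio drops.
    The 300/ratio curve is a made-up placeholder, not a calibration."""
    return int(300 / bead_ratio)

def double_sided_selection(fragments, high_ratio=0.6, low_ratio=1.0):
    """Keep fragments between the two cutoffs, mimicking two-step SPRI selection."""
    upper = approx_cutoff(high_ratio)  # first addition binds fragments above this
    lower = approx_cutoff(low_ratio)   # second addition captures fragments above this
    return [f for f in fragments if lower <= f < upper]

library = [120, 250, 350, 450, 700, 1200]  # fragment sizes in bp
print(double_sided_selection(library))
```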

Updated 7/18/2016

Since this post was published on May 7, 2014, several more commercial, Ampure-like size selection beads have entered the market:

  • MagJet – ThermoFisher
  • Mag-Bind – Omega Biotek
  • Promega Beads – Promega
  • Kapa Pure Beads – Kapa Biosystems

While we haven’t explored each of these yet, we suspect the chemistry behind precipitation and selection is very similar. If you’d like to share information about these beads, please leave us a comment or send us an email at support@genohub.com.

If you’d like help constructing your NGS library, contact us and we’d be happy to consult with you on your sequencing project: https://genohub.com/ngs-consultation/

If you’re looking for an NGS service provider, check out our NGS Service Matching Engine: https://genohub.com/.

(1)     A Transition to a Compact Form of DNA in Polymer Solutions: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC389314/pdf/pnas00083-0227.pdf

(2)    DNA Condensation by Multivalent Cations: https://www.biophysics.org/Portals/1/PDFs/Education/bloomfield.pdf

(3)    Size fractionation of double -stranded DNA by precipitation with polyethylene glycol: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC342844/pdf/nar00500-0080.pdf


NextSeq, HiSeq or MiSeq for Low Diversity Sequencing?

Low diversity libraries, such as those from amplicons or those generated by restriction digest, can suffer from Illumina focusing issues, a problem not found with random fragment (genomic DNA) libraries. Illumina’s Real Time Analysis software uses images from the first 4 cycles to determine cluster positions (X,Y coordinates for each cluster on a tile). With low diversity samples, color intensity is not evenly distributed across channels, causing a phasing problem; this tends to result in a high phasing number and quickly deteriorating quality.

Traditionally this problem is solved in two ways:

1) ‘Spiking in’ a higher diversity sample such as PhiX (a small viral genome used to enable quick alignment and estimation of error rates). This increases the diversity at the beginning of your read and evens out the intensity distribution across all four channels. Many groups spike in as much as 50% PhiX in order to achieve a more diverse sample. The disadvantage is that you lose 50% of your reads to a sample you were never interested in sequencing.

2) Designing amplicon primers with a series of random ‘N’ bases (25% each of A, T, G, and C) upstream of the gene-specific sequence. This, in combination with a PhiX spike-in, also helps increase color diversity. The disadvantage is that these extra bases cut into your desired read length, which can be problematic when you are trying to conserve cycles to sequence a 16S variable domain.
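Generating such randomized spacers is straightforward. The sketch below prepends 0-3 random bases to the commonly used 515F 16S primer so that, when the variants are pooled, the conserved primer bases hit the sequencer at staggered cycles; the spacer scheme and lengths are illustrative:

```python
import random

def staggered_primers(primer, max_spacer=3, seed=0):
    """Return primer variants with 0..max_spacer random bases prepended.
    Pooling these staggers the conserved primer bases across sequencing
    cycles, increasing per-cycle base diversity."""
    rng = random.Random(seed)  # seeded for reproducibility of this sketch
    variants = []
    for n in range(max_spacer + 1):
        spacer = "".join(rng.choice("ACGT") for _ in range(n))
        variants.append(spacer + primer)
    return variants

# 515F 16S primer (M = A/C degenerate base)
for p in staggered_primers("GTGCCAGCMGCCGCGGTAA"):
    print(p)
```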

Last year, Illumina released a new version of their control software that included updated MiSeq Real Time Analysis (RTA) software, significantly improving data quality for low diversity samples. The update included: 1) improved template generation and higher-sensitivity template detection for optically dense and dark images; 2) a new color matrix calculation performed at the beginning of read 1; 3) the use of 11 cycles for template generation to increase diversity; and 4) phasing and pre-phasing corrections optimized per cycle and per tile to maximize intensity data. Now, with a software update and as little as 5% PhiX spike-in, you can sequence low diversity libraries and expect significantly better MiSeq data quality.

Other instruments, including the HiSeq and GAIIx, still require at least 20-50% PhiX and are less suited to low diversity samples. If you must use a HiSeq for your amplicon libraries, take the following steps:

1) Reduce your cluster density by 50-80% to reduce overlapping clusters

2) Use a high PhiX spike-in (up to 50% of the total library)

3) Use custom primers with a random leading sequence to increase diversity. Alternatively, intentionally concatemerize your amplicons and fragment them to increase base diversity at the start of your reads.

The NextSeq 500, released in March 2014, uses two-channel SBS chemistry, likely making it even less suited to low diversity amplicons. As of April 2014, Illumina has not performed significant validation or testing of low diversity samples on the NextSeq 500, and the instrument is not expected to perform better than the HiSeq for these sample types.

So, in conclusion, the MiSeq is currently still the best Illumina instrument for sequencing samples of low diversity: https://genohub.com/shop-by-next-gen-sequencing-technology/#query=c814746ad739c57b9a69e449d179c27c

100 Gb of Data per Day – NextSeq 500 Sequencing Services Now Available on Genohub


Find NextSeq 500 service providers on Genohub.com

Access to the NextSeq 500, Illumina’s first high-throughput desktop sequencing instrument, is now available on Genohub.com. While not the highest throughput instrument on the market, it is one of the fastest, with up to a 6x increase in bases read per hour compared to the HiSeq. The instrument is ideally suited for those who need a moderate amount of sequencing data (more than a MiSeq run, less than a HiSeq run) in a short amount of time. We expect the highest interest to center around targeted sequencing (exome or custom regions) and fast RNA profiling. For exome studies, you can run between 1 and 12 samples in a single run and get back 4 Gb at 2×75 or 5 Gb at 2×100 read lengths. If you’re interested in RNA profiling at 10M reads per sample, you can multiplex between 12 and 36 samples in a single run. A 1×75 run takes as few as 11 hours to complete, and 2×150 runs take ~29 hours.
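The multiplexing arithmetic behind those sample counts is simple. The ~400M reads per run figure below is the nominal NextSeq 500 high-output cluster count, and the 10% overhead reserved for index imbalance and failed reads is an assumption:

```python
def samples_per_run(run_reads, reads_per_sample, overhead=0.1):
    """How many samples can be pooled on one run, reserving `overhead`
    of the run's reads for index imbalance and failed reads."""
    usable = run_reads * (1 - overhead)
    return int(usable // reads_per_sample)

# NextSeq 500 high-output run (~400M reads), RNA profiling at 10M reads/sample:
print(samples_per_run(400e6, 10e6))
```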

You can order NextSeq 500 sequencing services today and expect to receive data back in 3-4 days! Prices for 1 lane start at $2,250. Start your search here and use our helpful filters to narrow down your choices: https://genohub.com/shop-by-next-gen-sequencing-technology/#query=e4c84df6f5ddd963cc48c30d3f93d505

After you’ve identified the service you need, communicate your questions directly to the service provider; we’ll make sure you get a fast response. Genohub also takes care of billing and invoicing, making domestic and international ordering a breeze, and our easy-to-use project management interface keeps communication and sample specification data in one place.

If you’re not familiar with NextSeq technology or how best this instrument can be applied to your samples, take advantage of our complimentary consultation service: https://genohub.com/ngs-consultation/. We can help with your sequencing project design and make recommendations as to which sequencing service is best suited for your experiment.

Last month we announced the availability of HiSeq X Ten services on Genohub: https://blog.genohub.com/the-1k-30x-whole-human-genome-is-now-available-for-1400/

As an efficient online market for NGS services, Genohub increases your access to the latest instrumentation and technology. You don’t have to shell out $250K for a NextSeq or $10M for a HiSeq X Ten when access to professional services is right at your fingertips!


Yearly Demand for Whole Human Genome Sequencing – 400K New Genomes in 2015?


(Figure courtesy of the National Human Genome Research Institute: http://www.genome.gov/27553526) 

Advances since the Human Genome Project ended in 2003 have been significant. With new Illumina sequencing instruments becoming operational in April, large facilities will be able to generate 18,000 whole human genomes per year (18,000 30x genomes per HiSeq X Ten, a set of 10 HiSeq X Systems). As of today, these facilities include: the Broad Institute, Garvan Research Foundation, Macrogen, New York Genome Center, Novogene and WuXi PharmaTech. At a rate of 1 genome/lane, this raises the question: how many 30x human genomes will be sequenced in the next 3 years? Let’s estimate that each facility will churn out around 25,000 30x genomes per year (some of the facilities above have purchased multiple HiSeq X Tens; others have more than 10 daisy-chained together). In 2015, yield from these facilities alone (assuming no one else purchases a machine) would be ~150,000 genomes. Optimistically doubling that to account for new HiSeq X Ten purchases between now and 2015 gives an estimate of ~300,000 genomes in 2015, and that’s only on the HiSeq X Ten. Assuming 60,000 30x (non-HiSeq X Ten) genomes are sequenced this year, 20% growth brings the total closer to ~400,000 genomes in 2015. While this figure certainly does not account for delays, instrument breakdowns, or data analysis, storage, and library prep bottlenecks, it represents optimistic potential for 2015.
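The back-of-envelope arithmetic above, written out explicitly (every input is an assumption stated in the text):

```python
# Back-of-envelope estimate using the figures from this post.
facilities = 6                     # HiSeq X Ten sites listed above
genomes_per_facility = 25_000      # assumed 30x genomes per site per year

x_ten_2015 = facilities * genomes_per_facility       # ~150,000 genomes
x_ten_2015_optimistic = x_ten_2015 * 2               # new purchases -> ~300,000

other_platforms_2014 = 60_000      # assumed non-HiSeq X Ten 30x genomes this year
other_platforms_2015 = other_platforms_2014 * 1.2    # 20% growth

total_2015 = x_ten_2015_optimistic + other_platforms_2015
print(total_2015)  # ~372,000, i.e. approaching 400,000 genomes
```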

The next question is: who’s going to supply all the DNA? Several new initiatives to sequence whole populations are quickly popping up. With £100m earmarked, the UK is planning to sequence the genomes of up to 100,000 NHS patients by 2017 (likely on an Illumina platform), and Saudi Arabia also plans to map 100,000 of its citizens, with the Ion Proton in line to do all the heavy lifting: http://www.bbc.com/news/health-25216135. Craig Venter’s recently launched company, Human Longevity, plans to start sequencing 40,000 genomes, with plans to “rapidly scale to 100,000 human genomes / year”: http://www.humanlongevity.com/human-longevity-inc-hli-launched-to-promote-healthy-aging-using-advances-in-genomics-and-stem-cell-therapies/.

Everything described above pertains to whole human genome sequencing and is not meant to undercut the significantly larger number of other species that will be sequenced between now and 2015. Our focus at Genohub is to make it easy for researchers interested in next generation sequencing services to access all the latest sequencing technology, including the HiSeq X Ten: https://genohub.com/shop-by-next-gen-sequencing-technology/#query=e304abac02105b87079fd1a19e70b9ed. Anyone can search for, find, and order sequencing, library prep, and analysis services, so owning a sequencing instrument is not a requirement for getting access to good quality data.


The “$1K”, 30X Whole Human Genome is now available for $1,400

HiSeq X Ten Sequencing Services now Available on Genohub

You can now order whole human genome sequencing (~30x coverage) on Genohub.com for $1,400/sample ($1,550 with library prep). The Kinghorn Centre for Clinical Genomics is accepting orders for their HiSeq X Ten service through Genohub.com. In fact, you can order this service today: https://genohub.com/shop-by-next-gen-sequencing-technology/#query=5a4399a2a2cab432b240d2426c708472

Designed for population-scale human genome sequencing, a HiSeq X Ten operating individually can output between 1.6 and 1.8 Tb on a dual flow cell in less than 3 days (600 Gb/day). When 10 run in parallel, tens of thousands of genomes can be sequenced in a single year. While Illumina currently limits the HiSeq X Ten to human samples, we expect this will change in 2015.

A single lane of the HiSeq X gives you 750M paired-end 2×150 reads, for a total output of 112.5 Gb/lane. Kinghorn guarantees 100 Gb of raw data per lane, with >75% of bases above Q30 at 2×150 bp. With a haploid human genome size of 3.2 Gb, that’s equivalent to 30-35x coverage per lane. The $10 million price tag of the HiSeq X Ten means that not all institutes have access to such sequencing power. Genohub solves this problem by making it easy for researchers interested in next generation sequencing services to access all the latest sequencing technology. We also:
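The coverage arithmetic behind those numbers is just sequenced bases divided by haploid genome size:

```python
def mean_coverage(total_bases_gb, genome_size_gb=3.2):
    """Approximate mean depth: total sequenced bases / haploid genome size."""
    return total_bases_gb / genome_size_gb

print(round(mean_coverage(100), 1))    # guaranteed 100 Gb per lane -> ~31x
print(round(mean_coverage(112.5), 1))  # full 112.5 Gb per lane -> ~35x
```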

  1. Ensure your project with the provider goes smoothly
  2. Take care of billing and invoicing, making domestic & international ordering a breeze
  3. Have an easy to use project management interface to keep communication and information in one place
  4. Offer NGS project design and consultation
  5. Have competitive pricing and turnaround times

Start your population study on Genohub.com today!


AGBT 2014 – Digest of Days 2 – 4

AGBT 2014 Summary

Days 2 – 4 of the Advances in Genome Biology and Technology (AGBT) meeting were packed with great talks and insightful commentary (#AGBT14). For a summary of day 1, check out our earlier post. Below is a brief digest of the talks we attended; we’ll update this blog post as we fill in more details.

Day 2 of the meeting started off with a cancellation of the much-anticipated talk by Evan Eichler on “Advances in Sequencing Technology Identify New Mutations, Genes and Pathways Related to Autism”. Dr. Eichler studies gene duplication and DNA transposition within the human genome. We’re assuming that what he was going to present was just published this month on the de novo convergence of autism genetics and molecular neuroscience. Let us know if you know otherwise, or when Dr. Eichler is speaking again!

Stephen Fodor, founder of Affymetrix, announced a new approach for quantifying mRNA transcripts in single cells. The method utilizes a single-tube endpoint assay and allows precise measurement of transcripts without the need for cycle-to-cycle real-time measurements or physical partitioning (digital PCR). The procedure works by encoding all mRNA molecules with molecular barcodes, allowing users to amplify their samples without worrying about duplication rates. Pixel™, the name of their device, is described in more detail in their white paper.
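The molecular-barcode idea can be sketched in a few lines: duplicate reads carrying the same barcode collapse to a single molecule, so PCR amplification no longer inflates counts. This is a generic unique-molecular-identifier sketch with invented data, not Affymetrix's actual implementation:

```python
from collections import defaultdict

def umi_counts(reads):
    """Count unique molecules per gene: duplicate (gene, barcode) pairs
    collapse to one molecule, so amplification does not inflate the count."""
    molecules = defaultdict(set)
    for gene, barcode in reads:
        molecules[gene].add(barcode)
    return {gene: len(barcodes) for gene, barcodes in molecules.items()}

# Toy reads after amplification: (gene, molecular barcode)
reads = [("ACTB", "AAGT"), ("ACTB", "AAGT"), ("ACTB", "CGTA"),
         ("GAPDH", "TTAC"), ("GAPDH", "TTAC"), ("GAPDH", "TTAC")]
print(umi_counts(reads))  # 3 reads each, but ACTB was 2 molecules, GAPDH only 1
```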

Hiroyuki Aburatani presented a lecture applying epigenome profiling and single-cell transcriptome analysis, demonstrating that Wnt/β-catenin signaling switches transcriptional networks and reorganizes the epigenome toward specific pluripotent lineages. The analysis is designed to improve our understanding of cell fate during differentiation.

David Jaffe’s much-anticipated talk on the assembly of bacterial genomes using long nanopore reads generated a significant amount of interest. David presented data on two bacterial genomes, a methylation-negative E. coli and Scardovia. His conclusion was that the data were not useful on their own for de novo assembly, but he speculated on applications that would benefit from data in which 84% of reads contain at least one perfect 50-mer. This talk was nicely summarized by two other blog posts: “The one and only Oxford Nanopore talk at AGBT 2014 – with real data” and “Oxford Nanopore Data and MinION: Valentine’s Day’s Gift to Genome Enthusiasts”.

William McCombie’s talk demonstrated his group’s ability to generate 10-kilobase reads and assemble the Saccharomyces cerevisiae W303 genome using HGAP and the Celera Assembler, with a resulting contig N50 approaching 1 million bases. Their hybrid assembly made it possible to generate whole eukaryotic genome assemblies that exceed BAC-based assemblies with Sanger sequencing.
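For reference, contig N50 (the headline assembly metric here) is the contig length at which contigs that long or longer contain at least half of the total assembly. A minimal sketch with toy contig lengths:

```python
def n50(contig_lengths):
    """Length L such that contigs of length >= L sum to at least half the assembly."""
    half = sum(contig_lengths) / 2
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running >= half:
            return length

print(n50([500, 400, 300, 200, 100]))  # half of 1500 is 750; 500+400 crosses it -> 400
```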

Gene Myers discussed a new assembler called the “Dazzler” (the Dresden azzembler) that can assemble 1-10 Gb genomes from PacBio RS II data. The new assembler’s advantages are that it scales to Gb-sized genomes 100-fold faster than current assemblers, and that it has a “scrubbing phase” that detects and corrects read artifacts that can cause problems with long contiguous assemblies. A nice summary of this talk can be found in Dale Yuzuki’s blog post: http://www.yuzuki.org/favorite-talk-agbt-2014-gene-myers-max-planck-dresden/

Hessam Esfandyarpour of Genapsys presented what they call a “truly cost-disruptive sequencing platform”, the GENIUS 110 System. Backed by just under $50 million in venture funds from Yuri Milner, DeChang Capital and IPV Capital, the GENIUS system uses nano-electronic technology (clonal amplification as opposed to single-molecule sequencing) combined with unmodified nucleotides and polymerase to generate “long reads”. They announced the Genius Club, an early access program to test the instrument. While he did talk about consumables (three chips delivering 1 Gb, 10 Gb, and 100 Gb), he didn’t show much data. We’ll have to wait to hear more.

Jeffery Schloss, director of the Division of Genome Sciences at the National Human Genome Research Institute, gave a talk on “Ambitious Goals, Concerted Efforts, Conscientious Collaborations – 10 Years Hence”. His talk began with a graph measuring cost per base on the y-axis against the years 1990-2005 on the x-axis. He reiterated NHGRI’s strategic plan of moving from base pairs to bedside: http://www.genome.gov/Pages/About/Planning/2011NHGRIStrategicPlan.pdf. Schloss’s talk was nicely summarized in a recent post by Dale Yuzuki.

We’re in the process of updating and adding summaries of talks from AGBT 2014. Check in again!