Monday, 15 September 2014

Are PCR-free exomes the answer?

I'm continuing my exome posts with a quick observation. I've seen several talks recently where people present genome and exome data and highlight the drop-out of genomic regions, primarily due to PCR amplification and hybridisation artefacts. They make a compelling case for avoiding PCR when possible, and for sequencing a genome to get the very best quality exome.

A flaw with this is that we often want to sequence an exome not simply to reduce the costs of sequencing, but more importantly to increase the coverage to a level that would not be economical for a genome, even on an X-Ten! For studies of heterogeneous cancer we may want to sequence the exome to 100x or even 1000x coverage to look for rare mutant alleles. Unfortunately this is exactly the kind of analysis that might be messed up by those same PCR artefacts, namely PCR duplication (introducing allele bias) and base misincorporation (introducing artefactual variants).
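As a back-of-the-envelope illustration of that sampling problem, here is a minimal sketch of my own (assuming error-free reads, no allele bias, and an arbitrary threshold of three supporting reads; none of this comes from a specific paper or pipeline):

```python
from math import comb

def detection_probability(depth, allele_freq, min_reads):
    """Probability of seeing at least `min_reads` mutant reads at a site
    covered `depth` times, when the mutant allele is at `allele_freq`.
    Assumes error-free reads and unbiased sampling (no PCR duplicates)."""
    p_fewer = sum(
        comb(depth, k) * allele_freq**k * (1 - allele_freq)**(depth - k)
        for k in range(min_reads)
    )
    return 1 - p_fewer

# How often would a 1% mutant allele be seen at least 3 times?
for depth in (30, 100, 1000):
    print(depth, round(detection_probability(depth, 0.01, 3), 3))
```

At 30x a 1% allele is essentially invisible, at 100x it clears three supporting reads less than 10% of the time, and at 1000x it is almost always captured; which is the whole argument for ultra-deep exomes.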

PCR-free exomes: In my lab we are running Illumina's rapid exomes, so PCR is a requirement to complete the Nextera library prep. But if we were to use another method then, in theory, PCR-free exomes would be possible. Even if we stick with Nextera (or Agilent QXT) we could aim for very low-cycle PCR libraries. The amount of exome library we are getting is huge, often hundreds of nanomoles, when we only need picomoles for sequencing.

Something we might try is a PCR-free or PCR-lite (pardon the American spelling) exome, to see if we can reduce exome artefacts and improve variant calling. If anyone else is doing this please let me know how you are getting along and how far we can push this.

Thursday, 4 September 2014

The newest sequencer on the block: Base4 goes public

I've heard lots of presentations about novel sequencing technologies: many never arrived, some have come and gone, and all have been pretty neat ideas; but so far not one has emerged that outperforms the Illumina systems many readers of this blog are using.


Base4's pyrophosphorolysis sequencing technology

The latest newcomer is Base4's single-molecule microdroplet sequencing technology. The picture above explains the process well. A single molecule of double-stranded DNA is immobilised in the sequencer and single bases are cleaved at a defined rate from the 3' end by pyrophosphorolysis (the new pyrosequencing, perhaps?). As each nucleotide is cleaved it is captured into a microdroplet, where it initiates a cascade reaction that generates a fluorescent signal unique to each base. Because microdroplets are created at a faster rate than bases are cleaved from the 3' end, the system generates a series of droplets that can be read out by the sequencer (a little like fluorescent products being read off a capillary electrophoresis instrument).

Base4 are talking big about what their technology can deliver. They say it will be capable of sequencing 1M bases per second with low systematic error rates. Single-molecule sequencing means no amplification, and read-lengths should be long. Parallelisation should allow multiple single molecules to be sequenced at the same time. How much it will cost, and when, we'll have to wait a little longer to find out.

I've been speaking to Base4 over the past few years, after meeting their founder Cameron Frayling in a pub in Cambridge. Over the past two years Base4 has been developing their technology and recently achieved a significant milestone by demonstrating robust base-calling of single nucleotides in microdroplets. They are still small, with just 25 employees, and are based outside Cambridge. I hope they'll be growing as we start to get our hands on the technology and see what it's capable of.

Low-diversity sequencing: RRBS made easy

Illumina recently released HCS v2.2.38 for the HiSeq. The update improves cluster definition significantly and enables low-diversity sequencing. It’s a great update and one that’s making a big impact on a couple of projects here.



Thursday, 28 August 2014

SEQC kills microarrays: not quite

I've been working with microarrays since 2000, and ever since RNA-seq came on the scene the writing has been on the wall. RNA-seq has so many advantages over arrays that we've been recommending it as the best way to generate differential gene expression data for a number of years. However, cost and the relative immaturity of analysis meant we still ran over 1,000 arrays in 2013; but it looks like 2014 might be the end of the line. RIP: microarrays.



Thursday, 21 August 2014

FFPE: the bane of next-generation sequencing? Maybe not for long...

FFPE makes DNA extraction difficult: DNA yields are generally low, quality can be affected by fixation artefacts, and the number of amplifiable copies of DNA is reduced by strand-breaks and other DNA damage. Add on top of this the almost complete lack of standardisation in fixation protocols and the widely different ages of samples, and it's not surprising FFPE causes a headache for people who want to sequence genomes and exomes. In this post I'm going to look at alternative fixatives to formalin, QC methods to assess the suitability of FFPE samples for NGS, some recent papers, and new methods to fix FFPE damage.
 
Why do we use formalin fixation: The ideal material for molecular studies is fresh-frozen (FFZN) tumour tissue, as its nucleic acids are of high quality. But many cancer samples are fixed in formalin for pathological analysis and stored as Formalin-Fixed, Paraffin-Embedded (FFPE) blocks, preserving tissue morphology but damaging nucleic acids. The most common artefacts are C>T base substitutions, caused by deamination of cytosines to uracil (read as thymine during PCR amplification), and strand-breaks. Both reduce the amount of correctly amplifiable template DNA in a sample, and this must be considered when designing NGS experiments.
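To make that concrete, here is a deliberately minimal, hypothetical filter; the variant representation and the five-read threshold are invented for illustration, and real variant callers use far more sophisticated models:

```python
# Flag C>T (and the reverse-strand G>A) substitutions with weak read
# support as possible deamination artefacts. Threshold is arbitrary.
def flag_possible_deamination(ref, alt, alt_reads, min_support=5):
    is_ct_class = (ref, alt) in {("C", "T"), ("G", "A")}
    return is_ct_class and alt_reads < min_support

variants = [("C", "T", 2), ("A", "G", 12), ("G", "A", 3), ("C", "T", 40)]
for ref, alt, alt_reads in variants:
    if flag_possible_deamination(ref, alt, alt_reads):
        print(f"{ref}>{alt} with {alt_reads} reads: possible FFPE artefact")
```

A well-supported C>T at 40 reads passes, while the poorly supported ones get flagged; exactly the kind of calls that deep FFPE sequencing produces in bulk.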
 
Molecular fixatives: Our Histopathology core recently published a paper in Methods: Tissue fixation and the effect of molecular fixatives on downstream staining procedures. In it they demonstrated that, overall, molecular fixatives preserved tissue morphology as well as formaldehyde did for most histological purposes. They presented a table listing the molecular-friendly fixatives and reporting the read-lengths achievable from DNA & RNA (medians of 725 and 655 respectively). All the fixatives reviewed have been shown to preserve nucleic acid quality, assessed by qPCR Ct values or through RNA analysis (RIN, rRNA ratio, etc). But no-one has performed a comparison of these at the genome level, and the costs of sequencing probably keep this kind of basic test beyond the limits of most individual labs.

The paper also presents a tissue-microarray of differently fixed samples, a unique resource that allowed them to investigate the effects of molecular fixatives on histopathology. All methods preserved morphology, but there was wide variation in the results from staining. This highlights the importance of performing rigorous comparisons, even for the most basic procedures in a paper (sorry to any histopathologists reading this, but I am writing from an NGS perspective).

The first paper describing a molecular fixative (UMFIX) appeared back in 2003. In it the authors compared FFZN and UMFIX tissue for DNA and RNA extraction, finding no significant differences between the two on PCR, RT-PCR, qPCR, or expression microarrays. Figure B from their paper shows how similar the RNA Bioanalyser profiles from UMFIX and FFZN tissues were.

UMFIX (top) and FFZN (bottom)

 

Recent FFPE papers: A very well-written paper from May 2014 by Hedegaard et al compared FFPE and FFZN tissues to evaluate their use in exome and RNA-seq. They used two extraction methods for DNA and three for RNA, with different effects on quality and quantity. Only 30% of exome libraries worked, but those that did gave 70% concordance (FFZN:FFPE). They made RNA-seq libraries from 20-year-old samples with 90% concordance, and found a set of 1,500 genes whose expression changes appear to be due to fixation. Their results certainly make NGS analysis of FFPE samples seem much more feasible than previous work suggested. Interestingly, they made almost no changes to the TruSeq exome protocol, so some fiddling with the library prep, for instance adding more input DNA to reduce the impact of strand-breaks, would probably help a lot (as would fixing FFPE damage - see below). The RNA-seq libraries were made using RiboZero and ScriptSeq. Figure 2 from their paper shows the exome variants as percentages of common (grey), FFZN-only (white) and FFPE-only (red) calls; there are clear sample issues due to age (11, 7, 3 & 2 years in storage) but the overall results were good.


Other recent papers looking at FFPE include:
  • Ma et al (Apr 2014): developed a bioinformatics method for gene fusion detection in FFPE RNA-seq.
  • Li et al (Jan 2014): investigated the effect of molecular fixatives on routine histopathology and molecular analysis, achieving high-quality array results with as little as 50ng of RNA.
  • Norton et al (Nov 2012): manually degraded RNA in 9 pairs of matched FFZN/FFPE samples and ran both NanoString and RNA-seq; both gave reliable gene expression results from degraded material.
  • Sinicropi et al (Jul 2012): developed and optimised RNA-seq library prep and informatics protocols.
  • Cabanski et al: most recently published what looks like the first RNA-access paper (not open access and unavailable to me). RNA-access is Illumina's new kit for FFPE that combines RNA-seq prep from any RNA (degraded or not) with exome capture (we're about to test this, once we get samples).

QC of FFPE samples: It is relatively simple to extract nucleic acids from FFPE tissue and quantify how much DNA or RNA there is, but tolerating a high failure rate in subsequent library prep, due to low-quality material, is likely to be too much of a headache for most labs. Fortunately several groups have been developing QC methods for FFPE nucleic acids. Here I'll focus mostly on those for DNA.

Van Beers et al published an excellent paper in 2006 on a multiplex PCR QC for FFPE DNA. Developed for CGH arrays, it amplifies 100, 200, 300 and 400bp fragments from non-overlapping target sites in the GAPDH gene of the template FFPE DNA. Figure 2 from their paper (reproduced below) demonstrates results from a good (top) and a bad (bottom) FFPE sample.


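A sketch of how such a readout might be scored programmatically; this is my own illustrative scoring, not anything from the Van Beers paper:

```python
# Hypothetical scoring of a Van Beers-style QC: the largest GAPDH
# amplicon that still amplifies is a proxy for template integrity.
AMPLICON_SIZES = (100, 200, 300, 400)  # bp, non-overlapping GAPDH targets

def ffpe_quality(amplified_sizes):
    """Return the largest amplifiable fragment size (0 if none amplified)."""
    working = [size for size in AMPLICON_SIZES if size in amplified_sizes]
    return max(working, default=0)

print(ffpe_quality({100, 200, 300, 400}))  # 400 -> good quality sample
print(ffpe_quality({100}))                 # 100 -> badly degraded sample
```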
Whilst the Van Beers method is very robust and generally predictive of how well an FFPE sample will work in downstream molecular applications, it is not high-throughput. Other methods generally use qPCR as the analytical method, as it is quick and can be run at very high throughput. Illumina sell an FFPE QC kit which compares test samples to a control template, using a deltaCq method to determine whether samples are suitable for arrays or NGS. LifeTech also sell a similar kit, but for RNA (Arcturus sample QC), using two β-actin probes and assessing quality via their 3'/5' ratio. Perhaps the ideal approach would be a set of exonic probes multiplexed as 2-, 3-, or 4-colour TaqMan assays; this could be used on DNA and RNA and would bring the benefits of the Van Beers and LifeTech methods to all sample types.
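For illustration, a minimal sketch of the deltaCq idea; the function names and the two-cycle cut-off are my assumptions, not Illumina's documented thresholds:

```python
def delta_cq(sample_cq, control_cq):
    """deltaCq: how many qPCR cycles the FFPE sample lags behind an
    intact control template amplified from the same input mass."""
    return sample_cq - control_cq

def passes_qc(sample_cq, control_cq, max_delta=2.0):
    # Each cycle of lag implies roughly a two-fold drop in amplifiable
    # template; the 2.0 cut-off here is illustrative only.
    return delta_cq(sample_cq, control_cq) <= max_delta

print(passes_qc(sample_cq=26.1, control_cq=25.0))  # True: ~2-fold loss
print(passes_qc(sample_cq=30.5, control_cq=25.0))  # False: ~45-fold loss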

Fixing FFPE damage: Another option is to fix the damage caused by formalin fixation. This is attractive as there are literally millions of FFPE blocks, and many have long-term follow-up data. A paper in Oncotarget in 2012 reported the impact of using uracil-DNA glycosylase (UDG) to reduce the C>T artefacts caused by cytosine deamination to uracil. They also showed that this can be incorporated into current methods as a step prior to PCR, something we've been doing for qPCR for many years. There are no strong reasons not to incorporate this as a standard step in NGS workflows, as it has little impact on high-quality templates.

NEB offer a cocktail of enzymes in their PreCR kit, which repairs damaged DNA templates. It is designed to work on modified bases, nicks and gaps, and blocked 3' ends. They had a poster at AGBT demonstrating the utility of the method, showing increased library yields and success rates with no increase in bias in sequencing data.

Illumina also have an FFPE restoration kit; restoration is achieved through treatment with a DNA polymerase, a DNA repair enzyme, a ligase, and a modified Infinium WGA reaction - see here for more details.

These cocktails can almost certainly be added to: MUTYH fixes 8-oxo-G damage; CEL1, used in TILLING analysis to create strand-breaks in mismatched templates, could be included; and plenty of other DNA repair enzymes could be added to a mix to remove nearly all compromised bases. It may even be possible to go a step further and repair compromised bases rather than just neutralise their effect.

Whatever the case it looks very much like FFPE samples are going to be processed in a more routine manner very soon.


Monday, 18 August 2014

$1000 genomes = 1000x coverage for just £20,000

It strikes me that if you can now sequence a genome for $1,000, then you could buy 1000x coverage for not much more than a 30x genome cost a couple of years ago! Using a PCR-free approach, I can imagine this would be the most sensitive tool to determine tumour, or population, heterogeneity. I’m sure sampling statistics might limit the ability to detect low-prevalence alleles, but I’m amazed by the possibility nonetheless.
  • 1 X-Ten run costs $1,000
  • 1000x coverage requires 33 X-Ten runs (at 30x each)
  • $33,000 ≈ £20,625 (see the sketch below)
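The arithmetic as a sketch, assuming the headline $1,000-per-30x figure and a rough 2014 exchange rate of $1.60 to the pound (real X-Ten pricing is of course more complicated):

```python
COST_PER_30X_RUN_USD = 1_000   # headline "$1,000 genome" figure
USD_PER_GBP = 1.60             # approximate 2014 exchange rate

runs = 33                      # as in the bullets above
coverage = runs * 30           # ~990x, close enough to 1000x
cost_usd = runs * COST_PER_30X_RUN_USD
cost_gbp = cost_usd / USD_PER_GBP
print(f"{coverage}x for ${cost_usd:,} = about £{cost_gbp:,.0f}")
# -> 990x for $33,000 = about £20,625
```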
If you’re running a ridiculously high-coverage human genome project on an X-Ten do let me know!

Thursday, 14 August 2014

How many MiSeq owners are using the bleach protocol to minimise carryover?

A comment popped up on a post I'd written in April last year, "MiSeq (and 2500) owners better read this and beware", that made me think I'd ask readers the question in the title: "How many of you are using the MiSeq bleach wash protocol?"

The carryover issue meant that a small residue of the last library run was sequenced in the subsequent run. This posed a potential problem for MiSeq users, particularly those interested in low-frequency mutations. My post was prompted by discussions with other MiSeq owners and a thread on SEQanswers, to which Illumina posted describing their experience of reducing this carryover, noting that it was typically seen at around 0.1%.

The comment on my post was about a more aggressive bleach protocol which reduces carryover to almost undetectable levels (thanks Illumina), but which appears not to have been communicated to all users. I couldn't find it on the Illumina website, but that's not the easiest site in the world to navigate, so I thought I'd put the document up here for you to see (click the image for the TechNote).

https://drive.google.com/file/d/0B383TG7oJh2CTWp6MzZkMHBDd28/edit?usp=sharing

You need to request this through your tech support team as it needs a new recipe on your MiSeq. And you really must follow the instructions to the letter: too much bleach and you'll probably kill your MiSeq!

Illumina demonstrated that this protocol can reduce carryover to less than 0.001%, or one read per 100,000. We've been using this as the default wash for many months and reports of carryover are nearly non-existent.
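To put those rates in context, a trivial calculation (the 25 million reads per run is my assumed, typical MiSeq v3 output, not a figure from the TechNote):

```python
reads_per_run = 25_000_000       # assumed typical MiSeq v3 output
for rate in (1e-3, 1e-5):        # 0.1% standard wash vs 0.001% bleach wash
    print(f"{rate:.3%} carryover -> ~{int(reads_per_run * rate):,} reads")
# 0.100% -> ~25,000 reads; 0.001% -> ~250 reads
```

Tens of thousands of contaminating reads versus a few hundred is the difference between drowning a low-frequency mutation signal and barely noticing the carryover.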

Sunday, 27 July 2014

When will BGI stop using Illumina sequencers?

With BGI aiming to get their own diagnostic sequencing tests onto the market, and having purchased and developed the Complete Genomics technology - Omega - one question worth asking is whether BGI will ever stop using Illumina technology.

BGI still have the largest HiSeq installation, but they have not purchased an X-Ten and it's not clear if they've switched over to v4 chemistry on the updated 2500. The cost of upgrades or replacement across 128 machines would be high, but BGI have deep pockets. So is this the beginning of the end for Illumina in China?

If BGI stops using Illumina, will Illumina notice? I'm sure they will, and the markets will read lots into any announcement, but in the long run it's difficult to see China without an Illumina presence. The Chinese science community is booming, their research spend is second only to the US and is likely to climb much more quickly, and they have a massive health-care market that NGS can make a big impact on.

Once we hear what BGI can do with the CGI technology (exomes, for instance) we might find that Illumina has a strong competitor; and with LifeTech/Thermo effectively putting Ion Torrent on hold, competition in the NGS market is something the whole community, including Illumina, needs.

PS: This is my last post for a couple of weeks while I'm off on holiday in Spain. Hasta luego!

Monday, 21 July 2014

1st Altmetric conference - Sept 25/26th in London

I've been a user of Altmetric for a while now and very much like what they are doing with article metrics. I'm sure many Core Genomics readers will also have seen the Altmetric badge on their own papers. Now Altmetric are hosting their first conference.


The meeting aims to demonstrate how users are integrating Altmetric tools into their processes. Hopefully they'll cover lots of interesting topics and spend some time talking about how the community can keep tools like Altmetric from becoming devalued by gaming.

Might see you there...

Thursday, 17 July 2014

A hint at the genome's impact on our social lives

GWAS is still in the news and still finding hits; the number of GWAS hits has increased rapidly since the first publication, for AMD in 2005. Watch the movie to see the last decade of work!




A recent paper in PNAS seems to have got people talking: in Friendship and natural selection, Nicholas Christakis and James Fowler describe their analysis of the Framingham Heart Study (FHS) data, specifically the data on people recorded as friends by participants. The FHS recorded lots of information about relatives (parents, spouses, siblings, children), but also asked participants to "please tell us the name of a close friend". Some of those friends were also participants, and it is this data the paper used to determine a kinship coefficient; higher values indicate that two individuals share a greater number of genotypes (homophily).
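For readers unfamiliar with the term, the standard population-genetics definition is below; this is the textbook formulation, not the paper's specific genotype-based estimator:

```latex
% Kinship coefficient between individuals i and j: the probability that
% an allele drawn at random from i and an allele drawn at random from j,
% at the same locus, are identical by descent (IBD).
\varphi_{ij} = \Pr(\text{randomly drawn alleles from } i \text{ and } j \text{ are IBD})
% Reference values: 1/4 for parent-offspring and full siblings,
% 1/8 for half-siblings, 1/16 for first cousins, ~0 for unrelated pairs.
```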

The study has generated a lot of interest and news (GenomeWeb, BBC, Altmetric) but also some negative comments, mainly about how difficult this is to prove in a study where you cannot rule out genetic relationships the individuals themselves don't know exist (e.g. I don't know who my third cousins are, so I might befriend one by chance).

The supplemental data from the PNAS paper show a Manhattan plot (top) for the identified loci; it's not as stunning an example as you'd see in other fields. Compare it to a well-characterised GWAS hit from a replicated study in ovarian cancer (bottom).