I’ve worked on model organisms for a long time, originally in plant research, but nowadays I'm more likely to run genomic experiments for groups using mouse models of cancer. Some experiments are impossible to do in human patients for a variety of reasons, so we use mouse models instead. Genetically engineered mice (GEMs) are used in many research programs and offer us the ability to tailor a disease phenotype: as our understanding of the driving events in cancer increases, we can build GEMs that carry those same driver mutations; we can even switch specific mutations on at specific time points to try to recapitulate human disease.
However, we don’t necessarily understand the passenger events in those same mouse models. Is it enough to know that TP53 is mutated in an ovarian cancer model, or KRAS in a pancreatic one, or do we also need to know which passenger events have been picked up to maximise the utility of the model for understanding human disease?
NGS allows us to get a handle on how well mouse models recapitulate the genomic architecture seen in human cancer. Mouse model papers invariably show the pathological similarities between human and mouse samples; perhaps we should also expect to see comparative Circos plots and the overlap in driver and passenger events?
And if this data throws up something unexpected, should we stop using the model?
In a Nature Methods research highlights article, Natalie de Souza says we should not throw the baby out with the bathwater: mouse models offer a real chance to gain biological insight into important human diseases, and can show strong phenotypic similarities. She also notes that the immune systems of mice and men are quite different (and are likely to be far more homogeneous in a mouse model, with its highly inbred genome), so the results of the PNAS paper discussed below may not be so surprising.
Mouse models don’t work (sort of): de Souza covered a paper in PNAS that presented a meta-analysis of three experiments comparing the inflammatory response in humans and in mouse models of inflammation. The authors had run gene expression arrays on the human samples and the mouse models; the comparison showed no correlation between the two species (human:mouse), but a strong correlation was seen within each group (trauma, burn, endotoxin exposure). The take-home message of the paper: gene expression patterns in mouse models of inflammation do not correlate well with human disease.
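The within-group versus between-species contrast is easy to illustrate with a toy simulation. This is not the PNAS authors' analysis, just a minimal sketch assuming made-up expression responses: two human injury conditions share a common signal, while the mouse response is drawn independently.

```python
# Toy illustration of within-group vs between-species correlation.
# All values are simulated; nothing here comes from the PNAS dataset.
import numpy as np

rng = np.random.default_rng(0)
n_genes = 200

# Two human injury conditions share a common response signal plus noise;
# the mouse profile is independent noise, so it shares nothing with them.
human_signal = rng.normal(size=n_genes)
human_trauma = human_signal + rng.normal(scale=0.5, size=n_genes)
human_burn = human_signal + rng.normal(scale=0.5, size=n_genes)
mouse_trauma = rng.normal(size=n_genes)

within_human = np.corrcoef(human_trauma, human_burn)[0, 1]
across_species = np.corrcoef(human_trauma, mouse_trauma)[0, 1]

print(f"within-human r: {within_human:.2f}")    # high
print(f"human-mouse r:  {across_species:.2f}")  # near zero
```

The point of the sketch is only that a strong within-group correlation and a near-zero cross-species correlation can coexist in the same gene set, which is the pattern the paper reported.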
A more recent paper in PLoS Biology suggests there is an excess of positive results in papers using mouse models of human disease. Their analysis of almost 4,500 studies found that twice as many as expected reported positive results, with the papers having the smallest sample sizes being the most likely to over-estimate the significance of their work. The authors suggest that animal-model studies should be better controlled and designed (I think we could say that of almost every experiment): increase average sample sizes, look for statistical bias, and publish negative results.
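Why small studies over-estimate their effects is worth a quick demonstration. The sketch below is not the PLoS Biology analysis, just a hypothetical simulation of publication bias: if only studies that reach nominal significance get reported, small studies inflate the apparent effect size far more than large ones.

```python
# Toy simulation of effect-size inflation under publication bias.
# The true effect and all parameters are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.2  # modest true difference, in standard-deviation units

def mean_reported_effect(n, trials=5000):
    # Simulate `trials` studies of size n, keep only those reaching
    # nominal significance (two-sided z-test, alpha = 0.05, sigma = 1),
    # and return the average effect size among the "published" studies.
    samples = rng.normal(loc=true_effect, size=(trials, n))
    means = samples.mean(axis=1)
    z = means * np.sqrt(n)
    published = means[np.abs(z) > 1.96]
    return published.mean()

print(mean_reported_effect(10))   # well above the true 0.2
print(mean_reported_effect(200))  # close to the true 0.2
```

With only ten animals per study, a result has to be wildly larger than the true effect to clear the significance bar, so the published literature averages an inflated estimate; at two hundred per study the filter barely distorts it.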
Mark Wanner at the Jackson Laboratory seconds these suggestions on his blog, and Michael Ryan Hunsaker goes further on his, saying "[it's good for] animal researcher[s] to be called out for laziness. It is time to up our game. Talk to clinicians. Make valid disease models. Do not oversell results. And finally, for Pete's sake make the data OPEN, and that includes behavioural videos and raw, unphotoshopped histology."
As genomes become cheaper and cheaper to sequence, and we become able to sequence the immunome, this data is likely to be useful in understanding how good a particular model is. Genome-engineering technologies are leaping ahead, with zinc-finger nucleases, TALENs and possibly CRISPR allowing exquisite control over which bases are changed in a genome.