
Courant Researchers Study Methods to More Accurately Measure Genome Sequencing

April 13, 2012

Lost in the euphoria of the 2003 announcement that the human genome had been sequenced was a fundamental question: How can we be sure that an individual’s genome has been read correctly?

While the first full individual genome was sequenced a decade ago, the question of how to sequence individuals accurately has continued to vex researchers, given the vast genetic variation across the world’s seven billion people, not to mention the differences in makeup even among close relatives.

With companies now projecting they can sequence a genome for $1,000, down from $25,000 just a few years ago, and efforts to develop “personalized” medicines, this matter is taking on increased significance in today’s marketplace.

These cheaper endeavors rely on newer technologies, which assume that scientists can continue to use the standard shotgun approach of randomly chopping the genome into smaller pieces and then reassembling them algorithmically.

Specifically, today’s lower cost is achieved by breaking the DNA into even tinier pieces and rapidly and cheaply reading a massive amount of them. But it is not clear how to assess the accuracy of the newer assembly algorithms and the basic shotgun approach, especially if the accuracy of the earlier genomic data is questionable.
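As a rough illustration of the shotgun idea, the following Python sketch samples random overlapping “reads” from a toy genome and stitches them back together with a naive greedy overlap merge. Everything here (the toy genome, the greedy strategy, the parameter values) is invented for illustration and is far simpler than any real assembler:

```python
import random

def shotgun_reads(genome, read_len, coverage):
    """Randomly sample overlapping substrings ("reads") from the genome."""
    n_reads = coverage * len(genome) // read_len
    starts = [random.randrange(len(genome) - read_len + 1) for _ in range(n_reads)]
    return [genome[s:s + read_len] for s in starts]

def greedy_assemble(reads, min_overlap):
    """Repeatedly merge the pair of contigs with the longest
    suffix-prefix overlap, until no overlap >= min_overlap remains."""
    contigs = list(set(reads))  # deduplicate identical reads
    while True:
        best = (0, None, None)
        for i, a in enumerate(contigs):
            for j, b in enumerate(contigs):
                if i == j:
                    continue
                # Scan overlap lengths from longest to shortest.
                for k in range(min(len(a), len(b)), min_overlap - 1, -1):
                    if a.endswith(b[:k]):
                        if k > best[0]:
                            best = (k, i, j)
                        break
        if best[1] is None:
            return contigs  # nothing left to merge
        k, i, j = best
        merged = contigs[i] + contigs[j][k:]
        contigs = [c for idx, c in enumerate(contigs) if idx not in (i, j)]
        contigs.append(merged)

random.seed(1)
genome = "ATGCGTACGTTAGCCGATATCGGA"  # toy genome, 24 bases
reads = shotgun_reads(genome, read_len=8, coverage=6)
contigs = greedy_assemble(reads, min_overlap=3)
print(contigs)
```

Note that even in this toy setting, a repeated substring in the genome can mislead the greedy merge, which is exactly the class of error the article describes.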

Among the particular challenges in confirming the accuracy of the sequencing of an individual’s genome is matching a person’s phenotype, or physical trait, with his or her genotype, or genetic makeup. This has served, in particular, as a barrier to successful development of personalized medicines, which were predicted shortly after the first sequencing of the human genome, but have yet to truly materialize.

In an article in the journal PLoS ONE, researchers at NYU’s Courant Institute of Mathematical Sciences evaluate some current methods to sequence individual genomes—a study that serves as a “stress test” of the efficacy of these practices.

The researchers employed testing procedures that aim to identify key, or representative, features of the genome as well as how each of these features is related to others.

“Most current technologies, when assembling a genome, make several kinds of mistakes when they encounter a repeated region—where a substring of the letters that make up DNA strands recurs in many locations in the genome,” explains Bud Mishra, a professor of computer science and mathematics and the study’s senior author. “The input random reads tend to collect in one such location, and also show much higher discrepancies among themselves.”

To test the viability of these procedures, the NYU researchers relied on a collection of features from AMOS, an open-source software suite developed by a public consortium of genomicists and bioinformaticists. If a method has accurately sequenced an individual’s entire genome, the researchers hypothesized, then the pieces of the resulting assembly should “fit together” and be consistent with auxiliary data such as “mate pairs,” “optical maps,” or “strobed sequences,” all of which provide long-range information about the genome.
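The mate-pair check, for example, rests on a simple idea: the two reads of a mate pair come from opposite ends of a DNA fragment of roughly known length, so their mapped positions in a correct assembly should lie about that far apart. A toy Python illustration follows; the function name, positions, insert size, and tolerance are all invented for this sketch and are not AMOS’s actual validation code:

```python
def mate_pair_violations(placements, insert_size, tolerance):
    """placements: list of (pos_left, pos_right) coordinates where the two
    reads of each mate pair map onto an assembled contig. Flag pairs whose
    observed separation deviates from the expected insert size by more
    than the tolerance, hinting at a mis-assembly between them."""
    return [(l, r) for l, r in placements
            if abs((r - l) - insert_size) > tolerance]

# Hypothetical mappings; the last pair is stretched across a likely mis-join.
pairs = [(100, 3100), (500, 3550), (900, 5900)]
print(mate_pair_violations(pairs, insert_size=3000, tolerance=150))
```

A region of the assembly where many such violations pile up is a candidate mis-assembly, even when the local base calls look clean.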

While they found shortcomings in all examined methods for sequencing an individual’s genome, some assemblers showed promise. The NYU researchers’ conclusions were derived from a procedure called Feature-Response Curve (FRCurve), which effectively shows a global picture of how different assemblers are able to deal with different regions and different structures in a large complex genome. In this way, it also points out how an assembler might have traded off one kind of quality measure at the expense of another kind. For instance, it shows how aggressively a genome assembler might have tried to pull together a group of genes into a contiguous piece of the genome, while incorrectly rearranging their order.
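The FRCurve construction can be sketched as follows, under the simplifying assumption (made for this sketch) that each contig is summarized by just its length and a count of suspicious features: for each feature budget, add contigs from largest to smallest until the budget is exhausted, and record how much of the genome they cover. The actual features in the study come from the AMOS validation pipeline, and the published construction may differ in detail:

```python
def feature_response_curve(contigs, genome_size, max_features=None):
    """Toy FRCurve. contigs is a list of (length, n_features) pairs.
    For each feature threshold phi, report the fraction of the genome
    covered by the largest contigs whose cumulative feature count
    stays within phi."""
    ordered = sorted(contigs, key=lambda c: c[0], reverse=True)
    phi_max = max_features if max_features is not None else sum(f for _, f in contigs)
    curve = []
    for phi in range(phi_max + 1):
        covered = feats = 0
        for length, n_feat in ordered:
            if feats + n_feat > phi:
                break  # feature budget exhausted: stop adding contigs
            feats += n_feat
            covered += length
        curve.append((phi, covered / genome_size))
    return curve

# Hypothetical contigs: (length in bases, number of suspicious features).
toy = [(50_000, 2), (30_000, 1), (20_000, 5), (10_000, 0)]
curve = feature_response_curve(toy, genome_size=120_000, max_features=8)
print(curve)
```

A curve that rises quickly indicates an assembler that covers most of the genome while accumulating few suspicious features; a slow rise exposes the kind of trade-off the article describes, where contiguity was bought at the cost of correctness.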

“Such errors have important consequences, especially if the technology is being used to study the genome of a tumor, which often can be highly heterogeneous, making each tumor cell’s genome rearranged and mutated very differently from its neighbors,” says Mishra.

The study’s other co-authors were NYU summer visitor Francesco Vezzi, now at Italy’s University of Udine, and NYU graduate student Giuseppe Narzisi, now at Cold Spring Harbor Laboratory.

The study was supported by the National Science Foundation and Abraxis BioScience, LLC.
