Predictive Modeling

An in-depth look at how genomic information is changing the way medical professionals approach their patients.

Vol. 24 • Issue 1 • Page 10

Molecular Diagnostics

The medical world is quickly recognizing that genomic information is changing the way we approach the patient. However, becoming involved in this new space is not a simple matter of installing next-generation sequencers in a lab, particularly when it comes to analyzing somatic variants in cancer. The difficulty isn’t in gathering the clinical information from the test, but rather in interpreting the large and very complex data sets that result.

Inherited diseases, like cystic fibrosis, can often be traced to a common or novel mutation in a single gene. The complexity of genetic analysis jumps several orders of magnitude when we start looking at gene panels, exomes and genomes. According to the 1000 Genomes Project data, we might expect to see ~150,000 variants that haven’t been previously characterized in any given individual; 100-200 of those variants are likely to be disruptive to cell function. Similarly, if we are looking at the genetic content of a tumor with a gene panel, we can expect to see a fair number of well-characterized mutations that may have drug indications or prognostic information, but in addition, one can expect to identify dozens or even hundreds of poorly characterized or novel variants. These variants, commonly referred to as variants of unknown significance (VUSs), are possibly damaging or benign. They may be oncogenic events or passenger mutations along for the ride of the tumor’s evolution. What do we do with that information?
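The triage described above can be sketched in a few lines of code. This is a minimal illustration, not a clinical pipeline: the tiny "knowledge base" and the specific variants in the example are assumptions made for the sketch, standing in for the curated, literature-backed databases a real lab would use.

```python
# Illustrative triage of panel variants into well-characterized calls
# versus variants of unknown significance (VUSs). The knowledge base
# below is a stand-in for a curated clinical database; the example
# variants are hypothetical panel findings.

KNOWN_VARIANTS = {
    ("BRAF", "V600E"): "well-characterized; has drug indications",
    ("EGFR", "T790M"): "well-characterized; resistance marker",
}

def triage(variants):
    """Split observed (gene, protein_change) pairs into
    well-characterized actionable calls and VUSs."""
    actionable, vus = [], []
    for gene, change in variants:
        note = KNOWN_VARIANTS.get((gene, change))
        if note:
            actionable.append((gene, change, note))
        else:
            vus.append((gene, change))
    return actionable, vus

# A hypothetical panel result: one hot-spot mutation, two novel variants.
observed = [("BRAF", "V600E"), ("TP53", "R175L"), ("KMT2D", "Q2062*")]
actionable, vus = triage(observed)
```

A hot-spot panel, in effect, reports only the `actionable` list; the question this article raises is what to do with everything left in `vus`.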


The Only Certainty is Genomic Complexity

The approach for most labs has been to ignore VUSs, with few exceptions, because the labs don’t know how to interpret the majority of these variants. Instead, they only search for variants that have been well-characterized and clinically validated in the literature. That is one reason there are so many “hot spot” panel tests available today that focus solely on these mutations.

As labs debate how to address VUSs, they consider the rationale for gathering any additional gene content: what is to be gained by exploring VUSs when the information captured is likely to be confusing at best and ignored at worst? The bet taken in ignoring VUSs is that commonly observed mutations are critical to cell function, while rare ones are less likely to be. Even if some of these variants are functionally damaging, there is no way to be certain without validating each variant independently on the bench, so it is better to report nothing than to report something that lacks certainty. This is in line with how clinical labs think about the world, but it is not in line with the realities of how physicians practice medicine today or how it will be practiced in the future.

Our collective approach to medical science will need to be fundamentally different in light of the new technologies available to us. In the past, proper treatments for cancer patients were identified through tried-and-true methods like randomized, prospective clinical trials. These “gold standard” study designs took, say, 1,000 patients with a particular disease, with 500 patients receiving an experimental treatment and the other 500 the standard treatment. However, in the brave new world resulting from innovative technology, the granularity we now get with next-generation sequencing demonstrates that no two tumors, or patients, are the same. If we go by histologic characterization of 1,000 patients for novel therapies, are the responders due to statistical aberrations of stochastic events, or due to differences in genetic content? In this light, everyone is an “N of 1,” and it becomes impossible to create a cohort of 1,000 patients with identical genetic content.

Even without clinical validation of variants, we could conduct biologic functional experiments to determine the pathogenicity of any proposed variant. But we need to balance the need for biologic certainty with the understanding that, for any given patient, we may find dozens of novel variants. Furthermore, time to treatment becomes a huge factor: these patients may have rapidly fatal diseases and have often already failed multiple therapies. What can we do for these patients now if no well-characterized and clinically validated variants are found?

Overcoming the ‘N of 1’ Problem

To circumvent this problem, we may need to rely on predictive modeling of what novel variants might do. This is an extremely controversial conversation in the medical community, but I think it is one we have to have. The conservative, and perhaps more traditional, argument is that we don’t know for certain what these novel variants mean, so we shouldn’t act on that information at all. On the other hand, we may never know for certain based on clinical evidence in other patients: this may be the only patient ever seen with a specific mutation, but that doesn’t mean the variant has no effect. We have the capacity to predict what it could mean, and we should act on it.

The future of personalized medicine hangs on our ability to perform robust analytics and make recommendations based on inference from our experience and expertise. We need the ability to do computational modeling to predict, with some measurable amount of certainty, a variant’s impact on protein function, as well as incorporate all known information from the medical literature and a mechanistic understanding of pathways to make reasonable recommendations on how a variant may affect a patient’s treatment course. The cancer genomicist must practice both the art and science of medicine, not just the latter, and give recommendations that are most likely to benefit the patient based on their knowledge, experience, and expertise.
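One way to picture the kind of analytics described above is a simple evidence-weighting sketch: combine a computational prediction of protein impact with pathway context and literature support. To be clear, the weights, thresholds, and function below are arbitrary assumptions for illustration, not a validated clinical algorithm.

```python
# Illustrative (not validated) evidence combination for a VUS.
# The weights are arbitrary assumptions for this sketch; a real
# system would calibrate them against outcomes data.

def evidence_score(impact, in_druggable_pathway, literature_hits):
    """Return a 0-1 priority score from:
    impact               -- predicted damage to protein function, 0-1,
                            e.g. from a computational impact model
    in_druggable_pathway -- True if the gene sits in a pathway with an
                            approved or investigational targeted drug
    literature_hits      -- number of publications touching the variant
    """
    score = 0.6 * impact                      # predicted functional damage
    if in_druggable_pathway:
        score += 0.25                         # mechanistic/pathway context
    score += min(literature_hits, 5) / 5 * 0.15  # capped literature support
    return round(score, 2)

# A damaging-looking novel variant in a druggable pathway rises to the
# top of the review queue even with zero literature support.
priority = evidence_score(impact=0.9, in_druggable_pathway=True,
                          literature_hits=0)
```

The point of the sketch is the shape of the reasoning, not the numbers: prediction, mechanism, and literature each contribute measurably, so a variant no one has ever reported can still generate an actionable hypothesis.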

Most clinicians have a difficult time making sense of the complex data we provide, but they do want guidance and direction. They don’t want to keep loading patients up with toxic chemotherapies that are likely to fail, nor do they want to do nothing. By applying predictive models to the variants uncovered by genomic tumor assessment, clinicians now have, at a minimum, a sense of direction on treatments that might be useful for these patients. Given the alternatives, clinicians are eager to employ novel targeted therapies and approaches, assuming the recommendations are based on sound medical principles and logic.

Seeking the Scientific Proof

How will medical science ever support and test the validity of such an approach, given that there may never be sufficient variants of any specific type, or even worse, haplotypes, to adequately test with our standard methodologies? One approach could be to forgo the traditional method of testing independent variants and instead focus on testing the methodology of predictive modeling itself. This could include obtaining a cohort of patients that are stage and age matched, where some are treated with standard therapies and others with recommendations based on comprehensive predictive modeling.

Do patients do better with these algorithms or not? We will soon see the outcomes of precision medicine and predictive modeling in practice. This comprehensive approach to assessing the patient’s tumor, including predictive modeling of VUSs, is how we analyze patient cases received at MolecularHealth. Our goal is to arm the clinician not just with a list of variants identified in the tumor, but also with treatment recommendations and clinical trials based on those variants.

About The Author

Gabriel Bien-Willner is medical director at MolecularHealth.