In this article
- By applying HPE’s Memory-Driven Computing Sandbox to Jungla’s data sets, HPE will help Jungla improve the clarity and utility of clinical genetic and genomic tests
- Understanding this data is a critical step toward realizing the promise of affordable healthcare tailored to an individual patient’s needs: precision medicine
- HPE is also collaborating with researchers in life sciences and clinical medicine from the Living Heart Project and DZNE to move beyond the limits of traditional computational infrastructure
- Inspired by this work, HPE launched the second challenge under Tech Impact 2030—to enable real-time, personalized medical care for patients
Madrid – November 27, 2018 – Hewlett Packard Enterprise (HPE) today announced a collaboration with biotechnology and artificial intelligence (AI) startup Jungla to enable real-time personalized medical care for patients. By applying the large capacity and exponential performance scaling of HPE’s Memory-Driven Computing Sandbox to Jungla’s data sets, HPE will help Jungla improve the clarity and utility of clinical genetic and genomic tests.
Decoding the first human genome cost $2.7 billion and took 15 years. Remarkable advances in sequencing technologies have brought the cost of genome sequencing to less than $1,000, making personal genomes a reality and opening a path to affordable, equitable access globally. This data creates the potential to derive insight and actionable guidance from patient genomic data in clinical settings, a critical step toward realizing the promise of affordable healthcare tailored to an individual patient’s needs: precision medicine. It allows researchers to answer questions like:
- What is the connection between the three billion base pairs that comprise the human genome and human wellness?
- How does variation in the genome affect the likelihood of a patient developing cancer or heart disease?
Humans’ genetic uniqueness presents significant challenges to clinical care. Even within the parts of the genome whose disease-relevance is well established, less than one percent of the genetic variants in the population are clinically understood. Existing processes and standards to interpret this information within clinical laboratories are heavily reliant on manual pattern recognition and correlation. This barrier slows medical research and limits the scalability, utility and integration of genomic information in clinical workflows for real-time patient diagnosis and management.
“We firmly believe that to realize the value of genomic data, we need to look beyond changes in the sequence of a patient’s genome and into the changes induced to molecular and cellular function,” said Carlos L. Araya, CEO, Jungla Inc. “To do this, we’ve built computational and experimental systems that can provide unprecedented levels of insight to clinical teams. This has required massive increases in the scale of data and the processes to generate and analyze it.”
Jungla’s Molecular Evidence Platform (MEP) models the effects of variants on biological systems at scale and translates the insights into clinical practice. This integrated platform arms patients and healthcare providers with accurate, clear and transparent support for interpreting findings in genetic and genomic tests. As the MEP evolves, Jungla is integrating increasingly detailed, mechanistic approaches – including high-resolution molecular analyses – that can reveal how variations in the genome alter cells, such as DNA damage that can lead to cancer.
“Traditional computational systems were making it impossible for Jungla to compare its data sets at scale,” said Michael Woodacre, HPE Fellow. “After systems analyzed the data, researchers still needed to manually review the data to identify patterns. But with its unprecedented capacity to process data at scale, Memory-Driven Computing has allowed Jungla to use its analysis platform as a scientific instrument, reducing the risk of human error and dramatically speeding time to results.”
To bring Jungla’s vision for the creation of a genomic insights engine to life, HPE loaded one of Jungla’s massive data sets onto its 48 TB Memory-Driven Computing Sandbox, an operating and development environment for customers introduced in June 2018. With the Memory-Driven Computing Sandbox, Jungla’s MEP can deliver approximately 250x speed improvements in high-resolution molecular analyses, as compared to traditional hardware.
The complete genome sequence of a patient—describing hundreds of thousands of variants from thousands of genes—represents roughly 5 GB of information; however, the detailed data from Jungla’s workflows can require processing more than 40 TB of information for a single gene.
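To get a rough sense of that gap, the figures quoted above can be compared directly. This is a back-of-envelope sketch using the approximate sizes stated in the text, not a measurement from Jungla’s systems:

```python
# Approximate data sizes quoted in the text (decimal units).
GB = 10**9
TB = 10**12

genome_size = 5 * GB         # complete genome sequence of one patient (~5 GB)
per_gene_workflow = 40 * TB  # detailed workflow data for a single gene (>40 TB)

# Ratio of per-gene workflow data to a whole patient genome.
ratio = per_gene_workflow / genome_size
print(f"Per-gene workflow data is ~{ratio:,.0f}x the size of a whole genome")
# → Per-gene workflow data is ~8,000x the size of a whole genome
```

In other words, a single gene’s detailed analysis can involve thousands of times more data than the patient’s entire sequenced genome, which is the scale gap the 48 TB Memory-Driven Computing Sandbox is meant to address.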
“Our work with HPE represents a commitment to push the envelope and bring advances in science and engineering to bear on the clinical tests of individual patients,” said Araya. “Not only must we succeed at the science, we must build infrastructure and processes that can scale to translate scientific insights into clinical successes for patients. HPE’s Memory-Driven Computing systems can allow us to consider new findings from a patient in the context of the molecular characteristics of all clinically understood variants – on demand. These developments don’t happen overnight, but it’s where we need to go if we want to enable both speed and accuracy in healthcare.”
Enable real-time, personalized medical care for patients
In addition to its work with Jungla, the team at HPE is collaborating with researchers in life sciences and clinical medicine from the Living Heart Project and the German Center for Neurodegenerative Diseases (DZNE) to explore technology’s potential to transform multiple areas of the life sciences industry. Each of these research communities is pioneering new approaches to health science to find answers hiding in plain sight and look beyond the silos of scientific disciplines and the inadequacy of traditional computational infrastructure to scale.
“Precision medicine will not be possible without the most sophisticated compute infrastructures,” stated Prof. Joachim Schultze, Director, PRECISE Platform for Single Cell Genomics and Epigenomics, DZNE and University of Bonn. “To me, Memory-Driven Computing seems to be the ideal compute ecosystem for this enormous task.”
Inspired by projects with these customers, the ability to disrupt conventional medical testing, and the potential impact of that disruption if scaled, HPE today introduced the second challenge under its Tech Impact 2030 collaboration with the World Economic Forum (the Forum)—to enable real-time, personalized medical care for patients. Under Tech Impact 2030, HPE and the Forum are bringing together experts across the public and private sectors to power meaningful change against a set of key societal challenges by the year 2030.
Looking forward, HPE envisions a fundamental shift for medical researchers, who will have access to simulations sophisticated enough to capture the emergent properties of complex biological systems. Not only will the technology enable them to conduct research faster and at lower cost; it will empower them to change their relationship with their patients, their data and their science, evolving their roles from information gatherers to insight hunters.
“In a traditional setting, researchers begin with a simple model and add complexity to match what is observed. When something is observed that the model can’t explain, the model is often thrown out,” said Kirk Bresniker, vice president, HPE Fellow and chief architect for Memory-Driven Computing. “But what if a researcher could start at the most basic level and assemble levels of modeling by stitching together atoms to make molecules, molecules to make structures, then build into cells, organs, organisms and ecosystems? Now, researchers can visualize the invisible. They can capture a clear ‘before and after’ picture, which is critical in science.”
Tech Impact 2030
Earlier this year, HPE and the Forum announced the challenge to “Help Solve World Hunger” by 2030, inspired by results from Purdue University’s 1,400-acre research farm and its application of precision agriculture to increase crop yields while drastically conserving resources. By leveraging massive amounts of data collected from connected platforms and devices and processing it at the edge, HPE is providing fast insights that can inform quicker decisions for farmers.
Over the next several months, HPE and the Forum will introduce additional challenges in key industries including financial services, transportation and manufacturing. Each challenge will pose a social, economic or environmental problem that could be tackled by convening experts and applying existing and innovative technology in disruptive ways.
Joining Tech Impact 2030
HPE and the Forum believe that partnership is essential to achieving real change. Organizations interested in collaborating on the Tech Impact 2030 program can learn more at https://news.hpe.com/tech-impact-2030/ or reach out to TechImpact2030@hpe.com.
About Hewlett Packard Enterprise
Hewlett Packard Enterprise is a global technology leader focused on developing intelligent solutions that allow customers to capture, analyze and act upon data seamlessly from edge to cloud. HPE enables customers to accelerate business outcomes by driving new business models, creating new customer and employee experiences, and increasing operational efficiency today and into the future.