Nine months ago, Hewlett Packard Enterprise unveiled the world’s first Memory-Driven Computing architecture. Then, three weeks ago, we revealed the largest in-memory system on the planet: a 160-terabyte prototype based on this architecture. For context, with that much memory it would be possible to work simultaneously with the data from 80,000 human genomes. The implications for every industry, from space travel to transportation to healthcare, are huge.
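As a back-of-the-envelope check on that figure (a sketch only; the roughly 2 GB-per-genome working size it implies is our inference, not stated in the source):

```python
# Rough sanity check of the 80,000-genome claim. Dividing the prototype's
# capacity by the genome count implies about 2 GB per genome -- a plausible
# working size for a compactly encoded human genome (~3 billion base pairs).
total_memory_bytes = 160 * 10**12      # 160 TB, assuming decimal (SI) terabytes
genomes = 80_000
bytes_per_genome = total_memory_bytes // genomes
print(bytes_per_genome)                # 2000000000 bytes, i.e. 2 GB per genome
```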
In fact, we’re exploring those implications through our first collaboration using our Memory-Driven Computing architecture, with the German Center for Neurodegenerative Diseases (DZNE). They needed a new kind of computer that could manipulate their massive data sets to accelerate the search for a cure for Alzheimer’s, a disease that affects one in 10 people aged 65 or older worldwide. The initial findings are powerful and promising. We’ve only begun to scratch the surface with one component of their overall data analytics pipeline and are already seeing a 9X speedup. In practical terms, results that used to take more than 22 minutes now arrive in less than three. We believe these gains could grow as much as 100X as we apply what we’ve learned to the other components of their pipeline. In an industry where saved time translates to saved lives, these efficiencies could change the game.
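The arithmetic behind those runtimes checks out (a quick sketch; the exact baseline beyond “more than 22 minutes” is not given in the source, so 22 minutes is used as the round figure):

```python
# Verify that a 9X speedup on a 22-minute baseline lands under three minutes,
# and what the projected 100X gain would imply for the same job.
baseline_minutes = 22
print(baseline_minutes / 9)         # ~2.44 minutes -- "less than three"
print(baseline_minutes / 100 * 60)  # ~13.2 seconds at the projected 100X
```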
But the real opportunity is the incredible new insights they’ll be able to draw from the data. DZNE has never been able to work with so much data at once, which means they can uncover correlations and arrive at better answers than ever before – ultimately leading to new discoveries to help cure Alzheimer’s.
Today, we’re continuing this momentum by announcing The Machine User Group, an open community of technologists, developers and industry experts interested in programming for the Memory-Driven Computing environment. HPE will facilitate the group, providing participants with training, resources and toolkits. In return, the community will share what they are developing – algorithms, applications, and perspectives on this new architecture and its ability to transform the industry – with their peers. Our hope is that together we can explore the vast potential of this new approach and discover where it will take us.
Our progress on this project can be seen not only in our working prototype, but also in how we’re commercializing the component technologies. We shared our first commercialization roadmap in November 2016, and today we’re sharing an expanded view. In this update, we've broadened our innovation categories based on customer outcomes – performance, efficiency, resiliency and flexibility – and have expanded the technologies and areas of integration considerably to better reflect the exciting opportunities we see across our business.
What Got Us Here
We first announced The Machine research program three years ago at Discover Las Vegas 2014. At the time, we still had a lot of work to do to turn our Memory-Driven Computing vision into reality: we had technical papers, sketches on whiteboards in our Labs, and some 3D-printed models of different components, but we still needed working prototypes to fully prove these concepts. Now we’ve made that vision real.
Since then, we’ve marched forward, inventing new technologies … and even inventing new technologies to test new technologies (ask us about The Kraken). We’ve also relied on the expertise and innovations of more than 20 partners as we brought all these components online and assembled them into working prototypes. The work of hundreds of researchers and engineers across five continents has resulted in the first fully realized, genuinely novel computer architecture in decades.
We made some changes, too. Creating an operating system from scratch sounded like a great idea in 2014, but we learned that modifying Linux – a well-known and widely used operating system – made a lot more sense. With some enhancements that let it address massive amounts of memory, Linux can run the world’s best new computer architecture quite effectively. Plus, the system is easier to program if we don’t have to ask developers to learn an entirely new OS to work with it.
We’re excited about the progress of Memristor-based non-volatile memory, but we’ve also broadened our horizons along the way. It turns out that creating an architecture and building a system that can accept all forms of memory technology – whether conventional (such as DRAM or flash) or an emerging storage-class memory (like spin-torque, 3D XPoint or the Memristor) – is not only more inclusive, it’s better aligned with what our customers want, because different memory technologies suit different needs. We designed our architecture to accept not only Memristors (as soon as they’re ready) but other technologies as well.
The Importance of Collaboration
Collaboration is key to our success. As we moved through the milestones above, we released the Memory-Driven Computing Developer Toolkit so developers could experiment with programming for this new environment. We’re also contributing new technologies to industry groups like the Gen-Z Consortium, and we’ve relied on more than 20 technology partnerships with companies like Cavium to bring our Memory-Driven Computing vision to life.
It’s this type of collaboration, including the work with DZNE and The Machine User Group, that will carry us forward. Come join us.