Modeling the world…continued.

In May we introduced the Technical Computing initiative at Microsoft. This initiative is focused on empowering a broader group of people in academia, business and government to solve some of the world’s biggest challenges. Our aim is to bring technical computing into the mainstream with tools, platforms and services that take advantage of computing power across the desktop, servers and the cloud. You can learn more about our technical computing effort and see videos of some of the best minds in the international community at www.modelingtheworld.com.

Today, as an important step forward with the initiative, we are launching the new version of our high-performance computing server, Windows HPC Server 2008 R2. You can read our press release here and watch a video of my keynote at the HPC in Financial Markets conference in New York City here.

Over the last few years our customers have accomplished some amazing things with Windows HPC Server.

For example, TerraPower is developing new, radically improved nuclear reactors that would create less waste, be more sustainable, and could potentially eliminate the need to enrich uranium. The Japan Aerospace Exploration Agency cut processing time fourfold in its study of advanced composite materials for lightweight, more fuel-efficient aircraft. And the Scripps Research Institute reduced the time needed to process test results eightfold, helping it accelerate cancer research and improve outcomes.

While I’m inspired by the results delivered through HPC today, I’m even more excited when I think about where we’re headed. We’re now at the point where parallel computing – many computers working together to solve complex problems – is expanding out of HPC and into the mainstream. Multi-core PCs and cloud datacenters offering tens of thousands of processors create opportunities for a much broader set of people to harness parallel systems, in order to ask tougher questions, gain deeper insights and solve bigger challenges.

For example, a climatologist could better understand global warming through models and simulation, first on her multi-processor PC, then on a local cluster of servers for greater precision and deeper analysis, and finally by dynamically reaching into the cloud for additional computing power.

A genetics researcher could access secure, anonymized public health information from a worldwide repository of data about a particular disease. Add to that the ability to run algorithms and conduct experiments against this information, and you have greater potential for significant breakthroughs and novel approaches to treating disease.

Or, an engineer designing a new engine for the next generation of commercial aircraft could tap into 10,000 processing cores at the push of a button, in order to repeatedly simulate how airflow through that engine affects its performance and heat dissipation.

These are just a few examples of how we can increase the speed of innovation with solutions that harness parallel systems across client, cluster and cloud technologies. Our efforts in the cloud, in particular, will truly expand the possibilities and opportunities of technical computing. I can’t wait to see the exciting breakthroughs our customers are going to accomplish.

Posted by Bill Hilf
General Manager, Technical Computing