
Commentary
Alexander Wolfe
5/17/2010 04:31 PM

Microsoft Takes Supercomputing To The Cloud

Buried beneath the bland verbiage announcing Microsoft's Technical Computing Initiative on Monday is some really exciting stuff. As Bill Hilf, Redmond's general manager of technical computing, explained it to me, Microsoft is bringing burst- and cluster-computing capability to its Windows Azure platform. The upshot is that anyone will be able to access HPC in the cloud.

HPC stands for High-Performance Computing. That's the politically correct acronym for what we used to call supercomputing. Microsoft itself has long offered Windows HPC Server as its operating system in support of highly parallel and cluster-computing systems.

The new initiative doesn't focus on Windows HPC Server per se, which is what I'd been expecting to hear when Microsoft corralled me for a phone briefing about the announcement. Instead, it's about enabling users to access compute cycles -- lots of them, as in HPC-class performance -- via its Azure cloud computing service.

As Microsoft laid it out in an e-mail, there are three specific areas of focus:

  • Cloud: Bringing technical computing power to scientists, engineers and analysts through cloud computing to help ensure processing resources are available whenever they are needed: reliably, consistently and quickly. Supercomputing work may emerge as a "killer app" for the cloud.

  • Easier, consistent parallel programming: Delivering new tools that will help simplify parallel development from the desktop to the cluster to the cloud.

  • Powerful new tools: Developing powerful, easy-to-use technical computing tools that will help speed innovation. This includes working with customers and industry partners on innovative solutions that will bring our technical computing vision to life.
Trust me that this is indeed powerful stuff. As Hilf told me in a brief interview: "We've been doing HPC Server and selling infrastructure and tools into supercomputing, but there's really a much broader opportunity. What we're trying to do is democratize supercomputing, to take a capability that's been available to a fraction of users to the broader scientific computing."
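To make the "desktop to the cluster to the cloud" bullet above a bit more concrete, here is a minimal sketch of the kind of data-parallel job that scales that way. It's generic Python using the standard multiprocessing module, not Microsoft's HPC or Azure tooling; simulate_one_case() and the parameter sweep are hypothetical stand-ins.

# Generic illustration only -- not Microsoft's API. A parameter sweep in
# which every case is independent, so the work is "embarrassingly parallel."
from multiprocessing import Pool

def simulate_one_case(params):
    """Stand-in for one independent unit of technical-computing work."""
    temperature, pressure = params
    # ... an expensive numerical model would run here ...
    return temperature * pressure  # placeholder result

if __name__ == "__main__":
    # The same decomposition that runs on desktop cores today could be
    # dispatched to cluster nodes or rented cloud instances tomorrow.
    sweep = [(t, p) for t in range(10) for p in range(10)]
    with Pool() as pool:                 # uses all local cores
        results = pool.map(simulate_one_case, sweep)
    print(len(results), "cases completed")

The point of the sketch is simply that once the work is carved into independent cases, the program's structure doesn't have to change much whether it runs on four desktop cores or on hundreds of cloud-hosted ones.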

In some sense, what this will do is open up what can be characterized as "supercomputing light" to a very broad group of users. There will be two main classes of customers who take advantage of this HPC-class access. The first will be those who need to augment their available capacity with access to additional, on-demand "burst" compute capacity.

The second group, according to Hilf, "is the broad base of users further down the pyramid. People who will never have a cluster, but may want to have the capability exposed to them in the desktop."
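For that first class of customer, the "burst" idea boils down to an overflow rule: run on what you own until it's full, then rent the rest. Here's a rough sketch of that dispatch logic in Python; submit_local(), submit_cloud() and the capacity figures are hypothetical placeholders, not an Azure or Windows HPC Server API.

# Hypothetical overflow dispatcher -- illustrative only.
LOCAL_SLOTS = 64          # cores on the in-house cluster (made-up figure)
local_in_use = 0

def submit_local(job):
    print(f"running {job} on the local cluster")

def submit_cloud(job):
    print(f"bursting {job} to on-demand cloud capacity")

def dispatch(job, cores_needed):
    """Send a job to local hardware if it fits, otherwise burst to the cloud."""
    global local_in_use
    if local_in_use + cores_needed <= LOCAL_SLOTS:
        local_in_use += cores_needed
        submit_local(job)
    else:
        # Local capacity exhausted: rent extra cycles only while the
        # peak lasts, instead of buying a bigger cluster up front.
        submit_cloud(job)

for i, cores in enumerate([16, 32, 24]):
    dispatch(f"job-{i}", cores)   # the first two fit locally, the third bursts

The cost argument Hilf makes below hangs on that else branch: you pay for peak capacity only when the peak actually arrives.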

OK, so when you deconstruct this stuff, you have to ask yourself where to draw the line between true HPC and just needing a bunch of additional capacity. If you look at it that way, it's not a stretch to say that perhaps many of the users of this service won't be traditional HPC customers, but rather (as Hilf admitted) users lower down the rung who need a little extra oomph.

OTOH, as Hilf put it: "We have a lot of traditional HPC customers who are looking at the cloud as a cost savings."

Which makes perfect sense. Whether this will make such traditional high-end users more likely to postpone the purchase of a new 4P server or cluster in favor of additional cloud capacity is another issue entirely, one which will be interesting to follow in the months to come.

You can read more about Microsoft's Technical Computing Initiative here and here.

Follow me on Twitter: (@awolfe58)

What's your take? Let me know, by leaving a comment below or e-mailing me directly at [email protected].

Like this blog? Subscribe to its RSS feed: (here)

My videos on (YouTube)

Alex Wolfe is editor-in-chief of InformationWeek.com.
