How API-driven Public Clouds and HPC are Bringing a Brighter Future


API-Driven Public Cloud

The increasing popularity of cloud computing is evident in a myriad of news articles, blog posts and videos. Although cloud computing offers many benefits, one of the greatest is largely hidden within the cloud.

When public clouds first appeared, the initial conversation focused on comparing them to data center virtualization. Gradually a critical aspect emerged, the secret sauce: the public cloud is more than virtualization. It is a flexible, resilient, API-driven infrastructure whose services act as building blocks, like Legos. Each new API-driven service expands that foundation, adding more Legos to the box. Forward-looking companies were the first to leverage these building blocks, using them to build the next generation of resilient and flexible consumer and B2B applications, and to change the world in the process. These early adopters include such well-known names as Netflix, Airbnb, Yelp, Expedia, Adobe, Pinterest, Zynga, Gilt, MLBAM, Slack, Foursquare, Lyft, Dow Jones, Bristol-Myers Squibb, NASA and many more.

The real power now lies in applications that expand beyond what resides in a single virtual machine and leverage these API-driven building blocks to form complete, dynamic application infrastructures. These applications are no longer confined to single machines that require system administrators to integrate. The public cloud provides the API-driven infrastructure that allows applications to self-form their entire environment, without restriction to location or device.

Traditional enterprises are now looking more and more at the flexible, scalable, on-demand, API-driven infrastructure provided by the public cloud. This includes traditional High Performance Computing science and engineering applications as well as the Advanced Computing applications of the future, which touch all aspects of society.

Data Science, High Performance and Advanced Computing

Research and advanced computing span a long list of applications designed to run in parallel. These applications have changed the world, accelerating work in design, chemical engineering, bioinformatics, space engineering, simulation (including weather and climate), finance, energy, CG rendering and more. They also include growing fields such as genomics, computational humanities and visualization. From the origins of HPC in high-energy physics to modern handheld gene sequencers, the hunger for data and the thirst for computation continue to increase exponentially; what is new is the breadth of disciplines embracing parallel computation.

Historically these workloads ran on desktops and mini-computers. As the problems grew in size and detail, they demanded larger computers and, eventually, parallel computation across clusters. HPC has traditionally been reserved for those with massive amounts of hardware in large data centers, smaller footprints in research offices, or the ability to rely heavily on the HPC resources of others.

Today more data drives more computation, spawning terms such as "Big Data" and, once people tired of comparing their petabytes, "Smart Data." The reality is that data and computation keep growing in size, complexity and diversity. To stay competitive, smaller companies and research enterprises need the ability to compute at a scale that matches their larger competitors. The convergence of the cloud with HPC and Advanced Computing provides this universal capability.


Combine the flexibility and on-demand nature of the public cloud with the world-changing power of High Performance Computing, and a resource once accessible to only a small percentage of the world becomes available to many more, who can dynamically and securely create High Performance Computing environments in the public cloud.

Imagine researchers seeking to solve the next big problem in their area of expertise being able to go to a public cloud marketplace and launch, manage and easily maintain a high performance computing infrastructure. Imagine they can do this without a long procurement process, getting the results they need in hours instead of days or weeks. Now multiply this by the number of people staring at their monitors, wondering how they can possibly compute what is needed, and the potential for advancement in the world increases exponentially.

These were the genesis thoughts behind the development of CloudyCluster. We want to enable the next generation of computational and data scientists by using automation to reduce the need for system administration. We strive to give researchers the ability to act at the speed of the future. As the public clouds continue to mature, the next generation of computational practitioners will be able to take advantage of them with ease. So here's to the cloudy computational practitioners who are bringing a brighter future!

This is a “reprint” of the LinkedIn article



Read more about it in Jeff Barr's AWS Blog

1.1m vCPU on AWS Blog