
Turn-Key Cloud HPC
~ elastic orchestration with a familiar HPC look and feel ~



Seamless Cloud Computation & Storage


-- CLOUD COMPUTATIONAL INVESTIGATION --




With CloudyCluster you can easily create HPC/HTC jobs that run on-prem or in CloudyCluster on GCP and AWS. You can rely on the familiar look and feel of a standard HPC environment while embracing the capabilities and elasticity of the cloud. Jobs can be configured to use many instance types and any number of memory and CPU combinations, so you always have the latest computational technology at your fingertips.
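
For example, a job can be written as a familiar batch script and submitted through CCQ. Below is a minimal sketch assuming a Slurm-backed environment; the job name, node counts, and time limit are illustrative, and CCQ-specific directives for choosing instance types are described in the CloudyCluster documentation.

    #!/bin/bash
    # Minimal Slurm-style batch script; CCQ reads the resource requests
    # below and launches matching cloud instances for the job.
    #SBATCH --job-name=hello-hpc
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=8
    #SBATCH --time=00:30:00

    srun hostname    # runs once per task across the dynamically created nodes

The script is submitted with ccqsub, and CCQ handles creating, and later removing, the instances the job needs.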





-- INTERACTIVE RESEARCH COMPUTING --



With the latest release of CloudyCluster, users can take advantage of the GUI developed by OSC and the CloudyCluster team. This new inclusion offers non-computer scientists a pathway to cloud-based HPC tools without having to use the CLI. Upload and download files through a file-browser-like interface. You can now draft job scripts with the built-in Job Script tool, spin up new computing instances with or without GPU acceleration, and have them tear down automatically after your specified work window. The current release includes JupyterLab with Jupyter Notebooks in Python 3 for true interactive computation.
CloudyCluster online documentation -->
Open OnDemand Project -->






-- RESEARCH CLOUD STORAGE --




With Google Cloud, AWS, and CloudyCluster, a vast array of storage technologies is available to you. Data can be configured to reside in different storage classes based on age or access frequency, and jobs can be configured to pull the data needed for computation onto high-performance parallel storage. Let us show you how cost-effective storage can be in the cloud when you leverage the full capabilities of Cloud Storage and CloudyCluster.
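
As a sketch of a typical stage-in/stage-out pattern on GCP (the bucket name, paths, and the /mnt/orangefs mount point are illustrative assumptions), a job might copy its input from object storage onto the parallel file system before computing:

    # Stage input data from Cloud Storage onto the parallel file system.
    gsutil -m cp -r gs://my-research-bucket/input/ /mnt/orangefs/scratch/$USER/

    # ... run the computation against the fast parallel copy ...

    # Push results back to object storage, where lifecycle rules can later
    # move aging data to colder, cheaper storage classes
    # (e.g. gsutil lifecycle set lifecycle.json gs://my-research-bucket).
    gsutil -m cp -r /mnt/orangefs/scratch/$USER/results/ gs://my-research-bucket/results/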




-- THE HUMAN ELEMENT --




People make all the difference. We want to help you succeed, whether you are a researcher, someone supporting researchers, or in a leadership role. We are continually creating resources to help you integrate cloud computation and storage into your research processes. The CloudyCluster team offers knowledge transfer on everything from specific workflows to strategic direction and planning, all aimed at reducing your time to discovery.




-- GOOGLE CLOUD ARCHITECTURE --







You can create a fully operational, secure computation cluster in minutes, complete with:

Encrypted storage (GCS, OrangeFS on PD), compute (standard, preemptible, & GPU), and an HPC scheduler (Torque or Slurm with the CCQ meta-scheduler). CloudyCluster includes over 300 packages and libraries used in HPC, HTC, & ML workflows. You can also easily customize the base image with your own software as needed.




-- AWS ARCHITECTURE --





You can create a fully operational, secure computation cluster in minutes, complete with:

Encrypted storage (S3, OrangeFS on EBS), compute (standard, Spot, & GPU), and an HPC scheduler (Torque or Slurm with the CCQ meta-scheduler). CloudyCluster includes over 300 packages and libraries used in HPC, HTC, & ML workflows. You can also easily customize the base image with your own software as needed.




-- SCALING --




CloudyCluster can scale, leveraging millions of vCPUs.

Read more about it here:
AWS HPC Blog Post
Google HPC Blog Post
NextPlatform Article
TrafficVision Tracklets Public Dataset


FREQUENTLY ASKED QUESTIONS (FAQ):

Q: What schedulers does CloudyCluster support?
A: CloudyCluster automatically configures and deploys Slurm or Torque depending on the configuration options you choose.

Q: Can I run MPI jobs?
A: Yes! CloudyCluster configures MPI when you launch an environment. Additionally, any jobs you run will be MPI-enabled if required.
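
A minimal sketch of an MPI batch job, assuming a Slurm-backed environment (the binary name ./my_mpi_app and the resource counts are placeholders):

    #!/bin/bash
    #SBATCH --job-name=mpi-example
    #SBATCH --nodes=4
    #SBATCH --ntasks-per-node=16
    #SBATCH --time=01:00:00

    # srun starts one MPI rank per task across the nodes provisioned for the job.
    srun ./my_mpi_app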

Q: Does it support Intel MPI and oneAPI?
A: CloudyCluster comes with many of the Intel oneAPI runtimes as part of the image. Additionally, there is a sample job you can run that uses the Intel Cluster Checker to evaluate the environment.

Q: How do I make sure I don’t leave nodes running after the job completes?
A: When you launch your HPC or HTC job through CCQ (the CloudyCluster Queue), it dynamically launches the nodes required for the job. Once the job is complete, it cleans up the nodes.
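
A typical CCQ session looks roughly like the following (the job script name is a placeholder; see the CloudyCluster documentation for the full command reference):

    ccqsub myjob.sh     # submit; CCQ creates the nodes the job needs
    ccqstat             # watch the job's progress
    # When the job finishes, CCQ tears its nodes back down automatically;
    # ccqdel <job-id> cancels a job (and releases its nodes) early.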

Q: How do I keep track of how much jobs cost?
A: CCQ (the CloudyCluster Queue) provides job directives that let you track your spending through billing labels, which are visible in the Google Cloud and AWS billing reports.
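
As a hedged illustration only (the directive flag and label key below are hypothetical placeholders; the exact CCQ syntax is covered in the CloudyCluster documentation), a labeled job script could look something like:

    #!/bin/bash
    # Hypothetical CCQ directive attaching a billing label to the instances
    # launched for this job, so the spend appears under that label in the
    # Google Cloud / AWS billing reports. The "-l" flag is a placeholder.
    #CC -l project=genomics-study
    #SBATCH --job-name=labeled-job

    srun ./analysis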

Q: Can I get help?
A: Yes! We can schedule time to help you launch your first environment and workflow; beyond that, we can also provide ongoing support as part of your subscription. If you would like more help, we offer reasonable rates for integrating additional workflows with CloudyCluster.

Q: I am a student, are there opportunities for hands on learning?
A: Yes! You may want to consider participating in one of our sponsored hackathons at the ADMI or SC conferences. See hackhpc.org for more information. Here is a photo from a recent hack!

