Customize Compute Partitions

Overview

When operating an auto-scaling HPC cluster on the cloud, you have access to the full range of virtual machine configurations that the cloud provider offers. On Google Cloud, you can choose:

  • CPU platform/machine type (n1, n2, n2d, c2, e2)

  • Number of vCPUs per VM

  • Amount of memory per VM

  • GPU type and GPU count per VM

  • Preemptibility

  • VM image

  • Placement Policies

  • Network features

This gives you numerous options when customizing a heterogeneous Cloud-HPC cluster for your organization. To help you customize your cluster’s compute nodes at any time, RCC comes with a command line tool called cluster-services and a dictionary schema, called a cluster-config, that describes your cluster.

Understanding Partitions and Machine Blocks

Machine Blocks

A machine block is a homogeneous group of Google Compute Engine (GCE) instances. VMs in a machine block share the following attributes (a sketch of a machine block entry in a cluster-config follows this list):

  • name - The prefix for all instances in this machine block.

  • machine_type - The Google Compute Engine machine type.

  • max_node_count - The maximum number of compute instances in this machine block.

  • zone - The Google Cloud zone where this machine block's instances are deployed. If regional_capacity=True, instances may be deployed to any zone within the corresponding region.

  • image - The VM image to use for machines in this block. By default, this is set to the image used by the controller and login nodes. Custom images are often used to deploy specific applications to the cluster. See RCC-Apps for details on creating and deploying custom VM images to the RCC.

  • image_hyperthreads - Boolean flag to indicate if hyperthreading is enabled (True) or not (False).

  • compute_disk_type - The boot disk type.

  • compute_disk_size_gb - The size of the boot disk in GB.

  • compute_labels - Any labels to apply to compute nodes when deployed.

  • cpu_platform - The minimum CPU platform to request for compute nodes.

  • gpu_type - The type of GPU to attach to compute nodes. GPUs are only available in select zones.

  • gpu_count - The number of GPUs to attach to each instance.

  • gvnic - Boolean to enable (True) or disable (False) Google Virtual NIC. GVNIC is used to increase peak network bandwidth.

  • preemptible_bursting - Boolean to enable preemptible instances. Jobs should be capable of recovering from preemption. The RCC comes with Distributed Multithreaded Checkpointing (DMTCP) <https://docs.nersc.gov/development/checkpoint-restart/dmtcp/> to support application recovery (even for MPI applications).

  • vpc_subnet - The VPC subnetwork to deploy compute nodes to. If not specified, the subnetwork used to host the controller and login nodes is used.

  • exclusive - Boolean to set job scheduling to exclusive (one job per node, True).

  • enable_placement - Boolean to enable a placement policy for compute node scheduling.

  • regional_capacity - Boolean to enable a spread placement policy. When set to False, a compact placement policy <https://cloud.google.com/compute/docs/instances/define-instance-placement#compact> is used. enable_placement must also be set to True.

  • regional_policy - A previously created regional placement policy.

  • static_node_count - The number of static nodes in this machine block.
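
To make these attributes concrete, the sketch below shows what a single machine block entry might look like inside a cluster-config. Only the attribute names come from the list above; the machines key, the block name, and every example value are illustrative assumptions, so compare against your own config.yaml for the exact layout.

# Hypothetical machine block entry in config.yaml (nesting and values are assumptions)
machines:
  - name: c2-compute                # prefix for all instances in this block
    machine_type: c2-standard-60
    max_node_count: 25              # maximum number of compute instances
    static_node_count: 0
    zone: us-west1-b
    image: null                     # assumed to fall back to the controller/login image
    image_hyperthreads: false
    compute_disk_type: pd-standard
    compute_disk_size_gb: 50
    compute_labels: {}
    cpu_platform: Intel Cascade Lake
    gpu_type: null
    gpu_count: 0
    gvnic: false
    preemptible_bursting: false
    vpc_subnet: null                # assumed to fall back to the controller/login subnetwork
    exclusive: true
    enable_placement: true
    regional_capacity: false        # compact placement policy when enable_placement is true
    regional_policy: null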

Partitions

Partitions (synonymous with Slurm partitions) consist of an array of machine blocks that share a few attributes:

  • project - A Google Cloud Project is used to group and manage cloud resources, billing, and permissions. You can configure multiple partitions in your cluster, each with its own GCP project. This provides an easy way to divide your monthly cloud bill across multiple cost centers.

  • max_time - The maximum time, or wall clock limit, for jobs submitted to this partition.

On RCC clusters, you can have multiple compute partitions, each containing multiple machine blocks. This level of composability lets you meet a variety of business and technical needs.
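
As a structural sketch, a partition entry in the cluster-config couples these partition-wide settings with its array of machine blocks. The partitions and machines key names, the partition name field, and the example values are assumptions for illustration; only project, max_time, and the machine block attributes are documented above.

# Hypothetical partition entry (key names and values are assumptions)
partitions:
  - name: cpu                        # assumed partition name field
    project: my-cost-center-project  # GCP project billed for this partition's compute nodes
    max_time: "8:00:00"              # wall clock limit for jobs in this partition
    machines:                        # array of machine blocks, attributes as listed above
      - name: c2-compute
        machine_type: c2-standard-60
        max_node_count: 25
        zone: us-west1-b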

Examples

Add a new partition

$ sudo su                                   # switch to the root user
$ cluster-services init
$ cluster-services list all > config.yaml   # write the current cluster-config to config.yaml

Edit config.yaml
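
For example, to add a preemptible GPU partition you might append a new entry to the partitions list in config.yaml, along the lines of the hedged sketch below. The partition and block names, project, zone, GPU selection, and nesting are placeholders, not the definitive schema; keep the attribute names from the lists above and confirm that your chosen GPU type is available in the zone you select.

# config.yaml (excerpt) -- a new partition appended to the existing list; values are placeholders
partitions:
  - name: cpu                        # existing partition, left unchanged
    # ...existing settings omitted...
  - name: gpu                        # new partition added by this edit
    project: my-gpu-project
    max_time: "4:00:00"
    machines:
      - name: gpu-compute
        machine_type: n1-standard-16
        max_node_count: 10
        zone: us-west1-b
        gpu_type: nvidia-tesla-v100
        gpu_count: 1
        preemptible_bursting: true   # jobs must be able to recover from preemption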