Over the past couple of years, we have been working to make Google Cloud a great platform for running C++ workloads. To showcase some of the progress we have made so far, we'll show how you can use C++ with both Cloud Pub/Sub and Cloud Storage to build a highly scalable job queue running on Google Kubernetes Engine (GKE).

Such applications often need to distribute work to many compute nodes to achieve good performance. Part of the appeal of public cloud providers is the ability to schedule these kinds of parallel computations on demand, growing the size of the cluster that runs the computation as needed, and shrinking it when it is not running. In this post we'll explore how to realize this potential for C++ applications, using Pub/Sub and GKE.

A common pattern for running large-scale computations is a job queue, where work is represented by messages in the queue, and a number of worker applications pull items from the queue for processing. The recently released Cloud Pub/Sub (CPS) C++ client library makes it easy to implement this pattern. And with GKE autoscaling, the cluster running such a workload can grow and shrink on demand, saving C++ developers from the tedium of managing the cluster, and leaving them with more time to improve their applications.

Sample application

For our example, we'll create millions of Cloud Storage objects; this models a parallel application that performs some computation (e.g., analyzing a fraction of some large data set) and saves the results in separate Cloud Storage objects. We believe this workload is easier to understand than some exotic simulation, but it's not purely synthetic: from time to time our team needs to create large synthetic data sets for load testing.


The basic idea is to break the work into a small number of work items, such as, "create 1,000 objects with this prefix". We use a command-line tool to publish these work items to a Pub/Sub topic, which reliably delivers them to any number of worker nodes that execute the work items. We use GKE to run the worker nodes, as GKE automatically scales the cluster based on demand, and restarts the worker nodes if needed after a failure.

Because Pub/Sub offers at-least-once delivery, and because the worker nodes may be restarted by GKE, it is important to make these work items idempotent, that is, executing the work item multiple times produces the same objects in Cloud Storage as executing the work item a single time.

The code for this example is available in this GitHub repository.

Publishing the work items

A simple C++ struct represents the work item:
