See the example notebook for more details on how to run this pipeline.

Event-triggered pipeline runs

Once you have defined this pipeline, a useful next step is to run it automatically whenever an update to the dataset becomes available, so that each dataset update triggers an analysis of data drift and potential model (re)training.

We’ll show how to do this using Cloud Functions (GCF), by setting up a function that’s triggered when new data is added to a GCS bucket.

Set up a GCF function to trigger a pipeline run when a dataset is updated

We’ll define and deploy a Cloud Functions (GCF) function that launches a run of this pipeline when new training data becomes available, as triggered by the creation or modification of a file in a ‘trigger’ bucket on GCS.

Typically, you don’t want to launch a new pipeline run for every new file added to a dataset, since often the dataset will consist of a collection of files, to which you’ll add or update multiple files in a batch. So, you don’t want the ‘trigger bucket’ to be the dataset bucket itself (if the data lives on GCS), as that would kick off unwanted pipeline runs.

Instead, we’ll trigger a pipeline run only after the upload of a batch of new data has completed.

To do this, we’ll use an approach where the ‘trigger’ bucket is different from the bucket used to store the dataset files. ‘Trigger files’ uploaded to that bucket are expected to contain the path of the updated dataset, as well as the path to the data stats file generated for the last model trained.
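As a sketch, a trigger file under this scheme might look like the following (the exact format and these paths are assumptions for illustration, not the post’s actual layout):

```
gs://my-dataset-bucket/data/
gs://my-output-bucket/stats/train_stats.pb
```

Once a batch upload of new data completes, copying a file like this into the trigger bucket (for example with `gsutil cp`) is what kicks off the function.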

A trigger file is uploaded once the new data upload has completed, and that upload triggers a run of the GCF function, which in turn reads information about the new data path from the trigger file and launches the pipeline job.

Define the GCF function

To set up this process, we’ll first define the GCF function in a file, along with an accompanying requirements file in the same directory that specifies the libraries to install before running the function. The requirements file will indicate that the KFP SDK should be installed:
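Since the requirements file contents are elided above, a minimal sketch might contain just the SDK package (pinning a specific version would also be reasonable):

```
kfp
```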


The code looks like this (with some detail removed); we parse the trigger file contents and use that information to launch a pipeline run. The code uses the values of several environment variables that we’ll set when deploying the GCF function.
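Since the function body itself is elided here, the following is a minimal sketch of what such a function could look like. The trigger-file format, the environment variable names (`KFP_HOST`, `EXPERIMENT_ID`, `PIPELINE_ID`), and the pipeline parameter names are all assumptions for illustration, not the post’s actual code:

```python
import logging
import os


def parse_trigger_contents(contents):
    """Parse a trigger file into (data_path, stats_path).

    Assumed format: the first non-empty line is the updated dataset path,
    the second is the path to the stats file for the last trained model.
    """
    lines = [line.strip() for line in contents.splitlines() if line.strip()]
    if len(lines) < 2:
        raise ValueError("trigger file must contain a data path and a stats path")
    return lines[0], lines[1]


def gcs_update(data, context):
    """Background Cloud Function entry point, fired when an object is
    finalized in the trigger bucket."""
    # Imported lazily so the parsing logic above can be tested without the SDK.
    import kfp
    from google.cloud import storage

    # Read the trigger file that caused this invocation.
    blob = storage.Client().bucket(data["bucket"]).blob(data["name"])
    data_path, stats_path = parse_trigger_contents(blob.download_as_text())

    # Environment variables set when deploying the function (names assumed).
    client = kfp.Client(host=os.environ["KFP_HOST"])
    run = client.run_pipeline(
        experiment_id=os.environ["EXPERIMENT_ID"],
        job_name="triggered-run-{}".format(context.event_id),
        pipeline_id=os.environ["PIPELINE_ID"],
        params={"data_path": data_path, "stats_path": stats_path},
    )
    logging.info("Launched pipeline run %s", run.id)
```

The function would then be deployed with something along the lines of `gcloud functions deploy gcs_update --trigger-resource <trigger-bucket> --trigger-event google.storage.object.finalize --set-env-vars KFP_HOST=...,EXPERIMENT_ID=...,PIPELINE_ID=...`, which is where the environment variables above get their values.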
