
How to deploy machine learning models on Google's AI Platform

Developers use AI Platform on Google Cloud Platform (GCP) to train and deploy machine learning models built with TensorFlow, Keras, XGBoost and other machine learning libraries. In this video, we'll show you how to build a model using the scikit-learn framework, save it in Cloud Storage and then deploy it using the Google Cloud CLI.

To start, launch a new project in your GCP account and enable the AI Platform Training & Prediction API and the Compute Engine API. To run commands, install the Google Cloud SDK locally or use a Cloud Shell instance -- as shown in the demo. Use the command gcloud config set project <Project ID> to select the project from the terminal.
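In Cloud Shell, those setup steps might look like the following sketch. The project ID is a placeholder, and the service names are assumptions -- ml.googleapis.com is the usual service name for the AI Platform Training & Prediction API, and you can also enable both APIs from the console instead:

# Select the project to work in
gcloud config set project <Project ID>

# Enable the AI Platform Training & Prediction and Compute Engine APIs
gcloud services enable ml.googleapis.com compute.googleapis.com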

Python is already installed on Cloud Shell. If you are running this locally, you can use Python 2.7, 3.5 or 3.7, depending on which AI Platform runtime version you use. We'll use the iris data set in this demo since it ships with scikit-learn, a machine learning library for Python. However, if you want to use a custom data set stored in a Google Cloud Storage bucket, you'll need a library such as pandas to load the data into a usable data structure.
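As a rough sketch of that custom-data path -- the bucket path and column name here are hypothetical, and reading gs:// URLs directly requires the gcsfs package:

import pandas as pd

# Hypothetical bucket path and label column -- adjust for your data set.
# pandas can read gs:// URLs directly when gcsfs is installed.
df = pd.read_csv("gs://my-ml-demo-bucket/training-data.csv")
features = df.drop(columns=["label"]).values
labels = df["label"].values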

Once all the libraries are ready, write the code to import the data set, train the classifier and then export that classifier to a model file. In the tutorial, the data set is imported with the datasets module from scikit-learn, but you'll need to write some custom code if you're not using the iris data set. The classifier is trained by passing the features and labels to the svm module, and the model file is created with the dump() method of the joblib library. AI Platform expects the model file to be named model.joblib and won't recognize it under any other name.
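A minimal version of that training script, along the lines the tutorial describes, might look like this; the SVC hyperparameters are scikit-learn defaults:

from sklearn import datasets, svm
import joblib  # older scikit-learn versions use: from sklearn.externals import joblib

# Load the iris data set that ships with scikit-learn
iris = datasets.load_iris()

# Train a support vector classifier on the features and labels
classifier = svm.SVC()
classifier.fit(iris.data, iris.target)

# AI Platform looks for this exact file name
joblib.dump(classifier, "model.joblib")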

When the model file has been created, put it somewhere it can be versioned later on -- if necessary. In the demo, we create a Cloud Storage bucket using the gsutil mb command; the bucket can be versioned to keep older iterations of the model. Then copy the model file into the bucket.
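In command form, that step might look like this -- the bucket name is hypothetical and must be globally unique:

# Create the bucket
gsutil mb gs://my-ml-demo-bucket

# Optionally enable object versioning to keep older iterations of the model
gsutil versioning set on gs://my-ml-demo-bucket

# Copy the model file into the bucket
gsutil cp model.joblib gs://my-ml-demo-bucket/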

You'll need to preprocess the input for prediction, which is a little different for each data set. Take the numerical values for each of the rows in your sample and put them in a JSON file, which can then be consumed by the model to run its predictions. In the tutorial, this file was called input.json.
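For the iris data set, that means one JSON array of feature values per line -- the format the --json-instances flag expects. A small sketch that writes two sample rows:

import json
from sklearn import datasets

iris = datasets.load_iris()

# Write two sample rows, one JSON list of features per line
with open("input.json", "w") as f:
    for row in iris.data[:2]:
        f.write(json.dumps(row.tolist()) + "\n")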

Before you spend any compute resources on creating and deploying the model to the AI Platform, test it locally. To do that, use the gcloud ai-platform local predict command. In this tutorial, the full command looks like this: gcloud ai-platform local predict --model-dir gs://<Bucket name with model> --json-instances input.json --framework scikit-learn. Once the model is validated, you can deploy it with the gcloud ai-platform models create command, passing in the name of the model and region in which to create it.
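Filled in with hypothetical names, those two commands might look like this:

# Test the model locally before spending compute resources
gcloud ai-platform local predict \
    --model-dir gs://my-ml-demo-bucket/ \
    --json-instances input.json \
    --framework scikit-learn

# Create the model resource (names may contain only letters, numbers and underscores)
gcloud ai-platform models create demo_model --regions us-central1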

You also need to create a version of the model using the gcloud ai-platform versions create command, before you can use the model for predictions. At the time of this recording, the model.joblib file has to be created with Python 2.7, but the model will run in GCP on Python 3.7. Once the version is created, you can retrieve information about it by using the gcloud ai-platform versions describe command or looking it up in the GCP console.
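A hedged example of those commands, reusing the hypothetical names from above; the runtime and Python versions are assumptions and should match the versions you trained with:

gcloud ai-platform versions create v1 \
    --model demo_model \
    --origin gs://my-ml-demo-bucket/ \
    --runtime-version 1.15 \
    --framework scikit-learn \
    --python-version 3.7

gcloud ai-platform versions describe v1 --model demo_model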

At this point, you're ready to send the model a prediction request using the gcloud ai-platform predict command. Remember to pass in the model name and version, as well as the JSON input file described above. If you use the same input file from the local test step, you should get exactly the same result.
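With the hypothetical names used throughout, the request looks like this:

gcloud ai-platform predict \
    --model demo_model \
    --version v1 \
    --json-instances input.json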

Transcript - How to deploy machine learning models on Google's AI Platform

Hello, and welcome to "How to deploy a machine learning model on the Google Cloud AI Platform." So to start out, I'm here in my Cloud Shell instance. And I'm going to install the framework I need, which for this demo, is the scikit-learn framework.

Next, I'm going to create a Python script that will load our data set -- the iris data set included in the scikit-learn library; I'll include a link to that in the Snip description as well -- then train a classifier on that data and export it to a file.

You can see here [that] we're using the iris data set, but it's also possible to load in a custom data set from a Google Cloud Storage bucket using another framework like pandas. We're also outputting this model to a file called model.joblib -- that is a convention you're going to want to keep; the AI Platform is going to look for that file name when it deploys the model.

We're also going to want to add a JSON input file, which should be preprocessed for prediction in the same way that the training data was. So, since we're using a training data set that's included with the library, I'm also going to use an input file that's been provided for us.

So now we've got our input file, our model file and our Python script. The next thing we need to do is make a GCP bucket so that we can store the model file and deploy it from there if our training data is good. So first, let's start by making a model bucket variable, and then use the gsutil command line tool to create a bucket with that name in the us-central1 region. Now we can copy that model up to the bucket. Now that it's up in the bucket, I can use the gcloud ai-platform local predict command to test our model, passing in the model directory -- in this case, the Cloud Storage bucket -- the JSON input file and the framework we're using, which is scikit-learn.

You can see the operation did complete successfully. So now we can use the gcloud ai-platform models create command to create a model resource that we can deploy later on. You see, I broke a rule here -- it's very strict about naming conventions, so it can only use letters, numbers and underscores. So I'm going to change this dash into an underscore and try it again.

Now that my model resource is created, I can create a version of it, which I can then iterate on and deploy as I improve upon it. To do that, I'm going to use the gcloud ai-platform versions create command. And this is going to take a few minutes, so let's just let it run. All right, now that the version is created, let's go ahead and use the gcloud ai-platform versions describe command to find out more about it.

All right, we can see it's using the scikit-learn framework. We can see the deployment URL, the machine type and the name of the model. Let's see if we can find out anything more about it from the Google Cloud Console.

OK, I'm here in the GCP console. And if you go to the left-hand navigation menu here and scroll down to AI services, this is under the AI Platform dashboard. But if we look just a little bit further down here, we do see we have model v1, which is a version of our demo model. And we can see it deployed successfully. Let's go back to the Cloud Shell window, and we can run another test against it.

All right, so I'm back here in Cloud Shell. I still have my original files, but I'm going to delete model.joblib and train.py because we don't need them anymore.

Now we're going to run the gcloud ai-platform predict command again, but instead of running it locally, we're going to use our deployed model. All right, so we got the same results back from our deployed model that we did when we ran it locally. Now our model has been trained, created and deployed, all using the Google Cloud SDK, which can be put into a Jupyter notebook or a script or some kind of pipeline.

That's it for today. Thank you for watching.
