Updated: 23 December 2025

Deploy a Python Machine Learning Model as a CF Web Service

An important part of machine learning is model deployment: making a trained model available so that other applications can consume it in production.

An effective way to deploy a machine learning model for consumption is by means of a web service.

This tutorial covers the steps required to get your data, train a model on it, and deploy the model as a web service on Cloud Foundry.

Learning Objectives

Upon completion of this tutorial you will know how to export a machine learning model and create a web service that exposes the model to other applications, deployed either as a Cloud Foundry application or as a Docker container.

Prerequisites

This tutorial requires the following:

  • Basic understanding of Python
  • Experience using Scikit-Learn to train machine learning models
  • Basic understanding of REST
  • An IBM Cloud Account
  • Familiarity with Jupyter Notebook
  • IBM Cloud CLI
  • Docker and a Docker Hub Account

Estimated Time

This tutorial should take between 15 and 30 minutes to complete.

Steps

Get the data

The first step of deploying a machine learning model is having some data to train it on. In this tutorial the data will be generated: a two-column dataset that approximately follows a linear relationship, making it well suited to linear regression.

  1. Create a directory for the project, and in it create a directory for your training files called train.
  2. For this tutorial, generated data will be used. Create a new Jupyter Notebook in the train directory called generatedata.ipynb, then generate the data by running the following Python code in a notebook cell:
import numpy as np

x = np.arange(-5.0, 5.0, 0.1)
y = 2*(x) + 3
y_noise = 2 * np.random.normal(size=x.size)
ydata = y + y_noise
  3. Next, you can view the generated data as a DataFrame by running the following code:
import pandas as pd

df = pd.DataFrame()
df['Independent Variable'] = x
df['Dependent Variable'] = ydata
df.head()
  4. Lastly, export the data to a CSV file so it can be used to train the model:
df.to_csv('data.csv', sep=',')

Train the model

Once the data has been exported, a machine learning model can be trained on it. This tutorial uses the Scikit-Learn Linear Regression model.

  1. Create a new Jupyter Notebook in the train directory called train.ipynb, then import and view the data that was just exported with the following code:
import pandas as pd

df = pd.read_csv('data.csv')
df.head()
  2. Next, use Scikit-Learn's LinearRegression to fit the model:
from sklearn.linear_model import LinearRegression
import numpy as np

lm = LinearRegression()
X = np.asanyarray(df[['Independent Variable']])
Y = np.asanyarray(df[['Dependent Variable']])
lm.fit(X, Y)
  3. Once the model has been trained, you can look at some predictions by using the lm.predict() function:

lm.predict([[-2], [0], [4]])

Which should result in output similar to the following (the noise is random, so your exact values will differ):

array([[-0.84154182],
       [ 3.20582465],
       [11.3005576 ]])
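As a sanity check, the fitted coefficients should land close to the true slope of 2 and intercept of 3 that were used to generate the data. The following sketch is self-contained (it regenerates data of the same form rather than reusing the notebook variables):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Regenerate data of the same form as the tutorial: y = 2x + 3 plus noise
x = np.arange(-5.0, 5.0, 0.1)
ydata = 2 * x + 3 + 2 * np.random.normal(size=x.size)

# Fit and inspect the learned parameters
lm = LinearRegression()
lm.fit(x.reshape(-1, 1), ydata)

# Both should be close to the true values, within the noise
print(lm.coef_[0], lm.intercept_)
```

With 100 points and noise of standard deviation 2, both estimates typically fall well within 0.5 of the true values.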
  4. Lastly, create a deploy directory alongside train, then use pickle to export the model object as a binary that the web service created in the next step can load:
import pickle

pickle.dump(lm, open("../deploy/linearmodel.pkl","wb"))
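Pickle round-trips the fitted estimator exactly, so the reloaded model produces the same predictions as the original. A quick self-contained check (using a temporary file rather than ../deploy/linearmodel.pkl, purely for illustration):

```python
import os
import pickle
import tempfile

import numpy as np
from sklearn.linear_model import LinearRegression

# Fit a small model on noiseless y = 2x + 3 data
X = np.arange(-5.0, 5.0, 0.1).reshape(-1, 1)
y = 2 * X[:, 0] + 3
lm = LinearRegression().fit(X, y)

# Dump the model binary, then reload it as the web service will
path = os.path.join(tempfile.mkdtemp(), "linearmodel.pkl")
with open(path, "wb") as f:
    pickle.dump(lm, f)
with open(path, "rb") as f:
    reloaded = pickle.load(f)

# The reloaded model predicts identically to the original
print(np.allclose(lm.predict([[4.0]]), reloaded.predict([[4.0]])))  # True
```

Note that a pickled model should generally be loaded with the same library versions it was saved with; keep this in mind when pinning dependencies for deployment.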

Expose the model as a web service

Exposing the model as a web service can be done by creating a Python Flask application with an endpoint that takes a JSON body of features and returns a prediction based on those features.

  1. Create a new file in the deploy directory and name it app.py.
  2. Inside the app.py file, add the following code to import the necessary packages and define your app. pickle will be used to read the model binary that was exported earlier, and Flask will be used to create the web server:
import pickle
import flask
import os

app = flask.Flask(__name__)
port = int(os.getenv("PORT", 9099))
  3. Then import the model file that was created previously:
model = pickle.load(open("linearmodel.pkl","rb"))
  4. Next, add a route that accepts a JSON body of features and returns a prediction:
@app.route('/predict', methods=['POST'])
def predict():
    features = flask.request.get_json(force=True)['features']
    prediction = model.predict([features])[0,0]
    response = {'prediction': prediction}
    return flask.jsonify(response)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=port)
  5. The route that was just defined expects a JSON input of the following form:
{
    "features": [feature1]
}

And will return a response with the following form:

{
    "prediction": value
}

If your model requires multiple features to make a prediction, you can simply add more features to the feature array as follows:

{
    "features": [feature1, feature2, feature3,...]
}
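The request/response contract above can be exercised in-process, without starting a server, by using Flask's built-in test client. The sketch below stands in a stub for the pickled model (an assumption for illustration: any object with a compatible predict() method works), so it demonstrates the endpoint contract rather than the deployed app itself:

```python
import flask

app = flask.Flask(__name__)

class StubModel:
    """Stands in for the pickled LinearRegression: y = 2x + 3, no noise."""
    def predict(self, X):
        return [[2 * X[0][0] + 3]]

model = StubModel()

@app.route('/predict', methods=['POST'])
def predict():
    # Same shape as the tutorial's route: read features, predict, respond
    features = flask.request.get_json(force=True)['features']
    prediction = model.predict([features])[0][0]
    return flask.jsonify({'prediction': prediction})

# Exercise the endpoint with the test client instead of a live server
with app.test_client() as client:
    resp = client.post('/predict', json={'features': [0]})
    print(resp.get_json())  # {'prediction': 3}
```

This is also a convenient pattern for unit-testing the service before deploying it.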
  6. You can run your Python application from the terminal or your Python IDE. From the terminal, run the following command:

python app.py

Note that Python must be on your PATH for this to work; how to add it depends on your operating system and Python installation.

  7. You can now make a prediction with a POST request. From Terminal on Mac or bash on Linux, run:

curl -X POST \
  http://localhost:9099/predict \
  -H 'Content-Type: application/json' \
  -d '{"features": [0]}'

Or from PowerShell with:

Invoke-RestMethod -Method POST -Uri http://localhost:9099/predict -Body '{"features":[0]}'

Deployment Configuration

Two methods of configuring the application deployment are covered here: using the Cloud Foundry Python runtime, or using the Cloud Foundry Docker runtime.

If you would like to deploy the application using the Cloud Foundry Python runtime, continue with the next section, Configure the Cloud Foundry Python Deployment.

If you would like to deploy the application as a Docker image, skip ahead to the Configure the Cloud Foundry Docker Deployment section.

Configure the Cloud Foundry Python Deployment

In order to deploy to Cloud Foundry, a few additional files need to be created inside your deploy directory.

  1. Cloud Foundry apps require a manifest file that defines some application configuration; it must be in the application root (the deploy directory in this case). Create the manifest.yml file with the following contents:
---
applications:
- name: MLModelAPI
  random-route: true
  buildpack: python_buildpack
  command: python app.py
  memory: 256M
  2. Next, the Python runtime version for the application must be defined in the runtime.txt file. In the deploy directory, create the runtime.txt file and add the following to it:
python-3.6.4
  3. Lastly, the application dependencies must be defined in requirements.txt. Create this file in the deploy directory and add the following:
flask
numpy
scipy
scikit-learn

Upon completing the above, your deploy directory should be as follows:

- deploy
  - app.py
  - linearmodel.pkl
  - manifest.yml
  - requirements.txt
  - runtime.txt

Configure the Cloud Foundry Docker Deployment

If you prefer to use a Docker image, which can run on the Cloud Foundry runtime or any other Docker runtime, follow these steps instead.

If you have already completed the instructions in the Configure the Cloud Foundry Python Deployment section, you can skip this and jump ahead to Deploy the Application from the CLI.

In order to deploy the Docker image on Cloud Foundry, first create a few additional files in the deploy directory.

  1. Cloud Foundry apps require a manifest file that defines some application configuration; it must be in the application root (the deploy directory in this case). Create the manifest.yml file with the following contents:
---
applications:
- name: MLModelAPI
  random-route: true
  memory: 256M
  docker:
    image: <YOUR DOCKERHUB USERNAME>/python-ml-service

Note that when a Docker image is specified, no buildpack or command entry is needed; Cloud Foundry uses the image and its CMD as defined in the Dockerfile.
  2. The application dependencies must be defined in requirements.txt. Create this file in the deploy directory and add the following:
flask
numpy
scipy
scikit-learn
  3. Next, the Dockerfile must be created. In the deploy directory, create a file named Dockerfile and add the following to it:
FROM python:3.6.4-slim
COPY requirements.txt /requirements.txt
RUN pip install -r requirements.txt
EXPOSE 9099
COPY . /.
CMD ["python","app.py"]
  4. From the deploy directory, build the Docker image with the following command:

docker image build -t <YOUR DOCKERHUB USERNAME>/python-ml-service .

Take note of the . at the end of the command, which specifies that the image is built from the Dockerfile in the current directory.

  5. Before you can deploy the application, the image needs to be pushed to a container registry; in this case, Docker Hub will be used. Log in to Docker Hub with the following command, entering your username and password when prompted:

docker login
  6. Lastly, push the image to Docker Hub with the following command:

docker push <YOUR DOCKERHUB USERNAME>/python-ml-service

Upon completing the above, your deploy directory should be as follows:

- deploy
  - app.py
  - Dockerfile
  - linearmodel.pkl
  - manifest.yml
  - requirements.txt

Deploy the Application from the CLI

Lastly, you will deploy the application to Cloud Foundry on IBM Cloud using the IBM Cloud CLI.

Regardless of whether you are using the Cloud Foundry Python runtime or the Cloud Foundry Docker runtime, the following steps are the same.

  1. Navigate to your deploy directory from your command line.

  2. Download the IBM Cloud CLI if you do not already have it, as shown below.

For Linux/Mac, you can download it with the following command:

curl -sL https://ibm.biz/idt-installer | bash

And for Windows, from PowerShell as an Administrator, with:

Set-ExecutionPolicy Unrestricted; iex(New-Object Net.WebClient).DownloadString('http://ibm.biz/idt-win-installer')
  3. Next, verify your installation by running the following command:

ibmcloud --help
  4. Once you have verified that the CLI is correctly installed, log in to IBM Cloud and target Cloud Foundry from the CLI with the following:

ibmcloud login
ibmcloud target --cf

Note that if you are using a Federated ID and see the following when running ibmcloud login:

You are using a federated user ID, please use one time passcode ( C:\Program Files\IBM\Cloud\bin\ibmcloud.exe login --sso ), or use API key ( C:\Program Files\IBM\Cloud\bin\ibmcloud.exe --apikey key or @key_file ) to authenticate.

First try logging in with ibmcloud login --sso, and if that does not work, use the API key method.

  5. Next, push your application to Cloud Foundry from the CLI:

ibmcloud cf push

When the deployment has completed, you will see output similar to the following:

name:              MLModelAPI
requested state:   started
instances:         1/1
usage:             256M x 1 instances
routes:            https://<HOSTNAME>.<REGION>.mybluemix.net
last uploaded:     Mon 14 Jan 14:26:10 SAST 2019
stack:             cflinuxfs2
buildpack:         python_buildpack
start command:     python app.py
  6. Lastly, you can use the endpoint that was defined to make predictions with a POST request as before. From Terminal on Mac or bash on Linux, run:

curl -X POST \
  https://<HOSTNAME>.<REGION>.mybluemix.net/predict \
  -H 'Content-Type: application/json' \
  -d '{"features": [0]}'

Or from PowerShell with:

Invoke-RestMethod -Method POST -Uri http://<HOSTNAME>.<REGION>.mybluemix.net/predict -Body '{"features":[0]}'

Troubleshooting

Python not found/recognized

If you encounter the following error from bash:

bash: python: command not found

Or on PowerShell:

python : The term 'python' is not recognized as the name of a cmdlet, function, script file, or operable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ python
+ ~~~~~~
    + CategoryInfo          : ObjectNotFound: (python:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException

Ensure that Python is in your PATH. The process for doing this depends on your Python installation and OS and is not covered here.

ModuleNotFound

If you encounter a ModuleNotFoundError when importing a Python package:

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-1-5b23471a9b60> in <module>
----> 1 import ...

ModuleNotFoundError: No module named '...'

You need to install the package. This can be done using pip via the terminal with:

pip install <PACKAGE NAME>

Or within a Jupyter Notebook by running the following from a cell:

!pip install <PACKAGE NAME>

Unicode Decode Error

If, when importing the model binary into the app, you encounter the following error:

UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 49: character maps to <undefined>

Ensure that you are reading the file in binary mode with rb, and not just r, in the app.py file as follows:

model = pickle.load(open("linearmodel.pkl","rb"))

Underlying connection closed - PowerShell

If you encounter the following error when testing your endpoints from PowerShell:

Invoke-RestMethod : The underlying connection was closed: An unexpected error occurred on a send.
At line:1 char:1
+ Invoke-RestMethod -Method POST -Uri "https://mlmodelapi-forgiving-war ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-RestMethod], WebException
    + FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeRestMethodCommand

Ensure that you are using the http://<HOSTNAME>.<REGION>.mybluemix.net endpoint above, and not the HTTPS.

Summary

You have successfully trained and deployed a Python machine learning model as a web service, and interacted with it by means of an HTTP POST to the service in order to make predictions.

You can also read more about using Flask as a Python Web Framework and about Developing Machine Learning models with Python.

Resources

Some additional resources that may be helpful for more information: