Updated: 23 December 2025
Deploy a Python Machine Learning Model as a Cloud Foundry Web Service
An important part of machine learning is model deployment: deploying a machine learning model so that other applications can consume it in production.
An effective way to deploy a machine learning model for consumption is by means of a web service.
This tutorial covers the steps required to get from training a model on your data to deploying that model as a web service on Cloud Foundry.
Learning Objectives
Upon completion of this tutorial you will know how to export a machine learning model and create a web service that exposes your model for other applications to use, by means of a Cloud Foundry application or Docker container.
Prerequisites
This tutorial requires the following:
- Basic understanding of Python
- Experience using scikit-learn to train machine learning models
- Basic understanding of REST
- An IBM Cloud Account
- Familiarity with Jupyter Notebook
- IBM Cloud CLI
- Docker and a Docker Hub Account
Estimated Time
This tutorial should take between 15 and 30 minutes to complete.
Steps
Get the data
The first step of deploying a machine learning model is having some data to train the model on. For this tutorial, a two-column dataset conforming to a linear regression approximation will be generated.
- Create a directory for the project, and in it create a directory for your training files called `train`.
- For this tutorial some generated data will be used. Create a new Jupyter Notebook in the `train` directory called `generatedata.ipynb`, then generate the data by running the following Python code in a Notebook cell:

```python
import numpy as np

x = np.arange(-5.0, 5.0, 0.1)
y = 2*(x) + 3
y_noise = 2 * np.random.normal(size=x.size)
ydata = y + y_noise
```

- Next, you can view the generated data as a pandas DataFrame by running the following code:
```python
import pandas as pd

df = pd.DataFrame()
df['Independent Variable'] = x
df['Dependent Variable'] = ydata
df.head()
```

- Lastly, export the data to a CSV file so it can be used to train the model:

```python
df.to_csv('data.csv', sep=',')
```

Train the model
Once the data has been exported, a machine learning model can be trained on it. This tutorial will use the scikit-learn LinearRegression model.
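As context for this step, `LinearRegression` estimates a slope and an intercept by ordinary least squares. Below is a minimal self-contained sketch (separate from the tutorial's notebooks, using noise-free data rather than `data.csv`) showing that it recovers the coefficients of the generating line y = 2x + 3:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Noise-free data following y = 2x + 3 (the tutorial's data adds noise)
X = np.arange(-5.0, 5.0, 0.1).reshape(-1, 1)
y = 2 * X + 3

lm = LinearRegression()
lm.fit(X, y)

# With no noise, the fitted slope and intercept match the generating line
print(round(lm.coef_[0][0], 6), round(lm.intercept_[0], 6))  # 2.0 3.0
```

With the noisy tutorial data, the fitted values will be close to, but not exactly, 2 and 3.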
- Create a new Jupyter Notebook in the `train` directory called `train.ipynb` and import and view the data that was just exported with the following code:

```python
import pandas as pd

df = pd.read_csv('data.csv')
df.head()
```

- Next, use scikit-learn's LinearRegression model to train the model:

```python
from sklearn.linear_model import LinearRegression
import numpy as np

lm = LinearRegression()
X = np.asanyarray(df[['Independent Variable']])
Y = np.asanyarray(df[['Dependent Variable']])
lm.fit(X, Y)
```

- Once the model has been trained, you can look at some predictions by using the `lm.predict()` function:
```python
lm.predict([[-2], [0], [4]])
```

This should result in output similar to the following (exact values will vary because of the random noise in the data):

```
array([[-0.84154182],
       [ 3.20582465],
       [11.3005576 ]])
```

- Lastly, we can use `pickle` to export our model object as a binary file which can be used by the web service that will be created in the next step:

```python
import pickle

pickle.dump(lm, open("../deploy/linearmodel.pkl", "wb"))
```

Expose the model as a web service
Exposing the model as a web service can be done by creating a Python Flask application with an endpoint that takes a JSON body of features and returns a prediction based on those features.
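Since the web service will rely on the pickled model, it can be reassuring to confirm that a fitted scikit-learn model survives a pickle round trip unchanged. The following is a self-contained sketch: it trains a small stand-in model and uses an in-memory buffer instead of the tutorial's `linearmodel.pkl` file.

```python
import io
import pickle

import numpy as np
from sklearn.linear_model import LinearRegression

# Stand-in for the model trained in train.ipynb
X = np.arange(-5.0, 5.0, 0.1).reshape(-1, 1)
lm = LinearRegression().fit(X, 2 * X + 3)

# Serialize to bytes and load back, as the web service will do from a file
buf = io.BytesIO()
pickle.dump(lm, buf)
buf.seek(0)
restored = pickle.load(buf)

# The restored model makes identical predictions
print(np.allclose(lm.predict([[0], [4]]), restored.predict([[0], [4]])))  # True
```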
- Create a new file in the `deploy` directory and name it `app.py`.
- Inside the `app.py` file, add the following code to import the necessary packages and define your app. Pickle will be used to read the model binary that was exported earlier, and Flask will be used to create the web server.

```python
import pickle
import flask
import os

app = flask.Flask(__name__)
port = int(os.getenv("PORT", 9099))
```

- Then import the model file that was created previously:
```python
model = pickle.load(open("linearmodel.pkl", "rb"))
```

- Next, add a route that will allow you to send a JSON body of features and will return a prediction.

```python
@app.route('/predict', methods=['POST'])
def predict():
    features = flask.request.get_json(force=True)['features']
    prediction = model.predict([features])[0, 0]
    response = {'prediction': prediction}

    return flask.jsonify(response)


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=port)
```

- The route that was just defined will expect a JSON input of the following form:
```json
{
    "features": [feature1]
}
```

And it will return a response of the following form:

```json
{
    "prediction": value
}
```

If your model requires multiple features to make a prediction, you can simply add more features to the features array as follows:

```json
{
    "features": [feature1, feature2, feature3, ...]
}
```

- You can run your Python application from the terminal or your Python IDE. From the terminal, run the following command:

```shell
python app.py
```

You may need to add Python to your PATH; this can be done by following the instructions for Windows and Linux or Mac.
- You can make use of an HTTP POST to make a prediction. Make the request from Terminal on Mac or bash on Linux with:

```shell
curl -X POST \
  http://localhost:9099/predict \
  -H 'Content-Type: application/json' \
  -d '{"features": [0]}'
```

Or from PowerShell with:

```powershell
Invoke-RestMethod -Method POST -Uri http://localhost:9099/predict -Body '{"features":[0]}'
```

Deployment Configuration
There are two methods of configuring the application deployment covered here: using the Cloud Foundry Python runtime, or using the Cloud Foundry Docker runtime.
If you would like to deploy the application using the Cloud Foundry Python runtime, continue with the next section, Configure the Cloud Foundry Python Deployment.
If you would like to deploy the application as a Docker image, skip ahead to the Configure the Cloud Foundry Docker Deployment section.
Configure the Cloud Foundry Python Deployment
In order to deploy to Cloud Foundry, a few additional files need to be created inside your `deploy` directory.
- Cloud Foundry apps require a manifest file that defines some application configuration; it must be in the application root (the `deploy` directory in this case). Create the `manifest.yml` file with the following contents:

```yaml
---
applications:
- name: MLModelAPI
  random-route: true
  buildpack: python_buildpack
  command: python app.py
  memory: 256M
```

- Next, the Cloud Foundry runtime version for the application must be defined in the `runtime.txt` file. In the `deploy` directory create the `runtime.txt` file and add the following to it:
```
python-3.6.4
```

- Lastly, the application dependencies must be defined in `requirements.txt`. Create this file in the `deploy` directory and add the following:

```
flask
numpy
scipy
scikit-learn
```

Upon completing the above, your `deploy` directory should be as follows:

```
- deploy
  - app.py
  - linearmodel.pkl
  - manifest.yml
  - requirements.txt
  - runtime.txt
```

Configure the Cloud Foundry Docker Deployment
If you would prefer to make use of a Docker image that can be run via the Cloud Foundry runtime, or any other Docker runtime, follow these steps instead.
If you have already completed the instructions in the Configure the Cloud Foundry Python Deployment section, you can skip this and jump ahead to Deploy the Application from the CLI.
In order to deploy the Docker image on Cloud Foundry, we first need to create a few additional files in the `deploy` directory.
- Cloud Foundry apps require a manifest file that defines some application configuration; it must be in the application root (the `deploy` directory in this case). Create the `manifest.yml` file with the following contents. Note that when deploying a Docker image, the `buildpack` and `command` entries are not needed, as the image itself defines how the app starts:

```yaml
---
applications:
- name: MLModelAPI
  random-route: true
  memory: 256M
  docker:
    image: <YOUR DOCKERHUB USERNAME>/python-ml-service
```

- The application dependencies must be defined in `requirements.txt`. Create this file in the `deploy` directory and add the following:

```
flask
numpy
scipy
scikit-learn
```

- Next, the Dockerfile must be defined. In the `deploy` directory create a file called `Dockerfile` and add the following to it:
```dockerfile
FROM python:3.6.4-slim
COPY requirements.txt /requirements.txt
RUN pip install -r requirements.txt
EXPOSE 9099
COPY . /.
CMD ["python","app.py"]
```

- From the `deploy` directory, build the Docker image with the following command:

```shell
docker image build -t <YOUR DOCKERHUB USERNAME>/python-ml-service .
```

Take note of the `.` at the end of the command, which specifies that the image is built from the Dockerfile in the current directory.
- Before you can deploy the application, the image needs to be pushed to a container registry; in this case Docker Hub will be used. Log in to Docker Hub with the following command, and enter your username and password:

```shell
docker login
```

- Lastly, push the image to Docker Hub with the following command:

```shell
docker push <YOUR DOCKERHUB USERNAME>/python-ml-service
```

Upon completing the above, your `deploy` directory should be as follows:

```
- deploy
  - app.py
  - Dockerfile
  - linearmodel.pkl
  - manifest.yml
  - requirements.txt
```

Deploy the Application from the CLI
Lastly, you will deploy the application to Cloud Foundry on IBM Cloud; this will be done with the IBM Cloud CLI.
Regardless of whether you're using the Cloud Foundry Python runtime or the Cloud Foundry Docker runtime, the following steps are the same.
- Navigate to your `deploy` directory from your command line.
- Download the IBM Cloud CLI if you do not already have it, as directed here or as shown below.

For Linux/Mac, you can download it with the following command:

```shell
curl -sL https://ibm.biz/idt-installer | bash
```

And for Windows, from PowerShell as an Administrator, with:

```powershell
Set-ExecutionPolicy Unrestricted; iex(New-Object Net.WebClient).DownloadString('http://ibm.biz/idt-win-installer')
```

- Next, verify your installation by running the following command:
```shell
ibmcloud --help
```

- Once you have verified that the CLI is correctly installed, log in to IBM Cloud and then target Cloud Foundry from the CLI with the following:

```shell
ibmcloud login
ibmcloud target --cf
```

Note that if you are using a Federated ID and see the following when running `ibmcloud login`:

```
You are using a federated user ID, please use one time passcode ( C:\Program Files\IBM\Cloud\bin\ibmcloud.exe login --sso ), or use API key ( C:\Program Files\IBM\Cloud\bin\ibmcloud.exe --apikey key or @key_file ) to authenticate.
```

First try to log in using `ibmcloud login --sso`, and if that does not work, use the API key method.
- Next, you can simply push your application to Cloud Foundry from the CLI.

```shell
ibmcloud cf push
```

When the deployment has completed, you will see something like the following output:

```
name: MLModelAPI
requested state: started
instances: 1/1
usage: 256M x 1 instances
routes: https://<HOSTNAME>.<REGION>.mybluemix.net
last uploaded: Mon 14 Jan 14:26:10 SAST 2019
stack: cflinuxfs2
buildpack: python_buildpack
start command: python app.py
```

- Lastly, you can try the endpoint that was defined to make predictions with a POST as before, using Terminal on Mac or bash on Linux with the following command:

```shell
curl -X POST \
  https://<HOSTNAME>.<REGION>.mybluemix.net/predict \
  -H 'Content-Type: application/json' \
  -d '{"features": [0]}'
```

Or from PowerShell with:

```powershell
Invoke-RestMethod -Method POST -Uri http://<HOSTNAME>.<REGION>.mybluemix.net/predict -Body '{"features":[0]}'
```

Troubleshooting
Python not found/recognized
If you encounter the following error from bash:

```
bash: python: command not found
```

Or on PowerShell:

```
python : The term 'python' is not recognized as the name of a cmdlet, function, script file, or operable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ python
+ ~~~~~~
    + CategoryInfo          : ObjectNotFound: (python:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException
```

Ensure that Python is in your PATH. The process for doing this depends on your Python installation and OS and is not covered here.
ModuleNotFound
If you encounter a ModuleNotFoundError when importing a Python package:

```
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-1-5b23471a9b60> in <module>
----> 1 import ...

ModuleNotFoundError: No module named '...'
```

You need to install the package. This can be done using pip via the terminal with:

```shell
pip install <PACKAGE NAME>
```

Or within a Jupyter Notebook by running the following from a cell:

```
!pip install <PACKAGE NAME>
```

Unicode Decode Error
If, when importing the model binary into the app, you encounter the following error:

```
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 49: character maps to <undefined>
```

Ensure that you are reading the file in binary mode with `rb`, and not just `r`, in the `app.py` file as follows:

```python
model = pickle.load(open("linearmodel.pkl","rb"))
```

Underlying connection closed - PowerShell
If you encounter the following error when testing your endpoints from PowerShell:

```
Invoke-RestMethod : The underlying connection was closed: An unexpected error occurred on a send.
At line:1 char:1
+ Invoke-RestMethod -Method POST -Uri "https://mlmodelapi-forgiving-war ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-RestMethod], WebException
    + FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeRestMethodCommand
```

Ensure that you are using the http://<HOSTNAME>.<REGION>.mybluemix.net endpoint, and not the HTTPS one.
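When troubleshooting, it can also help to rule out networking entirely by exercising the `/predict` route with Flask's built-in test client, which calls the route logic in-process without starting a server. The sketch below is self-contained: it trains a small stand-in model inline rather than loading `linearmodel.pkl`:

```python
import flask
import numpy as np
from sklearn.linear_model import LinearRegression

# Minimal stand-in for app.py: same /predict route, model trained inline
app = flask.Flask(__name__)
X = np.arange(-5.0, 5.0, 0.1).reshape(-1, 1)
model = LinearRegression().fit(X, 2 * X + 3)

@app.route('/predict', methods=['POST'])
def predict():
    features = flask.request.get_json(force=True)['features']
    prediction = model.predict([features])[0, 0]
    return flask.jsonify({'prediction': prediction})

# Exercise the route in-process, with no server or network involved
client = app.test_client()
resp = client.post('/predict', json={'features': [0]})
print(resp.get_json())  # prediction should be close to the intercept, 3
```

If this works but the deployed endpoint does not, the problem lies in the deployment or the request, not in the model or the route logic.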
Summary
You have successfully completed the process of training and deploying your Python machine learning model as a web service, as well as interacting with it by means of an HTTP POST to the service in order to make predictions.
You can also read more about using Flask as a Python Web Framework and about Developing Machine Learning models with Python.
Resources
Some additional resources that may be helpful:
- Tutorial GitHub Repo
- Python Flask
- Ivan Yung’s Article on Deploying Python Machine Learning Models
- Ian Huston’s Python Cloud Foundry Examples
- Cloud Foundry Manifests
- Cloud Foundry Python Buildpacks