Istio with k8s
Updated: 23 December 2025
Prerequisites
- Trial IBM Cloud Account
- Kubernetes Cluster
- Kubernetes 1.9.x or later
- IBM Cloud CLI with the Kubernetes plugin
Setting Up The Environment
Access Your Cluster
List your available clusters, then download the cluster config and set an environment variable to point to it with

```
ibmcloud cs clusters
ibmcloud cs cluster-config <CLUSTER NAME>
$env:KUBECONFIG="C:\Users\NabeelValley\.bluemix\plugins\container-service\clusters\mycluster\kube-config-mil01-mycluster.yml"
```

Then you can check the workers in your cluster and get information with

```
ibmcloud cs workers <CLUSTER NAME>
ibmcloud cs worker-get <WORKER ID>
```

You can get your nodes, services, deployments, and pods with the following

```
kubectl get node
kubectl get node,svc,deploy,po --all-namespaces
```

Clone the Lab Repo
You can clone the lab repo from https://github.com/IBM/istio101 and then navigate to the workshop directory
```
git clone https://github.com/IBM/istio101
cd istio101/workshop
```

Install Istio on IBM Cloud Kubernetes Service
Download Istio from the Istio releases page and extract it to your root directory
Then add the istioctl.exe file to your PATH variable
Thereafter navigate to the istio folder that you extracted and apply the istio-demo.yaml file as follows

```
kubectl apply -f .\install\kubernetes\istio-demo.yaml
```

If you run into the following error

```
http: proxy error: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it
```

Make sure that your $env:KUBECONFIG variable is set; if not, get your cluster config and set it again
Once that is done, check that the Istio services are running on the cluster with

```
kubectl get svc -n istio-system
```

Download the App and Create the Database
Get the App
Clone the app from the GitHub repo
```
git clone https://github.com/IBM/guestbook.git
cd guestbook/v2
```

Create the Database
Next we can create a Redis database with the following master and slave deployments and services from the YAML files in the Guestbook project

```
kubectl create -f redis-master-deployment.yaml
kubectl create -f redis-master-service.yaml
kubectl create -f redis-slave-deployment.yaml
kubectl create -f redis-slave-service.yaml
```

Install the Guestbook App with Manual Sidecar Injection
Sidecars are utility containers that support the main container. We can inject the Istio sidecar in two ways:
- Manually with the Istio CLI
- Automatically with the Istio Initializer
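For context, automatic injection works by labelling a namespace so that the sidecar is injected into any pod deployed there. A minimal sketch of such a namespace manifest (the namespace name here is just an example):

```yaml
# Sketch: enable automatic sidecar injection for a namespace.
# Pods created in this namespace will have the Envoy sidecar injected.
apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    istio-injection: enabled
```

In this lab we use the manual approach instead, which works regardless of namespace labels.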
With Linux you can do this

```
kubectl apply -f <(istioctl kube-inject -f ../v1/guestbook-deployment.yaml)
kubectl apply -f <(istioctl kube-inject -f guestbook-deployment.yaml)
```

But, if you're on Windows and you need to redirect your output, use this instead

```
$istiov1 = istioctl kube-inject -f ..\v1\guestbook-deployment.yaml
echo $istiov1 > istiov1.yaml
kubectl apply -f .\istiov1.yaml

$istiov2 = istioctl kube-inject -f .\guestbook-deployment.yaml
echo $istiov2 > istiov2.yaml
kubectl apply -f .\istiov2.yaml
```

Then create the Guestbook Service

```
kubectl create -f guestbook-service.yaml
```

Adding the Tone Analyzer
Create a Tone Analyzer service and get the credentials, then add these to the analyzer-deployment.yaml file

```
ibmcloud target --cf
ibmcloud service create tone_analyzer lite my-tone-analyzer
ibmcloud service key-create my-tone-analyzer istiokey
ibmcloud service key-show my-tone-analyzer istiokey
```

Then do the following

```
$istioanalyzer = istioctl kube-inject -f analyzer-deployment.yaml
echo $istioanalyzer > istioanalyzer.yaml
kubectl apply -f .\istioanalyzer.yaml

kubectl apply -f analyzer-service.yaml
```

Service Telemetry and Tracing
Challenges with Microservices
One of the difficulties with microservices is identifying issues and process bottlenecks, as well as debugging
Istio comes with tracing built in for this exact purpose
Configure Istio for Telemetry Data
In the v2 directory, do the following

```
istioctl create -f guestbook-telemetry.yaml
```

Generate a Load on the Application
We can then generate a small load on our application from the worker's IP and port

```
kubectl get service guestbook -n default
```

Or, for a lite plan

```
ibmcloud cs workers <CLUSTER NAME>
kubectl get svc guestbook -n default
while sleep 0.5; do curl http://<guestbook_endpoint>/; done
```

We can generate the load at intervals with the following in Bash

```
while sleep 0.5; do curl http://<WORKER'S PUBLIC IP>:<NODE PORT>/; done
```

View Data
Jaeger
We can find the external port for our tracing service and visit it based on that
```
kubectl get svc tracing -n istio-system
```

Grafana
We can establish port forwarding for Grafana and view the dashboard on localhost:3000
```
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000
```

Prometheus
We can view the Prometheus dashboard at localhost:9090
```
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}') 9090:9090
```

Service Graph
We can view this at http://localhost:8088/dotviz

```
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=servicegraph -o jsonpath='{.items[0].metadata.name}') 8088:8088
```

Expose the Service Mesh with Ingress
Ingress Controller
By default, Istio components are not exposed outside the cluster. An Ingress is a collection of rules that allow inbound connections to reach the cluster
Navigate to the istio101\workshop\plans directory
Using a Lite Account
Configure the Guestbook App with Ingress
```
istioctl create -f guestbook-gateway.yaml
```

Then check the node port and IP of the Ingress

```
kubectl get svc istio-ingressgateway -n istio-system
ibmcloud cs workers <CLUSTER NAME>
```

In my case, I have the endpoint 159.122.179.103:31380 which is bound to port 80
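For reference, guestbook-gateway.yaml defines an Istio Gateway bound to the default ingress gateway. A rough sketch of what such a resource looks like (the name and hosts here are illustrative; the repo's actual file may differ):

```yaml
# Sketch of an Istio Gateway exposing HTTP traffic on port 80.
# The selector targets Istio's default ingress gateway deployment.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: guestbook-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
```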
Using a Paid Account
```
istioctl create -f guestbook-gateway.yaml
kubectl get service istio-ingress -n istio-system
```

Set up a Controller to work with IBM Cloud Kubernetes Service
This will only work with a paid cluster
Get your Ingress subdomain
```
ibmcloud cs cluster-get <CLUSTER NAME>
```

Then add this subdomain to the frontdoor.yaml file, and create and list the details for your Ingress

```
kubectl apply -f guestbook-frontdoor.yaml
kubectl get ingress guestbook-ingress -o yaml
```

Traffic Management
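For context, the frontdoor Ingress routes traffic from your cluster's Ingress subdomain to the Istio ingress service. A sketch of what it might contain (the host and backend details are illustrative assumptions, not copied from the lab file):

```yaml
# Sketch: a Kubernetes Ingress routing the cluster's Ingress subdomain
# to the Istio ingress gateway service on port 80.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: guestbook-ingress
spec:
  rules:
  - host: mycluster.us-south.containers.appdomain.cloud  # your Ingress subdomain
    http:
      paths:
      - path: /
        backend:
          serviceName: istio-ingressgateway
          servicePort: 80
```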
Traffic Management Rules
The core component for traffic management in Istio is Pilot, which manages and configures all the Envoy proxy instances in a service mesh
Pilot translates high level rules into low level configurations by means of the following three resources
- Virtual Services - Defines a set of routing rules to apply when a host is addressed
- Destination Rules - Defines policies that apply to traffic intended for a service after routing has occurred: load balancing, connection pool size, outlier detection, etc
- Service Entries - Enables services to access a service not necessarily managed by Istio
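As an illustration of the last of these, a Service Entry that lets mesh services reach an external API might look like the following sketch (the hostname is purely illustrative):

```yaml
# Sketch: a ServiceEntry registering an external HTTPS endpoint with the mesh.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-api
spec:
  hosts:
  - api.example.com
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
```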
A/B Testing
Previously we created two versions of the Guestbook app, v1 and v2. If we do not have any rules, Istio will distribute requests evenly between the instances
To prevent Istio from using the default routing method we can do the following to route all traffic to v1
```
istioctl replace -f virtualservice-all-v1.yaml
```

Incrementally Roll Out Changes
We can incrementally roll our changes by changing the weighting of our different versions
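A weighted Virtual Service for this kind of split might look roughly like the following sketch (names and gateway bindings are assumptions; check the actual file in the repo):

```yaml
# Sketch: route 80% of guestbook traffic to v1 and 20% to v2.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtual-service-guestbook
spec:
  hosts:
  - "*"
  gateways:
  - guestbook-gateway
  http:
  - route:
    - destination:
        host: guestbook
        subset: v1
      weight: 80
    - destination:
        host: guestbook
        subset: v2
      weight: 20
```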
```
istioctl replace -f virtualservice-80-20.yaml
```

Circuit Breakers and Destination Rules
Istio lets us configure settings for destination rules, as well as implement circuit breakers for Envoys
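A Destination Rule that adds a simple circuit breaker might look like this sketch (the connection limits and ejection settings are illustrative values, not taken from the lab):

```yaml
# Sketch: cap connections to the guestbook service and eject unhealthy
# instances from the load-balancing pool after repeated errors.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: guestbook-circuit-breaker
spec:
  host: guestbook
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
    outlierDetection:
      consecutiveErrors: 5
      interval: 10s
      baseEjectionTime: 30s
```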
Securing Services
Mutual Auth with Transport Layer Security
Istio can enable secure communication between app services without the need for application code changes. We can delegate service control to Istio instead of implementing it on each service
Citadel is the Istio component that provides sidecar proxies with an identity certificate. Envoys then use these certificates to encrypt and authenticate communication along channels between these services
When a microservice connects to another microservice, communication between them is redirected through the Envoys
Setting up a Certificate Authority
First check that Citadel is running
```
kubectl get deployment -l istio=citadel -n istio-system
```

Do the following with Bash

```
ibmcloud cs cluster-config <CLUSTER NAME>
```

Then set the environment variable, and paste the following
```
cat <<EOF | C:/Users/NabeelValley/istio-1.0.3/bin/istioctl.exe create -f -
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: mtls-to-analyzer
  namespace: default
spec:
  targets:
  - name: analyzer
  peers:
  - mtls:
EOF
```

You can then confirm the policy is set with

```
kubectl get policies.authentication.istio.io
```

Next we can enable mTLS from the Guestbook with a Destination Rule

```
cat <<EOF | istioctl create -f -
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: route-with-mtls-for-analyzer
  namespace: default
spec:
  host: "analyzer.default.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
EOF
```

Verify Authenticated Connection
We can open a shell in a pod by getting the pod name and exec-ing into the istio-proxy container

```
kubectl get pods -l app=guestbook
kubectl exec -it guestbook-v2-xxxxxxxx -c istio-proxy /bin/bash
```

Then we should be able to view the certificate pem files as follows

```
ls /etc/certs/
```

Enforcing Isolation
Service Isolation with Adapters
Back-end systems typically integrate with services in a way that creates a hard coupling
Istio uses Mixer to provide a generic intermediate layer between app code and infrastructure back-ends
Mixer makes use of adapters to interface between code and back-ends, for example:
- Denier
- Prometheus
- Memquota
- Stackdriver
Using the Denier Adapter
Block access to the Guestbook service with
```
istioctl create -f mixer-rule-denial.yaml
```

The rule we have created is as follows

```
apiVersion: 'config.istio.io/v1alpha2'
kind: denier
metadata:
  name: denyall
  namespace: istio-system
spec:
  status:
    code: 7
    message: Not allowed
---
# The (empty) data handed to denyall at run time
apiVersion: 'config.istio.io/v1alpha2'
kind: checknothing
metadata:
  name: denyrequest
  namespace: istio-system
spec:
---
# The rule that uses denier to deny requests to the guestbook service
apiVersion: 'config.istio.io/v1alpha2'
kind: rule
metadata:
  name: deny-hello-world
  namespace: istio-system
spec:
  match: destination.service=="guestbook.default.svc.cluster.local"
  actions:
  - handler: denyall.denier
    instances:
    - denyrequest.checknothing
```

We can verify that access is denied by navigating to our Ingress IP. Next, we can remove the rule with

```
istioctl delete -f mixer-rule-denial.yaml
```