The Kubernetes API is your route to inspecting and managing your cluster’s operations. You can consume the API using the Kubectl CLI, tools such as curl, or the official integration libraries for popular programming languages.
The API is available to applications within your cluster too. Kubernetes Pods are automatically given access to the API and can authenticate using a provided service account. You perform interactions by consuming the injected environment variables and certificate files to make connections from the client of your choice.
Why Access The Kubernetes API Within Pods?
There are several use cases for in-Pod API access. This technique allows applications to dynamically inspect their environment, apply Kubernetes changes, and collect control plane metrics that provide performance insights.
Some organizations develop their own tooling around Kubernetes. They might deploy a special in-cluster application that uses the API to expose additional functionality. Operating from within the cluster can be safer than making API calls from an external script as you don’t need to open up your environment or share service accounts and authentication tokens.
Using the API Client Libraries
The easiest and recommended method for accessing the Kubernetes API from a Pod is to use a client library. Fully supported options are available for C, .NET, Go, Haskell, Java, JavaScript, Perl, Python, and Ruby. There are equivalent community-maintained solutions for most other popular programming languages.
The client libraries have built-in support for discovering the cluster environment they’re running in. Each implementation provides a function you can call that will configure the library to connect to the correct API server.
Here’s an example of how to list the Pods in your cluster within a Python application:
from kubernetes import client, config

# Configure the client with the service account credentials mounted into the Pod
config.load_incluster_config()

api = client.CoreV1Api()

# Perform necessary API interactions
pods = api.list_pod_for_all_namespaces()
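The returned object holds the Pod records in its items field, so a short follow-on sketch like this (using the official Python client’s standard response objects) prints each Pod’s namespace and name:

# Print the namespace and name of each Pod returned by the API
for pod in pods.items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}")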
This approach is easy to work with and requires no manual configuration. Sometimes you won’t be able to use a client library though. In those cases, it’s still possible to manually access the API using the service account Kubernetes provides.
Performing Manual API Interactions
To call the API you need to know two things: the in-cluster hostname it’s exposed on, and the service account token that will authenticate your Pod.
The API hostname is always kubernetes.default.svc. The Kubernetes DNS provider will resolve this name to the control plane’s API server. Alternatively, you can use the $KUBERNETES_SERVICE_HOST environment variable to discover the API server’s IP address:
$ echo $KUBERNETES_SERVICE_HOST
10.96.0.1
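Kubernetes also injects a $KUBERNETES_SERVICE_PORT variable alongside the host, so if you’d rather assemble the full address from the environment, a small sketch like this works (assuming the standard in-cluster Service environment variables are present):

$ APISERVER="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT"
$ echo $APISERVER
https://10.96.0.1:443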
The API is only available over HTTPS. You can find the certificate authority file for your cluster at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt within your Pod. Kubernetes deposits this into the filesystem each time a new container is created.
You’ll need to authenticate to achieve anything useful with the API. Kubernetes provides each Pod with a token for its service account at /var/run/secrets/kubernetes.io/serviceaccount/token. This should be included with each HTTP request as a bearer token in the Authorization header.
Putting everything together, here’s an example of making a basic in-Pod Kubernetes API request using curl:
$ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
    https://kubernetes.default.svc/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.49.2:8443"
    }
  ]
}
The Kubernetes server has responded with the API versions that are available. This confirms a successful connection has been made using the kubernetes.default.svc hostname and the provided service account.
Handling RBAC
Although an API request has been successfully made, most others will be off-limits if RBAC is enabled for your cluster. Newly created service accounts don’t automatically receive roles so your Pod won’t be able to request protected API endpoints.
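Until a role is bound, a request to a protected endpoint such as the namespaced Pods list is rejected with a 403. The response is a Status object along these lines (illustrative output; the exact fields and message vary by cluster):

$ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
    https://kubernetes.default.svc/api/v1/namespaces/default/pods
{
  "kind": "Status",
  "apiVersion": "v1",
  "status": "Failure",
  "reason": "Forbidden",
  "code": 403
  ...
}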
You can resolve this by creating your own Role objects and binding them to the service account that’s provided to your Pods. First create a new Role:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: demo-role
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
Apply it to your cluster with Kubectl:
$ kubectl apply -f role.yaml
Next bind the role to the service account:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: demo-role-binding
subjects:
  - kind: ServiceAccount
    name: default
    apiGroup: ""
roleRef:
  kind: Role
  name: demo-role
  apiGroup: rbac.authorization.k8s.io
The default service account is selected as the role binding’s subject. Pods are supplied with this service account by default, scoped to the namespace they were created in. In this example, the default namespace is used, but this should be changed on the Role and RoleBinding objects if your Pod exists in a different namespace.
Add the RoleBinding to your cluster:
$ kubectl apply -f role-binding.yaml
Now your Pods will be permitted to get and list other Pod objects in the default namespace. You can verify this by making an API request to the namespaced Pods endpoint:
$ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
    https://kubernetes.default.svc/api/v1/namespaces/default/pods
{
  "kind": "PodList",
  "apiVersion": "v1"
  ...
}
Pods can identify their own namespace by reading the /var/run/secrets/kubernetes.io/serviceaccount/namespace file:
$ cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
default
This provides a convenient method for interpolating the active namespace into endpoint URLs:
$ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
    https://kubernetes.default.svc/api/v1/namespaces/$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)/pods
{
  "kind": "PodList",
  "apiVersion": "v1"
  ...
}
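These commands get long. As a purely optional convenience (the variable names here are arbitrary), you could capture the mounted paths in shell variables once and reuse them:

$ SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
$ TOKEN=$(cat $SA_DIR/token)
$ NAMESPACE=$(cat $SA_DIR/namespace)
$ curl --cacert $SA_DIR/ca.crt \
    -H "Authorization: Bearer $TOKEN" \
    https://kubernetes.default.svc/api/v1/namespaces/$NAMESPACE/pods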
Choosing a Different Service Account
Kubernetes automatically provides Pods with the default service account inside their namespace. You can optionally inject a different service account instead by setting the spec.serviceAccountName field on your Pods:
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  serviceAccountName: demo-sa
  containers:
    # Any container image will do; nginx is used purely as an example
    - name: demo
      image: nginx:latest
In this example, the Pod will authenticate as the demo-sa service account. You can create this service account manually and bind it to the roles you require:
$ kubectl create serviceaccount demo-sa
The service account should exist in the same namespace as the Pod.
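To give demo-sa the same Pod read access configured earlier, you could bind it to demo-role with a RoleBinding along these lines (a sketch mirroring the earlier binding; the demo-sa-role-binding name is arbitrary):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: demo-sa-role-binding
subjects:
  - kind: ServiceAccount
    name: demo-sa
    apiGroup: ""
roleRef:
  kind: Role
  name: demo-role
  apiGroup: rbac.authorization.k8s.io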
Opting Out of Service Account Mounting
Automatic service account injection isn’t always desirable. It can be a security hazard as a successful Pod compromise offers immediate access to your Kubernetes cluster’s API. You can disable service account token mounts with the spec.automountServiceAccountToken Pod manifest field:
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  automountServiceAccountToken: false
  containers:
    # Any container image will do; nginx is used purely as an example
    - name: demo
      image: nginx:latest
Kubernetes won’t inject the /var/run/secrets/kubernetes.io/serviceaccount/token file. This will prevent the Pod from authenticating to the Kubernetes API unless you manually supply credentials using a different method. The field is also supported on ServiceAccount objects, which prevents their tokens from being automatically mounted into any Pod.
If you do use service account mounting, set appropriate RBAC policies to restrict the token to your intended use cases. Avoiding highly privileged access will lessen the risk of damage should an attacker gain access to your Pod.
Summary
Accessing the Kubernetes API server from within your cluster lets running applications inspect and modify neighbouring workloads. You can add extra functionality without opening up your cluster to external API access.
The official client libraries make it simple to get up and running, if they’re suitable for your use case. In other situations you’ll need to manually make requests to https://kubernetes.default.svc, supplying the certificate authority file and service account token that Kubernetes injects into your Pod containers. Irrespective of the approach you use, the service account must be correctly configured with RBAC role bindings so the Pod has permission to perform its intended actions.