[!NOTE] This document will be updated periodically.

Exposing the Kubernetes API Through OpenShift Router

A customer wants to expose the internal Kubernetes API through the OpenShift router. This guide demonstrates how to do so, but be aware that exposing the Kubernetes API through a router is not recommended: it introduces security risks to the cluster.

Fortunately, OpenShift Container Platform (OCP) provides an internal service that points to the Kubernetes API, which we can expose through a route.

Configuration

Create a route to expose the Kubernetes API service:

```bash
cat << EOF > ${BASE_DIR}/data/install/route-kube-api.yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: kube-api-passthrough-route
  namespace: default
spec:
  # Replace with your cluster's domain
  host: api-via-router.apps.demo-01-rhsys.wzhlab.top
  to:
    kind: Service
    name: kubernetes
  port:
    # This is the name of the port on the kubernetes service
    targetPort: https
  tls:
    # This is the most important part.
    # It tells the router to forward the TCP stream without touching it.
    termination: passthrough
EOF

oc apply -f ${BASE_DIR}/data/install/route-kube-api.yaml
```
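Before logging in, you can confirm that the router is passing the TLS stream through to the API server by hitting the unauthenticated `/version` endpoint. This is a sketch using the demo hostname above; substitute your own route host. It assumes a reachable cluster and that anonymous access to `/version` (granted by the default `system:public-info-viewer` role binding) has not been restricted:

```shell
# -k skips TLS verification, because the API server's serving certificate
# does not list the route hostname as a SAN.
# A version JSON response (or a 403 from the API server) proves the TCP
# stream reached the API server rather than terminating at the router.
curl -k https://api-via-router.apps.demo-01-rhsys.wzhlab.top/version
```

If instead you see the router's default certificate or a 503 application error page, the route is terminating TLS at the router and the `termination: passthrough` setting did not take effect.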

Testing

Test the exposed route by sending requests to the router:

```bash
oc login --insecure-skip-tls-verify=true https://api-via-router.apps.demo-01-rhsys.wzhlab.top -u admin -p redhat
# WARNING: Using insecure TLS client config. Setting this option is not supported!
#
# Login successful.
#
# You have access to 75 projects, the list has been suppressed. You can list all projects with 'oc projects'
#
# Using project "default".

oc get node
# NAME             STATUS   ROLES                  AGE     VERSION
# master-01-demo   Ready    control-plane,master   168m    v1.31.9
# master-02-demo   Ready    control-plane,master   3h18m   v1.31.9
# master-03-demo   Ready    control-plane,master   3h18m   v1.31.9
# worker-01-demo   Ready    worker                 170m    v1.31.9
# worker-02-demo   Ready    worker                 170m    v1.31.9
```

Understanding the Implementation

Let’s examine why this configuration works by looking at the Kubernetes service:

```bash
oc get svc kubernetes -n default -o wide
# NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE    SELECTOR
# kubernetes   ClusterIP   172.22.0.1   <none>        443/TCP   3d1h   <none>

oc get ep kubernetes -o wide
# NAME         ENDPOINTS                                                  AGE
# kubernetes   192.168.99.23:6443,192.168.99.24:6443,192.168.99.25:6443   3d1h
```

The `kubernetes` service has no selector, so Kubernetes does not manage its endpoints automatically. Instead, traffic sent to the service is forwarded to the Endpoints object of the same name, which the API server maintains itself. In this case, those endpoints are the API server instances running on the control plane nodes.
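The same selectorless-service mechanism can be used for your own services. A minimal sketch, with hypothetical names and addresses, pairs a Service that has no selector with a manually created Endpoints object of the same name:

```yaml
# Hypothetical example: a Service with no selector forwards traffic to
# whatever addresses are listed in the Endpoints object of the same name.
apiVersion: v1
kind: Service
metadata:
  name: external-db
  namespace: default
spec:
  ports:
    - port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  # Must match the Service name; this is the only linkage between the two.
  name: external-db
  namespace: default
subsets:
  - addresses:
      - ip: 192.0.2.10
      - ip: 192.0.2.11
    ports:
      - port: 5432
```

The built-in `kubernetes` service works exactly this way, except its Endpoints object is maintained automatically by the API server instances themselves.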

Security Considerations

Important Security Warning: Exposing the Kubernetes API through a router bypasses several security layers that are normally in place. This approach:

- Makes the API server reachable by any client that can reach the router's wildcard application domain
- Bypasses firewall and load-balancer rules that restrict access to the dedicated API endpoint
- Encourages clients to skip TLS verification, since the API server's certificate does not include the route hostname

For production environments, consider using proper load balancers, VPNs, or other secure access methods instead.