Spring REST 2 - Deploying the Spring REST Demo in KIND using Gateway API and Helm

Posted by bit1 on April 6, 2026

Let's deploy the demo Spring Boot application into Kubernetes, using kind as the cluster and cloud-provider-kind to provide HTTP access to it.

Once kind is installed, we will:

  • run cloud-provider-kind to install the CRDs and implement the Gateway API
  • use Helm to:
    • install Postgres, a Gateway, and some namespaces
    • deploy the Spring Demo application
    • seed some data (5,000 customers and related data)
  • run some K6 read and write tests

The repository with everything needed to run the above is tony-waters/spring-boot-kubernetes:

git clone https://github.com/tony-waters/spring-boot-kubernetes.git

Installation

Both kind and cloud-provider-kind can be installed using Go. To install kind:

go install sigs.k8s.io/kind@v0.31.0

For cloud-provider-kind:

go install sigs.k8s.io/cloud-provider-kind@latest
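Both binaries land in $GOBIN (usually ~/go/bin), so make sure that directory is on your PATH. A quick sanity check:

kind version
command -v cloud-provider-kind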

Running the cluster

To be compatible with cloud-provider-kind, we need to create a cluster using a kind node image at version 1.33 or above:

kind create cluster --image kindest/node:v1.33.4
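If you prefer a declarative setup, the same cluster can be described in a config file (a sketch; kind-config.yaml is an arbitrary name):

# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    image: kindest/node:v1.33.4

kind create cluster --config kind-config.yaml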

The cloud-provider-kind GitHub docs suggest removing a label from the single kind node, kind-control-plane, to allow load balancer traffic to reach the control plane, which is excluded by default in a single-node setup (though in my setup the kind-control-plane node did not have this label):

kubectl label node kind-control-plane node.kubernetes.io/exclude-from-external-load-balancers-
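You can check whether your node actually has the label before removing it (if it is absent, the removal command above will just report that the label was not found):

kubectl get node kind-control-plane --show-labels | grep exclude-from-external-load-balancers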

Note that before running cloud-provider-kind there are no Custom Resource Definitions (CRDs) in the cluster:

> kubectl get crd
No resources found

To get the Gateway API CRDs and provide access to the cluster, we need cloud-provider-kind running as a separate application. There are two ways of doing this:


1. Running cloud-provider-kind from a shell

On my local Linux system Go installs the cloud-provider-kind binary in $GOBIN (usually ~/go/bin). We can (and to make our lives easier, should) make it available more generally by installing it into /usr/local/bin:

sudo install ~/go/bin/cloud-provider-kind /usr/local/bin

Then we can run it in a dedicated shell:

cloud-provider-kind
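If you would rather not dedicate a shell to it, one option is to background it and send its output to a log file instead:

cloud-provider-kind > /tmp/cloud-provider-kind.log 2>&1 &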

2. Running cloud-provider-kind from a Docker container

cloud-provider-kind also comes as a Docker image:

docker run -d \
  --name cloud-provider-kind \
  --rm \
  --network host \
  -v /var/run/docker.sock:/var/run/docker.sock \
  registry.k8s.io/cloud-provider-kind/cloud-controller-manager:v0.10.0
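Its output is then available via the Docker logs:

docker logs -f cloud-provider-kind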

Note

Whichever option you use, the cluster must be up and running when you do this step, as it adds the CRDs to the cluster. It's also important to keep cloud-provider-kind running throughout the process. This caught me out once or twice!

There should now be some CRDs in the cluster:

tw:~/Code/spring-boot-kubernetes$ kubectl get crd
NAME                                           CREATED AT
backendtlspolicies.gateway.networking.k8s.io   2026-04-11T20:41:16Z
gatewayclasses.gateway.networking.k8s.io       2026-04-11T20:41:16Z
gateways.gateway.networking.k8s.io             2026-04-11T20:41:16Z
grpcroutes.gateway.networking.k8s.io           2026-04-11T20:41:16Z
httproutes.gateway.networking.k8s.io           2026-04-11T20:41:16Z
referencegrants.gateway.networking.k8s.io      2026-04-11T20:41:16Z
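You should also see a GatewayClass (in my setup named cloud-provider-kind, which is what the Gateway in the next section refers to via gatewayClassName):

kubectl get gatewayclass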

Installing the Spring Demo Infrastructure

In order to run the application we need to make the following changes to the cluster:

  • create namespaces
  • add a Gateway to allow traffic in
  • add a Postgres database

To make things easy to reason about, I have created an over-simplified set of Helm charts for this, keeping values.yaml files to a minimum. Install them from the helm and helm-infra directories. Start by installing the infrastructure - the separate components of helm-infra (namespaces, Gateway, and Postgres) are installed via a parent chart.

helm dependency build ./helm-infra
helm install infra ./helm-infra
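To watch the pieces come up (the namespaces are created by the charts, so the cluster-wide view is the safe bet):

helm list
kubectl get pods -A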

With any luck, Postgres will soon be running in the cluster, and we should have a Gateway with an IP address we can use to access it:

tw:~/spring-boot-kubernetes$ kubectl get gateway -A
NAMESPACE   NAME      CLASS                 ADDRESS      PROGRAMMED   AGE
gateway     gateway   cloud-provider-kind   172.18.0.3   True         114s

If not, check the cloud-provider-kind logs for errors.
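Rather than copy-pasting the address into the curl and K6 commands later, you can capture it in a shell variable (gateway name and namespace as in the output above):

GATEWAY_IP=$(kubectl get gateway gateway -n gateway -o jsonpath='{.status.addresses[0].value}')
echo "$GATEWAY_IP"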


Deploy the application

Once this is all set up we are ready to deploy an actual application. To work with the Kubernetes Gateway API we need an HTTPRoute to connect the Gateway to the application Service.

Here is the Gateway from the previous step:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: application-gateway
  namespace: application-gateway
spec:
  gatewayClassName: cloud-provider-kind
  listeners:
    - name: default
      port: 80
      protocol: HTTP
      allowedRoutes:
        namespaces:
          from: All

Here is what the HTTPRoute looks like:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: springapp
  namespace: application
spec:
  parentRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: application-gateway
      namespace: application-gateway
  hostnames:
    - application
  rules:
    - backendRefs:
        - kind: Service
          name: springapp
          port: 80
      matches:
        - path:
            type: PathPrefix
            value: /

I am asking the Gateway to send any traffic with a Host: application header to the springapp Service on port 80. The Service forwards the request to the application on port 8080:

apiVersion: v1
kind: Service
metadata:
  name: springapp
  namespace: application
spec:
  type: ClusterIP
  selector:
    app: springapp
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080

To deploy the application:

helm install springapp ./helm/springapp
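You can wait for the rollout to finish before testing the route (assuming the chart names the Deployment springapp; adjust if yours differs):

kubectl rollout status deployment/springapp -n application
kubectl get pods -n application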

Once everything is up and running, you should have a working HTTPRoute. Check for “Route is accepted” and “All references resolved” messages:

> kubectl describe httproute springapp -n application

Name:         springapp
Namespace:    application
Labels:       app.kubernetes.io/managed-by=Helm
Annotations:  meta.helm.sh/release-name: springapp
              meta.helm.sh/release-namespace: default
API Version:  gateway.networking.k8s.io/v1
...
Status:
  Parents:
    Conditions:
      Message:               Route is accepted.
      Observed Generation:   1
      Reason:                Accepted
      Status:                True
      Type:                  Accepted
      Last Transition Time:  2026-04-16T18:07:45Z
      Message:               All references resolved
      Observed Generation:   1
      Reason:                ResolvedRefs
      Status:                True
      Type:                  ResolvedRefs
...
Events:              <none>


Seed some data

Optionally, if you want to throw some seed data into the mix to make the tests more realistic, run the seeder. This will create 5,000 customers with related data:

helm install springseed ./helm/springseed
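Assuming the seeder runs as a Kubernetes Job named springseed in the application namespace (check the chart if yours differs), you can follow it with:

kubectl logs -f job/springseed -n application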

You should see this in the logs:

Seed complete: 5000 customers  

Test it works

Using the IP address from the Gateway, we can check if the application is healthy:

curl -H "Host: application" http://172.18.0.3:80/actuator/health/liveness

And talk to the REST API:

curl -H "Host: application" http://172.18.0.3:80/api/customers

Run some K6 tests

I have included read and write tests for K6. Run the write test like this:

k6 run \
  -e TEST_PROFILE=smoke \
  -e BASE_URL=http://172.18.0.3 \
  -e HOST_HEADER=application \
  ./k6/write-test.js

If that's clean, you can try TEST_PROFILE=load and TEST_PROFILE=stress.

Same for the read test:

k6 run \
  -e TEST_PROFILE=smoke \
  -e BASE_URL=http://172.18.0.3 \
  -e HOST_HEADER=application \
  ./k6/read-test.js

Cleanup

All that is left to do is clean up:

kind delete cluster

If you're running cloud-provider-kind in a container you may need to clean up Docker, including volumes[1].
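A minimal sketch of that cleanup (the --rm flag from earlier means stopping the container also removes it; the prune is aggressive and removes any unused Docker resources, not just these):

docker stop cloud-provider-kind
docker system prune --volumes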


Conclusion

That's it! We now have the Spring Demo application running in a Kubernetes kind cluster and accessible via the Gateway API using cloud-provider-kind. We have also added some K6 tests to see how the system holds up.


Notes

  1. I found it easier to zap all running containers and then do a heavy docker prune.
