Deploy Ingress NGINX Controller

Introduction

In this guide, you will deploy the Kubernetes-maintained Ingress NGINX controller using Helm, and create resources to route external traffic to multiple dummy backend services.

Info

Kubernetes Ingress is a Kubernetes resource that manages external access to the services in a cluster by exposing HTTP or HTTPS routes. Traffic routing is controlled by rules defined on the Ingress resource.

However, an Ingress resource alone does nothing — it needs an Ingress controller to interpret the rules and handle the traffic. The controller acts as a reverse proxy and load balancer, routing external traffic to backend services according to the Ingress rules.

graph LR;
  client([client])-. Ingress controller as <br> load balancer .->ingress[Ingress];
  ingress-->|routing rule|service[Service];
  subgraph cluster
  ingress;
  service-->pod1[Pod];
  service-->pod2[Pod];
  end
  classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
  classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
  classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
  class ingress,service,pod1,pod2 k8s;
  class client plain;
  class cluster cluster;

Step 0: Prerequisites

Install Helm

Helm is a tool that helps you manage applications on Kubernetes. It allows you to define, install, and upgrade applications using configuration templates called Helm charts. Helm is often described as the package manager for Kubernetes — it simplifies the process of deploying and maintaining even complex applications.

Ensure that Helm is installed locally. If it isn't, follow the official Helm installation guide.
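
To quickly confirm that Helm is available on your machine, you can print its version; this guide assumes Helm 3:

helm version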

Download Templates

Download the two templates, backends.yaml and ingress.yaml, and store them in a dedicated folder, or copy and paste them from below:

backends.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cheddar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cheddar
  template:
    metadata:
      labels:
        app: cheddar
    spec:
      containers:
      - image: errm/cheese:cheddar
        name: cheese
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stilton
spec:
  replicas: 1
  selector:
    matchLabels:
      app: stilton
  template:
    metadata:
      labels:
        app: stilton
    spec:
      containers:
      - image: errm/cheese:stilton
        name: cheese
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wensleydale
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wensleydale
  template:
    metadata:
      labels:
        app: wensleydale
    spec:
      containers:
      - image: errm/cheese:wensleydale
        name: cheese
---
apiVersion: v1
kind: Service
metadata:
  name: cheddar
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: cheddar
---
apiVersion: v1
kind: Service
metadata:
  name: stilton
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: stilton
---
apiVersion: v1
kind: Service
metadata:
  name: wensleydale
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: wensleydale
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cheddar
spec:
  ingressClassName: nginx
  rules:
    - host: cheddar.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: cheddar
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: stilton
spec:
  ingressClassName: nginx
  rules:
    - host: stilton.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: stilton
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wensleydale
spec:
  ingressClassName: nginx
  rules:
    - host: wensleydale.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: wensleydale
                port:
                  number: 80

Step 1: Deploy the Ingress NGINX Controller

Using Helm, you can install the Ingress NGINX controller with all its default values using the following command:

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace

If the Ingress controller is already installed, this command upgrades it; otherwise, it installs it. The controller is deployed into the ingress-nginx Namespace, which Helm creates automatically if it does not exist (--create-namespace).
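
To confirm that the release exists and that the controller Pod is running, you can additionally run the following standard Helm and kubectl commands:

helm list --namespace ingress-nginx
kubectl get pods --namespace ingress-nginx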

The output should look similar to this:

Example output
Release "ingress-nginx" does not exist. Installing it now.
NAME: ingress-nginx
LAST DEPLOYED: Tue Apr  8 13:52:20 2025
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the load balancer IP to be available.
You can watch the status by running 'kubectl get service --namespace ingress-nginx ingress-nginx-controller --output wide --watch'

As mentioned in the example output above, you can check the status of your installation by running:

kubectl get service --namespace ingress-nginx ingress-nginx-controller --output wide --watch
Example output
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE   SELECTOR
ingress-nginx-controller   LoadBalancer   10.240.23.162   <pending>       80:31329/TCP,443:32034/TCP   2h    app=ingress-nginx

Notice the <pending> status of the external IP — as soon as it becomes available, you can enter it in your browser. You should receive a 404 Not Found error message from NGINX. You can also run:

curl <external_ip>
Example output
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>

This is expected — the Ingress controller is reachable, but no backend services are deployed yet, and no Ingress resource has been created yet to define the routing rules.

Now you can deploy some dummy backend services, and then create Ingress resources for them. The Ingress controller load balancer will route HTTP(S) traffic to the backend services configured in these Ingress resources.

Configuration Options (Optional)

The ingress-nginx Helm chart with its default configuration works out of the box. However, for more advanced use cases, some configuration options should be considered.

Configuration options
controller:
  replicaCount: 1

  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 2
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50

  ingressClassResource:
    name: nginx
    enabled: true
    default: true

  service:
    loadBalancerIP: ""
    loadBalancerSourceRanges: []
    annotations:
      "helm.sh/resource-policy": keep
      "loadbalancer.openstack.org/keep-floatingip": "true"
  • replicaCount: The number of replicas of the Ingress controller Deployment. Consider increasing this value when scalability or high availability is a requirement for your applications.

  • autoscaling: An alternative to a fixed number of replicas is autoscaling via a HorizontalPodAutoscaler. The number of Ingress controller Pods is scaled according to the target CPU and memory utilization, and it never goes below minReplicas or above maxReplicas.

  • ingressClassResource: The IngressClass represents the class of the Ingress, referenced by the Ingress spec. Setting default to true marks this IngressClass as the cluster default. When a single IngressClass resource is marked as default, new Ingress resources without a class specified are assigned this default class (you can verify this with the commands shown after this list).

  • loadBalancerIP: The static IP address for the load balancer.

    Important

    This will only work if the specified floating IP is assigned to the corresponding OpenStack project and the IP address is not already in use.

  • loadBalancerSourceRanges: The IP ranges (CIDR) that are allowed to access the load balancer. By default, all access is allowed, but once one or more IP ranges are defined, all other access is prohibited.

  • The annotation "helm.sh/resource-policy": keep instructs Helm to skip deleting this resource when a Helm operation (such as helm uninstall, helm upgrade or helm rollback) would otherwise delete it. The resource then becomes orphaned: Helm no longer manages it in any way.

  • If "loadbalancer.openstack.org/keep-floatingip" is set to true, the floating IP that is associated to the load balancer will be kept associated to the corresponding OpenStack project. Therefore, not specifying this value might result in losing this floating IP.

To apply the configuration options shown above, save them in a file named values.yaml. Then, install or upgrade the ingress-nginx release with this custom configuration by using the --values flag in your Helm command:

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --values values.yaml

This will override the default Helm chart values with your custom settings from values.yaml.
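
If you later want to double-check which custom values are active for the release, Helm can print them:

helm get values ingress-nginx --namespace ingress-nginx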

Step 2: Deploy Dummy Backend Services

Now that the Ingress NGINX controller is running, you are ready to create some content in the form of websites. In this guide, you will use the errm/cheese images, which are available in three tags: cheddar, stilton, and wensleydale. Each image contains a simple HTTP server that serves a picture of the corresponding cheese.

Tip

If you don't like cheese, or if you just want to get some practice, you can deploy any other dummy backend, such as the paulbouwer/hello-kubernetes image with some custom messages.
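
As a rough sketch of such an alternative, the following Deployment and Service use the paulbouwer/hello-kubernetes image with a custom message. The image tag, the MESSAGE environment variable, and the container port 8080 are assumptions based on that image's documentation, so adjust them as needed:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - image: paulbouwer/hello-kubernetes:1.10
        name: hello
        env:
        # Custom page message (assumed to be supported by this image).
        - name: MESSAGE
          value: "Hello from the Ingress NGINX guide"
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: ClusterIP
  ports:
  # The hello-kubernetes container is assumed to listen on port 8080.
  - port: 80
    targetPort: 8080
  selector:
    app: hello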

Using the template provided in Download Templates, run the following command:

kubectl apply --filename backends.yaml

This will create three Deployments and three Services (one Deployment and one Service per cheese), with each Service exposing its backend Pods on port 80.
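
To confirm that the Deployments, Services, and Pods were created successfully, you can run:

kubectl get deployments,services
kubectl get pods --selector app=cheddar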

backends.yaml explained
apiVersion: apps/v1
kind: Deployment
metadata:
  # Name of the Deployment.
  name: cheddar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cheddar
  # Template describes the Pods that will be created.
  template:
    metadata:
      labels:
        app: cheddar
    spec:
      # Containers belonging to the Pod.
      containers:
      # Use the cheddar tag of the errm/cheese Docker image for this container.
      - image: errm/cheese:cheddar
        name: cheese
---
apiVersion: v1
kind: Service
metadata:
  # Name of the Service.
  name: cheddar
spec:
  # Type of Service is ClusterIP (default, can be omitted).
  type: ClusterIP
  ports:
  # Expose Service on port 80.
  - port: 80
  selector:
    # Route Service traffic to Pods whose labels match this selector (the app label set in the Deployment's Pod template above).
    app: cheddar

Step 3: Expose Backends With Ingress

You have deployed your backend applications, each exposed internally by a Service resource, but the Ingress NGINX controller does not yet know how to route external traffic to them. To do so, you need to create Ingress resources.

For each backend service, you need a hostname under which to expose it. For this, you have the following options:

  • Create a DNS A record for each backend service (cheddar.example.com, stilton.example.com, wensleydale.example.com) that points to your Ingress controller load balancer.

  • Use sslip.io, a wildcard DNS service that resolves any hostname containing an IP address to that IP. With this option, you can use the hostname cheddar.<load_balancer_ip>.sslip.io (and likewise for stilton and wensleydale), as illustrated below.
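
For illustration, assuming the sslip.io option and a load balancer IP of 203.0.113.10 (a placeholder address, substitute your own), the host values in ingress.yaml would look like this:

    - host: cheddar.203.0.113.10.sslip.io
    - host: stilton.203.0.113.10.sslip.io
    - host: wensleydale.203.0.113.10.sslip.io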

In the ingress.yaml template provided, adjust the host value under spec.rules in each Ingress accordingly. Then, run the following command to create three Ingress resources with the specified hosts:

kubectl apply --filename ingress.yaml
ingress.yaml explained
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cheddar
spec:
  # Name of the IngressClass cluster resource.
  # The associated IngressClass defines which controller will implement the resource.
  ingressClassName: nginx
  rules:
    # Fully qualified domain name of a network host.
    - host: cheddar.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            # Defines the referenced service endpoint to which the traffic will be forwarded to.
            backend:
              service:
                name: cheddar
                port:
                  number: 80

Info

If you set ingressClassResource.default to true in your ingress-nginx Helm chart configuration (see Configuration Options), you can omit the ingressClassName: nginx field. New Ingress resources without an explicit class are then automatically assigned the default IngressClass and handled by the Ingress NGINX controller.

To verify that the Ingress resources are working, navigate to one of your hostnames (for example cheddar.example.com) in your browser. You should see an image of the corresponding cheese. If you open another hostname, an image of a different cheese should be returned. With this, you have verified that the Ingress NGINX controller correctly routes your requests to the corresponding Services.
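
Alternatively, or if your DNS records have not propagated yet, you can test from the command line by sending a request with an explicit Host header directly to the load balancer; cheddar.example.com and <external_ip> are placeholders for your actual hostname and IP:

curl --header "Host: cheddar.example.com" http://<external_ip>/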

Conclusion

Congratulations! You have now successfully deployed the Ingress NGINX controller, deployed some dummy backend services, and then created and configured minimal Ingress resources to serve those services at your domain names.

As a next step, you could consider installing the cert-manager cluster add-on in order to provision TLS certificates and secure your Ingress resources. For instructions, see our guide: Secure Ingress with TLS Using cert-manager.