Exposing services in a Kubernetes cluster without an ingress controller resembles running a business without a receptionist. Traffic arrives, but nobody routes it intelligently, nobody terminates TLS, and nobody enforces the rules that production demands. The combination of ingress-nginx and cert-manager has become the default answer for teams that want a single, predictable front door for their applications, complete with certificates that renew themselves while nobody watches. This article walks through a complete deployment of that stack, from installing the controller through Helm to verifying that the first HTTPS request returns a valid Let's Encrypt certificate, with the trade-offs and production settings that separate a hobby setup from a serious one.

Why Ingress Controllers And Automated Certificate Management Became Non Negotiable In Modern Kubernetes Environments

A raw Kubernetes cluster knows how to run pods and expose them through Services, but it has no native concept of HTTP routing, TLS termination, or virtual hosts. An ingress controller fills that gap. It watches the Kubernetes API for Ingress resources, compiles them into an HTTP proxy configuration, and serves traffic to the right backend based on hostnames and paths.

Ingress-nginx, the community project maintained under the Kubernetes umbrella, remains the most widely deployed choice despite a planned end-of-life transition to Gateway API in 2026. It runs nginx under the hood, supports advanced routing annotations, integrates with virtually every certificate authority through cert-manager, and exposes Prometheus metrics out of the box. Teams starting greenfield projects today should keep one eye on that Gateway API migration path while still benefiting from the maturity of ingress-nginx.

Certificates used to be the bane of operations teams. Somebody remembered to renew them until somebody forgot, and then a weekend got ruined. Let's Encrypt changed the economics by issuing free certificates through an automated ACME protocol. Cert-manager brought that automation inside Kubernetes, turning certificates into declarative custom resources that live next to the workloads they protect. The combination removes an entire class of operational pain.
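The declarative model looks like this in practice. Below is a minimal sketch of a Certificate resource; the names and hostname are illustrative, and it assumes a ClusterIssuer called letsencrypt-production exists, which this article creates later.

```yaml
# Minimal Certificate sketch; cert-manager reconciles it into the named Secret.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: demo-cert
  namespace: default
spec:
  secretName: demo-cert-tls
  issuerRef:
    name: letsencrypt-production
    kind: ClusterIssuer
  dnsNames:
    - demo.example.com
```

Most teams never write this resource by hand, because cert-manager generates it automatically from Ingress annotations, but seeing the underlying object makes the rest of the pipeline easier to follow.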

Installing Ingress Nginx Through Helm With Production Oriented Values

Helm makes the installation tidy and reproducible. The official ingress-nginx chart accepts a values file that covers replica counts, resource limits, TLS defaults, and metric exposure. Adding the repository and preparing the namespace takes seconds.

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
kubectl create namespace ingress-nginx

A production-flavoured values file pins the settings that actually matter. Defaults work in a demo but leave gaps in a serious deployment.

# ingress-nginx-values.yaml
controller:
  replicaCount: 2

  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 500m
      memory: 512Mi

  podDisruptionBudget:
    enabled: true
    minAvailable: 1

  metrics:
    enabled: true
    serviceMonitor:
      enabled: true

  config:
    ssl-protocols: "TLSv1.2 TLSv1.3"
    ssl-prefer-server-ciphers: "true"
    hsts: "true"
    hsts-include-subdomains: "true"
    hsts-max-age: "31536000"
    use-forwarded-headers: "true"
    # 0.0.0.0/0 trusts forwarded headers from any source; narrow this to the
    # load balancer's address range in production to prevent client IP spoofing
    proxy-real-ip-cidr: "0.0.0.0/0"

  service:
    type: LoadBalancer

Installing the controller with these values produces a stable baseline that supports modern TLS only, enforces HSTS, and exposes metrics for monitoring.

helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  -f ingress-nginx-values.yaml

After a minute or two the cloud provider provisions a load balancer and assigns an external IP. The wait is normal, and impatience during this step has caused more misdiagnoses than any real bug.

kubectl get svc -n ingress-nginx
kubectl get pods -n ingress-nginx

Once the LoadBalancer shows a real address, every DNS record for the intended hostnames needs to point at it. Without working DNS, nothing downstream will function, because Let's Encrypt validates domain ownership by reaching back to that exact address.
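A quick sanity check saves time here. This sketch assumes the controller Service keeps its default name ingress-nginx-controller, that dig is available, and that demo.example.com is the hostname being configured:

```shell
# Compare the load balancer address with what public DNS returns for the host
LB_IP=$(kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
DNS_IP=$(dig +short demo.example.com | tail -n 1)
if [ "$LB_IP" = "$DNS_IP" ]; then
  echo "DNS matches the load balancer: $LB_IP"
else
  echo "Mismatch: load balancer is $LB_IP but DNS resolves to $DNS_IP"
fi
```

On providers that hand out DNS names instead of IPs for load balancers, compare against .status.loadBalancer.ingress[0].hostname instead.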

Deploying Cert Manager Into Its Own Namespace With Custom Resource Definitions

Cert-manager lives in its own namespace and ships as a set of custom resource definitions plus controllers that reconcile them. The installation handles both pieces in a single Helm command when CRD installation is enabled; recent chart versions prefer the crds.enabled value, with the older installCRDs flag kept as a deprecated alias.

helm repo add jetstack https://charts.jetstack.io
helm repo update

helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.16.0 \
  --set installCRDs=true \
  --set global.leaderElection.namespace=cert-manager

Three pods should appear within a minute. A quick check confirms the health of the installation before moving on.

kubectl get pods -n cert-manager

The three pods handle distinct responsibilities. The main controller reconciles Certificate and CertificateRequest resources and talks to the configured issuers, the cainjector injects CA bundles into webhook and CRD configurations, and the webhook validates and converts custom resources at admission time. Understanding this separation helps later when troubleshooting, because logs from each pod cover different parts of the pipeline.

Creating Cluster Issuers For Let's Encrypt Staging And Production Environments

Cert-manager distinguishes between Issuers, which live in a single namespace, and ClusterIssuers, which apply across the entire cluster. For most deployments the ClusterIssuer is the right choice because certificates for many namespaces flow through the same authority.

Two ClusterIssuers are customary from day one. The staging issuer uses the Let's Encrypt staging server, which has generous rate limits and is ideal for testing. The production issuer hits the real endpoint and should only be invoked once the staging flow works end to end. Burning through production rate limits during configuration experiments is a classic beginner mistake.

# cluster-issuers.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: admin@example.com  # replace with a real contact address for expiry notices
    privateKeySecretRef:
      name: letsencrypt-staging-key
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com  # replace with a real contact address for expiry notices
    privateKeySecretRef:
      name: letsencrypt-production-key
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx

Applying both resources registers the cluster with Let's Encrypt and prepares it to solve HTTP-01 challenges through the ingress-nginx controller.

kubectl apply -f cluster-issuers.yaml
kubectl get clusterissuers

The READY column should show True for both issuers within a few seconds. If it stays False, the events on the resource usually reveal the problem immediately.
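Once a staging certificate issues cleanly, switching an Ingress over to production takes two steps: repoint the annotation, then delete the staging Secret so cert-manager requests a fresh certificate. The resource names here are illustrative and match the demo used later in this article:

```shell
# Point the Ingress at the production issuer
kubectl annotate ingress demo-ingress -n default \
  cert-manager.io/cluster-issuer=letsencrypt-production --overwrite
# Remove the staging certificate so cert-manager requests a production one
kubectl delete secret demo-tls -n default
```

Deleting the Secret matters: cert-manager considers a staging certificate valid and will not replace it on its own just because the issuer annotation changed.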

Understanding The Difference Between HTTP 01 And DNS 01 Challenge Methods

Let's Encrypt verifies domain ownership through one of two challenge types, and the choice shapes the entire architecture:

  • HTTP-01 challenges require port 80 to be open to the public internet, and Let's Encrypt reaches a special path on the ingress to confirm control
  • DNS-01 challenges require API access to the DNS provider so cert-manager can publish a temporary TXT record, and they work behind firewalls that block inbound HTTP entirely
  • Wildcard certificates covering *.example.com are only possible through DNS-01, which is the main reason teams with many subdomains adopt it early
  • Multi-cloud or hybrid deployments often mix both methods, using HTTP-01 for public-facing services and DNS-01 for anything inside a private network
  • Rate limits apply per registered domain, making careful staging testing more valuable than it first appears

Most single-cluster deployments start with HTTP-01 because it requires nothing beyond a DNS A record pointing at the ingress. Teams expand to DNS-01 later when wildcard coverage or private ingress becomes necessary.
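For comparison, here is a DNS-01 solver fragment for a ClusterIssuer, using Cloudflare as one example provider. The Secret name and key are assumptions, and every supported DNS provider has its own stanza documented by cert-manager:

```yaml
# DNS-01 solver fragment (replaces the http01 solver in a ClusterIssuer spec).
# Assumes a Secret named cloudflare-api-token with an api-token key exists
# in the cert-manager namespace.
solvers:
  - dns01:
      cloudflare:
        apiTokenSecretRef:
          name: cloudflare-api-token
          key: api-token
    selector:
      dnsZones:
        - example.com
```

The selector restricts this solver to a single zone, which lets HTTP-01 and DNS-01 solvers coexist inside one issuer.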

Creating An Ingress Resource That Automatically Triggers Certificate Issuance

With the controller running and the issuers ready, a standard Ingress resource becomes the trigger for automatic certificate provisioning. The magic lives in a single annotation that tells cert-manager which issuer to use, plus a tls block that names the Secret where the resulting certificate will land.

# demo-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-production
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - demo.example.com
      secretName: demo-tls
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service
                port:
                  number: 80

Applying the manifest sets the chain in motion. Cert-manager notices the tls block, creates a Certificate resource, which spawns a CertificateRequest, which in turn creates an Order and a Challenge. Ingress-nginx temporarily serves the HTTP-01 challenge path, Let's Encrypt validates ownership, and the signed certificate lands in the demo-tls Secret. All of this completes in under a minute when conditions are healthy.

kubectl apply -f demo-ingress.yaml
kubectl get certificate demo-tls
kubectl describe certificate demo-tls
kubectl get secret demo-tls -o yaml

A certificate in the Ready: True state signals success. The browser should then show a valid padlock on https://demo.example.com, signed by Let's Encrypt, with cert-manager renewing automatically about thirty days before expiry, two thirds of the way through the certificate's 90-day lifetime.
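The certificate can also be inspected without a browser. This sketch assumes the default namespace and the demo-tls Secret from the manifest above:

```shell
# Decode the issued certificate from the Secret and print its issuer and validity window
kubectl get secret demo-tls -n default -o jsonpath='{.data.tls\.crt}' \
  | base64 -d \
  | openssl x509 -noout -subject -issuer -dates
```

The issuer line distinguishes a staging certificate from a production one at a glance, which makes this a useful first check when a browser rejects the padlock.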

Troubleshooting Common Failure Modes In The Certificate Issuance Pipeline

Things break. The most common failure is a certificate stuck in a pending state, which usually traces back to one of a handful of root causes. Checking the Challenge resource provides the fastest diagnosis.

kubectl get challenges -A
kubectl describe challenge <name>
kubectl logs -n cert-manager -l app=cert-manager --tail=100

The events typically name the problem directly. DNS not resolving to the ingress IP is the top offender. Firewall rules blocking inbound port 80 come second. Rate limits on the production Let's Encrypt endpoint sit in third place, and they are why staging exists. A subtle gotcha involves incorrect ingressClassName values that leave the challenge path served by the wrong controller, especially in clusters where multiple ingress solutions coexist.

Another classic trap surfaces when the Ingress resource specifies a host that does not match any DNS record. Cert-manager will happily attempt validation anyway, Let's Encrypt will fail the challenge, and the error message will mention NXDOMAIN. Checking DNS resolution from outside the cluster before blaming Kubernetes saves considerable time.

Hardening The Setup With Security Headers Rate Limits And Monitoring

A baseline HTTPS deployment is only the starting line. Production ingress configurations benefit from additional hardening applied through annotations or ConfigMap values. Security headers, rate limits, and connection timeouts all belong in the shared controller config rather than duplicated across every Ingress resource.

# Additional controller hardening (Helm values fragment feeding the nginx ConfigMap)
config:
  server-tokens: "false"
  proxy-connect-timeout: "10"
  proxy-read-timeout: "60"
  proxy-send-timeout: "60"
  limit-req-status-code: "429"
  use-gzip: "true"
  enable-brotli: "true"
  custom-http-errors: "404,503"

Monitoring completes the picture. With metrics.enabled turned on in the Helm values, Prometheus can scrape request counts, latencies, connection pool sizes, and certificate expiry timestamps. A simple Grafana dashboard plus an alert on certmanager_certificate_expiration_timestamp_seconds dropping below thirty days of remaining validity catches renewal failures long before they become outages.
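Expressed as a Prometheus Operator rule, that alert might look like the sketch below. The resource name and threshold are illustrative, and it assumes the Prometheus Operator CRDs are installed; the metric name is the one cert-manager exports:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: certificate-expiry
  namespace: cert-manager
spec:
  groups:
    - name: certificates
      rules:
        - alert: CertificateExpiringSoon
          # Fires when any certificate has less than 30 days of validity left
          expr: certmanager_certificate_expiration_timestamp_seconds - time() < 30 * 24 * 3600
          for: 1h
          labels:
            severity: warning
          annotations:
            summary: "Certificate {{ $labels.name }} in {{ $labels.namespace }} expires in under 30 days"
```

Thirty days leaves roughly the same window cert-manager itself uses for renewal, so a firing alert means at least one automatic renewal attempt has already failed.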

Final Thoughts On Running Ingress Nginx And Cert Manager Together In Real Environments

The combination described here has quietly become infrastructure that most teams never think about, which is the highest compliment any operational tool can earn. Ingress-nginx handles routing with predictable nginx semantics, cert-manager handles certificates without human involvement, and the interplay between them turns what used to be a quarterly fire drill into an invisible background task. The only reason anybody remembers certificates exist is when a browser padlock disappears, and even then the fix is usually a misconfigured DNS record rather than anything Kubernetes did wrong.

Looking forward, the migration from ingress-nginx toward Gateway API will reshape parts of this stack over the coming years, but the fundamental pattern of declarative certificates living alongside workloads is here to stay. Teams adopting this setup today gain production-grade HTTPS on day one and set themselves up for a smooth evolution as the Kubernetes ingress ecosystem continues to mature.