
Prerequisites

  • A running Kubernetes cluster.
  • Helm 3 or above. See Installing Helm.
  • Access to a Postgres instance (or use the default Postgres chart installed by Convoy’s Helm chart)
  • Access to a Redis instance (or use the default Redis chart installed by Convoy’s Helm chart)
The Convoy Helm chart depends on the Bitnami Postgres and Redis charts and installs them by default for testing and evaluation. For production, use managed or self-hosted Postgres and Redis instead of these defaults, and configure the chart to point at your external databases.

Steps

Add Convoy’s chart repository to Helm:
helm repo add convoy https://frain-dev.github.io/helm-charts
Update the chart repository:
helm repo update
Install the chart with default values:
helm install convoy convoy/convoy --namespace convoy --create-namespace
Install the chart with a custom values.yaml:
helm install convoy convoy/convoy --namespace convoy --create-namespace --values values.yaml
Upgrade the chart:
helm upgrade convoy convoy/convoy --namespace convoy --values values.yaml
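After installing or upgrading, you can confirm that the release deployed and the pods came up (the release name and namespace below match the commands above):

```shell
# Check the Helm release status
helm status convoy --namespace convoy

# Watch the Convoy pods; all should reach Running/Ready
kubectl get pods --namespace convoy --watch
```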

Upgrading

Upgrading to Convoy v24.8.x (Convoy Helm charts v3.1.0)

To upgrade to Convoy v24.8.x, you need to update the chart to the 3.x.x series. See the Convoy Helm chart on ArtifactHub for the latest 3.x.x version. In this release, we have gated a number of features behind a license. If you use them, these features will cease to work until you provide a license key. Read here to learn about all the paid features, and here to learn how to get a license key.
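Assuming you have already added the `convoy` repo as shown in the Steps above, an upgrade to the 3.x.x chart series might look like the following (the version flag is a placeholder; check ArtifactHub for the actual latest 3.x.x release):

```shell
helm repo update
# Pin an explicit 3.x.x chart version rather than floating to the latest
helm upgrade convoy convoy/convoy --namespace convoy --version X.Y.Z --values values.yaml
```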

Upgrading the Convoy version without upgrading the chart

Change the image tag values to the Convoy version you want. The chart exposes a global tag that fans out to all Convoy components (server and agent), and per-component tags if you need finer control:
values.yml
# Recommended: update all Convoy components at once
global:
    convoy:
        tag: vX.Y.Z # replace with the Convoy version you want to run (for example v26.1.4; see Convoy Releases for the latest version)
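If you only want to bump a single component, the chart also exposes per-component image tags. The exact keys below (`server.image.tag`, `agent.image.tag`) are assumptions based on common chart layouts; confirm them against `helm show values convoy/convoy` before using them:

```yaml
# Assumed per-component overrides; verify key names with `helm show values convoy/convoy`
server:
    image:
        tag: vX.Y.Z
agent:
    image:
        tag: vX.Y.Z
```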

Production deployment (managed Postgres and Redis)

For production, we strongly recommend using managed Postgres and Redis instead of the bundled Bitnami charts. Disable the bundled Postgres and Redis charts and switch to external services:
values.yml
postgresql:
    enabled: false

redis:
    enabled: false

global:
    externalDatabase:
        enabled: true # use an external / managed Postgres
    nativeRedis:
        enabled: false # do not deploy the in-cluster Redis
    externalRedis:
        enabled: true # use an external / managed Redis

Example: managed Postgres (RDS, CloudSQL, etc.)

Point Convoy at a managed Postgres instance by configuring global.externalDatabase and disabling the in-cluster Postgres chart:
values.yml
global:
    externalDatabase:
        enabled: true
        host: my-postgres-host.rds.amazonaws.com
        port: 5432
        database: convoy
        username: convoy
        # When 'secret' is set, the inline password values are ignored.
        # Create a Kubernetes Secret with key 'password' containing your DB password.
        secret: convoy-postgres
        # options: "sslmode=require&connect_timeout=30" # example for managed Postgres with TLS

postgresql:
    enabled: false
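The `secret: convoy-postgres` reference above expects a Kubernetes Secret with a `password` key in Convoy's namespace. One way to create it (the password shown is a placeholder):

```shell
kubectl create secret generic convoy-postgres \
  --namespace convoy \
  --from-literal=password='your-db-password'
```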

Example: managed Redis

Use a managed Redis (e.g. AWS ElastiCache, GCP Memorystore, Azure Managed Redis, etc.) by configuring global.externalRedis, disabling global.nativeRedis, and disabling the in-cluster Redis chart:
values.yml
global:
    nativeRedis:
        enabled: false
    externalRedis:
        enabled: true
        host: my-redis-host.cache.amazonaws.com
        port: '6379'
        scheme: 'rediss' # use 'rediss' for TLS-enabled Redis endpoints
        database: '0'
        # When 'secret' is set, the inline password values are ignored.
        # Create a Kubernetes Secret with key 'password' containing your Redis password.
        secret: convoy-redis

redis:
    enabled: false
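As with Postgres, the `secret: convoy-redis` reference expects a Secret with a `password` key in Convoy's namespace (placeholder password shown):

```shell
kubectl create secret generic convoy-redis \
  --namespace convoy \
  --from-literal=password='your-redis-password'
```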

Redis Sentinel

Use Redis Sentinel when you want Convoy to discover the current Redis primary through Sentinels (typical for HA deployments). The Convoy Helm chart wires this through global.externalRedis: set scheme: redis-sentinel, point host at a Sentinel endpoint, and use port: '26379' (unless your provider uses a different Sentinel port).
global.nativeRedis.enabled: true configures Convoy for a single Redis host and port (6379). For Sentinel, set global.nativeRedis.enabled: false and global.externalRedis.enabled: true so server and agent receive CONVOY_REDIS_SCHEME=redis-sentinel and related variables.
Helm values (external / managed Redis with Sentinel):
values.yml
global:
    nativeRedis:
        enabled: false
    externalRedis:
        enabled: true
        host: redis-sentinel.example.com # DNS of your Sentinel service or load balancer
        port: '26379'
        scheme: redis-sentinel
        sentinelMasterName: mymaster # logical master name your Sentinels monitor (provider-specific)
        password: your-redis-password # AUTH to the Redis primary after Sentinel discovery
        sentinelPassword: '' # set if Sentinels require a password (same as Redis on many setups)
        sentinelUsername: '' # optional ACL username for Sentinel
        database: '0'
        secret: '' # optional: Secret name with key 'password' for Redis (inline password ignored)
        sentinelSecret: '' # optional: Secret with key 'password' for Sentinel auth

redis:
    enabled: false # no in-cluster Redis when using external Sentinel
  • sentinelMasterName must match the master name configured in your Sentinel deployment (for example Bitnami’s default is often mymaster; AWS ElastiCache publishes its replication group / primary name in the console).
  • password is the Redis primary password (used for AUTH after Sentinel discovers the primary). sentinelPassword / sentinelUsername are only needed when Sentinels enforce authentication.
  • Secrets (recommended for production):
    • global.externalRedis.secret — Kubernetes Secret name. When set, the chart sets CONVOY_REDIS_PASSWORD from secretKeyRef with key password; the inline password value is ignored. Create the Secret in the same namespace as Convoy (e.g. stringData.password).
    • global.externalRedis.sentinelSecret — Secret name for the Sentinel password. When set, CONVOY_REDIS_SENTINEL_PASSWORD is sourced from that Secret with key password; inline sentinelPassword is ignored. There is no separate Secret reference for sentinelUsername today—use sentinelUsername in values if your Sentinels need an ACL username.
  • If global.externalRedis.addresses is set (cluster mode), it takes precedence over host; use the non-addresses form above for a single Sentinel entry point unless your operator documents otherwise.
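A sketch of the two Secrets referenced by `secret` and `sentinelSecret`, using `stringData` so the values can be written as plain text in the manifest (the names and passwords here are placeholders; use whatever names you set in values):

```yaml
apiVersion: v1
kind: Secret
metadata:
    name: convoy-redis
    namespace: convoy
stringData:
    password: your-redis-password
---
apiVersion: v1
kind: Secret
metadata:
    name: convoy-redis-sentinel
    namespace: convoy
stringData:
    password: your-sentinel-password
```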
If server.rollout.enabled or agent.rollout.enabled is true (Argo Rollouts), the rollout templates may not yet emit all Redis Sentinel environment variables (including sentinelSecret). Use the default Deployment-based install for Sentinel, or confirm rendered manifests with helm template before relying on Rollouts with redis-sentinel.
In-cluster Bitnami Redis (replication + Sentinel) for testing: you can enable Sentinel on the dependency chart (redis.architecture: replication, redis.sentinel.enabled: true) while still pointing Convoy at global.externalRedis. Important details:
  • With fullnameOverride: redis, Bitnami usually exposes 6379 and 26379 on Services named redis and redis-headless. There is typically no Service called redis-sentinel; set global.externalRedis.host: redis (or redis-headless to discover all pods) and port: '26379'.
  • When nativeRedis.enabled is false, keep redis.auth.enabled: true in the Bitnami subchart so the bundled Redis stays password-protected (the chart may tie auth to nativeRedis by default).
  • Bitnami’s default Sentinel image tags can be removed from Docker Hub; pin redis.sentinel.image to the same bitnamilegacy/redis-sentinel tag line as redis.image if pulls fail.
  • redis.replica.replicaCount in recent Bitnami node layouts is the number of redis-node-* pods (each runs Redis + Sentinel sidecar); scale it for multi-node tests.
Verify from the cluster:
kubectl run redis-check --rm -it --restart=Never -n convoy --image=docker.io/bitnamilegacy/redis:8.2.1-debian-12-r0 -- \
  redis-cli -h redis -p 26379 -a YOUR_PASSWORD SENTINEL get-master-addr-by-name mymaster
(Use your namespace, host, password, and master name. From your laptop, use kubectl port-forward svc/redis 26379:26379 and redis-cli -h 127.0.0.1 -p 26379.) See Redis Sentinel in Configuration and the redis object in the full configuration reference (master_name, sentinel_password, etc.).

Connectivity and startup behaviour

  • If Postgres or Redis are unreachable (DNS, firewall, wrong credentials, TLS issues), Convoy pods will fail to start and may go into CrashLoopBackOff. Check the pod logs for connection errors.
  • Helm itself will succeed in creating resources, but the deployment will not become healthy until Convoy can connect to the database and Redis.
  • For managed services with TLS, ensure you set appropriate connection options (for Postgres) or scheme: rediss (for Redis), and that your network policies / security groups allow traffic from the cluster to the managed service.
  • Both the server and agent deployments include a wait-for-migrate init container that runs migrate up before the application starts. If migrations cannot run (for example, the database is unreachable), these init containers will fail and the pods will not become Ready.
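When pods are stuck, the following commands usually surface the root cause (the namespace assumes the install commands above; replace `<pod-name>` with a real pod name):

```shell
# Overall pod state: look for CrashLoopBackOff or Init errors
kubectl get pods --namespace convoy

# Events and init-container status for a failing pod
kubectl describe pod <pod-name> --namespace convoy

# Logs from the migration init container
kubectl logs <pod-name> --namespace convoy -c wait-for-migrate
```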

Scaling and resources

Use server.app.resources and agent.app.resources to set CPU/memory requests and limits, and server.autoscaling / agent.autoscaling to enable horizontal autoscaling. Start with conservative values and tune based on your event volume and observed CPU/memory usage:
values.yml
server:
    app:
        resources:
            requests:
                cpu: 500m
                memory: 1Gi
            limits:
                cpu: 1
                memory: 2Gi
    autoscaling:
        enabled: true
        minReplicas: 2
        maxReplicas: 10
        targetCPUUtilizationPercentage: 70
        targetMemoryUtilizationPercentage: 80

agent:
    app:
        resources:
            requests:
                cpu: 500m
                memory: 1Gi
            limits:
                cpu: 1
                memory: 2Gi
    autoscaling:
        enabled: true
        minReplicas: 2
        maxReplicas: 10
        targetCPUUtilizationPercentage: 70
        targetMemoryUtilizationPercentage: 80
For higher traffic workloads, increase the requests/limits and allow higher maxReplicas, and use metrics from your cluster (Prometheus, cloud monitoring) to fine-tune thresholds.
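With autoscaling enabled, you can watch the HorizontalPodAutoscalers and current resource usage to validate your thresholds:

```shell
# Current HPA targets and replica counts
kubectl get hpa --namespace convoy

# Live CPU/memory usage per pod (requires metrics-server)
kubectl top pods --namespace convoy
```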

ArgoCD Configuration

Convoy uses Helm hooks to trigger migrations every time the chart is installed or upgraded. These hooks won't work as-is under ArgoCD, which does not execute Helm hooks the way the Helm CLI does.
Helm hooks
'helm.sh/hook': post-install,post-upgrade
'helm.sh/hook-delete-policy': before-hook-creation
You can apply this ArgoCD definition to run migrations on your cluster when the chart is installed and upgraded:
ArgoCD Definition
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
    name: convoy
    namespace: argocd
    finalizers:
        - resources-finalizer.argocd.argoproj.io
spec:
    project: default
    source:
        repoURL: 'https://frain-dev.github.io/helm-charts'
        targetRevision: 'X.Y.Z' # use the chart version you want to deploy (for example 3.7.5; see ArtifactHub for the latest version)
        chart: convoy
        helm:
            releaseName: convoy
            valuesObject:
                migrate:
                    jobAnnotations:
                        argocd.argoproj.io/hook: Sync
    destination:
        server: 'https://kubernetes.default.svc'
        namespace: convoy
    syncPolicy:
        syncOptions:
            - ServerSideApply=true
            - CreateNamespace=true
        automated:
            selfHeal: true
            prune: true
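Save the definition above (for example as `convoy-app.yaml`; the filename is arbitrary) and apply it to the cluster running ArgoCD. The Application's own metadata places it in the `argocd` namespace:

```shell
kubectl apply -f convoy-app.yaml
```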

Login to your instance

Use the credentials below to sign into your freshly minted Convoy instance:
Email: superuser@default.com
Password: default