Prerequisites
- A running Kubernetes cluster.
- Helm 3 or above. See Installing Helm.
- Access to a Postgres instance (or use the default Postgres chart installed by Convoy’s Helm chart).
- Access to a Redis instance (or use the default Redis chart installed by Convoy’s Helm chart).
The Convoy Helm chart depends on Bitnami Postgres and Redis charts and will install them by default for testing and evaluation. For production, you should use managed or
self-hosted Postgres and Redis instead of these defaults and configure the chart to point at your external databases.
Steps
Add Convoy’s chart repository to Helm, then install the chart with a values.yaml tailored to your environment.
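A minimal install sketch; the repository URL, release name, and namespace below are assumptions, so verify them against the chart's listing on ArtifactHub:

```shell
# Add Convoy's chart repository (URL assumed; verify on ArtifactHub)
helm repo add convoy https://frain-dev.github.io/helm-charts
helm repo update

# Install (or upgrade) Convoy with your own values.yaml
helm upgrade --install convoy convoy/convoy \
  --namespace convoy --create-namespace \
  -f values.yaml
```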
Upgrading
Upgrading to Convoy v24.8.x (Convoy Helm charts v3.1.0)
To upgrade to Convoy v24.8.x, you need to update the chart to the 3.x.x series.
See the Convoy Helm chart on ArtifactHub for the latest 3.x.x version.
In this release, we have gated a number of features behind a license.
If you use them, these features will cease to work until you provide
a license key.
Read here to learn about all the paid features,
and here to learn how to get a license key.
Upgrading the Convoy version without upgrading the chart
Change the image tag values to the Convoy version you want. The chart exposes a global tag that fans out to all Convoy components (server and agent), and per-component tags if you need finer control.
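A hedged values.yml sketch of the global/per-component tag pattern described above; the exact key paths (global.image.tag, server.image.tag, agent.image.tag) are assumptions, so confirm them against the chart's default values:

```yaml
# Global tag applied to all Convoy components (key path assumed)
global:
  image:
    tag: "v24.8.1"

# Optional per-component overrides (key paths assumed)
server:
  image:
    tag: "v24.8.1"
agent:
  image:
    tag: "v24.8.1"
```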
Production deployment (managed Postgres and Redis)
For production, we strongly recommend using managed Postgres and Redis instead of the bundled Bitnami charts. Disable the bundled Postgres and Redis charts and switch to external services.
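A minimal sketch of disabling the bundled subcharts; the subchart key names postgresql and redis are assumptions based on the Bitnami dependencies named above:

```yaml
# Disable the bundled Bitnami subcharts (key names assumed)
postgresql:
  enabled: false
redis:
  enabled: false
```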
Example: managed Postgres (RDS, CloudSQL, etc.)
Point Convoy at a managed Postgres instance by configuring global.externalDatabase and disabling the in-cluster Postgres chart.
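An illustrative values.yml, assuming a typical global.externalDatabase shape (host, port, username, password, database); the endpoint is a placeholder and the exact field names should be checked against the chart's configuration reference:

```yaml
postgresql:
  enabled: false                 # disable the in-cluster Postgres chart
global:
  externalDatabase:
    host: convoy-db.example.com  # placeholder RDS/CloudSQL endpoint
    port: 5432
    username: convoy
    password: "change-me"        # prefer a Secret in production
    database: convoy
```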
Example: managed Redis
Use a managed Redis (e.g. AWS ElastiCache, GCP Memorystore, Azure Managed Redis) by configuring global.externalRedis, disabling global.nativeRedis, and disabling the in-cluster Redis chart.
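An illustrative values.yml for a managed Redis endpoint; field names follow the keys discussed on this page (scheme, host, port, password), and the endpoint is a placeholder:

```yaml
redis:
  enabled: false                      # disable the in-cluster Redis chart
global:
  nativeRedis:
    enabled: false
  externalRedis:
    enabled: true
    scheme: redis                     # use rediss for TLS endpoints
    host: convoy-cache.example.com    # placeholder ElastiCache/Memorystore endpoint
    port: '6379'
    password: "change-me"             # or reference a Secret via global.externalRedis.secret
```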
Redis Sentinel
Use Redis Sentinel when you want Convoy to discover the current Redis primary through Sentinels (typical for HA deployments). The Convoy Helm chart wires this through global.externalRedis: set scheme: redis-sentinel, point host at a Sentinel endpoint, and use port: '26379' (unless your provider uses a different Sentinel port).
global.nativeRedis.enabled: true configures Convoy for a single Redis host and port (6379). For Sentinel, set global.nativeRedis.enabled: false and global.externalRedis.enabled: true so server and agent receive CONVOY_REDIS_SCHEME=redis-sentinel and related variables.
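A hedged values.yml sketch combining the settings above; the Sentinel host and master name are placeholders:

```yaml
global:
  nativeRedis:
    enabled: false
  externalRedis:
    enabled: true
    scheme: redis-sentinel
    host: redis                      # Sentinel Service endpoint (placeholder)
    port: '26379'                    # Sentinel port
    sentinelMasterName: mymaster     # must match your Sentinel's master name
    password: "change-me"            # Redis primary password (AUTH after discovery)
    # sentinelUsername: ""           # only if Sentinels enforce ACL auth
    # sentinelPassword: ""           # only if Sentinels require a password
```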
- sentinelMasterName must match the master name configured in your Sentinel deployment (for example, Bitnami’s default is often mymaster; AWS ElastiCache publishes its replication group / primary name in the console).
- password is the Redis primary password (used for AUTH after Sentinel discovers the primary). sentinelPassword / sentinelUsername are only needed when Sentinels enforce authentication.
- Secrets (recommended for production):
  - global.externalRedis.secret — Kubernetes Secret name. When set, the chart sets CONVOY_REDIS_PASSWORD from a secretKeyRef with key password; the inline password value is ignored. Create the Secret in the same namespace as Convoy (e.g. stringData.password).
  - global.externalRedis.sentinelSecret — Secret name for the Sentinel password. When set, CONVOY_REDIS_SENTINEL_PASSWORD is sourced from that Secret with key password; the inline sentinelPassword is ignored. There is no separate Secret reference for sentinelUsername today; use sentinelUsername in values if your Sentinels need an ACL username.
- If global.externalRedis.addresses is set (cluster mode), it takes precedence over host; use the non-addresses form above for a single Sentinel entry point unless your operator documents otherwise.
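A sketch of the companion Secret described above; the Secret name and namespace are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: convoy-redis        # referenced from global.externalRedis.secret
  namespace: convoy         # must be the namespace Convoy runs in
stringData:
  password: "change-me"     # becomes CONVOY_REDIS_PASSWORD via secretKeyRef
```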
- If server.rollout.enabled or agent.rollout.enabled is true (Argo Rollouts), the rollout templates may not yet emit all Redis Sentinel environment variables (including sentinelSecret). Use the default Deployment-based install for Sentinel, or confirm rendered manifests with helm template before relying on Rollouts with redis-sentinel.
For testing, you can also run the bundled Bitnami Redis in Sentinel mode (redis.architecture: replication, redis.sentinel.enabled: true) while still pointing Convoy at global.externalRedis. Important details:
- With fullnameOverride: redis, Bitnami usually exposes 6379 and 26379 on Services named redis and redis-headless. There is typically no Service called redis-sentinel; set global.externalRedis.host: redis (or redis-headless to discover all pods) and port: '26379'.
- When nativeRedis.enabled is false, keep redis.auth.enabled: true in the Bitnami subchart so the bundled Redis stays password-protected (the chart may tie auth to nativeRedis by default).
- Bitnami’s default Sentinel image tags can be removed from Docker Hub; pin redis.sentinel.image to the same bitnamilegacy/redis-sentinel tag line as redis.image if pulls fail.
- redis.replica.replicaCount in recent Bitnami node layouts is the number of redis-node-* pods (each runs Redis plus a Sentinel sidecar); scale it for multi-node tests.
- To verify Sentinel from outside the cluster, use kubectl port-forward svc/redis 26379:26379 and redis-cli -h 127.0.0.1 -p 26379.
See Redis Sentinel in Configuration and the redis object in the full configuration reference (master_name, sentinel_password, etc.).
Connectivity and startup behaviour
- If Postgres or Redis are unreachable (DNS, firewall, wrong credentials, TLS issues), Convoy pods will fail to start and may go into CrashLoopBackOff. Check the pod logs for connection errors.
- Helm itself will succeed in creating resources, but the deployment will not become healthy until Convoy can connect to the database and Redis.
- For managed services with TLS, ensure you set appropriate connection options (for Postgres) or scheme: rediss (for Redis), and that your network policies / security groups allow traffic from the cluster to the managed service.
- Both the server and agent deployments include a wait-for-migrate init container that runs migrate up before the application starts. If migrations cannot run (for example, the database is unreachable), these init containers will fail and the pods will not become Ready.
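A few kubectl commands that can help diagnose this; the namespace, Deployment name, and the wait-for-migrate init-container name are taken from the text above, but adjust them for your install:

```shell
# Watch pod status for CrashLoopBackOff / init-container failures
kubectl -n convoy get pods -w

# Application logs for connection errors (Deployment name assumed)
kubectl -n convoy logs deploy/convoy-server

# Logs from the migration init container on a specific pod
kubectl -n convoy logs <server-pod-name> -c wait-for-migrate
```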
Scaling and resources
Use server.app.resources and agent.app.resources to set CPU/memory requests and limits, and server.autoscaling / agent.autoscaling to enable horizontal autoscaling.
Start with conservative values and tune based on your event volume and observed CPU/memory usage. Set sensible minReplicas and maxReplicas bounds, and use metrics from your cluster (Prometheus, cloud monitoring) to fine-tune thresholds.
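An illustrative values.yml using the keys named above; the resource numbers and the autoscaling sub-keys (minReplicas, maxReplicas, targetCPUUtilizationPercentage) follow common Helm conventions and are assumptions to verify against the chart:

```yaml
server:
  app:
    resources:
      requests:
        cpu: 250m
        memory: 256Mi
      limits:
        cpu: "1"
        memory: 1Gi
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 6
    targetCPUUtilizationPercentage: 75   # sub-key name assumed
agent:
  app:
    resources:
      requests:
        cpu: 250m
        memory: 256Mi
      limits:
        cpu: "1"
        memory: 1Gi
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 6
    targetCPUUtilizationPercentage: 75
```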
ArgoCD Configuration
Convoy uses Helm hooks to trigger migrations every time the chart is installed or upgraded. These hooks alone won’t work for an ArgoCD installation, since ArgoCD does not execute Helm hooks the way Helm does.
Helm hooks
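Illustrative Helm hook annotations of the kind a migration Job typically carries; the exact hook phases and delete policy in Convoy’s chart may differ:

```yaml
metadata:
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade          # phases assumed
    "helm.sh/hook-delete-policy": before-hook-creation
```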
ArgoCD Definition
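A sketch of an ArgoCD Application for Convoy; the repoURL, chart name, and targetRevision are assumptions, and ArgoCD maps Helm hooks to its own sync hooks, so verify that migrations run after the first sync:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: convoy
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://frain-dev.github.io/helm-charts   # repo URL assumed
    chart: convoy
    targetRevision: 3.1.0        # pick the 3.x.x version you need
    helm:
      releaseName: convoy
  destination:
    server: https://kubernetes.default.svc
    namespace: convoy
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
```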