Argo CD 3.3 landed in early 2026 as the latest stable release, and it brings a set of features that address real pain points teams have been working around for a long time. If you have been following along with my getting started guide or have Argo CD running in production, this release is worth paying close attention to. The highlights are PreDelete hooks, Source Hydrator improvements with inline parameter support, shallow Git cloning, and native KEDA integration.
PreDelete Hooks: Safe Application Teardown
This is the feature that has been sitting near the top of community wishlists for years. Argo CD has supported resource hooks for sync operations since its early days. You could run PreSync hooks to set up prerequisites, Sync hooks for coordinated deployments, and PostSync hooks for follow-up tasks like running smoke tests or sending notifications. But deletion was the gap. When you deleted an Application, everything was torn down immediately with no opportunity to run cleanup logic first.
PreDelete hooks fill that gap. You can now annotate Kubernetes resources (typically Jobs) with argocd.argoproj.io/hook: PreDelete, and Argo CD will execute them before removing any of the Application's resources.
Here is a practical example. Suppose you have an application that registers itself with a service mesh and needs to deregister cleanly before its pods are removed:
apiVersion: batch/v1
kind: Job
metadata:
  name: mesh-deregister
  annotations:
    argocd.argoproj.io/hook: PreDelete
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      containers:
        - name: deregister
          image: my-org/mesh-tools:1.4
          command:
            - /bin/sh
            - -c
            - |
              mesh-cli deregister --service $SERVICE_NAME
              mesh-cli drain --wait-timeout 30s
          env:
            - name: SERVICE_NAME
              value: my-application
      restartPolicy: Never
  backoffLimit: 2
When you run argocd app delete my-application or kubectl delete application my-application, Argo CD detects the PreDelete hook, creates the Job, and waits for it to complete successfully before proceeding with resource deletion. If the hook fails, the deletion is blocked and Argo CD sets a DeletionError condition on the Application. You can fix the hook in Git, and Argo CD will retry on its next reconciliation loop.
The use cases go beyond service mesh deregistration. Teams are using PreDelete hooks for draining traffic from load balancers, exporting data or state before a database is removed, notifying dependent systems that a service is going away, and cleaning up external resources like DNS records or cloud provider objects that Kubernetes does not manage natively.
One important detail: PreDelete hooks only run during explicit Application deletion. They do not fire during normal sync operations, even when pruning is enabled. This means you can safely add PreDelete hooks to your manifests without affecting day-to-day deployments.
Source Hydrator: Inline Parameters Change the Workflow
The Source Hydrator was the headline feature of Argo CD v3, and 3.3 makes it significantly more practical with inline parameter support.
For anyone unfamiliar, the Source Hydrator implements the rendered manifest pattern. Instead of having Argo CD render your Helm charts or Kustomize overlays at sync time, the hydrator renders them ahead of time and commits the resulting plain Kubernetes manifests to a designated branch in Git. This means the exact manifests that will be applied to your cluster are visible, diffable, and auditable in Git before they ever reach the cluster.
The hydrator configuration lives in the Application spec:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  sourceHydrator:
    drySource:
      repoURL: https://github.com/my-org/deploy-config.git
      path: helm-charts/my-app
      targetRevision: HEAD
      helm:
        valueFiles:
          - values.yaml
          - values-production.yaml
        parameters:
          - name: image.tag
            value: v2.1.0
          - name: replicas
            value: "3"
        releaseName: my-app-prod
    syncSource:
      targetBranch: environments/production
      path: helm-charts/my-app
The drySource points at your unrendered configuration, and the syncSource defines where the hydrated output gets committed. The key addition in 3.3 is the tool-specific configuration block inside drySource. Previously, you needed to commit separate parameter files for every configuration change, which created a lot of noise in your commit history. Now you can specify Helm values, Kustomize options, or plugin configuration directly inline.
For Kustomize users, inline parameters look like this:
drySource:
  repoURL: https://github.com/my-org/deploy-config.git
  path: kustomize/my-app
  targetRevision: HEAD
  kustomize:
    namePrefix: prod-
    images:
      - my-org/api-server:v3.0.1
The hydrator also received performance improvements in 3.3. It now avoids unnecessary calls to the repo server when the dry source has not changed, which matters when you have dozens of Applications pointing at the same repository. For monorepo layouts, the hydrator handles path-specific change detection more intelligently, so a change in one application's directory does not trigger re-hydration for every other application in the repository.
To enable the hydrator, set hydrator.enabled: "true" in the argocd-cmd-params-cm ConfigMap. You also need to configure separate repository credentials for reading (the standard repository secret) and writing (a secret labeled argocd.argoproj.io/secret-type: repository-write) since the hydrator needs push access to commit hydrated manifests.
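Putting those two requirements together, a minimal enablement sketch might look like the following. The Secret name and the push-capable token are placeholders for your own values:

```yaml
# argocd-cmd-params-cm: turn the Source Hydrator on
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  hydrator.enabled: "true"
---
# Write credentials so the hydrator can push hydrated manifests.
# The secret name and credential values below are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: deploy-config-write
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository-write
stringData:
  url: https://github.com/my-org/deploy-config.git
  username: git
  password: <push-capable-token>
```

Keeping the read and write credentials separate means your regular repository secret can stay read-only, which limits the blast radius if it leaks.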
Shallow Git Cloning: Minutes to Seconds
This is a quality-of-life improvement that will make the biggest difference for teams with large repositories. By default, Argo CD fetches a repository's full Git history. For a repository with thousands of commits or a monorepo with years of history, that means the initial clone, and any re-fetch after a cache miss or repo server restart, can take minutes before manifest rendering even begins.
Shallow cloning changes this by fetching only the commits Argo CD actually needs. You enable it by setting a depth when configuring a repository, either through the CLI or the repository configuration:
argocd repo add https://github.com/my-org/deploy-config.git --depth 1
With --depth 1, Argo CD fetches only the latest commit on the target branch. The performance improvement scales with repository size. Small repositories will not notice much difference, but teams working with large monorepos or repositories with long histories can see fetch times drop from minutes to single-digit seconds.
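If you manage repositories declaratively rather than through the CLI, the same setting should be expressible in the repository Secret. The `depth` key below is an assumption that mirrors the CLI flag; verify the exact field name against the 3.3 declarative setup documentation before relying on it:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: deploy-config-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  url: https://github.com/my-org/deploy-config.git
  # Hypothetical key mirroring `argocd repo add --depth 1`;
  # check the 3.3 docs for the exact spelling.
  depth: "1"
```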
There is one trade-off to be aware of. Shallow clones do not contain full commit history, which means features that rely on walking the Git log (like showing commit messages in the UI's revision history) will have limited information. For most teams this is an acceptable trade-off, especially on repositories where the primary goal is fast manifest rendering.
KEDA Integration
Argo CD 3.3 adds native understanding of KEDA (Kubernetes Event-Driven Autoscaling) resources. If you are running KEDA in your clusters, this is a welcome addition.
The practical improvements are twofold. First, Argo CD can now display accurate health status for KEDA ScaledObjects and ScaledJobs. Previously, these custom resources showed as "Healthy" as soon as they were created, even if the underlying scaling was misconfigured or failing. Now Argo CD understands KEDA's status conditions and reflects the actual scaling state.
Second, you can pause and resume KEDA resources directly from the Argo CD UI. This is useful during maintenance windows when you want to prevent autoscaling from interfering with planned operations. Instead of manually patching KEDA resources or scaling them to fixed replica counts, you can toggle the pause state from the application view in the Argo CD dashboard.
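Under the hood, KEDA already supports pausing through its own annotations, so the UI toggle presumably amounts to patching something like the following onto the ScaledObject (the scaler definition and the pinned replica count of 0 are illustrative):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaler
  annotations:
    # KEDA's pause annotation: pin the workload at a fixed
    # replica count and suspend event-driven scaling.
    autoscaling.keda.sh/paused-replicas: "0"
spec:
  scaleTargetRef:
    name: my-app
  triggers:
    - type: cron
      metadata:
        timezone: UTC
        start: 0 8 * * *
        end: 0 18 * * *
        desiredReplicas: "3"
```

Removing the annotation resumes normal autoscaling, which is why a UI-level toggle is a natural fit for maintenance windows.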
OIDC Background Token Refresh
This one is less glamorous than the other features but solves a frustration that anyone who has spent time in the Argo CD UI will recognize. If you use OIDC-based authentication (which most production setups do), your session would expire based on the token's TTL, and you would be abruptly logged out, often in the middle of debugging a failed sync or tracing through an application's resource tree.
Argo CD 3.3 introduces automatic OIDC token refresh. The server now monitors token expiration and proactively refreshes tokens in the background before they expire. You configure this with the refreshTokenThreshold setting, which specifies how close to expiration a token should be before the server initiates a refresh. No more lost context from unexpected logouts.
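As a sketch of what that configuration might look like, here is a hypothetical placement alongside the existing OIDC settings in argocd-cm. The issuer, client details, and the assumption that `refreshTokenThreshold` lives in this block are all placeholders to verify against the 3.3 documentation:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  oidc.config: |
    name: Okta
    issuer: https://example.okta.com
    clientID: argo-cd
    clientSecret: $oidc.okta.clientSecret
    # Assumption: refresh when a token is within 5 minutes of
    # expiring. Confirm the key's exact location in the 3.3 docs.
    refreshTokenThreshold: 5m
```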
Granular Cluster Resource Restrictions
Argo CD has always allowed administrators to restrict which API groups and resource kinds Applications can manage. Version 3.3 adds resource-name-level restrictions, which is a significant improvement for multi-tenant clusters.
Consider a cluster where multiple teams share a namespace and each team manages their own CustomResourceDefinitions. Previously, you could restrict access at the CRD kind level, but that was all-or-nothing. Either a team could manage all CRDs or none. With 3.3, you can restrict access to specific CRD names:
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  resource.exclusions: |
    - apiGroups:
        - apiextensions.k8s.io
      kinds:
        - CustomResourceDefinition
      names:
        - "*.other-team.example.com"
This extends naturally to any cluster-scoped or namespace-scoped resource where you need finer-grained control than kind-level restrictions provide.
Upgrading to 3.3
If you are already on Argo CD 3.x, the upgrade to 3.3 is straightforward. There are no major breaking changes from 3.2. The usual advice applies: review the release notes for the complete list of changes, test the upgrade in a non-production environment first, and pay attention to any deprecation warnings in your logs after upgrading.
If you are still on the 2.x line, the jump to 3.3 is larger and involves the breaking changes introduced in 3.0 (legacy ConfigMap-based repo config removal, metrics consolidation, Dex SSO claim changes, and the switch to annotation-based resource tracking by default). Plan that migration carefully, but do not put it off. The 2.x line is approaching end of life, and the features in the 3.x series represent a significant step forward for production GitOps workflows.
Wrapping Up
Argo CD 3.3 is a release that focuses on operational maturity. PreDelete hooks complete the application lifecycle story. The Source Hydrator with inline parameters makes the rendered manifest pattern viable for teams that found the previous workflow too rigid. Shallow cloning removes a performance bottleneck that scaled with repository age. And the KEDA integration, OIDC refresh, and granular resource restrictions are the kind of improvements that make running Argo CD in production measurably smoother.
If you are looking for help upgrading to Argo CD 3.3, designing your Source Hydrator workflow, or building out your GitOps platform, get in touch. I work with teams to plan and execute these transitions with minimal disruption.