Kargo v1.9 landed recently and the Akuity team is calling it the most significant release since v1.0. Having spent some time with it, that claim holds up. This is not a release dominated by one headline feature but rather a broad set of improvements that address pain points you hit once you start running Kargo at any real scale. API tokens for CI integration, a new REST API that future-proofs Kargo's Kubernetes compatibility, Warehouse performance tuning, a generic webhook receiver, and a batch of new promotion steps and expression functions all shipped in the same release.
If you are new to Kargo, start with my introduction to Kargo and verification deep dive before reading this post. If you are already running Kargo and looking for reasons to upgrade, this post covers the features that matter most.
The New REST API
Under the hood, Kargo's API server has historically used a Connect-based RPC layer built on protocol buffers. This worked well until Kubernetes v1.35 made breaking changes to how its Go types interact with the google.golang.org/protobuf library, making it difficult to represent Kubernetes resource types as protocol buffers. Beginning with Kubernetes v1.36, it will be entirely impossible. This currently prevents Kargo from upgrading its own Kubernetes dependencies past v1.34.x. Rather than working around the issue, the Kargo team built a new RESTful API to replace the Connect-based one.
From a user perspective, the switch is mostly invisible. The Kargo CLI and UI both use the new API. If you have scripts that interact with Kargo's API directly using the old Connect endpoints, you will want to start migrating. The legacy Connect API is deprecated in v1.9 and will continue to be served alongside the new one through v1.11.x. It will be removed in v1.12.0, at which point Kargo can freely update its Kubernetes dependencies.
The practical benefit is that Kargo will be able to advance past Kubernetes v1.34.x. If you are running or planning to run Kubernetes v1.35+ clusters, this matters.
API Tokens
Before v1.9, integrating Kargo into CI pipelines or automation scripts meant sharing short-lived OIDC credentials or configuring service accounts through Kubernetes directly. Kargo v1.9 introduces JWT-based API tokens that you can provision through the CLI or UI, associate with a specific role, and use from any HTTP client.
Creating a system-level token looks like this:
kargo create token --system --role kargo-admin my-ci-token
The token value is displayed once at creation time and cannot be recovered afterward, so store it in your secrets manager immediately. You can also scope tokens to individual projects by omitting the --system flag and specifying a project:
kargo create token --project my-project --role my-role project-ci-token
List existing tokens with:
kargo get tokens --system
Or filter by role:
kargo get tokens --system --role kargo-admin
Delete a token when it is no longer needed:
kargo delete token --system my-ci-token
The tokens integrate with both the kargo CLI and standard HTTP tools. If your CI pipeline needs to approve a piece of Freight or trigger a promotion, you no longer need to set up Kubernetes RBAC and kubeconfig files in your CI environment. A Kargo API token and the REST API endpoint are enough.
This is a big quality-of-life improvement for teams that run Kargo in a shared platform environment where CI systems do not have direct Kubernetes API access.
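As a sketch, a CI job needs nothing beyond the token and the server URL to compose an authenticated request. Everything below is a placeholder: the hostname and the elided route are assumptions for illustration, not documented endpoints.

```shell
# Placeholder values: KARGO_URL and the elided /api/... route are assumptions,
# not documented endpoints. The token comes from your CI secret store.
KARGO_API_TOKEN="example-token"
KARGO_URL="https://kargo.example.com"

# --fail makes non-2xx responses exit non-zero so the CI step fails loudly.
REQUEST="curl --fail -sS -H 'Authorization: Bearer ${KARGO_API_TOKEN}' '${KARGO_URL}/api/...'"
echo "$REQUEST"
```

The same bearer-token header works from any HTTP client, which is the whole point: no kubeconfig, no Kubernetes RBAC plumbing in the CI environment.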
Warehouse Performance Tuning
Warehouse artifact discovery has been one of the more common sources of frustration at scale. Container registries enforce rate limits, and Kargo has historically applied conservative client-side rate limits on top of those to avoid hitting them. If you have a large number of Warehouses all polling busy registries, discovery can become slow simply because Kargo is throttling itself to be a good API citizen.
Kargo v1.9 addresses this from two angles.
Metadata Caching
Individual image repository subscriptions in a Warehouse can now opt into tag metadata caching. When enabled, Kargo caches the metadata for image tags it has already seen, skipping redundant registry calls on subsequent discovery cycles. This is particularly effective for repositories where tags are immutable, which is the common case in most CI/CD pipelines.
The caching behavior is controlled at two levels. Operators set a system-wide policy through the controller.images.cache.cacheByTagPolicy Helm value, choosing from Forbid, Allow, Require, or Force. The default is Allow. Within that policy, individual Warehouse image subscriptions opt in by setting cacheByTag: true:
spec:
  subscriptions:
  - image:
      repoURL: ghcr.io/my-org/my-app
      imageSelectionStrategy: NewestBuild
      cacheByTag: true
The Forbid and Force policies override the per-subscription setting, so operators can enforce a consistent approach across all projects. If your image tags are immutable (and they should be in a GitOps workflow), enabling metadata caching can significantly reduce the number of registry API calls your Warehouses make during each discovery cycle. Be careful with mutable tags like latest though, as caching can cause Kargo to select stale images indefinitely.
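On the operator side, the system-wide policy is a single chart value. A minimal values fragment (using the controller.images.cache.cacheByTagPolicy path described above):

```yaml
controller:
  images:
    cache:
      cacheByTagPolicy: Allow   # one of Forbid | Allow | Require | Force
```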
Configurable Rate Limits
The conservative default rate limits that Kargo applies to registry clients are now tunable through the controller.images.registries.rateLimit Helm value. This is a per-registry, client-side rate limit expressed in requests per second. The default is 20:
controller:
  images:
    registries:
      rateLimit: 20
Be cautious with this. The Kargo chart documentation warns that raising the limit is no guarantee of better Warehouse performance: once a registry starts enforcing its own limits because the client has stopped self-throttling, the resulting errors can leave discovery performing worse than it did under the conservative default. For teams running private registries with plenty of headroom, the default may have been unnecessarily restrictive, but for public registries like Docker Hub you are better off leaving it alone and relying on metadata caching to reduce the total number of calls instead.
Generic Webhook Receiver
My previous post on webhook receivers covered the platform-specific receivers for GitHub, GitLab, Docker Hub, and the others that Kargo supports natively. But not every artifact repository has a native Kargo receiver. Amazon ECR, for example, does not.
Kargo v1.9 introduces a generic webhook receiver that handles arbitrary HTTP POST requests. You define the parsing rules and matching criteria, and Kargo extracts the relevant information from the payload to trigger the appropriate Warehouses.
The configuration follows the same ProjectConfig or ClusterConfig pattern as the platform-specific receivers. You create a Secret containing a shared secret, then define the receiver with optional filtering and targeting rules:
apiVersion: v1
kind: Secret
metadata:
  name: generic-webhook-secret
  namespace: my-project
  labels:
    kargo.akuity.io/cred-type: generic
data:
  secret: <base64-encoded-secret>
---
apiVersion: kargo.akuity.io/v1alpha1
kind: ProjectConfig
metadata:
  name: my-project
  namespace: my-project
spec:
  webhookReceivers:
  - name: my-generic-receiver
    generic:
      secretRef:
        name: generic-webhook-secret
      actions:
      - action: Refresh
        whenExpression: "request.header('X-Event-Type') == 'push'"
        targetSelectionCriteria:
        - kind: Warehouse
          name: my-warehouse
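The data.secret value must be base64-encoded, like any Kubernetes Secret data field. One quick way to generate a random shared secret and encode it:

```shell
# Generate a 32-character alphanumeric secret, then base64-encode it
# for the Secret's data.secret field.
RAW_SECRET=$(head -c 64 /dev/urandom | base64 | tr -dc 'A-Za-z0-9' | head -c 32)
ENCODED_SECRET=$(printf '%s' "$RAW_SECRET" | base64)
echo "$ENCODED_SECRET"
```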
The whenExpression field filters incoming requests using Kargo's expression language. In this example, only requests with an X-Event-Type: push header will trigger the action. You can also inspect the parsed JSON body via request.body for more granular matching. The targetSelectionCriteria field supports label selectors and index selectors in addition to static names, which is useful when multiple Warehouses should respond to the same webhook.
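Matching on the parsed body looks much the same. The payload fields below are assumptions about what a hypothetical registry's webhook might send, not a Kargo-defined schema:

```yaml
actions:
- action: Refresh
  # request.body.repository is a hypothetical field from the sender's payload.
  whenExpression: |
    request.header('X-Event-Type') == 'push' &&
    request.body.repository == 'my-org/my-app'
  targetSelectionCriteria:
  - kind: Warehouse
    name: my-warehouse
```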
Retrieve the generated webhook URL the same way as any other receiver:
kubectl get projectconfigs my-project \
  -n my-project \
  -o=jsonpath='{.status.webhookReceivers}'
This closes the gap for teams using registries that Kargo does not have a dedicated receiver for. As long as the platform can send an HTTP POST with a JSON body, you can wire it into Kargo's event-driven discovery pipeline.
Expression Language Improvements
Kargo's expression language, used in promotion steps, webhook receivers, and verification, picked up several useful additions in v1.9.
Alternative Delimiters
If you have ever tried to embed a JSON object inside a Kargo expression, you have run into the problem of closing braces conflicting with the ${{ }} delimiter syntax. Kargo v1.9 introduces the alternative delimiters ${% %}, which avoid this conflict:
- uses: json-update
  config:
    path: config.json
    updates:
    - key: metadata
      value: ${% {"version": vars.version, "tag": vars.tag} %}
This is a small change that eliminates an annoying workaround.
semverParse Function
The new semverParse() function breaks a semantic version string into its components:
- uses: yaml-update
  config:
    path: values.yaml
    updates:
    - key: image.majorVersion
      value: ${{ semverParse(imageFrom("my-registry.io/my-app").Tag).Major }}
This is useful when your deployment configuration needs to reference individual parts of a version string, for example when setting a major version label or constructing a version-dependent URL.
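To make the decomposition concrete, here is a rough shell equivalent of what semverParse() extracts. The Major/Minor/Patch fields mirror the expression above; the parsing itself is only an illustration, not Kargo's implementation:

```shell
# Illustrative only: split a semver tag into its components.
TAG="2.7.1-rc.3"
CORE="${TAG%%-*}"      # drop the pre-release suffix -> 2.7.1
MAJOR="${CORE%%.*}"    # 2
REST="${CORE#*.}"      # 7.1
MINOR="${REST%%.*}"    # 7
PATCH="${REST#*.}"     # 1
echo "major=$MAJOR minor=$MINOR patch=$PATCH"
```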
Shared Resource Access
Two new functions provide access to resources in the shared resources namespace (formerly the global credentials namespace). Both return the resource's Data field as a map[string]string, so you access individual keys with dot notation. For example, you could use sharedSecret() to pull a Slack token from a centrally managed Secret during an http promotion step:
- uses: http
  config:
    url: https://slack.com/api/chat.postMessage
    headers:
    - name: Authorization
      value: Bearer ${{ sharedSecret("slack").token }}
Or use sharedConfigMap() to read shared configuration like a base repository URL:
- uses: git-clone
  config:
    repoURL: ${{ sharedConfigMap("repo-config").repoURL }}
    checkout:
    - branch: main
      path: ./src
The sharedSecret() function is restricted to Secrets labeled with kargo.akuity.io/cred-type: generic to prevent accidental exposure of repository credentials. Both functions return an empty map if the resource does not exist rather than failing the step.
Expanded Promotion Steps
The promotion step library continues to grow. Here are the notable additions and improvements in v1.9.
git-clone Enhancements
The git-clone step now supports sparse checkouts and Git submodules. Sparse checkouts are valuable when your repository is large but your promotion only needs to modify files in a specific subdirectory. Rather than cloning the entire repository, you specify the paths you need:
- uses: git-clone
  config:
    repoURL: https://github.com/my-org/infra.git
    checkout:
    - branch: main
      path: ./src
      sparse:
      - services/my-app
      - configs/common
When all checkouts use sparse patterns, Kargo automatically applies a blobless clone optimization, which further reduces the amount of data fetched from the remote.
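Outside of Kargo, the same combination looks roughly like this in plain git. This is a self-contained sketch against a throwaway local repo (the directory names mirror the example above); it illustrates the underlying git features, not Kargo's actual implementation:

```shell
# Build a throwaway "remote" repo to clone from (stand-in for your real remote).
REMOTE=$(mktemp -d)
git -C "$REMOTE" init -q
git -C "$REMOTE" config uploadpack.allowFilter true   # let clients request blob filters
mkdir -p "$REMOTE/services/my-app" "$REMOTE/configs/common" "$REMOTE/docs"
echo "app: true" > "$REMOTE/services/my-app/values.yaml"
echo "common: true" > "$REMOTE/configs/common/values.yaml"
echo "docs" > "$REMOTE/docs/readme.md"
git -C "$REMOTE" add -A
git -C "$REMOTE" -c user.email=ci@example.com -c user.name=ci commit -qm init

# Blobless (--filter=blob:none) plus sparse clone: fetch history without file
# contents up front, then materialize only the directories the promotion needs.
WORK=$(mktemp -d)
git clone -q --no-local --filter=blob:none --sparse "$REMOTE" "$WORK/src"
git -C "$WORK/src" sparse-checkout set services/my-app configs/common
ls "$WORK/src"
```

After the sparse-checkout set call, services/my-app and configs/common are present in the working tree while docs/ is not, which is exactly the saving the step's sparse option buys you on large repositories.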
git-push Force Push
The git-push step now supports a force option for force-pushing to the target branch. One use case is pushing rendered manifests that do not depend on previous state, where you want to replace the branch content wholesale each time a promotion runs. Use it with caution, as it will overwrite any commits on the remote branch that are not in your local branch. The default is false.
yaml-merge
A new yaml-merge step consolidates multiple YAML files into a single output file. This is useful when you need to assemble a configuration from multiple sources during a promotion:
- uses: yaml-merge
  config:
    inFiles:
    - ./src/base/values.yaml
    - ./src/overrides/env-values.yaml
    outFile: ./out/merged/values.yaml
The first file in the list serves as the base, and each subsequent file is merged over it. Mappings merge recursively, scalar values get overridden, and sequences are replaced entirely rather than appended.
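To make those semantics concrete, here is a sketch of two inputs and the merged result. The file contents are illustrative values only, following the rules described above:

```yaml
# ./src/base/values.yaml
image:
  repository: my-app
  tag: 1.0.0
env:
- name: LOG_LEVEL
  value: info
---
# ./src/overrides/env-values.yaml
image:
  tag: 1.2.3
env:
- name: LOG_LEVEL
  value: debug
---
# ./out/merged/values.yaml (result)
image:
  repository: my-app   # kept from the base (mappings merge recursively)
  tag: 1.2.3           # scalar overridden by the later file
env:                   # sequence replaced entirely, not appended
- name: LOG_LEVEL
  value: debug
```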
http Step Improvements
The http promotion step gained more flexible response body parsing. Previously, extracting values from API responses required the response body to be JSON with a predictable structure. The updated step handles a wider range of response formats and makes it easier to extract specific fields.
argocd-update with Label Selectors
The argocd-update step can now identify target Applications using label selectors in addition to the existing name-based approach. This is helpful in environments where Application names follow a convention but you want to target a set of Applications matching a label query rather than listing them individually.
Namespace Terminology Changes
Kargo v1.9 renames two concepts that have caused confusion since the early releases.
What was called the "global credentials namespace" is now the "shared resources namespace," with a default of kargo-shared-resources. What was called the "cluster secrets namespace" is now the "system resources namespace," with a default of kargo-system-resources.
The old namespace names continue to work. Kargo v1.9 automatically migrates existing Secrets to the new locations. The naming change reflects the fact that these namespaces have grown beyond just holding credentials and secrets; they now serve as general-purpose shared resource stores accessible across projects or by the system itself.
If you reference these namespaces in documentation or automation, update the names to avoid confusion as the old terminology is phased out.
Breaking Changes and Deprecations
A few things to watch out for when upgrading.
The SemVerConstraint field, deprecated since v1.7, has been removed. If you have Warehouses still using SemVerConstraint instead of constraint, they will break on upgrade. Migrate before upgrading.
The allowTags and ignoreTags fields on Warehouse image subscriptions are deprecated in favor of allowTagsRegexes and ignoreTagsRegexes. These fields still work in v1.9 but artifact discovery will fail if they are non-empty starting in v1.11.0, with full removal in v1.13.0. You have time to migrate but do not wait too long.
The Kargo CLI must be upgraded alongside the server. Protocol buffer serialization changes to the Warehouse and Freight types mean an older CLI will not work correctly with a v1.9 server.
Upgrading
If you are using the Kargo Helm chart, the upgrade path is straightforward:
helm upgrade kargo oci://ghcr.io/akuity/kargo-charts/kargo \
  --namespace kargo \
  --version 1.9.0 \
  --reuse-values
After upgrading the server, upgrade your CLI via Homebrew (brew upgrade kargo), by downloading the latest binary from the GitHub releases page, or from the Kargo dashboard's CLI tab. Verify the upgrade with:
kargo version
What's Next
Kargo v1.9 smooths out many of the rough edges that appeared as teams moved from evaluation to production use. API tokens and the new REST API make it much easier to integrate Kargo into existing toolchains. The Warehouse performance improvements address the most common scaling bottleneck. The generic webhook receiver closes a gap that blocked adoption for teams running on AWS with ECR.
If you have not read the earlier posts in this series, start with the Kargo introduction for the core concepts, then the verification and soak times deep dive for production-hardening your pipelines, and the webhook receivers guide for event-driven artifact discovery. For more on the Argo CD side of the equation, the Argo CD deep dive and ApplicationSets post cover the deployment layer that Kargo orchestrates on top of.
The full release notes are available in the Kargo v1.9.0 documentation.