Migration to Chart 1.500.0
This guide covers breaking changes when upgrading from a prior Helm chart version to 1.500.0.
Chart 1.500.0 introduces several breaking changes to configuration structure, default values, and resource naming. Review this guide carefully before upgrading.
Quick Decision Guide
Do I need to read this guide?
| Your Situation | Action Required |
|---|---|
| New installation (no prior installation) | No action - skip this guide |
| Upgrading from 1.2.x | CRITICAL - Multiple required changes before upgrade (see the following sections) |
| Using localCharts or chartBaseUrl in values | SCHEMA FAILURE - Must remove before upgrade or helm upgrade fails immediately |
| Have broker.url set (all 1.2.x users) | SCHEMA FAILURE - Must add global.broker.embedded: false or helm upgrade fails immediately |
| Session key not explicitly set | CRITICAL - Extract old value first or all users are logged out |
| Using external references (External Secrets, ArgoCD, and so on) | Required - Update secret/configmap names |
| Using samDeployment.imagePullSecret | Required - Move to global.imagePullSecrets |
| Using bundled persistence and upgrading from 1.1.0 or earlier | CRITICAL - Must migrate StatefulSets before upgrade |
| Relying on pre-1.500.0 defaults | Required - Explicitly set production values |
Most critical issues for 1.2.x upgrades: localCharts/chartBaseUrl removal causes schema failure and sam.sessionSecretKey change logs out all users. See the full breaking changes in the following sections.
What's New in 1.500.0
In addition to the breaking changes listed in the following sections, 1.500.0 introduces the following new capabilities:
| Feature | Description | Values Key |
|---|---|---|
| GCR Pull Secret Automation | Pass a dockerconfigjson credentials file via --set-file and the chart automatically creates the image pull secret and injects it into all pod specs. No manual kubectl create secret step is required. Mutually exclusive with global.imagePullSecrets. See GCR Credentials File and Air-Gapped: Step 3. | global.imagePullKey |
| Custom CA Certificates | Inject custom or self-signed CA certificates for internal infrastructure (broker, OIDC provider, LLM service) via a Kubernetes ConfigMap. See Custom CA Certificates. | samDeployment.customCA |
| Embedded Solace Broker | Deploy a single-node Solace PubSub+ broker in-cluster for evaluation. No external broker is required. See Kubernetes Quick Start. | global.broker.embedded |
| Agent Mesh Pre-flight Validation (sam-doctor) | Pre-install/pre-upgrade Helm hook that validates configuration before any workload pods are created. Misconfigurations surface as a clear error instead of CrashLoopBackOff. Enabled by default (samDoctor.enabled: true); requires the enterprise image to include sam_doctor. See sam-doctor. | samDoctor.enabled |
| JSON Schema Validation | values.schema.json is now shipped with the chart. Helm rejects invalid configuration at helm lint, helm install, helm upgrade, and helm template with clear error messages. Also enforces conditional rules (for example, external datastore credentials required when global.persistence.enabled: false). | Built-in |
| Cluster Resource Checks | At helm install/upgrade time, validates that referenced Secrets, ConfigMaps, StorageClass, and IngressClass actually exist in the cluster. Reports all missing resources in one aggregated error instead of letting pods fail with ImagePullBackOff or PVCs get stuck Pending. No-op during helm template/--dry-run=client. | validations.clusterResourceChecks |
Migration Timeline
From 1.1.0 and Earlier
If upgrading from 1.1.0 or earlier with bundled persistence, also address:
- Bundled Persistence VCT Labels
From 1.2.x to 1.500.0
All users upgrading from 1.2.x must address:
- localCharts and chartBaseUrl Keys Removed
- Embedded Broker Enabled by Default
- Image Configuration Restructured
- Session Key Secret Location Changed
- Default Values Changed
- Image Pull Policy Changed
Breaking Changes Detail
1. localCharts and chartBaseUrl Keys Removed
In 1.500.0, the agent chart is always bundled inside the main chart. These keys no longer exist in the schema.
Old values (1.2.x) - remove these:
```yaml
samDeployment:
  agentDeployer:
    chartBaseUrl: "https://..."
    localCharts:
      enabled: true
      mountPath: "/opt/helm-charts"
```
Also update the version fields:
```yaml
samDeployment:
  agentDeployer:
    version: "k8s-1.500.0"    # was k8s-1.2.x
    chartVersion: "1.500.0"   # was 1.2.x
```
2. Embedded Broker Enabled by Default
global.broker.embedded defaults to false in 1.2.x and true in 1.500.0. If your values file contains broker.url (or any external broker credentials), running helm template or helm upgrade without setting this flag will produce:
```text
Error: Conflicting broker configuration: cannot set broker.url when
global.broker.embedded is true.
```
All 1.2.x customers used an external broker. Add this to your 1.500.0 values file:
```yaml
global:
  broker:
    embedded: false
```
broker.url, broker.clientUsername, broker.password, and broker.vpn carry forward unchanged.
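Putting the pieces together, a minimal external-broker fragment for 1.500.0 might look like the following sketch. All connection values are placeholders, and the broker connection keys are shown at the top level for illustration; keep them wherever they already live in your 1.2.x values file.

```yaml
global:
  broker:
    embedded: false                          # required for all external-broker users
broker:                                      # carried forward unchanged from 1.2.x
  url: "tcps://broker.example.com:55443"     # placeholder
  clientUsername: "sam-client"               # placeholder
  password: "<your existing value>"
  vpn: "sam-vpn"                             # placeholder
```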
3. Image Configuration Restructured
All users upgrading from 1.2.x must update their values file before running helm upgrade. The default repository value in 1.2.x included the registry hostname (gcr.io/gcp-maas-prod/solace-agent-mesh-enterprise). In 1.500.0, the chart prepends global.imageRegistry to repository automatically. Upgrading without updating your values produces a double-prefixed image reference that Kubernetes cannot pull, and pods immediately enter ImagePullBackOff.
Starting with 1.500.0, the registry is separated from the repository. The chart constructs the full image reference as registry/repository:tag, where registry defaults to global.imageRegistry (gcr.io/gcp-maas-prod).
What breaks without migration:
```text
# Kubernetes will try to pull this broken image reference:
gcr.io/gcp-maas-prod/gcr.io/gcp-maas-prod/solace-agent-mesh-enterprise:1.97.2
```
Old Format (1.2.x):
```yaml
samDeployment:
  image:
    repository: gcr.io/gcp-maas-prod/solace-agent-mesh-enterprise
    tag: "1.83.1"
    pullPolicy: Always
  agentDeployer:
    image:
      repository: gcr.io/gcp-maas-prod/sam-agent-deployer
      tag: "1.6.3"
      pullPolicy: Always
```
New Format (1.500.0):
Do not carry forward pre-1.500.0 tag values. Each release ships with new image versions. The correct tag for your 1.500.0 chart is defined in the chart's values.yaml. Confirm the expected tags before editing your values file:
```shell
helm show values /path/to/charts/solace-agent-mesh-<version>.tgz \
  | grep -E "repository:|tag:"
```
```yaml
# global.imageRegistry defaults to gcr.io/gcp-maas-prod; no change needed for GCR users
samDeployment:
  image:
    repository: solace-agent-mesh-enterprise   # registry prefix removed
    tag: "<tag from chart values.yaml>"
  agentDeployer:
    image:
      repository: sam-agent-deployer           # registry prefix removed
      tag: "<tag from chart values.yaml>"
```
For air-gapped or internal registry users, set global.imageRegistry to redirect all images with a single value:
```yaml
global:
  imageRegistry: my-registry.internal          # all images redirect here
samDeployment:
  image:
    repository: solace-agent-mesh-enterprise   # registry prefix removed
    tag: "<tag from chart values.yaml>"
  agentDeployer:
    image:
      repository: sam-agent-deployer           # registry prefix removed
      tag: "<tag from chart values.yaml>"
```
Migration Steps:
Step 1: Remove the registry hostname from samDeployment.image.repository and samDeployment.agentDeployer.image.repository. If you use an internal registry, set global.imageRegistry to that registry hostname. Then confirm the correct tag values for your release from the chart's values.yaml and update your values file accordingly:
```shell
helm show values /path/to/charts/solace-agent-mesh-<version>.tgz \
  | grep -E "repository:|tag:"
```
Step 2: Validate your updated values file before upgrading. Check that all image references resolve correctly:
```shell
helm template <release-name> /path/to/charts/solace-agent-mesh-<version>.tgz \
  -f updated-values.yaml \
  | grep "image:" | sort -u
```
Every image should show the correct registry prefix exactly once (for example, gcr.io/gcp-maas-prod/solace-agent-mesh-enterprise:1.97.2).
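To automate that check, the grep below counts image lines where the registry prefix appears twice. Normally you would pipe `helm template ... -f updated-values.yaml` into it; here a canned bad line stands in for rendered output, purely for illustration.

```shell
# Sketch: detect double-prefixed image references in rendered manifests.
REGISTRY='gcr.io/gcp-maas-prod'   # substitute your own registry if different
# The printf below fakes one broken rendered line; replace it with the
# real `helm template ...` pipeline in practice.
printf 'image: %s/%s/solace-agent-mesh-enterprise:1.97.2\n' "$REGISTRY" "$REGISTRY" \
  | grep -c "$REGISTRY/$REGISTRY"
# → 1
```

A count above zero means the registry prefix is duplicated and the image configuration migration is incomplete.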
4. Session Key Secret Location Changed
In 1.2.x the session key was stored in <release>-environment. In 1.500.0 it moves to a new secret with a different name. On first upgrade, the chart cannot find the old value and generates a new random key, instantly logging out all active users.
If sam.sessionSecretKey was already set explicitly in your 1.2.x values: carry it forward unchanged. No action needed.
If it was not set explicitly, extract the value before upgrading:
```shell
kubectl get secret <release>-environment -n <namespace> \
  -o go-template='{{index .data "SESSION_SECRET_KEY" | base64decode}}{{"\n"}}'
```
Then set it explicitly in your 1.500.0 values:
```yaml
sam:
  sessionSecretKey: "<value from above>"
```
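The two steps can be combined; the following is a sketch only (replace the `<release>`/`<namespace>` placeholders, and note that the file name values-1.500.0.yaml is illustrative):

```shell
KEY=$(kubectl get secret <release>-environment -n <namespace> \
  -o go-template='{{index .data "SESSION_SECRET_KEY" | base64decode}}')
cat >> values-1.500.0.yaml <<EOF
sam:
  sessionSecretKey: "${KEY}"
EOF
```

If your values file already contains a sam: block, merge the key into it instead of appending a duplicate block.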
5. Pull Secret Migration
In 1.2.x, pull secrets were attached to the shared solace-agent-mesh-sa ServiceAccount. In 1.500.0, core and agent-deployer pods use new auto-generated ServiceAccounts with no pull secret attached.
Old values (1.2.x):
```yaml
samDeployment:
  imagePullSecret: "my-reg-secret"
```
New values (1.500.0):
```yaml
global:
  imagePullSecrets:
    - "my-reg-secret"
samDeployment:
  imagePullSecret: ""   # clear the old field
```
Alternatively, use global.imagePullKey with --set-file to let the chart create the secret automatically. See GCR Credentials File.
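As a sketch of that alternative (paths and names are placeholders; global.imagePullKey is mutually exclusive with global.imagePullSecrets):

```shell
helm upgrade <release> /path/to/charts/solace-agent-mesh-<version>.tgz \
  -n <namespace> \
  -f values-1.500.0.yaml \
  --set-file global.imagePullKey=/path/to/gcr-credentials.json
```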
6. Secrets and ConfigMaps Restructured
The monolithic secret and configmap have been split into multiple focused resources for improved security and organization.
Old Resources (pre-1.500.0):
- solace-agent-mesh-secret (single monolithic secret)
- solace-agent-mesh-config (single monolithic configmap)
New Resources (1.500.0):
All resources follow the naming pattern {release}-solace-agent-mesh-{component}. To see the exact names in your deployment:
```shell
kubectl get secrets -n <namespace> -l app.kubernetes.io/instance=<release>
kubectl get configmaps -n <namespace> -l app.kubernetes.io/instance=<release>
```
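For updating external references, the naming pattern can be expanded ahead of time; the loop below is a sketch with illustrative release and component names (use the kubectl commands above for the authoritative list in your cluster):

```shell
# Sketch: predict new resource names from the documented pattern
# {release}-solace-agent-mesh-{component}. Component names here are
# examples only, not an exhaustive or guaranteed list.
release=my-sam
for component in core-secrets core-env database storage; do
  echo "${release}-solace-agent-mesh-${component}"
done
```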
Migration Action:
- Pods will automatically pick up the new secret and configmap names
- Update external references if you have:
- External Secrets Operator syncing to Solace Agent Mesh secrets
- ArgoCD or other GitOps patches referencing old names
- Custom scripts or operators reading Agent Mesh secrets/configmaps
- Backup/restore automation referencing old names
7. Default Values Changed
Chart 1.500.0 changes several default values to suit quickstart evaluation.
| Setting | Pre-1.500.0 Default | 1.500.0 Default | Impact |
|---|---|---|---|
| global.broker.embedded | N/A (new field) | true | Deploys embedded Solace broker |
| global.persistence.enabled | false | true | Deploys PostgreSQL and SeaweedFS |
| sam.authorization.enabled | true | false | Disables RBAC/OIDC authentication |
| service.type | LoadBalancer | ClusterIP | Requires port-forward for access |
| service.tls.enabled | true | false | Disables TLS |
| samDeployment.image.pullPolicy | Always | IfNotPresent | Reduces registry load |
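Because the new defaults target quickstart evaluation, production deployments should pin the previous behavior explicitly. A sketch built from the table above (review each value against your environment; in particular, keep persistence enabled if you adopted bundled persistence):

```yaml
global:
  broker:
    embedded: false      # external broker, as in 1.2.x
  persistence:
    enabled: false       # pre-1.500.0 default; set true if using bundled persistence
sam:
  authorization:
    enabled: true        # re-enable RBAC/OIDC
service:
  type: LoadBalancer
  tls:
    enabled: true
samDeployment:
  image:
    pullPolicy: Always
```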
8. Sample Values Files Removed
Sample values files in samples/values/ have been removed and consolidated into comprehensive inline documentation within the main values.yaml.
Removed Files:
- samples/values/quickstart.yaml
- samples/values/production.yaml
- samples/values/sam-tls-oidc-bundled-persistence.yaml
- samples/values/sam-tls-bundled-persistence-no-auth.yaml
- Other sample files
New Approach:
- Use the main values.yaml as reference documentation
- Create custom override files (for example, production-overrides.yaml)
Migration Action:
- If you were using -f samples/values/*.yaml, migrate to custom override files
- See the inline documentation in values.yaml for all configuration options
- See Production Kubernetes Installation for examples
9. Bundled Persistence VCT Labels
This section only applies if you are using bundled persistence (global.persistence.enabled: true) and upgrading from chart version 1.1.0 or earlier. External persistence users and new installations are not affected.
Starting with chart versions after 1.1.0, the bundled persistence layer uses minimal VolumeClaimTemplate (VCT) labels for StatefulSets. This prevents upgrade failures when labels change over time, but a one-time migration is required for existing deployments.
Why this matters: Kubernetes StatefulSet VCT labels are immutable. Without migration, upgrades will fail with:
```text
StatefulSet.apps "xxx-postgresql" is invalid: spec: Forbidden: updates to statefulset spec
for fields other than 'replicas', 'ordinals', 'template', 'updateStrategy'... are forbidden
```
Step 1: Delete StatefulSets while preserving data (--cascade=orphan retains PVCs):
```shell
kubectl delete sts <release>-postgresql <release>-seaweedfs --cascade=orphan -n <namespace>
```
Step 2: Upgrade the Helm release:
```shell
helm upgrade <release> /path/to/charts/solace-agent-mesh-<version>.tgz \
  -f your-values.yaml \
  -n <namespace>
```
Step 3: Verify the upgrade succeeded and data is intact:
```shell
kubectl get pods -l app.kubernetes.io/instance=<release> -n <namespace>
kubectl get pvc -l app.kubernetes.io/instance=<release> -n <namespace>
```
The new StatefulSets automatically reattach to the existing PVCs, preserving all data.
10. Image Pull Policy Change
The default pullPolicy for all images has changed from Always to IfNotPresent.
Old Behavior (pre-1.500.0):
```yaml
samDeployment:
  image:
    pullPolicy: Always
  agentDeployer:
    image:
      pullPolicy: Always
```
New Behavior (1.500.0):
```yaml
samDeployment:
  image:
    pullPolicy: IfNotPresent   # New default
  agentDeployer:
    image:
      pullPolicy: IfNotPresent # New default
```
Impact:
- Deployments with pinned tags (for example, 1.97.2) are unaffected
- If you use mutable tags (for example, latest) or republish images under the same tag, restore the previous behavior explicitly:
```yaml
samDeployment:
  image:
    pullPolicy: Always
  agentDeployer:
    image:
      pullPolicy: Always
```
Migration Action:
- Review your image tagging strategy
- If using immutable tags (recommended), no action needed
- If using mutable tags, explicitly set pullPolicy: Always
Migration Checklist
Phase 1: Update Values File
- Remove localCharts and chartBaseUrl
- Add global.broker.embedded: false
- Restructure image configuration
- Preserve session key
- Migrate pull secret
- Apply default value overrides (broker, persistence, authorization)
- Migrate from sample values files (if applicable)
- Review image pull policy
- Validate values file (catches schema errors before touching the cluster)
Phase 2: Prepare the Cluster
- Backup current deployment:
  - Export values: helm get values <release> -n <namespace> > current-values.yaml
  - If using bundled persistence, verify your PVC backup strategy
- Migrate StatefulSet VCT labels (bundled persistence and upgrading from 1.1.0 or earlier only)
- Preserve shared ServiceAccount (bundled persistence users only). Set in your values file:

```yaml
samDeployment:
  serviceAccount:
    name: "solace-agent-mesh-sa"
persistence-layer:
  postgresql:
    serviceAccountName: solace-agent-mesh-sa
  seaweedfs:
    serviceAccountName: solace-agent-mesh-sa
```

- Update external references (External Secrets Operator, ArgoCD/Flux, backup scripts)
Phase 3: Upgrade and Verify
- Run helm upgrade (see Upgrade Command)
- Confirm running agents (sam-agent-*) were unaffected; they continue on the old agent chart throughout
- Verify pods start successfully (check for ImagePullBackOff)
- Test RBAC/OIDC authentication
Upgrade Command
After completing the migration checklist:
Step 1: Validate the values file (catches schema errors before touching the cluster)
```shell
helm template <release> <chart> -n <namespace> -f values-1.500.0.yaml > /dev/null
```
Also verify image references have no double-prefix:
```shell
helm template <release> <chart> -f values-1.500.0.yaml \
  | grep "image:" | sort -u
```
- Correct: gcr.io/gcp-maas-prod/solace-agent-mesh-enterprise:1.97.2
- Wrong: gcr.io/gcp-maas-prod/gcr.io/gcp-maas-prod/solace-agent-mesh-enterprise:1.97.2
Step 2: Review what will change (if using helm-diff plugin)
```shell
helm diff upgrade <release> <chart> \
  --namespace <namespace> \
  -f values-1.500.0.yaml
```
Step 3: Run the upgrade
```shell
helm upgrade <release> <chart> \
  --namespace <namespace> \
  -f values-1.500.0.yaml
```
helm upgrade on the main chart does not touch sam-agent-* pods. They continue running on the old agent chart throughout the upgrade with no intervention required. After the upgrade, new agents use the 1.500.0 agent chart. Existing agents can be redeployed from the Agent Mesh UI to pick up the new version.
Step 4: Verify the upgrade
```shell
# Check rollout status
kubectl rollout status deployment/<release>-solace-agent-mesh-core -n <namespace>

# Verify pods are running
kubectl get pods -l app.kubernetes.io/instance=<release> -n <namespace>

# Check for ImagePullBackOff errors
kubectl get pods -l app.kubernetes.io/instance=<release> -n <namespace> | grep -i "ImagePullBackOff\|ErrImagePull"

# Confirm new secrets were created
kubectl get secrets -n <namespace> | grep -E "core-secrets|database|storage"
```
Troubleshooting Migration Issues
ImagePullBackOff After Upgrade
Symptom: Pods fail to start with ImagePullBackOff or ErrImagePull errors.
Cause: Double-prefixed image reference due to incomplete image configuration migration.
Solution:
```shell
# Check actual image reference being used
kubectl describe pod <pod-name> -n <namespace> | grep "Image:"

# If you see double-prefix (e.g., gcr.io/gcp-maas-prod/gcr.io/gcp-maas-prod/...):
# 1. Update your values file to remove registry from repository
# 2. Rollback and re-upgrade with corrected values
helm rollback <release> -n <namespace>
helm upgrade <release> /path/to/charts/solace-agent-mesh-<version>.tgz -n <namespace> -f corrected-values.yaml
```
StatefulSet Update Failures (Bundled Persistence)
Symptom: Upgrade fails with "StatefulSet.apps is invalid: spec: Forbidden" error.
Cause: VCT labels are immutable in StatefulSets.
Solution: See the Bundled Persistence VCT Labels section.
Service Account Not Found
Symptom: Pods fail with "service account not found" errors.
Cause: Service account name changed from hardcoded to auto-generated.
Solution:
```yaml
# In your values file, explicitly set the old service account name
samDeployment:
  serviceAccount:
    name: solace-agent-mesh-sa   # Your old SA name
```
External References Broken
Symptom: External Secrets Operator, ArgoCD patches, or monitoring fail.
Cause: Secret/ConfigMap names changed from monolithic to focused resources.
Solution: Update external references to use new resource names:
- Old: <release>-secret, <release>-config
- New: <release>-secret-auth, <release>-secret-core, <release>-core-env, and so on
Rollback
If you encounter issues, rollback to the previous chart version:
```shell
helm rollback <release> -n <namespace>
```
After rollback:
- Verify pods are running
- Check that old secrets/configmaps still exist
Getting Help
If you encounter migration issues:
- Check the inline documentation in the chart's values.yaml
- Contact Solace support with your migration questions
Related Documentation
- Kubernetes Quick Start - Updated for 1.500.0
- Production Kubernetes Installation - Production configuration examples
- Air-Gapped Kubernetes Installation - Air-gapped deployment guidance