## Requirements
- Kubernetes cluster at version >= 1.24
- Autoscaler (optional, recommended)
- Persistent Volume/Storage Class (optional)
- AMD64- or ARM64-based nodes
- Helm (the agent is installed using a Helm chart)
## Cluster Installation

The agent can run on an existing Kubernetes cluster in a dedicated namespace, or you can create a cluster just for the agent. Use our k8s-modules repository, which contains Terraform code that simplifies cluster installation. You can use the main provider folder for a complete installation, or a specific module to fulfill only certain requirements.
### Autoscaler (recommended, but optional)
- While optional, configuring horizontal autoscaling allows your cluster to adapt to concurrency and deployment requirements based on your env0 usage. Otherwise, your deployment concurrency is limited by the cluster's capacity. See also Job Limits if you wish to control the maximum number of concurrent deployments.
- The env0 agent creates a new pod for each deployment you run on env0. Pods are ephemeral and are destroyed after a single deployment.
- A pod running a single deployment requests at least `cpu: 460m` and `memory: 1500Mi`, so the cluster nodes must be able to satisfy this resource request. Limits can be adjusted by providing custom configuration during chart installation.
- Minimum node requirements: an instance with at least 2 CPUs and 8GiB of memory.
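As a hedged sketch, a resource override in `values.customer.yaml` might look like the fragment below. The top-level key name here is illustrative, not the chart's actual key — consult the chart's documented values for the real override path:

```yaml
# Hypothetical values.customer.yaml fragment. The key name
# "deploymentPodResources" is illustrative; check the chart's
# values reference for the actual key.
deploymentPodResources:
  requests:
    cpu: 460m       # the documented minimum request
    memory: 1500Mi
  limits:
    cpu: "1"        # example limits -- tune for your workloads
    memory: 3000Mi
```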
### Persistent Volume/Storage Class (optional)
- env0 stores the deployment state and working directory on a persistent volume in the cluster.
- The volume must support Dynamic Provisioning and the ReadWriteMany access mode.
- The requested storage space is `300Gi`.
- The cluster must include a `StorageClass` named `env0-state-sc`.
- The Storage Class should be set up with `reclaimPolicy: Retain`, to prevent data loss in case the agent needs to be replaced or uninstalled.
| Cloud | Solution |
|---|---|
| AWS | EFS CSI. For an EKS cluster, you can use this Terraform example: EFS CSI Driver/StorageClass |
| GCP | Filestore, OpenSource NFS |
| Azure | Azure Files |
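To make the requirements concrete, a StorageClass for the AWS EFS CSI driver might look like the sketch below. The file-system ID is a placeholder and the `parameters` may differ in your setup; only the class name and reclaim policy are requirements from this document:

```yaml
# Illustrative StorageClass for the AWS EFS CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: env0-state-sc        # name required by the agent
provisioner: efs.csi.aws.com
reclaimPolicy: Retain        # prevents data loss on uninstall/replace
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-xxxxxxxx  # placeholder -- your EFS file system ID
  directoryPerms: "700"
```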
### PVC Alternative

By default, the deployment state and working directory are stored in a PV (Persistent Volume) configured on your Kubernetes cluster. If PV creation or management is difficult, or not required, you can use env0-Hosted Encrypted State with `env0StateEncryptionKey`.

### Sensitive Secrets
- Using secrets stored on the env0 platform is not allowed for self-hosted agents, since self-hosted agents let you store secrets on your own infrastructure.
- Customers using self-hosted agents may use their own Kubernetes Secret to store sensitive values (see `env0ConfigSecretName` below).
- If you are migrating from SaaS to a self-hosted agent, deployments attempting to use platform-stored secrets will fail.
- This includes sensitive configuration variables, SSH keys, and cloud deployment credentials. The values for these secrets should be replaced with references to your secret store, as detailed in the table below.
- To use an external secret store, authentication to the secret store must be configured using a custom Helm values file. The required parameters are detailed in Custom/Optional Configuration.
- Storing secrets is supported with the following secret stores:
| Secret store | Secret reference format | Secret Region & Permissions |
|---|---|---|
| AWS Secrets Manager | `${ssm:<secret-name>}` | Set by the `awsSecretsRegion` Helm value; defaults to `us-east-1`. The role must have the `secretsmanager:GetSecretValue` permission. |
| GCP Secrets Manager | `${gcp:<secret-id>}` | Your GCP project's default region. Access to the secret must be possible using the `customerGoogleCredentials` configuration or GKE Workload Identity. The `customerGoogleProject` configuration must be supplied and is used to access secrets in that project only. The `secrets.versions.access` permission is required. |
| Azure Key Vault | `${azure:<secret-name>@<vault-name>}` | Your Azure subscription's default region. |
| HashiCorp Vault | `${vault:<path>.<key>@<namespace>}`, where `@<namespace>` is optional | |
| OCI Vault Secrets | `${oci:<secret-id>}` | The region defined in the credentials provided in the agent configuration. |
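Separately from external secret stores, sensitive agent configuration can live in your own Kubernetes Secret referenced via `env0ConfigSecretName`. A hedged sketch follows; the secret name and key names are placeholders, and the keys the chart actually expects should be taken from the chart's documentation:

```shell
# Create a Secret in the agent namespace holding sensitive values.
# Secret name and keys below are illustrative placeholders.
kubectl create secret generic env0-agent-config \
  --namespace <MY_NAMESPACE> \
  --from-literal=AWS_ACCESS_KEY_ID=<key-id> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<secret-key>

# Then reference it in values.customer.yaml:
#   env0ConfigSecretName: env0-agent-config
```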
## Custom/Optional Configuration

You provide your `agentAccessToken` (from the env0 UI) along with an optional `values.customer.yaml` file for additional configuration. For the complete list of configuration options, see Custom/Optional Configuration.
## Further Configuration

The env0 agent externalizes a wide array of values that can be set to configure the agent. We do our best to support all common configuration scenarios, but sometimes a more exotic or pre-release configuration is required. For such advanced cases, see this reference example of using Kustomize with Helm Post Rendering to further customize our chart.

## Job Limits
You may wish to limit the number of concurrent runs. To do so, add a Resource Quota to the agent namespace with a parameter on `count/jobs.batch`.
See here for more details.
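For example, a Resource Quota like the following would cap the agent namespace at 10 concurrent deployment jobs. The quota name and limit are illustrative:

```yaml
# Illustrative ResourceQuota limiting concurrent agent jobs.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: env0-job-limit       # name is illustrative
  namespace: <MY_NAMESPACE>  # the agent's namespace
spec:
  hard:
    count/jobs.batch: "10"   # max concurrent deployment jobs
```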
## Installation

1. Add our Helm repo.
2. Update the Helm repo.
3. Get your agent access token by creating an agent pool in the env0 UI.
4. Install the Helm chart.
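The steps above might look like the following shell session. The repository URL, chart name, and `--set` key are assumptions based on env0's public Helm repository, so verify them against env0's documentation; the token and namespace are placeholders:

```shell
# 1. Add the env0 Helm repository (URL is an assumption -- verify
#    against env0's documentation).
helm repo add env0 https://env0.github.io/self-hosted

# 2. Refresh the local chart index.
helm repo update

# 3-4. Install the chart into a dedicated namespace, passing the agent
#      access token from the env0 UI and an optional custom values file.
#      Chart name and value key are illustrative.
helm install env0-agent env0/env0-agent \
  --namespace <MY_NAMESPACE> --create-namespace \
  --set agentAccessToken=<YOUR_AGENT_ACCESS_TOKEN> \
  -f values.customer.yaml
```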
### Terraform example
## Upgrade

### Upgrade Notes

Make sure to keep your `values.customer.yaml` file for use during upgrades.

### Verify Installation/Upgrade
After installing a new version of the env0 agent Helm chart, it is highly recommended to verify the installation by running:

### Using the helm template command
As an alternative to installing the agent with Helm directly, you can use `helm template` to generate the Kubernetes YAML files. You can then apply these files with a different Kubernetes pipeline, for example by running `kubectl apply` or using ArgoCD.
To generate the YAML files using `helm template`, you should first add the env0 Helm chart.
- `<KUBERNETES_VERSION>` is the version of your Kubernetes cluster.
- `<MY_NAMESPACE>` is the Kubernetes namespace in which the agent will be installed.
- `values.customer.yaml` is your custom values file for additional configuration options.
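Under the placeholders above, the render-and-apply flow might look like the following sketch (the chart name `env0/env0-agent` is an assumption; `--kube-version` is Helm's flag for targeting a specific cluster version during templating):

```shell
# Render the chart to plain Kubernetes manifests instead of installing.
helm template env0-agent env0/env0-agent \
  --kube-version <KUBERNETES_VERSION> \
  --namespace <MY_NAMESPACE> \
  -f values.customer.yaml > env0-agent.yaml

# Apply through your own pipeline, e.g. kubectl directly or ArgoCD.
kubectl apply -n <MY_NAMESPACE> -f env0-agent.yaml
```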
## 🌐 Required Outbound Domains
| Wildcard / Domain | Purpose / Used By |
|---|---|
| *.env0.com, *.amazonaws.com | env0 SaaS Platform - Required for the agent to communicate with the env0 SaaS platform. |
| ghcr.io | GitHub Container Registry - Hosts the Docker image of the env0 agent. |
| *.hashicorp.com | HashiCorp - Used to download Terraform binaries. |
| registry.terraform.io, registry.opentofu.org | Module Registries - Used to download public modules from the Terraform or OpenTofu registries. |
| github.com, gitlab.com, bitbucket.org | Version Control Systems (VCS) - Used for Git operations over ports 22, 9418, 80, and 443. |
| *.infracost.io | Infracost - Used for cost estimation functionality. |
| github.com | External Tools & Integrations - Used to download external tools required for custom flows or env0 native integrations. |
| dl.k8s.io | Kubernetes - Used to download and install kubectl. |
| get.helm.sh | Helm - Used to download and install helm. |
| dl.google.com | Google Cloud SDK - Used to download and install gcloud. |
💡 Note: All domains listed above require outbound HTTPS (port 443) access from the env0 agent.
Only open access to the domains for the features you are actually using.