For an overview of self-hosted agents, including how to create and manage agent pools, see Self-Hosted Agents Overview.

Requirements

Cluster Installation

The agent can run on an existing Kubernetes cluster in a dedicated namespace, or you can create a cluster just for the agent. Use our k8s-modules repository, which contains Terraform code that simplifies cluster installation. You can use the main provider folder for a complete installation, or a specific module to fulfill only certain requirements.
  • While optional, configuring horizontal auto-scaling allows your cluster to adapt to the concurrency and deployment requirements of your env0 usage. Otherwise, your deployment concurrency is limited by the cluster's capacity. Also see Job Limits if you wish to control the maximum number of concurrent deployments.
  • The env0 agent creates a new pod for each deployment you run on env0.
    Pods are ephemeral and are destroyed after a single deployment.
  • A pod running a single deployment requires at least cpu: 460m and memory: 1500Mi, so the cluster nodes must be able to provide this resource request. Limits can be adjusted by providing custom configuration during chart installation.
  • Minimum node requirements: an instance with at least 2 CPU and 8GiB memory.
For the EKS cluster, you can use this TF example.
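To check whether your existing nodes satisfy these minimums, you can inspect each node's allocatable resources with standard kubectl (requires access to the cluster):

```shell
# List each node's allocatable CPU and memory; compare against the
# 2 CPU / 8GiB node minimum and the 460m / 1500Mi per-pod request.
kubectl get nodes -o custom-columns='NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory'
```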

Persistent Volume/Storage Class (optional)

  • env0 stores the deployment state and working directory on a persistent volume in the cluster.
  • Must support Dynamic Provisioning and ReadWriteMany access mode.
  • The requested storage space is 300Gi.
  • The cluster must include a StorageClass named env0-state-sc.
  • The Storage Class should be set up with reclaimPolicy: Retain, to prevent data loss in case the agent needs to be replaced or uninstalled.
We recommend the following implementations for the major cloud providers:

| Cloud | Solution |
| --- | --- |
| AWS | EFS CSI. For the EKS cluster, you can use this TF example - EFS CSI-Driver/StorageClass |
| GCP | Filestore, OpenSource NFS |
| Azure | Azure Files |
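As an illustration, a StorageClass for the AWS EFS CSI driver meeting the requirements above might look like the following sketch; the fileSystemId is a placeholder for your own EFS file system:

```yaml
# The agent expects a StorageClass with this exact name.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: env0-state-sc
provisioner: efs.csi.aws.com   # EFS CSI driver; supports ReadWriteMany
reclaimPolicy: Retain          # keep state if the agent is replaced or uninstalled
parameters:
  provisioningMode: efs-ap     # dynamic provisioning via EFS access points
  fileSystemId: fs-0123456789abcdef0   # placeholder: your EFS file system ID
  directoryPerms: "700"
```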
PVC Alternative: By default, the deployment state and working directory are stored in a Persistent Volume (PV) configured on your Kubernetes cluster. If PV creation or management is difficult, or not required, you can use env0-Hosted Encrypted State with env0StateEncryptionKey.

Sensitive Secrets

  • Using secrets stored on the env0 platform is not allowed for self-hosted agents, since self-hosted agents allow you to store secrets on your own infrastructure.
  • Customers using self-hosted agents may use their own Kubernetes Secret to store sensitive values - see env0ConfigSecretName below.
  • If you are migrating from SaaS to a self-hosted agent, deployments attempting to use these secrets will fail.
  • This includes sensitive configuration variables, SSH keys, and Cloud Deployment credentials. The values for these secrets should be replaced with references to your secret store, as detailed in the table below.
  • In order to use an external secret store, authentication to the secret store must be configured using a custom Helm values file. The required parameters are detailed in Custom/Optional Configuration.
  • Storing secrets is supported using these secret stores:
| Secret store | Secret reference format | Secret Region & Permissions |
| --- | --- | --- |
| AWS Secrets Manager | ${ssm:<secret-name>} | Set by the awsSecretsRegion Helm value; defaults to us-east-1. The role must have the secretsmanager:GetSecretValue permission. |
| GCP Secrets Manager | ${gcp:<secret-id>} | Your GCP project's default region. Access to the secret must be possible using the customerGoogleCredentials configuration or GKE Workload Identity. The customerGoogleProject configuration must be supplied and is used to access secrets in that project only. The secrets.versions.access permission is required. |
| Azure Key Vault | ${azure:<secret-name>@<vault-name>} | Your Azure subscription's default region |
| HashiCorp Vault | ${vault:<path>.<key>@<namespace>} where @<namespace> is optional | |
| OCI Vault Secrets | ${oci:<secret-id>} | The region defined in the credentials provided in the agent configuration. |
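For the Kubernetes Secret option mentioned above (referenced via the env0ConfigSecretName Helm value), a sketch of such a Secret might look like this; the secret name and keys here are illustrative, not a documented schema:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: env0-config          # illustrative name; pass it as env0ConfigSecretName
  namespace: env0-agent
type: Opaque
stringData:
  # Illustrative keys only - map your own sensitive agent configuration here
  AWS_ACCESS_KEY_ID: "<redacted>"
  AWS_SECRET_ACCESS_KEY: "<redacted>"
```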
Allow storing secrets in env0: Alternatively, you can explicitly allow env0 to store secrets on its platform by opting in via your organization's policy. For more info, read here.

Custom/Optional Configuration

You provide your agentAccessToken (from the env0 UI) along with an optional values.customer.yaml for additional configuration. For the complete list of configuration options, see Custom/Optional Configuration.

Further Configuration

The env0 agent externalizes a wide array of values that can be set to configure the agent. We do our best to support all common configuration scenarios, but sometimes a more exotic or pre-release configuration is required. For such advanced cases, see this reference example of using Kustomize with Helm post-rendering to further customize our chart.

Job Limits

You may wish to limit the number of concurrent runs. To do so, add a ResourceQuota to the agent namespace with a count/jobs.batch parameter. See here for more details.
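For example, a ResourceQuota capping the agent namespace at five concurrent deployment Jobs might look like this sketch (the quota name and limit value are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: env0-job-limit       # illustrative name
  namespace: env0-agent
spec:
  hard:
    count/jobs.batch: "5"    # at most 5 Jobs (i.e. deployments) at once
```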

Installation

  1. Add our Helm Repo
    helm repo add env0 https://env0.github.io/self-hosted
    
  2. Update Helm Repo
    helm repo update
    
  3. Get your agent access token by creating an agent pool in the env0 UI.
  4. Install the Helm Charts
    helm install --create-namespace env0-agent env0/env0-agent --namespace env0-agent --set-string agentAccessToken='<your-agent-access-token>' -f values.customer.yaml
    # values.customer.yaml should contain any optional configuration options as detailed above
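As a sketch, a minimal values.customer.yaml using options mentioned in this guide might look like the following; all values are placeholders for your own environment:

```yaml
# Illustrative values.customer.yaml - every value below is a placeholder
awsSecretsRegion: eu-west-1            # region for AWS Secrets Manager lookups
env0ConfigSecretName: env0-config      # Kubernetes Secret holding sensitive agent config
customerGoogleProject: my-gcp-project  # GCP project used for GCP secret access
```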
    
Terraform example
# Create agent pool and install agent using helm chart

variable "agent_custom_configuration" {
  type        = map(any)
  description = "See https://docs.env0.com/guides/admin-guide/self-hosted-kubernetes-agent/custom-optional-configuration"
}

resource "env0_agent_pool" "default" {
  name = "default self-hosted agent"
}

resource "env0_agent_secret" "first" {
  agent_id = env0_agent_pool.default.id
}

resource "helm_release" "this" {
  repository = "https://env0.github.io/self-hosted"
  chart      = "env0-agent"

  namespace = "env0-agent-default"
  name      = "env0-agent-default"

  create_namespace = true
  timeout          = 600

  set_sensitive {
    name  = "agentAccessToken"
    value = env0_agent_secret.first.secret
  }

  values = [
    yamlencode(var.agent_custom_configuration)
  ]
}
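The agent_custom_configuration variable above maps directly onto Helm values. A hedged terraform.tfvars example, reusing keys mentioned elsewhere in this guide with placeholder values, might be:

```hcl
# terraform.tfvars - illustrative values only
agent_custom_configuration = {
  awsSecretsRegion     = "eu-west-1"
  env0ConfigSecretName = "env0-config"
}
```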

Upgrade

helm upgrade env0-agent env0/env0-agent --namespace env0-agent
Upgrade Notes: Make sure to keep your values.customer.yaml file for use during upgrades.
Custom Agent Docker Image: If you extended the agent's Docker image, you should update the agent version in your custom image as well.

Verify Installation/Upgrade

After installing a new version of the env0 agent Helm chart, it is highly recommended to verify the installation by running:
helm test env0-agent --namespace env0-agent --logs --timeout 1m

Using the helm template command

As an alternative to installing the agent directly with helm, you can use helm template to generate the Kubernetes YAML files, then apply those files with a different pipeline, such as kubectl apply or ArgoCD. To generate the YAML files with helm template, first add the env0 Helm chart repository:
helm repo add env0 https://env0.github.io/self-hosted
helm repo update
Then, run the following command. If your Kubernetes cluster is version 1.21 or later:
helm template env0-agent env0/env0-agent --kube-version=<KUBERNETES_VERSION> --api-version=batch/v1/CronJob -n <MY_NAMESPACE> --set-string agentAccessToken='<your-agent-access-token>' -f values.customer.yaml
If your Kubernetes cluster version is earlier than 1.21:
helm template env0-agent env0/env0-agent --kube-version=<KUBERNETES_VERSION> -n <MY_NAMESPACE> --set-string agentAccessToken='<your-agent-access-token>' -f values.customer.yaml
  • <KUBERNETES_VERSION> is the version of your Kubernetes cluster
  • <MY_NAMESPACE> is the Kubernetes namespace in which the agent will be installed
  • values.customer.yaml is your custom values file for additional configuration options.
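For instance, one common pattern (assuming cluster access for the final step; version and namespace values are placeholders) is to render the manifests to a file and apply them with kubectl:

```shell
# Render the chart to plain Kubernetes manifests (no cluster connection needed)
helm template env0-agent env0/env0-agent \
  --kube-version=1.27 \
  -n env0-agent \
  --set-string agentAccessToken='<your-agent-access-token>' \
  -f values.customer.yaml > env0-agent.yaml

# Review the output, then apply with your own pipeline or directly:
kubectl apply -n env0-agent -f env0-agent.yaml
```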
Using env0ConfigSecretName with the helm template command: If you use helm template, the feature that checks the Kubernetes Secret defined by the env0ConfigSecretName Helm value to determine whether the PVC should be created will not function, since it relies on an active connection to the cluster.

🌐 Required Outbound Domains

| Wildcard / Domain | Purpose / Used By |
| --- | --- |
| *.env0.com, *.amazonaws.com | env0 SaaS Platform - required for the agent to communicate with the env0 SaaS platform. |
| ghcr.io | GitHub Container Registry - hosts the Docker image of the env0 agent. |
| *.hashicorp.com | HashiCorp - used to download Terraform binaries. |
| registry.terraform.io, registry.opentofu.org | Module registries - used to download public modules from the Terraform or OpenTofu registries. |
| github.com, gitlab.com, bitbucket.org | Version control systems (VCS) - used for Git operations over ports 22, 9418, 80, and 443. |
| *.infracost.io | Infracost - used for cost estimation functionality. |
| github.com | External tools & integrations - used to download external tools required for custom flows or env0 native integrations. |
| dl.k8s.io | Kubernetes - used to download and install kubectl. |
| get.helm.sh | Helm - used to download and install helm. |
| dl.google.com | Google Cloud SDK - used to download and install gcloud. |

💡 Note: All domains listed above require outbound HTTPS (port 443) access from the env0 agent.
Only open access to the domains for the features you are actually using.
Firewall Rules: Note that if your cluster is behind a managed firewall, you might need to whitelist the cluster's API server FQDN and corresponding public IP.