Install Calico networking and network policy for on-premises deployments
Big picture
Install Calico to provide both networking and network policy for self-managed on-premises deployments.
Value
Calico networking and network policy are a powerful choice for a CaaS implementation. If you have the networking infrastructure and resources to manage Kubernetes on-premises, installing the full Calico product provides the most customization and control.
Concepts
Recommended: Tigera Operator
Calico is installed by an operator which manages the installation, upgrade, and general lifecycle of a Calico cluster. The operator is installed directly on the cluster as a Deployment, and is configured through one or more custom Kubernetes API resources.
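For context, operator configuration is expressed through custom resources such as `Installation`. A minimal, illustrative sketch (the field values shown are examples only, not required settings):

```yaml
# Illustrative only: a minimal Installation custom resource consumed by the
# Tigera Operator. Values here are examples, not requirements.
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      - cidr: 192.168.0.0/16
        encapsulation: VXLANCrossSubnet
```

In practice you rarely write this by hand; the downloadable `custom-resources.yaml` manifest provides a starting point that you edit before applying.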
Calico manifests
Calico can also be installed using raw manifests as an alternative to the operator. The manifests contain the necessary resources for installing Calico on each node in your Kubernetes cluster. Using manifests is not recommended because they cannot automatically manage the lifecycle of Calico the way the operator does. However, manifests may be useful for clusters that require highly specific modifications to the underlying Kubernetes resources.
Before you begin...
- Ensure that your Kubernetes cluster meets requirements. If you do not have a cluster, see Installing kubeadm.
How to
Install Calico
- Operator
- Manifest (v3 CRDs)
- Migrate to v3 CRDs
- Manifest
Install the Tigera Operator and custom resource definitions.

```shell
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.32.0/manifests/v1_crd_projectcalico_org.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.32.0/manifests/tigera-operator.yaml
```
Download the custom resources necessary to configure Calico.

:::tip
Automatic data plane deployment: you can select either the eBPF or the iptables data plane to be deployed automatically.
:::

For eBPF:

```shell
curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.32.0/manifests/custom-resources-bpf.yaml
```

For iptables:

```shell
curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.32.0/manifests/custom-resources.yaml
```

If you wish to customize the Calico install, customize the downloaded custom resources manifest locally.
Create the manifest to install Calico.

For eBPF:

```shell
kubectl create -f custom-resources-bpf.yaml
```

For iptables:

```shell
kubectl create -f custom-resources.yaml
```
Monitor the deployment by running the following command:

```shell
watch kubectl get tigerastatus
```

After a few minutes, all the Calico components display `True` in the `AVAILABLE` column.

Expected output (eBPF):

```
NAME                AVAILABLE   PROGRESSING   DEGRADED   SINCE
apiserver           True        False         False      4m9s
calico              True        False         False      3m29s
goldmane            True        False         False      3m39s
ippools             True        False         False      6m4s
kubeproxy-monitor   True        False         False      6m15s
whisker             True        False         False      3m19s
```

Expected output (iptables):

```
NAME        AVAILABLE   PROGRESSING   DEGRADED   SINCE
apiserver   True        False         False      4m9s
calico      True        False         False      3m29s
goldmane    True        False         False      3m39s
ippools     True        False         False      6m4s
whisker     True        False         False      3m19s
```
This feature is tech preview. Tech preview features may be subject to significant changes before they become GA.
This manifest installs Calico using the native projectcalico.org/v3 CRD API group. The v3 API group supports tiered policy, admission webhooks, and other features that aren't available with the legacy crd.projectcalico.org/v1 API group.
If you're setting up a new cluster and don't need to customize the underlying Kubernetes resources, we recommend using the operator instead. The operator automatically manages the lifecycle of Calico components, including upgrades and scaling.
Before installing, enable the MutatingAdmissionPolicy feature gate on the Kubernetes API server. Native projectcalico.org/v3 CRDs rely on MutatingAdmissionPolicies for defaulting, which are currently a beta Kubernetes feature and are not enabled by default.
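On kubeadm clusters, for example, this means adding flags to the kube-apiserver static pod manifest. A sketch (the file path is the kubeadm default, and per upstream Kubernetes documentation the beta `admissionregistration.k8s.io/v1beta1` API group must also be enabled via `--runtime-config`; verify both against your Kubernetes version):

```yaml
# Excerpt of /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm default path).
spec:
  containers:
    - command:
        - kube-apiserver
        - --feature-gates=MutatingAdmissionPolicy=true
        - --runtime-config=admissionregistration.k8s.io/v1beta1=true
```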
Download the Calico v3 CRD manifest.

```shell
curl https://raw.githubusercontent.com/projectcalico/calico/v3.32.0/manifests/calico-v3-crds.yaml -O
```
If you are using pod CIDR `192.168.0.0/16`, skip to the next step. If you are using a different pod CIDR with kubeadm, no changes are required; Calico will automatically detect the CIDR based on the running configuration. For other platforms, make sure you uncomment the `CALICO_IPV4POOL_CIDR` variable in the manifest and set it to the same value as your chosen pod CIDR.
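Uncommenting `CALICO_IPV4POOL_CIDR` can be scripted. A self-contained sketch (the file name `cidr-demo.yaml` and the CIDR `10.244.0.0/16` are made-up examples; the heredoc mimics the commented-out block found in the Calico manifests):

```shell
# Mimic the commented-out CALICO_IPV4POOL_CIDR block from a Calico manifest.
cat > cidr-demo.yaml <<'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF
# Uncomment both lines and substitute a custom pod CIDR.
sed -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
    -e 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' \
    cidr-demo.yaml > cidr-out.yaml
cat cidr-out.yaml
```

Against the real manifest you would point `sed` at the downloaded file instead of the demo heredoc.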
Customize the manifest as necessary.
Apply the manifest.

```shell
kubectl apply -f calico-v3-crds.yaml
```
Generate TLS certificates for the admission webhook.
The manifest applied above includes a `calico-webhooks` Deployment that serves a `ValidatingWebhookConfiguration` for tiered RBAC enforcement on `projectcalico.org/v3` policy resources, but no TLS material is provisioned by default. Until the certificates are in place, write operations on `projectcalico.org/v3` policy resources will fail closed.

The following commands create a self-signed CA and a server certificate valid for `calico-webhooks.kube-system.svc`. If you already manage TLS via cert-manager or an internal PKI, use that instead and skip ahead.

```shell
# Self-signed CA.
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -days 3650 \
  -subj "/CN=calico-webhooks-ca" -out ca.crt

# Server cert with SANs for the webhook service.
openssl genrsa -out tls.key 2048
cat > csr.conf <<'EOF'
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no

[req_distinguished_name]
CN = calico-webhooks.kube-system.svc

[v3_req]
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1 = calico-webhooks.kube-system.svc
DNS.2 = calico-webhooks.kube-system.svc.cluster.local
EOF
openssl req -new -key tls.key -out tls.csr -config csr.conf
openssl x509 -req -in tls.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 -extensions v3_req -extfile csr.conf -out tls.crt
```
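Before wiring the certificates into the cluster, it can be worth confirming that the server certificate actually chains to the CA. A self-contained sketch using throwaway names (`demo-ca`, `demo-tls`) rather than the real files:

```shell
# Throwaway CA and server cert with the same shape as the generation step.
openssl genrsa -out demo-ca.key 2048
openssl req -x509 -new -nodes -key demo-ca.key -days 1 \
  -subj "/CN=demo-ca" -out demo-ca.crt
openssl genrsa -out demo-tls.key 2048
openssl req -new -key demo-tls.key \
  -subj "/CN=calico-webhooks.kube-system.svc" -out demo-tls.csr
openssl x509 -req -in demo-tls.csr -CA demo-ca.crt -CAkey demo-ca.key \
  -CAcreateserial -days 1 -out demo-tls.crt
# Confirm the server cert verifies against the CA.
openssl verify -CAfile demo-ca.crt demo-tls.crt   # prints "demo-tls.crt: OK"
```

Run the same `openssl verify -CAfile ca.crt tls.crt` check against the real pair before creating the secret.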
Create the webhook TLS secret in the `kube-system` namespace.

```shell
kubectl -n kube-system create secret tls calico-webhooks-tls \
  --cert=tls.crt --key=tls.key
```
Patch the `ValidatingWebhookConfiguration` with the CA bundle so the API server trusts the webhook.

```shell
kubectl patch validatingwebhookconfiguration calico-webhooks \
  --type='json' \
  -p="[{\"op\":\"replace\",\"path\":\"/webhooks/0/clientConfig/caBundle\",\"value\":\"$(base64 -w0 < ca.crt)\"}]"
```
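The patch embeds the CA bundle as one unbroken base64 line. GNU `base64` needs `-w0` to disable line wrapping, while the BSD/macOS `base64` does not wrap output (and may not accept `-w0`). A portable sketch, using a dummy file rather than the real `ca.crt`:

```shell
# Portable single-line base64 (dummy input; substitute ca.crt in practice).
printf 'dummy-ca-data' > ca-demo.crt
CA_BUNDLE="$(base64 -w0 < ca-demo.crt 2>/dev/null || base64 < ca-demo.crt | tr -d '\n')"
echo "$CA_BUNDLE"   # prints ZHVtbXktY2EtZGF0YQ==
```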
This feature is tech preview. Tech preview features may be subject to significant changes before they become GA.
If you have an existing manifest-based Calico install using the legacy crd.projectcalico.org/v1 CRDs, you can migrate to the native projectcalico.org/v3 CRDs. This enables tiered policy, admission webhooks, and other v3-only features.
Prerequisites
- Calico installed via the `calico.yaml` manifest (not the operator)
- `kubectl` access to the cluster
- A recent Calico version that includes the migration controller
- The `MutatingAdmissionPolicy` feature gate enabled on the Kubernetes API server before starting the migration. Native `projectcalico.org/v3` CRDs rely on MutatingAdmissionPolicies for defaulting, which are currently a beta Kubernetes feature and are not enabled by default.
Migration steps
Install the v3 CRDs alongside the existing v1 CRDs.

```shell
kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/projectcalico/calico/v3.32.0/manifests/v3_projectcalico_org.yaml
```
Install the DatastoreMigration CRD.

```shell
kubectl apply --server-side -f https://raw.githubusercontent.com/projectcalico/calico/v3.32.0/kube-controllers/pkg/controllers/migration/crd/migration.projectcalico.org_datastoremigrations.yaml
```
Create a DatastoreMigration resource to start the migration. The migration controller copies all Calico resources from v1 CRDs to v3 CRDs.

```shell
kubectl apply -f - <<EOF
apiVersion: migration.projectcalico.org/v1beta1
kind: DatastoreMigration
metadata:
  name: v1-to-v3
spec:
  type: APIServerToCRDs
EOF
```
Monitor the migration progress.

```shell
kubectl get datastoremigration v1-to-v3 -w
```

The migration transitions through `Pending` → `Migrating` → `Converged`. Once it reaches `Converged`, all resources have been copied to v3 CRDs.
When the migration reaches `Converged`, configure calico-node (and calico-typha, if present) to use the v3 API group.

```shell
kubectl set env -n kube-system daemonset/calico-node CALICO_API_GROUP=projectcalico.org/v3
```

If you're running calico-typha, update it as well:

```shell
kubectl set env -n kube-system deployment/calico-typha CALICO_API_GROUP=projectcalico.org/v3
```
The migration controller monitors the rollout and transitions to `Complete` once all pods are running with the v3 API group. kube-controllers restarts automatically to pick up the new API group.

```shell
kubectl get datastoremigration v1-to-v3 -w
```
Once the migration is `Complete`, delete the DatastoreMigration resource to clean up the old v1 CRDs.

```shell
kubectl delete datastoremigration v1-to-v3
```

The finalizer on the resource deletes all `crd.projectcalico.org` CRDs before removing the resource.
Install the admission webhook resources. The Deployment, Service, RBAC, `MutatingAdmissionPolicy` resources, and `ValidatingWebhookConfiguration` are bundled into `calico-v3-crds.yaml` alongside the CRDs and core Calico components. Extract just the webhook resources with `yq` and apply them, so the existing `calico-node` DaemonSet, `kube-controllers` Deployment, and `calico-typha` Deployment in your cluster (including the `CALICO_API_GROUP` env you set above) are not overwritten.

```shell
curl https://raw.githubusercontent.com/projectcalico/calico/v3.32.0/manifests/calico-v3-crds.yaml -O
yq 'select(.metadata.name == "calico-webhooks" or .kind == "MutatingAdmissionPolicy" or .kind == "MutatingAdmissionPolicyBinding")' \
  calico-v3-crds.yaml | kubectl apply -f -
```
Generate TLS certificates for the admission webhook.
The `calico-webhooks` Deployment serves a `ValidatingWebhookConfiguration` for tiered RBAC enforcement on `projectcalico.org/v3` policy resources, but no TLS material is provisioned by default. Until the certificates are in place, write operations on `projectcalico.org/v3` policy resources will fail closed.

The following commands create a self-signed CA and a server certificate valid for `calico-webhooks.kube-system.svc`. If you already manage TLS via cert-manager or an internal PKI, use that instead and skip ahead.

```shell
# Self-signed CA.
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -days 3650 \
  -subj "/CN=calico-webhooks-ca" -out ca.crt

# Server cert with SANs for the webhook service.
openssl genrsa -out tls.key 2048
cat > csr.conf <<'EOF'
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no

[req_distinguished_name]
CN = calico-webhooks.kube-system.svc

[v3_req]
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1 = calico-webhooks.kube-system.svc
DNS.2 = calico-webhooks.kube-system.svc.cluster.local
EOF
openssl req -new -key tls.key -out tls.csr -config csr.conf
openssl x509 -req -in tls.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 -extensions v3_req -extfile csr.conf -out tls.crt
```
Create the webhook TLS secret in the `kube-system` namespace.

```shell
kubectl -n kube-system create secret tls calico-webhooks-tls \
  --cert=tls.crt --key=tls.key
```
Patch the `ValidatingWebhookConfiguration` with the CA bundle so the API server trusts the webhook.

```shell
kubectl patch validatingwebhookconfiguration calico-webhooks \
  --type='json' \
  -p="[{\"op\":\"replace\",\"path\":\"/webhooks/0/clientConfig/caBundle\",\"value\":\"$(base64 -w0 < ca.crt)\"}]"
```
Based on your datastore and number of nodes, select a link below to install Calico.
The Kubernetes API datastore, more than 50 nodes option provides scaling using the Typha daemon. Typha is not included for etcd because etcd already handles many clients, so using Typha would be redundant and is not recommended.
- Install Calico with Kubernetes API datastore, 50 nodes or less
- Install Calico with Kubernetes API datastore, more than 50 nodes
- Install Calico with etcd datastore
Install Calico with Kubernetes API datastore, 50 nodes or less
This option is maintained for upgrade compatibility, but we recommend that new clusters use the operator, which will automatically configure Calico correctly for your cluster size (including deploying the Typha scale-out proxy and securing it when necessary).
Download the Calico networking manifest for the Kubernetes API datastore.

```shell
curl https://raw.githubusercontent.com/projectcalico/calico/v3.32.0/manifests/calico.yaml -O
```
If you are using pod CIDR `192.168.0.0/16`, skip to the next step. If you are using a different pod CIDR with kubeadm, no changes are required; Calico will automatically detect the CIDR based on the running configuration. For other platforms, make sure you uncomment the `CALICO_IPV4POOL_CIDR` variable in the manifest and set it to the same value as your chosen pod CIDR.
Customize the manifest as necessary.
Apply the manifest using the following command.

```shell
kubectl apply -f calico.yaml
```
Install Calico with Kubernetes API datastore, more than 50 nodes
This option is maintained for upgrade compatibility, but we recommend that new clusters use the operator to deploy Typha (the extra scale-out component included in this manifest) instead of using this method. The operator deploys Typha, autoscales it, and auto-configures mTLS between the per-host agent and Typha for maximum security. This manifest option leaves scaling up to you and, by default, it does not secure Typha's port.
Download the Calico networking manifest for the Kubernetes API datastore.

```shell
curl https://raw.githubusercontent.com/projectcalico/calico/v3.32.0/manifests/calico-typha.yaml -o calico.yaml
```
If you are using pod CIDR `192.168.0.0/16`, skip to the next step. If you are using a different pod CIDR with kubeadm, no changes are required; Calico will automatically detect the CIDR based on the running configuration. For other platforms, make sure you uncomment the `CALICO_IPV4POOL_CIDR` variable in the manifest and set it to the same value as your chosen pod CIDR.
Modify the replica count in the Deployment named `calico-typha` to the desired number.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-typha
  ...
spec:
  ...
  replicas: <number of replicas>
```

We recommend at least one replica for every 200 nodes, and no more than 20 replicas. In production, we recommend a minimum of three replicas to reduce the impact of rolling upgrades and failures. The number of replicas should always be less than the number of nodes, otherwise rolling upgrades will stall. In addition, Typha only helps with scale if there are fewer Typha instances than there are nodes.

:::note
If you set `typha_service_name` and set the Typha deployment replica count to 0, Felix will not start.
:::
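The sizing rules above can be sketched as a small helper. Purely illustrative: the function name and the choice to subtract one when the count would otherwise equal the node count are assumptions layered on the guidance above.

```shell
# Compute a Typha replica count from a node count:
# at least 1 per 200 nodes, minimum 3 for production, maximum 20,
# and always fewer replicas than nodes.
typha_replicas() {
  nodes=$1
  r=$(( (nodes + 199) / 200 ))                  # ceil(nodes / 200)
  [ "$r" -lt 3 ] && r=3                         # production minimum
  [ "$r" -gt 20 ] && r=20                       # upper cap
  [ "$r" -ge "$nodes" ] && r=$(( nodes - 1 ))   # must stay below node count
  echo "$r"
}
typha_replicas 500    # prints 3
typha_replicas 5000   # prints 20
```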
Customize the manifest if desired.
Apply the manifest.

```shell
kubectl apply -f calico.yaml
```
Install Calico with etcd datastore
The etcd datastore is not recommended for new Kubernetes installs:
- etcd is another component to manage and maintain.
- Some newer Kubernetes-targeted features (such as service matches in policy) are not supported with the etcd datastore.
- The eBPF data plane is not supported with the etcd datastore: it relies on watching Kubernetes services to implement some features, and service watches are not available with the etcd datastore.
- The Cloud/Enterprise versions of Calico do not support etcd mode at all, so using this manifest prevents you from upgrading to those products.
However, it is the only option that supports running both OpenStack and Kubernetes nodes in the same cluster.
Download the Calico networking manifest for etcd.

```shell
curl https://raw.githubusercontent.com/projectcalico/calico/v3.32.0/manifests/calico-etcd.yaml -o calico.yaml
```
If you are using pod CIDR `192.168.0.0/16`, skip to the next step. If you are using a different pod CIDR with kubeadm, no changes are required; Calico will automatically detect the CIDR based on the running configuration. For other platforms, make sure you uncomment the `CALICO_IPV4POOL_CIDR` variable in the manifest and set it to the same value as your chosen pod CIDR.
In the ConfigMap named `calico-config`, set the value of `etcd_endpoints` to the IP address and port of your etcd server.

:::note
You can specify more than one `etcd_endpoint` using commas as delimiters.
:::
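Putting those two notes together, the relevant ConfigMap fragment might look like this (the endpoint addresses are made-up examples; substitute your real etcd servers):

```yaml
# Illustrative fragment of the calico-config ConfigMap.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  etcd_endpoints: "http://10.96.232.136:2379,http://10.96.232.137:2379"
```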
Customize the manifest if desired.
Apply the manifest using the following command.

```shell
kubectl apply -f calico.yaml
```
Next steps
Required
Recommended - Networking
- If you are using the default BGP networking with full-mesh node-to-node peering with no encapsulation, go to Configure BGP peering to get traffic flowing between pods.
- If you are unsure about networking options, or want to implement encapsulation (overlay networking), see Determine best networking option.
Recommended - Security
- Secure Calico component communications
- Secure hosts by installing Calico on hosts
- Secure pods with Calico network policy
- If you are using Calico with Istio service mesh, get started here: Enable application layer policy