In a nutshell

Kyma allows you to extend applications with microservices and Functions. First, connect your application to a Kubernetes cluster and expose the application's API or events securely. Then, implement the business logic you require by creating microservices or Functions and triggering them to react to particular events or calls to your application's API. To limit the time spent on coding, use the built-in cloud services from Service Catalog, exposed by open service brokers from such cloud providers as GCP, Azure, and AWS.

Kyma comes equipped with these out-of-the-box functionalities:

Main features

Major open-source and cloud-native projects, such as Istio, NATS, Serverless, and Prometheus, constitute the cornerstone of Kyma. Its uniqueness, however, lies in the "glue" that holds these components together. Kyma collects those cutting-edge solutions in one place and combines them with the in-house developed features that allow you to connect and extend your enterprise applications easily and intuitively.

Kyma allows you to extend and customize the functionality of your products in a quick and modern way, using serverless computing or microservice architecture. The extensions and customizations you create are decoupled from the core applications, which means that:

  • Deployments are quick.
  • Scaling is independent from the core applications.
  • The changes you make can be easily reverted without causing downtime of the production system.

Last but not least, Kyma is highly cost-efficient. All Kyma native components and the connected open-source tools are written in Go, which ensures low memory consumption and reduced maintenance costs compared to applications written in other programming languages such as Java.

Technology stack

The entire solution is containerized and runs on a Kubernetes cluster. Customers can access it easily using a single sign-on solution based on the Dex identity provider, integrated with any OpenID Connect-compliant identity provider or a SAML2-based enterprise authentication server.

The communication between services is handled by the Istio Service Mesh component, which enables security, traffic management, routing, resilience (retry, circuit breaker, timeouts), monitoring, and tracing without the need to change the application code. Build your applications using services provisioned by one of the many Service Brokers compatible with the Open Service Broker API, and monitor the speed and efficiency of your solutions using Prometheus, which gives you the most accurate and up-to-date monitoring data.

Key components

Kyma consists of numerous components, but these three drive it forward:

  • Application Connector:
    • Simplifies and secures the connection between external systems and Kyma
    • Registers external Events and APIs in the Service Catalog and simplifies the API usage
    • Provides asynchronous communication with services and Functions deployed in Kyma through Events
    • Manages secure access to external systems
    • Provides monitoring and tracing capabilities to facilitate operational aspects
  • Serverless:
    • Ensures quick deployments following a Function approach
    • Enables scaling independent of the core applications
    • Makes it possible to revert changes without causing production system downtime
    • Supports the complete asynchronous programming model
    • Offers loose coupling of Event providers and consumers
    • Enables flexible application scalability and availability
  • Service Catalog:
    • Connects services from external sources
    • Unifies the consumption of internal and external services thanks to compliance with the Open Service Broker standard
    • Provides a standardized approach to managing the API consumption and access
    • Eases the development effort by providing a catalog of API and Event documentation to support automatic client code generation

This basic use case shows how the three components work together in Kyma:




Components

Kyma is built on the foundation of the best and most advanced open-source projects, which make up the components readily available for customers to use. This section describes the Kyma components.


Security

Kyma security enforces role-based access control (RBAC) in the cluster. Dex handles the identity management and identity provider integration. It allows you to integrate any OpenID Connect or SAML2-compliant identity provider with Kyma using connectors. Additionally, Dex provides a static user store which gives you more flexibility when managing access to your cluster.
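For illustration, integrating an external OpenID Connect provider comes down to adding a connector entry to the Dex configuration. The following is only a sketch — the IDs, URLs, and credentials are placeholders that depend on your identity provider:

```yaml
# Dex connector configuration sketch (all values are placeholders).
connectors:
  - type: oidc
    id: my-oidc                  # placeholder connector ID
    name: My OIDC Provider
    config:
      issuer: https://accounts.example.com      # your provider's issuer URL
      clientID: kyma-client                     # placeholder client credentials
      clientSecret: example-secret
      redirectURI: https://dex.kyma.example.com/callback
```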

Service Catalog

The Service Catalog lists all of the services available to Kyma users through the registered Service Brokers. Use the Service Catalog to provision new services in the Kyma Kubernetes cluster and create bindings between the provisioned service and an application.
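As a sketch of how provisioning and binding look in practice (the class and plan names below are hypothetical and depend on the brokers registered in your cluster), the Service Catalog uses ServiceInstance and ServiceBinding resources:

```yaml
# Provision a service from a registered broker (names are examples).
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: my-redis                 # hypothetical instance name
  namespace: production
spec:
  clusterServiceClassExternalName: redis     # a class exposed by a registered broker
  clusterServicePlanExternalName: standard
---
# Bind the provisioned instance to make its credentials available to an application.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: my-redis-binding
  namespace: production
spec:
  instanceRef:
    name: my-redis               # binds to the instance above
```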

Helm Broker

The Helm Broker is a Service Broker which runs in the Kyma cluster and deploys Kubernetes native resources using Helm and Kyma addons. An addon is an abstraction layer over a Helm chart which allows you to represent it as a ClusterServiceClass in the Service Catalog. Use addons to install GCP, Azure and AWS Service Brokers in Kyma.

Application Connector

The Application Connector is a proprietary Kyma solution. It is the endpoint on the Kyma side of the connection between Kyma and external solutions. The Application Connector allows you to register the APIs and the Event Catalog, which lists all of the available events of the connected solution. Additionally, the Application Connector proxies the calls from Kyma to external APIs in a secure way.


Eventing

Eventing allows you to easily integrate external applications with Kyma. Under the hood, it implements NATS to ensure Kyma receives business events from external sources and is able to trigger business flows using Functions or services.

Service Mesh

The Service Mesh is an infrastructure layer that handles service-to-service communication, proxying, service discovery, traceability, and security, independently of the code of the services. Kyma uses the Istio Service Mesh that is customized for the specific needs of the implementation.


Serverless

The Serverless component allows you to reduce the implementation and operation effort of an application to the absolute minimum. Kyma Serverless provides a platform to run lightweight Functions in a cost-efficient and scalable way using JavaScript and Node.js. Serverless in Kyma relies on Kubernetes resources like Deployments, Services and HorizontalPodAutoscalers for deploying and managing Functions, and Kubernetes Jobs for creating Docker images.


Monitoring

Kyma comes with tools that give you the most accurate and up-to-date monitoring data. The Prometheus open-source monitoring and alerting toolkit provides this data, which is consumed by different add-ons, including Grafana for analytics and monitoring, and Alertmanager for handling alerts.


Tracing

The tracing in Kyma uses the Jaeger distributed tracing system. Use it to analyze performance by scrutinizing the path of the requests sent to and from your service. This information helps you optimize the latency and performance of your solution.

API Gateway

The API Gateway aims to provide a set of functionalities which allow developers to expose, secure, and manage their APIs in an easy way. The main element of the API Gateway is the API Gateway Controller which exposes services in Kyma.


Logging

Logging in Kyma uses Loki, a Prometheus-like log management system.


Console

The Console is a web-based administrative UI for Kyma. It uses the Luigi framework to allow you to seamlessly extend the UI content with custom micro frontends. The Console has a dedicated Console Backend Service which provides a tailor-made API for each view of the Console UI.


Rafter

Rafter is a solution for storing and managing different types of assets, such as documents, files, images, API specifications, and client-side applications. It uses MinIO as object storage and relies on Kubernetes custom resources (CRs). The custom resources are managed by a controller that communicates through MinIO Gateway with external cloud providers.

Testing Kyma

Kyma components use Octopus for testing. Octopus is a testing framework that allows you to run tests defined as Docker images on a running cluster. Octopus uses two CustomResourceDefinitions (CRDs):

  • TestDefinition, which defines your test as a Pod specification.
  • ClusterTestSuite, which defines a suite of tests to execute and how to execute them.
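For illustration, a ClusterTestSuite that runs all TestDefinitions in the cluster might look as follows. This is only a sketch — the name and field values are examples:

```yaml
apiVersion: testing.kyma-project.io/v1alpha1
kind: ClusterTestSuite
metadata:
  name: all-tests      # example name
spec:
  count: 1             # run each test once
  concurrency: 1       # run tests one at a time
  maxRetries: 1        # retry a failed test once
```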

Add a new test

To add a new test, create a YAML file with the TestDefinition CR in your chart. To comply with the convention, place it under the tests directory. See the example chart structure for Dex:

# Chart tree
├── Chart.yaml
├── templates
│   ├── tests
│   │ └── test-dex-connection.yaml
│   ├── dex-deployment.yaml
│   ├── dex-ingress.yaml
│   ├── dex-rbac-role.yaml
│   ├── dex-service.yaml
│   ├── pre-install-dex-account.yaml
│   ├── pre-install-dex-config-map.yaml
│   └── pre-install-dex-secrets.yaml
└── values.yaml

The test adds a new test-dex-connection.yaml file under the templates/tests directory. For more information on TestDefinition, read the Octopus documentation.

The following example shows a TestDefinition with a container that calls the Dex endpoint with cURL. You must define at least the spec.template parameter, which is of the PodTemplateSpec type.

apiVersion: testing.kyma-project.io/v1alpha1
kind: TestDefinition
metadata:
  name: {{ .Chart.Name }}
  labels:
    app: {{ .Chart.Name }}-tests
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      containers:
        - name: tests
          command: ["/usr/bin/curl"]
          args: [
            "--max-time", "10",
            "--retry", "60",
            "--retry-delay", "3",
            "http://dex-service.{{ .Release.Namespace }}.svc.cluster.local:5556/.well-known/openid-configuration"
          ]
      restartPolicy: Never

Test execution

To run all tests deployed on a Kyma cluster using Kyma CLI, run:

kyma test run

WARNING: The kubeconfig file downloaded from the UI does not grant enough privileges to run tests using Kyma CLI. Instead, use the kubeconfig file from your cloud provider.

Internally, Kyma CLI defines the ClusterTestSuite resource, which fetches all TestDefinitions and executes them.

Run tests manually

To run tests manually, you can pass test names to Kyma CLI explicitly. To list all the deployed TestDefinition sets, run:

kyma test definitions

Then, run only the desired tests by passing the TestDefinition names:

kyma test run <test-definition-1> <test-definition-2> ...

See the current test progress in the ClusterTestSuite status. Run:

kyma test status

The ID of the test execution is the same as the ID of the testing Pod. The testing Pod is created in the same Namespace as its TestDefinition. To get logs for a specific test, run the following command:

kyma test logs <test-suite-1> <test-suite-2> ...


Charts

Kyma uses Helm charts to deliver single components and extensions, as well as the core components. This document contains information about the chart-related technical concepts, dependency management to use with Helm charts, and chart examples.

Manage dependencies with Init Containers

The ADR 003: Init Containers for dependency management document declares the use of Init Containers as the primary dependency mechanism.

Init Containers present a set of distinctive behaviors:

  • They always run to completion.
  • They start sequentially, only after the preceding Init Container completes successfully. If any of the Init Containers fails, the Pod restarts. This is always true, unless the restartPolicy is set to Never.

Readiness Probes ensure that the essential containers are ready to handle requests before you expose them. Define probes, at a minimum, for every container accessible from outside of the Pod. Pairing Init Containers with Readiness Probes provides a basic dependency management solution.


Here are some examples:

  1. Generic

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
            - name: nginx
              image: nginx:1.7.9
              ports:
                - containerPort: 80
              readinessProbe:
                httpGet:
                  path: /healthz
                  port: 80
                initialDelaySeconds: 30
                timeoutSeconds: 1

    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp-pod
    spec:
      initContainers:
        - name: init-myservice
          image: busybox
          command: ['sh', '-c', 'until nslookup nginx; do echo waiting for nginx; sleep 2; done;']
      containers:
        - name: myapp-container
          image: busybox
          command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  2. Kyma

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: helm-broker
      labels:
        app: helm-broker
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: helm-broker
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0
      template:
        metadata:
          labels:
            app: helm-broker
        spec:
          initContainers:
            - name: init-helm-broker
              command: ['sh', '-c', 'until nc -zv service-catalog-controller-manager.kyma-system.svc.cluster.local 8080; do echo waiting for etcd service; sleep 2; done;']
          containers:
            - name: helm-broker
              ports:
                - containerPort: 6699
              readinessProbe:
                tcpSocket:
                  port: 6699
                failureThreshold: 3
                initialDelaySeconds: 10
                periodSeconds: 3
                successThreshold: 1
                timeoutSeconds: 2

Support for the Helm wait flag

High-level Kyma components, such as core, come as Helm charts. These charts are installed as part of a single Helm release. To provide ordering for these core components, the Helm client runs with the --wait flag, which makes Helm wait until all of the components are ready before it considers the release successful.

For Deployments, set the strategy to RollingUpdate and set the maxUnavailable value to a number lower than the number of replicas. This setting is necessary, as readiness in Helm v3 is fulfilled if the number of replicas in the ready state is not lower than the expected number of replicas:

ReadyReplicas >= TotalReplicas - MaxUnavailable
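The readiness rule above can be sketched with a trivial check (the numbers are arbitrary examples): with 3 replicas and maxUnavailable set to 0, all 3 Pods must be ready before Helm considers the Deployment ready.

```shell
# Example readiness check mirroring the Helm v3 formula (values are illustrative).
TOTAL=3
MAX_UNAVAILABLE=0
READY=3
if [ "$READY" -ge $((TOTAL - MAX_UNAVAILABLE)) ]; then
  echo "release ready"
else
  echo "waiting for pods"
fi
```

With maxUnavailable set to 1 instead, 2 ready replicas would already satisfy the formula.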

Chart installation details

Helm performs the chart installation process. This is the order of operations that happen during the chart installation:

  • resolve values
  • recursively gather all templates with the corresponding values
  • sort all templates
  • render all templates
  • separate hooks and manifests from files into sorted lists
  • aggregate all valid manifests from all sub-charts into a single manifest file
  • execute PreInstall hooks
  • create a release using the ReleaseModule API and, if requested, wait for the actual readiness of the resources
  • execute PostInstall hooks


All notes are based on Helm v3.2.1 implementation and are subject to change in future releases.

  • Regardless of how complex a chart is, and regardless of the number of sub-charts it references or consists of, it's always evaluated as one. This means that each Helm release is compiled into a single Kubernetes manifest file when applied to the API server.

  • Hooks are parsed in the same order as manifest files and returned as a single, global list for the entire chart. Each hook's weight is calculated as part of this sort.

  • Manifests are sorted by Kind. You can find the list and the order of the resources on the Helm Github page.

  • To provide better error handling, Helm validates rendered templates against the Kubernetes OpenAPI schema before they are sent to the Kubernetes API. This means any resources that don't comply with the Kubernetes API docs (for example because of unsupported fields) will fail the release.


  • resource is any document in a chart recognized by Helm. This includes manifests, hooks, and notes.
  • template is a valid Go template. Many of the resources are also Go templates.

Deploy with a private Docker registry

Docker is a tool for packaging and running applications in containers. To run an application on Kyma, provide the application binary as a Docker image located in a Docker registry. Use the Docker Hub public registry to make your images freely accessible to the public. Use a private Docker registry to ensure privacy, increased security, and better availability.

This document shows how to deploy a Docker image from your private Docker registry to the Kyma cluster.


The deployment to Kyma from a private registry differs from the deployment from a public registry: you must provide Secrets that are accessible in Kyma and referenced in the deployment's .yaml file. This section describes how to deploy an image from a private Docker registry to Kyma. Follow the deployment steps:

  1. Create a Secret resource.
  2. Write your deployment file.
  3. Submit the file to the Kyma cluster.

Create a Secret for your private registry

A Secret resource passes your Docker registry credentials to the Kyma cluster so that it can pull images from your private registry. Note that Kubernetes stores Secret data in a Base64-encoded form. For more information on Secrets, refer to the Kubernetes documentation.

To create a Secret resource for your Docker registry, run the following command:

kubectl create secret docker-registry {secret-name} --docker-server={registry FQN} --docker-username={user-name} --docker-password={password} --docker-email={registry-email} --namespace={namespace}

Refer to the following example:

kubectl create secret docker-registry docker-registry-secret --docker-server=myregistry:5000 --docker-username=root --docker-password=password --namespace=production

The Secret is associated with a specific Namespace. In the example, the Namespace is production. However, you can create the Secret in any Namespace you need.

Write your deployment file

  1. Create the deployment file with the .yml extension and name it deployment.yml.

  2. Describe your deployment in the .yml file. Refer to the following example:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      namespace: production # {production/stage/qa}
      name: my-deployment # Specify the deployment name.
      annotations:
        sidecar.istio.io/inject: true
    spec:
      replicas: 3 # Specify your replica - how many instances you want from that deployment.
      selector:
        matchLabels:
          app: app-name # Specify the app label. It is optional but it is a good practice.
      template:
        metadata:
          labels:
            app: app-name # Specify the app label. It is optional but it is a good practice.
            version: v1 # Specify your version.
        spec:
          containers:
            - name: container-name # Specify a meaningful container name.
              image: myregistry:5000/user-name/image-name:latest # Specify your image {registry FQN/your-username/your-space/image-name:image-version}.
              ports:
                - containerPort: 80 # Specify the port to your image.
          imagePullSecrets:
            - name: docker-registry-secret # Specify the same Secret name you generated in the previous step for this Namespace.
            - name: example-secret-name # Specify your Namespace Secret, named `example-secret-name`.
  3. Submit your deployment file using this command:

    kubectl apply -f deployment.yml

Your deployment is now running on the Kyma cluster.

Resource quotas

Resource quotas are a convenient way to manage the consumption of resources in a Kyma cluster. You can easily set resource quotas for every Namespace you create through the Console UI.

When you click Create Namespace, you can define:

  • Total Memory Quotas, which limit the overall memory consumption by the Namespace by creating a ResourceQuota object.
  • Limits per container, which limit the memory consumption for individual containers in the Namespace by creating LimitRange objects.

To manage existing resource quotas in a Namespace, select that Namespace in the Namespaces view of the Console and go to the Resources tab. This view allows you to edit or delete the existing limits.

TIP: If you want to manage ResourceQuotas and LimitRanges directly from the terminal, follow the Kubernetes guide.
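For reference, the objects that the Console creates can also be defined manually. The following is a sketch with example names and values — adjust the Namespace and memory amounts to your needs:

```yaml
# Limit the total memory the Namespace may consume.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-quota            # example name
  namespace: production
spec:
  hard:
    limits.memory: 2Gi       # total memory limit for the Namespace
---
# Set per-container memory defaults and limits in the Namespace.
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits     # example name
  namespace: production
spec:
  limits:
    - type: Container
      default:
        memory: 256Mi        # default limit applied to containers without one
      defaultRequest:
        memory: 128Mi        # default request applied to containers without one
      max:
        memory: 512Mi        # maximum any single container may be granted
```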



Kyma is a complex tool which consists of many different components that provide various functionalities to extend your application. This entails high technical requirements that can influence your local development process. To address this, Kyma is modular: you can decide not to include a given component in the Kyma installation, or install it after the Kyma installation process.

To make the local development process easier, we introduced the Kyma Lite concept, in which some components are not included in the local installation process by default. These are the Kyma and Kyma Lite components in their installation order:

Component | Kyma | Kyma Lite


By default, Kyma is installed with the default chart values defined in the values.yaml files. However, you can also install Kyma with the pre-defined profiles that differ in the amount of resources, such as memory and CPU, that the components can consume. The currently supported profiles are:

  • Evaluation - a profile with limited resources that you can use for trial purposes
  • Production - a profile configured for high availability and scalability. It requires more resources than the evaluation profile but is a better choice for production workloads.

You can check the values used for each component in respective folders of the resources directory. The profile-evaluation.yaml file contains values used for the evaluation profile, and the profile-production.yaml file contains values for the production profile. If the component doesn't have files for respective profiles, the profile values are the same as default chart values defined in the values.yaml file.

A profile is defined globally for the whole Kyma installation. It's not possible to install a profile only for selected components. However, you can use overrides to change values set by the profile. The profile values have precedence over the default chart values, and overrides have precedence over the applied profile.
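An override is typically delivered as a ConfigMap labeled for the Kyma Installer. The sketch below assumes the kyma-installer Namespace and uses a hypothetical data key — the real keys are dotted paths into the target component's values.yaml:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-overrides          # hypothetical name
  namespace: kyma-installer
  labels:
    installer: overrides      # marks the ConfigMap as a source of overrides
    component: serverless     # optional: scope the override to a single component
data:
  # Hypothetical example key; use a dotted path into the component's values.yaml.
  resources.requests.memory: "64Mi"
```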

To install Kyma with any of the predefined profiles, follow the instructions described in the cluster Kyma installation document and set a profile with the --profile flag, as described in the Install Kyma section.

NOTE: You can also set profiles on a running cluster during the Kyma upgrade operation.

Installation guides

Follow these installation guides to install Kyma locally or on a cluster:

Read the rest of the installation documents to learn how to:

NOTE: Make sure that the version of the documentation selected in the left pane matches the version of Kyma you're using.

Install Kyma locally

This installation guide shows you how to quickly deploy Kyma locally on macOS, Linux, and Windows. Kyma is installed locally using a proprietary installer based on a Kubernetes operator.

NOTE: By default, the local Kyma Lite installation on Minikube requires 4 CPUs and 8 GB of RAM. If you want to add more components to your installation, install Kyma on a cluster.

TIP: See the troubleshooting guide for tips.



NOTE: To work with Kyma, use only the provided commands. Kyma requires a specific Minikube configuration and does not work on a basic Minikube cluster that you can start using the minikube start command.

Install Kyma

Follow these instructions to install Kyma from a release or from sources:

  • From a release
  • From sources

Post-installation steps

Kyma comes with a local wildcard self-signed server.crt certificate. The kyma install command downloads and adds this certificate to the trusted certificates in your OS so you can access the Console UI.

NOTE: Mozilla Firefox uses its own certificate keychain. If you want to access the Console UI through Firefox, add the Kyma wildcard certificate to the certificate keychain of the browser. To access the Application Connector and connect an external solution to the local deployment of Kyma, you must add the certificate to the trusted certificate storage of your programming environment. See the Java environment as an example.

  1. After the installation completes, you can access the Console UI. To open the Console UI in your default browser, run:

    kyma console
  2. Select Login with Email. Use the email address and the password printed in the terminal once the installation process is completed.

  3. At this point, Kyma is ready for you to explore. See what you can achieve using the Console UI or check out one of the available examples.

Learn also how to test Kyma or reinstall it without deleting the cluster from Minikube.

Stop and restart Kyma without reinstalling

Use the Kyma CLI to restart the Minikube cluster without reinstalling Kyma. Follow these steps to stop and restart your cluster:

  1. Stop the Minikube cluster with Kyma installed. Run:

    minikube stop
  2. Restart the cluster without reinstalling Kyma. Run:

    kyma provision minikube

The Kyma CLI discovers that a Minikube cluster is initialized and asks if you want to delete it. Answering no causes the Kyma CLI to start the Minikube cluster and restart all of the previously installed components. Even though this procedure takes some time, it is faster than a clean installation as you don't have to download all of the required Docker images again.

Install Kyma on a cluster

This installation guide explains how you can quickly deploy Kyma on a cluster with a wildcard DNS, using a GitHub release of your choice.

TIP: The domain is not recommended for production. If you want to expose the Kyma cluster on your own domain, follow the installation guide. To install Kyma using your own image instead of a GitHub release, follow the instructions.


CAUTION: As of version 1.20, Kubernetes is deprecating Docker as a container runtime in favor of containerd. Due to a different way in which containerd handles certificate authorities, Kyma's built-in Docker registry does not work correctly on clusters running with a self-signed TLS certificate on top of a Kubernetes installation that uses containerd as its container runtime. If that is your case, either upgrade the cluster to use Docker instead of containerd, generate a valid TLS certificate for your Kyma instance, or configure an external Docker registry.

  • GKE
  • AKS
  • Gardener

Choose the release to install

  1. Go to Kyma releases and choose the release you want to install.

  2. Export the release version as an environment variable:


Prepare the cluster

  • GKE
  • AKS
  • Gardener

Install Kyma

Install Kyma using Kyma CLI:

kyma install -s $KYMA_VERSION

To install Kyma with one of the predefined profiles, run:

kyma install -s $KYMA_VERSION --profile {evaluation|production}

NOTE: If you don't specify $KYMA_VERSION, the version from the latest commit on the main branch is installed. If you don't specify a profile, Kyma is installed with the default chart values.

Post-installation steps

Access the cluster

  1. To open the cluster's Console on your default browser, run:

    kyma console
  2. To log in to your cluster's Console UI, use the default admin static user. Click Login with Email and sign in with the email address. Use the password printed after the installation. To get the password manually, you can also run:

    kubectl get secret admin-user -n kyma-system -o jsonpath="{.data.password}" | base64 --decode

If you need to use Helm to manage your Kubernetes resources, read the additional configuration document.

Install Kyma with your own domain

This guide explains how to deploy Kyma on a cluster using your own domain.

TIP: Get a free domain for your cluster using services like or similar.


  • GKE
  • AKS

Choose the release to install

  1. Go to Kyma releases and choose the release you want to install.

  2. Export the release version as an environment variable. Run:


Set up the DNS

  • GKE
  • AKS

Generate the TLS certificate

  • GKE
  • AKS

Prepare the cluster

  • GKE
  • AKS

Install Kyma

NOTE: If you want to use the Kyma production profile, see the following documents before you go to the next step:

  1. Install Kyma using Kyma CLI:

    kyma install -s $KYMA_VERSION --domain $DOMAIN --tls-cert $TLS_CERT --tls-key $TLS_KEY

Configure DNS for the cluster load balancer

  • GKE
  • AKS

Access the cluster

  1. To open the cluster's Console in your default browser, run:

    kyma console
  2. To log in to your cluster's Console UI, use the default admin static user. Click Login with Email and sign in with the email address. Use the password printed after the installation. To get the password manually, you can also run:

    kubectl get secret admin-user -n kyma-system -o jsonpath="{.data.password}" | base64 --decode

If you need to use Helm to manage your Kubernetes resources, read the additional configuration document.

Use your own Kyma Installer image

When you install Kyma from a release, you use the release artifacts that already contain the Kyma Installer - a Docker image containing the combined binary of the Kyma Operator and the component charts from the /resources folder. If you want to install Kyma from sources, you must build the image yourself. You also require a new image if you add components and custom Helm charts that are not included in the /resources folder to the installation.

Alternatively, you can also install Kyma from the latest main or any previous main commit using Kyma CLI. See different installation source flags.

In addition to the tools required to install Kyma on a cluster, you also need Docker. Follow these steps:

  1. Clone the Kyma repository to your machine using either HTTPS or SSH. Run this command to clone the repository and change your working directory to kyma:

    • HTTPS
    • SSH
  2. Build a Kyma-Installer image that is based on the current Kyma Operator binary and includes the current installation configurations and resources charts. Run:

    docker build -t kyma-installer -f tools/kyma-installer/kyma.Dockerfile .
  3. Push the image to your Docker Hub. Run:

    docker tag kyma-installer:latest {YOUR_DOCKER_LOGIN}/kyma-installer
    docker push {YOUR_DOCKER_LOGIN}/kyma-installer
  4. Install Kyma using your image. Run this command:

    kyma install -s {YOUR_DOCKER_LOGIN}/kyma-installer

Use Helm

You can use Helm to manage Kubernetes resources in Kyma, for example to check the already installed Kyma charts or to install new charts that are not included in the Kyma Installer image.

Helm v3

As of version 1.14, Kyma uses Helm v3 to install and maintain components. Unlike its predecessor, Helm v3 interacts directly with the Kubernetes API and thus no longer features an in-cluster server. With Tiller gone, managing Kubernetes resources using Helm v3 CLI requires no manual configuration.

Upgrade Kyma

CAUTION: Before you upgrade your Kyma deployment to a newer version, check the release notes of the target release for migration guides. If the target release comes with a migration guide, make sure to follow it closely. If you upgrade to a newer release without performing the steps described in the migration guide, you can compromise the functionality of your cluster or make it unusable altogether.

Upgrading Kyma is the process of migrating from one version of the software to a newer release. This operation depends on release artifacts listed in the Assets section of the GitHub releases page and migration guides delivered with the target release.

To upgrade to a version that is several releases newer than the version you're currently using, you must move up to the desired release incrementally. You can skip patch releases.

For example, if you're running Kyma 1.0 and you want to upgrade to version 1.3, you must perform these operations:

  1. Upgrade from version 1.0 to version 1.1.
  2. Upgrade from version 1.1 to version 1.2.
  3. Upgrade from version 1.2 to version 1.3.
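The incremental path above can be scripted. This sketch only prints the upgrade commands for each intermediate release rather than executing them; the version numbers are examples:

```shell
# Print the upgrade command for each intermediate minor release, in order.
for VERSION in 1.1.0 1.2.0 1.3.0; do
  echo "kyma upgrade -s ${VERSION}"
done
```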

NOTE: Kyma does not support a dedicated downgrade procedure. You can achieve a similar result by creating a backup of your cluster before upgrade. Read about backup in Kyma to learn more.

The upgrade procedure relies heavily on Helm. As a result, the availability of cluster services during the upgrade is not defined by Kyma and can vary from version to version. The existing custom resources (CRs) remain in the cluster.

For more details, read about the technical aspects of the upgrade.

Upgrade Kyma to a newer version

Follow these steps:

  1. Make sure your Kyma CLI version matches the release you want to upgrade to. To check which version you're currently running, run this command:

    kyma version
  2. Perform the required actions described in the migration guide published with the release you want to upgrade to. Migration guides are linked in the release notes and are available on the respective release branches in the docs/migration-guides directory.

    NOTE: Not all releases require you to perform additional migration steps. If your target release doesn't come with a migration guide, proceed to the next step.

  3. Trigger the upgrade:

    CAUTION: If you supplied any overrides using the -o flag or a custom component list using the -c flag during installation, provide the same files again during the upgrade. The target version may also introduce new components; make sure to add them to your custom component list as well.

    kyma upgrade -s {VERSION}

    If you want to upgrade Kyma to use one of the predefined profiles, run:

    kyma upgrade -s {VERSION} --profile {evaluation|production}

Update Kyma

This guide describes how to update Kyma deployed locally or on a cluster.

NOTE: Updating Kyma means introducing changes to a running deployment. If you want to move to a newer version instead, follow the upgrade procedure.



Kyma consists of multiple components, installed as Helm releases.

Update of an existing deployment can include:

  • Changes in charts
  • Changes in overrides
  • Adding new Helm releases

The update procedure consists of two main steps:

  • Prepare the update
  • Trigger the update process

In case of dependency conflicts or major changes between component versions, some updates may not be possible.

CAUTION: Currently Kyma doesn't support removing components as a part of the update process.

Prepare the update

  • If you update an existing component, make all required changes to the Helm charts of the component located in the resources directory.

  • If you add a new component to your Kyma deployment, add a top-level Helm chart for that component. Additionally, download the current Installation custom resource from the cluster and add the new component to the components list:

    kubectl -n default get installation kyma-installation -o yaml > installation.yaml
  • If you introduce changes in the overrides, create a file with your changes as ConfigMaps or Secrets. See the configuration document for more information on overrides.
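As a sketch, an override delivered as a ConfigMap could look as follows; the name and the override key are illustrative, and the labels follow the user-defined overrides convention described in the Helm overrides document:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-component-overrides      # illustrative name
  namespace: kyma-installer
  labels:
    installer: overrides
    component: core                 # only needed for component-specific overrides
data:
  some.chart.value: "new-value"     # illustrative dot-separated override key
```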

Perform the update

If your changes involve any modifications in the /resources folder that includes component chart configurations, perform the steps under the Update with resources modifications tab. If you only modify installation artifacts, for example by adding or removing components in the installation files or adding or removing overrides in the configuration files, perform the steps under the Update without resources modifications tab.

Read about each update step in the following sections.

  • Update with resources modifications
  • Update without resources modifications

Back up Kyma

The Kyma cluster load consists of Kubernetes objects and volumes. Kyma relies on a managed Kubernetes cluster for periodic backups of Kubernetes objects to avoid any manual steps.

For example, Gardener uses etcd as the Kubernetes backing store for all cluster data. Gardener runs periodic jobs to take major and minor snapshots of the etcd database to include Kubernetes objects in the backup. The major snapshot that includes all resources is taken on a daily basis, and minor snapshots happen every five minutes. If the etcd database experiences any problems, Gardener automatically restores the Kubernetes cluster using the most recent snapshot.

NOTE: Backup does not include Kubernetes volumes. That's why you should back up your volumes periodically using the VolumeSnapshot API resource.

On-demand volume snapshots

Kubernetes provides the VolumeSnapshot API resource that you can use to create a snapshot of a Kubernetes volume. You can use the snapshot to provision a new volume pre-populated with the snapshot data or to restore the existing volume to the state represented by the snapshot.

Taking volume snapshots is possible thanks to Container Storage Interface (CSI) drivers which allow third-party storage providers to expose storage systems in Kubernetes. For details on available drivers, see the full list of drivers.

Follow the tutorial to create on-demand volume snapshots for cloud providers.
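A minimal VolumeSnapshot manifest might look like the following sketch, assuming a CSI driver is installed and a VolumeSnapshotClass exists in the cluster; all names are illustrative:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-volume-snapshot                      # illustrative name
  namespace: my-namespace
spec:
  volumeSnapshotClassName: csi-snapshot-class   # provided by your CSI driver
  source:
    persistentVolumeClaimName: my-pvc           # the volume to snapshot
```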

TIP: Follow the instructions on restoring resources using Velero to learn how to back up and restore individual resources.

Error handling

Kyma Operator features a retry mechanism to handle temporary issues such as prolonged process of creating a resource path for custom resources in the Kubernetes API Server. If an error occurs while processing a component, Kyma Operator restores the initial state of that component and retries the step. Specific behavior of the controller depends on the nature of the operation that was interrupted by an error.

Installation error

If an error occurs during component installation, Kyma Operator deletes the corresponding Helm release and retries the operation. If such a release does not exist, the deletion step is skipped.

Upgrade error

If an error occurs during component upgrade, Kyma Operator rolls back the corresponding Helm release to the last deployed revision and retries the operation. If the release history does not include a deployed revision, the controller returns an error and stops the process.

Retry policy

The retry policy is based on the configuration that specifies the intervals between consecutive attempts. The default configuration consists of 5 retries with the following time intervals between each attempt: 10s, 20s, 40s, 60s, 80s. After that, the installation is stopped and can be restarted manually by setting the action: install label on the Installation CR.

NOTE: The configuration of retries can be adjusted by setting the backoffIntervals argument in the Installer Deployment. The value is a comma-separated list of numbers that represent the intervals between consecutive retries. It also defines the total number of retries. For example, the default one is: 10,20,40,60,80.
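For example, to allow more retries with longer intervals, the backoffIntervals argument could be set in the Installer Deployment along these lines; the flag form and values are an assumption for illustration:

```yaml
# Fragment of the Kyma Installer Deployment container spec (illustrative).
args:
  - --backoffIntervals=10,20,40,60,80,120,160
```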

If an error occurs before the installation of components, the installation is retried every 30 seconds until successful.



You can configure the Kyma installation by:

  • Customizing the list of the components to install.
  • Providing overrides that change the configuration values used by one or more components.

The list of components to install is defined in the Installation custom resource (CR). The overrides are delivered as ConfigMaps or Secrets defined by the user before triggering the installation. The Kyma Installer reads the configuration from the Installation CR and the overrides and applies it in the installation process.

Default settings

The default settings for the cluster and local installation are defined in different files.

  • Local installation
  • Cluster installation

Installation configuration

Before you start the Kyma installation process, you can customize the default settings.


One of the released Kyma artifacts is the Kyma Installer, a Docker image that combines the Kyma Operator executable with the charts of all components available in the release. The Kyma Installer can install only the components contained in its image. The Installation CR specifies which of the available components to install. In the component list of the Installation CR, the components that are not an integral part of the default Kyma Lite package are commented out with a hash character (#), and the Kyma Installer doesn't install them. You can customize the list of components by:

  • Uncommenting a component entry to install the component.
  • Commenting out a component entry using the hash character (#) to skip the installation of that component.
  • Adding new components along with their chart definitions to the list. If you do that, you must build your own Kyma Installer image as you are adding a new component to Kyma.

For more details on custom component installation, see the configuration document.


The common overrides that affect the entire installation are described in the installation guides. Other overrides are component-specific. To learn more about the configuration options available for a specific component, see the Configuration section of the component's documentation.

CAUTION: Override only those parameters from the values.yaml files that are exposed in the configuration documents for a given component.

Read more about the types of overrides and the rules for creating them.

CAUTION: An override must exist in a cluster before the installation starts. If you fail to deliver the override before the installation, the configuration can't be applied.

Advanced configuration

All values.yaml files in charts and sub-charts contain pre-defined attributes that are:

  • Configurable
  • Used in chart templates
  • Set to recommended production values

You can only override values that are included in the values.yaml files of a given resource. If you want to extend the available customization options, request adding a new attribute with a default value to the pre-defined list in values.yaml. Raise a pull request in which you propose changes in the chart, the new attribute, and its value to be added to values.yaml. This way you ensure that you can override these values when needed, without these values being overwritten each time an update or rebase takes place.

NOTE: Avoid modifications of such open-source components as Istio or Service Catalog as such changes can severely impact their future version updates.

Custom component installation

By default, you install Kyma with a set of components provided in the Kyma Lite package.

During installation, the Kyma Installer applies the content of the local or cluster installation file that includes the list of component names and Namespaces in which the components are installed. The Installer skips the lines starting with a hash character (#):

# - name: "tracing"
# namespace: "kyma-system"

You can modify the component list as follows:

  • Add components to the installation file before the installation
  • Add components to the installation file after the installation
  • Remove components from the installation file before the installation

NOTE: Currently, it is not possible to remove a component that is already installed. If you remove it from the installation file, or comment out its entries with a hash character (#) when Kyma is already installed, the Kyma Installer simply does not update this component during the update process, but it does not remove it.

Each modification requires an action from the Kyma Installer for the changes to take place:

  • If you make changes before the installation, proceed with the standard installation process to finish Kyma setup.
  • If you make changes after the installation, follow the update process to refresh the current setup.

Read the subsections for details.

Provide a custom list of components

You can provide a custom list of components to Kyma CLI during the installation. The version of your component's deployment must match the version that Kyma currently supports.

NOTE: For some components, you must perform additional actions to exclude them from the Kyma installation. In the case of Service Catalog, you must provide your own deployment of this component in the Kyma-supported version before you remove it from the installation process. See the values.yaml file for the currently supported version of Service Catalog.

Installation from the release

  1. Create a file with the list of components you want to install. You can copy most of the components from the regular installation file and then modify the list as needed. An example file can look as follows:

    - name: "cluster-essentials"
      namespace: "kyma-system"
    - name: "testing"
      namespace: "kyma-system"
    - name: "istio"
      namespace: "istio-system"
    - name: "xip-patch"
      namespace: "kyma-installer"
    - name: "eventing"
      namespace: "kyma-system"
  2. Follow the installation steps to install Kyma locally from the release or install Kyma on a cluster. While installing, provide the path to the component list file using the -c flag.

Installation from sources

  1. Customize the installation by adding a component to the list of components or removing the hash character (#) in front of the name and namespace entries in the installation files.

  2. Follow the installation steps to install Kyma locally from sources or install Kyma on a cluster.

Post-installation changes

You can only add a new component after the installation. Removal of the installed components is not possible. To add a component that was not installed with Kyma by default, perform the following steps.

  1. Download the current Installation custom resource from the cluster:

    kubectl -n default get installation kyma-installation -o yaml > installation.yaml
  2. Add the new component to the list of components or remove the hash character (#) preceding these lines:

    #- name: "tracing"
    # namespace: "kyma-system"
  3. Check which version you're currently running. Run this command:

    kyma version
  4. Trigger the update using the same version and the modified installation file:

    kyma upgrade -s {VERSION} -c {INSTALLATION_FILE_PATH}

Helm overrides for Kyma installation

Kyma packages its components into Helm charts that the Kyma Operator uses during installation and updates. This document describes how to configure the Kyma Installer with new values for Helm charts to override the default settings in values.yaml files.


The Kyma Operator is a Kubernetes Operator that uses Helm to install Kyma components. Helm provides an overrides feature to customize the installation of charts, for example to configure environment-specific values. When using Kyma Operator for Kyma installation, users can't interact with Helm directly. The installation is not an interactive process.

To customize the Kyma installation, the Kyma Operator exposes a generic mechanism to configure Helm overrides called user-defined overrides.

User-defined overrides

The Kyma Operator finds user-defined overrides by reading the ConfigMaps and Secrets deployed in the kyma-installer Namespace and marked with:

  • the installer: overrides label
  • a component: {COMPONENT_NAME} label if the override refers to a specific component

NOTE: There is also an additional "" label in all ConfigMaps and Secrets that allows you to easily filter the installation resources.

The Kyma Operator constructs a single override by inspecting the ConfigMap or Secret entry key name. The key name should be a dot-separated sequence of strings corresponding to the structure of keys in the chart's values.yaml file or the entry in chart's template.

The Kyma Operator merges all overrides recursively into a single yaml stream and passes it to Helm during the Kyma installation and upgrade operations.

Common vs. component overrides

The Kyma Operator looks for available overrides each time a component installation or an update operation is due. Overrides for a component are composed of two sets: common overrides and component-specific overrides.

Kyma uses common overrides for the installation of all components. ConfigMaps and Secrets marked with the installer: overrides label contain the definition.

Kyma uses component-specific overrides only for the installation of specific components. ConfigMaps and Secrets marked with both installer: overrides and component: {component-name} labels contain the definition. Component-specific overrides have precedence over common ones in case of conflicting entries.

NOTE: Add the additional "" label to both common and component-specific overrides to enable easy installation resources filtering.

Overrides examples

Top-level charts overrides

Overrides for top-level charts are straightforward. Just use the template value from the chart as the entry key in the ConfigMap or Secret. Leave out the .Values. prefix.

See the example:

The Installer uses a rafter top-level chart that contains a template with the following value reference:

resources: {{ toYaml .Values.resources | indent 12 }}

The chart's default values minio.resources.limits.memory and minio.resources.limits.cpu in the values.yaml file resolve the template. The following fragment of values.yaml shows this definition:

minio:
  resources:
    limits:
      memory: "128Mi"
      cpu: "100m"

To override these values, for example to 512Mi and 250m, proceed as follows:

  • Create a ConfigMap with the minio.resources.limits.memory: 512Mi and minio.resources.limits.cpu: 250m entries:

apiVersion: v1
kind: ConfigMap
metadata:
  name: rafter-overrides
  namespace: kyma-installer
  labels:
    installer: overrides
    component: rafter
data:
  controller-manager.minio.resources.limits.memory: 512Mi #increased from 128Mi
  controller-manager.minio.resources.limits.cpu: 250m #increased from 100m

While installing Kyma, provide the file path using the -o flag. Once the installation starts, the Kyma Operator generates overrides based on the ConfigMap entries. The system uses the value of 512Mi instead of the default 128Mi for MinIO memory and 250m instead of 100m for MinIO CPU from the chart's values.yaml file.

For overrides that the system should keep in Secrets, just define a Secret object instead of a ConfigMap with the same key and a base64-encoded value. Be sure to label the Secret.
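Secret values must be base64-encoded before you place them in the data section of the manifest. A quick way to produce the encoded value:

```shell
# Encode an override value for use in a Secret's data section.
# -n prevents a trailing newline from being included in the encoding.
echo -n "512Mi" | base64
# → NTEyTWk=
```

Alternatively, you can put the plain value under the stringData field of the Secret and let Kubernetes encode it for you.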

If you add the overrides after installation, trigger the update process using Kyma CLI. Provide the same version of the installed Kyma:

kyma upgrade -s {VERSION}

Sub-chart overrides

Overrides for sub-charts follow the same convention as top-level charts. However, overrides require additional information about sub-chart location.

When a sub-chart contains the values.yaml file, the information about the chart location is not necessary because the chart and its values.yaml file are on the same level in the directory hierarchy.

The situation is different when the Kyma Operator installs a chart with sub-charts. All template values for a sub-chart must be prefixed with a sub-chart "path" that is relative to the top-level "parent" chart.

This is not a Kyma Operator-specific requirement. The same considerations apply when you provide overrides manually using the helm command-line tool.

For example, there's the connector-service sub-chart nested in the application-connector chart installed by default as part of the Kyma package. In its deployment.yaml, there's the following fragment:

serviceAccountName: {{ .Chart.Name }}
- name: {{ .Chart.Name }}
image: {{ }}/{{ }}connector-service:{{ }}
imagePullPolicy: {{ .Values.deployment.image.pullPolicy }}
- "--appTokenExpirationMinutes={{ .Values.deployment.args.appTokenExpirationMinutes }}"

This fragment of the values.yaml file in the connector-service chart defines the default value for appTokenExpirationMinutes:

appTokenExpirationMinutes: 5

To override this value and change it from 5 to 10, do the following:

  1. Create a ConfigMap file and name it after the main component chart in the resources folder and add the -overrides suffix to it. In this example, that would be application-connector-overrides.

  2. Add the connector-service.deployment.args.appTokenExpirationMinutes: 10 entry under the data field in the ConfigMap.

Notice that the user-provided override key now contains two parts:

  • The chart "path" inside the top-level application-connector chart called connector-service
  • The original template value reference from the chart without the .Values. prefix, deployment.args.appTokenExpirationMinutes
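Put together, the steps above yield a ConfigMap similar to this sketch; the labels follow the user-defined overrides convention described earlier:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: application-connector-overrides
  namespace: kyma-installer
  labels:
    installer: overrides
    component: application-connector
data:
  # sub-chart "path" + template value reference, without the .Values. prefix
  connector-service.deployment.args.appTokenExpirationMinutes: "10"
```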

While installing Kyma, provide the file path using the -o flag. Once the installation starts, the Kyma Operator generates overrides based on the ConfigMap entries. The system uses the value of 10 instead of the default value of 5 from the values.yaml chart file.

Global overrides

There are several important parameters that are usually shared across the charts. The Helm convention for providing these requires the use of the global override key. For example, to define the global.domain override, use global.domain as the name of the key in a ConfigMap or Secret for the Kyma Operator.

Once the installation starts, the Kyma Operator merges all of the ConfigMap entries and collects all of the global entries under the global top-level key to use for the installation.
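For instance, a global override delivered as a ConfigMap might look as follows; the ConfigMap name and domain value are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: global-overrides            # illustrative name
  namespace: kyma-installer
  labels:
    installer: overrides
data:
  global.domain: kyma.example.com   # collected under the global top-level key
```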

Values and types

The Kyma Operator generally recognizes all override values as strings. It internally renders overrides to Helm as a yaml stream with only string values.

There is one exception to this rule with respect to handling booleans: The system converts true or false strings that it encounters to a corresponding boolean true or false value.

Merging and conflicting entries

When the Kyma Operator encounters two overrides with the same key prefix, it tries to merge them. If both of them represent a map (they have nested sub-keys), their nested keys are recursively merged. If at least one of the keys points to a final value, the Kyma Operator performs the merge in a non-deterministic order, so either one of the overrides ends up in the final yaml data.

It is important to avoid overrides having the same keys for final values.

Non-conflicting merge example

Two overrides with a common key prefix ("a.b"):

"a.b.c": "first"
"a.b.d": "second"

The Kyma Operator yields the correct output:

c: first
d: second

Conflicting merge example

Two overrides with the same key ("a.b"):

"a.b": "first"
"a.b": "second"

The Kyma Operator yields either:

b: "first"

Or (due to non-deterministic merge order):

b: "second"

Install components from user-defined URLs

The Kyma Operator allows you to use external URLs as sources for the components you decide to install Kyma with. Using this mechanism, you can install Kyma with a customized component, which you store in GitHub or as a .zip or .tgz archive on a server, and use the officially released sources for other components.

To install a component using an external URL as the source, you must add the source.url attribute to the entry of a component in the Installation custom resource (CR).

The address must expose the Chart.yaml of the component directly. This means that for Git repositories or archives that do not store this file at the top level, you must specify the path to the file.

To specify the exact location of the Chart.yaml file, append its path to the URL after two slashes (//) that indicate the beginning of the path within the archive or repository. See these sample entries for components with user-defined source URLs from the Installation CR for more details:

  • Archive URL
  • Git repository URL
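A component entry with a user-defined source URL could look like this sketch; the component name, URL, and path after the two slashes are illustrative:

```yaml
- name: "monitoring"
  namespace: "kyma-system"
  source:
    url: "https://example.com/charts/monitoring.tgz//monitoring"
```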

Error handling and retry policy

If you specify an external URL as a source for a Kyma component, the Kyma Operator attempts to access it three times during the installation process. If it fails to reach the specified URL in one of the three attempts, or fails to find the required files, the installation step fails and the component installation is repeated according to the default installation retry process.

There is no fallback mechanism implemented. This means that in a case where the Operator fails to install a component using a custom URL, the installation step always fails, even if the component sources are included in the Kyma Installer image.

Custom Resource


The CustomResourceDefinition (CRD) is a detailed description of the kind of data and the format used to control the Kyma Installer, a proprietary solution based on the Kubernetes operator principles. To get the up-to-date CRD and show the output in the yaml format, run this command:

kubectl get crd -o yaml

Sample custom resource

This is a sample CR that controls the Kyma Installer. This example has the action label set to install, which means that it triggers the installation of Kyma. The name and namespace fields in the components array define which components you install and the Namespaces in which you install them.

NOTE: See the installer-cr.yaml.tpl file in the /installation/resources directory for the complete list of Kyma components.

apiVersion: ""
kind: Installation
metadata:
  name: kyma-installation
  namespace: default
  labels:
    action: install
spec:
  version: "1.0.0"
  url: ""
  profile: "evaluation"
  components:
    - name: "cluster-essentials"
      namespace: "kyma-system"
    - name: "istio"
      namespace: "istio-system"
    - name: "provision-bundles"
    - name: "dex"
      namespace: "kyma-system"
    - name: "core"
      namespace: "kyma-system"

Custom resource parameters

This table lists all the possible parameters of a given resource together with their descriptions:

  • metadata.name (required) - Specifies the name of the CR.
  • metadata.labels.action (required) - Defines the behavior of the Kyma Installer. Available options are install and uninstall.
  • spec.version (optional) - When manually installing Kyma on a cluster, specify any valid SemVer notation string.
  • spec.url (optional) - Specifies the location of the Kyma sources tar.gz package. This attribute is deprecated.
  • spec.profile (optional) - Specifies the profile used for installation or upgrade. Available options are evaluation and production.
  • spec.components (required) - Lists the Helm chart components to install, update, or uninstall.
  • spec.components.name (required) - Specifies the name of the component, which is the same as the name of the component subdirectory in the resources directory.
  • spec.components.namespace (required) - Defines the Namespace in which you want the Installer to install or update the component.
  • spec.components.source (optional) - Defines a custom URL for the source files of the given component. For more details, read the configuration document.
  • spec.components.release (optional) - Provides the name of the Helm release. The default parameter is the component name.

Additional information

The Kyma Installer adds the status section, which describes the status of the Kyma installation. This table lists the fields of the status section:

  • status.state (required) - Describes the installation state. Takes one of four values.
  • status.description (required) - Describes the installation step the installer performs at the moment.
  • status.errorLog (required) - Lists all errors that happen during installation and uninstallation.
  • status.errorLog.component (required) - Specifies the name of the component that causes the error.
  • status.errorLog.log (required) - Provides a description of the error.
  • status.errorLog.occurrences (required) - Specifies the number of subsequent occurrences of the error.

The status.state field uses one of the following four values to describe the installation state:

  • Installed - Installation successful.
  • Uninstalled - Uninstallation successful.
  • InProgress - The process of (un)installing Kyma is running and no errors for the current step have been logged.
  • Error - The Installer encountered a problem in the current step.

These components use this CR:

  • Installer - The CR triggers the Installer to install, update, or delete the specified components.


Develop a service locally without using Docker

You can develop services in the local Kyma installation without extensive Docker knowledge or a need to build and publish a Docker image. The minikube mount feature allows you to mount a directory from your local disk into the local Kubernetes cluster.

This tutorial shows how to use this feature, using the service example implemented in Go.


Install Go tools.


Install the example on your local machine

  1. Install the example:

    go get -insecure
  2. Navigate to the installed example and the http-db-service folder inside it:

    cd ~/go/src/
  3. Build the executable to run the application:

    CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .

Mount the example directory into Minikube

For this step, you need a running local Kyma instance. Read the installation document to learn how to install Kyma locally.

  1. Open the terminal window. Do not close it until the development finishes.
  2. Mount your local drive into Minikube:

    # Use the following pattern:
    minikube mount {LOCAL_DIR_PATH}:{CLUSTER_DIR_PATH}
    # To follow this guide, call:
    minikube mount ~/go/src/

    See the example and expected result:

    # Terminal 1
    minikube mount ~/go/src/
    Mounting /Users/{USERNAME}/go/src/ into /go/src/ on the minikube VM
    This daemon process must stay alive for the mount to still be accessible...
    ufs starting

Run your local service inside Minikube

  1. Create a Pod that uses the base Go image to run your executable located on your local machine:

    # Terminal 2
    kubectl run mydevpod --image=golang:1.9.2-alpine --restart=Never -n stage --overrides='
    "command": ["./main"],
  2. Expose the Pod as a service from Minikube to verify it:

    kubectl expose pod mydevpod --name=mypodservice --port=8017 --type=NodePort -n stage
  3. Check the Minikube IP address and Port, and use them to access your service.

    # Get the IP address.
    minikube ip
    # See the example result:
    # Check the Port.
    kubectl get services -n stage
    # See the example result: mypodservice NodePort <none> 8017:32226/TCP 5m
  4. Call the service from your terminal.

    curl {minikube ip}:{port}/orders -v
    # See the example: curl -v
    # The command returns an empty array.

Modify the code locally and see the results immediately in Minikube

  1. Edit the main.go file by adding a new test endpoint to the startService function:

    router.HandleFunc("/test", func(w http.ResponseWriter, r *http.Request) {
        // Return a test string; the exact response body is illustrative.
        w.Write([]byte("test"))
    })
  2. Build a new executable to run the application inside Minikube:

    CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .
  3. Replace the existing Pod with the new version:

    kubectl get pod mydevpod -n stage -o yaml | kubectl replace --force -f -
  4. Call the new test endpoint of the service from your terminal. The command returns the Test string:

    curl -v

Publish a service Docker image and deploy it to Kyma

The previous tutorial shows how to develop a service locally. You can immediately see all the changes made in a local Kyma installation based on Minikube, without building a Docker image and publishing it to a Docker registry, such as Docker Hub.

Using the same example service, this tutorial explains how to build a Docker image for your service, publish it to the Docker registry, and deploy it to the local Kyma installation. The instructions are based on Minikube, but you can also use the image you create and the Kubernetes resource definitions on a Kyma cluster.

NOTE: The deployment works both on local Kyma installation and on the Kyma cluster.


Build a Docker image

The http-db-service example used in this guide provides the Dockerfile necessary for building Docker images. Examine the Dockerfile to learn how it uses the Docker multi-stage build feature, but do not use it as-is in production. There might be custom LABEL attributes with values to override.

  1. Download the http-db-service example from the examples repository. In your terminal, navigate to the examples/http-db-service directory.
  2. Run the build with ./

NOTE: Ensure that the new image builds and is available in your local Docker registry by calling docker images. Find an image called example-http-db-service and tagged as latest.

Register the image in the Docker Hub

This guide is based on Docker Hub, but many other Docker registries are available. You can use a private Docker registry, but it must be reachable from the Internet. For more details about using a private Docker registry, see the tutorial.

  1. Open the Docker Hub webpage.
  2. Provide all of the required details and sign up.

Sign in to the Docker Hub registry in the terminal

  1. Call docker login.
  2. Provide the username and password, and press Enter.

Push the image to the Docker Hub

  1. Tag the local image with a proper name required in the registry: docker tag example-http-db-service {USERNAME}/example-http-db-service:0.0.1.
  2. Push the image to the registry: docker push {USERNAME}/example-http-db-service:0.0.1.
  3. To verify that the image is published successfully, check if it is listed in your Docker Hub account as {USERNAME}/example-http-db-service.

Deploy to Kyma

The http-db-service example contains sample Kubernetes resource definitions needed for the basic Kyma deployment. Find them in the deployment folder. Perform the following modifications to use your newly-published image in the local Kyma installation:

  1. Go to the deployment directory.
  2. Edit the deployment.yaml file. Change the image attribute to {USERNAME}/example-http-db-service:0.0.1.
  3. Create the new resources in local Kyma using these commands: kubectl create -f deployment.yaml -n stage && kubectl create -f ingress.yaml -n stage.
  4. Edit your /etc/hosts to add the new http-db-service.kyma.local host to the list of hosts associated with your minikube ip. Follow these steps:
    • Open a terminal window and run: sudo vim /etc/hosts
    • Press i to insert a new line at the top of the file.
    • Add this line: {YOUR.MINIKUBE.IP} http-db-service.kyma.local
    • Type :wq and press Enter to save the changes.
  5. Run this command to check if you can access the service: curl https://http-db-service.kyma.local/orders. The response should return an empty array.
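Step 4 can also be scripted. The following sketch adds the host entry only if it is not already present, so re-running it never duplicates the line. The HOSTS_FILE and MINIKUBE_IP defaults are placeholders for demonstration; in a real setup, point them at /etc/hosts and the output of minikube ip (and run with sudo):

```shell
#!/usr/bin/env bash
set -e

# Demonstration defaults: a temporary file and a sample IP.
# In a real setup, use HOSTS_FILE=/etc/hosts and MINIKUBE_IP=$(minikube ip).
HOSTS_FILE="${HOSTS_FILE:-$(mktemp)}"
MINIKUBE_IP="${MINIKUBE_IP:-192.168.64.2}"
HOST_NAME="http-db-service.kyma.local"

# Append the mapping only if the host is not listed yet.
if ! grep -q "${HOST_NAME}" "${HOSTS_FILE}"; then
  echo "${MINIKUBE_IP} ${HOST_NAME}" >> "${HOSTS_FILE}"
fi

# Show the resulting entry.
grep "${HOST_NAME}" "${HOSTS_FILE}"
```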

Restore resources using Velero

This tutorial shows how to use Velero to perform a partial restore of individual applications running on Kyma. Follow the guidelines to back up your Kubernetes resources and volumes so that you can restore them on a different cluster.

NOTE: Be aware that a full restore of a Kyma cluster is not supported. Start with the existing Kyma installation and restore specific resources individually.


Download and install Velero CLI.


Follow these steps to install Velero and back up your Kyma cluster.

  1. Install the Velero server.

    • Google Cloud Platform
    • Azure
  2. Create a backup of all the resources on the cluster:

    Click to copy
    velero backup create {NAME} --wait
  3. Once the backup succeeds, remove the velero Namespace:

    Click to copy
    kubectl delete ns velero
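To restore resources from that backup on a target cluster, install Velero on the target cluster and either use the Velero CLI or apply a Restore custom resource. The following sketch is an assumption-laden example, not a verbatim part of this guide: it references the backup name {NAME} from the step above and restores only the stage Namespace; adjust the fields to your setup:

```yaml
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: partial-restore
  namespace: velero
spec:
  # Name of the backup created earlier with `velero backup create {NAME}`.
  backupName: {NAME}
  # Restore only selected Namespaces; a full cluster restore is not supported.
  includedNamespaces:
  - stage
```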

Create on-demand volume snapshots

This tutorial shows how to create on-demand volume snapshots that you can use to provision a new volume or restore an existing one.


Perform the steps:

  1. Assume that you have the pvc-to-backup PersistentVolumeClaim which you have created using a CSI-enabled StorageClass. Trigger a snapshot by creating a VolumeSnapshot object:

NOTE: You must use CSI-enabled StorageClass to create a PVC, otherwise it won't be backed up.

Click to copy
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: volume-snapshot
spec:
  volumeSnapshotClassName: csi-snapshot-class
  source:
    persistentVolumeClaimName: pvc-to-backup
  2. Recreate the PVC using the snapshot as the data source:
Click to copy
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-restored
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: csi-storage-class
  resources:
    requests:
      storage: 10Gi
  dataSource:
    name: volume-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io

This creates a new pvc-restored PVC pre-populated with data from the snapshot.

You can also create a CronJob to take volume snapshots periodically. A sample CronJob definition, including the required ServiceAccount and RBAC resources, looks as follows (the container image is a placeholder for any image that provides bash, openssl, and kubectl):

Click to copy
apiVersion: v1
kind: ServiceAccount
metadata:
  name: volume-snapshotter
  namespace: {NAMESPACE}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: volume-snapshotter
  namespace: {NAMESPACE}
rules:
- apiGroups: ["snapshot.storage.k8s.io"]
  resources: ["volumesnapshots"]
  verbs: ["create", "get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: volume-snapshotter
  namespace: {NAMESPACE}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: volume-snapshotter
subjects:
- kind: ServiceAccount
  name: volume-snapshotter
  namespace: {NAMESPACE}
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: volume-snapshotter
  namespace: {NAMESPACE}
spec:
  schedule: "@hourly" #Run once an hour, beginning of hour
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: volume-snapshotter
          restartPolicy: Never
          containers:
          - name: job
            image: {IMAGE} # any image that contains bash, openssl, and kubectl
            command:
            - /bin/bash
            - -c
            - |
              # Create volume snapshot with random name.
              RANDOM_ID=$(openssl rand -hex 4)
              cat <<EOF | kubectl apply -f -
              apiVersion: snapshot.storage.k8s.io/v1beta1
              kind: VolumeSnapshot
              metadata:
                name: volume-snapshot-${RANDOM_ID}
                namespace: {NAMESPACE}
                labels:
                  "job": "volume-snapshotter"
                  "name": "volume-snapshot-${RANDOM_ID}"
              spec:
                volumeSnapshotClassName: {SNAPSHOT_CLASS_NAME}
                source:
                  persistentVolumeClaimName: {PVC_NAME}
              EOF

              # Wait until volume snapshot is ready to use.
              attempts=30        # number of readiness checks; adjust as needed
              retryTimeInSec="10" # delay between checks; adjust as needed
              for ((i=1; i<=attempts; i++)); do
                STATUS=$(kubectl get volumesnapshot volume-snapshot-${RANDOM_ID} -n {NAMESPACE} -o jsonpath='{.status.readyToUse}')
                if [ "${STATUS}" == "true" ]; then
                  echo "Volume snapshot is ready to use."
                  break
                fi
                if [[ "${i}" -lt "${attempts}" ]]; then
                  echo "Volume snapshot is not yet ready to use, let's wait ${retryTimeInSec} seconds and retry. Attempts ${i} of ${attempts}."
                  sleep ${retryTimeInSec}
                else
                  echo "Volume snapshot is still not ready to use after ${attempts} attempts, giving up."
                  exit 1
                fi
              done

              # Delete old volume snapshots.
              kubectl delete volumesnapshot -n {NAMESPACE} -l job=volume-snapshotter,name!=volume-snapshot-${RANDOM_ID}

Create on-demand volume snapshots for cloud providers

These tutorials show how to create on-demand volume snapshots for cloud providers. Before you proceed with the tutorial, read the general instructions on creating volume snapshots.

  • Create a volume snapshot for AKS
  • Create a volume snapshot for GKE

Create volume snapshots for Gardener providers

  • GCP
  • AWS
  • Azure



The troubleshooting section identifies the most common problems users face when they install and start using Kyma, and provides solutions to them.

If you can't find a solution, don't hesitate to create a GitHub issue or reach out to either the #installation or #general Slack channel to get direct support from the community.

Basic troubleshooting

Console UI password

If you forget the password for the Console UI, you can get it from the admin-user Secret located in the kyma-system Namespace. Run this command:

Click to copy
kubectl get secret admin-user -n kyma-system -o jsonpath="{.data.password}" | base64 --decode

Kyma Installer doesn't respond as expected

If the Installer does not respond as expected, check the installation status using the script with the --verbose flag added. Run:

Click to copy
scripts/ --verbose

Installation successful, component not working

If the installation is successful but a component does not behave in the expected way, inspect Helm releases for more details on all of the installed components.

Run this command to list all of the available Helm releases:

Click to copy
helm list --all-namespaces --all

Run this command to get more detailed information about a given release:

Click to copy

NOTE: Names of Helm releases correspond to names of Kyma components.

Additionally, see if all deployed Pods are running. Run this command:

Click to copy
kubectl get pods --all-namespaces

The command retrieves all Pods from all Namespaces, the status of the Pods, and their instance numbers. Check if the status is Running for all Pods. If any of the Pods that you require do not start successfully, install Kyma again.

Can't log in to the Console after hibernating the Minikube cluster

If you put a local cluster into hibernation or use minikube stop and minikube start, the date and time settings of Minikube get out of sync with the system date and time settings. As a result, Dex cannot properly validate the access token used to log in, and you cannot access the Console. To fix this, set the date and time used by your machine in Minikube. Run:

Click to copy
minikube ssh -- docker run -i --rm --privileged --pid=host debian nsenter -t 1 -m -u -n -i date -u $(date -u +%m%d%H%M%Y)

Errors after restarting Kyma on Minikube

If you restart Kyma using unsupported methods, such as triggering the installation while a Minikube cluster with Kyma is already running, the cluster might become unresponsive. You can fix this by reinstalling Kyma. To prevent such behavior, stop and restart Kyma using only the described method.

Can't deprovision Gardener cluster

If you are unable to deprovision a Gardener cluster, you might receive the following error:

Click to copy
Flow "Shoot cluster deletion" encountered task errors: [task "Cleaning extended API groups" failed: 1 error occurred:
retry failed with context deadline exceeded, last error: remaining objects are still present: [*v1beta1.CustomResourceDefinition /]

If this happens, you must remove the finalizer from the kyma-installation CR before you deprovision the cluster. Run this command:

Click to copy
kubectl patch installation kyma-installation --type=merge -p '{"metadata":{"finalizers":null}}'

Console access network error

If you try to access the Console of a local or a cluster Kyma deployment and your browser shows a 'Network Error', your local machine doesn't have the Kyma self-signed TLS certificate in its list of trusted certificates. To fix this, follow one of these two approaches:

  1. Add the Kyma certificate to the trusted certificates list of your OS:

    • Minikube on MacOS
    • Minikube on Linux
    • Cluster installation with
  2. Trust the certificate in your browser. Follow this guide for Chrome or this guide for Firefox. You must trust the certificate for these addresses:,,, and

    TIP: This solution is suitable for users who don't have administrative access to the OS.

Common installation errors

Job failed: DeadlineExceeded error

The Job failed: DeadlineExceeded error indicates that a Job object didn't finish in the set time, leading to a timeout. This error is frequently followed by a message that indicates the release which failed to install: Helm install error: rpc error: code = Unknown desc = a release named core already exists.

As this error is caused by a time-out, restart the installation.

If the problem repeats, find the job that causes the error and reach out to the #installation Slack channel or create a GitHub issue.

Follow these steps to identify the failing job:

  1. Get the installed Helm releases which correspond to components:

    Click to copy
    helm list --all-namespaces --all

    A high number of revisions may suggest that a component was reinstalled several times. If a release has a status other than Deployed, the component wasn't installed successfully.

  2. Get component details:

    Click to copy

    Pods with not all containers in READY state can cause the error.

  3. Get the deployed jobs:

    Click to copy
    kubectl get jobs --all-namespaces

    Jobs that are not completed can cause the error.

Installation fails without an apparent reason

If the installation fails and the feedback you get from the console output isn't sufficient to identify the root cause of the errors, use the helm history command to inspect errors that were logged for every revision of a given Helm release.

To list all of the available Helm releases, run:

Click to copy
helm list --all-namespaces

To inspect a release and its logged errors, run:

Click to copy
helm history {RELEASE_NAME} -n {NAMESPACE}

NOTE: Names of Helm releases correspond to names of Kyma components.

Maximum number of retries reached

The Kyma Installer retries the failed installation of releases a set number of times (default is 5). It stops the installation when it reaches the limit and returns this message: Max number of retries reached during step {STEP_NAME}. Fetch the logs of the Kyma Installer to check the reason for failure. Run:

Click to copy
kubectl -n kyma-installer logs -l 'name=kyma-installer'

After you fix the error that caused the installation to fail, run this command to restart the installation process:

Click to copy
kubectl -n default label installation/kyma-installation action=install

"Failed to pull image" error

When you try to install Kyma locally on Minikube, the installation may fail at a very early stage logging this error:

Click to copy
ERROR: Failed to pull image "": rpc error: code = Unknown desc = Error response from daemon: Get dial tcp: lookup on read udp> read: connection refused

This message shows that the installation fails because the required Docker image can't be downloaded from a Google Container Registry address. Minikube can't download the image because its DNS server can't resolve the image's address.

If you get this error, check if any process is listening on port 53. Run:

Click to copy
sudo lsof -i tcp:53

If the port is taken by a process other than Minikube, the output of this command will point you to the software causing the issue.

To fix this problem, try adjusting the configuration of the software that's blocking the port. In some cases, you might have to uninstall the software to free port 53.

For example, dnsmasq users can add listen-address= to dnsmasq.conf to run dnsmasq and Minikube at the same time.

For more details, refer to the issue #3036.

Cannot create a volume snapshot

If a PersistentVolumeClaim is not bound to a PersistentVolume, the attempt to create a volume snapshot from that PersistentVolumeClaim fails with no retries. An event is logged to indicate that the PersistentVolumeClaim is not bound to a PersistentVolume.

This may happen if the PersistentVolumeClaim and VolumeSnapshot specifications are in the same YAML file. As a result, the VolumeSnapshot and the PersistentVolumeClaim objects are created at the same time, but the PersistentVolume is not available yet, so it cannot be bound to the PersistentVolumeClaim. To solve this issue, wait until the PersistentVolumeClaim is bound to the PersistentVolume, and only then create the snapshot.


Kyma features and concepts in practice

The table lists examples that demonstrate Kyma functionalities. You can run all of them locally or on a cluster. The examples are organized by the feature or concept they showcase, and each contains ready-to-use code snippets and instructions.

Follow the links to examples' code and content sources, and try them on your own.

| Example | Description | Technology |
|---------|-------------|------------|
| Orders Service | This example demonstrates how you can use Kyma to expose microservices and Functions on HTTP endpoints and bind them to an external database. | Go, Redis |
| HTTP DB Service | Test the service that exposes an HTTP API to access a database on the cluster. | Go, MSSQL |
| Alert Rules | Configure alert rules in Kyma. | Prometheus |
| Custom Metrics in Kyma | Expose custom metrics in Kyma. | Go, Prometheus |
| Event Email Service | Send an automated email upon receiving an Event. | NodeJS |
| Tracing | Configure tracing for a service in Kyma. | Go |