This document guides you through the steps to convert a LINBIT SDS Operator deployment that uses a separate etcd cluster to one that uses the Kubernetes API as its database back end.
When to Consider a Migration
Because the Operator v1 for LINSTOR® predates LINSTOR's support for a Kubernetes back-end database, the default Operator v1 deployment used a separate etcd cluster for a long time. Since then, maintaining etcd as the back-end database has proved difficult, often requiring manual intervention.
For this reason, consider migrating away from the separate etcd cluster and using the Kubernetes back end instead.
If you already use a back end other than etcd, no migration is necessary. You can check which back end your LINSTOR cluster currently uses by entering the following command:
kubectl exec deploy/linstor-op-cs-controller -- cat /etc/linstor/linstor.toml
If the deployment is using an etcd back end, output should show this:
[db]
connection_url = "etcd://linstor-op-etcd:2379"
If the connection_url value starts with etcd://, then the LINSTOR controller uses an etcd back end and you should consider migrating to the native Kubernetes database back end.
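If you only want to see the relevant line, you can filter the output. This is a convenience sketch, assuming grep is available in the controller image:
kubectl exec deploy/linstor-op-cs-controller -- grep connection_url /etc/linstor/linstor.toml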
Prerequisites
This guide assumes:
- You used Helm to create the original deployment and are familiar with upgrading Helm deployments.
- Your LINBIT SDS deployment is up to date with the latest v1 release of the LINSTOR Operator.
- You have the following command line tools available: helm and kubectl.
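You can quickly confirm that both tools are available by checking their versions:
kubectl version --client
helm version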
Migrating the LINSTOR Controller Database
During the database migration, the LINSTOR controller cannot be running, so you will need to stop it. This means that you will not be able to use LINSTOR to provision or delete volumes during this time. Existing volumes will continue to work normally.
Stopping the LINSTOR Controller
To prevent unwanted modifications to the database during the migration, you need to stop the LINSTOR controller. To do this, set the expected number of controller replicas to zero.
First, find the release information for your current LINSTOR in Kubernetes deployment by using a helm list command.
helm list
Output under the APP VERSION column should show the version number of your LINSTOR Operator deployment, for example:
NAME [...] APP VERSION
[...]
linstor-op [...] 1.10.8
Next, set a variable equal to your installed LINSTOR Operator version.
CURRENTVERS=1.10.8
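Alternatively, you can set the variable directly from the helm list output. This is a convenience sketch, assuming your release is named linstor-op as in the example above and that jq is installed on your workstation:
CURRENTVERS=$(helm list --filter '^linstor-op$' --output json | jq -r '.[0].app_version')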
Next, change the deployment so that there are no deployed replicas of the LINSTOR controller. For the purposes of this how-to guide, this is the same as “stopping” the controller.
helm upgrade linstor-op linstor/linstor --version $CURRENTVERS --reuse-values --set operator.controller.replicas=0
kubectl rollout status deploy/linstor-op-cs-controller --watch
Output should eventually show that the LINSTOR Operator was successfully rolled out:
deployment "linstor-op-cs-controller" successfully rolled out
Verifying That the LINSTOR Controller Is Not Running
If you enter a kubectl get pods command, output should not show a linstor-op-cs-controller Pod. You can also verify the new deployment by using a kubectl describe command:
kubectl describe deployments.apps linstor-op-cs-controller | grep -i replicas
Output should show that there are no replicas of the deployment:
Replicas: 0 desired | 0 updated | 0 total | 0 available | 0 unavailable
[...]
Preparing a Pod for Running the Database Migration
After stopping the LINSTOR controller in your Kubernetes deployment, you can prepare a Pod that will run the database migration from etcd to the native Kubernetes back end.
Getting Information About the Current Deployment
First, get information about the current deployment. You will use this information to populate the environment variables needed to create the database migration Pod:
kubectl get deploy/linstor-op-cs-controller --output=jsonpath='IMAGE={$.spec.template.spec.containers[?(@.name=="linstor-controller")].image}{"\n"}CONFIG_MAP={$.spec.template.spec.volumes[?(@.name=="linstor-conf")].configMap.name}{"\n"}SERVICE_ACCOUNT={$.spec.template.spec.serviceAccountName}{"\n"}'
Output should be similar to this:
IMAGE=drbd.io/linstor-controller:v1.24.2
CONFIG_MAP=linstor-op-cs-controller-config
SERVICE_ACCOUNT=linstor-controller
Creating Environment Variables
Copy this output and paste it into your shell to create the IMAGE, CONFIG_MAP, and SERVICE_ACCOUNT environment variables.
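Alternatively, you can set all three variables in one step by wrapping the kubectl get command from the previous step in an eval. This is a convenience sketch, assuming a Bash-compatible shell:
eval "$(kubectl get deploy/linstor-op-cs-controller --output=jsonpath='IMAGE={$.spec.template.spec.containers[?(@.name=="linstor-controller")].image}{"\n"}CONFIG_MAP={$.spec.template.spec.volumes[?(@.name=="linstor-conf")].configMap.name}{"\n"}SERVICE_ACCOUNT={$.spec.template.spec.serviceAccountName}{"\n"}')"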
Deploying a Database Migration Pod
To deploy a Pod that will handle the database migration work, first create a YAML configuration file that will describe the database migration Pod by entering the following command:
cat << EOF > linstor-db-migration-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: linstor-database-migration
spec:
  serviceAccountName: $SERVICE_ACCOUNT
  containers:
    - name: backup
      image: $IMAGE
      command:
        - /bin/bash
        - -c
        - "sleep infinity"
      volumeMounts:
        - name: linstor-conf
          mountPath: /etc/linstor
          readOnly: true
        - name: backup
          mountPath: /backup
        - name: logs
          mountPath: /logs
  volumes:
    - name: backup
      emptyDir: {}
    - name: logs
      emptyDir: {}
    - name: linstor-conf
      configMap:
        name: $CONFIG_MAP
EOF
Next, apply the configuration file to your Kubernetes deployment and wait for the Pod to reach a ready state.
kubectl apply -f linstor-db-migration-pod.yaml
kubectl wait -f linstor-db-migration-pod.yaml --for=condition=Ready --timeout=15m
Backing Up the Database
After the Pod is in a ready state, you can use it to create a backup of your existing database. You will use this backup later to migrate your deployment data to the native Kubernetes back-end database.
To back up the etcd database, enter the following command:
kubectl exec linstor-database-migration -- /usr/share/linstor-server/bin/linstor-database export-db -c /etc/linstor /backup/backup-before-migration.json
After exporting the backup, copy it to your host. This will be an additional "backup copy" should there be issues during the migration.
kubectl cp linstor-database-migration:/backup/backup-before-migration.json backup-before-migration.json
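You can optionally verify that the exported file is well-formed JSON before continuing. This check is a sketch that assumes jq is installed on your workstation:
jq empty backup-before-migration.json && echo "backup file is valid JSON"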
Updating the LINSTOR Configuration to Point to the New Database Back End
Enter the following command to update the LINSTOR controller configuration so that it uses the native Kubernetes database as a back end.
helm upgrade linstor-op linstor/linstor --version $CURRENTVERS --reuse-values --set operator.controller.dbConnectionURL=k8s
This will cause the LINSTOR configuration to be updated, setting the Kubernetes API as the new connection_url. However, it can take a few seconds for the configuration in the Pod to be updated. Wait until the configuration in the container shows connection_url = "k8s":
kubectl exec linstor-database-migration -- cat /etc/linstor/linstor.toml
[db]
connection_url = "k8s"
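Rather than re-running the command manually, you can poll until the change propagates. This is a minimal sketch, assuming the controller image includes grep:
until kubectl exec linstor-database-migration -- grep -q 'connection_url = "k8s"' /etc/linstor/linstor.toml; do sleep 2; done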
Importing the Database From a Backup
Import the database into the new Kubernetes back end from the backup that you made earlier, by entering the following command:
kubectl exec linstor-database-migration -- /usr/share/linstor-server/bin/linstor-database import-db -c /etc/linstor /backup/backup-before-migration.json
Output should eventually show that the database import operation finished.
[...]
20:32:26.970 [main] INFO LINSTOR/linstor-db -- SYSTEM - Import finished
Starting the LINSTOR Controller
After importing the database to the new back end, you can start the LINSTOR controller again by setting the deployment's replica count back to one:
helm upgrade linstor-op linstor/linstor --version $CURRENTVERS --reuse-values --set operator.controller.replicas=1
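As when you stopped the controller, you can watch the rollout until it completes:
kubectl rollout status deploy/linstor-op-cs-controller --watch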
Verifying That the LINSTOR Controller Is Running
You can enter another kubectl get pods command to verify that you started the LINSTOR controller successfully. Output should show a linstor-op-cs-controller Pod in a running state.
In addition, you can check the state of the LINSTOR cluster by using the linstor command:
kubectl exec deploy/linstor-op-cs-controller -- linstor node list
kubectl exec deploy/linstor-op-cs-controller -- linstor resource list
kubectl exec deploy/linstor-op-cs-controller -- linstor volume list
Verifying the LINSTOR Controller Database Back End
Finally, you can verify that the LINSTOR controller has a connection to the native Kubernetes database back end, by entering the following command:
kubectl exec deploy/linstor-op-cs-controller -- cat /etc/linstor/linstor.toml
Output from the command should show that the LINSTOR controller uses the native Kubernetes database as its back end:
[db]
connection_url = "k8s"
Cleaning Up the Database Migration and etcd Resources
After successfully verifying that the LINSTOR controller Pod is again up and running, you can delete the database migration Pod.
kubectl delete pod linstor-database-migration
Because your LINSTOR controller no longer uses an etcd database as its back end, you can upgrade the deployment and disable the etcd back end in the configuration, by entering the following command:
helm upgrade linstor-op linstor/linstor --version $CURRENTVERS --reuse-values --set etcd.enabled=false
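Depending on how the chart deployed etcd, persistent volume claims created for the etcd cluster might remain after this upgrade, because Helm does not delete them automatically. Before deciding whether to delete any, you can list likely candidates; the exact names depend on your deployment:
kubectl get pvc | grep etcd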