Auto-Migrate with tool "ecm"

1. What is "ecm"?

1.1 Overview

The “ecm” tool (the name stands for "Engines to Cloud Migration") is a shell script developed by Switch. Its primary purpose is to automate the migration of OpenStack resources, such as virtual machines, networks, and security groups, from Engines to the Switch Cloud. It is intended for Engines users for whom rebuilding their cloud infrastructure from scratch is not a viable option.

If you use the tool, please keep in mind that you do so at your own risk. The tool is not perfect and cannot cover every possible case. In the event that errors occur, you should be prepared to dive deeper into the underlying Vexxhost tools. In the worst case, you may even have to forgo the ‘Auto-Migrate’ option entirely and plan for a rebuild.

And please remember: if you decide to use the auto-migrate feature, we highly recommend migrating your test or staging environments first.

1.2 Technical Details

From a technical perspective, “ecm” serves as an abstraction layer on top of the Vexxhost OS Migrate toolkit. The script simplifies the migration process by reducing complexity, hiding low-level operations, and automatically performing several steps that are not documented in the original OS Migrate tools.

1.3 Limitations and Flexibility

This convenience comes at the expense of some flexibility. Users who require full access to all configuration options may still use the original OS Migrate toolkit after the initial setup, provided they have sufficient knowledge to configure it appropriately.

2. Installation

  • The “ecm” script consists of a single file. It can be installed anywhere on your desktop system, but placing it in a directory included in your $PATH variable is recommended.
  • No root privileges are required for installation.

  • Download the "ecm" script:

# As a normal user, on the workstation/laptop from which you have access to
# both the Engines and the Cloud environments
mkdir <some base directory>
cd <some base directory>
git clone git@gitlab.switch.ch:engines2cloud/ecm.git
cd ./ecm
./ecm           # Show usage information

2.1 Prerequisites

The script requires the following CLI tools to be available locally:

  • openstack, version ≥ 7.0.0
  • ansible, version ≥ 2.17.3
  • python3, version ≥ 3.13.0
  • openssl, version ≥ 3.5.0

The script must be installed on a system that has access to both the Engines UI and the Cloud UI, i.e., it must not be blocked by firewalls.

On macOS, these tools can be installed via Homebrew. All of these commands must be available in the system's $PATH. The script itself will check for the prerequisites.
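If you want to verify the installed versions yourself before running the script, a small helper can be sketched as follows (the function `version_ge` is our own illustration, not part of ecm):

```shell
# version_ge A B -- succeeds if version A >= version B.
# Relies on 'sort -V' (version sort), available in GNU and modern BSD sort.
version_ge() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

# Example: check the locally installed openstack client against the minimum.
required="7.0.0"
installed="$(openstack --version 2>/dev/null | awk '{print $2}')"
if [ -n "$installed" ] && version_ge "$installed" "$required"; then
    echo "openstack $installed OK (>= $required)"
else
    echo "openstack client missing or older than $required"
fi
```

The same pattern works for the other three tools; only the version-printing command differs per tool.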

2.2 Preparations on OpenStack side

  • Request a destination project in the SWITCHcloud OpenStack environment by contacting your local "Informatikdienste". Switch employees should go to Request a Cloud project.
  • Make sure to ask for sufficient quotas for the new project so that it can hold all your resources.
  • You may use the ecm script itself to get a rough estimate of where quotas are insufficient; see the instructions below.
  • Note that you need more quota than before, as three helper VMs will be created. Here is an overview:
Resource                             Additional quotas for ecm
RAM on source                        +2 GB
RAM on destination                   +4 GB
Cores on source                      +2
Cores on destination                 +4
Disk on source                       +40 GB
Disk on destination                  +80 GB
Floating IPs on source               +1
Floating IPs on destination          +2
Number of volumes on source          +2
Number of volumes on destination     +4
Number of snapshots on source        +1
Number of snapshots on destination   +1
  • Contact cloud-support@switch.ch for all quota requests.
  • We strongly recommend ordering separate OpenStack destination projects for your test, staging, and production environments. This has the nice side effect of reducing the likelihood of running into quota shortages.

3. Configuration File

In addition to the script itself, “ecm” requires one configuration file:

$HOME/.ecm/config

3.1 Main Configuration File

  • The main configuration file is located at $HOME/.ecm/config.
  • You can create an initial version of this file with:
ecm config
  • Afterwards, please change all sections marked with "==> your value <==".
  • Each parameter within this file is explained in the accompanying comments.
  • The values of parameters with the prefix SRC_OS_* or DST_OS_* correspond to the values in the OpenStack openrc API file that you can download from the Horizon UI.
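To illustrate that correspondence: for each OS_* variable in a downloaded openrc file there is a prefixed counterpart in the config. The excerpt below is hypothetical; the generated file and its comments are authoritative:

```shell
# Hypothetical excerpt of ~/.ecm/config -- values are taken from the
# openrc files of the source (Engines) and destination (Cloud) projects.
SRC_OS_AUTH_URL="https://<engines-keystone-url>/v3"     # ==> your value <==
SRC_OS_PROJECT_NAME="my-engines-project"                # ==> your value <==
DST_OS_AUTH_URL="https://<cloud-keystone-url>/v3"       # ==> your value <==
DST_OS_PROJECT_NAME="my-cloud-project"                  # ==> your value <==
```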

4. How to Use

4.1 Displaying a help page

A simple usage overview can be obtained by running the command without parameters:

# Display the help page of ecm
ecm

4.2 Create an initial config file

# Create config file ~/.ecm/config
ecm config
  • Change the values marked with "==> your value <==".
  • You will be asked once for your OpenStack API passwords of both the source and destination projects.

4.3 Displaying Status

Once the prerequisites and configuration files are in place, working with ecm becomes straightforward. A common first step is to display the current migration status. This command can also be executed at any time during the process:

# Display status
ecm status

4.4 Analyse Quotas

The ecm script can be used to automatically analyze the current quota status. It is strongly recommended to run this step before starting the migration, as the migration process itself will consume resources, making it difficult to accurately determine the remaining resource requirements.

You start a quota analysis with the command:

# Analyse quotas, this will take some time
ecm quota
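If you prefer a manual cross-check, you can combine the output of the standard openstack quota commands with the headroom values from the table in section 2.2. The numbers below are placeholders for illustration:

```shell
# Placeholder values -- substitute the real numbers reported by
# 'openstack quota show' and 'openstack limits show --absolute'
# on the destination project.
ram_limit_mb=65536     # quota limit for RAM
ram_used_mb=40960      # RAM already in use
ram_needed_mb=16384    # RAM of the workloads to be migrated
ram_helper_mb=4096     # +4 GB headroom for the ecm helper VMs (see table)

if [ $(( ram_used_mb + ram_needed_mb + ram_helper_mb )) -le "$ram_limit_mb" ]; then
    echo "RAM quota sufficient"
else
    echo "RAM quota too small -- request an increase first"
fi
```

The same arithmetic applies to cores, disk, floating IPs, volumes, and snapshots.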

4.5 Perform Initial Setup

If the status output indicates that the Migrator Host or the two Conversion Hosts do not exist, the initial setup must be executed:

# Initial setup: creates the Migrator Host and both Conversion Hosts
ecm setup

Afterwards, it is recommended to check the status again:

ecm status

Individual steps, such as installing only the Migrator Host software or creating/recreating the Conversion Hosts, can be executed using the subcommands:

ecm ansible
ecm convs

In most cases, however, these subcommands are not required.

4.6 Export Source Objects

This step exports all relevant OpenStack objects—such as networks, security groups, and virtual machines (referred to as workloads)—to the Migrator Host in the form of YAML files. This step is usually non-critical:

# Export all objects
ecm export

4.7 Import Objects (excluding VMs)

Next, the OpenStack objects are created in the destination project in the correct order. Virtual machines (workloads) should be migrated in a separate step (see below):

# Import most objects excluding VMs
ecm import

4.8 Migrate Virtual Machines

The final step is the most time-consuming and critical one: migrating the VMs one-to-one to the destination project. All associated volumes are copied, which can take a considerable amount of time. Please carefully note the following points:

  • VMs on the source side are powered off one by one.
  • This step will therefore cause downtime for any services currently running on the source VMs.
  • Estimate roughly one hour per 100 GB of data.
  • By default, all VMs on the source side are migrated—and therefore powered off. If you do not want this behavior, you must override it manually on the Migrator Host (explained below).
  • Floating IP addresses cannot currently be preserved. New floating IPs will be assigned. To mitigate this, consider lowering DNS TTL values in advance, so that DNS records can be quickly updated to point to the new IPs.
  • Ensure that, at the operating-system level, all relevant services are enabled at boot so that they start automatically after the migration, for example: systemctl enable <myservice.service>

Once you have confirmed that these conditions are acceptable, start the actual migration:

ecm [-f] import_workloads      # The -f option suppresses the confirmation prompt with the above warnings

During migration, you can inspect per-VM logfiles on the Migrator Host if desired:

ssh ecm@<migrator-host-ip>      # Obtain the IP via: openstack server list -> look for the "migrator" entry
cd /home/ecm/os_migrate_data_dirs/direct/workload_logs
less <vmname>.log
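To spot failures across all per-VM logs at once, a grep over the log directory is usually sufficient. The error markers below are typical Ansible patterns; the exact wording in the logs may differ:

```shell
# List the per-VM logs that contain typical Ansible failure markers.
logdir="${logdir:-/home/ecm/os_migrate_data_dirs/direct/workload_logs}"
if [ -d "$logdir" ]; then
    grep -l -i -E 'fatal:|failed=[1-9]|unreachable=[1-9]' -- "$logdir"/*.log \
        || echo "no failure markers found"
else
    echo "log directory not found: $logdir"
fi
```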

4.9 Restricting Migration Objects

By default, all VMs, networks, subnets, and other objects are migrated. If you wish to limit the migration to a subset of objects, you must edit the config file:

vi ~/.ecm/config
  • Search for the keyword Filters.
  • Replace the default pattern - regex: .* with your own list of object names.

Example: To migrate only the VMs vm1 and vm2, update the configuration as follows:

# In config file ~/.ecm/config
os_migrate_workloads_filter:
  - vm1
  - vm2
  #- regex: .*

After adjusting the filters, the migration can be performed as described above:

ecm export
ecm import
ecm import_workloads
  • Alternatively, you may edit the config file on the migrator host:
ssh ecm@<migrator-host-ip>      # Obtain the IP via: openstack server list -> look for the "migrator" entry
vie                             # Alias for vi /home/ecm/etc/os-migrate-vars.yml
ework                           # Export workloads
iwork                           # Import workloads

4.10 Debugging

Here are the most likely reasons why a migration might fail:

  • The original flavor does not exist on the destination. => Solution: Edit the file ~/.ecm/config, section "flavor mapping", and add your own mapping for your source flavor. Then export all objects again and retry ecm import_workloads.

  • The source network does not exist on the destination and cannot be created. => Solution: Log in to the migrator host, type viw (which edits the workloads YAML file), search for the unknown network, and change its name to an existing network on the destination side. Then retry directly with the command: iwork

4.11 Completing the Migration

After the migration has finished, perform the following manual checks to verify success:

  • Confirm that all relevant objects were migrated:
openstack server list
openstack network list
openstack security group list
# ...etc.
  • Try to log in via SSH to your jumphosts or to any VMs accessible through floating IPs.
  • Verify that the required services are running at the operating system level.
  • Take note of your new floating IP addresses, update your DNS records accordingly, and wait for the TTL to expire.
  • Test your service URLs (if applicable).
  • Leave the source objects untouched for the time being.
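The first check can be partly automated by comparing server names between the two sides. In the sketch below the name lists are illustrative placeholders; in practice you would fill them with the output of `openstack server list -f value -c Name | sort` against each project:

```shell
# Illustrative name lists -- replace with real 'openstack server list' output.
src_names='jumphost
vm1
vm2'
dst_names='jumphost
vm1'

# Report every source server that is missing on the destination.
for name in $src_names; do
    if ! printf '%s\n' "$dst_names" | grep -qFx "$name"; then
        echo "missing on destination: $name"
    fi
done
```

With the placeholder lists above, the loop reports vm2 as missing.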

4.12 Rollback Procedure

If you need to roll back, simply start your VMs manually on the source side and restore the DNS records to their original values. You can then analyze the migrated VMs at your own pace before attempting the migration again.
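The restart of the source VMs can be scripted as a dry run first; the sketch below only echoes the commands it would execute (remove the `echo` to actually run them, and replace the placeholder names with your real source VMs):

```shell
# Dry run: print the start commands for the source-side VMs.
for vm in vm1 vm2 jumphost; do
    echo openstack server start "$vm"
done
```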

4.13 Cleanup

After several weeks of observation, and if everything appears to be functioning correctly, you may proceed with cleanup:

  1. Remove the temporary ecm hosts (the Migrator Host and the two Conversion Hosts). This will also remove temporary files and caches on your local machine:

ecm clean
  • As your personal credentials were stored (encrypted) on your local machine, you may still want to change them on both the source and destination side.

  • Remove your resources in the source project, or request complete removal of the source project by sending an email to the support team at engines-support@switch.ch.

5. Limitations

5.1 Objects which need admin rights

Objects that need admin rights cannot be migrated by ordinary users. Therefore, the following playbooks are not part of "ecm":

import_projects.yml
import_users.yml
import_roles.yml
import_users_keypairs.yml
import_user_project_role_assignments.yml
import_flavors.yml
import_images.yml

If you have sufficient rights you may migrate them manually like this:

# We use the object type "project" in this example:
# First login to migrator host:
ssh ecm@<migrator host ip>

# Export
ecm@migrator$ OSM_CMD $OSM_DIR/playbooks/export_projects.yml

# Import
ecm@migrator$ OSM_CMD $OSM_DIR/playbooks/import_projects.yml

5.2 The network "switch-net"

On SWITCHengines, there is a special internal network called "switch-net" which offers direct, NAT-free floating IPs in the range 130.59.99.0/24. For the moment, there is no equivalent network on SWITCHcloud.

Therefore, if you plan to migrate VMs belonging to this network, the "ecm" tool will map it to the "public" network on SWITCHcloud. You will thus lose the NAT-free feature of "switch-net", and maybe other related features as well.