Migration Tips

This article provides hints for the migration and for analysing errors that occur during it.

Important tables for error analysis

In addition to the large log files, there are several database tables in which errors and error notes for the migration are recorded. For example, in the `swag_migration_logging` table you can filter by error level and/or by the entity that is causing problems.

Other interesting tables are:
- `swag_migration_mapping`
- `swag_migration_media_file`
- `swag_migration_data`
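For a quick overview, a query along these lines can summarise the logged problems per entity (a sketch; the exact column names, here `level` and `entity`, may differ between Migration Assistant versions):

```sql
-- Count logged migration problems per entity and error level
SELECT entity, level, COUNT(*) AS occurrences
FROM swag_migration_logging
GROUP BY entity, level
ORDER BY occurrences DESC;
```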

Recommendation: Migration of very large amounts of data via console

If your source store contains very large amounts of data, or if you want to run the migration in the background, we recommend migrating via the console (CLI). You can start the migration via the CLI after the "Data check" step.

It is important that you first start the migration normally via the administration and cancel it after the "Data check" step. Then navigate in the console to the root directory of the target store; you should now be one level above the public folder.

Execute the following command here:

php bin/console migration:migrate <argument>

The <argument> placeholder can take the following values:

- basicSettings: Basic settings and categories (sales channel assignment, etc.). This is executed automatically when other data selections are imported.
- cms: Layouts.
- customersOrders: All customers, orders and documents.
- media: All media and folders.
- newsletterRecipient: Newsletter recipients.
- products: All product data and associated entities. Also associated entities from "media".
- productReviews: Product reviews.
- promotions: Discounts & promotions.
- seoUrls: SEO URLs.
- customerWishlists: Wish lists.
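Putting the above together, a CLI run that first migrates products and then media could look like this (a sketch; basicSettings is pulled in automatically alongside the other data selections):

```shell
# Run from the root directory of the target store
php bin/console migration:migrate products
php bin/console migration:migrate media
```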

Further accelerating the migration for large data volumes via a local database

For very large data volumes, migrating directly via a locally hosted database can make sense, as this confines the load to a single system. With several million data records, especially variants, the migration can take quite some time.

During the read process, the data is not only read out but also already prepared for the subsequent write process. The wizard is designed to make the migration as smooth as possible, but such large amounts of data are always a challenge and not the primary use case of the extension. In such edge cases, manual rework may therefore be necessary.

Whether the migration should run locally or via the API/store domain can be defined within the migration wizard (Edit connection).

Error message "No connection established"

The following error message may appear if, for example, you made a mistake when entering the store domain or API key. If you are sure the domain is correct, the cause may be that you are not using the latest version of the migration extension. Please check the extension version and install any available update.
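The installed version of the extension can be checked directly on the CLI with Shopware's standard plugin command:

```shell
# Lists installed plugins with their version and any available upgrade
php bin/console plugin:list
```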

Complete error message:
No connection established
No connection could be established to the specified server. Please check the specified store domain.


Migration gets stuck due to incomplete indexing

An index that is not fully built can cause the migration to get "stuck". Evidence of this can be notifications like the following, all of which indicate that indexing is not complete:

Circa 1395350 products remaining ...
Approximately 1400 categories remaining ...


In order for indexing to be fully completed, the following must be ensured:

- Sufficient resources on the server, including a sufficiently high memory limit (at least 2 GB).
- Long-running processes must not be terminated by the server.
- The message queue must be reset:

The message queue can be reset as follows:

-- Back up and empty the dead_message table
CREATE TABLE backup_dead_message LIKE dead_message;
INSERT INTO backup_dead_message SELECT * FROM dead_message;
DELETE FROM dead_message;

-- Back up and empty the enqueue table
CREATE TABLE backup_enqueue LIKE enqueue;
INSERT INTO backup_enqueue SELECT * FROM enqueue;
DELETE FROM enqueue;

-- Back up and empty the message_queue_stats table
CREATE TABLE backup_message_queue_stats LIKE message_queue_stats;
INSERT INTO backup_message_queue_stats SELECT * FROM message_queue_stats;
DELETE FROM message_queue_stats;

-- Back up and empty the increment table
CREATE TABLE backup_increment LIKE increment;
INSERT INTO backup_increment SELECT * FROM increment;
DELETE FROM increment;

Afterwards, please make sure that the message queue is being processed via the CLI.


You can now trigger the reindexing via the message queue using the following CLI command:

bin/console dal:refresh:index --use-queue
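For the queue to be worked off, a consumer must be running. In current Shopware 6 versions this is Symfony Messenger's consumer; the transport name (`async` here) and the limits are examples and may differ in your setup, and older Shopware 6 versions use `messenger:consume-messages` instead:

```shell
# Start a worker that processes queued messages; restart it as needed
php bin/console messenger:consume async --time-limit=60 --memory-limit=512M
```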

Indexing is then processed via the message queue, which can take several hours. As soon as it has completed, the cache should be cleared via FTP (delete all subfolders from /var/cache/*).
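Instead of deleting the folders via FTP, the cache can usually also be cleared with the standard CLI command:

```shell
# Clears the application cache of the Shopware installation
php bin/console cache:clear
```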

Re-transferring articles that have already been migrated

If a migration has already been performed, for example for test purposes, the Shopware installation remembers the articles that have already been transferred. All read data is given a checksum, which is used during subsequent migrations to check whether the data has already been migrated. This prevents data from being migrated twice and possibly overwritten.

To store these checksums, Shopware creates a table called `swag_migration_mapping`. Resetting the checksums, as well as migrating again without resetting them, can be done as often as needed. This is done via the migration extension and is described in more detail in a separate article:


Within the `swag_migration_mapping` table, individual entries can also be reset manually, e.g. to transfer only certain entities again. For example, the following SQL statement would cause only the newsletter recipients to be migrated again; the entity can be changed accordingly:

UPDATE swag_migration_mapping
SET checksum = NULL
WHERE entity = 'newsletter_recipient';
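To see which entities currently have mapping entries, and are therefore candidates for such a reset, a query like this can help:

```sql
-- Overview of how many mapping entries exist per entity
SELECT entity, COUNT(*) AS entries
FROM swag_migration_mapping
GROUP BY entity
ORDER BY entries DESC;
```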
