Cross-Cluster Shunting

Beta Feature

The cross-cluster shunt middleware is currently in beta testing and is not yet generally available. If you are interested in using shunting to migrate an existing Swift cluster, please contact support.


When migrating data from an existing Swift cluster into a new SwiftStack managed cluster, this shunt will allow you to point traffic at the new cluster partway through a migration without impacting the data clients can store and retrieve.


Once this middleware has been enabled, you must also provide the host at which to reach the cluster you are migrating off of. This should be a URL of the form:
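As an illustration only, a proxy-server configuration entry for the shunt might look something like the following. The filter name, egg entry point, and option name here are assumptions for the sake of example, not the actual SwiftStack configuration; consult your SwiftStack documentation for the real names.

```ini
[filter:cluster_shunt]
# Hypothetical filter and option names, shown only to illustrate the
# shape of the configuration.
use = egg:swiftstack#cluster_shunt
remote_cluster_url = https://old-cluster.example.com
```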

Other Configuration Tips

It is recommended that you change the Object replicator reclaim age and Object reconstructor reclaim age values in the cluster tuning page to be at least as large as the expected duration of the migration process. This ensures that DELETE requests to the new cluster are served as expected: if a tombstone were reclaimed mid-migration, a later GET for the deleted object would fall back to the old cluster and appear to resurrect it.
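Under the hood, these settings correspond to Swift's reclaim_age options, which are expressed in seconds. For a migration expected to take about 60 days, for example, the underlying Swift configuration would need something like this (the value here is illustrative):

```ini
[object-replicator]
reclaim_age = 5184000    # 60 days, in seconds

[object-reconstructor]
reclaim_age = 5184000
```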

How It Works

While the shunt is running in your new cluster, it makes a remote request to the old cluster whenever it cannot provide a correct response based only on the information available from the new cluster. In addition to correctness of responses, the shunt sometimes proxies a request to both the old and new cluster in order to maintain correctness of data.

The shunt has two modes, passthrough mode and boss mode. In passthrough mode, all requests that make it to the shunt (i.e. are not serviced by earlier middleware in the pipeline) are proxied to the old cluster. Passthrough mode is mostly intended to allow time to migrate all the clients of the old cluster to point at the new one, while keeping a fallback simple in case of a problem; boss mode is where the magic happens. The information in this document pertains to the shunt in boss mode unless otherwise specified.


Auth Options

There are several options for how to manage authentication and authorization between your two clusters. You will need to think carefully about which choice is best for you.

No Auth

One option is to disable auth entirely on one (or both) clusters. If, for example, you can set up authentication on the new cluster to your satisfaction and guarantee that no requests will go directly to the old cluster, turning auth off on the old cluster will work. Because the shunt sits after any auth middleware in the Swift pipeline, no request will be proxied to the old cluster unless the user is appropriately authenticated and authorized. However, this approach has risks: if anyone does reach the old cluster directly, they will have unrestricted access to all of its data.

Turning off authentication on the new cluster is almost certainly a bad idea. If this is the right path for you, you're probably already used to running zero-auth clusters.

Shared Auth

If you are currently using an external auth service, such as Keystone, you can configure it such that the same users have access to both clusters. In this case, a user can obtain a token from Keystone that will provide authentication on both clusters. When the shunt makes a request to the old cluster, this Keystone token will be included in the request, providing appropriate auth handling on both clusters.

Heterogeneous Auth

It is possible to run one auth system on the new cluster and a different system on the old cluster: see Auth Negotiation For Swift Shunt.


Autovivification

The shunt will autovivify, or automatically create, accounts and containers for you in some circumstances. Namely, if a request for an account, container, or object comes in and the associated account (and possibly container) is not already present on the new cluster, the shunt will attempt to create that account (and possibly container) on the new cluster by grabbing its metadata from the old cluster. If the account (or container) is not found on the old cluster, nothing is created on the new cluster.
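The autovivification flow can be sketched roughly as follows. This is a simplified model, not the shunt's actual internals; the function names and return values are illustrative assumptions.

```python
def autovivify_account(account, local_exists, head_remote_account, create_local_account):
    """Sketch of autovivification for an account (containers work the same way).

    local_exists: whether the account already exists on the new cluster.
    head_remote_account: returns the old cluster's account metadata,
        or None if the account is not found there either.
    create_local_account: creates the account on the new cluster with
        the given metadata.
    """
    if local_exists:
        return "already-present"
    metadata = head_remote_account(account)
    if metadata is None:
        # Not found on the old cluster either: create nothing.
        return "not-found"
    # Create the account locally using the old cluster's metadata.
    create_local_account(account, metadata)
    return "created"
```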


Objects

For any object GET or HEAD, the shunt will first try to retrieve the object from the new cluster. If that response is a 404, and the 404 does not come with a tombstone (a tombstone is an internal Swift indicator that an object was deleted, rather than just being missing), the shunt will pass the request off to the old cluster and return that cluster's response to you.
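The read path can be sketched like this. It is a simplified model of the behavior described above; the real middleware operates on WSGI requests and responses.

```python
def read_object(local_get, remote_get):
    """Serve an object GET/HEAD under the shunt (sketch).

    Each callable returns a (status, is_tombstone, body) tuple;
    is_tombstone means the 404 is backed by a Swift tombstone,
    i.e. the object was deliberately deleted on the new cluster.
    """
    status, is_tombstone, body = local_get()
    if status == 404 and not is_tombstone:
        # Missing locally, but not known-deleted: fall back to the
        # old cluster and return its response.
        return remote_get()
    # Anything else (success, error, or tombstoned 404) is answered
    # from the new cluster alone.
    return (status, is_tombstone, body)
```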

Any time a request is made to the remote cluster, the response will include a Remote-X-Trans-Id header. This header may be useful for tracking how frequently requests can be serviced without reference to the cluster that you're migrating off of. In some cases, the response from the old cluster is returned and the headers and status from the new cluster are provided in headers prefixed with Local-; most of the time, though, the response is the one provided by the new cluster, with the headers and status from the old cluster provided under a Remote- prefix.
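That header handling can be modeled as a simple prefixing step. This is illustrative only; apart from Remote-X-Trans-Id and Remote-Status, which the text above names, the exact header names are assumptions.

```python
def combine_responses(winner_headers, loser_headers, loser_status, loser_prefix):
    """Return the winning response's headers, with the losing side's
    status and headers folded in under a prefix ("Remote-" when the
    new cluster's response wins, "Local-" when the old one does)."""
    combined = dict(winner_headers)
    combined[loser_prefix + "Status"] = str(loser_status)
    for name, value in loser_headers.items():
        combined[loser_prefix + name] = value
    return combined
```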

For an object DELETE or PUT, the shunt will pass the request along to the new cluster, never proxy it to the old cluster, and respond exactly as if the shunt were not installed.

For an object POST, the shunt will always send the request both to the old cluster and to the new cluster, in order to prevent certain data race scenarios where metadata could otherwise be lost.
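In rough terms, the POST handling looks like the following sketch (not the actual implementation; the callables stand in for proxied subrequests):

```python
def post_metadata(post_local, post_remote):
    """Apply an object POST to both clusters (sketch).

    Sending the update to the old cluster as well prevents a race in
    which the object is later migrated without the new metadata,
    silently undoing the client's POST.
    """
    remote_status = post_remote()
    local_status = post_local()
    # The client sees the new cluster's result; the remote side's
    # status is surfaced via Remote-* headers as described above.
    return local_status, remote_status
```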

Accounts and Containers

Accounts and containers behave identically under the shunt.

A GET request will always return a combination of the results from both the new and the old cluster (and, as above, will thus always include Remote-X-Trans-Id, Remote-Status, etc). The results from the two are combined into one listing in the format requested, with the right filters applied.
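Conceptually, the merge behaves like this simplified sketch. Real listings carry full entries, formats, and more filters than shown here; only the marker and limit parameters below are taken from Swift's listing semantics.

```python
def merge_listings(local_names, remote_names, limit=10000, marker=""):
    """Merge container/account listings from both clusters into one
    sorted, de-duplicated listing, honoring marker and limit filters."""
    names = sorted(set(local_names) | set(remote_names))
    return [n for n in names if n > marker][:limit]
```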


It is possible to see an object in a container listing, or a container in an account listing, that then returns a 404 on a GET request. Unfortunately, some tradeoffs are necessary in order to shunt in a way that does not dramatically degrade performance, and this is one of them.

A HEAD request will behave similarly, except without parsing any listings. You will find both local and remote headers in the response.

A POST will also be sent to both clusters, for the same reasons described in the object section.

A PUT or DELETE request will never send a request out to the old cluster.

Illegal Requests

Any request that would not normally be a valid Swift request is invalid.

COPY requests are invalid.

DELETE requests on containers and accounts are invalid.

Bypassing The Shunt

If you need to bypass the shunt, pass the header Bypass-Shunt: True along with any request. This will effectively turn off the shunt for that request; the request will be serviced solely by the new cluster.
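A minimal model of how the shunt honors this header (a sketch; the real check happens inside the middleware, and case-insensitive matching of the value is an assumption here):

```python
def shunt_is_bypassed(headers):
    # The shunt steps aside whenever the client sends
    # "Bypass-Shunt: True" (value matched case-insensitively here).
    return headers.get("Bypass-Shunt", "").strip().lower() == "true"
```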

Any non-data requests (i.e. requests that do not look like my.domain/v1/...), notably auth requests (which look like my.domain/auth/...), will be passed through to the rest of the application if they make it to the shunt.
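That routing decision can be sketched as follows. The path shapes come from the text above; the helper name is hypothetical.

```python
def is_data_request(path):
    """Return True for Swift data requests (/v1/...), which the shunt
    handles; anything else (e.g. /auth/...) is passed straight through
    to the rest of the WSGI application."""
    return path.startswith("/v1/")
```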