Recently we had to migrate all the data from one Redis server to another. There are a few ways this can be done. However, we needed to migrate the data without downtime or any interruptions to the service.
We decided the best course of action was a three step process:
- Update the service to send all write commands (DEL, etc.) to both servers. Read commands (GET, etc.) would still continue to go to the first server.
- This would suffice for all the keys with a TTL, but would not guarantee that all of the non-expirable data would make it over. So we then run a step to iterate through all of the keys and RESTORE them into the new server. This would certainly recopy keys that already exist, but that was fine with us.
- Once the new Redis server looked good we could make the appropriate changes to the application to point solely to the new Redis server.
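Step 1 can be sketched at the shell level. This is only an illustration of the routing rule, not our actual implementation (which lived inside the application's Redis client); `write_both` and `read_old` are hypothetical helpers, and echo stand-ins replace real redis-cli calls so the sketch runs without a server:

```shell
# Stand-ins for the two servers; in reality these would be redis-cli
# invocations pointed at the old and new hosts.
OLD="echo OLD:"
NEW="echo NEW:"

# Step 1's routing rule: writes are duplicated, reads stay on the old server.
write_both() { $OLD "$@"; $NEW "$@"; }
read_old()   { $OLD "$@"; }

write_both SET greeting hello   # sent to both servers
read_old GET greeting           # served only by the old server
```

In our service the same split was hidden behind a Go interface, so application code never knew two servers were involved.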
Now, steps 1 and 3 were actually quite easy because we already had our own Redis client interface, where it was easy to hide the sending of reads and writes to separate servers without any change in application logic. Thanks, Go interfaces!
Step 2 is the bulk of the work and has a few gotchas:
- Redis provides a serialisation format via the DUMP command so that you can load any key into the same or a new server. However, this is a binary format, and care needs to be taken to handle it correctly in Bash. More details about this later.
- The DUMP command serialises the value but not the expiry, so you have to be careful to also (separately) request the TTL so that it can be included in the RESTORE.
- RESTORE expects its TTL argument in milliseconds. We used the whole-second TTL command rather than the more accurate PTTL and converted the result to milliseconds, since our application did not require millisecond expiries.
- DUMP and RESTORE both only work on a single key, so we have to scan all keys in Redis and issue an individual RESTORE for each one. Fortunately, redis-cli provides a way to do non-blocking key scanning, as well as specifying a pattern. More details about this later.
- The TTL command will return special negative values (depending on your Redis version) if there is no expiry set on a key. However, RESTORE only accepts 0 (meaning no expiry) or a positive number of milliseconds.
- One other annoying thing to note is that the RESTORE command does not allow you to restore a key that already exists; it will return a BUSYKEY error. RESTORE does allow an extra REPLACE argument to avoid this. However, REPLACE must be placed after the binary data, which (as we will see) is not possible when using the redis-cli tool with the -x argument. The easiest way around this is to DEL the key beforehand.
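The TTL gotchas above can be collapsed into one small helper. This is an illustrative sketch, not part of our migration script: `decide_restore_ttl` is a hypothetical name, and the conversion assumes (per the Redis documentation) that RESTORE takes its TTL in milliseconds.

```shell
# Map the output of the TTL command to the value RESTORE needs.
# -2: the key no longer exists; -1: the key exists with no expiry;
# any other value: remaining lifetime in whole seconds.
decide_restore_ttl() {
    case $1 in
        -2) echo "skip" ;;                             # nothing to restore
        -1) echo "restore with TTL 0" ;;               # 0 means "never expire"
        *)  echo "restore with TTL $(($1 * 1000))" ;;  # convert s -> ms
    esac
}

decide_restore_ttl -2   # skip
decide_restore_ttl -1   # restore with TTL 0
decide_restore_ttl 90   # restore with TTL 90000
```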
```shell
export OLD="redis-cli -h foo.0001.usw2.cache.amazonaws.com"
export NEW="redis-cli -h bar.clustercfg.usw2.cache.amazonaws.com"
```
I have obfuscated the original Redis hostnames. The important part is that you provide the CLI with any arguments you need to interact with the server.
Before we continue we should check that both servers are accessible and responding:
```shell
$OLD PING
# PONG
$NEW PING
# PONG
```
Great! Now let’s push the big red button. Or, at least look at it:
```shell
for KEY in $($OLD --scan); do
    # redis-cli --raw appends a newline to the binary DUMP payload;
    # head -c-1 strips that final byte so the data stays byte-exact.
    $OLD --raw DUMP "$KEY" | head -c-1 > /tmp/dump
    TTL=$($OLD --raw TTL "$KEY")
    case $TTL in
        -2)  # The key vanished since the scan; remove any stale copy.
            $NEW DEL "$KEY"
            ;;
        -1)  # No expiry: a TTL of 0 tells RESTORE to never expire the key.
            $NEW DEL "$KEY"
            cat /tmp/dump | $NEW -x RESTORE "$KEY" 0
            ;;
        *)   # RESTORE expects milliseconds, so convert the whole seconds.
            $NEW DEL "$KEY"
            cat /tmp/dump | $NEW -x RESTORE "$KEY" "$((TTL * 1000))"
            ;;
    esac
    echo "$KEY (TTL = $TTL)"
done
```
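The `head -c-1` in the script is the Bash binary-handling detail promised earlier: redis-cli terminates the DUMP payload with a newline, and leaving that extra byte in place corrupts the serialised value. Trimming exactly one byte keeps the payload intact. A quick demonstration, using printf as a stand-in for DUMP output (note that the negative byte count requires GNU head):

```shell
# Simulate a 7-byte binary payload plus redis-cli's trailing newline.
printf 'payload\n' | wc -c              # 8 bytes arrive on the pipe
printf 'payload\n' | head -c-1 | wc -c  # 7 bytes: only the newline is dropped
```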
I should also mention that we were only copying around 320k keys. Bash seemed to have no problem with this (after all the keys were loaded), but if you are migrating a huge number of keys this approach may not work.
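If the key count does become a problem, one option (assuming key names never contain newlines) is to stream the scan output line by line instead of expanding every key into a single for-loop word list. Here printf stands in for `$OLD --scan` so the sketch runs without a server:

```shell
# Process keys as they arrive rather than holding them all in memory.
printf 'user:1\nuser:2\n' |   # stand-in for: $OLD --scan
while read -r KEY; do
    echo "migrating $KEY"
done
```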
It might be helpful to read my other blog post: Deleting a Huge Number of Keys in Redis.
If you see an error similar to "(error) MOVED 660 10.1.174.29:6379", it's because you are running against a cluster and will need to add the "-c" option to your redis-cli command.
Originally published at http://elliot.land on November 27, 2018.