Now Available: Amazon ElastiCache Global Datastore for Redis

In-memory data stores are widely used for application scalability, and developers have long appreciated their benefits for storing frequently accessed data, whether volatile or persistent. Systems like Redis help decouple databases and backends from incoming traffic, shedding most of the traffic that would have otherwise reached them, and reducing application latency for users.

Obviously, managing these servers is a critical task, and great care must be taken to keep them up and running no matter what. In a previous job, my team had to move a cluster of physical cache servers across hosting suites: one by one, they connected them to external batteries, unplugged external power, unracked them, and used an office trolley (!) to roll them to the other suite where they racked them again! It happened without any service interruption, but we all breathed a sigh of relief once this was done… Lose cache data on a high-traffic platform, and things get ugly. Fast. Fortunately, cloud infrastructure is more flexible! To help minimize service disruption should an incident occur, we have added many high-availability features to Amazon ElastiCache, our managed in-memory data store for Memcached and Redis: cluster mode, multi-AZ with automatic failover, etc.

As Redis is often used to serve low latency traffic to global users, customers have told us that they’d love to be able to replicate Amazon ElastiCache clusters across AWS regions. We listened to them, got to work, and today, we’re very happy to announce that this replication capability is now available for Redis clusters.

Introducing Amazon ElastiCache Global Datastore For Redis
In a nutshell, Amazon ElastiCache Global Datastore for Redis lets you replicate a cluster in one region to clusters in up to two other regions. Customers typically do this in order to:

  • Bring cached data closer to their users, reducing network latency and improving application responsiveness.
  • Build disaster recovery capabilities, should a region be partially or totally unavailable.

Setting up a global datastore is extremely easy. First, you pick a cluster to be the primary cluster receiving writes from applications: this can be either a new cluster or an existing cluster, provided that it runs Redis 5.0.6 or above. Then, you add up to two secondary clusters in other regions, which will receive updates from the primary.
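
If you prefer scripting this step, here is a minimal AWS CLI sketch, assuming an existing Redis 5.0.6+ replication group named my-redis-primary in us-east-1 (the names and suffix are placeholders; ElastiCache adds an auto-generated prefix to the suffix to form the global datastore ID):

# Create a global datastore, using an existing Redis replication group
# in us-east-1 as the primary (names and suffix are placeholders)
$ aws elasticache create-global-replication-group \
    --region us-east-1 \
    --global-replication-group-id-suffix my-global-ds \
    --global-replication-group-description "Demo global datastore" \
    --primary-replication-group-id my-redis-primary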

This setup is available for all Redis configurations except single-node clusters: of course, you can convert a single-node cluster to a replication group, and then use it as a primary cluster.
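
If you do need to promote a standalone node first, one way to sketch it with the AWS CLI is to create a replication group on top of the existing cache cluster (my-single-node is a hypothetical cluster name):

# Turn an existing single-node cluster into a replication group,
# which can then serve as the primary of a global datastore
# (my-single-node is a hypothetical cluster name)
$ aws elasticache create-replication-group \
    --replication-group-id my-redis-primary \
    --replication-group-description "Promoted from single node" \
    --primary-cluster-id my-single-node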

Last but not least, clusters that are part of a global datastore can be modified and resized as usual (adding or removing nodes, changing node type, adding or removing shards, adding or removing replica nodes).
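
As a hedged example, here is what scaling the node type across all member clusters could look like with the AWS CLI, assuming the change is applied at the global datastore level (the ID, including its auto-generated prefix, is a placeholder):

# Scale the member clusters of the global datastore to a larger node type
# (replace xxxxx-my-global-ds with your actual global datastore ID)
$ aws elasticache modify-global-replication-group \
    --global-replication-group-id xxxxx-my-global-ds \
    --cache-node-type cache.r5.xlarge \
    --apply-immediately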

Let’s do a quick demo.

Replicating a Redis Cluster Across Regions
Let me show you how to build a three-cluster global datastore from scratch: the primary cluster will be located in the us-east-1 region, and the two secondary clusters will be located in the us-west-1 and us-west-2 regions. For the sake of simplicity, I’ll use the same default configuration for all clusters: three cache.r5.large nodes, multi-AZ, one shard.

Heading out to the AWS Console, I click on ‘Global Datastore’, and then on ‘Create’ to create my global datastore. I’m asked if I’d like to create a new cluster supporting the datastore, or if I’d rather use an existing cluster. I go for the former, and create a cluster named global-ds-1-useast1.

I click on ‘Next’, and fill in details for a secondary cluster hosted in the us-west-1 region. I unimaginatively name it global-ds-1-uswest1.

Then, I add another secondary cluster in the us-west-2 region, named global-ds-1-uswest2: I go to ‘Global Datastore’, click on ‘Add Region’, and fill in cluster details.
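
The console does the heavy lifting here, but for reference, adding a secondary cluster with the AWS CLI is roughly a create-replication-group call in the target region that references the global datastore ID (shown below as a placeholder with its auto-generated prefix):

# Add a secondary cluster in us-west-2, attached to the global datastore
# (replace xxxxx-global-ds-1 with the actual global datastore ID)
$ aws elasticache create-replication-group \
    --region us-west-2 \
    --replication-group-id global-ds-1-uswest2 \
    --replication-group-description "Secondary cluster in us-west-2" \
    --global-replication-group-id xxxxx-global-ds-1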

A little while later, all three clusters are up, and have been associated with the global datastore.
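
You can also check the status of the global datastore and its member clusters with the AWS CLI; the call would look something like this (again, the ID is a placeholder):

# List the global datastore and its member clusters, with roles and status
$ aws elasticache describe-global-replication-groups \
    --region us-east-1 \
    --global-replication-group-id xxxxx-global-ds-1 \
    --show-member-info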

Using the redis-cli client running on an Amazon Elastic Compute Cloud (Amazon EC2) instance hosted in the us-east-1 region, I can quickly connect to the cluster endpoint and check that it’s indeed operational.

[us-east-1-instance] $ redis-cli -h $US_EAST_1_CLUSTER_READWRITE
> ping
PONG
> set paris france
OK
> set berlin germany
OK
> set london uk
OK
> keys *
1) "london"
2) "berlin"
3) "paris"
> get paris
"france"

This looks fine. Using an EC2 instance hosted in the us-west-1 region, let’s now check that the data we stored in the primary cluster has been replicated to the us-west-1 secondary cluster.

[us-west-1-instance] $ redis-cli -h $US_WEST_1_CLUSTER_READONLY
> keys *
1) "london"
2) "berlin"
3) "paris"
> get paris
"france"

Nice. Now let’s add some more data on the primary cluster…

> hset Parsifal composer "Richard Wagner" date 1882 acts 3 language "German"
> hset DonGiovanni composer "W.A. Mozart" date 1787 acts 2 language "Italian"
> hset Tosca composer "Giacomo Puccini" date 1900 acts 3 language "Italian"

…and check as quickly as possible on the secondary cluster.

> keys *
1) "DonGiovanni"
2) "london"
3) "berlin"
4) "Parsifal"
5) "Tosca"
6) "paris"
> hget Parsifal composer
"Richard Wagner"

That was fast: by the time I switched to the other terminal and ran the command, the new data was already there. That’s not really surprising, since typical network latency for cross-region traffic ranges from 60 to 200 milliseconds, depending on the regions involved.

Now, what would happen if something went wrong with our primary cluster hosted in us-east-1? Well, we could easily promote one of the secondary clusters to full read/write capabilities. Let’s promote the us-west-1 cluster to be the new primary.
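
This can be done from the console; with the AWS CLI, a roughly equivalent call would be the following sketch (the global datastore ID is a placeholder):

# Promote the us-west-1 cluster to primary for the global datastore
$ aws elasticache failover-global-replication-group \
    --global-replication-group-id xxxxx-global-ds-1 \
    --primary-region us-west-1 \
    --primary-replication-group-id global-ds-1-uswest1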

For good measure, I also remove the us-east-1 cluster from the global datastore. Once this is complete, the global datastore consists of the us-west-1 cluster acting as the new primary, with the us-west-2 cluster as its only secondary.
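
For reference, removing a member cluster can also be done with the AWS CLI, roughly like this (placeholder ID again):

# Detach the old us-east-1 cluster from the global datastore;
# it keeps running as a standalone replication group
$ aws elasticache disassociate-global-replication-group \
    --global-replication-group-id xxxxx-global-ds-1 \
    --replication-group-id global-ds-1-useast1 \
    --replication-group-region us-east-1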

Now, using my EC2 instance in the us-west-1 region, and connecting to the read/write endpoint of my cluster, I add more data…

[us-west-1-instance] $ redis-cli -h $US_WEST_1_CLUSTER_READWRITE
> hset Lohengrin composer "Richard Wagner" date 1850 acts 3 language "German"

… and check that it’s been replicated to the us-west-2 cluster.

[us-west-2-instance] $ redis-cli -h $US_WEST_2_CLUSTER_READONLY
> hgetall Lohengrin
1) "composer"
2) "Richard Wagner"
3) "date"
4) "1850"
5) "acts"
6) "3"
7) "language"
8) "German"

It’s all there. Global datastores make it really easy to replicate Amazon ElastiCache data across regions!

Now Available!
This new global datastore feature is available today in US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (London). Please give it a try and send us feedback, either on the AWS forum for Amazon ElastiCache or through your usual AWS support contacts.

Julien;

Julien Simon

As an Artificial Intelligence & Machine Learning Evangelist for EMEA, Julien focuses on helping developers and enterprises bring their ideas to life.