Songkick is a U.K.-based concert discovery service and live music platform owned by Warner Music Group, connecting music fans to live events. Every year, we help over 175 million music fans around the world track their favorite artists, discover concerts and live streams, and buy tickets with confidence online and through our mobile apps and website.

We have about 15 developers across four teams, all based in London, and my role is to provide support across those teams by helping them make technical decisions and architect solutions. After migrating to Google Cloud, we wanted a fully managed caching solution that would integrate well with the other Google tools we'd come to love, and free our developers to work on innovative, customer-delighting products. Memorystore, Google's scalable, secure, and highly available in-memory service for Redis, helped us meet those goals.

Fully managed Memorystore removed hassles

Our original caching infrastructure was built entirely with on-premises Memcached, which we found simple to use at the time. Eventually, we turned to Redis to take advantage of advanced features like dictionaries and increments. In our service-oriented architecture, we had both of these open source data stores working for us. We had two Redis clusters: one for persistent data, and one as a simple caching layer between our front end and our services.

When we were making decisions about how to use Google Cloud, we realized there was no real advantage to running two caching technologies (Memcached and Redis), and decided to use only Redis, because everything we used Memcached for could be handled by Redis, and this way we wouldn't need to keep data in two stores. We knew that Redis can be more complex to use and manage, but that wasn't a big concern for us, because it would be completely managed by Google Cloud once we used Memorystore. With Memorystore automating complex Redis tasks like high availability, failover, patching, and monitoring, we could spend that time on new engineering opportunities.

We thought about the hours we had spent fixing broken Redis clusters and debugging network problems. Our team's experience is weighted toward development rather than managing infrastructure, so problems with Redis had proven distracting and time-consuming. Also, with a self-managed tool, there would likely be some user-facing downtime. Memorystore, by contrast, was a secure, fully managed option that was cost-effective and promised to save us those hassles. It offered the benefits of Redis without the cost of managing it. Choosing it was a no-brainer.

How Memorystore works for us

Let's look at a couple of our use cases for Memorystore. We have two levels of caching on Memorystore: the front end caches results from API calls to our services, and some services cache database results. Usually, our caching key for the front-end services is the URL plus any primitive values that get passed along. Given the URL and the query parameters, the front end checks whether it already has a result for them, or whether it needs to go and talk to the service.
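In rough terms, that lookup is a classic cache-aside pattern. Here's a minimal sketch in Python with the redis-py client; the key scheme, TTL, JSON encoding, and the call_service helper are illustrative assumptions, not our production code:

```python
import hashlib
import json

import redis

r = redis.Redis(host="10.0.0.3", port=6379)  # illustrative Memorystore IP

def cache_key(url: str, params: dict) -> str:
    # Key on the URL plus the primitive query parameters, sorted for stability.
    canonical = json.dumps(params, sort_keys=True)
    return "frontend:" + hashlib.sha256((url + canonical).encode()).hexdigest()

def fetch_with_cache(url: str, params: dict, ttl_seconds: int = 600):
    key = cache_key(url, params)
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)        # cache hit: no service call needed
    result = call_service(url, params)   # hypothetical downstream service client
    r.setex(key, ttl_seconds, json.dumps(result))
    return result
```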

We have a few services where we also have a caching layer within the service itself, which talks to Redis first before deciding whether it needs to go on to invoke our business logic and talk to the databases. That caching sits in front of the service, working on the same principle as the front-end caching.

We also use Fastly as a caching layer in front of our front ends. So, at an individual page level, the whole page may be heavily cached in Fastly, such as when a page is a leaderboard of the top artists on the platform.

Memorystore comes in for user-level content, such as an event page that pulls some information about the artist, some information about the event, and maybe some recommendations for the artist. If the Fastly cache on the artist page has expired, the request goes to the front end, which knows to talk to the various services to display all the requested information on the page. In that case, there might be three separate bits of data sitting in our Redis cache. Our artist pages have components that aren't cached in Fastly, so there we rely much more heavily on Redis.

Our Redis cache TTL (time-to-live) tends to be fairly low; sometimes we have just a ten-minute cache. Other times, with very static data, we can cache it in Redis for a few hours. We determine a reasonable caching time for each data item, and then set the TTL based on that determination. A particular artist might be requested 100,000 times a day, so even putting just a ten-minute cache on that makes an enormous difference in how many calls a day reach our service.
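To put rough numbers on that (assuming steady traffic to a single cached key): with a ten-minute TTL, the key can miss at most once per ten-minute window, so a day holds at most 6 × 24 = 144 misses. An artist requested 100,000 times a day therefore generates at most 144 service calls instead of 100,000, a reduction of over 99.8%.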

For this use case, we have one highly available Memorystore cluster with about 4 GB of memory, and we use a cache eviction policy of allkeys-lru (least recently used). Right now on that cluster, we're seeing peaks of about 400 requests per second. That's an average day's busy period, but it'll spike much higher in certain circumstances.
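For reference, a similar instance can be provisioned with the gcloud CLI along these lines; the instance name and region are placeholders, not our actual configuration:

```sh
# Standard tier provides the high availability (automatic failover) described above.
gcloud redis instances create frontend-cache \
    --size=4 \
    --region=europe-west2 \
    --tier=standard \
    --redis-config=maxmemory-policy=allkeys-lru
```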

We had two different Redis clusters in our old infrastructure. The first is the one just described. The second was persistent Redis. When considering the migration to Google Cloud, we decided to use Redis in the way it really excels, and to simplify and re-architect the four or five features that used the persistent Redis to use either Cloud SQL for MySQL or BigQuery. Sometimes we had used Redis to aggregate data, and now that we're on Google Cloud, we can simply use BigQuery and get far better analysis options than we had when aggregating on Redis.

We also use Memorystore as a distributed mutex. There are certain actions in our system that we don't want happening concurrently, for example, a migration of data for a particular event, where two admins might be trying to pick up the same piece of work at the same time. If that data migration happened concurrently, it could prove damaging to our system. So we use Redis here as a mutex lock between different processes, to ensure they happen consecutively instead of concurrently.
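The usual Redis building block for this is an atomic SET with NX (set only if the key doesn't exist) plus an expiry, so a crashed process can't hold the lock forever. A minimal sketch with redis-py; the key names, token scheme, and timeout are illustrative:

```python
import uuid

import redis

r = redis.Redis(host="10.0.0.3", port=6379)  # illustrative Memorystore IP

def acquire_lock(resource: str, timeout_seconds: int = 60):
    """Try to take the mutex; returns a token on success, None if already held."""
    token = str(uuid.uuid4())
    # SET ... NX EX is atomic, so only one process can create the key.
    if r.set(f"lock:{resource}", token, nx=True, ex=timeout_seconds):
        return token
    return None

def release_lock(resource: str, token: str) -> None:
    # Delete the key only if we still own it, atomically, via a Lua script.
    script = """
    if redis.call('GET', KEYS[1]) == ARGV[1] then
        return redis.call('DEL', KEYS[1])
    end
    return 0
    """
    r.eval(script, 1, f"lock:{resource}", token)
```

redis-py also ships a ready-made Lock helper (r.lock(...)) that wraps this same pattern.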

Memorystore and Redis work for us in peaceful harmony

We have not seen any problems with Redis since the migration. We also love the monitoring capabilities you get out of the box with Memorystore. When we deploy a new feature, we can easily check whether it suddenly fills the cache, or whether we have a really low hit ratio that indicates we've made an error in our implementation.
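The hit ratio is also available directly from Redis itself, which makes for a quick spot check alongside Memorystore's built-in Cloud Monitoring charts. A small sketch with redis-py:

```python
import redis

r = redis.Redis(host="10.0.0.3", port=6379)  # illustrative Memorystore IP

stats = r.info("stats")
hits, misses = stats["keyspace_hits"], stats["keyspace_misses"]
ratio = hits / (hits + misses) if hits + misses else 0.0
print(f"cache hit ratio: {ratio:.2%}")  # a suddenly low ratio hints at a caching bug
```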

Another benefit: the Memorystore interface works exactly as if you're just talking to Redis. We use ordinary Redis in a Docker container in our development environments, so when we're running locally, it's seamless to check that our caching code is doing exactly what it's meant to.
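Because the wire protocol is identical, the only thing that changes between environments is the connection target. A sketch of that setup, with the environment variable names as assumptions:

```python
import os

import redis

# Locally this points at a Docker container (e.g. docker run -p 6379:6379 redis);
# in production it's the Memorystore instance's private IP. The client code is identical.
r = redis.Redis(
    host=os.environ.get("REDIS_HOST", "localhost"),
    port=int(os.environ.get("REDIS_PORT", "6379")),
)
r.ping()  # raises an error if the instance isn't reachable
```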

We have both production and staging environments, both Virtual Private Clouds, each with its own Memorystore cluster. We have unit tests, which never actually touch Redis; integration tests, which talk to a local MySQL in Docker and a Redis in Docker as well; and acceptance tests, browser automation tests that run in the staging environment and talk to Cloud SQL and Memorystore.

Planning encores with Memorystore

As a potential future use case for Memorystore, we're almost certainly going to be adding Pub/Sub to our infrastructure, and we'll be using Redis to deduplicate some messages coming from Pub/Sub, such as when we don't want to send the same email twice in quick succession. We're looking forward to Pub/Sub being a fully managed service as well, since we're currently running RabbitMQ, which too often requires debugging. We ran an experiment using Pub/Sub for the same use case, and it worked very well, so it made for another easy decision.
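Conceptually, the dedup check is the same SET NX EX trick as the mutex above, keyed on something that identifies the message. A sketch using the google-cloud-pubsub subscriber; the project, subscription, 24-hour window, and send_email handler are assumptions (in practice an application-level ID, such as recipient plus template, catches duplicates better than the Pub/Sub message_id, which differs per publish):

```python
import redis
from google.cloud import pubsub_v1

r = redis.Redis(host="10.0.0.3", port=6379)  # illustrative Memorystore IP

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    # The first delivery creates the key; redeliveries within the window find it set.
    first_time = r.set(f"seen:{message.message_id}", 1, nx=True, ex=86400)
    if first_time:
        send_email(message)  # hypothetical handler
    message.ack()  # ack duplicates too, so Pub/Sub stops redelivering them

subscriber = pubsub_v1.SubscriberClient()
subscription = subscriber.subscription_path("my-project", "email-events")
subscriber.subscribe(subscription, callback=callback).result()
```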

Memorystore is just one of the Google data cloud solutions we use all the time. Others include Cloud SQL, BigQuery, and Dataflow, which power an ETL pipeline, data warehousing, and our analytics products. There, we aggregate data that artists are interested in, feed it back into MySQL, and then surface it in our artist products. Once we have Pub/Sub, we'll have almost every type of Google Cloud database in use. That's proof of how we feel about Google Cloud's tools.

Learn more about the services and products making music at Songkick. Curious to learn more about Memorystore? Check out the Google Cloud blog for a look at performance tuning best practices for Memorystore for Redis.


