Nick Fisher's tech blog


Configuring Lettuce/Webflux to work with Clustered Redis

Lettuce has some pretty nice out-of-the-box support for working with clustered Redis. This combination–a reactive client and application along with clustered Redis–is about as scalable, performant, and resilient as things get in distributed systems [though there are other tradeoffs, which are not the subject of this post].
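
To give a flavor of what that looks like, here's a minimal sketch [the localhost:30001 seed node and the thirty second refresh period are just placeholders, not the exact configuration from the post] that connects with Lettuce's reactive cluster API and turns on periodic topology refresh:

import io.lettuce.core.cluster.ClusterClientOptions;
import io.lettuce.core.cluster.ClusterTopologyRefreshOptions;
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
import io.lettuce.core.cluster.api.reactive.RedisAdvancedClusterReactiveCommands;

import java.time.Duration;

public class ClusteredLettuceSketch {
    public static void main(String[] args) {
        // a single seed node is enough--lettuce discovers the rest of the cluster topology
        RedisClusterClient clusterClient = RedisClusterClient.create("redis://localhost:30001");

        // periodically refresh the topology so the client notices failovers and resharding
        ClusterTopologyRefreshOptions refreshOptions = ClusterTopologyRefreshOptions.builder()
                .enablePeriodicRefresh(Duration.ofSeconds(30))
                .build();
        clusterClient.setOptions(ClusterClientOptions.builder()
                .topologyRefreshOptions(refreshOptions)
                .build());

        StatefulRedisClusterConnection<String, String> connection = clusterClient.connect();
        RedisAdvancedClusterReactiveCommands<String, String> reactiveCommands = connection.reactive();

        // reactive commands return Mono/Flux, so they compose naturally with webflux handlers
        reactiveCommands.set("some-key", "some-value")
                .then(reactiveCommands.get("some-key"))
                .doOnNext(System.out::println)
                .block(); // blocking only for this standalone demo

        connection.close();
        clusterClient.shutdown();
    }
}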

How to Configure Lettuce to connect to a local Redis Instance with Webflux

The source code for this post can be found on Github.

In a previous post, we detailed how to write integration tests for Lettuce clients in Spring Boot Webflux using a Redis test container. That's all well and good when you're just writing code for a quick feedback loop, but it's useless when it comes to running the application in real life. This post will start up Redis locally and then explain how best to connect to it using Lettuce in Webflux.
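
As a rough sketch of where that ends up, wiring the reactive Lettuce commands up as a Spring bean might look something like the following [the redis.host/redis.port property names and their defaults are placeholders, not the exact configuration from the post]:

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.reactive.RedisReactiveCommands;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RedisConfig {

    // point these at wherever your local redis is running
    @Value("${redis.host:127.0.0.1}")
    private String redisHost;

    @Value("${redis.port:6379}")
    private int redisPort;

    @Bean
    public RedisReactiveCommands<String, String> redisReactiveCommands() {
        RedisClient redisClient = RedisClient.create(
                String.format("redis://%s:%d", redisHost, redisPort)
        );
        // the reactive command interface returns Mono/Flux, which plugs straight into webflux
        return redisClient.connect().reactive();
    }
}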

How to use a Redis Test Container with Lettuce/Spring Boot Webflux

The source code for this post can be found on Github.

Another way to write integration tests that verify your interactions with Redis actually make sense is to use a test container. This framework assumes you have Docker up and running, but if you do, it will pull a specified container image [typically you'll just use Docker Hub, though it's important to note that they rate limit you, so don't go overboard], and you can then interact with that container in your integration tests.
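
A bare-bones sketch of that setup might look something like this [the image tag and the key/value used here are illustrative, not the exact test from the post]:

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.reactive.RedisReactiveCommands;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;
import reactor.test.StepVerifier;

@Testcontainers
public class RedisTestContainerTest {

    // testcontainers pulls the image [from docker hub by default] and starts it before the tests run
    @Container
    private static final GenericContainer<?> REDIS =
            new GenericContainer<>(DockerImageName.parse("redis:6-alpine"))
                    .withExposedPorts(6379);

    @Test
    public void setAndGet() {
        // connect lettuce to whatever host/port the container was mapped to
        RedisClient client = RedisClient.create(
                String.format("redis://%s:%d", REDIS.getHost(), REDIS.getMappedPort(6379))
        );
        RedisReactiveCommands<String, String> commands = client.connect().reactive();

        StepVerifier.create(commands.set("some-key", "some-value").then(commands.get("some-key")))
                .expectNext("some-value")
                .verifyComplete();
    }
}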

How to use Embedded Redis to Test a Lettuce Client in Spring Boot Webflux

The source code for this article can be found on Github.

Lettuce is a Redis client with reactive support. There is a super handy embedded Redis for Java project out there, and this kind of integration testing inside your service is worth its weight in gold, in my humble opinion. This post will detail how to merge these two worlds together and set up Redis integration tests when you're using a Lettuce client.
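
As a rough sketch of the shape of that setup [assuming the embedded-redis library's redis.embedded.RedisServer; the port and key names here are arbitrary, not the exact code from the post], an embedded Redis test might look something like:

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.reactive.RedisReactiveCommands;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import redis.embedded.RedisServer;
import reactor.test.StepVerifier;

public class EmbeddedRedisLettuceTest {

    private static final int REDIS_PORT = 6380; // arbitrary free port for the embedded server
    private static RedisServer redisServer;

    @BeforeAll
    public static void setupRedis() throws Exception {
        // spins up a real redis process managed by the embedded-redis library
        redisServer = new RedisServer(REDIS_PORT);
        redisServer.start();
    }

    @AfterAll
    public static void teardownRedis() throws Exception {
        redisServer.stop();
    }

    @Test
    public void setAndGet() {
        RedisReactiveCommands<String, String> commands =
                RedisClient.create("redis://localhost:" + REDIS_PORT).connect().reactive();

        StepVerifier.create(commands.set("some-key", "some-value").then(commands.get("some-key")))
                .expectNext("some-value")
                .verifyComplete();
    }
}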

Publishing to SNS in Java with the AWS SDK 2.0

SNS is a medium for broadcasting messages to multiple subscribers. A common use case is to have multiple SQS queues subscribe to the same SNS topic–this way, the publishing application only needs to focus on events that are specific to its business use case, and subscribing applications can each configure an SQS queue and consume the event independently of other services. This helps organizations scale and significantly reduces the need to communicate between teams–each team can focus on its own contract and business use case.
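
As a quick sketch of what publishing looks like with the 2.0 SDK's async client [the topic ARN below is a made-up placeholder, credentials come from the default provider chain, and the response is wrapped in a Mono since the rest of this blog leans on reactor]:

import reactor.core.publisher.Mono;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sns.SnsAsyncClient;
import software.amazon.awssdk.services.sns.model.PublishRequest;
import software.amazon.awssdk.services.sns.model.PublishResponse;

public class SnsPublisher {

    // placeholder ARN--substitute your own topic
    private static final String TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:my-topic";

    private final SnsAsyncClient snsAsyncClient = SnsAsyncClient.builder()
            .region(Region.US_EAST_1)
            .build();

    public Mono<PublishResponse> publishEvent(String messageBody) {
        PublishRequest publishRequest = PublishRequest.builder()
                .topicArn(TOPIC_ARN)
                .message(messageBody)
                .build();

        // the async client returns a CompletableFuture, which adapts cleanly to a Mono
        return Mono.fromFuture(() -> snsAsyncClient.publish(publishRequest));
    }
}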

In-Memory Caching in Spring Boot Webflux/Project Reactor

Sample code for this article can be found on Github.

In-memory caching can significantly improve performance in a microservices environment, usually because of the tail latency involved in calling downstream services. Caching can also help with resilience, though the extent to which that matters will depend on how you're actually leveraging it. There are two flavors of caching that you're likely to want to use: the first is using the Mono as a hot source [which is demonstrated here], and the second is selectively caching individual key/value pairs.
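
To make those two flavors concrete, here's a minimal sketch [fetchFromDownstream is a hypothetical stand-in for a real web client or repository call, and the five minute TTL is arbitrary]:

import reactor.core.publisher.Mono;

import java.time.Duration;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CachingSketch {

    // stand-in for a real downstream call
    private Mono<String> fetchFromDownstream(String key) {
        return Mono.just("value-for-" + key);
    }

    // flavor one: cache() turns the Mono into a hot source, so the downstream call happens
    // once and every subscriber gets the replayed result [here, for five minutes]
    private final Mono<String> cachedConfig = fetchFromDownstream("config")
            .cache(Duration.ofMinutes(5));

    public Mono<String> getConfig() {
        return cachedConfig;
    }

    // flavor two: selectively cache individual key/value pairs, one cached Mono per key
    private final Map<String, Mono<String>> cacheByKey = new ConcurrentHashMap<>();

    public Mono<String> getByKey(String key) {
        return cacheByKey.computeIfAbsent(
                key,
                k -> fetchFromDownstream(k).cache(Duration.ofMinutes(5))
        );
    }
}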
