Nick Fisher's tech blog


How to use Caffeine Caches Effectively in Spring Boot Webflux

The source code for this post can be found on GitHub.

When someone talks about a caffeine cache, they are talking about Ben Manes' caching library, which is a high performance, in-memory cache written for Java. If you're using reactive streams, you can't reliably use a LoadingCache, because it blocks by default. Thankfully, tapping into a couple of basic features of reactive streams and Caffeine lets us cache without blocking.
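As a rough sketch of the idea (the fetchFromDownstream call is a hypothetical stand-in for whatever expensive call you want to cache), Caffeine's AsyncCache stores CompletableFutures, which bridge cleanly to and from a Mono:

```java
import java.time.Duration;
import com.github.benmanes.caffeine.cache.AsyncCache;
import com.github.benmanes.caffeine.cache.Caffeine;
import reactor.core.publisher.Mono;

public class ReactiveCaffeineSketch {

    // AsyncCache stores CompletableFutures, so nothing blocks the reactive pipeline
    private final AsyncCache<String, String> cache = Caffeine.newBuilder()
            .expireAfterWrite(Duration.ofMinutes(5))
            .maximumSize(1_000)
            .buildAsync();

    // hypothetical downstream call whose results we want to cache
    private Mono<String> fetchFromDownstream(String key) {
        return Mono.just("value-for-" + key);
    }

    public Mono<String> getCached(String key) {
        // cache.get returns a CompletableFuture; on a miss, the mapping function
        // subscribes to the Mono by converting it to a future
        return Mono.fromFuture(cache.get(key,
                (k, executor) -> fetchFromDownstream(k).toFuture()));
    }
}
```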

Publishing to SNS in Java with the AWS SDK 2.0

SNS is a way to broadcast messages to multiple subscribers. A common use case is to have multiple SQS queues subscribe to the same SNS topic: the publishing application only needs to focus on events specific to its business use case, and each subscribing application can configure an SQS queue and consume the event independently of other services. This helps organizations scale and significantly reduces the need for communication between teams, since each team can focus on its own contract and business use case.
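As a minimal sketch (the topic ARN and message body below are placeholders), publishing with the async client from the AWS SDK 2.0 looks roughly like this:

```java
import java.util.concurrent.CompletableFuture;
import software.amazon.awssdk.services.sns.SnsAsyncClient;
import software.amazon.awssdk.services.sns.model.PublishRequest;
import software.amazon.awssdk.services.sns.model.PublishResponse;

public class SnsPublishSketch {
    public static void main(String[] args) {
        // placeholder ARN; substitute the topic your publisher owns
        String topicArn = "arn:aws:sns:us-east-1:123456789012:example-topic";

        try (SnsAsyncClient snsClient = SnsAsyncClient.create()) {
            PublishRequest request = PublishRequest.builder()
                    .topicArn(topicArn)
                    .message("{\"orderId\":\"1234\"}")
                    .build();

            // the async client returns a CompletableFuture, which plays nicely with reactive code
            CompletableFuture<PublishResponse> response = snsClient.publish(request);
            System.out.println("message id: " + response.join().messageId());
        }
    }
}
```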

DynamoDB and Duplicate Keys in Global Secondary Indexes

If the documentation says anything about how a DynamoDB Global Secondary Index behaves when there are duplicate keys in the index, it isn't easy to find. I tested this empirically with an embedded DynamoDB mock for Java and will quickly share my findings here with you.
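For context, a query against a GSI with a non-unique key looks something like the sketch below (the table, index, and attribute names are made up for illustration). GSIs do not enforce uniqueness on their keys, so every item sharing the key comes back in the result set:

```java
import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.QueryRequest;
import software.amazon.awssdk.services.dynamodb.model.QueryResponse;

public class GsiQuerySketch {
    public static void main(String[] args) {
        DynamoDbAsyncClient dynamo = DynamoDbAsyncClient.create();

        // hypothetical table "Phones" with a GSI "CompanyIndex" keyed on "Company"
        QueryRequest request = QueryRequest.builder()
                .tableName("Phones")
                .indexName("CompanyIndex")
                .keyConditionExpression("#company = :company")
                .expressionAttributeNames(Map.of("#company", "Company"))
                .expressionAttributeValues(Map.of(
                        ":company", AttributeValue.builder().s("Motorola").build()))
                .build();

        // all items that share the index key are returned
        QueryResponse response = dynamo.query(request).join();
        response.items().forEach(System.out::println);
    }
}
```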

Query a DynamoDB Local Secondary Index with Java

DynamoDB's Local Secondary Indexes allow for more query flexibility than a traditional partition and range key combination. They are also the only index in DynamoDB on which a strongly consistent read can be requested [Global Secondary Indexes, the other index type DynamoDB supports, can at best be eventually consistent]. In this post, I will walk through an example of how to use Local Secondary Indexes in DynamoDB using the AWS SDK 2.0 for Java, which has full reactive support.
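A minimal sketch of such a query follows [the table, index, and attribute names are hypothetical]; the one piece specific to LSIs is that consistentRead can be set to true:

```java
import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.QueryRequest;

public class LsiQuerySketch {
    public static void main(String[] args) {
        DynamoDbAsyncClient dynamo = DynamoDbAsyncClient.create();

        // hypothetical table "Orders" with an LSI "OrderDateIndex" sharing the partition key
        QueryRequest request = QueryRequest.builder()
                .tableName("Orders")
                .indexName("OrderDateIndex")
                .keyConditionExpression("#customer = :customer AND #orderDate > :since")
                .expressionAttributeNames(Map.of(
                        "#customer", "CustomerId",
                        "#orderDate", "OrderDate"))
                .expressionAttributeValues(Map.of(
                        ":customer", AttributeValue.builder().s("customer-123").build(),
                        ":since", AttributeValue.builder().s("2020-01-01").build()))
                // strongly consistent reads can be requested on an LSI, unlike a GSI
                .consistentRead(true)
                .build();

        dynamo.query(request)
                .thenAccept(response -> response.items().forEach(System.out::println))
                .join();
    }
}
```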

In-Memory Caching in Spring Boot Webflux/Project Reactor

Sample code for this article can be found on GitHub.

In-memory caching can significantly improve performance in a microservices environment, largely because of the tail latency involved in calling downstream services. Caching can also help with resilience, though how much that matters depends on how you're actually leveraging it. There are two flavors of caching you're likely to want to use: the first is using a Mono as a hot source [which is what this post demonstrates], and the second is selectively caching individual key/value pairs.
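A minimal sketch of the first flavor, assuming a hypothetical fetchConfig call standing in for a downstream request, uses Project Reactor's cache operator to turn the Mono into a replaying hot source:

```java
import java.time.Duration;
import reactor.core.publisher.Mono;

public class MonoCacheSketch {

    // hypothetical downstream call whose result we want to reuse
    private Mono<String> fetchConfig() {
        return Mono.fromCallable(() -> "config-from-downstream");
    }

    // cache() makes the Mono hot: the first subscriber triggers the call,
    // and the result is replayed to later subscribers until the TTL expires
    private final Mono<String> cachedConfig = fetchConfig()
            .cache(Duration.ofMinutes(5));

    public Mono<String> getConfig() {
        return cachedConfig;
    }
}
```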
