How to Zip Reactor Mono Objects that Return Void
Leveraging Mono.zip appropriately [with the right configuration] can yield significant gains in performance and concurrency. There is one caveat to its usage, though:
There’s a very insidious bug that can creep into reactive code, and it basically comes down to whether an underlying Mono in a chain of operations was actually subscribed to; merely invoking the method that builds the Mono is not enough. I’ll demonstrate with an example.
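To make the caveat concrete, here is a minimal sketch [the auditLog and lookupName methods are hypothetical] showing how Mono.zip silently drops downstream work when one of its sources is an empty Mono&lt;Void&gt;, and one way to work around it:

```java
import reactor.core.publisher.Mono;

public class ZipVoidCaveat {
    // hypothetical side-effecting operation that completes empty
    static Mono<Void> auditLog(String msg) {
        return Mono.fromRunnable(() -> System.out.println("audit: " + msg));
    }

    static Mono<String> lookupName() {
        return Mono.just("reactor");
    }

    public static void main(String[] args) {
        // Mono.zip completes empty if ANY source completes without a value,
        // so this zip never emits and the map lambda never runs
        Mono.zip(auditLog("saving"), lookupName())
                .map(tuple -> "zipped!")
                .defaultIfEmpty("zip completed empty")
                .subscribe(System.out::println); // prints "zip completed empty"

        // one workaround: convert the empty Mono into one that emits a value
        Mono.zip(auditLog("saving").then(Mono.just("done")), lookupName())
                .map(tuple -> tuple.getT1() + "/" + tuple.getT2())
                .subscribe(System.out::println); // prints "done/reactor"
    }
}
```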
The source code for this post can be found on GitHub.
When someone talks about a Caffeine cache, they are talking about Ben Manes’ caching library, a high-performance, in-memory cache written for Java. If you’re using reactive streams, you can’t reliably use a LoadingCache because it’s blocking by default. Thankfully, tapping into a couple of basic features of reactive streams and Caffeine can get us there.
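One such feature is Caffeine’s AsyncCache, which stores CompletableFutures and therefore never blocks the caller. A minimal sketch [fetchFromSource is a hypothetical non-blocking lookup] might look like:

```java
import com.github.benmanes.caffeine.cache.AsyncCache;
import com.github.benmanes.caffeine.cache.Caffeine;
import reactor.core.publisher.Mono;

import java.time.Duration;

public class ReactiveCaffeine {
    private final AsyncCache<String, String> cache = Caffeine.newBuilder()
            .expireAfterWrite(Duration.ofMinutes(5))
            .maximumSize(10_000)
            .buildAsync();

    // hypothetical non-blocking lookup against a remote source
    private Mono<String> fetchFromSource(String key) {
        return Mono.just("value-for-" + key);
    }

    public Mono<String> get(String key) {
        // AsyncCache.get takes a (key, executor) -> CompletableFuture mapping,
        // so the load stays non-blocking end to end
        return Mono.fromFuture(() -> cache.get(key, (k, executor) ->
                fetchFromSource(k).toFuture()));
    }
}
```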
DynamoDB Streams record information about what has changed in a DynamoDB table, and AWS Lambda is a way to run code without managing servers yourself. DynamoDB Streams also integrate with AWS Lambda, so any change to a DynamoDB table can be processed by a Lambda function, still without worrying about keeping servers up or maintaining them. That integration is the subject of this post.
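As a sketch of what the Lambda side of that integration can look like [using the aws-lambda-java-events library], a handler that receives stream records might be:

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent;

public class StreamHandler implements RequestHandler<DynamodbEvent, Void> {
    @Override
    public Void handleRequest(DynamodbEvent event, Context context) {
        // each record describes one change (INSERT, MODIFY, or REMOVE) to the table
        for (DynamodbEvent.DynamodbStreamRecord record : event.getRecords()) {
            context.getLogger().log(record.getEventName()
                    + " keys=" + record.getDynamodb().getKeys());
        }
        return null;
    }
}
```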
AWS Lambda functions were the first “serverless” way to run code. Of course, there are still servers, but the point is that you can nearly forget about managing them; that responsibility is owned by AWS.
DynamoDB transactions can be used for atomic updates. Atomic updates in DynamoDB without transactions can be difficult to implement: you’ll often have to manage the current state of the update yourself in something like a saga, and write rollback procedures specific to your business logic. Further, without a transaction manager, the data will be in an inconsistent state at some point while the saga is ongoing. An alternative is a Two-Phase Commit, but that’s also expensive, both in developer effort and in performance [2PC typically calls for a lock to be held during the operation, and even then the operation can still end up in an inconsistent state].
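With a transaction, the writes either all succeed or all fail together. A minimal sketch using the AWS SDK 2.0 for Java [the table, key, and attribute names are made up]:

```java
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.*;

import java.util.Map;

public class TransactionExample {
    public static void main(String[] args) {
        DynamoDbClient dynamo = DynamoDbClient.create();

        // debit one account and credit another atomically; if the condition
        // on the first update fails, neither update is applied
        dynamo.transactWriteItems(TransactWriteItemsRequest.builder()
                .transactItems(
                        TransactWriteItem.builder().update(Update.builder()
                                .tableName("Accounts")
                                .key(Map.of("id", AttributeValue.builder().s("account-1").build()))
                                .updateExpression("SET balance = balance - :amt")
                                .conditionExpression("balance >= :amt")
                                .expressionAttributeValues(Map.of(":amt",
                                        AttributeValue.builder().n("25").build()))
                                .build()).build(),
                        TransactWriteItem.builder().update(Update.builder()
                                .tableName("Accounts")
                                .key(Map.of("id", AttributeValue.builder().s("account-2").build()))
                                .updateExpression("SET balance = balance + :amt")
                                .expressionAttributeValues(Map.of(":amt",
                                        AttributeValue.builder().n("25").build()))
                                .build()).build())
                .build());
    }
}
```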
SNS is a way to broadcast messages to multiple subscribers. A common use case is to have multiple SQS queues subscribing to the same SNS topic: the publishing application only needs to focus on events specific to its business use case, and each subscribing application can configure an SQS queue and consume the events independently of other services. This helps organizations scale and significantly reduces the need for teams to communicate with one another, since each team can focus on its own contract and use case.
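As a sketch of that fan-out with the AWS SDK 2.0 for Java [the topic and queue ARNs are placeholders], note that the publisher only ever talks to the topic:

```java
import software.amazon.awssdk.services.sns.SnsClient;
import software.amazon.awssdk.services.sns.model.PublishRequest;
import software.amazon.awssdk.services.sns.model.SubscribeRequest;

public class FanOutExample {
    public static void main(String[] args) {
        SnsClient sns = SnsClient.create();

        // one-time setup: subscribe an SQS queue to the topic (each consuming
        // team can do this independently of the publisher)
        sns.subscribe(SubscribeRequest.builder()
                .topicArn("arn:aws:sns:us-east-1:123456789012:order-events")
                .protocol("sqs")
                .endpoint("arn:aws:sqs:us-east-1:123456789012:order-processor")
                .build());

        // the publisher just emits its business event; SNS fans it out
        sns.publish(PublishRequest.builder()
                .topicArn("arn:aws:sns:us-east-1:123456789012:order-events")
                .message("{\"orderId\":\"42\",\"status\":\"CREATED\"}")
                .build());
    }
}
```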
Nested attributes in DynamoDB are a way to group related data together within a single item. Attributes are said to be nested if they are embedded within another attribute.
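For example, an address map embedded inside a user item might be written like this [a sketch using the AWS SDK 2.0 for Java; the table and attribute names are made up]:

```java
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;

import java.util.Map;

public class NestedAttributes {
    public static void main(String[] args) {
        DynamoDbClient dynamo = DynamoDbClient.create();

        // "city" and "zip" are nested attributes, embedded in the
        // top-level map attribute "address"
        Map<String, AttributeValue> address = Map.of(
                "city", AttributeValue.builder().s("Seattle").build(),
                "zip", AttributeValue.builder().s("98101").build());

        dynamo.putItem(PutItemRequest.builder()
                .tableName("Users")
                .item(Map.of(
                        "id", AttributeValue.builder().s("user-1").build(),
                        "address", AttributeValue.builder().m(address).build()))
                .build());
    }
}
```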
Scanning in DynamoDB is exactly what it sounds like: loop through every single record in a table, optionally filtering for items that match a certain condition as dynamo returns them to you. In general, you shouldn’t do this. DynamoDB is designed to store and manage very large amounts of data, and scanning through a large amount of data is very expensive, even in a distributed world. In the best case, you’ll be waiting a long time to see results. In the worst case, you might see service outages as you burn through your RCUs [read capacity units].
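For completeness, here is what a filtered scan looks like [a sketch; the table and attribute names are made up]. The filter is applied after items are read, so you still pay RCUs for every item scanned:

```java
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.ScanRequest;

import java.util.Map;

public class ScanExample {
    public static void main(String[] args) {
        DynamoDbClient dynamo = DynamoDbClient.create();

        // scanPaginator handles LastEvaluatedKey pagination for us, but every
        // item in the table is still read (and billed) before filtering
        dynamo.scanPaginator(ScanRequest.builder()
                        .tableName("Orders")
                        .filterExpression("#s = :open")
                        .expressionAttributeNames(Map.of("#s", "status"))
                        .expressionAttributeValues(Map.of(":open",
                                AttributeValue.builder().s("OPEN").build()))
                        .build())
                .items()
                .forEach(item -> System.out.println(item.get("id")));
    }
}
```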
If the documentation says anything about how a DynamoDB Global Secondary Index behaves when there are duplicate keys in the index, it isn’t easy to find. I tested this empirically with an embedded DynamoDB mock for Java and will quickly share my findings here with you.
A DynamoDB Global Secondary Index is an eventually consistent way to efficiently query data that otherwise could not be found without a table scan. It has some similarities to Local Secondary Indexes, which we covered in the last post, but is more flexible than them, because a GSI can be created, updated, and deleted after the base table has been created, which is not true of Local Secondary Indexes.
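For instance, adding a GSI to an existing table is a single UpdateTable call [a sketch using the AWS SDK 2.0 for Java; the table, index, and attribute names are made up]:

```java
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.*;

public class AddGsiExample {
    public static void main(String[] args) {
        DynamoDbClient dynamo = DynamoDbClient.create();

        // GSIs can be added after the table exists; LSIs cannot
        dynamo.updateTable(UpdateTableRequest.builder()
                .tableName("Orders")
                .attributeDefinitions(AttributeDefinition.builder()
                        .attributeName("status")
                        .attributeType(ScalarAttributeType.S).build())
                .globalSecondaryIndexUpdates(GlobalSecondaryIndexUpdate.builder()
                        .create(CreateGlobalSecondaryIndexAction.builder()
                                .indexName("ByStatus")
                                .keySchema(KeySchemaElement.builder()
                                        .attributeName("status")
                                        .keyType(KeyType.HASH).build())
                                .projection(Projection.builder()
                                        .projectionType(ProjectionType.ALL).build())
                                .provisionedThroughput(ProvisionedThroughput.builder()
                                        .readCapacityUnits(1L)
                                        .writeCapacityUnits(1L).build())
                                .build())
                        .build())
                .build());
    }
}
```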
DynamoDB’s Local Secondary Indexes allow for more query flexibility than a traditional partition and range key combination. They are also the only index in DynamoDB on which a strongly consistent read can be requested [Global Secondary Indexes, the other index type that dynamo supports, are at best eventually consistent]. In this post, I will walk through an example of how to use Local Secondary Indexes in dynamo using the AWS SDK 2.0 for Java, which has full reactive support.
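As a preview, querying an LSI with a strongly consistent read via the SDK’s async client might look like this [a sketch; the table, index, and key names are made up]:

```java
import reactor.core.publisher.Mono;
import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.QueryRequest;
import software.amazon.awssdk.services.dynamodb.model.QueryResponse;

import java.util.Map;

public class LsiQueryExample {
    public static void main(String[] args) {
        DynamoDbAsyncClient dynamo = DynamoDbAsyncClient.create();

        // consistentRead(true) is allowed here because this is an LSI;
        // the same flag on a GSI query would be rejected
        Mono<QueryResponse> results = Mono.fromFuture(() ->
                dynamo.query(QueryRequest.builder()
                        .tableName("Orders")
                        .indexName("ByOrderDate")
                        .keyConditionExpression("customerId = :c")
                        .expressionAttributeValues(Map.of(":c",
                                AttributeValue.builder().s("customer-1").build()))
                        .consistentRead(true)
                        .build()));

        // block() only for demo purposes; real reactive code stays non-blocking
        QueryResponse response = results.block();
        response.items().forEach(System.out::println);
    }
}
```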