In our current web application setup, we use Redis to cache responses from a REST API. To be able to invalidate all responses related to a certain entity, we implemented custom tagging logic: in addition to the actual response, we store a Set for every entity contained in the response, referencing the response key. On every entity mutation, we invalidate all responses that are linked in the entity's Set.
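To make the scheme concrete, here is a minimal in-memory simulation of the tagging logic described above. In the real setup these would be Redis commands (SET / SADD / SMEMBERS / UNLINK); the key prefixes "resp:" and "tag" names are illustrative assumptions, not my actual key layout.

```python
cache = {}   # response key -> cached response body (simulates the Redis keyspace)
tags = {}    # entity id -> set of response keys containing that entity (simulates the tag Sets)

def store_response(key, body, entity_ids):
    """Cache the response and add its key to every contained entity's tag Set."""
    cache[key] = body
    for entity in entity_ids:
        tags.setdefault(entity, set()).add(key)

def invalidate_entity(entity):
    """Read the entity's tag Set and remove every referenced response."""
    for key in tags.pop(entity, set()):
        cache.pop(key, None)

store_response("resp:/users/1", "{...}", ["user:1"])
store_response("resp:/users", "[...]", ["user:1", "user:2"])
invalidate_entity("user:1")   # removes both cached responses
```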
As some entities are linked to 10,000+ responses, I wrote a small Lua script to UNLINK the keys with a single server request, in order to avoid a lot of network round trips between the PHP API server and Redis (in contrast to iterating over the keys returned by SCAN on the client side and UNLINKing them in batches). But due to the single-threaded nature of Redis, this approach slowed down the whole API, because the EVAL command blocked all other requests.
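The client-side alternative mentioned above can be sketched as follows: collect the keys (e.g. via SSCAN on the tag Set) and UNLINK them in fixed-size batches, so no single command holds the server for long. The chunking helper is plain Python; the Redis calls are shown as comments, and the batch size of 500 is an assumption, not a recommendation.

```python
def chunked(keys, size=500):
    """Yield successive batches of at most `size` keys."""
    batch = []
    for key in keys:
        batch.append(key)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

# With a redis client this would look roughly like:
# for batch in chunked(client.sscan_iter("tag:user:1")):
#     client.unlink(*batch)   # one short UNLINK per batch

batches = list(chunked([f"resp:{i}" for i in range(1200)], size=500))
```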
So I came across KeyDB and understood that KeyDB uses multiple threads to process incoming commands. This seemed like the perfect solution for my use case, so I replaced Redis with KeyDB (default config), set 'server-threads' to 4, and ran a simple test:
- execute a long-running script (eval 'local i = 0 repeat redis.call("keys", "*") i = i + 1 until i == 500000 return i' 0)
- open another connection and run another command (GET testkey)
=> but unfortunately, the second command was still blocked until the first one finished.
I'm a bit confused by this behaviour, as I expected multiple threads to process the incoming commands concurrently. I also found the topic Atomicity vs Multithreading, where the author asks whether the processing of commands is multithreaded as well. But that answer relates to modules, whereas I use regular Redis commands.
Am I missing something, or is there maybe a better solution for my use case?
Thanks in advance!