KeyDB not executing queries concurrently? Long-running queries throttle other queries!

I love Stack Overflow, so I hope it is OK if I draw attention to the problem I think I’m seeing by linking to the SO questions here:

In short:

I expect KeyDB to be able to execute other queries while a “long-running query” is executing. My tests seem to show that it does not: queries are throttled/delayed while a long-running query is being executed.

I would greatly appreciate it if someone could take a look at the SO thread =)

regards

Hi Teddy!

Thanks for contacting us. I’m taking a thorough look at the thread and will investigate on your behalf. Feel free to report any new findings to us.

Hi Teddy… so here is my assumption, for your particular case, about why your HGETALL query seems to be executed serially rather than concurrently.

You have an HGETALL, which is a read of EVERY single subkey of a hash. We need to place a block for the HGETALL to ensure data consistency during the read.

Let’s say we don’t have the block: assume the HGETALL has been PARTIALLY performed but is not yet finished. An HSET/HDEL could change the same hash, which could produce erroneous results. Likewise, what would happen if the hash were deleted completely during the HGETALL?

For this reason, we force the block to prevent potential read errors.
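
Roughly, the scenario looks like this (a minimal sketch in Python with the redis-py client; the hash name and sizes are just for illustration, not something from our docs): one client runs a big HGETALL while another client issues HSET/HDEL against the same hash. The block on our side is what keeps the HGETALL result consistent in that situation.

```python
# Sketch of the scenario described above: a big HGETALL racing with HSET/HDEL.
# Assumes a KeyDB/Redis-compatible server on localhost:6379 and the redis-py client.
import threading
import redis

r1 = redis.Redis(host="localhost", port=6379)
r2 = redis.Redis(host="localhost", port=6379)

r1.delete("myhash")
r1.hset("myhash", mapping={f"field:{i}": i for i in range(100_000)})

def long_read():
    # HGETALL walks every field of the hash; the server-side block (or an MVCC
    # snapshot) is what keeps this result internally consistent.
    result = r1.hgetall("myhash")
    print("HGETALL returned", len(result), "fields")

def concurrent_write():
    # A writer deleting and changing fields of the same hash at the same time.
    r2.hdel("myhash", "field:50000")
    r2.hset("myhash", "field:100001", "new")

t1 = threading.Thread(target=long_read)
t2 = threading.Thread(target=concurrent_write)
t1.start(); t2.start()
t1.join(); t2.join()
```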

@bobeqalpha That makes sense. Do other O(n) operations like LRANGE do the same?

Also, how granular is that lock? Is it per-key or global?

Normally, the lock is global, so every KeyDB query (including LRANGE) will be performed one at a time.

@teddy @jgaskins MVCC is available to allow concurrent queries. Commands such as HGETALL/LRANGE read from the database as it was at the beginning of the command’s execution. If there are writes during the HGETALL/LRANGE, they will not affect the command’s output.

For more information, see
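
A rough way to check this yourself (a minimal sketch in Python with redis-py; it assumes an MVCC-enabled KeyDB on localhost, and the key/field names are only illustrative): start a large HGETALL on one connection, issue writes on another connection while it runs, and see whether those writes show up in the returned result.

```python
# Rough sanity check of snapshot-at-start semantics for HGETALL under MVCC.
# Assumes an MVCC-enabled KeyDB on localhost:6379 and the redis-py client.
import threading
import redis

reader_conn = redis.Redis(host="localhost", port=6379)
writer_conn = redis.Redis(host="localhost", port=6379)

writer_conn.delete("bighash")
writer_conn.hset("bighash", mapping={f"f{i}": i for i in range(1_000_000)})

snapshot = {}

def reader():
    # With MVCC, this should reflect the hash as it was when HGETALL started.
    snapshot.update(reader_conn.hgetall("bighash"))

t = threading.Thread(target=reader)
t.start()

# Writes issued while the HGETALL is (hopefully) still in flight.
writer_conn.hset("bighash", "added-during-read", "x")
writer_conn.hdel("bighash", "f0")
t.join()

# redis-py returns byte keys by default.
print("new field visible in snapshot?", b"added-during-read" in snapshot)
print("deleted field still present?", b"f0" in snapshot)
```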

Thanks for the replies/feedback, appreciated!

Someone on Stack Overflow also answered:

KeyDB, in fact, only runs the IO operations and Redis protocol parsing operations in parallel. It processes the commands serially, i.e. one by one, and the worker threads are synchronized with a spin lock.

This seems to be in line with what you are saying, Bob. I have not myself tested doing a long query on one key (“key1”) while doing many operations on another key (“key2”); does KeyDB block then? This is also what @jgaskins is asking (“how granular is that lock?”), and that question is very relevant.
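
If I get the time, I imagine such a test would look roughly like this (a sketch in Python with the redis-py client; the key names and sizes are just made up for illustration): run a long HGETALL on “key1” from one connection and measure the latency of GET on “key2” from another. If the lock is global, the “key2” latencies should spike while the HGETALL runs.

```python
# Rough probe of lock granularity: does a long read of "key1" delay ops on "key2"?
# Assumes a KeyDB server on localhost:6379 and the redis-py client.
import threading
import time
import redis

slow = redis.Redis(host="localhost", port=6379)
fast = redis.Redis(host="localhost", port=6379)

slow.delete("key1")
slow.hset("key1", mapping={f"f{i}": i for i in range(1_000_000)})
fast.set("key2", "value")

def long_query():
    slow.hgetall("key1")  # the "long running query"

t = threading.Thread(target=long_query)
t.start()

# Measure single-key latency on key2 while key1 is being read.
while t.is_alive():
    start = time.perf_counter()
    fast.get("key2")
    print(f"GET key2 took {(time.perf_counter() - start) * 1000:.2f} ms")
    time.sleep(0.05)

t.join()
```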

To answer your question:
In my world, without getting into how it would be implemented, I could and would accept that whoever asks first gets the result as the state was when they asked. So, if I do a “get all” on a large hash, and someone edits the same hash during that fetch, I get the entries as they are at the moment I fetch each of them.

If a hash has 1M entries, and my query is building the result set and has reached entry 500k, and someone then edits entry 499k, I have the previous version in my result set since I fetched it before the edit. If someone edits entry 501k, then I will probably get the new value when my query gets that far. If someone deletes the key altogether, I would be fine with getting the hash as it was before the delete (read into a separate store or whatever).

This is the same as with any SQL db:
If I start a large SELECT query and someone starts deleting rows from the same table, the DELETEs normally do not have to wait for the large SELECT query to finish; they can go on. InnoDB has, as far as I know, row-level locking, not table-level locking.
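
Just to illustrate what I mean (a rough sketch in Python with the mysql-connector client against an InnoDB table; the table name and connection details are placeholders I made up): a DELETE from one session does not have to wait for an open read transaction in another session, and the read keeps seeing its own snapshot.

```python
# Rough illustration of the InnoDB behaviour described above: a DELETE from one
# session does not wait for an open read transaction in another, and the read
# keeps seeing its own snapshot (MVCC + row-level locking).
# Assumes a local MySQL/MariaDB server with an InnoDB table `items(id INT)`;
# the connection details and table name are placeholders.
import mysql.connector

sess_a = mysql.connector.connect(user="root", password="secret", database="test")
sess_b = mysql.connector.connect(user="root", password="secret", database="test")
cur_a, cur_b = sess_a.cursor(), sess_b.cursor()

# Session A: the first SELECT in the transaction establishes a consistent
# snapshot under the default REPEATABLE READ isolation level.
cur_a.execute("START TRANSACTION")
cur_a.execute("SELECT COUNT(*) FROM items")
count_before = cur_a.fetchone()[0]

# Session B: delete rows and commit; this does not block on session A's read.
cur_b.execute("DELETE FROM items WHERE id < 100")
sess_b.commit()

# Session A still sees the table as it was when its snapshot was taken.
cur_a.execute("SELECT COUNT(*) FROM items")
count_after = cur_a.fetchone()[0]
print(count_before == count_after)  # True: the DELETE did not affect this read
sess_a.rollback()
```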

Ah, you posted as I was writing =)

If there is one global lock, then the behaviour is the same as Redis as far as I am concerned, which is a shame; when I first read about KeyDB, it sounded like KeyDB tackled the big Redis issue of one-query-at-a-time execution. But it seems I was mistaken. I really think you should clarify this on the webpage.

MVCC? Never heard of it. Reading your link now, thanks!

Very funny, MVCC is exactly what I was babbling about in my answer above, but in less technical terms.

Sooo, MVCC is Enterprise only? That is unfortunate… No chance that will make its way into Community real soon? =)

Thanks for your feedback Teddy. We’ll find ways to improve our documentation.