This is a thread to discuss new ideas or things we change with KeyDB.
I think scaling:
- Making multi-master work with 3+ servers.
- Making Redis Cluster work with multi-master.
- Making regular Redis Cluster better (CP).
How about a viewer tool? With your additions, I’m not sure whether existing viewer tools will work.
Good news on that front, we’re working on a dashboard. I’m interested in what you would find most important in a viewer.
Also existing Redis tools will work but won’t support KeyDB specific features.
For #3 I can report we’re working on allowing transparent clustering. That is, accessing keys from a single server regardless of which shard they live on.
This sounds like a proxy? What I meant was for the cluster to be CP, while multi-master is AP. That way you can choose either one (current Redis is neither CP nor AP).
For a viewer, I think it needs to be fast, so I’m not sure a web-based tool is the way to go. Maybe a client-based tool is still better. A viewer is more for development, whereas the dashboard would be more for production.
Hi John, do you have a near-term plan for merging Redis 6.0 features into KeyDB?
Yes, in fact the merge is largely complete in the unstable branch. The next KeyDB release will be v6 to match.
We’re behind on just a few bug fixes which will be resolved before the next release.
looking forward to trying it
@hengkuang unstable is now on par with Redis 6.0.1. The next planned KeyDB release is on Monday, where we will match the current Redis v6.0.3.
Thank you @jdsully, that’s great news! I definitely want to try it for the many helpful features.
Explore the idea of replacing RocksDB with HSE. “HSE delivers up to six times the performance, 11 times lower latency and seven times greater write endurance.” https://github.com/hse-project/hse
Re FLASH: the ability to tag keys as “never” or “always” stored in memory.
Some kind of native struct data type support
Currently I store a ton of flatbuffers with lots of binary values in Redis (soon to be KeyDB). This lets me do things like grab them out of memory without having to reserialize them every time, and pass them back to the client, which can read them in place. And I’ve written modules so that I can update them in place in Redis.
I know Postgres has jsonb, where you can access/update JSON, so you get “update in place” but no client reading in place. And definitely no binary data.
You could write your own, where you define/compile/upload schemas for data types the same way you do Modules. But unless there’s a way for clients to either read the buffers in place or deserialize them, a lot of the value is lost. Then again, you could avoid reinventing the wheel and add native support for flatbuffers: just upload your schema. It could also be done with the Bitsery library with more work; it’s fantastic, and I use it for serializing and then persisting data to disk on my clients.
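The read-in-place idea above can be sketched in plain Python with the stdlib struct module; a dict stands in for the key/value store, and the record layout and field names here are hypothetical:

```python
import struct

# A hypothetical fixed-layout record: 4-byte id, 8-byte score, 16-byte name.
RECORD = struct.Struct("<Iq16s")

store = {}  # dict standing in for the key/value store

def put_user(key, uid, score, name):
    # Serialize once on write; the stored value is just raw bytes.
    store[key] = RECORD.pack(uid, score, name.encode())

def read_score(key):
    # "Read in place": unpack a single field at its known offset
    # instead of deserializing the whole record.
    (score,) = struct.unpack_from("<q", store[key], offset=4)
    return score

put_user("user:1", 42, 9001, "alice")
print(read_score("user:1"))  # -> 9001
```

A client with the same schema can do the same offset read on the bytes it gets back, which is what avoids the reserialize round trip.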
- Pointer Data Type
If you had the above struct support, another thing I often do is: GET a buffer, check what media objects it requires (because I have no idea what the filename is to start off), then GET each of those. So, multiple round trips to avoid storing media objects multiple times. It would be amazing if there were some kind of pointer data type in the above struct implementation, so that whenever I GET the buffer, the media objects are automatically pulled for me as well.
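The pointer idea amounts to the server following references for you. A minimal Python sketch, with a dict standing in for the store and all key names hypothetical:

```python
# dict standing in for the key/value store
store = {
    "media:abc": b"<jpeg bytes>",
    "media:def": b"<png bytes>",
    # The buffer stores *pointers* (keys) instead of the media itself,
    # so each media object is stored only once.
    "post:1": {"title": "hello", "media": ["media:abc", "media:def"]},
}

def get_with_deref(key):
    # What a native pointer type could do server-side: follow the
    # references and return the buffer plus its media in one round trip.
    obj = dict(store[key])
    obj["media"] = [store[ptr] for ptr in obj["media"]]
    return obj

post = get_with_deref("post:1")
print(post["media"][0])  # -> b'<jpeg bytes>'
```

Today the dereference loop runs client-side, one GET per pointer; a native pointer type would move it server-side and collapse the round trips.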
Implement io_uring for all networking and other I/O. I’m just about to do this for the server framework I contribute to, https://github.com/stefanocasazza/ULib. Reads without syscalls! Shared memory buffers between kernel and userspace! Tons more IOPS!
KeyDB Pro is actually quite modular so it would be possible. I am a bit worried about how new HSE is though.
I can confirm this is definitely on my todo list.
Take a look at ModJS, which will give maximum flexibility in presenting your data here. It is very easy to write a command that takes in structured data, stores it as a flat key, and deserializes it back when requested.
- shared memory communication
this could be something cool to look into. Given the prevalence of Kubernetes, and the fact that cloud servers scale in equivalent intervals of cores and memory, I bet a lot of people are running their application servers on the same machines as their KeyDB instance.
- nested data structures
I’m contemplating remodeling my data model to eliminate usage of Mongo as a cold-storage database and store everything in KeyDB with FLASH… having the ability to store lists nested in a hashtable, and maybe even a hashtable nested in a hashtable, would make the process a breeze. Considering the top-level keyspace is obviously exhausted at UINT32_MAX with a 32-bit hash, it’s a serious concern when storing data by user pairs.
Edit: i decided to implement my own custom type via a module with my favorite hash map ska::bytell_hash_map (https://probablydance.com/2018/06/16/fibonacci-hashing-the-optimization-that-the-world-forgot-or-a-better-alternative-to-integer-modulo/) so that I can handle all the nesting I need
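Until nesting is native, a common workaround is flattening nested structures into composite field names within one hash. A minimal Python sketch, with a dict standing in for a single KeyDB hash and all names hypothetical:

```python
# dict standing in for one KeyDB hash (one HSET target)
user_hash = {}

def hset_nested(h, path, value):
    # Flatten the nesting into a composite field name, e.g. "sessions.0.ip".
    h[".".join(map(str, path))] = value

def hget_subtree(h, prefix):
    # Fetch an entire nested subtree by field-name prefix
    # (an HSCAN MATCH "sessions.*" pattern in real KeyDB).
    p = ".".join(map(str, prefix)) + "."
    return {f: v for f, v in h.items() if f.startswith(p)}

hset_nested(user_hash, ["profile", "name"], "alice")
hset_nested(user_hash, ["sessions", 0, "ip"], "10.0.0.1")
hset_nested(user_hash, ["sessions", 1, "ip"], "10.0.0.2")
print(hget_subtree(user_hash, ["sessions"]))
```

This keeps a user’s whole tree under one top-level key, which also sidesteps the keyspace-size concern at the cost of prefix scans for subtree reads.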
#1 You should take a look at ModJS; part of the idea there is bringing the application closer to the database by literally being in the same process.
#2 is something I’ve wanted to do for a long time but nobody had asked for it yet!
But on that hash table: it’s fantastic. After reading his many blog posts you’ll have a PhD in hash tables; they’re incredibly detailed. The bytell one doesn’t grow until about 93.5% capacity, so it’s extremely memory efficient, and it is still faster than any of the Google ones, which grow at 50%. If you check it out, use the xxHash (uint32_t)XXH3_64bits_withSeed(…) function. It has the greatest bandwidth of any 32-bit hash with perfect distribution.
expirations on hash fields would be fantastic
And bit operations on hash fields. Currently I have a module written to do this, but that of course blocks all keys.
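Until there’s native support, per-field expiry is usually emulated with a companion structure tracking deadlines. A minimal Python sketch using plain dicts (in real Redis/KeyDB a sorted set typically plays the deadline role, and all names here are hypothetical):

```python
import time

hash_data = {}  # field -> value (stands in for a KeyDB hash)
expiries = {}   # field -> absolute deadline (a ZSET in real Redis/KeyDB)

def hset_ex(field, value, ttl):
    hash_data[field] = value
    expiries[field] = time.monotonic() + ttl

def hget(field):
    # Lazily expire on read, the way Redis handles whole-key TTLs.
    deadline = expiries.get(field)
    if deadline is not None and time.monotonic() >= deadline:
        hash_data.pop(field, None)
        expiries.pop(field, None)
        return None
    return hash_data.get(field)

hset_ex("session", "tok123", ttl=0.05)
print(hget("session"))  # -> 'tok123'
time.sleep(0.1)
print(hget("session"))  # -> None
```

Native support would remove the bookkeeping and the extra structure; the lazy-expire-on-read trick is just the usual client-side stopgap.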