More details about how Multi-master works

Hello,

We are considering migrating our infrastructure from Redis replicas to KeyDB Active Replica and Multi-master.

We are now trying to understand how a Multi-master setup behaves.

Goals

Test how data persists in a Multi-master configuration

Nodes

  1. Multi-master node - 6.0.9-1
  2. Primary master - 6.0.9-1 - primary-master-ip
  3. Secondary master - 5.3.2-2 - secondary-master-ip

Connections

Primary master -->  Multi-master <-- Secondary Master
                         ^
                         |
                       Clients   

Configuration

Multi-master node

db0=0, db1=0, db2=0, db3=0

# keydb.conf
active-replica yes
multi-master yes

> info keyspace
# Keyspace

Primary master

db0=1, db1=1, db2=1, db3=1

> info keyspace
# Keyspace
db0:keys=1,expires=0,avg_ttl=0
db1:keys=1,expires=0,avg_ttl=0
db2:keys=1,expires=0,avg_ttl=0
db3:keys=1,expires=0,avg_ttl=0

Secondary master

db0=0, db1=1428, db2=1, db3=0

> info keyspace
# Keyspace
db1:keys=1428,expires=0,avg_ttl=0
db2:keys=1,expires=1,avg_ttl=388191

Configuring Multi-master replication

Add replication from Primary master

> replicaof primary-master-ip 6379

> info keyspace
# Keyspace
db0:keys=1,expires=0,avg_ttl=0
db1:keys=1,expires=0,avg_ttl=0
db2:keys=1,expires=0,avg_ttl=0
db3:keys=1,expires=0,avg_ttl=0

All keys are synced from the Primary master

Add replication from Secondary master

> replicaof secondary-master-ip 6379

> info keyspace
# Keyspace
db1:keys=1428,expires=0,avg_ttl=0
db2:keys=1,expires=1,avg_ttl=443864

All keys are synced from the Secondary master and override the data from the Primary master
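The observed behavior can be modeled as each `replicaof` full sync simply replacing the replica's dataset rather than merging into it. A minimal sketch of this model (the keyspace contents are the ones from the tests above; the function name is ours, not KeyDB's):

```python
# Toy model of the observed sync behavior: a full sync from a master
# replaces the replica's current dataset instead of merging with it.

def full_sync(replica_db: dict, master_db: dict) -> None:
    """Model a full sync: the replica's data is dropped and replaced."""
    replica_db.clear()
    replica_db.update(master_db)

# Keyspaces as observed in the tests above (db -> key count).
primary = {"db0": 1, "db1": 1, "db2": 1, "db3": 1}
secondary = {"db1": 1428, "db2": 1}

multi_master = {}
full_sync(multi_master, primary)    # replicaof primary-master-ip 6379
full_sync(multi_master, secondary)  # replicaof secondary-master-ip 6379

# Only the last master's data remains; db0/db3 from the primary are gone.
print(multi_master)  # {'db1': 1428, 'db2': 1}
```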

Replication status

> info replication

# Replication
role:active-replica
master_global_link_status:up
Master 0:
master_host:secondary-master-ip
master_port:6379
master_link_status:up
master_last_io_seconds_ago:2
master_sync_in_progress:0
slave_repl_offset:95419
Master 1:
master_host:primary-master-ip
master_port:6379
master_link_status:up
master_last_io_seconds_ago:6
master_sync_in_progress:0
slave_repl_offset:5608
slave_priority:100
slave_read_only:0
connected_slaves:0
master_replid:66589faaf56b9f10bd6325014d01c79d4bb6828e
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:0

Adding local key

# keydb.conf
active-replica yes
multi-master yes
replicaof primary-master-ip 6379
replicaof secondary-master-ip 6379

# Restart KeyDB service
systemctl restart keydb

# Add local key to the DB0
keydb-cli
> select 0
> set local db0

> get local
"db0"

> bgsave
> quit

systemctl restart keydb
keydb-cli
> info keyspace

# Keyspace
db1:keys=1428,expires=0,avg_ttl=0
db2:keys=1,expires=1,avg_ttl=507358

Locally added data does not survive restarts; it is overridden by the data from the master nodes
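Under the same "full sync replaces the dataset" model, the restart sequence also explains why the local key disappears: the RDB file is loaded first, and the subsequent full sync from the configured masters replaces it. A hedged sketch (function name and placeholder contents are ours):

```python
# Toy model of the restart sequence: the RDB is loaded first, then the
# configured masters full-sync and replace the dataset (observed behavior).

def full_sync(replica_db: dict, master_db: dict) -> None:
    """Full sync drops the replica's data and replaces it (as observed)."""
    replica_db.clear()
    replica_db.update(master_db)

# The RDB written by BGSAVE before the restart holds the local key.
rdb = {"db0": {"local": "db0"}}
# Placeholder contents for the last master that syncs after the restart.
secondary_master = {"db1": {"key": "value"}}

data = dict(rdb)                   # step 1: load the RDB on startup
full_sync(data, secondary_master)  # step 2: full sync from the master wins

print("db0" in data)  # False: the locally added key is gone
```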

Conclusion:

  1. KeyDB in Multi-master configuration override all data from the last master node
  2. KeyDB doesn’t persist locally added data on restart and override all data from the last master node

Documentation

  1. Active Replica Setup
  2. Using Multiple Masters

GitHub

  1. Issue in multi-master feature. Keydb is dropping local data. #210

Questions

  1. Is the described behavior correct? Will KeyDB Multi-master drop data that was already synced from previous masters?
    How then should we interpret the following sentence?

    KeyDB will not drop its database when sync’ing with the master

  2. If the described behavior is correct, how can we accumulate data from multiple nodes at startup?

  3. How can we persist locally added data across restarts of the Multi-master node?
    Could this be done with two Multi-master nodes, so that on restart each node gets all the data back from its mirror?
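For question 3, one topology worth testing would be two Multi-master nodes mirroring each other. This is only a sketch based on the Active Replica documentation; the node hostnames are placeholders, and we have not verified that this actually prevents the data loss:

```
# keydb.conf on multi-master node A (mirror of B)
active-replica yes
multi-master yes
replicaof node-b-ip 6379
replicaof primary-master-ip 6379
replicaof secondary-master-ip 6379

# keydb.conf on multi-master node B (mirror of A)
active-replica yes
multi-master yes
replicaof node-a-ip 6379
replicaof primary-master-ip 6379
replicaof secondary-master-ip 6379
```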

As @jdsully mentioned in the GitHub issue, there was a bug in version 6.x.
We have now repeated the tests with the fixed version, keydb-6.0.13-1.