Active replication masters alternately flush data

Hello,

Three weeks ago we switched from Redis to KeyDB (v6.0.8-1) on our production environment, in an active-replica configuration. After two weeks we had a network issue (lost connection between the two masters), and that is when the replication problems started. During the network issue we found entries like this in the logs:

Primary master:
8:13:S 02 Oct 2020 13:28:59.252 # MASTER timeout: no data nor PING received…
8:13:S 02 Oct 2020 13:28:59.252 # Connection with master lost.
8:13:S 02 Oct 2020 13:28:59.252 * Caching the disconnected master state.
8:13:S 02 Oct 2020 13:28:59.252 * Connecting to MASTER secondary:6379
8:13:S 02 Oct 2020 13:28:59.254 * MASTER <-> REPLICA sync started
8:13:S 02 Oct 2020 13:28:59.255 * Non blocking connect for SYNC fired the event.
8:13:S 02 Oct 2020 13:28:59.256 * Master replied to PING, replication can continue…
8:13:S 02 Oct 2020 13:28:59.258 * Partial resynchronization not possible (no cached master)
8:13:S 02 Oct 2020 13:29:01.264 * Full resync from master: 8de3d2b4d0ba9817fcff18037a2547a4c3177bce:173509023120
8:13:S 02 Oct 2020 13:29:01.265 * Discarding previously cached master state.
8:13:S 02 Oct 2020 13:29:01.277 - DB 3: 2353110 keys (2353103 volatile) in 4194304 slots HT.
8:13:S 02 Oct 2020 13:29:01.277 - DB 4: 2050682 keys (2050676 volatile) in 2097152 slots HT.
8:13:S 02 Oct 2020 13:29:01.277 - DB 5: 1372625 keys (1372625 volatile) in 2097152 slots HT.
8:13:S 02 Oct 2020 13:29:01.277 - DB 6: 2941251 keys (2941244 volatile) in 4194304 slots HT.
8:13:S 02 Oct 2020 13:29:01.277 - DB 7: 2825323 keys (2825294 volatile) in 4194304 slots HT.
8:13:S 02 Oct 2020 13:29:01.277 - DB 12: 531883 keys (531883 volatile) in 1048576 slots HT.
8:13:S 02 Oct 2020 13:29:01.277 - DB 14: 1884488 keys (1884471 volatile) in 2097152 slots HT.
8:13:S 02 Oct 2020 13:29:01.277 - DB 18: 562305 keys (562276 volatile) in 1048576 slots HT.
8:13:S 02 Oct 2020 13:29:01.277 - DB 19: 471620 keys (471617 volatile) in 524288 slots HT.
8:13:S 02 Oct 2020 13:29:01.277 - DB 23: 5070 keys (5066 volatile) in 8192 slots HT.
8:13:S 02 Oct 2020 13:29:01.277 - DB 24: 2778902 keys (2778901 volatile) in 4194304 slots HT.
8:13:S 02 Oct 2020 13:29:01.277 - DB 25: 2779001 keys (2778996 volatile) in 4194304 slots HT.
8:13:S 02 Oct 2020 13:29:01.277 - DB 26: 1632196 keys (1632193 volatile) in 2097152 slots HT.
8:13:S 02 Oct 2020 13:29:01.277 . 3599 clients connected (1 replicas), 80556334120 bytes in use

Secondary master (failover):
8:13:S 02 Oct 2020 13:28:59.259 * Replica primary:6379 asks for synchronization
8:13:S 02 Oct 2020 13:28:59.259 * Full resync requested by replica primary:6379
8:13:S 02 Oct 2020 13:28:59.259 * Starting BGSAVE for SYNC with target: disk
8:13:S 02 Oct 2020 13:29:01.264 * Background saving started by pid 56
8:13:S 02 Oct 2020 13:29:01.265 - DB 3: 2353118 keys (2353111 volatile) in 4194304 slots HT.
8:13:S 02 Oct 2020 13:29:01.265 - DB 4: 2050679 keys (2050673 volatile) in 2097152 slots HT.
8:13:S 02 Oct 2020 13:29:01.265 - DB 5: 1372624 keys (1372624 volatile) in 2097152 slots HT.
8:13:S 02 Oct 2020 13:29:01.265 - DB 6: 2941250 keys (2941243 volatile) in 4194304 slots HT.
8:13:S 02 Oct 2020 13:29:01.265 - DB 7: 2825310 keys (2825281 volatile) in 4194304 slots HT.
8:13:S 02 Oct 2020 13:29:01.265 - DB 12: 531880 keys (531880 volatile) in 1048576 slots HT.
8:13:S 02 Oct 2020 13:29:01.265 - DB 14: 1884496 keys (1884479 volatile) in 2097152 slots HT.
8:13:S 02 Oct 2020 13:29:01.265 - DB 18: 562334 keys (562305 volatile) in 1048576 slots HT.
8:13:S 02 Oct 2020 13:29:01.265 - DB 19: 471620 keys (471617 volatile) in 524288 slots HT.
8:13:S 02 Oct 2020 13:29:01.265 - DB 23: 5066 keys (5062 volatile) in 8192 slots HT.
8:13:S 02 Oct 2020 13:29:01.265 - DB 24: 2778900 keys (2778899 volatile) in 4194304 slots HT.
8:13:S 02 Oct 2020 13:29:01.265 - DB 25: 2778999 keys (2778994 volatile) in 4194304 slots HT.
8:13:S 02 Oct 2020 13:29:01.265 - DB 26: 1632194 keys (1632191 volatile) in 2097152 slots HT.
8:13:S 02 Oct 2020 13:29:01.265 . 1 clients connected (2 replicas), 79117771136 bytes in use
8:13:S 02 Oct 2020 13:29:01.265 - Error writing to client: Broken pipe
8:13:S 02 Oct 2020 13:29:01.265 # Connection with replica client id #7 lost.
8:13:S 02 Oct 2020 13:29:06.307 - DB 3: 2353121 keys (2353114 volatile) in 4194304 slots HT.
8:13:S 02 Oct 2020 13:29:06.307 - DB 4: 2050692 keys (2050686 volatile) in 2097152 slots HT.
8:13:S 02 Oct 2020 13:29:06.307 - DB 5: 1372625 keys (1372625 volatile) in 2097152 slots HT.
8:13:S 02 Oct 2020 13:29:06.307 - DB 6: 2941255 keys (2941248 volatile) in 4194304 slots HT.
8:13:S 02 Oct 2020 13:29:06.307 - DB 7: 2825340 keys (2825311 volatile) in 4194304 slots HT.
8:13:S 02 Oct 2020 13:29:06.307 - DB 12: 531889 keys (531889 volatile) in 1048576 slots HT.
8:13:S 02 Oct 2020 13:29:06.307 - DB 14: 1884515 keys (1884498 volatile) in 2097152 slots HT.
8:13:S 02 Oct 2020 13:29:06.307 - DB 18: 562331 keys (562302 volatile) in 1048576 slots HT.
8:13:S 02 Oct 2020 13:29:06.307 - DB 19: 471620 keys (471617 volatile) in 524288 slots HT.
8:13:S 02 Oct 2020 13:29:06.307 - DB 23: 5070 keys (5066 volatile) in 8192 slots HT.
8:13:S 02 Oct 2020 13:29:06.307 - DB 24: 2778904 keys (2778903 volatile) in 4194304 slots HT.
8:13:S 02 Oct 2020 13:29:06.307 - DB 25: 2779003 keys (2778998 volatile) in 4194304 slots HT.
8:13:S 02 Oct 2020 13:29:06.307 - DB 26: 1632203 keys (1632200 volatile) in 2097152 slots HT.
8:13:S 02 Oct 2020 13:29:06.307 . 1 clients connected (1 replicas), 79118629176 bytes in use
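To sanity-check totals across the many databases in these debug lines, a short script can sum the per-DB key counts. This is just a convenience sketch that parses the log format shown above; the helper name is ours, not part of KeyDB:

```python
import re

# Matches KeyDB debug-log lines of the form:
# "8:13:S 02 Oct 2020 13:29:06.307 - DB 3: 2353121 keys (2353114 volatile) in 4194304 slots HT."
DB_LINE = re.compile(r"DB (\d+): (\d+) keys \((\d+) volatile\) in (\d+) slots")

def total_keys(log_lines):
    """Sum total and volatile key counts across all 'DB n:' debug lines."""
    keys = volatile = 0
    for line in log_lines:
        m = DB_LINE.search(line)
        if m:
            keys += int(m.group(2))
            volatile += int(m.group(3))
    return keys, volatile

sample = [
    "8:13:S 02 Oct 2020 13:29:06.307 - DB 3: 2353121 keys (2353114 volatile) in 4194304 slots HT.",
    "8:13:S 02 Oct 2020 13:29:06.307 - DB 4: 2050692 keys (2050686 volatile) in 2097152 slots HT.",
]
print(total_keys(sample))  # (4403813, 4403800)
```

Comparing the sums from both masters before and after the incident makes it easier to see how far the two keyspaces drifted apart.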

As you can see, the secondary master somehow reported 2 replicas. After this incident, every 8 minutes our primary master blocked client connections for a short amount of time. We suspect it is related to these log entries:

8:13:S 05 Oct 2020 20:00:28.026 * Starting BGSAVE for SYNC with target: disk
8:13:S 05 Oct 2020 20:00:29.443 * Background saving started by pid 458

458:13:C 05 Oct 2020 20:08:52.984 * DB saved on disk
458:13:C 05 Oct 2020 20:08:54.286 * RDB: 1180 MB of memory used by copy-on-write
8:13:S 05 Oct 2020 20:08:56.125 * Background saving terminated with success
8:13:S 05 Oct 2020 20:08:57.020 - Accepted secondary:44812
8:13:S 05 Oct 2020 20:08:57.023 * Replica secondary:6379 asks for synchronization
8:13:S 05 Oct 2020 20:08:57.023 * Full resync requested by replica secondary:6379
8:13:S 05 Oct 2020 20:08:57.023 * Starting BGSAVE for SYNC with target: disk
8:13:S 05 Oct 2020 20:08:58.445 * Background saving started by pid 459

We decided to restart the primary and then the secondary master. Both masters started and loaded data from the AOF, and after that they alternately started flushing their data and performing full resyncs from each other. At that point we decided to turn off replication on both masters. Now we are trying to figure out how to prevent situations like this. Is this somehow correlated with issue #210, and would upgrading to version 6.0.13-1 help?
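One workaround we are considering (an assumption on our side, not something we have verified fixes this bug) is to bring both nodes up with the replication link disabled, so that neither node can trigger a flush/full resync against the other while they are still loading, e.g. by commenting these lines out in keydb.conf before the restart:

```
# active-replica yes
# replicaof secondary 6379
```

Once both nodes have finished loading their AOF, the link could then be re-established at runtime (`replicaof secondary 6379` from keydb-cli) and the setting restored in the config file.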
Our configuration:

loglevel debug
port 6379
maxmemory 180gb
maxmemory-policy allkeys-lru
appendonly yes
no-appendfsync-on-rewrite no
databases 51
active-replica yes
replicaof secondary 6379
activedefrag yes

Everything else is default. Our cache is now ~100 GB and, as you can see in the logs, we use several databases inside the cache.