Cached data never expires on one node of an Ignite server cluster


I started a 3-node Ignite cluster without enabling persistence. After it had been running for a while, I found that cached data on one node never expires, while data on the other two nodes expires normally.

I connect and insert data using the following thin-client code:

import java.util.concurrent.TimeUnit;

import javax.cache.expiry.Duration;
import javax.cache.expiry.ExpiryPolicy;
import javax.cache.expiry.ModifiedExpiryPolicy;

import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.ClientCacheConfiguration;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class InsertData05 {
    public static void main(String[] args) throws Exception {
        // Connect to the cluster through the thin-client protocol.
        ClientConfiguration clientConfiguration = new ClientConfiguration();
        clientConfiguration.setAddresses("127.0.0.1:10800");
        IgniteClient client = Ignition.startClient(clientConfiguration);

        // Create (or reuse) a replicated cache named "test".
        ClientCacheConfiguration configuration = new ClientCacheConfiguration();
        configuration.setName("test");
        configuration.setCacheMode(CacheMode.REPLICATED);

        ClientCache<Object, Object> cache = client.getOrCreateCache(configuration);

        // Entries written through the decorated cache expire 60 seconds after last modification.
        Duration duration = new Duration(TimeUnit.SECONDS, 60);
        ExpiryPolicy expiryPolicy = ModifiedExpiryPolicy.factoryOf(duration).create();
        for (int i = 0; i < 1; i++) {
            cache.withExpiryPolicy(expiryPolicy).put("bb" + i, "bb");
        }

        client.close();
    }
}
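
For reference, background (eager) expiration on the server side is controlled by the cache's eagerTtl flag, which defaults to true. Below is a minimal sketch, not my actual configuration, of how the same cache could instead be declared on the server nodes with an expiry policy factory and eager TTL enabled explicitly; the class and variable names are only illustrative:

import java.util.concurrent.TimeUnit;

import javax.cache.expiry.Duration;
import javax.cache.expiry.ModifiedExpiryPolicy;

import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class StartServerWithTtlCache {
    public static void main(String[] args) {
        // Cache is defined on the server, so every node knows the expiry settings up front.
        CacheConfiguration<Object, Object> cacheCfg = new CacheConfiguration<>("test");
        cacheCfg.setCacheMode(CacheMode.REPLICATED);

        // Expire entries 60 seconds after their last modification.
        cacheCfg.setExpiryPolicyFactory(
            ModifiedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, 60)));

        // Eager TTL (default true): a background process removes expired entries
        // instead of waiting for them to be touched.
        cacheCfg.setEagerTtl(true);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setCacheConfiguration(cacheCfg);

        Ignition.start(cfg);
    }
}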

After enabling DEBUG logging on the Ignite server nodes, I found that the problematic node no longer scans the off-heap cache for expired entries and places them in the eviction queue. Its log contains only entries similar to the following:

[2023-08-31 17:51:01.355][DEBUG][GridTimeoutProcessor] Timeout has occurred [obj=org.apache.ignite.internal.processors.cache.distributed.dht.topology.PartitionsEvictManager$1$1@qso42s7c, process=true]
[2023-08-31 17:51:01.355][DEBUG][GridClosureProcessor] Grid runnable started: closure-proc-worker
[2023-08-31 17:51:01.355][DEBUG][PartitionsEvictManager] After filling the evict queue [res=0, tombstone=true, size=0]
[2023-08-31 17:51:01.355][DEBUG][GridClosureProcessor] Grid runnable finished normally: closure-proc-worker

The logs of the healthy nodes contain entries like these:

[2023-08-31 17:51:00.265][DEBUG][GridTimeoutProcessor] Timeout has occurred [obj=org.apache.ignite.internal.processors.cache.distributed.dht.topology.PartitionsEvictManager$1$1@670cf995, process=true]
[2023-08-31 17:51:00.265][DEBUG][GridTimeoutProcessor] Timeout has occurred [obj=org.apache.ignite.internal.processors.cache.distributed.dht.topology.PartitionsEvictManager$1$1@7f8e158b, process=true]
[2023-08-31 17:51:00.265][DEBUG][GridClosureProcessor] Grid runnable started: closure-proc-worker
[2023-08-31 17:51:00.265][DEBUG][GridClosureProcessor] Grid runnable started: closure-proc-worker
[2023-08-31 17:51:00.265][DEBUG][PartitionsEvictManager] After filling the evict queue [res=0, tombstone=false, size=0]
[2023-08-31 17:51:00.266][DEBUG][PartitionsEvictManager] After filling the evict queue [res=0, tombstone=true, size=0]
[2023-08-31 17:51:00.266][DEBUG][GridClosureProcessor] Grid runnable finished normally: closure-proc-worker
[2023-08-31 17:51:00.266][DEBUG][GridClosureProcessor] Grid runnable finished normally: closure-proc-worker

Version:

I am using GridGain Community Edition 8.8.9.

What could cause non-tombstone data to no longer be scanned for expiration? I found GG-34133, which seems to be an optimization in this area, but I cannot view its details. Could the two be related?
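
In case it helps to reproduce the problem, here is a minimal sketch (not something I have verified end to end) of a check that polls the cache size from the thin client well after the TTL has elapsed; the cache name and address match the insert code above. A get() returning null would not prove that eager expiration works, because expired entries are also removed lazily on access, so a cluster-wide size that stays non-zero long after the TTL is the stronger signal:

import java.util.concurrent.TimeUnit;

import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CachePeekMode;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class CheckExpiration {
    public static void main(String[] args) throws Exception {
        ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");

        try (IgniteClient client = Ignition.startClient(cfg)) {
            ClientCache<Object, Object> cache = client.cache("test");

            // Wait well past the 60-second TTL used when inserting.
            TimeUnit.SECONDS.sleep(90);

            // Counts every copy of every entry (primaries and backups) across the cluster.
            // A value that stays above zero long after the TTL suggests that at least
            // one node is not evicting its expired copies.
            System.out.println("Entries still stored: " + cache.size(CachePeekMode.ALL));
        }
    }
}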


There is 1 answer below.

Answer from Stephen Darlington:

This does look like a bug. Since you're using a fairly old version (nearly two years old at this point), I would suggest upgrading. The ticket you reference, fixed in 8.8.11, might be relevant, but I can't be completely sure.