How to specify partition count on cache creation using Ignite thin client?


We are using an Ignite cache in our application. Multiple instances join the topology, and caches are created with Ignite.getOrCreateCache(), using RendezvousAffinityFunction for partitioning, like this:

public static IgniteCache<String, Value> getOrCreateCache(Ignite ignite, String cacheName, int partitionCount) {
        return ignite.getOrCreateCache(
                new CacheConfiguration<String, Value>(cacheName)
                        .setGroupName("group")
                        .setBackups(0)
                        .setCacheMode(CacheMode.PARTITIONED)
                        .setAtomicityMode(CacheAtomicityMode.ATOMIC)
                        .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC)
                        .setAffinity(new RendezvousAffinityFunction(true, partitionCount))
               );
    }

I am migrating the application from the thick client to the thin client, and am therefore using IgniteClient.getOrCreateCache(), like this:

public static ClientCache<String, Value> getOrCreateCache(IgniteClient igniteClient, String cacheName, int partitionCount) {
        return igniteClient.getOrCreateCache(
                new ClientCacheConfiguration()
                        .setName(cacheName)
                        .setGroupName("group")
                        .setBackups(0)
                        .setCacheMode(CacheMode.PARTITIONED)
                        .setAtomicityMode(CacheAtomicityMode.ATOMIC)
                        .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC)
                        // .setAffinity() ???
                );
    }

I am struggling to find more information about configuring a cache through IgniteClient, either in the documentation or in the sources. As far as I can tell, the thin client supports partition awareness and can distribute cache operations across nodes. How many partitions would be created for a cache configured this way, and why isn't the count configurable?

Pavel Tupitsyn (BEST ANSWER)

NOTE: As the other answer explains, you don't need to change the default number in 99% of use cases.

With that said, the partition count is provided by the affinity function. On a server node or thick client you can set it like this:

CacheConfiguration<Object, Object> cacheCfg = new CacheConfiguration<>("cache-1")
        .setAffinity(new RendezvousAffinityFunction().setPartitions(512));

However, a custom affinity function can't be set from the thin client side (for technical reasons: the affinity function can have a custom implementation and would therefore require code deployment).

Workarounds:
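For example, one option (a sketch, assuming you control a server node's startup code; the names `group-*` and `TemplateSetup` are illustrative) is to register a wildcard cache template on the server side. A cache configuration whose name ends in `*` acts as a template, and caches created later by a matching name should inherit it, including the affinity settings:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

public class TemplateSetup {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // A name ending in '*' registers this configuration as a
            // wildcard template; caches whose names match the prefix
            // inherit the configuration, including the affinity function.
            CacheConfiguration<Object, Object> template =
                    new CacheConfiguration<>("group-*")
                            .setGroupName("group")
                            .setBackups(0)
                            .setAffinity(new RendezvousAffinityFunction(true, 512));

            ignite.addCacheConfiguration(template);

            // A thin client can now call
            //   igniteClient.getOrCreateCache("group-myCache")
            // and the new cache should pick up 512 partitions.
        }
    }
}
```

Whether the template is applied to caches created from a thin client may depend on your Ignite version, so verify against the cache template documentation for your release.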

user21160483

Be aware that by default you get 1024 partitions for a partitioned cache. This works very well for most applications, as it divides the data sufficiently but not excessively. The hash function applied to keys to determine which partition each key belongs to is also well suited to 1024 partitions.
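As a simplified illustration (this is not Ignite's exact implementation, and the class and method names are mine), the key-to-partition mapping boils down to hashing the key and reducing the hash modulo the partition count:

```java
public class PartitionMapping {
    /** Simplified key-to-partition mapping: non-negative hash modulo count. */
    static int partition(Object key, int partitionCount) {
        // Mask off the sign bit so the modulo result is non-negative.
        return (key.hashCode() & Integer.MAX_VALUE) % partitionCount;
    }

    public static void main(String[] args) {
        // With 1024 partitions, every key lands in [0, 1024).
        System.out.println(partition("user-42", 1024));
        System.out.println(partition(12345L, 1024));
    }
}
```

Changing the partition count only changes the modulo, so as long as the count is large enough to spread data across the cluster, there is rarely a reason to tune it.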

I can think of only a few reasons to change the partition count:

  1. I have a cluster of more than 1024 nodes and would like my data partitioned across all of them.
  2. I have one cache that is exceedingly large and would like smaller partitions for it. Conversely, for a very small cache I might opt for a replicated cache instead, especially if that cache is not updated much.
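For the small-cache scenario mentioned above, switching to a replicated cache is a one-line configuration change (a sketch; the cache name is illustrative, and as in the question this configuration must be applied from a server node or thick client):

```java
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

// Sketch: a small, rarely-updated lookup cache held in full on every node.
CacheConfiguration<String, String> cfg =
        new CacheConfiguration<String, String>("lookup-cache")
                .setCacheMode(CacheMode.REPLICATED)
                .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
```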

I hope the above thoughts are helpful.