shareChannel effect on Gatling gRPC


I have a scenario that runs 5,000 bidiStream gRPC users, ramping up at 2 users/sec. These are my current settings:

val grpcConf = grpc(managedChannelBuilder(s"${url}").usePlaintext())
                //.shareChannel
val settings_Result = grpc("name")
    .bidiStream[RequestName, ResponseName](RelayServerGrpc.METHOD, "NameStream")
//exec scenario
exec(
    httpconf.PaymentFinalizerProcess_Result
      .connect
      .header(metadataObject.Authorization)(s"Bearer ${metadataObject.TokenKey}")
      .endCheck(statusCode is Status.Code.OK)
  )
.exec(REST_API)
.exec(bidiStream_Complete)

Usually I start to get INTERNAL errors at around 1,800 users (timestamp 900 sec), but when I turn on .shareChannel they appear sooner, at around 900 users (timestamp 450 sec). I have run the same scenario with both settings 3-4 times to confirm that the cause is enabling .shareChannel.

So I would like to understand what .shareChannel does so I can improve this script. Please help me with this.

Best answer, by George Leung:

If shareChannel is used, all the virtual users share the same ManagedChannel.

One ManagedChannel (roughly speaking) uses one single TCP connection for all requests.

See also:
Multiple stubs on a managed channel in grpc?
TCP sessions with gRPC
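
To make that concrete, here is a minimal sketch of several stubs built on one ManagedChannel (plain grpc-java used from Scala; the generated GreeterGrpc service and the host/port are hypothetical). All of the stubs' calls are multiplexed over that one channel:

import io.grpc.ManagedChannelBuilder

// One ManagedChannel ~= one TCP connection (roughly speaking).
val channel = ManagedChannelBuilder
  .forAddress("my-server", 50051) // hypothetical host/port
  .usePlaintext()
  .build()

// Any number of stubs can be created from the same channel;
// all of their RPCs travel over that channel's single connection.
val stubA = GreeterGrpc.stub(channel) // hypothetical generated service
val stubB = GreeterGrpc.stub(channel)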

Sharing the TCP connection lowers resource consumption, so higher throughput might be reached. This can be useful if the server's ability to handle a large number of connections is not important in the load test.


In your case it seems that your server (or the load generator, or the network) cannot handle a large number of in-flight requests on a single connection. That's OK; just don't use the shareChannel option. The load test is more realistic that way.
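
For reference, a minimal sketch of the two protocol configurations in the gatling-grpc DSL (the import path and host:port are assumptions; adapt them to your project). Leaving shareChannel off keeps one channel, and therefore one connection, per virtual user, which is the recommendation above:

import com.github.phisgr.gatling.grpc.Predef._

// Default: each virtual user builds its own ManagedChannel (its own TCP connection).
val perUserChannelConf = grpc(managedChannelBuilder("my-server:50051").usePlaintext())

// With shareChannel: all virtual users multiplex over one shared channel/connection.
val sharedChannelConf = grpc(managedChannelBuilder("my-server:50051").usePlaintext())
  .shareChannel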