I have a Cassandra table that I'm reading via Spark (Scala) on GCP Dataproc. The code below is the optimised version currently running, but it fails intermittently.
import com.datastax.spark.connector._

val joined = mygcpdf.rdd
  .repartitionByCassandraReplica(SC.CASSANDRA_KEY_SPACE, cassandraTable)
  .joinWithCassandraTable[tbl](SC.CASSANDRA_KEY_SPACE, cassandraTable)
  .on(SomeColumns("col1", "col2", "col3"))

// joined pairs each input row with its matching Cassandra row; keep the Cassandra side
val newdf = joined.map(x => x._2).toDF(al_cols)
When it fails, it throws:

com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency LOCAL_ONE (1 responses were required but only 0 replica responded)
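For context, the first thing I was planning to try is relaxing the connector's read settings. The property names below are taken from the Spark Cassandra Connector reference as I understand it, so please correct me if they don't apply to my connector version:

import org.apache.spark.SparkConf

val sparkConf = new SparkConf()
  // allow each read more time before the driver gives up (default 120000 ms)
  .set("spark.cassandra.read.timeoutMs", "240000")
  // retry a timed-out read instead of failing the task outright
  .set("spark.cassandra.query.retry.count", "10")
  // fetch smaller pages per request so each read does less work (default 1000 rows)
  .set("spark.cassandra.input.fetch.sizeInRows", "500")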
Is this failing because the Cassandra table doesn't have a matching key combination for some of the records? Is there any option to ignore those records and proceed with the others? A sketch of what I mean follows below.
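To make the second half of the question concrete, this is the behaviour I'm hoping for. leftJoinWithCassandraTable is something I spotted in the connector API (it returns an Option for the Cassandra side of each pair); I haven't verified it exists in my connector version:

val leftJoined = mygcpdf.rdd
  .repartitionByCassandraReplica(SC.CASSANDRA_KEY_SPACE, cassandraTable)
  .leftJoinWithCassandraTable[tbl](SC.CASSANDRA_KEY_SPACE, cassandraTable)
  .on(SomeColumns("col1", "col2", "col3"))

// unmatched keys come back as None; keep only the rows that actually matched
val newdf = leftJoined.flatMap { case (_, maybeRow) => maybeRow }.toDF(al_cols)

Would something like that skip the non-matching records, or is the timeout unrelated to missing keys?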