I saw this question (Question Link) here. I think it might not be possible to co-locate Spark workers/executors with Cassandra nodes on the same machine in a Kubernetes environment, and the answer to that question looks correct. I want to know whether the spark-cassandra-connector provides any other way to achieve similar functionality in a Kubernetes environment.
Unless you deploy both Cassandra and Spark in the same container, the Cassandra data cannot, by definition, be local to the Spark worker/executor.
As I've explained in my answer to https://community.datastax.com/questions/11464/, both the Cassandra and Spark JVMs must exist in the same container/VM/server for the data to be local. Cheers!
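For what it's worth, the connector still works fine without co-location; you just lose the locality optimization and every read goes over the network. Below is a minimal sketch of reading a Cassandra table from Spark on Kubernetes under that assumption. The service hostname, keyspace, and table names are hypothetical placeholders for your own deployment:

```scala
import org.apache.spark.sql.SparkSession

object CassandraReadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("cassandra-read-sketch")
      // Hypothetical in-cluster Cassandra service; with executors running in
      // separate pods, reads are remote rather than node-local.
      .config("spark.cassandra.connection.host",
              "cassandra.default.svc.cluster.local")
      .getOrCreate()

    // Read via the connector's Spark SQL data source.
    val df = spark.read
      .format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "my_keyspace", "table" -> "my_table"))
      .load()

    df.show(10)
    spark.stop()
  }
}
```

The connector's token-aware partitioning still applies here (each Spark partition maps to a Cassandra token range), so reads are spread across the Cassandra nodes even though none of them count as "local" to any executor.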