I have a counter table in Cassandra 3.9:
CREATE TABLE counter_table (
    id text, hour_no int, platform text, type text, title text,
    count_time counter,
    PRIMARY KEY (id, hour_no, platform, type, title));
My Spark (2.1.0) / Scala (2.11) code is:
import com.datastax.driver.core.{ConsistencyLevel, DataType}
import com.datastax.spark.connector.writer.WriteConf
import org.apache.spark.sql.SaveMode

// WriteConf is built here, but note it is never passed to the DataFrame writer below
val writeConf = WriteConf(consistencyLevel = ConsistencyLevel.ONE, ifNotExists = true)
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
val df = sqlContext.read.format("com.databricks.spark.csv")
  .option("header", "false").option("inferSchema", "true")
  .load("csv_file_path")
val newNames = Seq("id", "hour_no", "platform", "type", "title", "count_time")
val dfRenamed = df.toDF(newNames: _*)
dfRenamed.write.format("org.apache.spark.sql.cassandra")
  .mode(SaveMode.Append)
  .options(Map(
    "table" -> "counter_table", "keyspace" -> "key1",
    "output.consistency.level" -> "LOCAL_ONE", "output.ifNotExists" -> "true"))
  .save()
The Spark code fails with a consistency error:
Caused by: com.datastax.driver.core.exceptions.WriteFailureException:
Cassandra failure during write query at consistency LOCAL_QUORUM (2 responses were required but only 1 replica responded, 1 failed)
How can we specify a consistency level of ONE when writing a DataFrame?
Both of your parameters are missing their prefix: all connector parameters should be prefixed with spark.cassandra.
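For example, the write from the question becomes (a sketch, assuming the Spark Cassandra Connector 2.x property names that match the question's shortened ones):

dfRenamed.write.format("org.apache.spark.sql.cassandra")
  .mode(SaveMode.Append)
  .options(Map(
    "table" -> "counter_table", "keyspace" -> "key1",
    // full property names, with the spark.cassandra. prefix
    "spark.cassandra.output.consistency.level" -> "LOCAL_ONE",
    "spark.cassandra.output.ifNotExists" -> "true"))
  .save()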
But you have a second problem. An IF NOT EXISTS query cannot be executed at any consistency level other than SERIAL, since it uses Paxos. Which means you shouldn't be able to use ONE at all.
Update: I now know that it is possible to do some very dangerous things with Paxos consistency levels, so it is possible to force different consistency levels for portions of the transaction. But you shouldn't, as you will basically break all the guarantees the existence check was supposed to give you in the first place.
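If what you actually want is a plain counter write at ONE/LOCAL_ONE, the implication is to drop the existence check entirely rather than fight Paxos. A minimal sketch, assuming the same table and keyspace as above:

dfRenamed.write.format("org.apache.spark.sql.cassandra")
  .mode(SaveMode.Append)
  .options(Map(
    "table" -> "counter_table", "keyspace" -> "key1",
    // no ifNotExists, so no Paxos round and the requested CL applies
    "spark.cassandra.output.consistency.level" -> "ONE"))
  .save()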