I know I can use a custom dialect to get a correct mapping between my database and Spark, but how can I create a custom table schema with specific field data types and lengths when I use Spark's jdbc write options? I would like to have granular control over my table schemas when I load a table from Spark.
There is some minimal flexibility for writes, implemented by the createTableColumnTypes option of the JDBC writer, which lets you specify the database column types to use (instead of the defaults) when Spark creates the target table.
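For example, a minimal sketch (the URL, credentials, table name, and column definitions below are placeholders, not from the original answer):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("jdbc-write-example").getOrCreate()

// Toy DataFrame with two columns; substitute your own data
val df = spark
  .createDataFrame(Seq(("alice", 30), ("bob", 25)))
  .toDF("name", "age")

df.write
  .format("jdbc")
  .option("url", "jdbc:mysql://localhost:3306/mydb") // placeholder URL
  .option("dbtable", "people")                       // placeholder table
  .option("user", "user")                            // placeholder credentials
  .option("password", "password")
  // Override the DDL Spark generates for these columns when it creates the table
  .option("createTableColumnTypes", "name VARCHAR(200), age SMALLINT")
  .save()
```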
If you want more granular control over your table schemas than that, you might have to implement your own JdbcDialect.
It is an internal developer API and, as far as I can tell, it is not pluggable, so you may need customized Spark binaries (it might be possible to use JdbcDialects.registerDialect, but I haven't tried this).
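If you do try the registerDialect route, a sketch of a custom dialect might look like this (the canHandle URL prefix and the type mappings are illustrative assumptions, not a definitive implementation):

```scala
import java.sql.Types
import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects, JdbcType}
import org.apache.spark.sql.types._

// Illustrative dialect; adjust the URL prefix and mappings for your database
object MyCustomDialect extends JdbcDialect {
  // Claim this dialect for URLs of the target database (placeholder prefix)
  override def canHandle(url: String): Boolean =
    url.startsWith("jdbc:mysql")

  // Control the DDL type Spark emits for each Catalyst type on write
  override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
    case StringType  => Some(JdbcType("VARCHAR(200)", Types.VARCHAR))
    case BooleanType => Some(JdbcType("TINYINT(1)", Types.TINYINT))
    case _           => None // fall back to the default mapping
  }
}

// Register the dialect before reading or writing through JDBC
JdbcDialects.registerDialect(MyCustomDialect)
```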