pyspark, apache-spark-sql, azure-databricks

Databricks Move Table to a particular Schema in SQL Database


The following PySpark code copies data to the default dbo schema in an Azure SQL Database.

test2.write.mode("overwrite") \
    .format("jdbc") \
    .option("url", jdbcUrl) \
    .option("dbtable", "UpdatedProducts") \
    .save()

However, we have multiple schemas in the database.

One of them is called OCC. Can someone modify the code so that it copies the data to the OCC schema?


Solution

  • Thank you @Alex Ott; posting this as an answer so it will be useful to other community members.

    I tried to reproduce the same scenario in my environment and got the same result with the dbo schema.

    If you have multiple schemas, concatenate the schema and table names to form schema.table_name:

    # Lists pairing each schema with the table to write at the same index
    l_schemas = ["OCC", "dbo", "one"]
    l_tables = ["table1", "table2", "table3"]

    for schema, table in zip(l_schemas, l_tables):
        s = schema + "." + table  # concatenate as schema.table_name
        df.write.mode("overwrite") \
            .format("jdbc") \
            .option("url", jdbcUrl) \
            .option("dbtable", s) \
            .save()
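
    Applied back to the original question, the same idea gives a direct fix: qualify the table name with the OCC schema in the dbtable option. A minimal sketch, reusing the test2 DataFrame and jdbcUrl from the question (note that the OCC schema must already exist in the database, since the JDBC writer creates tables but not schemas):

    # Qualify the table name as schema.table_name to target the OCC schema
    test2.write.mode("overwrite") \
        .format("jdbc") \
        .option("url", jdbcUrl) \
        .option("dbtable", "OCC.UpdatedProducts") \
        .save()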