I'm currently working with Azure Synapse and have encountered an issue for which I can't seem to find a workaround. (Due to confidentiality concerns with my customer's information, I've obscured some details.)
I created a serverless database (let's say "MyDB"):
I developed a pipeline to retrieve CSV files and convert them into Parquet format, which was successful.
Now, my objective is to create another pipeline, a Copy Data task, to import Parquet files into my serverless database.
To do this, I selected my Azure Data Lake Gen 2 Account where my Parquet files are stored:
I'm able to preview my data without issues. However, when attempting to select my previously created serverless database as the destination, it appears that none is available:
In terms of performance, costs, etc., a serverless database suffices for our needs, and we prefer not to establish a dedicated SQL Pool.
All necessary permissions are correctly set up. It's perplexing, and I feel like I'm missing something obvious.
Could anyone offer some assistance or insights?
Each Azure Synapse Analytics workspace includes serverless SQL pool endpoints, providing the capability to query data stored in Azure Data Lake (in formats such as Parquet, Delta Lake, and delimited text), Azure Cosmos DB, or Dataverse.
The serverless SQL pool serves as a query service over your data lake, offering access to your data with the following capabilities:
It utilizes a familiar T-SQL syntax, allowing you to query data directly without the requirement to copy or load data into a specialized store.
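As a minimal sketch, a serverless SQL pool can read Parquet files in place with `OPENROWSET`; the storage account, container, and folder path below are placeholders for your own:

```sql
-- Query Parquet files directly in the data lake; nothing is copied or loaded.
-- Replace the URL with your own ADLS Gen2 account, container, and path.
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://mydatalake.dfs.core.windows.net/mycontainer/parquet/*.parquet',
    FORMAT = 'PARQUET'
) AS [result];
```

Run this from the workspace's built-in serverless SQL endpoint; the identity you use needs read access (e.g. Storage Blob Data Reader) on the storage account.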
Note that a serverless SQL pool has no local storage of its own, which is why it does not appear as a Copy Data destination; instead, you create an external table over the files already in the lake. For example, if you want to expose your Parquet files as a serverless SQL table, follow these steps:
In the image below, the pointer indicates the Parquet folder in ADLS.
Right-click the Parquet folder and you will see the following options.
Select your database and create the table.
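The script the wizard generates looks roughly like the sketch below; the data source name, table name, location, and column list are assumptions for illustration (the wizard infers the actual columns from your files):

```sql
-- Run in the serverless database (e.g. MyDB).
-- Names and the storage URL are placeholders; adjust to your environment.
IF NOT EXISTS (SELECT * FROM sys.external_file_formats WHERE name = 'SynapseParquetFormat')
    CREATE EXTERNAL FILE FORMAT [SynapseParquetFormat]
    WITH (FORMAT_TYPE = PARQUET);

IF NOT EXISTS (SELECT * FROM sys.external_data_sources WHERE name = 'mydatalake_source')
    CREATE EXTERNAL DATA SOURCE [mydatalake_source]
    WITH (LOCATION = 'https://mydatalake.dfs.core.windows.net/mycontainer');

CREATE EXTERNAL TABLE [dbo].[MyParquetTable] (
    -- Hypothetical columns; the wizard infers these from the Parquet schema.
    [Id]   INT,
    [Name] VARCHAR(100)
)
WITH (
    LOCATION    = 'parquet/**',          -- folder holding the Parquet files
    DATA_SOURCE = [mydatalake_source],
    FILE_FORMAT = [SynapseParquetFormat]
);
```

The external table is metadata only: queries against it read the Parquet files in the lake at execution time, which is exactly what a serverless pool is designed for.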
Results: