Tags: visual-studio, azure, powerbi, cubes, ssas-tabular

How to handle multiple customers with different SQL databases


Summary

I have a project with multiple existing MSSQL databases. I have already created an Azure Analysis Services instance and deployed my first Tabular cube to it, and I have tested access to the Analysis Services model; it works perfectly. Now I have to repeat the above for ~90 databases (90 different customers). I'm unsure how to organize this project and what options I have.

What I did

I browsed the Internet for information, but I only found a single source where somebody asked a similar question. The first reply there describes what I was already considering myself, as described below. The last reply I don't really understand: what does the author mean by "one solution"? Is there another hierarchy level above the project?

Question

One possibility would be to import each database as a separate source in the same project, but I think this means I would have to import every table from every source, i.e. 5 * 90 = 450 tables in total, which I suspect would quickly get out of control.

I also thought about duplicating the whole Visual Studio project folder ~90 times, once per customer. At the moment I can't find all the references that would need renaming, but I don't think that would be too hard.

Is there an easier way to achieve my goal, especially with regard to maintainability?

Solution

I will create a completely new database containing all the needed tables. Into those tables I will copy the data from all customer databases, adding a new customerId column. The data will be transferred by a recurring job (the interval is still to be defined), and updates to rows that already exist in a customer database will be handled with a trigger.
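
As a rough sketch of that idea (all server, database, table, and column names below are placeholders invented for illustration, and the recurring job is shown as a plain Python/pyodbc script rather than, say, a SQL Agent job):

    # Hypothetical sketch of the consolidation job: copy each customer's
    # table into one consolidated table that carries an extra CustomerId
    # column. All names (server, databases, Orders columns) are placeholders.
    import pyodbc

    CUSTOMER_DBS = {1: "CustomerDb_001", 2: "CustomerDb_002"}  # ... up to ~90

    def copy_customer(cursor, customer_id, source_db):
        # Read the customer's rows via a cross-database query
        # (works on a regular SQL Server instance).
        cursor.execute(
            f"SELECT OrderId, OrderDate, Amount FROM [{source_db}].dbo.Orders"
        )
        rows = cursor.fetchall()
        if not rows:
            return
        # Stamp every row with the customer's id before inserting it
        # into the consolidated table.
        cursor.executemany(
            "INSERT INTO dbo.Orders (CustomerId, OrderId, OrderDate, Amount) "
            "VALUES (?, ?, ?, ?)",
            [(customer_id, r.OrderId, r.OrderDate, r.Amount) for r in rows],
        )

    conn = pyodbc.connect(
        "Driver={ODBC Driver 17 for SQL Server};"
        "Server=myserver;Database=ConsolidatedDb;Trusted_Connection=yes;"
    )
    cursor = conn.cursor()
    for customer_id, source_db in CUSTOMER_DBS.items():
        copy_customer(cursor, customer_id, source_db)
    conn.commit()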


Solution

  • For this, the best approach would be to create a staging database and import the data from the other databases into it, so your Tabular model can read the data from there.

    Handling 90+ databases directly is going to be a massive admin overhead, and getting the cube to load them all efficiently is going to be problematic. Move the data using SSIS or Data Factory instead, as you will be able to orchestrate the data movement and incremental loads better that way. Then, if you need to add, remove, or update data sources, it is not done in the cube; it is all done at the database/Data Factory level. See the sketch below for the incremental-load idea.
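
To illustrate that incremental-load part, here is a rough, hypothetical sketch of a watermark-based load into such a staging database. In practice this logic would live in an SSIS package or a Data Factory pipeline rather than a script, and all names (LoadWatermark, ModifiedDate, Orders) are placeholders:

    # Hypothetical watermark-based incremental load into a staging database.
    # Each customer has a LastLoaded timestamp stored in the staging database;
    # every run only copies rows whose ModifiedDate is newer than it.
    import pyodbc

    def incremental_load(staging_cur, source_cur, customer_id):
        # Look up the watermark recorded for this customer on the last run.
        staging_cur.execute(
            "SELECT LastLoaded FROM dbo.LoadWatermark WHERE CustomerId = ?",
            customer_id,
        )
        row = staging_cur.fetchone()
        last_loaded = row.LastLoaded if row else "1900-01-01"

        # Fetch only the rows that changed since then from the customer database.
        source_cur.execute(
            "SELECT OrderId, OrderDate, Amount, ModifiedDate "
            "FROM dbo.Orders WHERE ModifiedDate > ?",
            last_loaded,
        )
        rows = source_cur.fetchall()
        if not rows:
            return

        # Write the changed rows into the staging table, stamped with CustomerId.
        staging_cur.executemany(
            "INSERT INTO dbo.Orders (CustomerId, OrderId, OrderDate, Amount) "
            "VALUES (?, ?, ?, ?)",
            [(customer_id, r.OrderId, r.OrderDate, r.Amount) for r in rows],
        )
        # Advance the watermark to the newest change that was just loaded.
        staging_cur.execute(
            "UPDATE dbo.LoadWatermark SET LastLoaded = ? WHERE CustomerId = ?",
            max(r.ModifiedDate for r in rows), customer_id,
        )

Data Factory's documented "delta copy with a watermark" pattern does essentially the same thing with a Lookup activity for the watermark and a Copy activity for the data movement, which is why orchestrating this at the database/Data Factory level scales better than wiring 90 sources into the cube.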