I have a big cube that receives 2.5 million new rows of data per day, about 19 million a week. The data are purely historical: no updates, no deletes, no changes. What is the best partition strategy for this kind of data? As you can see, even a single week accumulates a lot of data. Should I create a new partition every day to process the new data, then merge it into one static large partition at night?
I think the best solution is to use different partition granularities depending on the age of the data: daily partitions for the current week, weekly partitions for the current year, and yearly partitions for older data.
At the end of each week you can merge the daily partitions into a weekly one. Roughly 19 million rows per partition is a good size, but keeping weekly granularity for older years would add unnecessary querying and processing overhead, so merge those further into yearly partitions.
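The tiering scheme above can be sketched as a small helper that maps a fact row's date to the partition it should land in. This is illustrative bookkeeping only, not an SSAS API; the naming convention (`Daily_…`, `Weekly_…`, `Yearly_…`) is hypothetical:

```python
from datetime import date, timedelta

def partition_for(row_date: date, today: date) -> str:
    """Map a fact row's date to its partition tier:
    daily for the current week, weekly for the current year,
    yearly for anything older (naming scheme is illustrative)."""
    start_of_week = today - timedelta(days=today.weekday())
    if row_date >= start_of_week:
        return f"Daily_{row_date.isoformat()}"       # up to 7 daily partitions
    if row_date.year == today.year:
        iso_year, iso_week, _ = row_date.isocalendar()
        return f"Weekly_{iso_year}_W{iso_week:02d}"  # dailies merged here each weekend
    return f"Yearly_{row_date.year}"                 # older years collapsed to one partition

# Example: today is Wednesday 2024-06-05
today = date(2024, 6, 5)
print(partition_for(date(2024, 6, 4), today))   # Daily_2024-06-04
print(partition_for(date(2024, 2, 14), today))  # Weekly_2024_W07
print(partition_for(date(2021, 8, 1), today))   # Yearly_2021
```

In SSAS itself the merges would be done with the Merge Partitions operation (or XMLA/AMO scripting); the helper just shows which partition any given date belongs to under this scheme.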
So you'll have only about 100 partitions for the entire measure group for at least the next 40 years (7 daily + 53 weekly + 40 yearly = 100).
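The partition-count arithmetic stays flat no matter how much history accumulates, since only the yearly tier grows. A quick sketch (numbers taken from the scheme above):

```python
DAILY = 7    # at most one partition per day of the current week
WEEKLY = 53  # an ISO year can have up to 53 weeks

def total_partitions(years_of_history: int) -> int:
    """Total partitions in the measure group: fixed daily and weekly
    tiers plus one yearly partition per archived year."""
    return DAILY + WEEKLY + years_of_history

print(total_partitions(40))  # 100
```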
And don't forget to set the Slice property on every partition you create, so the engine can skip partitions that cannot contain the requested data.
Removing unnecessary indexes (e.g. for high-selectivity attributes used only as member properties) may also help to speed up processing time and reduce disk space usage.