I have a data set with 7999 attributes and 39 labels over 3339 observations (so a 3339x8038 data set), and I'm trying to upload it to the Azure ML platform. I've selected 'Tabular' as the dataset type, 'UTF-8' encoding, no rows skipped, and 'Use headers from the first file'. The problem is that the headers are still not included, and each row is interpreted as a single string of 0s, 1s, and commas (see pic https://i.sstatic.net/rplxz.jpg).
Am I missing something? It seemed to work for smaller data sets. My headers are A1, ..., A7999 for the attributes and L1, ..., L39 for the labels.
Thanks in advance for any help.
Our system makes its best guess at the file settings when you create a dataset, but it can't guarantee a perfect guess in every case.
In such scenarios you should be able to adjust the settings manually. We had a bug that prevented changing those settings, but a fix has been rolled out. Can you try changing them now?
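If the studio UI still doesn't pick up the headers, one possible workaround is to create the tabular dataset through the azureml-core Python SDK, where the separator, encoding, and header promotion are stated explicitly rather than inferred. This is a minimal sketch, not an official fix; it assumes a workspace `config.json` is available and that the CSV has already been uploaded to the default datastore under a hypothetical path `data/my_7999x39.csv`.

```python
# Sketch: create the same tabular dataset via the azureml-core SDK,
# setting the parsing options explicitly instead of relying on UI guesses.
from azureml.core import Workspace, Dataset

ws = Workspace.from_config()            # loads the workspace from config.json
datastore = ws.get_default_datastore()  # default blob datastore of the workspace

dataset = Dataset.Tabular.from_delimited_files(
    path=(datastore, 'data/my_7999x39.csv'),  # hypothetical path on the datastore
    separator=',',
    encoding='utf8',
    header=True,  # promote the first row (A1..A7999, L1..L39) to column headers
)

# Quick sanity check that headers and values were parsed as expected.
preview = dataset.take(5).to_pandas_dataframe()
print(preview.columns[:10])

# Register it so it appears under Datasets in the studio.
dataset.register(workspace=ws, name='my-7999x39-dataset', create_new_version=True)
```

If the preview shows the A1..A7999 and L1..L39 columns correctly, the registered dataset should also display them in the studio.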