I have a web application that allows the user to upload DICOM and non-DICOM files. I am using JavaScript, HTML5, webkitdirectory, and DataTables to populate the selected files on the UI. The issue I am facing is this:
If the folder contains more than 768 DCM files (to be precise), the Edge browser freezes and the user cannot perform any operation until they hit the 'Recover this page' button. The size of the DCM files and the number of studies do not matter.
A few observations:
- Reading 1: 4 studies, 768 DCM files, folder size ~500 MB — displayed on UI in 40 seconds
- Reading 2: 5 studies, 769 DCM files, folder size ~500 MB — browser freezes
- Reading 3: 2 studies, 768 DCM files, folder size ~30 MB — displayed on UI in 40 seconds
- Reading 4: 3 studies, 769 DCM files, folder size ~30 MB — browser freezes
Note: this works like a charm in Chrome and Firefox.
Here is the code that is executed for each DICOM file (inside a forEach loop):
var fileReader = new FileReader();
fileReader.onload = function (evt) {
    console.log("Completed Reading");
    var arrayBuffer = fileReader.result;
    var byteArray = new Uint8Array(arrayBuffer);
    _parseDicom(byteArray);
    try {
        // readyState 2 (DONE) means the read already finished,
        // so only abort if a read is still pending
        if (fileReader.readyState !== 2) {
            fileReader.abort();
        }
    } catch (err) {
        console.log('error occurred: ' + err);
    }
};
// Only the first 50 KB are read, enough for the DICOM header
var blob = f.slice(0, 50000);
console.log("Starting to Read");
fileReader.readAsArrayBuffer(blob);
Analysis:
I believe this may be due to browser memory or the function call stack, since readAsArrayBuffer fires an onload event once it finishes reading, and the forEach loop starts all of these reads at once.
Is there any workaround, or is this an issue with the MS Edge browser itself?
I'm curious what your _parseDicom does... if it could be optimised to check whether some magic number is present that indicates the file is a DCM file (I don't know what a DCM file is).
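For what it's worth, DICOM (Part 10) files do carry a magic number: a 128-byte preamble followed by the ASCII characters "DICM". A cheap pre-check along these lines (a sketch; looksLikeDicom is a made-up helper name, not part of the asker's code) could skip non-DICOM files before the full parse:

```javascript
// Sketch of a cheap pre-check, assuming the files follow the DICOM Part 10
// layout: a 128-byte preamble followed by the ASCII magic word "DICM".
// If the magic word is missing, the expensive _parseDicom call can be skipped.
function looksLikeDicom(byteArray) {
    // Need at least the preamble plus the 4 magic bytes
    if (byteArray.length < 132) return false
    // "DICM" in ASCII is 0x44 0x49 0x43 0x4D, at offset 128
    return byteArray[128] === 0x44 &&
           byteArray[129] === 0x49 &&
           byteArray[130] === 0x43 &&
           byteArray[131] === 0x4D
}
```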
Perhaps you shouldn't start all those file reads in a forEach loop, since that kicks off every read in parallel; you could reduce memory pressure by reading one file at a time.
async function doWork(fileList) {
    for (const file of fileList) {
        const res = new Response(file.slice(0, 50000))
        const arrayBuffer = await res.arrayBuffer()
        const byteArray = new Uint8Array(arrayBuffer)
        _parseDicom(byteArray)
    }
}
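If strictly one-at-a-time turns out too slow for 700+ files, a middle ground is a small concurrency cap. This is only a sketch; the helper name runWithLimit and the choice of limit are mine, not part of the answer above:

```javascript
// Sketch: run `task` over `items` with at most `limit` tasks in flight.
// Because JS is single-threaded, the synchronous read-and-increment of
// `index` between awaits is safe; results keep their original order.
async function runWithLimit(items, limit, task) {
    const results = []
    let index = 0
    // Start `limit` workers that each pull the next item until none remain
    const workers = Array.from({ length: limit }, async () => {
        while (index < items.length) {
            const i = index++
            results[i] = await task(items[i])
        }
    })
    await Promise.all(workers)
    return results
}
```

Usage with the file loop above would look like `await runWithLimit(Array.from(fileList), 4, file => ...)`, parsing four files at a time instead of all 769 at once.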
Perhaps something even better would be if your _parseDicom could accept a stream instead of a byte array:
async function doWork(fileList) {
    for (const file of fileList) {
        const res = new Response(file.slice(0, 50000))
        const stream = res.body.getReader()
        await _parseDicom(stream)
    }
}
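Inside a stream-based _parseDicom, the reader handed in above would be consumed roughly like this. A sketch only: readAll is a made-up helper, and a real parser would inspect chunks as they arrive and stop early once it has read enough of the header, instead of buffering everything:

```javascript
// Sketch of draining a ReadableStreamDefaultReader into one Uint8Array.
// Each reader.read() resolves with { done, value } where value is a
// Uint8Array chunk; done becomes true when the stream is exhausted.
async function readAll(reader) {
    const chunks = []
    let total = 0
    while (true) {
        const { done, value } = await reader.read()
        if (done) break
        chunks.push(value)
        total += value.length
    }
    // Concatenate the chunks into a single contiguous buffer
    const out = new Uint8Array(total)
    let offset = 0
    for (const chunk of chunks) {
        out.set(chunk, offset)
        offset += chunk.length
    }
    return out
}
```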
The other thing that could be even better is to move this work into a web worker and postMessage each file to it. For browsers without async/await, an ES5-style version of the sequential loop could look like this:
function doWork(files) {
    // .pop() processes the list back-to-front, one file at a time
    var file = files.pop()
    if (file) {
        // arrayBuffer() returns a Promise; it does not take a callback
        new Response(file.slice(0, 50000)).arrayBuffer().then(function (arrayBuffer) {
            var byteArray = new Uint8Array(arrayBuffer)
            _parseDicom(byteArray)
            // Recurse only after this file is done
            doWork(files)
        })
    }
}
doWork(Array.from(fileList))