I have a case wherein I need to read a flat file with close to 100,000 logical records. Each logical record comprises n x 128-character parts, e.g., Type A: 3 x 128, Type B: 4-5 x 128, etc., where the maximum possible n is 6.
The application has to read the file and process the records. The problem is that n can be determined only after reading the first 52 characters of each n x 128 partition.
Could you please suggest any design patterns I can reuse, or any efficient algorithms, to perform this?
Note: 1. Performance is an important criterion, as the application needs to process thousands of files like this every day. 2. The data is not separated by lines; it is one long string.
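To make the layout concrete, here is a minimal sequential baseline in Java that splits such a stream into logical records. The `partsFor` mapping is entirely hypothetical (it assumes the first character of the 52-character header is a type code); the real rule would come from decoding the actual header fields.

```java
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;

public class FlatFileSplitter {
    static final int PART = 128;    // each record is n x 128 characters
    static final int HEADER = 52;   // n is derivable from the first 52 chars

    // Hypothetical mapping: assume the header's first character is a type
    // code. The real rule depends on the actual record layout.
    static int partsFor(char typeCode) {
        switch (typeCode) {
            case 'A': return 3;
            case 'B': return 4;
            default:  return 6;     // maximum possible n
        }
    }

    /** Splits the unbroken character stream into logical records. */
    static List<String> split(Reader in) throws IOException {
        List<String> records = new ArrayList<>();
        char[] buf = new char[6 * PART];            // largest possible record
        int got;
        while ((got = readFully(in, buf, 0, HEADER)) == HEADER) {
            int n = partsFor(buf[0]);
            if (readFully(in, buf, HEADER, n * PART - HEADER) != n * PART - HEADER)
                throw new IOException("truncated record");
            records.add(new String(buf, 0, n * PART));
        }
        if (got != 0) throw new IOException("trailing partial record");
        return records;
    }

    // Reader.read may return fewer chars than asked; loop until len or EOF.
    private static int readFully(Reader in, char[] buf, int off, int len)
            throws IOException {
        int done = 0;
        while (done < len) {
            int k = in.read(buf, off + done, len - done);
            if (k < 0) break;
            done += k;
        }
        return done;
    }
}
```

The key point is that the 52-character peek and the remainder of the record are read into the same fixed buffer, so no per-record allocation beyond the output `String` is needed.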
You could adopt a master-worker (or master-slave) pattern, wherein a master thread is responsible for reading the first 52 characters of each record to determine its length. The master then defers the actual work of reading and processing the record to a worker thread, and moves on to the next record, again reading only the first 52 characters. Each worker is responsible for (re)opening the file and processing a particular range of characters; the worker needs to be provided with this information.
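The pattern above can be sketched in Java with an `ExecutorService` as the worker pool: the master seeks from header to header, computing each record's `(offset, length)`, and each worker reopens the file to read just its assigned range. The `partsFor` mapping and the `processRange` body are hypothetical placeholders; the real header decoding and processing logic would go there.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MasterWorker {
    static final int PART = 128, HEADER = 52;

    // Hypothetical: derive n from a type code in the header's first byte.
    static int partsFor(byte typeCode) {
        switch (typeCode) {
            case 'A': return 3;
            case 'B': return 4;
            default:  return 6;   // maximum possible n
        }
    }

    /** Master: scan headers only; hand (offset, length) ranges to workers. */
    static List<String> run(Path file, int threads)
            throws IOException, InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<String>> pending = new ArrayList<>();
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r")) {
            long pos = 0, size = raf.length();
            byte[] header = new byte[HEADER];
            while (pos + HEADER <= size) {
                raf.seek(pos);
                raf.readFully(header);                 // peek at the header only
                long len = (long) partsFor(header[0]) * PART;
                final long off = pos, recLen = len;
                pending.add(pool.submit(() -> processRange(file, off, recLen)));
                pos += len;                            // jump to the next header
            }
        }
        pool.shutdown();
        List<String> out = new ArrayList<>();
        for (Future<String> f : pending) out.add(f.get());
        return out;
    }

    /** Worker: reopen the file and read just the assigned range. */
    static String processRange(Path file, long off, long len) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r")) {
            byte[] rec = new byte[(int) len];
            raf.seek(off);
            raf.readFully(rec);
            // Placeholder "processing": report the record's type and length.
            return (char) rec[0] + ":" + rec.length;
        }
    }
}
```

One design note: because the master only ever reads 52-byte headers and seeks past the rest, its I/O is small and sequential, while the (potentially expensive) record processing is parallelized across the pool. Whether reopening the file per worker beats handing the master's already-buffered bytes to the workers is exactly the kind of trade-off to benchmark.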
Since I haven't seen the structure of the file, I can only post a few possible limitations or concerns for an implementer to think about: