We use curl on our OpenVMS system to download many files successfully - no problems. We have a particularly big ZIP file that I wanted to try downloading as multiple parts in parallel, using curl's --range flag to fetch different byte ranges of the file that we can then append into one large ZIP.
So as a test I tried it out on a smaller file of approximately 50 Mbytes. Using this sequence of commands it worked perfectly. (Note that the curl commands will normally be run in parallel, not one after the other as shown.)
$ curl --range 0-5000000 bigfile.zip -o part1.zip
$ curl --range 5000001-50000000 bigfile.zip -o part2.zip
When the above two commands complete I do
$ copy part1.zip,part2.zip final.zip
and the following unzip works as expected
$ unzip -ao final.zip
Ok, so I thought I would try splitting it three ways, e.g.
$ curl --range 0-5000000 bigfile.zip -o part1.zip
$ curl --range 5000001-30000000 bigfile.zip -o part2.zip
$ curl --range 30000001-50000000 bigfile.zip -o part3.zip
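The byte ranges above can also be generated programmatically, which avoids off-by-one gaps or overlaps between parts. A minimal sketch (Python rather than DCL, purely for illustration; 50,000,000 is the approximate size mentioned above, and curl --range spans are 0-indexed and inclusive at both ends):

```python
# Sketch: generate non-overlapping curl --range spans for an N-way split.
def split_ranges(size, parts):
    """Return (start, end) byte spans covering 0..size-1 with no gaps."""
    chunk = size // parts
    spans = []
    start = 0
    for i in range(parts):
        # The last part absorbs the division remainder.
        end = size - 1 if i == parts - 1 else start + chunk - 1
        spans.append((start, end))
        start = end + 1
    return spans

for n, (start, end) in enumerate(split_ranges(50_000_000, 3), 1):
    print(f"curl --range {start}-{end} bigfile.zip -o part{n}.zip")
```

Each span starts one byte after the previous span ends, so concatenating the parts in order reproduces the original file exactly.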
Three zips are produced as expected, but this time after
$ copy part1.zip,part2.zip,part3.zip final.zip
the unzip gives:
$ unzip -ao final.zip
Archive: final.zip;1
**warning final.zip;1: 1 extra byte at beginning or within zipfile**
(attempting to process anyway)
file #1: bad zipfile offset (local header sig): 1
(attempting to re-compensate)
inflating: CompanyRel.txt [text]
error: invalid compressed data to inflate
[ WriteRecord: sys$put failed ]
[ %RMS-F-RSZ, invalid record size ]
[ %NONAME-W-NOMSG, Message number 00000000 ]
Any suggestions on how to fix this would be welcome.
Your part*.zip files are very likely of record format Stream_LF, which you can check with a DIR/FULL command. For whatever reason, the VMS COPY command appends a line feed (LF) to the contents of every source file after the first in the list, i.e. after each comma (or plus). You can verify this with your successful two-part concatenation: the new file is one byte longer than the sum of the parts, and that extra byte is the LF at the end.
That additional LF at the end doesn't disturb unzip. But with three files in the list there are two LFs: one after the second part and one at the end. The one "in the middle" is the one unzip complains about: the local header offsets that follow it are all off by one byte.
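The effect can be simulated outside VMS. A sketch, assuming the COPY behaviour described above (an LF appended after every source file past the first); the byte values here are arbitrary stand-ins for the real archive:

```python
# Sketch: why the 3-way COPY corrupts the archive but the 2-way one doesn't.
original = bytes(range(256)) * 100           # stand-in for bigfile.zip (25600 bytes)
parts = [original[:5000], original[5000:15000], original[15000:]]

def vms_copy(parts):
    """Mimic COPY part1,part2,part3 final: LF after each part but the first."""
    out = parts[0]
    for p in parts[1:]:
        out += p + b"\n"
    return out

clean = b"".join(parts)    # what unzip needs: bytes identical to the original
broken = vms_copy(parts)   # what COPY produces: one stray LF per extra part
```

With three parts, `broken` is two bytes longer than the original, and the LF sitting between part2 and part3 shifts every subsequent local header offset, which matches the "bad zipfile offset" message.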
After downloading the parts with curl, try setting a different record format: UDF (undefined). Something like
$ set file/attribute=(rfm=udf) part%.zip
then do the concatenation with COPY. This should prevent the COPY command from appending the LFs and should make unzip happy.
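A quick size check after concatenating will confirm whether any stray bytes crept in. A sketch (the helper name is mine; part1.zip, part2.zip and final.zip are the names from the question):

```python
# Sketch: detect stray bytes introduced during concatenation.
import os

def stray_bytes(final_path, part_paths):
    """Extra bytes in the concatenated file beyond the sum of its parts."""
    expected = sum(os.path.getsize(p) for p in part_paths)
    return os.path.getsize(final_path) - expected
```

A result of 0 means the parts were joined byte-for-byte; with the LF behaviour described above you would see N-1 for an N-part COPY.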