In Google Cloud I have a Storage bucket to which I copy a 2.5 GB file, but it takes 15 hours. How should Cloud Storage and the bucket be configured to make the copy faster? To copy I use: gsutil cp filex gs://namesegment
I created a 3 GB file:
dd if=/dev/zero of=./gentoo_root.img bs=4k iflag=fullblock,count_bytes count=3G
For uploading to GCS you have a few options:
You can use gsutil with the -m flag to enable a parallel (multi-threaded/multi-process) copy. Note that -m parallelizes across multiple files, so for a single large file it has little effect on its own:
gsutil -m cp gentoo_root.img gs://my-bucket/file.img
You can use gsutil with parallel composite uploads, which split a single large file into chunks that are uploaded in parallel and then composed into one object. This is the option that most helps with one big file:
gsutil -o GSUtil:parallel_composite_upload_threshold=150M cp gentoo_root.img gs://my-bucket/file.img
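Rather than passing -o on every invocation, you can set the threshold persistently in your boto configuration file (typically ~/.boto). A minimal sketch, reusing the 150M threshold from the command above; the component size shown is an assumption you can tune:

```
[GSUtil]
parallel_composite_upload_threshold = 150M
parallel_composite_upload_component_size = 50M
```

Keep in mind that objects created by parallel composite uploads need the crcmod module installed for gsutil to validate checksums when downloading them later.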
You can find more information in: Strategies for transferring big data sets