We currently have a bash script in place that goes through Bitbucket, stores a clone of each repository on a hard drive, and then backs up that drive. Recently, though, the number of repositories has exceeded the Bitbucket API pagelen limit of 100, and I've been unable to find a way to back up more than 100 repositories.
repositories=$(curl -s -S --user <user>:<password> https://api.bitbucket.org/2.0/repositories/<name>\?pagelen\=100 | jq -r '.values[] | "\(.full_name)"')
echo "backup-bitbucket STARTED -- " $(date +"%m-%d-%Y"+"%T") >> "${LOGFILE}"
# Back up each bitbucket repository by cloning or pulling the latest changes
for i in $repositories
do
    echo "Starting backup for repository ${i}..." >> "${LOGFILE}"
    if [ -d "${BACKUPTODIR}${i}" ]; then
        git -C "${BACKUPTODIR}${i}" pull
    else
        git clone https://<user>:<password>@bitbucket.org/${i}.git "${BACKUPTODIR}${i}"
    fi
    echo "Completing backup for repository ${i}..." >> "${LOGFILE}"
done
Any help in this matter would be appreciated.
I tried it in my terminal, and apparently it's possible to add a page option to the URL. So &page=2 gives you the next 100 results. When you add &page, make sure you put quotes around the URL.
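Building on that, here's a minimal sketch of how the repository list in your script could be gathered across pages, assuming the same <user>, <password> and <name> placeholders as above: it keeps incrementing the page parameter until a page comes back empty. (The 2.0 API also includes a "next" URL in each response, which you could follow instead.)

# Gather all repository full names, 100 per page, until a page comes back empty
repositories=""
page=1
while : ; do
    # Quote the URL so the shell doesn't interpret ? and &
    batch=$(curl -s -S --user <user>:<password> \
        "https://api.bitbucket.org/2.0/repositories/<name>?pagelen=100&page=${page}" \
        | jq -r '.values[] | "\(.full_name)"')
    [ -z "$batch" ] && break
    repositories="${repositories} ${batch}"
    page=$((page + 1))
done

The rest of your loop (clone or pull each entry in $repositories) can then stay exactly as it is.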