Cloud Director Scheduled DB Backup

Unfortunately, the new Cloud Director version does not include an automated backup option. The only manual options are pressing a button in the appliance UI or running a command.

So we have to create one ourselves using scripts!

We will use the internal command /opt/vmware/appliance/bin/create-db-backup. The backup file is created on the NFS shared transfer service storage. The .tgz file contains the database dump file and the global.properties, responses.properties, certificates, proxycertificates, and truststore files of the primary cell.

https://docs.vmware.com/en/VMware-Cloud-Director/10.2/VMware-Cloud-Director-Install-Configure-Upgrade-Guide/GUID-04415BDC-7C21-4ECE-A51C-1067120BB65D.html
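A quick way to see what such an archive holds is to list it with tar. The sketch below builds a mock archive with member names like those described above so the listing step runs anywhere; the file names are illustrative, and on a real cell you would point tar -tzf at the newest .tgz in the transfer share instead.

```shell
# Mock a backup archive so the inspection step is reproducible anywhere
# (member names are illustrative, not the exact appliance layout)
workdir=$(mktemp -d)
mkdir -p "$workdir/backup"
touch "$workdir/backup/global.properties" \
      "$workdir/backup/responses.properties" \
      "$workdir/backup/truststore"
tar -czf "$workdir/backup-demo.tgz" -C "$workdir" backup

# On a real cell you would run:
#   tar -tzf /opt/vmware/vcloud-director/data/transfer/backups/<newest>.tgz
listing=$(tar -tzf "$workdir/backup-demo.tgz")
echo "$listing"
rm -rf "$workdir"
```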

The self-developed script executes the backup command, then checks for and deletes files older than 30 days. Finally, the backup is uploaded to a backup server via SFTP. The script is scheduled via crontab.

Crontab


00 01 * * * /root/vcd_backup_script.sh > /dev/null 2>&1

Script


#!/bin/bash

# Directory where backups are stored
vcd_backup="/opt/vmware/vcloud-director/data/transfer/backups"

# Check and delete the backups which are older than 30 days
remove_old_backups() {
  # Find files older than 30 days
  older_backups=$(find "$vcd_backup" -type f -mtime +30)

  if [ -n "$older_backups" ]; then
    echo "Deleting older backups:"
    echo "$older_backups" | xargs -t rm -f
  else
     echo "No backups older than 30 days found."
  fi
}

# Check if the current node is the Primary VCD Cell
if sudo -i -u postgres repmgr node check --role | grep -q primary; then
  echo "Running Backup Job"
  # Execute the backup script
  /opt/vmware/appliance/bin/create-backup.sh
  # Call the function to remove old backups
  remove_old_backups
  # Upload the newest backup to the SFTP server
  cd /opt/vmware/vcloud-director/data/transfer/backups/
  latest_file=$(ls -t | head -n 1)
  # Note: the here-doc terminator must start at column 0
  sftp user@10.10.10.1 << !
cd upload
put /opt/vmware/vcloud-director/data/transfer/backups/$latest_file
bye
!
else
  echo "This is not the Primary cell"
fi
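Two optional hardening tweaks for the script above, same behavior but fewer edge cases: let find delete its matches directly instead of piping names through xargs, and pick the newest file without parsing ls output. A sketch, using a temporary directory so it can be tried anywhere (on the cell you would use the real backups path):

```shell
# Demo stand-in for /opt/vmware/vcloud-director/data/transfer/backups
backup_dir=$(mktemp -d)
touch -d '40 days ago' "$backup_dir/backup-old.tgz"
touch "$backup_dir/backup-new.tgz"

# Delete old backups in one pass; -print logs what was removed
find "$backup_dir" -type f -mtime +30 -print -delete

# Newest file without parsing ls: sort by modification timestamp
latest_file=$(find "$backup_dir" -maxdepth 1 -type f -printf '%T@ %p\n' \
  | sort -rn | head -n 1 | cut -d' ' -f2-)
echo "$latest_file"
```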

SFTP Server Config


  • Local account created with public/private key authentication
  • A folder on the SFTP server is bind-mounted into the user's home directory.
# /etc/fstab
# Created by anaconda on Wed Aug 31 13:18:39 2022
#

/data/duo-vcd/upload/ /home/duo-vcd-ssh/upload/  none    bind
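For reference, the same bind mount can be created one-off with mount (run as root on the backup server), and the persisted fstab entry can carry all six fields spelled out explicitly:

```shell
# One-off equivalent of the fstab entry above
mount --bind /data/duo-vcd/upload /home/duo-vcd-ssh/upload

# Persisted form with all six fstab fields:
# /data/duo-vcd/upload/  /home/duo-vcd-ssh/upload/  none  bind  0  0
```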

Backup Server Clean Up


  • Script /root/clean_backup.sh
  • Executed via crontab daily
#!/bin/bash
#
# Quote the pattern so find, not the shell, expands it
# (an unquoted glob fails if no backup-* files exist yet)
find /data/duo-vcd/upload/ -name 'backup-*' -type f -mtime +30 -delete
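The daily schedule mentioned above can be a crontab entry on the backup server; the time below is only an example:

```shell
30 01 * * * /root/clean_backup.sh > /dev/null 2>&1
```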



After upgrading to Cloud Director 10.5.1.1, we noticed problems with the database backup script: it no longer executed normally from crontab and failed with errors about not finding the psql binary. A small modification to the script resolved the errors: we altered the -i parameter on the sudo calls, so psql runs in the postgres user's login environment (and therefore with its PATH). Locate the script /opt/vmware/appliance/bin/create-backup.sh, make a backup copy of it, then alter it as below.
HTTP_CERT_ID=$(sudo -i -u postgres psql -d vcloud -A -t -c "$HTTP_CERT_ID_SQL")
if [[ (0 -ne $?) || -z "$HTTP_CERT_ID" ]]; then
    log_and_echo_error "Failed to obtain current webserver certificate ID."
    exit $CERT_BACKUP_ERROR
fi
JMX_CERT_ID=$(sudo -i -u postgres psql -d vcloud -A -t -c "$JMX_CERT_ID_SQL")
if [[ (0 -ne $?) || -z "$JMX_CERT_ID" ]]; then
    log_and_echo_error "Failed to obtain current JMX certificate ID."
    exit $CERT_BACKUP_ERROR
fi
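The underlying cause is the difference between sudo with and without a login shell: sudo -i starts the target user's login shell, which sources its profile and so extends PATH to where psql lives. On the cell you can compare `sudo -u postgres command -v psql` against `sudo -i -u postgres command -v psql`; the same effect can be shown portably with bash:

```shell
# Non-login shell: inherits the caller's environment as-is
nonlogin_path=$(bash -c 'echo "$PATH"')
# Login shell (-l, which is what sudo -i gives you): sources /etc/profile
# and the user's own profile, which typically extends PATH
login_path=$(bash -lc 'echo "$PATH"')
echo "non-login PATH: $nonlogin_path"
echo "login PATH:     $login_path"
```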


I’m Aigars

Welcome to Virtualisation Alley, my cozy corner of the internet dedicated to VMware. Here, I invite you to join me on a journey into virtual world. Let’s go.
