How to Migrate Email without IMAP credentials

Here are a few ways you can migrate emails without knowing the IMAP credentials.

  1. Use the Admin Password.
  2. Migrate emails using SFTP.
  3. Import/Export using RoundCube?

Use the Admin Password

Some email services allow you to use the administrator password to sign in to any email account. This lets you move emails without knowing the user's password.

You can refer to this FAQ on the imapsync website.

https://imapsync.lamiral.info/FAQ.d/FAQ.Admin_Authentication.txt
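For example, with imapsync the admin login typically uses the --authuser1/--authuser2 options (a hedged sketch; the host names and credentials are placeholders, and your server must support admin authentication):

imapsync --host1 mail.example.com --user1 user@example.com \
         --authuser1 admin --password1 'AdminPassword' \
         --host2 mail2.example.com --user2 user@example.com --password2 'UserPassword'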

Migrating Files using SFTP

Disclaimer:

  • This option will only work if you have FTP/SSH/filesystem access.
  • Depending on email volume, you could miss emails that arrive during the transition.
  • If possible, it is recommended to use something like imapsync instead.
  • There could be format issues if the two email servers use different mailbox formats and/or email server software.

Emails are usually stored in the user's home directory. Depending on the hosting provider, it could be /mail or ~/mail.

You can zip up the mail directory and then unzip it on the target server. This only works if you have access to the filesystem. Create the email accounts on the target server before unzipping.
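For example, a rough sketch (the paths and target user are assumptions; adjust them for your hosting provider):

# On the source server: archive the mail directory
cd ~ && zip -r mail.zip mail

# Copy the archive to the target server
scp mail.zip user@target-server:~/

# On the target server, after creating the email accounts
cd ~ && unzip mail.zip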

You could transfer the passwd and shadow files to keep the email passwords the same. Again, create the email addresses on the target server first, and then either overwrite or merge the differences between the shadow and passwd files.

For example, on cPanel servers, the mail directory is in ~/mail and the shadow and passwd files are in ~/etc/DOMAIN.COM

If you are logged in as root, you will need to change ~/ to /home/USER/, substituting USER for the actual cPanel user.
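For example, a hedged sketch for a single domain on a cPanel server (USER, DOMAIN.COM, and target-server are placeholders):

# Run as root; copies the passwd and shadow files for one domain to the target server
scp /home/USER/etc/DOMAIN.COM/passwd /home/USER/etc/DOMAIN.COM/shadow root@target-server:/home/USER/etc/DOMAIN.COM/

Note that this overwrites the target's files rather than merging them.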

Import/Export messages from RoundCube?

You can import and export emails using the RoundCube webmail interface. However, the export is limited to one. message. at. a. time. This could work for a handful of messages, but can get quite tedious if you have a large number of emails.

Backup UISP Application Backup Files with Rsync

UISP runs inside a Docker container, so to copy out the backup files we need to use the “docker cp” command.

sudo docker cp unms:/home/app/unms/data/unms-backups ./uisp-backups

This will copy the backups into the ./uisp-backups directory.

On an Ubuntu system, Docker needs sudo permissions. If you copy the backups with the above command, the backup files will be owned by root and you will not be able to manipulate them as your normal user.

You can either add your current user to the docker group, or change the owner of the files:

sudo chown username:username -R ./uisp-backups/
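Or, to add your current user to the docker group instead (you will need to log out and back in for it to take effect):

sudo usermod -aG docker username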

We can now copy all the automatic backups with rsync

sudo rsync -a ./uisp-backups -e "ssh -p 22" backupuser@backuphost:/backups

You can also automate this with Cron by doing something like

1 1 * * 1 docker cp unms:/home/app/unms/data/unms-backups ~/uisp-backups && rsync -a ~/uisp-backups -e "ssh -p 22" backupuser@backuphost:/backups

Every Monday at 1:01AM, copy the current UISP automatic backups, then use rsync to copy them to a remote server.

This expects that the current user has permissions to call Docker without sudo.

How to Archive UniFi Protect Footage

Here are some links and notes on archiving footage from a UniFi Protect appliance.

Apparently, the .ubv files just need to be remuxed to .mp4 so they are easily playable. The UniFi Protect appliances have ubnt_ubvexport and ubnt_ubvinfo binaries that can do the remux. You can copy a binary off and run it with QEMU on x86 hardware.

Helpful Links.

https://github.com/danielfernau/unifi-protect-video-downloader

https://github.com/petergeneric/unifi-protect-remux

https://durdle.com/2022/01/22/extract-unifi-protect-video/

RSYNC

We can set up rsync to copy the raw footage off the UniFi Protect appliance. Once we have it locally, we can use the remux tool to convert the files to .mp4 so we can easily view them.

A cool thing about using rsync is that if our copy gets interrupted, we can just rerun the command and it will pick up where it left off without duplicating anything.

The following command is a mouthful. It searches for all the recorded video files for cameras with the specified MAC addresses (the MAC addresses can be found in the web interface). There are only a couple of things to change or tweak for the command to work for you:

  • MAC1 should be the MAC address of camera 1, while MAC2 is the MAC address of the next camera we want to archive.
  • Change dst_directory to the archive directory or drive.
  • And of course, change the IP address (10.0.0.1) to the UniFi Protect IP address.

ssh root@10.0.0.1 'find /srv/unifi-protect/video/ \( -name "MAC1*" -o -name "MAC2*" \) -printf %P\\0\\n' | rsync -a -v --exclude="*timelapse*" --files-from=- root@10.0.0.1:/srv/unifi-protect/video/ dst_directory/

Here are the details for the commands.

  • -printf %P\\0\\n : Don’t print the full path, i.e. “/srv/unifi-protect/video/”
  • -name "MAC1*" : Search for recording files that start with camera 1’s MAC address.
  • -o -name "MAC2*" : Lets us search for multiple cameras; add more -o -name "MAC3*" entries as needed.
  • rsync
  • -a : archive mode; copies dates, permissions, etc.
  • -v : verbose output. Not needed, but it is nice to see what it is copying.
  • --exclude="*timelapse*" : Exclude timelapse files. Remove this if you want to archive them.
  • --files-from=- : Tells rsync to use standard input for the list of files to download.
  • root@10.0.0.1:/srv/unifi-protect/video/ : The source directory where the video files are located.
  • dst_directory/ : The path where we are archiving the video footage.

Acquire ubnt_ubvinfo from UDM

Before we can use remux, we need to set up a local copy of ubnt_ubvinfo.

You should be able to use the following scp command to copy the ubnt_ubvinfo or ubnt_ubvexport binary from the UniFi Protect appliance.

scp root@10.0.0.1:/usr/share/unifi-protect/app/node_modules/.bin/ubnt_ubvexport ./

To install on Intel or AMD CPUs, check out the following section of the unifi-protect-remux page.

https://github.com/petergeneric/unifi-protect-remux#quick-start-for-x86-linux

As a side note, it looks like you can download an old x86 version of ubnt_ubvinfo from archive.org. Use at your own discretion.

wget https://archive.org/download/ubnt_ubvinfo/ubnt_ubvinfo

Install unifi-protect-remux

Install ffmpeg

apt install -y ffmpeg

or

dnf install -y ffmpeg

Now we can download and install remux.

wget https://github.com/petergeneric/unifi-protect-remux/releases/download/v3.0.6/remux-x86_64.tar.gz
tar zxf remux-x86_64.tar.gz
sudo mv remux /usr/bin/

Now we can remux the files.

remux --with-audio=true dst_directory/*.ubv

You will need to script a way to recursively loop through the directories, or just do it manually.
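For example, something like this should work as a rough sketch (dst_directory is the archive directory from the rsync step above):

# Find every .ubv file under the archive directory and remux it in place
find dst_directory/ -name '*.ubv' -print0 | while IFS= read -r -d '' f; do
    remux --with-audio=true "$f"
done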

UDM Pro Error Changing WAN IP Addresses

There appears to be a bug on the UDM Pro that you can encounter while trying to update your WAN IP address. The error is similar to “Can’t change IP Address ‘PublicIP’ used in Default Network”.

https://community.ui.com/questions/UDM-Pro-Cant-set-Static-IP-Address-on-WAN-interface/9f83c841-da1a-4b16-b963-c4be3ae3fbab?page=2

It appears that the issue stems from the Internet Source IP being used in the LAN Network settings.

The way to work around this is to disable the Internet Source IP. However, the setting is greyed out, which keeps us from making any changes. We can use the Chrome developer tools to get around this restriction.

  • Enable the Legacy Interface. UniFi Network Settings -> System -> Legacy Interface
  • Go to Settings -> Networks -> Edit (Select Default Network)
  • Open up the Dev tools with Ctrl + Shift + i and select Console
  • Paste the following in and hit enter
$$('[disabled]').forEach( a => a.disabled=false )
  • Find “Internet Source IP”, disable it, and save!

Swap back to the new user interface and go change the WAN IP address.

Identify “Magic Bytes” for Mikrotik Backup File

The “magic bytes” are the first few bytes of a file that can tell you what format the file is.

https://en.wikipedia.org/wiki/List_of_file_signatures

Mikrotik Backup file magic bytes

File type        Magic bytes
RC4 Encrypted    EFA89172
AES Encrypted    EFA89173
Not Encrypted    88ACA1B1

Using the above list, we can check a Mikrotik .backup file in a hex editor like GHex, or dump the first bytes with xxd.
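For example (backup.backup is a placeholder file name):

# Dump the first 4 bytes of the backup file
xxd -l 4 backup.backup

# Example output for an unencrypted backup:
# 00000000: 88ac a1b1                                ....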

Extract UniFi .unf backup file

In this post we are going to extract the contents of a UniFi .unf backup.

This is helpful if we need to do any sort of recovery, or need to look through the database to find system information.

  1. Acquire backup
  2. Decrypt and extract backup
  3. Dump database to JSON file

Acquire Backup

This is easy to do. Log into the web interface and go to Settings -> System -> Maintenance -> Backup and Restore.

Scroll down to Available Backups and download.


You can also get the file via scp or sftp. Manual backups are located in

/usr/lib/unifi/data/backup

and auto backups are in

/usr/lib/unifi/data/backup/autobackup

Decrypt and Extract Backup

We’ll be using the decrypt script from https://github.com/zhangyoufu/unifi-backup-decrypt. More notes on it below.

We’ll need to make sure that openssl and zip are installed

sudo apt install openssl zip

Download the script with wget

wget https://raw.githubusercontent.com/zhangyoufu/unifi-backup-decrypt/master/decrypt.sh

Make it executable

sudo chmod u+x decrypt.sh

And now we can convert the UniFi .unf backup file to a .zip

sudo ./decrypt.sh autobackup_6.2.33.unf autobackup_6.2.33.zip

Now we can extract the zip archive. You can do this on Windows, macOS, or Linux through the GUI or you can extract with

sudo unzip autobackup_6.2.33.zip -d unifi

This will extract all the files and folders to a directory named unifi.

cd unifi

Dump database to JSON

You should now see the db.gz file. This is a compressed archive of the database in BSON (Binary JSON) format. We can use mongo-tools to convert this to a more human-readable JSON format.

sudo apt install mongo-tools

Now we can extract the archive and pipe it through bsondump.

gunzip -c db.gz | bsondump

You can run it through grep to filter out what you need.
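For example, to pull out lines containing a particular field (the search string here is just an illustration):

gunzip -c db.gz | bsondump | grep -i '"name"'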

You can also dump the db to a JSON file (first decompress it with gunzip db.gz, which leaves a file named db) with

bsondump --bsonFile=db --outFile=db.json

More notes on the decrypt script.

The decrypt script is really simple. It uses a static key to decrypt the UniFi backup and then puts the contents into a zip file. There is also an encryption script, so theoretically you can decrypt a backup, make changes to the config, re-encrypt it, and restore it to a server.

#!/bin/sh

# Authors:
# 2017-2019 Youfu Zhang
# 2019 Balint Reczey <balint.reczey@canonical.com>

set -e

usage() {
    echo "Usage: $0 <input .unf file> <output .zip file>"
}

if [ -z "$2" -o ! -f "$1" ]; then
    usage
    exit 1
fi

INPUT_UNF=$1
OUTPUT_ZIP=$2

TMP_FILE=$(mktemp)
trap "rm -f ${TMP_FILE}" EXIT

# Decrypt using the static AES-128-CBC key and IV baked into UniFi backups
openssl enc -d -in "${INPUT_UNF}" -out "${TMP_FILE}" -aes-128-cbc -K 626379616e676b6d6c756f686d617273 -iv 75626e74656e74657270726973656170 -nopad
# The decrypted data is a zip archive; zip -FF repairs it into a valid .zip
yes | zip -FF "${TMP_FILE}" --out "${OUTPUT_ZIP}" > /dev/null 2>&1

Troubleshooting Backup Errors on WHM / cPanel

Below are some helpful locations of files, logs, etc. for troubleshooting backup errors on WHM.

WHM backup logs
Replace {date} with the correct date. There should be one log per day, or one each time a backup runs.

/usr/local/cpanel/logs/cpbackup/{date}.log

View WHM backup config

cat /var/cpanel/backups/config

View WHM remote destination config(s)
Replace *** with the appropriate name.

cat /var/cpanel/backups/***.backup_destination

Rsync.pm file

You may need to modify this file to increase timeout limits if you are running into timeout errors during backups.

/usr/local/cpanel/Cpanel/Transport/Files/Rsync.pm

This link has some more info: https://forums.cpanel.net/threads/what-commands-does-cpanel-run-over-ssh-to-do-rsync-backups.671777/

Unable to prune transport Rsync Incremental Backup – WHM/cPanel

For some reason I keep getting an alert about the transport failing to prune the incremental backups. It shows “ssh slave failed: timed out”.

Going to the backup server shows that the directories have been pruned. This makes the alert a bit confusing.

It appears that others are experiencing the same problem.

https://forums.cpanel.net/threads/remote-incremental-backups-timeouts.606911/page-3

https://forums.cpanel.net/threads/error-pruning-ssh-slave-failed-timed-out.627691/

You can check the backup log to see if it gives you any errors or ideas on what the problem is. Replace {currentdate} with the date of the log file you want.

/usr/local/cpanel/logs/cpbackup/{currentdate}.log
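For example, a quick way to scan that log for problems:

grep -iE 'error|fail|timed out' /usr/local/cpanel/logs/cpbackup/{currentdate}.log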

One thing to try is to increase the timeout on the backup destination.

In WHM, go to Backup -> Backup Configuration -> Additional Destinations -> Your Destination.
Scroll down to the bottom and enter a higher timeout value.


One user said they patched the Rsync.pm file. It looks like there may be a 30 second timeout for rsync, so increasing that might help.

/usr/local/cpanel/Cpanel/Transport/Files/Rsync.pm

Backup Matrix Synapse PostgreSQL Database

This is part of a series of posts on backing up and restoring a Matrix Synapse server. Synapse was installed using the matrix-docker-ansible deployment, which, while a little complicated, can greatly ease management down the road. All the main components are in Docker containers, so we need to use Docker to access them.

https://github.com/spantaleev/matrix-docker-ansible-deploy/blob/master/docs/maintenance-postgres.md#backing-up-postgresql

As the root user, run

docker exec --env-file=/matrix/postgres/env-postgres-psql matrix-postgres pg_dumpall -h matrix-postgres | gzip -c > /matrix/postgres.sql.gz

This will dump the Postgres database to /matrix/postgres.sql.gz.
We can use this later to restore to a new server, or keep it as a backup.
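If you want to automate the dump, a cron entry in root's crontab along these lines should work (an untested sketch; the schedule is arbitrary):

# Dump the Synapse database every day at 2:00 AM
0 2 * * * /usr/bin/docker exec --env-file=/matrix/postgres/env-postgres-psql matrix-postgres pg_dumpall -h matrix-postgres | gzip -c > /matrix/postgres.sql.gz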

Configure rsnapshot on Ubuntu Server

rsnapshot is a utility that uses rsync to back up files locally, or it can back up files from a remote server.

While trying to figure out a good solution for backing up an Ubuntu server, I decided to try rsnapshot. Since it can either create a local backup or pull a remote backup, it needs to be configured on the backup server side. It does not “push” a backup to a backup server.

Some helpful snippets from the man page:

rsnapshot will typically be invoked as root by a cron job, or series of cron jobs. It is possible, however, to run as any arbitrary user with an alternate configuration file.
...
USAGE
        rsnapshot can be used by any user, but for system-wide backups you will probably want to run it as root.
...
NOTES
        Make sure your /etc/rsnapshot.conf file has all elements separated by tabs. See /usr/share/doc/rsnapshot/examples/rsnapshot.conf.default.gz for a working example file.
        Make sure you put a trailing slash on the end of all directory references. If you don't, you may have extra directories created in your snapshots. For more information on how the trailing slash is handled, see the rsync(1) manpage.

Overview

Scenario

Host A runs xyz application and host B is the backup server. We create a backup user on host A, host B then uses that user to ssh and rsync backups to itself.

  1. Create backup user
  2. Configure rsync to be used without a password
  3. Setup SSH Key, aka Passwordless authentication (On backup server)
  4. Setup rsnapshot config (On backup server)
  5. Configure rsnapshot in crontab (On backup server)
  6. Final Testing

Create backup user

The following commands are fairly straightforward. Change backupuser to whatever you want to call your backup user.

sudo useradd -m backupuser
sudo passwd backupuser
sudo usermod -a -G sudo backupuser

Configure rsync to be used without a password

We need to set up the backup user to be able to use “sudo rsync” without having to input the user's password. If we don't use sudo, we can't access system files for backups; and if we had to manually input the password every time rsync runs, the backups would not be automatic. The following link was helpful.

https://unix.stackexchange.com/questions/325100/proper-way-to-set-up-rsnapshot-over-ssh

All we need to do is create a file in /etc/sudoers.d/username and then tell it we don’t need to enter a password when “sudo rsync” is run.

sudo tee /etc/sudoers.d/backupuser <<<'backupuser ALL = (root) NOPASSWD: /usr/bin/rsync'
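You can verify that the rule took effect by listing the user's sudo privileges:

sudo -l -U backupuser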

Setup SSH Key, aka Passwordless authentication (On backup server)

Log into the backup server

Create SSH keys. Note that since rsnapshot wants to run as root, we create the key and copy it as the root user.

sudo ssh-keygen

Accept all the defaults so we can log in from the backup server without having to enter a password.

Copy the SSH key to the server we want to back up

sudo ssh-copy-id backupuser@ip

Enter the password and the key should get copied over. Once complete, verify that you can log in without having to enter a password.
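Since the key was created as root, test the login as root as well (ip is the same placeholder as above):

sudo ssh backupuser@ip 'echo success'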

Setup rsnapshot config (On backup server)

Open up the rsnapshot config file, /etc/rsnapshot.conf, and modify where appropriate.

Change the path where the snapshots are stored. By default rsnapshot stores them under /.snapshots. I moved it under a local user since I do not need rsnapshot to back up the local backup server's files.


# SNAPSHOT ROOT DIRECTORY
snapshot_root /home/user/rsnapshot/snapshots/

Add a daily backup option under Backup levels

# BACKUP LEVELS / INTERVAL #
retain daily 6

Set up the remote server to pull a backup from. Replace ipaddress and the directories as needed. hostname is the server name; you can change it to whatever you want.

### BACKUP POINTS / SCRIPTS ###
# LOCALHOST
# Comment or delete entries unless you want to backup those as well
# EXAMPLE.COM
backup  backupuser@ipaddress:/home/     hostname/       +rsync_long_args=--rsync-path="sudo rsync"

If you would like to back up multiple locations you can create multiple entries with different remote paths. Example locations to add

backup  backupuser@ipaddress:/etc/     hostname/       +rsync_long_args=--rsync-path="sudo rsync"
backup  backupuser@ipaddress:/usr/local/     hostname/       +rsync_long_args=--rsync-path="sudo rsync"

Verify that the config is good with

sudo rsnapshot configtest

It should return Syntax OK

Setup Crontab

sudo crontab -e

Add the following line to run rsnapshot at 3 AM every day.

0 3 * * * /usr/bin/rsnapshot daily

Final Testing

Manually run a backup to verify everything is set up correctly.

sudo rsnapshot daily

After it runs you can check the directory you specified in the config file to verify that the files did get copied.