How do you back up your music PC?

Dave Merrill

Axe-Master
Both physically how, and what software do you use to do it.
Data is potentially pretty big.
I've used hotswappable drive bays, Syquest drives and data DAT back in the day.
Options depend somewhat on whether you want to back up and restore the system drive and applications too.
I'm not a pro studio, responsible for client projects, just my music environment and my own recordings.

Thoughts?
 
Acronis to a separate SSD.

All DAW project files (going back 15 years) backed up to two different SSDs and also to Dropbox.
 
Synology NAS backup utility. Then I back up the NAS periodically to a USB drive I keep offsite. Now that I have real internet, maybe I'll reconsider some cloud option.
 
Here's a summary of what I do:

1) robocopy files from multiple SSDs to a single external HDD
2) robocopy DAW project files to OneDrive
3) dupe SSDs using external duper

Here's an excerpt of the "script" I use to do steps 1 and 2. I just put the commands in a batch file on my desktop and run it every time I use the PC. The first run takes a while; after that it's quicker because it only copies new and changed files.

:: ...

:: Backup_Studio_Audio_E
robocopy E:\ "G:\z_Backups\z_Backup_Studio_Audio_E" /XO /E /R:0 /W:0 /PURGE /NFL /NDL /MT /LOG+:c:\z_downloads\robocopy.log

:: ...

:: Map O drive to OneDrive:
[REDACTED]

:: Copy DAW projects to OneDrive
robocopy "E:\data\Music\Cubase_Projects" "O:\Music\Cubase_Projects_Backups" /XO /E /R:0 /W:0 /PURGE /NFL /NDL /MT /LOG+:c:\z_downloads\robocopy_Cubase_Projects.log

:: ...
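For anyone not on Windows who wants the same incremental-mirror behavior, the combination of /E (recurse into subdirectories), /XO (skip files that are older than the destination copy), and /PURGE (delete destination files that no longer exist in the source) can be sketched in a few lines of Python. This is an illustrative stand-in, not the script above; the `mirror` function and the example paths are made up.

```python
import os
import shutil

def mirror(src: str, dst: str) -> None:
    """One-way mirror of src into dst, roughly like
    `robocopy src dst /E /XO /PURGE`."""
    os.makedirs(dst, exist_ok=True)
    src_names = set(os.listdir(src))
    # /PURGE: remove anything in dst that no longer exists in src
    for name in os.listdir(dst):
        if name not in src_names:
            path = os.path.join(dst, name)
            if os.path.isdir(path):
                shutil.rmtree(path)
            else:
                os.remove(path)
    for name in src_names:
        s = os.path.join(src, name)
        d = os.path.join(dst, name)
        if os.path.isdir(s):
            mirror(s, d)  # /E: recurse into subdirectories
        elif not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
            shutil.copy2(s, d)  # /XO: copy only new or newer files

# Example (hypothetical paths):
# mirror("/Volumes/Audio", "/Volumes/Backup/Audio")
```

Because `copy2` preserves modification times, a second run right after the first copies nothing, the same way the robocopy pass is fast after the initial backup.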
 
I just back up my projects, videos, and audio files to an external hard drive locally, and also keep a copy offsite. The programs on the PC, including the PC itself, are not that important; I change those out every year or two anyway. I live too far in the sticks for fast internet, so IDrive or the like wouldn't work for me; otherwise I would use something like that and just have it back up all the time.
 
I used to have two additional external USB SSDs (in addition to the main SSD) for my PC and would run a script with robocopy commands when necessary. Three copies total.
 
On mac:

System drive -> 4TB USB-C SSD via Carbon Copy Cloner, changed files every 2 hours, refresh everything every week, with snapshots.

Anything I can store on an external, I do, on a second 4TB USB-C SSD (I don't like Apple's internal storage design and use it as little as possible).

Folders on the root of that drive (e.g., documents, music, movies, photos, big-picture folders for my day job and my music job, etc.) get backed up to a NAS, also via CCC with snapshots and occasional refreshes, automatically. The schedules are based on how often I actually change things in them. My current NAS is a Synology and has a big mdraid+btrfs RAID10 in it. Sometime this year, I'm probably going to switch to using a FreeBSD VM on my home server (not TrueNAS, because TrueNAS is crap).

The NAS actively syncs my stuff to Backblaze and day job stuff to Google Drive (we use Google services enough that I think we're up to 200TB of Drive storage...plus, it's their data anyway).

When I was on a PC, it was the same idea, but the external drives were network shares via 10GbE to a storage server. I quit doing that when I switched to macOS because for whatever reason Mac doesn't get anywhere near the latency or throughput performance that Windows, Linux, or FreeBSD do over 10GbE. It was noticeably slower working off the storage server than a local SSD, but only on macOS. And no, it wasn't the array in the storage server; that was all flash. The only spinning hard drives I still own are the ones in the Synology. IDK...maybe there's one in my PS5, I didn't check because I don't care.
 
I have a 10Gb ethernet backbone but all of my cat videos still run at the same speed.
 
Not sure what this means....your internet is definitely not 10GbE (if it is, we should talk).

In macOS, even things like playing an album of FLAC files with VLC would stutter at the start of every track when I was playing off my FreeBSD file server accessed over 10GbE. I ran the same basic setup with everything but the system drive on that same storage server (which was a ZFS stripe over mirrors of SSDs backed by ~80GB of L1ARC in practice and was capable of saturating 10GbE) for several years without experiencing anything like that with Windows, Linux, or FreeBSD clients. The same thing happens trying to access Linux VMs over RDP or VNC - there is no noticeable benefit to using 10GbE (even dedicated to just that one connection with hardware pass-through of the NIC on the VM host) over GbE.

The problem is at least one of macOS, Thunderbolt, or macOS's NIC tuning (I copied as many of the settings as possible and did some experimentation before I gave up on the storage server approach).

Despite the fact that macOS at least started off as FreeBSD userland with a Mach kernel, macOS does not perform anywhere near as well as FreeBSD, and Linux and Windows have mostly caught up with it in the last 5 years or so.

My suspicion is that it's actually because of Thunderbolt (which I firmly believe is a design flaw masquerading as a feature). My 10GbE adapter for macOS is in a TB dock. It shares 4 PCIe lanes with DisplayPort, several USB ports, and Thunderbolt daisy chains. The 10GbE NICs I've used in my various other desktops and servers are all x8 cards just for 10GbE via either RJ45 or SFP+ (some are dual port, but even the single-port cards are still 8-lane cards).

I'm fully willing to admit that I may be doing something wrong, but if your clients are only macOS...10GbE doesn't seem worth the expense. Maybe a dedicated TB 10GbE adapter would work better, but it's still only 4 lanes, and some of the bandwidth is reserved for DisplayPort that you can't use. And maybe I should have bought a mini with 10GbE built in. But...that's literally giving Apple more money to get around their stupid design flaws.
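The lane math above can be put in rough numbers. Every constant here is an approximate published figure, not a measurement: Thunderbolt 3's ~22 Gbps ceiling for PCIe payload and the ~985 MB/s per PCIe 3.0 lane are commonly cited values and should be treated as assumptions, not guarantees for any particular dock or card.

```python
# Back-of-the-envelope bandwidth comparison. Every constant below is an
# approximate published figure, not a measurement -- treat as assumptions.
TB3_LINK_GBPS = 40.0       # Thunderbolt 3 total link rate
TB3_PCIE_CAP_GBPS = 22.0   # commonly cited ceiling for PCIe payload over TB3
PCIE3_LANE_GBPS = 7.877    # PCIe 3.0: ~985 MB/s per lane after 128b/130b encoding
TEN_GBE_GBPS = 10.0

x8_card_gbps = 8 * PCIE3_LANE_GBPS                   # dedicated x8 NIC in a slot
tb_headroom_gbps = TB3_PCIE_CAP_GBPS - TEN_GBE_GBPS  # left for USB, daisy chains

print(f"x8 PCIe 3.0 slot: ~{x8_card_gbps:.0f} Gbps for the NIC alone")
print(f"TB3 PCIe budget: ~{TB3_PCIE_CAP_GBPS:.0f} Gbps shared, "
      f"~{tb_headroom_gbps:.0f} Gbps left beside a saturated 10GbE link")
```

So on paper a TB3 dock can still carry a full 10GbE link, but with everything else on the dock eating into the same ~22 Gbps budget, while a dedicated x8 card has several times that to itself.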
 
No, my internet is not 10Gbps, but I do have two 1Gbps fiber connections I maintain at work. I was talking about sending downloaded videos over a 10GbE backbone, but I wasn't clear, and it was a joke anyway.

My main point was that my backup needs at home are not in the same league as yours. With my background in networks, I can create challenging setups, but I have no need to do so.
 
And to be clear, I am not a professional like you guys. I just sit back and try to learn and enjoy playing with my toys. I am in awe of everyone here.
 
Just matched files on an external SSD here. I probably should use something more high level and techy, but this has worked for so long I just keep on with the super simple.
 
Enh...I like simpler networks. The 10GbE was actually pretty simple....one bridge on the server, all static IPs, no routers, firewalls, or anything fancy.

I did it largely because I'm a computer dork and generally trust open source things more than closed source things, at least when possible.
 
Just copy individual files and directories manually?
Yeah, that’s it. Studio One makes its own directories, so I just copy the whole thing to a fast SSD. I re-copy when needed and it just replaces older files with new versions.

It’s not high tech or studio-worthy, but works for my purposes.
 