FreeBSD General: Other questions regarding FreeBSD which do not fit in any of the categories below.
Remote backup for clueless n00bs
I finally convinced my boss that we needed to move beyond half-assed shared hosting companies if we're going to be doing this web hosting thing seriously, and now we have our own private server (of the virtual kind, for now). I'm still in the process of setting it up, but it's a bit of a learning process for me - this will be the first Unix machine I've set up and administered entirely remotely.
One thing I'm clueless about, and having a surprisingly difficult time getting a consistent answer about, is the most foolproof way to back it up remotely. My copy of Absolute FreeBSD goes into dump and restore in some depth, but assumes I'm backing up to a local tape drive…

What I'm hoping for is a method whereby I can pull the entire system's contents into an archive on my Mac at work; then, if things go bad, I can push the backup back onto the server, reboot the virtual machine, and be up and running again. Is that just a fantasy?

On the other hand, since we're just operating a fairly simple web server, should I not even bother trying to back up the whole thing and only worry about backing up the web files and databases? The downtime after a screw-up will be longer, since I'll have to reinstall and reconfigure everything, but maybe that'll be less complicated… Thinking aloud.

Anyway, just seeking some input from sysadmins more wizened than I.
As Oko mentioned, there are tons of different ways to do this. If your web server really is a VM, then you have the best situation you can have for backup: I just move a copy of my entire VM to my SAN, and if things go bad I can move it back onto the host and boot it up in 10 minutes or so. You could automate this by putting up a backup server (VM) and running a script that pauses the VM, does the transfer, and then resumes it. You could also have the backup server rsync the web server at certain intervals, say once an hour if your data changes regularly.

On my machines, I usually set up a RAID array for the server to run on, and then I have one or more drives in the machine that are not part of the RAID volume that I use to back the server up to. This setup is good if your server isn't running in a VM, and the backup drive can then be written out to tape or other media for offsite backup without affecting the server's bandwidth. Just a few suggestions.

-Tim
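The scheduled-rsync idea above can be sketched roughly as below. The hostname, paths, and key file are hypothetical placeholders, and the command is printed rather than executed so the sketch is safe to run as-is; in a real setup you would run it from cron on the backup box.

```shell
#!/bin/sh
# Sketch of "have the backup server rsync the web server once an hour".
# SRC/DEST/KEY are placeholder names, not real infrastructure.
SRC="backup@www.example.com:/usr/local/www/"
DEST="/backups/www/"
KEY="$HOME/.ssh/backup_key"

# -a preserves permissions/ownership/timestamps, -z compresses over the
# wire, --delete mirrors removals so DEST tracks SRC exactly.
CMD="rsync -az --delete -e 'ssh -i $KEY' $SRC $DEST"

# Printed instead of executed; for the real thing, drop the echo and add
# a crontab entry such as:  0 * * * * /usr/local/sbin/www-backup.sh
echo "$CMD"
```

A word of caution on --delete: if files are removed from the server by mistake, the next hourly run removes them from the backup too, so pair this with periodic snapshots or dated archives.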
Everything depends on the level of access you have and the level of backups you want.
For me, at home it is often a live CD (if dumping /), followed by something like dump [args] -f - what | ssh -i keyfile user@host 'cat > /srv/Backups/host/what-YYYY-MM-DD.dump', with gzip or another compressor on the faster side of the link. Once a year the backups get burned to CD-R and stuffed in a shoe box at a safer location. Web systems I need to care for from afar usually get done by sftp'ing a backup to a server on the other side of the planet. Whether permissions and ownership are retained depends on the software you use, and how you use it. I believe OpenBSD's file sets are tarballs extracted over /, with permissions retained. I've always been partial to tar.
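Fleshed out, the dump-over-ssh pipeline above and its matching restore look something like the following. The filesystem, host, key, and target path are placeholders, and both pipelines are printed rather than executed so nothing is touched:

```shell
#!/bin/sh
# Sketch of the dump | ssh pipeline; every name here is a placeholder.
FS="/usr"                               # filesystem to dump
HOST="user@backuphost.example.com"      # remote backup box
KEY="$HOME/.ssh/backup_key"
OUT="/srv/Backups/myhost/usr-$(date +%Y-%m-%d).dump.gz"

# Level-0 dump to stdout (-f -), compressed locally, streamed over ssh.
BACKUP="dump -0a -f - $FS | gzip | ssh -i $KEY $HOST 'cat > $OUT'"

# Restore runs the pipe the other way, from inside the freshly
# newfs'd and mounted filesystem (restore -r rebuilds in place).
RESTORE="ssh -i $KEY $HOST 'cat $OUT' | gunzip | restore -rf -"

# Printed, not executed, so the sketch is safe to run anywhere.
echo "$BACKUP"
echo "$RESTORE"
```

Whether gzip runs on the server or the backup host is the "compressor on the faster side" choice: compress before ssh to save bandwidth, or pipe raw and compress on the receiving end if the server's CPU is the bottleneck.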
__________________
My Journal
Thou shalt check the array bounds of all strings (indeed, all arrays), for surely where thou typest ``foo'' someone someday shall type ``supercalifragilisticexpialidocious''.
I send my stuff by ftp to DriveHQ[1] (www.drivehq.com). A cron job runs a shell script that uses tar to compress the files and then sends them by ftp.

DriveHQ gives you 1 GB free, so my offsite backup is done automatically and for free; result! I can get at the backup from anywhere with an internet connection, and I don't have to worry about the security or availability of other computers. PM me if you want more advice.

[1] No connection, just a happy customer.
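A minimal sketch of that tar-then-ftp cron job might look like this. The docroot and FTP details are placeholders; for safety the demo archives a scratch directory and only prints the upload command instead of running it:

```shell
#!/bin/sh
# Sketch of a nightly tar-and-upload script; FTP details are placeholders
# and the upload step is printed, not run.
STAMP=$(date +%Y%m%d)
ARCHIVE="${TMPDIR:-/tmp}/site-$STAMP.tar.gz"

# For the demo, archive a scratch directory rather than the real docroot
# (which on the server would be something like /usr/local/www).
WORK=$(mktemp -d)
echo "hello" > "$WORK/index.html"
tar -czf "$ARCHIVE" -C "$WORK" .

# A real upload would drive ftp(1) non-interactively, e.g. with a
# here-doc supplying user/pass, binary mode, and a put command:
echo "ftp -n ftp.drivehq.com   # then: user/pass, binary, put $ARCHIVE"

ls -l "$ARCHIVE"
```

From cron this would run once a night, e.g. 0 3 * * * /usr/local/sbin/site-backup.sh; keep in mind plain FTP sends the password in the clear, so an sftp or ftps variant is worth considering.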
Thanks for the helpful replies so far, everyone. I've been experimenting.
I'm starting to think that just tarring everything on the remote server and then pulling the archive down isn't the best idea, for the obvious reason that the archive temporarily doubles the disk space needed. We're not anywhere near our disk quota yet, but we plan to be over 50% once things really get off the ground.

I had the idea of setting up a backup directory on my Mac and using rsync to pull the server's contents down daily (taking advantage of rsync's incremental transfers to make things faster), then tarballing the directory locally when it's done. But apparently doing this properly (maintaining permissions, ownership, and so on) requires root access on the server. I figure that if I tweak sshd_config to allow root to log in remotely, but also disable password authentication and go strictly key-based, everything should be okay… but I'm still a bit timid about taking that step. Is this a reasonable idea?
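For what it's worth, the key-only-root idea usually comes down to a few real sshd_config options plus an rsync pull like the one sketched below. The host, key path, and destination are placeholders, and the sketch only prints the pieces (nothing on disk is changed); note also that rsync isn't in the FreeBSD base system, so it has to be installed on the server (net/rsync in ports).

```shell
#!/bin/sh
# Key-only root login: genuine sshd_config options, printed for review.
CONF='PermitRootLogin without-password   # newer OpenSSH spells this prohibit-password
PasswordAuthentication no
ChallengeResponseAuthentication no'
printf '%s\n' "$CONF"

# The matching daily pull from the Mac; host/key/dest are placeholders.
# -a keeps permissions and ownership, --numeric-ids stores raw uid/gid
# numbers so they restore correctly even if the Mac's users differ.
PULL="rsync -az --numeric-ids --delete -e 'ssh -i ~/.ssh/backup_key' root@server.example.com:/ ~/backups/server/"
echo "$PULL"
```

With PasswordAuthentication off, a stolen password is useless and root can only get in with the private key, which is the usual justification for relaxing PermitRootLogin this way.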