DaemonForums > FreeBSD > FreeBSD General
#1 - 17th May 2009
Albright
Port Guard
Join Date: Apr 2009
Posts: 14

Remote backup for clueless n00bs

I finally convinced my boss that we needed to move beyond half-assed shared hosting companies if we're going to be doing this web hosting thing seriously, and now we have our own private server (of the virtual kind, for now). I'm still in the process of setting it up, but it's a bit of a learning process for me - this will be the first Unix machine I've set up and administered entirely remotely.

One thing I'm clueless about and having a surprisingly difficult time getting a consistent answer about is the most foolproof way to back it up remotely. My copy of Absolute FreeBSD goes into dump and restore in some depth, but doesn't assume I'm doing anything different than backing up to a local tape drive…

What I'm hoping for is a method whereby I can pull the entire system's contents into an archive on my Mac at work, and then if things go bad, I can just push the backup back onto the server, reboot the virtual machine and be up and running again. Is that just a fantasy?

On the other hand, since we're just operating a fairly simple web server, should I not even bother trying to back up the whole thing and only worry about backing up the web files and databases? The downtime in case of a screw-up will be longer since I'll have to reinstall and reconfigure everything, but maybe that'll be less complicated… Thinking aloud…

Anyway, just seeking some input from sysadmins more wizened than I.
#2 - 17th May 2009
Oko
Rc.conf Instructor
Join Date: May 2008
Location: Kosovo, Serbia
Posts: 1,102

Quote:
Originally Posted by Albright
One thing I'm clueless about and having a surprisingly difficult time getting a consistent answer about is the most foolproof way to back it up remotely.
Because there is no good answer which will work for everybody. That is why there are so many different ways to accomplish things in Unix.


Quote:
Originally Posted by Albright
What I'm hoping for is a method whereby I can pull the entire system's contents into an archive on my Mac at work, and then if things go bad, I can just push the backup back onto the server, reboot the virtual machine and be up and running again. Is that just a fantasy?
No, it is not. That can be accomplished with something as simple as tar-ing the directories you want to back up and transferring them to your Mac laptop with sftp/scp.
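
For example, a minimal sketch (the host name, directories, and archive name here are only placeholders):

Code:
# on the server: archive the directories you care about
tar -czf /tmp/backup-2009-05-17.tar.gz /usr/local/www /usr/local/etc /var/db/mysql
# on the Mac: pull the archive down over ssh
scp admin@server.example.com:/tmp/backup-2009-05-17.tar.gz ~/Backups/
# to restore, push the archive back and unpack it as root so ownership and modes come back intact
tar -xzpf backup-2009-05-17.tar.gz -C /

(For a live database you would normally dump it first, e.g. with mysqldump, rather than tar the raw files.)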

You can also combine sftp/scp with dump/restore, pax, cpio, or gtar.
You can use more complicated solutions like rsync, Amanda, or Bacula.
You can even use CVS, Subversion, or Mercurial to accomplish backups.

So the first step would be to ask what you want to accomplish, what kind of files you need to back up (Mercurial is fantastic for backing up PF rules), and how robust or complicated a solution you need.

I personally would go with something as simple as possible for the specific task I am trying to accomplish.
#3 - 17th May 2009
windependence
Real Name: Tim
Shell Scout
Join Date: May 2008
Location: Phoenix, Arizona
Posts: 116

As Oko mentioned, there are tons of different ways to do this. If your web server really is a VM (which is what "virtual" suggests), then you have the best situation you can have for backup. I just move a copy of my entire VM to my SAN, and if things go bad, I can move it back onto the server and boot it up in 10 minutes or so. You could automate this by putting up a backup server (VM) and running a script to have it pause the web server VM, do the transfer, and then resume it. You could also have the backup server rsync the web server at certain intervals, say once an hour if your data changes regularly.

On my machines, I usually set up a RAID array for my server to run on, and then I have one or more drives in the machine that are not part of the RAID volume that I use to back up the server to. This setup is good if you don't have your server running on a VM, and your backup drive can then write to tape or some other media for offsite backup without affecting the server bandwidth.

Just a few suggestions.

-Tim

__________________
www.windependence.org
Get your Windependence today!
#4 - 18th May 2009
Albright
Port Guard
Join Date: Apr 2009
Posts: 14

Quote:
Originally Posted by Oko
No it is not. That can be accomplished with something as simple as
tar-ing directories you want to back up and transferring them to your
Mac laptop with sftp/scp.
So I could tar, say, the entire /usr filesystem and use rsync to copy it incrementally to my Mac? It really is that easy; it'll maintain permissions and ownership and such? That's… That's beautiful… I'll do some experimentation and see if that will fill the bill.

Quote:
You can use more complicated solutions like rsync, Amanda or Bacula.
I investigated Bacula briefly, but was quickly scared away by its complexity. I think it does a whole lot more than I need it to do.

Quote:
I personally would go with as simple as possible for the specific task I am
trying to accomplish.
Yep, that's what I'm aiming for…

Quote:
Originally Posted by windependence
As Oko mentioned there are tons of different ways to do this. If you were saying that your web server is a VM when you said it was virtual, then you have the best situation you can have for backup. I just move a copy of my entire VM to my SAN and if things go bad, I can move it back on the server, and boot it up in 10 minutes or so.
Well, we have access to the virtual machine, but we don't have access to the host machine. Otherwise, that would probably be the easiest solution.
#5 - 18th May 2009
TerryP
Arp Constable
Join Date: May 2008
Location: USofA
Posts: 1,547

Everything depends on the level of access you have, and level of backups you want.

For me, at home it is often a live CD (if doing /), followed by something like dump [args] -f - what | ssh -i keyfile user@host 'cat > /srv/Backups/host/what-YYYY-MM-DD.dump', with gzip or another compressor on whichever side is faster. Once a year the backups get CD-R'd and stuffed in a shoe box at a safer location. Web systems I need to care for from afar usually get done by sftp'ing a backup to a server on the other side of the planet.
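
Filled in with hypothetical names, that pipeline might look like this on FreeBSD (-L takes a snapshot so a mounted filesystem can be dumped safely; the key file, host, and paths are made up):

Code:
# level-0 dump of /usr, compressed locally, written straight to a file on the backup host
dump -0auL -f - /usr | gzip | \
    ssh -i ~/.ssh/backup_key user@backuphost 'cat > /srv/Backups/myserver/usr-2009-05-18.dump.gz'
# restoring later is the same thing in reverse:
# ssh -i ~/.ssh/backup_key user@backuphost 'cat /srv/Backups/myserver/usr-2009-05-18.dump.gz' | gunzip | restore -rf -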

Whether the permissions/ownership of things are retained depends on the software you use, and how you use it. I believe OpenBSD's file sets are installed by extracting tarballs over /, with permissions retained.



I've always been partial to tar.
__________________
My Journal

Thou shalt check the array bounds of all strings (indeed, all arrays), for surely where thou typest ``foo'' someone someday shall type ``supercalifragilisticexpialidocious''.
#6 - 18th May 2009
michaelrmgreen
Fdisk Soldier
Join Date: May 2008
Posts: 49

I send my stuff by ftp to DriveHQ[1] (www.drivehq.com). I use cron to run a shell script which uses tar to compress the files and then sends them by ftp.
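
For what it's worth, a sketch of that kind of script (the credentials, paths, and remote file name below are invented; DriveHQ just has to accept a plain ftp upload):

Code:
#!/bin/sh
# nightly-backup.sh: tar up the web files, push the archive to DriveHQ by ftp, then clean up
DATE=$(date +%Y-%m-%d)
ARCHIVE=/tmp/site-$DATE.tar.gz
tar -czf "$ARCHIVE" /usr/local/www
ftp -inv ftp.drivehq.com <<EOF
user myusername mypassword
binary
put $ARCHIVE site-$DATE.tar.gz
bye
EOF
rm "$ARCHIVE"

A crontab entry such as 0 3 * * * /root/bin/nightly-backup.sh then fires it every night at 03:00.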

DriveHQ gives you 1 GB free, so my offsite backup is done automatically and for free. Result!

I can get at the backup from anywhere with an internet connection. I don't have to worry about security or availability of other computers.

PM me if you want more advice.





[1]no connection, just a happy customer.
#7 - 18th May 2009
Albright
Port Guard
Join Date: Apr 2009
Posts: 14

Thanks for the helpful replies so far, everyone. I've been experimenting.

I'm starting to think that just tarring everything on the remote server and then pushing the archive file isn't the best idea for the obvious reason that everything will require twice as much disk space. We're not anywhere near filling up our disk quota yet, but we plan to be at least over 50% once things really get off the ground.

I had the idea of just setting up a backup directory on my Mac and using rsync to pull the server's drive contents down daily (taking advantage of rsync's incremental file transfer to make things faster), then tarballing the directory locally when it's done. But apparently this requires remote root access to the remote server to do properly (maintaining permissions and such). I figure that if I tweak sshd_config to remove the restriction against root logging in remotely, but also disable password authentication and go strictly key-based, everything should be okay… but I'm still a bit timid about taking that step. Is this a reasonable idea?
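
A sketch of what that setup could look like, with made-up host names, paths, and key file (PermitRootLogin without-password means root can only log in with a key, never with a password):

Code:
# on the server, in /etc/ssh/sshd_config (restart sshd afterwards):
PermitRootLogin without-password
PasswordAuthentication no

# on the Mac (run it via sudo if you want ownership preserved on the local copies):
rsync -az --delete -e "ssh -i /Users/me/.ssh/backup_key" \
    root@server.example.com:/usr/local/ /Users/me/Backups/server/usr-local/
tar -czf ~/Backups/server-$(date +%Y-%m-%d).tar.gz -C /Users/me/Backups/server .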
#8 - 18th May 2009
windependence
Real Name: Tim
Shell Scout
Join Date: May 2008
Location: Phoenix, Arizona
Posts: 116

Quote:
Originally Posted by Albright
Thanks for the helpful replies so far, everyone. I've been experimenting.

I'm starting to think that just tarring everything on the remote server and then pushing the archive file isn't the best idea for the obvious reason that everything will require twice as much disk space. We're not anywhere near filling up our disk quota yet, but we plan to be at least over 50% once things really get off the ground.

I had the idea of just setting up a backup directory on my Mac and using rsync to pull the server's drive contents down daily (taking advantage of rsync's incremental file transfer to make things faster), then tarballing the directory locally when it's done. But apparently this requires remote root access to the remote server to do properly (maintaining permissions and such). I figure that if I tweak sshd_config to remove the restriction against root logging in remotely, but also disable password authentication and go strictly key-based, everything should be okay… but I'm still a bit timid about taking that step. Is this a reasonable idea?
Sure, if you go strictly key-based, there will be no password prompt, and unless they have your secret key, they're not getting in. This will allow cron to do its job, which with rsync should take a very short period of time once the initial backup is done.
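
On the Mac side that can end up as a single crontab entry (hypothetical paths again), e.g. pulling the server down every night at 02:30:

Code:
30 2 * * * /usr/bin/rsync -az --delete -e "ssh -i /Users/me/.ssh/backup_key" root@server.example.com:/usr/local/ /Users/me/Backups/server/usr-local/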

-Tim
__________________
www.windependence.org
Get your Windependence today!