FreeBSD General: Other questions regarding FreeBSD which do not fit in any of the categories below.
I made a snapshot; it was reported as "2.9G" but actually took only a few MB, while the dump was 503M. I've long known about dumps, but what exactly *is* a snapshot? Is there an equivalent in Windows or "boot utility" terminology, if one exists? (Not an urgent question, AFAIK.)
__________________
FreeBSD 13-STABLE
http://en.wikipedia.org/wiki/Snapsho...puter_storage)) explains it rather well.
__________________
You don't need to be a genius to debug a pf.conf firewall ruleset, you just need the guts to run tcpdump
Yes, and no.

You can use ZFS snapshots as a poor man's backup solution, in that you will have old versions of files accessible. However, using the same set of disks for "live" data and "backup" data is just asking for trouble in the long run. This is what I do at home (3 disks in a raidz1 configuration, using daily ZFS snapshots to keep 30-60 days of "backups" around).

However, you can also use snapshots as a staging ground for creating off-system backups (copy the contents of the snapshot off to another server), and you can also create a backup server that uses snapshots to provide access to historical data (this is what we do at work: rsync data off the remote server to ZFS, create a snapshot, repeat daily).

It all depends on what you need and what hardware you have available. UFS snapshots are a little more tricky, but are still usable in roughly the same ways.
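A daily-snapshot rotation like the one described could be sketched as below. The pool name "tank", the date-based snapshot naming, and the 30-day retention window are assumptions for illustration, not details from this thread; the zfs commands themselves are left commented out so the sketch is safe to read anywhere.

```shell
#!/bin/sh
# Daily ZFS snapshot rotation sketch (pool name and retention assumed).

# select_expired: read snapshot names (pool@YYYY-MM-DD) on stdin and
# print the ones whose date stamp sorts strictly before the cutoff.
select_expired() {
    cutoff=$1
    while read -r snap; do
        stamp=${snap#*@}
        # lexicographic compare works because the stamps are YYYY-MM-DD
        oldest=$(printf '%s\n%s\n' "$stamp" "$cutoff" | sort | head -n 1)
        if [ "$oldest" = "$stamp" ] && [ "$stamp" != "$cutoff" ]; then
            printf '%s\n' "$snap"
        fi
    done
}

# Take today's recursive snapshot, then prune anything older than
# 30 days (date -v is BSD date(1); GNU date would use -d '-30 days'):
# zfs snapshot -r "tank@$(date +%Y-%m-%d)"
# zfs list -H -t snapshot -o name | grep '^tank@' \
#     | select_expired "$(date -v -30d +%Y-%m-%d)" \
#     | xargs -L1 zfs destroy -r
```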
Thanks once again, Freddie. So just to be clear: if the only thing I have is a snapshot, I'm probably out of luck if I have to rebuild the entire system, but if I just need some files from a prior date, I'm OK.

I haven't messed with ZFS, but it may be worth it for use in my customer accounts. I will be building a NAS box for a client next week, and I am considering ZFS as the file system. Is there anything I should or should not do in this case? We are going to have two RAID5 arrays, with four 1.5 TB drives in each. The second array will be used for backup of the first array, and the production array will serve up NFS shares for our VM disk space.

Thanks so much again for your prompt responses.
-Tim
For example, our backup server does a full-system rsync of a remote server, then takes a snapshot. The next night it does another full-system rsync and takes another snapshot. Any of those snapshots can be used to restore a system: boot off a LiveCD, partition/format the hard drives, rsync the system back from the backup server directory, install the boot loader, and reboot.

Thus, if your snapshots are of full, complete systems, then they can be used to restore full, complete systems.
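The restore steps above could take roughly this shape. This is a hedged sketch, not a tested recipe: the disk device, backup hostname, snapshot path, and UFS/GPT layout are all invented placeholders, and the destructive commands are commented out.

```shell
#!/bin/sh
# Bare-metal restore sketch: LiveCD -> partition -> rsync -> bootcode.
# DISK, BACKUP, and SNAPDIR are invented placeholders.
DISK=ada0
BACKUP=backup.example.com
SNAPDIR=/backups/myhost/.zfs/snapshot/2024-02-15

# 1. From the LiveCD: partition and format the replacement disk (UFS).
# gpart create -s gpt ${DISK}
# gpart add -t freebsd-boot -s 512k ${DISK}
# gpart add -t freebsd-ufs ${DISK}
# newfs -U /dev/${DISK}p2
# mount /dev/${DISK}p2 /mnt

# 2. Pull the full-system copy back out of the chosen snapshot
#    (ZFS exposes snapshots read-only under .zfs/snapshot/<name>).
RSYNC_CMD="rsync -aHx --numeric-ids ${BACKUP}:${SNAPDIR}/ /mnt/"
echo "${RSYNC_CMD}"
# ${RSYNC_CMD}

# 3. Install the boot code, then reboot into the restored system.
# gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ${DISK}
# reboot
```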
For your purposes, you have a couple of options:
- create two separate pools, each with a single 4-disk raidz1 vdev (one pool for production, one for backup); or
- create a single pool containing both raidz1 vdevs, and carve it into separate filesystems for production and backup.

The former provides more separation between the two, but the latter provides better I/O throughput and more storage space. You also have the option of combining the raidz1 vdevs in the same pool as either a striped (RAID0) pool or a mirrored (RAID1) pool ("zpool add" vs "zpool attach"), depending on just how much redundancy you want.

Personally, I'd look into using two separate servers: one to host the live data, one to act as a backup. Then you can either use "zfs send" to keep them in sync by sending snapshots between them, or use rsync.

On FreeBSD, native support for NFS sharing is available via the "sharenfs" property for individual filesystems (normal NFS configuration via /etc/rc.conf). You have to install Samba to share via SMB/CIFS, and you have to install an iSCSI target from ports if you want to export zvols via iSCSI. On Solaris, NFS, SMB/CIFS, and iSCSI are handled natively via the sharenfs, sharesmb, and shareiscsi properties.

For NAS and SAN purposes, ZFS is very nice. For storage on a single server (direct-attached) it's useful, depending on what you are doing. On a desktop it's a bit overkill, but still has some perks.
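The two layouts could look like this in zpool commands. Pool names, disk device names, and the NFS export options are invented for illustration; everything is commented out since this is a sketch rather than a runnable recipe.

```shell
#!/bin/sh
# Option 1: two separate pools, one 4-disk raidz1 vdev each.
# zpool create prod   raidz1 da0 da1 da2 da3
# zpool create backup raidz1 da4 da5 da6 da7

# Option 2: one pool striped (RAID0) across both raidz1 vdevs,
# with separate filesystems for production and backup.
# zpool create tank raidz1 da0 da1 da2 da3
# zpool add    tank raidz1 da4 da5 da6 da7
# zfs create tank/production
# zfs create tank/backup

# Per-filesystem NFS export on FreeBSD via the sharenfs property
# (export options here are an example, not from the thread):
# zfs set sharenfs="-maproot=root -network 192.168.0.0/24" tank/production
```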
OK, fantastic. I have it set up the second way, with one storage pool, since we'll have good backups and need the throughput. I have created two different file systems so we can use one for backup and one for production. Of course the client is being cheap and won't pay for two servers, even though I agree with you that that is the way to go.

I only have one question, though. Since the file systems reside in the storage pool, there is really no way to know which vdev the data is on; it may even be striped across both of them, as I understand it? Not that it makes any difference, as (if I am comprehending this correctly) the double RAID devices provide a measure of safety, more so than a single array would.

BTW, thank you so much for the direction on this. I want to start using this configuration in all my client setups if it proves to be stable and reliable. Your help is always right on point, as is most of the help on these forums.
-Tim
AFAIK, you cannot determine which vdev a given piece of data is stored on, only that it's stored in that filesystem, in that storage pool. The data will be striped across all the vdevs in the pool. A single vdev, or even a single disk, is useless outside of the pool. You can basically treat the entire storage pool as "a single disk".

In essence, the storage pool is a RAID0 of all the vdevs. If you are really paranoid about data safety, you can create mirrored pools instead of striped pools. You create the pool and the first vdev as per normal, but when you add the second (and subsequent) devices, you use:

zpool attach <poolname> <existing-disk> <new-disk>

That will create a RAID1 across the devices. You lose disk space, but gain a lot more redundancy in the pool. (Probably not useful for most people/situations.)
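The difference between the two commands can be sketched as below. Disk names are invented; note also, as a caveat to the post above, that "zpool attach" operates on single-disk or mirror vdevs, not on raidz vdevs.

```shell
#!/bin/sh
# "zpool add" places a new vdev alongside the existing ones;
# the pool then stripes (RAID0) across all its vdevs:
# zpool create tank raidz1 da0 da1 da2
# zpool add    tank raidz1 da3 da4 da5

# "zpool attach" widens an existing single-disk or mirror vdev
# into a (bigger) mirror, one device at a time:
# zpool create tank da0
# zpool attach tank da0 da1    # da0 and da1 now form a two-way mirror
```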