|
|
|
|||
CF as a *BSD hard drive?
Hello,
I've been reading up on a growing(?) trend in computers: instead of shelling out $300+ for a decent solid state drive, using CF as a solid state drive. You get one (or several) CF cards of whatever size you want and a CF-to-SATA or CF-to-IDE adapter and voila - your very own solid state drive. Some adapters even let you couple multiple CF cards together to increase storage capacity (three is the most I have seen so far). While CF cards don't have the capacity of the newest crop of solid state drives (128GB now on Newegg - for over $3,000!) nor rival the TB-breaking crop of traditional hard drives, they are certainly more affordable than solid state drives and offer similar benefits. A 32GB setup (the largest single CF card on Newegg) would run ~$100 for the card plus ~$15 for the adapter - $115 total, while a comparable solid state drive starts at $400.
Once connected, the CF + adapter is seen by the system as any other IDE/SATA drive that can be formatted, booted from, and read/written by the operating system. The adapters can be mounted internally, via a PCI slot (regular or low-profile), or via a 3.5" drive bay bracket, the latter two allowing easy insertion and removal of the CF hard drive. So you could have an entire site set up in this format and take your system with you - all on a single CF card!
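For the curious, here is a minimal sketch of putting a filesystem on such a card under FreeBSD, assuming the adapter shows up as ad2 (the device name is only an example; it may appear as a different ad/da device depending on the adapter):
Code:
# one FreeBSD slice covering the whole card, with boot code installed
fdisk -BI ad2
# standard label plus bootstrap on that slice
bsdlabel -w -B ad2s1
# create and mount a filesystem on the 'a' partition
newfs /dev/ad2s1a
mount /dev/ad2s1a /mnt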
__________________
And the WORD was made flesh, and dwelt among us. (John 1:14) |
|
|||
While this is a neat idea... CF cards are considerably slower than their IDE/SATA counterparts.
32G is large enough to make a decent workstation, though... most of my systems have hard drives of equal capacity. (Though lately I've been investing in 200G+ drives...). |
|
|||
Quote:
As for purchasing large solid-state drives today for a general-purpose system, if you have the money & really want the experience, go for it. However, reviews by both pundits & developers in the know indicate that the performance is underwhelming & the cost is very high compared to other popular technologies. Solid-state storage is about to have its time in the sun, but that time is not today. The high cost & lack of performance aren't yet balanced by the benefit of getting away from moving parts, heat, & noise. |
|
||||
SATA transfers at a rate of 3 Gbit/s (roughly 300 MB/s of payload); the best flash assemblies manage 15 to 40 MB/s (at high cost). That's roughly 8-20 times slower.
I just use USB flash sticks rated at 9 MB/s, but they were 25 bucks (some months ago) for 4 GB, and I can carry my usual OS and apps on any piece of hardware that can boot from USB. I was planning to use a small 40/44-pin IDE flash module to hold the root, but as flash sticks go mainstream, prices keep dropping. I got a half-terabyte drive with both eSATA and USB 2.0 for archival and backups; all my other machines have a directory I can use over NFS as working space, when needed and until the drive dies.
Caution: the main market for CF is digital photography, and the cards come FAT32-formatted. The advertised speed increase is not always in the hardware; it can come, even partly, from the card's embedded firmware. If you newfs the CF (assuming the card accepts it), you will probably fall back to 5-9 MB/s. Check the manufacturers' sites, not the resellers'.
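A quick way to check what a given card really sustains once it carries a native filesystem - just a rough sketch, with /mnt/cf standing in for wherever the card is mounted:
Code:
# rough sequential write test (256 MB), sync to flush the buffer cache
dd if=/dev/zero of=/mnt/cf/testfile bs=1m count=256 && sync
# rough sequential read test (unmount/remount first so the data isn't cached)
dd if=/mnt/cf/testfile of=/dev/null bs=1m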
__________________
da more I know I know I know nuttin' |
|
|||
Quote:
Additionally, some "133x" or otherwise "high-speed" cards support multi-word DMA, while others will report DMA capability but don't actually support it, causing odd issues under BSD. |
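If a card claims DMA but acts up, it's worth checking what mode the ata(4) driver actually negotiated and, if need be, forcing PIO. A FreeBSD-flavoured sketch (ad2 is just an example device):
Code:
# show the transfer mode currently negotiated for the CF adapter
atacontrol mode ad2
# if DMA transfers cause errors, disable ATA DMA at boot time
echo 'hw.ata.ata_dma="0"' >> /boot/loader.conf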
|
||||
We're planning on using a pair of 2 GB CF disks in a gmirror(8) setup for /, with everything else (/usr, /usr/local, /tmp, /var, swap, /home ...) in ZFS. The server we're putting this into has 24 hot-swappable drive bays, with 12x 400 GB SATA drives on one 3Ware 9550 RAID controller and 12x 500 GB SATA drives on another 3Ware 9550 RAID controller (both set up as JBOD). This is going to be our remote backups box (it does rsync backups of all the major servers in the district, with ZFS snapshots taken after each run).
The beauty of zpools is that you can replace drives with larger-capacity drives, and the zpool will start using all the extra space. Replace one drive per day with 1 TB drives, and two weeks later we'll have doubled the storage space. What sucks about zpools (at least raidz ones) is that you can't add drives to the pool (so we can't start with 12 drives and expand to 24 down the road).
If it works as well as we think it will, and as well as our test setup has been working, then we're going to put an identical box in at a second remote location and use the ZFS snapshot streaming feature to replicate the backups. That way we'll have duplicate backups, in two separate locations, for all the major servers in the district. All connected via gigabit fibre links! Some days, I love my job! Last edited by phoenix; 4th July 2008 at 05:30 AM. Reason: Add info on expanding raidz pools. |
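For anyone wanting to copy the idea, the pieces look roughly like this - a sketch only, with ad0/ad1, da0..., the snapshot name, and the backup2 hostname all placeholders, not the actual layout described above:
Code:
# mirror the two CF disks for / with gmirror(8)
gmirror label -v -b round-robin gm0 ad0 ad1
bsdlabel -w -B /dev/mirror/gm0
newfs /dev/mirror/gm0a
# everything else in a raidz pool on the SATA drives
zpool create tank raidz da0 da1 da2 da3 da4 da5
zfs create tank/usr
zfs create tank/home
# grow the pool later by swapping in bigger drives, one at a time
zpool replace tank da0 da12
# replicate a snapshot to the second box over ssh
zfs send tank@nightly | ssh backup2 zfs receive tank/backup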
|
|||
Hello,
Quote:
I'm looking right now at the SanDisk Extreme III cards (4 or 8GB). Do you know of any issues with these?
__________________
And the WORD was made flesh, and dwelt among us. (John 1:14) |
|
|||
Hello,
Quote:
CF to SATA/IDE is best for seek-time performance. There is no way a traditional hard drive can rival it for random seek time, since the CF doesn't have to rotate platters or move a head. The big drawback is sustained read/write of large files, which is the result of the lower I/O throughput. But many of us don't regularly transfer 1GB files back and forth; most of our disk writing is frequent, relatively small (<10MB) files, and I think CF will shine in that role. And don't forget that traditional hard disks are leveling off in their tremendous improvements in speed and size, while the door is open for CF (and SSD) to make drastic improvements, especially as interest in them grows - the future is potentially very bright for them. It should also be noted that the largest market for CF is currently the embedded and mini-itx markets.
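A crude way to see both access patterns on a mounted card (just a sketch; /mnt/cf is a placeholder mount point):
Code:
# one big sequential write - where the low throughput hurts
dd if=/dev/zero of=/mnt/cf/big.img bs=1m count=512
# a thousand small writes - where the absence of seek latency helps
/usr/bin/time sh -c 'i=0; while [ $i -lt 1000 ]; do
    dd if=/dev/zero of=/mnt/cf/small.$i bs=8k count=1 2>/dev/null
    i=$((i+1)); done'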
__________________
And the WORD was made flesh, and dwelt among us. (John 1:14) |
|
|||
Right, flash media is a bit more durable than people give it credit for... still, I'm not ready to use it in the workstation field.
Embedded? Sure thing. |
|
|||
Hello,
Quote:
But if you're like me and have a full OS with all the applications and a complete /home that tops out at 2-3GB, then this becomes a much more viable option. Seriously, with such modest needs, why would I waste 998GB of a terabyte drive just 'cause I can? I could get 2x 4GB CF cards, run them in RAID (8GB), and have some additional reliability and peace of mind for the same price as that terabyte drive, or less. It makes more sense to me - your needs may be different from mine.
__________________
And the WORD was made flesh, and dwelt among us. (John 1:14) |
|
||||
jmj, I hear you
IMVHO, solutions for specific needs should be designed from the ground up. A marine PC doesn't have the same needs as an embedded car GPS/game console, or airborne entertainment, or network-attached storage. For years I have been looking for available solutions to put the darn OS, networking, and applications somewhere. The BIOS usually lives in an EEPROM - slow to read, but it doesn't need to read much. IMO, the rest should be fast-access storage, which means DRAM. That DRAM can be kept alive (battery backup to send a refresh signal when needed; the technology has been around for 30 years). Problem: that DRAM must get its data from somewhere. Speaking of solid state, flash is the winner. I am speaking of installing, upgrading, and expanding the range of apps on a flash device, but loading all the flash data into a (battery-backed) DRAM device. This I don't see available anywhere.
Pardon me, but a SATA-enabled RAID made of CF cards is what I call the B&D syndrome: you can attach a circular-saw attachment to a Black & Decker drill and hope to saw a few feet off a plank, but you'd get a proper job using a plain circular saw. Marketing-wise. Unfortunately. Yes, I fully agree that any IT person in charge should protect his company with a choice of hardware elements freely available from a trusted computer spares supplier next door. I would use RAIDed, SATA-ized CFs to populate a RAM disk. But in that case, if mechanical data holders are acceptable, a DVD-RW can be used both as data source and for incremental backups (see man growisofs, the -Z and -M switches). In both cases I would use RAM disks, so the read/write speed of the flash is a lesser concern, as it is only used for incremental backups (as you would with the DVD-RW; I said DVD-RW, not DVD+RW, which is limited to 1000 sessions or so). In theory, reading is only needed when booting the station, and writing only for incremental backups. And, instead of the DVD, flash sticks are also a solution (checked yesterday: you now get 8 GB instead of 4 GB for the price of 2 months ago).
No doubt SSD has a market (has markets) - not, IMO, where power lines (and UPSes) are available in normal environments (no vibrations, normal room temperature). Also, no doubt 4 to 8 GB will soon be soldered onto SBCs and mini/nano-itx boards. Just FWIW, I upgraded an OpenBSD snapshot from June 12th to the 4.4 beta on a flash stick yesterday, one slice holding everything (incl. /tmp). It took 4 hours vs. a couple of minutes on a hard drive (and I am a slow typist).
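For reference, the growisofs incremental trick looks roughly like this, and a RAM disk for the fast working copy can be had with mount_mfs on OpenBSD - a sketch only, with /dev/cd0, /data and the mfs size as placeholders:
Code:
# first session on a blank DVD-RW: -Z creates the initial session
growisofs -Z /dev/cd0 -R -J /data
# later runs append to the same disc: -M merges a new session
growisofs -M /dev/cd0 -R -J /data
# a memory file system as the fast working area (size in 512-byte sectors)
mount_mfs -s 262144 swap /mnt/ram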
__________________
da more I know I know I know nuttin' |
|
|||
Hello,
A local store has 2GB SanDisk Ultra II CF cards on sale for $20 this week. I'm going to get one and a cheap Syba CF-to-IDE adapter from Newegg and test it in my 50MHz 486 machine. It can't be any worse than the hard drive that's in there (it is painfully slow - I'd rather get teeth pulled!!). After I set it up and test it, I'll post some results. Stay tuned.
__________________
And the WORD was made flesh, and dwelt among us. (John 1:14) |
|
|||
I put together a WAP using CF as the hard drive last year - since all the logging is mounted externally and I netboot, I can just open up the case and pop in another CF if one goes bad. I can't say that I've had any real performance issues or that I've had a card die - the WAP only has to handle about 15 Mbps, and the only writes occur when the device reboots.
For a few years prior I used 2.5 inch notebook drives which, despite operating at idle just about all the time, had a very high attrition rate. The capacity limitations of CF(I/II) make it look less attractive to me than SSDs, but I haven't seen any solid evidence that CF, which needs very little power (3.3 or 5 volts), has a defined lifespan limitation. |
|
|||
I have been running FreeBSD from CF cards for several years now, mainly as hard disk replacements in firewalls / routers. Some devices, like PC Engines' WRAP, require the use of CF cards for storage.
A week ago I replaced the hard disk in the laptop I use as a 24x7 torrent server with an 8 GB SSD from Transcend. Performance is weak (15 MB/s write, 18 MB/s read) compared to a modern 2.5" hard disk, but it was "cheap" (80 EUR) compared to other SSDs. If you go for an 8 GB CF card built from SLC flash cells (not MLC like on cheap CFs) plus a CF-to-IDE converter, you end up spending nearly the same. And it supports UDMA33, thus lowering CPU usage (with a 550 MHz PIII this is an issue) compared to CF cards using PIO modes. Although I think this device uses modern wear-leveling algorithms, I mount / read-only, have /var mounted as a RAM disk, and mount the writable part with the 'noatime' option set. Also, I really like syslog with support for circular log files. |
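Roughly, such a setup in /etc/fstab could look like this - a generic sketch, not the poster's actual file, with ad0 and the mfs size as placeholders:
Code:
# Device       Mountpoint  FStype  Options      Dump  Pass#
/dev/ad0s1a    /           ufs     ro           1     1
md             /var        mfs     rw,-s64m     0     0
/dev/ad0s1d    /data       ufs     rw,noatime   2     2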
|
|||
Hello,
Quote:
What do these two do ('noatime' and circular log files)?
__________________
And the WORD was made flesh, and dwelt among us. (John 1:14) |
|
||||
Single-Level Cell (SLC) and Multi-Level Cell (MLC)
In SLC flash, each cell stores one bit (two voltage states). In MLC flash, each cell stores more than one bit - typically two, giving four states. SLCs are faster; MLCs pack more storage into the same silicon. SLCs have far better endurance (hundreds of thousands of writes per cell), while MLCs wear out faster (tens of thousands or fewer). SLC is more prevalent in enterprise gear; MLC is more prevalent in low-end consumer gear. (Going from memory of a bunch of articles I read a few weeks ago. Please post corrections or omissions.) |
|
|||
Per NetBSD's mount(8) manpage:
Code:
noatime Never update the access time field for files. This option is useful for optimizing read performance on file systems that are used as news spools. |
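In other words, on a flash-backed filesystem it avoids a metadata write every time a file is read. It can also be toggled on an already-mounted filesystem without a reboot, for example:
Code:
# re-mount an existing filesystem with access-time updates disabled
mount -u -o noatime /usr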
|
|||
The OP may gain from looking at the classic tricks done to minimize disk writes with older CF cards on OpenBSD 3.7:
http://blog.innerewut.de/2005/05/14/openbsd-3-7-on-wrap
http://blog.innerewut.de/tags/wrap/
http://www.kaschwig.net/projects/openbsd/wrap/
Yes, these blog entries reference an old version of OpenBSD, but recent traffic on misc@ has pointed out that they are still valid. I used them as a reference when configuring a 4GB Eee PC to run OpenBSD-current. FWIW. Last edited by ocicat; 24th July 2008 at 10:03 AM. |
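The gist of those write-minimizing tricks, very roughly (wd0a and the mfs sizes below are placeholders, not taken from the linked articles): mount the flash partitions with noatime and move the frequently written directories onto mfs, e.g. in /etc/fstab:
Code:
/dev/wd0a  /         ffs  rw,noatime           1 1
swap       /tmp      mfs  rw,nosuid,-s=65536   0 0
# note: anything on mfs is lost at reboot, so ship logs elsewhere if you need them
swap       /var/log  mfs  rw,nosuid,-s=65536   0 0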