
Bargain LTSpice/Lab laptop

Started by bitrex January 24, 2022
On 1/24/2022 1:39 PM, bitrex wrote:
> I run 32 GB on my main desktop since I upgraded to Ryzen 5 this year, which seems adequate for just about anything I throw at it.
Depends, of course, on "what you throw at it". Most of my workstations have 144G of RAM, 5T of rust. My smallest (for writing software) has just 48G. The CAD, EDA and document prep workstations can easily eat gobs of RAM to avoid paging to disk. Some of my SfM "exercises" will eat every byte that's available!
> I'd be surprised if that Fujitsu can't be upgraded to at least 16.
>
> Another nice deal for mass storage/backups of work files are these surplus Dell H700 hardware RAID controllers, if you have a spare 4x or wider PCIe slot you get 8 channels of RAID 0/1 per card, they used to be in servers probably but they work fine OOTB with Windows 10/11 and the modern Linux distros I've tried, and you don't have to muck with the OS software RAID or the motherboard's software RAID.
RAID is an unnecessary complication.  I've watched all of my peers dump their RAID configurations in favor of simple "copies" (RAID0 without the controller).  Try upgrading a drive (to a larger size).  Or, moving a drive to another machine (I have 6 identical workstations and can just pull the "sleds" out of one to move them to another machine if the first machine dies -- barring license issues).

If you experience failures, then you assign value to the mechanism that protects against those failures.  OTOH, if you *don't*, then any costs associated with those mechanisms become the dominant factor in your usage decisions.  I.e., if they make other "normal" activities (disk upgrades) more tedious, then that counts against them, nullifying their intended value.

E.g., most folks experience PEBKAC failures which RAID won't prevent.  Yet, they are still lazy about backups (which could alleviate those failures).
> Yes a RAID array isn't a backup but I don't see any reason not to have your on-site backup in RAID 1.
I use surplus "shelves" as JBOD with a SAS controller.  This allows me to also pull a drive from a shelf and install it directly in another machine without having to muck with taking apart an array, etc.

Think about it, do you ever have to deal with a (perceived) "failure" when you have lots of *spare* time on your hands?  More likely, you are in the middle of something and not keen on being distracted by a "maintenance" issue.

[In the early days of the PC, I found having duplicate systems to be a great way to verify a problem was software related vs. a "machine problem":  pull drive, install in identical machine and see if the same behavior manifests.  Also good when you lose a power supply or some other critical bit of hardware and can work around it just by moving media (I keep 3 spare power supplies for my workstations as a prophylactic measure)  :> ]
On 1/24/2022 11:39 PM, Don Y wrote:
> On 1/24/2022 6:14 PM, bitrex wrote:
>> It's not the last word in backup, why should I have to do any of that I just go get new modern controller and drives and restore from my off-site backup...
>
> Exactly.  If your drives are "suspect", then why are you still using them?  RAID is a complication that few folks really *need*.
>
> If you are using it, then you should feel 100.0% confident in taking a drive out of the array, deliberately scribbling on random sectors and then reinstalling in the array to watch it recover.  A good exercise to remind you what the process will be like when/if it happens for real.  (Just like doing an unnecessary "restore" from a backup).
>
> RAID (5+) is especially tedious (and wasteful) with large arrays.  Each of my workstations has 5T spinning.  Should I add another ~8T just to be sure that first 5T remains intact?  Or, should I have another way of handling the (low probability) event of having to restore some "corrupted" (or, accidentally deleted?) portion of the filesystem?
>
> Image your system disk (and any media that host applications).  Then, backup your working files semi-regularly.
>
> I've "lost" two drives in ~40 years:  one in a laptop that I had configured as a 24/7/365 appliance (I'm guessing the drive didn't like spinning up and down constantly; I may have been able to prolong its life by NOT letting it spin down) and another drive that developed problems in the boot record (and was too small -- 160GB -- to bother trying to salvage).
>
> [Note that I have ~200 drives deployed, here]
The advantage I see in RAID-ing the system drive and projects drive is avoidance of downtime mainly; the machine stays usable while you prepare the restore solution.

In an enterprise situation you have other machines and an enterprise-class network and Internet connection to aid in this process. I have a small home office with one "business class" desktop PC and a consumer Internet connection, and if there are a lot of files to restore the off-site backup place may have to mail you a disk.

Ideally I don't have to do that either; I just go to the local NAS nightly backup, but I may lose some of the day's work if I only have one projects drive and it has failed. Not the worst thing, but with a hot image you don't have to lose anything unless you're very unlucky and the second drive fails while you do an emergency sync.

But particularly if the OS drive goes down it's very helpful to still have a usable desktop that can assist in its own recovery.
On 1/24/2022 10:30 PM, bitrex wrote:

> The advantage I see in RAID-ing the system drive and projects drive is avoidance of downtime mainly; the machine stays usable while you prepare the restore solution.
But, in practice, how often HAS that happened? And, *why*? I.e., were you using old/shitty drives (and should have "known better")? How "anxious" will you be knowing that you are operating on a now faulted machine?
> In an enterprise situation you have other machines and an enterprise-class network and Internet connection to aid in this process. I have a small home office with one "business class" desktop PC and a consumer Internet connection, and if there are a lot of files to restore the off-site backup place may have to mail you a disk.
Get a NAS/SAN.  Or, "build" one using an old PC (that you have "outgrown").  Note that all you need the NAS/SAN/homegrown-solution to do is be "faster" than your off-site solution.

Keep a laptop in the closet for times when you need to access the outside world while your primary machine is dead (e.g., to research the problem, download drivers, etc.).

I have a little headless box that runs my DNS/TFTP/NTP/font/etc. services.  It's a pokey little Atom @ 1.6GHz/4GB with a 500G laptop drive.  Plenty fast enough for the "services" that it regularly provides.

But, it's also "available" 24/7/365 (because the services that it provides are essential to EVERY machine in the house, regardless of the time of day I might choose to use them) so I can always push a tarball onto it to take a snapshot of whatever I'm working on at the time.  (Hence the reason for such a large drive on what is actually just an appliance.)

Firing up a NAS/SAN is an extra step that I would tend to avoid -- because it's not normally up and running.  By contrast, the little Atom box always *is* (so, let it serve double-duty as a small NAS).
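That "push a tarball onto it" step is easy to script. A minimal sketch in Python, assuming the always-on box exports a share mounted at /mnt/atom/snapshots and the working tree lives in ~/projects (both paths are made up for illustration):

import shutil
import tarfile
from datetime import datetime
from pathlib import Path

# Both locations are hypothetical -- adjust to your own layout.
WORK_DIR = Path.home() / "projects"          # the tree worth snapshotting
NAS_MOUNT = Path("/mnt/atom/snapshots")      # the always-on box's exported share

def snapshot():
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    local_tmp = Path("/tmp") / f"work-{stamp}.tar.gz"

    # Create a compressed tarball of the working tree.
    with tarfile.open(local_tmp, "w:gz") as tar:
        tar.add(str(WORK_DIR), arcname=WORK_DIR.name)

    # Push it onto the appliance, then drop the local temporary copy.
    NAS_MOUNT.mkdir(parents=True, exist_ok=True)
    shutil.copy2(local_tmp, NAS_MOUNT / local_tmp.name)
    local_tmp.unlink()

if __name__ == "__main__":
    snapshot()

Run it by hand at the end of the day (or from a scheduler) and the appliance quietly accumulates dated snapshots.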
> Ideally I don't have to do that either; I just go to the local NAS nightly backup, but I may lose some of the day's work if I only have one projects drive and it has failed. Not the worst thing, but with a hot image you don't have to lose anything unless you're very unlucky and the second drive fails while you do an emergency sync.
I see data as falling in several categories, each with different recovery costs:
- the OS
- applications
- support "libraries"/collections
- working files

The OS is the biggest PITA to install/restore as its installation often means other things that depend on it must subsequently be (re)installed.  Applications represent the biggest time sink because each has licenses and configurations that need to be recreated.  Libraries/collections tend to just be *large* but are really just bandwidth limited -- they can be restored at any time to any machine without tweaks.

Working files change the most frequently but, as a result, tend to be the freshest in your mind (what did you work on, today?).  Contrast that with "how do you configure application X to work the way you want it to work and co-operate with application Y?"  One tends not to do much "original work" in a given day -- the number of bytes that YOU directly change is small so you can preserve your day's efforts relatively easily (a lot MAY change on your machine but most of those bytes were changed by some *program* that responded to your small changes!).

Backing up libraries/collections is just a waste of disk space; reinstalling from the originals (archive) takes just as long!  Applications can be restored from an image created just after you installed the most recent application/update (this also gives you a clean copy of the OS).  Restoring JUST the OS is useful if you are repurposing a machine and, thus, want to install a different set of applications.  If you're at this point, you've likely got lots of manual work ahead of you as you select and install each of those apps -- before you can actually put them to use!

I am religious about keeping *only* applications and OS on the "system disk".  So, at any time, I can reinstall the image and know that I've not "lost anything" (of substance) in the process.  Likewise, not letting applications creep onto non-system disks.  This last bit is subtly important because you want to be able to *remove* a "non-system" disk and not impact the operation of that machine.

[I've designed some "fonts" [sic] for use in my documents.  Originally, I kept the fonts -- and the associated working files used to create them -- in a folder alongside those documents.  On a non-system/working disk.  Moving those documents/fonts is then "complicated" (not unduly so) because the system wants to keep a handle to the fonts hosted on it!]
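That split also suggests how little actually needs copying each day. As a rough illustration (not anyone's actual setup), here is a small Python sketch that sweeps a projects tree and copies only files touched in the last 24 hours into a dated folder, skipping a "libraries" subtree that can always be re-staged from the original archives; every path and directory name here is hypothetical:

import shutil
import time
from pathlib import Path

SRC = Path.home() / "projects"      # working files (hypothetical location)
DEST = Path("/mnt/backup/daily")    # backup target (hypothetical location)
SKIP = {"libraries"}                # collections restorable from original archives
MAX_AGE_S = 24 * 3600               # "changed in the last day"

def backup_recent():
    stamp = time.strftime("%Y%m%d")
    now = time.time()
    for path in SRC.rglob("*"):
        if any(part in SKIP for part in path.parts):
            continue                # skip the big re-installable collections
        if path.is_file() and now - path.stat().st_mtime < MAX_AGE_S:
            target = DEST / stamp / path.relative_to(SRC)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, target)

if __name__ == "__main__":
    backup_recent()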
> But particularly if the OS drive goes down it's very helpful to still have a usable desktop that can assist in its own recovery.
Hence the laptop(s). Buy a SATA USB dock. It makes it a lot easier to use (and access) "bare" drives -- from *any* machine!
On 25/01/2022 02:03, bitrex wrote:
> On 1/24/2022 5:48 PM, David Brown wrote:
>> On 24/01/2022 21:39, bitrex wrote:
>>> Another nice deal for mass storage/backups of work files are these surplus Dell H700 hardware RAID controllers, if you have a spare 4x or wider PCIe slot you get 8 channels of RAID 0/1 per card, they used to be in servers probably but they work fine OOTB with Windows 10/11 and the modern Linux distros I've tried, and you don't have to muck with the OS software RAID or the motherboard's software RAID.
>>>
>>> Yes a RAID array isn't a backup but I don't see any reason not to have your on-site backup in RAID 1.
>>
>> You use RAID for three purposes, which may be combined - to get higher speeds (for your particular usage), to get more space (compared to a single drive), or to get reliability and better up-time in the face of drive failures.
>>
>> Yes, you should use RAID on your backups - whether it be a server with disk space for copies of data, or "manual RAID1" by making multiple backups to separate USB flash drives.  But don't imagine RAID is connected with "backup" in any way.
>>
>> From my experience with RAID, I strongly recommend you dump these kind of hardware RAID controllers.  Unless you are going for serious top-shelf equipment with battery backup, guaranteed response time by recovery engineers with spare parts and that kind of thing, use Linux software raid.  It is far more flexible, faster, more reliable and - most importantly - much easier to recover in the case of hardware failure.
>>
>> Any RAID system (assuming you don't pick RAID0) can survive a disk failure.  The important points are how you spot the problem (does your system send you an email, or does it just put on an LED and quietly beep to itself behind closed doors?), and how you can recover.  Your fancy hardware RAID controller card is useless when you find you can't get a replacement disk that is on the manufacturer's "approved" list from a decade ago.  (With Linux, you can use /anything/ - real, virtual, local, remote, flash, disk, whatever.)  And what do you do when the RAID card dies (yes, that happens)?  For many cards, the format is proprietary and your data is gone unless you can find some second-hand replacement in a reasonable time-scale.  (With Linux, plug the drives into a new system.)
>>
>> I have only twice lost data from RAID systems (and had to restore them from backup).  Both times it was hardware RAID - good quality Dell and IBM stuff.  Those are, oddly, the only two hardware RAID systems I have used.  A 100% failure rate.
>>
>> (BSD and probably most other *nix systems have perfectly good software RAID too, if you don't like Linux.)
>
> I'm considering a hybrid scheme where the system partition is put on the HW controller in RAID 1, non-critical files I want fast access to like audio/video are on HW RAID 0, and the more critical long-term on-site mass storage that's not accessed too much is in some kind of software redundant-RAID equivalent, with changes synced to a cloud backup service.
>
> That way you can boot from something other than the dodgy motherboard software-RAID but you're not dead in the water if the OS drive fails, and can probably use the remaining drive to create a today-image of the system partition to restore from.
>
> Worst-case you restore the system drive from your last image or from scratch if you have to; restoring the system drive from scratch isn't a crisis, but it is seriously annoying, and most people don't do system drive images every day
I'm sorry, but that sounds a lot like you are over-complicating things because you have read somewhere that "hardware raid is good", "raid 0 is fast", and "software raid is unreliable" - but you don't actually understand any of it.  (I'm not trying to be insulting at all - everyone has limited knowledge that is helped by learning more.)  Let me try to clear up a few misunderstandings, and give some suggestions.

First, I recommend you drop the hardware controllers.  Unless you are going for a serious high-end device with battery backup and the rest, and are happy to keep a spare card on-site, it will be less reliable, slower, less flexible and harder for recovery than Linux software RAID - by significant margins.

(I've been assuming you are using Linux, or another *nix.  If you are using Windows, then you can't do software raid properly and have far fewer options.)

Secondly, audio and visual files do not need anything fast unless you are talking about ridiculously high quality video, or serving many clients at once.  4K video wants about 25 Mbps bandwidth - a spinning rust hard disk will usually give you about 150 MBps - about 60 times your requirement.  Using RAID 0 will pointlessly increase your bandwidth while making the latency worse (especially with a hardware RAID card).

Then you want other files on a software RAID with redundancy.  That's fine, but your whole system now needs at least 6 drives and a specialised controller card when you could get better performance and better recoverability with 2 drives and software RAID.

You do realise that Linux software RAID is unrelated to "motherboard RAID"?
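To put a rough number on that margin, using only the figures above (25 Mbps per 4K stream, about 150 MB/s sustained from a single spinning disk); the exact ratio depends on how you count interface overhead, but the order of magnitude is the point:

# Back-of-the-envelope: how many 4K streams one spinning disk could feed.
VIDEO_STREAM_MBIT_S = 25     # 4K video bit rate, megabits per second (figure above)
DISK_MBYTE_S = 150           # sustained sequential read, megabytes per second

disk_mbit_s = DISK_MBYTE_S * 8
streams = disk_mbit_s // VIDEO_STREAM_MBIT_S
print(f"~{streams} concurrent 4K streams from a single disk")    # ~48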
On 25/01/2022 05:54, Don Y wrote:
> RAID is an unnecessary complication.  I've watched all of my peers dump their RAID configurations in favor of simple "copies" (RAID0 without the controller).  Try upgrading a drive (to a larger size).  Or, moving a drive to another machine (I have 6 identical workstations and can just pull the "sleds" out of one to move them to another machine if the first machine dies -- barring license issues).
If you have only two disks, then it is much better to use one for an independent copy than to have them as RAID.  RAID (not RAID0, which has no redundancy) avoids downtime if you have a hardware failure on a drive.  But it does nothing to help with user error, file-system corruption, malware attacks, etc.  A second independent copy of the data is vastly better there.

But the problems you mention are from hardware RAID cards.  With Linux software raid you can usually upgrade your disks easily (full re-striping can take a while, but that goes on in the background).  You can move your disks to other systems - I've done that, and it's not a problem.  Some combinations are harder for upgrades if you go for more advanced setups - such as striped RAID10, which can let you take two spinning rust disks and get lower latency and higher read throughput than a hardware RAID0 setup could possibly do, while also having full redundancy (at the expense of slower writes).
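On the monitoring point raised earlier (does the box actually tell you when an array degrades?), Linux md state is exposed in /proc/mdstat, so a few lines of Python are enough for a cron-driven check. This is only a sketch of the idea, with deliberately crude parsing; wire the output into whatever notification you actually read:

from pathlib import Path

def degraded_arrays(mdstat="/proc/mdstat"):
    """Return names of md arrays whose member status (e.g. [UU]) shows a missing disk."""
    bad, current = [], None
    for line in Path(mdstat).read_text().splitlines():
        if line.startswith("md"):
            current = line.split()[0]                  # e.g. "md0"
        elif current and line.strip().endswith("]"):
            status = line.rsplit("[", 1)[-1]           # e.g. "UU]" or "U_]"
            if "_" in status:
                bad.append(current)
    return bad

if __name__ == "__main__":
    arrays = degraded_arrays()
    print("degraded arrays:", ", ".join(arrays) if arrays else "none")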
> If you experience failures, then you assign value to the mechanism that protects against those failures.  OTOH, if you *don't*, then any costs associated with those mechanisms become the dominant factor in your usage decisions.  I.e., if they make other "normal" activities (disk upgrades) more tedious, then that counts against them, nullifying their intended value.
Such balances and trade-offs are important to consider. It sounds like you have redundancy from having multiple workstations - it's a lot more common to have a single workstation, and thus redundant disks can be a good idea.
> E.g., most folks experience PEBKAC failures which RAID won't prevent.  Yet, they are still lazy about backups (which could alleviate those failures).
That is absolutely true - backups are more important than RAID.
>> Yes a RAID array isn't a backup but I don't see any reason not to have your on-site backup in RAID 1.
>
> I use surplus "shelves" as JBOD with a SAS controller.  This allows me to also pull a drive from a shelf and install it directly in another machine without having to muck with taking apart an array, etc.
>
> Think about it, do you ever have to deal with a (perceived) "failure" when you have lots of *spare* time on your hands?  More likely, you are in the middle of something and not keen on being distracted by a "maintenance" issue.
Thus the minimised downtime you get from RAID is a good idea!
> [In the early days of the PC, I found having duplicate systems to be a great way to verify a problem was software related vs. a "machine problem":  pull drive, install in identical machine and see if the same behavior manifests.  Also good when you lose a power supply or some other critical bit of hardware and can work around it just by moving media (I keep 3 spare power supplies for my workstations as a prophylactic measure)  :> ]
Having a few spare parts on-hand is useful.
On 1/25/2022 11:18 AM, David Brown wrote:
> On 25/01/2022 02:03, bitrex wrote:
>> [...]
>>
>> I'm considering a hybrid scheme where the system partition is put on the HW controller in RAID 1, non-critical files I want fast access to like audio/video are on HW RAID 0, and the more critical long-term on-site mass storage that's not accessed too much is in some kind of software redundant-RAID equivalent, with changes synced to a cloud backup service.
>>
>> That way you can boot from something other than the dodgy motherboard software-RAID but you're not dead in the water if the OS drive fails, and can probably use the remaining drive to create a today-image of the system partition to restore from.
>>
>> Worst-case you restore the system drive from your last image or from scratch if you have to; restoring the system drive from scratch isn't a crisis, but it is seriously annoying, and most people don't do system drive images every day
>
> I'm sorry, but that sounds a lot like you are over-complicating things because you have read somewhere that "hardware raid is good", "raid 0 is fast", and "software raid is unreliable" - but you don't actually understand any of it.  (I'm not trying to be insulting at all - everyone has limited knowledge that is helped by learning more.)  Let me try to clear up a few misunderstandings, and give some suggestions.
Well, Windows software raid is what it is, and unfortunately on my main desktop I'm constrained to Windows. On another PC, like a NAS box I build myself, I have other options.
> First, I recommend you drop the hardware controllers.  Unless you are going for a serious high-end device with battery backup and the rest, and are happy to keep a spare card on-site, it will be less reliable, slower, less flexible and harder for recovery than Linux software RAID - by significant margins.
It seems shocking that Linux software RAID could approach the performance of a late-model cached hardware controller that can spend its entire existence optimizing the performance of that cache.  But I don't know how to do the real-world testing for my own use case.  I think they probably compare well in benchmarks.
> (I've been assuming you are using Linux, or another *nix.  If you are using Windows, then you can't do software raid properly and have far fewer options.)
Not on my main desktop, unfortunately. I run Linux on my laptops. If I built a second PC for a file server I would put Linux on it, but my "NAS" backup is a dumb eSATA external drive at the moment.
> Secondly, audio and visual files do not need anything fast unless you are talking about ridiculously high quality video, or serving many clients at once.  4K video wants about 25 Mbps bandwidth - a spinning rust hard disk will usually give you about 150 MBps - about 60 times your requirement.  Using RAID 0 will pointlessly increase your bandwidth while making the latency worse (especially with a hardware RAID card).
Yes, the use cases are important, sorry for not mentioning it but I didn't expect to get into a discussion about it in the first place! Sometimes I stream many dozens of audio files simultaneously from disk, e.g.

<https://www.spitfireaudio.com/shop/a-z/bbc-symphony-orchestra-core/>

Sequential read/write performance on a benchmark for two 2TB 7200 RPM drives (https://www.amazon.com/gp/product/B07H2RR55Q/) in RAID 0 on the PERC H700 controller seems rather good on Windows, approaching that of my OS SSD:

<https://imgur.com/a/2svt7nY>

Naturally the random 4k R/Ws suck. I haven't profiled it against the equivalent for Windows Storage Spaces.
>> Then you want other files on a software RAID with redundancy.  That's fine, but your whole system now needs at least 6 drives and a specialised controller card when you could get better performance and better recoverability with 2 drives and software RAID.
>>
>> You do realise that Linux software RAID is unrelated to "motherboard RAID"?
Yep
On 1/25/2022 3:43 PM, bitrex wrote:

> Sequential read/write performance on a benchmark for two 2TB 7200 RPM drives (https://www.amazon.com/gp/product/B07H2RR55Q/) in RAID 0 on the PERC H700 controller seems rather good on Windows, approaching that of my OS SSD:
>
> <https://imgur.com/a/2svt7nY>
These are pretty consumer 7200 RPM drives too, not high-end by any means.
On 25/01/2022 21:43, bitrex wrote:
> On 1/25/2022 11:18 AM, David Brown wrote:
>> [...]
>>
>> I'm sorry, but that sounds a lot like you are over-complicating things because you have read somewhere that "hardware raid is good", "raid 0 is fast", and "software raid is unreliable" - but you don't actually understand any of it.  (I'm not trying to be insulting at all - everyone has limited knowledge that is helped by learning more.)  Let me try to clear up a few misunderstandings, and give some suggestions.
>
> Well, Windows software raid is what it is, and unfortunately on my main desktop I'm constrained to Windows.
OK.  On desktop Windows, "Intel motherboard RAID" is as good as it gets for increased reliability and uptime.  It is more efficient than hardware raid, and the formats used are supported by any other motherboard and also by Linux md raid - thus if the box dies, you can connect the disks to a Linux machine (by SATA-to-USB converter or whatever is convenient) and have full access.  Pure Windows software raid can only be used on non-system disks, AFAIK, though details vary between Windows versions.

These days, however, you get higher reliability (and much higher speed) with just a single M2 flash disk rather than RAID1 of two spinning rust disks.  Use something like Clonezilla to make a backup image of the disk to have a restorable system image.
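The "restorable image" idea is the key part, whatever tool produces it. Just to illustrate the concept (Clonezilla does this far better, with per-filesystem awareness and sanity checks), here is a crude raw-image sketch in Python; the device and destination paths are hypothetical, it needs root, and the source disk should not be in use while you read it:

import gzip
import shutil

SRC_DISK = "/dev/sdX"                     # hypothetical system disk (read-only here)
DEST_IMAGE = "/mnt/backup/system.img.gz"  # hypothetical image destination

def image_disk(src=SRC_DISK, dest=DEST_IMAGE):
    """Copy the raw contents of a block device into a compressed image file."""
    with open(src, "rb") as disk, gzip.open(dest, "wb") as image:
        shutil.copyfileobj(disk, image, length=4 * 1024 * 1024)

if __name__ == "__main__":
    image_disk()

Restoring is the same copy in the other direction, which is all a "restore from image" really boils down to.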
> It seems shocking that Linux software RAID could approach the performance of a late-model cached hardware controller that can spend its entire existence optimizing the performance of that cache.  But I don't know how to do the real-world testing for my own use case.  I think they probably compare well in benchmarks.
Shocking or not, that's the reality.  (This is in reference to Linux md software raid - I don't know details of software raid on other systems.)  There was a time when hardware raid cards were much faster, but many things have changed:

1. It used to be a lot faster to do the RAID calculations (xor for RAID5, and more complex operations for RAID6) in dedicated ASICs than in processors.  Now processors can handle these with a few percent usage of one of their many cores.

2. Saturating the bandwidth of multiple disks used to require a significant proportion of the IO bandwidth of the processor and motherboard, so that having the data duplication for redundant RAID handled by a dedicated card reduced the load on the motherboard buses.  Now it is not an issue - even with flash disks.

3. It used to be that hardware raid cards reduced the latency for some accesses because they had dedicated cache memory (this was especially true for Windows, which has always been useless at caching disk data compared to Linux).  Now with flash drives, the extra card /adds/ latency.

4. Software raid can make smarter use of multiple disks, especially when reading.  For a simple RAID1 (duplicate disks), a hardware raid card can only handle the reads as being from a single virtual disk.  With software RAID1, the OS can coordinate accesses to all disks simultaneously, and use its knowledge of the real layout to reduce latencies.

5. Hardware raid cards have very limited and fixed options for raid layout.  Software raid can let you have options that give different balances for different needs.  For a read-mostly layout on two disks, Linux raid10 can give you better performance than raid0 (hardware or software) while also having redundancy.

<https://en.wikipedia.org/wiki/Non-standard_RAID_levels#LINUX-MD-RAID-10>
> Yes, the use cases are important, sorry for not mentioning it but I didn't expect to get into a discussion about it in the first place! Sometimes I stream many dozens of audio files simultaneously from disk, e.g.
>
> <https://www.spitfireaudio.com/shop/a-z/bbc-symphony-orchestra-core/>
>
> Sequential read/write performance on a benchmark for two 2TB 7200 RPM drives (https://www.amazon.com/gp/product/B07H2RR55Q/) in RAID 0 on the PERC H700 controller seems rather good on Windows, approaching that of my OS SSD:
>
> <https://imgur.com/a/2svt7nY>
>
> Naturally the random 4k R/Ws suck. I haven't profiled it against the equivalent for Windows Storage Spaces.
SATA is limited to 500 MB/s.  A good spinning rust can get up to about 200 MB/s for continuous reads.  RAID0 of two spinning rusts can therefore get fairly close to the streaming read speed of a SATA flash SSD.

Note that a CD-quality uncompressed audio stream is 0.17 MB/s.  24-bit, 192 kHz uncompressed is about 1 MB/s.  That is, a /single/ spinning rust disk (with an OS that will cache sensibly) will handle nearly 200 such streams.

Now for a little bit on prices, which I will grab from Newegg as a random US supplier, using random component choices and approximate prices to give a rough idea.

2TB 7200rpm spinning rust - $50
Perc H700 (if you can find one) - $150
2TB 2.5" SSD - $150
2TB M2 SSD - $170

So for the price of your hardware raid card and two spinning rusts you could get, for example:

1. An M2 SSD with /vastly/ higher speeds than your RAID0, higher reliability, and with a format that can be read on any modern computer (at most you might have to buy a USB-to-M2 adaptor ($13), rather than an outdated niche raid card).

2. 4 spinning rusts in a software raid10 setup - faster, bigger, and better reliability.

3. A 2.5" SSD and a spinning rust, connected in a Linux software RAID1 pair with "write-behind" on the rust.  You get the read latency benefits of the SSD, the combined streaming throughput of both, writes go first to the SSD and the slow rust write speed is not a bottleneck.

There is no scenario in which hardware raid comes out on top, compared to Linux software raid.  Even if I had the raid card and the spinning rust, I'd throw out the raid card and have a better result.
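Those per-stream numbers drop straight out of sample rate and bit depth; a quick Python check for uncompressed stereo PCM, using the ~200 MB/s disk figure above:

# Per-stream data rates for uncompressed stereo PCM, and streams per disk.
def stream_mb_s(sample_rate_hz, bits_per_sample, channels=2):
    return sample_rate_hz * bits_per_sample * channels / 8 / 1e6

cd = stream_mb_s(44_100, 16)       # CD quality        -> ~0.18 MB/s
hires = stream_mb_s(192_000, 24)   # 24-bit / 192 kHz  -> ~1.15 MB/s
DISK_MB_S = 200                    # good spinning disk, sequential reads

print(f"CD stream:     {cd:.2f} MB/s")
print(f"24/192 stream: {hires:.2f} MB/s")
print(f"24/192 streams per disk: ~{DISK_MB_S / hires:.0f}")

(In practice, seeking between that many files is what limits spinning rust, not raw bandwidth - hence the note above about an OS that caches sensibly.)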
On Tuesday, January 25, 2022 at 12:43:35 PM UTC-8, bitrex wrote:
> On 1/25/2022 11:18 AM, David Brown wrote:
>> First, I recommend you drop the hardware controllers. Unless you are going for a serious high-end device...
> It seems shocking that Linux software RAID could approach the performance of a late-model cached hardware controller that can spend its entire existence optimizing the performance of that cache.
Not shocking at all; 'the performance' that matters is rarely similar to measured benchmarks.  Even seasoned computer users can misunderstand their needs and needlessly multiply their overhead costs chasing an improvement in operation.

Pro photographers, sound engineers, and the occasional video edit shop will need one-user big, fast disks, but in the modern market, the smaller and slower disks ARE big and fast, in absolute terms.
On 1/26/2022 1:04 PM, whit3rd wrote:
> Pro photographers, sound engineers, and the occasional video edit shop will need one-user big, fast disks, but in the modern market, the smaller and slower disks ARE big and fast, in absolute terms.
More importantly, they are very reliable.

I come across thousands (literally) of scrapped machines (disks) every week.  I've built a gizmo to wipe them and test them in the process.  The number of "bad" disks is a tiny fraction; most of our discards are disks that we deem too small to bother with (250G or smaller).

As most come out of corporate settings (desktops being consumer-quality while servers/arrays being enterprise), they tend to have high PoH figures... many exceeding 40K (4-5 years at 24/7).  Still, no consequences to data integrity.

Surely, if these IT departments feared for data on the thousands of seats they maintain, they would argue for the purchase of mechanisms to reduce that risk (as the IT department specs the devices, if they see high failure rates, all of their consumers will bitch about the choice that has been IMPOSED upon them!)