Thanks for the advice, chaps. My gut feeling is to go for a couple of additional internal drives in the first instance.
As to the networking question posed by Malcolm... networking and design of high-availability systems is what I do in my "day job", so I'll have a stab at answering.
[quote]Perhaps someone at PSquared can indicate whether running two network switches is a good idea. Is it easy to set up? Can audio be routed over one network but fail over to the other if there's a failure?[/quote]
Running 2 networks in parallel is trivial. The difficult bit is persuading your applications to use them both!
Normally when you configure a physical interface under Windows, you assign an IP address to it, so a machine with 2 interfaces would have 2 IP addresses. However, since you only use one address (or name) when configuring the share, you don't get automatic failover to the surviving interface when one fails, even though a physical link is still up.
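As a minimal sketch of what I mean (Windows netsh syntax; the interface names, addresses, and share name are all hypothetical):

```shell
rem Each NIC gets its own address on its own subnet:
netsh interface ip set address "LAN-A" static 192.168.1.10 255.255.255.0
netsh interface ip set address "LAN-B" static 192.168.2.10 255.255.255.0

rem ...but the share is mapped against ONE of those addresses, so if
rem LAN-A fails, the mapping fails with it, even though LAN-B is fine:
net use M: \\192.168.1.10\Audio
```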
There are several ways round this. None trivial. One option would be to set up 2 physical networks and 2 address ranges, run an active routing protocol between the 2 boxes, assign internal loopback addresses on another 2 different networks to the 2 PCs, and make sure all shares reference the loopback rather than the physical interface addresses. If you have read what I've just written and don't immediately understand it, then this probably isn't the solution for you...
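To make that slightly more concrete, here's a sketch of just the loopback part, assuming a Microsoft Loopback Adapter has been installed and renamed "Loop0" (addresses hypothetical; configuring the routing protocol itself, e.g. OSPF under RRAS, is a separate exercise):

```shell
rem The loopback gets a host address on a third network:
netsh interface ip set address "Loop0" static 10.0.0.1 255.255.255.255

rem The routing protocol then advertises 10.0.0.1 over whichever
rem physical network is currently alive, and shares reference the
rem loopback rather than either physical interface:
net use M: \\10.0.0.1\Audio
```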
Another solution - and one that I use regularly - is to use some sort of proprietary interface failover, where two separate physical interfaces are treated as one logical device. The one we use is the proprietary HP "teaming" product.
You could also probably do something similar using IEEE 802.3ad link aggregation.
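Note that link aggregation needs support at both ends - the two switch ports have to be configured as an aggregation group as well. On a Linux box, for example, an 802.3ad bond looks roughly like this (iproute2 syntax; interface names and addresses are hypothetical):

```shell
# Create a bond in 802.3ad (LACP) mode with link monitoring:
ip link add bond0 type bond mode 802.3ad miimon 100

# Enslave the two physical NICs to the bond:
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0

# Bring it up and address it as a single logical interface:
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0
```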
But, like John-Michael, I'd question the need for doing this in a Myriad environment. I'm doing this stuff in a data centre, with millions of pounds worth of kit, and tens of thousands of people using it simultaneously. If it breaks, several thousand people can't do their job.
However, in a smaller environment, where there perhaps isn't the same level of technical support available, I'd question the need to go to such complexity. The concern is that by introducing extra "resilience", you by necessity make the system more complicated, and very quickly you can get to the point where you're actually decreasing the reliability!
An "enterprise-class" simple L2 network switch is obviously much more expensive than the 50 quid cheapies that you can pick up for home use. But the mean time between failures for these units is typically around 50 years!
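To put a number on that: assuming a constant failure rate, a 50-year MTBF works out to roughly a 2% chance of failure in any given year:

```shell
# P(failure within 1 year) = 1 - exp(-t/MTBF), with t = 1 year, MTBF = 50 years
awk 'BEGIN { mtbf = 50; p = 1 - exp(-1/mtbf); printf "annual failure probability: %.1f%%\n", p*100 }'
# prints: annual failure probability: 2.0%
```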
It's actually fairly unusual for the electronics of an ethernet switch to fail - the most common problems are PSU failure or fan failure. If I were building an ultra-resilient system, I'd build it with dual everything, but for a mid-level system, where I wanted high reliability without overkill, I'd probably be speccing a single switch, with dual hot-swappable PSUs and fans.
Bruce.
In charge of wires & stuff
Celtic Music Radio http://www.celticmusicradio.net