RAID specs (from Where's the next version? thread)

Forum for anything else which doesn't fit in the above forums. Site feedback, random talk, whatever, are welcome.
User avatar
RyanVM
Site Admin
Posts: 5190
Joined: Tue Nov 23, 2004 6:03 pm
Location: Pennsylvania
Contact:

RAID specs (from Where's the next version? thread)

Post by RyanVM » Fri Jul 15, 2005 11:44 am

This is the third or fourth hard drive I've had fail on me. I've been bitten by data loss enough at this point that I do keep regular backups of everything (which is why all my important stuff is safely on other drives as well). It's kinda funny, I've got a 100GB WD SE that I've had since they first came out (it was the first of the SE drives) that ran me nearly $300, and it's still running like a champ in my system. It's nearly 5 years old at this point.

My plan is to eventually move to RAID1 when I can afford it, but not for this round. My goal for my next total system upgrade is to use my Raptor as the primary OS drive and have a 500-600GB RAID 0+1 array as my storage drive. That's what donations I've received through this site are being saved for, incidentally :P. I'll also be selling off some hardware at some point to get some more funds to upgrade. And my birthday's less than a week away, so that'll hopefully bring in some funds 8).
Last edited by RyanVM on Mon Jul 25, 2005 7:43 am, edited 4 times in total.

User avatar
5eraph
Site Admin
Posts: 4621
Joined: Tue Jul 05, 2005 9:38 pm
Location: Riverview, MI USA

Post by 5eraph » Fri Jul 15, 2005 12:36 pm

I know the feeling :/ . I've had at least 6 die on me personally in 20 years. 2 WDs, 2 Maxtors, and 2 Seagate SCSIs (with 5 year warranties :D).

If you're going to RAID and don't yet have a controller card, try RAID 5. You can chain 3 or more drives together, you still get the benefits of striping and redundancy, and you only ever lose the capacity of one physical drive on the chain to hold parity data. The only thing you'll lose compared to a RAID 0+1 is the ability to access the array until a replacement drive is connected and the array is rebuilt (unless you have a hot spare, but you still have to rebuild).

RAID 5 controllers are getting pretty cheap nowadays.
Last edited by 5eraph on Fri Jul 15, 2005 9:01 pm, edited 1 time in total.

Protagonist.
Posts: 162
Joined: Tue Jun 14, 2005 12:02 am

Post by Protagonist. » Fri Jul 15, 2005 2:17 pm

5eraph wrote:I know the feeling :/ . I've had at least 6 die on me personally in 20 years. 2 WDs, 2 Maxtors, and 2 Seagate SCSIs (with 5 year warranties :D).

If you're going to RAID and don't yet have a controller card, try RAID 5. You can chain 3 or more drives together, you still get the benefits of striping and redundancy, and you only ever lose the capacity of one physical drive on the chain to hold parity data. The only thing you'll lose compared to a RAID 10 (0+1) is the ability to access the array until a replacement drive is connected and the array is rebuilt (unless you have a hot spare, but you still have to rebuild).

RAID 5 controllers are getting pretty cheap nowadays.
Naw man, RAID 5 isn't as efficient. RAID 0+1 is the way to go.

A lot of the performance gurus nowadays are using Hitachi SATA II drives in RAID 0. Of course you need a mobo with SATA II or a separate controller to do that.

User avatar
5eraph
Site Admin
Posts: 4621
Joined: Tue Jul 05, 2005 9:38 pm
Location: Riverview, MI USA

Post by 5eraph » Fri Jul 15, 2005 2:21 pm

It's more efficient. You don't waste half your capacity; with 3 drives you lose at most 33%.

Protagonist.
Posts: 162
Joined: Tue Jun 14, 2005 12:02 am

Post by Protagonist. » Fri Jul 15, 2005 2:23 pm

5eraph wrote:It's more efficient. You don't waste half your capacity; with 3 drives you lose at most 33%.
What do you mean waste capacity? Assuming you get all drives the same size in a RAID 0 + 1 you won't have any extra capacity. Don't you actually lose capacity with a RAID 5 assuming you use identical drives?

User avatar
5eraph
Site Admin
Posts: 4621
Joined: Tue Jul 05, 2005 9:38 pm
Location: Riverview, MI USA

Post by 5eraph » Fri Jul 15, 2005 2:30 pm

Not quite. RAID 1 is mirroring: to mirror 2 drives you must have a total of four, but you only get the useful storage capacity of 2. RAID 5 gets around that by using the space of one drive as parity to protect against loss on any other drive in the array. It gets more efficient with larger arrays.

Protagonist.
Posts: 162
Joined: Tue Jun 14, 2005 12:02 am

Post by Protagonist. » Fri Jul 15, 2005 2:41 pm

5eraph wrote:Not quite. RAID 1 is mirroring: to mirror 2 drives you must have a total of four, but you only get the useful storage capacity of 2. RAID 5 gets around that by using the space of one drive as parity to protect against loss on any other drive in the array. It gets more efficient with larger arrays.
Yea so you only get the space of what that one drive is worth.

i.e. if you had 3 80gb drives, you would only have 80gb of storage I assume, or at least only 80gb actually backed up.

User avatar
5eraph
Site Admin
Posts: 4621
Joined: Tue Jul 05, 2005 9:38 pm
Location: Riverview, MI USA

Post by 5eraph » Fri Jul 15, 2005 2:56 pm

It's the other way around. If you have 3 x 80GB drives in a RAID 5 you have the useful capacity of 160GB.

With 4 x 80GB = 240GB,
With 5 x 80GB = 320GB,
etc.

The space of only one drive is ever "wasted" on data used to rebuild the array (parity data). Mirroring always wastes half because an operational copy is always maintained, whereas a RAID 5 array must be rebuilt upon failure of a drive before the array's data can be accessed again.
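
For comparison, here's a quick back-of-the-envelope sketch of the usable capacity under each scheme (plain Python, assuming identical drives; the helper names are just illustrative):

Code: Select all

def usable_raid5_gb(num_drives, drive_gb):
    # RAID 5: the equivalent of one drive's capacity goes to distributed parity.
    assert num_drives >= 3
    return (num_drives - 1) * drive_gb

def usable_raid01_gb(num_drives, drive_gb):
    # RAID 0+1: half the drives are a mirror of the other half.
    assert num_drives >= 4 and num_drives % 2 == 0
    return (num_drives // 2) * drive_gb

for n in (3, 4, 5):
    print(n, "x 80GB in RAID 5  ->", usable_raid5_gb(n, 80), "GB usable")
print("4 x 80GB in RAID 0+1 ->", usable_raid01_gb(4, 80), "GB usable")

That prints 160, 240 and 320GB usable for RAID 5, and 160GB for 4 drives in RAID 0+1, matching the numbers above.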

Protagonist.
Posts: 162
Joined: Tue Jun 14, 2005 12:02 am

Post by Protagonist. » Fri Jul 15, 2005 3:55 pm

5eraph wrote:The space of only one drive is ever "wasted" on data used to rebuild the array (parity data). Mirroring always wastes half because an operational copy is always maintained, whereas a RAID 5 array must be rebuilt upon failure of a drive before the array's data can be accessed again.
Well, that sounds like a bit of an annoyance right there compared to 0+1. But what I'm saying is: OK, with 3 80GB drives you get 160GB of storage, but only 80GB of that storage is actually going to be recoverable.

User avatar
5eraph
Site Admin
Posts: 4621
Joined: Tue Jul 05, 2005 9:38 pm
Location: Riverview, MI USA

Post by 5eraph » Fri Jul 15, 2005 9:04 pm

Maybe that'll happen if you use it, Protagonist. :D I feel like I've been trolled.

Everything you need to know is here:

Code: Select all

http://www.acnc.com/04_01_05.html
Last edited by 5eraph on Sat Jul 16, 2005 12:04 am, edited 1 time in total.

Protagonist.
Posts: 162
Joined: Tue Jun 14, 2005 12:02 am

Post by Protagonist. » Fri Jul 15, 2005 10:22 pm

I still don't get it. How can you have 3 80GB drives (240GB total) and yet have 160GB of storage plus full parity for that 160GB? Wouldn't that take 320GB?

User avatar
5eraph
Site Admin
Posts: 4621
Joined: Tue Jul 05, 2005 9:38 pm
Location: Riverview, MI USA

Post by 5eraph » Fri Jul 15, 2005 11:23 pm

Parity isn't what you're thinking.

Think of it this way...

This is over-simplified but it's the easiest way I can explain. We have 3 numbers we want to store: 24, 25, 41. We don't want to lose any of these numbers so we have to have a way of coming up with one of them should it be lost. Let's add them up and call it parity: 24+25+41=90. The number 90 is our parity data.

Now we lose one. We have these remaining: 24, 41, and the sum 90. We can rebuild our data set using our parity information:

24 + X + 41 = 90
X = 90 - 24 - 41
X = 25

The RAID 5 controller does something like this millions of times, storing all of its parity information in one block per stripe (all blocks must be of equal size). The fact that the controller can recreate the contents of any single missing block in every stripe from what remains gives the array fault tolerance, so long as we don't have multiple drive failures at once.
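
In real arrays the parity block is typically a bitwise XOR of the data blocks rather than a sum, but it has the same "solve for the missing value" property. Here's a minimal sketch of the idea (plain Python, with made-up 4-byte blocks; this isn't how any particular controller implements it):

Code: Select all

from functools import reduce

def parity(blocks):
    # Bytewise XOR of all data blocks gives the parity block.
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def rebuild(surviving_blocks, parity_block):
    # XORing the survivors with the parity recovers the one missing block.
    return parity(surviving_blocks + [parity_block])

d0, d1, d2 = b"\x18\x01\xaa\xff", b"\x19\x02\xbb\x00", b"\x29\x03\xcc\x7f"
p = parity([d0, d1, d2])

# Pretend the drive holding d1 fails: the controller can recompute it.
assert rebuild([d0, d2], p) == d1

XOR is used because it's its own inverse, so the same operation both builds the parity and rebuilds a missing block.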

Protagonist.
Posts: 162
Joined: Tue Jun 14, 2005 12:02 am

Post by Protagonist. » Sat Jul 16, 2005 1:52 am

So basically you lose 25% of the storage space compared to having all the drives in RAID 0, versus losing 50% of the space with RAID 1 or RAID 0+1, in exchange for redundancy. But you have to rebuild the array before you can access the data, and it requires a special controller or software RAID 5 (yuck!). Right? :D

User avatar
5eraph
Site Admin
Posts: 4621
Joined: Tue Jul 05, 2005 9:38 pm
Location: Riverview, MI USA

Post by 5eraph » Sat Jul 16, 2005 3:02 am

With RAID 5 using 4 drives it's true that you lose 25% of total capacity for parity. With more drives in the array less space is used for parity: 5 drives = 20%, 6 drives = 16%, and so on.

I was wrong about the availability of failed RAID 5 arrays. It's quite possible to access the data in the array without rebuilding after a drive failure, but it will take extra time for the controller to recreate the missing data on-the-fly as needed. My understanding was that the array went offline until the rebuild was complete.

According to Microsoft, using RAID 5 in Windows XP requires additional hardware or software. I would recommend a hardware solution for accelerated parity operations and greater speed. Windows XP can create and use RAID 0 (striped) and RAID 1 (mirrored) arrays without additional hardware or software.

An interesting rumor I've heard is that all Windows NT versions from 4.0 upward have built-in software support to create and use RAID 5 arrays, but it was disabled in Windows XP (NT v5.1). An article at Tom's Hardware Guide seems to prove the rumor was accurate and explains how you can make Windows XP act as a RAID 5 controller using only a hex editor:
Any WindowsXP system is technically capable of running RAID arrays, as long as the desired amount of hard drives can be attached. It does not matter what hardware you are using. For RAID 5, merely three files need to be altered.
Last edited by 5eraph on Mon Nov 26, 2007 2:43 pm, edited 1 time in total.

User avatar
buletov
Posts: 380
Joined: Tue Feb 15, 2005 11:30 am

Post by buletov » Fri Jul 22, 2005 6:45 pm

I'm totally with 5eraph on this one.

Note that RAID5 with 4 drives is also faster than RAID0+1 with 4 drives:

RAID0+1: 30MB = 15MB written to each of the four drives (60MB total)
RAID5: 30MB = 10MB written to each of the four drives (40MB total)

This means the time to store 30MB on a RAID0+1 setup is the same as the time one disk needs to store 15MB. On a RAID5 setup, the time is equivalent to the time required to save only 10MB, since the data is striped across three of the disks and only one disk per stripe stores the calculated parity data.
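
A quick sanity check of those numbers (plain Python, assuming a 4-drive array and large full-stripe writes; real controllers complicate this with read-modify-write on partial stripes):

Code: Select all

def per_drive_write_mb(data_mb, num_drives, raid_level):
    # How much each drive writes for a large sequential (full-stripe) write.
    if raid_level == "0+1":
        # Data is striped across half the drives, mirrored to the other half.
        return data_mb / (num_drives // 2)
    if raid_level == "5":
        # Data is striped across n-1 drives; parity adds one block per stripe.
        return data_mb / (num_drives - 1)
    raise ValueError(raid_level)

print(per_drive_write_mb(30, 4, "0+1"))  # 15.0 MB per drive, 60 MB total
print(per_drive_write_mb(30, 4, "5"))    # 10.0 MB per drive, 40 MB total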

Also note that all disks are actually used for real data storage, not just one. I believe it works something like this:

disks:
1 2 3 4
D D D P
D D P D
D P D D
P D D D
(and so on)

So, if any of the drives fails, simply replace it and rebuild the array. This way the parity information is itself protected by the real data: if the disk holding a stripe's parity fails, that parity can be recomputed from the data blocks. And the rebuild time is very fast.
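
Here's a tiny sketch of that rotating layout (plain Python, purely illustrative), mapping each stripe to the disk that holds its parity block:

Code: Select all

def parity_disk(stripe, num_disks):
    # Parity rotates one disk to the left each stripe, as in the table above,
    # so no single disk ends up holding all of the parity.
    return (num_disks - 1 - stripe) % num_disks

for stripe in range(4):
    row = ["P" if disk == parity_disk(stripe, 4) else "D" for disk in range(4)]
    print(" ".join(row))

# Prints:
# D D D P
# D D P D
# D P D D
# P D D D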
Never know what life is gonna throw at you.

Protagonist.
Posts: 162
Joined: Tue Jun 14, 2005 12:02 am

Post by Protagonist. » Fri Jul 22, 2005 11:54 pm

How much would it cost for a decent controller though?

Protagonist.
Posts: 162
Joined: Tue Jun 14, 2005 12:02 am

Post by Protagonist. » Sat Jul 23, 2005 12:02 am

Ok I did a little searching and I think this would be a good setup.

This controller (which handles SATA II and RAID 5):
http://www.newegg.com/Product/Product.a ... 6816118029

With 4 of these drives in RAID 5:
http://www.newegg.com/Product/Product.a ... 6822144415

So then you would have a lot of storage, a lot of speed, and also parity as well. :)

User avatar
5eraph
Site Admin
Posts: 4621
Joined: Tue Jul 05, 2005 9:38 pm
Location: Riverview, MI USA

Post by 5eraph » Sun Jul 24, 2005 5:21 pm

It sounds like a good setup, Protagonist.

I was considering that controller, among others, for a similar setup. My only problem is that I'm not sure how that controller will function in a 32-bit PCI slot. I know there will be a possibly huge performance bottleneck moving larger files to and from the array through the PCI bus. I had a similar setup with an Adaptec 29160 Ultra160 that was PCI-X in a PCI slot, but it wasn't as fast as I'd hoped it would be with 3 Seagate 15kRPM SCSI drives. The only motherboards I know of that have PCI-X onboard are designed for servers, and that's not the main purpose of my day-to-day PC.

What I'm looking for in a RAID 5 card are these characteristics:
must be a PCI-e x1, x2, or x4 card, larger is better for throughput;
would like it to have 300 MB/s burst SATA II hard drive support with NCQ;
must have an integrated XOR engine for accelerated parity operations;
must have at least 128MB of ECC cache, preferably expandable;
must have at least 6 SATA ports with online capacity expansion.

I haven't yet seen a card that satisfies the first two requirements.

I'm planning on having some extra money come September and have been running through my options. I intend to have a killer gaming rig with serious computational capability that will serve movies and music to various devices on my home network until I build a dedicated fileserver. Here is the setup I've been considering:

Asus A8N-SLI Premium motherboard (with integrated 4 port RAID 5),
AMD Athlon X2 4800+ CPU,
4 GB RAM (2 x Corsair TWINX2048-3200C2PRO),
XFX GeForce 7800GTX with factory OCed core to 490MHz (x2 in SLI),
SoundBlaster Audigy2 ZS Gamer (I like the bundled software),
2 x 74 GB WD Raptors (in RAID 1 for System drive array on nForce controller),
4 x 250 GB WD Caviars (in RAID 5 for Storage array on Sil 3114R controller).

I've been doing some research on the integrated Silicon Image 3114R controller and I am sad to report it connects through the PCI bus, not the PCI-e or HyperTransport busses. It will saturate the bus it shares with the SoundBlaster card I plan to use, possibly causing laggy audio performance and definitely slow array throughput, but I'll run with this setup anyway until I find a RAID 5 card that suits my needs. Has anyone seen such a card?
Last edited by 5eraph on Sun Jul 24, 2005 10:11 pm, edited 2 times in total.

Protagonist.
Posts: 162
Joined: Tue Jun 14, 2005 12:02 am

Post by Protagonist. » Sun Jul 24, 2005 8:01 pm

5eraph wrote:What I'm looking for in a RAID 5 card are these characteristics:
must be a PCI-e x1, x2, or x4 card, larger is better for throughput;
would like it to have 300 MB/s burst SATA II hard drive support with NCQ;
must have an integrated XOR engine for accelerated parity operations;
must have at least 128MB of ECC cache, preferably expandable;
must have at least 6 SATA ports with online capacity expansion.
I found a card that seems to mention everything you listed. 8)

-PCI-Express X8 bus
-Support SATA II drives
-Support S.M.A.R.T, NCQ and OOB Staggered Spin-up capable drives
-Intel IOP333 processor has integrated the RAID 6 engine inside
-128MB on-board DDR333 SDRAM with ECC protection
-One SODIMM socket to support DDR333 SDRAM with ECC protection, expandable to 1GB. An ECC or non-ECC SDRAM module using X8 or x16 devices
-Supports up to 8 SATA II drives
-Online RAID capacity expansion and RAID level migration simultaneously

:P

http://www.topmicrousa.com/arc-1220.html

User avatar
5eraph
Site Admin
Posts: 4621
Joined: Tue Jul 05, 2005 9:38 pm
Location: Riverview, MI USA

Post by 5eraph » Sun Jul 24, 2005 8:15 pm

That's definitely something to look into. Thanks, Protagonist. :)

I just need to find a motherboard with a PCI-e x8 slot. The one I'm looking at has an x4 slot so the card won't fit :(, and I really want to use a couple graphics cards in SLI. I might consider using just one, but it wouldn't help; the PCI-e x16 slot that would be freed reverts to x1 functionality, so it's a no-go with that motherboard.

Protagonist.
Posts: 162
Joined: Tue Jun 14, 2005 12:02 am

Post by Protagonist. » Sun Jul 24, 2005 8:23 pm

Well, I would suggest getting a DFI mobo myself. I have an ASUS, and it's great for Intel, but for AMD systems ASUS just isn't as good as the others. I have the DFI nForce4 LanParty Ultra-D, and they give you a tool in the box to switch the jumper so both slots run at x8. There is also a guide on the forums on how to mod the board to SLI. Or you could just buy the full nForce4 LanParty SLI-DR.

I've never considered using SLI myself. From the benchmarks I have seen, on some games it will give you a 30-40% increase, but on many others it will give you less, or even 0% to -5% if the game doesn't support SLI.

Oh, and I also heard that many XFX cards are DOA? You might want to try another brand like Leadtek, eVGA, BFG, or MSI; those are all good.

User avatar
5eraph
Site Admin
Posts: 4621
Joined: Tue Jul 05, 2005 9:38 pm
Location: Riverview, MI USA

Post by 5eraph » Sun Jul 24, 2005 11:55 pm

I would tend to agree that DFI caters more to the enthusiast crowd, but in this case I think I may stick with the Asus board because I'm not planning on overclocking much and am having no luck finding a board with an x8 slot. I also like its component placement: it is entirely fanless (I may go with water-cooling later), the PCI-e x16 slots are spaced further apart to allow better cooling of the graphics cards if I later upgrade to dual-slot cards or water-cool them, and SLI mode can be toggled in the BIOS (not by rotating an SLI card). All things considered the DFI mobos are extremely nice and easily match or surpass Asus feature-wise, but I nitpick too much. :)

I'm not sure if a single PCI-e GeForce card can function with only x8; I'm thinking they are made as x16 for a reason. Otherwise I'd definitely consider your suggestion of making both slots work at x8 and going with 1 graphics card and the RAID card. There's also the possibility that nVidia does something more than just convert x16+x1 (or x16+x2 in DFI's mobos) to x8+x8 by rotating the SLI card; they might bridge those connections somehow as well. I'll definitely look into it, though.

The only reason I'm considering XFX is that they are factory-overclocking their cards higher than anyone else at the moment. I've heard great things about BFG and have seen some of their work. Believe me, if the XFX cards don't work out I'll definitely exchange them for BFGs. Gotta love Newegg.

Protagonist.
Posts: 162
Joined: Tue Jun 14, 2005 12:02 am

Post by Protagonist. » Mon Jul 25, 2005 1:28 am

5eraph wrote:I would tend to agree that DFI caters more to the enthusiast crowd, but in this case I think I may stick with the Asus board because I'm not planning on overclocking much and am having no luck finding a board with an x8 slot. I also like its component placement: it is entirely fanless (I may go with water-cooling later), the PCI-e x16 slots are spaced further apart to allow better cooling of the graphics cards if I later upgrade to dual-slot cards or water-cool them, and SLI mode can be toggled in the BIOS (not by rotating an SLI card). All things considered the DFI mobos are extremely nice and easily match or surpass Asus feature-wise, but I nitpick too much. :)

I'm not sure if a single PCI-e GeForce card can function with only x8; I'm thinking they are made as x16 for a reason. Otherwise I'd definitely consider your suggestion of making both slots work at x8 and going with 1 graphics card and the RAID card. There's also the possibility that nVidia does something more than just convert x16+x1 (or x16+x2 in DFI's mobos) to x8+x8 by rotating the SLI card; they might bridge those connections somehow as well. I'll definitely look into it, though.

The only reason I'm considering XFX is that they are factory-overclocking their cards higher than anyone else at the moment. I've heard great things about BFG and have seen some of their work. Believe me, if the XFX cards don't work out I'll definitely exchange them for BFGs. Gotta love Newegg.
A single card will function perfectly fine at x8; there is almost zero loss of performance compared to x16. That's why the cards run in x8 SLI mode no problem. It's also why AGP 8x cards are just as fast as their PCI-e x16 counterparts: today's graphics cards aren't pushing enough data to make full use of AGP 8x, much less PCI-e x16...

And what do you mean by more than just rotating the SLI card? What SLI card? Like I said, all you have to do to switch to x8 + x8 mode is flip the jumpers with the tool they give you; it's a one-time deal.

About the video card, why does it matter if it's overclocked out of the box? Video cards are the easiest things to overclock. You do it in Windows with a program, and all you do is crank up the clocks; you don't have to mess with voltages or anything. They don't even overclock it that much. What they do I can do in 10 seconds: just turn up the clocks and apply.

And about the space between the PCI-E slots issue, you know you might just want to consider this board:

http://www.xtremesystems.org/forums/sho ... hp?t=66651

Yes, it's CrossFire, not SLI. But check out the space between those PCI-e slots!!!

And about the fan: that chipset fan is pretty nice and cools well. It's not really that loud at all. And this board has a whole bunch of heatsinks all over the place to give it much better cooling than the Asus.

And if you're really that picky, I don't think you have to switch a jumper for x8 + x8 mode on this board. 8)

So I guess I would suggest going with that DFI CrossFire board and getting that RAID card, plus maybe an X900 XT PE or something like that. :D

Protagonist.
Posts: 162
Joined: Tue Jun 14, 2005 12:02 am

Post by Protagonist. » Mon Jul 25, 2005 1:32 am

Here's a picture with the cards so you can see how much space there is. And those are both double-slotted cards.

Image

User avatar
5eraph
Site Admin
Posts: 4621
Joined: Tue Jul 05, 2005 9:38 pm
Location: Riverview, MI USA

Post by 5eraph » Mon Jul 25, 2005 4:05 am

Your reasoning is sound with regard to graphics functionality over PCI-e x8, from what I've read so far. nVidia has mentioned that they may produce an AGP version of their 7800GTX if consumer demand exists, so bandwidth shouldn't be a problem. What I'm saying is that I haven't read up on how the jumper/card/BIOS changes the PCI-e connections.

I don't know too much about how overclocking is handled currently by various vendors when you try to return a fried component, but it used to be something that often voided your warranty. It's why I try not to overclock much to begin with. If a manufacturer is confident enough with its product to sell it overclocked (and a 14% overclock is pretty substantial to me) then I think it's safe enough to try without having to worry as much because I know I can always return it. My fear in this circumstance stems from a Hercules GeForce3 I mail-ordered from the manufacturer some years ago that would not work in the motherboard I was using at the time (Gigabyte GA-7ZX-1). I tried everything I could think of and spent most of my warranty period in the process. You can imagine my disgust when I sold it to a friend and he got it working in his computer on the first go. I sold it at quite a loss...

I'm not too interested in ATI's product just yet. I want to wait and see their next generation card and its maturity before I commit myself to it for a year or two. From what I've read in the link you just posted it seems that ATI has been having trouble getting its southbridge bugs ironed out for CrossFire. Right now they're playing catch-up, and their next-gen card has seen several delays. To put this in perspective, the card I purchased to replace that Geforce3 is still in use today in my current rig, a 64MB ATI Radeon 8500. I still game with it, but it's not pretty. :)

Protagonist.
Posts: 162
Joined: Tue Jun 14, 2005 12:02 am

Post by Protagonist. » Mon Jul 25, 2005 12:25 pm

I wouldn't worry too much about warranties. It's always going to be a hassle to get something RMA'ed. And if you have any idea what you are doing, there is no way you are going to fry your card by overclocking it. People who fry their cards either do volt-mods or push some insanely high overclock when their cooling is pure crap, and then... "hmmm, what's that lovely smell? Smells like my video card melting!"

If you bust your card even if you have overclocked it, just return it to them to have them fix or replace it. Simply "forget to mention" that you were overclocking it. :P

Raider
Posts: 18
Joined: Sun Jun 19, 2005 5:38 am
Location: Kellyville, OK

Post by Raider » Sun Jul 31, 2005 1:36 am

If you are going to go with a single-card solution you might want to seriously consider the ATI or Sapphire X850XT Platinum Edition. I am sold on ATI because they back their cards big time. I burned up my first 9600XT and they sent me a replacement, no questions asked. You can't beat that. I have a 9800XT now, and when I upgrade it will be to the X850XT Platinum Edition, or I will just wait until the dual-core ATI cards come out. They are going to do with one card what it takes nVidia two cards to do.

Protagonist.
Posts: 162
Joined: Tue Jun 14, 2005 12:02 am

Post by Protagonist. » Sun Jul 31, 2005 2:16 pm

Well, assuming his build won't be in the very near future, I would say go with a single X900 XT Platinum Edition. That should be available within a few months. But the X850 XT PE is good too, and not that expensive right now.
This place has it for $350

LINK

User avatar
buletov
Posts: 380
Joined: Tue Feb 15, 2005 11:30 am

Post by buletov » Mon Aug 01, 2005 4:39 am

Hey Ryan, how about this as an unsupported download:
http://members.home.nl/rvandesanden/raid3.html

Just like uxtheme.dll and stuff...
Never know what life is gonna throw at you.

Vid0
Posts: 41
Joined: Thu Apr 14, 2005 6:24 am
Location: Lithuania

Post by Vid0 » Mon Aug 22, 2005 6:12 am

Why not use an "infinitely fast" SATA drive for the system partition?
Image
Review:
http://www.codinghorror.com/blog/archives/000349.html

User avatar
5eraph
Site Admin
Posts: 4621
Joined: Tue Jul 05, 2005 9:38 pm
Location: Riverview, MI USA

Post by 5eraph » Mon Aug 22, 2005 7:05 am

Nice linkage, Vid0.

I can imagine it would have phenomenal random access speeds. My only complaint with it besides being limited to 4GB is Gigabyte's approach. It could achieve a much higher average throughput if they integrated a dedicated custom hard drive controller, made it PCI-e based, and connected it solely to the PCI-e bus. It wouldn't be limited to 150 or 300 MB/s and the only bottlenecks I could see then would be the installed i-RAM and bus speeds.

Vid0
Posts: 41
Joined: Thu Apr 14, 2005 6:24 am
Location: Lithuania

Post by Vid0 » Mon Aug 22, 2005 8:24 am

5eraph wrote:It could achieve a much higher average throughput if they integrated a dedicated custom hard drive controller, made it PCI-e based, and connected it solely to the PCI-e bus. It wouldn't be limited to 150 or 300 MB/s and the only bottlenecks I could see then would be the installed i-RAM and bus speeds.
With this approach it uses the PCI slot only for power. It's a normal SATA drive! It is detected by the BIOS and needs no drivers for any OS. A PCI-e solution wouldn't be good for this: that kind of drive would not be detected by the BIOS, and we would end up with something like a RAM drive needing proprietary drivers for every OS. In that case it might be better to just add more RAM to the system and use a normal software RAM drive. So Gigabyte's solution is good; we just need to wait for a SATA II 300 MB/s version with more DIMM slots and maybe optional backup to flash on power loss.
