Ramdisk RAID. Is it possible on a Linux system? I've got 2 ramdisks, is it possible to force them into one giant raid array? I don't know what the benefits would be, except amazing epic bragging rights.
It's an amazingly retarded idea. RAM disks are simply sections of RAM allocated as disk space. If your RAM is dual channel, then it is already basically RAID 0. Taking two RAM disks and putting them in RAID won't do a thing. Hell, the extra overhead from soft RAID will make accessing it SLOWER, not faster.

And if it is possible, and if you do it, you won't have epic bragging rights; instead, people will just look at you like an idiot who doesn't know what he's doing (and they would be correct).

Unless you are talking about those incredibly rare disks made using RAM sticks that cost several thousand dollars, but I doubt that.
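(For the record, the mechanical answer to the original question is yes: a stock Linux kernel can do this, however pointless it may be. A sketch only, assuming root, the `brd` module, and `mdadm` installed; device names assume no existing arrays:)

```shell
# Create two 256 MB kernel RAM-disk block devices via the brd module
# (rd_size is in KiB: 262144 KiB = 256 MB), then stripe them into a
# software RAID 0 array. Requires root.
modprobe brd rd_nr=2 rd_size=262144
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/ram0 /dev/ram1
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt
```

Note this only proves it is *possible*; as discussed below, the md layer's striping overhead sits on top of the same memory channels, so there is no bandwidth to gain.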
I don't really have anything to add that Kllrnohj didn't already say; unless each of your RAMdisks is on a separate piece of physical memory, ideally on a different bridge, this would be completely useless speed-wise. Good thinking outside the box, though.
KermMartian wrote:
I don't really have anything to add that Kllrnohj didn't already say; unless each of your RAMdisks is on a separate piece of physical memory, ideally on a different bridge, this would be completely useless speed-wise. Good thinking outside the box, though.


It would have to be on a different memory controller to matter, as two physical sticks already get accessed at the same time by default (dual channel), and that is all the bandwidth the memory controller has to begin with. So he would basically need a dual-CPU system with Intel Core i7 or AMD Opteron CPUs to even have the hardware bandwidth there to begin with.

Then again, the bandwidth provided by either the Opteron or the Core i7 is already ridiculous. Triple-channel DDR3-1600 RAM with a Core i7 gets about 28 GB/sec of memory bandwidth (that would be gigaBYTES, btw).
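(That 28 GB/sec figure is consistent with the theoretical peak: DDR3-1600 moves 1600 MT/s on a 64-bit (8-byte) channel, times three channels. A quick back-of-the-envelope check:)

```shell
# Theoretical peak for triple-channel DDR3-1600:
# 1600 MT/s x 8 bytes/transfer x 3 channels = 38400 MB/s (38.4 GB/s).
# A measured ~28 GB/s is therefore roughly 73% of the theoretical peak,
# which is in the normal range for real-world memory benchmarks.
echo $((1600 * 8 * 3))   # prints 38400
```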
I joined this forum just to explore the notion of this thread a bit more.
The gist of what I was Googling for is this:
on a dual-processor main board with two memory ramdisks,
setting affinity to pair each ramdisk with one processor.
This effectively provides what amounts to a dedicated Level 3 cache for each processor.
Whether this may provide improvement over regular global memory access is the question.
(Level 1 is the processor's on-core cache; Level 2 is on-die cache adjacent to the core.)

As was observed above, two modules of dual-channel RAM can be seen as analogous to
a RAID 0 disk array. Enabling RAM caching then is much the same as having partitions in it.
(There are four 512 MB modules.)

While the WinXP Pro memory manager adequately handles Symmetric Multi-Processing, there
are circumstances in which this actually hinders execution of software not specifically
coded for it. An example of this is the Folding@Home distributed computing effort to
resolve protein folding (http://folding.stanford.edu); on fully utilizing both processors, see
http://fahwiki.net/index.php/FAH_%26_SMP#Running_Multiple_Clients

Not being one to stand a computer on its ear, I use testing applications only
as an adjunct to diagnostic needs rather than development. I am open to suggestions and
advice on establishing a baseline for evaluation. I have the following test software on hand.

PCMark2002
SiSoft Sandra 2004
CrystalMark 2004
CrystalDiskMark 2.2 - disk read/write
Everest 2.2 - memory

Selection of a ramdisk driver is not insignificant, as has been researched by others, here:
http://www.raymond.cc/blog/archives/2009/12/08/12-ram-disk-software-benchmarked-for-fastest-read-and-write-speed
http://fiehnlab.ucdavis.edu/staff/kind/Collector/Benchmark/RamDisk/ramdisk-benchmarks.pdf

I have for some years used a ramdisk driver, RAMDisk Extended 52102 PRO, that was being
beta tested and is no longer available, although its commercial descendant is, here:
http://members.fortunecity.com/ramdisk/RAMDisk/ramdiskent.htm

The edition I use installs as a driver, is configured in the WinXP Device Manager, and provides
direct memory access without emulating a disk, so it is not seen in Disk Manager; it only
appears as a disk icon in My Computer. There are just two files apart from registry entries.
C:\WINDOWS\system32\DRIVERS\RAMDisk.sys 34 kB
C:\WINDOWS\system32\RAMDisk.dll 55 kB


Brief outline of the memory data flow of the particular main board I use:
http://www.anandtech.com/show/780/5
http://www.anandtech.com/show/780/7

AMD-760™ MPX 762 Memory Controller Chipset Overview
http://support.amd.com/us/ChipsetMotherboard_TechDocs/24494.pdf

AMD-762™ System Controller Data Sheet (relevant sections on PDF pages 19 & 22)
http://support.amd.com/us/ChipsetMotherboard_TechDocs/24416.pdf

AMD-762™ System Controller and Software/BIOS Design Guide
http://support.amd.com/us/ChipsetMotherboard_TechDocs/24462.pdf

AMD-762™ System Controller revisions ( identified error modes )
http://support.amd.com/us/ChipsetMotherboard_TechDocs/24089.pdf

I was talking about a case where you actually have two memory controllers - your system doesn't have that. You have a single memory controller.

Even in the dual memory controller case, it is still an incredibly retarded idea that will slow things down (overhead, latency, limited chip<->chip bandwidth that you are now squandering, etc...), not speed them up. Of course, this is also assuming the system even lets you specify which set of RAM to put the RAMdisk on - which I don't think it does.
Welcome to Cemetech, Franklyn! I see that you've already met our sadistically brutal (but very knowledgeable) global moderator Kllrnohj. Might I request that you Introduce Yourself and tell us a little bit about your tech and coding background? Based on what was posted previously in this thread and your post, I'm inclined to agree with Kllrnohj, perhaps with a bit more kindness and tact than he chose to use. Dealing only with system RAM, I don't think that trying to specify locality will give you any significant speedup; I feel like if it were possible, more commercial high-performance systems would be using RAM as an L3 (or L4) cache in that manner. Don't forget that hitting main memory is two or three orders of magnitude more expensive than hitting on-die cache, as well.
Who is franklyn? - International man of mystery: http://www.cemetech.net/forum/viewtopic.php?p=152853#152853
__________________________________________

What's clear is that I am not making myself understood.
My analogy to L3 cache may have misdirected the dialog.
To reiterate, the premise of my inquiry is the merit of
"a dual-processor main board with two memory ramdisks,
setting affinity to pair each ramdisk with one processor."

Overall system performance is markedly improved by
avoiding read/write access to a hard drive. This is so;
as an example, see http://sourceforge.net/projects/ofdx
A computer having no hard drive at all, which boots an
operating system from read-only memory, such as
Windows Embedded, only has RAM to operate with.
Where are files and data to be placed without a ramdisk?
If one ramdisk is good, then how can two be bad?

If you wish to respond to this, do so in explicit detail,
tracing instructions through their computation steps.
Without showing how you derive your conclusion, it's
only an opinion, perhaps informed, perhaps not.

- some background -

Basic PC architecture has one processor accessing main
memory directly, and, through another chip, the other
buses and the hard drive. Having two PCs working as a cluster
on the same task requires a communication channel and
some coordination using software only. Each is otherwise
independent of the other. With a multiprocessor machine,
two PCs are merged onto one board, and in this case
they share the same memory. To prevent both CPUs
from accessing memory at the same time or violating
each other's memory space, a northbridge controller
coordinates traffic. The only memory in common is then
the same as with individual PCs: the application files
on a hard disk.

With two gigabytes of RAM, there is ample room to
accommodate a ramdisk and reduce the need for hard
drive access. Active memory in RAM is controlled by
the application code; I have no control over that.
Memory assigned to a ramdisk is controlled by the
operating system's Windows memory manager,
which facilitates some user control.

Having two processors working together on the same
task (which typically won't be specifically coded for
multiprocessing), what coordination exists is the job
of the operating system kernel. Ignoring the ever-present
background of services and resident programs,
the principal priority remains the shared task. In such
a case, one ramdisk serves to accommodate the one task.

Having two processors working independently of each
other on individual tasks assigned by affinity, having
a dedicated ramdisk for each should, I believe,
reduce bus contention and seek time, since
each has a smaller memory space to search and the
RAM controller facilitates dual access. It is much the
same as having two separate PCs: each will perform
its separate task alone better than both working on
the same two tasks simultaneously.

Although relating to a four-core processor and access
to active memory, this premise is discussed on page 70 here:
http://www.unilim.fr/sci/wiki/_media/cali/cpumemory.pdf
The idea is the same, even acknowledged as such by the author.
_____________________________________

The relative merit of this arrangement can be evaluated
by actual testing. The problem with this is that many
of the benchmarks provided by testing applications and
utilities only express the total of processor activity.
This says nothing about the quality of that activity: how
much actually accomplishes the tasks, relative to just
churning data inefficiently to no net benefit. Performing
the very same task with different configurations and
noting the time to completion is the only methodology
I can think of. Perhaps something like conversion of
video from one format to another. I asked for
suggestions above (the point of this post), but this
has been overlooked in the responses I have received.

franklyn wrote:
If one ramdisk is good, then how can two be bad?


Because of unnecessary overhead slowing things down?

Quote:
If you wish to respond to this, do so in explicit detail,
tracing instructions through their computation steps.
Without showing how you derive your conclusion, it's
only an opinion, perhaps informed, perhaps not.


No thanks.

Quote:
Having two processors working independently of each
other on individual tasks assigned by affinity, in such
case having a dedicated ramdisk for each should, I
believe, reduce bus contention and seek time, since
each has a smaller memory space to search and the
RAM controller facilitates dual access.


That's not how any of that works.

You don't "seek" or "search" in RAM or for a ramdisk. Access to RAM is controlled and the amount of bandwidth is finite. If you have two CPUs both accessing memory at the same time, they each end up with half the bandwidth. It doesn't matter if they are accessing one ramdisk or two - it's still just RAM and it still uses the same channels. The CPU doesn't know or care about ramdisks; those are OS-level constructs.

Quote:
It is much the same as having two separate PCs, each will perform its separate task alone better than both working on the two same tasks simultaneously.


It's *nothing* like two separate PCs. Two separate PCs have two *independent* sets of resources. With a multi-CPU system, that isn't true.
If the computer could access the two different RAMdisks through two different Northbridges with independent buses connecting them to the CPU, it would indeed be a very valuable addition, to add onto Kllrnohj's final point. With one Northbridge connecting one bank of memory, even to more than one core or more than one processor, every extra bit of bandwidth used for one RAMdisk would necessarily reduce the available bandwidth for the second RAMdisk.
KermMartian wrote:
If the computer could access the two different RAMdisks through two different Northbridges with independent buses connecting them to the CPU, it would indeed be a very valuable addition, to add onto Kllrnohj's final point.


Even if you had that, though, it would need to be managed by the OS, and not transparently by the hardware, for you to set up two ramdisks on the two separate buses. I don't know of any OS that can handle anything like that, as it would be a very funky setup.
For sure; exposing it to the OS as two separate sets of memory in hardware would be something incompatible with just about everything off-the-shelf, although I'd bet throwing together a Linux driver to make it happen wouldn't be horrendously bad.
KermMartian wrote:
For sure; exposing it to the OS as two separate sets of memory in hardware would be something incompatible with just about everything off-the-shelf, although I'd bet throwing together a Linux driver to make it happen wouldn't be horrendously bad.


It would probably need to be more than just a driver; the memory management would need to be changed as well (which doesn't currently need a driver, as paging and virtual memory mapping are all defined by x86).

I suppose the safest way would be to have the hardware handle everything transparently, with a driver that can query how the physical memory is divided up.
Kllrnohj wrote:
It would probably need to be more than just a driver; the memory management would need to be changed as well (which doesn't currently need a driver, as paging and virtual memory mapping are all defined by x86)
But Linux targets far more than just x86; surely multiple memory units could be handled by the same mechanism.
KermMartian wrote:
But Linux targets far more than just x86; surely multiple memory units could be handled by the same mechanism.


Uh, what mechanism? Low-level memory management is one of the things that each platform needs to provide, and no platform has multiple MMUs on different buses AFAIK, so there isn't any generic support for something like that.
Kllrnohj wrote:
It's an amazingly retarded idea. RAM disks are simply sections of RAM allocated as disk space. If your RAM is dual channel, then it is already basically RAID 0. Taking two RAM disks and putting them in RAID won't do a thing. Hell, the extra overhead from soft RAID will make accessing it SLOWER, not faster.


I found this thread because I have a use for RAID-controlled RAM disk, and as you are so sure it is an 'amazingly retarded idea' I thought you'd like the opportunity to shoot this plan down in flames.

The reason I'm looking for a RAID-controlled triple RAM disk setup is simple: Redis. Redis is a fast key-value data store that performs most of its I/O operations in RAM, backing up to hard disk occasionally (in the default configuration, at least). However, whilst it's normally reliable, I want to increase the reliability by linking these RAM disks using RAID 5, most likely using hardware RAID. Of course there will be an impact on performance, but considering how efficient Redis is, it's a small penalty to pay.

So what are the issues with this plan?
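(As an aside: Redis's durability is already tunable in redis.conf without any RAID. The stock snapshot directives plus the append-only log look like this; an illustrative fragment of the standard options, not the poster's actual configuration:)

```conf
# RDB snapshotting: save after <seconds> if at least <changes> keys changed
save 900 1
save 300 10
save 60 10000

# Append-only file: log every write, fsync once per second
appendonly yes
appendfsync everysec
```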
Let me act as Kllrnohj's proxy in doing a preliminary shoot-down-in-flames, and Kllrnohj can follow up with the big guns when he sees the thread. Assuming that you're talking about creating multiple RAMdisks in the same machine's memory, there's no advantage whatsoever. First, I'm not sure how you plan to create a hardware RAID of RAMdisks when RAM is directly linked to the CPU via the northbridge and has no opportunity to be linked into a hardware RAID controller. Secondly, reliability won't be improved at all; things like random bit flips are likely to be a minor concern compared with machine hangs, power failures, etc., all of which will take down all the pieces of your RAMdisk RAID at once, rendering it useless.

Also, might I request that you Introduce Yourself when you get a chance? Smile
KermMartian wrote:
Let me act as Kllrnohj's proxy in doing a preliminary shoot-down-in-flames, and Kllrnohj can follow up with the big guns when he sees the thread. Assuming that you're talking about creating multiple RAMdisks in the same machine's memory, there's no advantage whatsoever. First, I'm not sure how you plan to create a hardware RAID of RAMdisks when RAM is directly linked to the CPU via the northbridge and has no opportunity to be linked into a hardware RAID controller. Secondly, reliability won't be improved at all; things like random bit flips are likely to be a minor concern compared with machine hangs, power failures, etc., all of which will take down all the pieces of your RAMdisk RAID at once, rendering it useless.

Also, might I request that you Introduce Yourself when you get a chance? Smile


I can introduce myself here. I'm currently looking how to build myself a super fast database processing system. That's all I can think to say.

You're right about the lack of access to the hardware RAID controller - well, apart from if you use a RAM-based SSD, which I had classed as an option. However, I've just found an even better solution, so I won't need to continue my part in this discussion. See here:
http://www.youtube.com/watch?v=5cqfhZvyE80

Cheers.
ZenoZammo wrote:
I can introduce myself here. I'm currently looking how to build myself a super fast database processing system. That's all I can think to say.


For that you need algorithms, not hardware.

Quote:
You're right about the lack of access to the hardware RAID controller - well, apart from if you use a RAM-based SSD, which I had classed as an option. However, I've just found an even better solution, so I won't need to continue my part in this discussion. See here:
http://www.youtube.com/watch?v=5cqfhZvyE80

Cheers.


That's just a RAM disk... Did you just stop by to make a single retarded post followed up with "hahaha, Just Joking, I'll just use a RAM disk"?

As for reliability, you don't need/want RAID 5 for that; just use some checksums to verify data integrity. Or hell, implement "RAID 5" in your data structure - its parity is just 'A xor B', that's all it does.
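(The parity point can be demonstrated in a couple of lines. A toy sketch with two one-byte "drives": the parity byte lets either data byte be rebuilt from the other.)

```shell
# RAID 5's parity is just XOR: PARITY = A ^ B.
# If "drive" A is lost, A = PARITY ^ B reconstructs it exactly,
# because (A ^ B) ^ B == A.
A=200
B=57
PARITY=$((A ^ B))          # 241
REBUILT_A=$((PARITY ^ B))
echo "$REBUILT_A"          # prints 200
```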
Kllrnohj wrote:
ZenoZammo wrote:
I can introduce myself here. I'm currently looking how to build myself a super fast database processing system. That's all I can think to say.


For that you need algorithms, not hardware.


Uhhh, last time I checked, computer hardware and software kinda had a symbiotic relationship. Good luck running those algorithms without hardware, buddy.

Quote:

Quote:
You're right about the lack of access to the hardware RAID controller - well, apart from if you use a RAM-based SSD, which I had classed as an option. However, I've just found an even better solution, so I won't need to continue my part in this discussion. See here:
http://www.youtube.com/watch?v=5cqfhZvyE80

Cheers.


That's just a RAM disk... Did you just stop by to make a single retarded post followed up with "hahaha, Just Joking, I'll just use a RAM disk"?

As for reliability, you don't need/want RAID5 for that, just use some checksums to verify data integrity. Or hell, implement "RAID5" in your data structure - it's just the addition of 'A xor B', that's all it does.


I was looking at my options. Hardware-based RAID with hardware-controlled RAM disks is completely possible, using RAM disks connected via SATA or similar. However, after seeing the price of these drives, I decided to look elsewhere. The link I shared showed a couple of options; the one that interested me was the extra-large RAM cache linked to an SSD. If the throughput results are true, I can run a CRC on each data block before writing to the SSD and still have blazing-fast performance.

As I stated before, I no longer need to discuss this, as I've found my preferred solution. If you want to continue, feel free to argue with yourself and invent things you think I would say; be my guest.
  