Google Prefers DRAM to Hard Disks
KP writes: "I came across this interview with Google's CEO. A very interesting read." It's interesting in part because that CEO (Eric Schmidt) claims that, for Google's purposes, "it costs less money and it is more efficient to use DRAM as storage as opposed to hard disks." KP continues: "I still cannot figure out how he says storing data on DRAM is cheaper than storing it on hard-disks. Maybe, if you buy in bulk?"
I can see it now... (Score:2, Offtopic)
Then someone trips over the power chord...
-- Dan =)
Re: Power Chord- (Score:2, Funny)
Re:I can see it now... (Score:2, Funny)
So where is your UPS NOW?
Additionally (Score:4, Insightful)
Re:Additionally (Score:2)
Re:Additionally (Score:2)
Re:Additionally (Score:2)
Re:Additionally (Score:4, Informative)
Re:Additionally (Score:3, Insightful)
Re:Additionally (Score:2, Informative)
When's the last time you checked your RAM? I get about 1 bad module for every 2 machines. Defects usually show up on the initial test, though some don't show up for a few years.
Don't believe me? Try it yourself: Memtest86 [teresaudio.com]. I suggest running one full test (can take days) when you first build a machine, and again when you run into odd problems that you can't figure out. The default tests are good, but there have been times when they missed problems.
Re:Additionally (Score:2)
RAM is a mechanical device
RAM is an electronic device. It has no mechanical parts, save for the junction between it and the motherboard.
Re:Additionally (Score:3, Informative)
Re:Additionally (Score:2)
This makes no sense. A long warranty period makes a product sell better. When 99.9% of the parts that are going to fail do so within 30 days, it's in the manufacturer's interest to have either no warranty at all or a very short one (to prevent claims), or one that is very long, like 10 years or lifetime. After the first 30 days, hardly anything is going to break, so it would be stupid not to prolong the warranty period. This can be done essentially for free. And I've seen RAM that comes with a lifetime guarantee.
Re:Additionally (Score:2)
RAM is solid state. It is simply a circuit board with a couple of IC modules on it. There are absolutely no moving parts.
The reason RAM goes bad is chiefly from operating temperatures and poor construction (mostly impurities in the air).
There are absolutely no moving parts in RAM, though. That is just silly to even suggest.
In fact, the only real moving parts in most PC's are the storage devices and fans...
Re:Additionally (Score:3, Informative)
RAM heats up as it's used, metal expands, the chips on that little PCB stretch slightly, and joints weaken with each power cycle; sometimes they fragment. The same thing happens with the connectors to the motherboard.
Telstra, in Australia, was having a hellish time with certain Cisco routers: as the RAM heated up it would eventually work its way out of the socket, crashing the router!
Re:Additionally (Score:2, Interesting)
The company I work for makes computers with a lot of RAM and so we've been researching how to survive a RAM chip failure, but as far as I know no system implements such a technology.
Re:Additionally (Score:3, Interesting)
Perhaps this is what your company is looking for:
ChipKill [ibm.com]
Re:Additionally (Score:3, Insightful)
RAM vs. HDD (Score:2, Redundant)
Obviously, if they used DRAM for their HUGE central databases, it would not be a cheaper solution.
But, I'm talking out of my ass, because I don't know how their datacenter works.. anyone anyone?
-metric
Re:RAM vs. HDD (Score:3, Interesting)
Re:RAM vs. HDD (Score:2)
Speed saves (Score:3, Insightful)
From the article: Why DRAM is so fast (Score:5, Informative)
I still cannot figure out how he says storing data on DRAM is cheaper than storing it on hard-disks. Maybe, if you buy in bulk?
When you pay for DRAM, you get read latency measured in nanoseconds rather than milliseconds, which lets you get more queries done faster with less processing hardware. The key metric here is seeks per second. From the article:
With a rotating disk, if you wanted to access a million different pieces of data, you would have to either wait for a million seeks or set up a 1,000-way mirror and wait for 1,000 seeks. Because DRAM seeks several orders of magnitude more quickly, you don't need as many mirrors of the data to get the same number of seeks per second.
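To make that ratio concrete, here is a minimal back-of-the-envelope sketch; the access times and the five-second budget are illustrative assumptions, not figures from the article:

```python
import math

# How many parallel copies of the data are needed to serve a batch of
# random lookups within a time budget, for disk vs. DRAM access times.
disk_seek = 5e-3      # ~5 ms per random disk seek (2002-era drive), assumed
dram_access = 100e-9  # ~100 ns per random DRAM access, assumed

lookups = 1_000_000   # independent pieces of data to fetch
budget = 5.0          # seconds allowed to fetch them all, assumed

def copies_needed(access_time):
    """Parallel copies of the data needed to finish the lookups within budget."""
    lookups_per_copy = budget / access_time
    return max(1, math.ceil(lookups / lookups_per_copy))

print("disk copies needed:", copies_needed(disk_seek))    # ~1000-way mirror
print("DRAM copies needed:", copies_needed(dram_access))  # a single copy suffices
```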
Re:From the article: Why DRAM is so fast (Score:4, Interesting)
Do you want to buy a machine that costs $100,000 per copy to do 1 million hits per X time,
-or-
Do you want to buy 1000 machines that cost $500 per copy to do 1000 hits each per X time?
In both cases we are talking about 1 million hits per X time.
In case 1, it costs a port on the master switch and $100,000 for the machine.
In case 2, it costs 1000 ports on the master switch -- actually more switches and infrastructure -- AND $500,000 for the machines.
Case 1 costs 20% of what case 2 does, and we have not even talked about power, A/C, space... You need to look at the whole picture.
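A tiny sketch of that comparison (all prices and hit rates are the hypothetical figures above, not real hardware quotes):

```python
# Total cost and throughput for the two hypothetical cases above.
cases = {
    "case 1 (one big machine)":  {"machines": 1,    "price": 100_000, "hits": 1_000_000},
    "case 2 (1000 cheap boxes)": {"machines": 1000, "price": 500,     "hits": 1_000},
}
for name, c in cases.items():
    total_hits = c["machines"] * c["hits"]
    total_cost = c["machines"] * c["price"]
    print(f"{name}: {total_hits:,} hits per X time for ${total_cost:,}"
          f" and {c['machines']} switch port(s)")
```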
I've always wondered (Score:2)
AFAIK Linux and Open BSD cannot do this either. It seems amazing to me that people have missed this idea.
Re:I've always wondered (Score:2, Informative)
AFAIK Linux and Open BSD cannot do this either. It seems amazing to me that people have missed this idea.
You can do it in Linux (and probably in Windows too, though I'm not sure how)--but there generally isn't a reason to. The VM/RD cycle swings back and forth over the years, but at present the PC world seems to be running best with a 2:1 VM ratio (using a chunk of HD about twice your RAM size to simulate more RAM), although part of this is that RAM is being used up by smart caching of disk. This holds for Windows, Linux, and (IIRC) Open BSD.
So, the short answer is: you could do it, but it would likely slow you down overall.
-- MarkusQ
Re:I've always wondered (Score:3, Informative)
The more memory present in the system, the more memory the linux kernel dedicates to caching. Thus commonly read files are in memory and have incredibly fast reads. This is performed auto-magically without the user even being aware of it.
Of course no two situations are exactly alike, and you may have a purpose for dedicating a RAM disk to something. There are instances where you want a fast read/response time but the file isn't commonly used, such as the data for a squid proxy cache. A RAM disk in such a situation would genuinely help.
The latest 2600 mag... (Score:2)
Re:I've always wondered (Score:2, Interesting)
In fact, this sort of trick was exactly why the unix "block device" abstraction was invented more than a quarter century ago. It allows you to have a file system on anything that can store data in addressable chunks called "blocks". Memory works just fine for this.
An old trick for speeding up unix systems has been to use memory for the /tmp directory (and symlink /usr/tmp to /tmp, or vice-versa).
There's no real problem with mapping the entire file system to memory.
Re:I've always wondered (Score:2)
An old trick for speeding up unix systems has been to use memory for the /tmp directory (and symlink /usr/tmp to /tmp, or vice-versa).
This was because SunOS had a dog-slow filesystem; even today, /tmp is usually backed by RAM. Linux (and probably BSD) has a fast enough filesystem that this isn't an issue.
Re:I've always wondered (Score:2)
Scary! (Score:4, Insightful)
Now if only Google could go out and do its own fact-checking, it wouldn't need to rely on other newspapers at all. Mark my words, by 2010 Google will be the only place you go when you need information. Forget askjeeves, try listentogoogle. No humans will be involved. Scary.
By the way, this guy can't speak for beans.
The speech I give everyday is: "This is what we do. Is what you are doing consistent with that, and does it change the world?"
Re:Scary! (Score:5, Funny)
Google fights back.
Re:Scary! (Score:2, Funny)
I can imagine it now: just as I am about to blow out the candles, a giant DRAM chip bursts out of the cake and says, "I am Google. I am here to protect you. I am here to protect you from the terrible secret of space... er, the web."
Once again a simplistic view (Score:3, Informative)
What you pay for the initial product is not what it "costs" in the long-term. Businesses have a term for this called TCO or Total Cost of Ownership. It includes all the other time and materials needed to keep the item in use.
I would imagine in this case that the simple reason is that while DRAM is more expensive to purchase, it is a *lot* less expensive to run, the primary cost being power.
Also consider that if speed is of the essence, as it is with Google, it's not 50GB of RAM vs a 50GB cheap-n-cheerful IDE drive. A 50GB Ultra160 drive costs considerably more than an IDE drive and still won't come near the DRAM for speed.
Re:Once again a simplistic view (Score:2, Insightful)
Personally, I seriously doubt that all, or even close to all, of the stuff Google stores is kept in DRAM; it's more likely they keep newer data and high-access data in DRAM, while older stuff gets archived to disk, available for recall later, but slower.
Re:Once again a simplistic view (Score:2)
You better believe it. Altavista already did that a long time ago. Hotbot (inktomi) had a similar all-in-memory scheme. Since Google is faster than those two, all the more reason to believe that the data is in DRAM (although surely they have backups in HDs and tape, but that is a different story).
The key to it being cheaper is.... (Score:3, Insightful)
Imperial MegaRam? (Score:4, Interesting)
I had an opportunity to play with one on a 20 CPU Starfire domain and it was pretty impressive. The unit I was using had 8 wide SCSI ports on it, which were all connected. Interestingly, when the system was pegged, it was off the scale in system time. There's probably a locking problem in the Solaris kernel that's the real bottleneck.
Fewer servers needed (Score:5, Interesting)
This just shows how limited the lifespan is of 32-bit 4GB architecture, especially for servers.
Re:Fewer servers needed (Score:2)
I want to know HOW they are doing this. Are they using PIIIs with 64GB of memory?
Re:Fewer servers needed (Score:2, Informative)
I believe it... (Score:3, Informative)
Josh Crawley
RAM Disks (Score:3, Interesting)
You would still need to be able to direct searches to the machines that have the part of the data you need. This would take a high speed network and some clever programming. But it is doable.
I was always amazed at the speed of Google's search engine; now I have a little more of a clue as to why it is so fast.
Sounds to me like they might be able to sell their database software as a money making product at some point. Oracle, watch out!
Re:RAM Disks (Score:2)
Re:RAM Disks (Score:3, Insightful)
It's probably true that 20 TB isn't enough for Google, but it's not true (and won't be for quite a while) that the cached pages and Usenet archive require "a few PB".
Index space? (Score:2)
What about the indexes required to actually access that data in a timely manner? Once you factor in the extra stuff needed to actually make it a viable search engine, you could easily imagine that a PB or more of storage was required.
As for the other poster going on about compressing the data - I doubt they'd want to compress it when all they care about is raw speed of processing requests!
Re:Index space? (Score:3, Insightful)
I don't know how Google does it, but typically the main overhead is the inverted file: for every word on every page, you just need the number of the page it appears on and the word's position within that page. So Google needs around 8-12 bytes per (non-stoplisted) word.
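As a rough sanity check on that figure, here is a minimal sketch; only the 8-12 bytes per posting comes from the comment above, the page and word counts are assumptions:

```python
# Rough inverted-index size at ~10 bytes per posting (middle of the
# 8-12 byte range above).  Page and word counts are assumed, not Google's.
pages = 3_000_000_000        # assumed pages indexed circa 2002
words_per_page = 500         # assumed average non-stoplisted words per page
bytes_per_posting = 10       # page number + word position

index_bytes = pages * words_per_page * bytes_per_posting
print(f"inverted index: ~{index_bytes / 1e12:.0f} TB")   # ~15 TB
```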
Re:RAM Disks (Score:2)
So, let's get wild and say that there are 120TB of HTML pages that we care about... if you compress these pages they would fit in 10TB. Still plenty of room on a 20TB RAM disk for the index to all these pages.
And besides, I'm just guessing... They might have 8GB of RAM in every machine, for all I know.
Five minute rule (Score:3, Informative)
See The Five-Minute Rule, ten years later (Word Doc) [microsoft.com] or its HTML-ified Google Cache [google.com]
price comparison (Score:4, Informative)
In order to say that the DRAM option is cheaper than the hard drive option, the performance of the DRAM option would have to exceed the performance of the hard drive option by a factor of greater than 25. If you do the math, it's possible.
Years ago, I worked in a VAX shop that used RAM drives for some installed/shared images that required high concurrency. The performance was impressive - and was factored into the overall cost analysis of the purchase.
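A minimal sketch of that break-even reasoning; the 25x price ratio is from the comment above, while the seek rates are illustrative assumptions:

```python
# If DRAM costs ~25x more per byte (the figure above), it wins whenever its
# performance advantage exceeds that ratio.  Seek rates are assumptions.
price_ratio = 25                 # DRAM $/byte vs. disk $/byte
disk_seeks_per_sec = 200         # a fast 2002-era drive, assumed
dram_seeks_per_sec = 10_000_000  # conservative for DRAM random access, assumed

perf_ratio = dram_seeks_per_sec / disk_seeks_per_sec
print(f"performance ratio: {perf_ratio:,.0f}x")                      # 50,000x
print("DRAM wins on $ per (seeks/sec):", perf_ratio > price_ratio)   # True
```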
Re:price comparison (Score:2, Insightful)
This is one of the reasons that we need 64-bit addressability on commodity IA architecture ASAP -- RAM drives that go through an I/O subsystem add huge overhead compared to indexing arrays in their natural data organization, as opposed to fixed blocks of bytes that have to be retrieved as a unit, with hundreds of instructions and a security model in the way of every access!
Re:price comparison (Score:2)
Re:price comparison (Score:2)
Re:price comparison (Score:2)
I just bought a Gig of DDR ECC ram for $150 from compsource [c-source.com], so there's a datapoint for you.
A number of reasons it could be "cheaper"... (Score:2)
Another point... as long as you don't store your METADATA 100% in RAM, you can at least store your data (cached web pages) in RAM. What happens if it gets dumped? Simple. Just respider the pages you lost and go on. Small amounts of data loss can be covered.
Okay. It may sound like I'm talking out of my ass, because I am. It is really hard to back up a statement like that. But let's talk again about the performance angle that has already been covered (with a little more emphasis on RAID disks).
You *may* be able to get better cost/performance with LOCAL memory (not RAM-based drives) than you could with a RAID array. And a RAID array could never equal the performance you get with local memory. Of course, local memory could never reach the storage you achieve with a RAID array. So these two paths seem to diverge (bulk storage vs speed) when comparing local DRAM to RAID'd disks.
His statement MAY make sense, but it would have to be put into a larger context. (RAM is better than disk in X circumstances.)
Something Nobody's Mentioned (Score:4, Interesting)
Bottlenecks... (Score:3, Insightful)
I've really no idea how complex the queries are or whether or not they use a relational database, but that being said, it still has to hit the disk to retrieve the data, and that's where every decently designed database's bottleneck is. Besides, Google caches all its pages. Egads! Do you have any idea how much RAM they must need for just that alone? Yes, RAM is faster. Oracle even teaches you to try to keep your frequently used tables in cache, because it's fastest; of course they qualify that with the word "small", realizing that most people don't have the gobs of memory needed to cache large tables.
More importantly than the DRAM... (Score:2, Insightful)
A lot of technology companies would do very well to follow Google's example, it seems to me. They're proving that Internet services are a perfectly sound venture if the company has a sensible business model and always keeps focused on providing quality technology and services in the area that they know best.
Pretty amazing, but I can see it. (Score:5, Insightful)
1. If each box only handles a part of the web, it is possible that most of the space on its drive (or drives) is wasted anyway.
2. If disk latency means that cpus spend idle time, eliminating that latency means more throughput per box, hence fewer boxes. More money spent on DRAM, less money spent on CPU, power supplies, etc.
3. Even with the same number of boxes: lower power draw, smaller and/or fewer UPSes required. With fewer boxes, even more reduction.
4. Which leads, of course, to lower A/C bills during the warm weather.
5. Fewer boxes, fewer pieces, whatever, means fewer things breaking. The impact of a single outage may be greater, but, from the cost standpoint, you need fewer man-hours to manage the outages, fewer spare-parts, etc.
6. Lower medical expenses from sysadmins going insane due to the noise from all those drives and the associated larger power supplies and extra cooling fans.
OK, that last item is a stretch, but how many sysadmins are more than a step from insanity anyway?
Re:Pretty amazing, but I can see it. (Score:2, Funny)
Absolutely none.
Overview of Today's Headlines (Score:4, Insightful)
Another service that takes advantage of recency is something we just added called Overview of Today's Headlines. Google reads all the newspapers on the Web every hour and constructs a newspaper for the world by computer--no humans are involved.
This is a pretty cool idea. I only hope they make an RSS feed out of it so that I can use it in my company's new portal environment. That would be really great! I love Google!
Check it out here [google.com].
Re:Overview of Today's Headlines (Score:3, Interesting)
Re:Overview of Today's Headlines (Score:2)
http://www.cs.columbia.edu/nlp/newsblaster
You guys are missing the point... (Score:4, Insightful)
Hard disks consume large amounts of electricity, and produce large amounts of heat, since they consist of pieces of metal spinning at 7200rpm.
Using DRAM costs quite a bit more upfront, but it uses less electricity and requires fewer chillers, condensers, etc. to keep cool.
wrong... 10watts for 1GB reg. ECC SDRAM (PC133) (Score:2)
Re:You guys are missing the point... (Score:3, Informative)
However, since I don't want to spend the rest of the day finding the lowest-power DRAM module with the highest capacity, I will assume that the best-case scenario is 4GB of RAM using approximately the power of two HDs of any capacity; beyond 4GB you would need either a custom DRAM NAS/HD or a second PC. A DRAM NAS with multiple gigabit ethernet ports offers the most DRAM storage per watt of electricity, but it is still at least 4x as power hungry as an 8-HD, 1TB RAID server. Assume each DRAM chip in the NAS is 64 megabytes: to reach one terabyte we need roughly 16 thousand DRAM chips. Obviously if each chip even requires
While it's pretty clear that power isn't an area where Google can save money using DRAM over HDs, and while DRAM is solid state and, if it doesn't fail in the first 6 months, probably won't fail in the first 100 years, it is still going to become obsolete long before it fails, requiring replacement. I've also figured that at $4 a DRAM chip the cost of 1TB is $64,000 vs. $5,000 for a total-package 1TB HD server. Even if you replaced the drives every 6 months, it would take 15 years before the cost of materials on HDs exceeded the cost of materials on DRAM. However, there is a cost savings. First of all, if you're mirroring the drives, that doubles the electrical and material cost of the HD storage. Second, that 1TB HD server is only going to have its seek time saturated by 100 megabit ethernet.
Unless the data is entirely sequential (not requiring seek time) -- and even in the case of sequential data, a single gigabit ethernet link is sufficient. That 1TB of DRAM has at worst 12 ns latency or
Far more futile than trying to replicate the capacity of HDs with DRAM.
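For what it's worth, the chip-count arithmetic above works out like this (chip size, chip price, and the HD-server price are the poster's assumptions):

```python
chip_mb = 64                          # assumed capacity per DRAM chip, MB
chips_per_tb = (1024 * 1024) // chip_mb
print(chips_per_tb)                   # 16,384 -- the "16 thousand" above

dram_cost = chips_per_tb * 4          # $4 per chip, the poster's figure
hd_server_cost = 5_000                # poster's figure for a 1 TB HD server
print(f"DRAM: ${dram_cost:,}  vs  HD server: ${hd_server_cost:,}")
print(f"roughly {dram_cost / hd_server_cost:.0f}x more for DRAM")
```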
The key is in the MTBF (Score:5, Informative)
Individually, the mean time between failures for a brick isn't that bad, but when you get enough of them, it's a constant drain on the pocket and on person-hours.
Google is great... (Score:2)
Re:Google is great... (Score:2, Insightful)
Re:Google is great... (Score:3, Informative)
AND is by default
OR is OR
NOT is -
I don't think parentheses for grouping work, though (they don't mention it), so you can't do more complex queries, but you can certainly do:
A AND (B OR C) AND !D
Which would be: A B OR C -D
DRAM probably is cheaper...Here's why. (Score:3, Informative)
It's not a fair comparison to put 1GB worth of DRAM on one side of the scale and 1GB worth of physical storage on the other. The hard disk will obviously come out the cheaper of the two. However, for a company like Google, which undoubtedly uses RAID technology for storage, you're effectively not getting the same "bang for your buck" as you would with a JBOD array. To put 1TB worth of DRAM on a scale next to 1TB of physical storage, you're going to have to amass something like 2TB of raw disk in order to have just 1TB of usable space.
Mind you, that's not to say that RAID is a bad technology... heh, hardly. It's just that you can't make a 1-to-1 comparison from DRAM to physical storage without taking into account the storage methods employed by each.
Cheers
Re:DRAM probably is cheaper...Here's why. (Score:2)
That isn't true at all. If you wanted to, you could mirror all of your data on two separate JBODs-- RAID level 1-- but that's not efficient. If you use RAID 3 or RAID 5, you'll never use more than 33% of your storage for parity data. As the size of your RAID set increases, the percent allocated for parity data goes down. In a 10-disk set, one disk is used for parity (in the case of RAID-3), which is only 10% of your total storage. (In the case of RAID-5, you'd still use only 10%, but you'd use 10% of each disk instead of one whole disk.)
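The parity fractions above follow directly from the set size; a quick sketch (pure arithmetic, no vendor specifics assumed):

```python
# Single-parity RAID (RAID-3/RAID-5 style): one disk's worth of capacity
# out of n goes to parity, so the overhead shrinks as the set grows.
for disks in (3, 5, 10, 14):
    parity_fraction = 1 / disks
    print(f"{disks:2d}-disk set: {parity_fraction:.0%} of raw capacity is parity")
# 3-disk: 33%, 5-disk: 20%, 10-disk: 10%, 14-disk: 7%
```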
The Google feature I want (Score:4, Funny)
How about a "mature content ONLY search"?
Innumeracy and price comparisons (Score:2)
Quick, which is the better price: a 1994 Ford Fiesta at $10,000 or a brand new Ferrari at $12,000?
Clearly the Ferrari is a better deal. To do a proper price comparison you have to look beyond the sticker price alone.
What performance do you get? Resale value? Maintenance cost? Operating costs?
If all you wanted to buy were megabytes of storage, you would be better off buying backup tapes. They are hard to beat price-wise.
But in all likelihood you need to store that data for some purpose, so depending on frequency of access, latency, total cost of operation (tapes are operator/robot mounted), alternative solutions with higher sticker price, might well end up being cheaper.
What Eric Schmidt claims is that if you have a ton of data and you are accessing it all the time DRAM is more cost effective than (a) a large mirrored RAID array server or (b) a zillion tapes being mounted by operators.
TCO, RAM vs. Steel Platter (Score:4, Informative)
These Platypus drives are PCI cards and have dual power source ability; they plug into the wall as a secondary supply and get power off the PCI bus as primary. Very cool to be able to shut down the machine to do whatever and still have your RAMdrive ready to go upon boot. Feature wise, they use expensive RAM and the manufacturer strongly suggests you not just grab any ole ECC to stick in the card but order from them (probably has to do with the grade of RAM they use in their cards.)
Performance was absolutely unreal: more than twice the speed of SCSI, in fact, practically as fast as the PCI bus in the machine will allow. I used the cards briefly while doing a small database conversion project and was totally bummed when I had to send the RAMdrives home. *sniff*
If you have to do anything requiring lots of I/O (like database,) you _really_ do want one of these things or something like it.
Cost-wise they are a little spendy up front (even when compared to a SCSI setup with controller and drives), but if you are at all measuring time, then everything else loses the comparison; if you are measuring lost data on dead drives, the time required to make many redundant backups to avoid lost data on dead drives, the time required to shut down and swap out dead drives, etc. -- RAM wins! Just be sure to factor in the cost of quality UPS units, because they truly are part of the cost (read: necessary.)
Hook up a Qikdrive2 with one GB RAM, plug it into your UPS, make sure it gets backed up to the hard drive regularly (plenty of tools to do that) and I promise you that you will not want to be without one. If you have the resources, get one of the big ones (6 or 8 GB RAM, I forget.) Look on CDW, search Platypus for prices. The Platypus site has links to purchasing sites.
As always, be sure drivers/modules are available which will work for you. Ack, I'm rambling.
They must mean FIXED HEAD 'disks' v DRAM (Score:2)
Why DRAM is cheaper (Score:2)
Also, Google's searchable data is considerably smaller than the total size of the pages searched, even excluding the images. Read their white papers. And I doubt that they store the cached pages and images in DRAM. Those don't get hit that often.
Silly people! (Score:3, Insightful)
I'll lay it out. Obviously Google is not storing the master copy of the full multi-terabyte database in RAM, but they are certainly storing as big a chunk in RAM as they can, and the cost model ought to be easy for anyone to understand if you sit down and think about it.
Consider the cost difference between the following EQUAL amounts of hard disk storage:
* A 160GB IDE drive
* A 160GB SCSI drive
* Four 40GB drives in an external RAID system
* A small medium-performance RAID system
* A larger high-performance RAID system, scalable to a terabyte
* An *EXTREMELY* high-performance RAID system, scalable to multiple terabytes
Now consider the cost of building, say, a 40-terabyte data store (let's not worry about backups for this experiment). If you build it out of a bunch of huge SCSI drives connected to a bunch of PCs, it can be fairly cheap. But if you build it out of, say, high-performance EMC arrays, it could cost millions of dollars more to get the same theoretical performance.
So when you consider the cost of storage, you always have to consider the cost of the PERFORMANCE you want to get out of that storage. All the Google CEO is saying is: Doh! It's a hellofalot cheaper to improve the performance of the system by buying DRAM in a distributed-PC environment than to purchase extremely high-performance (and extremely expensive) disk subsystems. The cost of the DRAM that makes up for the lower-performing disk subsystem is actually LOWER than the cost of an equivalent higher-performance disk subsystem.
The same is true in the ISP world. When RAM was expensive we had to rely on big whopping HD systems to scale machines up. But when RAM became cheap it turned out that you could simply throw in a very high density drive with 1/4 the performance that four smaller drives would give you, and the operating system's RAM cache would take care of the problem. Suddenly we no longer needed to purchase big whopping disk arrays.
Think about it.
-Matt
Re:Cost v Speed (Score:5, Interesting)
Err. No.
I maintain a tiny search engine (some 5000 sites), with the data cached locally, just like Google. It takes ~250GB of disk space for that minuscule cache. The one at Google must be on the order of a few hundred terabytes, not gigabytes.
On that basis, I echo the original query about how it can be economical to use RAM...
Simon
Re:Cost v Speed (Score:3, Insightful)
Hence the size of the cache is somewhere between 500GB and 3TB, plus the index would be another 40% of that.
My best guess is that the Google archive is somewhere around 2-3 terabytes, and that the total amount of DRAM available at Google at the present time is somewhere between 5-10 terabytes.
Re:Cost v Speed (Score:5, Informative)
I really think people underestimate the size of the web, and this only becomes apparent when you try to cache large sites. Sure, the majority of websites are pretty small, but more often than not now, government and business websites are used for real data-access solutions.
As I mentioned above, I look after a small but targeted search engine (http://www.financewise.com/ [financewise.com]) which looks at only financially-orientated sites. Take, for example, the European Union site http://europa.eu.int [eu.int]. This is a fairly innocuous site, but if I do:
That's a 7.7GB website, and that's just the text (in fact I only search for
I just think that your estimate for the cache size is a long way short of the real figure...
Simon
Re:Cost v Speed (Score:2)
Indeed, this has been a hot area of debate for the last 7 years or so, when the first paper with a substantially larger web than that indexed by search engines came out.
Usually search engines estimate the web size to be about 15-30% of that claimed by statistical measurements.
Re:Cost v Speed (Score:3, Interesting)
When is it worthwhile to trade off CPU for storage? In your case, I suspect that the website has a degree of redundancy in its 7 gigs of data; there is likely much duplication, both at the page level (duplicated CSS info) and at the snippet level (duplicated copyright disclaimers).
It is quite straightforward to discover this sharing (IIRC, exactly how LZW compression works, but with a smaller window) and significantly cut down your storage costs. Of course, now you have a CPU hit, where storing new data becomes expensive, and just reading the data requires some pointer chasing.
The interesting issue is that the CPU hit isn't guaranteed to be a Bad Thing: your higher cache hit rate (indeed, your data may fit in ram entirely now) will possibly (likely?) result in significant speedups.
Re:Cost v Speed (Score:2)
While I agree that we're getting close to (or may have passed) the point where it would make sense to do something better, I don't have much of a budget, and disk is cheap
ATB,
Simon.
Re:Cost v Speed (Score:2)
Cute, but not quite correct. They cache postage-stamp-sized copies. If you want the full image you have to go to the original web site.
Granted, this does somewhat increase my original estimate of the amount of DRAM required.
Re:Cost v Speed (Score:3, Insightful)
Google doesn't create content. They are a search engine. Nor are they in the business of archiving the net for posterity. If they lose data, it's out there to be recollected or if not, then there's no point in them saving it anyway.
Re:Cost v Speed (Score:4, Insightful)
Re:Cost v Speed (Score:2)
Traditionally, they are only stored partially in RAM due to their size. Certainly, the unprocessed pages are still stored on HDs, as one doesn't gain anything from storing them in RAM.
Re:Cost v Speed (Score:3, Informative)
According to "The Anatomy of Large-Scale Hypertextual Web Search Engine" [nec.com] by Segey Brind and [google.com] Lawrence Page [google.com], the inverted index ("inverted barrels") was about 47.2Gb large (Total data without repository 55.2Gb, Repository 53.5Gb). It had about 24 Million web pages indexed. Assuming a linear increase this amounts to about 5Tb.
But, to quote from the paper:
Which is surely slightly exaggerated, but shows that they considered that there is room for improvement. (E.g using varying length index instead of fixed width)
> I don't think Linux can do it
At least they think it can, since they are using Linux boxes, at least according to The Technology Behind Google [ddj.com], a talk by Google's Jim Reese.
More than 10,000 Linux boxes, that is.
Re:Cost v Speed (Score:5, Interesting)
PC World: What are Google's biggest challenges?
Schmidt: Managing the growth. Our servers are overloaded. There is a DRAM shortage. We're building more computers. We are adding more-sophisticated products to the advertising side of Google. Our problems at the moment are growth problems.
If I had computers where 4 GB is not very much memory, and used as much memory as we normally put on our HDs, I would have a DRAM shortage too.
And I bet they store only the most frequently used part of the index in memory.
Did you notice that when you access the Google cache it is very slow compared to a search? Even if that cache was accessed frequently (because it references a
Re:Cost v Speed (Score:2)
So this is why SDRAM prices have been going up and not down lately...
Bastards...
~z
Re:Cost v Speed (Score:2, Interesting)
However, they are definitely on the scale of terabytes. "Searched the web for a.
Results 1 - 10 of about 1,470,000,000. Search took 0.31 seconds." Assuming an average of ~25k cached per link, 1.4 billion links would leave a cache of about 37,632,000,000,000 bytes. However, the cache doesn't necessarily need to be stored on RAM disks. He clearly states that it's 200,000 times more efficient for _seekable_ data. That means not the 'cached' data but rather the stuff that the search algorithm looks at to show you appropriate hits. So the heart of the 'search' engine is using RAM exclusively, but 'cached' data would almost certainly still be stored on HDs, unless of course someone has built Google a bunch of 120GB DRAM disks that use conventional HD interfaces (sorta like the Flash memory drives, only on steroids when it comes to speed).
It could even be misleading; Google could have meant flash-memory HDs were cheaper but mistakenly referred to them as DRAM.
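The cache-size arithmetic at the top of that post, spelled out (the ~25k average per cached page is the poster's assumption):

```python
indexed_pages = 1_470_000_000    # "about 1,470,000,000" from the results line
avg_cached_bytes = 25 * 1024     # ~25k cached per link, assumed above

cache_bytes = indexed_pages * avg_cached_bytes
print(f"~{cache_bytes / 1e12:.1f} TB of cached text")   # ~37.6 TB
```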
Re:Cost v Speed (Score:2)
Huh? I would have thought it would have been between 10x and 100x that much, especially if they cache most pages. (Maybe they just use DRAM for the indexes, and HDs for the cache?)
I still don't understand that claim. $300 will get me a 160G drive, and I can load four of them in a cheap PC case or 1U rackmount case, 640G per unit. That's under $2K for 640G.
RAM prices vary widely, but say on the low side I can get 256M for $20. I'd need 2560 sticks of 256M to equal 640G, or $51,200 for the equivalent storage. And that doesn't take into account that most reasonably priced PC motherboards only handle 2G or 4G of memory these days. You'd need 160 motherboards in the best case, adding $80,000 to the cost, assuming you could get 4G per unit and $500 per motherboard/chassis. Let's see: $51K + $80K = $131K, versus $2K.
RAM, as I figure it, is at least 65 times more expensive (that's not 65% more, it's 6500% more).
Either their archive is a lot smaller than I assumed, or they're talking performance/price tradeoffs, where speed has a high premium.
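The same arithmetic as a quick script (all prices are the estimates quoted above):

```python
target_gb = 640

# Disk route: four 160G drives in one cheap chassis -- "under $2K".
disk_cost = 2_000

# RAM route: $20 per 256M stick, 4G per $500 motherboard/chassis.
sticks = target_gb * 1024 // 256        # 2,560 sticks
boards = target_gb // 4                 # 160 motherboards
ram_cost = sticks * 20 + boards * 500   # $51,200 + $80,000 = $131,200

print(f"${disk_cost:,} vs ${ram_cost:,}  (~{ram_cost // disk_cost}x)")
```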
-me
Re:Cost v Speed (Score:2)
Re:Cost v Speed (Score:2)
The data isn't just sitting there static, though: It's being searched. To switch to hard drives and maintain their current performance level, they would have to increase the parallelism of the search, by having many more copies of the index. One copy of the index on disk is not really equivalent to one copy of the index in DRAM, because the DRAM index can be searched many times in the period it takes to search the HD index once.
The quantity they're trying to minimize is not dollars per megabyte, but rather dollars per (megabytes searchable per second).
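A minimal sketch of that metric, with purely illustrative numbers (none of these figures come from Google):

```python
def dollars_per_search_throughput(dollars_per_gb, index_scans_per_sec):
    """Dollars per (GB of index searchable per second) for one copy."""
    return dollars_per_gb / index_scans_per_sec

# Assumed: a disk-resident copy can be fully searched far less often per
# second than a DRAM-resident copy, even though disk is much cheaper per GB.
disk = dollars_per_search_throughput(dollars_per_gb=3,   index_scans_per_sec=0.02)
dram = dollars_per_search_throughput(dollars_per_gb=150, index_scans_per_sec=5.0)

print(f"disk: ${disk:,.0f} per (GB searchable/sec)")   # $150
print(f"DRAM: ${dram:,.0f} per (GB searchable/sec)")   # $30
```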
Re:Cost v Speed (Score:2)
Re:Hard disk is an obsolete technology (Score:2)
Now, if we could just get around that pesky limited-write lifetime
Re:Hard disk is an obsolete technology (Score:4, Interesting)
Solid-state everything would be great (wasn't there an article on solid-state cooling fans a while back?), but it may take a while for RAM drives to bridge that big a gap, especially given the volatility problem. One big step is the drastic increase in RAM speeds, compared to hard drives, which have improved only slightly in that regard.
As someone else said, it is only a matter of time.
Re:Take a BUSINESS perspective (yes, it's painful. (Score:3, Insightful)
Google will also likely break their technology into three components:
spidering and indexing
searching
caching
Each of the financial analysts for the business groups responsible for each aspect of Google's technology may calculate the value of DRAM vs. HD differently. For searching, latency is extremely critical, but it's not so critical for caching, and there may be some physical problems with solely using DRAM for indexing.
That being said, I would expect Google to use HDs for spidering and indexing, DRAM for searching, and HDs for caching. Mr. Schmidt was probably only discussing technology for the most visible component of Google's technologies: searching.
::Colz Grigor