The Internet

Google's Bigger Index (412 comments)

WebGangsta writes "Google Inc. today announced it expanded the breadth of its web index to more than 6 billion items. This innovation represents a milestone for Internet users, enabling quick and easy access to the world's largest collection of online information."
This discussion has been archived. No new comments can be posted.

  • Here's hoping (Score:5, Interesting)

    by r_glen ( 679664 ) on Tuesday February 17, 2004 @12:22PM (#8305363)
    ... this will lead to an increase in the integrity of PageRank(TM), and vintage Google will return in all her glory.
    • Re:Here's hoping (Score:5, Interesting)

      by Destoo ( 530123 ) on Tuesday February 17, 2004 @12:36PM (#8305566) Homepage Journal
      So it's not just me..

      First, the reindex that happened a few months ago removed all cross-referencing between accented and unaccented forms.
      (Google used to find the same number of links for both the accented word and the unaccented word... right now: soupçon: 9,750 - soupcon: 88,500)

      Then, when searching for anything regarding RAS error messages, I get 30 links from spammers and then the real stuff.
      Example: 711 error yields multiple links for similar pages...
      "Your one stop resource for all things error 711 remote access connection
      management related. ... error 711 remote access connection management. ... "

      Vintage Google.. in Net years, that's 15-16 months ago, right?
      • Re:Here's hoping (Score:4, Interesting)

        by DeadSea ( 69598 ) * on Tuesday February 17, 2004 @04:17PM (#8308337) Homepage Journal
        Google does deal with spammers of the sort you pointed out. It does take some prodding, though. Last time I found one of these, I submitted it through their problem report form. After a month nothing had been done. I then posted it in a slashdot comment that got modded up. A day later all the spammers were gone.

        Google search: 711 error

        Come on, Google. Stop reading slashdot and fix the problems.

  • by ChaoticChaos ( 603248 ) * <l3sr-v4cf@s p a m e x . com> on Tuesday February 17, 2004 @12:22PM (#8305364)
    ...yeah, but it would only be 2 billion items if all the Janet Jackson stuff was removed. ;-)
  • how many? (Score:4, Interesting)

    by QuantumRiff ( 120817 ) on Tuesday February 17, 2004 @12:23PM (#8305375)
    How many of these 6 billion items are in the form of
    • by sensei_brandon ( 678735 ) on Tuesday February 17, 2004 @12:25PM (#8305417)
      exactly. I searched for "diode wave shaper" one time and got three hits -- all for porn. I had no idea diodes were so fap-worthy.
    • Re:how many? (Score:5, Informative)

      by Anonymous Coward on Tuesday February 17, 2004 @12:28PM (#8305473)
      That sort of search result spamming is getting out of hand.

      Maybe if more people used Google's Search Quality feedback form, it would help weed them out.

      • Better than that spam report form for problems with particular searches is the Quality Feedback Form, which includes information about your search for better follow-up:
        At the bottom of the page, under the second search box, is the phrase "Dissatisfied with your search results? Help us improve." Follow it and the form will ask you to:
        1. Please tell us what specific information you were seeking. Also tell us why you were dissatisfied with the search results.
        2. Were you looking for a specific URL that wasn'
    • Mailing lists (Score:5, Interesting)

      by ajs ( 35943 ) <ajs.ajs@com> on Tuesday February 17, 2004 @01:21PM (#8306039) Homepage Journal
      The thing that is starting to bother me is not the search-spam (easily removed over time with increasingly smart ranking), but the mailing lists. If 20 sites around the net archive the same mailing list, then I'll get the first 20 hits in most technical searches from the same list. Google really needs some way to identify duplicate archives (which is hard given that they're all formatted differently) and treat them as one "site".
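      Identifying duplicate mailing-list archives despite different formatting is essentially near-duplicate detection. A minimal sketch of one standard approach -- word shingles plus Jaccard similarity, purely illustrative and not anything Google is known to use:

```python
def shingles(text, k=3):
    """Break text into overlapping k-word shingles (lowercased)."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets: |A & B| / |A | B|."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# The same mailing-list post as rendered by two different archives:
archive1 = "Re: kernel panic on boot -- try passing noapic on the command line"
archive2 = "> Re: kernel panic on boot -- try passing noapic on the command line"

similarity = jaccard(shingles(archive1), shingles(archive2))
```

      Two archives whose messages consistently score near 1.0 could then be collapsed into one "site" for ranking purposes.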
  • Heh (Score:5, Interesting)

    by PaintyThePirate ( 682047 ) on Tuesday February 17, 2004 @12:23PM (#8305382) Homepage
    Anyone else find it funny that Google has around one item for every man, woman, and child on earth?
  • by Chris_Jefferson ( 581445 ) on Tuesday February 17, 2004 @12:24PM (#8305387) Homepage
    While I love google, this is so obviously just a link to a press release, and even worse the first line of the press release cut-and-pasted onto slashdot's page. And is going past 6 billion really that important?
  • by LostCluster ( 625375 ) * on Tuesday February 17, 2004 @12:24PM (#8305390)
    What's going on here? This isn't like Google to put out a press release just because the index size passed a round number.

    Is Google setting up for its IPO and therefore becoming less like the Google we know and love?
  • The real question (Score:3, Interesting)

    by Anonymous Coward on Tuesday February 17, 2004 @12:24PM (#8305391)
    Did they hit some sort of internal limit just above 4 billion? Were they using an unsigned int? Is that why all these extra items are in a "supplemental" index?
  • by Anonymous Coward on Tuesday February 17, 2004 @12:24PM (#8305396)
    They beat McDonalds.
  • Milestone (Score:3, Funny)

    by Doesn't_Comment_Code ( 692510 ) on Tuesday February 17, 2004 @12:24PM (#8305398)
    Google Inc. today announced it expanded the breadth of its web index to more than 6 billion items.

    One for every man, woman, and child. Sounds exactly like the thinking of a machine to me.
  • Related? (Score:5, Funny)

    by SkiddyRowe ( 692144 ) on Tuesday February 17, 2004 @12:24PM (#8305401)
    In a related story Booble's index just expanded to a Double-D.

    Little boys across the globe will have sore arms tomorrow.
  • by pacsman ( 629749 ) on Tuesday February 17, 2004 @12:25PM (#8305410)
    I'm waiting for them to come up with a sound search and an image search that look at the subject of the image rather than its file name. After that I'm not sure what's left. Maybe comparative searches for sounds and images, where you can upload a source to compare? Who knows! I hope these guys don't follow the normal path of spiralling into inconsequence after they go public.
    • by misof ( 617420 ) on Tuesday February 17, 2004 @12:51PM (#8305730)

      As far as I know, image search in the way you want it is still only a dream. But. Approx 2 years ago I attended a conference focused (mainly) on theoretical computer science. I saw some researchers (I think they were from Italy, not sure) present an early implementation of their algorithm to look for similar images to the one you select.

      The idea behind it: for a computer, it's not easy to tell what exactly an image contains. E.g. take all those "type the word you see above inside this box to prove you are not a bot" registration forms. If there are no working algorithms to tell "this image contains the word SLASHDOT written in yellow and blue stripes on a pink-dotted black background", the chances of creating an algorithm to tell "this is a game of tennis, it is probably played in the afternoon somewhere in England" are really low.

      However, by using various approaches from CG (comp. graphics), you MAY be able to tell whether two images are similar or not -- as simple examples consider edge detection, color spectrum, etc. As I already mentioned, such algorithms have already been implemented and their success ratio is already reasonably high. I expect that it won't take long until we see them on google.

      Note that using the ideas above you CAN search for an image with a given subject -- it just requires two stages. Suppose you want an image of a sun setting somewhere in the mountains. Stage 1: enter "sunset" into Google's present search engine. You get lots of sunsets, several dogs named Sunset, a Chinese girl Sun Set, etc. Stage 2: select one of the sunsets most resembling the image you want and tell Google (or some other engine) to find all similar images. Et voilà.
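      The colour-spectrum comparison mentioned above can be made concrete with a coarse RGB histogram and an overlap score. A toy sketch (the pixel data and bucket count are made up for illustration):

```python
def color_histogram(pixels, bins=4):
    """Bucket (r, g, b) values (0-255 each) into a coarse bins^3 histogram of frequencies."""
    hist = {}
    for r, g, b in pixels:
        key = (r * bins // 256, g * bins // 256, b * bins // 256)
        hist[key] = hist.get(key, 0) + 1
    return {k: v / len(pixels) for k, v in hist.items()}

def overlap(h1, h2):
    """Histogram intersection: sum of per-bucket minimum frequencies, in [0, 1]."""
    return sum(min(h1.get(k, 0.0), h2.get(k, 0.0)) for k in h1.keys() | h2.keys())

# Two mostly-orange "sunset" images versus a mostly-green "tennis court" image:
sunset_a = [(250, 120, 30)] * 90 + [(40, 40, 80)] * 10
sunset_b = [(240, 110, 20)] * 85 + [(30, 30, 90)] * 15
court    = [(40, 160, 60)] * 100

score_similar   = overlap(color_histogram(sunset_a), color_histogram(sunset_b))
score_different = overlap(color_histogram(sunset_a), color_histogram(court))
```

      The two sunsets score high against each other and near zero against the tennis court -- which is roughly how a "find similar images" second stage can work without ever understanding the subject.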

  • by Boing ( 111813 ) on Tuesday February 17, 2004 @12:25PM (#8305420)
    ...that remarkably, a full five-sixths of the content consisted of different versions of the Google logo.
  • by hanssprudel ( 323035 ) on Tuesday February 17, 2004 @12:25PM (#8305423)
    2^32 = 4.29 x 10^9

    Does it sound to anybody else like the rumours of Google hitting a dead end in the number of index positions for the web search are true? Especially given that it has been more than a year since they announced 4 billion.

    Apparently PageRank assigns an unsigned int to every page as an ID, and their index is so huge they cannot convert it to a 64-bit number. (You wonder why they didn't think of that 2 billion pages ago, when a UTF-8-like solution would still have been possible.)
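    Whether or not the unsigned-int rumour is true (it is pure speculation), the headroom arithmetic is easy to check:

```python
MAX_U32 = 2**32 - 1            # 4,294,967,295 distinct IDs in an unsigned 32-bit int
indexed = 4_285_199_774        # page count shown on Google's home page at the time

headroom = MAX_U32 - indexed
print(f"{headroom:,} IDs left ({headroom / MAX_U32:.3%} of the space)")
```

    Under 10 million IDs of slack on a 4.29-billion-ID space -- about a quarter of one percent -- which is why the rumour keeps coming up.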
    • by JediTrainer ( 314273 ) on Tuesday February 17, 2004 @12:44PM (#8305658)
      That reminds me of an old Dilbert (paraphrasing here, forgive the small errors):

      PHB: We've run out of accounting codes! We can't do anything without one!

      Dilbert: Why not upgrade the system to accept larger codes?

      PHB: To do that we'd need a budget and an accounting code

      Dilbert: Why can't we reuse a code from an old finished project?

      PHB: Strangely enough, we've never finished a project.
    • oh, come on (Score:3, Insightful)

      by ajagci ( 737734 )
      This really isn't a big deal, and it happens all the time when building large systems. I don't know how their system works specifically, but you just change the transient in-memory representations to 64-bit by recompiling, and for the on-disk data you create a new format using 64 bits while still recognizing the old format. That way you have to convert nothing, and you migrate to 64-bit representations as needed. I'm sure Google has dealt with far more complex engineering problems than that.
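      The recognize-old-format-while-writing-new strategy described above can be illustrated with a tagged record layout -- hypothetical field widths and tags, nothing to do with Google's real files:

```python
import struct

OLD_FMT, NEW_FMT = 0, 1   # one-byte version tag per record

def append_record(buf, docid):
    """Append a record; newly written records always use the 64-bit format."""
    return buf + struct.pack("<BQ", NEW_FMT, docid)

def read_records(data):
    """Yield docids, accepting old 32-bit and new 64-bit records alike."""
    offset = 0
    while offset < len(data):
        (tag,) = struct.unpack_from("<B", data, offset)
        offset += 1
        if tag == OLD_FMT:
            (docid,) = struct.unpack_from("<I", data, offset)   # legacy 32-bit
            offset += 4
        else:
            (docid,) = struct.unpack_from("<Q", data, offset)   # new 64-bit
            offset += 8
        yield docid

# A legacy file containing one 32-bit record, then one new 64-bit record appended:
legacy = struct.pack("<BI", OLD_FMT, 4_000_000_000)
data = append_record(legacy, 6_000_000_000)
docids = list(read_records(data))
```

      Old data never needs a one-shot conversion; it simply gets rewritten in the new format whenever a record is next touched.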
  • by Bob McCown ( 8411 ) on Tuesday February 17, 2004 @12:26PM (#8305432) how to get rid of those pseudo-pages in Google. The ones with names like "thing_that_youre_searching_for.html", and all they are is either a page of dead links to crap on ebay, or a "Hey, we do great searches for your stuff".
  • by stratjakt ( 596332 ) on Tuesday February 17, 2004 @12:26PM (#8305438) Journal
    No it doesn't. It represents a pretty reasonable upgrade for Google.

    It's expected as the web grows, so will the search engines.

    This isn't exactly a man-on-the-moon accomplishment.
    • Perhaps you should look up the definition of a 'milestone'. It's a marker by the side of the road, indicating the passing of a cognitive reference point (mile, or other round measure).

      6 billion items is just that, a milestone.
  • is it just me? (Score:5, Interesting)

    by trans_err ( 606306 ) on Tuesday February 17, 2004 @12:26PM (#8305439) Homepage
    Google has become so flooded with internet crap that it's quickly losing its status as a useful tool. Google needs some form of moderation to weed out the superfluous blog entries and advertising fronts so it can once again be as useful as it used to be.
  • by phoxix ( 161744 ) on Tuesday February 17, 2004 @12:27PM (#8305450)
    Search for any normal product name with Google. What did you use to get? Billions of useless sites that cross-link to each other and have the same bloody reviews.

    That seems to have changed!

    I just tried a search on television antennas and for once the results seem relevant.

    Hooray!! Google is back!! :^)

    Sunny Dubey
  • Faked URLs (Score:3, Interesting)

    by Professr3 ( 670356 ) on Tuesday February 17, 2004 @12:27PM (#8305453)
    Surely a lot of these results are for search engines that prey on google. You can't run a lookup on anything these days without finding a link that goes straight to some other search page, filled with ads of course. Is this a problem, and is Google actually counting those pages in the 6 billion figure?


  • Still nok (Score:5, Interesting)

    by mirko ( 198274 ) on Tuesday February 17, 2004 @12:27PM (#8305458) Journal
    • I own a forum on top of which I put a robots.txt file which is supposed to STOP any spider from visiting it.
      I nevertheless find my posts while googling for words they contain.
      How can one explicitly forbid Google from indexing a site?
    • My wife developed 2 web sites which never got indexed even though we submitted them using Google's interface. As they might not be linked from anywhere, I suppose Google just considers that if nobody mentions a site, then the site should not be registered as existing? Does Google think it actually is the web?

    Sorry, I'll keep using Altavista.
    • Re:Still nok (Score:3, Informative)

      by happystink ( 204158 )
      Just check the IPs googlebot comes from and ban those if they're not honoring your robots file; that works fine. They have a very set range they use, anything starting with 216.39 or something, I think.
    • Re:Still nok (Score:3, Informative)

      If googlebot crawls your site, then your robots.txt file is either wrong or in the wrong location. There is no doubt that googlebot follows the robots.txt standard.

      It can take a very long time for a site to be spidered after it is submitted via the "add a url" form.
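      For reference, a robots.txt that forbids all compliant spiders is just two lines, and Python's standard urllib.robotparser can confirm what it blocks (the forum URL here is a made-up example):

```python
import urllib.robotparser

# Served from the *site root* as /robots.txt, this blocks every compliant crawler:
ROBOTS_TXT = """\
User-agent: *
Disallow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

blocked = not rp.can_fetch("Googlebot", "")
print("Googlebot blocked:", blocked)
```

      If a compliant bot still indexes the pages, the usual culprit is the file sitting somewhere other than the site root.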
  • No Good... (Score:5, Interesting)

    by Mork29 ( 682855 ) on Tuesday February 17, 2004 @12:27PM (#8305460) Journal
    I don't want MORE things to search, I want it to return more relevant results. I know that the information I usually search for is out there; the problem is that there's so much chaff that I can't find what I want. No matter what I search for, there are at least 2 or 3 responses related to porn. I understand that there is a lot of variety of porn out there, but come on... Search engines are getting even worse by throwing in search results that are hardly relevant, just because they got paid money by the company. I would even be willing to pay for a "google membership" if they eliminated the advertisers mixed in with search results and maybe gave me another special feature or 2. I'd rather have a search engine that returns just 1 or 2 good results than one that returns 5 good results mixed in with 200 bad ones.
    • Re:No Good... (Score:5, Informative)

      by glinden ( 56181 ) * on Tuesday February 17, 2004 @12:48PM (#8305697) Homepage Journal
      • I want it to return more relevant searches.
      Have you tried some of the Google alternatives? Vivisimo is particularly interesting with its clustering of search results. Teoma is also quite good.
      • My favourite right now is GigaBlast.

        It's still smaller than most other search engines, but it's quite fast, has good relevance, and it indexes stuff in real time.

        Besides, if you don't find what you are looking for, you can do the same search with 5 other search engines just by clicking on links at the bottom of the results page.

        But what I like with Gigablast is that it's always getting better and I feel like part of something that has potential.
  • by LostCluster ( 625375 ) * on Tuesday February 17, 2004 @12:28PM (#8305466)
    Notice that they claim that they search 6 billion items, but the home page only claims that they're "Searching 4,285,199,774 web pages".

    To find the rest, we need to use Google's other services. The image search claims "Searching 880,000,000 images". Google Groups says it's "Searching 845,000,000 messages". Add those to the count and you get 6,010,199,774 items total.
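    The arithmetic checks out against the press release's breakdown:

```python
web_pages = 4_285_199_774   # "Searching 4,285,199,774 web pages" on the home page
images    =   880_000_000   # Google Image Search
messages  =   845_000_000   # Google Groups

total = web_pages + images + messages
print(f"{total:,} items")   # just past the 6 billion mark
```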
  • by jolyonr ( 560227 ) on Tuesday February 17, 2004 @12:29PM (#8305475) Homepage
    I do hope they manage to sort out their recent indexing problems first. For many searches Altavista is now showing far more relevant results than Google - ever since their attempted cull of 'spam' sites last December, which kind of backfired. They have improved things this year, but the quality of their search results is not as good as it was last year. Now they need to figure out how to get rid of all the useless sites that are just shopping directories full of espotting URLs and similar, with no real content. Funnily enough, their anti-spamsite code seemed to actually promote these up the rankings on many search terms, while penalising many sites containing genuine content.

    Many people said that Google were using deliberate tactics to encourage small e-commerce websites to spend more on adwords, but I believe this wasn't deliberate - their index is so big that they simply can't tell what the results of their changes are going to do to the search orders for all the search options that people are going to use - and they simply didn't realise in advance the problems they were going to cause. And google have made efforts to minimise the damage since then, but they still need to do more.

  • by Moderation abuser ( 184013 ) on Tuesday February 17, 2004 @12:29PM (#8305483)
    It just means bigger. There may well be innovation in the technology which allows bigger, that might have been news for nerds, but bigger itself isn't innovative.

  • Thanks (Score:5, Funny)

    by KillerHamster ( 645942 ) on Tuesday February 17, 2004 @12:30PM (#8305491) Homepage
    so much for the link to Google, I never would have found it otherwise.
  • by rqqrtnb ( 753156 ) on Tuesday February 17, 2004 @12:30PM (#8305495)
    I heard that Google is using 4-byte ints for DOCids and they have been running out of indexing space since they are pretty close to 2^32 pages already. Is that true?
    • by kindofblue ( 308225 ) on Tuesday February 17, 2004 @12:50PM (#8305722)
      Not likely. I would imagine that each item has a unique ID, not just each web page, since there needs to be some way to identify what the target of a link is. Just because a link ends in pdf, or jpg, or gif does not mean that it is of that type. The crawlers undoubtedly record the content-type of fetched resources.

      So I would guess that they already use more than 32 bits per item with everything in a single item ID space, or they use 32 bits plus some code indicating the ID space, or perhaps a variable-length code depending on the item type, e.g. like UTF-8. In any case, they should have exceeded 32 bits long ago.

    • Since they said they have 4.28 billion searchable pages in the index, and 32 bit integers have a range of about 4.29 billion possible values, I'd say they're pretty close to having to make another upgrade, unless they decide there will never be more than 4.29 billion pages online that searchers would be interested in.
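      A "UTF-8-like" variable-length ID of the sort speculated about above is usually done as a varint: 7 payload bits per byte, with the high bit meaning "more bytes follow". A sketch (illustrative only -- nothing is publicly known about Google's actual ID encoding):

```python
def encode_varint(n):
    """Encode a non-negative int, 7 bits per byte, MSB set on all but the last byte."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def decode_varint(data):
    """Inverse of encode_varint."""
    n = shift = 0
    for byte in data:
        n |= (byte & 0x7F) << shift
        if not byte & 0x80:
            break
        shift += 7
    return n

# Small legacy IDs stay short; IDs past 2^32 just take more bytes as needed:
short_len = len(encode_varint(300))             # small ID: 2 bytes
big_len   = len(encode_varint(6_000_000_000))   # past 2^32: 5 bytes
```

      The appeal is exactly the one raised upthread: existing small IDs stay compact, and the space grows without a flag-day conversion.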
  • by master_p ( 608214 ) on Tuesday February 17, 2004 @12:31PM (#8305502)
    I am still waiting for a search engine that does topic matching instead of text matching. In other words, I would like the search engine to return a list of URLs on related topics instead of merely matching text. As it is right now, all search engines, including Google, return pages that contain text equal or similar to the input but that might be 98% unrelated. I still can't consider the Internet a library of knowledge because of this.

    For example, if one searches for "TCP/IP tutorials", it would return many unrelated links like posts in newsgroups, college lectures, etc.
    • That's what web directories do. IIRC, google does use directory information, but it is far too hard a problem to automate topic finding without a lot of human editors.
      I saw some research recently at a conference that used complex vocabulary-matching algorithms to automatically extract topics, organise large numbers of documents into topic hierarchies, and present summary reports, but I think that might be a bit too processor-intensive and cutting-edge, even for google.
  • Google Print (Score:5, Informative)

    by blorg ( 726186 ) on Tuesday February 17, 2004 @12:33PM (#8305524)
    "Google's collection of 6 billion items comprises 4.28 billion web pages, 880 million images, 845 million Usenet messages, and a growing collection of book-related information pages."

    I was interested that they mentioned Google Print, which is Google's answer to Amazon's Search Inside feature, but hasn't got much press, and is pretty well hidden in Google itself.

    You can check it out by limiting results to the Google Print site. (Not quite at Amazon-type numbers yet.)

  • Caveat Emptor (Score:5, Insightful)

    by erick99 ( 743982 ) * on Tuesday February 17, 2004 @12:34PM (#8305534)
    Google is my favorite search engine. That said, I hope that most folks understand that just because they "google" something does not make that something a fact. Also, the first few pages of any search can be the result of manipulation to get into the top 10, 20 or 100. It is really, really important to consider the source when doing any kind of research on the 'net. I am homeschooling my 13 year old and having a hell of a time getting these lessons across to him. He can research almost anything in a fraction of a second, but it takes a bit longer to separate the wheat from the chaff.

    Happy Trails!


  • by The One KEA ( 707661 ) on Tuesday February 17, 2004 @12:36PM (#8305563) Journal
    With 6 billion pages indexed and cached, and maybe an average of 50K per page (which is probably pretty conservative - it's probably twice that in some cases), that's nearly 300TB, IICIC!!!

    The hard disk and RAID folks must LOVE Google....
    • by ediron2 ( 246908 ) * on Tuesday February 17, 2004 @01:22PM (#8306058) Journal
      With 6 billion pages indexed and cached, and maybe an average of 50K per page (which is probably pretty conservative - it's probably twice that in some cases), that's nearly 300TB, IICIC!!! The hard disk and RAID folks must LOVE Google....
      300TB... at a buck a gig, that $300,000 sure does look appetizing to all the hard drive and raid makers.


      Hell, even doing 2x or 3x this amount for server-class drives still leaves us talking lame amounts. Just one Hitachi/Sun 9980 Fiber Channel drive costs several times more than this.

      Seriously, everything I've heard indicates that google's methods hinge on a lot of white boxes, each one covering a subset of the google data. Put another way, drivespace per server isn't the limiting factor. A distributed system with several hundred white box servers can't HELP but have tens of terabytes of storage, given drive capacities of tens and hundreds of gigs each.

      A client just bought a Hitachi 9980. As sweet as the Hitachi arrays are, I thought it was the most horrendous waste of cash I'd ever seen, considering this client's more modest needs. THOSE are the customers that raid/drive makers love... all it takes is one IT guy with hardware lust who has the trust of a Fortune-500 firm.

    • I'm a storage engineer, and, to the enterprise, 30TB is peanuts. On a busy day, I have provisioned 30TB in one day to various computers. A typical high-end array (an EMC/Hitachi/HP/etc.) usually tops out at around 150TB, but you can have a bunch of them on the same storage area network.

      The trick is how to back it all up in shortening backup windows. Things like TrueCopy work, but take twice the disk space.
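      For what it's worth, the back-of-envelope cache estimate upthread is easy to redo (the 50 KB-per-page average is only the parent poster's guess):

```python
items          = 6_000_000_000   # indexed items
avg_page_bytes = 50_000          # assumed 50 KB average page size

total_tb = items * avg_page_bytes / 10**12   # decimal terabytes
print(f"{total_tb:,.0f} TB")
```

      Even at several hundred terabytes, the point above stands: spread across hundreds of commodity boxes, per-server disk space isn't the bottleneck.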
  • by codeshack ( 753630 ) on Tuesday February 17, 2004 @12:38PM (#8305585)
    Google's value seems to be in cutting out the crap in its bandwidth... look at their page loads (2.6k plus 8.4k for the image) versus Yahoo! (30k plus images, plus ads). And the less said about AV or Lycos in that regard, the better. Not to mention that Yahoo has basically just co-opted Google, but with more fat around the edges.
  • by Omnifarious ( 11933 ) * on Tuesday February 17, 2004 @12:38PM (#8305589) Homepage Journal

    A press release complete with corporate speak!

    "This innovation represents a milestone for Internet users, enabling quick and easy access to the world's largest collection of online information."

    This is just Google doing what they are already well known for doing best. There's nothing new or 'innovative' here. While it's a fine accomplishment, and I'm pleased Google has indexed that much stuff, it's hardly innovative for them.

  • Is /. pro Google? (Score:5, Informative)

    by dark-br ( 473115 ) on Tuesday February 17, 2004 @12:39PM (#8305598) Homepage

    "Google currently does not allow outsiders to gain access to raw data because of privacy concerns. Searches are logged by time of day, originating I.P. address (information that can be used to link searches to a specific computer), and the sites on which the user clicked. People tell things to search engines that they would never talk about publicly -- Viagra, pregnancy scares, fraud, face lifts. What is interesting in the aggregate can seem an invasion of privacy if narrowed to an individual."

    That's a quote from the NYTimes (free reg. yada yada), also posted as-is here

    If any other site were to track the stuff Google does, /. would be up in arms protesting!

    Please note, this isn't a troll, and I'm not wearing a tin-foil hat (maybe I should?). Imagine the following scenario: a bomb goes off in the US. By tracing searches for "anarchist cookbook" to zipcodes within the area of the bomb blast, the FBI could have access to information that makes TIA look like a better alternative.

    Maybe this isn't such a good feature after all...

    • Re:Is /. pro Google? (Score:3, Interesting)

      by selderrr ( 523988 )
      It all depends on how often they rotate their logs and how long they store their backups. I honestly don't believe they can keep logs longer than a few weeks. Any longer and they'd need a 2nd server farm to store the archive. And no terrorist would go from a google query to a bomb in a few weeks. So I guess you're quite tinfoiled indeed.
  • by leoaugust ( 665240 ) on Tuesday February 17, 2004 @12:42PM (#8305631) Journal

    There is an interesting article in the Washington Post, Search For Tomorrow, on Google and possible AI in search.

    Some excerpts:

    We stumbled around in libraries. We lifted from the World Book Encyclopedia. We paged through the nearly microscopic listings in the heavy green volumes of the Readers' Guide to Periodical Literature. We latched onto hearsay and rumor and the thinly sourced mutterings of people alleged to be experts. We guessed. We conjectured. And then we gave up, consigning ourselves to ignorance.

    Only now in the bright light of the Google Era do we see how dim and gloomy was our pregooglian world. In the distant future, historians will have a common term for the period prior to the appearance of Google: the Dark Ages.

    There have been many fine Internet search engines over the years -- Yahoo!, AltaVista, Lycos, Infoseek, Ask Jeeves and so on -- but Google is the first to become a utility, a basic piece of societal infrastructure like the power grid, sewer lines and the Internet itself.

    • by PollGuy ( 707987 ) on Tuesday February 17, 2004 @01:14PM (#8305965)
      I read that article and really disagreed with the premise. Google is good for indexing what's available online, but only a tiny fraction of recorded human knowledge is available online. I work for a digital libraries project, and after visiting the Joint Conference on Digital Libraries, I can tell you that it's a librarian's wet dream to be in the kind of situation the article describes: where all the information that we have to stumble around libraries and microfiches for is Googlable. But the full texts of almost no books are available. Who's going to scan in millions of volumes? Who's going to pay for that? And most importantly, how are the publishers going to allow it? US and world copyright laws are keeping almost all the content from being eligible for online publication, even if their profit windows are long closed.

      I encourage all of you who are in high school or have college papers to write to look beyond Google the next time you have to research something. You will find about fifty times as much information by looking in published volumes. Here's the technique I always use: visit a University library. Use the electronic card catalog to find a couple of titles that seem to match your topic. They will likely all have similar call numbers. Then, go browse the stacks around those call numbers. That will give you access to all the books available that are related to your topic, and on the next shelf over, are books that are tangentially related. Every time I do that, I find some fascinating angle on the subject matter I never even knew existed. The books you find will have references, and you can follow those to immense amounts of material more specifically related to the angle you've chosen. And none of it is on Google.

      If you have trouble, go ask one of the friendly research librarians. They do a lot more than go around and "shhh!" you.

      Google is a useful tool, but if you want real depth, from people who aren't tech savvy enough to put their full academic works online, the library is the only place to find it. Put in the time!
  • by dark-br ( 473115 ) on Tuesday February 17, 2004 @12:45PM (#8305660) Homepage
    that not everything about Google is so visible.

    One should have a look at Google-Watch (tinfoil? maybe...) but they have some good points:

    According to the DEA, Google is breaking the law

    Google Evil cookie

    We got your number!

    And so on...

    Not to troll but rather a thought. Mod as you wish.

    • One should also have a look at Google-Watch-Watch

      which states

      Meet Daniel Brandt. He is a self-proclaimed public interest activist and the owner of the site. Mr. Brandt founded it after his own site did not get a good Google PageRank.
  • by selderrr ( 523988 ) on Tuesday February 17, 2004 @12:52PM (#8305743) Journal
    I wrote a project for our univ and submitted the url to google about 3 months ago. It still doesn't show up
  • by mugnyte ( 203225 ) * on Tuesday February 17, 2004 @12:59PM (#8305795) Journal

    Too bad the article doesn't mention how google is trying to fight gaming the PageRank system or any of the other problems, like commercials in the results. Still a great search tool though.
  • by GQuon ( 643387 ) on Tuesday February 17, 2004 @01:04PM (#8305856) Journal
    Both Google and Fast have image and picture search. They're all right. But I have had more luck with Lycos.

    What are your experiences?

    Of course, none of these services search in the image data itself. They search filenames, special features (like image size), and the content of the pages they are found in.
    What is the state of searching in images today? Facial recognition systems have existed for a while, but they are made for a specific purpose.

    How long before we can take a picture of that piece of IKEA furniture and find the same model in pictures of celebrity houses, Babylon 5 sets and crime scenes? Or take a picture of that familiar-looking person walking down the street, search for her, and remember that she was in that "reality" series two years ago?
  • by saddino ( 183491 ) on Tuesday February 17, 2004 @01:08PM (#8305893)
    "Google Image Search has been significantly updated," said Sergey Brin, Google co-founder and president of Technology. "We've doubled the index to more than 880 million images, enhanced search quality, and improved the user interface."

    For Mac users, I recommend using Beholder to power your Google image search. Google's minimal UI changes notwithstanding.

    (Mod +1 Self-Promotive)
  • META Tags (Score:3, Insightful)

    by JSkills ( 69686 ) <jskills AT goofball DOT com> on Tuesday February 17, 2004 @01:15PM (#8305980) Homepage Journal
    I thought this re-index would finally pick up our "description" meta tag and actually use it. Nope. Instead we still get the same concatenated list of links from our left nav bar as our description when people find us in google search results. They have a "description" listed, but it looks like something they made up themselves?

    Guess I better call the whaaaaambulance :-(

    BTW - can you believe that a large number of visitors we get come from people who do a search on "" []. Wow.

  • Number One (Score:3, Interesting)

    by Michael.Forman ( 169981 ) * on Tuesday February 17, 2004 @01:24PM (#8306081) Homepage Journal

    The upgrade has been quite good to me! Before the upgrade, a search for my name would rank my website [] many pages down, and even then only secondary links, not the root site. Now I rank number one! It looks like all my slashdot posting has finally paid off.

    Ahh. The small victories of the computer geek.

    Michael. []
  • by warpSpeed ( 67927 ) <> on Tuesday February 17, 2004 @01:50PM (#8306370) Homepage Journal
    When goes live

  • PNG! (Score:5, Interesting)

    by pmsyyz ( 23514 ) on Tuesday February 17, 2004 @02:10PM (#8306629) Homepage Journal
    ... Advanced [] features include search by image size, format (JPEG and/or GIF) ...

    They didn't mention PNG [], the turbo-studly image format which Google Image Search does indeed support.

    It seems they used to have very few PNGs in their database, but now a search for +a filetype:png [] returns 700,000 results!
  • by morcheeba ( 260908 ) * on Tuesday February 17, 2004 @03:12PM (#8307439) Journal
    When you search for "litigious bastards []", you now get a website promoting the googlebomb technique [] listed first. The SCO Group [] used to be listed first, but now it's ranked around 47th. I'm not sure if they are reducing the relevance of link text, or if the ranking dropped because the SCO Group probably doesn't link back to any of the blogs that link to it.
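    For what it's worth, the link-text mechanism a googlebomb exploits can be modeled with a toy scorer: each inbound link "votes" its anchor text onto the target page, so many pages linking with the same phrase push the target up for that query. Purely illustrative — the domains are hypothetical and this is nothing like Google's real ranking:

    ```python
    # Each link is (source, target, anchor text). A googlebomb is many
    # sources all linking to one target with the same anchor phrase.
    links = [
        ("blog1.example", "sco.example", "litigious bastards"),
        ("blog2.example", "sco.example", "litigious bastards"),
        ("news.example",  "sco.example", "SCO Group"),
    ]

    def anchor_score(target, query, links):
        """Count inbound links whose anchor text matches the query."""
        return sum(1 for _, dst, anchor in links
                   if dst == target and anchor.lower() == query.lower())

    print(anchor_score("sco.example", "litigious bastards", links))  # → 2
    print(anchor_score("sco.example", "sco group", links))           # → 1
    ```

    Discounting anchor text (or weighting it by whether the target links back into the community linking to it, as the comment speculates) would defuse this kind of bombing.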
  • by DrSkwid ( 118965 ) on Tuesday February 17, 2004 @03:29PM (#8307679) Homepage Journal

    Google's AdSense service is certainly a winner.

    The ads presented are similar to the paid ads shown on a standard google search, but use the keywords of the page displayed and are also tailored to the country of the viewer via their IP address.

    In this way webmasters can maximize the global potential of their website.

    We have some very highly ranked pages (i.e. top 10) but for UK only content. Now our visitors who find us via search engines and discover we aren't quite what they want are presented with a relevant exit strategy and we get a commission!

    We're getting an average 1.7% click through rate which is translating into a nice tidy sum.

    go google! keep kicking MSN's dirty butt

  • by bonaldi ( 90129 ) on Tuesday February 17, 2004 @03:36PM (#8307771)
    We have batteries and accessories for your Google's Bigger Index. Buy now from our extensive selection of Google's Bigger Index, and when you buy your Google's Bigger Index you get free shipping. Buy now. Google's Bigger Index.

    God, google sucks nowadays.
  • by xihr ( 556141 ) on Tuesday February 17, 2004 @04:41PM (#8308684) Homepage
    Especially with this announcement, I'm starting to get worried about the reliability of Google. More and more groups are taking advantage of quirks in Google's ranking system, as has been mentioned in previous Slashdot articles. By now, if you're searching for anything even a little outside the pop-culture mainstream (where you would be inundated with valid hits), you find tons and tons of automatically generated garbage hits from "providers" who boost their rankings by feeding links to each other. Google is a great service; I hope that in its drive to keep expanding its dominance of the search engine market, it doesn't get complacent and let its search technology become so stale, and so abused, that you need to look elsewhere for reliable results.
    According to Google's cache [] of Google, there used to be only 3,307,998,701 pages in their index, as opposed to the 4,285,199,774 (as of this writing) in the current index.

    It's also interesting to note that both pages carry a copyright date of 2004, which would imply that Google has indexed just under 1 billion new pages in a month and a half.
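    The arithmetic checks out:

    ```python
    # Difference between the cached and current index counts quoted above.
    old_count = 3_307_998_701   # cached Google front page
    new_count = 4_285_199_774   # front page as of this writing
    print(f"{new_count - old_count:,}")  # → 977,201,073 — just under a billion
    ```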
