Keeping Google's In-house Database Ticking 79

An anonymous reader writes "ZDNet has a short but interesting piece on what Google did with its 12GB database when it became a challenge for the finance department. The database was split into three, says Chris Schulze, technical program manager for Google: one for the current financial planning projections, one for the actual current data from existing HR and general ledger systems, and one storing historic information. The article says Google has been using a variety of products from Hyperion (recently bought by Oracle) to manage its internal financial systems since 2001."
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • "Right now, we're on a not very powerful Windows box," Couglin said. "We definitely are wanting to go to Unix when we go to System 9."

    • Re:WTF WTF? (Score:5, Insightful)

      by pasamio ( 737659 ) on Friday April 27, 2007 @08:22AM (#18899245) Homepage
      It's an advertisement! Read the bottom: "Angus Kidman travelled to Orlando as a guest of Hyperion". The piece mentions Hyperion a dozen times; it's the old trick of substituting press releases written by companies for news.
      • Re:WTF WTF? (Score:5, Insightful)

        by eln ( 21727 ) on Friday April 27, 2007 @08:35AM (#18899385)
        It's not only a press release, it's a very unimpressive one. Hyperion can handle data larger than 12 GB?! Stop the presses! You could manage a company of 50, maybe even 60 employees with that!

        Plus, the "story" says that in order to manage such a large (*cough*) amount of data, the solution was to partition the database into 3 different parts. Now, I can see partitioning it for ease of management along functional areas, but certainly not because it grew to 12 whole gigabytes. If you can't handle chunks of data larger than 4 GB without partitioning it, you're in big trouble.

        I'm guessing the "anonymous reader" who submitted this works for Hyperion.
        • by alxtoth ( 914920 ) on Friday April 27, 2007 @09:11AM (#18899875) Homepage
          A 12 GB _relational_ database falls under "nothing to see, move along". But Essbase http://en.wikipedia.org/wiki/Essbase [wikipedia.org] is doing OLAP http://en.wikipedia.org/wiki/OLAP [wikipedia.org], which means the data is pre-aggregated across multiple _hierarchies_. Those 150 users are likely the top management looking at revenue or reviewing the budget.
          In Open Source land there are similar projects: http://freshmeat.net/search/?q=olap&section=projects [freshmeat.net]
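          Roughly, the pre-aggregation idea looks like this. The sketch below is a toy in Python, not Essbase code: the hierarchies, member names and numbers are all made up, and a real engine is far smarter about which rollups it stores, but it shows why a question like "Ads revenue in Q1" becomes a single lookup once the hierarchies have been rolled up:

          from collections import defaultdict

          # Two invented hierarchies: month -> quarter -> year, product -> line -> all.
          MONTH_TO_QUARTER = {"Jan": "Q1", "Feb": "Q1", "Mar": "Q1", "Apr": "Q2"}
          PRODUCT_TO_LINE = {"WidgetA": "Ads", "WidgetB": "Ads", "WidgetC": "Apps"}

          facts = [  # (month, product, amount) -- made-up leaf-level records
              ("Jan", "WidgetA", 100.0),
              ("Feb", "WidgetB", 50.0),
              ("Apr", "WidgetC", 20.0),
          ]

          aggregates = defaultdict(float)
          for month, product, amount in facts:
              quarter, line = MONTH_TO_QUARTER[month], PRODUCT_TO_LINE[product]
              # Credit the value to every level of both hierarchies up front,
              # so management queries later are plain lookups, not scans.
              for time_level in (month, quarter, "Year"):
                  for product_level in (product, line, "All products"):
                      aggregates[(time_level, product_level)] += amount

          print(aggregates[("Q1", "Ads")])             # 150.0
          print(aggregates[("Year", "All products")])  # 170.0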
          • by jhfry ( 829244 )
            Mod the parent up... after a quick read of the linked Wikipedia article on OLAP, I can see how 12GB of data can be so problematic.

            The problem had nothing to do with the amount of data, but with the amount of RAM and processing power required to support even that small amount of data in an OLAP cube.

            Read the Wikipedia article and learn something before you jump to conclusions!
        • Hyperion can handle data larger than 12 GB?! Stop the presses! You could manage a company of 50, maybe even 60 employees with that!
          Hasn't Google been hiring aggressively recently? I'm pretty sure they have more than 60 employees...
      • Spot on.

        How this advert got onto the main page of Slashdot I'll never know.
      • Oracle has to recoup all that dough it spent on buying Hyperion somehow. I'm surprised it didn't toss in more Oracle references too.
    • Slow news day? This does not bode well for the rest of the day.
    • by beset ( 745752 )
      Now I'm a Unix fanboy, but that's crazy. We run Microsoft Dynamics AX (MSSQL-based) on a 7-year-old server with a 20GB database. The server hasn't been rebooted in nearly 2 years, has 2GB of RAM and quad P3 733s, and absolutely flies.

      Some marketing firm is going to get a big bonus for such a decent slashvertisement.
  • Only 12 GB? (Score:5, Funny)

    by operagost ( 62405 ) on Friday April 27, 2007 @08:27AM (#18899297) Homepage Journal
    12 GB? You call that big? I haven't seen an Exchange mail store that small!
    • by lukas84 ( 912874 )
      What the hell did you do with your front end servers? :)
    • Lamest article (spam) of the day? Is this a very late April Fools joke? I've got tables that are much larger than 12GB in my Oracle DB that perform fine without partitioning. Indexes and good coding.
  • Only 12GB? (Score:5, Insightful)

    by WapoStyle ( 639758 ) on Friday April 27, 2007 @08:28AM (#18899317)
    I don't get it, that doesn't seem like much to me.

    We have many databases that are larger here from MSSQL to Oracle, some around the 600GB mark.

    What's so special about Google's database?
    • by ms1234 ( 211056 ) on Friday April 27, 2007 @08:50AM (#18899569)
      What's so special about Google's database?

      Google.
    • As far as I can tell, the only reason this is news is that it's Google. I manage several very large databases, some in the hundreds of GB. Probably the most interesting of the big ones involves auditing people who are accessing a medical records system. The tricky part isn't managing every command passed by tens of thousands of users, but rather trying to find ways to pull out the needle of bad behavior from the endless normal activities. Was doctor A supposed to look at patient B's record? Is user A so
    • Re:Only 12GB? (Score:4, Informative)

      by alxtoth ( 914920 ) on Friday April 27, 2007 @09:31AM (#18900159) Homepage
      TFA is about a _cube_ of 12 GB, not a _relational_ database. Read my other post http://developers.slashdot.org/comments.pl?sid=232481&threshold=1&commentsort=0&mode=thread&pid=18899385#18899875 [slashdot.org]
      • by alxtoth ( 914920 )
        It is one thing to insert or retrieve a row to/from a 600 GB database, another to do a SUM .. WHERE .. join .. join .. join .. GROUP BY TIME, PRODUCT over "only" 12GB. And since it is called online analytical processing, you would expect results... today. Essbase does several equivalent queries per second.
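        To make the contrast concrete, here is a minimal sketch (Python with the built-in sqlite3 module; the star-schema table and column names are invented for illustration and have nothing to do with the actual system) of the two kinds of work being compared:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        cur = conn.cursor()
        cur.executescript("""
            CREATE TABLE dim_time    (time_id INTEGER PRIMARY KEY, quarter TEXT);
            CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
            CREATE TABLE fact_sales  (time_id INTEGER, product_id INTEGER, amount REAL);
        """)
        cur.executemany("INSERT INTO dim_time VALUES (?, ?)",
                        [(1, "2007-Q1"), (2, "2007-Q2")])
        cur.executemany("INSERT INTO dim_product VALUES (?, ?)",
                        [(1, "Ads"), (2, "Apps")])
        cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                        [(1, 1, 100.0), (1, 2, 40.0), (2, 1, 150.0), (2, 2, 60.0)])

        # OLTP-style work: fetch one row by key -- cheap no matter how big the DB is.
        cur.execute("SELECT amount FROM fact_sales WHERE time_id = 1 AND product_id = 2")
        print(cur.fetchone())  # (40.0,)

        # OLAP-style work: join the dimensions and aggregate the entire fact table.
        # Over a real multi-GB cube this touches orders of magnitude more data.
        cur.execute("""
            SELECT t.quarter, p.category, SUM(f.amount)
            FROM fact_sales f
            JOIN dim_time t    ON t.time_id = f.time_id
            JOIN dim_product p ON p.product_id = f.product_id
            GROUP BY t.quarter, p.category
        """)
        print(cur.fetchall())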
    • Re:Only 12GB? (Score:5, Informative)

      by hemp ( 36945 ) on Friday April 27, 2007 @09:33AM (#18900183) Homepage Journal
      Google's Hyperion database is an OLAP (online analytical processing) database rather than an OLTP (online transaction processing) database. OLAP databases are optimized for ad-hoc analytical queries from humans rather than the standard transactions that most MSSQL and Oracle installations handle. Hyperion incorporates multi-dimensional data hierarchies and other data formats that are difficult, if not impossible, to model in straight SQL (think of a Rubik's cube in 7 dimensions).

      The downside of this approach is that it can require lengthy periods when the cube needs to be recalculated. In Google's case, evidently, this took 48 hours.
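      A back-of-the-envelope way to see why recalculation hurts: with d dimensions, every input cell can roll up into 2^d combinations of "keep the leaf value or collapse to everything", so the pre-computed cube is much bigger than the raw data and has to be rebuilt when the inputs change. Here is a toy Python sketch (the dimensions and figures are invented, and a real engine like Essbase is vastly more selective about which aggregates it materializes):

      from collections import defaultdict
      from itertools import product

      DIMENSIONS = ["quarter", "region", "account"]  # invented example dimensions

      facts = [
          {"quarter": "Q1", "region": "APAC", "account": "Revenue", "amount": 100.0},
          {"quarter": "Q1", "region": "EMEA", "account": "Revenue", "amount": 80.0},
          {"quarter": "Q2", "region": "APAC", "account": "Expense", "amount": 30.0},
      ]

      def recalculate_cube(facts):
          """Materialize every rollup: each dimension is either kept or collapsed to ALL."""
          cube = defaultdict(float)
          for fact in facts:
              for mask in product((True, False), repeat=len(DIMENSIONS)):
                  key = tuple(fact[d] if keep else "ALL"
                              for d, keep in zip(DIMENSIONS, mask))
                  cube[key] += fact["amount"]  # 2**len(DIMENSIONS) cells per fact
          return cube

      cube = recalculate_cube(facts)
      print(cube[("Q1", "ALL", "Revenue")])  # 180.0 -- answered by a single lookup
      # ...but any change to the inputs means running recalculate_cube() again,
      # which is the kind of batch window the article describes taking the system offline.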
    • by qray ( 805206 )
      Shoot, 15 years ago I was working with MS Access databases holding around 750 megs of data. (Not that it was a good idea at the time.) It took quite a while to run Access's repair utility on them.

      I hope they never have to deal with AVI or other similarly large 21-gig files. I guess you could chop those up as well and watch them individually.

      Seriously, the only reason I could see for splitting them up is load balancing. A high-volume transaction rate might force one to do something like that.
      -
      Q
    • One word. Sarbanes-Oxley. I'm sure the query requirements are a b*tch.

      Then again, 12 GB is a walk in the park for Oracle Financials. Again, this is another tech company that ships great products but uses wacky internal setups. Not a good example of eating your own dog food.

  • Is it just me, or is this absolutely silly and pointless? The only thing I see us getting out of this is some "LOL WINDOWS" posts.
  • by kiwimate ( 458274 ) on Friday April 27, 2007 @08:36AM (#18899403) Journal
    This is the bit that gets me in the summary:

    ZDNet has a short but interesting piece

    Interesting to whom, precisely? Hyperion's marketing department? Scant technical details, and really only notable for the link to the photos of Google's new Sydney office, which are kind of interesting, I suppose, in an "ooh wow shiny... okay, what's next?" kind of way.
    • by garcia ( 6573 )
      Interesting to whom, precisely? Hyperion's marketing department?

      Or I suppose to users of Hyperion and the staff that use it daily -- like me. While I don't particularly care for how we are directed to use Hyperion (no ad-hoc reporting, just pre-created queries where we can only change how the end result is reported), in theory it could be an extremely useful tool for many companies.

      It's much easier to learn than what is offered in Access or other reporting tools I have used. The only way I could use
  • Press release (Score:3, Insightful)

    by gtoomey ( 528943 ) on Friday April 27, 2007 @08:42AM (#18899449)
    1. Move on, nothing to see
    2. Sack Zonk (sorry man, you post some good stories, but this one's a stinker)
  • one for the actual current data from existing HR and general ledger systems
    Since when does HR have anything to do with accounting or finances?
  • Also, I think they are talking about AU only. I highly doubt the US only has a 12 GB database.

    • This isn't news, this is embarrassing. Pull the story from the front page, please.
    • by vidarh ( 309115 )
      It's a financial system. 12GB of financial data is quite a bit - it could very well be worldwide.
      • I wonder, then, what they count as "financial data" and "sales data" compared to everyone else. I know companies with 1000 users who have a hell of a lot more data than this.

  • by nathan s ( 719490 ) on Friday April 27, 2007 @08:58AM (#18899703) Homepage
    Obviously that's 12 GOOGLE-Bytes*. Which are far huger than ordinary bytes, or even gigabytes, and therefore much more interesting.

    * Note that GoogleBytes are still in beta and therefore the exact amount of storage in a single GB is yet to be determined.
    • by Fbelch ( 9658 )
      Don't forget GoogleBytes continually increment.... as time goes on! Since it is beta!!
  • Hmm, suddenly I realise what next year's real April 1st product will be.
  • by VE3OGG ( 1034632 ) <VE3OGG@@@rac...ca> on Friday April 27, 2007 @09:28AM (#18900113)
    No no no! It stands for Googlebytes. Each Googlebyte is approximately 1024x10^10,241,024 bytes. So as you can see, a 12 Google Byte database is quite substantial...
    • by Fbelch ( 9658 )
      Don't forget GoogleBytes continually increment.... as time goes on

      Since it is beta!!
      You can't control it

      1024x10
      1025x10
      1026x10
      1027x10
      1028x10
      1029x10
    • How many Libraries of Congress and Olympic Size Swimming Pools is that??
  • by suv4x4 ( 956391 ) on Friday April 27, 2007 @09:36AM (#18900239)
    FTFA:

    The database grew in size to more than 12 gigabytes, and the period restructuring required to ensure accuracy could see the system, which is now used by more than 150 staff, taken offline for two hours at a stretch.

    "Right now, we're on a not very powerful Windows box," Couglin said.


    Uhmm, maybe it's some other Google, right...?

    I can't be reading a press release from Google, the one that has more or less a copy of the whole Internet on its servers, whining about the difficulties of managing a small database on a slow Windows machine.
    • I would have thought this weird, too, until I started working as an IT auditor and saw all manner of crazy old legacy systems supporting the accounting and financial reporting systems of major companies. I've seen major tax expenses totaling millions of dollars tracked through some of the most wicked Excel spreadsheets you could imagine. There was one fairly major software company I worked on ($1 billion in revenue last year) that ran their whole online company store (where 90% of its sales went through) o
  • Oh, my! (Score:4, Informative)

    by Jerky McNaughty ( 1391 ) on Friday April 27, 2007 @09:41AM (#18900311)
    So Google used horizontal partitioning [wikipedia.org] to split load across servers? Wow, that's rocket science. None of us in the database community have thought of doing this before. :-) But, if you want to find some news here, you can. One nice thing that Google did recently was to donate their horizontal partitioning code for Hibernate to the open source community. Hibernate Shards [hibernate.org] definitely needs a lot of work to get it to the point where it does a lot of stuff that people would want, but, hey, release early and often!
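    For anyone who hasn't seen it, the core of horizontal partitioning is just a routing decision made before each query hits a database. The sketch below is a hypothetical Python illustration, not Hibernate Shards (which does this inside the ORM, in Java): the shard names, connection strings and year-based rule are invented, loosely echoing the planning/actuals/history split described in the summary.

    # Route each request to one of several smaller databases instead of one big one.
    SHARD_DSNS = {
        "planning": "db://finance-planning",  # forward-looking projections
        "actuals":  "db://finance-actuals",   # current HR / general-ledger feeds
        "history":  "db://finance-history",   # closed prior periods
    }

    def route(dataset: str, fiscal_year: int, current_year: int = 2007) -> str:
        """Return the connection string for the shard that owns this slice of data."""
        if dataset == "plan":
            return SHARD_DSNS["planning"]
        if fiscal_year < current_year:
            return SHARD_DSNS["history"]
        return SHARD_DSNS["actuals"]

    print(route("actual", 2005))  # db://finance-history
    print(route("plan", 2008))    # db://finance-planning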
  • Their Hyperion Essbase cube was 12 GB? And they had to partition it into 3? That's nothing. We have MS Analysis Services cubes of almost 400 GB (partitioned into 3 separate ones, like Google). If this is supposed to be an advertisement for Hyperion, it's not very impressive. Of course, we are using 3 separate 8-processor Itanium boxes with 64 GB of RAM. That helps some.
  • Everyone here seems to be forgetting that Hyperion is an OLAP cube holding highly aggregated data; consequently it doesn't have to store enormous amounts of data. It probably only holds last year's actuals plus this year's actuals and budget data, which even for a very large company is pretty small. Consequently, 12GB is actually a lot of data for the product. Think about the purpose of the product before picking holes in it. I don't work for Hyperion, but have done a few projects with its Essbase product, which is
  • Well, we all know that Google is feverishly working on their free broadband service [google.com]. They don't have enough time to worry about a measly 12GB database. They are too focused on getting the installation instructions [google.com] correct!
  • I have a spare PC running CentOS and MySQL that can handle those troublesome 12GBs like a chainsaw cutting through butter.

    Call me and I can drive over to the plex today and get it running over the weekend for a very reasonable fee...

    This must be a joke, right? Google has problems with 12GB of data?

    Someone please tell me it's at least 12 TB w/ thousands of concurrent users...
  • What everyone needs to realize is that this is financial data. I worked with a database of over 4GB of nothing but sales orders for Cisco, and that was only for one technology group. This translates to a lot of money, and keeping up the integrity, security and performance of these kinds of databases is very, very important and very stressful due to the responsibility. Also, for financial data, correctness is more important than mondo fast algorithms that add complexity. Divide the 12GB by average value o

"Virtual" means never knowing where your next byte is coming from.

Working...