The World Wide Computer, Monopolies and Control

Ian Lamont writes "Nick Carr has generated a lot of discussion following his recent comments about the IT department fading away, but there are several other points he is trying to make about the rise of utility computing. He believes that the Web has evolved into a massive, programmable computer (the "World Wide Computer") that essentially lets any person or organization customize it to meet their needs. This relates to another trend he sees — a shift toward centralization. Carr draws interesting parallels to the rise of electricity suppliers during the Industrial Revolution. He says in a book excerpt printed on his blog that while decentralized technologies — the PC, Internet, etc. — can empower individuals, institutions have proven to be quite skilled at reestablishing control. 'Even though the Internet still has no center, technically speaking, control can now be wielded, through software code, from anywhere. What's different, in comparison to the physical world, is that acts of control become harder to detect and those wielding control more difficult to discern.'"
  • by monopole ( 44023 ) on Thursday January 17, 2008 @10:02PM (#22088770)
    The definition of a real utility computing environment is one where somebody can hold a coup d'etat in it and make it stick in the real world.
  • by MightyMartian ( 840721 ) on Thursday January 17, 2008 @10:23PM (#22088918) Journal
    There may be little you can't do, but there's a lot you can't do well, reliably or securely. As cool as Google Apps may be, you're essentially trusting your data integrity and security to an outside company.

    The web, as it currently exists, is a really shitty software platform. Web 2.0, if it meaningfully exists at all, is built on some rather horrible hacks that break down the server-client wall, and for certain kinds of limited applications that's fine, but building substantial applications, like accounting and financial software, in AJAX would be an unbelievably difficult job, and a rather hard one to justify.

    I think this guy is, as with his last great proclamation, overstating his case. Yes, in certain arenas, like home and small business email, apps like GMail certainly can play a role, but I can tell you right now that the business I am in, which deals with confidential information, will be waiting a long time before farming out this sort of thing.
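A point of reference for the "server-client wall" complaint above: every interaction in an AJAX-style app is a network round trip. Here is a minimal sketch of that round trip, with Python standing in for both the browser-side JavaScript and the web server; the `/balance` endpoint and all names are invented for the example.

```python
# Minimal sketch of the request/response round trip underneath every
# AJAX-style app: the "server-client wall" means each state change is a
# network hop. Python stands in for both the browser-side JavaScript
# and the web server; the /balance endpoint is invented for the example.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class BalanceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The server-side half: render application state as JSON on demand.
        body = json.dumps({"account": "demo", "balance": 42}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def fetch_balance(port: int) -> dict:
    # The client-side half: one round trip per query, XMLHttpRequest-style.
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/balance") as resp:
        return json.loads(resp.read())

server = HTTPServer(("127.0.0.1", 0), BalanceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
result = fetch_balance(server.server_address[1])
print(result)  # {'account': 'demo', 'balance': 42}
server.shutdown()
```

The point of the toy: once state lives on the server, even reading a single number costs a serialized request, a parse, and a network wait, which is why rich applications like accounting suites get awkward fast in this model.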
  • by Tancred ( 3904 ) on Thursday January 17, 2008 @10:30PM (#22088968)
Here's a classic sci-fi (extremely) short story on the topic of an immense computer: Fredric Brown's "Answer".
  • by Duncan3 ( 10537 ) on Thursday January 17, 2008 @10:38PM (#22089030) Homepage
    So we're back to the point in the cycle where centralized mainframes you rent time on rule the world again. Can you guess what happens next? Privacy problems, reliability problems, outages, and we all go back to personal systems again.

    Old is new again.
  • by MightyMartian ( 840721 ) on Friday January 18, 2008 @12:49AM (#22089884) Journal
    And when Google's document store gets hacked, and all your documents and private communications are compromised, and someone asks you "What do you mean, you didn't know how Google handled backups and security?", I hope to be there to watch as you melt.
  • by presidenteloco ( 659168 ) on Friday January 18, 2008 @01:11AM (#22089978)
    Availability of secure P2P protocols, and creation of a location-free, fragmented
    encrypted redundant moving storage virtual layer on top of lower-level net
    protocols, could retain freedom from monopoly control of information
    and services.

    But watch for the predictable attempts to get legislation against such
    "nebulous dark-matter middle-nets". Watch for fear arguments to be used
    as justification. Watch for increasingly asymmetric ISP plans (download good,
    upload bad), and protocol-based throttling or filtering, by the pipe providers.

These are all very predictable reactions by "the man". It goes without saying that they must be resisted, in law, in political discourse, and through economic boycott, or circumvented by all the ingenious tricks necessary.

    P.S. I've been predicting this inversion of the intranet to where it is the "extranet",
    and inversion of where we would trust our data (What, you kept your data on
    your own servers, and not the massively redundant global storage net?
    Are you insane??) for a long time now, but nobody listens to me.
    (Brain the size of a planet, and they've got me parking cars...)
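The "fragmented, encrypted, redundant" storage layer imagined above can be illustrated in miniature. This is a toy sketch only: it uses a hash-derived XOR pad and plain replication where a real middle-net would use a vetted cipher (e.g. AES) and erasure coding, and every function and parameter name is invented for the example.

```python
# Toy sketch of a fragmented, encrypted, redundant storage layer.
# NOT real crypto: the hash-based keystream and simple replication are
# stand-ins for a proper cipher and erasure coding.
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random pad from the key (illustrative only).
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    # XOR with the pad; applying it twice decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def fragment(data: bytes, n: int) -> list:
    # Split into n roughly equal pieces.
    size = -(-len(data) // n)  # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def store(data: bytes, key: bytes, nodes: int, copies: int) -> dict:
    # Encrypt, fragment, then scatter each fragment onto several nodes.
    frags = fragment(encrypt(key, data), 4)
    placement = {i: [] for i in range(nodes)}
    for idx, frag in enumerate(frags):
        for c in range(copies):
            placement[(idx + c) % nodes].append((idx, frag))
    return placement

def retrieve(placement: dict, key: bytes, alive: set) -> bytes:
    # Reassemble from surviving nodes only; redundancy covers the rest.
    found = {}
    for node, frags in placement.items():
        if node in alive:
            for idx, frag in frags:
                found[idx] = frag
    cipher = b"".join(found[i] for i in sorted(found))
    return encrypt(key, cipher)

key = secrets.token_bytes(16)
nodes = store(b"no single point of control", key, nodes=5, copies=2)
# Lose two of the five nodes; the data still comes back.
print(retrieve(nodes, key, alive={0, 2, 4}))
```

No single node holds the whole document, no node holds plaintext, and any two nodes can vanish without losing data, which is exactly the property that makes such a layer hard to control or subpoena at a single point.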
  • by Tarwn ( 458323 ) on Friday January 18, 2008 @07:50AM (#22091438) Homepage
I'm not sure the idea of splitting the IT responsibilities into other departments is insane (bear with me a moment). Consider the current situation of an IT department that is a separate department, usually with their own goals, budget, etc. This department is notable for not always getting new PCs as fast as they are wanted, for not implementing software changes immediately when requested, and for demanding additional money when deploying technologies like video conferencing so they can upgrade the internet connection or some other foolishness. Oh, and they always act like they are busy, but we all know the systems hardly have problems.

Unfortunately the suggested solution, of splitting IT into the surrounding departments, is going to look like a good idea to many director-level people. It will (in their minds) ensure immediate service for new equipment, allow a higher level of control over the purchase of items they think are unrelated, and allow them to have changes made to software at a higher level of priority. The outside manager or director generally only sees what we are not supplying, not what we are. If we are good at our jobs, but have poor systems, they don't generally realize just how bad things are because we are keeping the system limping along. A lot of our expenditures are due to reasons they just don't understand. If we buy a 48-port managed switch with fibre but were rolled under one of these departments, it could very easily turn into a refurbed 48-port hub off eBay, since they both have lots of connections and that's all you really need.

    What about change control? They don't see it. Time for testing? That will get reduced further. Developing in test environments? But those are good machines, they should be used for something important. Oh, and why do you need fancy development tools? Joe down the way made an application to do that in 45 minutes using MS Access, but it takes days in this fancy technology, we'll just use MS Access instead.

The whole idea of splitting IT up into several departments is like a startup company (non-tech) in reverse. Money will go to IT-related resources last; it will be in no one's interest to spend the time, resources, or money to ensure there is a strong infrastructure capable of growth; in-house software development will be on the fly and likely based on technologies like MS Access. On top of that, larger initiatives like data warehousing and global data management will be left to whoever wants to pay for the whiz-bang consultant to come in and do it their way. Backups, email, directory services, all of this will end up on someone's plate who will forever be trying to drop it off on someone else.

I realize that the author of that article was likely thinking that IT resources would not need to deal with most of these things in the future, and from that I can only assume he has not worked in an IT environment in quite a while. While technologies are available to streamline our jobs and allow us to grow the department(s) more slowly than in the past, splitting the department so that no one has these responsibilities is going to have one positive thing going for it: the consultants who come in to clean up the mess after the takeover are going to be set for a good long time.
