MIT Looks to Give Group Think a Good Name

netbuzz writes "With Friday's opening of the MIT Center for Collective Intelligence, researchers there hope to address this central question: "How can people and computers be connected so that — collectively — they act more intelligently than any individuals, groups, or computers have ever done before?""
  • by chriss ( 26574 ) * <chriss@memomo.net> on Tuesday October 10, 2006 @07:28PM (#16385449) Homepage

    I'll give them the benefit of trying to start a realistic project without any fancy, not-yet-existing technology, and therefore accept that their attempt at collective intelligence is writing a business book [wearesmarter.com] in what they call Wikipedia style, so far with 300 participants. But I believe that books, or the written word in general, are not the right tool for collective intelligence and are in fact stopping us right now from making advances, e.g. in education.

    We've all grown up in a culture dominated by information transfer via text, and we've been trained by our educational system to be producers of text ourselves. I'm doing it on Slashdot right now; everybody communicates via email and IM because that's what we've learned.

    But there has been a lot of research showing that richer media (not Flash, but visualization and simulation) are often much better suited to describing complex subjects. There has long been a trend to stuff textbooks with more graphics, diagrams, and pictures, and educational software with videos, animations, and so on. A picture can say more than a thousand words if placed in the right context.

    Unfortunately, we are not yet trained to use more than a basic hypertext processor for media creation. How many teachers can even draw a diagram? How many websites have useful graphics? If you look at Wikipedia, it's basically a large book with a few photos, even fewer good diagrams, and no simulations or anything of the sort. So when reading, e.g., Wikipedia, it is up to the reader to create an internal visualization and hope it matches the image the authors intended.

    I believe that to make progress in collective intelligence we have to move our media production to match the mental capabilities of humans. Text was very useful when it was the only technically viable solution, but today there are many more and better media types; our culture of media creation is just decades behind the possibilities. YouTube may be a nice step in the right direction, and so is what Lawrence Lessig [lessig.org] said about creating CC-licensed rich Flash content. But starting another wiki-style pseudo-book is not.

    • by EmbeddedJanitor ( 597831 ) on Tuesday October 10, 2006 @07:39PM (#16385559)
      Put any discussion like this in a technical/geek forum and the debate becomes about what kind of technology will make this all work. Sorry folks, even with a perfect UI or whatever, this is fundamentally a people problem. The major limitations are not how to deal with HTML, Flash, IRC, or whatever, but how to deal with clashing egos, language and cultural barriers, and how to arbitrate when experts disagree.
      • Re: (Score:3, Funny)

        by TubeSteak ( 669689 )
        The major limitations are not how to deal with HTML, Flash, IRC, or whatever, but ... how to arbitrate when experts disagree.
        Best of three
        Rock, Paper, Scissors, Shoot!

        (I throw Scissors)
        (Real scissors when experts disagree with me)
      • Well, we could figure that out after we figure out the UI... we will be collectively intelligent then.
      • Re: (Score:3, Interesting)

        by QuantumFTL ( 197300 )
        The major limitations are not how to deal with HTML, Flash, IRC, or whatever, but how to deal with clashing egos, language and cultural barriers, and how to arbitrate when experts disagree.

        True, those things must be dealt with (and are probably the majority of the problem), but the ability to index, search, and automatically extract collective knowledge is important - this is one of the reasons that text is so successful on the web. Besides, open formats ensure our kids will have access to o
      • by zCyl ( 14362 )
        how to arbitrate when experts disagree

        Don't forget the process of identifying and evaluating experts in the first place. This is psychologically challenging because people assigned to judge expertise tend not to be objective, plus there is the circular requirement that they themselves must usually be experts to do the job well.
    • Re: (Score:3, Informative)

      by TubeSteak ( 669689 )

      But there has been a lot of research showing that richer media (not Flash, but visualization and simulation) are often much better suited to describing complex subjects. There has long been a trend to stuff textbooks with more graphics, diagrams, and pictures, and educational software with videos, animations, and so on. A picture can say more than a thousand words if placed in the right context.

      How Stuff Works is a great example of how to mix text and pictures/media.

    • Re: (Score:3, Interesting)

      "But I believe that books or the written word in general is not the right tool for collective intelligence and in fact right now stopping us from making some advances e.g. in education."

      I don't agree that text is useless; sure, it's not the best for every situation, but it is a companion to other styles of rendering and communicating information. This is where I believe FORUMS actually enhance "group think": there are LOTS of gold nuggets about particular sections of some topic in many people's minds that would take
      • by chriss ( 26574 ) *

        I didn't mean to say that text is useless, just insufficient. Forums are actually a good example of this: they work because all the participants are able to create text, and the forum itself provides a minimal structure by displaying the discussion thread. But after the discussion has ended, there remains a lot of redundant information. It is often far more efficient to find and follow a former discussion about the subject you are researching than to rethink it yourself, but it requires you to rethink the

    • I posted this before if you want true AI [geocities.com]. If you want smarter networks: take Digg, then add groups (see the sketch below).
      For example, you have groups for whether you're a Republican or a Democrat.
      Democrats may mod stuff down that Republicans mod up, so each group should have its own scoring section.
      There are a LOT of groups people can be a part of. Even social cliques, if you so desire.
      Eventually, people whose articles get modded up a lot will start out with a degree of moderation already attached to them.
      Or you can search on your favorite authors.


      I hate t
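      For what it's worth, here is a minimal sketch of the group-scoped scoring idea above; the class name, group names, and fields are invented for illustration and have nothing to do with Digg's or Slashdot's actual code:

      from collections import defaultdict

      class GroupScoredStory:
          def __init__(self, title):
              self.title = title
              # one running score per group, e.g. {"republicans": +4, "democrats": -2}
              self.scores = defaultdict(int)

          def moderate(self, group, delta):
              # record a +1/-1 mod from a member of `group`
              self.scores[group] += delta

          def score_for(self, group):
              # the score a reader in `group` sees, ignoring other groups' votes
              return self.scores[group]

      story = GroupScoredStory("Tax cuts announced")
      story.moderate("republicans", +1)
      story.moderate("democrats", -1)
      print(story.score_for("republicans"))  # 1
      print(story.score_for("democrats"))    # -1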
      • You all should know that CrazyJim is a prodigy. Not only does God speak to him, but he also can predict the future with 100% accuracy! For instance, he came up with the idea for a Starsiege Tribes-like game only a year after Tribes was announced. Amazing! He's also invented a comic book about a hero who uses katanas with rockets in the hilt, and you can bet your ass that'll be a top seller in only a few years. Hell, he's designed thousands of video games even if those fat-cats in the game industry won'
    • You make a valid point, but I don't know if I agree that text is inherently inferior.

      Certainly rich media are useful for communicating complex concepts, but the richer the media gets, the more inherently limiting it becomes. Think of it as necessarily increasing "specialisation" with "richness".

      For example, classic "text" (as in "written words") is undoubtedly the most basic communication mechanism - atonal, no inflection, no unambiguous emotional indicators, etc.

      However, it's also:
      1. The most versatile - you
  • Borg (Score:3, Insightful)

    by FhnuZoag ( 875558 ) on Tuesday October 10, 2006 @07:30PM (#16385451)
    Come on, it's freaking obvious. What else can it be?
  • by taking responsibility for the computer and doing the thinking bit. Kittens don't worry, why should I?
  • by susano_otter ( 123650 ) on Tuesday October 10, 2006 @07:31PM (#16385469) Homepage
    ... I think it's going to take something more than MIT smarty men to make committees useful.
  • First there's the obvious issue of "negative intelligence" where plugging stupid people into a system has a detramental effect. How the hell will a system of connected people filter out good input from stupid input? Plugging together more people just seems to make things worse. Decisions by committees are generally worse than those made by individuals.

    Then there is the less obvious issue that intelligence is not uniformly good or bad. What makes sense in one situation (problem, country, culture, etc) does not

    • Consider a brain. An individual neuron doesn't really have much intelligence to speak of, but collectively the neurons are able to give rise to "intelligence." Whether or not we need to give up our individuality for such an endeavor remains to be seen. The Borg come to mind, of course, but then there's Gaia and Galaxia from the Foundation series of books.
      • Another example is an ant nest (or beehive). The nest as a whole behaves as an intelligent and highly resilient organism, while the individual insects are little more than automatons.
        • But they're all well programmed, and as you say are just 'automatons'. Humans generally have a lot more to worry about than ants, and sometimes want to stand out from the crowd (well, some ants do too, if you believe DreamWorks..). You're right that a larger system can achieve a purpose without the individual elements being aware of it, though in the case of committees, I wouldn't always call the results 'intelligent'.
    • Re: (Score:3, Interesting)

      by Lemmy Caution ( 8378 )
      See Ed Hutchins' "Cognition in the Wild," a study of navigation practices on a Naval vessel, for an answer to your question. I am willing to bet that very little of your own "intelligent" behavior is coherent or meaningful outside of a broader system, and that you rely on the cognitive capabilities of many others in order to operate yourself. What distinguishes "good input" from "stupid input" for human activities is usually something which is distributed across minds.
    • Is collective intelligence possible?

      Yes, and it's called Wiki.

      -Rick
    • Re: (Score:2, Funny)

      by ButHed ( 970411 )

      First there's the obvious issue of "negative intelligence" where plugging stupid people into a system has a detramental effect.

      Especially when those morons can't spell.

  • by FordPrfct ( 159271 ) on Tuesday October 10, 2006 @07:33PM (#16385491) Homepage
    "I believe that many people will be doing lots of 'natural experiments' with collective intelligence in the next few years -- with or without us,"

    Because God Knows there haven't [myspace.com] been [secondlife.com] any [slashdot.org] going [wikipedia.com] on [fark.com] so [perlmonks.org] far [youtube.com]...

  • Computers aren't intelligent, they do exactly as they are told by you or someone else, never more or less, which if they were human would make them lemmings, never being able to invent anything on their own or think on their own.
    • Computers aren't intelligent, they do exactly as they are told by you or someone else, never more or less

      Computers might not be intelligent, but your argument for this is complete crap. All you have to do is add a true random number generator (e.g. using radioactive decay) and they no longer "do exactly as they are told by you or someone else".

      And even if your argument were sound, I'd have to completely disagree with you anyway. If I had a machine that did exactly what I told it to do, I think it'd be a

      • If you have a random number generator that uses radioactive decay, the atoms just decay, right? You tell a computer to measure them somehow and generate a seed or number using some algorithm (I don't know exactly how this works). The non-deterministic parts, the atoms that decay, aren't really part of the computer, they're a phenomenon that the computer measures. Well, sometimes there are errors because of solar flares. So then we're talking about an ideal computer that can only be defined in theory and
        • The non-deterministic parts, the atoms that decay, aren't really part of the computer, they're a phenomenon that the computer measures.

          I don't get what you're saying. If I build such a random number generator into the computer it's part of the computer. What you're doing is like saying that no car can travel faster than 30mph. When I find a car that travels faster you simply say "that's a weird kind of engine, I don't consider that engine to be truly part of the car". That's just plain silly. I'm claimin

    • Lemmings don't jump; they are pushed by other lemmings. Large crowds of humans frequently display the same behavior. Human intelligence is no more than a bunch of neurons acting like lemmings.
    • they do exactly as they are told by you or someone else, never more or less
      That explains a lot. Yesterday I told my computer to "go to hell" and this morning it was gone. I only wish it would have used the front door instead of jumping through the window glass.
  • cluster (Score:2, Insightful)

    by brenddie ( 897982 )
    Sometimes it's hard for a group of "smart" people to agree on something. I'd rather have them experiment with a cluster of idiots (available in large quantities; they usually agree on anything as long as it's stupid) and have the system do the opposite of what they choose.
  • The group think on Slashdot is unsurpassed in so many areas...
  • by Yonkeltron ( 720465 ) on Tuesday October 10, 2006 @07:42PM (#16385589) Homepage
    Folks at UCSC have done similar research with this paper [arxiv.org] found on the arXiv. I remember reading it when it first came out, and it's still a pretty neat concept.
  • by Chemisor ( 97276 )
    We all know that when you get lots of people together, the result is less intelligence than the sum of individual intelligences, not more. It is called "groupthink". It is why meetings never result in anything useful. It is why every collectively designed standard is a piece of garbage. Decisions require a decider, period. If the decider is intelligent, you get good decisions. If the decider is stupid, you get stupid decisions. If the decider is the president, well... I'll pass on that one. The point is, th
    • by Quaoar ( 614366 )
      I think the difference is time: It takes a while for a group of people to reach a good consensus on an issue. The question is: How much time is required before the group consensus is better than the individual ideas? Perhaps if a group just hacked at an idea for a longer period of time, they would produce more positive results.
    • by treeves ( 963993 ) on Tuesday October 10, 2006 @08:03PM (#16385823) Homepage Journal
      There is such a thing as groupthink, and committees can make stupid decisions, and meetings do seem to reduce one's intelligence, but...
      You are oversimplifying. There are cases of collective intelligence, and examples of good work coming from groups.
      For example: the group that produced the King James Bible, the Manhattan Project, the Apollo program, GIMPS (not the GNU Image Manipulation Program, but the Great Internet Mersenne Prime Search - OK, it probably doesn't belong in this list, as it's less "intelligence" and more brute force, throwing processing power at a fairly simple but time-consuming problem).
      I'm sure there are other good examples people could give; those are just ones that quickly come to mind. Some have suggested that the human brain is itself a form of collective intelligence: lots of little "subroutines" working together to form a "sum greater than the parts," or something like that. It's been a while, but I've read a couple of the references cited here: http://ericrollins.home.mindspring.com/evoCellACM/index.html [mindspring.com] and they suggest that idea.
      • All the things you cite are projects. Of course people can work on projects or separate things and come up with good ideas.

        What group thinkers often fail to realize is that breakthrough ideas cannot be parallelized. Imagine Albert Einstein consulting on General Relativity. You might get something ridiculous like string theory.
        • by khallow ( 566160 )

          What group thinkers often fail to realize is that breakthrough ideas cannot be parallelized. Imagine Albert Einstein consulting on General Relativity. You might get something ridiculous like string theory.

          That's incorrect in practice. General Relativity was the product of a number of scientists and required work in Riemannian geometry going back half a century. Most scientific progress involves an intense exchange of ideas.
          • You miss the point. I'm not saying things are not built on other things; I'm saying that new breakthroughs are not parallelizable.
    • by N7DR ( 536428 )
      there is no such thing as a "collective intelligence"; group members hinder each other, not help, due to each individual following his own agenda.

      This argument is seductive and almost, but not quite, true.

      The optimum result comes from a system in which an intelligent benign decision-maker obtains and considers input from a small group of equally intelligent people. It's not even hard to prove this mathematically. The trick in real life is that this configuration tends not to be stable: the decision

    • Re: (Score:3, Insightful)

      by nostriluu ( 138310 )
      I know it's fun to think in stereotypes (it certainly makes things simpler), but try reading the book "The Wisdom of Crowds" for a few good examples of how mixed crowds can consistently be smarter than the smartest people.

      The problem of groupthink can be a matter of everyone agreeing on principles, so that no other course seems reasonable, and that is just as prevalent in groups of "smart" people; it takes a mixed group to question assumptions (if people dare speak up against all the "experts").
    • Hence why democracy has never been used for anything significant ...
  • No text, seriously.
  • by Spasmodeus ( 940657 ) on Tuesday October 10, 2006 @07:45PM (#16385621)
    Given how stupidly people behave in groups, it only makes sense to use computers to help them think stupider more quickly.

    Think of how much more rapidly Congress could create worthless legislation and shameful scandals with the assistance of sophisticated Artificial Stupidity algorithms. There's probably also a Beowulf cluster joke in here, somewhere.

  • by MollyB ( 162595 )
    After reading the article, it seems like old wine in new bottles. They have coined "Collective Intelligence" (the group was formerly the Center for Coordination Science), apparently for public relations purposes, and the information given reveals no new technology. The only project mentioned is a business-oriented book written Wikipedia-style.

    Is there more to this than groupware-on-steroids? Would like to hear the possible downside to this approach, since analog people don't mesh seamlessly with digital technology...
    • by interiot ( 50685 )

      analog people don't mesh seamlessly with digital technology

      *blink* *blink*

      I'm trying to figure out what that means, when any kid playing Super Mario will tell you that they mesh just fine.

      As others mention, I think it's more a question of "how can we set up a social order so that we have thousands of people working closely together, but don't end up with decision-by-mob or decision-by-committee?"

      Okay, that's a question that real-life companies and organizations have been asking themselves for at least a

        • The Linux kernel's and Debian's solution to that is modularity. You get smaller groups, where more motivated individuals can express their own wants. That the smaller groups have something in common that allows them to agree on things is what brings the larger group together, but the people on, say, the network drivers in the kernel, or Apache on Debian, do not "necessarily" touch the same code as the sysfs bunch, or the xorg package. You might say their solution is not a better groupthink, but a better di
  • The reason you can't get it right is that it's way too damn complex:

    - Experts on a subject in a large group tend to be minority, not majority
    - If you pick a group entirely of experts, then still the best experts on a problem will be a minority
    - The "stupid" majority silences the "smart" minority
    - Groupthink without some totally innovative mechanism is this: random noise + averaging. Hardly making the end product smarter
    - Groupthink reduces the chances of factual errors (few agents discovering a factual error will
    • I'm not saying that scientists are gods or something, but, to break down what you say point by point:

      "
      - Experts on a subject in a large group tend to be minority, not majority
      - If you pick a group entirely of experts, then still the best experts on a problem will be a minority
      - The "stupid" majority silences the "smart" minority
      - Groupthink without some totally innovative mechanism is this: random noise + averaging. Hardly making the end product smarter
      - Groupthink reduces the chances of factual errors (few agent
  • Here could be their jingle:
    "Group think: The process that brought you the Iraq Quagmire"
    • Well, the group in question was the Republican administration, and technically it was "group no-think" (which came very naturally to them).
  • Hasn't anybody ever heard of Singularity [wikipedia.org]?!?
  • If collective intelligence is so great, why isn't it used to solve this problem?
  • by Anonymous Coward
    I believe Trurl and Klapaucius already established this.
  • by LionMage ( 318500 ) on Tuesday October 10, 2006 @08:37PM (#16386125) Homepage
    Vernor Vinge has often talked and written about intelligence amplification techniques, such as amplifying the intelligence of an individual or harnessing the power of many minds together. In his latest novel, Rainbows End (yes, the apostrophe is omitted intentionally, a fact the author draws attention to multiple times in the book), Vinge postulates one such mechanism for realizing group intelligence. What if an AI that was only moderately smart built up a social network of "experts" and well-placed non-experts, and found ways to essentially get people to do things for it by promising various inducements? The beauty is, an AI would be very adept at tirelessly managing such a network so that each contributor wasn't just contributing to the AI's primary goal, but also contributing to satisfying the promises made to other contributors.

    Furthermore, the participants in this network wouldn't necessarily have to be aware of each other, nor would they need to be aware that they were part of a collective intelligence. People tend to cooperate more easily when they don't realize they're doing it.

    We humans have a lot of core competencies, but neither managing group efforts nor making decisions by committee belongs to this category. Machines, on the other hand, are fantastic at administrative minutiae. Machines are also much better at number crunching in general, something we already rely on them heavily for. The merging of human and machine cultures seems like a logical progression to me, and I don't believe I am drinking Kurzweil's Kool-Aid.
    • What if an AI that was only moderately smart built up a social network of "experts" and well-placed non-experts, and found ways to essentially get people to do things for it by promising various inducements?
      I think you just described middle management.
      • by hany ( 3601 )

        It's good to see that you quoted precisely and left out this part:

        ... but also contributing to satisfying the promises made to other contributors.

        Otherwise I would be compelled to disagree with you. :)

    • by mcrbids ( 148650 )
      What if an AI that was only moderately smart built up a social network of "experts" and well-placed non-experts, and found ways to essentially get people to do things for it by promising various inducements? The beauty is, an AI would be very adept at tirelessly managing such a network so that each contributor wasn't just contributing to the AI's primary goal, but also contributing to satisfying the promises made to other contributors.

      Another poster mentioned middle management, but my first thought is that
      • by khallow ( 566160 )
        I thought the same thing. You don't need AI to get cooperation. The only thing an AI might improve on is to figure out how to use part of that economy to more effectively achieve a particular goal.
        • You don't need AI to get cooperation.
          But if you want cooperation to yield a specific goal or result, then you need some intelligent force guiding things along. A free-for-all economy doesn't optimize for any particular outcome. You still don't need an AI to achieve such a thing, but machines lack many human foibles.
          • by khallow ( 566160 )
            But if you want cooperation to yield a specific goal or result, then you need some intelligent force guiding things along. A free-for-all economy doesn't optimize for any particular outcome. You still don't need an AI to achieve such a thing, but machines lack many human foibles.

            Yea. That's pretty much what I was saying. One thing though to keep in mind is that some "human" foibles are really conflicts of interest that any sentient being would have, if it were in the same role.

      • I suppose you could model such a thing as an economy, but it's a managed economy... because while I mentioned the carrot and omitted mention of the stick, the fact is that for such a system to be efficient, you need a way to discourage non-productive effort. (In this case, "non-productive" from the POV of the AI holding this web of people together.) To get people to cooperate, you have to give them the illusion of choice while still managing to keep them on-task.

        Professor Vinge is very Libertarian, and it
  • If a groupthink system is going to produce very good results, the most important part is that the individuals must not know what the others are thinking or have thought, except perhaps a central mediator or coordinator. Otherwise, people get influenced either by charismatic/intimidating individuals in the group, or the "follow the herd" mentality kicks in once they know a large percentage of the group is thinking in a certain direction. The individuals must work independently or at most in pairs, then som
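    A toy sketch of that "work independently, then let a mediator combine the answers" scheme; using the median as the aggregation rule is my own illustrative choice, not anything from the article:

    import statistics

    def mediator_aggregate(estimates):
        # Each participant submits a number without seeing anyone else's answer,
        # so nobody anchors on the loudest voice; only the mediator sees the full
        # set. The median is one robust way to combine them.
        return statistics.median(estimates)

    # e.g. independent guesses of the number of beans in a jar
    guesses = [850, 1200, 975, 400, 1100, 980, 1025]
    print(mediator_aggregate(guesses))  # 980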
    • 3. A sports team.
      4. A development team.
      5. A band of musicians.
      6. A hunting party (intelligence distributed across species, often)
      7. A film crew.

      The most interesting human activities nowadays are those in which no single human being could possibly understand all the masteries and fluencies that constitute them. The sociologist Emile Durkheim considered it a feature of the modern age: unlike our pre-modern ancestors, who often knew as individuals all the skills and methods by which they maintained their lives (ev
  • "The intelligence of a mob is the intelligence of the stupidest member divided by the number of people in the mob."
  • All the decisions that get made are the result of multiple compromises to deal with each member's agenda. Consequently, committee mentality takes over, so decisions take an overly long time and are all mediocre. Any truly innovative or out-of-the-box thinking never survives the gauntlet.

    The most innovative results actually come from dictatorships, where the few most visionary risk-takers have enough authority to overrule the closed-minded majority.
  • by killermookie ( 708026 ) on Tuesday October 10, 2006 @10:02PM (#16386787) Homepage
    Here's why [despair.com]
  • Collective intelligence is not a new term and is certainly not a new concept. (see: http://www.co-intelligence.org/Collective_Intelligence.html [co-intelligence.org] ) However, studying it and finding ways to actually make it work is a worthwhile challenge.
  • Large group projects break down when the energy spent coordinating the project exceeds the work produced by the project. (Death Spiral, Bureaucracy)

    Modern communication methods do not reduce the effective coordination cost, because while they provide much more information, the quality of the information is worse. (Spam, Interruptions, Distractions)

    The more people you have in a group, the more complex the possible set of relationships is, and the higher the chance there will be a conflict.
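    As a rough illustration of that last point, the number of possible pairwise relationships in a group of n people is n(n-1)/2, so it grows much faster than headcount (the group sizes below are arbitrary examples):

    def pairwise_channels(n):
        # number of distinct two-person relationships in a group of n people
        return n * (n - 1) // 2

    for n in (5, 10, 50, 100):
        print(n, pairwise_channels(n))
    # 5 10
    # 10 45
    # 50 1225
    # 100 4950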
  • With tubes.
  • A single sheep is not a stupid animal. Quite the opposite, in fact. They tend to be fairly clever (for sheep.) Herd a bunch of sheep together, though, and they're collectively far more stupid than any single sheep in the herd would be. Likewise, humans.
  • I think that it will be interesting to see papers coming out of this group. Perhaps they'll all be group written, with the entire front page consumed by authors' names, affiliations, and email addresses.
  • Ayn Rand? (Score:2, Funny)

    by Syncerus ( 213609 )
    "Center for Collective Intelligence"

    I do believe that she's rolling in her grave over this.
  • by zoftie ( 195518 )
    On a side note, artificial intelligence was (and is) a bane of research computing communities. Most people will recommend that you not take up a PhD in AI because, besides being hard, you will have to compromise the scientific integrity of your project and come up with something to show - i.e. not a large but incomplete step toward solving a big domain problem, but demonstrable short-term-gain research...
    2c
  • I have a suspicion that formal collectives of the Wiki style will be discovered to be useful only for some bounded set of problems, like, and for much the same reasons as, massively parallel computing.

    That is to say, there are common, routine problems for which "collective wisdom" is dumb as a sack of rocks, and only a smart individual stands a chance of making headway.
  • ...but I think it bears repeating:

    "Only two things are infinite, the universe and human stupidity, and I'm not sure about the former."
    -- Albert Einstein

    or, if you need it in a handy reminder, there's a nice little poster [despair.com] that expresses approximately the same sentiment.
