Google Trains AI To Write Wikipedia Articles (theregister.co.uk)

The Register: A team within Google Brain -- the web giant's crack machine-learning research lab -- has taught software to generate Wikipedia-style articles by summarizing information on web pages... to varying degrees of success. As we all know, the internet is a never-ending pile of articles, social media posts, memes, joy, hate, and blogs. It's impossible to read and keep up with everything. Using AI to tell pictures of dogs and cats apart is cute and all, but if such computers could condense information down into useful snippets, that would really be handy. It's not easy, though. A paper, out last month and just accepted for this year's International Conference on Learning Representations (ICLR) in April, describes just how difficult text summarization really is. A few companies have had a crack at it. Salesforce trained a recurrent neural network with reinforcement learning to take information and retell it in a nutshell, and the results weren't bad.
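To make the "extraction" idea concrete, here is a minimal sketch of extractive summarization by word-frequency scoring. This is not Google's model (the paper pairs extraction with an abstractive neural decoder) and the function names are made up for illustration; it only shows the basic principle of scoring and selecting sentences from source text.

```python
# Minimal extractive-summarization sketch: score each sentence by the
# average corpus frequency of its words, keep the top n in original order.
# Hypothetical helper; NOT the Google Brain model discussed above.
import re
from collections import Counter

def summarize(text, n_sentences=2):
    """Return the n highest-scoring sentences, in their original order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))

    def score(sentence):
        tokens = re.findall(r'[a-z]+', sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return [s for s in sentences if s in top]
```

Real systems replace the frequency heuristic with learned salience models, but the select-then-emit structure is the same.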
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Obligatory (Score:5, Funny)

    by darkain ( 749283 ) on Sunday February 18, 2018 @11:35PM (#56150238) Homepage

    Obligatory XKCD Reference: https://xkcd.com/810/ [xkcd.com]

    • by shanen ( 462549 )

      I think you should have said more to earn the click-through, but he's a sharp cookie (and even answers his email in helpful and constructive ways), so you got my click. But you wouldn't have gotten my mod point, if'n I ever got one to give.

      In his ever insightful way, he implicitly hit on all three of the applications in my initial (and longer) comment on this story.

  • Turf Wars (Score:4, Interesting)

    by Frosty Piss ( 770223 ) * on Sunday February 18, 2018 @11:43PM (#56150268)

    It might be fun to watch the Google Wikipedia AI Bot get into "turf wars" with existing Wikipedia Bots...

  • by russotto ( 537200 ) on Sunday February 18, 2018 @11:52PM (#56150296) Journal
    Can this bot win edit wars, get Wikipedia administrators to side with it, drive n00bs off its pages? Without that, it's not very useful on Wikipedia itself.
  • by Visarga ( 1071662 ) on Monday February 19, 2018 @12:03AM (#56150324)
    Such models have no common sense yet - can't tell if "the use of the umbrella causes the rain or the other way around". They can't think like us, they just copy text and try to hit all the sub-topics with naturally sounding language based on the source material. It's more similar to Google translator than a human Wikipedia editor.
    • by AmiMoJo ( 196126 )

Sounds perfect for Wikipedia. Research and logic are not allowed; all that matters is finding a reliable source that says something and summarising it. The only real skills required are writing summaries and defending the reliability of your source on the talk page.

    • Such models have no common sense yet - can't tell if "the use of the umbrella causes the rain or the other way around". They can't think like us, they just copy text and try to hit all the sub-topics with naturally sounding language based on the source material. It's more similar to Google translator than a human Wikipedia editor.

      Hmm. Don't be so sure. There is a certain sense that embodiment, being in a body, is a necessary part of familiar intelligence. Humans are to some extent the way we are because we

      • by HiThere ( 15173 )

        It's not that direct, and I see no reason that a recurrent network couldn't learn "common sense". (Well, at least as well as people can.) But if you need to include all the information involved in acquiring common sense, then you've radically increased the data requirements, including lots of time-series data sets, etc. The cheap way to do this is probably to embody it in a body with lots of sensors. Now one source of this information would be a fleet of automated cars...

        The problem is that this depends

  • Great. Just what we need. A trained monkey that summarizes the summarizers.

According to the article, "The generated sentences are taken from the earlier extraction phase and aren’t built from scratch, which explains why the structure is pretty repetitive and stiff."

    Mohammad Saleh, co-author of the paper and a software engineer in Google AI’s team, told The Register: “The extraction phase is a bottleneck that determines which parts of the input will be fed to the abstraction stage.
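The quote describes a two-stage pipeline: an extraction stage selects salient passages, and only those reach the abstraction stage. A toy sketch of that structure, with made-up function names and a trivial placeholder (truncation) standing in for the real Transformer-based abstractor, since the point here is only the bottleneck:

```python
# Toy extract-then-abstract pipeline. Stage 1's selection is a crude
# length heuristic and stage 2 is plain truncation -- placeholders for
# the learned salience ranker and neural decoder in the actual paper.
def extract(documents, budget=3):
    # Stage 1: keep the top-scoring passages (here: the longest ones).
    passages = [p for doc in documents for p in doc.split("\n") if p.strip()]
    return sorted(passages, key=len, reverse=True)[:budget]

def abstract(passages, max_words=20):
    # Stage 2: rewrite/compress the extracted text.
    words = " ".join(passages).split()
    return " ".join(words[:max_words])

def summarize(documents):
    # Whatever extract() drops can never reach abstract():
    # that is the bottleneck Saleh describes.
    return abstract(extract(documents))
```

If the extractor misses the key passage, no amount of abstraction can recover it, which is exactly the bottleneck complaint.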
    • A trained monkey that summarizes the summarizers

      Wikipedia editors summarized.

      Also, since it relies on the popularity of the first ten websites on the internet for any particular topic, if those sites aren’t particularly credible, the resulting handiwork probably won’t be very accurate either.

      And since Google essentially has quite some influence on which sites go there... Here we go — Google's very own reality distortion field.

      • Most of the information within a Wikipedia page is spread around on little-visited websites in terrible formatting. It actually takes someone who understands, or wants to understand, the subject to do a half-decent job.

        The fact that it simply takes the summary of the summarizers basically makes it pointless... go back to your Lisp machines, "researchers".

        • by HiThere ( 15173 )

          It makes this particular pre-alpha version pointless. No argument there. But this particular version doesn't even handle things that could easily be handled, like capitalizing sentences. This is clear evidence that it's a pre-alpha version.

          All this tells us is that this is another area they're looking into. It gives essentially no grounds for judging how well the first release will work.

  • Insofar as the google understands that knowledge is power, I'm only surprised that they decided to show their hand. Maybe Wikipedia is less naive and less harmless than I thought? The google perceives an actual threat to their Gawd Profit?

    I've actually been considering this branch of technology in terms of specific applications, such as (1) Writing assistance to help people tell their hidden stories (in interesting ways, of course), (2) Email analysis for asymmetric celebrity email systems (as a dual of the

  • Most articles I find on the net follow a pretty consistent pattern, using one of two variations on that pattern:

    How To Foo a Fizz

    Fizz is very popular these days blah blah. First paragraph says nothing useful at all.

    Fizz is good for blah blah. Second paragraph also pointless.

    Sometimes it helps to Foo your Fizz. Some people like to Foo it because blah blah blah.

    You can Foo your Fizz by:
    Clicking the tiny menu at the bottom
    Choose Preferences
    Select "Foo"

    Now your Fizz is Foo and blah blah blah.

    Share this on Faceb

    • by Tablizer ( 95088 )

      I want my foo fizzed too, where do I sign up?

      • This was about fooing your fizz, not fizzing your foo. Doing the latter results in the unintentional bar side effect and you'll need to baz it back and start again.

    • I wish they would flag bot-written pages accordingly, so I could filter them out in searches without having to fetch them first.

      I've tried to build an interface where a flag is drawn on a map wherever there's a Wikipedia article geolocated there (no matter what language it's in).

      Either with the old wikimedia query interface or SPARQL, there's no way to get rid of the flurry of bot-written pages, which are simply the same ridiculously inaccurate geonames data, formatted into an article just to bump up the n
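For context on the SPARQL approach the commenter mentions: a sketch of building a Wikidata query for articles geolocated near a point, using the `wikibase:around` geospatial service and property P625 (coordinate location). The function name is made up; note there is indeed no predicate that flags an article as bot-written, which is the commenter's complaint.

```python
# Build (but don't send) a Wikidata SPARQL query for articles near a point.
# P625 = "coordinate location"; wikibase:around is Wikidata's geospatial
# search service. Endpoint and syntax follow Wikidata conventions.
ENDPOINT = "https://query.wikidata.org/sparql"

def nearby_articles_query(lat, lon, radius_km=10, limit=50):
    """Return a SPARQL query string for geolocated items with articles."""
    return f"""
    SELECT ?item ?coord ?article WHERE {{
      SERVICE wikibase:around {{
        ?item wdt:P625 ?coord .
        bd:serviceParam wikibase:center "Point({lon} {lat})"^^geo:wktLiteral .
        bd:serviceParam wikibase:radius "{radius_km}" .
      }}
      ?article schema:about ?item .
    }} LIMIT {limit}
    """
```

Filtering out bot-generated geonames stubs would still have to happen client-side, e.g. by heuristics on article length or edit history.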

  • by Michael Woodhams ( 112247 ) on Monday February 19, 2018 @12:42AM (#56150438) Journal

    This is just crying out to be applied to some famous texts to amuse us with what it comes up with.

    The Hunting of the Snark. Fox in Socks. We're Going on a Bear Hunt. Ulysses. 50 Shades of Grey. Titus Andronicus. Sonnet 130. Harry Potter and the Portrait of What Looked Like a Large Pile of Ash. The Magna Carta. Genesis. Terms and Conditions for iTunes.

  • And then just spin your article via WordAI https://wordai.com/ [wordai.com]... Oh, that's what the penguin was for. ::) Grrrr
  • by Anonymous Coward

    than saving it. They are anti-information. I gave up after the page I created for my uncle, who had a platinum record and five gold records, was deleted as not being notable.

  • A Google AI could hardly do worse than a large number of Wikipedia entries.

  • by dunkelfalke ( 91624 ) on Monday February 19, 2018 @04:59AM (#56150898)

    I didn't realise there is a Google Trains subsidiary. But even so, why does it have an AI and why would this AI edit Wikipedia?

  • This guy was already doing that [gizmodo.com] in 2014?
  • Seriously, if this is done right, it might actually give us 'cliff notes' on businesses, their web sites, ideally, even connect the subsidiaries.
    And perhaps they will connect the dots WRT family connections.
