Text Mining the New York Times

Roland Piquepaille writes "Text mining is a computer technique for extracting useful information from unstructured text, and it's a difficult task. But now, using a relatively new method named topic modeling, computer scientists from the University of California, Irvine (UCI) have analyzed 330,000 stories published by the New York Times between 2000 and 2002 in just a few hours. They were able to automatically isolate topics such as the Tour de France, prices of apartments in Brooklyn, or dinosaur bones. This technique could soon be used not only by homeland security experts and librarians, but also by physicians, lawyers, real estate agents, and even by you. Read more for additional details and a graph showing how the researchers discovered links between topics and people."
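As a rough illustration of the kind of topic extraction the summary describes, here is a minimal sketch using the gensim library rather than the UCI researchers' own code; the toy corpus, topic count and training passes are invented for the example, and a real run would use hundreds of thousands of documents.

from gensim import corpora, models

# Tiny invented corpus: two "Tour de France"-flavored documents and two
# "Brooklyn apartments"-flavored documents, already tokenized and lowercased.
docs = [
    "rider bike race lance armstrong jan ullrich".split(),
    "apartment brooklyn price rent broker market".split(),
    "race stage rider mountain armstrong yellow jersey".split(),
    "rent price apartment market buyer brooklyn".split(),
]

dictionary = corpora.Dictionary(docs)            # map each word to an integer id
corpus = [dictionary.doc2bow(d) for d in docs]   # bag-of-words vectors

# Fit a two-topic LDA model on the toy corpus.
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                      passes=50, random_state=0)

# Each topic comes back as a weighted word list; the top words are what let a
# human recognize a topic as, say, the Tour de France.
for topic_id, words in lda.print_topics(num_words=5):
    print(topic_id, words)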
  • Homeland security (Score:4, Insightful)

    by Anonymous Coward on Saturday July 29, 2006 @05:42AM (#15805041)
    For every time homeland security is mentioned as benefiting from a new technology, someone should get a swift kick to the nuts. Goddamn, there is more than just terrorism in this world.
    • by mrogers ( 85392 )
      But the pretty graph [primidi.com] clearly shows that some guy called MOHAMMED is the missing link between Religion and Terrorism - without this new technology, homeland security experts might have been kept in the dark about that.

      The graph also shows links between US_Military and AL_QAEDA, and between ARIEL_SHARON and Mid_East_Conflict. If only they'd had this technology when they were trying to justify the invasion of Iraq.

      "Look, Saddam Hussein has links to Al Qaeda! You can see it on the graph!"

      "Uh, Mister Vice-Pr

      • "Uh, Mister Vice-President, this graph is based on press conferences in which you repeatedly mentioned Saddam Hussein and Al Qaeda in the same breath. It may not have any statistical value."

        "Shut up and bring me my war britches, dimwit, the computer never lies!"

        "... That's my job!"

      • I doubt the "real" terrorists would speak in regular english, either. First, different languages have different grammatical rules and idioms. Secondly, they wouldn't talk openly about "BOMBING THE WHITEHOUSE", they'd probably say it more discretely in a semi-sophisticated code. This will just be another arms race--a [tele]communinications one--and civilian casualties will be the main results.

        Unless I'm wrong, of course, and terrorists write like NY Times writers.

    • Good sir, I wish I had some mod points left for you

      Seriously, every time you mention homeland security, every time you watch a special report on terrorism on your local current affairs program - that means the terrorists are winning.

      ...You don't support terrorism now do you?

    • by 1u3hr ( 530656 )
      The compulsory "Homeland Security" link makes me think of the story about a drunk who was crawling about on the sidewalk under a lamppost late one night. A Police Officer came up to him and inquired, "What are you doing?"
      The drunk replied, "I'm looking for my car keys."
      The Officer looked around in the lamplight, then asked the drunk, "I don't see any car keys. Are you sure you lost them here?"
      The drunk replied, "No, I lost them over there", and pointed to an area of the sidewalk deep in shadow.
      The police officer asked, "Then why are you looking for them over here?" The drunk replied, "Because the light is much better here."
      • Searching for terrorists by datamining from the comfort of your cubicle is about as likely to be successful.

        Unless you have a metric crapload of intercepted communications to sort through for information that might be useful. Especially since the NSA is listening to everything.

        Remember that the darling of the Left, John Kerry, insisted that terrorism was a law enforcement problem, not a military problem. A large part of law enforcement is digging through all available information from the comfort of your desk, rather than carpet-bombing potential suspects.
        • A large part of law enforcement is digging through all available information from the comfort of your desk, rather than carpet-bombing potential suspects

          Did I suggest carpet bombing as an alternative? I think legwork is the only likely method. Real terrorists don't live their lives online; you might fill up Gitmo with idiots who spouted "Jihad" on some website. Osama gave up using his satellite phone years ago; they're well aware the NSA is snooping on every form of telephone or Internet communication. My

      • Homeland Aftosa (Score:5, Interesting)

        by Lord Balto ( 973273 ) on Saturday July 29, 2006 @08:43AM (#15805465)
        As William Burroughs suggested, the goal of the Aftosa Commission is not to rid the world of bovine aftosa. Its goal is to justify its existence and continue to enlarge its budget and its manpower until the world understands that bovine aftosa is such a critical issue that there needs to be a cabinet-level Office of Bovine Aftosa with a budget surpassed only by that of the military. No one in government ever does anything that could conceivably put them out of business. This is why relying on the military and the "defense" contractors to bring peace is such a dangerous activity.
    Yeah, and what's up with him mentioning homeland security in lowercase, as if it's already the fabric of our society, like the state department or some such. Creepy...
    • Oh, so what about that. Five words: defocused artificial large scale understanding.
    • Re:Homeland security (Score:1, Informative)

      by Anonymous Coward
      We did this 2 years ago and filed patents. We have a real-time implementation at http://wizag.com/ [wizag.com] in the form of TopicClouds and TopicMaps. It is applied to hundreds of thousands of news stories and blogs (including Slashdot). Both the nodes and the links in the TopicMaps are clickable. Once you create an account, the system creates a personalized TopicCloud for each user.
  • by stimpleton ( 732392 ) on Saturday July 29, 2006 @05:45AM (#15805050)
    For example, the model generated a list of words that included "rider," "bike," "race," "Lance Armstrong" and "Jan Ullrich."

    From this, researchers were easily able to identify that topic as the Tour de France.


    I imagine "testosterone", "doping", and "supportive mother", would have found the Tour de France topic even faster.
  • Funny (Score:1, Insightful)

    by vllbs ( 991844 )
    A relatively new method? A difficult task? Sorry, but these are almost laughable, even for a poor Spaniard like me.
    • Re:Funny (Score:2, Funny)

      by kfg ( 145172 ) *
      You'll have to forgive them, these are computer scientists. Until now they have been completely unaware that natural language has grammar, syntax and that even individual words have structure and meaning; despite the complete absence of a metatag blizzard to inform them that [color]red is a [/color].

      KFG
        I studied Computer Science, ergo I suppose I'm a computer scientist too. So save your ironic comments for the less experienced souls around you (if any).
        • by tsa ( 15680 )
          I found it funny. And I'm a nerd (just like everybody else here).
        • Please forgive if I have given offense. The jibe was directed only against those specific computer scientists who can use the phrase "data mining unstructured text" without bursting into fits of giggles.

          KFG
      Shit! I'm sorry, I understood your comment the wrong way. Now I'm remembering the first time I chatted in English, when I was politely redirected to remedial classes at primary school... maybe hand in hand with the f. "text mining program".
      I think they developed this technology trying to find a link between computer scientists and girls. Sadly, they were not successful.
  • Mining? (Score:5, Funny)

    by Eudial ( 590661 ) on Saturday July 29, 2006 @05:53AM (#15805065)
    "Home atlast after another long day in the salt^H^H^H^Htext mines.

    We lost four more miners today, bless their souls. The foreman kept insisting they'd dig another tunnel between bicycling and Tour de France. They told him it was too dangerous, but no... he never listens. One of these days... They've got us working 20 hour shifts in the abyss that is the text mines, barely pay us enough to afford the rent, I'm telling you, one of these days..."
  • by liuyunn ( 988682 ) on Saturday July 29, 2006 @05:58AM (#15805074)
    If this can be implemented into research in academia, is searching through decades of articles and abstracts finally going to be more efficient? Provided that they are electronic of course. Poor citations, inaccurate keyword tags, obscure sources...ahh reminds me of grad school.
  • by Anonymous Coward
    But does it also ditch the ads?
  • by Uruviel ( 772554 ) on Saturday July 29, 2006 @06:12AM (#15805098) Homepage
    I thought this was fairly easy to do with a Support Vector Machine (http://en.wikipedia.org/wiki/Support_Vector_Machine), or even simple decision trees by setting the threshold for certain words (http://en.wikipedia.org/wiki/Decision_tree).
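    For comparison, here is a rough sketch of the simpler supervised route mentioned above: a linear SVM text classifier over bag-of-words features, using scikit-learn. The tiny labeled corpus is invented for illustration; topic modeling differs in that it needs no labels at all.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Invented training set with hand-assigned labels (the supervised part).
train_texts = [
    "rider wins mountain stage of the race",
    "yellow jersey changes hands after the time trial",
    "apartment prices in brooklyn keep climbing",
    "rents rise as buyers leave the market",
]
train_labels = ["cycling", "cycling", "real_estate", "real_estate"]

# TF-IDF features feeding a linear SVM.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(train_texts, train_labels)

# Likely predicts 'cycling' here, given the overlapping vocabulary.
print(clf.predict(["another sprinter takes the final stage"]))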
    • I don't know about that, but this looks like it does something akin to Latent Semantic Analysis [wikipedia.org]

      I'm not entirely sure what the novel component of this is. I think it might be the time it takes to process the bodies of text (I should RTF papers to find out, I suppose). Latent Semantic Analysis is really computationally expensive.
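      For reference, a small sketch of latent semantic analysis as it is commonly implemented: TF-IDF features followed by a truncated SVD (scikit-learn's usual LSA recipe). The corpus and the number of components are invented for the example; the SVD step is the part that gets expensive on large collections.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# Invented toy corpus.
docs = [
    "rider bike race armstrong ullrich",
    "apartment brooklyn price rent broker",
    "race stage rider mountain armstrong",
    "rent price apartment market buyer",
]

X = TfidfVectorizer().fit_transform(docs)   # sparse term-document matrix

# Truncated SVD projects each document into a low-dimensional latent space.
lsa = TruncatedSVD(n_components=2, random_state=0)
doc_vecs = lsa.fit_transform(X)

print(doc_vecs.round(2))   # similar documents end up with similar coordinates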
    • by Anonymous Coward on Saturday July 29, 2006 @07:22AM (#15805231)
      Text modeling is mostly viewed as an unsupervised machine learning problem (nobody is going to go through thousands of articles and tag each and every word, i.e. assign a topic to it). Support vector machines, however, are very good classifiers for supervised data, e.g. digit recognition (you train your SVM on a sample of pictures of 9s tagged as 9s, and the SVM should then return the correct class for a new digit).

      The problem with this new method (called LDA, introduced by Blei, Ng and Jordan in 2003) is (among other issues) the so-called inference step, as it is analytically intractable. Blei et al. solved this by means of variational methods, i.e. simplifying the model, motivated by averaging-out phenomena. Another method (which as far as I understand was applied by Steyvers) is sampling, in this case Gibbs sampling. Usually the variational methods are superior to sampling approaches, as one needs quite a lot of samples for the whole thing to converge.
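      To make the Gibbs sampling route concrete, here is a minimal collapsed Gibbs sampler for LDA in plain numpy. It is only a rough sketch of the general technique, not the researchers' code; the toy corpus, hyperparameters and iteration count are invented.

import numpy as np

def lda_gibbs(docs, n_topics, n_vocab, alpha=0.1, beta=0.01, n_iters=200, seed=0):
    """Collapsed Gibbs sampling for LDA over docs given as lists of word ids."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), n_topics))   # doc-topic counts
    nkw = np.zeros((n_topics, n_vocab))     # topic-word counts
    nk = np.zeros(n_topics)                 # tokens per topic
    z = []                                  # topic assignment of every token

    # Random initialization of topic assignments.
    for d, doc in enumerate(docs):
        z_d = rng.integers(n_topics, size=len(doc))
        z.append(z_d)
        for w, k in zip(doc, z_d):
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # Remove the current assignment from the counts...
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # ...sample a new topic from the full conditional p(z = k | rest)...
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + n_vocab * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                # ...and put the token back under its new topic.
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

    # Smoothed point estimates: topic-word (phi) and doc-topic (theta) distributions.
    phi = (nkw + beta) / (nkw.sum(axis=1, keepdims=True) + n_vocab * beta)
    theta = (ndk + alpha) / (ndk.sum(axis=1, keepdims=True) + n_topics * alpha)
    return phi, theta

# Toy usage: two tiny "documents" over a 5-word vocabulary (word ids 0-4).
phi, theta = lda_gibbs([[0, 1, 2, 0, 1], [3, 4, 3, 4, 2]], n_topics=2, n_vocab=5)
print(theta)   # per-document topic mixtures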
        Right. And unsupervised learning can be useful in some areas. Does anybody know how Google News works? It seems to work reasonably well, and seems to be solving the same problem.

        Also note that for most purposes, however, classification is becoming less of a big deal. Read Clay Shirky's article [shirky.com] to understand why. Shirky talks about ontologies specifically, but the gist is the same -- basically, tagging each and every word isn't as crazy an idea if the end goal is just "I want to find something related

        Well, even in variational inference, you have the problem of convergence. You have a huge EM algorithm and you're trying to maximize the complete likelihood of the data you have. Gibbs sampling doesn't have the same nice properties, but usually works pretty well in practice. Gibbs sampling is nice because it's usually easier to do, requires less memory (in variational methods you basically have to create a new probability model where everything is decoupled), and it's far easier to debug.
  • by SirStanley ( 95545 ) on Saturday July 29, 2006 @06:21AM (#15805111) Homepage
    You mean they can group data by topic? Like clusty.com does when you search?

    I just read the stub of the article... because it seemed like it does exactly what clusty does and I don't care to read anymore.
  • by tompee ( 967105 ) on Saturday July 29, 2006 @06:21AM (#15805113)
    Google buys the University of California computer science school
  • "This research work has been presented by Newman and his colleagues during the IEEE Intelligence and Security Informatics Conference" .... Hello Newman....
    Has anyone realised that English is one of the most screwed up, stupid languages ever created? It's just been stretched and modified in any way possible, and some aspects of it are practically useless. Maybe the world would be better off inventing a better language than analysing a horrible one :P
    • by rgravina ( 520410 ) on Saturday July 29, 2006 @06:48AM (#15805166)
      Yeah I agree :). Linguists have tried to develop new international languages to replace English (e.g. Esperanto) that have less cruft and exceptions, but unfortunately very few people bother with them in practice, and keep using English :).

      Wouldn't it be cool if we all spoke a language which was expressive but at the same time had a machine-parsable grammar and absolutely no silly exceptions or odd concepts like the masculine/feminine nouns that French and Italian have?

      I'm no expert on this, but I think linguists will tell you that we tend to modify/evolve language to suit our culture and circumstances, so any designed language (and even existing natural ones) will be modified into many different dialects as it is used by various cultures around the world.

      Still yeah, I am glad I'm a native speaker of English since it would be a pain to learn as a second language! Imagine all the special cases you'd have to memorise! Spelling, grammar exceptions that may not fit the definition you learned but native speakers use anyway etc.
      • Linguists have tried to develop new international languages to replace English (e.g. Esperanto)...

        Actually, Esperanto was created by an ophthalmologist [wikipedia.org]. In general, linguists don't attempt to replace languages with "better" ones. They recognize that linguistic change is natural and unavoidable. And, like other sciences, linguistics is largely occupied with observing and recording phenomena. They do not, as a rule, take a prescriptive point of view.

        ...we tend to modify/evolve language to suit our culture and circumstances...
          Some people assume that reading and parsing are the difficult part for computers, which is understandable. It's not that easy for us. The study of words and language is a major part of our early schooling.

          Others (like yourself) realize this shouldn't be difficult for computers. You are correct. In truth, computers have little trouble keeping track of nouns, verbs, subjects, predicates... even most of the exceptions.

          BUT: the insurmountable part is giving the computer any kind of useful understanding of
        • Russell contributed a great deal to both mathematics and linguistics.
          And Exhibit C over here, gentlemen, is the Understatement of the Century!
          • Russell contributed a great deal to both mathematics and linguistics.

            Technically, I was wrong there. He actually contributed a great deal to the philosophy of language, which is not at all the same thing as linguistics (though there is overlap).
          Context-sensitive adaptive parsing seems to be effective in parsing English even with very small grammars (see http://www.sand-stone.com/Meta-S.htm for an introduction; the second reference there is on natural language parsing).
        Some people have suggested combining both: make a new version of English, with dumbed-down grammar and a reduced vocabulary. Egyptian-taxi-driver English, if you want. That, I believe, would be a good solution, as everybody could learn it, and those with time/talent could "move on" to normal English.
        In the same vein as Esperanto, Lojban http://www.lojban.org/ [lojban.org] is a culturally neutral, machine-parsable (written in Lex/Yacc, see the website) artificial language.

        It was originally designed to study the Sapir-Whorf hypothesis http://en.wikipedia.org/wiki/Sapir-Whorf_hypothesis [wikipedia.org], but has since developed a rich following among computer scientists as a potential human-computer interface tool. Err, at least that's why THIS computer scientist is interested in it. ;-)

        -- /v\atthew
    • So, when are we switching to Esperanto?
    • The Klingon Language Institute

      http://www.kli.org/ [kli.org]
    I wouldn't pick on English. Any language in use is going to be abused and crafted. That's like saying "Isn't painting stupid, we need clear symbols to represent everything in the world." The moment somebody says "Hey." instead of "Hello James, how are you this morning?" all of the work put into the precise grammar is gone. Your wonderful language would also kill off the job of most authors, poets and editors, people who in my opinion advance and improve the language to which they are patrons every day.
  • Interesting (Score:5, Interesting)

    by glowworm ( 880177 ) on Saturday July 29, 2006 @06:35AM (#15805143) Journal
    I have available to me quite a large database of historical research spanning back to 1991, consisting of freeform copies of emails between researchers and academics on a wide variety of subjects relating to a specific topic from the 15th century. Dry stuff, but a very exciting topic.

    At the moment the data is mined with wildcard text searching, which means you need to know the subject before you can participate. It's a very valuable resource, but it's also not used to its potential due to the clunky methods of interfacing with it.

    It will be quite interesting applying this technique to the dataset to see if unknown relationships become apparent or known relationships become clearer.

    Looking at the paper and samples would indicate this tool (if it does what it promises) might be able not only to work out the correlations between data points but to create visual diagrams linking people, places and events quite well. A handy tool for my dataset.

    I'm now sitting here crystal ball gazing: what if we were to expand this to a 3D map? Say, by displaying a resulting chart and allowing a researcher to hotlink to the data underneath, it would be an interesting way to navigate a complex topic, more so than a text-based wild or fuzzy search. Of course I won't know if this is possible until I look into the program more, and I won't be able to look into the program more until I massage the dataset again ;) but it does open up some interesting possibilities.

    Click on the Anthony Ashcam box and see the hotlinking and unfolding of data specific to him. Drill in more... then more... and eventually get to a specific fact.

    The only problem will be that I would need to pre-compute all the charts. Oh well, one day ;)
  • by Anonymous Coward

    An artificial intelligence [earthlink.net] could maybe use these new methods to grok all human knowledge contained in all textual material all over the World Wide Web.

    Technological Singularity -- [blogcharm.com] -- here we come!

  • So how is this not simply automated discourse analysis? [wikipedia.org]
    I have to agree with the first response (swift kick in the nuts to whoever came up with that). It's called Google or regex, whatever you want to use to strip unwanted content from a search.
  • by alcohollins ( 64804 ) on Saturday July 29, 2006 @07:09AM (#15805212)
    Not revolutionary. In fact, they're late.

    Google's AdSense network has done this for years to serve contextually relevant text ads across thousands of websites. Yahoo does it now, too.

    Yeah - Google AdSense gave this very Slashdot topic (text _mining_) two advertisements, both having to do with shoving coal around the globe. I'd say we can use some advancements in this area.
  • grep? (Score:2, Funny)

    by muftak ( 636261 )
    Wow, they figured out how to use grep!
  • by SlashSquatch ( 928150 ) on Saturday July 29, 2006 @07:47AM (#15805289) Homepage
    ...a load of grep.
  • How is this hard to do? It seems like this could be done with relatively simple algorithms.
  • by soapbox ( 695743 ) * on Saturday July 29, 2006 @08:18AM (#15805373) Homepage
    Phil Schrodt at the U of Kansas has been doing something similar for years using The Kansas Event Data System [ku.edu] (and its new update, TABARI [ku.edu]). He started using Reuters news summaries to feed the KEDS engine back in the 1990s.

    Following Schrodt's work, Doug Bond and his brother, both recently of Harvard, produced the IDEAS database [vranet.com] using machine-based coding.

    These types of data can be categorized by keywords or topic, though the engines don't try to generate links. The resulting data can also be used for statistical analysis in a certain slashdotter's dissertation research...
  • The new method that they figured out was
    "site:newyorktimes.com "Tour de France" "
    We were doing this in 1989 with long free-form respondent answers to marketing questions, to gain information about their actual preferences. Full natural language processing. We didn't patent the technique because we thought it was obvious - and we were too dumb to know how difficult a thing we had achieved. It worked wonderfully. Ours worked in Japanese, German, and Thai, too - I bet theirs only works in English, and American English at that. Of course it took us several months to teach it the decoding matrix
  • by saddino ( 183491 ) on Saturday July 29, 2006 @08:59AM (#15805522)
    The demonstration is significant because it is one of the earliest showing that an extremely efficient, yet very complicated, technology called text mining is on the brink of becoming a tool useful to more than highly trained computer programmers and homeland security experts.

    On the brink? Q-Phrase [q-phrase.com] has desktop software that does this exact type of topic modeling on huge datasets - and it runs on any Windows or OS X box. [Disclaimer: I work there] And there are a number of companies (e.g. Vivisimo/Clusty) that use these techniques as well.

    Going beyond the pure mechanics (this article speaks of research that is only groundbreaking in its speed of mining huge data sets), there are more interesting uses for topic modeling, such as its application to already loosely correlated data sets. A prime example: mining the text from the result pages that are returned from a typical Google search. One of our products, CQ web [q-phrase.com], does exactly this (and bonus: it's freeware [q-phrase.com]):

    Using the example from the story: in CQ web, text mining the top 100 results from a Google search of "tour de france" takes about 20 seconds (via broadband) and produces topics such as:
    floyd landis
    lance armstrong
    yellow jersey
    time trial


    And going beyond simple topic analysis: using CQ web's "Dig In" feature (which provides relevant citations from the raw data) on floyd landis returns "Floyd landis has tested positive for high levels of testosterone during the tour de france." as the most relevant sentence from over 100 pages of unstructured text.

    So, while this is a somewhat interesting article, fact is, anyone can download software today that accomplishes much of this "groundbreaking" research and beyond.

  • 330,000 articles at $3 each comes to $990,000, almost a million dollars for their data mining experiment. No wonder tuition costs are so high when this is what they're spending their money on!
  • Google News does a rather good job of associating all the stories on the same topic. I'd thought this was a solved problem.
  • Do Try This At Home! (Score:2, Interesting)

    by ejoe ( 198565 )
    It doesn't come bundled with an analysis engine, but if you're looking to build your own corpus of material (e.g., by automating searches or harvesting large volumes of your research web pages) and you're on Mac OS X, check out the Anthracite web mining desktop toolkit [metafy.com]... It makes it easy to build spidering and scraping systems, structure the output and feed it into a database like MySQL... all without requiring you to write a single line of code. Take that output and feed it into any number of the analysis and se
    Edward Herman and Noam Chomsky may or may not have had a fancy computerized search system, but association of loaded keywords was a major topic in Manufacturing Consent (ISBN 0375714499), where the influence of commercial interests on the media and government was analyzed using the New York Times. The great improvement in the rate at which text can be analyzed should make for an excellent third edition.
  • Out of the thousands of papers published on this subject every year, Roland Piquepaille picks this one.
  • Why is this news? (Score:4, Informative)

    by Lam1969 ( 911085 ) on Saturday July 29, 2006 @03:23PM (#15807217)
    This is interesting, but the idea has been around for more than 50 years, and has been practiced using computers (as opposed to human coders) since the 1960s. Lerner and de Sola Pool came up with the idea of using "themes" to analyze political texts at Stanford in 1954, and hundreds or even thousands of studies using automated text analysis tools have been performed since then. You can download a free text analysis tool called Yoshikoder [yoshikoder.org], which will perform frequency counts of all words in a text, as well as dictionary analysis and several other functions. So why is this news now? I think the press release is really leaving out some key information. The more relevant question that should have been addressed in the original release is how the text was prepared for analysis, because most websites and online databases of news articles (LexisNexis, Factiva, etc.) don't allow batch downloads of huge amounts of news text in XML or some other format that can be easily parsed by text analysis programs.
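    For a sense of what the basic frequency-count and dictionary-analysis step looks like, here is a minimal sketch in Python (in the spirit of tools like Yoshikoder, but not its actual code); the category dictionary is a made-up example.

from collections import Counter
import re

def word_frequencies(text):
    """Lowercase the text, split it into word tokens, and count them."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens)

def dictionary_counts(freqs, categories):
    """Sum token counts for each hand-built category of keywords."""
    return {cat: sum(freqs[w] for w in words) for cat, words in categories.items()}

text = "The rider won the race. The race leader kept the yellow jersey."
freqs = word_frequencies(text)
print(freqs.most_common(3))
print(dictionary_counts(freqs, {"cycling": ["rider", "race", "jersey"]}))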
  • by jrtom ( 821901 ) on Sunday July 30, 2006 @12:30AM (#15809478) Homepage
    I'm a PhD student in the research group that worked on this. My research is somewhat different (machine learning and data mining on social network data sets), but I've gone to a lot of meetings and presentations on this work, and I've used the model they're describing in my own research. Certainly people have worked on document classification before, but posters suggesting that this isn't new don't understand what this method accomplishes. For example (a short sketch after this comment illustrates the first two points):
    • basically, the model assigns a probability distribution over topics to each document
      i.e., documents aren't assigned to a single topic (as in latent semantic analysis (LSA))
    • topics are learned from the documents automatically, not pre-defined
      this means, incidentally, that they're not automatically labeled, although a list of the top 5 words for a topic generally characterizes it pretty well.
    • the technique can learn which authors are likely to have written various pieces of a given document, or which cited documents are likely to have contributed most to this document
      side benefit: you can also discover misattributions (e.g., authors with the same name)
    For a good high level description of what these models are doing, see Mark Steyvers' research page [uci.edu] (MS is one of the authors); that page also has links to a number of the preceding papers. Those interested in seeing what the output of a related model looks like might like to check out the Author-Topic Browser [uci.edu].
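    As a hedged illustration of the first two points above, using scikit-learn's LDA implementation rather than the UCI group's code: each document comes out with a probability distribution over topics, and each learned topic is characterized by its top words. The tiny corpus is invented.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "rider bike race lance armstrong yellow jersey time trial",
    "apartment brooklyn price rent broker market",
    "race armstrong ullrich mountain stage rider",
    "rent price market apartment buyer brooklyn",
]

vec = CountVectorizer()
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)   # rows: documents, columns: topic proportions
print(doc_topic.round(2))          # a distribution over topics per document, not a single label

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[::-1][:5]]
    print(f"topic {k}: {' '.join(top)}")   # top-5 words loosely label the learned topic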
      How does this differ from Andrew McCallum's work at UMass Amherst on AT (Author-Topic) and ART (Author-Recipient-Topic) models? I think he uses a generative model assuming each document has a Dirichlet distribution over topics, and uses Gibbs sampling to infer the parameters. I'll have to read the paper, obviously, but some plain explanation would be useful. Cheers
      • The Author-Topic model is actually due to Steyvers et al. at UC Irvine. McCallum's contribution was the Author-Recipient-Topic model, which extended the AT model to the domain of directed communications. The AT model is actually very closely related to Steyvers' topic model. I recommend reading the summaries on his page referenced above (in my original comment).
