Open Source Automated Text Summarization?
TrebleJunkie writes "I've spent some time recently looking for open source projects dealing with Automated Text Summarization -- automatically generating detailed summaries from longer documents -- to no avail. I can find a lot of research papers and several commercial projects, but no open source code or projects. Does anyone out there know of any?"
Re:The way it is supposed to work! (Score:1)
Yeah. I think that's from too many stupid people. I remember seeing a guy get flamed on Usenet for having a summary line in his headers (or was that a keyword line? I forget). The idiot that flamed him said something like it messed up his newsreader, or some odd crap like that.
Back on the subject at hand...the sort of program you're asking about would need some sort of AI code in it. I believe the field of study is called natural language processing. It's not a trivial matter, so it makes sense to me that there is no open source software for it.
It wouldn't be a bad idea for a project. Maybe a system that converts text into a more understandable form for computers... most languages were just haphazardly slapped together and then highly bastardized over time... it's amazing that anyone or anything can understand English! ;-) Anyway, converting to data that is structured and conforms to stricter rules could do wonders. After that, it'd be much less difficult to write programs that depend on understanding the meaning of documents (like creating summaries).
Unfortunately, such a project would involve more than just changing words to specific codes. I remember reading about one of the first translation attempts by computer. They tried to test it out by going from English to Russian and back. They put in: "The spirit is willing, but the flesh is weak." They got back something like: "The vodka is good, but the meat is rotten." A lot of the meaning is lost. You have to account not only for the placement of words, but also for context, idiosyncratic phrases, etc.
Re:The way it is supposed to work! (Score:1)
maybe a dumb question (Score:1, Insightful)
Re:maybe a dumb question (Score:3, Informative)
Microsoft Word.
It doesn't do it all that well, from what I've seen, but it does it. It's called "AutoSummarize".
Re:maybe a dumb question (Score:3, Informative)
Re:maybe a dumb question (Score:2)
"doesn't do it all that well" is being kind. My wife tried it on a 5 page letter she'd written, and the results were...bizarre. Yes, the text was from the document. Far from the most relevant parts, seemingly grabbed at random.
My best guess for what it could do is some sort of word frequency count, ignoring common words like 'the'. Then include the top N% of sentences and those adjacent to them that include the most common words. Also, give a higher weighting to things in the beginning and end, since papers following the classic form tend to say what they're going to say, say it, then say what they've just said.
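For what it's worth, that heuristic fits on one screen. Here's a quick Perl sketch of my own -- a guess at the approach, not Word's actual algorithm; the stop-word list and the 20% cutoff are arbitrary:

    #!/usr/bin/perl
    # Rough sketch of the frequency-count idea: score each sentence by the
    # document-wide frequency of its words (skipping common words), then
    # keep the top 20% of sentences in their original order.
    use strict;
    use warnings;

    my %stop = map { $_ => 1 } qw(the a an and or of to in is it that for on with as);

    my $text = do { local $/; <STDIN> };            # slurp the whole document
    my @sentences = split /(?<=[.!?])\s+/, $text;   # naive sentence split

    my %freq;
    for my $w (map { lc } $text =~ /\b[a-z']+\b/gi) {
        $freq{$w}++ unless $stop{$w};
    }

    my @scored;
    for my $i (0 .. $#sentences) {
        my $score = 0;
        $score += $freq{lc $_} || 0 for $sentences[$i] =~ /\b[a-z']+\b/gi;
        push @scored, [ $i, $score ];
    }

    my $keep = int(@sentences * 0.2) || 1;
    $keep = @scored if $keep > @scored;
    my @top = sort { $a->[0] <=> $b->[0] }
              (sort { $b->[1] <=> $a->[1] } @scored)[0 .. $keep - 1];
    print $sentences[$_->[0]], "\n" for @top;

It doesn't do the extra weighting for the beginning and end that you suggest, but that's just one more term in the score.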
Re:maybe a dumb question (Score:3, Interesting)
Check out ArchiText [yellowbrix.com] from YellowBrix.
Having looked at their demos and so on, I'd say they have some great summary software.
It is most certainly NOT free, but perhaps by looking at the summaries generated and the documents they were pulled from, you could get some idea of how to reverse-engineer the process.
Re:maybe a dumb question (Score:1)
Obvious caveats apply -- i.e., I work for them and helped write the thing. However, if you need that sort of thing or something more particular, contact Lextek [lextek.com].
A simple kinda-solution (Score:1, Redundant)
Re:A simple kinda-solution (Score:2)
Yeah, it's off-topic, but it's not redundant! Stupid moderators -- meta-mod will bite you back!
Re:No Offence inteneded but, Why?? (Score:1)
And as for 'probably cheaper' -- well, yes, if you want to guarantee you get something back that makes sense. However, the expense can only be justified if you want a summary for use in (say) a presentation, or you want to read a review. If you want to see abstracts of dozens of documents in order to decide whether any are worth reading, a computer is way cheaper and faster; 50 wpm and 1 GHz just don't compare for that task.
Re:No Offence inteneded but, Why?? (Score:3, Interesting)
There was a trend a few months to a year ago where members of some discussion groups were producing summaries of each week's traffic, but it proved to be so much thankless work that they have all quit by now. Every week these people would have to spend hours sifting through hundreds of messages and manually distilling it down to one hyperlinked document of perhaps a few hundred words, or a couple of pages long if printed. For the thanks they got in return -- and people did appreciate all the work, but you can't eat thanks -- it just wasn't worth it for any of them to keep doing these manual summaries. Even if they were being paid, it's not the sort of work most people want to be doing in the first place.
Finding a system that could programmatically produce a periodic summary -- even a crude one -- of what was discussed on one of these groups would be a great tool. And no, I'm not willing to pay an assistant to summarize Usenet for me, and no, I don't think it's something that any one assistant could do alone anyway. But I would be willing to have, say, a cron job that on Mondays gave me a summary of the Linux kernel lists, on Tuesdays gave me a report on what's up with Perl6, on Wednesdays told me what security issues have been in the news lately, on Thursdays... you get the idea.
In order to be able to summarize these aggregates of documents, you'd have to start with smaller ones. You could work it in both directions: going up from the message level, you could reduce each posting to a sentence or less, while going down from the thread level you could figure out which topics seemed to be hot and go for key ideas from the messages within the main threads. Bonus points for a system that could recognize citations (if what poster A said was important enough for poster B to quote it, then maybe that quote should end up in the summary) or, Google-style, place emphasis on traffic patterns, linkages, etc.
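That citation trick is probably the easiest piece to prototype. A back-of-the-envelope Perl sketch (mine, purely illustrative): feed it an mbox-style archive on stdin, and it tallies which lines other posters bothered to quote, then prints the most-quoted ones as a crude digest.

    #!/usr/bin/perl
    # Hypothetical sketch of the "citation" idea: lines that later posters
    # quote (prefixed with "> ") are probably important, so count how often
    # each line shows up quoted and print the most-quoted originals.
    use strict;
    use warnings;

    my (%quoted, %original);
    while (my $line = <STDIN>) {
        chomp $line;
        if ($line =~ /^\s*>\s?(.*\S)/) {               # a quoted line
            (my $key = lc $1) =~ s/\s+/ /g;
            $quoted{$key}++;
        } elsif ($line =~ /\S/) {                      # an original (unquoted) line
            (my $key = lc $line) =~ s/\s+/ /g;
            $original{$key} = $line unless exists $original{$key};
        }
    }

    my @ranked = sort { $quoted{$b} <=> $quoted{$a} }
                 grep { exists $original{$_} } keys %quoted;
    my $n = @ranked < 10 ? @ranked : 10;
    print "[quoted $quoted{$_}x] $original{$_}\n" for @ranked[0 .. $n - 1];

Cat a week's worth of archive into it and you get a rough "what got argued about" list; weighting by thread size or number of distinct posters would be the obvious next step.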
As several people have noted, this is all a big, hard problem to solve, and there would be real uses for it if anyone could put it all together. Would we be willing to pay for such a service? I dunno, depends how good it is I guess. But if it really could reduce Usenet, web logs, mailing lists, and hey maybe even some normal web sites down into a small handful of roughly accurate documents that could be read over a cup of coffee each morning, then yeah I think that would be a valuable thing.
Re:No Offence inteneded but, Why?? (Score:1)
The answer's pretty simple: I want to be able to summarize documents as they come into my life and inevitably stay there. I'm a pack rat. I keep everything. I just want to be able to organize it. I want to be able to search it. I want to be able to search through the summaries (so I can search generalizations, rather than find keywords in irrelevant parts of irrelevant documents), and I want to be able to display the summary when I mouse over the document... stuff like that... so I don't have to dig through the whole document to find out if it's really what I need. And I wanted to play a little bit with the technology. I tinker. I do that.
I do thank everyone for their responses. Anything else you can think of, please let me know. Thanks much!
Where is the research you found? (Score:1)
bookaminute (Score:1)
So I'd like to take a moment to point out a good resource for some existing summaries, at bookaminute [rinkworks.com].
Re:Have a look in CPAN (Score:4, Interesting)
USA Everyone is permitted to copy and distribute verbatim copies of this license document. Changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. The GNU General Public License is intended to guarantee your freedom to share and change free software. To make sure the software is free for all its users. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.). We are referring to freedom. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish).
It seems comparable to MS Word..
Microsoft Summarize (Score:4, Funny)
I know it's not open source, but have you tried the Summarize feature in Microsoft Word? I fed it the entire contents of the GNU website [gnu.org] and it came back with:
Check out Alembic (Score:1, Interesting)
Might do exactly what you want. You probably have to train it first but it works quite nicely.
Mike
Summarisers (Score:2, Interesting)
I'll GPL this: (Score:4, Funny)
Try it on man pages:
man awk | perl -ne 'foreach (split) { print $_." " if rand() > .9 }'
and it still makes sense! :)
The reason why is because it's hard (Score:2, Interesting)
Re:The reason why is because it's hard (Score:1)
The MITRE one is at: http://www.mitre.org/technology/alembic-workbench
Some related information (Score:3, Informative)
From my research, there appear to be two primary methods of performing this kind of processing: natural language parsing (grammar- and knowledge-based) and statistical parsing.
Of the two, statistical parsing is more popular these days because it doesn't require a knowledge base, expert system shells, grammar modeling, or an extensive dictionary. One of the primary methods of determining the relative importance of words in a sentence is valence. The main challenge with the statistical approach to natural language parsing is that it depends on the training dataset: the more specific the dataset is, the better it will perform.
Statistical analysis can also use expert system shells and other AI technologies to improve accuracy, but it doesn't have to.
From my understanding (which is limited), it stems from a principle from linguistics. By counting the frequency of words, or more specifically nouns, the program is able to rate each noun's importance. Once that's done, it can pick out the sentences that best describe the document by comparing the most important words against where those words appear in each sentence. I remember this from my literature and linguistics classes. Cognitive science has also attempted to solve this problem, but it is very difficult.
In either case, if you're dealing with well-structured documents, your best bet is to grab the first three paragraphs, assuming the author followed standard thesis/essay structure. If you're planning on summarizing news articles, it might not be that hard if the author followed the inverted pyramid, which many do not. One of the big tools of natural language parsing in the early days was Prolog, and it is still used a lot in academic settings for natural language processing. Your best bet is to get an intern to read and summarize for you.
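For completeness, that "grab the lead" baseline really is a throwaway script. Here's a plain Perl version (mine, nothing clever): it prints the first three blank-line-separated paragraphs and stops.

    #!/usr/bin/perl
    # Crude "lead" baseline: for essay- or inverted-pyramid-style text,
    # print the first three paragraphs and call that the summary.
    use strict;
    use warnings;

    $/ = '';                       # paragraph mode: read blank-line-separated chunks
    my $count = 0;
    while (my $para = <STDIN>) {
        print $para;
        last if ++$count >= 3;
    }

It sounds dumb, but this kind of lead baseline is notoriously hard to beat on news-style text.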
Re:Some related information (Score:1)
But it wouldn't be impossible. There's a company in Canada that does software like this (in English, German, and French, I believe) called Nstein. I've seen a demo and it's very impressive.
Sherlock (Score:3, Informative)
It's not Open, but it is scriptable, it's not an additional cost, and it's available on a Unix OS (MacOS X). Indeed, through Apple's Open Scripting Architecture (OSA [apple.com]) one can use any number of scripting languages, such as Python, Perl, and even JavaScript, to interact with the application.
Feed it a document, tell it to summarize and back will come a generally useful précis. For folks directly on a Mac (MacOS 8.6 or newer incl. X) simply highlight a document or portion of text and select "Summarize" from the contextual menu.
Hard.. (Score:2)
The best way to find code would be to e-mail the authors of the papers you've found. They probably have implementations, and academics are usually willing to share under something like a BSD license or the GPL.