Botnet

Submission + - ENISA Gears Up for War on Botnets (securityweek.com)

wiredmikey writes: The European Network and Information Security Agency (ENISA), Europe's cyber security agency, has issued a report on botnets this week titled "Botnets: Measurement, Detection, Disinfection and Defence." The report questions the reliability of botnet size estimates and provides recommendations and strategies to help organizations fight botnets. In addition, ENISA published what it considers the top 10 key issues for policymakers, a list derived from discussions among botnet security experts held between September and November 2010 and presenting a selection of the most interesting results.
Crime

Submission + - Corporate data breach average cost hits $7.2M (networkworld.com)

alphadogg writes: The cost of a data breach rose to $7.2 million last year, up from $6.8 million in 2009, with the average cost per compromised record in 2010 reaching $214, up 5% from 2009. The Ponemon Institute's annual study of data loss costs this year looked at 51 organizations that agreed to discuss the impact of losing anywhere between 4,000 and 105,000 customer records. While "negligence" remains the leading cause of a data breach (41% of cases), for the first time "malicious or criminal attacks" (31% of cases) came in ahead of the third leading cause, "system failure."
IT

Submission + - A letter on behalf of the world's PC fixers (pcpro.co.uk)

Barence writes: "PC Pro's Steve Cassidy has written a letter on behalf of all the put-upon techies who've ever been called by a friend to fix their PC. His bile is directed at a friend who put a DVD bought on holiday into their laptop, and then wondered what went wrong.

"Once you stuck that DVD in there and started saying 'yes, OK' to every resulting dialog box, you sank the whole thing," Cassidy writes. "It doesn’t take 10 minutes to sort that out; it requires a complete machine reload to properly guarantee the infection is history."

"No, there is no neat and handy way I’ve been keeping secret that allows you to retain your extensive collection of stolen software licences loaded on that laptop. I do disaster recovery, not disaster participation.""

Books

Submission + - Book Review: Solr 1.4 Enterprise Search Server 1

MassDosage writes: "Solr 1.4 Enterprise Search Server, written by David Smiley and Eric Pugh, provides in-depth coverage of the open source Solr search server. In some ways this book reads like the missing reference manual for advanced usage of Solr. It is aimed at readers who are already familiar with Solr and related search concepts and who have some programming knowledge (specifically Java). The book covers a lot of ground, some of it fairly challenging, and gives those working with Solr plenty of hands-on technical advice on how to use and fine-tune many parts of this powerful application.

Solr 1.4 Enterprise Search Server starts off with a brief description of what Solr is, how it relates to the Lucene libraries it is built around, and how it compares to other technologies such as databases. This book is not an introduction to search: the opening chapter covers only the basics and assumes the reader either already knows what they are getting into or will read up on search concepts before going further. Solr is free, open source software licensed under the Apache License and available from the Apache Solr website. The book covers the 1.4 version of Solr and was published before that version was actually released, so it is a bit patchy in areas that were still undergoing change, but the authors point this out very clearly in the text where applicable.

The book provides details on downloading and installing Solr, building it from source, and the manifold options available for configuring and tweaking it. A freely available dataset from MusicBrainz is provided for download along with various code examples and a bundled version of Solr 1.4, which is used as the basis for many of the examples referred to throughout the text. In some ways this dataset is limited, as it only allows for fairly simple usage compared with the challenges of indexing and searching large bodies of text. Again, the authors clearly mention these limits and briefly describe how certain concepts would be better applied to other data sources.

The basics of schema design, text analysis, indexing and searching are covered over the next three chapters, which take in a wide range of essential search concepts such as tokenizers, stemming, stop words, synonyms, data import handlers, field qualifiers, filters, scoring and sorting. The reader is taken through the process of setting up Solr so it can index the data that is to be searched, and is then shown how this data can be imported into Solr from a variety of sources such as XML and HTML documents, PDFs, databases and CSV files. Building search queries is covered with examples that the reader can run against the provided sample data via the Solr web interface.
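
To make that index-then-query round trip concrete, here is a minimal SolrJ sketch of the kind of workflow the book walks through. It is my own illustration rather than an example from the book, and it assumes a Solr 1.4 instance on the default local port with "id" and "name" fields defined in schema.xml.

// A minimal SolrJ 1.4 sketch: index one document, commit, then query it back.
// The URL and the "id"/"name" field names are placeholders, not the book's
// MusicBrainz schema; whether the lowercase query matches depends on how the
// "name" field is analysed in schema.xml.
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrInputDocument;

public class IndexAndQuery {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");

        // Index a single document and make it visible to searches.
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "artist-1");
        doc.addField("name", "The Smashing Pumpkins");
        server.add(doc);
        server.commit();

        // Run a simple field query and print the matches.
        SolrQuery query = new SolrQuery("name:pumpkins");
        query.setRows(10);
        QueryResponse response = server.query(query);
        for (SolrDocument d : response.getResults()) {
            System.out.println(d.getFieldValue("id") + " -> " + d.getFieldValue("name"));
        }
    }
}

The same round trip can be done over plain HTTP against Solr's update and select handlers, which is also how the web interface and the non-Java clients talk to it.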

More advanced search techniques are covered next, and at this point I felt that a lot of what was being discussed went over my head. Perhaps this is because my own search experience hasn’t extended very far and the behind-the-scenes algorithms powering search aren’t something I’ve had to work with directly. There were sections here that definitely felt aimed at people with a much more thorough understanding of the theory underpinning search, where knowledge of mathematics and of the data being searched is essential to search algorithm design. Having said this, these chapters felt like they would be really useful to come back to at some point in the future, and I’m sure that people working with search on a daily basis would find useful advice here on how to get the best out of Solr.

Solr provides much more than just indexing and search, and the fact that components are available for many other common search-related functions is one of its main benefits. These components provide things like highlighting of search terms in returned results, spell-checking, related documents and so on. The authors cover the components that ship with Solr to provide this functionality, as well as mentioning a few that are currently separate software projects. One can easily see how all of this would be directly applicable when adding search capability to one’s own product or web site, as there are a lot of wheels that Solr saves you from having to re-invent. The book also mentions the various parts of Solr that can be extended to modify or add behaviour, which of course is one of the many advantages of its open source nature.
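
As a rough illustration of what using one of these components looks like from client code, here is a hedged SolrJ sketch that switches on highlighting for a query; the URL and field name are placeholders of my own, not taken from the book.

// A sketch of Solr's highlighting component from SolrJ: ask for matches in
// the "name" field to be wrapped in <em> tags. URL and field name are
// assumptions, not the book's example data.
import java.util.List;
import java.util.Map;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class HighlightSketch {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");

        SolrQuery query = new SolrQuery("name:pumpkins");
        query.setHighlight(true);              // turn the highlighting component on
        query.addHighlightField("name");       // which field(s) to produce snippets for
        query.setHighlightSimplePre("<em>");   // markup placed around each match
        query.setHighlightSimplePost("</em>");

        QueryResponse response = server.query(query);

        // Snippets come back keyed by document id, then by field name.
        Map<String, Map<String, List<String>>> highlighting = response.getHighlighting();
        for (Map.Entry<String, Map<String, List<String>>> perDoc : highlighting.entrySet()) {
            System.out.println(perDoc.getKey() + " -> " + perDoc.getValue());
        }
    }
}

Spell-checking and the other components are driven in much the same way, through extra request parameters that the relevant component picks up.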

The final three chapters move on to the more practical side of actually using Solr in the “real world” and discuss various deployment options, monitoring via JMX, security, integration and scaling. In addition to Java (which is probably the most powerful and straightforward way of integrating with Solr), support for languages like JavaScript, PHP and Ruby is described. I felt the Ruby section was way too long; maybe one of the authors has a soft spot for the language? The sections on writing a web crawler and implementing autocomplete were far more interesting and probably also more generally applicable. The book wraps up with a thorough discussion of how to scale Solr: scaling high (optimising a single server through techniques like caching, shingling, and clever schema design and indexing strategies), scaling wide (using multiple Solr servers and replicating or sharding data between them) and scaling deep (a combination of the two).
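
For a flavour of what "scaling wide" means in practice, here is a small sketch of a distributed query using Solr's shards parameter; the host names are invented, and in a real deployment each shard would hold its own slice of the index.

// A rough sketch of "scaling wide" with distributed search: the query is sent
// to one node, and the shards parameter tells Solr which indexes to fan the
// request out to and merge results from. Host names here are invented.
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class ShardedQuerySketch {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://shard1.example.com:8983/solr");

        SolrQuery query = new SolrQuery("name:pumpkins");
        // Comma-separated list of shard locations (host:port/path, without http://).
        query.set("shards", "shard1.example.com:8983/solr,shard2.example.com:8983/solr");

        QueryResponse response = server.query(query);
        System.out.println("Hits across all shards: " + response.getResults().getNumFound());
    }
}

The client-side code barely changes; the work of spreading documents across shards and merging the ranked results is what the scaling chapters are really about.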

On the whole this is a very thorough, detailed book, and it is clear that the authors have a lot of experience with Solr and how it is used in practice. The book does not cover much theory, assumes a fair amount of prior knowledge, and is definitely aimed at those who need to get their hands dirty and get up and running with Solr in a production environment. The authors have a straightforward, open and honest writing style and aren’t afraid of clearly stating where Solr has limitations or imperfections. While the book may have a somewhat steep learning curve, this is isolated to certain chapters which can be skipped and returned to later if necessary. The writing is concise and to the point, so one doesn’t have to wade through pages of flowery text before getting to the good bits. If you’re seriously thinking about using Solr, or are already using it and want to know more so you can take full advantage of it, I would definitely recommend this book.

Full disclosure: I was given a copy of this book free of charge by the publisher for review purposes. They placed no restrictions on what I could say and left me to be as critical as I wanted, so the above review is my own honest opinion."

Comment Re:The language all consumers understand: money. (Score 1) 163

And release a statement that they are testing this new filter, and have signed all politicians up for a trial. Randomly block about 10% of their traffic, and also some porn sites. Slow down their download speeds, and triple the prices. Anyone who publicly supports the filtering will of course get added to the trial.

Comment Re:What the hell (Score 4, Insightful) 321

Here in Denmark we were taught that if coverage is bad, as it often is at sea, a text message is more likely to make it through. The same may be true when the battery is low, or when speaking aloud is not safe, as in some shooting and hijacking situations. In some situations background noise can make voice communication unreliable, and some accidents may even impair your ability to speak... There are many reasons to allow the use of text messages.

Comment Open Source Product vs Company (Score 1) 357

If that source code isn't made available, then you're not an open source company.

Technically, a single company can have products under both closed and open licenses - I know, I work for one. It can even offer the same product both under an Open Source license and under a different, proprietary one. Owning the copyright, it can fork the product, implement some features in only one version, and release that version only under a closed source license.

Of course, nothing prevents anyone from taking a version that has been released under an Open Source license, and (re)implementing the features the company only offers under a closed license. Except that it requires time, effort, and know-how.

Comment Re:Nevertheless, still doing science! (Score 2, Informative) 250

So what, pray tell, would have been the advantage of sending a human (other than shakier photos of the same rocks)? It would have cost an order of magnitude more money to haul a few people and all the supplies needed to keep them alive for a year-long mission

An order of magnitude???

In rough numbers, the mass of your normal human is one order of magnitude over the mass of the rover. The life support for said human would be another order of magnitude, or two. That would be fine if we could leave the volunteer(s) behind on a dead planet, but getting them back would mean sending a ship big enough to bring them home - at least a thousand times bigger than what they'd need just to survive on the surface, so three more orders of magnitude. That's what I can think of here and now; I believe there would be a few more problems accounting for another order of magnitude or two. So my estimate for sending humans (who would expect to return) is at least a million times the cost of sending the rovers. With all these uncertainties, perhaps a billion...
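
(Spelling those guesses out with the rover as a baseline of 1: roughly 10 for the human's mass, times 10 to 100 for life support, times 1,000 for the return ship, gives a factor of about 100,000 to 1,000,000 - which is where the million comes from.)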

Still, worth the effort, if and when we have the resources and technology. I hope to see that in my lifetime, or at least in the next 50 years!

Image

PhD Candidate Talks About the Physics of Space Battles 361

darthvader100 writes "Gizmodo has run an article with some predictions on what future space battles will be like. The author brings up several theories on propulsion (and orbits), weapons (explosives, kinetic and laser), and design. Sounds like the ideal shape for spaceships will be spherical, like the one in the Hitchhiker's Guide movie."

Submission + - Man Controls Cybernetic Hand with Thoughts (unicampus.it)

MaryBethP writes: Scientists in Italy announced Wednesday that Pierpaolo Petruzziello, a 26-year-old Italian who had lost his left forearm in a car accident, was successfully linked to an artificial limb that was neurally connected to his median and ulnar nerves, and that he has learned to control the artificial limb with his mind. According to CNET, Petruzziello says he could feel sensations in it, as if the lost arm had grown back again.

http://news.cnet.com/8301-17938_105-10408139-1.html

Security

Submission + - The top ten security heroes (pcpro.co.uk)

Barence writes: The media love to stick it to the IT security bad guys: the notorious hackers or the bumbling civil servants who put nothing more than a first-class stamp on a disc containing millions of personal files. But what about the little-known heroes of the security world? PC Pro's Top Ten Security Heroes are the people who have made the internet a (relatively) safe place to work, shop and communicate; the people who work behind the scenes to make sure our PCs aren’t stuffed full of malware. In short, the good guys.

Submission + - Offset bad code with Bad Code Offsets

An anonymous reader writes: Two weeks ago, The Daily WTF's Alex Papadimoulis announced Bad Code Offsets, a joint venture between many big names in the software development community (including StackOverflow's Jeff Atwood and Jon Skeet, and SourceGear's Eric Sink). The premise is that you can offset bad code by purchasing Bad Code Offsets (much in the same way a carbon footprint is offset). The profits are donated to Free Software projects which work to eliminate bad code, such as the Apache Foundation and FreeBSD. The first cheques were sent out earlier today.
Patents

Submission + - Federal Appeals Court Tosses Spam Patent (patentlyo.com)

Zordak writes: Dennis Crouch's Patently-O is running a news item about U.S. patent 6,631,400, which has claims drawn to a method of making sure enough people get your spam. A federal district court had held the patent invalid as anticipated, obvious, and not drawn to patentable subject matter. The Federal Circuit, the appeals court that hears patent matters, upheld the finding of obviousness, thus invalidating the patent.
Earth

Swarm of Giant Jellyfish Capsize 10-Ton Trawler 227

Hugh Pickens writes "The Telegraph reports that the Japanese trawler Diasan Shinsho-maru has capsized off the coast of China after its three-man crew dragged their net through a swarm of giant jellyfish (which can grow up to six feet in diameter and travel in packs) and tried to haul up a net that was too heavy. The crew was thrown into the sea when the vessel capsized, but all three men were rescued by another trawler. Relatively little is known about Nomura's jellyfish, such as why in some years thousands of the creatures float across the Sea of Japan on the Tsushima Current while last year there were virtually no sightings. In 2007, there were 15,500 reports of damage to fishing equipment caused by the creatures. Experts believe that one factor in the jellyfish becoming more frequent visitors to Japanese waters may be a decline in the number of predators, which include sea turtles and certain species of fish. 'Jellies have likely swum and swarmed in our seas for over 600 million years,' says scientist Monty Graham of the Dauphin Island Sea Lab in Alabama. 'When conditions are right, jelly swarms can form quickly. They appear to do this for sexual reproduction.'"
