Comment One way (Score 4, Interesting) 167

I work for a company that does a lot of Open Source stuff. Here is how we manage it: we have core toolkits that are open source, and custom applications that are closed source, made for specific customers. Whenever a customer needs new functionality, we try to generalize it and put it into the toolkits, which we then release. We tell the customer that we have this open source toolkit which we use for the project, and which we keep improving. But we don't specify how much of the work goes into the toolkit, and how much into the custom side.

Those toolkits have been our main marketing effort, and they have certainly paid off. Within our very narrow field we are world famous, and our toolkits almost dominate the market. Nobody can afford to build a competing one when ours is free. Although anyone may use our tools, we happen to know them best and have the most experience with them, so we can often do any given job faster than others. The company has survived over a decade, has expanded internationally, and is now all of 15 people.

Comment Search sucks (Score 2) 290

It is really good to have music freely available. But it could be organized better. I tried to search for "Locatelli", a baroque composer I know a little about. The first hit found a "piece" with the headline "Battista, Locatelli & J.S Bach - Concetos". What passes for a comment on the music is some details about Vivaldi's life, and under that is a composer bio, also of Vivaldi. The "piece" consists of four parts, starting with a Concerto Grosso by Vivaldi, followed by Pergolesi, something by Bach, and finally a single movement of a Locatelli concerto. Lastly, there is a fact box that lists Vivaldi as the composer and fails to mention anything about the performer or period...

Comment Re:The 'Mysterious' part. (Score 1) 209

"Gears are finicky things, every single tooth must have the correct angular position, pitch diamerter and involute profile"

No. The more accurate those things are, the better it measures time. And this thing wasn't very accurate, by today's standards.

As far as I know, the original machine was not meant to measure time. It had a crank you gave one turn every day, and it showed the position of various stars etc. More like a calendar than a clock.

Comment Impossible Licensing Agreement (Score 1, Interesting) 290

I cannot read the book. I cannot accept a license that requires my moral values to coincide with those of the author. For example, "That your family is first and foremost the most important thing in your life." makes little sense to me, with no wife, no kids, parents dead, and the rest of the family not interested in much contact, and residing in a different country anyway.

Although he means well with it, I find such licensing an offensive intrusion into my life. If my employer put up conditions like "That you will exercise your body as well as your mind", I would certainly tell him to stay out of my private life.

Some of the points are blatantly impossible. For example, "That you will defend the rights of those who are unable to defend themselves". Note that there is no provision to make this apply only occasionally, only when practical or even possible. Thus, anyone who is not defending the people in Libya, in China, and in Afghanistan, at the same time, is in violation of the license.

Moral principles are fine, but trying to enforce them as a condition for reading a book is absurd. If that is the price for reading the book, I'd rather keep my freedom!

Japan

Submission + - Japan’s tsunami devastates prefecture in 6 minutes (geek.com)

An anonymous reader writes: News reports this week are understandably focusing on the events that have recently shaken Japan to its core. An 8.9 magnitude earthquake just off the coast, followed by a tsunami, has devastated parts of the country and taken thousands of lives. The extent of the damage is still becoming clear, thousands of people are still missing, and problems with nuclear reactors could escalate.

While most of the video footage seen on TV so far has shown the extent of the devastation, it is mainly seen from the viewpoint of someone in a helicopter, or after the damage has already been done to an area. But now we have some raw footage from someone who experienced the torrent of water passing through his home prefecture at ground level.

As you can see in the video, it caught some drivers unaware, and in a little over 6 minutes we see a dry Japanese street turn into a fast-moving torrent of water ripping buildings from their foundations, crushing cars, overturning boats, and rising a few meters above ground level. The footage was captured in the city of Kesennuma in Miyagi Prefecture, which is home to 74,000 people.

Submission + - Robert X Cringely predicts more mininuke plants (cringely.com)

LandGator writes: "PC pundit Robert X Cringely had a life before writing "Triumph of the Nerds" for PBS: he covered the atomic energy industry and reported on Three Mile Island. In this blog post, he analyzes the Fukushima reactor failures and suggests the end result will be rapid growth in small, sealed 'package' nuclear reactors such as the Toshiba 4S generator considered for Galena, Alaska. He thinks Japan may have little choice, and with rolling blackouts scheduled, he may be right."

Comment Re:This game is random, you can't outsmart someon (Score 1) 292

I did this many years ago. No need for fancy AI, a simple Markov chain was enough to beat the people I tried it with. Today I would make it adapt the chain length dynamically, trying different lengths and keeping track of their performance. But even a 3-level chain (if I remember right) beat humans consistently within about 50 games, and the random number generator of that old machine within less than 10,000 games. But it was probably not a very good random number generator...
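
For the curious, here is a minimal sketch of the idea in Python; the class name, the fixed chain length and the random fallback are illustrative choices of mine, not a reconstruction of that old program:

```python
# Minimal sketch of a Markov-chain rock-paper-scissors predictor.
import random
from collections import defaultdict

MOVES = ["rock", "paper", "scissors"]
COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # what beats each move

class MarkovPredictor:
    """Predict the opponent's next move from their last `order` moves."""

    def __init__(self, order=3):
        self.order = order
        self.counts = defaultdict(lambda: defaultdict(int))  # history -> {next move: count}
        self.history = []                                    # everything the opponent has played

    def predict(self):
        key = tuple(self.history[-self.order:])
        follow = self.counts.get(key)
        if not follow:
            return random.choice(MOVES)        # no statistics for this history yet
        likely = max(follow, key=follow.get)   # opponent's most probable next move
        return COUNTER[likely]                 # play whatever beats it

    def observe(self, opponent_move):
        key = tuple(self.history[-self.order:])
        if len(key) == self.order:
            self.counts[key][opponent_move] += 1
        self.history.append(opponent_move)

# One round: ask for our move, then record what the opponent actually played.
bot = MarkovPredictor(order=3)
my_move = bot.predict()
bot.observe("rock")
```

Adapting the chain length dynamically would just mean running several of these predictors with different orders in parallel and playing the move suggested by whichever has the best recent track record.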

Botnet

Submission + - ENISA Gears Up for War on Botnets (securityweek.com)

wiredmikey writes: The European Network and Information Security Agency (ENISA), Europe's cyber security agency, issued a report on botnets this week titled "Botnets: Measurement, Detection, Disinfection and Defence." The report questions the reliability of botnet size estimates and provides recommendations and strategies to help organizations fight botnets. In addition, ENISA published a list of what it considers the top 10 key issues for policymakers, derived from internal discussions among security experts in the field of botnets that took place between September and November 2010; the list presents a selection of the most interesting results.
Crime

Submission + - Corporate data breach average cost hits $7.2M (networkworld.com)

alphadogg writes: The cost of a data breach rose to $7.2 million last year from $6.8 million in 2009, with the average cost per compromised record in 2010 reaching $214, up 5% from 2009. The Ponemon Institute's annual study of data loss costs this year looked at 51 organizations that agreed to discuss the impact of losing anywhere from 4,000 to 105,000 customer records. While "negligence" remains the main cause of a data breach (in 41% of cases), for the first time "malicious or criminal attacks" (in 31% of cases) came in ahead of the third leading cause, "system failure."
IT

Submission + - A letter on behalf of the world's PC fixers (pcpro.co.uk)

Barence writes: "PC Pro's Steve Cassidy has written a letter on behalf of all the put-upon techies who've ever been called by a friend to fix their PC. His bile is directed at a friend who put a DVD bought on holiday into their laptop, and then wondered what went wrong.

"Once you stuck that DVD in there and started saying 'yes, OK' to every resulting dialog box, you sank the whole thing," Cassidy writes. "It doesn’t take 10 minutes to sort that out; it requires a complete machine reload to properly guarantee the infection is history."

"No, there is no neat and handy way I’ve been keeping secret that allows you to retain your extensive collection of stolen software licences loaded on that laptop. I do disaster recovery, not disaster participation.""

Books

Submission + - Book Review: Solr 1.4 Enterprise Search Server 1

MassDosage writes: "Solr 1.4 Enterprise Search Server, written by David Smiley and Eric Pugh, provides in-depth coverage of the open source Solr search server. In some ways this book reads like the missing reference manual for the advanced usage of Solr. It is aimed at readers already familiar with Solr and related search concepts, as well as those with some knowledge of programming (specifically Java). The book covers a lot of ground, some of it fairly challenging, and gives those working with Solr a lot of hands-on technical advice on how to use and fine-tune many parts of this powerful application.

Solr 1.4 Enterprise Search Server starts off with a brief description of what Solr is, how it relates to the Lucene libraries it is built around, and how it compares to other technologies such as databases. This book is not an introduction to search; this chapter covers only the basics and assumes the reader already knows what they are getting into, or will read up on search concepts themselves before reading further. Solr is free, open source technology licensed under the Apache license. This book covers the 1.4 version of Solr and was published before that version was actually released, so it is a bit patchy in areas that were still undergoing change, but the authors point this out very clearly in the text where applicable.

The book provides details on downloading and installing Solr, building it from source, and the manifold options available for configuring and tweaking it. A freely available data set from MusicBrainz is provided for download, along with various code examples and a bundled version of Solr 1.4, which is used as the basis for many of the examples referred to throughout the text. In some ways this dataset is limited, as it only allows for fairly simple usages compared with the challenges of indexing and searching large bodies of text. Again, the authors clearly mention these limits and briefly describe how certain concepts would be better applied to other data sources.

The basics of schema design, text analysis, indexing and searching are covered over the next three chapters, and these include a wide range of essential search concepts such as tokenizers, stemming, stop-words, synonyms, data import handlers, field qualifiers, filters, scoring, sorting and so on. The reader is taken through the process of setting up Solr so it can be used to index data that is to be searched, and then how this data can be imported into Solr from a variety of sources like XML and HTML documents, PDFs, databases, CSV files and many others. Using Solr to build search queries is covered with examples that the reader can run via the Solr web interface and the provided sample data.
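
To make that concrete, here is a minimal sketch of what a query against a running Solr 1.4 instance looks like from Python; the host, port and field name are my own illustrative assumptions, not examples taken from the book:

```python
# Sketch of querying a Solr 1.4 instance over plain HTTP.
# The host, port and the "a_name" field are assumptions for illustration.
import json
import urllib.parse
import urllib.request

def solr_search(query, rows=10):
    params = urllib.parse.urlencode({
        "q": query,     # Lucene/Solr query string, e.g. "a_name:Bach"
        "rows": rows,   # maximum number of documents to return
        "wt": "json",   # ask Solr for JSON instead of the default XML response
    })
    url = "http://localhost:8983/solr/select/?" + params
    with urllib.request.urlopen(url) as response:
        return json.load(response)["response"]["docs"]

for doc in solr_search("a_name:Bach"):
    print(doc)
```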

More advanced search techniques are covered next and at this point I felt a lot of what was being discussed went over my head. Perhaps this was because my own search experience hasn’t extended very far and the behind-the-scenes algorithms powering search aren’t something I’ve had to directly work with. There were sections here that definitely felt aimed at people with a much more thorough understanding of the theory underpinning search and how a knowledge of mathematics and the data being searched are essential for search algorithm design. Having said this, these chapters felt like they would be really useful to come back to at some point in the future and I’m sure that people working with search on a daily basis would find some useful advice here for how to get the best out of Solr.

Solr provides much more than just indexing and search, and the fact that various components are available to handle many other common search-related functions is one of its main benefits. These components provide things like the highlighting of search terms in returned results, spell-checking, related documents and so on. The authors cover components which ship with Solr to provide this functionality, as well as mentioning a few that are currently separate software projects. One can easily see how all of this would be directly applicable if one were adding search capability to one's own product or web site, as there are a lot of wheels that Solr saves you from having to re-invent. The book also mentions the various parts of Solr that can be extended to modify or add new behaviours, which of course is one of the many advantages of its open source nature.
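
As a rough illustration (again with made-up field names, and assuming the relevant components are configured in solrconfig.xml), these components are typically switched on by adding extra parameters to an ordinary query:

```python
# Sketch: a select request with the highlighting and spell-check
# components enabled via extra request parameters. Field names are illustrative.
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "q": "t_name:concerto",
    "wt": "json",
    "hl": "true",               # enable the highlighting component
    "hl.fl": "t_name",          # field(s) to return highlighted snippets for
    "spellcheck": "true",       # enable the spell-check component
    "spellcheck.q": "concerto", # term to generate spelling suggestions for
})
url = "http://localhost:8983/solr/select/?" + params
print(urllib.request.urlopen(url).read()[:500])
```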

The final three chapters move on to the more practical side of actually using Solr in the “real world” and discuss various deployment options, how it can be monitored using JMX, security, integration and scaling. In addition to Java (which is probably the most powerful and straightforward way of integrating with Solr), support for languages like JavaScript, PHP and Ruby is described. I felt the Ruby section was way too long; maybe one of the authors has a soft spot for the Ruby language? The sections on writing a web crawler and doing autocomplete were far more interesting and probably also more generally applicable. The book wraps up with a thorough discussion of how to scale Solr: scaling high (optimising a single server through techniques like caching, shingling, and clever schema design and indexing strategies), scaling wide (using multiple Solr servers and replicating or sharding data between them) and scaling deep (a combination of the former two approaches).
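
For the “scaling wide” case, a distributed query seen from the client's side is just an ordinary request that lists the shards to fan out to; the host names below are hypothetical:

```python
# Sketch of a distributed ("scaling wide") query across two Solr shards.
# The shard host names are hypothetical.
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "q": "a_name:Bach",
    "wt": "json",
    # The node receiving this request queries both shards and
    # merges the partial results before responding.
    "shards": "solr1.example.com:8983/solr,solr2.example.com:8983/solr",
})
url = "http://solr1.example.com:8983/solr/select/?" + params
print(urllib.request.urlopen(url).read()[:500])
```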

On the whole this is a very thorough, detailed book, and it is clear that the authors have a lot of experience with Solr and how it is used in practice. The book does not cover much theory, assumes a fair amount of prior knowledge, and is definitely aimed at those who need to get their hands dirty and get up and running with Solr in a production environment. The authors have a straightforward, open and honest writing style and aren’t afraid of clearly stating where Solr has limitations or imperfections. While the book may have a somewhat steep learning curve, this is isolated to certain chapters which can be skipped and returned to later if necessary. The fact that the writing is concise and to the point means one doesn’t have to wade through pages of flowery text before getting to the good bits. If you’re seriously thinking about using Solr, or are already using it and want to know more so you can take full advantage of it, I would definitely recommend this book.

Full disclosure: I was given a copy of this book free of charge by the publisher for review purposes. They placed no restrictions on what I could say and left me to be as critical as I wanted, so the above review is my own honest opinion."

Comment Re:The language all consumers understand: money. (Score 1) 163

And release a statement that they are testing this new filter, and have signed all politicians up for a trial. Randomly block about 10% of their traffic, and also some porn sites. Slow down their download speeds, and triple the prices. Anyone who publicly supports the filtering will of course get added to the trial.

Comment Re:What the hell (Score 4, Insightful) 321

Here in Denmark we were taught that if coverage is bad, as it often is at sea, a text message is more likely to make it through. The same may be true in low-battery situations, or when speaking aloud is not safe, as in some shooting and hijacking situations. In some situations the background noise may make voice communication unreliable, and some accidents may even impair your ability to speak... There are many reasons to allow the use of text messages.

Comment Open Source Product vs Company (Score 1) 357

If that source code isn't made available, then you're not an open source company.

Technically, a single company can have products under both closed and open licenses - I know, I work for one. It can even offer the same product under an Open Source license and under a different license. Owning the copyright, it can fork the product, implement some features in only one version, and release that version only under a closed source license.

Of course, nothing prevents anyone from taking a version that has been released under an Open Source license, and (re)implementing the features the company only offers under a closed license. Except that it requires time, effort, and know-how.
