Comment clever ideas, at what cost? (Score 1) 21

There are some very cool ideas here, particularly the use of hydraulic coupling to measure pressure and vibration, as well as the measurement of thermal properties. It's a very nice sensor for robot fingers.

However, it seems to be targeted solely toward bulk perception, since the fingertip looks smooth and uniform. This is quite different from human fingertips, where the fingerprint ridges provide a significant component of tactile perception, particularly in motion. It's also not obvious why the thermal measurements should correspond to human perception, which depends at least as much on the thermal (and mechanical) properties of what's underneath the material as on the material itself. For example, the sensation of touching a clothed live person's arm is very different from touching a clothed mannequin, even if the clothing materials are identical. But even so, it's a big improvement on florid adjectives.

Unfortunately, the website seems completely devoid of information about ordering their products or services, or on examples of their measurement results, which looks like a bad sign. It would be a pity if they were aiming to introduce an expensive proprietary standard. Many measurement standards in the physical world (e.g., ASTM) are hugely and disproportionately expensive (not to mention comparatively ancient in technical sophistication--Shore Durometer, anyone?). These costs form a significant barrier for small businesses attempting to introduce a novel product or material. A cynic might say that's precisely why large established enterprises provide financial support for those standards-setting organizations.

Comment Re:Err, not the "birth of time-sharing" (Score 1) 146

JOHNNIAC Open Shop System (JOSS) was another early time-sharing system, demonstrated in 1963. By 1964, the time-sharing idea was becoming widespread.

But, yes, indisputably, Dartmouth gave us BASIC, and like George Washington's proverbial axe (which had both its head and handle replaced multiple times), BASIC remains with us today. At least it's not as harmful as C; BASIC arrays and strings always had bounds-checking.

Comment Economic bias, not just cultural (Score 4, Insightful) 379

As others have observed, older workers tend to want to be compensated for their experience... so they're more expensive.

In a rational hiring world, that might not matter much--they usually deliver greater value, too--but it's often not rational people (or, let's be polite and say, people who could be better-informed) that are making that decision--it's people who want to minimize costs no matter what.

Hire an expensive engineer who really understands the work? Or two young cheap ones who might not? The latter, of course--for the same reason that outsourcing to the third world is so popular despite the incredible hurdles of management and quality. And if the bet fails, and neither of the young'ns can get it done (despite the 80-hour weeks that they can deliver and have come to expect), well, you'll be off to another job by then anyway and no one will know.

It's a vicious cycle: VCs like start-ups that live on ramen noodles because they're cheap to fund, unlike ones that have a real staff and a real track record. And sure, some of those cheap ones will succeed, and they'll get the press (in no small part because they are young), and that will perpetuate the myth that only young folks can innovate, leading the VCs to believe in their own decisions.

I don't see the bias going away. As a general rule, young people are less expensive, more dedicated, more attractive, and just more fun than us old farts. The market wants crap in a hurry, and this is one of the primary reasons it gets it.

Comment Re:Missed opportunity? (Score 1) 17

You might think that larger gates are an inherent advantage, but it's not that simple. The advantage is there to a modest extent, but the counter-effect is strong, too: smaller gates present that much less cross-section in which a particle hit can deposit charge or cause damage. In practice, radiation tolerance depends much more on a bunch of other process characteristics, and it is very difficult to predict.

Failover is rarely "simple". There's a lot of code and mechanism, somewhere, to decide when a failure has occurred, determine the kind of failure, apply applicable recovery procedures, and restore what context can be restored and resume. This is a lot easier to do when you're not also trying to fit in 32KB of flash.
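
To give a sense of how much machinery even "simple" failover implies, here's a rough sketch in Python (hypothetical primary/backup objects and method names, not any real flight-software design):

```python
import time

def monitor(primary, backup, heartbeat_timeout=1.0):
    """Skeleton of detect -> classify -> recover -> restore context -> resume."""
    last_heartbeat = time.monotonic()
    while True:
        if primary.heartbeat():                    # detect: is it still alive?
            last_heartbeat = time.monotonic()
        elif time.monotonic() - last_heartbeat > heartbeat_timeout:
            kind = primary.classify_failure()      # determine the kind of failure
            if kind == "transient":
                primary.reset()                    # applicable recovery procedure
            else:
                state = primary.salvage_state()    # restore what context can be restored
                backup.restore(state)
                backup.take_over()
                return backup                      # resume on the backup
        time.sleep(0.1)
```

Every one of those calls hides real code, and none of it is fun to squeeze into a tiny flash budget.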

Space computing is very conservative. It is astonishing how much has been accomplished with such simple processors. But advances in the semiconductor art beg to be used, and projects like this could help light the way if not hamstrung by limited architectural choices.

Comment Missed opportunity? (Score 3, Insightful) 17

Arduinos make somewhat more sense than phonesats (Really? We're sending a touch-screen and graphics controller into low earth orbit? Because the boss couldn’t think of any sillier project and had a spare $100K for launch costs?).

But it's hard to understand why a 17-wide parallel configuration of 8-bit microprocessors each having just 2.5KB of RAM makes for a sensible satellite payload processor. Why not something with an architecture more like a Raspberry Pi or BeagleBoard? Not those specific boards necessarily, but a similar, simple one-chip SoC approach and a decent amount of memory. A processor like that could drive a bunch of experiments (more than you can fit in a Cubesat), and have enough room for the software to be comfortable and maybe even maintainable on-orbit.

A SoC-based system would fit in the same low cost profile but could run much more interesting applications. Ardusat feels like a missed opportunity, because it has lots of other things going for it: open source, submission process, international coalition, hobbyist/student focus, etc.

Comment Just be sure your customers acknowledge it (Score 1) 364

Consultants can largely solve this problem by having customers declare explicitly that the work doesn't fall in the realm of taxable services as defined by the ruling.

There's so much ambiguity in the wording that as long as you're not in the crosshairs of being a reseller who supplies expensive software (think Oracle, not so much Windows) in the guise of a (heretofore) non-taxed service, you'll be fine. It's not worth their time to enforce it otherwise.

The key is being creative. Supplying customized Drupal installations? No, you're writing unique software to customer specifications for the customer to use with their existing Drupal platform. And maybe you're supplying training about operation and installation of Drupal systems. And helping them evaluate their business needs that might be met by aforesaid custom software. The ruling (section II) even explicitly exempts "training" and "evaluation". Maybe a small fraction of your business might fall under the ruling, but if that's the case, you just need to make sure it's covered by separate contracts. If there isn't significant money flowing out of your business for (reseller tax-exempt) software that your customers eventually get, it will be pretty challenging for the DOR to argue that your business is taxable... as long as you're smart about how you define the business.

I'm as worried as the next fellow about jackbooted thugs from the government running my business into the ground. However, the reality here is that these are overworked civil servants who are motivated by meeting their goals--and they'll do that by pursuing the cases that the statute is intended to target, because those will be most likely to generate revenue. No bureaucrat wants a lawsuit; they want passive compliance. Maybe ten years from now, it will be different, but if it is, I'd bet it's because the law is expanded (to cover all services, in the name of "fairness"), not because this statute is egregiously misinterpreted.

Comment Embed logging technology in your software (Score 1) 205

By this I mean that you should instrument the code with real, meaningful activity logging, not just some afterthought that grabs a stack trace and some state variables (although you'll want to have that data, too). If you instrument your code with logging that produces readily human-interpretable information about what's going on, the payback is huge, because it makes internal developers' lives easier, and it allows even first-level support folks to do a better job of triage and analysis. It's really important to make it meaningful to the human reader, not just "readable"--an XML representation full of hexadecimal doesn't cut it; it needs to include symbolic names.
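
To make that concrete, here's a minimal sketch in Python (hypothetical event and field names): the log lines carry the business-level meaning, and the stack trace rides along rather than standing in for it.

```python
import logging

log = logging.getLogger("orders")   # hypothetical subsystem name

def place_order(customer_id, items):
    # Symbolic, human-readable event names and fields, not a hex dump.
    log.info("order.submit customer=%s item_count=%d", customer_id, len(items))
    try:
        total = sum(price for _name, price in items)
        log.info("order.priced customer=%s total=%.2f", customer_id, total)
        return total
    except Exception:
        # Keep the stack trace too, but alongside the meaningful context.
        log.exception("order.failed customer=%s", customer_id)
        raise
```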

Let the users see the logged data easily, if they ask for it, and maybe give them a single knob to turn that controls the level of logging. This will help technically sophisticated users give more useful reports, and it's really helpful in any sort of interactive problem resolution (OK, do X. Now read the last few log messages. Do any of them say BONK?).
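
Something like this is all the "knob" needs to be (a sketch, assuming a user-visible setting of 0/1/2):

```python
import logging

# Hypothetical user-facing setting: 0 = quiet, 1 = normal, 2 = verbose.
LEVELS = {0: logging.WARNING, 1: logging.INFO, 2: logging.DEBUG}

def configure_logging(verbosity, logfile="app.log"):
    # One user-facing number maps onto the standard level machinery.
    logging.basicConfig(
        filename=logfile,
        level=LEVELS.get(verbosity, logging.INFO),
        format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    )
```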

It's really useful to include high-resolution time--both clock time and accumulated CPU time--in log messages. This is great for picking up weird performance problems, or tracking down timeouts that cause mysterious hangs. Depending on your architecture and implementation technology, other sorts of "ambient" data (memory usage, network statistics) can be useful here, too.
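
For example, a little wrapper like this (a sketch) records both wall-clock and CPU time for anything you care to wrap:

```python
import logging
import time

log = logging.getLogger("perf")

def timed(label, fn, *args, **kwargs):
    """Run fn and log high-resolution wall-clock and accumulated CPU time."""
    wall0, cpu0 = time.perf_counter(), time.process_time()
    result = fn(*args, **kwargs)
    log.info("%s wall=%.6fs cpu=%.6fs",
             label, time.perf_counter() - wall0, time.process_time() - cpu0)
    return result
```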

There's a trade-off between logging by frameworks, mixins, macros, etc., and logging of specific events. The former approach gets comprehensive data, but it often can't provide enough contextual semantic information to be meaningful. The latter approach scatters logging ad-hoc throughout the code, so it's very hard to make any argument for comprehensiveness, but if done properly, it's spot-on for meaningful messages. Usually best to do some of each, and have good control knobs to select.
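
A toy illustration of the two approaches side by side (a sketch, hypothetical names): the decorator sees every call but only knows names and arguments; the explicit call knows what the operation actually means.

```python
import functools
import logging

log = logging.getLogger("trace")

def logged(fn):
    """Framework-style logging: comprehensive, but context-poor."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        log.debug("call %s args=%r kwargs=%r", fn.__name__, args, kwargs)
        return fn(*args, **kwargs)
    return wrapper

@logged
def transfer(src, dst, amount):
    # Ad-hoc, event-specific logging: knows the semantics of the operation.
    log.info("funds.transfer src=%s dst=%s amount=%.2f", src, dst, amount)
```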

Logging can generate a lot of data, so it's important to be able to minimize that burden during routine operation (especially in deployed applications, where there should be a strict limit on the amount of space/time it takes up). But it's also useful (especially when it's configured to generate a lot of data) to have tools that allow efficient ad-hoc review and analysis--an XML tree view, maybe filtered with XSLT, can be easier than a giant text file.
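
In Python, for instance, capping the space is nearly a one-liner with a rotating handler (a sketch; the sizes are arbitrary):

```python
import logging
from logging.handlers import RotatingFileHandler

# Hard cap on disk usage: ~5 MB per file, 3 old files kept, oldest discarded.
handler = RotatingFileHandler("app.log", maxBytes=5_000_000, backupCount=3)
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)s %(name)s: %(message)s"))
logging.getLogger().addHandler(handler)
```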

In any complex system, logging is one of the very first things I recommend implementing. After the architecture is settled enough to know what will be some of the meaningful activities and objects to record, bolting in a high-efficiency, non-intrusive logging infrastructure is the very next step. Then comes business logic, user interface, and all the other stuff. Pays for itself many times over.

Comment Re:Historically, NSA have done the opposite. (Score 1) 407

Considering the rest of Coppersmith's work, I have no trouble believing in his genius or that he independently invented differential cryptanalysis. Are you suggesting that he didn't, and instead lied about it 20 years later?

Your post rather mischaracterizes the content of that section of Wikipedia. It is hardly "everyone else's version" that NSA made changes. That section cites both the Senate inquiry and Walter Tuchman (then of IBM) as saying that NSA did not dictate any aspect of the DES algorithm. The Konheim quote ("We sent the S-boxes to Washington...") is an unreferenced comment from Applied Cryptography (which says "Konheim has been quoted as saying..." without saying where or by whom). Schneier goes on to express admiration for IBM's work and how it scooped the rest of the open crypto world for 17 years.

In any case, the important point is that changes were made, whether by IBM alone or in collaboration with NSA, and they unequivocally made the algorithm much better, as opposed to the conspiracy theory that NSA made it worse. The 56-bit key is reasonably commensurate with the security DES actually supplies (against the attacks of the day, secret and otherwise). Now if it had turned out to be weak against linear cryptanalysis, or indeed any other attack of the last 40 years, that would be news--but it's not weak, it's just average, strongly suggesting that no better attacks were known back then.

Comment Re:Historically, NSA have done the opposite. (Score 5, Interesting) 407

Biham and Shamir, Differential Cryptanalysis of the Data Encryption Standard, at CRYPTO '92. They showed that the S-boxes were about as strong as possible given other design constraints.

Subsequently, Don Coppersmith, who had discovered differential cryptanalysis while working (as a summer intern) at IBM during the development of DES in the early 1970's, published a brief paper (1994, IBM J. of R&D) saying "Yep, we figured out this technique for breaking our DES candidates, and strengthened them against it. We told the NSA, and they said 'we already know, and we're glad you've made these improvements, but we'd prefer you not say anything about this'." And he didn't, for twenty years.

Interestingly, when Matsui published his (even more effective) DES Linear Cryptanalysis in 1994, he observed that DES was just average in resistance, and opined that linear cryptanalysis had not been considered in the design of DES.

I think it's fair to say that NSA encouraged DES to be better. But how much they knew at the time, and whether they could have done better still, will likely remain a mystery for many years. They certainly didn't make it worse by any metric available today.

Comment The GSM ciphers are an interesting story (Score 2) 407

I can't find a good reference right now, but I recall reading a few years back the observation that one of the GSM stream ciphers (A5/1?) has a choice of implementation parameters (register sizes and clocking bits) that could "hardly be worse" with respect to making it easily breakable.

This property wasn't discovered until it had been fielded for years, of course, because the ciphers were developed in the context of a closed standards process and not subjected to meaningful public scrutiny, even though they were nominally "open". The implication was that a mole in the standardizing organization(s) could have pushed for those parameters based on some specious analysis without anyone understanding just what was being proposed, because the (open) state of the art at the time the standard was being developed didn't include the necessary techniques to cryptanalyze the cipher effectively. Certainly the A5 family has proven to have more than its fair share of weaknesses, and it may be that the bad parameter choices were genuinely random, but it gives one to think.

Perhaps some reader can supply the reference?
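
In the meantime, for anyone curious what "register sizes and clocking bits" refers to, here's a structural sketch of an A5/1-style generator in Python, using the commonly published parameters (19/22/23-bit registers, clocking bits 8/10/10, majority-rule clocking). Key and frame-number loading are omitted and the seed is arbitrary, so this is illustrative only, not real GSM keystream:

```python
LENGTHS = (19, 22, 23)                                   # the three LFSR lengths
TAPS = ((13, 16, 17, 18), (20, 21), (7, 20, 21, 22))     # feedback tap positions
CLOCK_BITS = (8, 10, 10)                                 # irregular-clocking control bits

def _step(reg, length, taps):
    fb = 0
    for t in taps:
        fb ^= (reg >> t) & 1
    return ((reg << 1) | fb) & ((1 << length) - 1)

def keystream(regs, nbits):
    out = []
    for _ in range(nbits):
        clocks = [(r >> c) & 1 for r, c in zip(regs, CLOCK_BITS)]
        maj = 1 if sum(clocks) >= 2 else 0
        regs = [_step(r, n, t) if b == maj else r          # majority rule
                for r, n, t, b in zip(regs, LENGTHS, TAPS, clocks)]
        out.append(((regs[0] >> 18) ^ (regs[1] >> 21) ^ (regs[2] >> 22)) & 1)
    return out

print(keystream([0x5A5A5, 0x2AAAAA, 0x3CCCCC], 8))   # arbitrary nonzero seed
```

The point is how few degrees of freedom a design like this has, and how much the security hinges on choosing them well.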

The 802.11 ciphers are another great example of the risks of a quasi-open standardization process, but I've seen no suggestion that the process was manipulated to make WEP weak, just that the lack of thorough review by the creators led to significant flaws that then led to great new research for breaking RC4-like ciphers.

Comment A job for legislators, not programmers (Score 1) 400

The truly frightening thing about this article is that the authors apparently felt it was the job of the programmers to determine what the reasonable algorithmic interpretation of the law's intent was, thus again demonstrating how completely out of touch with reality many academics seem to be.

The legislative process is appallingly imperfect, to be sure, but at least it has the pretense of openness and consideration of constituent interests. That's where these decisions need to be made.

Fortunately, since legislators break these laws as much as the rest of us, we probably don't have too much to worry about. Think about all those electronic toll systems--they certainly know how fast you were going, on average, and an intuitive application of the mean value theorem will quickly show that you were speeding, but we rarely if ever get tickets from those systems.
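
A trivial worked example of the point (the numbers are hypothetical): two gantries 30 miles apart, 24 minutes between reads, means you averaged 75 mph, so by the mean value theorem you were doing at least 75 at some instant.

```python
# Hypothetical toll-gantry arithmetic: average speed over a known distance.
distance_miles = 30.0
elapsed_hours = 24.0 / 60.0             # 24 minutes between transponder reads
print(distance_miles / elapsed_hours)   # 75.0 mph -- over the limit somewhere
```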

Comment Lower-quality, Market-trailing (Score 1) 271

I've used Thinkpads exclusively since I bought a 560 in late 1996. I'm currently using a 2009-vintage W500 and hoping it doesn't break, because it has more pixels (1920x1200) than any Windows laptop made today. They've always been rugged, functional, and effective tools for getting work done.

What did I want from yesterday's Lenovo announcement? A retina-class (i.e., 2560x1600) display, modern CPU/memory/SSD hardware, and no significant changes elsewhere, because Thinkpads are in fact pretty darn well-engineered (and designed), and remarkably reliable.

What did I get? A paean to how important it is to design for millennials (who apparently need dedicated multimedia buttons), a bunch of important features gone (physical buttons? function keys? replacement battery? indicator LEDs? Thinklight?) and an explanation that the single hardest decision they had to make for the T431 was how to re-orient the logo on the lid. I can't even get a big SSD--their largest is 256GB, unlike the 600GB Intel unit I installed in the W500 18 months ago.

Bah. I'd vote with my feet, except there aren't any alternatives. Why is there no Windows laptop with a high-resolution display? I suppose I can get a Macbook or a Chromebook and run everything in a VM. But then there's no Trackpoint.

Comment Planetary damage and energy vs. fantasy (Score 1) 626

It must be silly season over at the good ol' BAS. First we get "bio-terror is impossible", and now this. I miss Hans Bethe.

Other posters have pointed out how silly it is to base any argument on hundreds of years of exponential growth. Yep, if that happens, and all other things stay the same, we're screwed. But clearly, all other things aren't going to stay the same. Even Malthus knew that argument is bogus.

Will population and concomitant energy use increase inexorably? Err, maybe not. There's a lot of demographic evidence that population growth slows, even reverses, as living standards improve and, especially, as women become better educated and control their own destinies.

Can solar (or nuclear) solve all our energy problems? Probably not, at least not without a lot of improvement in battery technology, because the energy density of hydrocarbons is so appealing. And there are indeed real resource issues that may put a crimp in massive production of electronics, solar panels, transmission lines, reactor vessels, you name it. For production on a significantly more massive scale, those issues need to be addressed. But scarcity relative to current practices is a strawman--as material costs increase, economic pressures generally yield optimizations. A lot of these look like issues only because nobody has seriously tried to solve them; material supplies simply haven't been a constraint before.
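
For a rough sense of the gap (ballpark textbook figures, not from TFA): gasoline carries something like 45 MJ/kg, while a good lithium-ion cell is in the 0.5-0.9 MJ/kg range, so the raw ratio is tens-to-one before you credit the electric drivetrain's much better efficiency.

```python
# Ballpark specific-energy comparison (approximate figures, for scale only).
gasoline_mj_per_kg = 45.0    # roughly 44-46 MJ/kg
li_ion_mj_per_kg = 0.7       # roughly 0.5-0.9 MJ/kg (~150-250 Wh/kg)
print(gasoline_mj_per_kg / li_ion_mj_per_kg)   # ~64x, before drivetrain efficiency
```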

Is conservation important? Yah, you betcha. The cheapest energy of all is that which doesn't get used.

Is energy supply the compelling motivation for solar? No, it's climate change and pollution. The longer we dither about renewables, the sooner we will face the massive costs for mitigating all the damage caused to date. We'll pay a lot of those costs eventually--the harm is too far along to cure itself. But at this rate, it's not our grandchildren, or our children, who will be paying for huge sea walls around Manhattan, it's us! The longer we can push off those mitigations, the easier they will be. That, to my mind, is the overwhelming argument for solar (and other low-emission) energy.
