Comment VENOM bug exploits floppy drivers in KVM, etc. (Score 1) 368

A floppy disk is that device you almost never bother using, but which gets added to your virtual machines by default, at least under VMware (I haven't paid attention on OpenStack). The recently-discovered VENOM vulnerability exploits bugs in the virtual floppy-drive controller code, which have been around for a decade, to let a process on a virtual machine break out into the hypervisor and maybe mess with other virtual machines.

So it's especially timely to have a convenient new platform for using floppies!

Biotech

Biologists Create Self-Healing Concrete 94

Mr.Intel writes: A team of microbiologists from the Delft University of Technology claims to have invented "bioconcrete" — concrete that heals cracks and breaks using bacteria. The goal was to find a type of bacteria that could live inside concrete and also produce small amounts of limestone that could re-seal cracks. This is a difficult prospect because concrete is quite dry and strongly alkaline. The bacteria needed to be able to stay alive for years in those conditions before being activated by water. The bacteria also need a food source — simply adding sugar to concrete will make it weak. The scientists used calcium lactate instead, adding biodegradable capsules of it to the concrete mix. "When cracks eventually begin to form in the concrete, water enters and opens the capsules. The bacteria then germinate, multiply and feed on the lactate, and in doing so they combine the calcium with carbonate ions to form calcite, or limestone, which closes up the cracks."
Google

Academics Call For Greater Transparency About Google's Right To Be Forgotten 57

Mark Wilson writes: Just yesterday Google revealed that it rejects most Right To Be Forgotten requests it receives. In publishing yet another transparency report, the search giant will have hoped to put to bed any questions that users and critics may have had. While the report may have satisfied some, it did not go nearly far enough for one group of academics. A total of 80 university professors, law experts and technology professionals have written an open letter to Google demanding greater transparency. The letter calls upon the company to reveal more about how Right To Be Forgotten requests are handled, so that the public is aware of the control being exerted over "readily accessible information."

Comment GSM Rolling their own - Malice or Incompetence? (Score 1) 111

GSM rolled their own crypto. They depended on Obscurity to protect their algorithm. Somebody handed a copy to Ian Goldberg, then a grad student at Berkeley, and the reason it took him three whole hours to break it was that the Chinese restaurant near campus was having the good lunch special that day.

It was a weak enough algorithm (designed in electrical-engineer-math style, which is fine if you want checksums for reliability) that I'll give them credit for incompetence, though the fact that 10 bits of the already-too-short key were always set to 0 looks much more like malice (with a slight possibility that an early hardware implementation didn't have enough spare bits on some part of the chip.)

Ron Rivest can sometimes get away with rolling his own algorithms - but RC4 and MD5 are looking pretty weak these days, even if you don't count the rule (documented from the beginning) that you must never, ever use the same RC4 key twice. That rule was ignored in several different ways in PPTP and in a number of other protocols implemented by people who rolled their own implementations without understanding the algorithms.
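
For anyone who hasn't seen why that rule matters, here's a minimal sketch (my own toy RC4 implementation and made-up plaintexts, purely for illustration): encrypt two messages under the same key and the identical keystreams cancel when you XOR the ciphertexts, handing an eavesdropper the XOR of the plaintexts with no key recovery needed.

# Minimal RC4 (KSA + PRGA), purely for demonstration -- don't use RC4 for anything real.
def rc4_keystream(key: bytes, n: int) -> bytes:
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm (KSA)
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(n):                        # pseudo-random generation (PRGA)
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"same key twice"                       # hypothetical key, reused -- the mistake
p1 = b"ATTACK AT DAWN"
p2 = b"RETREAT TO FORT"
c1 = xor(p1, rc4_keystream(key, len(p1)))
c2 = xor(p2, rc4_keystream(key, len(p2)))

# The keystreams are identical, so XORing the ciphertexts cancels them out and
# leaks the XOR of the plaintexts -- no key needed to start recovering messages.
assert xor(c1, c2) == xor(p1, p2)

That XOR-of-plaintexts leak is what key reuse hands an attacker before you even get to RC4's other weaknesses.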

Comment Google Makes Upgrading Impossible for Consumers (Score 1) 434

Phones and tablets are different problems - with phones and 3G/4G/LTE tablets, you've got a carrier who can push updates to you, but if you've got a Wi-Fi-only tablet, there's no carrier, just a manufacturer. Do they have an incentive to upgrade? Does the user have a way to tell?

Google's new product announcements always say "See all our shiny new features! If you have one of these three Google Nexus products, you can get it! Otherwise, wait for your carrier to maybe do something!", but never say (at least to consumers; I assume they tell manufacturers) "If your device has at least this generation processor and this much memory, you can upgrade, here's how." Part of that is because, for the big-vendor phones, the manufacturer and sometimes the carrier heavily customize the product, replace half the user interface and tools with custom ones and add a bunch of useful apps or bloatware, and then you can't just do the OS upgrade yourself because you'd lose the customization and probably also lose the bloatware.

My old HTC phone was heavily customized, and the upgrade from 2.1 to 2.2 wasn't actually pushed out, though you could pull it for a little while, if your phone wasn't broken when locked-to-Android-Market got replaced with Google Play. My no-name 4.0.x tablet, which has Google Play but no obvious customization, is now running 4.0.4 (I think it originally had 4.0.1), so it shouldn't be a problem to upgrade it if it's got enough horsepower - and Google never tells you how much horsepower they need, just which Nexus models support it. I ended up replacing the HTC with a Samsung, and haven't taken the time to go back and install Cyanogen on the HTC; I assume if I did that to the tablet I'd lose Google Play access, which I depend on for apps and patches.

Comment There're lots of new 2.3 phones (Score 1) 434

There are two kinds of 2.x phones out there - really old phones, and cheap low-end phones that run 2.3 because they don't have the horsepower to run 4.x. Many of them are pay-as-you-go phones you can buy at 7-11 or low-end ones from carriers for customers who don't want to pay iPhone prices.

My HTC was locked to Android Market, and wasn't willing to talk to Google Play, and the carrier never pushed out the 2.1->2.2 upgrade in a way that worked for me. 3.x was mainly a tablet release that didn't affect phones, and most of those seem to have been upgradeable to 4.0.

Businesses

Technology and Ever-Falling Attention Spans 147

An anonymous reader writes: The BBC has an article about technology's effect on concentration in the workplace. They note research finding that the average information worker's attention span has dropped significantly in only a few years. "Back in 2004 we followed American information workers around with stopwatches and timed every action. They switched their attention every three minutes on average. In 2012, we found that the time spent on one computer screen before switching to another computer screen was one minute 15 seconds. By the summer of 2014 it was an average of 59.5 seconds." Many groups are now researching ways to keep people in states of focus and concentration. An app ecosystem is popping up to support that as well, from activity timing techniques to background noise that minimizes distractions. Recent studies are even showing that walking slowly on a treadmill while you work can have positive effects on focus and productivity. What tricks do you use to keep yourself on task?

Comment Went to Smithsonian Air/Space Museum for research (Score 1) 160

Back in the late 80s, when I was working on that decade's failed project to replace the 360/90-based systems, my coworker and I were in DC for a meeting on some phase of the project (or one of the related projects), and we had half a day to spare, so we went to the Smithsonian Air & Space Museum to do "research". They didn't have examples of the system we were working on, but they did have some other air traffic control systems (TRACON, I think), and other cool stuff like astronaut ice cream. After that we went to the National Gallery, because Van Gogh.

Comment Redundancy is really hard. (Score 1) 160

That's not even counting the huge amount of code that's designed to make sure all the other parts of the code are working, and to do something appropriate if they're not, plus the code that's designed to make sure that code is also working. That stuff's a lot harder than the basic code, and getting it right is the difference between a system with double- or triple-redundant hardware that gets you the 8 9s of reliability the FAA naively thought was possible with 1980s hardware, and an air-traffic control system with triple-redundant hardware running an operating system that crashed weekly (that one was in Singapore, but I don't know if it was actually deployed; I assume they killed it long before it hit the field).
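
For flavor, here's a toy sketch of the kind of cross-checking involved (Python, invented for illustration; it's nothing like the actual AAS design): a majority voter over redundant channels that masks one bad channel and flags it for maintenance. That part is easy - the hard part is the next layer that has to notice when the voter itself stops running.

from collections import Counter

def vote(readings):
    """Majority-vote across redundant channels; flag any channel that disagrees.

    readings: dict mapping channel name -> value (or None if the channel is down).
    Returns (voted_value, suspect_channels). Purely illustrative.
    """
    live = {ch: v for ch, v in readings.items() if v is not None}
    if len(live) < 2:
        raise RuntimeError("not enough healthy channels to vote")
    value, count = Counter(live.values()).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no two channels agree")
    suspects = [ch for ch, v in readings.items() if v != value]
    return value, suspects

# Channel B has drifted; the voter masks the fault and reports the suspect.
value, suspects = vote({"A": 4012, "B": 4038, "C": 4012})
assert value == 4012 and suspects == ["B"]

# And then something else has to confirm that vote() itself is still being
# called on schedule -- and something has to watch that watcher, and so on.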

The 1980s attempt at developing this was only going to be deployed at the ~25 En-Route control centers (with simpler components at the several hundred radar sites feeding each one); it wasn't intended for every airport tower - the towers ran a bunch of different systems.

It's interesting to see how much this thing has grown beyond the initial "get radar signals onto the board, replace paper flight-strips, and never ever ever crash" goals.

Comment LOTSa Naivete was involved. (Score 1) 160

Most of it on the part of the people who started the original project. They thought it would be done in 3-4 years, made way too many incorrect decisions for the wrong reasons, specified lots of requirements without understanding how impossible they were to meet, picked multiple sets of pie from multiple sets of skies, and didn't start with the ability to get the kind of budget they would have needed to do the job right (assuming they'd picked a definition of "right" that could actually have been implemented in the 1980s, when they were trying to replace a 1960s system that had much lower ambitions when it was built, but was still a big upgrade over its 1950s predecessor). The one thing everybody knew was that if airplanes fall out of the sky or crash into each other, the FAA gets blamed; if the system's late, the FAA gets blamed; if it's over budget, the FAA gets blamed; and if the budget had been bigger to start with, the FAA would have been blamed. And if the FAA is going to get blamed, you can bet the contractors trying to design the system are going to get blamed a lot, even just for asking questions while they're working on the thing.

Projects with a scope of tens of millions of dollars are very different from projects with a scope of a few billion or a few tens of billions. A couple of years after I worked on my part of that fiasco, one of the information-systems directors at one of the National Labs was telling us that he was trying to restructure things into small, manageable projects, because he'd never seen the government do a billion-dollar computer project that didn't fail. And all that ancient "Mythical Man-Month" stuff said things you probably already knew about projects in the $10m range sometimes being too large; I remember one much less critical project that had 30 people working on it, so it had to grow to 150 people before it totally failed; if it had started with 5 people instead of 30 and had a budget limiting it to a maximum of 10, it might have worked. But projects that really are legitimately at the billion-dollar scale are really, really hard.

Comment Ada's no more verbose than C++ or Java (Score 1) 160

It's designed for object-oriented use, with lots of type specification and such done up front, to push decisions into design time rather than coding time; it's not as terse as C or APL, but it's nowhere near as verbose as COBOL. I wouldn't use it today (mostly because its main uses are military stuff I won't do and antique maintenance, and it doesn't have all the friendly libraries I'm used to and probably doesn't easily link to non-Ada systems), but it's a fairly cromulent language.

Comment Oh, my! Re: Glitches (Score 1) 160

The article you're pointing to was about how one of the ERAM systems crashed trying to cope with a bizarre flight plan for a U-2 spy plane.

When I was working on AAS in the late 80s, one thing I was mildly concerned about was that the planned "upgrade" our project was trying to design wouldn't really be able to cope with supersonic aircraft over the continental US. The requirements for how much area had to show on a controller's screen, and how fast the radar sweeps were, meant that anything at Concorde speeds would kind of blip onto the screen, maybe bounce once or twice more, and then be gone by the next refresh, either to somebody else's screen or to another regional center. Economics and politics (sonic booms, restrictions on which nations' airlines could compete for US markets, etc.) meant supersonic traffic wasn't a likely prospect anyway, but U-2 spy planes operate under different economics and politics.

Comment I worked on the 1980s version (Score 1) 160

Back in the 1980s, the FAA's shiny new Advanced Automation System project (AAS) was being designed to replace the 1960s-vintage En-Route system, which used IBM 360/90 and 360/50 computers that were getting to be old, unmaintainable, and unreplaceable. (It was getting hard to even get cable connectors for components - imagine coming up with new SCSI-1 terminators these days.)

As with many military aircraft system contracts, they ran a design competition, which had funneled down from four bidders to two by the time I was there. I worked for a subcontractor on one of the teams bidding on AAS. We were the lucky ones who lost; IBM was the poor sucker who won the deal. We learned many lessons about how not to do large software projects. The requirements weren't very well-defined, but the one thing that was certain was that if yet another airplane crash happened, the FAA would take lots of political heat, so everything had to be totally bullet-proof and every bureaucratic ass had to be covered in triplicate. The phase we were working on was already behind schedule and over budget; once IBM won it got much further behind and way further over budget, and it kind of slunk into the 90s and then the 2000s, and the articles referenced above make it sound like Lockheed-Martin bought the IBM Federal division that was working on this debacle.

Originally, the requirements were for 8 9s of reliability (so 99.999999%), but what was worse was that there was no definition of what a failure event was. If a failure meant "each individual radar needed to meet 8 9s", that was hard enough, but if a failure meant "ANY radar's connection was down", then with O(100) radars each radar effectively had to meet 10 9s, not just 8, for the system as a whole to hit its target. Everything had to be triple-redundant to meet those numbers, because taking down any component of a dual-redundant system for 5 minutes of maintenance would blow your reliability for the year. We later found out that the existing 1960s-vintage system that AAS was supposed to replace was shut down for 4 hours per night and replaced by EDARC (a ~1970s upgrade to the ~1950s DARC radar controllers), to make sure that EDARC was available as a working backup and that personnel stayed trained in using and maintaining it. (And of course the radars only had dual access lines, with a typical reliability of 3-4 9s each, so 8 9s per radar was already overkill. Phone company equipment with the famous 5 9s of uptime got there by using lots of dual redundancy in appropriate places.)
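
The arithmetic behind that is easy to reproduce (a back-of-envelope Python sketch using only the numbers in this paragraph - 8 9s, roughly 100 radars, 5 minutes of downtime):

import math

def nines(availability):
    """Express availability as a 'number of nines' (0.999 -> 3.0)."""
    return -math.log10(1.0 - availability)

# If "failure" means ANY of ~100 radar feeds is down, and each feed hits 8 9s:
per_radar = 1 - 1e-8                        # 99.999999% per radar feed
system = per_radar ** 100                   # all ~100 feeds must be up at once
print(round(nines(system), 1))              # ~6.0 -- the system only gets 6 9s

# To get 8 9s for the system as a whole, each feed needs roughly 10 9s:
per_radar_needed = (1 - 1e-8) ** (1 / 100)
print(round(nines(per_radar_needed), 1))    # ~10.0

# And one non-redundant component down for 5 minutes of maintenance per year:
downtime_fraction = 5 / (365 * 24 * 60)
print(round(nines(1 - downtime_fraction), 1))   # ~5.0 -- nowhere near 8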

AAS was originally required to use the DOD-STD-2167 software development methodology, a 1985 standard that the DOD replaced in 1988 with 2167A because 2167 was unusable. (You're having trouble dealing with Agile? This is way, way out in the other direction.) Both were cumbersome waterfall processes; 2167 required something like 180 documents over the predicted 3-year development period, so every week there'd be one or more new documents, hundreds of pages long, all of which were ironclad requirements for all remaining development. Developers wouldn't have time to read and analyze each document and still get their work done, and if they determined down the road that a previous decision had undesirable consequences, there was no way to go back and change it. For example, a decision about whether a given calculation should be done out at the remote radar site, on one of several central processing computers, or on the computer that drove a given operator console might turn out to make several hundred milliseconds of difference in processing time, but any given radar signal had to get from the remote radar to the console in under 1 second. The subcontractor designing the display consoles knew they wouldn't have the horsepower to do it in time, so they bounced it to the central processors early in the requirements process; those didn't even have an architecture that met the redundancy specs yet, so we didn't know if they'd have the resources to do it in time either. (We later offered to move a bit more of their processing into the central system, because we could do the combined calculations faster than having to split them.)
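
To make the flavor of that trade-off concrete, here's an illustrative sketch only: the 1-second end-to-end requirement is from the paragraph above, but every individual stage number below is an invented assumption, not anything from the actual project.

# Hypothetical end-to-end latency budget, radar return to controller's console.
# Only the 1000 ms total comes from the requirement; the stage values are made up.
budget_ms = {
    "processing at the remote radar site":   150,
    "comms link to the en-route center":     100,
    "central tracking / correlation":        450,   # includes the calculation the console folks bounced to us
    "distribution to the operator console":   50,
    "console rendering":                     200,
}
total = sum(budget_ms.values())
print(f"{total} ms used of a 1000 ms budget, {1000 - total} ms of margin")

# Move a ~300 ms calculation off the console and onto the central processors and
# the console stage fits easily -- but now the central stage needs that headroom,
# and nobody could yet say whether its still-undesigned redundant architecture had it.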

(And no, we didn't have to walk uphill both ways through the snow to work on it, but I did have to fly to Chicago for a weekend kickoff meeting in the winter; there was snow there, I had a bad cold, and stuff that looked really good and feasible in a sales VP's slide-show presentation didn't actually look so good when you tried to map it onto reality. And we had to fly out to California weekly for months, back when you could listen to the Air Traffic channel on the plane's sound system and get an idea of what you were working on, which we decided was intended to make us take this stuff seriously. One reason we'd gotten picked, besides having lots of other Air Traffic Control and system-integration experience, was that we had a floating-point chip that calculated trig functions really fast. It turned out that all the "floating-point" data was coming from 12-bit ADCs, and it's much faster to use a lookup table than to wedge in a floating-point chip :-) )
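
To make that last point concrete, here's a toy sketch (Python, nothing like the actual code; the linear ADC-code-to-angle mapping is an assumption for illustration): with only 12 bits of input there are just 4096 possible values, so you compute the trig once up front and every sample after that costs one table lookup.

import math

ADC_BITS = 12
N = 1 << ADC_BITS                      # 4096 possible ADC codes

# Precompute sin/cos for every possible 12-bit input once, at startup.
# The ADC code is assumed to map linearly onto 0..2*pi here; a real radar
# would have its own scaling.
SIN = [math.sin(2 * math.pi * code / N) for code in range(N)]
COS = [math.cos(2 * math.pi * code / N) for code in range(N)]

def bearing_to_xy(adc_code, range_nm):
    """Convert a (bearing ADC code, range) pair to x/y -- one table lookup each."""
    return range_nm * COS[adc_code], range_nm * SIN[adc_code]

x, y = bearing_to_xy(1024, 80.0)       # 1024/4096 of a circle = 90 degrees
assert abs(x) < 1e-9 and abs(y - 80.0) < 1e-9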

Comment This was SUPPOSED to be hypothetical (Score 1) 301

A week or so ago, my wife's Lenovo let some magic smoke out of the charger connection, and yesterday we heard back from the repair folks who are saying that the motherboard is shorted, not just the jack, so depending on how much they want to charge, it's probably new laptop time. What would be really nice would be an Apple magnetic power cord connector on the replacement, but of course patents mean that's not going to happen.

Meanwhile, what I really want for my work laptop is a description of exactly which ports are which; it's an HP ZBook of some sort, but it doesn't seem to exactly match any of the diagrams online (G1, G2, etc.). One port is definitely USB 3, and talks at high speed to my external backup drive, and it's likely that one of the other two USB ports can do higher-current charging, but I can't tell which one, or how much higher (500mA instead of 100??). There's some kind of HDMI, and something I don't recognize that's probably a DisplayPort. And there's an SDHC slot, which has turned out to be really useful, because it can hold a 128GB card that lets me keep music, experimental virtual machines, and other space hogs online (the laptop has a 256GB SSD, while its predecessor had a 320GB spinning disk). And yes, there's an Ethernet port and a docking-station port, so I've got reliable high-speed alternatives to Wi-Fi.

Half the things I want to do with USB would probably work fine over Micro-USB (and maybe a USB On-The-Go cable) if the laptop were short on space. Mice and keyboards don't really need a lot of power. OTOH, I've started to like the fact that Lightning cables work either way up.
