
Comment Re:New Attack? 0 Day? (Score 4, Insightful) 165

Easy. You have something (like a header) that leads the image decoder to allocate a certain amount of memory on the stack (a buffer) for an expected piece of data. Then you have the decompressed data turn out larger than it was advertised or calculated, overflowing the buffer and so overwriting other items on the stack, like the return address. By changing the return address you can point it back at the buffer, so when the function returns the CPU reads those bytes as code instead of data - and it turns out they do bad things.
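To make the mechanics concrete, here's a toy Python simulation of that scenario. This is not real memory, just a list standing in for the stack with the saved return address sitting immediately past a fixed-size buffer, which is roughly how it's laid out on many architectures:

```python
BUFFER_SIZE = 8  # "bytes" the decoder allocates, trusting the file's header

def decode(payload):
    # "Stack": buffer slots followed by a saved return address slot.
    stack = [0] * BUFFER_SIZE + ["RET->caller"]
    # Naive decoder: copies the actual payload with no bounds check.
    for i, byte in enumerate(payload):
        stack[i] = byte  # writes past the buffer when the payload is too long
    return stack

# Honest file: payload matches the advertised 8-byte size.
ok = decode(list(range(8)))
# Malicious file: one extra "byte" lands on the return address slot.
evil = decode(list(range(8)) + ["RET->buffer"])
print(ok[-1], evil[-1])  # the second return address now points at the buffer
```

In the real attack those extra bytes are machine code, so redirecting the return address into the buffer hands the attacker control of the process.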

Vulnerabilities in media decoders are a prime vector for infection since they are usually processed automatically. The only reason you are seeing it in software from 'a decade ago' is that hackers face so much competition from white hat researchers when it comes to browsers, fighting for vulnerabilities from a usually shrinking pool. With fewer opportunities some are turning to media decoders found in applications like Office. It's a less effective vector since it requires several actions from the user, but the upside is that these applications are often not as aggressively patched as browsers have become which means a single vulnerability might work for months.

For comparison, it's been almost a year since the last arbitrary code execution vulnerability was reported in Firefox's GIF decoder, and 2 years since the JPEG decoder was last turned into an attack vector (to the best of my knowledge). IE, Chrome and Safari have experienced similar droughts, with all the major browsers only having 1 or 2 image-based vulnerabilities reported annually for the last few years, and usually by researchers who allow them to be patched quickly rather than exploiting them as zero days. Of course other types of media exist. CSS/HTML5 has rapidly become a media format in and of itself, and a little over a month ago Firefox was vulnerable to arbitrary code execution due to the way it decoded animations in CSS stylesheets (this was reported by Google and patched with the release of FF 24). TL;DR Researchers are hogging all the good browser vulnerabilities, so hackers are playing in the dusty old rooms nobody has visited in years.

Comment Want to know more about car fires in America? (Score 5, Informative) 232

Here is some interesting information on car fires from the US Fire Administration (USFA->FEMA->DHS) and the National Fire Protection Association.

From 2008-2010, "Approximately one in seven fires responded to by fire departments across the nation is a highway vehicle fire. This does not include the tens of thousands of fire department responses to highway vehicle accident sites." The leading factors in ignition were "mechanical failure" (44.1%) and "electrical failure" (22.3%). [1]

The actual number of highway car fires in that period was approximately 582,000, or an average of over 500 car fires every day on American highways. [2]
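The arithmetic behind those rates (using the figures above; dividing by plain calendar days) works out like this:

```python
fires = 582_000              # highway vehicle fires, 2008-2010 (USFA figure)
per_day = fires / (3 * 365)  # over 500 fires per day
print(round(per_day))

# Share attributed to electrical failure, the figure cited again below.
electrical_per_day = per_day * 0.223  # over 100 per day
print(round(electrical_per_day))
```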

In this accident, which involved an electric car, a large piece of sparking metal debris was run over by the car and thrown up with enough force to slice through the car's stored energy compartment, in this case one of the batteries. The driver was alerted via the display to a problem and instructed to pull over immediately because one of the batteries was now leaking and smoldering. A short time later the burning ember reached critical temperature and ignited the softer materials in the adjoining 'frunk', the carpeted front trunk located where most cars have an engine. The other 15 battery compartments, having not been skewered by a giant metal spike, remained unharmed thanks to the firewalls and other protection, as did the passenger compartment.

If the owner had been driving a gas powered car and that metal spike had instead been driven up into the gas tank, ripping it open and showering the fuel with sparks as it was dragged along the highway, would the driver have had any warning other than a loud bump and then the passenger compartment being consumed by flames?

This is not the first Tesla fire, there was another involving the Roadster resulting in a recall of 439 vehicles. The source of the fire in that instance was not the advanced battery at all, it was one of the old style 12V lines (Tesla vehicles still include a regular 12V battery for lights/instruments and 'ignition') being in a bad position near a headlight and susceptible to damage that could spark a fire. Going back to the statistics above we have over 100 car fires each day (22.3% of 500) caused by those 12V wires and components being damaged and shorting out. For example Honda recalled over 140,000 (non-hybrid) Fits in the US this year because the wiring in a 12V door switch could get wet, short out and start a fire. GM had the same problem last year and had to recall almost half a million vehicles.

Comment I can't be the only one to see a flaw in this math (Score 1) 208

Did no one else immediately think of the weight as soon as the author started talking about filling an SUV with microSD cards? I'm reminded of the saying '100lbs of pillows/feathers is still 100lbs', in reference to how people seem to overlook that very light objects are still heavy if you carry enough of them.

While the exact weight of each of the 19 million microSD cards would vary, a nice starting point is about 0.4 grams plus or minus 0.1 based on general specs. That's well over 16,000 lbs or 8 tons of microSD cards in the back of that SUV, which according to the page linked in the article is rated for a payload of only 1580 lbs. To get an idea of how much 8 tons is, that's the weight of a medium sized Caterpillar backhoe.
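The back-of-the-envelope math (the 0.4 g per card is the assumed typical weight, as noted above):

```python
cards = 19_000_000   # microSD cards needed, per the article
grams_each = 0.4     # assumed typical card weight (±0.1 g in practice)

total_lb = cards * grams_each / 453.592   # grams per pound
total_tons = total_lb / 2000              # US short tons

print(round(total_lb))       # ~16,755 lb
print(round(total_tons, 1))  # ~8.4 tons, vs. a 1,580 lb payload rating
```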

Comment Re:I don't want to be "that guy", however (Score 5, Interesting) 319

Microsoft has also had the benefit of Anders Hejlsberg being the lead architect, one of the best minds in the industry. There are maybe a handful of people in the industry today that can stand at the same level as him, and none currently alive that can stand taller. Hiring him away was a major boon to Microsoft and a crushing blow to Borland.

Comment Re:Purposeful (Score 1) 519

Consider the 'carpet bombing' exploit that was discovered in Safari a while back. It allowed a website to save any number of files to the default download location without any interaction with or notification to the user. The exploit worked on OS X and Windows but was a far greater threat on Windows due to some changes Apple made when porting Safari from OS X. Despite the public discovering this, Apple dragged its feet and tried to claim it wasn't an attack vector.

So what were the differences between the original Mac version and the same code ported over to Windows, which in theory should have simply replaced the OS specific calls? First, the Windows port changed the default download location to the desktop, despite the fact that Windows has a downloads folder just like Mac. That means files get deliberately dumped all over the desktop until the screen is a mess. That's a minor thing, though it does help make Windows seem cluttered to the average user.

The major change was that they removed the code that marked downloads as untrusted. So when a hostile website silently saved an executable file called Safari.exe (directly to the user's desktop) there was no untrusted flag. No other browser on Windows works this way; IE, Firefox, Chrome and Opera all mark downloaded files as untrusted to prevent them from being accidentally run without a user notification. Safari on OS X marks downloaded files as untrusted so that users are shown a notification. But the port removes this basic functionality. And taking away that safeguard makes it a lot easier for malware to be installed on Windows when a user is browsing with Safari - not just the Safari specific exploit that silently downloaded files, but also the wealth of executable malware that likes to pretend to be a legitimate file (such as websites that attempt to give users an executable file in lieu of a promised video or torrent download).

Comment It needs the companion app at $69? (Score 4, Interesting) 79

It really doesn't take a lot of power to read an eBook. Some of us have been doing it since the Palm days (for reference, I had no problem reading eBooks on a 4MB Palm IIIx, which used a 16 MHz low power SoC version of the CPU that powered the Apple Lisa).

Reading the specs for the device, it seems that its 4 GB of storage are used to hold 4-bit uncompressed bitmaps - the companion app must render each page as a bitmap, send it to the device over Bluetooth, and then the device just dumps it on the screen with no processing power at all. That would seem to be the 'cost savings': take out the CPU and RAM and replace them with a simple 8-bit controller linking Bluetooth, flash and display - or at least that must have been the original sales pitch before anyone actually sat down to design it.
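If that render-and-ship theory is right, the storage math is easy to sketch. Note the 800x600 panel resolution here is my assumption based on typical 6-inch eInk screens of the era, not a published spec for this device:

```python
width, height = 800, 600  # assumed eInk panel resolution
bits_per_pixel = 4        # 16 gray levels, per the 4-bit bitmap claim
bytes_per_page = width * height * bits_per_pixel // 8

storage = 4 * 1024**3     # 4 GiB of flash
pages = storage // bytes_per_page
print(bytes_per_page, pages)  # 240,000 bytes/page, ~17,900 stored page images
```

Tens of thousands of pre-rendered pages would comfortably cover a personal library, which fits the pitch.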

By comparison, a $30 photo frame contains a CPU powerful enough to decode JPG files fast enough to display them as a slide show. That's more powerful than the Palm at half the cost of the Beagle. Part of that is because the cheap ARM CPU inside costs under $2 and has all the power you could need.

I think the simple truth is that 80-90% of the material cost of the Beagle (and its competitors like the entry level Kindle, Nook and Kobo models) probably comes from the eInk screen and the NAND memory. There just wasn't a huge savings to be had by eliminating the CPU and RAM. They seem to have saved $10 after markup over their competitors (who not only have CPUs but touch screens and rechargeable batteries as well). This seems like a pie in the sky sales pitch that wasn't aborted as soon as they discovered the cost savings were not there.

Comment Nope (Score 3, Interesting) 522

Valve has been in the unique position of having some hit titles in the past that they had good publishing deals on. That's given them the financial cushion to run things however they wanted with whomever they wanted, without any of those pesky obligations most developers have to meet to pay the bills. And then of course they stumbled onto Steam, the patching platform turned online store where they get a cut of all the other developers' profits.

To highlight a similar scenario, 3D Realms was able to dick around for almost 15 years (1996-2009) thanks to the big pot of cash they had from the first Duke 3D game and a few farmed out expansions. We know for sure now that those years were not spent under some masterful system of management creating the most polished game ever; they were terribly managed years in which the same game was reinvented every 4-6 months, every time Broussard saw a new game.

Valve management is certainly not the disaster that was 3D Realms, but at the same time it's very hard to apply their near-zero management style without also having access to their near-zero financial obligations. Valve can afford to mess around in the kitchen for years tossing meal after meal into the garbage until they have something they like. Other developers have to feed their family tonight.

So I guess what I'm saying is that regardless of whether the bossless model works for Valve, other companies have to actually produce games on time and on budget. Where exactly is Half Life 3...

Comment Re:it's not really just storage (Score 5, Insightful) 168

To expand on this, Salesforce.com has two different blocks of storage allocated for any Salesforce instance. One is data storage, which is for tables; you start at 1GB for your database. This is where the quote of $3000 for each additional GB comes from. The other is file storage, where you save PDFs and other record attachments. You start off with 30GB and it is much more in line with normal cloud data storage prices. Your usage of both is displayed separately on your company's Salesforce admin page.

As the parent said, the cost of that 1GB is not really the disk space but the expected transaction cost in terms of servers. The number of bytes shown as used is not even based on any actual disk usage (that would be complicated by table structure, overhead, indexes and fragmentation). For most tables they use a formula of 2KB per record - it doesn't matter if it's a contact record which is probably stuffed with much more than 2KB, or a very simple custom sales record containing a name and a dollar amount. There are a few special tables that are treated as 512 bytes per record, like the table containing chatter updates (Salesforce.com's social media like notifications). Taken all together this means that the "1GB" of data allowance is really 500,000+ records, depending on how much is chatter vs. actual records, and not anything related to disk space. It's just easier to explain it as 1GB to a management person rather than as a complex relationship between records, transactions and indexes.
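The record math under that scheme, using the 2KB and 512-byte figures from above (whether the "1GB" is counted as a binary or decimal gigabyte is an assumption here; it changes the totals by only a few percent):

```python
allowance = 1024**3            # 1GB data allowance, assuming a binary GB
regular = allowance // 2048    # ordinary records billed at 2KB each
chatter = allowance // 512     # chatter-style records billed at 512 bytes each
print(regular, chatter)        # 524288 ordinary or 2097152 chatter records
```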

Comment One small thing (Score 1) 422

You've got lots of answers about cabling, some for cooling and a few for power. One tiny thing you don't want to overlook is the door. You should ensure there is a plus sized pathway (check for tight turns) all the way from outside the building to the computer room, where you'll have an extra tall, extra wide outward opening door. If you are building a smaller room this is really important, since it may become impractical to disassemble a rack and reassemble it in such a small space (remember that there will be other running equipment you don't want to accidentally knock about). Also make sure you have a properly sized ramp (and that the ramp is factored into any path and turn calculations) if you have steps or a raised floor. Unless there are security considerations, a good setup is for the server room door to be close to a large side door which in turn is close to the server room's air conditioning units (when there is eventually a problem it would be terrible for the repair guy to have to walk back and forth through a machine shop to fix it).

Comment Which is the scary part? (Score 5, Insightful) 86

A widely used web package has a backdoor inserted.

Scary.

One of the regional mirrors of the largest software repository, containing tens of thousands of projects, is either hacked or was a plant from the start.

Scarier.

The backdoor code looks to be the work of someone who learned PHP on Monday.

Scariest.

Honestly, the only way it could have been more obvious is if the file was called backdoor.php. There was no attempt made to disguise the location or what the code was doing which is why it got caught so quickly. A complete amateur got caught with control over a chunk of Sourceforge downloads. In computer security when you find a breach you don't just close the obvious point of entry, you have to take a big step back and seriously ask 'what else was compromised'. In this case the big question is who else.

If this clown could do it and didn't get caught until an end user saw the stupidly obvious file and its stupidly obvious code (as opposed to a server log or other Sourceforge audit turning it up), what are the competent hackers up to? Real backdoors are blended into the existing code instead of being added as a separate file. Real backdoors are designed to be hidden from casual inspection instead of being completely obvious in their function and 'I don't belong here' status. Really good backdoors are designed to not look like intentionally malicious code even after they are found (e.g. the wait4 backdoor attempt in the Linux kernel was pretty good; it got caught because the CVS hack used to insert it into a regional CVS mirror was flawed in several ways that raised alarms).

So, what kind of security/procedure/audit could have been in place, needs to be in place, so that something like this will raise an alarm even when the hacker isn't the most incompetent backdoor author in history? What kind of audit is needed to be sure it hasn't already happened?

Comment Is this what Microsoft did in the 90s? (Score 2, Interesting) 255

In the 90s Microsoft was accused of, and then convicted of, monopoly behavior against OEMs to push OS/2 (and other PC OSs) out of the market in favor of Windows.

Back then Microsoft provided 3 choices for OEMs:
  • 1. Don't play with Microsoft at all (sell PCs with OS/2 or other OS, operate as a niche dealer)
  • 2. Play with Microsoft but without a club membership (buy Windows licences at full price, sell however they want)
  • 3. Join Microsoft's club (get discounted licences but pay them on a basis of one licence per computer regardless of actual configuration)

Microsoft argued that this was not anti-competitive; they claimed the discount simply represented Microsoft not having to keep track of individual licences and that OEMs were free to buy licences individually instead. They lost that argument because it was found that, since Windows already had a majority market share (for the time being), an OEM had to load Windows on a majority of their systems to satisfy consumers. Because of the pricing scheme OEMs could not be competitive with other OEMs if they took option 2, forcing them into 3, where Microsoft's terms made it uncompetitive to sell PCs with another operating system. So Microsoft was convicted under the Sherman Antitrust Act.

Let's look at Google and its club the Open Handset Alliance (OHA):

  • 1. Don't use any official Android distributions (operate as a niche/self-supported market, e.g. Amazon)
  • 2. Use any combination of Android and forked android-derived distributions, but can't join the OHA
  • 3. Join the OHA and use only an official Google Android derived OS

The official Android distribution can be seen as something wanted by the majority of customers (those looking for a non-Apple/Microsoft or an inexpensive phone) at this time (unless you have something else big enough to get people to come to you, like Amazon), so most Android/android OEMs would be giving up the majority of their customers if they dumped official Android entirely; that removes option 1. Much like the licence discount, a membership in the OHA represents a major competitive advantage - the OEMs are already way behind in keeping official Android up to date in their design and production pipelines even with that inside track and help from Google. An OEM on its own trying to make an official Android device is thus at a large disadvantage against OEMs that are part of the OHA. This makes option 2 uncompetitive, forcing any serious OEM into option 3. Option 3 goes even further than Microsoft in the 90s - it doesn't just apply a tax, it outright bans the alternative.

So does the same 90s logic applied by the court - that regardless of Microsoft/Google's excuse for the 3 choices it isn't really a choice at all, and that the only viable choice blocks competition - still apply today?

Comment Re:What is a CD? (Score 5, Funny) 328

A CD or 'compact disc' was an ancient precursor to the DVDs that you can still find in some stores today. During their heyday CDs were mainly used to store a primitive type of mp3 called '16bit uncompressed PCM' but could also store regular data. A typical CD could hold between 650 and 700 'megabytes' worth of small files. A 'megabyte' was an older unit of storage; one megabyte was just 1/1000 of a gigabyte!

Comment Re:Addresses one issue but not the other (Score 3, Interesting) 355

It has nothing to do with "older" devices and how well they run ICS, but budget devices even from this year. Your Nexus One and Nexus S (circa Jan 2010 and Dec 2010) still run circles around budget Android phones like a Galaxy Pocket (circa Feb 2012).

A 2 year old used Nexus One is still selling for more than budget Android phones sold outside the subsidized market of North America. Just because your Porsche 911 is 10 years old doesn't put it in the same racing category as a 2012 Kia Rio.

Comment Addresses one issue but not the other (Score 4, Insightful) 355

The PDK does address an issue that Google shouldn't have made an issue to begin with - manufacturers actually getting some lead time. But it doesn't address the issue of why Gingerbread itself is still such a big chunk of the market.

ICS simply can't run on budget Android devices. The Android makers that are making money (Samsung) are targeting a much wider market than just the high end subsidized North American market. Samsung is able to turn a profit because they're spreading their costs over a much wider net with both mid range phones like the Ace line and a lot of super-low end ones (Y, Mini, Pocket) that compete directly with feature phones and in emerging markets. ICS is never going to run on those and Samsung and others won't try - they're still releasing brand new phones, 8 months later, running Gingerbread with no hope for an upgrade.

Android will continue to be 'fragmented' between Gingerbread and whatever the latest and greatest is for a long time, at least as long as the gulf exists between heavily carrier subsidized phones in a few countries (allowing iPhones, Samsung Galaxy Ss and HTC One Xs to sell in any quantity) and full cost phones in other countries where (Gingerbread) Android's price point is the biggest selling point against more expensive smart phones and increasingly identically priced feature phones.
