Comment Re:not nearly enough (Score 1) 341

Wrong product segment. The Samsung Note competes with the Surface RT.

Surface Pro is... a touchscreen Windows laptop with an Intel processor: integrated onboard GPU, a shortage of ports, and a sorta-kinda-nifty cover. And that's about it.

If you want an Intel 10-inch touchscreen with Windows, you're going to have a tough time finding anything with the same specs as a Surface in the same price bracket as a Galaxy Note.

What will be really interesting to see is what battery life they can get with Haswell, though; that's the biggest strike against pretty much any tablet or laptop from 6 months ago (well, that and Windows 8). For a device like this there is literally no reason to buy an Ivy Bridge version when there's a Haswell one in the pipe that will likely have close to double the battery life at more or less identical performance.

Whether or not having a sort-of-laptop Wintel device is worth the premium certainly depends on what you're doing, but the Surface is sorta-kinda headed in the direction of replacing your laptop and tablet, whereas a plain tablet isn't really a productivity device. Surface Pro isn't quite there: there are some issues with a lack of ports, and Windows 8 is terrible, but with some slightly better hardware and a better creative vision, Surface Pro 2.0 or 3.0 could actually be an interesting offering.

Comment Re:Android 4.3? (Score 1) 120

I was making a not-so-subtle reference to a Yahoo Mail bug that caused the app to redownload your last 50 messages every day instead of caching them.

I'm not talking specifically about downloading a new OS. However you get the OS, what it does day to day on 3G *could* be a problem. Naturally, it generally isn't, because people actually test these things, but mistakes happen, and if you push out an update to 10 million users who each accidentally do a gigabyte of 3G downloading before you fix it (on the order of 10 petabytes of cell traffic), you're going to have a LOT of very unhappy customers.

My guess is that Apple has a deal with the carriers that if that happens, Apple has it covered. No one else seems to have been willing/able to make such a deal.

Comment Re:Android 4.3? (Score 1) 120

Except Apple has pretty much DONE that.

Yes, exactly. Someone else should have done that. Handing control to the carriers is a bad idea. Apple understood that.

I'm sure Apple assumes a certain amount of the risk for accidentally breaking the network with the carriers though.

Samsung is officially larger than Apple now

And they have been for a while. Of course they're huge: hacker geeks loved them for being easy to root, and they have great hardware. If you were trying to enter the market against Samsung, having a 'the carriers don't control your updates' policy would be a big competitive advantage.

I singled out MS for a reason. They had (have?) a developer programme where you could pay 100 bucks and do whatever you wanted to your device. Right idea, wrong implementation. If you attract hackers who will do interesting things on the phone, you'll attract customers too, to buy the phones that do the most interesting things.

Apple is where it is, and has the market it has, because there's a minimum of carrier bullshit to go with it; it's simple, which is great for some people. Samsung is where it is because it has the most power and the most options in phones. But we'd all be better off if there were someone in the market with the best of both worlds: a wide offering of phones and a 'minimal carrier bloatware interference' policy. The obvious candidate for that would have been Microsoft after none of the Android guys did it, though it philosophically makes the most sense on Android.

Google kinda does it, but, well, Google seems to be happy to take a back seat to Samsung.

Yes, iTunes is hated, but it certainly has some useful features.

Well, it's bloated. iTunes was really excellent software for grandma and the technically illiterate user for a long time, and I agree it definitely has its uses.

Comment Re:Android 4.3? (Score 2) 120

You take the chance on your end that your phone accidentally uses 50MB of data a day doing that, or no longer works or the like.

Well, Apple, I'm sure, has a special deal. But with a Droid, that's your problem if you do that. But if the carrier is pushing it out, they want control over it.

This is definitely somewhere MS or one of the big Android players could have gone for the jugular in the market and said 'the carrier is a dumb pipe, and you control updates to YOUR device'.

Comment Re:Just doesn't work... (Score 1) 245

One extra detail: the alphabet of 50 characters was the effective entropy over a much larger space of symbols. I described the tree in entropy space, because that is what mattered to its performance profile. The naive view is that the symbol set contained 8000 symbols and that four-character strings could be selected from a set of 8000^4 members.
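To put rough numbers on that gap (back-of-the-envelope only; the 8000 and the 50 come from above, the rest is illustrative):

```python
import math

nominal_symbols = 8000       # the raw symbol set
effective_symbols = 50       # the effective entropy, expressed as an alphabet size
length = 4                   # four-character strings

nominal_bits = length * math.log2(nominal_symbols)      # ~51.9 bits
effective_bits = length * math.log2(effective_symbols)  # ~22.6 bits

print(f"naive view:   2^{nominal_bits:.1f} candidate strings")
print(f"entropy view: 2^{effective_bits:.1f} strings that actually matter")
```

The searchable nucleus is nearly thirty bits smaller than the naive space suggests.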

I ignored this detail because conventional reverse engineering would very quickly determine that we only went to the hash table for a much smaller nucleus of the problem space. That filter was a couple of pages of code. Nothing major, but not trivial to guess without some appropriate expertise.

Comment Re:Just doesn't work... (Score 2) 245

Great, so all you have to do is replace that conditional so it always evaluates to true, no? When you actually do this, the program happily writes an answer to the screen every time. The only problem is, if you provided an invalid security key at the beginning, the answer it writes is complete nonsense. You see, it's secretly already tested the security key, and if it was wrong, the answer ends up being wrong too.
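Here's a minimal sketch of that branchless trick (illustrative only, obviously not the code from the article): fold the key into the address computation itself, so there is no conditional to patch out.

```python
# A minimal sketch of the idea (not actual product code): there is no
# "is the key valid?" branch anywhere. A wrong key simply lands on a
# different slot and silently returns a plausible-looking wrong answer.
def lookup(table, query, key):
    # hash() stands in for whatever keyed mix the protected program uses
    slot = hash((query, key)) % len(table)
    return table[slot]
```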

I implemented exactly this circa 1990 to protect a small database of disambiguation rules structured as a hash table. A random value obtained from the security dongle was intermixed into the hash function and hash check condition. This was not done once for each possible lookup as defined in a conventional database. It was done once for each feasible answer for each possible lookup. The code had a statistical model of feasible answers. For some queries the number of feasible answers was excessive (too many dongle interactions) so we created a heuristic that was correct 99% of the time and set aside the 1% for a second pass with an additional data structure. If the dongle wasn't present the set of feasible answers was incorrectly narrowed with the expected statistical distribution. The members of that distribution, however, were entirely wrong.

We built up more complex queries from smaller queries. We were actually building a tree where every path in the tree was a valid answer and the majority of leaf nodes were at depths 2-4. That we hit a leaf node was a bit of metadata from the hash table lookup, which would be wrong if the dongle wasn't installed.

How about a quick forward description. Start with an alphabet of 50 symbols and construct the tree of all strings of length one to six. Every node in the tree has a flag for whether that node terminates a valid string, plus some additional bits about the correct orthography of the string as expressed in the user input, when typed. Your database is a subtree of this tree with about 100,000 strings (problems were so much smaller 25 years ago), along with a couple of bits of metadata per leaf. It's pretty sparse compared to the roughly 15 billion possible leaf nodes.
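In modern pseudocode a node might look something like this (a sketch; the names are mine, and the real system stored the tree implicitly in a hash table rather than as linked nodes):

```python
from dataclasses import dataclass, field

ALPHABET = 50    # effective symbol count
MAX_DEPTH = 6

@dataclass
class Node:
    terminal: bool = False    # does the path to this node spell a valid string?
    ortho_bits: int = 0       # orthography metadata for the typed form
    children: dict = field(default_factory=dict)   # symbol -> Node

# The full space the ~100,000-string database subtree is carved from:
print(ALPHABET ** MAX_DEPTH)  # 50^6 = 15,625,000,000 possible depth-6 leaves
```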

The database subtree is actually constructed by elimination. One dongle-assisted hash probe tells you whether a descending edge from your current vertex leads to a non-empty subtree (further solutions with your current path in the tree as a proper prefix). In addition, the user input defines another subtree of everything that could possibly matter to the conversion being performed. What you are computing is the intersection of these two subtrees: the tree corresponding to the task at hand and the tree corresponding to all solutions possible. Because the hash table was decomposed on the principles of minimum description length, when the dongle was absent (or corrupted) you still got an answer with much of the expected statistical distribution.
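Roughly, the walk looked like this (a sketch with my own names; `probe` stands in for the dongle-assisted hash lookup, and with a wrong dongle its answers stay statistically plausible but are simply wrong):

```python
# Enumerate the intersection of the user-input subtree and the database
# subtree by descending edge by edge, pruning wherever a keyed hash probe
# reports an empty database subtree.
def descend(path, allowed_next, probe, dongle, out, max_depth=6):
    for symbol in allowed_next(path):       # edges the user's input permits
        new_path = path + (symbol,)
        hit, is_leaf = probe(new_path, dongle)  # dongle mixed into the hash
        if not hit:
            continue                        # empty subtree: prune this branch
        if is_leaf:
            out.append(new_path)            # leaf flag is probe metadata
        if len(new_path) < max_depth:
            descend(new_path, allowed_next, probe, dongle, out)
```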

Except for one thing. The hash check was imperfect and you would get some false positives. We set up the rate of false positives so that the set of false positives grew exponentially as you descended to deeper levels. We knew from the statistical structure of the user input that few elements of this phantom solution set would interact negatively in practice, even though the phantom set vastly outnumbered the legitimate set. Further, if one tried to enumerate the tree exhaustively using an incorrect dongle hash function, the tree you would reconstruct had no depth limit. It grew exponentially in size forever. We knew there was a depth limit when correctly probed, but this was nowhere expressed in our program code. In fact, this could be used to reverse engineer the correct hash function: only the correct hash function enumerates to a finite set of 100,000 subtrees. Just iterate over the set of all possible hash functions, in some well-structured enumeration order, until you discover this condition. Bingo, you're done.
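In branching-process terms (my framing here, with made-up numbers), the no-depth-limit behaviour falls out directly:

```python
# With a wrong dongle every probe is effectively a coin flip with false-
# positive rate p over b candidate edges, so each node spawns about b*p
# phantom children. Tune b*p above 1 and the phantom tree is supercritical:
# the enumeration grows forever. Only the correct hash function yields a
# finite tree, which is exactly the oracle the enumeration attack exploits.
b, p = 50, 0.05     # alphabet size, illustrative false-positive rate
for depth in range(1, 7):
    print(depth, round((b * p) ** depth, 1))   # 2.5, 6.2, 15.6, 39.1, ...
```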

Not all of the phantom space was harmless, so we ran a test on that and identified all the members of the phantom space likely to interfere in practice and coded an additional data structure about 25% the size of the main data structure which encoded the set of harmful phantoms on the principles just expressed. I think we tuned this second hash structure to have a lower rate of phantom production, otherwise we would have needed a third structure to restore the solutions incorrectly eliminated.

So the desired answer set was the (user problem tree) intersected with (database tree - bogus database answer tree + [non existent] bogus bogus answer tree ...).

I won't get into it, but you can construct hash tables encoding these subtrees at pretty close to the Shannon entropy by balancing the number of hash check bits against the sparseness of the subtree encoded.
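For the curious, the balance looks like this in standard information-theoretic terms (not our actual tuning constants):

```python
import math

# Storing which n of U possible strings are present costs at least about
# log2(U/n) bits per entry; each extra hash-check bit costs one more bit
# per entry and halves the false-positive rate.
def cost(universe, n, check_bits):
    floor = math.log2(universe / n)     # entropy floor, bits per entry
    return floor + check_bits, 2.0 ** -check_bits

U, n = 50 ** 6, 100_000
for b in (2, 4, 8):
    bits, fp = cost(U, n, b)
    print(f"{b} check bits: ~{bits:.1f} bits/entry, false-positive rate {fp:.4f}")
```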

We didn't use an ordinary hash table. We used a globally optimized hash table computed with a bipartite graph matching algorithm, where every hash query had a set of three locations to examine; if any location returned a hit, you added that node to your subgraph descent set (if more than one of these locations was positive, you had at least one hash accident, but that didn't tell you anything you could use). With three locations per probe reconciled by the bipartite graph algorithm, the hash table achieved a bit over 90% occupancy and constant-time probes (we always tested all three cells: because each hit added metadata concerning the path, you had to accept all answers).
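The probe side, sketched (names are mine; the ~90% occupancy comes from the offline bipartite matching solver, which isn't shown here):

```python
from collections import namedtuple

Cell = namedtuple("Cell", "tag metadata")   # short check tag + path metadata

def probe3(table, key, dongle, hashes, tag_bits=8):
    # hashes: three independent keyed hash functions (stand-ins; the real
    # system mixed a value read from the dongle into each probe). All three
    # cells are always examined, and every hit's metadata is accepted,
    # because a hash accident is indistinguishable from a real hit.
    hits = []
    for h in hashes:
        v = h(key, dongle)
        cell = table[v % len(table)]
        if cell is not None and cell.tag == (v >> 20) % (1 << tag_bits):
            hits.append(cell.metadata)
    return hits                             # empty list means a clean miss
```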

The hash table placement algorithm (bipartite graph solver) was not included in the distributed software. Nor was the statistical model used to construct the tiny phantom correction table. Without duplicating this work, any attempt to replace the supplied hash function (in hardware) with a different software hash function would require data structures about 50 times as large as we had employed, according to one estimate I made.

The only viable and practical attack significantly less difficult than reproducing much of our original work was to crack the dongle hashing algorithm and encode it in software, eliminating the hardware security lock. It was hard to suck the encoded information out of this structure, because it contained a lot of noise.

If you had a huge corpus covering the space of typical user input, you could discover which parts of this data structure were used in practice as a statistical construct. But anyone who had that wouldn't be ripping off a low-quality reproduction; they would compete straight up. It's about manipulating incentives.

The problem with this technique is that it was pretty much a one-off. You had to tune the hash rejection rate just right so that the phantom elimination tables converged to finite size. You needed to constrain worst-case performance on any possible input string (we did this by throwing away regions before lookup where the sparsity fell below a certain threshold). And you needed an application space that tolerated imperfect answers. In our case, a wrong disambiguation of user input to Asian characters. There was also a conventional B-tree database for user-generated expressions which could be used to supplement or override any rough edges that poked through from what I described above. Our application had all of these things.

What I learned, though, is that one can go pretty far in this direction under the right conditions. This system was extremely resilient to conventional reverse engineering. We were fortunate that memory-resident hash tables sustain such high access rates, because the amount of memory we touched compared to a conventional database was a hundred to a thousand times more. One sentence of input with the most productive symbols would probably hit our entire data structure multiple times over. Even on a 486, we could manage 100,000 to one million hash probes per second, depending on how aggressively we mixed in randomness from the hardware dongle. Even then, we drove that parallel-port dongle to ten times its specified rate. It might have been producing bogus values some small fraction of the time. If so, it was never noticed amid all the other noise inherent to the problem space.

With this result I smell a rat because there's no discussion of computational burden up front. I'd be shocked if the obfuscated software ran at 1% of the rate of a conventionally encoded algorithm. But still, a critical nucleus at the center of your system that resists reverse engineering is a potent building block to discourage competition.

Competition is of course just another word for innovation. A large field of innovation is stifling competition. Innovation is passive-aggressive like that, which is why Microsoft loves this word. Personally, I wouldn't date that chick. The problem with defining worthy innovation (worthy of nasty protections such as the patent system) is that it's deeply frame dependent. What looks like the distillate of hard mental labour in one frame of reference is an automatic result in some higher frame of abstraction we haven't managed to reach yet. So patents are often awarded to the idiot who arrives there by the most awkward possible method in the least appropriate frame of reference.

If someone wants to make a living coming up with a partial result by feats of intellectual acrobatics a decade before the same result is convenient to achieve as an automatic result within a higher frame of reference, I don't have a huge problem with granting limited patent protections, but only so long as you're truly ahead of the curve. If any frame of reference comes along where your special result becomes a general result, game over for your expensive patent, in an ideal world that will never exist.

LZW is a good example where the early implementations were delicately tuned through a mixture of inspiration and empiricism to achieve viable performance levels. But the entire space of time/space-efficient LZW implementations can be fairly thoroughly explored in a decade or so within the right algebraic apparatus, making every efficient implementation a direct result.

A person smart enough to do the algebra first would have no claim to patentability at all. One cannot patent beautiful objects such as algebraic expansions of pi. It's just too universal deep down.

The byproduct of being able to obfuscate your algorithm is that your Chinese competitor can obfuscate ripping it off. So in a sense, it's highly desirable that there's a horrendous performance penalty with each additional nesting.

Comment keyspace negawatts (Score 2) 207

This particular scenario is rubbish.

It's weird that PHK framed it this way, but he's on the right track, regardless. Compromised entropy is one of the largest persistent attack surfaces in the state surveillance war. It's darn hard to notice when your client-side random key is leaking key space from prior exchanges, unless we're all running perfectly vetted software every day of the week and twice on Sunday and nothing bad ever happens to the golden master distribution chain. Developers never lose their private keys ...

From the dark side, at Borg scale, it's a slow war of attrition. The more they know about you, the better their guesses become. Suppose they gain possession of a dozen of your passwords from the least upstanding corporations you deal with. Your passwords have zero cross-entropy, right? Every password entirely unconditioned on any other password you've ever used?
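To make "zero cross-entropy" concrete, a toy sketch (every string here is invented):

```python
# Once a couple of leaked passwords reveal a base pattern, the remaining
# passwords stop being independent draws and collapse to a few guesses.
leaked = {"shopsite": "hunter2!shop", "mailsite": "hunter2!mail"}

base = "hunter2!"                       # pattern inferred from the leaks
guesses = [base + tag for tag in ("bank", "work", "vpn")]
print(guesses)   # the target's "new" passwords, a handful of guesses deep
```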

And if it turns out you're a member of the 0.01% who uses distinct, randomly generated sixteen-character passwords for every site, so much the better to target you with other methods.

This isn't a battle over the yield strength of the titanium crypto primitives. It's a battle over the total burden. Every person who re-uses the same password a dozen times is that much less computation. Password cracking is like Type IIb muscle fiber. It's the muscle fiber of last resort, the one your body activates to lift an overturned car off your child after a crash. Traffic analysis is Type I muscle fiber, the fiber you can use all day long, day after day.

That big hassle with the self-signed certs (which are needed for authentication) significantly thinned the default use of strong encryption for simple privacy. These did not need to be tied together as they were. Because the use of encryption stands out and the connectivity graph is below the percolation threshold, it becomes hard to set up covert onion routers.

The focus on encryption strength is mostly a red herring to distract us from the real agenda, which is keeping the general run of affairs extremely sloppy. The whole surveillance apparatus depends on the bulk manufacture of negawatts (shedding keyspace) in dribs and drabs by various murky political means. It's not a hard war, it's a soft war.

Comment Re:Good to see (Score 4, Insightful) 274

It's not on BSkyB,

BSkyB is short for British Sky Broadcasting Group PLC.

They use "Sky" in branding for all sorts of stuff, notably Sky Broadband and Sky Subscriber Services (which is their TV offering).

In that context, an internet cloud service calling itself Sky-something sounds like it's part of the Sky services, which of course it isn't. And Microsoft has no real claim on the 'Sky' brand, so they're SOL.

Comment Re:Sony, for example (Score 1) 234

Companies have NOTHING to fear from consumer retaliation.

You're nuts, man. Sony took it in the nads for their blunder with the PlayStation 3. You know, that small setback where they allowed a Mongol empire with deep pockets and not much traction to sweep behind their Maginot Line and plant the Xbox flag atop Mount Suribachi. (By the way, would you be interested in picking up some Lehman Brothers shares I have lying around? Can't-lose investment. Too big to fail and all that.)

Yes, the sumo wrestlers pick themselves up again all too quickly after their flagrant misdeeds. It's hard to knock them completely out of the ring. Whatever fear they experience momentarily is replaced by arrogance just as soon as their testicles re-inflate. (Hint: They're not pinching their nose and convulsing their chests because of some smell they've left behind.)

The girls, they kiss frogs. That's how it works. The triumph of hope over experience is what our species is all about, so much the better if there's an IPO with some DRM.

Comment regulation gulag (Score 1) 234

The argument over 'more regulation' vs 'less regulation' is about the stupidest argument out there.

It's not an argument. "Regulation" is a code word for power. Either the government holds this power, or private interests hold this power. There's no middle ground, due to the convexity of the slippery slope. It's either a firewall configured with a default "block all" or a default "pass all". Those are your two choices, 100% mutually exclusive.

Besides, inhabiting the middle ground involves the tedious art of knowing the difference, which is not what people with power enjoy doing.

Comment Re:This is why we have a first amendment. (Score 1) 254

The only way it's "dickish" is that it leaves VW customers in a [now-aware] potentially bad spot.

Depends on what exactly would be redacted. Customers are no more informed with or without just the keys. As I say, it depends on what exactly VW wanted redacted.

And the other shoe drops. You see, researchers have to show their proof.

I'm a researcher, and you're somewhat confused. They made a claim that some or all of their results are on the internet already. Those claims aren't verifiable for the moment, nor is it clear what exactly they mean by the numbers being out there already. I can do a search for a lot of random strings of numbers and come up with results; that doesn't mean they have any useful context to them.

No, the "problem" is that you're making excuses for why a potential security flaw in a car should be any treated any different than, say, a security flaw in a door

I'm not sure you understand how the car recall process works. There is a whole lot of weighing the risk/cost of doing nothing against the cost of a recall. Recalling 100k vehicles to put in new locks gets very expensive very quickly, not to mention the lost time of car owners getting their vehicles fixed.
