
Comment Re:Really? (Score 3, Interesting) 169

As I understand it, the Mt.Gox fiasco was due in part to a hacker's ability to exploit transaction malleability in Bitcoin. Yes, Gox should have updated their software, but the Bitcoin protocol had a known weakness in it, and we've seen the result.

Your understanding is wrong. The Mt. Gox fiasco didn't occur because the miners accepted malleable transactions. It happened when the miners stopped accepting transactions that were malleable. Well, not all malleable transactions - but they did stop accepting the invalid transactions Mt. Gox was generating. Generating those invalid transactions was Mt. Gox bug 1. Mt. Gox bug 2 was that when people fixed the bad formatting and the transactions were accepted into the block chain, the Mt. Gox software didn't recognise them. Mt. Gox bug 3 was that they then repeated the same transaction without doing a full audit of their ledger to verify some other mistake hadn't been made. Doing it twice is a bit of a risk given bitcoin transactions aren't reversible. But to be fair, Mt. Gox said they authorised such double spends manually.
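
For the curious, the malleability trick itself is tiny. Here's a minimal Python sketch of the best known variant - background I'm adding, not something from the Mt. Gox reports, and using nothing but the public secp256k1 group order: an ECDSA signature (r, s) can be rewritten as (r, n - s), which still validates but changes the transaction's bytes, and hence its txid.

    import hashlib

    # Order of the secp256k1 group bitcoin uses (a public constant).
    N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

    def txid(serialized_tx: bytes) -> str:
        # A txid is the double SHA-256 of the serialized transaction,
        # and the serialization includes the signature bytes.
        return hashlib.sha256(hashlib.sha256(serialized_tx).digest()).hexdigest()

    def flip_s(r: int, s: int) -> tuple:
        # (r, s) and (r, N - s) are both valid ECDSA signatures for the
        # same message and key, but they serialize to different bytes.
        return (r, N - s)

Anyone relaying a transaction could apply flip_s before the miners saw it, so the "same" payment confirmed under a different txid - which is exactly what broke software that tracked payments by txid alone.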

But ... it is almost inconceivable that a human authorised $350M in double spends without getting suspicious. So that brings us to the unknown Mt. Gox bug 4. Somehow, they managed to figure out a way of authorising $350M in double spends without anybody noticing. Surely this must qualify for the Guinness Book of Records as the greatest accounting cluster fuck of all time.

But a bitcoin protocol bug - sorry, no, not this time. Bitcoin offers very few guarantees. I guess a known mining rate, that whatever appears on the audit trail is the one and only correct history of bitcoin, and that the history will never change are the main three. In the early days, back when people sent 1000's of bitcoins to pay for a pizza, there were bugs in the bitcoin software that meant those guarantees weren't upheld. But it was also a nicer time, when bitcoin was just a toy friends played with, so such mistakes could be and were always fixed. No bitcoin has ever been permanently lost because of such bugs.

I know I shouldn't care when a person on the internet is wrong. Not just a little bit wrong, but tinfoil hat wrong, as you are in this case. But seeing tinfoil hat comments being modded up to +5 is difficult to swallow silently.

Comment Re:Stills seems like it has to be an inside job (Score 5, Informative) 228

Consider these Mt. Gox losses:

  • June 2011: seller's administrator account was hacked by an unknown process. The privileges were then abused to generate humongous quantities of BTC. None of the BTC, however, was backed by Mt. Gox. The attackers sold the BTC generated, driving Mt. Gox BTC prices down to cents. They then purchased the cheap BTC with their own accounts and withdrew the money. ... Many customers claim they have lost money from this reversion, but Mt. Gox claims it has reimbursed all customers fully for this theft. After the incident, Mt. Gox shut down for several days.
  • June 2011: Users with weak passwords on MyBitcoin who used the same password on Mt. Gox were in for a surprise after the June 2011 Mt. Gox incident allowed weakly-salted hashes of all Mt. Gox user passwords to be leaked. These passwords were then cracked and used on MyBitcoin, and a significant amount of money lost.
  • October 2011: Mt. Gox accidentally destroyed 2609.36304319 bitcoins.
  • July 2012: A hacker infiltrated the Mt. Gox account used by Bitcoin Syndicate, sold off the USD it held, and withdrew all balances.
  • July 2012: On July 13, 2012, a thief compromised the Bitcoinica Mt. Gox account. The thief made off with around 30% of Bitcoinica's bitcoin assets.

But for any programmer, none of this is a surprise given he hacked up an ssh server in PHP, then deployed it on a production server.

Comment Apple is stateless (Score 1) 288

The entity receiving the money is known as a stateless organisation. It's controlled by Apple, obviously.

How does an organisation become stateless? They take advantage of differing definitions of residency in different jurisdictions. For example, country A may say you are based in A if your headquarters are there. Country B may say you are a country B organisation if your board meets there. Country C may say a company comes under its laws if the bulk of its board are residents there.

One way to be stateless in that situation is to have your headquarters in country B, and have your board meet in country A.
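
A toy model makes the mismatch obvious (the residency rules here are hypothetical, just restating the example above in code):

    # Hypothetical residency rules: country A claims you if your HQ is
    # there; country B claims you if your board meets there.
    def claimed_by_a(hq: str, board: str) -> bool:
        return hq == "A"

    def claimed_by_b(hq: str, board: str) -> bool:
        return board == "B"

    hq, board = "B", "A"  # HQ in country B, board meets in country A
    print(claimed_by_a(hq, board) or claimed_by_b(hq, board))  # False: stateless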

This is what the entity Apple transfers the money to does, so it isn't under the control of any country's laws. It is perfectly legal, of course.

This loophole won't be around for much longer. All that is needed to fix it is for the various countries to make their respective laws consistent. Doing that is on the agenda for the G20 meeting in September.

Comment So we are like mice? (Score 4, Insightful) 459

As others have noted, the study was done on mice, which are herbivores in the wild. The researchers say what happens to them will also happen to us, but we have been eating meat for a long while now.

I wonder if it also applies to my cat? <sarcasm>I know cats are predominantly carnivores, but that shouldn't matter, right?</sarcasm>

Comment Re:Is sudo broken or its audience? (Score 1) 83

144 pages is fairly short and compact for a security tool.

It's a pretty dumb security tool. It allows you to become user X based on a few simple credentials. In fact I can list them: your user, your group, the computer you are on and the program you are running. On top of that, you can ask it to do a few things when it assumes user X's credentials, like clear the environment, close a few files, log something - nothing you could not also do by running a wrapper script.

That's it. It's not much. To configure this relatively simple thing the author invented this god awful syntax. Its one virtue is its compactness - so it's forgivable I suppose, particularly if he writes a clear man page readable by humans. But he didn't. He used EBNF. Now let me tell you, as a person who has written parser generators, an inevitable fact about any non-trivial grammar: when first written, it is full of bugs. That's hardly surprising given grammars are regular expressions on steroids, and most people struggle to get just one non-trivial regular expression right - and an EBNF is a list of them. Thus even the person writing a grammar cannot predict the language it will recognise. The only way to get rid of the bugs is to compile a parser from it, test it on various language constructs, then fix the surprises.

You can take it from that that EBNF is a great way to express things so a computer will understand them, but not so good for a human. If you are expecting someone to write a computer program to match the grammar, it might be a reasonable choice. If you are using it in a man page that only humans will read, it's a bloody awful choice. Maybe it might be justifiable if he had just ripped the grammar straight out of his source code - at least we could be sure it was right then. But he didn't. The source doesn't use a grammar at all.
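
To make that concrete, compare the one line a working admin actually writes with the sort of production the man page makes you decode to learn it (the EBNF fragment below is paraphrased from memory, so treat it as illustrative rather than a quote):

    alice   ALL = (root) /usr/bin/systemctl

    User_Spec ::= User_List Host_List '=' Cmnd_Spec_List \
                  (':' Host_List '=' Cmnd_Spec_List)*

The first is all most people ever need; the second is the puzzle the man page hands you instead of just showing examples.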

But then, if the grammar in the man page has never been compiled or tested, and given it is non-trivial, then if what I said above is true it won't recognise the sudo config file. And it doesn't. For instance, nowhere in the grammar does it express that all commands must lie on a single line. In fact he doesn't even mention it directly in the text either, beyond saying at the end that you can split long lines using a \. You are meant to infer it from the examples, I guess.

So to sum up, we have simple concepts expressed in a terse and complex configuration language, which is described by an untested EBNF so complex it needs its own syntax description in the man page, and we know the EBNF is incomplete. That is why the sudo man page is a cluster fuck. It has nothing to do with your "oh security is complex" throwaway line.

And does it need a 144 page book to explain it? No, of course not. A man page about the size of the one we have could get all the concepts across just fine.

Comment I'm feeling Déjà vu (Score 2) 120

This reminded me of the claims Steve Perlman made in 2011. He said his technique would overcome Shannon’s Law. He was justifiably ridiculed. At least this mob isn't claiming they can break the laws of physics.

Oh wait, this is Perlman, peddling the same dog and pony show. Only this time he's got an article in IEEE Spectrum to print his claims. I hope that means he no longer says he can beat the laws of physics into submission.

The original claims of the impossible aside, the idea was to monitor the signal of each phone in real time from a central point, do some calculations to figure out the path distance from each antenna to the phone, then do some more calculations to split up and phase-shift the outgoing signal so the signals from those antennas constructively interfere to produce the wanted signal at the phone. The tracking has to be damned accurate - much better than GPS, because a 1GHz mobile phone signal has a wavelength of about 30cm, and you need to hold the position to better than 1/4 of a wavelength. And it has to be fast, because if the phone or objects around it move it all goes to pot. So if you are walking at a comfortable 1 metre per second, in under a tenth of a second it's all gone to pot. In a car that drops to a few milliseconds. Oh, and since we are talking 1GHz, we have to measure it to within a few hundred picoseconds. And since you don't use one antenna to service just one phone, he will have to be doing this for 100's of phones simultaneously. Oh, and that means when he is calculating the phase and amplitude of the signal each antenna is generating, he has to solve 100's of linear equations in 100's of variables so he can ensure the signals he sends from each antenna add up to what each phone needs. And since the collective antenna group is sending at, oh, say 100Gb/s, and he has to do this for every fucking bit, he has 10 picoseconds per bit to do it in.
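
If you don't believe those numbers, here's the back-of-the-envelope arithmetic as a Python sketch (the walking/driving speeds and the 100Gb/s aggregate rate are my assumptions, carried over from the paragraph above):

    C = 3.0e8             # speed of light, m/s
    FREQ = 1.0e9          # 1 GHz carrier

    wavelength = C / FREQ           # 0.3 m
    tolerance = wavelength / 4      # ~7.5 cm of tracking accuracy needed

    walking = 1.0                   # m/s
    driving = 12.5                  # m/s, roughly 45 km/h
    print(tolerance / walking)      # ~0.075 s before the solution is stale
    print(tolerance / driving)      # ~0.006 s in a car

    bit_rate = 100e9                # assumed aggregate 100 Gb/s
    print(1.0 / bit_rate)           # 10 ps available per bit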

Yeah, right. It will be out by Xmas, I'm sure.

Comment Re:The Surprised Dutch Prosecutor (Score 1) 83

It doesn't sound like Tor is compromised to me.

Instead it sounds like the fact that the man running the original Silk Road was earning over $10M a month, and only got caught because he was sloppy, got published far and wide. There is almost certainly a flood of Silk Road clones out there now. It's probably one of the few things that outnumbers bitcoin clones. I expect most of them are run by 2-bit crims who picked up their l33t script kiddie skills from their prepubescent cousin.

I am somewhat surprised there have only been 3 of these so far. I expect this is the start of a never ending flood, and it will only drop off the newspapers when it becomes obvious it's an everyday event.

Comment My take (Score 1) 2219

Since this seems to be the place to post comments on beta:

  1. /. is a predominantly text site. That is what your current audience likes. When the new beta home page displays, there is no visible text. Instead we see some god awful graphic occupying the entire visible part of the page. Ick, ick, ick. If I want entertainment served up as cute photos I can go to imgur. Reduce the graphics, reduce the load times.
  2. The home page is ... boring. The current /. page is surrounded by options and knick knacks and useful information. The new one looks like a spartan mobile page clumsily expanded to fill up a desktop page. Thus you end up with heaps of boring white space. Worse - getting to those knick knacks now requires multiple clicks. For fucks sake - you must have witnessed the savaging both Windows 8 and Gnome 3 got here for doing the same thing. Yet you repeat the same basic design error????
  3. The main body of the home page is also a mess. Before we had a regular layout - dense text describing the story, with a small graphic between stories that helped to break up the blocks of text. Now we get either pure text with nothing to break it up, or the occasional story with a huge graphic you have to scroll past to get to the meat - the text. /. isn't a fashion magazine. We aren't here for the graphics. The graphics are there purely to help us parse the text.
  4. In the stories themselves, as others have said, there is too much white space. To me this seems to be just a line spacing / font choice issue. Hint: here the priority should be making the text easy to read, not making it look good. We are here to read the text, not to appreciate the skills of your graphic artist. The tiny font + heaps of white space may look "balanced" in somebody's mind, but it makes the text harder to read than it needs to be.
  5. To me the most important piece of information about a comment is the thread it's in. That remains obvious, fortunately. The second most important piece of information is the moderation score. And you put that in a dim, small font???
  6. Keyboard navigation doesn't work. Seriously? You put this up for people to use as a beta (one step away from production) and took away the feature at least 10% of us use to navigate the site? What were you expecting - us to show the love?
  7. What happened to the plain text / HTML options when replying? What replaced them? How are we supposed to format our replies now? Surely you don't expect us to go on a wumpus hunt to find all this out? At the very least a "help with formatting" link would be appreciated.
  8. When replying to a comment there is no default subject. What's wrong with "Re: (old subject)"?
  9. You have a "Share" button on comments. I think it may be a permanent URL to the comment. That might have been what "Share" meant a decade ago, but now "Share" means posting a link to your favourite social media site: when you click it, a dropdown of common social media sites should appear. "Permalink" is a better name for what you have.
  10. On the main page, "Older" points left. On what planet is that the standard? Everywhere else "older" posts are on the right, and indeed swiping the screen left makes them appear.

I can't help but agree with the FUCK BETA crowd, even though I have started modding them down. This isn't beta quality, and it should never have been inflicted on the undeserving public.

Comment Re:There is (probably) no analog phone network any (Score 1) 218

There are a tons of devices like these out there and if they cannot operate reliably over a VoIP based network

True. These devices are modems, and they power things like faxes and EFTPOS terminals.

You know what? Modems are what we use to send digital data over an analogue line. They don't work over some VOIP links, but ye gods - if you are kludging a digital line over VOIP, emulating an analogue signal over a digital signal which is itself sent using an analogue PHY and a high speed modem - maybe it is time for a layer or two to die.

In other words, complaining about VOIP making life difficult for modems is like a teamster complaining about how hard the asphalt is on the horse's hooves.

Comment Re:Huh? (Score 1) 218

What happens when your power goes out and you have Charter-crap or Comcast-shite or UVerse-dung ? You're screwed.

I don't know about Charter-crap or Comcast-shite, but here in Australia I can tell you what happens with POTS. Initially batteries in the exchange power the line, and it's all good as you say. But if the outage is caused by a category 5 cyclone named, say, Yasi, then some of the exchanges will be isolated, so the next thing that happens is the batteries go flat. Not a huge problem, as the diesel generator cuts in automatically. But then it runs out of fuel, and your POTS line dies.

So what now? Well, I can tell you what thousands of Aussies affected by Yasi did. They used their mobiles. If the mobiles didn't work they hopped into the car and drove to somewhere they did. And if their mobiles went flat they charged them using the car. Turns out a determined human with car and mobile beats POTS every time.

Here in Australia we are building something called the NBN. Sort of like the FCC plan being described here, but we are skipping the trial step. (Well, not really. The NBN is better described as "re-wiring the country", but exactly what that means is up for debate. However, one thing is clear: POTS dies.) There used to be a huge debate about batteries, just like the one you are starting here. Yasi ended it.

Comment The race is on (Score 4, Informative) 162

If Google isn't careful, they will lose this race. Right now it is a bit of a toss up. It wasn't always so. A few years ago OSM was just a toy, and the Android Google Maps app did a reasonable job of offline maps and searching the local area. My, how things have changed.

On the one hand, Google has been busily removing features from its Maps app. I think they were trying to make it easier to use. Whether they achieved that is debatable, but what they have done is make it less useful. You can't measure distances now, the search for local places of interest is all but useless, and there is no way to find out what maps are available for offline use.

OsmAnd+ on the other hand has acquired the one big missing feature - directions, navigation and voice. Amazingly its point of interest search works much better than Google's, possibly because the locals enter the point of interest data. And it has always had a number of features Google Maps doesn't:

  • Measure distances.
  • Add way points for navigation.
  • Directed Address Entry.
  • Display custom underlay / overlay maps.
  • Record / display GPS tracks.
  • Totally offline operation.
  • If something is wrong or missing, you can add it.

Normally I would not bet against Google. But collecting traffic and public transport data isn't out of the realms of possibility for OSM. If that happens, I can't think why anybody would choose Google Maps over OSM.

Comment Re:Is there any way to gain trust in a chip? (Score 2) 178

Given a "black box" implementation of a random number generator, is it possible to test its output sufficiently to gain some faith in its proper randomness?

The answer is an outright no.

The thing that crypto depends on isn't that a stream of random numbers appears to be random. It is that the next number is utterly unpredictable. No one, not even the person who generated it, will know what it will be. This means if it is used as a key to protect some data, no one can predict what that key will be.

One of the ways every cryptographic cipher or hash is checked is to verify its output is indistinguishable from random data. If it isn't, there is a weakness in the cipher or hash. So the output from any good cipher or hash will always appear to be completely random according to any test we can devise. But - the output is also completely predictable.

So all the NSA need to do in their black box is start with a predictable key or salt (the time would be fine), push it through a cipher or hash, and output something which appears completely random. If the random number is used as a 128 bit AES key, it will appear fine to any test the user can devise. But say they use a 1us tick to generate the time, and the NSA knows to within say 10 minutes when the key was generated: then they only have to brute force about 1 billion keys (in other words, that 128 bit key only has about 30 bits of entropy). This is trivial to do.
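
Here's a minimal sketch of such a black box in Python (illustrative only - SHA-256 standing in for whatever the real silicon would use):

    import hashlib
    import time

    def backdoored_random_bytes(n: int) -> bytes:
        # Seed from the clock at 1us resolution. The output passes any
        # statistical test, but an attacker who knows the generation
        # time to within 10 minutes has only ~6e8 (~2^30) seeds to try.
        seed = int(time.time() * 1e6).to_bytes(8, "big")
        out = b""
        counter = 0
        while len(out) < n:
            out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:n]

    key = backdoored_random_bytes(16)  # a "random" 128 bit AES key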

QED, the answer is emphatically no - there is no way to test whether a black box is generating truly random numbers. Every black box must be treated as untrustworthy - which is exactly what BSD, Linux and, I hope, everybody else do.

Comment Re:3DES (Score 1) 230

The entire article is rubbish. It's little more than a viral ad for CSO, at Adobe's expense.

Yes, they used 3DES. 3DES has a number of nice attributes: it's strong, and it's slow. And if the key is kept safe, it's equivalent to a hash - but an unknown one. Being unknown renders it immune to brute force attacks. Being immune to brute force attacks makes it as good as bcrypt, scrypt and PBKDF2, but without the speed penalty those incur.

The one weakness is that key leaking. I gather it hasn't, so far. Which means the passwords are safer than with the alternative CSO recommends - salted SHA-2. In fact, if they had been salted with a single round of SHA-2, most of the passwords would have been brute forced by now.
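
To see why single-round salted SHA-2 falls so easily, here is the attacker's whole job as a Python sketch (a real cracker runs this loop billions of times per second on GPUs):

    import hashlib

    def crack(salt: bytes, leaked_hash: bytes, wordlist):
        # With the salt and hash leaked, guessing is an offline race:
        # one cheap SHA-256 per candidate password.
        for guess in wordlist:
            if hashlib.sha256(salt + guess.encode()).digest() == leaked_hash:
                return guess
        return None

Against 3DES-encrypted passwords the same loop is useless unless the key leaks too - without it there is nothing to compare the guesses against.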

Which means that while Adobe has done a good job of keeping those passwords safe (well, aside from the leak), the security advice offered by CSO in the article is just plain wrong. Which makes the /. writeup of the article wrong. It should say "In trying to teach Adobe to suck eggs, CSO proves they know nothing about password security."

Comment Re:again? (Score 5, Interesting) 235

Hear hear! A bit of background to the politics of this:

NFTables is brought to you by a group of coders formed when Alexey Kuznetsov decided to replace the low level Linux network stack for Linux 2.2 to make it more like what Cisco provided in IOS. The result added a whole pile of new functionality to Linux (eg routing rules), and a shiny new, highly modular traffic control engine. Alexey produced beautifully written postscript documentation for the new user land routing tools (the "ip" command), and a 100 line howto for the far more complex traffic control engine tools (the "tc" command).

Technically it was a tour de force. But to end users it could at best be called a modest success. Alexey re-wrote the net-utils tools ("ifconfig", "route" and friends) to use the new system, and did such a good job that very few people bothered to learn the new "ip" command, even though the documentation was good and it introduced a modest amount of new features. But the real innovation was the traffic control engine, and to this day bugger all people know how to use it.

At this point it could have gone two ways. Someone could have brought tc's documentation up to the same standard Alexey provided for ip, or they could ignore the fact that almost no one used the code already written and add more of the same. They did the latter.

It was also at this time the network code wars started in the kernel. Not many people know that a modest amount of NAT, filtering and so on can be done by Alexey's new ip command. But rather than build on that, Rusty Russell just ported the old ipfwadm infrastructure, called it ipchains (and later replaced it with iptables). There was some overlap between Rusty's work and tc, and it has grown over time. For example, the tc U32 filter could do most of the packet tests ipchains introduced over time on day 1. Technically the modular framework provided by tc was more powerful than ipchains, and inherently faster. Tc was however near impossible for mere mortals to use even if they had good documentation. There were some outside efforts to fix this - tcng was an excellent out-of-tree attempt to fix the complexity problems of tc. But in what seems like a recurring theme, it was out of tree and ignored. In contrast, Rusty provided ipchains with some of the best documentation on the planet. In the real world the result of these two efforts is plain to see - while man + dog uses iptables, there are maybe 100 people on the planet who can use tc.
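
To give a feel for the difference, here is roughly the same idea - picking out traffic from 10.0.0.0/8 - expressed both ways (commands typed from memory, and the tc one assumes a qdisc with handle 1: is already set up, so check the man pages before trusting the details):

    iptables -A FORWARD -s 10.0.0.0/8 -j DROP

    tc filter add dev eth0 parent 1:0 protocol ip prio 1 \
        u32 match ip src 10.0.0.0/8 flowid 1:10

The first drops the traffic; the second merely steers it into class 1:10 - and that is still the easy part of a tc setup.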

Another example of the same thing is IMQ. IMQ lets you unleash the full power of the traffic control engine on incoming traffic. (Natively the traffic control engine only deals with packets being sent, not incoming packets - a limitation introduced for purely philosophical reasons.) IMQ was very well documented, and heavily used. The people who brought you tc had a list of technical objections to IMQ. I don't know whether they were real or just a case of Not Invented Here, but I'd give them the benefit of the doubt - they are pretty bright guys. So they replaced it with their own in-kernel-tree concoction. (For those of you who don't follow the kernel, "in-tree" means it comes with the Linux kernel. An out-of-tree module like IMQ means at the very least you have to compile the module source, and possibly the entire kernel.) For a while this discouraged the developers of IMQ so much they stopped working on it. If you follow that link, you will see it's back now. Why? Because the thing that replaced it had absolutely no documentation. They never do. So no one could use the replacement. Again, in the end, the code that was documented won the day.

By now you might guess where this is heading. We have two groups in the kernel competing to provide the same networking functions. All sorts of weird modules were added to the traffic control stack - things like mirred, nat, blackholing, ipset. The more observant among us noted they allowed the traffic control engine to replace iptables. No one used them of course, as in the fine and continuing tradition of this group, they weren't documented. So the net effect was to add unused orphan code to the kernel. To this day, I don't know why it was tolerated by Dave Miller, the head of the networking stack.

NFTables is the latest attempt by this group to unseat iptables. This time it looks like they will succeed. For the most part this is a good thing. The duplication between iptables and Alexey's framework was always a huge technical wart, and Alexey's framework was always the better one. It will be even better if they backport the classification engine to tc, so we can use it to assign traffic control classes. If they do that, most of the duplication will be gone at last.

The one minor problem is that, true to form, there is no fucking documentation.

It appears that after consistently having their code ignored by most Linux users for over a decade, these bastards are incapable of learning the lesson. If Linus and his appointed networking stack maintainer, Dave Miller, allow this state of affairs to continue with NFTables, it will be another right royal mess.

TL;DR: If Dave Miller doesn't grow some balls and say "no user land documentation, then automatic NACK", nothing will be replaced. Instead we will end up with yet more duplication.
