
Comment Re:Is sudo broken or its audience? (Score 1) 83

144 pages is fairly short and compact for a security tool.

It's a pretty dumb security tool. It allows you to become user X based on a few simple credentials. In fact I can list them: your user, your group, the computer you are on and the program you are running. On top of that, you can ask it to do a few things when it assumes user X's credentials, like clear the environment, close a few files, or log something - nothing you could not also do by running a wrapper script.

That's it. It's not much. To configure this relatively simple thing the author invented this god awful syntax. Its one virtue is its compactness - so it's forgivable I suppose, particularly if he writes a clear man page readable by humans. But he didn't. He used EBNF. Now let me tell you, as a person who has written parser generators, an inevitable fact about any non-trivial grammar: when first written they are full of bugs. It's hardly surprising given they are regular expressions on steroids, and most people struggle to get just one non-trivial regular expression right - and an EBNF is a list of them. Thus even the person writing them cannot predict the language they will recognise. The only way to get rid of the bugs is to compile a parser from the grammar, test it on various language constructs, then fix the surprises.

You can take it from that that EBNF is a great way to express things so a computer will understand them, but not so good for a human. If you are expecting someone to write a computer program to match the grammar, it might be a reasonable choice. If you are using it in a man page that only humans will read, it's a bloody awful choice. Maybe it might be justifiable if he had just ripped the grammar straight out of his source code - at least we could be sure it was right then. But he didn't. The source doesn't use a grammar at all.

But if the grammar in the man page has never been compiled or tested, and given it is non-trivial, then if what I said above is true it won't recognise the sudo config file. And it doesn't. For instance, nowhere in the grammar does it express that all commands must lie on a single line. In fact he doesn't even mention it directly in the text, beyond saying at the end that you can split long lines using a \. You are meant to infer it from the examples, I guess.
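To illustrate (a hand-written example, not one taken from the man page - the user and commands are hypothetical): every sudoers rule lives on one physical line unless explicitly continued, and nothing in the EBNF says so.

```
# Each sudoers rule must occupy a single line; the grammar never states this.
# Long rules are continued with a trailing backslash:
alice   ALL=(root) NOPASSWD: /usr/bin/systemctl restart nginx, \
                             /usr/bin/systemctl reload nginx
```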

So to sum up: we have simple concepts expressed in a terse and complex configuration language, which is described by an untested EBNF so complex it needs its own syntax description in the man page, and we know the EBNF is incomplete. That is why the sudo man page is a cluster fuck. It has nothing to do with your "oh security is complex" throwaway line.

And does it need a 144 page book to explain it? No, of course not. A man page about the size of the one we have could get all the concepts across just fine.

Comment I'm feeling Déjà vu (Score 2) 120

This reminded me of the claims Steve Perlman made in 2011. He said his technique would overcome Shannon’s Law. He was justifiably ridiculed. At least this mob isn't claiming they can break the laws of physics.

Oh wait, this is Perlman, peddling the same dog and pony show. Only this time he's got an article in IEEE Spectrum to print his claims. I hope that means he no longer says he can beat the laws of physics into submission.

The original claims of the impossible aside, the idea was to monitor the signal of each phone in real time from a central point, do some calculations to figure out the path distance from each antenna to the phone, then do some more calculations to split up and phase-shift the outgoing signal so that the signals from those antennas constructively interfere to produce the wanted signal at the phone. The tracking has to be damned accurate - much better than GPS, because a 1 GHz mobile phone signal has a wavelength of about 30 cm, and you need position accuracy better than 1/4 of a wavelength. And it has to be fast, because if the phone or objects around it move it all goes to pot. So if you are walking at a comfortable 1 metre per second, in under a tenth of a second it's all gone to pot. In a car that drops to a few milliseconds. Oh, and since we are talking 1 GHz, we have to measure timing to within a few hundred picoseconds. And since you don't use one antenna to service just one phone, he will have to be doing this for hundreds of phones simultaneously. Oh, and that means when he is calculating the phase and amplitude of the signal each antenna is generating, he has to solve hundreds of linear equations in hundreds of variables so he can ensure the signals he sends from each antenna add up to what each phone needs. And since the collective antenna group is sending at, oh, say 100 Gb/s, and he has to do this for every fucking bit, he has 10 picoseconds per bit to do it in.
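A back-of-the-envelope sketch of those numbers (my own arithmetic; the 1 GHz carrier, 1 m/s walking speed and ~50 km/h car speed are assumptions):

```python
# Back-of-the-envelope numbers for the coherent beamforming claim.
c = 3.0e8              # speed of light, m/s
f = 1.0e9              # carrier frequency, 1 GHz

wavelength = c / f                     # ~0.3 m at 1 GHz
position_tolerance = wavelength / 4    # ~7.5 cm needed for coherent combining

walking_speed = 1.0                    # m/s
time_budget_walking = position_tolerance / walking_speed   # ~75 ms

car_speed = 14.0                       # m/s, roughly 50 km/h
time_budget_car = position_tolerance / car_speed           # ~5 ms

# A quarter wavelength of path error corresponds to a quarter carrier period.
timing_tolerance = (1.0 / f) / 4       # 250 picoseconds

print(wavelength, position_tolerance, time_budget_walking,
      time_budget_car, timing_tolerance)
```

The tracking loop has to converge well inside those time budgets, for every phone, simultaneously.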

Yeah, right. It will be out by Xmas, I'm sure.

Comment Re:The Surprised Dutch Prosecutor (Score 1) 83

It doesn't sound like Tor is compromised to me.

Instead it sounds like the fact that the man running the original Silk Road was earning over $10M a month, and only got caught because he was sloppy, got published far and wide. There is almost certainly a flood of Silk Road clones out there now. It's probably one of the few things that outnumbers bitcoin clones. I expect most of them are run by two-bit crims who picked up their l33t script kiddie skills from their prepubescent cousin.

I am somewhat surprised there have only been 3 of these so far. I expect this is the start of a never ending flood, and it will only drop out of the newspapers when it becomes obvious it's an everyday event.

Comment My take (Score 1) 2219

Since this seems to be the place to post comments on beta:

  • 1. /. is a predominately text site. That is what your current audience likes. When the new beta home page displays there is no visible text. Instead we see some god awful graphic occupying the entire visible part of the page. Ick, ick, ick. If I want entertainment served up as cute photos I can go to imgur. Reduce the graphics, reduce the load times.
  • 2. The home page is ... boring. The current /. page is surrounded by options and knick knacks and useful information. The new one looks like a spartan mobile page clumsily expanded to fill up a desktop page. Thus you end up with heaps of boring white space. Worse - getting to those knick knacks now requires multiple clicks. For fucks sake - you must have witnessed the savaging both Windows 8 and Gnome 3 got here for doing the same thing. Yet you repeat the same basic design error????
  • 3. The main body of the home page is also a mess. Before we had a regular layout - dense text describing the story, with a small graphic between stories that helped to break up the blocks of text. Now we have either pure text with nothing to break it up, or the occasional story with a huge graphic you have to scroll past to get to the meat - the text. /. isn't a fashion magazine. We aren't here for the graphics. The graphics are there purely to help us parse the text.
  • 4. In the stories themselves, as others have said, there is too much white space. To me this seems to be just a line spacing / font choice issue. Hint: here the priority should be making the text easy to read, not making it look good; we are here to read the text, not to appreciate the skills of your graphic artist. The tiny font + heaps of white space may look "balanced" in somebody's mind, but it makes the text harder to read than it needs to be.
  • 5. To me the most important piece of information about a comment is the thread it's in. That remains obvious, fortunately. The second most important piece of information is the moderation score. And you put that in a dim, small font???
  • 6. Keyboard navigation doesn't work. Seriously? You put this up for people to use as a beta (one step away from production) and took away the feature at least 10% of us use to navigate the site? What were you expecting - us to show the love?
  • 7. What happened to the plain text / html options when replying? What replaced them? How are we supposed to format our replies now? Surely you don't expect us to go on a whumpus hunt to find all this out? At the very least a "help with formatting" link would be appreciated.
  • 8. When replying to a comment there is no default subject. What's wrong with "Re: (old subject)"?
  • 9. You have a "Share" button on comments. I think it may be a permanent URL to the comment. That might have been what "Share" meant a decade ago, but now "Share" means posting a link to your favourite social media site, and when you click something like that a dropdown of common social media sites appears. "Permalink" is a better name for it.
  • 10. On the main page, "Older" points left. On what planet is that the standard? Everywhere else "older" posts are on the right, and indeed swiping the screen left makes them appear.

I can't help but agree with the FUCK BETA crowd, even though I have started modding them down. This isn't beta quality, and it should never have been inflicted on the undeserving public.

Comment Re:There is (probably) no analog phone network any (Score 1) 218

There are a tons of devices like these out there and if they cannot operate reliably over a VoIP based network

True. These devices are modems, and they power things like faxes and EFTPOS terminals.

You know what? Modems are what we use to send digital data over an analogue line. They don't work well over VoIP, but ye gods - if you are kludging digital data over an analogue modem signal, over a digital VoIP stream, which is itself sent using an analogue PHY, maybe it is time for a layer or two to die.

In other words, complaining about VoIP making life difficult for modems is like a teamster complaining about how hard the asphalt is on the horse's hooves.

Comment Re:Huh? (Score 1) 218

What happens when your power goes out and you have Charter-crap or Comcast-shite or UVerse-dung ? You're screwed.

I don't know about Charter-crap or Comcast-shite, but here in Australia I can tell you what happens with POTS. Initially batteries in the exchange power the line, and it's all good as you say. But if the outage is caused by a category 5 cyclone named, say, Yasi, then some of the exchanges will be isolated, so the next thing that happens is the batteries go flat. Not a huge problem, as the diesel generator cuts in automatically. But then it runs out of fuel, and your POTS line dies.

So what now? Well I can tell you what thousands of Aussies affected by Yasi did. They used their mobiles. If the mobiles didn't work they hopped into the car and drove to somewhere they did. And if their mobiles went flat they charged them using the car. Turns out a determined human with a car and a mobile beats POTS every time.

Here in Australia we are building something called the NBN. Sort of like the FCC plan being described here, but we are skipping the trial step. (Well, not really. The NBN is better described as "re-wiring the country", and exactly what that means is up for debate, however one thing is clear: POTS dies.) There used to be a huge debate about batteries, just like the one you are starting here. Yasi ended it.

Comment The race is on (Score 4, Informative) 162

If Google isn't careful, they will lose this race. Right now it is a bit of a toss up. It wasn't always so. A few years ago OSM was just a toy, and the Android Google Maps app did a reasonable job of offline maps and searching the local area. My, how things have changed.

On the one hand Google has been busily removing features from its Maps app. I think they were trying to make it easier to use. Whether they achieved that is debatable, but what they have done is make it less useful. You can't measure distances now, the search for local places of interest is all but useless, and there is no way to find out what maps are available for offline use.

OsmAnd+ on the other hand has acquired the one big missing feature - directions, navigation and voice. Amazingly its point of interest search works much better than Google's, possibly because the locals enter the point of interest data. And it has always had a number of features Google Maps doesn't:

  • Measure distances.
  • Add way points for navigation.
  • Directed Address Entry.
  • Display custom underlay / overlay maps.
  • Record / display GPS tracks.
  • Totally offline operation.
  • If something is wrong or missing, you can add it.

Normally I would not bet against Google. But collecting traffic and public transport data isn't out of the realms of possibility for OSM. If that happens, I can't think why anybody would choose Google Maps over OSM.

Comment Re:Is there any way to gain trust in a chip? (Score 2) 178

Given a "black box" implementation of a random number generator, is it possible to test its output sufficiently to gain some faith in its proper randomness?

The answer is an outright no.

The thing that crypto depends on isn't that a stream of random numbers appears to be random. It is that the next number is utterly unpredictable. No one, not even the person who generated it, can know what it will be. This means if it is used as a key to protect some data, no one can predict what that key will be.

One of the ways every cryptographic cipher or hash is checked is to verify its output is indistinguishable from random data. If it isn't, there is a weakness in the cipher or hash. So the output from any good cipher or hash will always appear to be completely random according to any test we can devise. But - the output is also completely predictable.

So all the NSA needs to do in their black box is start with a predictable key or salt (the time would be fine), push it through a cipher or hash, and output something which appears completely random. If that random number is used as a 128-bit AES key it will appear fine to any test the user can devise. But say they use a 1 µs tick to generate the time, and the NSA knows to within 10 minutes when the key was generated; then they will only have to brute force around a billion keys (in other words that 128-bit key has only about 30 bits of entropy). This is trivial to do.
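A toy sketch of the attack (my own illustration: SHA-256 stands in for the cipher, and a microsecond timestamp is the "random" seed; the timestamp value is made up):

```python
import hashlib

def weak_key(tick: int) -> bytes:
    # "Random" 128-bit key derived solely from a microsecond timestamp.
    # The output passes statistical randomness tests, yet is fully predictable.
    return hashlib.sha256(tick.to_bytes(8, "big")).digest()[:16]

# Victim generates a key at a moment the attacker knows only roughly.
secret_tick = 1_387_000_123_456_789      # hypothetical microsecond timestamp
key = weak_key(secret_tick)

# Attacker brute-forces every tick in the suspected window.
# (A tiny 0.2 s window here so the demo runs quickly; a 10 minute window is
# ~6e8 ticks, still trivial for dedicated hardware.)
window = range(secret_tick - 100_000, secret_tick + 100_000)
recovered_tick = next(t for t in window if weak_key(t) == key)
assert recovered_tick == secret_tick     # key recovered without "breaking" AES
```

No statistical test on `key` alone can distinguish it from true randomness; only knowledge of how it was made gives the game away.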

QED, the answer is emphatically no - there is no way to test whether a black box is generating truly random numbers. Every black box must be treated as untrustworthy - which is exactly what BSD, Linux and I hope everybody else do.

Comment Re:3DES (Score 1) 230

The entire article is rubbish. It's little more than a viral ad for CSO, at Adobe's expense.

Yes, they used 3DES. 3DES has a number of nice attributes: it's strong, and it's slow. And if the key is kept safe, encrypting with it is equivalent to hashing with an unknown hash. Being unknown renders it immune to brute force attacks. Being immune to brute force attacks makes it as good as bcrypt, scrypt and PBKDF2, but without the speed penalty those incur.

The one weakness is that key leaking. I gather it hasn't, so far. Which means the passwords are safer than under the alternative CSO recommends - salting and hashing with SHA-2. In fact, if they had been salted and hashed with a single round of SHA-2, most of the passwords would have been brute forced by now.
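A sketch of why a single fast hash round is weak (my own example: the salt, password and wordlist are made up, and HMAC-SHA-256 stands in for "keyed with a secret" generically, not for Adobe's actual 3DES scheme):

```python
import hashlib
import hmac

salt = b"somesalt"
stored = hashlib.sha256(salt + b"monkey123").hexdigest()  # leaked DB entry

# With a fast unkeyed hash, the attacker just runs a dictionary:
wordlist = [b"password", b"letmein", b"monkey123", b"dragon"]
cracked = next(w for w in wordlist
               if hashlib.sha256(salt + w).hexdigest() == stored)
assert cracked == b"monkey123"   # found in microseconds

# With a keyed construction, the same dictionary attack needs the key,
# which was never stored alongside the hashes:
key = b"server-side secret kept away from the password database"
stored_keyed = hmac.new(key, salt + b"monkey123", hashlib.sha256).hexdigest()
guesses = [hmac.new(b"wrong key", salt + w, hashlib.sha256).hexdigest()
           for w in wordlist]
assert stored_keyed not in guesses   # guessing is useless without the key
```

That is the author's point: as long as the secret stays secret, the scheme resists offline guessing in a way a plain salted hash cannot.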

Which means that while Adobe has done a good job of keeping those passwords safe (well, aside from the leak), the security advice offered by CSO in the article is just plain wrong. Which makes the /. writeup of the article wrong. It should say "In trying to teach Adobe to suck eggs, CSO proves they know nothing about password security."

Comment Re:again? (Score 5, Interesting) 235

Hear hear! A bit of background to the politics of this:

NFTables is brought to you by a group of coders created when Alexey Kuznetsov decided to replace the low level Linux network stack for Linux 2.2 to make it more like what Cisco provided in IOS. The result added a whole pile of new functionality to Linux (eg routing rules), and a shiny new, highly modular traffic control engine. Alexey produced beautifully written PostScript documentation for the new user land routing tools (the "ip" command), and a 100-line howto for the far more complex traffic control engine tool (the "tc" command).

Technically it was a tour de force. But to end users it could at best be called a modest success. Alexey re-wrote the net-tools utilities ("ifconfig", "route" and friends) to use the new system, and did such a good job that very few bothered to learn the new "ip" command, even though the documentation was good and it introduced a modest amount of new features. But the real innovation was the traffic control engine, and to this day bugger all people know how to use it.

At this point it could have gone two ways. Someone could have brought tc's documentation up to the same standard Alexey provided for ip, or they could ignore the fact that almost no one used the code already written and add more of the same. They did the latter.

It was also at this time the network code wars started in the kernel. Not many people know that a modest amount of NAT, filtering and so on can be done by Alexey's new ip command. But rather than build on that, Rusty Russell just ported the old ipfwadm infrastructure, called it ipchains (and later replaced it with iptables). There was some overlap between Rusty's work and tc, and it has grown over time. For example, the tc u32 filter could do from day 1 most of the packet tests ipchains introduced over time. Technically the modular framework provided by tc was more powerful than ipchains, and inherently faster. Tc was however near impossible for mere mortals to use even if they had good documentation. There were some outside efforts to fix this - tcng was an excellent out-of-tree attempt to fix the complexity problems of tc. But in what seems like a recurring theme, it was out of tree and ignored. In contrast, Rusty provided ipchains with some of the best documentation on the planet. In the real world the result of these two efforts is plain to see - while man + dog uses iptables, there are maybe 100 people on the planet who can use tc.

Another example of the same thing is IMQ. IMQ lets you unleash the full power of the traffic control engine on incoming traffic. (Natively the traffic control engine only deals with packets being sent, not incoming packets - a limitation introduced for purely philosophical reasons.) IMQ was very well documented, and heavily used. The people who brought you tc had a list of technical objections to IMQ. I don't know whether they were real or just a case of Not Invented Here, but I'd give them the benefit of the doubt - they are pretty bright guys. So they replaced it with their own in-kernel-tree concoction. (For those of you who don't follow the kernel, "in-tree" means it comes with the Linux kernel. An out-of-tree module like IMQ means at the very least you have to compile the module source, and possibly the entire kernel.) For a while this discouraged the developers of IMQ so much they stopped working on it. If you look at the IMQ project now, you will see it's back. Why? Because the thing that replaced it had absolutely no documentation. They never do. So no one could use the replacement. Again, in the end, the code that was documented won the day.

By now you might guess where this is heading. We have two groups in the kernel competing to provide the same networking functions. All sorts of weird modules were added to the traffic control stack - things like mirred, nat, blackholing, ipset. The more observant among us noted they allowed the traffic control engine to replace iptables. No one used them of course, as in the fine and continuing tradition of this group, they weren't documented. So the net effect was to add unused orphan code to the kernel. To this day, I don't know why it was tolerated by Dave Miller, the head of the networking stack.

NFTables is the latest attempt by this group to unseat iptables. This time it looks like they will succeed. For the most part this is a good thing. The duplication between iptables and Alexey's framework was always a huge technical wart, and Alexey's framework was always the better one. It will be even better if they backport the classification engine to tc, so we can use it to assign traffic control classes. If they do that, most of the duplication will be gone at last.

The one minor problem is that true to form, there is no fucking documentation.

It appears that after consistently having their code ignored by most Linux users for over a decade, these bastards are incapable of learning the lesson. If Linus and his appointed networking stack maintainer Dave Miller allow this state of affairs to continue with NFTables, it will be another right royal mess.

TL;DR: If Dave Miller doesn't grow some balls and say "no user land documentation, then automatic NACK", nothing will be replaced. Instead we will end up with yet more duplication.

Comment A new game of wack-a-mole has begun (Score 1) 620

If the comments here are right, it wasn't the technologies Silk Road is based on that caused the issue; it was that he used dumb things like gmail addresses and mailed fake documents to his physical address. So the underlying technology stands firm, and it is now well known that he made millions from it.

There are two ways you can remove a weed. One way is to carefully dig it up, roots and all, and put it in the incinerator. The second way is to wait until it has flowered, then hit it with a weed wacker, spreading its seeds far and wide. This looks like the latter.

If I didn't know better I'd say someone in the Department of Justice is trying to set themselves up with a job for life. But I do know better. They aren't that smart.

Comment Re:not just charge cycles (Score 1) 364

They lose 20% of their capacity a year - when they are stored fully charged or fully discharged. Quoting Wikipedia:

Loss rates vary by temperature: 6% loss at 0 C (32 F), 20% at 25 C (77 F), and 35% at 40 C (104 F). When stored at 40%–60% charge level, the capacity loss is reduced to 2%, 4%, and 15%, respectively.

And yes, that is real. On reading that 5 years ago I decided to store my laptop's battery in my backpack, at 50% charge, unless I planned to use it. It still has 2/3 of its capacity today.

All that aside, again quoting Wikipedia on the ESS - Tesla's battery system:

The ESS is expected to retain 70% capacity after 5 years and 50,000 miles (80,000 km) of driving (10,000 miles (16,000 km) driven each year). However, a July 2013 study found that even after 100,000 miles, Roadster batteries still have 80%-85% capacity and the only significant factor is mileage (not temperature)

As it happens, 80%-85% after 100,000 miles means 80%-85% after roughly 500 cycles, which just happens to fit the characteristics of a LiMn battery. So there is nothing remarkable about the Tesla's performance. It's just today's battery technology done right. Granted, given it is almost always done wrong, this is a major achievement.
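The cycle arithmetic is easy to check (my own numbers; the ~200 miles per full charge for the Roadster is an assumption):

```python
# Rough cycle count behind the "80%-85% after 100,000 miles" figure.
miles_driven = 100_000
miles_per_full_charge = 200           # assumed Roadster range per full cycle

full_cycles = miles_driven / miles_per_full_charge   # 500 full cycles

capacity_after = 0.80                 # low end of the observed 80%-85%
fade_per_cycle = (1 - capacity_after) / full_cycles  # ~0.04% lost per cycle

print(full_cycles, fade_per_cycle)
```

Around 500 cycles to 80% is unremarkable for lithium chemistry treated well, which is the point.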

Comment Re:Voting "Accident"? I think not. (Score 1) 343

I don't know what "lots" translates to in the US, but here in Australia it translated to a ballot paper 1.0 metre wide. The polling booths are 0.6 m wide, so you can't lay the thing flat. The number of candidates exceeded what our printing technology could handle (or maybe the ballot paper had to fit into the ballot box - I don't know), but something put a maximum size on the ballot paper. The only option to fit every candidate on was to reduce the point size of the print. They had to reduce it to 6 point to make it fit.

Humans can't read 6 point. So they had to issue magnifying glasses so we could read the damned things.

Still, that isn't the problem. We have two more complications. We have preferential voting. This means you have to number every box from 1 to the number of candidates. It works wonderfully well when the number of candidates is sane - far better than the US system of first past the post.

Only in the senate the number of candidates isn't sane. It is literally near impossible to mark 100 candidates without duplicating or missing a number. To have a hope you have to spend ages double checking and triple checking, and if you make a mistake you can't correct it. Corrections on a ballot paper invalidate it. You have to ask for a new ballot sheet and start again, and pray you don't make a different bloody mistake.

Are you getting the idea now? Is it clear it is near impossible for a human to cast a valid full senate vote? Good. Because what happens next leads us to the current situation, where a man who had a video up on YouTube of him and his mates flinging kangaroo poo at each other during the election got elected to the current Australian federal senate.

Because it is impossible to fill in, they had to simplify it. What they did seems fair enough. They introduced "above the line" voting. To vote above the line you effectively delegate your vote to 1 party. In other words you mark one box. The party has submitted a full senate vote to the Electoral Commission earlier, and that is used as your full preferential senate vote. You can still do a full preferential vote by filling in every square below the line, but you would have to be completely anal.

So, think about it. How do you game this system? If you are a big party it isn't easy, but if you aren't so tied down by ethics you create lots of little parties with confusingly similar names. The Electoral Commission helpfully colludes with you by randomising the order of those names on the ballot sheet. So the voter is confronted with 20 to 30 names of parties, most of which he has never heard of before, on a piece of paper so wide he can't lay it flat in the polling booth to read them in a single pass. Naturally lots of mistakes are made. The preferential system means if a small party doesn't get in, their votes (which remember they control now) flow to another party of their choice. It doesn't take much imagination to see how they might make their choices.

There is one final twist. For the senate, you aren't electing 1 person. You are electing 6. The first 5 winners have almost certainly gobbled up more than 90% of the votes, so the last one is determined by a tiny fraction.

The really sad part of all of this is that while the extra complexity of preferential voting is more than worth it when electing one candidate, it is a complete waste of time when electing 6.
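The preference-flow mechanism being gamed can be sketched with a toy single-seat instant-runoff count (my own simplified model with made-up party names; the senate's real multi-seat STV count is considerably more complex):

```python
from collections import Counter

def instant_runoff(ballots):
    """Each ballot lists candidates, most preferred first. Eliminate the
    last-placed candidate and flow their votes on, until someone holds a
    majority of the ballots."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Each ballot counts for its highest-ranked surviving candidate.
        tally = Counter(next(c for c in ballot if c in candidates)
                        for ballot in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):
            return leader
        candidates.discard(min(tally, key=tally.get))

# A tiny party that controls its preferences decides the winner.
ballots = (
    [["Big Party A"]] * 40
    + [["Big Party B"]] * 45
    + [["Tiny Party", "Big Party A"]] * 15   # preference deal with A
)
winner = instant_runoff(ballots)   # Tiny Party eliminated; A wins 55-45
```

B leads on first preferences, yet A wins - that preference flow, submitted in advance by the micro-party, is exactly the lever being pulled.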

Anyway, don't lecture us Aussies on how to completely fuck up a voting system. We have all of you beat by a large margin.
