Comment Re:flashy, but risky too. (Score 1) 83

Although I see problems with this, I kind of doubt counterfeiting is going to be one of them. To pull this off, the driver (or Uber) would have to have access to a huge warehouse of counterfeit goods so they could swap the real item (chosen by the customer, not the driver) for a matching fake one. I just don't see that as a practical scheme for stealing goods.

Comment Re:cryptobracelet (Score 2) 116

We'll see.

It's absolutely wrong to say I'm proposing a 'stealable' ID. No, it's not that at all. As with NFC (Apple Pay and others), you don't send out your ID; your bracelet engages in a two-way conversation that generates unique identifiers every time, proving that it's you without giving the system communicating with you the ability to impersonate you. It's not hard at all; we should have been doing this years ago. It's described in Bruce Schneier's Applied Cryptography from twenty fucking years ago: Chapter 21 (Identification Schemes) covers "zero-knowledge proof of identity". Curiously, the researchers Feige, Fiat, and Shamir submitted a patent application for this in 1986, but the Patent Office responded that "the disclosure or publication of the subject matter ... would be detrimental to the national security..." The authors were ordered to notify all Americans to whom the research had been disclosed that unauthorized disclosure could lead to two years' imprisonment, a $10,000 fine, or both. Somewhat hilarious, as the work was all done at the Weizmann Institute in Israel.
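
For the curious, here is a toy sketch of that kind of zero-knowledge identification (roughly the Fiat-Shamir protocol Schneier describes). All the numbers below are made-up toy values; a real device would use a properly generated RSA-sized modulus and hardware randomness.

```python
# Toy sketch of Fiat-Shamir identification: the prover convinces the verifier it
# knows the secret s without ever transmitting s. Numbers are illustrative only.
import secrets

n = 10007 * 10009            # toy modulus (product of two primes); far too small for real use
s = 1234                     # secret held inside the bracelet, never sent anywhere
v = pow(s, 2, n)             # public value registered with the verifier

def prove_identity(rounds: int = 20) -> bool:
    """Each round lets a cheater pass with probability 1/2, so 20 rounds ~ one in a million."""
    for _ in range(rounds):
        r = secrets.randbelow(n - 2) + 1
        x = pow(r, 2, n)                         # prover's commitment
        b = secrets.randbelow(2)                 # verifier's random challenge bit
        y = (r * pow(s, b, n)) % n               # prover's response; reveals nothing about s
        if pow(y, 2, n) != (x * pow(v, b, n)) % n:
            return False                         # verifier rejects
    return True

print(prove_identity())      # True: identity proven without disclosing the secret
```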

That said, I do think that groups like the NSA and FBI have been quite successful in keeping people (like Jeff4747) remarkably uneducated. Banks, credit card companies, and groups like Google that make gigabucks tracking people have held back from doing things right as well -- and they're paying for it today.

To say it again: it is easy to build a system that securely verifies you have the authority to do something, without giving anybody else the ability to impersonate you. It's somewhat more challenging than printing a number in plastic on a credit card, but only a tiny bit more challenging.

This will happen. Once it does people will wonder why it took so long.

Comment Re:cryptobracelet (Score 1) 116

The problem with phones is that you can lose them or break them or have them stolen. I agree that it's a good place to start, though.

I believe the RFID tags that Coren22 suggests don't have, and can't have, the processing power required to do this right. You don't want to say "Yes, I'm 132132123123"; that would be *way* too easy to fake. You want a back-and-forth communication that shows you are who you are, without giving away your ID.
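
To make the contrast with a static ID concrete, here is a minimal challenge-response sketch (not the actual NFC/EMV protocol) using the third-party Python cryptography package; the key and variable names are made up for illustration.

```python
# Minimal challenge-response sketch: the reader sends a fresh random challenge and
# the bracelet signs it, so a recorded exchange can never be replayed. Illustrative
# only; a real deployment would follow an established protocol.
import secrets
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

bracelet_key = Ed25519PrivateKey.generate()      # private key stays inside the bracelet
registered_pub = bracelet_key.public_key()       # what the bank/reader has on file

challenge = secrets.token_bytes(32)              # reader: fresh, never-reused challenge
response = bracelet_key.sign(challenge)          # bracelet: proves possession of the key

try:
    registered_pub.verify(response, challenge)   # reader: old responses fail on a new challenge
    print("verified")
except InvalidSignature:
    print("rejected")
```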

I think the bracelet would become a status symbol -- the status being "yeah, I care about security." I'm actually not kidding.

Comment cryptobracelet (Score 1) 116

At some point, and my guess is pretty darn soon, reasonable people are going to have a very secure cryptobracelet that they never take off, or that, once taken off, never works again.

The bracelet would work like the NFC chip in current phones: it would create unique identifiers for each transaction, so you can be verified as being who you are without ever broadcasting your identity.

Then all email and every other communication can easily be encrypted, securely and without adding complication. You won't have to worry about remembering a hundred passwords, or about what happens when the store you bought things from is hacked, or that a library of 100 million passwords will include yours.

I grant that some will protest that this is not natural ("I don't want to wear something on my wrist!"), but people do a hundred other unnatural things every day (brush their teeth, use deodorant, wear glasses, live longer than fifty years...). The benefits will be enormous, the changes minimal, and this will be led, I believe, by thought leaders.

Comment Next step -- VMT (Score 3, Insightful) 114

The problem with license plate readers is that there are only so many cameras out there. How can they know where everybody was all the time?

The answer is the Vehicle Miles Traveled (VMT) tax. Many states and the federal gov't have proposed, over and over, that all cars carry GPS trackers that tax them on how many miles they drive. They say "the problem is cars are more efficient, so we don't make as much money" (can't you just raise the rate, then? wtf?), or that this is "more fair": everybody is charged the same amount for how far they drive, as opposed to how much gas they use and how much carbon they emit.

But, come on, the real reason is almost certainly to track where everybody went, all the time. If there is anything the Snowden revelations have demonstrated, it's that if there is any possible way to capture data on people, the government is going to do it. Anything you can imagine, and many things that you could never have imagined, are being done. If you want to believe that a GPS tracker that hooks up to a gas pump only sends one bit of information, well, I suppose you deserve what you get.

Comment Re:Schneier got it right a decade and a half ago (Score 1) 119

Yes, Java, Python 3, and Qt are all causing enormous difficulties because they followed Microsoft down the fantasy road and decided that strings had to be converted to "Unicode" on input or it was somehow impossible to use them. Since not all 8-bit strings can be converted, there must either be a lossy conversion or an error, neither of which is expected, especially if the software is intended to copy data from one point to another without change.
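
A small Python illustration of that dilemma (the byte string is made up): an arbitrary byte string need not be valid UTF-8, so a forced conversion must either raise or silently lose data.

```python
# Arbitrary bytes need not be valid UTF-8, so converting them means either an
# error or a lossy substitution -- exactly the two unexpected outcomes above.
raw = b"\xffHELLO\xfe"                           # perfectly good bytes, not valid UTF-8

try:
    raw.decode("utf-8")                          # strict conversion: raises
except UnicodeDecodeError as e:
    print("error:", e.reason)

print(raw.decode("utf-8", errors="replace"))     # lossy conversion: original bytes are gone
```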

The original poster is correct in saying "stay away from Unicode". This does not mean Unicode is impossible; it means "treat it as a stream of bytes". Do not try to figure out which Unicode code points are there unless you really, really have a reason to, and you will be surprised how rarely you need to. In particular, you can search for arbitrary regexps (including sets of Unicode code points) with a byte-based regexp interpreter, and you can search for ASCII characters with trivial code.
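
As a sketch of what that byte-based searching looks like (Python, with a made-up snippet of data), note that nothing below ever decodes the text:

```python
import re

# The data is treated purely as bytes; it is never decoded.
data = "Start … état ERROR über end".encode("utf-8")

# Searching for an ASCII keyword needs nothing special:
ascii_hits = [m.start() for m in re.finditer(rb"ERROR", data)]

# A set of specific Unicode code points can be matched by encoding each one to its
# UTF-8 byte sequence and alternating them -- still a purely byte-based search:
pattern = b"|".join(re.escape(ch.encode("utf-8")) for ch in ("é", "ü", "…"))
unicode_hits = [m.start() for m in re.finditer(pattern, data)]

print(ascii_hits, unicode_hits)                  # byte offsets of every match
```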

Comment Re:Type "bush hid the facts" into Notepad. (Score 1) 119

Actually, Plan 9 and the UTF-8 encoding existed well before Microsoft started adding Unicode to Windows.

The reason for 16-bit Unicode was political correctness. It was considered wrong that Americans got the "better", shorter one-byte encodings for their letters, so any solution that did not punish those evil Americans by making them rewrite their software was not going to be accepted. No programmer at the time (including those who did not speak English) would have argued for anything other than a variable-length byte encoding for a system that still had to deal with existing ASCII software and data; this was a command from people who did not have to write and maintain the software.

The programmers, who knew damn well that variable-length was the correct solution, were unfortunately not bright enough to avoid making mistakes in their encodings (such as not making them self-synchronizing). UTF-8 fixed that, but these errors also led some of the less-knowledgeable to think there was a problem with variable length.

Unfortunately political correctness at Microsoft won, despite the fact that they had already added variable-length encoding support to Windows. It may also have been seen as a way to force incompatibility with NFS and other networked data so that Microsoft-only servers could be used.

One of the few good things to come out of the "Unix wars" was that commercial Unix development was stopped before the blight of 16-bit characters was introduced (it was well on its way and would have appeared at about the same time Microsoft did it). Non-commercial Unix made the incredibly easy decision to ignore "wide characters".

The biggest problem now is that Windows convinced a lot of people who should know better that you need to use UTF-16 to open files by name (all that is really needed is to convert from UTF-8 just before the API is called). This let UTF-16 infect Python, Qt, Java, and a lot of other software, causing problems, headaches, and bugs even on Linux. There is some hope that they are starting to realize they made a terrible mistake; Python in particular seems to be backing out by storing a UTF-8 version of the string alongside the UTF-32.
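
A hedged sketch of the "convert only at the boundary" idea, in Python: the program carries file names as UTF-8 bytes everywhere, and a thin wrapper produces UTF-16 only for the one Windows call that demands it. GetFileAttributesW is just an illustrative API, and the wrapper name is made up.

```python
import ctypes
import os
import sys

def file_exists(path_utf8: bytes) -> bool:
    """Hypothetical wrapper: UTF-8 bytes in, conversion done only at the OS boundary."""
    if sys.platform == "win32":
        kernel32 = ctypes.windll.kernel32
        kernel32.GetFileAttributesW.restype = ctypes.c_uint32
        # ctypes passes a Python str as a UTF-16 wide-character pointer to *W APIs
        attrs = kernel32.GetFileAttributesW(path_utf8.decode("utf-8"))
        return attrs != 0xFFFFFFFF               # INVALID_FILE_ATTRIBUTES
    return os.path.exists(path_utf8)             # POSIX APIs take the bytes directly

print(file_exists("café.txt".encode("utf-8")))
```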

Comment Re: novice programmer alert! (Score 1) 119

The big downside of UTF-8 is using it as an in-memory string. To find the nth character you have to start at the beginning of the string.

And this is important, why? Can you come up with an example where you actually produce "n" by doing anything other than looking at the n-1 characters before it in the string? No, and therefore an offset in bytes can be used just as easily.

C# and Java use UTF16 internally for strings.

And you are aware that UTF-16 is variable-length as well, and therefore you can't "find the nth character" quickly either?

You might want to retake compsci 101.
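
A small Python illustration of both points: a byte offset found by a byte-based search can be used directly on the UTF-8 string, and UTF-16 has no constant-time "nth character" either.

```python
s = "naïve 🚀 test"
utf8 = s.encode("utf-8")

# An offset in bytes, produced by a byte-based search, is all you need; nothing
# ever has to count characters from the start of the string:
off = utf8.find("🚀".encode("utf-8"))
print(utf8[off:].decode("utf-8"))                # -> "🚀 test"

# UTF-16 is variable-length too: the emoji takes a surrogate pair, so indexing
# by 16-bit code unit does not give you the nth character either.
utf16_units = len(s.encode("utf-16-le")) // 2
print(len(s), "code points but", utf16_units, "UTF-16 code units")   # 12 vs 13
```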

Comment Re:Type "bush hid the facts" into Notepad. (Score 1) 119

Maybe you're willing to accept that ambiguity, and use the rule, "If the file looks like valid UTF-8, then use UTF-8; otherwise use

Yay! You actually got the answer partially correct. However you then badly stumble when you follow this up with:

8-bit ANSI, but under no circumstances UTF-16

The correct answer is "after knowing it is not UTF-8, use your complicated and error-prone encoding detectors".

The problem is that a whole lot of stupid code, in particular from Windows programmers, tries all kinds of matching against various legacy encodings and UTF-16, and only tries UTF-8 if all of those fail. This is why Unicode support still sucks everywhere.

You try UTF-8 FIRST, for two reasons. First, UTF-8 is really popular and thus likely the correct answer (especially if you count all ASCII files as UTF-8, which they are). Second, a random byte stream is INCREDIBLY unlikely to be valid UTF-8 (something like a 2.6% chance for a two-byte file, and geometrically lower for anything longer), which means your decision of "is this UTF-8?" is very, very likely to be correct. Just moving this highly reliable test to the front will improve your detection enormously.

The biggest help would be to check for UTF-8 first, not last. That would fix "Bush hid the facts", because the file would be identified as UTF-8. A variation on the bug would still exist if you stuck a non-ASCII byte in there, in which case it would still be useful (but much, much less important) to not do stupid things in the detector; for instance, requiring UTF-16 to either start with a BOM or contain at least one 16-bit word whose high or low byte is all zero would be a good idea and would indicate you are not an idiot.
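
A toy sketch of a detector with that ordering, in Python; the legacy fallback encoding is a placeholder, not a recommendation.

```python
def guess_encoding(data: bytes) -> str:
    """Try UTF-8 first; believe UTF-16 only with a BOM; fall back to legacy guessing."""
    try:
        data.decode("utf-8")             # strict decode: random non-UTF-8 data almost never passes
        return "utf-8"                   # pure ASCII lands here too, which is fine
    except UnicodeDecodeError:
        pass
    if data[:2] in (b"\xff\xfe", b"\xfe\xff"):
        return "utf-16"                  # only believe UTF-16 when there is an explicit BOM
    return "cp1252"                      # stand-in for the complicated, error-prone legacy detectors

print(guess_encoding(b"Bush hid the facts"))     # -> utf-8, never misdetected as UTF-16
```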

Comment Re:projecting UV images from below liquid resin? (Score 1) 95

Thank you. I just couldn't understand it; although clearly the clues were there and you interpreted them correctly.

So the UV light goes through the bottom window, through the oxygen-rich zone that will not polymerize. When the light gets through that zone, it polymerizes the resin. The polymerized resin must block the light from going deeper into the liquid resin.

If you have a thick part, though, I wonder whether this could work. New unpolymerized resin would have to flow into the gap between the hardened part and the window, and this 'dead zone' is only microns thick. Now, I do believe that most 3D-printed parts aren't solid blocks, but this could be a limitation.

Still, it looks quite cool. I'm sure I'm not alone in wanting to build stuff with it!
