Comment Re:"Technologically impossible?" (Score 3, Interesting) 219

"we'll probably figure out how create a system that uses authenticated electronic ledgers to prevent fraudulent tampering (blockchains, etc) while still preserving anonymity."

We'll probably not.

This is not impossible. In fact, it is a solved problem: blind signatures can be used to do this. I actually designed and mostly implemented such a system: source and docs here. I also was not the first to do this (David Chaum deserves far more credit than I do; his contributions to cryptography have enabled many amazing things, including my little experiment).
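
For the curious, here is a minimal Python sketch of the RSA blind-signature idea (this is the textbook Chaum construction, not my actual implementation, and the key sizes are toy values for illustration only):

```python
import hashlib
import math
from secrets import randbelow

# Toy RSA key (tiny primes for illustration only; real keys are >= 2048 bits)
p, q = 10007, 10009
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))      # modular inverse (Python 3.8+)

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

ballot = b"candidate: Alice"

# Voter blinds the ballot hash so the authority never sees what was voted
r = 0
while math.gcd(r, n) != 1:
    r = 2 + randbelow(n - 3)           # random blinding factor coprime to n
blinded = (h(ballot) * pow(r, e, n)) % n

# Authority checks the voter is eligible and hasn't voted yet, then signs blind
blind_sig = pow(blinded, d, n)

# Voter unblinds; the result is a valid signature on h(ballot) itself
sig = (blind_sig * pow(r, -1, n)) % n
assert pow(sig, e, n) == h(ballot)     # anyone can verify the anonymous ballot
```

The key property: the authority signs once per eligible voter without learning the ballot contents, and the unblinded signature verifies against the ballot alone.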

That system lets everyone vote exactly once, maintains the secret ballot, and gives voters the tools to confirm their vote was counted; if it was not, they can cryptographically prove that to the media or to any available auditors.

However, it also makes buying and selling votes very robust and easy. Without an isolated voting booth, there really isn't any hope of making it impractical to sell your vote, or to force people to vote particular ways. This is as important as the secret ballot: both are requirements for our electoral systems.

I have designed electoral systems that use a voting booth, paper records, and some cryptographic verifiability, and that are resistant to coercion and vote selling/buying, which makes me think there may be room for improvement in this area. However, paper ballots and voting booths are pretty close to ideal: the simple paper system is also easier for people to trust and verify, which is very important for elections.

Comment Re:22,338,618 digits (Score 4, Insightful) 132

The number is 2^74,207,281-1, thus it's exactly 74,207,281 bits long and all those bits are 1. That's just over 9,275,910 bytes, or roughly 9 MiB. When talking about Mersenne primes on a tech site, using the base-10 representation encoded as ASCII (or UTF-8; it's the same for that subset) seems like an odd measure of size.
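
A quick sanity check of those sizes (the digit-count formula uses the standard floor(n*log10(2))+1 identity):

```python
import math

exponent = 74_207_281

bits = exponent                      # 2**n - 1 is an n-bit number, all ones
print(bits / 8 / 2**20)              # ~8.85 MiB of raw binary

digits = math.floor(exponent * math.log10(2)) + 1   # decimal digit count
print(digits)                        # 22338618, matching the headline
```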

Comment Re:80% Slower? (Score 4, Informative) 161

80% slower = 20% of the speed = 5 times slower (takes 5 times as long to do something) = "500% slower" by the other reading.

OR

80% slower = takes 80% longer to do something = 1/1.8 ≈ 0.556 times the original speed = "44% slower" by the other reading.

See, ambiguous. There are two well-defined definitions of "80% slower": gets 80% less done per unit time, or takes 80% longer to get the same thing done. They have very different meanings. Under one of them, 100% slower means getting nothing done ever; under the other, it means taking twice as long (you wait an extra 100% of the original time). The same applies to "100% faster": one reading means infinitely fast, the other means twice as fast.
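
The two readings, spelled out as arithmetic (starting from a task that takes 1.0 time units):

```python
base = 1.0

# Reading 1: 80% less work per unit time, i.e. 20% of the original speed
time_less_speed = base / 0.20
print(time_less_speed)               # 5.0 -> takes 5x as long

# Reading 2: takes 80% longer to finish the same work
time_longer = base * 1.80
speed_ratio = base / time_longer
print(round(1 - speed_ratio, 3))     # 0.444 -> only "44% slower" by reading 1
```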

Comment That's the goal. (Score 5, Insightful) 266

Any good software architect or engineer should have the goal of minimizing the code and work needed for a project. If that takes metaprogramming, fine. If it requires creating general-purpose runtimes (such as auto-optimizers, anywhere from simple hill climbers to large neural nets), that's fine too. If the general-purpose runtimes can code, and are thus metaprograms, great.

The idea of declarative programming for specialized runtimes is nothing new. If you apply it to a general runtime that can do programming, you then have a system that produces functions meeting specs: programming moves to producing whatever declarative specs such a system consumes. Once again, it's just a move to a higher-level language and abstraction. If (and that's a big if) it becomes trivial to write in such a language, and all of us coders no longer need skill or experience to develop new applications and we all become unemployed, well, that's the goal, right: make developing your applications trivial?

Every advancement we make, from assemblers to higher-level languages (like C) to all those language paradigms (OOP, functional, generics/templates, etc.), is supposed to help with this. So are libraries. We make software hundreds of thousands of times more complex than we used to because of these advances. Much of today's software may be trivialized by coming advancements, just as much old software has been since we started programming. Maybe we will just keep making software more complex, or maybe we will create more different applications, or maybe we will finally have time to catch up, optimize, and fix all the broken shit. Or most of us could become unemployed because we have enough complexity, and new tools will make the work needed go down, not up.

Comment Re:Ppl who don't know C++ slamming C++ (Score 2) 200

Why is inheritance a bad idea?

It couples polymorphism/dynamic dispatch to implementation sharing. These concepts can be used orthogonally through other means, which gives much better flexibility. One example is what Go does: interfaces for polymorphism/dynamic dispatch, and composition for implementation sharing. You can do this style in C++ as well, and it often works out better than massive type hierarchies. It's much easier to refactor (and thus requires less upfront design), avoids the problems of multiple inheritance, and can get you lower coupling too.
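
A sketch of that Go-style split (shown in Python for brevity, with made-up class names; structural `Protocol` stands in for Go interfaces, and a held member stands in for embedding):

```python
from typing import Protocol

class Renderer(Protocol):            # polymorphism: anything with render() fits
    def render(self) -> str: ...

class Border:                        # shared implementation, not a base class
    def wrap(self, text: str) -> str:
        return f"[{text}]"

class Button:
    def __init__(self) -> None:
        self.border = Border()       # composed: Button *has* a Border
    def render(self) -> str:
        return self.border.wrap("OK")

def draw(widget: Renderer) -> str:   # dispatches on the interface alone
    return widget.render()

print(draw(Button()))                # [OK]
```

`Button` never inherits from `Border` or `Renderer`, so swapping the shared code or the dispatch mechanism touches only one axis at a time.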

Comment Re:BCP38 (Score 2) 312

BCP38 stops people from using fake IP addresses. That does not solve the problem in general: if my pipe (or collection of pipes) is bigger than your pipe, I can still destroy your service. While it seems like many people here don't think you can do better, there are some more options.

First, let me say this is not my field. It's been a couple of years since I studied BGP, but since I don't see anyone posting robust solutions, I'll provide my hand-waving arguments and proposals. I will not claim any of this is practical, but I do think it is technically possible (costs and performance aside).

Note that other limited-bandwidth fair-delivery systems (email, physical mail, Tor hidden services, etc.) all have the same set of problems.

Given that you can purchase DoS attacks (distributed or not), there is the question of whether enabling or performing them should be illegal. The legal solution either doesn't work or bans proxies like Tor, so I don't consider it a valid solution.

There is also the approach of changing service models: some services can operate by simply publishing messages (DNS is an example, but so are news sites without user interaction). These don't have to depend on the packet-switched network directly, and can use distributed content-based models like Freenet that get around the problem of having to host your own stuff. I don't see how this generalizes very well (there's lots of overhead and latency), but Bitmessage is an interesting example of using publication plus encryption for private messaging.

There are also cost- and payment-based approaches: suppose I had to pay the cost of delivering and processing my packet in order to send it to you, or provide some significant proof of work. Ripple does this, but that's just one example. I'm not aware of ideas for how to scale this to IP scales (the stateless, packet-based nature makes it really hard). I think Skycoin is trying, but they aren't far enough along to convince me it's possible.

Now for my crazy ideas. Suppose we could deploy a pay-for-processing system to establish a session with the service, after which we could continue to use the session, and perhaps resume it long into the future (e.g. you get a shared secret or a private key and put it in a cookie). Once authenticated, you could then use a network that only allows solicited traffic. This is possible: for example, once connected to a Tor hidden service, you have a route to it which can be closed off if you abuse it, but other people are unable to flood the route, and the service could stop accepting new routes, leaving your existing connections safe from DDoS. Tor hidden services don't have a DDoS-resistant way to "sell" routes/sessions, but that could be fixed (send bitcoins to the proper address, and the details you need will be published, encrypted with your public key; of course the Bitcoin network has DDoS problems of its own, but let's not recurse here. Assume you have something DDoS-resistant to fill that role, maybe like Ripple).

So we have a proposed design that solves this problem for Tor hidden services. We should now be able to exploit the homomorphisms here and fix the internet (IP) and email (left as an exercise for the reader, but I recommend adding a key-selling scheme to Pond if you care about privacy, because screw plain-text email). Focusing on the internet: if you then had a system where only people holding valid sessions could transmit to the service (at some other IP or set of IPs), enforced at entry to the internet (ISP level, I guess), then you have a setup where DDoS can't affect existing users, which is a huge win. If you also make the auth system DDoS-resistant (it needs some payment or proof-of-work setup), then the whole thing is pretty much DDoS-resistant.

So how can we filter traffic based on permission/keys/session to particular addresses? We could allocate some massive block of IPv6 addresses (that's cheap, right?). In the trivial case, the user just computes the address from public information plus their private key/data. The set of addresses that actually work is large, and if particular ones leak and start getting hit by DDoS, they simply expire and get removed from the routing tables; anyone using them legitimately needs to re-auth and get a new session, or use some new info plus their secret data to compute a new address. I assume that is way too abusive of routing tables, but you may be able to keep it localized if the entire range you might use gets routed together to your edge routers.
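
A hand-wavy sketch of that "compute the address from public info plus your secret" step (the prefix, epoch label, and key below are all made-up illustrative values, and HMAC-SHA-256 is my choice of keyed function, not part of the proposal above):

```python
import hashlib
import hmac
import ipaddress

PREFIX = ipaddress.IPv6Network("2001:db8:1234:5678::/64")  # service's block

def session_address(public_info: bytes, client_secret: bytes) -> ipaddress.IPv6Address:
    tag = hmac.new(client_secret, public_info, hashlib.sha256).digest()
    iid = int.from_bytes(tag[:8], "big")        # low 64 bits: interface ID
    return ipaddress.IPv6Address(int(PREFIX.network_address) | iid)

addr = session_address(b"epoch-42", b"per-session secret")
assert addr in PREFIX
# The whole /64 routes to the service's edge, which recomputes the HMAC for
# each active session and drops packets addressed to anything it can't match.
```

Rotating the public info (the "epoch") retires leaked addresses without touching anyone's secret.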

If you are willing to make some bigger changes, things get more interesting, and you can smite the bad traffic on entry to the internet, not on exit. Each special (normally unroutable) destination address could have a cryptographic ring associated with it that contains several users' keys. The idea is that we permit some fixed set of users to send to a given address, and provide a mechanism for them to authenticate. Due to the connectionless nature at this level, either all packets must be special (containing a signature or token of some kind; in the previous trivial example, the token is the special destination address itself), or they must be preceded by a special packet. Either way, it would be possible to randomly sample packet streams (in the backbone, or on delivery) and determine whether this enforcement was omitted by particular entry points (which would be handled by not forwarding their data to such protected addresses). Given that implementation would have major costs, being able to enforce it is important (then it becomes a feature an ISP can advertise).

So I think such DDoS-resistant networks can exist. Now I'd like one.

Comment Re:Use Bitcoin Blockchain technology.. (Score 1) 388

I guess I should mention my optional verifiability extension for voting booths: the voting system gives you a receipt with a random(*) ID generated for each of the items on the ballot. The table of results (a list of all ballots, broken up by item so you can't correlate between separate items), when published online, will contain these IDs, so you can find your vote and make sure it's there.

(*) It turns out you can do better than random (random has the issue that the system could cheat and give two voters the same ID, so they both think the same vote is theirs). However, if they let the voter choose the ID, it opens a hole for coercion (someone can demand you mark your ballot with a specific ID they provide you with). A good choice: you give the election system a SHA hash of your random number (arbitrary, really; randomness isn't important), then they give you back a random number which you XOR with your number and use as the ID. This means you have no control over the resulting ID (preventing coercion), but neither does the voting system (preventing it from giving multiple voters the same ID).
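
As code, that commit-then-XOR exchange is just a few lines (a sketch of my own description above, with SHA-256 as the hash):

```python
import hashlib
from secrets import token_bytes

# 1. Voter commits to a value before seeing the machine's number.
voter_value = token_bytes(16)
commitment = hashlib.sha256(voter_value).hexdigest()   # handed to the machine

# 2. The machine, knowing only the commitment, picks its own random number.
machine_value = token_bytes(16)

# 3. The receipt ID is the XOR: neither party controlled it alone, so the
#    machine can't give two voters the same ID, and a coercer can't dictate it.
receipt_id = bytes(a ^ b for a, b in zip(voter_value, machine_value))
print(receipt_id.hex())
```

If the machine later misbehaves, the voter can reveal `voter_value` against the stored commitment to prove which ID was theirs.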

It's worth noting that this extension is optional on a per-voter basis: you can allow any user of the voting booth to verify that their vote is in the published list and thus counted, while anyone else can vote as normal (on paper if they want) and just trust it if they desire. There is the issue that this requires some computation by the voter, but it turns out all they need prepared is a random string and a hash of it, which can be prepared in advance (e.g. as QR codes) by whatever software they trust. All the information they need to take home to verify it can be printed out as a receipt. You can't let them use phones and such in there, since those can serve as recording devices, which opens up another coercion hole (you can ask someone to film themselves casting a vote for X).

Warning: this design is my original work. It may be horribly flawed. I am not an expert on such topics.

Comment Re:Use Bitcoin Blockchain technology.. (Score 2) 388

It's easy to design an anonymous verifiable voting system using crypto (I don't think your proposed way solves all the problems, mainly secret ballots, but yes, there are ways). However, it's hard (if possible at all) to make one where selling your vote isn't equally verifiable: if I can prove my vote was or wasn't counted, generally I can prove to a third party how I voted, and thus I can sell them my vote. If this proof can be fully automated and done anonymously via Bitcoin, vote selling would become super easy and completely safe. So would other forms of coercion.

I put some time into implementing such a system, but as documented here in the readme, there are basically unsolvable problems in the coercion and vote-selling area. I haven't worked on the project much since a kind /. commenter pointed out the severity of the issue and I was unable to come up with a solution (thus it's in an unfinished, unusable state). If you have anything better, please let me know. I hadn't thought of using the blockchain (good idea!), but I think I found alternative solutions for the problems it solves.

Since then I've spent some time trying to design systems with somewhat different tradeoffs, but I haven't gotten anything really better than voting booths plus some verifiability that your vote is counted, while keeping the secret ballot (a requirement for resisting coercion attacks).

You are clearly correct that just about anything (including your design) is better than the current electronic voting systems. Closed-source, uninspectable systems that don't offer verifiability or even auditability are a joke, and advocating for or deploying them should be treason (it's worse than what Snowden did).

Submission + - How Nigeria Stopped Ebola

HughPickens.com writes: Pamela Engel writes that Americans need only look to Nigeria to calm their fears about an Ebola outbreak in the US. Nigeria is much closer to the West Africa outbreak than the US is, yet even after Ebola entered the country in the most terrifying way possible — via a visibly sick passenger on a commercial flight — officials successfully shut down the disease and prevented widespread transmission. If there are still no new cases on October 20, the World Health Organization will officially declare the country "Ebola-free." Here's how Nigeria did it.

The first person to bring Ebola to Nigeria was Patrick Sawyer, who left a hospital in Liberia against the wishes of the medical staff and flew to Nigeria. Once Sawyer arrived, it became obvious that he was ill when he passed out in the Lagos airport, and he was taken to a hospital in the densely packed city of 20 million. Once the country's first Ebola case was confirmed, Port Health Services in Nigeria started a process called contact tracing to limit the spread of the disease and created an emergency operations center to coordinate and oversee the national response. Health officials used a variety of resources, including phone records and flight manifests, to track down nearly 900 people who might have been exposed to the virus via Sawyer or the people he infected. As soon as people developed symptoms suggestive of Ebola, they were isolated in Ebola treatment facilities. Without waiting to see whether a "suspected" case tested positive, Nigeria's contact tracing team tracked down everyone who had had contact with that patient since the onset of symptoms, making a staggering 18,500 face-to-face visits. The US has many of these same procedures in place for containing Ebola, making the risk of an outbreak here very low. Contact tracing is exactly what is happening in Dallas right now; if any one of Thomas Eric Duncan's contacts shows symptoms, that person will be immediately isolated and tested. "That experience shows us that even in the case in Nigeria, when we found out later in the timeline that this patient had Ebola, that Nigeria was able to identify contacts, institute strict infection control procedures and basically bring their outbreak to a close," says Dr. Tom Inglesby. "They did a good job in and of themselves. They worked closely with the U.S. CDC. If we can succeed in Nigeria I do believe we will stop it here."

Comment Re:don't get it (Score 3, Interesting) 220

You're doing it wrong. It's trivial to set up PBKDF2-RIPEMD160 rainbow tables just as with any other encryption or hashing algorithm. You're still going to try decrypting the same root directory block with the IKs until you get back a valid block, at which time you can decrypt the whole volume with the IK and do a reverse lookup to get the original password as a bonus.

Just use a salt, and that problem is solved. It forces the attacker to incur the full derivation cost for every different drive (making precomputed tables useless). A reverse hash table for all possible 160-bit outputs wouldn't fit in the observable universe, so that's not a real threat.
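
To make the salt argument concrete, here's a sketch using Python's stdlib PBKDF2 (SHA-256 stands in for RIPEMD-160, since hashlib often lacks the legacy hash; the password and iteration count are illustrative):

```python
import hashlib
import os

password = b"hunter2"
salt = os.urandom(16)       # stored in the volume header; need not be secret

key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
key_again = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
key_other_salt = hashlib.pbkdf2_hmac("sha256", password, os.urandom(16), 100_000)

assert key == key_again          # deterministic given the same salt
assert key != key_other_salt     # a precomputed table for one salt is useless
                                 # for any other drive
```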

Comment wait, what? (Score 2) 220

Increasing security is counterproductive because it enables people who suck at security to have better security? Making it easier to have better security should be a goal, not something to avoid. It's not a big difference in this case, but I see no reason to oppose an improvement simply because it's an improvement. It's not like only us crypto nerds deserve security.

The only point of running multiple rounds of the key derivation function is to increase the brute-force cost. While you may argue that the extra 10x-300x isn't that great, the total 300,000x is pretty darn useful. It can turn a day-long attack into over 800 years. That's worth about 18 bits of entropy: ~7 characters of a typical naive password, or ~3 characters of a good (random base-64) password. Sure, all this does is protect people with weak passwords, but that's almost everyone. If you can get them real security despite that, it's a big deal. Updating this to stay as beneficial as practical as processor speeds increase is standard practice, not something to complain about. These are basically free benefits, and if we don't take them, our security will degrade as performance improves.

Comment More iterations allows shorter passwords. (Score 1) 220

For a given security level, more iterations means you can have a shorter password. In this case, if it really is 300 times slower to try a password in a brute-force or dictionary attack, you can drop log2(300) ≈ 8.2 bits of entropy. According to xkcd 936, typical naive passwords have ~28 bits / 11 characters ≈ 2.55 bits of entropy per character. This means you can drop ~log2(300) / (28/11) ≈ 3.2 characters from your password and keep the same security. Alternatively, you could keep the same password, and it's as good as if it were 3.2 characters longer. Note: this assumes the best case of 300 times harder and crappy passwords. Realistically it's less effective than that, but you get the idea.
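
The arithmetic, spelled out (the 2.55 bits/character figure is the xkcd 936 estimate quoted above, not a measured value):

```python
import math

extra_bits = math.log2(300)          # 300x work multiplier -> equivalent bits
bits_per_char = 28 / 11              # naive-password entropy per character

print(round(extra_bits, 1))                  # 8.2 bits
print(round(extra_bits / bits_per_char, 1))  # 3.2 characters you could drop
```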
