First, DES uses a 56-bit key. Triple DES under keying option 1 (the authorised standard) uses a 168-bit key. The article fails to distinguish between the two, which implies the authors are just a little bit naff. 3DES seems to be quite safe, as long as it isn't used in single-DES emulation mode. And who the hell emulates a mode that was broken back in the 90s?
Second, Blowfish was superseded long ago, first by Twofish and then by Threefish (with lightweight designs like Speck appearing since). Skein, an entrant in the SHA-3 competition, is built around Threefish.
Third, the Wikipedia page states that the dangers of weak keys have been known for a long time. This particular attack, though, is a birthday attack against the 64-bit block size. You can find lists of which ciphers are vulnerable and which are safe to use; anything not on those lists is something you are solely responsible for vetting yourself.
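To put rough numbers on why a birthday attack bites 64-bit-block ciphers but not 128-bit ones, here is a small sketch using the standard approximation p ≈ 1 − e^(−n²/2N). The function name and the 2³² figure are illustrative, not taken from the article:

```python
import math

def birthday_collision_probability(n_blocks: int, block_bits: int) -> float:
    """Approximate probability of at least one repeated ciphertext block
    after observing n_blocks blocks (classic birthday approximation).
    expm1 keeps precision when the probability is tiny."""
    n_space = 2 ** block_bits
    return -math.expm1(-(n_blocks ** 2) / (2.0 * n_space))

# 64-bit block ciphers (DES, 3DES, Blowfish) vs a 128-bit block cipher (AES),
# after seeing 2^32 blocks of traffic under one key.
p64 = birthday_collision_probability(2 ** 32, 64)    # ~0.39
p128 = birthday_collision_probability(2 ** 32, 128)  # astronomically small
print(f"64-bit block:  {p64:.3f}")
print(f"128-bit block: {p128:.3e}")
```

The point of the sketch: with a 64-bit block, a few gigabytes of traffic under one key already gives better-than-coin-flip odds of a colliding block, which is what the attack exploits.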
In other words, this information is about as useful as telling us that Model T Fords weren't good at cornering at highway speeds. Below are some links; I can't be buggered to HTML-ify them.
I do not trust most encryption software these days, but that's because too many programmers are sloppy and arrogant.
But what do I know?
As for my absence: I've been a bit overwhelmed by work stuff. Sorry about that; it's no excuse.
Why must you record my phone calls?
Are you planning a bootleg LP?
Said you've been threatened by gangsters
Now it's you that's threatening me
Can't fight corruption with con tricks
They use the law to commit crime
And I dread, dread to think what the future will bring
When we're living in gangster time
...you want something akin to Mondex cards, only with all the knowledge that has been developed since on contactless payments and strong access security. Once you have cards that require no network, no central bank and no other external dependencies beyond the communications protocol, there is nothing rogue officials can do to confiscate your money.
For those not aware of the history of cashless societies, Mondex had tamper-resistant, strongly encrypted cards that could act like cash: you could transfer money directly between cards, and any attempt to break into the device destroyed it, so there was no risk of anyone rolling a card back to a prior state. This did mean only one vendor made the cards, but we've come a long way since then. The Orange Book and the Common Criteria EAL levels cover tamper-proofing and unauthorised writes to memory; other standards cover application software design and protocol design. All you need is for card vendors to get certified against the general standards, the financial transaction standards and the standards specific to some open specification. Vendors could then get their encryption keys signed by such a standards verification body. So it would be a procedure similar to the old Class 3 SSL certificates, but with all the extra verification layers you'd expect from the FAA or DoD.
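The core invariant such a card enforces in hardware can be shown in a toy model. Everything here is invented for illustration (a real card does this inside tamper-resistant silicon, not in Python): value moves between cards, is never created, and a transfer either happens completely or not at all.

```python
class Card:
    """Toy model of a Mondex-style stored-value card. Purely illustrative:
    real cards enforce these rules in tamper-resistant hardware."""
    def __init__(self, balance: int):
        self.balance = balance  # smallest currency unit, e.g. pence

def transfer(sender: Card, receiver: Card, amount: int) -> None:
    """Move value card-to-card. Reject anything that would create money
    or overdraw the sender; the debit/credit pair is atomic on real hardware."""
    if amount <= 0 or amount > sender.balance:
        raise ValueError("invalid transfer")
    sender.balance -= amount
    receiver.balance += amount

a, b = Card(1000), Card(0)
transfer(a, b, 250)
print(a.balance, b.balance)  # 750 250
```

Note the total across all cards never changes; that conservation property, enforced in hardware, is what lets the cards act like cash without a central ledger.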
You now have cashless, bankless, networkless anonymous financial activity on a par with the fictional Shadowrun series, only a good deal more secure and without having to physically transfer objects. Contactless transfers using unlicensed spectrum at very low power would require the sender to be in range of the intended receiver and to press some keys; that's it. Same sort of range as a key fob. Communication would be over an encrypted link, using an authenticated encryption mode to prevent MitM attacks or other attempts at altering transactions.
What could the cops do? Well, they could confiscate any device they didn't recognise, though that might not go down too well. They could confiscate the card, but since you can do wireless card-to-card transfers with this scheme, there's no guarantee they'd have confiscated any actual money by doing so, and they can't tell whether you did or didn't without the access code. It's not a computer, per se, as it doesn't need to be Turing complete, and it's not an account, so there's no law on the books requiring that access be given.
Because the device complies with international banking laws and the PCI processing regulations, it would be legal to use such a card: it would be an authorised, licensed financial transaction processor between brick-and-mortar financial institutions, merely using the older networking method of store-and-forward with packet fragmentation and reassembly. All perfectly legit operations. Because PCI governs logging, the device is compliant with tax-reporting and anti-money-laundering laws. There aren't any laws saying anyone has to actually access that information; the laws that currently exist merely require that they can if authorised for a lawful need. Let the Feds figure out how to deal with that without making impossible demands of traveller's cheques and cashier's cheques, which can also be used as money equivalents.
The SKA interferometer will be able to directly observe a planet's atmosphere at a range of 100 light-years. If two or more gases that react in each other's presence are both there, AND the ratio of those gases is stable over time, you have strong evidence of life: no known (or plausible unknown) natural process maintains such a state, and a dynamically maintained equilibrium that would collapse without continuous replenishment requires a biological process.
Actually, it requires at least two. Any organism that tries to make conditions favourable for itself must necessarily alter some second dynamic to be unfavourable to itself: you cannot do more work without producing more byproducts (conservation of matter) that are in a lower energy state (conservation of energy, since energy has been extracted), and some of those byproducts will be toxic to the organism (if one weren't, it would keep being processed for energy and matter until it was).
So, one organism always produces an instability. Two is the minimum. The more you have, the more stable the dynamic becomes as there are increasingly better solutions to the set of equations. If an organism develops that tries to exploit the equilibrium (which is inevitable), the equilibrium is lost and the new organism is put at a deficit. A new equilibrium will emerge as a result.
This, by the way, falsifies Nash's argument against his own equilibrium. The equilibrium is an emergent phenomenon, so if the dynamic changes, the equilibrium changes. Nash's error was assuming a dynamic equilibrium must itself sit around a static point. No: the dynamic equilibrium has one strange attractor per class of actor in the system. That really should have been obvious, and I'm honestly shocked Professor Nash did not see it in his original work or his later appraisal.
Now we get onto communication. Could a SKA-class array, or the half-kilometre single dish in China (FAST), in principle be used to communicate over a distance of 100 light-years with a civilisation of like ability?
Much more difficult. The so-called water hole, the quiet band between the hydrogen line at 1.42 GHz and the hydroxyl lines near 1.66 GHz, is the obvious place to transmit, as virtually nothing natural emits there. Incredibly quiet. Long-baseline interferometry can cancel out much of the random noise from individual telescopes, terrestrial sources, etc., as can long-timebase integration. You're essentially taking a lot of radio-frequency photos that are, themselves, taken with a very long exposure time: what the frames have in common accumulates, what differs averages away.
A sufficiently slow, pulse-modulated message at that frequency will stand out clearly above the noise, even if it's well below the noise level at any given instant. You're relying on the fact that the noise is random, so its average tends to zero. The objective is to guarantee that the signal, after receiver sensitivity, inverse-square losses and less-than-ideal capture time are accounted for, still averages to something strictly greater than zero at the desired distance.
Once the law of large numbers kicks in, noise is not an issue: the average of zero-mean noise tends to zero. What matters is signal. Bear in mind the gain goes as the square root of the integration time, because the residual noise average shrinks like 1/√N: a pulse 60 times too weak to detect in one second becomes detectable to someone capturing for an hour (√3600 = 60), while a pulse 3,600 times too weak would need 3600² seconds, around five months of integration.
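The averaging argument can be sketched in a few lines. All the numbers here are invented for illustration: a pulse at 5% of the noise's standard deviation, invisible in any single sample, is obvious in the average of a couple of hundred thousand samples, while pure noise averages to roughly zero.

```python
import random

random.seed(42)       # reproducible sketch

SIGNAL = 0.05         # pulse amplitude, well below...
NOISE_SIGMA = 1.0     # ...the unit-sigma Gaussian noise
N = 200_000           # number of samples integrated

# The average of zero-mean noise shrinks like NOISE_SIGMA / sqrt(N);
# the pulse's contribution to the average does not shrink at all.
with_pulse = sum(SIGNAL + random.gauss(0, NOISE_SIGMA) for _ in range(N)) / N
noise_only = sum(random.gauss(0, NOISE_SIGMA) for _ in range(N)) / N

print(f"average with pulse:   {with_pulse:.4f}")  # ~0.05
print(f"average, noise alone: {noise_only:.4f}")  # ~0.00
```

Here the noise floor of the average is about 1/√200000 ≈ 0.0022, so the 0.05 pulse sits more than twenty noise-floors above zero despite being twenty times weaker than the per-sample noise.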
Interferometry also means you can use constructive interference. Even Linux supports nanosecond timestamps and data from nanosecond-accurate PPS sources, and modern atomic clocks are orders of magnitude more stable than the caesium standard that defines the second. With that kind of gear, setting the phases so the waves constructively interfere where we want is not going to be difficult. We already know the phase differences, because powerful natural radio sources are visible from all the telescopes, and that same timing accuracy tells us how far out of phase each telescope is relative to such a source.
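A quick sketch of why the timing discipline matters (dish count and errors are invented for illustration): at the 1.42 GHz hydrogen line, a one-nanosecond timing error is more than a full wave cycle, so raw nanosecond clocks alone would wreck the coherent sum; picosecond-level residuals, achievable after calibrating against a bright natural source, barely dent it.

```python
import cmath, math

F = 1.42e9   # hydrogen-line frequency, low edge of the water hole (Hz)
M = 64       # number of dishes transmitting coherently (illustrative)

def summed_power(timing_errors_s):
    """Far-field power at the target from M unit-amplitude emitters,
    given each emitter's timing error; phase error = 2*pi*F*dt."""
    field = sum(cmath.exp(2j * math.pi * F * dt) for dt in timing_errors_s)
    return abs(field) ** 2

perfect    = summed_power([0.0] * M)                            # M**2 = 4096
one_ps_off = summed_power([1e-12 * (i % 2) for i in range(M)])  # nearly intact
one_ns_off = summed_power([1e-9 * (i % 2) for i in range(M)])   # coherence lost

print(perfect, one_ps_off, one_ns_off)
```

Perfect phasing gives the full M² coherent gain; alternating 1 ps errors cost almost nothing, while alternating 1 ns errors (≈ 1.42 cycles at this frequency) throw most of the power away. That is exactly why phase is calibrated interferometrically against a common reference source rather than trusted to the clocks alone.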
Is that enough to reach 100 LY, though? Even if both planets were ringed with telescopes, you're limited to less than the shorter of the two planets' years per pulse, and one pulse is not enough to say hello. To be unambiguous, you need a prime number of groups, each a prime number of pulses. Preferably pulses short enough that someone will notice they are discrete pulses at all.
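The "prime number of prime numbers" beacon is easy to sketch. Taking five groups (five being prime) of 2, 3, 5, 7 and 11 pulses, with a gap after each group, purely as one illustrative choice:

```python
def first_primes(k: int) -> list[int]:
    """First k primes by trial division (fine at this tiny scale)."""
    found, n = [], 2
    while len(found) < k:
        if all(n % p != 0 for p in found):
            found.append(n)
        n += 1
    return found

K = 5                      # a prime count of groups...
groups = first_primes(K)   # ...each a prime number of pulses
message = []
for g in groups:
    message += [1] * g + [0]   # g pulses, then a silent gap

print(groups)                          # [2, 3, 5, 7, 11]
print("".join(map(str, message)))
```

No plausible natural process emits a prime count of prime-length pulse trains, which is what makes the pattern unambiguous to any mathematically literate receiver.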
Probably not 100. At 50 LY, the inverse-square law quadruples the chance of detection by any life there, but it butchers the chances of there being life to detect it. I don't think you can go below 25 LY: there just aren't enough candidate worlds, and the detection probability only quadruples again.
A pulse of an hour's duration is probably acceptable: short enough for someone to detect something strange, but long enough to carry enough integrated power to stand a chance of, again, someone detecting something strange. After that, it's just a case of proper summation.
Signal power, itself, is the least important part as it falls off with the square of the distance. The challenge is to make it irrelevant, just as you make each emitter very low power in a gamma knife but very powerful at the point of interest.
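The inverse-square falloff, and the quadrupling at half the range mentioned above, is a one-liner to confirm. The 1 MW beacon power is purely illustrative:

```python
import math

LY = 9.4607e15   # metres per light-year

def flux(power_w: float, distance_ly: float) -> float:
    """Received flux from an isotropic emitter: P / (4*pi*d^2), in W/m^2."""
    d = distance_ly * LY
    return power_w / (4 * math.pi * d ** 2)

P = 1e6   # a 1 MW beacon, purely illustrative
ratio = flux(P, 50) / flux(P, 100)
print(ratio)   # 4.0: halving the range quadruples the received flux
```

The gamma-knife trick is the complement of this: each emitter's flux is individually negligible everywhere except the focus, where the contributions are arranged to add.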
Even so, you need enough bits for the sum to matter. SKA might not quite be up to the task.
OK, it's probably not possible to transmit yet. Receive, yes, but it might take another 50 years before transmission to a reasonable number of stars is possible.
The problem is that developers no longer answer to their bosses; they answer to web forums. They are so afraid of doing things other programmers wouldn't find acceptable that they code to please web forums rather than doing their job. That means using the heaviest frameworks available and writing the deepest, most complex code they can still manage to understand themselves.
Actually, the problem is that doing stuff on a web page, clicking a submit button, and then reloading an entire page just for a few pixels to change is a clunky old way of doing things that deserved to die.
The possibilities we get from intelligent use of JS open up so many things we simply couldn't do otherwise without native apps. Most users embrace this in one way or another, even if they choose to restrict it to sites they trust.