The objective of "mathematically proven security properties" via program obfuscation is definitely not achievable. After all, it's an accepted security principle that "security through obscurity" is unsupportable. If an adversary can obtain the executable of a program, they can also reverse engineer it. It may take a lot of effort, but it is always achievable.
The URL still works. I don't know if Disney has a link to the URL, but the URL itself is still active.
Although it may soon be slashdotted.
The summary here is about as deceptive as I could possibly imagine. What Uber is attempting to do isn't to initiate a lot of bogus trips and then cancel. They're attempting to recruit drivers from other companies and have them become drivers for Uber. The use of burner phones and credit cards is to prevent easy detection of the recruiters, not to make fake trip requests.
Personally, I believe such tactics are legal but morally suspect. (If they were illegal, it would also be illegal for a company to attempt to recruit employees from other companies. See http://en.wikipedia.org/wiki/H... )
If bricking a phone would also result in any stored photographs going "bye bye".... I can think of quite a few police who would like that feature.
Agreed, the path taken for that attempt wouldn't have worked. However, if someone had been able to compromise the credentials that authorize a check-in to the main repository, it most definitely would have. Adding two-factor authentication just makes that much harder.
Well, you could have answered your own question by simply using Google to look up YubiKey and reading a bit. But to give you a partial answer: the token generates an AES-encrypted value and passes it to the server for authentication. During authentication, the server decrypts the value (the shared secret between the token and the server is the AES key). The decrypted value includes a counter, and if the counter isn't greater than the previously used counter, the authentication attempt is rejected. So if you were to hit the button 100 times and record those codes, you could authenticate using any of them, but as soon as I hit the button and authenticated with the resulting code, all of the codes you recorded would instantly become invalid.
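The counter logic can be sketched in a few lines. This is a stdlib-only sketch, not the real YubiKey protocol: an HMAC tag stands in for the actual AES encryption, and all names and sizes here are my own invention.

```python
import hashlib
import hmac
import struct

SECRET = b"shared-16-byte-k"  # shared secret between token and server (hypothetical)

def token_generate(counter: int) -> bytes:
    """Token side: emit an OTP that binds the current press counter.

    A real YubiKey AES-encrypts a block containing the counter; HMAC is
    used here only to keep the sketch self-contained.
    """
    body = struct.pack(">Q", counter)
    return body + hmac.new(SECRET, body, hashlib.sha256).digest()

class Server:
    def __init__(self):
        self.last_counter = 0  # highest counter accepted so far

    def authenticate(self, otp: bytes) -> bool:
        body, tag = otp[:8], otp[8:]
        # Verify the OTP really came from a holder of the shared secret.
        expected = hmac.new(SECRET, body, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return False
        counter = struct.unpack(">Q", body)[0]
        # Replay protection: the counter must strictly increase.
        if counter <= self.last_counter:
            return False
        self.last_counter = counter
        return True

server = Server()
recorded = [token_generate(c) for c in range(1, 101)]  # attacker records 100 presses
assert server.authenticate(recorded[41])          # any recorded code still works...
assert server.authenticate(token_generate(101))   # ...until the owner logs in once,
assert not server.authenticate(recorded[99])      # after which every recorded code is dead
```

The only state the server keeps is the highest counter it has accepted, which is what makes the "all your recorded codes die at once" behavior fall out for free.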
Well, malware injection into the Linux kernel isn't a mere possibility. The incident back in late 2003 comes to mind.
I believe we can get things smaller. I'll agree that we're approaching the limits of the essentially two-dimensional layout we currently use for chips, but that leaves the third dimension. Of course there are a lot of technical issues to overcome, but I believe they will be overcome.
In a nutshell, they simply had any computer that contacted the web site send back the computer's real IP address and its MAC address. The security of Tor itself wasn't affected; the compromising information was simply sent through the Tor network, just as any other data would be.
Now, I suspect the MAC address was sent so that they could identify the actual computer when they seized it via a warrant. That way the suspect couldn't claim it wasn't their computer when the IP address was on the other side of a NAT with multiple computers behind it. And the IP address was simply to make identifying the physical location easier.
Which raises an interesting question....
What if someone alters their MAC address and then enters the Tor network via a public wifi hotspot?
The connection is encrypted so the fact that the hotspot is publicly accessible shouldn't be a problem.
And when the computer is turned off, the MAC spoofing goes away, so even if the computer is seized, they don't have a matching MAC address to prove it's the computer they hacked. And of course, since access was via an open hotspot, there are plenty of computers that could have been connected. Proving which one would be rather difficult.
Look at the article. And examine the photo in the article closely.
The backpack portion of the exoskeleton has attachments, including two "mini-cranes" going over the user's shoulders. And in the photo, those mini-cranes are linked via rigging to the plate the worker is handling. So the majority of the object's weight is carried by the exoskeleton while his hands merely provide fine control.
So the ONLY statement anyone picking "GPLv2 only" is making, is that they don't want their code mixed with GPLv3 which honestly... is pretty silly.
If "GPLv2 only" is silly, then you might want to alert all the Linux kernel developers. After all, the code in the Linux kernel is GPLv2, not GPLv2+.....
It's not two-factor authentication; it's actually a means of generating one-time passwords. In a nutshell, you can have a local device calculate the password based on a challenge sent from the system you wish to log onto, or you can preprint a list of passwords to use for logging onto the system.
See http://en.wikipedia.org/wiki/O... for a general description of the method. You ought to be able to find out more using that page as a starting point.
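For the preprinted-list flavor, the classic scheme (S/KEY, built on a Lamport hash chain) looks roughly like this. The function names are mine, and SHA-256 stands in for whatever hash a real deployment would use:

```python
import hashlib

def h(data: bytes) -> bytes:
    """One step of the hash chain."""
    return hashlib.sha256(data).digest()

def make_chain(seed: bytes, n: int):
    """Hash the seed n times; the user consumes links from the end backwards."""
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain

# The server stores only the most recent link it has seen.  Each login
# presents the link *before* it in the chain, which the server can verify
# with one hash but an eavesdropper cannot derive (hashes don't reverse).
chain = make_chain(b"secret seed", 100)
server_state = chain[100]

def login(password: bytes) -> bool:
    global server_state
    if h(password) == server_state:
        server_state = password  # next login must present the previous link
        return True
    return False

assert login(chain[99])      # valid: hashes to the stored value
assert not login(chain[99])  # replaying it fails; server now expects chain[98]
assert login(chain[98])
```

Printing `chain[99]`, `chain[98]`, ... down the page gives you exactly the paper password list the comment above describes, with each entry good for one use.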
Oh good god...
I was a LM employee a few years back. Brought in on a project that was failing. And the main issue with the failure was their process.
For instance, LM was using Common Criteria and they were trying to get the system to EAL4. And frankly, getting there is quite doable. Unfortunately, management and the customers for the project didn't bother to actually understand anything about requirements.
For instance, in Common Criteria, you need to tailor the documents. An example would be this template being tailored to the system requirement:
FPT_FLS.1.1 The TSF shall preserve a secure state when the following types of
failures occur: [assignment: list of types of failures in the TSF].
The above template is obviously intended to be tailored to include a list of possible or predictable failures upon which the system will still remain secure. But this is how LM tailored that little beauty:
FPT_FLS.1.1 The TSF shall preserve a secure state upon a partial system failure.
Notice how the tailoring removed everything concrete about the requirement? What kind of partial failure? How do you test it? When is it violated? Etc., etc., ad nauseam.
And that kind of bullshit "tailoring" was done EVERYWHERE. There would be multi-hour meetings just to change, tailor, and interpret specifications written that way. And any suggestion from anyone working in the trenches that the requirements were badly done and needed to be redone properly in order to get a functional system was met with "We can't do that, it would be too costly."
If the above paradigm was used on the Social Security project, I can definitely see why progress has been snail-slow and over budget. They're most likely still trying to get their specifications right.
The 240 V / 60 Hz spec is so that it can handle both North American and UK voltage levels. If you look at the technical specifications document, you'll see there are two grounding configurations the contestants may specify. In both, the inverter output is fed into an isolation transformer.

One configuration has the input of the isolation transformer center-tapped and grounded, which makes the AC outputs from the inverter swing +/- 120 V from ground, as you would expect in the USA. The other configuration has no center tap; instead, one leg of the input is grounded, making one AC output swing +/- 240 V with respect to ground while the other output is tied to ground.

I suspect the 60 Hz spec is due to the way transformers work. A transformer designed to operate at 50 Hz using minimal materials will operate fine at 60 Hz. However, a transformer designed for 60 Hz using minimal materials will saturate magnetically at 50 Hz, causing it to overheat and eventually fail.
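A back-of-the-envelope check of that last claim, using the standard transformer EMF relation (peak core flux density scales as V / (4.44 · f · N · A) for a sine input). The turns count and core area below are made-up illustration values, not anything from the contest spec:

```python
def peak_flux_density(v_rms, freq_hz, turns, core_area_m2):
    """Peak core flux density (tesla) from the transformer EMF equation
    V_rms = 4.44 * f * N * A * B_peak, rearranged for B_peak."""
    return v_rms / (4.44 * freq_hz * turns * core_area_m2)

# Same winding and core, same 240 V input, two line frequencies:
b60 = peak_flux_density(240, 60, 200, 0.01)
b50 = peak_flux_density(240, 50, 200, 0.01)

# Dropping from 60 Hz to 50 Hz pushes peak flux up by exactly 60/50 = 20%,
# which is what drives a minimally-sized 60 Hz core into saturation.
assert abs(b50 / b60 - 1.2) < 1e-9
```

So a core sized with no flux margin at 60 Hz needs 20% more headroom to survive at 50 Hz, which is why the 50 Hz design runs fine at 60 Hz but not the reverse.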