Why can't there be SATA controllers with drive encryption support? Your drive encryption program could then just be a UEFI expansion ROM that prompts you for your password, sends it to the SATA controller, then erases it from main memory. There's no need to do anything else after that point, because encryption and decryption would be completely transparent to all software on the system.
So this means if a tree falls in the forest and no one was listening, it wouldn't be simulated and therefore would not make a sound. That was easy...
So long as it is provably impossible for anyone to feel or notice the effects of that sound for all of eternity, yes, a simulation could get away with not simulating it. Provable impossibility in our Universe would be something happening outside the light cone of the simulated area.
If they hadn't locked it down, Windows RT could have been just another target to which developers could recompile their software, and that would have kick-started the application ecosystem somewhat. It would have been desktop applications, though, which Microsoft considers deprecated. Desktop applications also don't work very well with touch control and, more importantly, don't make Microsoft any money.
It seems as well that Microsoft wanted the locked-down environment to prevent Windows RT from having viruses, an inevitable side effect of open development. Many more people bought the virus-laden Surface Pro than the Surface RT, so maybe people like their viruses =)
Was it because of the OS that the Surface did not have cell data support???
They should have just left it unlocked, rather than make us jailbreak it by force. By forcing us to jailbreak, they guarantee that commercial applications never get ported to it.
I guess Microsoft didn't care, because they consider the desktop to be deprecated, something they will remove in a future version.
It's about 10^80, not 2^80.
I think we'll find that the amount of energy required to hold X entangled particles in coherence will be exponential in X. This would make quantum computing essentially worthless.
If not, wake me when we get to 2048 qubits, for the original Xbox's public key and I have some unfinished business from last decade...
ReactOS keeps changing their targets, and not getting anywhere.
So does Windows itself, or any other evolving project.
The first mistake was using signed integers.
The problem is C's promotion rules. In C, when promoting integers to the next size up (typically to at least "int"), the rule is to use a signed type if the source type fits in it, even if the source type is unsigned. This can cause code that seems to use unsigned integers everywhere to break, because C says signed integer overflow is undefined. Take the following code, for example, which I saw on a blog recently:
uint64_t MultiplyWords(uint16_t x, uint16_t y)
{
    uint32_t product = x * y;
    return product;
}
MultiplyWords(0xFFFF, 0xFFFF) on GCC for x86-64 was returning 0xFFFFFFFFFFFE0001, and yet this is not a compiler bug. From the promotion rules, uint16_t (unsigned short) gets promoted to int, because unsigned short fits in int completely without loss or overflow. So the multiplication became ((int) 0xFFFF) * ((int) 0xFFFF). That multiplication overflows in a signed sense, an undefined operation. The compiler can do whatever it feels like - including generating code that crashes if it wants.
GCC in this case assumes that overflow cannot happen, and therefore that x * y is positive (when at runtime it's really not). This means the uint32_t cast does nothing, so the optimizer omits it. Now the code generator sees an int being cast to uint64_t, which means sign extension. This time the optimizer isn't smart enough to reuse the it's-positive assumption, which would let it skip sign extension and clear the high 32 bits with "mov eax, ecx"; instead it emits a sign-extension instruction ("cdqe") to widen the value.
So no, avoiding signed integers does not always save you.
* Fixation of two's complement as the integer format.
Are you trying to make C less portable, or what?
The "broken" code is already nonportable to non-two's-complement machines, and much of this code is things critical to the computing and device world as a whole, such as the Linux kernel.
The C standard needs to meet with some realities to fix this issue. The C committee wants their language to be usable on the most esoteric of architectures, and this is the result.
The reason that the results of signed integer overflow and underflow are not defined is that the C standard does not require the machine to be two's complement. The same goes for 1 << 31 and the negation of INT_MIN being undefined. When was the last time you used a machine whose integer format was one's complement?
Here are the things I think should change in the C standard to fix this:
* Fixation of two's complement as the integer format.
* For signed integers, shifting left a 1 bit out of the most-significant bit gets shifted into the sign bit. Combined with the above, this means that for type T, ((T) 1) << ((sizeof(T) * CHAR_BIT) - 1) is the minimum value.
* The result of signed addition, subtraction, and multiplication are defined as conversion of all promoted operands to the equivalent unsigned type, executing the operation, then converting the result back. (In the case of multiplication, the high half is chopped off. This makes signed and unsigned multiplication equivalent.)
* When shifting right a signed integer, each new bit is a copy of the sign bit. That is, INT_MIN >> ((sizeof(int) * CHAR_BIT) - 1) == -1.
That should fix most of these. Checking a pointer for wraparound on addition, however, is just dumb programming, and should remain the programmers' problem. Segmentation is something that has to remain a possibility.
Do you think almost all exploits are down to the defective x86 segmented memory architecture?
I think those who coded for the SNES or Apple IIGS in C would disagree with blaming the x86 exclusively =)
Just let me know when I can build my dream of a hoverboard arena. =^-^=
And they receive how much money from the NSA for providing them with details of zero-day exploits?
Are they still providing NSA with zero day exploits BTW? I assume the answer is yes.
It's more likely that the NSA pays VUPEN rather than Microsoft. Paying Microsoft directly would have blowback.
They were only offering bounties for two particular things in Windows: Internet Explorer 11 and the new anti-exploit mitigations in Windows 8.1. Even though there are plenty of other security targets in Windows, only those two things would get you money.
I found a bug in Windows's Secure Boot code that I'm using to jailbreak Windows RT. I might as well; it's not like they pay bug bounties for Secure Boot exploits.
The exploit could be used to run Android on Surface RT with a kexec-like driver implementation, but this would be a huge amount of work for someone who doesn't know Linux internals.
RT? Nope, not at present. There are "jailbreak" hacks to let you run normal Win32 software, but it still has to be recompiled to ARM. BlueStacks / JarOfBeans / etc. aren't available. The bootloader is locked, so you can't just install Android directly (not that Android is generally designed to be installed that way anyhow). There has been talk of using the jailbreak to make an NT driver that loads Linux, essentially using NT as the bootloader, but it's a pretty huge project and nobody has made any real progress on it so far as I know.
I already have an exploit to do this in Windows RT, but yes, the hard part is building the Android OS for a Surface RT. Making the drivers, the boot loader...things like that.