
Comment Re:fanbois with a pottymouth (Score 1) 111

Low-level software or standard-bearer software like this is a good candidate for the MIT license; you actually want to be able to include it in Windows, *BSD, and Mac OS X. At one point AMD was considering porting Mesa to embedded Windows to significantly reduce memory overhead. Why not? There's not a whole lot of incentive to create secret sauce or introduce incompatibilities.

Comment Re:Any use of this? (Score 1) 74

How do you even know your software encryption program is unmodified and not being spied upon by parts of an OS that have been made malicious? Unless you air-gap the computer (and even that sometimes isn't enough: high-frequency listening implanted in the firmware) and keep it in a tamper-evident pouch when you aren't using it? Otherwise you need, at minimum, a verified boot chain and a cryptographically signed file system. Yes, the keys should be owner-accessible or replaceable, but unfortunately such systems rarely pass the grandma test.
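The verified-boot-chain idea can be sketched roughly like this: each stage is pinned by a digest held by the stage before it, anchored in a root of trust. This is a minimal illustration, not any vendor's actual implementation; the stage names and blobs are hypothetical.

```python
import hashlib

# Hypothetical boot stages: each entry is (name, image bytes).
BOOT_CHAIN = [
    ("bootloader", b"bootloader-image-bytes"),
    ("kernel",     b"kernel-image-bytes"),
    ("initrd",     b"initrd-image-bytes"),
]

def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# In a real system the first digest is anchored in ROM or fuses;
# here we just record the expected digest of every stage.
TRUSTED = {name: digest(blob) for name, blob in BOOT_CHAIN}

def verify_chain(chain) -> bool:
    # Refuse to hand off control if any stage's image doesn't match.
    for name, blob in chain:
        if digest(blob) != TRUSTED[name]:
            return False
    return True

print(verify_chain(BOOT_CHAIN))                                # True
print(verify_chain([("bootloader", b"evil")] + BOOT_CHAIN[1:]))  # False
```

The point of the chain is that tampering with any stage changes its digest, so verification fails before the tampered code ever runs.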

Comment Re: Remember - Apple is a hardware company. (Score 1) 225

A few points.

People have extracted keys from "secure processors" via hardware probes, but it is very difficult, especially on the newest-generation lithography.

And the Apple model provides more guarantees than that. It layers a PIN-derived key and a key generated on-chip at different levels of the file system.

The Secure Boot protocol does not guarantee secure key storage and does not require a specialized chip to implement. It's strongly recommended that you rely on hardware mechanisms to verify the firmware, but such mechanisms are a distinct feature, and by its nature Secure Boot can't actually verify the firmware on its own. Apple's security coprocessor is similar to a TPM but uses its own unique APIs.

Comment Re:Sounds like (Score 5, Informative) 225

Considering Apple includes a security co-processor, it's not actually that easy. Touch ID-wrapped keys are discarded after a reboot, after 48 hours, or after 5 failed attempts. This authentication method can also be disabled, or never activated, by the user.

Additionally, the root keys are held only in the co-processor and co-mingled with a UID (which even Apple doesn't know) as well as the password. You can't begin a dictionary or PIN attack without pulling out that UID (and considering the co-processor is running L4, the only way I know to do that is to use nanometer-scale probes to spy on the hardware as it operates). The root of the file system is encrypted by a key held only in the security co-processor, and the co-mingled password is used in a sort of chain of trust with the hardware key to secure file metadata and per-file encryption keys.
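The effect of co-mingling the UID with the password can be sketched with a standard KDF. This is an illustrative sketch only; the UID value, iteration count, and function names here are hypothetical and not Apple's actual parameters.

```python
import hashlib

# Hypothetical device-unique UID, fused into the chip at manufacture
# and never exposed off-die.
DEVICE_UID = bytes.fromhex("00112233445566778899aabbccddeeff")

def derive_unlock_key(passcode: str, iterations: int = 100_000) -> bytes:
    # Keying the KDF with the UID means a dictionary attack has to run
    # on this device (or on probed-out silicon), not on a GPU farm:
    # without the UID, candidate passcodes can't even be tested.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(),
                               DEVICE_UID, iterations)

k1 = derive_unlock_key("1234")
k2 = derive_unlock_key("1235")
print(k1 != k2, len(k1))  # different keys, 32 bytes each
```

Rate limiting then comes from the iteration count and from the co-processor's escalating delays, on top of the UID entanglement shown here.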

The firmware is designed to resist brute force, and Apple fixes every known brute-force vulnerability it discovers. The update mechanism requires the user password and cannot be rolled back to a prior, vulnerable version, so Apple can't push a targeted device update to enable brute-forcing. At best a forensic team will have to sit on the device and hope a new vulnerability is discovered, and hope the data-erase-after-10-failed-attempts option was not enabled by the user.

Comment Re:This Is Very Important (Score 2) 80

Hmm, I think Allow?/Deny? isn't sufficient for security. You should be able to Allow?/Deny?/Fake?, where Fake redirects the APIs to fake or random data. The webcam or mic, when faked, might only be able to access the Rick Roll or Trr La La music videos. Contacts might return a list of Congressmen, etc.
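A minimal sketch of that third option, using a contacts API as the example. All names and data here are made up for illustration; the point is that under "fake" the app gets a well-formed answer and can't tell it was refused.

```python
import random

# Hypothetical data stores for the sketch.
REAL_CONTACTS = ["Alice <alice@example.com>", "Bob <bob@example.com>"]
FAKE_CONTACTS = ["Rep. John Doe <doe@congress.example>",
                 "Sen. Jane Roe <roe@congress.example>"]

def get_contacts(policy: str):
    if policy == "allow":
        return REAL_CONTACTS
    if policy == "deny":
        # Deny is detectable: the app sees an error.
        raise PermissionError("contacts access denied")
    if policy == "fake":
        # Fake is not detectable: plausible junk in a random order.
        return random.sample(FAKE_CONTACTS, len(FAKE_CONTACTS))
    raise ValueError(f"unknown policy: {policy}")

print(get_contacts("fake"))
```

The same shim pattern would sit in front of the camera or microphone APIs, substituting a canned stream for the real device.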

Comment Re:Nobody is talking about the root causes yet.... (Score 1) 77

Even if you can "prove" the software, how do you prove your hardware? And I think this sort of thing is very hard in a desktop system. Just take private namespaces: within a single program you can be fairly sure what needs access to a given data structure; on the desktop it's much less clear what a user might want to have access to a particular file. There are server technologies that isolate namespaces between services and processes, and there are technologies that can fine-tune the access of arbitrary executables to files and vice versa. It's just that on an open platform, which can be configured in an exponential number of combinations, it's hard to say ex ante what exactly proper access is.

Comment Re:Nobody is talking about the root causes yet.... (Score 2) 77

A microkernel minimizes the amount of code you have to trust. MINIX as of 3.0 is also designed to be fault-tolerant, able to recover from almost any sort of bug. You tend to get a lot of transactional and message-passing overhead, though. For example, the filesystem module isn't allowed to access the disk controller; it has to ask the block layer to do it and pass back the result. But the block layer can't actually pass the result directly; it has to check in with the microkernel to make sure it's okay.
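The extra hops can be sketched as a toy message-passing system: a single disk read becomes FS server → kernel → block server → kernel → FS server, with the kernel mediating every message. This is purely illustrative; the server names are hypothetical, and real microkernels (MINIX, L4) use heavily optimized IPC paths rather than a Python queue.

```python
from queue import Queue

# Toy "disk" that only the block server may touch.
DISK = {0: b"superblock", 1: b"inode table"}

kernel_inbox = Queue()

def block_server(msg):
    # The block server holds the driver capability and does the real read.
    return DISK[msg["block"]]

def kernel_send(dest, msg):
    # The kernel validates and routes every message; in a real system
    # each hop through here is a context switch.
    kernel_inbox.put((dest, msg))
    dest_fn, m = kernel_inbox.get()
    return dest_fn(m)

def fs_read_block(n):
    # The FS server can't touch the disk; it must message the block
    # server via the kernel and receive the result the same way.
    return kernel_send(block_server, {"op": "read", "block": n})

print(fs_read_block(0))  # b'superblock'
```

On a monolithic kernel the same read is a function call within one address space, which is exactly the overhead trade-off the comment describes.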

But the future isn't bleak: not only has hardware in general become faster, there has been quite a bit of design advance around these sorts of messaging systems that reduces the overhead microkernels generate.

I think the original argument was about design complexity though, not performance or security, as Linux started as a hobbyist desktop system. Linus's counterargument was that a microkernel design simply moved complexity to a different level and didn't actually decrease the complexity of a practical, working system.

Comment Re:The problem is C (Score 1) 77

The only provably secure OS kernel (seL4) is written in C. I think there are good languages out there at the application or platform level (Rust, Haskell, Scala), but for systems-level programming it's mostly just C, and the alternatives are mostly just dressed-up C (C++, D). Java and Haskell offer ways to wrap an application in a standalone VM, but largely because the VM uses shims in a controlled environment, it doesn't actually have to deal with the messy hardware stuff.

Comment Re:Proprietary Firmware (Score 1) 77

You can even buy laptops now that are OSS from the firmware on up. There are dozens of OSS u-boot-based dev boards available. You can run a system off a CPU design loaded onto an FPGA. There are cellphones built with basebands you can load OSS firmware onto, or whose basebands are not linked by DMA to main memory and have CPU-controlled hard off switches. These aren't flagship consumer products, but they are available. Security is rarely convenient.

Intel's Management Engine and the like are not some vast conspiracy; there really is a demanded use case. And the way they are used is less intrusive than the alternative, which is to lock the computer to a particular OS and configuration that calls home anyway. A big issue is that stuff at the firmware level tends to be implemented poorly at best: write once and forget.

And there are processors and boards you can buy without these features. It just tends to be more convenient for corporations to have them. At least 90% of the desktop market is corporations or people who don't have a clue. 95% or more of the cell phone market is wireless carriers (via deciding which phones to subsidize and carry in their stores). They have legitimate use cases for these security-poor features. The logic of the business case overrides any concern for the common good. They don't even stop to think about it, which is why OSS will generally be more secure, but less accessible to the average joe.
