Comment Re:Why is the signing useful (Score 1) 80

Expect this certificate to be revoked in the near future. This will close that avenue and cause all machines - infected or not - to refuse to load drivers signed with the cert, including the malware driver.

And cause all machines with legitimate Sony drivers (if there is such a thing?) signed with the same cert to refuse to load those too.

Unfortunately, yes. Sony will have to re-issue those legitimate drivers signed with a new cert. That is actually a good reason why a code-signing certificate for widely distributed software absolutely should reside within an HSM, which makes the private key practically impossible to steal.

Comment Re:Why is the signing useful (Score 2) 80

What benefit does the attacker get by signing the malware with a company's certificate?

Windows requires kernel-mode drivers to be signed. Certain mandatory, early-load drivers (e.g. anti-malware tools, measured-boot tools) must be signed by Microsoft itself, but Windows will load other kernel-mode drivers as long as they are signed with a valid, non-revoked code-signing cert from (IIRC) Verisign.

Kernel-mode drivers can obviously access kernel-mode memory. This is a common way for malware to gain a foothold on a Windows machine; otherwise it is really hard to ensure that malware gets executed during boot.

Expect this certificate to be revoked in the near future. This will close that avenue and cause all machines - infected or not - to refuse to load drivers signed with the cert, including the malware driver.
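To make the revocation mechanics concrete, here is a toy Python model of the load policy (the cert bytes and fingerprints are made up; a real verifier walks the whole chain and consults a CRL or OCSP, it doesn't just compare fingerprints):

```python
import hashlib

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a (stand-in) DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def may_load_driver(signer_cert: bytes, trusted: set, revoked: set) -> bool:
    """Load policy: the signer must be trusted AND not revoked."""
    fp = fingerprint(signer_cert)
    return fp in trusted and fp not in revoked

# The stolen cert starts out trusted, so the signed malware loads...
sony_cert = b"stand-in DER bytes for the leaked cert"
trusted = {fingerprint(sony_cert)}
revoked = set()
assert may_load_driver(sony_cert, trusted, revoked)

# ...until the CA publishes a revocation; from then on, *every* driver
# signed with that cert is refused - the malware and the legit ones alike.
revoked.add(fingerprint(sony_cert))
assert not may_load_driver(sony_cert, trusted, revoked)
```

Note the collateral damage falls out of the model directly: revocation keys on the cert, not on any individual driver.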

Comment Re:Here come the certificate flaw deniers....... (Score 1) 80

In practice, a certificate is nothing more than a long password that's impossible for a normal human to memorize. So it ends up in a file somewhere, if not several "somewheres", where it can be easily stolen.

If certificates are used correctly, their private keys are stored in some kind of certificate store from which they cannot simply be "stolen".

When you import a certificate into the Windows certificate store, the default is to mark the private key "non-exportable". Non-exportable means that you will never get the key back out of that store - at least not from your user context (the key is stored encrypted on the local disk, so a "root" user with access to raw disk sectors could theoretically reconstruct it, but not without severely elevated privileges).

You can still use the certificate to sign with - but you have to go through the crypto API, which asks the certificate store to perform the signing without giving the private key away. This works even if the key is held in a connected hardware security module (HSM), which adds a stronger guarantee that the private key *never* leaves the device.

For better security you *should* use the cert store to generate the non-exportable private key in the first place. The certificate can still be issued by an external CA like Verisign - without the private key ever leaving the secured store.

There is no excuse for having the private key stolen. The private key of a certificate used to sign software/drivers from a corporation like Sony should *definitely* have been created inside an HSM, with a guarantee that the key never leaves it. There are well-known products that still let you load-balance HSMs, synchronize them and take backups, where the key only ever leaves the box in an encrypted container that can only be understood by another box paired with the originating HSM/cert store.
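The essential interface is simple: callers hand data in and get a signature back, and no API exists to read the key material out. A toy Python sketch of that shape (HMAC stands in for the RSA/ECDSA a real HSM or cert store performs internally; the class and method names are made up):

```python
import hashlib
import hmac
import os

class KeyStore:
    """Toy model of a non-exportable signing key: you get a handle that
    can sign, but the store deliberately offers no export operation.
    (In Python nothing truly hides _key; a real HSM enforces this in
    hardware.)"""

    def __init__(self):
        self._key = os.urandom(32)  # generated inside the store, never returned

    def sign(self, data: bytes) -> bytes:
        # A real store does RSA/ECDSA inside the device; HMAC-SHA256
        # is just a stand-in with the same "key stays inside" shape.
        return hmac.new(self._key, data, hashlib.sha256).digest()

store = KeyStore()
sig = store.sign(b"driver.sys image")
assert store.sign(b"driver.sys image") == sig   # same input, same signature
assert not hasattr(store, "export_key")         # no export path in the API
```

The point of the design: compromise of the host gets you a *signing oracle* at worst, not the key itself - and revoking the cert shuts the oracle down.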

Comment Re:How many bozos are screaming that Windows is sa (Score 2) 131

So many ppl come here and post that Windows is not only safe, but that it is targeted because of numbers. Yet, it is obvious that NSA and GCHQ targeted Windows. Why? I doubt that it was numbers, but ease of cracking.

If your targets use Windows it would be a real stroke of genius to distribute attacks against Linux, don't you think?

Duh.

So, in the meantime, how many companies will start switching to *nix?

What is the *nix equivalent to secure boot? Signed kernel modules? What is the *nix equivalent to Measured Boot and Network Access Protection? How does an organization automatically and immediately detect and isolate potentially infected hosts?

Every operating system out there will experience exploitable vulnerabilities, as will the applications running on top of it. The most recent severe vulnerabilities to be mass-exploited are *nix vulnerabilities like Heartbleed and Shellshock. No operating system is immune.

That's why defense in depth is important. Windows starts its defenses before boot with Secure Boot, which ensures that only approved bootloaders run and thus prevents bootkits. Some Linux distros support a weaker form of secure boot (it doesn't protect all resource types; notably, scripts and config files are not digitally signed). Windows loads all kernel components from signed "cabinet" files, protecting all assets used during boot. If a rootkit tampers with any of the files, the system refuses to boot.

During boot, before loading *any* kernel module, Windows computes a hash of the module and records it in the TPM along with name, size, dates and other metadata. Upon successful boot (but before other hosts will accept traffic from the system) the OS asks the TPM for a signed "health" record. The TPM issues a signed document with all the recorded info, which the host presents to a health certificate server. That server can inspect the list of loaded modules and compare it against known whitelists and/or blacklists. If everything checks out, it issues a certificate the booting host must use when communicating with other hosts. Unless the host can present such a cert, the other hosts will refuse to communicate with it.
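The core of that measurement step is the TPM's PCR "extend" operation, which can be sketched in a few lines of Python (module names are made up; real measured boot records much more metadata alongside the hash):

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def extend(pcr: bytes, module: bytes) -> bytes:
    """TPM-style PCR extend: new_pcr = H(old_pcr || H(module)).
    The register can only be folded forward, never set directly."""
    return sha256(pcr + sha256(module))

pcr = b"\x00" * 32                    # PCRs start zeroed at boot
for module in [b"ntoskrnl.exe", b"disk.sys", b"ndis.sys"]:
    pcr = extend(pcr, module)

# The final value depends on every module AND the load order, so a boot
# that swaps one module for a rootkit yields a different digest, which
# the health cert server will not find on its whitelist.
tampered = b"\x00" * 32
for module in [b"ntoskrnl.exe", b"rootkit.sys", b"ndis.sys"]:
    tampered = extend(tampered, module)
assert pcr != tampered
```

Because a PCR can only be extended, malware that loads *after* the measurement cannot rewrite the log to hide itself.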

Does *nix support such defense in depth?

Such targeted attacks will target whatever operating system the victims use. Targets must assume that any host can be breached through an application or OS vulnerability, and with that recognition they must ensure rapid detection and isolation. In that area, a Windows server infrastructure can be set up to be extremely strong.

Comment Re:Attackers take control of websites? (Score 1) 41

"New security updates released for WordPress ... fix cross-site scripting (XSS) vulnerabilities that could allow attackers to take control of websites."

Embedded javascript in a comment box could trigger exploits on Microsoft Internet Explorer running on Microsoft Windows desktops.

Source? Or just trolling?

Comment Re:Highly advanced computer worm? (Score 1) 143

This 'highly advanced' computer worm will only work on Microsoft Windows:

It is not a worm. It is a trojan, i.e. the user has to invite the trojan (the "dropper") inside for it to work.

A worm is an automated infection which propagates from system to system on its own - like Code Red, Nimda, or the Shellshock worms.

Any particular reason you chose to call it a worm, when it is described as a trojan in both the summary and TFA?

Comment Re: Microsoft Windows only (Score 1) 143

Current strain of Microsoft Windows? Which ones?

All of the current Windows versions are derived from Windows NT, and the security model was developed for Windows NT. It is the very same extensible (through SIDs) model that was later extended for Active Directory, and for UAC (Mandatory Integrity Control) in Windows Vista.

Comment Re:Microsoft Windows only (Score 1) 143

It's the world's biggest target for malware, it's a monoculture, and it has a security model that tends toward convenience over security

Yes - "dragnet" attacks tend to go after the most victims. If your attack has a certain chance of succeeding (like a social-engineering attack), you'd be stupid to go after the 1% instead of the 90%. But in a *targeted* attack, where the attacker has singled out a specific victim or group of victims, the attacker will go after whatever those targets use.

and was actually bolted on after-the-fact.

Nope. The current strain of Windows was created from scratch with the present security model. The model is based on tokens and was designed to be extensible from the start. Also from the start, the designers envisioned that a process or even a thread could carry a token *different* from the user's token - i.e. a process could run with permissions/privileges different from those of the user.

The Windows security model also goes beyond the naive file-system-focused model where only file-system-like objects were seen as worth securing. In Windows - from the start - all system objects (files, directories, windows, processes, threads, shared memory regions, mutexes, users, groups, etc.) are accessed through object-oriented handles. When you open a handle you specify the access you request, and each object type has its own access types. The security check is performed right there, when the object is opened - not on each syscall. If the access you request is granted, a system object is created with a jump table (think virtual method table) where the functions you requested access to are mapped to the actual system functions, and the remaining functions are mapped to "denied". The upshot is that even though Windows has a much more advanced security model, which could make security checks more involved, it usually performs better, because it does *not* have to check permissions on every syscall.
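The open-time check plus jump table can be sketched in Python (a toy model with made-up names and only two operations; real Windows has per-type access masks and kernel-side dispatch):

```python
class AccessDenied(Exception):
    pass

class FileObject:
    """Stand-in kernel object with two operations."""
    def __init__(self, data):
        self.data = data
    def read(self):
        return self.data
    def write(self, new_data):
        self.data = new_data

def denied_stub(op):
    def stub(*args, **kwargs):
        raise AccessDenied(op)
    return stub

class Handle:
    """The access check already happened at open time; this handle's
    "jump table" maps granted operations to the real functions and
    everything else to a denied stub, so calls need no further check."""
    def __init__(self, obj, granted):
        for op in ("read", "write"):
            real = getattr(obj, op)
            setattr(self, op, real if op in granted else denied_stub(op))

def open_handle(obj, acl, user, requested):
    granted = {op for op in requested if user in acl.get(op, set())}
    if granted != set(requested):
        raise AccessDenied(requested)    # all-or-nothing at open time
    return Handle(obj, granted)

f = FileObject(b"secret")
acl = {"read": {"alice", "bob"}, "write": {"alice"}}

h = open_handle(f, acl, "bob", {"read"})  # bob never requested write access
assert h.read() == b"secret"
try:
    h.write(b"pwned")                     # hits the denied stub, no ACL lookup
    raise SystemExit("should not get here")
except AccessDenied:
    pass
```

Note where the cost lands: one ACL evaluation per open, zero per subsequent call - which is the performance argument made above.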

Contrast that with Unix/Linux, where the security model initially considered only file system objects. There were only two levels - regular users and root - and a large number of functions could be performed only by root. When it was realized that other object types might also need security descriptors, the existing file system model was "adapted" by "mapping" non-file-system objects to look file-system-like. Talk about bolted on!

The Unix/Linux security model is also the only one with a deliberately drilled hole: SUID/setuid. You have too limited a model, where regular users are unable to perform perfectly reasonable functions, like changing their own passwords. So what do you do? You let them run as the only user that *can* perform the function, and pray that the process somehow prevents them from performing any of the other functions root can do while they are running as root. This is a blatant violation of the least-privilege principle, but it is now deeply ingrained in all Unix systems. Needless to say, this has been the most common path for pwning Unix/Linux systems, going all the way back.

The Unix/Linux model was so bad that the NSA had to create SELinux (talk about bolted-on!), which introduces its own competing security "context" (a token). When you want to audit the security of a Unix/Linux system you have to consider three competing models: 1) the "original" file-system-oriented discretionary model with the SUID hole, 2) sudo/sudoers, and 3) SELinux/AppArmor or whatever has been bolted on top.

Especially 1) and 2) are worrying, because it is nigh impossible to audit them sufficiently as long as even a single SUID/sudo command is allowed: how do you (as an auditor) know *what* the SUID/sudo command can actually do? Did *you* install the executable? Did *you* monitor the compilation from source? What *other* things can ps or even ping do that you don't know about? If I point to a file or a process on your system and ask "who can access this?", you cannot give me a conclusive answer - because the discretionary file system permissions may not tell the entire story. There may always be a SUID or sudo utility that can access the file *despite* the discretionary access control.
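Even *finding* the SUID executables is a job in itself. A minimal Python check for the bit an auditor has to hunt for (demonstrated on a scratch file we mark setuid ourselves, since the real inventory means walking the whole filesystem):

```python
import os
import stat
import tempfile

def is_setuid(path: str) -> bool:
    """True if the file runs with its *owner's* identity when executed -
    the SUID hole discussed above."""
    return bool(os.stat(path).st_mode & stat.S_ISUID)

# Scratch file instead of a real binary like /usr/bin/passwd.
fd, path = tempfile.mkstemp()
os.close(fd)

os.chmod(path, 0o755)
assert not is_setuid(path)       # ordinary executable

os.chmod(path, 0o4755)           # set the setuid bit, like passwd has
assert is_setuid(path)

os.unlink(path)
```

Knowing the bit is set still tells you nothing about what the program *does* with root - which is exactly the audit problem described above.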

On Windows there are no deliberate holes in the security boundary: if an auditor points to a file, a process or any other object type, you can give a conclusive answer as to who can access the object and with what access level. It is all in the ACL. If a user is not in that ACL, he cannot access the object.
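A toy Python sketch of that "conclusive answer" property (a simplification: here deny entries beat allow entries regardless of position, whereas real Windows evaluates ACEs in order; the principals and rights are made up):

```python
def effective_access(acl, user, groups):
    """Simplified DACL evaluation: collect allow and deny entries that
    match the user or any group; deny wins, and absence from the ACL
    means no access - there is no side channel around the list."""
    allowed, denied = set(), set()
    principals = {user} | set(groups)
    for entry_type, principal, rights in acl:
        if principal in principals:
            (denied if entry_type == "deny" else allowed).update(rights)
    return allowed - denied

acl = [
    ("deny",  "interns", {"write"}),
    ("allow", "alice",   {"read", "write"}),
    ("allow", "staff",   {"read"}),
]

assert effective_access(acl, "alice", {"staff"})   == {"read", "write"}
assert effective_access(acl, "alice", {"interns"}) == {"read"}  # deny wins
assert effective_access(acl, "mallory", set())     == set()     # conclusive "no"
```

The auditor's answer falls out of one pure function over the ACL - no "unless some SUID binary exists somewhere" caveat.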

When you consider the desktop, it gets even worse: Windows actually has meaningful User Interface Privilege Isolation. On Unix/Linux there is *no* such isolation. X is about as promiscuous as can be: any process can snoop on *any* other process's keyboard input, mouse moves, etc. That means if an attacker slips by and gets to run his code in e.g. Firefox, he can snoop on *anything* you type in *any* window - including the terminal where you type sudo or root passwords. Go figure. The Windows security model (since Vista) prohibits lower-integrity processes from snooping on higher-integrity processes. Even a normal-integrity process cannot snoop on other normal-integrity processes unless a number of conditions are met (it has to declare its intention in the manifest, it has to be installed in Program Files or System32, etc.). And then there's the stupid password caching for sudo...

Unix (Linux) is about as far from a monoculture as you can get while still remaining reasonably compatible between distributions, and it was built with security in mind.

Shellshock, Heartbleed, ...

Not to mention that a common reason to run Linux is LAMP - the P of which is PHP, the Swiss-cheese monoculture of web programming languages.

Comment Re:We all dance in the streets (Score 1) 192

I know this is meant as a jokey comment, but it's worth noting that VS2015 has native Git support as well, so GitHub etc. work without any plugins.

VS 2013 (including Community) has Git support out of the box and works just fine with GitHub as well.

Ahem. It works. Sorta. It's slow, mildly confusing and it totally screws up if you use subrepositories. Looking forward to VS2015.

Comment Re:Open, but will it run? (Score 1) 525

Excuse my ignorance but is there such a thing as plain ascii conf files in the Microsoft world? Or will the proprietary binary registry be ported/required too for the .NET libs to access app/system settings? How will it adhere 100% to the *nix security conventions? TIA.

.NET does not rely on the registry, except for some COM interop that will not be ported. In .NET the config files are XML files: a program called MyStuff.exe will have a config file called MyStuff.exe.config, which must contain XML configuration according to the (extensible) schema. Pretty sweet, actually, if only they would modernize it a bit. I hear they are doing exactly that - making the config system even more "pluggable".
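For the curious, a minimal MyStuff.exe.config might look like this (the keys and connection string are made up for illustration; appSettings and connectionStrings are the standard built-in sections):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <add key="CacheDir" value="C:\Temp\MyStuff" />
    <add key="MaxRetries" value="3" />
  </appSettings>
  <connectionStrings>
    <add name="Main"
         connectionString="Server=.;Database=MyStuff;Integrated Security=true" />
  </connectionStrings>
</configuration>
```

Plain text, sits next to the executable, no registry involved - which is what makes it straightforward to carry over to *nix.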

Config files for server applications can "inherit" base config files: first the base config file is applied, then the more specific one. The specific config file can remove, replace, change or add items in configured collections, unless the base file explicitly forbids it.

Comment Re: RIP Java! (Score 1) 525

Can you explain?

I'm not the GP, and I'm a self-proclaimed C# fan, but: the Java collections seem to have been better thought out from the beginning, with abstract types (interfaces) for different kinds of collections - bag, list, set, stack, queue, vector, etc. - and then concrete implementations with separate characteristics, such as hashed or sorted. .NET is catching up, especially in the 4.x versions, but Java (IIRC) still has proper priority queues, which have no equivalent in .NET.

If you see comparisons between .net and java, it's usually that the past 10 years .net has evolved and java sometimes catches up a tiny bit.

Agreed.

I always thought that java collections were weaker since in .net even an array is also still a collection, they have collections for just about anything you need, and with LINQ you've got an incredibly powerful way of manipulating/creating/accessing collections.

I always found the Java collections a bit stronger conceptually. For instance, it really bothered me that .NET had no hashed set (it does now), and I had to play tricks with Hashtable, using the same value for key and value to mimic a set. It was particularly annoying as I went from C++ to Java to C#. Java seemed to have lifted its collections from the STL, where they had been very well designed. The C# collections always struck me as having been "thrown in there". Thankfully they have improved a lot since then.
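That Hashtable trick translates to any language with a hash map; here's the same idea in Python (using a plain dict where the old .NET code would have used Hashtable):

```python
# Storing each element as both key and value gives set semantics -
# O(1) membership tests and no duplicates - using only a hash table.
fake_set = {}
for item in ["a", "b", "a", "c"]:
    fake_set[item] = item            # same value for key and value

assert sorted(fake_set) == ["a", "b", "c"]   # duplicates collapsed
assert "b" in fake_set                       # O(1) membership test
assert "z" not in fake_set
```

It works, but you carry redundant values around and get none of the set operations (union, intersection) for free - which is why a real HashSet type was such a welcome addition.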

and with LINQ you've got an incredibly powerful way of manipulating/creating/accessing collections.

The value of LINQ can hardly be overstated. A large part of most code is actually manipulating collections, and LINQ is just awesome for that. There's also the fact that C#/.NET generic collections were always properly reified, unlike Java's fake generics (type erasure), which cause all kinds of strange corner cases and problems. C# generic collections allow primitive types as type parameters, always without the performance loss of runtime downcasting as in Java.

Comment Re:Are renewable energy generators up to task ? (Score 1) 488

This is Denmark, yes? You know, the country that is surrounded by oceans that have some of the strongest tides? I think Denmark could produce almost all of it's power though tidal power plants. The only real trick is how to buffer the power during the lull of high and low tide.

You are mostly correct that solar (photovoltaic) is a dumb idea here, but there are more renewable power sources than solar and wind.

There is no tide to speak of in Denmark. I'm not sure we'd classify the sea between the islands (Denmark is basically an island nation) as "oceans". The tides are usually 1 m or less - most pronounced in the western part facing the North Sea, much less pronounced in the eastern parts on the Baltic Sea.

But with the flat topography and the fact that most of Denmark is islands, there's a *lot* of coastline, and wind is much preferred as a renewable energy source here. I don't think people realize how much it blows here. Damned wind!

It is correct that generating most energy from wind runs the risk that prolonged periods of high pressure (which means little wind and clear skies == freezing cold during winter) may not provide enough wind to meet demand.

Another problem is that in large parts of Denmark (e.g. the entire Copenhagen metropolitan area) most households get their heating from centralized "surplus" heat from electricity production - currently from burning coal.

It is commendable not to waste the heat, and as you can probably imagine, Denmark has a huge investment in this centralized heat distribution system.

But I'd like to know, where will we get the heating from once electricity is produced from wind and solar?
