


Comment: Re:not the point (Score 1) 336

by benjymouse (#48930397) Attached to: Why Screen Lockers On X11 Cannot Be Secure

You download a program that appears legit (and may be mostly legit, or be a hacked version of a legit program), and are running it.

But why would I do that?

Ok, try this: You browse the Internet using Firefox. Lots of vulnerabilities are discovered each month, 4 remote code executions already in 2015. An attacker has infected an ad or a legitimate or fringe site you visit. Attack code executes and the attacker now runs his code in your Firefox. The malicious code hooks into X. The code can intercept the lock screen, but it can *also* monitor each and every keystroke entered into ANY other window - including terminal windows - without you noticing. Lock the screen and unlock it and your password is compromised. Run a sudo in a terminal window and you are pwned!

How's that?

Comment: Re:not the point (Score 1) 336

by benjymouse (#48930387) Attached to: Why Screen Lockers On X11 Cannot Be Secure

Yes, that is exactly my point.

Nice try. But no, you are BSing.

Scoth: "Windows has had the ctrl-alt-del to log in/unlock since literally the first version of Windows NT, 3.1, in 1993. "

You: "In 1993, Windows didn't have an NT kernel."

AC: "In 1993, Windows NT 3.1 was released. Not to say that the non-NT product line ended at the same time."
(AC factually correct here: Windows NT 3.1 was released in July 1993)

operaghost: "Windows NT 3.1 didn't have an NT kernel? Color me confused. No, scratch that-- color you wrong."

You: "Go to a typical computer store in 1993, ask for Windows, and they wouldn't give you an NT kernel."
(now you try to deflect; why bring in the "typical computer store"? the issue was *Windows NT*)

So, your claim was that Windows NT didn't have an NT kernel. The TFA was about Windows NT, and Windows NT certainly HAD the NT kernel, it certainly HAD the "attention sequence" Ctrl-Alt-Del, and it certainly WAS released and available.

And you are dishonest.

Comment: Re:If it's accessing your X server, it's elevated (Score 1) 336

by benjymouse (#48930197) Attached to: Why Screen Lockers On X11 Cannot Be Secure

I'm not familiar with writing apps for X, but are you saying that every program that displays a window in X can log all keystrokes including in windows that are not associated with that program?

Yes. This isn't just X, by the way; it's a common design across most operating systems. Any client can register to receive keyboard and mouse input regardless of the current focus, unless another client has already "grabbed" the input device.

Except in Windows. Since Vista, User Interface Privilege Isolation (UIPI) prevents unauthorized processes from grabbing keyboard/mouse events or sending messages to windows owned by another process, even if that process is running as the same user. To be allowed to grab keyboard/mouse, the process must have declared that intent in its manifest *and* it must have been launched from an installed location (Program Files or the Windows system directory). Furthermore, such hooking/messaging is also masked out at the intrinsic level by UAC - specifically by integrity levels. A lower-integrity process is simply not allowed - manifest or not - to send messages or install keyboard/mouse hooks into a higher-integrity process.

X is especially bad in this regard, as it does not even protect against shatter attacks and eavesdropping on windows from *another user's* processes. If you elevate to root - e.g. sudo from a terminal window - any other process can *still* eavesdrop on the keyboard events.
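The broadcast design described above can be shown with a toy model - this is an illustrative Python sketch of the delivery semantics, not real Xlib code, and all names are made up:

```python
# Toy model of X11-style input delivery: any client that registers for
# key events sees every keystroke, regardless of which window has focus.

class ToyXServer:
    def __init__(self):
        self.clients = []

    def register(self, client):
        # Any client may ask for global key events; no privilege check.
        self.clients.append(client)

    def key_press(self, key, focused_window):
        # The server delivers the event to *every* registered client,
        # not just the one owning the focused window.
        for client in self.clients:
            client.log.append((focused_window, key))

class Client:
    def __init__(self, name):
        self.name = name
        self.log = []

server = ToyXServer()
terminal = Client("terminal")
keylogger = Client("compromised-firefox")
server.register(terminal)
server.register(keylogger)

# The user types a sudo password into the terminal window...
for ch in "hunter2":
    server.key_press(ch, focused_window="terminal")

# ...and the compromised client saw every keystroke anyway.
stolen = "".join(key for _, key in keylogger.log)
print(stolen)  # hunter2
```

The point of the sketch: nothing in the model ties event delivery to focus or to the owning process, which is exactly the property that makes the X11 keylogger attack work.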

Comment: Re:not great, but probably not very important eith (Score 1) 105

by benjymouse (#48786493) Attached to: Sloppy File Permissions Make Red Star OS Vulnerable

Some alternatives sound nice but fail horrifically when they come in contact with people, especially the ones that let anyone within a group grant access to others with zero oversight.

An access control system where everyone (with access?) can grant access to others sounds bad. However, I don't think that's the only alternative to me-us-everyone rwx. In fact, I don't know of such a system at all. You usually need to be the owner of a resource (or in the "owners" group) to grant privileges in a DAC system. Some systems also allow owners to grant specific rights on the security attributes to non-owners - i.e. the right to grant access.

Within a short period of time with such a "everyone can grant or deny access" scheme you end up with almost everything wide open

How about a system where only owners or designated security administrators can grant/deny access? The issue here was that a developer *wanted* access to a file from a non-owner and non-group member account. Lacking finer grained ACLs, that leaves only "everyone".

It sounds like you believe that discretionary access control (DAC) is the alternative to Unix filesystem permissions. It's not. Unix filesystem permissions are themselves a DAC system, albeit a very limited one. DAC only means that the owner of a resource (or a designated security administrator of it) can grant access to others. Because the creator of a file is often considered the owner, creators can often grant access to whom they choose.

However, if a user has been granted "read" access to a resource, he usually cannot grant it to someone else unless he is the owner. Do you know of a system where, by default, you can grant the same permissions that you have been granted?
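The owner-only granting rule can be sketched in a few lines - a minimal illustrative model, with made-up names, of how a typical DAC system stops grantees from re-granting:

```python
# Minimal sketch of a DAC model: only the owner of a resource may
# change its ACL, so a mere "read" grantee cannot pass the right along.

class Resource:
    def __init__(self, owner):
        self.owner = owner
        self.readers = {owner}

    def grant_read(self, grantor, grantee):
        # DAC rule: the ACL is controlled by the owner, not by grantees.
        if grantor != self.owner:
            raise PermissionError(f"{grantor} is not the owner")
        self.readers.add(grantee)

doc = Resource(owner="alice")
doc.grant_read("alice", "bob")        # owner grants: OK

denied = ""
try:
    doc.grant_read("bob", "mallory")  # grantee re-grants: denied
except PermissionError as e:
    denied = str(e)

print(sorted(doc.readers))  # ['alice', 'bob']
```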

Comment: Re:not great, but probably not very important eith (Score 2, Informative) 105

by benjymouse (#48786099) Attached to: Sloppy File Permissions Make Red Star OS Vulnerable

This kind of exploit, a local privilege escalation exploit, used to be very significant, but is significant in a declining number of cases, as old-style Unix multiuser systems are a smaller and smaller proportion of systems.

An attacker who has exploited a Firefox vulnerability (there are still many found and patched each month) is running as a *local user* on your machine. Trying to explain these types of vulnerabilities away is disingenuous, if not downright complacent.

Unix/Linux's permission system is 70s-era, bit-saving stupid. There is no other way to put it.

While this is clearly a mistake by someone packaging the distro, they were certainly not helped by a system in which you cannot adequately express permissions. ACLs are available, but they are still kludges, and they feel like a bolt-on, with many tools still not recognizing them.

When a developer hits the limit of what can be expressed with a single-group me-us-everybody model, he will often look for the path of least resistance. Unfortunately, that usually means relaxing permissions along the coarse-grained me-us-everyone axis, often ending up at "everyone", as in this case.
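The failure mode above is easy to demonstrate with the actual mode bits - a small sketch using Python's stdlib `stat` constants, with an illustrative scenario:

```python
# Why owner/group/other forces over-sharing: with only three permission
# slots, granting read access to one extra account (outside the file's
# single group) leaves "other" as the only remaining lever.
import stat

def can_read(mode, is_owner, in_group):
    if is_owner:
        return bool(mode & stat.S_IRUSR)
    if in_group:
        return bool(mode & stat.S_IRGRP)
    return bool(mode & stat.S_IROTH)

mode = 0o640  # rw-r----- : owner and group members can read

# A developer account that is neither the owner nor in the group:
assert not can_read(mode, is_owner=False, in_group=False)

# The only way to let that one account in is to open the file to everyone:
mode |= stat.S_IROTH  # now 0o644 - *every* account can read
assert can_read(mode, is_owner=False, in_group=False)
print(oct(mode))  # 0o644
```

A per-user ACL entry would have granted exactly one account read access; the bit triplets cannot express that, which is the path-of-least-resistance problem described above.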

Comment: Re:CryptoWall (Score 1) 463

by benjymouse (#48733661) Attached to: Writer: How My Mom Got Hacked

Incremental is the worst scheme for restoring: you need the last full backup and *all* incrementals since it. Differential is better in that you need the last full and *one* differential. What I think you really mean is versioned backups (not over-written): you can restore from Tuesday's backup (whether full, differential, or incremental is irrelevant), and Tuesday's won't be wiped when Wednesday's is written.
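The restore-time difference can be sketched as follows - an illustrative model assuming one backup per day, with made-up dates:

```python
# What each backup scheme needs at restore time.

def restore_set(kind, backups):
    """backups: list of (day, type) tuples in chronological order."""
    last_full = max(i for i, (_, t) in enumerate(backups) if t == "full")
    if kind == "incremental":
        # Last full plus *every* incremental after it; losing any one
        # link breaks the whole chain.
        return backups[last_full:]
    if kind == "differential":
        # Last full plus only the *latest* differential.
        return [backups[last_full], backups[-1]]

week_inc = [("sun", "full"), ("mon", "inc"), ("tue", "inc"), ("wed", "inc")]
print(restore_set("incremental", week_inc))
# needs sun + mon + tue + wed

week_diff = [("sun", "full"), ("mon", "diff"), ("tue", "diff"), ("wed", "diff")]
print(restore_set("differential", week_diff))
# needs only sun + wed
```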

Windows Image backup does *reverse* incremental: an image of the disk is stored as a vhd (virtual hard drive) along with reverse increments so that previous versions can be reconstructed. You can attach the vhd and use the "previous versions" feature to go back in time.

Comment: Re:I'm a Java developer (Score 2) 421

by benjymouse (#48646125) Attached to: Ask Slashdot: Is an Open Source<nobr> <wbr></nobr>.NET Up To the Job?

With the open sourcing of .NET, I wonder how far they've gone. Is it the exact same runtime used on Windows, now fully open sourced like the JVM?


Was the entire .NET platform open sourced, or just a subset?

The entire *server* stack - i.e. everything you need to run a .NET server application. They have even created a small-footprint web server, Kestrel, for Linux, based on libuv. The reason for libuv actually touches on a very important aspect/advantage of modern .NET (and to some extent, Windows Server). More on that below.

Doesn't .NET require IIS to run web apps?

No. You have *always* been able to self-host the ASP.NET bits. However, MS have taken it a step further and completely separated out the bits of the pipeline so that you can pick and choose. For a long time there have been plugins for Apache httpd and others that would allow you to run Mono. Those will work fine regardless of whether ASP.NET is provided by Mono or MS. Kestrel is mentioned above, but you can use any other host. ASP.NET vNext is "pluggable".

How will you run a .NET web app on Linux?

curl -sSL https://raw.githubusercontent.... | sh && source ~/.kre/kvm/

In the Java world, the entire platform and runtimes are open source.

In the .NET world, the entire platform and runtimes are open source, and the platform specification is governed by international standards organizations (ECMA and ISO).

Microsoft grants patent licenses for anyone who wants to create implementations of the specifications, and Microsoft *specifically* does not require paid testing suites and they do NOT assert that using the APIs constitutes copyright infringement.

And now for some reflections on the differences: Microsoft's stack - especially with the latest .NET and Windows Runtime - has grown to become completely focused on asynchronous programming. Windows (the NT line) has had "overlapped IO" since the initial version, a very high-performing "completion"-oriented async model for all types of IO. While this model could yield much better scalability, to leverage it you had to program in a "callback" style that was often at odds with how you think about a problem (sequentially), as well as a poor match for constructs such as exception handling, looping/branching, etc.

With C# 5.0 (and the equivalent VB.NET) async became an integrated feature of the language. This is not about smart synchronization primitives, multithreading or similar "low level" concepts. This is about having a language that effortlessly allows a programmer to express a sequential problem in a way that allows asynchronous processing all the way down to the system level, where overlapped IO will be used - without invading the way the solution is expressed.

This is huge. I am aware of only one other ecosystem that does something similar: node.js. Python has the capability, but there's no ecosystem built around it where the capability is the default way to design libraries and APIs.

In terms of enabling and supporting async programming style, C#, .NET (and F#) is the most mature option out there, along with the "new" kid node.js.

Java only recently acquired the ability to process web requests asynchronously (yielding the thread to process other requests) - but the language and APIs make it exceedingly hard to leverage this capability for anything useful. If you look up articles on how to do async in Java you will notice a strange tendency to "do nothing" while waiting for an asynchronous operation to complete. If you cannot do anything useful in the meantime, there really is no point.

In .NET I can follow a few basic rules: mark WebAPI methods / MVC actions as async, return Task&lt;T&gt; instead of some type T, and use await whenever waiting for something like a network request, database query, file IO, etc. Then ALL of the processing will be asynchronous, all the way down to where the asynchronous capabilities of the underlying operating system are used. No multi-threading just to wait for completion. Much less overhead, and much easier tuning of the application (you generally do not need to compensate for IO blocking by spawning more threads).
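The same "sequential code, asynchronous execution" idea exists in Python's asyncio, which is the closest stdlib analogue to C#'s async/await - a sketch with made-up handler and query names:

```python
# The handler reads top-to-bottom like blocking code (branches, awaits,
# returns), yet a single thread interleaves both requests while each one
# awaits "IO". No callbacks, no thread-per-request.
import asyncio

async def fake_db_query(n):
    await asyncio.sleep(0.01)   # stands in for real non-blocking IO
    return n * 2

async def handle_request(n):
    # Sequential-looking logic around an await point.
    result = await fake_db_query(n)
    if result > 2:
        result += 1
    return result

async def main():
    # Two "requests" processed concurrently on one thread.
    return await asyncio.gather(handle_request(1), handle_request(2))

results = asyncio.run(main())
print(results)  # [2, 5]
```

As in the C# case, the async nature never invades the control flow: exception handling, loops, and branches all work unchanged around the await.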

It is going to be interesting to see how the platforms stack up when on an equal footing. My bet is that even on Linux it will be *much* easier to write truly scalable applications with .NET than with Java, Ruby, PHP, etc.

Comment: Re:Why is the signing useful (Score 1) 80

by benjymouse (#48570353) Attached to: New Destover Malware Signed By Stolen Sony Certificate

Expect this certificate to be revoked in the near future. This will close that avenue, and cause all machines to refuse to load the malware drivers signed by the cert.

And cause all machines with legitimate Sony drivers (if there is such a thing?) signed with the same cert to refuse to load those too.

Unfortunately, yes. Sony will have to re-issue those legitimate drivers and sign them with a new cert. That is actually a good reason why a code signing certificate for widely distributed software absolutely should reside within an HSM, which makes the private key practically impossible to steal.

Comment: Re:Why is the signing useful (Score 2) 80

by benjymouse (#48564323) Attached to: New Destover Malware Signed By Stolen Sony Certificate

What benefit does the attacker get by signing the malware with a company's certificate?

Windows has a mechanism where kernel-mode drivers must be signed. For certain mandatory, early-load drivers (e.g. anti-malware tools, measured boot tools) the drivers must be signed by Microsoft. But Windows allows other kernel-mode drivers to be loaded as long as they are signed using a valid, non-revoked code-signing cert from (IIRC) Verisign.

Kernel-mode drivers can obviously access memory in kernel mode. This is a common way for malware to gain a foothold on a Windows machine. It is really hard to ensure that malware is executed during boot otherwise.

Expect this certificate to be revoked in the near future. This will close that avenue, and cause all machines to refuse to load the malware drivers signed by the cert.

Comment: Re:Here come the certificate flaw deniers....... (Score 1) 80

by benjymouse (#48564211) Attached to: New Destover Malware Signed By Stolen Sony Certificate

In practice, a certificate is nothing more than a long password that's impossible for a normal human to memorize. So it ends up in a file somewhere, if not several "somewheres", where it can be easily stolen.

If certificates are used correctly they are stored in some kind of certificate store where they cannot just be "stolen".

In the Windows certificate store, when you import a certificate, the default is to mark the key "non-exportable". Non-exportable means that you'll never get the key out of that store - at least not from your user context (given that it is stored encrypted but on the local disk, a "root" user with access to physical disk sectors could theoretically reconstruct the key - but not without running with severely elevated privileges).

You can still use the certificate to sign with - but you'll need to go through the crypto API, which asks the certificate store to perform the signing without giving the private key away. This works even if the key is held in a connected hardware security module (HSM), which adds a stronger guarantee that private keys *never* leave the device.

For better security you *should* use the cert store to generate the non-exportable private key to begin with. The certificate can still be signed by an external entity like Verisign - without the private key ever leaving the secured store.
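The sign-without-export contract can be modeled in a few lines - a deliberately simplified toy (HMAC stands in for real asymmetric signing, and in Python nothing truly hides an attribute; a real store or HSM enforces this in hardware or kernel code):

```python
# Toy model of a "non-exportable" key: the store signs on your behalf
# but refuses to hand the private key out.
import hashlib
import hmac
import os

class KeyStore:
    def __init__(self):
        self._key = os.urandom(32)   # generated *inside* the store

    def sign(self, data: bytes) -> bytes:
        # Signing happens inside the store; the caller only ever sees
        # the signature, never the key material.
        return hmac.new(self._key, data, hashlib.sha256).digest()

    def export_key(self):
        raise PermissionError("key is marked non-exportable")

store = KeyStore()
sig = store.sign(b"driver.sys")
assert store.sign(b"driver.sys") == sig   # stable, repeatable signatures

exported = True
try:
    store.export_key()
except PermissionError:
    exported = False
print(exported)  # False
```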

There is no excuse for having the private key stolen. The private key of a certificate used to sign software/drivers from a corporation like Sony should *definitely* have been created by an HSM, with a guarantee that the key never leaves that HSM. There are well-known products that will still allow you to load-balance HSMs, synchronize, and take backups, where the key will only ever leave the box in an encrypted container that will only be understood by a box that has been paired with the originating HSM/cert store.

Comment: Re:How many bozos are screaming that Windows is sa (Score 2) 131

So many ppl come here and post that Windows is not only safe, but that it is targeted because of numbers. Yet, it is obvious that NSA and GCHQ targeted Windows. Why? I doubt that it was numbers, but ease of cracking.

If your targets use Windows it would be a real stroke of genius to distribute attacks against Linux, don't you think?


So, in the meantime, how many companies will start switching to *nix?

What is the *nix equivalent to secure boot? Signed kernel modules? What is the *nix equivalent to Measured Boot and Network Access Protection? How does an organization automatically and immediately detect and isolate potentially infected hosts?

Every operating system out there will experience exploitable vulnerabilities. Applications running on top of the operating systems will experience exploitable vulnerabilities. The most recent severe vulnerabilities that have been mass exploited are *nix vulnerabilities like Heartbleed and Shellshock. No operating system is immune.

That's why defense in depth is important. Windows starts its defenses before boot, by using Secure Boot. This ensures that only approved bootloaders run. It prevents bootkits. Some Linux distros support a weak form of secure boot (it doesn't protect all types of resources; notably, scripts and config files are not digitally signed). Windows loads all kernel components from signed "cabinet" files - protecting all assets used during boot. If a rootkit tampers with any of the files, the system will refuse to boot.

During boot, before loading *any* kernel module, Windows will compute a hash of the module and record it in the TPM hardware module along with name, size, dates and other metadata. Upon successful boot (but before other hosts will accept traffic from the system) the OS asks the TPM for a signed "health" record. The TPM will issue a signed document with all the recorded info that the host can present to a health certificate server. The health cert server can investigate the list of loaded modules and compare it against known whitelists and/or blacklists. If everything checks out, the health cert server issues a certificate the booting host must use when communicating with other hosts. Unless it can present such a cert, the other hosts will refuse to communicate with it.
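The core of that measurement step is the TPM "extend" operation, which can be sketched with plain hashing - an illustrative model with made-up module names, not the exact TPM 2.0 wire format:

```python
# Measured boot in miniature: each loaded module's hash is folded into a
# PCR as pcr = SHA256(pcr || SHA256(module)), so the final PCR value
# commits to the exact set of modules *and their load order*.
import hashlib

def extend(pcr: bytes, module: bytes) -> bytes:
    measurement = hashlib.sha256(module).digest()
    return hashlib.sha256(pcr + measurement).digest()

pcr = bytes(32)  # PCRs start zeroed at reset
for module in [b"bootmgr", b"ntoskrnl", b"disk.sys"]:
    pcr = extend(pcr, module)
good_boot = pcr

# A tampered module (or the same modules in a different order) yields a
# different PCR, so the signed "health" quote won't match the whitelist.
pcr = bytes(32)
for module in [b"bootmgr", b"rootkit", b"disk.sys"]:
    pcr = extend(pcr, module)

print(pcr != good_boot)  # True
```

Because `extend` is one-way and order-sensitive, malware cannot "un-measure" itself or rearrange measurements to forge a clean boot record.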

Does 'Nix support such security in depth?

Such targeted attacks will target whatever operating system is being used by the target. Targets must consider the possibility that any host can be breached through an application or OS vulnerability. With that recognition, they must ensure expedient diagnosis and isolation. In that area, a Windows server infrastructure can be set up to become extremely strong.

Comment: Re:Attackers take control of websites? (Score 1) 41

by benjymouse (#48450333) Attached to: Critical XSS Flaws Patched In WordPress and Popular Plug-In

"New security updates released for the WordPress .. fix cross-site scripting (XSS) vulnerabilities that could allow attackers to take control of websites ."

Embedded javascript in a comment box could trigger exploits on Microsoft Internet Explorer running on Microsoft Windows desktops.

Source? Or just trolling?

Comment: Re:Highly advanced computer worm? (Score 1) 143

by benjymouse (#48450303) Attached to: Highly Advanced Backdoor Trojan Cased High-Profile Targets For Years

This 'highly advanced' computer worm will only work on Microsoft Windows:

It is not a worm. It is a trojan, i.e. the user has to invite the trojan (the "dropper") inside for it to work.

A worm is an automated infection which propagates automatically from system to system. Like the Shellshock worms, Code Red, Nimda.

Any particular reason you chose to call it a worm, despite that it was described as a trojan in the summary as well as in TFA?
