
Comment Re:Microsoft is wasting people's time (Score 1) 346

What MS could have done to make it a bit better is to allow the standard vertical scrollwheel most mice come with nowadays to scroll the start screen; down = right, up = left (because you always started at the TOP of the start menu, naturally you'd scroll DOWN for more, while the start screen starts at the LEFT, requiring you to scroll RIGHT for more).

What the hell's wrong with your system? That's exactly what it does, at least on 8.0 (haven't "upgraded" to 8.1 because they cut a feature that I regularly use). My mouse wheel is not capable of side-scroll, but I just tested with vertical scrolling and it works exactly like you describe.

Of course, the reason I had to test that is because it's not something a sane person should ever need to do. You have a keyboard, right? Type the first few letters of the program name (or type the file name), hit Enter, and behold the launching of your program. Just like you've been able to do for the last 7+ years, since fucking Vista (to say nothing of Win7).

Comment Re:Happy to let someone else test it (Score 1) 101

The sad thing is, NT itself has (or rather, had) a POSIX API. Up through Win8 (but not 8.1) you can actually get a basic but functional *nix environment running on NT natively (or as natively as NT runs Win32, at least, which is to say it works pretty much seamlessly and nobody but a handful of hacker-types cares about the underlying guts). Shells, libraries, utilities, GCC-based build toolchain... pretty nifty, and it integrates better with Windows than Cygwin ever has, while also being faster and supporting things that Cygwin doesn't (setuid, etc.)

However, Microsoft has seen fit to stop funding the maintainers of the package repo for it (there are third-party repos - NetBSD has one, last I checked - but SUACommunity/InteropSystems was where you went for most of this stuff) and to discontinue the POSIX subsystem entirely as of NT6.3 (Win8.1). Very irritating. They say to use Cygwin instead, which is technically a viable option for most of what I use SUA/Interix for, but it's not one I'm happy about needing to take (and move everything over to).

Comment Re:The Internet Needs More Random Data (Score 1) 353

Better yet, "valid" ASCII-armored PGP blobs (or PGP attachments) that don't actually contain any decryptable data, but are otherwise indistinguishable from a real blob. Put a random key ID in there, cycle it every time, claim they are one-time use if anybody ever reaaaaally gets on your case about it.

Comment Re:Involuntary inability to comply (Score 1) 353

I have well over 100 passwords for various accounts (I have no real idea how many I'm up to by now). I can probably remember about 50 of them off the top of my head, and can make educated guesses at many of the rest... but unless it's a password I use regularly, I'd need to check my password keeper (which I do not keep on my phone or in physical form anywhere) to be sure. Some of the more obscure ones I wouldn't have a chance at, and it would probably take me at least a few tries to remember which one was for something I hadn't used in over two years.

Comment Mitigations (Score 4, Interesting) 68

Sorry to self-reply, but I figured I should add some mitigations (for those who don't RTFA...)

First of all, as a user, one can of course disallow Flash by default (or remove it entirely). Mechanisms for doing this vary by browser, but all major browsers have at least one.
You can also update Flash. The latest version, released today (Tuesday the 8th), tightens up the validation of "compressed" applets in such a way that it should catch the output of this "Rosetta Flash" program.

For webmasters / developers, there are a few options.

  • You can host your JSONP service on a different (sub)domain from your sensitive data. This is most effective if the JSONP responses are either static or protected by a CSRF token for accessing the user data.
  • You can add the string /**/ to the beginning of the JSONP response body, right before the callback identifier (this is what Google and GitHub are doing, for example). This will be ignored by the browser when it's treating JSONP as JavaScript (a 0-character comment) but will break the reflected-Flash-applet attack because the start of the response body no longer contains the magic number for any kind of Flash applet.
  • You can add an HTTP response header like Content-Disposition: attachment; filename=f.txt to the JSONP responses, which will prevent all reasonably recent versions of Flashplayer from executing the applet (a rough sketch combining these response-level mitigations follows this list).
  • You can add the HTTP response header X-Content-Type-Options: nosniff to all vulnerable responses (or just all of them), and then make sure that you specify the correct Content-Type header (it should be Content-Type: application/javascript, although application/json, while technically incorrect, will probably work too). This header forces most browsers to pay attention to the server-provided content type, rather than letting the web page specify one or guessing from the content itself.
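
To make the response-level mitigations concrete, here's a minimal sketch of a hardened JSONP endpoint using Node's built-in http module. The endpoint path, port, and callback whitelist are my own illustrative assumptions, not anything from the article:

const http = require('http');
const url = require('url');

http.createServer((req, res) => {
  const query = url.parse(req.url, true).query;
  // Only allow conservative callback names (letters, digits, dots, underscores).
  const cb = /^[\w.]+$/.test(query.callback || '') ? query.callback : 'callback';
  res.writeHead(200, {
    'Content-Type': 'application/javascript',            // correct type for JSONP
    'X-Content-Type-Options': 'nosniff',                  // make browsers respect that type
    'Content-Disposition': 'attachment; filename=f.txt'   // stop Flashplayer from running it
  });
  // The /**/ prefix means the response no longer starts with a Flash magic number,
  // while the browser just sees an empty comment before the callback.
  res.end('/**/' + cb + '(' + JSON.stringify({ foo: 'bar' }) + ')');
}).listen(8080);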

Hope that helps!

Comment I'll take a shot at it. (Score 5, Informative) 68

I don't know about English, but I can produce an explanation that is understandable by most people with at least some knowledge of how the web works, hopefully... It's not going to be short or simple, but I'll at least try for clear.

JSONP is a web service communication method. The idea is that a client (a web browser) sends a request to a given URL, and in that URL they include a "callback" parameter. The response from the server is a blob of JavaScript starting with the callback parameter (as a function name), and then containing additional data (as a JSON-defined object, usually). Examples:
A target URL that looks like this:
https://vulnerablesite.com/jsonp_service/some_endpoint?callback=jsonp.handle_some_endpoint
Produces a request like this (no body, and some headers omitted for brevity):
GET /jsonp_service/some_endpoint?callback=jsonp.handle_some_endpoint HTTP/1.1
Host: vulnerablesite.com
Cookie: VulnerableSiteSessionCookie=JoeBlowIdentificationValue ...

That produces a response like this (again, header details omitted):
HTTP/1.1 200 OK
Content-Type: application/javascript
Content-Length: 41
...

jsonp.handle_some_endpoint({"foo":"bar"})
The browser would then interpret that response as JavaScript, calling the named function.
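
In code, the consuming page would look something like this (a minimal sketch; the callback name and URL are just the illustrative ones from above):

var jsonp = {
  handle_some_endpoint: function (data) {
    // The server's response calls this function with the JSON payload.
    console.log(data.foo); // "bar"
  }
};
var s = document.createElement('script');
s.src = 'https://vulnerablesite.com/jsonp_service/some_endpoint' +
        '?callback=jsonp.handle_some_endpoint';
document.body.appendChild(s);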

Now, this looks risky, but normally it's safe enough. An attacker could embed a <script src="https://vulnerablesite.com/jsonp_service/some_endpoint?callback=jsonp.handle_some_endpoint" /> tag that specifies an arbitrary callback name (which then gets executed as JS), but there's nothing really dangerous they can do with that, because the server will strip characters that are sensitive in JS (things like ( ) = ' " < >) from the callback name, so you can't actually embed arbitrary JavaScript in the response. Usually the attacker doesn't control the rest of the response (the JSON blob) either, or at least can't make it be anything except JSON (which is normally pretty harmless). For example, the attacker could pass "alert" as the callback, in which case the victim gets a message box saying "[object Object]" or similar. Whoop-de-do.

OK, so the attacker can't do much just by invoking a script with an arbitrary callback name. However, Flashplayer can execute applets in a number of formats, including formats that are theoretically compressed. I say "theoretically" because there's actually nothing requiring the data to be "compressed" in any even vaguely efficient manner (which tends to produce dense blobs of seemingly-random binary values). Instead, it's possible to create a "compressed" file that only contains alphanumeric characters (and is therefore valid as a callback name), but when it is "expanded" it produces an arbitrary binary blob (such as a compiled Flash applet).

So, here's what the attacker does. They create a malicious Flash applet. They run it through the special compiler this guy came up with, which converts it into a "compressed" applet format containing only characters that are valid in a callback name. They place an HTML object tag on their own, attacker-controlled website. The object specifies the JSONP service on the vulnerable site as its data source (the way one might specify YouTube's Flash applet as a data source), with the callback name set to the alphanumeric-format applet. The attacker also specifies that the type of the data is application/x-shockwave-flash.
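
In rough code, the attacker's page boils down to something like this (purely illustrative; the placeholder string stands in for the real alphanumeric applet bytes produced by the Rosetta Flash tool):

var alphanumericApplet = 'CWSplaceholderForTheAlphanumericAppletBytes';
var o = document.createElement('object');
o.type = 'application/x-shockwave-flash';
// The "callback" is really the applet; the vulnerable site reflects it back verbatim.
o.data = 'https://vulnerablesite.com/jsonp_service/some_endpoint?callback=' + alphanumericApplet;
document.body.appendChild(o);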

When a user visits the attacker's site, their browser sees the object tag and tries to retrieve the specified data. The response they get back is *actually* a JSONP script, but the first part of it - the callback function name - is *also* a valid Flash applet. Because the object tag specifies that the data type is Flash, the browser obligingly loads Flashplayer and runs the malicious applet (it ignores the ({"foo":"bar"}) blob at the end).

Now, here's the really mean part. Up to now, you may be wondering why they bothered to go to so much effort. I mean, wouldn't it just be easier to load the Flash applet from the attacker's site, instead of tricking the vulnerable site into returning a JSONP response that starts with a Flash applet? The thing is, browsers have a Same-Origin Policy. This is one of the core security features of the web. In a nutshell, it goes like this: if site X makes a request to site Y, and X and Y are not the same origin (meaning they differ in scheme, domain, or port number), then site X can't see sensitive data (like cookies, or headers, or HTML content) from site Y.

Therefore, if the attacker hosted the malicious applet on their own site (X), it could send a request to the target site (Y) but X wouldn't be able to see any of the juicy details (like the session cookie for site Y of the user visiting site X). However, this attack makes it seem that the applet is actually hosted on site Y. Therefore, as far as Flashplayer is concerned, a request to site Y is made in the same origin as the applet, and the applet therefore gets all the details it wants. Once the applet has those details, it can make a second request "back" to site X, telling it all the secret details. That then allows the attacker to do things like impersonate the victim user on site Y.

Comment Re:It's a solution looking for a problem (Score 1) 302

Given the chip's expected lifetime, a woman would have to replace it once, probably sometime in her late twenties, if she wanted to be protected from her teen years until menopause. Never mind the pill; that's a *huge* advantage over the next-longest-lasting implant (three-four years), much less the shots.

Comment Re:Why contraception (Score 1) 302

This chip dispenses ludicrously tiny amounts of hormones. You might be able to use it for people who take certain other kinds of medicine regularly - my father is on thyroid pills for the rest of his life, for example, and those might be deliverable by such an implant - but the sheer volume requirements of insulin make it completely impractical for such a use. A single insulin reservoir (for a pump) is bigger than this entire chip, and is only good for a matter of days. Unless the chip could somehow manufacture the chemical out of nutrients from your bloodstream - a literal artificial pancreas - there's no way a diabetic could get insulin from it long enough to justify the operation of installing the thing.

Now, an implantable CGM (Continuous Glucose Monitor) that doesn't require a transdermal sensor and could possibly be installed in such a way that it gets *current* data on blood sugar levels... that would be handy as hell. Current CGMs provide readings that have a significant delay, which makes using them to guide an insulin pump dangerous and unreliable (the bolus of insulin required when you eat would arrive late, and then possibly would overcompensate).

Comment Re:Read-Only Access to Avoid Paternity (Score 1) 302

Believe me, I would really like to see a reliable and reversible form of male contraception that didn't require a barrier. Granted, barriers are needed for STD prevention, but once past that point in the relationship (i.e. once we've been able to get tested) it would be really great to be able to just turn off (viable) sperm production entirely... without getting a probably-not-reversible surgical procedure, that is.

Unfortunately, for whatever reason (I am not any form of biologist or doctor), this seems to be hard. Maybe that's a false impression based on the amount of funding such research receives, maybe it's possible right now but has horrendous side effects, I don't know. All I know is that the only current option for birth control that a man can use requires a small monetary investment and a minor hassle every time you *might* ejaculate in your partner. Once-a-day pills wouldn't necessarily be any cheaper, but at least they could be taken at more convenient times and you wouldn't need to pop a second one to go for round two. Oh, and they wouldn't reduce sensation or constrict anything.

Sadly, the idea you call an "easy solution" is anything but easy. In fact, it's currently impossible.

Comment Re:better than what we have now (Score 1) 249

I'm actually curious how abusive the father really was. None of the articles I read talked about that part.

Let's start with the obvious: I have absolutely no faith (ha!) in the competence of the Catholic Children's Aid Society to judge whether or not a given couple are fit to be parents. He and his sisters were taken because, quote, "allegations of abuse were against their parents". One presumes there was some degree of due process there, but "allegations" by themselves would not normally be considered sufficient evidence of actual abuse. A different article I read said the parents were young (quite believable, given the grandmother's age) and the fact that they apparently kept on having babies even though they kept being taken away does suggest a certain degree of... let's say "lack of wisdom" on their part. The (maternal) grandmother is also borderline clinically retarded (IQ of 69), which doesn't speak well for the mother's probable intelligence.

Comment Re:In a watch, batteries should last a year or mor (Score 1) 129

Leaving aside the part of my brain that is trying to figure out whether you consider only a few showers a week acceptable or are just really fast about them, I've never understood the point of non-waterproof watches. The extra cost is trivial these days, and you don't have to worry about them in the rain, or the shower, or washing your hands, or swimming, or cooking, or... you get the idea. Granted, not everybody needs a watch good to 50m - I'm a SCUBA diver, but I have a dive computer so the watch is somewhat superfluous while diving - but you can get ones good for 10m (33' or so, about one extra atmosphere of pressure) easily enough. The last time I had a watch I had to take off when bathing I was... 8?

I do still have to take the thing off at the damn TSA checkpoints, but those are the only times I've taken it off in years. I think the battery is about eight years old?

Comment Re:Though crime is here! (Score 1) 185

Wow! You are *multiple* kinds of moron! Very impressive. Please don't breed.

1) You can't prove anybody is "about" to do something of their own volition. By your (idiotic) reasoning, I should be permitted to walk down the street with a loaded pistol in each hand, pointing one at anybody who gets within twenty feet of me or makes eye contact. After all, you can't prove I'm going to shoot anybody who doesn't attack me first!
2) Even if you can establish a reasonable proof that somebody *intended* to do something criminal, that doesn't actually constitute proof that somebody "really was about to do something." For example, suppose my buddy and I have a fantasy about raping a woman, then cutting her up and eating her. He starts dating this lady. I start buying chains and knives and cutting utensils. My buddy and I exchange emails wherein we discuss our plan: he'll bring her over for a barbecue at my place, we'll slip something into her drink, then we'll chain her down and so on. We set a date and get her agreement to come along; I buy the drugs. At the agreed-upon time, he shows up at my place with the discussed victim. Assuming I have ground beef and buns in the fridge, can you *prove* I'm not just going to change my mind and grill up some really good burgers instead?
3) Establishing proof is the responsibility of the court system. You arrest people on a reasonable suspicion. Now, I'm of the opinion that the justice system needs to compensate those it arrests when they're innocent (in the sense of making sure they get their stuff back and covering the value of anything lost due to the arrest), and that arrests made *without* reasonable suspicion should result in the officer(s) in question being arrested, or at least strongly disciplined, themselves, but that doesn't seem to be the way the government wants to run things.
4) Even regarding things in the past, there's generally no such thing as proof of who committed a crime. If I watch a guy shoot somebody, make a citizen's arrest until the cops get there, and they take him away... can I truly confirm that the person sitting handcuffed across the courtroom from me two weeks later is the same person I watched kill another person? I mean, he looks about the same as he did at the time, and he may have the same fingerprints as the guy the cops booked back then and as were found on the murder weapon... However, I don't know he doesn't have a twin or some other person who just looks very similar; I don't know that the cops actually fingerprinted the right guy; I don't know whether he was using a silicone layer or something to give false fingerprints; and even if I could compare the fingerprints myself, I don't have the training to tell how much is typical variation between multiple readings of the same person, so I have to take somebody else's word for it being the same, even though I don't know whether that person has been bribed, has some other incentive to lie, or is even competent to make that call themselves... True proof is impossible. That's why laws have defined terms like "a preponderance of evidence" and "beyond a reasonable doubt" and so on when discussing what it takes to convict people.

Comment Re:Aren't all SMS charges pretty much bogus? (Score 1) 110

I have friends or family on at least four different continents at any given time, and my parents are currently somewhere in Indonesia (I'm in the continental US). We keep in touch all the time... by email, IM, Skype, Google Voice numbers, and other zero-cost options. The last time I had to send an actual SMS to an international number was while I was in Finland and meeting a friend at the train station, and the cost was minimal. There was no $10 initial fee, either. Maybe that only applies if you're not already in the target country?

Relatedly, sending or receiving messages with people in the US was free no matter where I went in Europe, from France to Estonia. I used well over a thousand messages, many of them MMS, on that trip. No extra charges for them. I also got data (throttled, but without any cap I ever found in the course of a month) for free (well, as part of the standard plan; I didn't pay extra for it). That was excellent for IM and email, and also worked for Skype voice calls (and for looking up directions, finding restaurants, booking hotels, streaming music, updating apps, and so on).

That was all on TMoUS's standard $50/mo no-contract unlimited plan, which includes international roaming in most of the world (not just Europe). It's a really fantastic deal.

Comment Re:Intended Consequence? (Score 1) 178

Correct! It would be a remarkably stupid stack canary (which is a security measure) otherwise. Since the value would be the same on everybody's computer, you'd only need to find it once, and then, when you overflow the buffer, just be sure to write the canary value back as it was!

Instead, getting past stack canaries is considerably more difficult than that. It's possible, of course, with the right vulnerabilities... but it's *harder* and sometimes a program that would be exploitable without them (using the vulnerabilities known at the time) just isn't exploitable with them.

Comment Somebody much smarter than you, dbIII (Score 4, Informative) 178

The summary's description of PFS is a complete clusterfuck, of course (this is /. so *obviously* the summary is going to be technically inaccurate, right?). Yours (LordLimecat) is more accurate, but the full concept isn't that hard so I'll explain it below.

First, some quick basics of TLS (I'm leaving out a lot of details; do *NOT* try to implement this yourself!):

  • A server has a public key and a private key for an asymmetric cipher, such as RSA.
  • When a client connects, the server sends their public key to the client. The public key is used to authenticate the server, so the client knows their connection wasn't intercepted or redirected.
  • The client can also encrypt messages using the public key, and only the holder of the private key (the server) can decrypt those messages.
  • Because RSA and similar ciphers are slow, TLS uses a fast, symmetric cipher (like AES or RC4) for bulk data.
  • Before bulk data can be sent, the client and the server need to agree on a symmetric cipher and what key to use.
  • The process of ensuring that both parties have the same symmetric key is called Key Exchange.
  • Obviously, the key exchange itself needs to be protected; if the key is ever sent in plaintext, an attacker can decrypt the whole session (a rough sketch of the classic approach follows this list).
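
Here's that classic (non-PFS) key-exchange idea in a minimal sketch, using Node's crypto module. The key size and names are illustrative, and real TLS involves certificates and a lot more machinery:

const crypto = require('crypto');

// The server's long-lived key pair (in TLS this is tied to the certificate).
const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', { modulusLength: 2048 });

// The client picks a random symmetric key and encrypts it to the server's public key.
const symmetricKey = crypto.randomBytes(32);
const wrapped = crypto.publicEncrypt(publicKey, symmetricKey);

// Only the holder of the private key can unwrap it.
const unwrapped = crypto.privateDecrypt(privateKey, wrapped);
console.log(unwrapped.equals(symmetricKey)); // true

// The catch: anyone who records "wrapped" off the wire and later obtains the
// private key can recover the symmetric key and decrypt the whole session.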

Here's the scenario where PFS matters, and why it is "perfect":

  • SSL/TLS (same concept, just different versions of the protocol really) is being used to secure connections.
  • An attacker (think NSA) has been recording the encrypted traffic, and wants to decrypt it.
  • The attacker has a way to get the private key from the server (a bug like Heartbleed, or possibly just an NSL).

Here's where it gets interesting:

  • Without PFS (normal SSL/TLS key exchanges), the key exchange is protected using the same kind of public-key crypto used to authenticate the server. Therefore, without PFS, our attacker could use the private key material to either decrypt or re-create the symmetric key, and decrypt all the traffic.
  • With PFS, the key exchange is done using randomly generated ephemeral (non-persistent) public and private parameters (Diffie-Hellman key exchange). Once the client and server each clear their private parameters, it is not possible for anybody to reconstruct the symmetric key, even if they later compromise the server's persistent public/private key pair (the one used for authentication).

It is this property, where the secrets needed to recover an encryption key are destroyed and cannot be recovered even if one party cooperates with the attacker, which is termed Perfect Forward Secrecy. Note that PFS doesn't make any guarantees if the crypto is attacked while a session is in progress (in this case, the attacker could simply steal the symmetric key) or if the attacker compromises one side before the session begins (in which case they can impersonate that party, typically the server). It is only perfect secrecy going forward.
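
For contrast, here's a minimal sketch of an ephemeral Diffie-Hellman exchange using Node's crypto module. This isn't TLS (nothing here authenticates the exchanged values), but it shows why discarding the ephemeral private values leaves nothing on the wire that can reconstruct the key:

const crypto = require('crypto');

// Both sides agree on a well-known group (a prime and generator); this is public.
const server = crypto.getDiffieHellman('modp14');
const client = crypto.getDiffieHellman('modp14');

const serverPub = server.generateKeys();   // ephemeral key pairs, never stored
const clientPub = client.generateKeys();

// Each side combines its own private value with the other's public value.
const serverSecret = server.computeSecret(clientPub);
const clientSecret = client.computeSecret(serverPub);
console.log(serverSecret.equals(clientSecret)); // true: same symmetric key material

// Only the public values ever cross the wire. Once both sides throw away their
// private values, a recording of the exchange is useless to an attacker.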
