
Comment Re:This again? (Score 1) 480

OK, I'll try to restate this in my baby talk, since I don't remember it precisely.

Given that you are under constant acceleration, it appears to you that you just keep accelerating steadily, and time dilation acts on you. From your point of view you can reach your destination in a very short time, much shorter than light travel would seem to allow. To an outside observer, however, your clock runs at a different rate and you never reach light speed.
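
For concreteness, here's a rough Python sketch using the standard constant-proper-acceleration ("relativistic rocket") formulas; the 1 g acceleration and the 4.24-light-year distance are illustrative assumptions I've picked, not anything from the discussion above.

    import math

    c = 299_792_458.0      # speed of light, m/s
    g = 9.81               # assumed constant proper acceleration, m/s^2 (~1 g)
    ly = 9.4607e15         # metres per light-year
    d = 4.24 * ly          # illustrative distance (roughly Proxima Centauri)

    # Standard relativistic-rocket results for constant proper acceleration,
    # starting from rest (no deceleration phase at the end):
    tau = (c / g) * math.acosh(1 + g * d / c**2)   # time on the traveller's clock
    t = math.sqrt((d / c)**2 + 2 * d / g)          # time on the outside observer's clock

    year = 365.25 * 24 * 3600
    print(f"traveller's clock: {tau / year:.2f} years")
    print(f"outside observer : {t / year:.2f} years")

With those assumptions the traveller's clock reads roughly 2.3 years while the outside observer measures roughly 5.1 years, and the ship's speed stays below c the whole way.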

Comment Re:Almost... (Score 3, Interesting) 227

Every... Day.... :-/

I have a polite canned reply, which basically says that unless the recruiter's client is looking for developers to work 100% remotely, AND that their pay scales are likely to exceed Google's by a significant margin, AND that they do really cool stuff, then I'm not interested. Oh, and I don't do referrals of friends (they get plenty of spam themselves).

I don't actually mind the recruiter spam. It only takes a couple of keystrokes to fire the canned response, and there's always the possibility that someone will have an opportunity that meets my criteria. Not likely, but possible. I'm not looking for a new job, but if an opportunity satisfies my interest requirements, I'm always open to a discussion.

However, when they keep pushing even when they know their job doesn't fit my requirements, then I get pissed and blackhole their agency. That also takes only a couple of keystrokes :-)

Comment Re: Yes, but.. (Score 1) 324

That's one way. There are always other options. The key is to hook in at the layer that you're debugging. The wire is almost never that layer, unless you're debugging the network card driver. Or the hardware, but in that case Wireshark (or Ethereal, as I still think of it in my head) is usually too high-level.

Comment Where we need to get to call this real (Score 1) 480

Before we call this real, we need to put one on some object in orbit, leave it in continuous operation, and use it to raise the orbit by a measurable amount, large enough that there would be no argument about where the thrust came from. The Space Station would be just fine: it probably has sufficient power available for experiments, and it has a continuing need to raise its orbit.

And believe me, if this raises the orbit of the Space Station they aren't going to want to disconnect it after the experiment. We spend a tremendous amount of money to get additional Delta-V to that thing, and it comes down if we don't.
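
As a back-of-the-envelope Python sketch of what "a measurable amount" might look like: the ISS mass, the 400 km altitude, and the 1 N thrust figure below are assumptions picked for illustration, not claims about any actual device.

    import math

    mu = 3.986e14       # Earth's gravitational parameter, m^3/s^2
    r_e = 6.371e6       # mean Earth radius, m
    m = 4.2e5           # assumed ISS mass, kg (~420 t)
    F = 1.0             # hypothetical continuous thrust, N

    r1 = r_e + 400e3    # assumed starting circular orbit: 400 km altitude
    r2 = r1 + 1e3       # target: raise the orbit by 1 km

    # For a slow continuous-thrust spiral between circular orbits, the delta-v
    # is approximately the difference in circular orbital speeds.
    v1 = math.sqrt(mu / r1)
    v2 = math.sqrt(mu / r2)
    dv = v1 - v2

    a = F / m           # acceleration produced by the thruster
    t = dv / a          # burn time needed at that thrust

    print(f"delta-v for a 1 km raise: {dv:.2f} m/s")
    print(f"time at {F} N continuous thrust: {t / 86400:.1f} days")

With those made-up numbers, even a steady 1 N of thrust would raise the orbit by a kilometre in under three days — exactly the kind of unambiguous signal I'm asking for.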

Submission + - UMG v Grooveshark settled, no money judgment against individuals

NewYorkCountryLawyer writes: UMG's case against Grooveshark, which was scheduled to go to trial Monday, has been settled. Under the terms of the settlement (PDF), (a) a $50 million judgment is being entered against Grooveshark, (b) the company is shutting down operations, and (c) no money judgment at all is being entered against the individual defendants.

Comment Re:Also, stop supporting sites with poor encryptio (Score 1) 324

My bank still insists on using RC4 ciphers and TLS 1.0.

If Firefox were to stop supporting the bank's insecure website, it would surely get their attention better than I've been able to.

What bank is this? There's nothing wrong with public shaming in cases like this; in fact, it does the world a service.

Also, you should seriously consider switching banks. Your post prompted me to check the banks I use. One is great, one is okay. I'll watch the okay one.
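
If you want to check this sort of thing yourself, a minimal Python sketch looks something like the following; bank.example.com is a placeholder hostname.

    import socket
    import ssl

    host = "bank.example.com"   # placeholder; substitute the site you want to check

    # On a recent Python build, the default context already refuses RC4 and,
    # on newer versions, anything below TLS 1.2 -- so a handshake failure
    # here is itself a warning sign.
    ctx = ssl.create_default_context()

    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print("protocol:", tls.version())   # e.g. 'TLSv1.2' or 'TLSv1.3'
            print("cipher  :", tls.cipher())    # (name, protocol, secret bits)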

Comment Re:Yes, but.. (Score 1) 324

That said, if I'm debugging something a browser is doing, the developer console is usually better anyway.

Yes, it is, and the same holds everywhere. Being able to grab the data on the wire has long been an easy way to get sort of what you want to see, but it's almost never exactly what you're really looking for. HTTPS will force us to hook in elsewhere to debug, but the "elsewhere" will almost certainly be a better choice anyway.
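
As a concrete example of hooking in above the wire, here's a minimal Python sketch that logs the HTTP conversation at the application layer, so you see the plaintext exchange even though the transport is HTTPS. It assumes the third-party requests package is installed and uses example.com as a stand-in URL.

    import http.client
    import logging

    import requests   # third-party; assumed to be installed

    # Log the HTTP conversation where it's still plaintext, instead of
    # sniffing the (encrypted) wire.
    http.client.HTTPConnection.debuglevel = 1
    logging.basicConfig(level=logging.DEBUG)

    requests.get("https://example.com/")   # request/response headers are printed as they flow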

Comment Re:Paid Advertisement (Score 4, Insightful) 76

The OpenSSL codebase will get cleaned up and become trustworthy, and it'll continue to be used

Cleaned up and trustworthy? Unlikely. The wrong people are still in charge for that to happen.

Nonsense. The people running the OpenSSL project are competent and dedicated. OpenSSL's problem was lack of resources. It was a side project with occasional funding to implement specific new features, and the funders of new features weren't interested in paying extra to have their features properly integrated and tested. That's not a recipe for great success with something that really needs a full-time team.

Comment Re:when? (Score 1) 182

nobody is building Internet services that need several hundred megabits for reasonable performance

If there is not a lot of copper or fibre between the two endpoints, why not? It's only congesting a little bit of the network.

Perhaps I wasn't clear. I wasn't referring to the building of network connections; I was referring to the building of user services that rely on them. For example, YouTube is built to dynamically adjust video quality based on available bandwidth, but the range of bandwidths its designers considered does not include hundreds of megabits, because far too few users have that capacity. They have to shoot for the range that most people have.
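
As a toy illustration of that kind of design target, here's a Python sketch of a made-up rendition ladder; the labels and bitrates are invented for illustration and are not YouTube's actual values.

    # Hypothetical rendition ladder: (label, required throughput in Mbit/s).
    # Note that the ladder stops caring somewhere around 20 Mbit/s; a
    # 500 Mbit/s connection buys nothing more until the service adds higher tiers.
    LADDER = [
        ("240p", 0.7),
        ("480p", 2.5),
        ("1080p", 8.0),
        ("4K", 20.0),
    ]

    def pick_rendition(measured_mbps: float) -> str:
        """Return the best quality whose required bitrate fits the measured throughput."""
        best = LADDER[0][0]
        for label, required in LADDER:
            if measured_mbps >= required:
                best = label
        return best

    print(pick_rendition(6))     # -> '480p'
    print(pick_rendition(500))   # -> '4K' (everything past ~20 Mbit/s goes unused)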

But as that range changes, services will change their designs to make use of it. We don't really have any idea how things will change if multi-gigabit connections become the norm, but you can be certain they will. Just as programs expand to fill all available memory, Internet services expand to fill all available bandwidth. To some extent that's because more capacity enables laziness on the part of engineers... but it also enables fundamentally different and more useful technologies.

Comment Re:Paid Advertisement (Score 1) 76

Has the fact that there's three major BSDs and one Linux been in BSD's favor?

Being able to choose an operating system (BSDs, Linux, commercial UNIXen, Windows, etc.) has been in your favor, particularly from a security perspective. And would you seriously argue that the existence of multiple BSDs has been a bad thing for their security? I'd argue exactly the opposite. The BSDs have a well-deserved reputation for being more secure than Linux, and part of that reputation arose directly from the BSD forking. In particular, OpenBSD forked specifically to focus on security, and FreeBSD and NetBSD worked to keep up.

Does it really provide any tangible benefit that not all of us are hit at the same time with the same bug, when we're all vulnerable some of the time?

Yes, it does. You seem to think that being vulnerable none of the time is an alternative. It's not. The system as a whole is much more resilient if vulnerabilities affect only a subset.

For that matter, the eyes in "many eyes makes all bugs shallow" as well.

Look how well that has worked for OpenSSL in the past. The many-eyes principle only matters if people are actually looking, and competition creates attention. Also, it's a common error to assume that the software ecosystem is like a company with a fixed pool of staff that must be divided among the projects. It's not. More projects (open and closed source) open up more opportunities for people to get involved, and create competition among them.

Competition also often creates funding opportunities, which directly addresses what was OpenSSL's biggest problem. You can argue that it also divides funding, but again that only holds if you assume a fixed pool of funding, and that's not reality. Google is contributing to OpenSSL development and almost fully funding BoringSSL (not with cash, but with people). That isn't because Google's left hand doesn't know what its right is doing.

Am I supposed to swap browsers every time a vulnerability is found in Firefox/Chrome/Safari/IE?

Huh? No, obviously, you choose a browser with a development team that stays on top of problems and updates quickly. It's almost certain that developers will choose their SSL library at least partly on the same basis, again favoring more work and more attention on the crucial lib.

It's more like math where you need a formal proof that the code will always do what you intend for it to do and that it stands up under scrutiny.

It's not, it's really not. It would be nice if that were true. It's really more like a car that breaks down over time in various ways; some are more reliable than others, but all require ongoing attention and maintenance.

We're not talking about something that must have a fail rate, if you get it right it's good.

This is true in theory, but untrue in practice, because new attacks come along all the time, and ongoing maintenance (non-security bugfixes, new features, etc.) introduces new opportunities for security bugs.

Your Apache and IIS counterexamples actually support my argument. IIS, in particular, was riddled with problems. Yes, they've been cleaned up, but you're talking about a space that has been static for almost two decades (though it will soon be destabilized by the introduction of HTTP/2 and probably QUIC) and is, frankly, a much simpler problem than the one OpenSSL solves... and I assert that without the competition of alternatives, IIS would never have been cleaned up as thoroughly as it has been.
