
Comment Re:Almost... (Score 3, Interesting) 227

Every... Day.... :-/

I have a polite canned reply, which basically says that unless the recruiter's client is looking for developers to work 100% remotely, AND their pay scales are likely to exceed Google's by a significant margin, AND they do really cool stuff, I'm not interested. Oh, and I don't do referrals of friends (they get plenty of spam themselves).

I don't actually mind the recruiter spam. It only takes a couple of keystrokes to fire the canned response, and there's always the possibility that someone will have an opportunity that meets my criteria. Not likely, but possible. I'm not looking for a new job, but if an opportunity satisfies my interest requirements, I'm always open to a discussion.

However, when they keep pushing even when they know their job doesn't fit my requirements, then I get pissed and blackhole their agency. That also takes only a couple of keystrokes :-)

Comment Re: Yes, but.. (Score 1) 324

That's one way. There are always other options. The key is to hook in at the layer that you're debugging. The wire is almost never that layer, unless you're debugging the network card driver. Or the hardware, but in that case Wireshark (or Ethereal, as I still think of it in my head) is usually too high-level.

Comment Re:Also, stop supporting sites with poor encryptio (Score 1) 324

My bank still insists on using RC4 ciphers and TLS 1.0.

If Firefox were to stop supporting the bank's insecure website, it would surely get their attention better than I've been able to.

What bank is this? There's nothing wrong with public shaming in cases like this; in fact, it does the world a service.

Also, you should seriously consider switching banks. Your post prompted me to check the banks I use. One is great, one is okay. I'll watch the okay one.
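If you want to run the same check yourself, here's a rough Python sketch (standard library only; the hostname below is just a stand-in, not anyone's actual bank) that reports the TLS version and cipher a server actually negotiates:

    # Report the TLS version and cipher suite a server negotiates.
    # A modern default context already refuses RC4 and ancient TLS, so a
    # handshake failure here is itself a red flag for the site.
    import socket
    import ssl

    def negotiated_tls(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version(), tls.cipher()[0]

    print(negotiated_tls("example.com"))  # substitute your bank's hostname

On a current server that prints something like ('TLSv1.3', 'TLS_AES_256_GCM_SHA384').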

Comment Re:Yes, but.. (Score 1) 324

That said, if I'm debugging something a browser is doing, the developer console is usually better anyway.

Yes, it is, and the same holds everywhere. Being able to grab the data on the wire has long been an easy way to get sort of what you want to see, but it's almost never exactly what you're really looking for. HTTPS will force us to hook in elsewhere to debug, but the "elsewhere" will almost certainly be a better choice anyway.
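To make that concrete, here's a minimal Python sketch (standard library only; the URL is just a stand-in) of hooking in above the wire: it dumps the plaintext HTTP exchange at the application layer, where HTTPS is no obstacle:

    # Log the HTTP request/response headers at the application layer
    # instead of sniffing the (encrypted) wire. debuglevel=1 makes urllib
    # print the exchange to stdout.
    import urllib.request

    opener = urllib.request.build_opener(
        urllib.request.HTTPSHandler(debuglevel=1)
    )
    with opener.open("https://example.com/") as resp:  # stand-in URL
        body = resp.read()
    print(len(body), "bytes received")

In a browser, the developer console gives you the same view with far less effort, which is exactly the point.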

Comment Re:Paid Advertisement (Score 4, Insightful) 76

The OpenSSL codebase will get cleaned up and become trustworthy, and it'll continue to be used

Cleaned up and trustworthy? Unlikely. The wrong people are still in charge for that to happen.

Nonsense. The people running the OpenSSL project are competent and dedicated. OpenSSL's problem was lack of resources. It was a side project with occasional funding to implement specific new features, and the funders of new features weren't interested in paying extra to have their features properly integrated and tested. That's not a recipe for great success with something that really needs a full-time team.

Comment Re:when? (Score 1) 182

nobody is building Internet services that need several hundred megabits for reasonable performance

If there isn't a lot of copper or fibre between the two endpoints, why not? It's only congesting a little bit of the network.

Perhaps I wasn't clear. I wasn't referring to the building of network connections; I was referring to the building of user services that rely on them. For example, YouTube is built to dynamically adjust video quality based on available bandwidth, but the range of bandwidths considered by the designers does not include hundreds of megabits, because far too few of the users have that capacity. They have to shoot for the range that most people have.

But as that range changes, services will change their designs to make use of it. We don't really have any idea how things will change if multi-gigabit connections become the norm, but you can be certain they will. Just as programs expand to fill all available memory, Internet services expand to fill all available bandwidth. To some extent that's because more capacity enables laziness on the part of engineers... but it also enables fundamentally different and more useful technologies.
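As a toy illustration of that kind of adaptation, here's a Python sketch; the bitrate ladder and headroom factor are made-up numbers for illustration, not anything YouTube actually uses:

    # Pick the highest rendition whose bitrate fits the measured
    # throughput, with some headroom for variation.
    RENDITIONS_KBPS = {   # hypothetical bitrate ladder
        "480p": 2_500,
        "1080p": 8_000,
        "4K": 35_000,
        "8K": 500_000,
    }

    def pick_rendition(measured_kbps, headroom=0.8):
        budget = measured_kbps * headroom
        best = "480p"
        for name, rate in sorted(RENDITIONS_KBPS.items(), key=lambda kv: kv[1]):
            if rate <= budget:
                best = name
        return best

    print(pick_rendition(100_000))    # ~100 Mbps link -> "4K"
    print(pick_rendition(2_500_000))  # 2.5 Gbps link  -> "8K"

The interesting part is the top of the ladder: designers only bother adding renditions that a meaningful fraction of users can actually play.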

Comment Re:Paid Advertisement (Score 1) 76

Has the fact that there's three major BSDs and one Linux been in BSD's favor?

Being able to choose an operating system (BSDs, Linux, commercial UNIXen, Windows, etc.) has been in your favor, particularly from a security perspective. And would you seriously argue that the existence of multiple BSDs has been a bad thing for their security? I'd argue exactly the opposite. The BSDs have a well-deserved reputation for being more secure than Linux, and part of that reputation arose directly from the BSD forking. In particular, OpenBSD forked specifically to focus on security, and FreeBSD and NetBSD worked to keep up.

Does it really provide any tangible benefit that not all of us are hit at the same time with the same bug, when we're all vulnerable some of the time?

Yes, it does. You seem to think that being vulnerable none of the time is an alternative. It's not. The system as a whole is much more resilient if vulnerabilities affect only a subset.

For that matter, the eyes in "many eyes makes all bugs shallow" as well.

Look how well that has worked for OpenSSL in the past. The many eyes principle only matters if people are looking, and competition creates attention. Also, it's a common error to assume that the software ecosystem is like a company with a fixed pool of staff that must be divided among the projects. It's not. More projects (open and closed source) open up more opportunities for people to get involved, and create competition among them.

Competition also often creates funding opportunities, which directly addresses what was OpenSSL's biggest problem. You can argue that it also divides funding, but again that only holds if you assume a fixed pool of funding, and that's not reality. Google is contributing to OpenSSL development and almost fully funding BoringSSL (not with cash, but with people). That isn't because Google's left hand doesn't know what its right is doing.

Am I supposed to swap browsers every time a vulnerability is found in Firefox/Chrome/Safari/IE?

Huh? No, obviously, you choose a browser with a development team that stays on top of problems and updates quickly. It's almost certain that developers will choose their SSL library at least partly on the same basis, again favoring more work and more attention on the crucial lib.

It's more like math where you need a formal proof that the code will always do what you intend for it to do and that it stands up under scrutiny.

It's not, it's really not. It would be nice if that were true. It's really more like a car that breaks down over time in various ways; some are more reliable than others, but all require ongoing attention and maintenance.

We're not talking about something that must have a fail rate, if you get it right it's good.

This is true in theory, but untrue in practice, because new attacks come along all the time and ongoing maintenance (non-security bugfixes, new features, etc.) introduces new opportunities for security bugs.

Your Apache and IIS counterexamples actually support my argument. IIS, in particular, was riddled with problems. Yes, they've been cleaned up, but you're talking about a space that has been static for almost two decades (though it will soon be destabilized by the introduction of HTTP/2 and probably QUIC) and is, frankly, a much simpler problem than the one solved by OpenSSL... and I assert that without the competition of alternatives, IIS never would have been cleaned up as thoroughly as it has been.

Comment Re:Paid Advertisement (Score 5, Insightful) 76

Someone has to be shilling to post a summary like that one. The only future for OpenSSL is to be replaced over time by LibreSSL or another competitor.

Nah. The OpenSSL codebase will get cleaned up and become trustworthy, and it'll continue to be used. The other forks, especially LibreSSL and Google's BoringSSL, will be used, too... and that's a good thing. Three fairly API-compatible but differing implementations will break up the monoculture so bugs found in one of them (and they *will* have bugs) hopefully won't hit all three of them.

It's tempting to see such apparent duplication of effort as wasteful, but it's really not. Diversity is good and competition is good.

Comment Re:when? (Score 1) 182

The first question that comes to my mind is, "What is the point of 2 Gbps service for residential customers?"

Today? There is no point. The available services have to be built for the speeds that are common; nobody is building Internet services that need several hundred megabits for reasonable performance -- performance would suck for nearly everyone, because hardly anyone has that kind of connection. The point of gigabit-plus speeds is that if you have those speeds, reliably, the difference between local and remote storage almost disappears, which enables very different approaches to building systems.
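A quick back-of-the-envelope sketch of that point, in Python (raw transfer time only, ignoring latency and protocol overhead):

    # Time to move a blob of data across links of various speeds.
    def transfer_seconds(size_gb, link_gbps):
        return size_gb * 8 / link_gbps

    for link in (0.1, 1.0, 2.0):  # 100 Mbps, 1 Gbps, 2 Gbps
        print(f"10 GB over {link} Gbps: {transfer_seconds(10, link):.0f} s")
    # 100 Mbps -> 800 s, 1 Gbps -> 80 s, 2 Gbps -> 40 s

At 2 Gbps you're in the same ballpark as a SATA SSD's sequential read speed, which is why the local/remote distinction starts to fade.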

In addition, define "residential". I work from home full-time, and 100 Mbps isn't anywhere near fast enough for me. The code management and build infrastructure I often use has been designed for low-latency gigabit connections, because most everyone is in the office. A 2 Gbps connection, for me, would still not work quite as well as being in the office, because I'd have higher latency, but it would be a lot closer. I work around the slow connection with various caching strategies, but I'd rather not have to.

Am I residential? Well, I have a business account, but in a residential area. Obviously I'm far, far from typical. But usage will change as the capacity is available.

From another of your posts:

I do a lot of Android hacking and regularly download ROMs in the 300 to 700 megabyte range.

Heh. I upload a lot of ROMs in that range, and bigger (asymmetric connections suck). I download full Android source repos... I just ran "make clean && du -sh ." in the root of one of my AOSP trees: 57 GB[*]. I dread having to sync a fresh tree... It takes upwards of two days.

Again, my usage is far from typical, but how long will it be before typical users are streaming multiple 8K video streams, at 500 Mbps each? It can't happen until typical users have multi-gigabit connections, but it'll come.

By the way, where's my IPv6?

Comcast actually provides IPv6 for a lot of its customers. I had fully-functional IPv6 on the Comcast connection at my last home (Comcast doesn't serve the area where I live now).

* Yes, 57 GB is nuts, but that's what happens when you have a large codebase, with extremely active development, and you manage it with a DVCS. I could cut the size down with pruning, but that always seems to break something, so I just try to minimize the frequency with which I have to download it all. btrfs snapshots help a lot. Just making copies would work, too, but at these sizes it'd be slow even on an SSD. Much better than downloading, though.

Comment Re:I remember him From Usenet as quite a gentleman (Score 1) 138

English will rip it out of your hands.

What? But it's not yours, it's ours. O.K., keep it, it makes barbaric (excuse me, i meant English...) easier for us.

James Nicoll put it best:

The problem with defending the purity of the English language is that English is about as pure as a cribhouse whore. We don't just borrow words; on occasion, English has pursued other languages down alleyways to beat them unconscious and rifle their pockets for new vocabulary.

Comment Re:Just the good guys? (Score 1) 174

Bad guys have to set the evil bit; the software checks whether or not it's set. Really people, we've thought this through.

Relevant RFC: RFC 3514

You know, it's been years since I actually read that. The basic concept is funny, obviously, but the author took it much further. I'd forgotten such gems as:

Because NAT [RFC3022] boxes modify packets, they SHOULD set the evil bit on such packets.

Indeed, NAT boxes really should mark all their packets as evil, because NAT is evil.

Oh, I also quite enjoy:

In networks protected by firewalls, it is axiomatic that all attackers are on the outside of the firewall. Therefore, hosts inside the firewall MUST NOT set the evil bit on any packets.

Oh, obviously. If you have a firewall, every host inside the firewall is perfectly safe. BWAHAHA...

Comment Re:LOL ... (Score 1) 208

IBM ... agile??? That sounds like an oxymoron.

I always worry when the "century old colossus" is trying to act like a startup. Because it usually ends badly, because management and the bean counters have their own inertia, and are sure as heck not going to give up their control over stuff, or stop going by the 5,000 page manual of procedures.

I've known people who used to work at IBM ... and most of them still owned the starched white shirts.

They have anything resembling "agile" surgically removed when they're hired.

Bah.

I spent 14 years at IBM, and have been around plenty of other big corps as well. IBM, like all big organizations, isn't and cannot ever be monolithic. With so many people working on so many things in so many places, you're guaranteed to get a broad variety. There have been IBM teams successfully using Agile methods for years, and I'm sure there are lots of other projects that will benefit from it, just as there are many that won't, and whose technical leadership had better resist it, or it'll sink them.

Also, IBM lost the suits not long after I joined the company back in 1997, and well before that in the core areas of Software Group and the labs. Most of the company was generally business casual by 2000, and the geekier areas were your typical shorts and ratty t-shirt places. I'm just talking about clothing, obviously, but dress standards, official and informal, both reflect and influence attitude and behavior. So if you think IBM is "starched shirts", you don't know IBM, at least not IBM's development shops.
