
Comment Re:Paid Advertisement (Score 4, Insightful) 76

The OpenSSL codebase will get cleaned up and become trustworthy, and it'll continue to be used

Cleaned up and trustworthy? Unlikely. The wrong people are still in charge for that to happen.

Nonsense. The people running the OpenSSL project are competent and dedicated. OpenSSL's problem was lack of resources. It was a side project with occasional funding to implement specific new features, and the funders of new features weren't interested in paying extra to have their features properly integrated and tested. That's not a recipe for great success with something that really needs a full-time team.

Comment Re:when? (Score 1) 182

nobody is building Internet services that need several hundred megabits for reasonable performance

If there is not a lot of length of copper or fibre between the two endpoints why not? It's only congesting a little bit of a network.

Perhaps I wasn't clear. I wasn't referring to building of network connections, I was referring to the building of user services that rely on them. For example, YouTube is built to dynamically adjust video quality based on available bandwidth, but the range of bandwidths considered by the designers does not include hundreds of megabits, because far too few of the users have that capacity. They have to shoot for the range that most people have.
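
To make that concrete, here's a toy sketch (Python, with a made-up bitrate ladder; not YouTube's actual logic) of the kind of rendition selection an adaptive player does. Notice that the ladder tops out far below hundreds of megabits, because that's all the designers could assume their audience has:

# Hypothetical bitrate ladder (kbps), roughly 240p through 4K. It tops out
# well below hundreds of megabits because almost no viewers have that.
LADDER_KBPS = [300, 750, 1500, 4000, 8000, 16000]

def pick_rendition(measured_kbps, headroom=0.8):
    """Pick the highest rendition the measured throughput can sustain,
    leaving some headroom so the playback buffer doesn't starve."""
    budget = measured_kbps * headroom
    affordable = [rate for rate in LADDER_KBPS if rate <= budget]
    return affordable[-1] if affordable else LADDER_KBPS[0]

print(pick_rendition(20_000))    # 20 Mbps connection  -> 16000 kbps
print(pick_rendition(500_000))   # 500 Mbps connection -> still 16000 kbps;
                                 # the ladder wasn't designed for that range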

But as that range changes, services will change their designs to make use of it. We don't really have any idea how things will change if multi-gigabit connections become the norm, but you can be certain they will. Just as programs expand to fill all available memory, Internet services expand to fill all available bandwidth. To some extent that's because more capacity enables laziness on the part of engineers... but it also enables fundamentally different and more useful technologies.

Comment Re:Paid Advertisement (Score 1) 76

Has the fact that there's three major BSDs and one Linux been in BSD's favor?

Being able to choose an operating system (BSDs, Linux, commercial UNIXen, Windows, etc.) has been in your favor, particularly from a security perspective. And would you seriously argue that the existence of multiple BSDs has been a bad thing for their security? I'd argue exactly the opposite. The BSDs have a well-deserved reputation for being more secure than Linux, and part of that reputation arose directly from the BSD forking. In particular, OpenBSD forked specifically to focus on security, and FreeBSD and NetBSD worked to keep up.

Does it really provide any tangible benefit that not all of us are hit at the same time with the same bug, when we're all vulnerable some of the time?

Yes, it does. You seem to think that being vulnerable none of the time is an alternative. It's not. The system as a whole is much more resilient if vulnerabilities affect only a subset.

For that matter, the eyes in "many eyes makes all bugs shallow" as well.

Look how well that has worked for OpenSSL in the past. The many-eyes principle only matters if people are looking, and competition creates attention. Also, it's a common error to assume that the software ecosystem is like a company with a fixed pool of staff that must be divided among the projects. It's not. More projects (open and closed source) open up more opportunities for people to get involved, and create competition among them.

Competition also often creates funding opportunities, which directly addresses what was OpenSSL's biggest problem. You can argue that it also divides funding, but again that only holds if you assume a fixed pool of funding, and that's not reality. Google is contributing to OpenSSL development and almost fully funding BoringSSL (not with cash, but with people). That isn't because Google's left hand doesn't know what its right is doing.

Am I supposed to swap browsers every time a vulnerability is found in Firefox/Chrome/Safari/IE?

Huh? No, obviously, you choose a browser with a development team that stays on top of problems and updates quickly. It's almost certain that developers will choose their SSL library at least partly on the same basis, again favoring more work and more attention on the crucial lib.

It's more like math where you need a formal proof that the code will always do what you intend for it to do and that it stands up under scrutiny.

It's not, it's really not. It would be nice if that were true. It's really more like a car that breaks down over time in various ways; some are more reliable than others, but all require ongoing attention and maintenance.

We're not talking about something that must have a fail rate, if you get it right it's good.

This is true in theory, but untrue in practice, because new attacks come along all the time and ongoing maintenance (non-security bugfixes, new features, etc.) introduces new opportunities for security bugs.

Your Apache and IIS counterexamples actually support my argument. IIS, in particular, was riddled with problems. Yes, they've been cleaned up, but you're talking about a space that has been static for almost two decades (though it will soon be destabilized by the introduction of HTTP/2 and probably QUIC) and is, frankly, a much simpler problem than the one solved by OpenSSL... and I assert that without the competition of alternatives, IIS never would have been cleaned up as thoroughly as it has been.

Comment Re:Paid Advertisement (Score 5, Insightful) 76

Someone has to be shilling to post a summary like that one. The only future for OpenSSL is to be replaced over time by LibreSSL or another competitor.

Nah. The OpenSSL codebase will get cleaned up and become trustworthy, and it'll continue to be used. The other forks, especially LibreSSL and Google's BoringSSL, will be used, too... and that's a good thing. Three fairly API-compatible but differing implementations will break up the monoculture so bugs found in one of them (and they *will* have bugs) hopefully won't hit all three of them.
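
As a small illustration of how interchangeable they are at the API level: Python's ssl module, for example, is built against OpenSSL or LibreSSL depending on the platform, and the same application code runs on top of either. A quick check of what's underneath (output obviously depends on your local build):

import ssl

# The same code runs unchanged whether the interpreter was linked against
# OpenSSL, LibreSSL, or another API-compatible implementation; this just
# reports which one happens to be underneath.
print(ssl.OPENSSL_VERSION)

# Ordinary TLS usage doesn't care which library is backing it.
context = ssl.create_default_context()
print(context.protocol)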

It's tempting to see such apparent duplication of effort as wasteful, but it's really not. Diversity is good and competition is good.

Comment Re:when? (Score 1) 182

The first question that comes to my mind is, "What is the point of 2 Gbps service for residential customers?"

Today? There is no point. The available services have to be built for the speeds that are common; nobody is building Internet services that need several hundred megabits for reasonable performance -- performance would suck for nearly everyone, since hardly anyone has that capacity. The point of gigabit-plus speeds is that if you have those speeds, reliably, the difference between local and remote storage almost disappears, which enables very different approaches to building systems.
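
Some rough, back-of-the-envelope numbers (all assumed, nothing exact) on why the local/remote distinction starts to blur at those speeds:

# Assumed round numbers: a 2 Gbps link, a ~500 MB/s SATA SSD, a 100 Mbps link.
link_2g = 2e9 / 8        # 250 MB/s
ssd = 500e6              # ~500 MB/s sequential read
link_100m = 100e6 / 8    # 12.5 MB/s

file_bytes = 4e9         # a 4 GB file, e.g. a disk image or large video

print(f"local SSD:     {file_bytes / ssd:6.0f} s")        # ~8 s
print(f"2 Gbps link:   {file_bytes / link_2g:6.0f} s")    # ~16 s
print(f"100 Mbps link: {file_bytes / link_100m:6.0f} s")  # ~320 s

At 2 Gbps the remote copy is within a small factor of reading from local disk; at 100 Mbps it isn't even close.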

In addition, define "residential". I work from home full-time, and 100 Mbps isn't anywhere near fast enough for me. The code management and build infrastructure I often use has been designed for low-latency gigabit connections, because most everyone is in the office. A 2 Gbps connection, for me, would still not work quite as well as being in the office, because I'd have higher latency, but it would be a lot closer. I work around the slow connection with various caching strategies, but I'd rather not have to.

Am I residential? Well, I have a business account, but in a residential area. Obviously I'm far, far from typical. But usage will change as the capacity is available.

From another of your posts:

I do a lot of Android hacking and regularly download ROMs in the 300 to 700 megabyte range.

Heh. I upload a lot of ROMs in that range, and bigger (asymmetric connections suck). I download full Android source repos... I just ran "make clean && du -sh ." in the root of one of my AOSP trees: 57 GB[*]. I dread having to sync a fresh tree... It takes upwards of two days.

Again, my usage is far from typical, but how long will it be before typical users are streaming multiple 8K video streams, at 500 Mbps each? It can't happen until typical users have multi-gigabit connections, but it'll come.
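
The arithmetic behind that claim is simple (assuming the 500 Mbps per-stream figure):

# Assuming ~500 Mbps per 8K stream, a household with a few simultaneous
# viewers is past a gigabit before any other traffic is even counted.
PER_STREAM_MBPS = 500
for streams in (1, 2, 3, 4):
    print(f"{streams} stream(s): {streams * PER_STREAM_MBPS / 1000:.1f} Gbps")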

By the way, where's my IPv6?

Comcast actually provides IPv6 for a lot of its customers. I had fully-functional IPv6 on the Comcast connection at my last home (Comcast doesn't serve the area where I live now).

* Yes, 57 GB is nuts, but that's what happens when you have a large codebase with extremely active development and you manage it with a DVCS. I could cut the size down with pruning, but that always seems to break something, so I just try to minimize the frequency with which I have to download it all. btrfs snapshots help a lot. Just making copies would work, too, but at these sizes it'd be slow even on an SSD. Much better than downloading, though.

Comment Re:I remember him From Usenet as quite a gentleman (Score 1) 138

English will rip it out of your hands.

What? But it's not yours, it's ours. O.K., keep it, it makes barbaric (excuse me, I meant English...) easier for us.

James Nicoll put it best:

The problem with defending the purity of the English language is that English is about as pure as a cribhouse whore. We don't just borrow words; on occasion, English has pursued other languages down alleyways to beat them unconscious and rifle their pockets for new vocabulary.

Comment Re:Just the good guys? (Score 1) 174

Bad guys have to set the evil bit; the software checks whether or not it's set. Really people, we've thought this through.

Relevant RFC: RFC 3514

You know, it's been years since I actually read that. The basic concept is funny, obviously, but the author took it much further. I'd forgotten such gems as:

Because NAT [RFC3022] boxes modify packets, they SHOULD set the evil bit on such packets.

Indeed, NAT boxes really should mark all their packets as evil, because NAT is evil.

Oh, I also quite enjoy:

In networks protected by firewalls, it is axiomatic that all attackers are on the outside of the firewall. Therefore, hosts inside the firewall MUST NOT set the evil bit on any packets.

Oh, obviously. If you have a firewall, every host inside the firewall is perfectly safe. BWAHAHA...
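
For the curious: the evil bit is the reserved high-order bit of the IPv4 flags/fragment-offset field. A toy Python sketch (illustrative only, and it doesn't bother recomputing the header checksum) of checking and setting it on a raw header:

import struct

EVIL_BIT = 0x8000  # reserved high-order bit of the 16-bit flags/frag-offset field

def is_evil(ipv4_header: bytes) -> bool:
    """Report whether the RFC 3514 evil bit is set in a raw IPv4 header."""
    flags_frag, = struct.unpack("!H", ipv4_header[6:8])
    return bool(flags_frag & EVIL_BIT)

def set_evil(ipv4_header: bytes) -> bytes:
    """Return a copy of the header with the evil bit set (checksum not recomputed)."""
    flags_frag, = struct.unpack("!H", ipv4_header[6:8])
    return ipv4_header[:6] + struct.pack("!H", flags_frag | EVIL_BIT) + ipv4_header[8:]

# A well-intentioned 20-byte IPv4 header for demonstration.
header = bytes.fromhex("4500003c1c4640004006b1e6c0a80001c0a800c7")
print(is_evil(header))            # False
print(is_evil(set_evil(header)))  # True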

Comment Re:LOL ... (Score 1) 208

IBM ... agile??? That sounds like an oxymoron.

I always worry when the "century old colossus" is trying to act like a startup. Because it usually ends badly, because management and the bean counters have their own inertia, and are sure as heck not going to give up their control over stuff, or stop going by the 5,000 page manual of procedures.

I've known people who used to work at IBM ... and most of them still owned the starched white shirts.

They have anything resembling "agile" surgically removed when they're hired.

Bah.

I spent 14 years at IBM, and have been around plenty of other big corps as well. IBM, like all big organizations, isn't and cannot ever be monolithic. With so many people working on so many things in so many places, you're guaranteed to get a broad variety. There have been IBM teams successfully using Agile methods for years, and I'm sure there are lots of other projects that will benefit from it, just as there are many that won't, and whose technical leadership had better resist it, or it'll sink them.

Also, IBM lost the suits not long after I joined the company back in 1997, and well before that in the core areas of Software Group and the labs. Most of the company was generally business casual by 2000, and the geekier areas were your typical shorts-and-ratty-t-shirt places. I'm just talking about clothing, obviously, but dress standards, both official and informal, both reflect and influence attitude and behavior. So if you think IBM is "starched shirts", you don't know IBM, or at least not IBM's development shops.

Comment Re:Not likely (Score 1) 208

The larger the project, the less suited it is to being Agile. Of course, that's a good argument for breaking large projects into smaller ones that interact with each other, allowing them to be more suited to Agile.

Take great care in how you do this, though, and you'd better have a solidly-defined architecture before you do it. Conway's Law points out that however you set up your organizational structure, the architecture of the design will follow suit, so if you break the project up along the wrong lines, you are dictating a dysfunctional system architecture.
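
A hedged illustration of what "breaking along the right lines" can look like (names invented for the example): agree on the contract between the pieces first, so each team can iterate behind it without the org chart leaking into the architecture.

from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """The contract agreed up front between the ordering team and the
    payments team; this boundary is the 'line' the project is split along."""

    @abstractmethod
    def charge(self, account_id: str, amount_cents: int) -> str:
        """Charge an account and return a transaction id."""

class OrderService:
    """Owned by one team; it depends only on the contract, not on whatever
    implementation the other team happens to ship this sprint."""

    def __init__(self, gateway: PaymentGateway):
        self.gateway = gateway

    def place_order(self, account_id: str, total_cents: int) -> str:
        return self.gateway.charge(account_id, total_cents)

class FakeGateway(PaymentGateway):
    """A stand-in implementation, useful for testing against the contract."""

    def charge(self, account_id: str, amount_cents: int) -> str:
        return f"txn-{account_id}-{amount_cents}"

print(OrderService(FakeGateway()).place_order("acct-42", 1999))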

Comment Re:So like the cops... (Score 3, Informative) 76

So like the cops... it shows up only after the crime has been committed, and only protects some of the population (Google passwords) and not the rest of the population (e.g. your banking password isn't protected, because it's not a Google site).

Seems slightly less than useful.

I disagree.

If you use Gmail as your primary e-mail, then your Google password is the crown jewel of your online identity, since every other site out there (including your bank) uses e-mail as the password reset channel. Sure, it might be nice if the tool were more general-purpose (though that would require changing the hashing strategy, which intentionally uses relatively few bits as a security measure to protect against brute force), but if you can protect only one password, your e-mail password is the one.
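
To make the "relatively few bits" point concrete, here's a toy sketch (not Google's actual scheme) of why a truncated hash protects against brute force: keep only enough bits to flag a probable match, far too few to let anyone who steals the stored value recover the password.

import hashlib

def fingerprint(password: str, salt: bytes, bits: int = 16) -> int:
    """Keep only `bits` bits of a salted hash: enough to notice a likely
    match, useless for brute-forcing the password, since huge numbers of
    candidate passwords collide on every possible value."""
    digest = hashlib.sha256(salt + password.encode("utf-8")).digest()
    return int.from_bytes(digest, "big") & ((1 << bits) - 1)

salt = b"per-install-random-salt"        # hypothetical per-install salt
stored = fingerprint("hunter2", salt)    # remembered at sign-in time

# Later, when something typed into another site needs checking:
print(fingerprint("hunter2", salt) == stored)   # True: probable reuse, warn
print(fingerprint("pa55word", salt) == stored)  # almost certainly False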

For people who use not just Gmail but lots of Google services, it's even more critical. I store lots of important stuff in Drive, have my phone report my exact location, have my whole address book synced, etc., etc. It doesn't concern me to have so many eggs in one basket because I trust Google to maintain good security, but it can only be as good as my authentication. I use 2FA, but there's still value in being careful with such an important password.

Comment Re:Yeah.... (Score 1) 193

What's the point of the external marker? I never had issues identifying an Uber vehicle when it was coming to pick me up. External markers are obviously needed when you're hailing vehicles on the street, but they don't do that.

My guess? It's because Uber wants external markers for advertising to grow the business, and their drivers dislike the idea enough that Uber doesn't want to be the entity mandating it. So Uber's lobbyists convinced legislators that this was a good additional "regulation", to give Uber what they want while simultaneously appearing to "crack down" on them. I mean, if everything in the bill was already being done by Uber it would be too obvious that it's just for show.

(Don't read the above as criticism of Uber. Smart businesses always try to turn regulatory oversight to their advantage. One of the downsides of having government big enough to tell businesses what to do is that businesses are then motivated to influence government.)

Comment Re:Yeah.... (Score 3, Informative) 193

Maintaining a list of drivers, criminal background checks, sufficient insurance for commercial purposes, visible external marker on the car, yearly safety inspections, minimum age of 21, and a license fee for the privilege of this oversight, of course.

I think Uber actually already satisfies most of this. They'd need external markers on the cars (slap some magnetic signs on), would probably need additional safety inspections if MA doesn't already require annual inspections of all registered vehicles, and would have to pay a license fee. They already have $1M insurance coverage and obviously have a list of drivers. I think they do background checks, too, though I'm not completely sure.

Frankly, this seems more like a minimal set of regulations to shut up people who are complaining about the unregulated taxi service. Now they technically won't be unregulated, even though the actual changes to their business will be negligible, assuming the license fee is reasonable.
