Comment Re:Is return value optimisation a bug? (Score 1) 144

It's called copy elision and it is part of the C++ spec. The spec specifically says that compilers may (but are not required to) implement it, meaning that the compiler is completely free to do different things within the abstract machine at different optimisation levels. It's definitely a bug, but unfortunately it's a bug in the C++ standard. Any sane spec would either require the compiler to implement it or prohibit it - either would be fine (with C++11, prohibiting it would make sense as returning an r-value reference and doing move construction ought to give the same benefit).
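To make that observable difference concrete, here's a minimal sketch (the names are mine, nothing standard): the copy constructor has a side effect, so whether it fires depends entirely on whether the compiler elects to elide the copy, which the standard leaves optional for named return values.

#include <cstdio>

// Minimal illustration: whether "copied!" appears in the output depends
// on whether the compiler performs NRVO here - the standard leaves it
// optional, so observable behaviour can change with the -O level.
struct Tracer {
    Tracer() { puts("constructed"); }
    Tracer(const Tracer &) { puts("copied!"); }
};

Tracer make() {
    Tracer t;   // named local: NRVO may or may not elide the copy
    return t;
}

int main() {
    Tracer t = make();  // the number of "copied!" lines varies by build
    return 0;
}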

Comment Re: Compiler optimizer bugs (Score 1) 144

Most compiler bugs are not that difficult to debug

Another compiler guy here: some compiler bugs are not that difficult to debug, if you have a reduced test case that triggers the issue. Most, though, are caused by subtle interactions between assumptions made in different optimisations, so they go away with very small changes to the code and are horrible to debug. They involve staring at the before and after output of each step in the pipeline to find out exactly where the incorrect code was introduced (which is often not where the bug is), then backtracking to find what produced the code containing the invalid assumption that was exploited later.

Comment The name thing, too... (Score 2) 246

The name thing was a huge deal-breaker for a fair number of people, and the pathologically horrible way they handled it made it a lot worse. I know dozens of people who would have used G+ but walked away from it because at least one person they knew had a bad experience with it. I spent months with my G+ account in various kinds of limbo because the "appeals" process for name decisions was completely dysfunctional. I eventually ran into someone on Slashdot who knew a person who knew a person who could unstick my account and get my name approved, but by that time everyone had lost interest.

And one of my friends used to have a Picasa account; somehow it got marked as a G+ profile (even though she never intentionally activated G+), then suspended because their algorithm thought the name was unrealistic, and then she lost access to the Picasa stuff. I don't know whether that ever actually got resolved.

Very badly run at every level. The most frustrating thing is that they had a guy writing about this who was apparently in some kind of leadership role, and he talked about how the appeals process should work and how the name policy should work... and nothing he said actually had any influence on the behaviour of the product. The actual appeals process consisted of a thing with no mechanism at all for stating your case or explaining why you felt a given name was the right name for you, which was then ignored by a machine or possibly a person, who knows. That's it. No mechanism for response or interaction.

Google's hatred of actually dealing with things personally interacted very badly with a policy which was inherently personal.

Comment Here, there, and everywhere (Score 1) 53

When Nokia bought Navteq, they bought one of the two global mapping companies, for about US$7.5 billion. For that they got, almost immediately, free maps for every Nokia handset, around the planet, plus data sets for some industry-leading augmented reality. Those services were, and are, huge. They sold lots of handsets and led the way to lots of Microsoft collaboration (Windows Phone et al. comes with Nokia Here built in), which eventually led to Microsoft buying the phone unit outright.

Did Nokia lose money selling Here off? Maybe, maybe not. They sold lots of handsets around the world featuring Here. The augmented reality wowed lots of folks and sold some more, and positioned Nokia products as forward-looking. They sold some online mapping to websites, though that was probably not a big revenue stream. They eventually sold the failing phone unit (and kept Here!). So they got a lot of mileage out of Here, maybe US$5 billion.

Going forward, I hope the new owners keep the consumer editions of Here. I'm off to Glacier Nat'l Park next week and have Here loaded on all my handsets. The iPhone has just the states I regularly visit preloaded; one of my Android handsets has all of North & Central America preloaded, for fast travel convenience. I'm used to seeing legions of befuddled tourists wandering around national park attractions, confused that their smartphone maps (Google Maps and Apple Maps, both largely dependent on streaming map data) aren't working. I used to bring a Windows phone along explicitly for those situations; now I just load Here. Oh, and why not carry a dedicated GPS unit? They don't come with cameras, translators, phones, email, etc. And their maps? Likely sourced from, yes, Here.

Comment Re:Shouldn't this work the other way? (Score 1) 190

things like the GHS hazard pictograms, DIN 4844-2, ISO 3864, TSCA marks, and similar such things seem like perfectly reasonable additions to Unicode

No they don't, because they are pictograms with very specific visual appearances. Such things don't belong in a character set, because entries in a character set are characters. Glyphs (the visual presentation of characters) live in fonts, and each font designer is free to represent them differently, as long as they're recognisable. If every font has to represent something in the same way, then it doesn't belong in a character set; it belongs in a set of standard images.

The other issue with this kind of cruft is collation. The Unicode collation algorithm is insanely complex (and often a bottleneck for databases that need to keep strings sorted). Different locales sort things in different orders, and most have well-defined rules for things that are actually characters. The rules for how you sort a dog-poop emoji relative to a GHS hazard pictogram, relative to a roman letter, are... what?
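To illustrate the locale point, here's a rough sketch using std::collate. The locale name is an assumption (common on Linux, but whether it's installed is platform-dependent); the point is that the same strings sort differently under different locales, and those rules simply don't exist for arbitrary pictograms.

#include <algorithm>
#include <iostream>
#include <locale>
#include <stdexcept>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> words = {"zebra", "Österreich", "apple"};
    try {
        // "de_AT.UTF-8" is an assumed, platform-dependent locale name.
        std::locale loc("de_AT.UTF-8");
        const auto &coll = std::use_facet<std::collate<char>>(loc);
        // Locale-aware sort: "Österreich" collates near the other O words,
        // unlike a raw byte-order sort, which puts it after "zebra".
        std::sort(words.begin(), words.end(),
                  [&](const std::string &a, const std::string &b) {
                      return coll.compare(a.data(), a.data() + a.size(),
                                          b.data(), b.data() + b.size()) < 0;
                  });
    } catch (const std::runtime_error &) {
        std::sort(words.begin(), words.end());  // fall back to byte order
    }
    for (const auto &w : words) std::cout << w << '\n';
}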

Comment Re:This one simple trick ... (Score 1) 190

Being a character implies a bunch of other things, such as having different graphical representations (fonts) for the same semantic symbol and a collation ordering. This doesn't make sense for a load of the stuff that's now in Unicode. If these are meant to be glyphs with well-defined visual representations, then they don't belong in a font, with their representation dependent on the font designer's whim. And if they're not characters used in any language, then what are the collation rules for them? In what order do dog-poop and contains-gluten sort, and how does this vary between locales?

Comment Re:That's lovely (Score 1) 544

The working class doesn't get to pick where they live. It's expensive as hell to up and move

I'm not totally convinced by this. The poorer you are, the less likely you are to own your own house. That makes moving a lot cheaper (selling a house is expensive, changing rented accommodation is inconvenient but not nearly as expensive).

Comment Re: Troll (Score 3, Insightful) 544

It's easy to retreat to a No True Scotsman argument, but when it comes to political and economic systems there are very few examples of any ideology being applied in full. Not capitalism, not communism, not socialism: most countries run a blend of several different ideas. Claiming that the Union of Soviet Socialist Republics is a shining example of socialism is about as accurate as claiming that the Democratic Republic of the Congo is a shining example of a democratic republic. They may have the word in their name, but that's about it. Even the USA makes more use of Marxist ideas than the USSR did for most of its existence.

Comment Re:Then make the "aberration" return. (Score 4, Interesting) 544

It varies a bit depending on the relative scarcity of your skills and jobs. For someone with skills in shortage, job security isn't that great a thing, as moving jobs will typically involve a pay rise. For someone with fewer options, it's much more important because there's going to be a gap between jobs and they're not in a position to negotiate a better package. Unions were supposed to redress some of this imbalance: an individual employee may be easily replaceable for a lot of companies, but the entire workforce (or even a third of the workforce) probably isn't.

Unfortunately, unions in the USA managed to become completely self-interested and corrupt institutions. This is partly due to lack of competition: in most of the rest of the world you have a choice of at least a couple of unions to join, so if your union isn't representing your interests you can switch to another one. Partly due to the ties between unions and organised crime in the USA coming out of the Prohibition era. Partly due to the demonisation of anything vaguely socialist during the Cold War, which reduced employee involvement in unions (and if most people aren't involved in the union, then the few who are have disproportionate influence).

Even this has been somewhat eroded by automation. If you're replacing 1,000 employees with robots and 100 workers, then a union's threat to have 600 people go on strike doesn't mean much, and even when it does, it's very hard to persuade those 600 that striking won't just move them to the top of the to-be-made-redundant list.

But, back to my original point: lack of jobs for life isn't the real problem. A large imbalance in negotiating power between companies and employees is. When employees are in a stronger negotiating position, companies will favour keeping existing employees because it's cheaper than hiring new ones.

Comment Re:wft ever dude! (Score 1) 210

I found that above about 10Mb/s you start to hit diminishing returns. The jump from 10 to 30 was barely noticeable. The jump from 30 to 100 is noticeable with large downloads, but nothing else. From 100 to 1000, the main thing you notice is that if you accidentally download a large file to a spinning-rust disk, you see how quickly you fill up your RAM with buffer cache...
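Some rough arithmetic behind that claim; a throwaway sketch, with file sizes that are purely illustrative guesses:

#include <cstdio>

// Transfer time in seconds at each speed tier. Beyond ~10Mb/s a typical
// web page is effectively instant either way; only large downloads keep
// benefiting, which is the diminishing-returns effect described above.
int main() {
    const double page_mb = 2.0;      // a typical web page (assumed size)
    const double file_mb = 4000.0;   // a large 4 GB download (assumed size)
    const double mbps[] = {10, 30, 100, 1000};
    for (double rate : mbps) {
        printf("%5.0f Mb/s: page %6.3f s, 4 GB file %7.1f s\n",
               rate, page_mb * 8 / rate, file_mb * 8 / rate);
    }
    return 0;
}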

Over the last 10 years, I've gone from buying the fastest connection my ISP offered to buying the slowest. The jump from 512Kb/s to 1Mb/s was really amazing (though not as good as moving to 512Kb/s from a modem that rarely managed even 33Kb/s), but each subsequent upgrade has been less exciting.

Comment Re:wft ever dude! (Score 1) 210

Because in 1981 or so, everybody was pretty sure that this fairly obscure educational network would *never* need more than about 4 billion addresses... and they were *obviously right*.

Well, maybe. Back then home computers were already a growth area and so it was obvious that one computer per household would eventually become the norm. If you wanted to put these all on IPv4, then it would be cramped. The growth in mobile devices and multi-computer households might have been a bit surprising to someone in 1981, but you'd have wanted to add some headroom.

When 2% of your address space is consumed, you are just under six doublings away from exhaustion (2% doubled six times is 128%). Even if you assume an entire decade per doubling, that's less than an average lifetime before you're doing it all over again.

With IPv6, you can have 4 billion networks for every IPv4 address. Doublings are much easier to think about in base 2: one bit per doubling. We've used all of the IPv4 addresses. Many of those are for NAT'd networks, so let's assume that they all are and that we're going to want one IPv6 subnet for each IPv4 address currently assigned during the transition. That's 32 bits gone. Assuming that we're using a /48 for every subnet, then that gives us 16 more doublings (160 years by your calculations). If we're using /64s, then that's 32 doublings (320 years). I hope that's within my lifetime, but I suspect that it won't be.
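The bit counting above is easy to check mechanically. Here's a throwaway sketch of it, under the same assumptions as stated: one subnet per currently assigned IPv4 address (32 bits spent) and a decade per doubling.

#include <cstdio>

// One doubling consumes one bit of subnet prefix, so the doublings left
// are simply (prefix bits) - (bits already spent).
int main() {
    const int bits_spent = 32;        // one subnet per IPv4 address
    const int prefixes[] = {48, 64};  // the two subnet sizes considered
    for (int p : prefixes) {
        int doublings = p - bits_spent;
        printf("/%d subnets: %d doublings left, ~%d years at a decade each\n",
               p, doublings, doublings * 10);
    }
    return 0;
}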

In practice, I suspect that the growth will be a bit different. Most of the current growth is multiple devices per household, which doesn't affect the number of subnets: a single /64 will keep a house happy with a nice sparse network, even if every single physical object that you own gets a microcontroller and participates in IoT things using a globally routable address.

IMHO: what needs to happen next is to have a 16 bit packet header to indicate the size of the address in use. This makes the address space not only dynamic, but MASSIVE without requiring all hardware on the face of the Earth to be updated any time the address space runs out.

This isn't really a workable idea. Routing tables need to be fast, which means the hardware needs to be simple. For IPv4, you basically have a fast RAM block with 2^24 entries and switch on the first three bytes of the address to determine where to send the packet. With IPv6, subnets are intended to be arranged hierarchically, so you end up with a similarly simple decision. With variable-length fields, you'd need something complex to parse them, and that would send you into the software slow path. This is a problem, because you'd then have a very simple DoS attack on backbone routers: just send them packets with large length headers that chew up CPU before they're dropped.

You'd also have the same deployment headaches that IPv6 has. No one would buy routers with fast paths for very large addresses now just because in 100 years we might need them, so no one would test that path at a large scale; you'd avoid the DoS by simply dropping all packets that used an address size other than 4 or 16 bytes. In 100 years (i.e. well over 50 backbone router upgrades), people might start caring and buy routers that could handle 16- or 32-byte address fields, but that upgrade path is already possible: the field you're looking for is called the version field in the IP header.
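For a feel of why fixed-width addresses keep the fast path simple, here's a toy version of that 2^24-entry lookup. This is illustrative only (real line cards use TCAMs or tries and longest-prefix match, and none of these names are a real API); the point is that a fixed-width address makes the fast path a single array index, which variable-length addresses would break.

#include <cstdint>
#include <vector>

struct NextHop { uint32_t port; };

class Ipv4FastPath {
    std::vector<NextHop> table_;  // 2^24 entries, one per /24 prefix
public:
    Ipv4FastPath() : table_(1u << 24) {}  // ~64 MB of next-hop entries

    void set_route(uint32_t prefix24, NextHop hop) {
        table_[prefix24] = hop;   // prefix24 is the top 24 bits of a route
    }

    NextHop lookup(uint32_t dst_addr) const {
        // The top three bytes of the destination select the entry directly:
        // no parsing, no branching, one memory read.
        return table_[dst_addr >> 8];
    }
};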
