
Comment Big problems from a little seed. (Score 1) 427

Bjarne: There really is a question at the end of this, but it takes some setting up. Please bear with me.

In the late '80s I came to Silicon Valley for a startup, which was using C++ for a major project. I learned the language then and got over the "doing it the object-oriented way" hump. In the process I analyzed what cfront was producing and got a good understanding of what was under the hood (at the time).

The project was very ambitious. Much of it was creating a data base engine for a complicated indexing mechanism. The result was that transactions occurred by creating a transient structure. The bulk of the work occurred during its construction, and errors during this stage had to be unwound carefully.

In those days C++ didn't have exception handling - "catch" and "throw" were reserved words, to be defined later. (So I built a set of exception handling classes that unwound even errors thrown from deep in construction and destruction correctly.)
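
For readers who never wrote pre-exception C++, the general shape of such a scheme is a stack of jump buffers plus explicitly registered cleanup actions. The sketch below is hypothetical (UnwindFrame, Cleanup, raise_error are invented names, not the project's actual classes), but it shows the technique:

    // Sketch of pre-exception error unwinding in early C++: a stack of
    // jump buffers plus explicitly registered cleanups. All names here
    // are hypothetical, not the project's actual classes.
    #include <csetjmp>
    #include <cstdio>

    struct Cleanup {                  // one registered undo action
        void (*fn)(void*);
        void* arg;
        Cleanup* next;
    };

    struct UnwindFrame {              // one "try" level
        std::jmp_buf env;
        Cleanup* cleanups;
        UnwindFrame* prev;
    };

    static UnwindFrame* g_top = 0;

    void raise_error(int code) {      // the "throw"
        UnwindFrame* f = g_top;
        for (Cleanup* c = f->cleanups; c; c = c->next)
            c->fn(c->arg);            // unwind: newest cleanup first
        g_top = f->prev;
        std::longjmp(f->env, code);
    }

    static void undo_partial(void*) { std::puts("partial construction undone"); }

    int main() {
        UnwindFrame frame;            // the "try"
        frame.cleanups = 0;
        frame.prev = g_top;
        g_top = &frame;
        int err = setjmp(frame.env);
        if (err) {                    // the "catch"
            std::printf("unwound with error %d\n", err);
            return 0;
        }
        Cleanup undo = { undo_partial, 0, frame.cleanups };
        frame.cleanups = &undo;       // register an undo mid-"construction"
        raise_error(42);              // failure deep in construction
        return 0;
    }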

Some of the architectural types had come to OOP via Xerox PARC and Smalltalk, and didn't want to be "slowed down" by getting "manual" memory management right. So we built a set of classes (including "smartpointers") and a preprocessor (to automatically generate pointer-identifying virtual functions) and got garbage collection working just fine. (We did a similar thing for remote procedure calls, and so on. We may still hold the record for layers of preprocessing between source and object...)
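
The flavor of those classes, in a minimal hand-written sketch (names hypothetical - the real project generated the enumerator overrides with a preprocessor rather than writing them by hand):

    #include <vector>

    struct Collectable;
    typedef void (*Visitor)(Collectable**);   // mark/sweep callback

    struct Collectable {
        Collectable() { all_objects().push_back(this); }
        virtual ~Collectable() {}
        // Each derivation layer overrides this to report its own pointer
        // members, then chains to the base version for the baseward layers.
        virtual void enumerate_pointers(Visitor) {}
        static std::vector<Collectable*>& all_objects() {
            static std::vector<Collectable*> v;
            return v;
        }
    };

    struct Node : Collectable {
        Collectable* left = nullptr;     // cleared before user code runs
        Collectable* right = nullptr;
        void enumerate_pointers(Visitor visit) override {
            visit(&left);
            visit(&right);
            Collectable::enumerate_pointers(visit);  // baseward layers
        }
    };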

C++ WOULD have been the ideal language for it. But we found a little hole in the spec that caused BIG problems. The language got it SO ALMOST right that it was painful.

Consider construction of a subclass with virtual functions. Suppose the base class exports a pointer to itself (say, putting it into an external list). Then suppose that, at some time during the execution of the constructor of a member object of the derived class (or other initialization of the derived class), something gets hold of that pointer and calls a virtual function of the base class that is overridden by the derived class. Does it get the base class or derived class version of the function?

IMHO it should get the BASE class version during the initialization of the derived class UP TO the execution of the first user-written statement of the constructor, and the derived class version once the constructor's guts are executing. Getting the derived version before the constructor proper is entered means attempting to use functionality that has not been initialized. (Before the constructor is entered you're still initializing the component parts. During the constructor you initialize the assembly, and the author can manage the issue.) Similarly, during destruction it should get the derived version through the last user-written line of the destructor, and the base class version after that (as first the object-type members, then the base class(es), are destroyed).

Examples of how this would work in real problems:
  - Object represents an object in a displayed image. The base class is a generic displayable object, which hooks the object into a display list. It has a do-nothing "DrawMe()" virtual function. The derived class adds the behavior. When the display finishes a frame the list is run, calling the "DrawMe()" methods of all the objects. If one is still under construction and the derived class overriding is called, uninitialized memory is accessed (including pointers containing stack junk).
  - Object is heap-allocated. Virtual functions are the "mark()" or "sweep()" pointer enumerators for the garbage collector. Base class is the generic "I'm a heap-allocated object" substrate, hooking into an "allocated objects" list with do-nothing virtual functions. At each level of object derivation the new version of the function enumerates, for the mark and sweep phases, the member variables that are pointers (and calls the base class version to also enumerate the pointers at the more baseward layers). Normally the pointers' own initialization clears them to NULL before any of this can happen. But if a derived class constructor exhausts memory and triggers a garbage collection before all the pointers are set to null, getting the derived-class version of an enumeration routine means following stack-junk pointers. Crash!
  - Exception-handling during constructors involves a similar mechanism to identify what levels of DEstructor need to be called to unwind the construction. Again a virtual function (this time the actual destructor) identifies the level of construction that needs unwinding. Again, getting the derived class overriding before the constructor is entered means calling the destructor for an uninitialized level. Oops!
There are, no doubt, many similar real-world patterns, since the problem is fundamental.
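
To make the first scenario concrete, here is a stripped-down sketch (all names hypothetical - this illustrates the pattern, not the project's code). The base constructor publishes "this" to the display list; the question is which DrawMe() a re-entrant frame flush reaches while the derived object is still initializing its members:

    #include <cstdio>
    #include <vector>

    struct Displayable;
    static std::vector<Displayable*> display_list;   // hypothetical frame list

    struct Displayable {
        Displayable() { display_list.push_back(this); }  // exports 'this' early
        virtual ~Displayable() {}
        virtual void DrawMe() {}        // do-nothing base behavior
    };

    void flush_frame() {                // may run mid-construction
        for (Displayable* d : display_list) d->DrawMe();
    }

    struct Sprite : Displayable {
        std::vector<int> pixels;        // not yet constructed when DrawMe hits
        Sprite() : pixels(make_pixels()) {}
        void DrawMe() override { std::printf("%zu pixels\n", pixels.size()); }
        static std::vector<int> make_pixels() {
            flush_frame();              // simulate the re-entrant callback
            return std::vector<int>(64, 0);
        }
    };

(For reference: current C++ does define dispatch during construction - a virtual call resolves to the overrider of the class whose constructor is currently running. So inside Displayable's own constructor a call would reach Displayable::DrawMe, but during Sprite's member initialization, as above, it reaches Sprite::DrawMe, and touching the not-yet-constructed member is undefined behavior.)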

So when we discovered the wrong level was getting called in some cases, I did a survey of available compilers, looking for one that got it "right". With both constructor and destructor behavior to be done right or wrong, there were three wrong ways and one right way:
  - cfront (and the cfront-derived compilers) got it wrong one way.
  - the three commercial compilers for PCs got it wrong a second way.
  - gnu g++ got it wrong the third way.

So we had to program around it. We were able to get the exception handling to operate correctly. But both exception handling and garbage collection required that all nontrivial processing be moved from initialization to constructor bodies. Member variables of object type had to be moved to the heap - replaced by smartpointers - and allocated by the constructor. Compact composite objects allocated and freed as a unit became webs of pointer-interconnected heap objects, allocated separately (thus multiplying allocations and frees). In addition to the extra memory management, the number of pointers that had to be followed by the garbage collector exploded. Efficiency went out the window.
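
The shape of that workaround, in a minimal sketch (std::shared_ptr standing in for the project's smartpointer class, and the member type invented for illustration):

    #include <memory>

    struct Index { int table[1024]; };   // stand-in for the real index machinery

    // Before: a compact composite. The member is built during initialization,
    // before the constructor body is entered.
    struct TransactionBefore {
        Index index;
    };

    // After: the member is hoisted to the heap and allocated in the constructor
    // body, so no nontrivial work happens before user-written code runs - at
    // the cost of an extra allocation and one more pointer for the collector.
    struct TransactionAfter {
        std::shared_ptr<Index> index;    // starts null
        TransactionAfter() { index = std::make_shared<Index>(); }
    };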

Of course, since the published language definition didn't actually DEFINE the behavior, "right" or "wrong" were not official.

This was during the ANSI deliberations for the first generation of standards. The current draft explicitly left open the issue of which overriding you got in this circumstance. So I petitioned the committee to define it - "correctly". I had high hopes, since everybody would have had to make a change, so it wouldn't favor some vendors over others. But on the second round of email replies I got something that seemed strange - saying something about it not being important because "C++ doesn't have an object model". (Say WHAT?) I was too busy to pursue it further, let it drop, and the standard came out with the behavior explicitly undefined.

When the second round of ANSI standard came by (after my participation in the project was over) I tried again, just to "fix the language" to avoid this sort of subtle bug. Still no luck: This time the standard not only left it open, but explicitly said (approximately) "Don't write code where this could happen."

By the third round I had "gone over to the hard(ware) side of the force" and didn't pursue it.

IMHO this is the one thing in C++ that is a real language killer (of the sneak-up-behind-and-knife-the-developer variety).

So, FINALLY, the question:

Has this been fixed yet by the newer standards? If not, is there a chance that it will be at some future time? (Perhaps if YOU brought it up there might be more attention paid to it.)

Comment Unfortunately that's technologically hard. (Score 2) 147

It's called fair queuing. Serve all active customers equally.

That's what my solution would have been, as well. And I wondered why they didn't. Why did they set caps on particular users, rather than just split it equally on a moment-by-moment basis?

But then, a few years back, I was put on a team designing the hardware accelerators that handle bandwidth division in big router packet processors.

Turns out that doing real fair queueing, when you've got a sea of processors and co-processors trying to hot-potato all the packets, is NOT easy. It involves information sharing among ALL the streams, simultaneously, packet by packet, across coprocessors, processors, chips, even boards. This is both N-squared and doesn't parallelize well. So a typical implementation works by setting per-user or per-category-within-a-user limits (only having to access one, private, data structure per stream), assigning limits to each and counting each's usage without reference to the current usage of others.
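
For the curious, here's what "real" fair queueing looks like when there's only ONE processor to worry about - a minimal single-threaded sketch of deficit round robin, one of the standard approximations (names hypothetical). The shared per-stream state in it is exactly what refuses to scale across a sea of packet processors:

    #include <cstddef>
    #include <deque>
    #include <map>

    struct Packet { int user; std::size_t bytes; };

    // Deficit round robin: serve all active users equally in bytes,
    // one quantum of credit per visit. Single-threaded sketch - the
    // shared per-stream maps below are what is hard to distribute.
    class FairQueue {
        std::map<int, std::deque<Packet>> queues;   // per-user backlog
        std::map<int, std::size_t> deficit;         // per-user byte credit
        static constexpr std::size_t kQuantum = 1500;
    public:
        void enqueue(const Packet& p) { queues[p.user].push_back(p); }

        bool dequeue(Packet& out) {
            for (auto it = queues.begin(); it != queues.end(); ) {
                auto& q = it->second;
                if (q.empty()) {                    // user went idle
                    deficit.erase(it->first);
                    it = queues.erase(it);
                    continue;
                }
                deficit[it->first] += kQuantum;
                if (q.front().bytes <= deficit[it->first]) {
                    out = q.front();
                    deficit[it->first] -= out.bytes;
                    q.pop_front();
                    return true;
                }
                ++it;
            }
            return false;   // nothing eligible this pass; credit accrues
        }
    };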

That means that, to avoid dead backhaul time while customers are throttled below what's available, you have to give them oversize quotas. But that means the "flight is overbooked" and the heavy users, with more packets in flight, get more than their share of "seats", squeezing out the lighter users. To get back to moment-to-moment fair, with only the quota "hammer" for a tool, you have to throttle them back. Tweaking their quotas even on a minute-by-minute basis, let alone millisecond-by-millisecond, would swamp the control plane, and you couldn't easily share the storage for the rapidly-adjusting limits by classes, but would have to store them, as well as usage, per stream (or at least have many subclasses to switch them among). Oops!
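
And here's the per-stream quota approach actually used - essentially a token bucket per user: one private data structure, no reference to anybody else's usage (again a hypothetical minimal sketch):

    #include <algorithm>
    #include <cstddef>

    // Per-user token bucket: the quota "hammer". Each stream polices
    // itself against a fixed rate, with no cross-stream sharing - which
    // is why the rates have to be set oversize ("overbooked").
    struct TokenBucket {
        double rate_bytes_per_sec;   // the assigned quota
        double burst_bytes;          // bucket depth
        double tokens;               // current credit
        double last_time;            // seconds

        bool admit(std::size_t packet_bytes, double now) {
            tokens = std::min(burst_bytes,
                              tokens + (now - last_time) * rate_bytes_per_sec);
            last_time = now;
            if (tokens < packet_bytes) return false;   // throttle
            tokens -= packet_bytes;
            return true;
        }
    };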

I had some inkling that might be fixable. But the company downsized, and I was laid off, before I could examine it deeply enough to see if it could be done efficiently. That was a processor generation or two ago, and I've been doing other stuff since. Good luck, telecom equipment makers!

Comment FAA is the big one. The other is liability. (Score 1) 107

The FAA regulations are the biggest factor.

The next is liability litigation. Try to run a company when one suit from one crash (even if it's not your fault) might drain your entire investment and bankrupt you. Try to get insurance in the same situation.

Either alone might make it hard. Both together have essentially frozen designs for private aircraft for over half a century and nearly destroyed civil aviation.

Comment (Worse than I thought. Should have proofread...) (Score 3, Interesting) 327

Oops: Got output and jobs merged:

11.91% of the population vs 11.7% of the total output. A bit behind in value added. (Horrible, since the value added in, say, computers is hysterically high.)

11.91% of the population vs. 9% of the manufacturing workforce. Per capita that's 9/11.91, about 0.76 of the national average - roughly 24% fewer manufacturing jobs per capita (put the other way, the national average is about 32% above California's). Doesn't sound like the "number one state for manufacturing jobs" to me.

Comment Weight it by population, or area, or ... (Score 4, Informative) 327

California is by far the number one state for manufacturing jobs, firms and output - accounting for 11.7 percent of the total output, and employing 9 percent of the workforce.

I'd love to see that in per-capita or per-acre terms.

It's also the largest state in population, with 11.91% as of the 2010 census. That's half again as many as Texas, a pinch under twice as many as New York or Florida, almost three times that of Illinois or Pennsylvania, and by then you've used up more than a third of them.

11.7% of the output jobs vs. 11.91% of the population says the AVERAGE of the rest of the states has it beat. Some of the others are REALLY depressed, so the best of them beat it into the ground.

Similarly, it's the third largest state in area - with the largest amount of COMFY area.

It has resources, the best ports for trade with Asia, decent roads and railroads to the rest of the continent, etc. And it's got some capital-intensive industries and lots of access TO capital. It SHOULD be a nova to the rest of the country's furnaces. So why isn't it?

Comment So how does one find out /apply "fix" with linux? (Score 2) 131

It would have been nice if TFA had told us what chips were affected, or how to determine that, rather than saying "Haswell" and expecting everybody reading it to do their own research.

I just spent ten minutes looking around the web, trying to determine if the processor in my laptop is one of those affected - preparatory to perhaps trying to figure out, if it is, how to apply the "disable the broken feature" fix - without installing Windows - to avoid the memory corruption bogeyman if somebody distributes software that uses, or abuses, the feature.

No joy. The documentation seems to say that:
  - Core i7 is Haswell
  - TSX is NOT supported on versions up to something BEFORE the processor version in my laptop (i7-4700MQ)
  - But the descriptions of that processor I've found so far don't say, one way or another, whether it does or doesn't have TSX.

The "flags" field in /proc/cpuinfo doesn't include a "tsx". But would it?

Can anyone tell us a simple way to check?
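
For what it's worth: Linux doesn't report a literal "tsx" flag - when TSX is present and enabled, /proc/cpuinfo shows "hle" and "rtm" instead. You can also ask the CPU directly via CPUID leaf 7; a minimal sketch, assuming GCC or Clang's <cpuid.h> on x86:

    // Query CPUID leaf 7, subleaf 0: EBX bit 4 = HLE, bit 11 = RTM.
    // These two bits are what Intel documents as the TSX features.
    // Build: g++ -o tsxcheck tsxcheck.cpp
    #include <cpuid.h>
    #include <cstdio>

    int main() {
        unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;
        if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
            std::puts("CPUID leaf 7 not supported");
            return 1;
        }
        std::printf("HLE: %s\n", (ebx & (1u << 4))  ? "yes" : "no");
        std::printf("RTM: %s\n", (ebx & (1u << 11)) ? "yes" : "no");
        return 0;
    }

(Caveat: the microcode fix that disables the broken feature reportedly clears these CPUID bits too, so "no" can mean either "never had it" or "already disabled".)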

Comment Specifically: problems with public domaining. (Score 5, Informative) 191

RMS invented the GPL because of copyright issues, and before software patents became a problem.

As I understand it: It was a (brilliant) workaround for two problems with putting software in the public domain, which releases ALL rights:

  - Derived works: Somebody makes a modified version and copyrights that. They do a bugfix or enhancement and even the original author is locked out of his own software's future. He can't do the same bugfix or a similar enhancement without violating the new copyright. Similarly with other users of the software.

  - Compilation copyrights: If somebody combines several public domain works into a combined work, they can copyright THAT, claiming violation if somebody uses excerpts from it - such as some of the original public-domain components or excerpts from them. In book publishing this covers publishers of collections and anthologies. In software, including a public-domained module in a library or distribution would let the distributors of that lock up the rights to the components. Again the original author and other users can get locked out of the author's own work. (For instance, nobody else could include it in a similar library or distribution.)

Stallman's trick solution was to keep the original work under copyright, but license it under terms that require derived works to also be licensed under the same terms and source to be included with object. Expiration of the copyright might cause a problem - but with companies like Disney on the job lobbying Congress, that's probably not going to happen in the US as long as there IS a US. Alternatively, eliminating copyrightability of software would also eliminate the need for the GPL.


Comment The tower of babel was already present back then. (Score 3, Interesting) 294

My experience reaches back to the toggle-and-punch-cards days and I don't want to bore anyone with stories about that.

But one thing I have noticed in all those years is that I cannot recall a single year where it wasn't proclaimed by someone that software engineering would be dead as a career path within a few years.

I go back that far, as well.

And the proliferation of languages, each with advocates claiming it to be the be-all and end-all, was well established by the early '60s.

(I recall the cover of the January 1961 Communications of the ACM, which had artwork showing the Tower of Babel, with various bricks labeled with a different programming language name. There were well over seventy of them.)

Comment Re:Programming evolves. News at 11. (Score 1) 294

I'm struggling to understand the point of this article.

It's Infoworld.

The point of the article is twofold:
  - To convince Pointy Haired Bosses that they understand what's going on and are riding the cutting edge.
  - To sell them new products, implementing new buzzwords, which they can then inflict on their hapless subordinates, making their life hell as they try to get the stuff working and into production.

That's the first two lines of the four-line Slashdot meme. The remaining two are:

(3. Bill the advertisers.)
(4. Profit!)

Comment Also a difference in law. (Score 4, Insightful) 262

There is zero difference in talent. The difference is one of leadership and money. The money is already there, so there is where people go.

Actually, the big difference is a little-known aspect of California intellectual property law:

If you, as an employee, invent something, on your own time and not using your employer's resources, and it doesn't fit into the employer's current or foreseeable future product line, you own it. If the patent assignment agreement in your employment contract says otherwise, it's void.

This means that, if you invent something neat and your employer doesn't want to productize it, you (and a couple of your friends) can rent a garage across the street and found a new company to develop and sell it.

Employees in California can NOT be ripped off the way Westinghouse ripped off Nikola Tesla.

The result is that companies in silicon valley have "budded off" more companies, like yeast budding off new cells. And once this environment got started, thousands of techies have migrated to the area, so there are plenty of them available with the will and talent to be the "couple of your friends" with the skills you need to fill out the team in your garage.

Lots of other states have tried to set up their own high-tech areas on Silicon Valley's model. But they always seem to miss this one point. They need to clone that law to have a chance at replacing or recreating the phenomenon. Result: They might get a company to set up shop, but they don't get a comparable tech community to build up. Research parks of several companies, generally focused on some aspect of tech, might form, but you don't get the generalist explosion.

Of course, like any network, the longer it accumulates, the more valuable it is to be connected to this one, rather than another that is otherwise equivalent. (This is what the parent poster already alluded to.) Thus there's only one Silicon Valley in California, with the resources concentrated within driving distance, though the law is statewide. Even with the law change, and a couple decades to let the results grow, other states might have a tough time overcoming California's first-mover advantage.

But California keeps fouling things up for techies and entrepreneurs in other ways. So if some other state would TRY this, they might become a go-to place when groups of people in Silicon Valley get fed up and decide to go-forth.

Comment What's different from the last quarter century? (Score 1) 262

undoubtedly, you've read about the tempest in San Francisco recently, where urban activists are decrying the influx of highly paid tech professionals, who they argue are displacing residents suddenly unable to keep up with skyrocketing rents.

That was decades-old news in Silicon Valley when I moved here in the late '80s. (A couple who'd gone there for the same project a few years earlier had bought, rather than rented, had the price of their mortgaged house skyrocket over a couple years, and bailed out of High Tech to start a new career as landlords.)

I thought Hi Tek had been doing the same to Berkeley and (to a lesser extent) SF since before then, as well. SF prices have always been high - though perhaps not as high as mid-peninsula around the Stanford tek-lek.

So what's new? Did Hi Tek start buying spaces in the slums and drive the prices further up than SF's already astronomical highs? Did public assistance not rise to track the new rents?

This is SO last millennium...

Comment Re:Start with a prescription from Hipocrates: (Score 4, Interesting) 123

The flaw with this analysis is the timeline. Yes, the short term impact on the cleaned beaches was pretty horrendous, but it remains to be seen how this plays out over time as the ecology recovers

Hear, hear. The "cleaned" beaches may come back closer to the original - after they've been repopulated by pioneer species and gone through the whole beach-equivalent of the succession to climax forest. The uncleaned beaches may get where they're going more quickly, but that may be somewhere other than where they started. And so on.

Maybe, once the toxins have been cleaned up by lifeforms in one case, the soil rebuilt and recolonized by successive populations of organisms in the other, they'll come back to what they once were. (Assuming the area hasn't been reshaped by then.) Maybe they'll come back as something else - like the "flip-flop" island of recent history: Lobsters ate the snails and kept their population down. A hurricane wiped out the lobsters. Attempts to recolonize by importing lobsters failed. It turned out the snail population had boomed once the lobsters were gone, to the point where a newly introduced lobster would, within minutes, pick up enough snail riders to weigh it down and eat IT - so the ecology was now stable in a different mode. So in either case the beach ecology may converge to a different equilibrium.

But there are sections of the Pacific Northwest where a natural phenomenon did something similar: Two glaciers met along the front of the ice cap during the last ice age. When things finally melted, that junction melted last, forming a dam holding back an ocean of meltwater. When the dam finally melted through, that ocean poured out through the one spot. It scoured an area comparable to an eastern state down to bedrock, washing everything from topsoil to gravel to rocks to boulders off toward the Pacific. The area STILL is nearly as lifeless as the moon.

So my bet is the unwashed beach will reach a robust and stable ecology in historic time. But I wouldn't be surprised if, even with lots of sea life washed up by wave action, the washed beach takes geologic time to make a similar recovery.

Comment Start with a prescription from Hipocrates: (Score 4, Insightful) 123

First: Do no (more) harm

One of the lessons from the Exxon Valdez oil spill is that attempts to clean things up may make them far worse, while the ecology's toughness in the face of environmental changes is vastly underrated.

For instance: They did a major removal of oil from part of a beach. In the process they stripped the bulk of the lifeforms off, leaving essentially sand - mineral dust. In an adjacent section that was missed, the organisms did a fine job of consuming the oil that had spilled. (It seems sea life has to deal with seeped oil quite a bit, from natural sources. Some stuff not only handles it, but considers it a valuable resource.) After a couple years the un-cleaned beach was flourishing (though perhaps not with the same mix of populations as before). A picture of the boundary is impressive: Cut like a knife.

Granted, disturbing mine tailings is a very different case. But similar rules apply: Will letting them settle to the bottom, where they can be processed over decades to geologic time, cause less harm than attempting to clean them up RIGHT NOW - which might keep them mixed into the water and produce a much larger, sustained, input of "toxic" minerals to the bulk of the waterway's biosphere?

Comment But but but but the whole POINT ... (Score 1) 140

.. German cars use what we call a "Can Gateway" but is better thought of as a firewall. Every different system in the car has its own private CAN bus. Anything that needs to travel between the buses has to go through the gateway.

A separate CAN(N)BUS for each system? But the original POINT of the bus was to replace the expensive, custom, wiring harness - a bundle of special-purpose wires as thick as your wrist - with a power line and a pair of signal wires. One big party line with everything talking on it. Now you're bringing back the harness AND adding an extra box.

(The above is only half facetious.)

Vehicles that share a common CAN bus without a gateway are readily exploitable. I could plug a CAN interface into the headlights, A/C or any other system on the global bus and lock/unlock the doors, roll the windows up/down, trigger the traction control/ABS or even start/stop the car (if it uses a push button start).

Which, of course, is the downside of the system.

An alternative to restoring the bundle is for each user of the "big party line" to "recognize the voice" of those who can give it instructions - and have a list of what instructions each can give it. I won't go into details, but there is ample room for design here. An interloper would be reduced to trying to "mimic the voice" of a talker with enough authority to command the action, or DOSing by "shouting over" legitimate commands.
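
A toy sketch of the "recognize the voice" idea (everything here is hypothetical, and the "MAC" below is a placeholder checksum, NOT a real cryptographic function - a real design would want something like a truncated CMAC with per-sender keys and replay counters): each receiver keeps a table of which sender IDs may issue which commands, and checks a per-sender MAC before acting.

    #include <cstdint>
    #include <map>
    #include <set>
    #include <utility>

    // Classic CAN frame: 11-bit ID, up to 8 data bytes. Here the last
    // 2 data bytes carry a truncated per-sender MAC.
    struct CanFrame {
        std::uint16_t id;        // sender/message ID
        std::uint8_t  data[8];
        std::uint8_t  len;
    };

    // Placeholder keyed checksum - stands in for a truncated CMAC over
    // (key, id, payload, replay counter). Not cryptographically sound.
    std::uint16_t toy_mac(std::uint64_t key, const CanFrame& f) {
        std::uint64_t h = key ^ f.id;
        for (int i = 0; i + 2 < f.len; ++i)
            h = h * 1099511628211ull ^ f.data[i];
        return static_cast<std::uint16_t>(h);
    }

    struct Receiver {
        std::map<std::uint16_t, std::uint64_t> keys;            // per-sender keys
        std::set<std::pair<std::uint16_t, std::uint8_t>> acl;   // (sender, command)

        bool accept(const CanFrame& f) {
            auto k = keys.find(f.id);
            if (k == keys.end() || f.len < 3) return false;     // unknown voice
            std::uint16_t mac = static_cast<std::uint16_t>(
                f.data[f.len - 2] | (f.data[f.len - 1] << 8));
            if (toy_mac(k->second, f) != mac) return false;     // mimicry fails
            std::uint8_t command = f.data[0];
            return acl.count({f.id, command}) != 0;             // allowed command?
        }
    };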
