
Comment Re:Moore's law applies here. (Score 1) 365

I'm not saying that C is 2% faster than C++; in fact C++ might be faster. But if C were faster... go C for core functionality.

But what about new functionality? Suppose it takes an extra three months to get the next new thing working in C than it would in C++. That's two million users times three months productivity lost because the feature wasn't there if you stuck with C. And then there's the feature after that, and the next one... The future gets here sooner. Some of those "features" save lives.

Not that it matters: Some teams will use C. Some will use C++. Some will use [insert other language here]. Eventually some will use whiffy-spim-bam that hasn't been invented yet. Getting to release of a working product first is a big factor in displacing the competition and becoming a dominant paradigm.

What strikes me as ludicrous is that we're having this discussion centered around a variant of unix. In case you weren't aware, operating systems USED to be written entirely in assembler, long after virtually all applications were coded entirely, or all-but-critical-stuff, in compiled languages. What made UNIX different is that it was the first major OS that was coded primarily in a compiled language:

As of Version 6, which was the big source code leak from the University of New South Wales, the kernel was 10,000 lines (including a lot of added blank lines to make modules come out on 50-line boundaries for easy listing in a textbook). Of that, 8,000 lines - mostly machine independent stuff - were in C and 2,000 - mainly memory management and interrupt vectoring - were still in assembler. As time went on virtually all of the rest migrated to C.

The kernel being tiny, the bulk of it being in C, the availability of good free and/or cheap C compilers for new architectures, and some interesting intellectual property licensing policies by Bell Labs, led to UNIX being rapidly ported to new chip architectures, enabling an explosion of "unix boxes" and wide adoption of UNIX.

All during that time we had EXACTLY this same argument - with assembler taking the role of C and C taking the role of C++. And then, as now, the argument didn't matter, because C won on the merits, despite being a few percent slower than good assembler code. B-)

(Note that this was BEFORE the RISC revolution, when it was discovered that, in actual production code, assembly code, on the average, actually ran slower than compiler output - because compilers knew only MOST of the tricks but they applied them ALL THE TIME, while coders only applied them when they were paying extra attention to something perceived as critical. Want to bet that C++ is REALLY slower than C, across the bulk of production code?)

Comment Moore's law applies here. (Score 1) 365

Looking at the project, I stumbled over the fact that they rewrote an ethernet driver in C++ with a 2% performance overhead.

Well that makes it easy to determine whether it's worth it.

The David House variant on Moore's Law says processor power doubles every 18 months, and the real world has tracked this very closely for half a century. Compounded 2% gains double after about 35 steps, and 18 months is 78 weeks, or 546 days. So in about fifteen and a half days, Moore's law has given you back your loss, and after that it's gravy.

So if having your code base in C++ improves your development process enough that it lets you complete and release your next upgrade (or the one after that, etc.) only a couple weeks earlier, C++ wins.

If it lets you keep gaining on your competition, it wins BIG.

Comment Because ANSI imported strong type checking into C (Score 1) 365

For a short while, the Linux kernel could be compiled as C++. Some developers, I believe Linus included, felt that the stricter type checking offered by C++ would help kernel development. There was no attempt to actually use C++ features though.

The effort did not last long.

Once ANSI had imported C++'s strong type checking into the C standard, and compilers had implemented it, there was no need to use the C++ compiler to get strong type checking.

Since that was the only feature Linus was using, it made sense to discard the remaining cans of worms unopened. B-)

Comment It won't be until... (Score 1) 365

It won't be time for C++ in the kernel until the standard defines (and the compilers implement) whether you get the base or derived class version of a virtual function if it's called during the initialization, construction, or destruction of a derived class' member variables.

It also won't be time for C++ in the kernel if they DO define it, but they define it to be anything but the base class version. The derived class version should only be called beginning with the first user-written line of the constructor and ending just after the last user-written line of the destructor.

Comment Re:Why do people still care about C++ for kernel d (Score 1) 365

... a good C programmer can look at a piece of C code and have a pretty good idea of the machine code being generated. ... that hasn't really been true for a long time now. Modern processors and the related architecture have become so complicated that generating correct, efficient code is an extremely specialist skill ...

Yes, these days he might not be able to easily determine what the generated code IS. But with C he can be pretty sure he correctly determined what it DOES - and if it does something else it's a compiler bug.

With C++ you get to override operators, functions, and pretty much everything else. You can redefine the semantics of what they actually DO in various situations. Keeping the semantics analogous when writing overrides is a convention, not something that's enforced by the compiler.

Comment Re:Why do people still care about C++ for kernel d (Score 1) 365

Please don't say that in a hiring interview.
You are unable to grasp when a copy constructor or an assignment operator is called?

It is impossible to tell, from looking at an assignment, whether it is being done by a native assignment, a copy constructor, an overridden assignment operator, etc. To determine that you must go back to the declarations of the variables to determine their type, maybe hunt through the include files to identify the type behind a typedef (and possibly which typedef is in effect given preprocessor-flag-driven conditional compilation), then (if you're dealing with a class type) study the class definition (including walking back through and grokking its base classes).

Ditto for overridable operators - which is pretty much all of them.

In C you can pretty much look at the code describing what to do and figure out what it does, only going on a snipe hunt when you hit a subroutine or macro invocation.

Comment Re:Why do people still care about C++ for kernel d (Score 1) 365

Do people not debug through object code any more? I've done that so many times when trying to understand a bit of cryptic C++ code or C macrology. There's no mystery possible - just look at the generated object directly if there's any doubt at all what's going on!

I learned to program back in the days when compilers would produce listings with the assembly code for each source line immediately following the source line. It was a given that the programmer would understand it and it often gave insights into what was going on - and going wrong - in a program.

It was also good training for reverse-engineering from object to source. (I fondly recall reading a piece of a buggy SCSI driver's object code that stored zero in a variable and then tested the value, and realizing that somebody had written "if (a==0)" as "if (a=0)" by mistake.)

But I gave up on this about 1990. RISC instruction sets, with things like the delay slot after a jump, and extreme compiler optimization, made enough of a hash of the object code that determining what it was actually doing became more of a research project than an exercise in reading. Dropping back to inserting instrumentation in the code ended up being far more productive.

Comment It's been injecting its own bugs since cfront. (Score 3, Interesting) 365

The C++ compiler will most certainly be less buggy than something thrown together to cover some element that C lacks.

Unfortunately, C++ includes an explicitly unstandardized behavior that can inject subtle bugs. This has been present since at least 1988, and has survived at least the first two standards (after which I stopped watching, having moved on to mostly hardware design).

(I DID try to bring it to the attention of the standards committee in both cycles, but it was ignored. Bjarne, in his recent Slashdot Q&A, didn't answer my question on it, either.)

The problem relates to which overriding of a virtual function is called during the initialization of the member variables and the construction of member objects of a derived class (and the corresponding destruction of the member objects during the destructor). The standard permits the calling of the derived class' version of virtual member functions at this time, when the derived class has not initialized, or has dismantled, its underpinnings.

Compilers are permitted to cause the call to go to either the base class version (IMHO correct) or the derived class version (IMHO dangerously incorrect). Calling the derived class version is bad, preventing a number of obvious constructions from working as expected, imposing limits on what programming techniques can be used safely, and displaying no warning (so the programmer has to know what not to do). Letting different compilers make different choices is horribly worse, as it makes the behavior unpredictable and compiler dependent.

C++ (especially the early versions, before it became buried in libraries and baroque constructions) came SO close to being a powerful and reliable tool for rapidly writing reliable code on large projects. But this "little bug" brought it all crashing down.

Comment Re:Ok, several aspects to this. (Score 3, Insightful) 651

First, guns don't protect, never have, never will.

The first eight words of your 457-word wall of text show you're so out of touch with reality that there's no point in reading the rest.

The primary function of guns in private hands is to protect those who carry them. They do that exceptionally well. In criminal attacks, resistance with gun is the most effective way to avoid injury or death. It's substantially more effective than the second best - knuckling under completely - and beats the pants off everything else, from running away, to trying to talk your way out, to resisting with bare hands or other tools. (Resisting with knife is about the worst.)

Research on self-defense is hard, because failures leave tracks in crime stats while successes usually don't (and often leave the self-defended victim with an incentive to keep quiet about it). Nevertheless, even the first well-run projects were able to put a lower bound of guns preventing or aborting more than six times as many crimes as they aid in committing.

In private hands they're safer than police, too. A defense-with-gun is usually effected by no more than brandishing or occasionally getting off a round in the general direction of the perp. But of those instances where a victim or a policeman shoots someone believed to be a perpetrator, the cop is over 5 1/2 times as likely to erroneously shoot an innocent as an armed private citizen.

My family has substantial personal experience with armed self defense. For just a few examples on my wife's side: In college she was accosted by the rapist in the window, who was dressed in just running shoes and carrying a dirty knife. Fortunately there was a hunting rifle behind the bed: She actually had to go as far as cocking it before he stopped trying to get her to drop it and jumped back out the window - apparently to take it out on another girl a few blocks away, with over 130 cuts while raping her. Her mother defended self and family against a Klan attack with a pistol. (Her granddad was caught away from his gun, though, and had to do his anti-Klan defense with a hammer.) Then there was the aunt, the uncle, ...

At the larger scale it's hard to argue with the fact that the US, founded in a revolution (by religious nuts with guns) against their self-admitted "legitimate government" and with over half the adult civilian population armed, has now gone over two centuries without a substantial attack from abroad and only one major internal war, while Europe continues to suffer from genocidal wars, often with multi-million body counts. (With the exception of Switzerland, of course: Every adult citizen there is armed and has had military training. Even World Wars go around them.)

It's also hard to argue with the fact that the US is multi-ethnic, and the common denominator of each of its ethnic groups is that their members' murder victimization rate is substantially less than that of contemporary members of the same ethnic group still residing in their land of origin.

As for resisting an oppressive regime if push comes to shove: We have experiences like "The Battle of Athens" just after WWII, and the documented question from Nixon to a think tank about what would happen if he suspended the presidential election. (Answer: That would precipitate an armed rebellion, and the population was well enough armed that it would succeed.) Uprisings aren't always successful and small or UNarmed uprisings are often put down, sometimes with lots of deaths. (Witness the Bonus Marchers' Massacre.) But recent decades of world politics have shown how effective a popular uprising can be, against even a coalition of world powers and superpowers.

If it came to that in the US, you can expect a substantial amount of the military (especially retirees) to be on the side of the people, along with lots of military equipment raided from armories. (You can see that now in the Middle East. The big difference between Al Qaeda and ISIS/ISIL is that the latter has a bunch of colonels and other line officers, force-retired and blocked from normal politics in the wake of the 2003 invasion of Iraq and overthrow of the Ba'athists - along with a lot of seized military arms. The former is a bunch of terrorists; the latter has a substantial army.)

Comment Define airborne (Score 1) 475

However, the Ebola Reston strain is airborne though only dangerous to monkeys.

I have often wondered whether the Reston virus had mutated to be spread by things like sneezes, or if it might be another matter entirely.

A number of monkey species throw feces (and/or other bodily secretions) when under stress and perceived attack. (I don't know if this is one of them, but assume for the moment it is.) Might being confined to cages along with others provoke such behavior? Wouldn't a sick monkey's feces, and tiny particles separated by airflow during the flight, carry an ebola-family virus just fine, without any mutation to make it, say, shed into nasal mucus and be carried by a sneeze?

(Granted this might fit the literal definition of "airborne transmission". B-) )

Comment You see that with thermoacoustics. (Score 1) 69

3D printing was the result of a lot of researchers working on a lot of parts, and when the dust settled, none of them could build a really practical printer without paying off all the other patent holders, most of whom were playing dog-in-the-manger with their patents while trying to elbow out the competition.

You see that with a lot of inventions. They may go through several cycles of invention / related invention / non-combination / wait / patent expiration until enough necessary parts of the technology are patent-expired that the remaining necessary inventions can be assembled in a single company's product and the technology finally deployed.

Thermoacoustics, for instance, just had its second round of patent expiration and is in its third round of innovation. The basic idea is to make a reasonably efficient heat-engine and/or refrigerator (or a machine that combines, for instance, one of each) with no moving parts except a gas. Mechanical power in the form of high-energy sound inside a pipe is extracted from, or used to create, temperature differences.

There are some really nice gadgets coming out of it, built mainly out of plumbing comparable to automotive exhaust systems and tuned manifolds, maybe with some industrial-grade loudspeakers built in, or their miniaturized or micro-miniaturized equivalent. (Example: A hunk of plumbing with a gas burner, about 12 feet high and maybe eight feet on a side. Oil fields often produce LOTS of natural gas in regions, like big deserts, where it's uneconomic to ship it to market. It gets burned off and vented. (CO2 is a weaker greenhouse gas than CH4, by a factor of several.) Pipe the gas into the plumbing, light the burner, and it burns part of it to get the power to cool and liquefy the rest. As a liquid it's economic to ship and sell it. Then you get to use much of the otherwise wasted energy, displacing other fuel supplies and reducing overall carbon emission.)

I hope this is the cycle where things hit the market.

Comment They can matter if you sell what you make on it. (Score 1) 69

Patents don't matter for making a printer for your own use.

They can matter if you build a business on them, like by selling objects built using them.

Especially if they make your process cheaper, easier, more convenient, flat-out possible, or produce a better part. (And if there ARE cheaper, etc. ways to do it, why are you using the patented tech anyhow? B-) )

Patents in the US were about increasing innovation by making first mover advantage trump second mover advantage: Giving the little guy with the bright idea time to set up manufacturing, make back his costs, reap some benefits, and get established enough to compete with existing large companies once they expire. Without them, it was thought, the existing big guys with the infrastructure in place could quickly clone the little guy's new invention and out-compete him in the market, but they wouldn't bother until the little guy had proved it was worth the effort. This would suck the incentive out of the little guys, the big guys would have little incentive to improve, and progress would be slow-to-stalled. The short-term inhibition on others deploying the invention was seen as less of an impediment to progress than having most inventions not be deployed, or even made, at all.

The idea was to set the time limit to maximize progress to the benefit of all/the country, and make manufacturing and technology grow like yeast (a la Silicon Valley B-) ). Part of the intent was to bias it toward innovators and make established processes free to use, because when the country was getting started the established players were owned by foreign interests. The founders wanted the country to develop its own industry, rather than being dependent on, and sending most of the profit to, big businesses in Europe.

But the time was set for heavy manufacturing at the pace of the period. It's a horrible mismatch for, say, software: With the availability of general purpose computing platforms, able to make distributable copies at electronic speed, and copyright to prevent verbatim cloning, a person or company with a new software product can go from stealth-mode program development to market establishment, profitability, and even market dominance in a matter of months, before competitors can engineer their own version. So patents aren't necessary to promote innovation, leaving just their retarding effect holding down the blaze of creativity. (Then there's open source, with its alternative monetization and/or reward strategies. But that's a "new invention". B-) )

It seems to me that:
  - The expiration of patents on stereolithography did help produce the initial explosion of new, and often inexpensive, devices and the improvements in what can be made, how accurately, and how inexpensively.
  - The availability of machines suitable for practical industrial prototyping - even before the cheap machine explosion - pretty much forced the high-end CAD software producers to include some form of stereolithography output format, while an open output format made the choice obvious. That's a big benefit to the toolmaker for a small effort. The availability in the high-grade commercial tools is a great synergy and helps a lot. But the hobby machines needed CAD tools and open source was already up to the task: Had the big players not gone along it still would have been done, and those big players not "with the program" would be experiencing major competitive pressure from open source tools and competitors that did provide such output.

And here's the key:
  - The availability of these rapid general-purpose manufacturing tools will bring (is already bringing!) software's high-speed innovation and entrepreneurial models to the manufacture of physical objects. Patents could be shortened in term or reduced to "design patents" - the manufacturing equivalent of copyright - and produce a physical-product explosion comparable to the computer revolution. (Or patents, like "content" copyright, could become the tool of obsoleted established players in the suppression of the competing business models.)

Brace yourself for either the physical-manufacture ramp-up to science-fiction's "singularity" or an ongoing RIAA / MPAA / conglomerate - style legal battle.
