
Comment Re:Good luck with that. (Score 1) 652

Better insulation to curb heat loss?

Better construction standards to reduce or eliminate infiltration of unconditioned air?

More efficient heating appliances? (Anything over ~10 years old can absolutely be upgraded)

More efficient heating/cooling strategies? (e.g. zoning)

There's absolutely more that can be done with a typical suburban home to reduce energy use. A 60% reduction on an older (30+ year old) home could actually be pretty easy if it hasn't already been renovated.
=Smidge=

Comment Re:Good luck with that. (Score 2) 652

A typical American could use a third, or even down to a quarter, of the energy he uses, and the whole country could cut down to a tenth, and no one would notice any difference.

This, I think, is the most important thing to keep in mind: when discussing "quality of life" in terms of energy, we can reframe it in terms of energy*efficiency. We can lower energy use without reducing QoL by improving efficiency.

Not impressed? It was not meant to impress you. That is per year, not per month.

I'd be more impressed if that were per month... it would be nearly three times as much energy as my entire house uses, and my house is roughly four times the size of your apartment. And I have all-electric appliances!

Anyway, at some point in time your energy will be green, and your energy demand will drop, and then you have to fight the power companies about: why is it that my electricity is so expensive when YOU get it for 'free'?

An argument easily won: I'm charging you for "free" green power because it costs me money to build and maintain the infrastructure that harnesses and delivers it to you.

It's kind of like asking why gasoline is so expensive when the oil is available for free, just sitting under the ground.
=Smidge=

Comment Re:Moore's law applies here. (Score 1) 365

I'm not saying that C is 2% faster than C++; in fact, C++ might be faster. But if C were faster... go with C for core functionality.

But what about new functionality? Suppose it takes three months longer to get the next new thing working in C than it would in C++. That's two million users times three months' productivity lost because the feature wasn't there, if you stuck with C. And then there's the feature after that, and the next one... The future gets here sooner. Some of those "features" save lives.

Not that it matters: Some teams will use C. Some will use C++. Some will use [insert other language here]. Eventually some will use whiffy-spim-bam that hasn't been invented yet. Getting to release of a working product first is a big factor in displacing the competition and becoming a dominant paradigm.

What strikes me as ludicrous is that we're having this discussion centered around a variant of Unix. In case you weren't aware, operating systems USED to be written entirely in assembler, long after virtually all applications were coded entirely (or all but the critical parts) in compiled languages. What made UNIX different is that it was the first major OS coded primarily in a compiled language:

As of Version 6 - the source that famously circulated out of the University of New South Wales - the kernel was 10,000 lines (including a lot of added blank lines to make modules come out on 50-line boundaries for easy listing in a textbook). Of that, 8,000 lines - mostly machine-independent stuff - were in C, and 2,000 - mainly memory management and interrupt vectoring - were still in assembler. As time went on, virtually all of the rest migrated to C.

The kernel being tiny, the bulk of it being in C, the availability of good free and/or cheap C compilers for new architectures, and some interesting intellectual property licensing policies by Bell Labs, led to UNIX being rapidly ported to new chip architectures, enabling an explosion of "unix boxes" and wide adoption of UNIX.

All during that time we had EXACTLY this same argument - with assembler taking the role of C and C taking the role of C++. And then, as now, the argument didn't matter, because C won on the merits, despite being a few percent slower than good assembler code. B-)

(Note that this was BEFORE the RISC revolution, when it was discovered that, in actual production code, assembly code, on the average, actually ran slower than compiler output - because compilers knew only MOST of the tricks but they applied them ALL THE TIME, while coders only applied them when they were paying extra attention to something perceived as critical. Want to bet that C++ is REALLY slower than C, across the bulk of production code?)

Comment Moore's law applies here. (Score 1) 365

Looking at the project, I stumbled over the fact that they rewrote an Ethernet driver in C++ with a 2% performance overhead.

Well that makes it easy to determine whether it's worth it.

The David House variant of Moore's Law says processor power doubles every 18 months, and the real world has tracked this very closely for half a century. A 2% improvement compounds to a doubling after about 35 steps, and 18 months is 78 weeks (546 days). So in about fifteen and a half days, Moore's law has given you back your loss, and after that it's gravy.
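
To make the arithmetic concrete, a minimal sketch in C++ (assuming only the 18-month doubling and the 2% figure above):

    #include <cmath>
    #include <cstdio>

    int main() {
        // House variant of Moore's law: performance doubles every 18 months.
        const double doubling_days = 78.0 * 7.0;  // 78 weeks = 546 days
        const double overhead = 1.02;             // the 2% performance cost

        // Solve 2^(t / doubling_days) = 1.02 for t:
        const double days = doubling_days * std::log2(overhead);
        std::printf("loss recouped in %.1f days\n", days);  // ~15.6 days
        return 0;
    }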

So if having your code base in C++ improves your development process enough that it lets you complete and release your next upgrade (or the one after that, etc.) only a couple weeks earlier, C++ wins.

If it lets you keep gaining on your competition, it wins BIG.

Comment Because ANSI imported strong type checking into C (Score 1) 365

For a short while, the Linux kernel could be compiled as C++. Some developers, I believe Linus included, felt that the stricter type checking offered by C++ would help kernel development. There was no attempt to actually use C++ features, though.

The effort did not last long.

Once ANSI had imported C++'s strong type checking into the C standard, and compilers had implemented it, there was no need to use the C++ compiler to get strong type checking.

Since that was the only feature Linus was using, it makes sense to discard the remaining cans of worms unopened. B-)

Comment It won't be until... (Score 1) 365

It won't be time for C++ in the kernel until the standard defines (and the compilers implement) whether you get the base or derived class version of a virtual function if it's called during the initialization, construction, or destruction of a derived class' member variables.

It also won't be time for C++ in the kernel if they DO define it, but they define it to be anything but the base class version. The derived class version should only be called beginning with the first user-written line of the constructor and ending just after the last user-written line of the destructor.

Comment Re:Why do people still care about C++ for kernel d (Score 1) 365

... a good C programmer can look at a piece of C code and have a pretty good idea of the machine code being generated. ... that hasn't really been true for a long time now. Modern processors and the related architecture have become so complicated that generating correct, efficient code is an extremely specialist skill ...

Yes, these days he might not be able to easily determine what the generated code IS. But with C he can be pretty sure he has correctly determined what it DOES - and if it does something else, it's a compiler bug.

With C++ you get to overload operators, functions, and pretty much everything else. You can redefine the semantics of what they actually DO in various situations. Keeping the semantics analogous when writing overloads is a convention, not something the compiler enforces.
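
A contrived sketch of the point (the Distance type here is hypothetical): nothing stops an overloaded operator from meaning something other than what it reads as, and the compiler won't complain.

    #include <cstdio>

    struct Distance {
        long mm;
        // Nothing forces "+" to mean addition; this one silently means "max".
        Distance operator+(const Distance& rhs) const {
            return { mm > rhs.mm ? mm : rhs.mm };
        }
    };

    int main() {
        Distance a{3}, b{4};
        Distance c = a + b;          // reads like addition...
        std::printf("%ld\n", c.mm);  // ...prints 4, not 7
        return 0;
    }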

Comment Re:Why do people still care about C++ for kernel d (Score 1) 365

Please don't say that in a hiring interview.
You are unable to grasp when a copy constructor or an assignment operator is called?

It is impossible to tell, from looking at an assignment, whether it is done by a native assignment, a copy constructor, an overloaded assignment operator, etc. To determine that, you must go back to the declarations of the variables to determine their types, maybe hunt through the include files to identify the type behind a typedef (and possibly which typedef is in effect, given preprocessor-flag-driven conditional compilation), and then, if you're dealing with a class type, study the class definition (including walking back through and grokking its base classes).

Ditto for overloadable operators - which is pretty much all of them.

In C you can pretty much look at the code describing what to do and figure out what it does, only going on a snipe hunt when you hit a subroutine or macro invocation.
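
A sketch of what that hunt looks like (Raw, Tracked, and Handle are hypothetical types): four statements that read almost identically and engage four different mechanisms.

    struct Raw { int v; };  // plain struct: assignment is a memberwise copy

    struct Tracked {
        int v;
        Tracked& operator=(const Tracked& rhs) {
            v = rhs.v;  // ...plus whatever logging or locking the author added
            return *this;
        }
    };

    typedef Tracked Handle;  // a typedef can hide the class entirely

    void demo(int a, int b, Raw c, Raw d, Handle e, Handle f) {
        a = b;         // built-in integer assignment
        c = d;         // implicit memberwise copy
        e = f;         // calls Tracked::operator= - invisible at this line
        Handle g = e;  // reads like assignment, but runs the copy constructor
        (void)g;
    }

    int main() {
        demo(1, 2, Raw{3}, Raw{4}, Handle{5}, Handle{6});
        return 0;
    }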

Comment Re:Why do people still care about C++ for kernel d (Score 1) 365

Do people not debug through object code any more? I've done that so many times when trying to understand a bit of cryptic C++ code or C macrology. There's no mystery possible - just look at the generated object code directly if there's any doubt at all about what's going on!

I learned to program back in the days when compilers would produce listings with the assembly code for each source line immediately following the source line. It was a given that the programmer would understand it, and it often gave insights into what was going on - and going wrong - in a program.

It was also good training for reverse-engineering from object to source. (I fondly recall reading a piece of a buggy SCSI driver's object code that stored zero in a variable and then tested the value, and realizing that somebody had written "if (a==0)" as "if (a=0)" by mistake.)

But I gave up on this about 1990. RISC instruction sets, with things like the delay slot after a jump, and extreme compiler optimization, made enough of a hash of the object code that determining what it was actually doing became more of a research project than an exercise in reading. Dropping back to inserting instrumentation in the code ended up being far more productive.

Comment It's been injecting its own bugs since cfront. (Score 3, Interesting) 365

The C++ compiler will most certainly be less buggy than something thrown together to cover some element that C lacks.

Unfortunately, C++ includes an explicitly unstandardized behavior that can inject subtle bugs. It has been present since at least 1988, and has survived at least the first two standards (after which I stopped watching, having moved on to mostly hardware design).

(I DID try to bring it to the attention of the standards committee in both cycles, but it was ignored. Bjarne, in his recent Slashdot Q&A, didn't answer my question on it, either.)

The problem relates to which overriding of a virtual function is called during the initialization of the member variables, and the construction of member objects, of a derived class (and correspondingly during the destruction of those member objects in the destructor). The standard permits calling the derived class's version of virtual member functions at this time, when the derived class has not yet initialized, or has already dismantled, their underpinnings.

Compilers are permitted to send the call to either the base class version (IMHO correct) or the derived class version (IMHO dangerously incorrect). Calling the derived class version is bad: it prevents a number of obvious constructions from working as expected, imposes limits on which programming techniques can be used safely, and produces no warning (so the programmer has to know what not to do). Letting different compilers make different choices is horribly worse, as it makes the behavior unpredictable and compiler-dependent.
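
For the record, a minimal sketch of the scenario (Base, Member, and Derived are hypothetical types). The second virtual call fires while Derived's members are still being constructed; if it resolves to the derived version, it runs before the constructor body has set anything up - exactly the trap described above.

    #include <cstdio>

    struct Base {
        Base() { hook(); }  // virtual call during base construction
        virtual void hook() { std::puts("Base::hook"); }
        virtual ~Base() {}
    };

    struct Member {
        explicit Member(Base* b) { b->hook(); }  // fires during Derived's member initialization
    };

    struct Derived : Base {
        Member m;
        int* table;  // set up in the constructor body, needed by hook()
        Derived() : m(this) { table = new int[16](); }
        ~Derived() { delete[] table; }
        void hook() override {
            // If dispatch lands here before the constructor body runs,
            // table is still garbage - dereferencing it would be fatal.
            std::puts("Derived::hook");
        }
    };

    int main() {
        Derived d;  // first call resolves in Base's constructor; the second is the trap
        return 0;
    }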

C++ (especially the early versions, before it became buried in libraries and baroque constructions) came SO close to being a powerful and reliable tool for rapidly writing reliable code on large projects. But this "little bug" brought it all crashing down.

Comment Re:Ok, several aspects to this. (Score 3, Insightful) 651

First, guns don't protect, never have, never will.

The first eight words of your 457-word wall of text show you're so out of touch with reality that there's no point in reading the rest.

The primary function of guns in private hands is to protect those who carry them. They do that exceptionally well. In criminal attacks, resistance with a gun is the most effective way to avoid injury or death. It's substantially more effective than the second-best option - knuckling under completely - and beats the pants off everything else, from running away, to trying to talk your way out, to resisting with bare hands or other tools. (Resisting with a knife is about the worst.)

Research on self-defense is hard, because failures leave tracks in crime stats while successes usually don't (and often leave the self-defended victim with an incentive to keep quiet about it). Nevertheless, even the first well-run projects were able to establish a lower bound: guns prevent or abort more than six times as many crimes as they aid in committing.

In private hands they're safer than police, too. A defense-with-gun is usually effected by no more than brandishing, or occasionally getting off a round in the general direction of the perp. But in those instances where a victim or a policeman shoots someone believed to be a perpetrator, the cop is over 5 1/2 times as likely as an armed private citizen to erroneously shoot an innocent.

My family has substantial personal experience with armed self-defense. For just a few examples on my wife's side: In college she was accosted by the rapist in the window, who was equipped with just running shoes and a dirty knife. Fortunately there was a hunting rifle behind the bed: she actually had to go as far as cocking it before he stopped trying to get her to drop it and jumped back out the window - apparently to take it out on another girl a few blocks away, whom he cut over 130 times while raping her. Her mother defended self and family against a Klan attack with a pistol. (Her granddad was caught away from his gun, though, and had to conduct his anti-Klan defense with a hammer.) Then there was the aunt, the uncle, ...

At the larger scale, it's hard to argue with the fact that the US - founded in a revolution (by religious nuts with guns) against their self-admitted "legitimate government", and with over half the adult civilian population armed - has now gone over two centuries without a substantial attack from abroad and with only one major internal war, while Europe continues to suffer from genocidal wars, often with multi-million body counts. (With the exception of Switzerland, of course: every adult citizen there is armed and has had military training. Even World Wars go around them.)

It's also hard to argue with the fact that the US is multi-ethnic, and the common denominator of its ethnic groups is that their members' murder victimization rate is substantially lower than that of contemporary members of the same ethnic group still residing in their land of origin.

As for resisting an oppressive regime if push comes to shove: we have experiences like "The Battle of Athens" just after WWII, and the documented question from Nixon to a think tank about what would happen if he suspended the presidential election. (Answer: that would precipitate an armed rebellion, and the population was well enough armed that it would succeed.) Uprisings aren't always successful, and small or UNarmed uprisings are often put down, sometimes with many deaths. (Witness the Bonus Marchers' massacre.) But recent decades of world politics have shown how effective a popular uprising can be, against even a coalition of world powers and superpowers.

If it came to that in the US, you could expect a substantial fraction of the military (especially retirees) to be on the side of the people, along with lots of military equipment raided from armories. (You can see that now in the Middle East. The big difference between Al Qaeda and ISIS/ISIL is that the latter has a bunch of colonels and other line officers, force-retired and blocked from normal politics in the wake of the 2003 invasion of Iraq and the overthrow of the Ba'athists - along with a lot of seized military arms. The former is a bunch of terrorists; the latter has a substantial army.)

Comment Define airborne (Score 1) 475

However, the Ebola Reston strain is airborne, though only dangerous to monkeys.

I have often wondered whether the Reston virus had mutated to spread by things like sneezes, or if it might be another matter entirely.

A number of monkey species throw feces (and/or other bodily secretions) when under stress and perceived attack. (I don't know if this is one of them, but assume for the moment it is.) Might being confined to cages along with others provoke such behavior? Wouldn't a sick monkey's feces, and tiny particles separated by airflow during the flight, carry an ebola-family virus just fine, without any mutation to make it, say, shed into nasal mucus and be carried by a sneeze?

(Granted this might fit the literal definition of "airborne transmission". B-) )
