Comment He didn't deny them in the hospital. (Score 4, Informative) 478

[Hospital sent home the ebola patient in Dallas, though he had classic ebola symptoms and had traveled to Liberia.]

Yep, especially when they deny all of the screening questions. That's helpful.

He denied the screening questions at the airport. ('Let's see: if I answer yes you won't let me fly, and you'll throw me in with everybody else who answered yes. Of COURSE I didn't have contact with Ebola!')

He DIDN'T deny the questions at the hospital. They knew he'd been to Liberia recently. But their bureaucracy didn't get that info to the person who made the release decision.

Comment That needs another caveat: (Score 1) 144

an educated population is one of the best defenses against mindless wars. That's why it's so important to the corrupt governments that want to wage those wars to have control of the education systems in their societies.

Because MISeducating their population is key. "Give them a light, and they'll follow it anywhere."

But this needs another caveat: If the population in general doesn't have any effective power over the government, they can know exactly what's going on, be against it, and still be utterly unable to change it. If some of them attempt to make such a change, it just separates them out from those who will knuckle under, for easy removal from the herd.

Comment How about polywell? (Score 1) 315

This article led me to check what was up with the Navy's work on the Polywell concept, long kept under their hats. I discovered:

The Navy just published their results last Sunday!

I haven't time to look into it for the next few hours, but this may be very interesting news on the "are we there yet?" front.

Comment Re:The $50,000 question... more energy out than in (Score 1) 315

No. Current solar absorption (accounting for albedo) is on the order of 50 PW. By comparison, current peak worldwide energy production is a paltry few TW. We're several orders of magnitude away from the point where our civilization's thermal output becomes a concern.
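(For scale, a quick back-of-the-envelope check of that ratio - a standalone sketch, taking "a few TW" as roughly 5 TW:)

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Figures from the paragraph above; "a few TW" taken as ~5 TW for illustration.
    const double solar_absorbed_w = 50e15;  // ~50 PW of absorbed sunlight
    const double human_output_w   = 5e12;   // ~5 TW of human energy production
    const double ratio = solar_absorbed_w / human_output_w;
    std::printf("solar/human ratio: %.0f (about %.1f orders of magnitude)\n",
                ratio, std::log10(ratio));
    return 0;
}
```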

Not to mention that we would stop putting carbon dioxide from energy production into the atmosphere (and could, if it became an issue, use some of that fusion power to freeze some of it OUT of the atmosphere and do things like turn it back into coal and bury it.)

If human-industry-generated CO2's contribution to the greenhouse effect IS significant, we could pull that "gift that keeps on giving captured solar heat" back out of the air and put it back into the bottle - at least until we reach pre-industrial levels. (Beyond that we probably don't want to go, because of the detrimental effect of low CO2 levels on plants.)

Comment It's like troposphere/stratosphere but upside down (Score 5, Informative) 295

In the atmosphere there's a situation: The weather all happens down near the surface, in a region called the troposphere. Here the density/temperature gradients can result in instabilities, where a parcel of air that is, say, lighter than its surroundings can become MORE lighter-than-its-surroundings as it moves up (and vice-versa). Above that is another (set of) layer(s) called the "stratosphere", where everything is most stable right where it is. Nothing very exciting happens there except when something coming up REALLY fast from below coasts up a bit before it stabilizes and moves back down.

The oceans do something similar, but upside down:

Water has an interesting property: Like most materials it gets more dense as it gets colder - but only up to a point. As it approaches freezing the molecules start hanging out in larger groups, working their way toward being ice crystals. The hydrogens on one molecule attract the oxygens on another, and because of the angle between the hydrogens bonded to the oxygen in each molecule, the complexes are somewhat LESS dense than the liquid. As a result, with progressively lower temperatures the density reaches a maximum, then the water begins to expand again. When it actually freezes it is so much less dense than near-freezing liquid that the ice floats. With fresh water the maximum density happens at about 4 degrees C. Salt disrupts the crystallization somewhat, so the maximum density is a tad cooler (and varies a bit with salt concentration - and thus depth), but the behavior is similar.
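(A toy illustration of that density maximum - the quadratic here is a deliberately crude stand-in for the real fresh-water density curve, with roughly-right but illustrative coefficients, just to show why parcels nearer 4 degrees end up on the bottom:)

```cpp
#include <cstdio>

// Crude fit: fresh water density peaks near 4 C and falls off roughly
// quadratically on either side. Coefficients are illustrative only.
static double density_kg_m3(double temp_c) {
    return 1000.0 - 0.007 * (temp_c - 4.0) * (temp_c - 4.0);
}

int main() {
    // Both the 0.5 C parcel and the 20 C parcel are LESS dense than the
    // 4 C parcel, so the water near 4 C sinks below them.
    const double temps_c[] = {0.5, 4.0, 10.0, 20.0};
    for (double t : temps_c)
        std::printf("%5.1f C -> %8.3f kg/m^3\n", t, density_kg_m3(t));
    return 0;
}
```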

The result is that, when you have a mix of cooler and warmer blobs of fresh water, the water closer to 4 degrees sinks and that farther from it rises. So, absent a heat or impurity source below, the bottom (and much of the volume) of a deep lake tends to be stable, stratified water at about 4 degrees year round, while all the deviations from it and the "weather" activity are in no more than about the top 300 feet: Wave action, ice, hot and cold currents, etc. are all above the reasonably abrupt "thermocline" boundary. Below that things are very slow, driven mostly by things like volcanic heat. (Diffusion is REALLY slow in calm water. It takes about a month for dissolved impurities to spread a couple of inches, and the better part of a year to spread a foot.)

The ocean is much like that, too, but a little cooler and with some temperature ramps spreading out the thermocline due to variations in salt concentration.

So global warming/cooling/weather, whatever, would NOT be expected to affect deep-water temperatures. This would all be happening in the top few hundred feet. If, say, the ocean were heating up without the surface water temperature changing, this would take the form of the thermocline gradually lowering near the equator and/or rising near the poles, rather than the deep water becoming warmer.

Comment Re:NASA? (Score 1) 295

Shouldn't the oceans fall under the NOAA's domain?

NASA has a mandate to figure out what's going on with a variety of planets throughout the universe. They only have a few samples nearby, and this is the only one they can measure REALLY thoroughly, to test and refine their models and theories.

They also have the technology to do measurements from space. AND they work closely with NOAA (including launching and operating observation satellites for them).

Comment %%^$^ Lenovo kbd and trackpad strike again... (Score 1) 283

I posted:
There are many jobs in TEM if you hold an H1B (and are willing to work for less than it would cost for someone with .

I meant to post:
There are many jobs in TEM if you hold an H1B (and are willing to work for less than it would cost for someone with the same qualifications who is native-born.)

(The keyboard and trackpad on this recent Lenovo Z710 are driving me nuts.)

Comment There's a shortage & glut at the same time in (Score 1) 283

Either shorten it to TEM or mention that there are many jobs under the STEM categories, but S-cience is experiencing a total glut.

There are many jobs in TEM if you hold an H1B (and are willing to work for less than it would cost for someone with the same qualifications who is native-born). But substantially more new H1Bs are being issued than new TEM jobs created. This means that the number of citizens in such jobs is dropping.

Last I heard about 1/3 of the citizens qualified for TEM jobs are actually so employed, and recent grads mostly need not apply.

(And then the newsies wring their hands and wonder why students - especially women - are largely uninterested in completing a Computer Science or other TEM degree.)

Comment Re:Moore's law applies here. (Score 1) 365

I'm not saying that C is 2% faster than C++ - in fact C++ might be faster - but if C were faster... go C for core functionality.

But what about new functionality? Suppose it takes an extra three months to get the next new thing working in C than it would in C++. That's two million users times three months of productivity lost because the feature wasn't there, if you stuck with C. And then there's the feature after that, and the next one... The future gets here sooner. Some of those "features" save lives.

Not that it matters: Some teams will use C. Some will use C++. Some will use [insert other language here]. Eventually some will use whiffy-spim-bam that hasn't been invented yet. Getting to release of a working product first is a big factor in displacing the competition and becoming a dominant paradigm.

What strikes me as ludicrous is that we're having this discussion centered around a variant of unix. In case you weren't aware, operating systems USED to be written entirely in assembler, long after virtually all applications were coded entirely, or all-but-critical-stuff, in compiled languages. What made UNIX different is that it was the first major OS that was coded primarily in a compiled language:

As of Version 6 - the edition whose source got out widely via the University of New South Wales commentary - the kernel was 10,000 lines (including a lot of added blank lines to make modules come out on 50-line boundaries for easy listing in a textbook). Of that, 8,000 lines - mostly machine-independent stuff - were in C and 2,000 - mainly memory management and interrupt vectoring - were still in assembler. As time went on virtually all of the rest migrated to C.

The kernel being tiny, the bulk of it being in C, the availability of good free and/or cheap C compilers for new architectures, and some interesting intellectual property licensing policies by Bell Labs, led to UNIX being rapidly ported to new chip architectures, enabling an explosion of "unix boxes" and wide adoption of UNIX.

All during that time we had EXACTLY this same argument - with assembler taking the role of C and C taking the role of C++. And then, as now, the argument didn't matter, because C won on the merits, despite being a few percent slower than good assembler code. B-)

(Note that this was BEFORE the RISC revolution, when it was discovered that, in actual production code, assembly code, on the average, actually ran slower than compiler output - because compilers knew only MOST of the tricks but they applied them ALL THE TIME, while coders only applied them when they were paying extra attention to something perceived as critical. Want to bet that C++ is REALLY slower than C, across the bulk of production code?)

Comment Moore's law applies here. (Score 1) 365

Looking at the project, I stumbled over the fact that they rewrote an Ethernet driver in C++, with a 2% performance overhead.

Well that makes it easy to determine whether it's worth it.

The David House variant on Moore's Law says processor power doubles every 18 months, and the real world has tracked this very closely for half a century. Compounding 2% improvements, it takes about 35 of them to make a doubling (1.02^35 is roughly 2), and 18 months is 78 weeks. So in roughly 15 or 16 days Moore's law has given you back your loss, and after that it's gravy.
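(Checking that arithmetic - a quick standalone sketch:)

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // House's variant of Moore's law: processor power doubles every 18 months.
    const double doubling_days = 78.0 * 7.0;   // 18 months ~= 78 weeks = 546 days
    const double step          = 1.02;         // the 2% overhead to win back
    // How many compounding 2% steps add up to one doubling?
    const double steps = std::log(2.0) / std::log(step);                 // ~35
    std::printf("2%% steps per doubling: %.1f\n", steps);
    std::printf("days to recover a 2%% loss: %.1f\n", doubling_days / steps);  // ~15.6
    return 0;
}
```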

So if having your code base in C++ improves your development process enough that it lets you complete and release your next upgrade (or the one after that, etc.) only a couple weeks earlier, C++ wins.

If it lets you keep gaining on your competition, it wins BIG.

Comment Because ANSI imported strong type checking into C (Score 1) 365

For a short while, the Linux kernel could be compiled as C++. Some developers, I believe Linus included, felt that the stricter type checking offered by C++ would help kernel development. There was no attempt to actually use C++ features though.

The effort did not last long.

Once ANSI had imported C++'s strong type checking into the C standard, and compilers had implemented it, there was no need to use the C++ compiler to get strong type checking.

Since that was the only feature Linus was using, it makes sense to discard the remaining cans of worms unopened. B-)
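(To make "strong type checking" concrete: the feature in question is prototype-based argument checking, which K&R C lacked. A minimal C++-flavored sketch - the function name is made up, and the bad call is left commented out so the file still compiles:)

```cpp
#include <cstdio>

// With a full prototype in scope, the compiler checks the argument types and
// count at every call site. This is the checking ANSI C adopted from C++.
long page_offset(long addr, int page_shift);

long page_offset(long addr, int page_shift) {
    return addr & ((1L << page_shift) - 1);
}

int main() {
    std::printf("%ld\n", page_offset(0x12345L, 12));
    // page_offset("0x12345");   // rejected at compile time: wrong type, wrong count
    // Pre-ANSI (K&R) C had no prototypes, so a call like that would have been
    // accepted silently and misbehaved at run time.
    return 0;
}
```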

Comment It won't be until... (Score 1) 365

It won't be time for C++ in the kernel until the standard defines (and the compilers implement) whether you get the base or derived class version of a virtual function if it's called during the initialization, construction, or destruction of a derived class' member variables.

It also won't be time for C++ in the kernel if they DO define it, but they define it to be anything but the base class version. The derived class version should only be called beginning with the first user-written line of the constructor and ending just after the last user-written line of the destructor.
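(For concreteness, here's a minimal sketch of the neighboring case that IS pinned down today: a virtual call made while the Base subobject is under construction or destruction dispatches to Base's version, not the derived one. The class names are made up for illustration:)

```cpp
#include <cstdio>

struct Base {
    Base()  { describe(); }           // virtual call during construction...
    virtual ~Base() { describe(); }   // ...and during destruction
    virtual void describe() const { std::puts("Base::describe"); }
};

struct Derived : Base {
    Derived() : label_(7) { describe(); }   // Base is already built here, so...
    void describe() const override {        // ...this version runs, with label_ set
        std::printf("Derived::describe, label_=%d\n", label_);
    }
    int label_;
};

int main() {
    Derived d;   // prints Base::describe, then Derived::describe, label_=7
    return 0;    // destruction prints Base::describe again
}
```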

Comment Re:Why do people still care about C++ for kernel d (Score 1) 365

... a good C programmer can look at a piece of C code and have a pretty good idea of the machine code being generated. ... that hasn't really been true for a long time now. Modern processors and the related architecture have become so complicated that generating correct, efficient code is an extremely specialist skill ...

Yes, these days he might not be able to easily determine what the generated code IS. But with C he can be pretty sure he correctly determined what it DOES - and if it does something else it's a compiler bug.

With C++ you get to overload operators, override functions, and redefine pretty much everything else. You can redefine the semantics of what they actually DO in various situations. Keeping the semantics analogous when you overload or override is a convention, not something that's forced by the compiler.

Comment Re:Why do people still care about C++ for kernel d (Score 1) 365

Please don't say that in a hiring interview.
You are unable to grasp when a copy constructor or an assignment operator is called?

It is impossible to tell, from looking at an assignment, whether it is being done by a native assignment, a copy constructor, an overloaded assignment operator, etc. To determine that you must go back to the declarations of the variables to determine their types, maybe hunt through the include files to identify the type behind a typedef (and possibly which typedef is in effect, given preprocessor-flag-driven conditional compilation), then (if you're dealing with a class type) study the class definition (including walking back through and grokking its base classes).

Ditto for overloadable operators - which is pretty much all of them.

In C you can pretty much look at the code describing what to do and figure out what it does, only going on a snipe hunt when you hit a subroutine or macro invocation.
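(A small sketch of the point - the type and names are made up. The three assignments below look identical at the point of use; only chasing down the declarations tells you which machinery actually runs:)

```cpp
#include <cstdio>
#include <string>

struct Buffer {
    std::string data;
    Buffer() = default;
    Buffer(const Buffer& other) : data(other.data) {   // copy constructor
        std::puts("Buffer copy constructor runs");
    }
    Buffer& operator=(const Buffer& other) {           // user-defined assignment
        std::puts("Buffer::operator= runs");
        data = other.data;
        return *this;
    }
};

typedef Buffer frame_t;   // a typedef can hide the class type entirely

int main() {
    int a = 1, b = 2;
    a = b;            // plain machine-level store, no function call

    Buffer x, y;
    x = y;            // same-looking line, but it calls Buffer::operator=

    frame_t f = x;    // looks like assignment, actually the copy constructor
    (void)a; (void)f;
    return 0;
}
```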

Comment Re:Why do people still care about C++ for kernel d (Score 1) 365

Do people not debug through object code any more? I've done that so many times when trying to understand a bit of cryptic C++ code or C macrology. There's no mystery possible - just look at the generated object directly if there's any doubt at all what's going on!

I learned to program back in the days when compilers would produce listings with the assembly code for each source line immediately following the source line. It was a given that the programmer would understand it, and it often gave insights into what was going on - and going wrong - in a program.

It was also good training for reverse-engineering from object to source. (I fondly recall reading a piece of a buggy SCSI driver's object code that stored zero in a variable and then tested the value, and realizing that somebody had written "if (a==0)" as "if (a=0)" by mistake.)

But I gave up on this about 1990. RISC instruction sets, with things like the delay slot after a jump, and extreme compiler optimization, made enough of a hash of the object code that determining what it was actually doing became more of a research project than an exercise in reading. Dropping back to inserting instrumentation in the code ended up being far more productive.
