Comment Hugh stepped down in 1988, Christie on Jan 1 2009 (Score 2) 173

Hugh Hefner handed control of the company to his daughter Christie in 1988. She resigned at the 2008/2009 new year boundary.

The magazine's market performance has apparently been gradually declining since then, starting by dropping back to 11 issues per year in 2009. (What mix, if any, of Christie leaving because the writing was already on the wall, the third generation's changes causing the slide, and/or other factors were at work may make a good subject for a post-mortem analysis and publication some time in the future.)

Comment Re:Don't confuse The Republican Party with The Rig (Score 1) 364

[Quoting Ramesh Ponnuru in Bloomberg] "All in all, then, what Paul is proposing is a big tax cut for high earners and businesses with almost no direct benefits for most Americans. ... For the middle class, however, the plan looks like a wash."

And when you look at the article you see that the plan is mischaracterized. Ponnuru claims "For the middle class, however, the plan looks like a wash" because the massive tax cut would be offset by two factors:

  1) The replacement of the corporate income tax with a 14.5% "business activity tax" that doesn't include labor costs as a deduction. He treats this as if it were a hidden 14.5% tax on goods, neglecting the compensating benefit of reducing the corporate income tax to zero, along with zeroing out the costs of computing it and of contorting business decisions to work around it. (Yes, some corporations manage to structure their operations so they can get their corporate tax below 14.5%, or even down to zero. Want to bet whether it costs them less than 14.5% once tax-hacking costs are included?)

  2) The alleged reduction in benefits to the middle class from cuts in government spending. Do YOU think the middle class actually gets any substantial benefits from the government spending that would be cut? Then take into account that cuts in government spending tend to stimulate the economy BIG time (by not having so much of its blood drained every time it circulates another round), something that his source for this claim - the pro-business Tax Foundation - explicitly ignores in its analysis.

IMHO Ponnuru's article was another hit piece - part of business interests' attempts to convince the voters that tax reform plans which favor the working / middle classes, growing the pie and letting them keep a bigger piece of it, are bad ideas, so they elect another shill who is in the moneyed interests' pocket.

Comment That's because LBJ bought a war on the credit card. (Score 1) 542

20 years ago, a working man could pay his rent with one week's salary. Now on average it costs two weeks or more... and that's before you've paid for other necessities such as food, utilities, car payments, and gasoline.

A large part of that is that the government went on a spending spree that hasn't abated. The extra work is to provide the value that's sucked out to pay off the creditors and fund the latest spending schemes. That value has to come from somewhere, whether it's devaluation of the currency (from more dollars chasing the goods) or the double whammy of government borrowing draining the investment market, which means that money isn't making more consumer goods AND it eventually has to be paid back, at interest, out of taxes.

There was some government debt for a long time. But the big fall-off-the-cliff turning point, IMHO, was when LBJ ran first the (undeclared) Vietnam war, and then also the Great Society welfare entitlement programs, on credit (meaning looting future generations). Then Nixon tried to fix things by unhooking the dollar from gold, and it's been unchecked government spending, explosive inflation, and accumulating debt and interest ever since.

Comment Re:Reasonable Doubt (Score 1) 112

No, I disagree, as the data involved in matching is the result of a highly filtered process. There are many, many data points that don't support a match, as evidenced by the inability to replicate a match via any other method.

Simply using the matching data allows the filtering assumptions to go unchallenged.

If what's at issue is whether the tool selected the matches and hid the mismatches, and this can't be determined by comparing the defendant's genome against the traceable raw data that went into building the database, then the defendant's team gets to examine the software or the evidence is out. Agreed.

Defendants have the right to a thorough cross-examination. The database used and the filtering process are both relevant to the actual likelihood of a match.

Here's where we're differing. I am claiming that, once a match is found, the quality of the match can be checked by comparing the raw data of the defendant's sample against the raw data that went into the database that was searched. Even if the data was reduced and encoded in some proprietary way to assist rapid searching and probability estimation by the proprietary tool, the match can be proven - as can the assertion that no exculpatory evidence was withheld - by providing the base data and ANY algorithm that performs the equivalent probability computation in a transparent way. If it gets the same numbers, that part of the issue is proven.
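
(As a concrete illustration of "ANY algorithm that performs the equivalent probability computation in a transparent way": here is a minimal sketch of the classic product-rule random-match probability over STR loci. Everything in it - loci, alleles, frequencies - is invented for illustration, and it bakes in exactly the assumptions, Hardy-Weinberg equilibrium and independence across loci, that make "match" probabilities contestable. It is NOT TrueAllele's proprietary method.)

    # Hypothetical sketch: product-rule random-match probability.
    # All loci, alleles, and frequencies below are invented; a real
    # recomputation would use the lab's raw data and published tables.

    def genotype_freq(p, q):
        # Hardy-Weinberg genotype frequency: p^2 if homozygous, 2pq if not.
        return p * p if p == q else 2 * p * q

    def random_match_probability(profile, freqs):
        # Multiply per-locus genotype frequencies (assumes independence).
        rmp = 1.0
        for locus, (a, b) in profile.items():
            rmp *= genotype_freq(freqs[locus][a], freqs[locus][b])
        return rmp

    freqs = {"D8S1179": {"12": 0.14, "13": 0.30},
             "TH01": {"6": 0.23, "9.3": 0.31}}
    profile = {"D8S1179": ("12", "13"), "TH01": ("9.3", "9.3")}
    print(random_match_probability(profile, freqs))  # ~0.0081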

If there is some question of whether the tool used improperly obtained evidence in deciding to look at this guy's data, that would make its internals relevant. If there is some question that it may have identified other, equally good, matches and these were withheld from the defence, that might make the OPERATION of the tool relevant, without putting the workings of its innards into that category.

But IANAL. If the court says you're right on this it won't surprise me. (But their reasoning would be interesting.) Also: I won't complain if they kill database-fishing for "a wrong reason". B-)

Defendants have the right to compel testimony and other evidence to be produced if they can show it is relevant. Trade secret, patent, or copyright has no power to override constitutional guarantees of due process.

Total agreement there.

Prosecutors should not be allowed to use pseudo-science or selective disclosure of data. In fact, knowing use of either constitutes prosecutorial misconduct, [an offense] that can result in financial sanctions and/or disbarment.

Total agreement there, too. (Also, IMHO: Such sanctions and/or disbarment should be invoked far more often than they are. B-) )

Comment Don't confuse The Republican Party with The Right (Score 1) 364

Our Right just keeps advocating policy that will heap even more money onto a wealthy class of citizens who are wealthier than they have ever been in American history.

Please don't confuse "The Republican Party" with "The Right". For at least the last three presidential election cycles the Republican Party has been solidly under the thumb of one of its four major factions, the Neocons. (And this cycle that faction is finally being bumped by a new challenger which I'll call "The Plutocrats", in the form of the self-financed campaigns of Trump and Fiorina.)

The Classic/Paleo conservatives, the religious right, and the liberty wing (libertarians and other anti-tax, government-off-our-backs types) make up more of the party, but lately have had negligible power.

Comment Re:Reasonable Doubt (Score 1) 112

From the perspective of the burden of proof placed on the Prosecution, they have to disclose how they arrived at this derived 'evidence' of a match via TrueAllele.

IMHO: Unless there is an issue with whether the database TrueAllele searched was obtained illegally (making any results of searching it for suspects "fruit of the poisonous tree"), they DON'T have to show how the match was found.

They just have to show that the match IS a match. This can be done with the data involved in the match standing on its own.

Comment Not true. (Score 1) 112

If he refuses to show his code to an expert witness and explain it, then the evidence can't be used.

Not true...

As I understand it, he should be able to get his program (or a modification of it) to produce as output (see the sketch after this list):
  - The computation of the probabilities
  - The data used to compute them, with annotation giving a trace back to its source.
  - The assumptions behind the computation.
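
A minimal sketch of what such an audit output might look like. Every field name and value here is hypothetical, invented for illustration, and has nothing to do with TrueAllele's actual formats:

    # Hypothetical audit-output structure; all fields and values invented.
    import json

    audit = {
        "match_probability": 2.3e-12,  # the computed number itself
        "data": [  # the inputs, annotated with a trace back to the source
            {"locus": "TH01", "alleles": ["6", "9.3"],
             "source": "evidence sample E-104, lab run 2015-07-02"},
        ],
        "assumptions": [  # what the computation presumes
            "Hardy-Weinberg equilibrium",
            "independence across loci",
            "population allele-frequency table (version/citation here)",
        ],
    }
    print(json.dumps(audit, indent=2))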

The issue of HOW IT IDENTIFIED this individual is separate from WHAT IT IDENTIFIED ABOUT HIM. The former is the "secret sauce" and would not be revealed. The latter is the evidence and can stand on its own. Further, it MUST be able to stand on its own - because if it can't, it's inadequate.

Now if part of his "expert testimony" is that his program did NOT find any other people who 'matched' and this is somehow relevant, THEN how it goes about doing the matching also becomes relevant and he's hosed.

Of course the defence is going to do their darndest to monkeywrench the prosecution, and threatening the tool builder with disclosure of his trade secrets is a good move tactically. It's up to the judge (and possibly the appeals judge) to call them on it if it's just an irrelevant thrash.

(I say this all as someone who personally believes that DNA evidence should only be used for defence, not prosecution, in criminal trials, because non-match is definitive while "match" is a difficult probability estimate based on assumptions about genetics, gene distribution, gene correlation, and on some very difficult to grasp probability computations. Hunting for matches in databases is, IMHO, subject to false positives and overestimation of the improbability of the match being false, based on underestimation of correlation and the genetic and familial mechanisms that might promote it.)

Comment "Your NOx may vary" - one more thing. (Score 1) 416

So with ... selection pressure ... Engineers, with the best intentions, would tend to design engines that pollute a bit more when off the test.

By "a bit", after 30+ years of selection pressure I wouldn't be surprised by as much as 25 to 50% extra NOx on "off the test" readings from just optimizing with only the test and field mileage for feedback.

Unless there's something special about diesels that makes them inherently troublesome on some non-test cycles, though, 2x or more seems too high to be honest fallout, and should prompt a detailed search for explicit cheat code.

Comment I was there. "Your NOx may vary" (Score 1) 416

Much of my early career was consulting to the auto industry (in particular, Ford and GM) during the early periods of electronic engine controls and their interaction with the emissions test regime in question. I did some work with engine controls, but most of it was emissions testing automation and data reduction.

We all (executives, engine designers, test equipment designers, and regulators) knew:
  - The test conditions were arbitrary but standard.
  - Detecting them and switching modes would be trivial to implement and look good at first, but also illegal, immoral, and financially disastrous for the company when they were eventually detected.
  - Because engineering was done to meet the regulations - which meant scoring well on the tests - even with honest efforts and no cheating it would eventually evolve the vehicles to do well on the tests but probably not so well on other operational cycles. (You see this with "your mileage may vary".)
  - Tests and design processes were VERY expensive and the companies highly competitive. They couldn't afford to engineer BOTH for the regulations and to be good all the time out of niceness: The "nice guys" would "finish last", be driven out of the market, and you'd STILL get only cars that just met the regulations. A level playing field was needed.
  - So it was the responsibility of the regulators to write test specifications that modelled the driving cycle well enough that engines tuned to them would also perform adequately in general, despite the "design to the test" evolutionary pressure, and the responsibility of the engineers to meet the law on the tests that were imposed, not to do so via explicit detect-the-test cheats.

The executives and early-stage engineering departments were aware of the temptation for engineers to write cheats, and (at least at one company I worked for) put some draconian controls on software changes to the engine control to prevent them. (The official explanation given to the inconvenienced engineers was "ensuring regulatory compliance".)

I was told that the regulators came up with the standard test by
  - instrumenting a car (with a bicycle wheel speed recorder on the bumper and some event-recording switches),
  - parking behind various cars (in Denver?) and, when their owners started up, surreptitiously tailing them to their destination and recording their warm-up idle time, speeds, acceleration, braking, standing waiting for lights, etc. (but not the upslope/downslope and wind).
  - picking one of these trips, which contained both city and highway driving and looked pretty typical, and adding a "cold soak" to the start (engine is not run for several hours) to standardize the starting conditions and model an initial start, and a guesstimate of a final idling period before shutdown. (To meet the cold-soak requirement, cars were pushed into the test cell by hand or things like electric pallet jacks.)

The test measures exhaust airflow volume and concentration of CO2, CO, and unburned hydrocarbons. So gasoline consumption can be easily computed by "carbon balance" - you know how much carbon is in a gallon, you measure all of it as it comes out, none is lost and only a tiny bit of burned lube oil adds any. So you get mileage for free by postprocessing the data. The regulators got the bright idea of putting this computed mileage on the stickers for customers to make objective comparisons when shopping.
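
(A rough sketch of the carbon-balance arithmetic. The constants are approximate textbook values - the carbon mass fraction of each exhaust species and a typical grams-of-carbon-per-gallon figure for gasoline - not the official regulatory formula, and the measurement numbers are invented.)

    # Rough carbon-balance sketch; constants are approximate textbook
    # values, not the official regulatory formula.

    GRAMS_C_PER_GALLON = 2421.0    # carbon in a gallon of typical gasoline
    C_FRACTION = {"HC": 0.866,         # unburned hydrocarbons: ~86.6% carbon
                  "CO": 12.0 / 28.0,   # carbon's share of CO's molar mass
                  "CO2": 12.0 / 44.0}  # carbon's share of CO2's molar mass

    def mpg_from_emissions(grams_per_mile):
        # All the fuel's carbon leaves as HC, CO, or CO2, so summing the
        # carbon in each species gives grams of carbon burned per mile.
        carbon = sum(C_FRACTION[s] * g for s, g in grams_per_mile.items())
        return GRAMS_C_PER_GALLON / carbon

    # Invented readings: 0.5 g/mi HC, 5 g/mi CO, 300 g/mi CO2
    print(mpg_from_emissions({"HC": 0.5, "CO": 5.0, "CO2": 300.0}))  # ~28.7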

It's easy to measure the average mileage of cars in the field: Just divide the odometer mileage by the gallons pumped to refill the tank, and average over several fillups to smooth out variation in how the tank was topped off (see the sketch after this list). It quickly became apparent that:
  - Mileage in normal service varied substantially.
  - The trip defined as the standard one got substantially better mileage than was typical.
Thus was born the caveat "your mileage may vary" and a regulation change to partition the sticker mileage into separate pieces for the stop-and-go city portion and mostly-cruising highway portion. For gasoline engines, using those two, and a small nudge downward for the standard trip's deviation from the typical, gives customers a good guide.
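
(The field-mileage arithmetic, for completeness - note that totaling miles and gallons before dividing is what smooths out the per-fillup variation. Numbers invented.)

    # Field mileage from fill-up records; numbers invented.
    fillups = [(412.0, 13.1), (389.0, 12.2), (441.0, 14.0)]  # (miles, gallons)
    mpg = sum(m for m, _ in fillups) / sum(g for _, g in fillups)
    print(round(mpg, 1))  # ~31.6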

Also, because it's easy to measure, mileage numbers from the field provided feedback to limit the tendency of "design to the test" to evolve gas consumption into complete optimization for the test. Any model that got horrible mileage in the field would soon get bad reviews, and the engineers would be on its case (if this hadn't happened before it was released).

But emissions are NOT easily measured in the field. About the only field tests are the periodic checks in some states - and those tend to use a very abbreviated cycle. They're just intended to check that the stock emissions control equipment hasn't broken or been disconnected.

So with field feedback on mileage but not emissions, the secondary selection pressure (after "do well on the standard test") is for the engine to get good mileage on other cycles without regard to whether this affects emissions. Engineers, with the best intentions, would tend to design engines that pollute a bit more when off the test.

= = = =

I agree with most of what you say. But this is incomplete:

The higher temperatures and pressures (of diesels) help with CO and unburned hydrocarbons (they favor more complete combustion), but the scale of the added NOx and PM problems is much greater.

Which is true upstream of the catalytic converter. But the whole POINT of a (three-way) cat is to move oxygen from NOx to CO and unburned hydrocarbons. Get the fuel-air mixture right and the leftover oxygen, NOx, CO, and HC all cancel exactly. Getting this right with early engines - using fluidic and mechanical computation - was a real pain. With software and exhaust oxygen sensors it's a much easier job.
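
(For illustration, the net reactions a three-way cat promotes look roughly like these representative textbook equations, with propane standing in for the mix of unburned hydrocarbons:)

    2 CO + O2 -> 2 CO2                 (oxidize carbon monoxide)
    C3H8 + 5 O2 -> 3 CO2 + 4 H2O      (oxidize unburned hydrocarbons)
    2 NO + 2 CO -> N2 + 2 CO2         (reduce NOx, consuming CO)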

As for particulate matter, the original emission control regulations were designed around what was current when they were imposed: gasoline engines, running the Otto cycle, which doesn't emit much PM unless horribly detuned, worn into burning lots of lube oil, or fed the wrong fuel (like accidentally topping off the tank from the green diesel-fuel pump hose). Diesels tend to put out a lot of PM, and (as big lumps of mostly carbon and unburned hydrocarbons) a surface catalyst can't do much with it. So getting that right pretty much needs to be dealt with separately.

Comment Re: ZFS is nice... (Score 1) 275

But it's combined by the user at runtime, not by Canonical. The GPL allows end users to do this.

This is a way that people kid themselves about the GPL. If the user were really porting ZFS on their own, combining the work and never distributing it, that would work. But the user isn't combining it. The Ubuntu developer is creating instructions which explicitly load the driver into the kernel. These instructions are either a link script that references the kernel, or a pre-linked dynamic module. Creating those instructions and distributing them to the user is tantamount to performing the act on the user's system, under your control rather than the user's.

To show this with an analogy, suppose you placed a bomb in the user's system which would go off when they loaded the ZFS module. But Judge, you might say, I am innocent because the victim is actually the person who set off the bomb. All I did was distribute a harmless unexploded bomb.

So, it's clear that you can perform actions that have effects later in time and at a different place that are your action rather than the user's. That is what building a dynamic module or linking scripts does.

There is also the problem that the pieces, Linux and ZFS, are probably distributed together. There is specific language in the GPL to catch that.

A lot of people don't realize what they get charged with when they violate the GPL (or any license). They don't get charged with violating the license terms. They are charged with copyright infringement, and their defense is that they have a license. So, the defense has to prove that they were in conformance with every license term.

This is another situation where I would have a pretty easy time making the programmer look bad when they are deposed.

Comment Perhaps he's making flakes of Rydberg matter? (Score 1) 186

The secret sauce seems to be ultra-dense deuterium, "D(0)", whatever that means. Looking through the author's other papers, it looks like he's claiming to have made metallic hydrogen, which would be a Nobel Prize right there.

If he can demonstrate this, then fine ... he's a super genius.

Perhaps he's making flakes of Rydberg matter, floating in a near-vacuum.

(If I understand it correctly) this is matter where the individual atoms have been NEARLY ionized, by pumping an electron up to ALMOST, but not quite, the energy needed to free it from the atom (which would leave an ion). (You can do this with a laser tuned to the energy difference between the ground state, or whatever state the electron WAS originally in, and the state you want it in.) If you get the electron into one of the high, flat, circular orbitals, it looks almost like a classic Bohr atom (earth/moon style orbit), and the state lasts for several hours.
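
(For scale, the standard hydrogenic formulas - ordinary textbook physics, nothing specific to these papers - show how weakly bound and how physically large such states are:)

    E_n = -13.6 eV / n^2              (binding energy; approaches 0 as n grows)
    r_n = n^2 * a0, a0 ~ 0.0529 nm    (orbit radius; n = 50 gives ~132 nm)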

Atoms in such a state associate into dense hexagonal clusters. (19-atom clusters are easy and heavily studied, and clusters of up to 91 atoms are reported.) The electrons bond the atoms by delocalizing, forming a metallic, hexagonal grid, similar to a tiny flake of graphite sheet. You can't make them very big. (There's some issue with the speed of light screwing up the bonding stability when the flakes get too big.) But you can make a lot of them, creating a "dusty plasma".
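
(Those particular sizes are just what closed hexagonal shells predict. A quick check using the centered hexagonal numbers - standard lattice arithmetic, my illustration rather than anything from the papers:)

    # Centered hexagonal numbers: atoms in a flat hexagonal flake
    # with k complete rings around one central atom.
    def hex_flake(k):
        return 1 + 3 * k * (k + 1)

    print([hex_flake(k) for k in range(6)])  # [1, 7, 19, 37, 61, 91]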

So hitting gas with the right laser pulse could end up with lots of flakes of this stuff, with deuterons held in tight (dense!) and well-defined flat hexagonal arrays by a chicken-wire of delocalized electrons, with zero (or tiny) net charge, floating around in a near vacuum and suitable for all sorts of manipulation. (Like slamming them into each other, for instance.)

Now how this interacts with substituting muons for electrons (something analogous to an impurity in a semiconductor crystal?), missing or extra electrons (ditto?), occasional oddball nuclei (again ditto?), or perhaps how it might generate muons when tickled by appropriate laser pulses, all look like good open questions for active research.

The point is that it's pretty easy to get these long-lived, self-organized, high-density, stable, regular-geometry crystal flakes of graphite-like deuterium floating in a near vacuum, where you can poke at them without any pesky condensed matter getting in the way.

Easy as in maybe you can do it on a desktop with diode lasers, producing "maker" level nuclear physics experiments. B-)

"Stupidity, like virtue, is its own reward" -- William E. Davidsen