Hugh Hefner handed the top spot at the company to his daughter Christie in 1988. She resigned on January 31, 2009.
The magazine's market performance has apparently been declining gradually since then, starting with a drop back to 11 issues per year in 2009. (What mix, if any, of Christie leaving because the writing was already on the wall, the third generation's changes causing a slide, and/or other factors were at work may be a good subject for a post-mortem analysis and publication, some time in the future.)
[Quoting Ramesh Ponnuru in Bloomberg] "All in all, then, what Paul is proposing is a big tax cut for high earners and businesses with almost no direct benefits for most Americans.
For the middle class, however, the plan looks like a wash."
And when you look at the article you see that it's mischaracterized. He claims "For the middle class, however, the plan looks like a wash" because the massive tax cut would be offset by two factors:
1) The replacement of the corporate income tax with a 14.5% "business activity tax" that doesn't include labor costs as a deduction. He treats this as if it were a hidden 14.5% tax on goods, neglecting the compensating benefits of reducing the corporate income tax, AND the costs of computing it and changing business decisions to work around it, to zero. (Yes, some corporations manage to structure their operations so they can get their corporate tax below 14.5%, or even down to zero. Want to bet whether it costs them less than 14.5% when tax-hacking costs are included?)
2) The alleged reduction in benefits to the middle class from cuts in government spending. Do YOU think that the middle class actually gets any substantial benefits from the government spending that would be cut? Then take into account that cuts in government spending tend to stimulate the economy BIG time (by not having so much of its blood drained every time it circulates another round) - something that his source for this claim, the pro-business Tax Foundation, explicitly ignores in its analysis.
IMHO Ponnuru's article was another hit piece - part of business interests' attempts to convince the voters that tax reform plans which favor the working / middle classes, growing the pie and letting them keep a bigger piece of it, are bad ideas, so they elect another shill who is in the moneyed interests' pocket.
20 years ago, a working man could pay his rent with one week's salary. Now on average it costs two weeks or more... and that's before you've paid for other necessities such as food, utilities, car payments, and gasoline.
A large part of that is that the government went on a spending spree that hasn't abated. The extra work is to provide the value that's sucked out to pay off the creditors and for the latest spending schemes. That value has to come from somewhere, whether it's devaluation of the currency (from more dollars chasing the goods) or the double-whammy of government borrowing sucking out the investment market, which means that money isn't making more consumer stuff AND it has to eventually be paid back, at interest, out of taxes.
There was some government debt for a long time. But the big fall-off-the-cliff turning point, IMHO, was when LBJ ran, first the Vietnam (undeclared) war, then also the Great Society welfare entitlement programs, on credit (meaning looting future generations). Then Nixon tried to fix things by unhooking the dollar from gold, and it's been unchecked government spending, explosive inflation, and accumulating debt and interest ever since.
I have yet to see an "anti-tax" conservative who advocated policies whose most prominent beneficiary wasn't the very wealthy.
I have seen plenty.
Have you looked at Rand Paul's flat tax proposal, just for starters?
No, I disagree: the data involved in matching is the result of a highly filtered process. There are many, many data points that don't support a match, as evidenced by the inability to replicate a match via any other method.
Simply using the matching data allows the filtering assumptions to go unchallenged.
If what's at issue is whether the tool selected the matches and hid the mismatches, and this can't be determined by comparing the defendant's genome against the traceable raw data that went into building the database, then the defendant's team gets to examine the software or the evidence is out. Agreed.
Defendants have the right to a thorough cross-examination. The database used and the filtering process are all relevant to the actual likelihood of a match.
Here's where we're differing. I am claiming that, once a match is found, the quality of the match can be checked by comparing the raw data of the defendant's sample against the raw data that went into the database that was searched. Even if the data was reduced and encoded in some proprietary way to assist rapid searching and probability estimation by the proprietary tool, the match can be proven - as can the assertion that no exculpatory evidence was withheld - by providing the base data and ANY algorithm that performs the equivalent probability computation in a transparent way. If it gets the same numbers, that part of the issue is proven.
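To make concrete what "ANY algorithm that performs the equivalent probability computation" could look like, here is a toy sketch of a transparent random-match-probability calculation. The loci, alleles, and frequencies are invented for illustration; a real check would run against the raw data behind the searched database.

```python
# Toy sketch of a transparent random-match-probability computation.
# All loci and allele frequencies here are invented for illustration;
# a real check would run against the raw data behind the searched database.

def random_match_probability(profile, allele_freqs):
    """Multiply per-locus genotype frequencies (Hardy-Weinberg assumption)."""
    p = 1.0
    for locus, (a1, a2) in profile.items():
        f1 = allele_freqs[locus][a1]
        f2 = allele_freqs[locus][a2]
        # Homozygous genotype: f^2; heterozygous: 2 * f1 * f2
        p *= f1 * f1 if a1 == a2 else 2.0 * f1 * f2
    return p

# Hypothetical two-locus profile and frequency table:
freqs = {
    "D3S1358": {"15": 0.25, "16": 0.25},
    "vWA":     {"17": 0.28, "18": 0.20},
}
suspect = {"D3S1358": ("15", "16"), "vWA": ("17", "17")}

print(random_match_probability(suspect, freqs))
```

If an independent implementation like this reproduces the tool's reported probability from the same raw inputs, that part of the dispute is settled without opening the tool's internals.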
If there is some question of whether the tool used improperly obtained evidence in deciding to look at this guy's data, that would make its internals relevant. If there is some question that it may have identified other, equally good, matches and these were withheld from the defence, that might make the OPERATION of the tool relevant, without putting the workings of its innards into that category.
But IANAL. If the court says you're right on this it won't surprise me. (But their reasoning would be interesting.) Also: I won't complain if they kill database-fishing for "a wrong reason". B-)
Defendants have the right to compel testimony and other evidence to be produced if they can show it is relevant. Trade secrets, patents, or copyright have no power to override constitutional guarantees of due process.
Total agreement there.
Prosecutors should not be allowed pseudo-science or selective disclosure of data. In fact, knowing use of either constitutes prosecutorial misconduct, [an offense] that can result in financial sanctions and/or disbarment.
Total agreement there, too. (Also, IMHO: Such sanctions and/or disbarment should be invoked far more often than they are. B-) )
Our Right just keeps advocating policy that will heap even more money onto a wealthy class of citizens who are wealthier than they have ever been in American history.
Please don't confuse "The Republican Party" with "The Right". For at least the last three presidential election cycles the Republican Party has been solidly under the thumb of one of its four major factions - the Neocons. (And this cycle that faction is finally being bumped by a new challenger which I'll call "The Plutocrats", in the form of the self-financed campaigns of Trump and Fiorina.)
The Classic/Paleo conservatives, the religious right, and the liberty wing (libertarians and other anti-tax, government-off-our-backs types) make up more of the party but lately have had negligible power.
From the perspective of the burden of proof placed on the Prosecution, they have to disclose how they arrived at this derived 'evidence' of a match via TrueAllele.
IMHO: Unless there is an issue with whether the database TrueAllele searched was obtained illegally (making any results of searching it for suspects "fruit of the poisoned tree"), they DON'T have to show how the match was found.
They just have to show that the match IS a match. This can be done with the data involved in the match standing on its own.
If he refuses to show his code to an expert witness and explain it, then the evidence can't be used.
As I understand it, he should be able to get his program (or a modification of it) to produce as an output:
- The computation of the probabilities
- The data used to compute them, with annotation giving a trace back to its source.
- The assumptions behind the computation.
The issue of HOW IT IDENTIFIED this individual is separate from WHAT IT IDENTIFIED ABOUT HIM. The former is the "secret sauce" and would not be revealed. The latter is the evidence and can stand on its own. Further, it MUST be able to stand on its own - because if it can't, it's inadequate.
Now if part of his "expert testimony" is that his program did NOT find any other people who 'matched' and this is somehow relevant, THEN how it goes about doing the matching also becomes relevant and he's hosed.
Of course the defence is going to do their darndest to monkeywrench the prosecution, and threatening the tool builder with disclosure of his trade secrets is a good move tactically. It's up to the judge (and possibly the appeals judge) to call them on it if it's just an irrelevant thrash.
(I say this all as someone who personally believes that DNA evidence should only be used for defence, not prosecution, in criminal trials, because non-match is definitive while "match" is a difficult probability estimate based on assumptions about genetics, gene distribution, gene correlation, and on some very difficult to grasp probability computations. Hunting for matches in databases is, IMHO, subject to false positives and overestimation of the improbability of the match being false, based on underestimation of correlation and the genetic and familial mechanisms that might promote it.)
By "a bit", after 30+ years of selection pressure I wouldn't be surprised by as much as 25 to 50% extra NOx on "off the test" readings from just optimizing with only the test and field mileage for feedback.
Unless there's something special about diesels that makes them inherently troublesome on some non-test cycles, though, 2x or more seems too high to be honest fallout, and should prompt a detailed search for explicit cheat code.
Much of my early career was consulting to the auto industry (in particular, Ford and GM) during the early periods of electronic engine controls and their interaction with the emissions test regime in question. I did some work with engine controls, but most of it was emissions testing automation and data reduction.
We all (executives, engine designers, test equipment designers, and regulators) knew:
- The test conditions were arbitrary but standard.
- Detecting them and switching modes would be trivial to implement and look good at first, but also illegal, immoral, and financially disastrous for the company when they were eventually detected.
- Because engineering was done to meet the regulations - which meant scoring well on the tests - even with honest efforts and no cheating it would eventually evolve the vehicles to do well on the tests but probably not so well on other operational cycles. (You see this with "your mileage may vary".)
- Tests and design processes were VERY expensive and the companies highly competitive. They couldn't afford to engineer for BOTH the regulations and to be good all the time out of niceness: The "nice guys" would "finish last", be driven out of the market, and you'd STILL only get cars that only met the regulations. A level playing field was needed.
- So it was the responsibility of the regulators to write test specifications that modelled the driving cycle well enough that engines tuned to them would also perform adequately in general, despite the "design to the test" evolutionary pressure - and the responsibility of the engineers to meet the law on the tests that were imposed, not to do so by explicit detect-the-test cheats.
The executives and early-stage engineering departments were aware of the temptation for engineers to write cheats, and (at least at one company I worked for) put some draconian controls on software changes to the engine control to prevent them. (The official explanation given to the inconvenienced engineers was "ensuring regulatory compliance".)
I was told that the regulators came up with the standard test by
- instrumenting a car (with a bicycle wheel speed recorder on the bumper and some event-recording switches),
- parking behind various cars (in Denver?) and, when their owners started up, surreptitiously tailing them to their destination and recording their warm-up idle time, speeds, acceleration, braking, standing waiting for lights, etc. (but not the upslope/downslope and wind).
- picking one of these trips, which contained both city and highway driving and looked pretty typical, and adding a "cold soak" to the start (engine is not run for several hours) to standardize the starting conditions and model an initial start, and a guesstimate of a final idling period before shutdown. (To meet the cold-soak requirement, cars were pushed into the test cell by hand or things like electric pallet jacks.)
The test measures exhaust airflow volume and concentration of CO2, CO, and unburned hydrocarbons. So gasoline consumption can be easily computed by "carbon balance" - you know how much carbon is in a gallon, you measure all of it as it comes out, none is lost and only a tiny bit of burned lube oil adds any. So you get mileage for free by postprocessing the data. The regulators got the bright idea of putting this computed mileage on the stickers for customers to make objective comparisons when shopping.
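The carbon-balance computation is simple enough to sketch. The constants below (carbon mass fractions per species, and roughly 2421 grams of carbon in a gallon of gasoline) are approximate textbook values from memory, not the regulation's exact figures:

```python
# Carbon-balance fuel economy. Constants are approximate textbook values
# (not quoted from the regulation): carbon mass fractions per species and
# roughly 2421 grams of carbon in a gallon of gasoline.

CARBON_PER_GALLON = 2421.0  # grams of carbon per gallon of gasoline (approx.)

def mpg_carbon_balance(hc_gpm, co_gpm, co2_gpm):
    """Fuel economy from emissions measured in grams per mile.

    Every carbon atom that went in as fuel comes out as HC, CO, or CO2,
    so summing the carbon in those streams gives grams of fuel carbon
    burned per mile.
    """
    carbon_per_mile = (0.866 * hc_gpm             # HC is ~86.6% carbon by mass
                       + (12.0 / 28.0) * co_gpm   # CO: 12 of 28 amu is carbon
                       + (12.0 / 44.0) * co2_gpm) # CO2: 12 of 44 amu is carbon
    return CARBON_PER_GALLON / carbon_per_mile

# Made-up reading: 0.5 g/mi HC, 4 g/mi CO, 320 g/mi CO2
print(mpg_carbon_balance(0.5, 4.0, 320.0))
```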
It's easy to measure the average mileage of cars in the field: Just divide the odometer mileage by the gallons pumped to refill the tank, and average over several fillups to smooth out variation in how the tank was topped off. It quickly became apparent that:
- Mileage in normal service varied substantially.
- The trip defined as the standard one got substantially better mileage than was typical.
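The fillup arithmetic looks like this (the trip and gallon figures are made up). Note that dividing total miles by total gallons, rather than averaging each fillup's mpg, is what smooths out the topping-off variation:

```python
# Field mileage from fillup records: divide total miles by total gallons.
# The figures are made up. Using the totals (rather than averaging each
# fillup's mpg) smooths out variation in how full the tank was each time.

fillups = [
    (310.0, 11.8),  # (miles since last fillup, gallons pumped)
    (295.0, 12.4),
    (330.0, 12.1),
]

total_miles = sum(miles for miles, _ in fillups)
total_gallons = sum(gallons for _, gallons in fillups)
print(total_miles / total_gallons)
```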
Thus was born the caveat "your mileage may vary" and a regulation change to partition the sticker mileage into separate pieces for the stop-and-go city portion and mostly-cruising highway portion. For gasoline engines, using those two, and a small nudge downward for the standard trip's deviation from the typical, gives customers a good guide.
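A sketch of how the two sticker numbers combine into one: gallons add but mpg doesn't, so the blend has to be harmonic. The 55% city / 45% highway weighting is the split I recall being used for the combined figure, so treat it as an assumption:

```python
# Combining city and highway mpg into one number. Gallons add but mpg
# doesn't, so the blend must be harmonic. The 55% city / 45% highway
# weighting is the split I recall for the combined sticker figure.

def combined_mpg(city_mpg, highway_mpg, city_fraction=0.55):
    # Gallons per mile add linearly; invert back to miles per gallon.
    return 1.0 / (city_fraction / city_mpg
                  + (1.0 - city_fraction) / highway_mpg)

print(combined_mpg(22.0, 30.0))
```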
Also because it's easy to measure, mileage numbers from the field provided feedback to limit the tendency for "design to the test" to make gas consumption evolve into complete optimization for the test. Any model that got horrible mileage in the field would soon get bad reviews, and the engineers would be on its case (if this hadn't happened before it was released.)
But emissions are NOT easily measured in the field. About the only tests there are periodic checks in some states - and they tend to use a very abbreviated cycle. They're just intended to check that the stock emissions control equipment hasn't broken or been disconnected.
So with field feedback on mileage but not emissions, the secondary selection pressure (after "do well on the standard test") is for the engine to get good mileage on other cycles without regard to whether this affects emissions. Engineers, with the best intentions, would tend to design engines that pollute a bit more when off the test.
= = = =
I agree with most of what you say. But this is incomplete:
The higher temperatures and pressures (of diesels) help with CO and unburned hydrocarbons (they favor more complete combustion), but the scale of the added NOx and PM problems is much greater.
Which is true upstream of the catalytic converter. But the whole POINT of a (three-way) cat is to move oxygen from NOx to CO and unburned hydrocarbons. Get the right fuel-air mixture and any leftover oxygen, NOx, CO, and HC are all burned exactly. Getting this right with early engines - using fluid and mechanical computation - was a real pain. With software and exhaust oxygen sensors it's a much easier job.
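A toy illustration of why software plus an exhaust oxygen sensor makes this easy: a narrow-band sensor effectively reports only rich-or-lean versus stoichiometric, and a simple integrating loop walks the fuel trim onto the right mixture. The plant model and gains here are invented for illustration, not any real ECU's:

```python
# Toy closed-loop fuel trim. A narrow-band exhaust O2 sensor effectively
# reports only rich-or-lean versus stoichiometric; an integrating controller
# turns that into a fuel correction. Plant model and gain are invented.

STOICH_AFR = 14.7  # stoichiometric air-fuel ratio for gasoline

def run_loop(fueling_error=0.05, gain=0.005, steps=200):
    """Start 5% rich and let the integral trim walk the mixture to stoich."""
    trim = 0.0
    afr = STOICH_AFR
    for _ in range(steps):
        afr = STOICH_AFR / (1.0 + fueling_error + trim)  # more fuel -> lower AFR
        if afr < STOICH_AFR:
            trim -= gain  # sensor says rich: lean it out a notch
        else:
            trim += gain  # sensor says lean: richen it a notch
    return afr

# Ends limit-cycling within about one gain step of stoichiometric
print(run_loop())
```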
As for particulate matter, the original emission control regulations were designed around what was current when they were imposed: gasoline engines, running the Otto cycle, which doesn't emit much PM unless horribly detuned, worn into burning lots of lube oil, or fed the wrong fuel (like accidentally topping off the tank from the green diesel-fuel pump hose). Diesels tend to put out a lot of PM, and (as big lumps of mostly carbon and unburned hydrocarbons) a surface catalyst can't do much with it. So getting that right pretty much needs to be dealt with separately.
The secret sauce seems to be ultra-dense deuterium, "D(0)" whatever that means. Looking through the author's other papers, it looks like he's claiming to have made metallic hydrogen, which would be a Nobel Prize right there.
If he can demonstrate this, then fine
... he's a super genius.
Perhaps he's making flakes of Rydberg matter, floating in a near-vacuum.
(If I understand it correctly) this is matter where the individual atoms have been NEARLY ionized, by pumping an electron up to ALMOST, but not quite, the energy needed to free it from the atom, leaving an ion. (You can do this with a laser tuned to the energy difference between the ground state, or the state the electron WAS originally in, and the state you want it in.) If you get the electron into one of the high, flat, circular orbitals, it looks almost like a classic Bohr atom (earth/moon style orbit), and the state lasts for several hours.
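For a sense of scale, the textbook hydrogen formulas (nothing here is specific to the paper under discussion) show how swollen and weakly bound such a nearly-ionized state is:

```python
# Scale of a high-n (circular "Rydberg") state, from textbook hydrogen
# formulas only -- nothing here is specific to the paper under discussion.

A0_METERS = 5.29e-11  # Bohr radius

def rydberg_radius_m(n):
    return n * n * A0_METERS  # orbit radius grows as n^2

def binding_energy_ev(n):
    return 13.6 / (n * n)     # binding energy shrinks as 1/n^2

# e.g. n = 50: ~2500x the ground-state radius, bound by only ~5 meV
print(rydberg_radius_m(50), binding_energy_ev(50))
```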
Atoms in such a state associate into dense hexagonal clusters. (19-atom clusters are easy and heavily studied, and clusters of up to 91 atoms are reported.) The electrons bond the atoms by delocalizing, forming a metallic, hexagonal grid, similar to a tiny flake of graphite sheet. You can't make them very big. (There's some issue with the speed of light screwing up the bonding stability when the flakes get too big.) But you can make a lot of them, creating a "dusty plasma".
So hitting gas with the right laser pulse could end up with lots of flakes of this stuff, with deuterons held in tight (dense!) and well-defined flat hexagonal arrays by a chicken-wire of delocalized electrons, with zero (or tiny) net charge, floating around in a near vacuum and suitable for all sorts of manipulation. (Like slamming them into each other, for instance.)
Now how this interacts with substituting muons for electrons (something analogous to an impurity in a semiconductor crystal?), missing or extra electrons (ditto?), occasional oddball nuclei (again ditto?), or perhaps how it might generate muons when tickled by appropriate laser pulses, all look like good open questions for active research.
The point is that it's pretty easy to get these long-lived, self-organized, high-density, stable regular geometry, crystal flakes of graphite-like deuterium floating in a near vacuum, where you can poke at them, without any pesky condensed matter to get in the way.
Easy as in maybe you can do it on a desktop with diode lasers, producing "maker" level nuclear physics experiments. B-)
Some recently approved cancer treatments (particularly for inoperable brain cancer) are based on a recent discovery:
- The electric fields from changing magnetic fields interfere with chromosome segregation during mitosis.
- The affected cells generally do one of two things:
- Complete the division with missorted chromosomes - then both offspring cells commit suicide.
- Give up on cell division - then the now-tetraploid cell commits suicide.
Cells not undergoing mitosis keep perking along just fine. (Perhaps this is why long-range electric fields aren't present in cells except during division: electrical effects normally occur across membranes or at very close range between molecules. Since the chromosome-segregation mechanism uses these fields, any newly-evolving "feature" that involved long-range E-fields would kill the cell partway through evolving it.)
This is great for brain cancer treatment: Essentially nothing is splitting except the cancer cells. Maybe you lose some nerve stem cells and have slightly lower brain plasticity over the coming decades - but that's a heck of a lot better than dying in agony and gradually-increasing dementia over 6 months to a year.
But start poking at brains with this in the long term - especially brains of people under 21 or so, when the brains are still doing substantial interconnection and cell division - and you might start seeing some nasty damage.
Looked it up:
They replace an electron in a hydrogen atom/molecule - but are heavy so the resulting muonic atom/molecule is much smaller, allowing the nuclei to come within fusion distance.
H2 (D-D, D-T) molecule.
The fusion kicks the muon off and it repeats the process. [...] The problem has always been that it takes a lot of energy to make a muon and it has a tiny lifetime - long enough to do maybe four fusions before it decays.
Actually the muon lasts a couple microseconds which is a LONG time at molecular and nuclear speeds. But in addition to decaying it has maybe a 1/2% to 1% chance of sticking to the helium and getting lost until it times out. So it only catalyzes maybe 100 to 200 reactions. You need somewhat more than 300 to break even for the energy used to create it in an accelerator (maybe times a factor of about 2.5 to make up for the accelerator efficiency).
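The arithmetic behind those numbers, using round figures that are my rough assumptions (a muon costing ~5 GeV to make in an accelerator; 17.6 MeV is the standard D-T fusion yield):

```python
# Round-number arithmetic behind "100 to 200 fusions vs. ~300 to break even".
# The ~5 GeV cost per accelerator-made muon is a rough assumed figure;
# 17.6 MeV is the standard D-T fusion yield.

MUON_COST_MEV = 5000.0
DT_FUSION_MEV = 17.6

# With sticking probability p per fusion, the expected number of fusions
# a muon catalyzes before getting stuck is about 1/p:
for p_stick in (0.005, 0.01):
    print(round(1.0 / p_stick))  # 200, then 100

breakeven_fusions = MUON_COST_MEV / DT_FUSION_MEV
print(round(breakeven_fusions))        # ~284, before accelerator losses
print(round(breakeven_fusions * 2.5))  # ~710 with a 2.5x efficiency factor
```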
I followed the link to the original paper. It's a bit sketchy. But on a skim I don't get quite as much of a "what did he do" as the author of that piece did.
What it looks to me like he did is:
- Made some "ultra-dense" deuterium - apparently by the same method as F&P: using electricity to force it into palladium by electrolysis, with the solid palladium holding it at high density and in particular orientations.
- Hit it with a laser.
- Got muons out - with energies above those that could be explained by the laser excitation, and apparently with energy totalling substantially more than spent on the laser and the electrolysis drive power.
Now if this is real, and can be repeated and engineered:
1) High-energy charged particles, at well-defined energies, emerging from a well-defined location, and with adequate lifetimes to last through a few microseconds of the process, can easily have most of their kinetic energy collected as electricity by pretty trivial equipment.
2) Muons catalyze fusion - at room temperature (or even liquid hydrogen temperature). They replace an electron in a hydrogen atom/molecule - but are heavy so the resulting muonic atom/molecule is much smaller, allowing the nuclei to come within fusion distance. The fusion kicks the muon off and it repeats the process. This has been known for decades: Just point a muon beam at some hydrogen and watch the fun.
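The size argument can be put in numbers with nothing but textbook constants (here a proton stands in for the deuteron or triton, which changes the reduced mass only slightly):

```python
# Why the muonic molecule is small enough for fusion: orbital size scales
# as 1/(reduced mass). Textbook constants; a proton stands in for the
# deuteron/triton, which changes the reduced mass only slightly.

A0_METERS = 5.29e-11                   # Bohr radius (electron around proton)
M_E, M_MU, M_P = 1.0, 206.77, 1836.15  # masses in electron-mass units

def reduced_mass(m_orbiter, m_nucleus):
    return m_orbiter * m_nucleus / (m_orbiter + m_nucleus)

shrink = reduced_mass(M_MU, M_P) / reduced_mass(M_E, M_P)
print(round(shrink))       # ~186: the muonic atom is about 186x smaller
print(A0_METERS / shrink)  # ~2.8e-13 m, close enough for tunnelling to fusion
```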
The problem has always been that it takes a lot of energy to make a muon and it has a tiny lifetime - long enough to do maybe four fusions before it decays. So muon-catalyzed fusion (using accelerators to make muons) would never approach breakeven. If this guy has figured out how to make muons in a simple cell, with the energy to make the muon coming from a fusion reaction, it could change the game big-time.
Also: If muons manufactured by such a process were a step in the very sporadic, looked-like-fusion, effects seen by the people trying to do cold fusion, it could explain why the effects were sporadic - and understanding the process might lead to being able to produce it reliably and consistently.
So maybe this is just another will-o-the-wisp. Or maybe it's something that could lead to substantial repeatable interesting physics. Or maybe it could lead to real energy-producing reactors on a less-than-tokamak scale.
And just maybe it's a missing piece of a real room-temperature fusion process that led to the cold-fusion flap and might become practical. Wouldn't that be nice?
Regardless, this just got published within the last month or so. If it's real it should be pretty easy to reproduce, and from there not too hard to figure out. So let's see what happens. Maybe nothing, maybe little, just the off chance of another roller-coaster ride. B-)
All I ask is a chance to prove that money can't make me happy.