Great idea, but it might come out looking like the monolith from 2001
When I was a teenager back in the 70s I knew a kid who put a solenoid controlled bleach dispenser over his rear tires to achieve that truly obnoxious white smoke burnout.
Why, you ask? What possible purpose could that serve? Well, when his girlfriend dumped him, he backed up into her parents' driveway and blanketed their house in smoke for ten minutes.
This pretty much shows the level of mentality involved.
First thing I thought of when reading the article...they deploy real-life agents into the games? So you'd see some Warcraft orc or SL furry transform into a typical Fed-looking character? Presumably through the use of the Tron scanning laser?
The PCSX2 developers said a PS3 emulator will be possible around 2020...so good luck with a PS4 emulator.
Does anyone else think it's a travesty that our society allows people to reach adulthood while being so stupid? I think education should be a top priority, after that everything else will sort itself out within a generation.
Society wouldn't be so obsessed with telling other people what to do if obesity wasn't something that caused a visible change to a person's body. The bottom line? An awful lot of the feigned "concern" about weight loss is driven by people's own selfish desires for people around them to fit a personal preference of what they deem physically attractive.
Counterpoint: Where I live, most men prefer fat women. Women with naturally thin bodies feel that they've been born with a disadvantage. And our healthcare professionals still harp on and on about obesity, even though what they recommend, translated into American sexual tastes, would amount to mastectomies for all the women.
Women here still prefer the typical fit and muscular male figure...there are lots of fat women but fat men are uncommon.
Well, start with the conservation status of the birds. Both species are rated as "Least Concern" -- which means no identifiable conservation issues.
In the 1950s there were only 412 nesting pairs of bald eagles in the US, due to hunting and DDT. By 1995 they were taken off the endangered lists, and five years ago they were taken off the "threatened" list. By now there are nearly ten thousand breeding pairs in the lower 48. Half of US states have at least 100 breeding pairs.
From an environmental viewpoint it's quite reasonable to stop treating an occasional accidental bald eagle death as some kind of serious event. In a healthy population, removing an individual makes room for another individual, just as with reasonable levels of deer hunting. Emitting more carbon in order to stop a handful of eagle accidents makes no sense at all.
Models work from assumptions. The assumptions you put into them don't have to be plausible; a model simply spits out the consequences of the initial conditions you choose. Thus you could start a simulation of the Earth which started with the tropical seas being frozen and the polar seas being at 38 C. Those initial conditions are impossible, but the computer program will faithfully spit out *some* kind of result.
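As a toy illustration of that point (this is not any real climate model, just an invented two-reservoir relaxation), a simulation will happily grind forward from physically impossible starting temperatures:

```python
# Toy illustration: a model propagates whatever initial conditions you
# feed it, plausible or not. Two coupled "seas" exchange heat each step
# and simply relax toward their common average temperature.
def relax(tropical_c, polar_c, rate=0.1, steps=50):
    """Step two coupled temperatures toward equilibrium."""
    for _ in range(steps):
        flow = rate * (polar_c - tropical_c)
        tropical_c += flow
        polar_c -= flow
    return tropical_c, polar_c

# Impossible initial state: frozen tropical seas, 38 C polar seas.
trop, pole = relax(-2.0, 38.0)
# The program faithfully produces *some* result regardless --
# here both temperatures end up near the average, 18.0.
print(round(trop, 2), round(pole, 2))
```

The point is that nothing in the code checks whether the starting state could ever occur; garbage in, consequences out.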
Well, how about this for a system: instead of counting how many papers a researcher publishes, count the number of times a paper he has written has been cited by somebody else.
This is a truer measure in any case. I recently had occasion to review the information science research literature on ontologies, and discovered that about 5% of the literature was absolutely vital to read, and was cited by a substantial fraction of papers in the field -- hundreds of times in my own literature search, and likely thousands of times in total in the peer-reviewed literature.
About 20% dealt with abstruse and narrow technical topics which were nonetheless useful to people working in the field; or were case studies. Such papers make up the bulk of citations in the research literature, although any single such paper probably gets only a few dozen citations. Still that's useful work.
The remaining 3/4 of papers are trivial, obvious, and unoriginal -- a complete waste of anyone's time to read. They may score a handful of citations, but only from authors scraping the bottom of the barrel.
Odd side note: the less an author has to say, the more elaborately he says it. The really important papers tend to be written in straightforward, easily understandable prose. The trivial papers read like parodies of academ-ese.
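The counting scheme proposed above -- rank researchers by citations their papers *receive* rather than papers they publish -- could be sketched like this, with entirely made-up paper data:

```python
# Sketch of the proposed metric: credit authors for citations received,
# not for papers published. All data here is invented for illustration.
from collections import Counter

# paper id -> author, and paper id -> list of paper ids it cites
authors = {"p1": "alice", "p2": "alice", "p3": "bob"}
cites = {"p1": [], "p2": ["p1"], "p3": ["p1", "p2"]}

def citation_counts(authors, cites):
    """Total citations received by each author's papers."""
    received = Counter()
    for citing, cited_list in cites.items():
        for cited in cited_list:
            received[authors[cited]] += 1
    return received

print(citation_counts(authors, cites))
# alice's two papers are cited three times in total; bob's, none --
# so alice outranks bob even though both published papers.
```

A real system would also have to discount self-citations and citation rings, which is exactly where such metrics get gamed.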
Well, C.S. Lewis had an interesting take on this. He obviously believed in miracles, but he thought of them as becoming "naturalized", in the way a foreigner becomes a naturalized citizen in his adoptive land, and is subsequently bound by the laws of that land. So when the *supernatural* occurs (e.g. drowning the northwest corner of the continent at the end of the First Age), the consequences should follow *naturally*.
I bring this point up with my fantasy writing friends. Just because your world *has* miraculous things in it doesn't mean *everything* should be a miracle. People should have common-sense responses to miraculous things. If wizards throw lightning bolts in battle, then the cavalry shouldn't charge in a tightly packed formation until they're right at the line of battle.
George R.R. Martin's Song of Ice and Fire conspicuously soft-pedals magic, but ironically much of the world of those stories fails the naturalization test. For example, the kind of society depicted depends on consistently generating a massive agricultural surplus, something that in my opinion is not compatible with decade-long winters. But I gave up only a million words into the stories, so maybe that's explained elsewhere.
... as were nearly all examples of tengwar and dwarf-runes we have from Tolkien's own hand.
That was my first thought, another thing that will be practically required to compete on the job market, oh yay...just wait until those sleep substitute pills hit the shelves.
They're going to die a horrific, slow, and painful death. While I agree they brought it upon themselves, the picture of what they're likely to go through brings me no pleasure or satisfaction.
Neither of these analogies seems quite right to me.
If there are any morally legitimate uses for military weapons, you cannot say that working on weapons per se is automatically immoral. On the other hand, that doesn't make working on any weapon development program for any client morally neutral.
When Mikhail Kalashnikov designed the AK-47, the Soviets were busy trying to repel German invaders -- surely that was a legitimate goal. They needed a cheap, rugged, lethal weapon that could be easily manufactured in large numbers. These are the same properties that have caused it to proliferate into unstable regions of the world. In some countries it is cheaper to buy an AK-47 than a live chicken. Some have called it a "slow-motion weapon of mass destruction."
If somebody had asked Kalashnikov "Design me the ideal weapon to arm a conscripted child-soldier," he'd have told them to get lost. He designed the weapon to liberate his homeland; and he always regretted seeing his inventions in the hands of terrorists. He remarked on one occasion that he'd rather have invented an improved lawn mower.
Clearly, the ethics of weapons engineering is complex. But complex is not the same as "morally neutral". Heisenberg made errors in his atom bomb calculations, leading him to believe that a bomb was not feasible in time to affect the course of the war one way or the other. If his calculations had shown the way to an easier, practical bomb much earlier, then he'd have faced the ethical problem that arming a regime such as the Nazis with such a weapon would be a bad thing.
Today people working on aerial drone warfare are faced with serious ethical questions. Yes, you can construct scenarios in which the drone does the work of a human-piloted vehicle without exposing the operator to risk -- clearly that's a good thing if you believe the operator is fighting in a just war. But one of the tenets of just war theory is that killing people pointlessly is never moral. Suppose you believed (as many do) that the Obama administration's use of drones was self-defeating, that we'd never be able to kill more legitimate enemies than are recruited to the cause by civilian "collateral damage". Working to supply *this* regime with *that* weapon would present a moral dilemma.
Here's a simple analogy that I think works. Selling someone a gun is morally neutral, if you know nothing about what they intend to do with that gun. But if you know for a fact someone is going to use that gun to commit robberies, then selling the gun becomes wrong. The point is that you can't make generalized decisions about weapons development in a vacuum. Circumstances matter. For example, it is possible to believe that under the circumstances the Manhattan Project was justified, but that the North Korean or Pakistani nuclear programs are not, without necessarily stipulating that the United States has more right to nuclear weapons than any other country. You just have to show the circumstances are different.