In Star Trek: Voyager, the character striving for humanity was the Doctor. Seven of Nine had a similar arc, but the Doctor was definitely the outsider. By the time Voyager came around, being half-Klingon wasn't really news; it simply made B'Elanna's internal struggles easier to portray (since everybody has internal struggles) when they came up. It wasn't a huge deal that she was half-anything. The Doctor was programmed to look and act human, and Seven of Nine carried her Borg past, so each character ended up struggling to be more human.
When looking for a Spock analog in ST:V, I struggled a bit among B'Elanna, Seven of Nine, and the EMH, but settled on B'Elanna. Yes, the EMH was trying to understand human behavior, but Data already covered that; Pinocchio is still Pinocchio even if you are just a hologram, so the EMH would have been a bit redundant for the purposes of my argument. I needed somebody different. The issue with Seven of Nine was more subtle. She didn't have to struggle to become human, because she was born human. Her conflict was not about becoming human but about recovering what she had lost to the Borg. Admittedly, B'Elanna's conflict was often framed as little more than losing that Klingon temper of hers, but her internal conflict was much closer to Spock's struggle with his Vulcan-suppressed human half than to Seven of Nine's allegorical rejection of her Borg implants.
I like his humble, collaborative attitude, befitting a true scientist. I expect that, in practice, getting there in a repeatable way will be the result of international cooperation, with different organisations bringing their own skills. Empathy and dialogue can only accelerate the process.
He is no longer a scientist. He is a bureaucrat now, so he faces problems for which the scientific method and its associated toolbox are sub-optimal, as is his attitude of cooperation and collaboration. They are still useful, to be sure, but he will get more use out of a couple of chapters of Machiavelli's The Prince than out of Newton's entire Principia.
The NASA director's primary challenge is to find compromises acceptable to groups of people who have divergent goals. Congress, DoD, private industry, various scientific orgs -- all of these have claims on, and thus have influence over, NASA's ability to function. Unfortunately for the director, their goals are not the same and are often opposed.
For example, the chair of the House sub-committee that controls NASA's budget, Representative Lamar Smith (R, Texas), denies the existence of AGW and has threatened to withhold funding from NASA if NASA continues to support projects that investigate it. Smith has already dismissed science-based reports on AGW as "biased" and has set up a committee funded and staffed by the petroleum industry to "review" all AGW data before it is presented to Congress. In a bucket, if the man controlling your funding denies the very existence of what you are trying to investigate, then no amount of cooperation and collaboration on your part is going to produce anything but incredulity and anger on his part, and your funding will evaporate.
This is just one sample of some of the problems NASA's director faces. There are others, similar in scope and nature, including the conflict among scientists and engineers over manned vs unmanned exploration, and the re-emerging conflict over extraplanetary colonization now that Elon Musk has decided to colonize Mars. None of these problems are unsolvable, but they may not be amenable to collaboration or compromise, or yield to the scientific method. They may require a different set of tools and a different mind set, ones more often to be found in career civil servants, IMHO, than in scientists or engineers. It will be interesting to see whom he appoints to various roles in his administration; I'd wager it will be people more familiar with Machiavelli than with Newton...
What made Star Trek great was that these things were treated exactly as non-issues: in the future, we consider it ridiculous that we even have to mention that women can command ships or that black people hold power on stations. Even TOS had an alien as second in command (though, admittedly, it was made a theme far more often than necessary).
Well said. Roddenberry's optimistic view of the future is the reason most often cited when people talk about the appeal of ST:TOS.
Trek, in its original run, was adept at tipping the sacred cows of gender, ethnic, and political identity (Uhura, Sulu/Uhura, and Chekov, respectively).
But I think you missed one sacred cow by dismissing the role Spock played in Roddenberry's attack on societal mores. Roddenberry wanted to skewer religious sensibilities as well as cultural ones, so he gave one character green skin and pointed ears to make him look like a demon, and would have given him wings and a tail if the costume budget had allowed. By making a demon a sympathetic character striving to be more human (a trope, btw, that has firmly embedded itself in the Trek franchise a la Data, Odo, and B'Elanna), Roddenberry was taking aim at the religiosity that was (and still is) core to a large American demographic.
The racism and religiosity that Roddenberry baited in TOS fifty years ago is still a legitimate target in the US, so it will be interesting to see which sacred cows Discovery is going to try to tip, if it remains true to its roots.
I don't think copyright is totally bad. For example, I recently published my first novel. Without copyright law, someone else could grab my novel and start printing/selling their own copies of it. I'd wind up competing with my own novel. Then there are issues of film studios being able to take anyone's work and make movies based off of it without compensating the author at all. I'd have to spend a lot of time and money filing lawsuits to make them stop and, without copyright law, I might not be successful.
The big problem with copyright law isn't its existence. It's the length. Copyright was originally 14 years plus a one-time 14 year extension. This isn't so bad. The novel I just published would have until 2044 (assuming I renewed the copyright) to make me money. Then, the book transfers to the public domain for others to build on it. Very few works still make money after 28 years - and I'd wager most of the ones that still do (like Star Wars) partly keep making money because of new material being added.
However, over the years, copyright terms lengthened until now it's 70 years after the author's death. If I die at age 90, my novel will be protected by copyright until 2135. At that point, my youngest son (now 9) would be 128 - and likely deceased. If my youngest son had a child at 30, his child would be 98 when my copyright ran out. I don't need copyrights on my works lasting until my great-great-great grandchildren are born. That's not giving me incentive to create new works. 14 years + 14 years would be plenty.
If copyright law were reset to 14 years plus an optional one-time 14-year extension, a lot of the problems with copyright would go away.
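The arithmetic behind those two regimes can be sketched in a few lines. This is just an illustration of the comment's own examples, assuming a 2016 publication year and the ages the comment implies (dying at 90 in 2065 puts the author at 41 now); it is not legal advice, and the function names are made up for this sketch.

```python
# Hypothetical comparison of copyright expiration under the two regimes
# discussed above. Years and ages are taken from the comment's examples.

PUBLISHED = 2016        # assumed publication year of the novel
AUTHOR_AGE_NOW = 41     # implied: dying at 90 in 2065 means born ~1975

def original_term(published, renewed=True):
    """Original US rule: 14 years, plus one optional 14-year renewal."""
    return published + (28 if renewed else 14)

def current_term(published, author_age_now, age_at_death):
    """Current US rule: life of the author plus 70 years."""
    year_of_death = published + (age_at_death - author_age_now)
    return year_of_death + 70

print(original_term(PUBLISHED))                     # 2044, as in the comment
print(current_term(PUBLISHED, AUTHOR_AGE_NOW, 90))  # 2135, as in the comment
```

Run either way, the gap is stark: 28 years of exclusivity versus roughly 119 years for the same book.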
Nice analysis, but...
Corporations are effectively immortal, and they are being imbued with legal rights once reserved for humans -- at least in the US, thanks to recent court cases like Citizens United and Hobby Lobby. I'm not sure about the rest of the planet, but I'm certain the success of American corporations in getting legislation passed to favor their interests (or in preventing legislation inimical to their interests; see below) is not lost on the business communities in Europe and Asia.
As it stands, copyright law was written long before the rise of corporations and their political influence. It was created in an environment where people had a very finite window to create and profit from a work, as your cogent analysis makes clear. If a corporation is effectively immortal, though, then this underlying assumption about finite windows is no longer valid. Existing copyright law would need to change to reflect the fact that corporate copyright holders are not constrained by the same finite windows that humans are. Lo and behold, that is exactly what is happening, via laws enacted to extend the length of time the copyright can be invoked.
You make very valid points about the copyright period, but your solution doesn't take into account the needs of corporations, who have a vested interest diametrically opposed to your solution. Any attempt to roll back the current time limits on copyright would be resisted by corporate copyright holders who would stand to lose hundreds of billions, if not trillions of dollars of potential profit. Corporate copyright holders would act to preserve the status quo, and would prevent any such legislation from ever seeing the light of day. Again, I can't speak for Europe and Asia, but it is certainly clear that the interests of corporate copyright holders in America are already well-represented in Congress -- that kind of legislation wouldn't even get out of committee, let alone make it to the floor for a vote.
Only in the legal sense that they won't be tried for murder.
In every moral sense, they had an obligation to deescalate the situation. She was not a threat to anybody but the cops, and the video proves it.
Just...no. You don't win a moral argument by trolling morality.
Certainly, they had a moral obligation to act. But morals aren't absolute. They shift based on context. The minute she leveled that shotgun, the context changed.
The video proves that the cops grokked the situation rightly. You can't deescalate a situation that is already escalated. You did notice that part, right? Maybe in your ivory tower, you can, but here in reality, you don't negotiate with somebody who is saying they are going to kill you while simultaneously bringing a weapon to bear on you.
The new context obligated the cops to preserve their own lives and the lives of bystanders (you did notice the five-year-old bystander, right?).
She was a threat to the cops, to the bystander, and, a case could be made, to society at large (pulling a weapon on a cop with the stated intention of killing him is, by any definition you care to invoke, sociopathic behavior). Leaving aside the threat to society (and the host of morals that would, according to you, obligate all sorts of action), the cops acted morally by acting to preserve their lives and the life of the bystander.
Since you opened this with a troll, I'm going to close it with an ad hom. What would you have done differently in that context? I strongly suspect you have neither the training nor the temperament to react rationally (let alone morally) when threatened with deadly force. I think you would have just crapped your pants and frozen.
To what end? It can't be enforced, and it could lead to serious legal problems if somebody did try to enforce it.
How would you compel an operator intent on criminal activity to comply with this regulation? One of the problems with well-meaning regulations like this is not their lack of common sense. It is, rather, their lack of enforceability.
By definition, an operator intent on criminal conduct is not going to be deterred by any regulation prohibiting or interfering with that conduct, right? He wouldn't be a criminal if he were.
And what happens to enforceability when some dolt weaponizes his legal quad copter with the firing mechanism from his legal AR-15? Interfering with the operation of that drone now becomes a 2nd Amendment issue, and that trumps any well-meaning "common sense" regulation. Just look at all the common sense pouring out of the gun lobby, if you are skeptical. The gun lobby successfully defeated common sense regulations (supported by a majority of gun owners in this country, btw) that would have prevented the sale of guns to people on the FBI's "no-fly" list by simply invoking the 2nd Amendment.
...because the goal is to be the first to produce an AI capable of replacing a human driver. Autonomous ground vehicles are a potentially lucrative space. The demand is already there; first to get a product into the space is going to clean up. Elon Musk knows this, and Eric Schmidt at Alphabet (Google's parent company) knows it as well.
The handoff problem is the number one challenge facing autonomous vehicle developers. Schmidt and Musk are trying to solve it, and it is only the approach to solving it that differentiates them.
Schmidt is lobbying hard to get the laws in the US changed to allow him to take the driver out of the loop, because the engineer running Schmidt's autonomous vehicle R&D, Chris Urmson, believes the handoff problem is unsolvable.
Musk, otoh, doesn't see any problem as unsolvable. He is willing to continue pushing for a solution, and that means continuing to use data from the real world use of the Tesla AI. Musk knows that this is a risky strategy.
Consumer Reports seems to agree with Schmidt's engineer. The last several paragraphs of the article are a discussion of exactly why the handoff problem is, well, a problem. CR is advocating a very conservative approach to developing and marketing autonomous vehicles, because the handoff problem is too much for an AI to handle, and keeping humans safe is a big part of what CR is about.
Musk knows he's going to face litigation every time a Tesla is involved in a crash. He's prepared for that, because the payoff is enormous.
Sadly, most players will never make the switch because they rightly assume that it's too much of a headache. I can tell you with some authority, it is.
FWIW, the article was also accompanied by teases for articles with titles like "Do you really need a dedicated graphics card to play your favorite games?" and "Watch malware turn this PC into a digital hellscape."
So, in a bucket, consoles are better than hard-to-acquire, dangerous-to-build (ow, my bloody finger!), pricey PCs.
"If the code and the comments disagree, then both are probably wrong." -- Norm Schryer