Comment Re:Remember when (Score 1) 39
Remember when Windows 95 wasn't installing the IP stack by default? Better yet you had to buy it as Trumpet Winsock for Windows 3.x ?
and peppered the public with constant lies.
That skill proved useful in his later career.
Russia is testing nuclear delivery systems like their new "Skyfall" missile. But they're not testing warheads. Now, in fairness, Trump is very old, quite possibly senile, and not terribly bright, so it's entirely possible that he doesn't understand the difference between Russia testing a missile and Russia testing a bomb. But his order is making news because, as written, it's calling on the United States to resume the live-fire testing of nuclear weapons, and we stopped doing that in (off the top of my head) 1992.
If Trump is trying to go tit-for-tat with a rival over nuclear weapons testing, it's basically North Korea. China hasn't shot one off since 1996, and Russia stopped before us, in 1990.
Both China and Russia are suspected of having run clandestine tests in the 2000s but if US intelligence has more than a suspicion they're playing it close to the vest.
Testing of nuclear weapons among the major nuclear powers tapered off with the end of the Cold War and the international norm against testing creates a real disincentive to test, even in well contained, underground scenarios.
Back when testing wasn't so taboo, the United States had a HUGE advantage in the measurement and recording of test data. That advantage stemmed from computing advantages which have since ebbed. Normalizing live testing gives Russia and China an opportunity to close that data and modeling gap consequence-free. "The US is testing, so we should too."
Trump isn't leaning into testing because Russia or China told him to -- he's just a vainglorious blowhard who likes the idea of setting off nuclear weapons -- but this nevertheless benefits American adversaries a great deal more than it benefits the United States.
Per the mentioned 18 months, that would otherwise cost $399.
THIS. I have some Raritan lights-out management boards that need Java run in a way that isn't supported at all in any modern browser, even with tweaks, whitelisting, or anything else. They have an app now, but such older boards aren't supported. The only simple way to connect is to use a VM with Windows 7 and Internet Exploder. Then I realized even Windows 98 would do: it's faster, and the VM is tiny. Faster CPUs need a small patch but work just fine (as long as it's still an x86 CPU; I just couldn't figure out how to run it reliably with qemu or dosbox on a different CPU).
For sure. And that's why I think it's important to distinguish harm caused despite good intentions and reasonable practices being followed from harm caused because someone did not follow reasonable practices or actively chose to cut corners.
Just my personal opinion, but given the track record in this particular industry, I think there should be demonstrable intent by decision-makers to follow good practices, not merely a lack of evidence of intent to circumvent or cut corners. This is expected in other regulated industries, where compliance failures are a big deal, and for good reason. I see no reason why similar standards could not be imposed on those developing and operating autonomous vehicles, and every reason they should be, given the inherent risks involved.
Maybe this will be an area where the US simply gets left behind because of the pro-car and litigious culture that seems to dominate discussions there.
Reading online discussions about driving -- admittedly a hazardous pastime if you want any facts to inform a debate -- you routinely see people from the US casually defending practices that are literally illegal and socially shunned in much of the world because they're so obviously dangerous. Combine that with the insanely oversized vehicles that a lot of drivers in the US apparently want to have and the car-centric environments that make alternative ways of getting around much less common and much less available, and that's how you get accident stats that are already far worse than much of the developed world.
But the people who will defend taking a hand off the wheel to pick up their can of drink while chatting with their partner on a call home all while driving their truck at 30mph down a narrow road full of parked cars past a school bus with kids getting out are probably going to object to being told their driving is objectively awful and far more likely to cause a death than the new self-driving technologies we're discussing here. You just don't see that kind of hubris, at least not to anything like the same degree, in most other places, so we might see more acceptance of self-driving vehicles elsewhere too.
IMHO the only sensible answer is to separate responsibility in the sense that a tragedy happened and someone has to try to help the survivors as best they can from responsibility in the sense that someone behaved inappropriately and that resulted in an avoidable tragedy happening in the first place.
It is inevitable that technology like this will result in harm to human beings sooner or later. Maybe one day we'll evolve a system that really is close to 100% safe, but I don't expect to see that in my lifetime. So it's vital to consider intent. Did the people developing the technology try to do things right and prioritise safety?
If they behaved properly and made reasonable decisions, a tragic accident might be just that. There's nothing to be gained from penalising people who were genuinely trying to make things better, made reasonable decisions, and had no intent to do anything wrong. There's still a question of how to look after the survivors who are affected. That should probably be a purely civil matter in law, and since nothing can undo the real damage, the reality is we're mostly talking about financial compensation here.
But if someone did choose to cut corners, or fail to follow approved procedures, or wilfully ignore new information that should have made something safer, particularly in the interests of personal gain or corporate profits, now we're into a whole different area. This is criminal territory, and I suspect it's going to be important for the decision-makers at the technology companies to have some personal skin in the game. There are professional ethics that apply to people like doctors and engineers and pilots, and they are personally responsible for complying with the rules of their profession. Probably there should be something similar for others who are involved with safety-critical technologies, including self-driving vehicles.
The perfect vs good argument is the pragmatic one for moral hazards like this. IMHO the best scenario as self-driving vehicles become mainstream technology is probably a culture like air travel: when there is some kind of accident, the priority is to learn from it and determine how to avoid the same problem happening again, and everyone takes the procedures and checks that have been established that way very seriously. That is necessarily going to require the active support of governments and regulators as well as the makers of the technology itself, and I hope the litigious culture in places like the US can allow it.
That's my opinion too.
Unfortunately, after a certain amount of actual progress we are now regressing again.
Yeah, we have a long history of not practicing what we preach.
Remember when the USA took pride in being a melting pot?
I mean, if they can't write a trusted parser, maybe they should get out of the OS business? Jesus.
THIS. I was flabbergasted when NextCloud wouldn't do previews for security reasons, but freakin' Microsoft?! Also, this isn't something that needs network or file access: just take some memory range and spit out results into another buffer (whose size can be bounded, since it only needs to hold a bitmap thumbnail). With all the features in modern CPUs and OSes, can't we isolate this well enough?
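The isolation idea above can be sketched in a few lines. This is a hypothetical illustration, not how Windows Explorer or NextCloud actually structure their preview handlers: run the untrusted parser in a throwaway child process with hard CPU/memory limits and a fixed-size output buffer, so a malicious file can at worst crash its own sandbox. The `WORKER` "parser" here is a stand-in that just reports the input length; the limits rely on the Unix-only `resource` module.

```python
import subprocess
import sys

# Worker run in a separate process: caps its own resources (Unix-only
# `resource` module), reads untrusted bytes from stdin, writes a
# fixed-size result to stdout. No file or network access is needed.
WORKER = r"""
import resource, sys
# Apply limits BEFORE touching any untrusted input.
resource.setrlimit(resource.RLIMIT_CPU, (2, 2))                       # 2s CPU
resource.setrlimit(resource.RLIMIT_AS, (512 * 1024 * 1024,) * 2)      # 512 MB
data = sys.stdin.buffer.read()
# Stand-in "parser": emit a fixed-size summary instead of a real thumbnail.
out = (b"len=%d" % len(data)).ljust(32, b"\0")[:32]
sys.stdout.buffer.write(out)
"""

def preview(data: bytes, timeout: float = 5.0) -> bytes:
    """Parse untrusted bytes in a disposable child; kill it on overrun."""
    proc = subprocess.run(
        [sys.executable, "-c", WORKER],
        input=data,
        capture_output=True,
        timeout=timeout,  # raises TimeoutExpired and kills the child
    )
    if proc.returncode != 0:
        raise RuntimeError("preview worker crashed or hit a resource limit")
    return proc.stdout[:32]  # never accept more than the agreed buffer size
```

A real implementation would go further (seccomp/pledge/AppContainer, dropping privileges), but the structure is the point: the parser's only channels are one input buffer and one bounded output buffer.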
This part from TFA is mind-boggling: "vulnerabilities that allow them to obtain NTLM hashes when users preview files containing HTML tags".
Lead me not into temptation... I can find it myself.