Comment Re:This has got to be (Score 2) 86
The last 999 were frauds, so this one *must* be the real thing.
and test ALL your food on a hamster first.
But only a hamster that you don't like.
How do you arrest someone in absentia?
Even more confusing: The explosions we will see in 80 years, in 160 years, in 240 years... have already happened too.
What's the deal, is it just broadcasting re-runs?
Now let's see ChatGPT produce equally compelling Marxist and Feminist critiques of science.
If you actually look at the pledge, the content is a bunch of meaningless platitudes. Specifically, it requires:
1. Transparency: in principle, AI systems must be explainable;
2. Inclusion: the needs of all human beings must be taken into consideration so that everyone
can benefit and all individuals can be offered the best possible conditions to express
themselves and develop;
3. Responsibility: those who design and deploy the use of AI must proceed with responsibility
and transparency;
4. Impartiality: do not create or act according to bias, thus safeguarding fairness and human
dignity;
5. Reliability: AI systems must be able to work reliably;
6. Security and privacy: AI systems must work securely and respect the privacy of users.
For all the content it has, they might as well have pledged to "only do the AI things we think we should do." If you think some information shouldn't be released, you don't call it non-transparency, you call it privacy. When you think a decision is appropriate, you don't call it bias, you call it responding to evidence, and you wouldn't describe a decision as failing to take someone's interests into account unless you think the interests weren't balanced appropriately.
The only requirement with even the possibility of real bite is #1, explainability, but the qualifier "in principle" makes it trivial, since literally every computer program is in principle explainable (here's the machine code and the processor architecture manual).
That works in politics (sadly), not science.
He was hoping it would work in courts too.
Probably an AI will advise the Board.
Isn't that illegal in Texas and Florida?
For example, Biden's justice department manufacturing novel legal theories [nytimes.com] to imprison non-violent political protesters...
Unlike Farty Don, who merely wants to shoot them.
Don't trust a company with "spin" in their name.
Certain topics do not lend themselves very well to the scientific method.
It's kind of hard to set up 100 universes, say, and run them through a few billion years. You can't do the experiment part.
Sometimes a hypothesis has potentially observable implications, even if a mad scientist can't reproduce everything in their lab.
I think it has been decades since cosmologists believed the universe is expanding at a constant rate.
IANAPhysicist, but isn't "a thrust of 1g" specific to the mass you're accelerating? The same device pushing a heavier mass gives less acceleration. Is the claim that you can create a thrust of 1g even meaningful without additional details?
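The units problem above can be sketched in a few lines of arithmetic. The thrust and payload masses here are made-up numbers, purely to illustrate that a fixed force only yields "1 g" for one particular mass:

```python
# Thrust is a force (newtons); "1 g" is an acceleration (~9.81 m/s^2).
# By a = F / m, a fixed thrust produces 1 g only when m = F / g.
G = 9.81  # standard gravity, m/s^2


def acceleration(thrust_n: float, mass_kg: float) -> float:
    """Acceleration (m/s^2) produced by a given thrust on a given mass."""
    return thrust_n / mass_kg


thrust = 981.0  # newtons -- hypothetical device
print(acceleration(thrust, 100.0) / G)  # 100 kg payload -> 1.0 g
print(acceleration(thrust, 200.0) / G)  # 200 kg payload -> 0.5 g
```

Doubling the mass halves the acceleration, so quoting "a thrust of 1g" without stating the mass leaves the claim underspecified.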
I respect the objections that a committed pacifist (or opponent of standing armies) might have to their company taking on military contracts -- even if I disagree. But, anyone else is just being a selfish fucker. They are saying: yes, I agree that we need to maintain a military so someone needs to sell them goods and services but I want it to be someone else so I don't have to feel guilty.
Doing the right thing is often hard. Sometimes it means doing things that make you feel uncomfortable or icky because you think through the issue and realize it's the right thing to do. Avoiding doing anything that makes you feel icky or complicit doesn't make you the hero -- it makes you the person who refused to approve of interracial or gay marriages back in the day because you went with what made you feel uncomfortable rather than what made sense.
Eureka! -- Archimedes