> Bari Weiss is right wing? From what perspective? I mean, she doesn’t buy into the Steele Report, supports free speech, and doesn’t think Musk is a Nazi, but that’s hardly “right wing”. Is it? Or does the left wing club require thinking Musk is a Nazi, supporting censorship, and assuming Trump is a Putin super spy?
That Bari Weiss doesn't buy into some of the most extreme ideas of the modern right doesn't make her not right wing; it just shows how utterly off the wall the American right has gone. As for the idea that free speech is a right-wing approach, the last year of the Trump administration should make it very clear how much of the right cares about free speech only when it is useful to them. Trying to do things like take away broadcast licenses because one doesn't like what channels have to say https://www.bbc.com/news/articles/c626ye5gq16o is not remotely pro-free-speech.

And Weiss herself has shown that same hypocrisy, which I find particularly disappointing, because she is someone I disagreed with on some issues but who, 2 or 3 years ago, seemed like a genuine advocate for free speech. And you'll find that I'm someone on Slashdot with a history of saying that the left had serious free speech issues. But much of her behavior, including her time at CBS and her actions at the University of Austin, has shown that her support of free speech was only a fig leaf for when she was not in power. Right now, censorship is coming far more from the right than from the left. Most of the rest of your comment is essentially a strawman of what people on the left generally believe (and I say that as a pretty center-left person who finds much of the left pretty aggravating).
> Parrots sound like they are speaking, but they are merely repeating.
So to start off, this is not an accurate statement about parrots. Parrots can recognize individual objects and individual people, and can make requests for specific things. African Grey parrots are the most studied in this regard, but they are not the only species capable of it. See https://pmc.ncbi.nlm.nih.gov/articles/PMC11196360/. Alex, one of the first African Greys to be systematically studied, even had to be removed from the room when other parrots were being tested, because he would sometimes correct them if they got an identification wrong. So if you are misjudging what parrots can do in the first place, that should already be a pretty large warning sign.
> AI has only one single reasoning methodology - prediction based on existing data.
This is accurate. This is also what humans do the vast majority of the time. Prediction based on existing data is incredibly powerful.
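To make concrete what "prediction based on existing data" even means, here's a toy sketch (entirely my own illustration, not anyone's actual implementation): a character-level bigram model that predicts the next character from whatever text it has seen. Crude, but the same basic idea at vastly larger scale is what LLMs do.

```python
# Toy "prediction from existing data": count which character tends to follow
# which, then use those counts to predict the most likely next character.
from collections import Counter, defaultdict

def train(text):
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

model = train("the cat sat on the mat and the cat ate")
print(model["t"].most_common(2))  # [(' ', 4), ('h', 3)] -- likeliest chars after 't'
```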
> AI is not gaining more methods, it is instead just increasing the data. This gives 'better' results, but evolution not revolutionary. Minor improvements at great speed, not major improvements.
I'm not sure what content this has, but insofar as it has content, it ignores the vast improvements on benchmarks, which certainly look like the gaining of new methods. The degree to which models today are better than early models is just massive, to the point where many types of tasks which were on standard benchmarks 3 years ago are not even used on benchmarks today, because models routinely score 99% on them. Now, some of that is due to questions leaking into training data, but much of it is not. For example, one standard benchmark for a while was the AIME, a standard high school math competition. Using each year's AIME was reasonable because one could be confident it wasn't in the training data. The AIME is an invitational competition in the US for students who perform well on the AMC competitions. The easiest AIME problems should be solvable by any student who is comfortable with Algebra 2, and they get progressively harder. There are 15 problems on a test. For example, here is problem 1 from 2023:
The numbers of apples growing on each of six apple trees form an arithmetic sequence where the greatest number of apples growing on any of the six trees is double the least number of apples growing on any of the six trees. The total number of apples growing on all six trees is 990. Find the greatest number of apples growing on any of the six trees.
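That one is just arithmetic-sequence bookkeeping. Here's a quick brute-force check in Python (my own sketch, not from the contest materials):

```python
# Brute-force check of the apple problem: six-term arithmetic sequence
# a, a+d, ..., a+5d with greatest term 2a and total 990.
for a in range(1, 991):
    for d in range(1, 991):
        if a + 5 * d == 2 * a and 6 * a + 15 * d == 990:
            print("least:", a, "greatest:", a + 5 * d)
# Prints: least: 110 greatest: 220
```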
By the time one gets to problem 15, one has things like the following:
Find the largest prime number p < 1000 for which there exists a complex number z satisfying: the real and imaginary parts of z are both integers; |z| = sqrt(p); and there exists a triangle whose three side lengths are p, the real part of z^3, and the imaginary part of z^3.

I've rewritten the problem slightly for formatting here, but this was problem 15 of the 2023 AIME I. The other example I gave was from the 2023 AIME II (there are two test dates each year; I chose problems from different contests because I was trying to avoid putting any complicated diagrams in this comment). You can find all the AIME problems and solutions at https://artofproblemsolving.com/wiki/index.php/AIME_Problems_and_Solutions to get a better idea of what they all look like.

Now, ChatGPT in its early versions could typically get a single AIME problem correct at best. Today the best models are routinely scoring 98%, with some scoring 100% every time https://www.vellum.ai/llm-leaderboard. This is not the only example. The IMO, the International Math Olympiad, is a proof-based international competition and the highest-level high school competition in the world. Models started off not being able to solve a single problem; now multiple models are getting gold medals on the IMO. The Putnam exam is a college-level equivalent, and some AI systems are now even scoring perfect scores on that https://axiommath.ai/territory/from-seeing-why-to-checking-everything. If you want more, Carina Hong, who made the first system that could ace the Putnam, has an interview here: https://www.youtube.com/watch?v=xldMXTPGMGI. If these are all merely improvements in "data" and not in methods, then we should recognize the absolute power of increases in data.
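Going back to that problem 15 for a moment: here's a brute-force sketch of it in Python (my own throwaway check, not an official solution; humans are expected to solve it without a computer, which is part of what makes it hard):

```python
# Brute-force sketch of 2023 AIME I problem 15. For z = a + bi:
# Re(z^3) = a^3 - 3ab^2 and Im(z^3) = 3a^2*b - b^3. Sign/order variants of the
# components are covered by the associates/conjugates of z, so abs() suffices.
def is_prime(n):
    return n > 1 and all(n % k for k in range(2, int(n ** 0.5) + 1))

best = 0
for p in filter(is_prime, range(2, 1000)):
    for a in range(1, int(p ** 0.5) + 1):
        b = round((p - a * a) ** 0.5)
        if b == 0 or a * a + b * b != p:  # need |z|^2 = p with integer a, b
            continue
        x = abs(a ** 3 - 3 * a * b * b)
        y = abs(3 * a * a * b - b ** 3)
        if x + y > p and p + x > y and p + y > x:  # triangle inequality
            best = max(best, p)
print(best)  # prints 349
```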
> The various stories of evil (AI blackmailing people, AI blogging about how people are prejudiced against it for not letting it post, AI being racist) all demonstrate low level thought - not dogs, not rats, not mice, but instead the kind of thing that an insect could do.
Insects cannot do any of those things! All of those require massive amounts of language use. You are confusing morality with intelligence, and unfortunately one of the serious problems we're facing is that morality and intelligence aren't the same thing.
> You can get better results from AI simply by telling it not to guess and to only show results it can back up. That is not something a person has to be told. That is something we do automatically. A well trained dog does that (i.e. drug detection dogs know not to false alert if they are well trained).
So everything about this is wrong. First, humans do this all the time. And telling humans (young children, high school kids, and even college students) to think carefully and to do things step by step improves their results. Second, drug detection dogs have an incredibly high false positive rate, even the "well trained" ones. Standard estimates are that drug-detecting dogs have about a 40% to 75% false positive rate: https://www.abc.net.au/news/2018-12-03/fact-check-are-drug-dogs-incorrect-75-pc-of-the-time/10568410 https://www.npr.org/sections/thetwo-way/2011/01/07/132738250/report-drug-sniffing-dogs-are-wrong-more-often-than-right https://pmc.ncbi.nlm.nih.gov/articles/PMC10440507/. Now, part of this is likely due to genuine stray drug scents (e.g., drugs used to be in a bag, were taken out this morning, and the dog is smelling the residue a few hours later), but that's still an incredibly high false positive rate. Third, the most advanced models don't perform better when told to do this. In that regard, they are just like children who, after being told for years to think carefully and not guess, have functionally learned to do so.
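Part of what's going on with the dogs is base rates, which is worth a quick illustration (the numbers here are mine and purely illustrative, not from the studies linked above): even a fairly accurate dog sniffing a mostly-clean population will generate mostly false alerts.

```python
# Illustrative base-rate arithmetic (made-up numbers): a dog with 90%
# sensitivity and 93% specificity, sniffing a crowd where only 2% of
# people actually carry drugs.
carriers, clean = 0.02, 0.98
true_alerts = carriers * 0.90    # alerts on actual carriers
false_alerts = clean * 0.07      # alerts on clean people (1 - specificity)
wrong = false_alerts / (true_alerts + false_alerts)
print(f"{wrong:.0%} of alerts are false")  # ~79% of alerts are false
```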
> > someone else had 10 shares at 40 cents for Khameini to stay.
> This sounds like an assumption, which may be valid based on what they state, but changing policy seems to indicate that they might not maintain any actual truth.
Huh? No. This is the entire way a prediction market works. If one person buys a "yes" share at price X, then someone else has to have bought a "no" share at price 1-X. That's what ensures there is always enough to pay out. No one, not even the people filing this lawsuit, is claiming that Kalshi lied about running an actual prediction market; that would be a much bigger deal. What this lawsuit is claiming is that Kalshi did not adequately emphasize its death policy, so someone could buy "leave" shares without realizing that they would not get a full payout in the event of death.
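A toy sketch of the matching logic (my own illustration of how binary prediction markets generally work, not Kalshi's actual code):

```python
# Toy model of binary prediction-market matching (illustrative only): a YES
# order at x cents only fills against a NO order at 100 - x cents, so each
# matched pair always funds the full $1 payout to whichever side wins.
def match(yes_cents: int) -> tuple[int, int, int]:
    no_cents = 100 - yes_cents
    pot = yes_cents + no_cents  # always exactly 100 cents per contract
    return yes_cents, no_cents, pot

yes, no, pot = match(60)  # a YES buy at 60c requires a NO buyer at 40c
print(yes, no, pot)       # 60 40 100 -- the winner collects the full 100c
```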
"Life sucks, but death doesn't put out at all...." -- Thomas J. Kopp