Comment Re:But I thought the phrase went the other way? (Score 5, Interesting) 57

Bari Weiss is right wing? From what perspective? I mean, she doesn’t buy into the Steele Report, supports free speech, and doesn’t think Musk is a Nazi, but that’s hardly “right wing”. Is it? Or does the left wing club require thinking Musk is a Nazi, supporting censorship, and assuming Trump is a Putin super spy?

That Bari Weiss doesn't buy into some of the most extreme ideas of the modern right doesn't make her not right wing; it just shows how far off the rails the American right has gone. As for the idea that free speech is a right-wing value, the last year of the Trump administration should make it very clear how much of the right cared about free speech only when it was useful to them. Trying to take away broadcast licenses because one doesn't like what channels have to say https://www.bbc.com/news/articles/c626ye5gq16o is not remotely free speech. And Weiss herself has shown that same hypocrisy, which I find particularly disappointing because she was someone I disagreed with on some issues but who seemed, 2 or 3 years ago, to be a genuine advocate for free speech. And you'll find that I'm someone on Slashdot with a history of saying that the left had serious free speech issues. But much of her behavior, including her time at CBS and also her actions at the University of Austin, showed that her support of free speech was only a fig leaf for when she was not in power. Censorship right now is coming far more from the right than the left. Most of the rest of your comment is essentially a strawman of what people on the left generally believe (and I say that as a pretty center-left person who finds much of the left pretty aggravating).

Comment But I thought the phrase went the other way? (Score 4, Informative) 57

I thought the phrase was "Get woke, go broke," but apparently CBS is suffering terrible trouble after it became right wing. I guess that phrase wasn't accurate either. And before anyone questions it: yes, CBS has gone drastically right wing since Bari Weiss took over. And this hasn't just been a subtle editorial slant, but things like pulling a 60 Minutes episode that was critical of the Trump administration https://www.pbs.org/newshour/nation/cbs-editor-in-chief-bari-weiss-pulls-60-minutes-piece-on-trump-deportation-policy-hours-before-air and killing Colbert's show right after he had an episode critical of Trump and CBS's connection https://www.theguardian.com/tv-and-radio/2025/jul/22/stephen-colbert-trump-cbs-bribe. In fairness to CBS leadership, this may not be as much about their own political beliefs as about trying to get antitrust approval for the Skydance and Paramount merger https://en.wikipedia.org/wiki/Merger_of_Skydance_Media_and_Paramount_Global.

Comment Re:AI is not very intelligent and not improving. (Score 3, Interesting) 148

Almost everything about this is just wrong. Let's break it down and discuss each claim.

Parrots sound like they are speaking, but they are merely repeating.

So to start off, this is not an accurate statement about parrots. Parrots can recognize individual objects and individual people, and can make requests for specific things. African Grey parrots are the most studied in this regard, but they are not the only such species. See https://pmc.ncbi.nlm.nih.gov/articles/PMC11196360/. Alex, one of the first African Greys to be systematically studied, even had to be removed from the room when other parrots were being tested because he would sometimes correct them if they got an identification wrong. So, if you are underestimating what parrots can do in the first place, that should already be a pretty large warning sign.

AI has only one single reasoning methodology - prediction based on existing data.

This is accurate. This is also what humans do the vast majority of the time. Prediction based on existing data is incredibly powerful.

AI is not gaining more methods, it is instead just increasing the data. This gives 'better' results, but evolution not revolutionary. Minor improvements at great speed, not major improvements.

I'm not sure what content this claim has, but insofar as it has content, it ignores the vast improvements in benchmarks, which certainly look like gaining more methods. The degree to which models today are better than early models is just massive, to the point where many types of tasks which were on standard benchmarks 3 years ago are not even used on benchmarks today because models routinely score 99% on them. Now, some of that is due to questions leaking, but some is not. For example, one standard benchmark for a while was the AIME, a standard high school math competition. Using each year's AIME was reasonable because one could be confident it wasn't in the training data. The AIME is an invitational competition in the US for students who perform well on the AMC competition. The easiest AIME problems should be solvable by any student who is comfortable with algebra 2, and they get progressively harder. There are 15 problems on each test. For example, here is problem 1 from 2023:

The numbers of apples growing on each of six apple trees form an arithmetic sequence where the greatest number of apples growing on any of the six trees is double the least number of apples growing on any of the six trees. The total number of apples growing on all six trees is 990. Find the greatest number of apples growing on any of the six trees.
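To give a sense of how routine that first problem is, a few lines of Python brute-force it (my own sketch, not from the contest materials): the six counts form an arithmetic sequence a, a+d, ..., a+5d with a+5d = 2a and total 990.

```python
# Brute-force the quoted AIME problem: six apple counts a, a+d, ..., a+5d,
# greatest equal to twice the least (a + 5d == 2a), total 990.
solutions = [(a, d)
             for a in range(1, 991)
             for d in range(0, 199)
             if a + 5 * d == 2 * a and 6 * a + 15 * d == 990]

print(solutions)  # [(110, 22)] -- least term 110, common difference 22
greatest = solutions[0][0] + 5 * solutions[0][1]
print(greatest)   # 220
```

The constraint a + 5d = 2a forces a = 5d, so the total 6a + 15d = 45d = 990 gives d = 22 and a greatest count of 220.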

By the time one gets to problem 15, one has things like the following:

Find the largest prime number p... (I've rewritten the problem slightly for formatting here, but this was problem 15 of the 2023 AIME I. The other example I gave was from the 2023 AIME II; there are two test dates each year. I chose problems from different contests because I was trying to avoid having to put any complicated diagrams in this comment.) You can find all the AIME problems and solutions at https://artofproblemsolving.com/wiki/index.php/AIME_Problems_and_Solutions to get a better idea of what they all look like. Now, ChatGPT in its early versions could typically get at most a single AIME problem correct. Today, the best models are routinely getting 98%, with some scoring 100% every time https://www.vellum.ai/llm-leaderboard. This is not the only example. The IMO, the International Mathematical Olympiad, is a proof-based international competition and the highest-level high school competition in the world. Models started off not being able to solve a single problem; now, multiple models are getting gold medals on the IMO. The Putnam exam is a college-level equivalent, and some AI systems are now scoring perfect scores on it https://axiommath.ai/territory/from-seeing-why-to-checking-everything. If you want more, Carina Hong, who made the first system which could ace the Putnam, has an interview here: https://www.youtube.com/watch?v=xldMXTPGMGI. If these are all merely improvements in "data" and not in methods, then we should recognize the absolute power of increases in data.

The various stories of evil (AI blackmailing people, AI blogging about how people are prejudiced against it for not letting it post, AI being racist) all demonstrate low level thought - not dogs, not rats, not mice, but instead the kind of thing that an insect could do.

Insects cannot do any of those things! All of those require massive amounts of language use. You are confusing morality with intelligence. Unfortunately, one of the serious problems we're facing is that morality and intelligence aren't the same thing.

You can get better results from AI simply by telling it not to guess and to only show results it can back up. That is not something a person has to be told. That is something we do automatically. A well trained dog does that (i.e. drug detection dogs know not to false alert if they are well trained).

So everything about this is wrong. First, humans do this all the time. Telling humans (young children, high school kids, and even college students) to think carefully and do things step by step improves their results. Second, drug detection dogs have an incredibly high false positive rate, even the "well trained" ones. Standard estimates are that drug detection dogs have about a 40% to 75% false positive rate https://www.abc.net.au/news/2018-12-03/fact-check-are-drug-dogs-incorrect-75-pc-of-the-time/10568410 https://www.npr.org/sections/thetwo-way/2011/01/07/132738250/report-drug-sniffing-dogs-are-wrong-more-often-than-right https://pmc.ncbi.nlm.nih.gov/articles/PMC10440507/. Now, part of this is likely due to genuine stray drug scents (e.g. there used to be a drug in a bag, it was taken out this morning, and the dog is smelling it a few hours later), but that's still an incredibly high false positive rate. Third, the most advanced models don't perform better when told to do this. In that regard, they are just like children who, after being told for years to think carefully and not guess, have functionally learned to do so.

Comment Re:Neccessary but not sufficient (Score 2) 61

Where has LeCun said that anything here is necessary and sufficient? If something is in your view or his view necessary to accomplish a goal, then of course trying to do that thing even if it isn't sufficient makes sense. I'm also not sure what the point of your last sentence is since LeCun is one of the more prominent people who doesn't think that LLM AIs will lead to intelligence, and even says so in the summary above.

Comment Re: The End Cannot Come Soon Enough (Score 2) 44

Ok. So Kalshi has for a long time taken the position that markets would resolve this way in the event of death. This is so they aren't accused of running a functional assassination market. This is in contrast to PredictIt, their rival, which has had a policy of paying out on deaths. This is essentially what Kalshi means when they are quoted in the summary above: "If Ali Khamenei dies, the market will resolve based on the last traded price prior to confirmed reporting of death." So there is a slight complication here in reality, which is that they resolve the shares based on the last trade, not the amount someone initially traded them for.

> someone else had 10 shares at 40 cents for Khameini to stay.

This sounds like an assumption, which may be valid based on what they state, but changing policy seems to indicate that they might not maintain any actual truth.

Huh? No. This is the entire way a prediction market works. If one person buys a yes share at price X, then someone else has to have bought a no share at price 1-X. That's what ensures there is always enough to pay out. No one, not even the people filing this lawsuit, is claiming that Kalshi lied about running an actual prediction market; that would be a much bigger deal. What this lawsuit claims is that Kalshi did not adequately emphasize their death policy, so someone could buy "leave" shares not realizing that they would not get a full payout in the event of death.

Comment Re: The End Cannot Come Soon Enough (Score 1) 44

No, it doesn't reduce their losses. Here, let's do a concrete example. Say one person has bought 10 shares at 60 cents for Khamenei to leave. That means someone else holds 10 shares at 40 cents for Khamenei to stay. What the death clause means is that rather than the 60-cent people getting a full payout and the 40-cent people getting nothing, the 60-cent people get 60 cents back per share and the 40-cent people get their 40 cents back per share. So the total payout here is the same. Does that make sense?
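To put numbers on that, here is a minimal Python sketch of the two resolution rules. The 10-share, 60-cent figures are the hypothetical from the comment; the variable names are my own, and for simplicity I assume the last traded price equals the purchase price.

```python
# Hypothetical market: 10 contracts, "leave" (yes) bought at 60 cents,
# so the matching "stay" (no) side funds the other 40 cents of each $1
# contract.  Work in integer cents to avoid floating-point noise.
N = 10          # contracts
YES = 60        # cents paid per "leave" share
NO = 100 - YES  # cents paid per "stay" share

pot = N * (YES + NO)  # 1000 cents: every contract is fully funded at $1

# Normal resolution: the winning side gets $1 per share, the loser nothing.
normal_payout = {"leave": N * 100, "stay": 0}

# Death-clause resolution: each side is paid the last traded price, i.e.
# positions are cashed out rather than settled to $1 or $0.
death_payout = {"leave": N * YES, "stay": N * NO}

# Total paid out is identical either way: the clause redistributes the
# pot between the two sides, it does not shrink it.
assert sum(normal_payout.values()) == pot == sum(death_payout.values())
```

The design point is that every yes/no pair already contributes exactly $1 to the pot, so the exchange's total liability is fixed regardless of which resolution rule applies.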

Comment Re:The End Cannot Come Soon Enough (Score 1) 44

If you want to make that argument for sports gambling, then sure go ahead. Sports gambling is wildly addictive, very popular and set up in ways which maximize addiction. And of course, it has no positive externalities. But none of that applies to prediction markets. Labeling prediction markets as "gambling" misses all of that.

Comment Re:Really? (Score 1) 78

Modern panels work better, so the surface-area problem is not as much of a problem as it used to be, though it is still an issue. But all the other problems remain. One is that the panels are on a moving vehicle, which means they get all the extra wear and tear from vibration and from exposure to dust, grit, and gravel from the road. And they add extra mass to the car. It's easier to just have the panels in a fixed location like a house and charge from that. If the panels were really efficient enough, they might benefit someone with an apartment who cannot add panels to a house, but that's a marginal case, and even then it isn't going to work that well. The fact that the summary needs to explicitly talk about hypermiling reflects how this isn't anywhere near being a reasonable car for a general market.
