Comment Re:PR article (Score 1) 212
Sure do
The congenitally blind have never seen colours. Yet in practice they're nearly as good at answering questions about colours, and reasoning about them, as the sighted are.
One may raise questions about qualia, but the older I get, the weaker the qualia argument gets. I'd argue that I have qualia about abstracts, like "justice". I have a visceral feeling when I see justice and injustice, and experience it; it's highly associative for me. Have I ever touched, heard, smelled, seen, or tasted an object called "justice"? Of course not. But the concept of justice is so connected in my mind to other things that it's very "real", very tangible. If I think about "the colour red", is what I'm experiencing just a wave of associative connection to all the red things I've seen, some of which have strong emotional attachments to them?
What's the qualia of hearing a single guitar string? Could thinking about "a guitar string sounding" shortly after my first experience with a guitar string, when I don't yet have a good associative memory of it, count as qualia? What about when I've heard guitars play many times and have a solid memory of guitar sounds, and I then think about the sound of a guitar string? What if it's not just a guitar string, but a riff, or a whole song? Do I have qualia associated with *the whole song*? The first time? Or once I know it by heart?
Qualia seems like a flexible thing to me, merely a connection to associative memory. And sorry, I seem to have gotten offtopic in writing this. But to loop back: you don't have to have experienced something to have strong associations with it. Blind people don't learn of colours through seeing them. While there certainly is much to life experiences that we don't write much about (if at all) online, and so one who learned purely from the internet might have a weaker understanding of those things, by and large, our life experiences and the thought traces behind them very much are online. From billions and billions of people, over decades.
Language does not exist in a vacuum. It is a result of the thought processes that create it. To create language, particularly about complex topics, you have to be able to recreate the logic, or at least *a* logic, that underlies those topics. You cannot build an LLM from a Markov model. If you could store one state transition probability in every Planck volume of the universe, a different one at every unit of Planck time, throughout the entire history of the universe, you could only represent the state transition probabilities for roughly the first half of the first sentence of A Tale of Two Cities.
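To put rough numbers on that (my own back-of-the-envelope sketch, not anything from the article: I'm assuming a ~50,000-token BPE-style vocabulary and the usual textbook values for the Planck scales and the observable universe):

    # Back-of-the-envelope: how much context a raw n-gram Markov model could
    # afford if every Planck volume of the observable universe stored a
    # different transition probability at every Planck tick of cosmic history.
    import math

    VOCAB = 50_000              # assumed BPE-style vocabulary size
    PLANCK_LENGTH = 1.6e-35     # metres
    PLANCK_TIME = 5.4e-44       # seconds
    UNIVERSE_VOLUME = 4e80      # cubic metres, observable universe (approx.)
    UNIVERSE_AGE = 4.35e17      # seconds (~13.8 billion years)

    planck_volumes = UNIVERSE_VOLUME / PLANCK_LENGTH**3   # ~1e185
    planck_ticks = UNIVERSE_AGE / PLANCK_TIME              # ~8e60
    storage_slots = planck_volumes * planck_ticks          # ~1e246 numbers

    # An order-n Markov model over VOCAB tokens needs VOCAB**(n+1) transition
    # entries; solve VOCAB**(n+1) <= storage_slots for n.
    max_order = math.log(storage_slots) / math.log(VOCAB) - 1
    print(f"storage slots: ~1e{math.log10(storage_slots):.0f}")
    print(f"largest affordable Markov order: ~{max_order:.0f} tokens of context")

Under those assumptions the universe runs out of room at roughly fifty tokens of context, which is indeed in the neighbourhood of half of Dickens's famously long opening sentence.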
For LLMs to function, they have to "think", for some definition of thinking. You can debate the terminology, or how closely it matches our thinking, but what they're not doing is some sort of "the most recent states were X, so let's look up some statistical probability Y". Statistics doesn't even enter the system until the final softmax, and even then only because you have to go from a high-dimensional (latent) space down to a low-dimensional (linguistic) space, so you have to "round" your position to nearby tokens, and there are often many tokens nearby. It turns out that you get the best results if you add some noise into your roundings (indeed, biological neural networks are *extremely* noisy as well).
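A minimal sketch of what I mean by that "rounding with noise" step (my own toy illustration with made-up dimensions, not any particular model's code): project the final latent vector onto the vocabulary, softmax the scores into probabilities, then sample with a temperature instead of always taking the single closest token.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_next_token(hidden, unembed, temperature=0.8):
        # hidden:  (d_model,) final latent vector for the current position
        # unembed: (vocab, d_model) output projection back to token space
        logits = unembed @ hidden                        # a score for every nearby token
        logits = (logits - logits.max()) / temperature   # temperature = how much noise to allow
        probs = np.exp(logits)
        probs /= probs.sum()                             # softmax: scores -> probabilities
        return rng.choice(len(probs), p=probs)           # the "noisy rounding" to one token

    # Toy numbers: an 8-token vocabulary and a 4-dimensional latent space.
    hidden = rng.normal(size=4)
    unembed = rng.normal(size=(8, 4))
    print(sample_next_token(hidden, unembed))

Greedy argmax would return the same token every time; sampling spreads the choice across the nearby tokens, and that's the only point in the whole pipeline where anything statistical happens.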
As for this article, it's just silly. It's a rant based on a single cherry-picked contrarian paper from 2024, and he doesn't even represent it right. The paper's core premise is that intelligence is not linguistic - and we've known that for a long time. But LLMs don't operate on language. They operate on a latent space, and are entirely indifferent as to what modality feeds into and out of that latent space. The author takes the paper's further argument that LLMs do not operate in the same way as a human brain, and hallucinates that into "LLMs can't think". He goes from "not the same" to "literally nothing at all". Also, the end of the article isn't about science at all; it's an argument Riley makes from the work of two philosophers, and it's a massive fallacy that misunderstands not only LLMs but the brain as well (*you* are a next-everything prediction engine; to claim that being a predictive engine means you can't invent is to claim that humans cannot invent). And furthermore, that's Riley's own synthesis, not even a claim made by his cited philosophers.
For anyone who cares about the (single, cherry-picked, old) Fedorenko paper, the argument is that language contains an "imprint" of reasoning, but not the full reasoning process - that it's a lower-dimensional space than the reasoning itself (nothing controversial there with regard to modern science). Fedorenko argues that this implies the models don't build up a deeper structure of the underlying logic, only the surface logic, which is a far weaker argument. If the text reads "The odds of a national of Ghana conducting a terrorist attack in Ireland over the next 20 years are approximately...." and it is to continue with a percentage, it takes more than "surface logic" for the model to perform well at the task. It's not just "what's the most likely word to come after 'approximately'". Fedorenko then extrapolates his reasoning to conclude that there will be a "cliff of novelty". But this isn't actually supported by the data; novelty metrics continue to rise, with no sign of his supposed "cliff". Fedorenko notes that in many tasks, the surface logic between the model and a human will be identical and indistinguishable - but he expects that to generally fail with deeper tasks of greater complexity. He thinks that LLMs need to change architecture and combine "language models" with a "reasoning model" (ignoring that the language models *are* reasoning - heck, even under his own argument - and that LLMs have crushed the performance of formal symbolic reasoning engines, whose rigidity makes them too inflexible to deal with the real world).
But again, Riley doesn't just take Fedorenko at face value; he runs even further with it. Fedorenko argues that you can actually get quite far just by modeling language. Riley, by contrast, argues - or should I say, next-word predicts with his human brain - that because LLMs are just predicting tokens, they are a "Large Language Mistake" and the bubble will burst. The latter does not follow from the former. Fedorenko's argument is actually that LLMs can substitute for humans in many things - just not everything.
Greater Fool scheme finds Greatest Possible Fools (governments) to keep pumping money into their zero-sum game.
Why would he feel inadequate when, according to a trustworthy source, he's a better boxer than Mike Tyson, fitter than LeBron James, hotter than Tom Brady, one of the top minds in history with a near-Olympian physique, the world's best runway model, better at resurrection than Jesus, the world's best bottom (ahem) (cough) and the ultimate throat goat?
Sooner or later, we'll end up at the point where trying to maintain the ways of the past is a fruitless fight. Teachers' jobs are no longer going to be "to teach" - that's inevitably getting taken over by AI (for economic reasons, but also because it's a one-on-one interaction in which the student has no fear of asking questions, and because, at least at a pre-university level, the AI probably knows the material a lot better than the average teacher, who these days is often an ignorant gym coach or whatnot). Their jobs will be *to evaluate frequently* (how well does the student know things when they don't have access to AI tools?). The future of teachers - nostalgia aside - is as daily exam administrators, making sure that students are actually doing their studies. Even if said exams were written by, and will be graded by, AI.
I used to work for Sling TV, and you basically have that backwards. ESPN is the part of Disney's package that people are willing to pay money for. The shutdown and negotiations every year are just Disney forcing the various providers to pay for and carry its other channels. That's why Disney always holds these negotiations during football season: so that if it has to shut someone down, that provider's customers actually care. Every year, viewership on Disney's other channels (and non-sports channels in general) is lower, and the prices that the content producers require go up. Scripted television is in serious decline, and Hollywood is using sports fans to prop it up.
As an example, if you don't care about sports you can get Disney+ without ads for about $12 a month. Disney will happily throw in Hulu for that same price if you will watch some ads. You can binge-watch the shows that you care about and then switch to another channel. Heck, you can buy entire seasons of their shows a la carte. You can't get ESPN, however, without paying at least $45/month, and that's for a package with no non-Disney channels that's chock full of ads. For the record, that's basically what the streaming services are paying Disney as well. When I worked at Sling, the entirety of the subscription fees went to the content companies (primarily Disney). There is essentially no profit in cable packages. All of the profit has to be made up somewhere else.
People that aren't sports fans, especially if they are entertainment fans, tend to believe that scripted programming is carrying sports, but it is the other way around. That's why AppleTV, which has spent over $20 billion creating content for its channel, has about as many subscribers as the number of people who typically watch a single episode of Thursday Night Football, the worst professional football game of the week. Amazon Prime pays $1 billion a year for that franchise, and it is a bargain compared to creating scripted content. Apple makes great television that almost no one pays for. The other content providers are in the same boat. You'll notice, for example, that Netflix's most expensive package is $25/month, and its revenue per user in the U.S. is around $16. That's ad-free. The lowest promotional price you can pay for ESPN is basically twice that, and it always comes with ads. What's more, sports fans tend to actually watch the ads.
Sling is selling day and weekend passes because it knows that most of its customers only have the service to watch the game. No one is watching linear television anymore, but the content creators have built their entire business around the idea of having a channel that they fill up with content. Even with Sling's ridiculous prices, those customers can typically watch the games they want to watch for less than the cost of maintaining a subscription.
I have spent most of my adult life in the sports world, but I don't watch sports. I personally believe that in the long run sports television is probably going to end up uncoupled from scripted television. I think that is going to be very bad news for people that like scripted television.
"It's my estimation that every man ever got a statue made of him was one kind of a son of a bitch or another." --Malcolm Reynolds
(Ironically applies well to Joss Whedon himself. Kind of wonder if one of the show writers was thinking about Joss when they wrote that...)
The only single-source point of failure is me.
I think I saw someone swimming in some sewage en route from scraping a bear carcass off the road, let me go check.
1. I got asked once if I played World of Warcraft, since they saw a guy with the name "thegarbz" playing. I said no. By the way, I know exactly who that person is, because he impersonated me as a joke. I found that flattering and funny, but it has no impact on my life beyond that.
Reminds me of my first email account
I don't trust single points of failure.
Yeah, this. If I have to sign up to some site that I don't care at all if it gets hacked, I use a throwaway password. Oh noez, someone might compromise my WidgetGenerator.foo.bar account and generate some widgets in my name, heavens to betsy!
His surname is one transposition away from "AI Mode".
Anthropic's entire pitch has always been safety. Innovation like this tends to favor a very few companies, and it leaves behind a whole pile of losers that also had to spend ridiculous amounts of capital in the hopes of catching the next wave. If you bet on the winning company you make a pile of money, if you pick one of the losers then the capital you invested evaporates. Anthropic has positioned itself as OpenAI, except with safeguards, and that could very well be the formula that wins the jackpot. Historically, litigation and government sponsorship have been instrumental in picking winners.
However, as things currently stand, Anthropic is unlikely to win on technical merits over its competition. So Dario's entire job as CEO is basically to get the government involved. If he can create enough doubt about the people that are currently making decisions in AI circles that the government gets involved, either directly through government investment or indirectly through legislation, then his firm has a chance at grabbing the brass ring. That's not to say that he is wrong; he might even be sincere. It is just that it isn't surprising that his pitch is that AI has the potential to be wildly dangerous and that we need to think about safety. That's essentially the only path that makes his firm a viable long-term player.
"Hello again, Peabody here..." -- Mister Peabody