
Comment Re:Competition (Score 3, Insightful) 59

Well, it's a problem for the US too. The last bastion of American industry is heavy equipment, including construction and agricultural machines. Interestingly, this is a prime industry for Europe as well; a lot of that equipment is still made locally. But China is now making its own versions and is just beginning to export them. And it's a double whammy: they can make them cheaper, but choose to just barely undercut the Western equivalents (say, by 20%), making a huge profit.

Comment Re:Europe is discovering what Canada discovered (Score 3, Insightful) 59

Except that in the case of Canada, there was a great deal of trade in both directions, in commodities as well as finished goods. It was a mutually beneficial arrangement that promoted US interests without ham-fisted authoritarian threats; this sort of trade made the US the powerhouse it was. In other words, this was a sharp shift in the US government's attitude, from friend to adversary (which is really how every relationship and business deal in Trump's life has ever been conducted). Sadly, there's no going back. The damage is done, and now that Trump's attitude has become the attitude of the GOP, the US will never recover the goodwill and trade benefits it once had with its closest allies and trading partners, no matter what a future Democrat does to try to undo the damage.

With China, though, there was no abrupt shift. China's goals have always been clear: the only things it wants from the West in terms of trade are raw commodities and foreign currency, whereas the West demands cheap goods, full stop. So China's domination of European industry and economies has been ongoing for years, aided by European policy and attitudes. China is happy to build high-quality items and sell them at a premium, but there's very little demand for that from the West. If there were such demand, we wouldn't have seen local manufacturing capability disappear in the first place. Tariffs are not going to change these facts.

Comment Re:PR article (Score 2) 219

Sure do :) I can provide more if you want, but start there, as it's a good read. Indeed, blind people are much better at understanding the consequences of colours than they are at knowing what colours things are.

Comment Re:For the record (Score 1) 102

Very interesting point. For all the talk of how efficient EVs are, the fact is that at higher speeds you need much more energy to accelerate. In other words, going from 0 to 20 mph in an instant requires relatively few kW compared to accelerating from 60 to 80 mph. This is why EVs have such ludicrous motor power ratings for their direct-drive systems. And in reality all EVs do have a gear train, even if it's a fixed ratio with few parts. It's a real head-scratcher why more don't use a two-speed gearbox to better handle the difference in energy requirements between low-speed and high-speed acceleration. They could use much smaller and cheaper electric motors too, with good efficiency.
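
The point falls straight out of kinetic energy scaling with v². A back-of-the-envelope sketch (the 2000 kg mass and 3-second sprint are assumptions for illustration, and drag and rolling resistance are ignored, which only widens the gap at speed):

```python
def mph_to_ms(mph):
    return mph * 0.44704

def avg_accel_power_kw(mass_kg, v0_mph, v1_mph, dt_s):
    """Average power (kW) to change kinetic energy from v0 to v1 in dt seconds."""
    v0, v1 = mph_to_ms(v0_mph), mph_to_ms(v1_mph)
    delta_ke = 0.5 * mass_kg * (v1**2 - v0**2)  # KE = 1/2 m v^2
    return delta_ke / dt_s / 1000.0

MASS = 2000.0   # kg, a typical mid-size EV (assumed)
DT = 3.0        # s, same duration for both sprints

low = avg_accel_power_kw(MASS, 0, 20, DT)    # ~27 kW
high = avg_accel_power_kw(MASS, 60, 80, DT)  # ~187 kW
print(f"0-20 mph: {low:.0f} kW, 60-80 mph: {high:.0f} kW ({high/low:.1f}x)")
```

Both sprints cover the same 20 mph increment in the same time, yet the 60-to-80 sprint needs exactly (80² − 60²)/20² = 7 times the power, before aerodynamic drag makes it worse.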

Comment Re:PR article (Score 1) 219

The congenitally blind have never seen colours. Yet in practice, they're nearly as good at answering questions about, and reasoning about, colours as the sighted.

One may raise questions about qualia, but the older I get, the weaker the qualia argument gets. I'd argue that I have qualia about abstracts, like "justice". I have a visceral feeling when I see justice and injustice, and experience it; it's highly associative for me. Have I ever touched, heard, smelled, seen, or tasted an object called "justice"? Of course not. But the concept of justice is so connected in my mind to other things that it's very "real", very tangible. If I think about "the colour red", is what I'm experiencing just a wave of associative connection to all the red things I've seen, some of which have strong emotional attachments to them?

What's the qualia of hearing a single guitar string? Could thinking about "a guitar string" shortly after my first experience with a guitar string, when I don't yet have a good associative memory of its sound, count as qualia? What about when I've heard guitars played many times and now have a solid memory of guitar sounds, and I then think about the sound of a guitar string? What if it's not just a guitar string, but a riff, or a whole song? Do I have qualia associated with *the whole song*? The first time? Or once I know it by heart?

Qualia seems like a flexible thing to me, merely a connection to associative memory. And sorry, I seem to have gotten offtopic in writing this. But to loop back: you don't have to have experienced something to have strong associations with it. Blind people don't learn of colours through seeing them. While there certainly is much to life experiences that we don't write much about (if at all) online, and so one who learned purely from the internet might have a weaker understanding of those things, by and large, our life experiences and the thought traces behind them very much are online. From billions and billions of people, over decades.

Comment Re:PR article (Score 1, Insightful) 219

Language does not exist in a vacuum. It is a result of the thought processes that create it. To create language, particularly about complex topics, you have to be able to recreate the logic, or at least *a* logic, that underlies those topics. You cannot build an LLM from a Markov model. If you could store one state transition probability per unit of Planck space, a different one at every unit of Planck time, across the entire universe, throughout the entire history of the universe, you could only represent the state transition probabilities for the first half of the first sentence of A Tale of Two Cities.
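
The combinatorics behind that hyperbole are easy to check. An order-n Markov model needs a transition distribution for every possible context, i.e. vocab^n of them. A rough sketch (the 50,000-token vocabulary and the log-scale cosmology figures, ~10^185 Planck volumes in the observable universe and ~10^61 elapsed Planck times, are assumed round numbers):

```python
import math

VOCAB = 50_000          # order of a modern LLM's token vocabulary (assumed)
LOG_PLANCK_VOLUMES = 185  # ~1e185 Planck volumes in the observable universe
LOG_PLANCK_TIMES = 61     # ~1e61 Planck times since the Big Bang
LOG_UNIVERSE_SLOTS = LOG_PLANCK_VOLUMES + LOG_PLANCK_TIMES  # space-time "cells"

def log10_markov_states(context_len, vocab=VOCAB):
    """log10 of the number of distinct contexts an order-n Markov model
    must store a transition distribution for: vocab ** context_len."""
    return context_len * math.log10(vocab)

# Longest context the universe-as-lookup-table could hold one entry per cell for:
n = 1
while log10_markov_states(n + 1) <= LOG_UNIVERSE_SLOTS:
    n += 1
print(f"Max context: {n} tokens "
      f"(10^{log10_markov_states(n):.0f} states vs 10^{LOG_UNIVERSE_SLOTS} slots)")
```

Under these assumptions the budget runs out around a 52-token context, i.e. a few dozen words, while real LLMs condition on contexts of tens of thousands of tokens.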

For LLMs to function, they have to "think", for some definition of thinking. You can debate over terminology, or how closely it matches our thinking, but what it's not doing is some sort of "the most recent states were X, so let's look up some statistical probability Y". Statistics doesn't even enter the system until the final softmax, and even then, only because you have to go from a high-dimensional (latent) space down to a low-dimensional (linguistic) space, so you have to "round" your position to nearby tokens, and there are often many tokens nearby. It turns out that you get the best results if you add some noise into your roundings (indeed, biological neural networks are *extremely* noisy as well).
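
That final "noisy rounding" step is just temperature-scaled softmax sampling. A minimal sketch (the four-element logits are hypothetical scores for candidate tokens "near" the latent position):

```python
import math, random

def softmax(logits, temperature=1.0):
    """Turn final-layer scores into a probability distribution.
    Lower temperature sharpens it toward the argmax; higher admits more noise."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature=1.0, rng=random):
    """The only stochastic step in the pipeline: pick one nearby token
    in proportion to its probability."""
    probs = softmax(logits, temperature)
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# Hypothetical logits for four candidate tokens near the latent position.
logits = [2.0, 1.8, 0.3, -1.0]
print(softmax(logits, temperature=0.7))  # sharper than temperature=1.0
```

Everything before this step is deterministic arithmetic on the latent representation; the "statistics" only appears here, when that representation must be collapsed onto a discrete vocabulary.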

As for this article, it's just silly. It's a rant based on a single cherry-picked contrarian paper from 2024, and he doesn't even represent it right. The paper's core premise is that intelligence is not linguistic, and we've known that for a long time. But LLMs don't operate on language. They operate on a latent space, and are entirely indifferent as to what modality feeds into and out of that latent space. The author takes the paper's further argument that LLMs do not operate in the same way as a human brain, and hallucinates that into "LLMs can't think". He goes from "not the same" to "literally nothing at all". Also, the end of the article isn't about science at all; it's an argument Riley makes from the work of two philosophers, and it's a massive fallacy that misunderstands not only LLMs but the brain as well (*you* are a next-everything prediction engine; to claim that being a predictive engine means you can't invent is to claim that humans cannot invent). And furthermore, that's Riley's own synthesis, not even a claim by his cited philosophers.

For anyone who cares about the (single, cherry-picked, old) Fedorenko paper, the argument is: language contains an "imprint" of reasoning, but not the full reasoning process; it's a lower-dimensional space than the reasoning itself (nothing controversial there with regard to modern science). Fedorenko argues that this implies the models don't build up a deeper structure of the underlying logic but only the surface logic, which is a far weaker argument. If the text reads "The odds of a national of Ghana conducting a terrorist attack in Ireland over the next 20 years are approximately..." and it is to continue with a percentage, "surface logic" is not what the model needs to perform well at that task. It's not just "what's the most likely word to come after 'approximately'". Fedorenko then extrapolates his reasoning to conclude that there will be a "cliff of novelty". But this isn't actually supported by the data; novelty metrics continue to rise, with no sign of his supposed "cliff". Fedorenko notes that in many tasks the surface logic of the model and a human will be identical and indistinguishable, but he expects that to generally fail with deeper tasks of greater complexity. He thinks that LLMs need to change architecture and combine "language models" with a "reasoning model" (ignoring that the language models *are* reasoning - heck, even under his own argument - and that LLMs have crushed the performance of formal symbolic reasoning engines, whose rigidity makes them too inflexible to deal with the real world).

But again, Riley doesn't just take Fedorenko at face value; he runs even further with it. Fedorenko argues that you can actually get quite far just by modeling language. Riley, by contrast, argues - or should I say, next-word predicts with his human brain - that because LLMs are just predicting tokens, they are a "Large Language Mistake" and the bubble will burst. The latter does not follow from the former. Fedorenko's actual argument is that LLMs can substitute for humans in many things - just not everything.

Comment Re:Anything for money (Score 3, Informative) 102

In some ways US standards are way stricter than European ones; in other ways, not so much. Mainly the standards are different and focus on different aspects of safety. American standards emphasize things like rollover protection more than European standards do, and US crash-test standards are higher too; I think this might have to do with everyone driving big SUVs here in North America. Europe focuses on other safety features, including driver-assistance technologies. AI tells me that European regs now require an emergency call button. Europe also allows headlights with no distinct high or low beam that transition between the two as the car detects oncoming traffic, as well as steerable headlights, both of which face stricter requirements in the US, and it permits different tail-light configurations than the US does.

Besides the tariffs and outright ban on Chinese EVs, the manufacturers would have to change their vehicles for North America, and I suspect they will once the US reverses the ban.

Canada is about to allow Chinese EVs in and reduce tariffs, but the reality is that only Chinese-made Teslas will meet safety regs here. Canada is far too small a market for other Chinese companies to build special vehicles for.

Comment Sure, whatever (Score 1) 219

Show me how your insights have enabled you to create more advanced functionality, and then I'll be interested.

Much of the critique seems irrelevant to AI other than LLMs, such as self-driving cars which map visual input to actions.

Comment What I've been telling colleagues... (Score 2) 219

AI = "Amalgamation of Information"

AI just uses probability calculations to amalgamate an "average" of the information on a subject. It's not smart. It doesn't think. It's not self-aware. It's just a digital hamburger grinder that churns out a paste of whatever gets put into its hopper.

Comment Re:The thumbnails make themselves (Score 1) 102

My wife and I bought a used 2024 Mini Cooper EV just last weekend, for roughly that amount. It seems well built and is very fun to drive. However, it's only useful for driving around town, because its range is only 120 miles. Technologically this is clearly out of date. I couldn't help but think that, if not for trade restrictions, we could be paying the same for a new car with more advanced batteries and motors. In fact the 2025 Mini Cooper EV, with almost double the range, is not available in the US because of trade restrictions.

Comment Re:Forget about 25 (Score 1) 35

I never liked the framing of "their brain hasn't finished maturing." You could just as well say that after 26 the brain begins its decline into risk aversion and senescence. Somebody has to go out and slay the beasts, fight the enemies, and make the babies, and young people in their physical prime did most of it.
