Comment Re:EVs are not a solution beacuse of (Score 1) 129

A Tesla Model Y battery is 1700 pounds, whereas a full gas tank of a typical sedan is less than 150 pounds. The battery thus increases the weight of the Model Y by about 35% versus a similar gas-fired sedan.

No, because the electric motor is lighter than the combination of engine and transmission. Electric vehicles are heavier, just not by that much. After a quick search, the car I found closest in length to the Model 3 is the Skoda Octavia; the weight difference is between 5 and 25%, depending on which configurations you choose.

So yeah EVs are heavier, but it's not as much as your estimate and it's not catastrophic.

Comment Re:Whenever an outlier like Norway (Score 1) 129

Worldwide figures are not relevant in this context,

They are if it's a question of scale.

Short answer: they're ahead because (a) they want to be and (b) they have unique structural advantages which other countries, who also very much would like to be ahead, do not have.

(a) is vastly more important than (b). America is the richest country in the world. In terms of GDP per capita, Norway is a bit ahead, but not wildly so. France has oodles of electricity and a substantially lower GDP per capita (a bit over half of America's), yet electric cars have double the market share there compared to America. The only structural advantage France has is investment.

Being loaded from fossil sales and having oodles of electricity available

America is loaded from a variety of things. Norway has easy, cheap hydro, and that was important in the past. But we're in 2025 now. Wind and solar are now doable cheaply, and America has fucktons of heavily insolated land, and plenty of windy places too. What's missing is the will. Investment is needed in grid infra, and the BANANAs need to be told to go kick rocks.

Every time, stories come along with excuses: America is too cold (it is not), America is too big (it is not), America is too poor (WTF), there aren't enough cars (there really are), and a litany of others. It's purely political at this stage.

Comment Heuristic (Score 1) 34

It's expected. At their core, all chess algorithms search a min-max tree, but instead of doing an exhaustive width-first or depth-first search, they use heuristics to prioritize some branches of the tree (A-star).

On the modest hardware of the older machine, there isn't that much you can explore before the player gets bored waiting.
So obviously, you're going to use much more stringent rules: "never take a branch where you lose a piece" prunes entire swaths of the tree, whereas "see if sacrificing piece XXX gives us a better path" would require exploring much more of it.
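For illustration, here's a minimal sketch of that kind of hard pruning inside a depth-limited min-max search. The game tree is a toy stand-in, not a real chess engine, and `loses_material` is a hypothetical predicate standing in for "this branch loses a piece":

```python
def minimax(state, depth, maximizing, children, evaluate, loses_material):
    """Depth-limited minimax with a hard 'never lose material' pruning rule.

    children(state)          -> list of successor states
    evaluate(state)          -> static score of a position
    loses_material(s, child) -> True if moving to `child` loses a piece
    """
    # Prune whole subtrees up front instead of exploring them.
    moves = [m for m in children(state) if not loses_material(state, m)]
    if depth == 0 or not moves:
        return evaluate(state)
    scores = [
        minimax(m, depth - 1, not maximizing, children, evaluate, loses_material)
        for m in moves
    ]
    return max(scores) if maximizing else min(scores)
```

With the pruning rule active, the branch that would have scored higher is never even visited, which is exactly the trade-off the old machines made: cheaper search, occasionally missing a good sacrifice.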

Having been trained on everything the corporation could scrape from the internet, I would expect an LLM to have seen a lot of real-world chess games (e.g. reports of games from online archives of chess magazines). So as long as the tokeniser can parse the notation found there (or at least distinguish the moves; it doesn't really need to "understand" the notation in English), it has a large corpus of "lists of moves leading to a win", which would include real moves (sacrifices, as you mention).

And given a large enough model to encode that, it would be in the same ballpark as the hidden Markov models of yore -- provided it keeps track of the current state of the game (which it currently does not).

Comment Devil's in the detail. (Score 1) 34

I wonder if you wouldn't win if you just told ChatGPT to write a chess AI and then used that chess AI to beat the Atari. Writing code is something text models are good at. Playing chess is not.

The devil is in the detail.
All chess algorithms are A-star: they search a min-max tree, but use heuristics to prioritize some branches instead of doing a width-first or depth-first exhaustive search.
Generating a template of a standard chess algorithm would probably be easy for a chatbot (these are prominently featured in tons of "introduction to machine learning" courses that the LLM's training could have ingested); writing the heuristic function to guide the A-star search is more of an art form, and is probably where the chatbot is going to derail.

Funnily enough, though, I would expect that if you used the chatbot AS the heuristic, it wouldn't be a super bad player.
Have some classic chess software that keeps track of the board and lists all the possible legal moves, then prompt the chatbot with something like:
"This is the current chessboard: {state}, these are the last few moves: {history}, pick the most promising among the following: {list of legal moves}".
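A sketch of that division of labour, where `list_legal_moves` and `ask_chatbot` are hypothetical stubs standing in for a real chess engine and a real chatbot API. The point is that the classic software tracks state and enumerates legal moves, and the language model only picks from that pre-filtered list:

```python
def build_prompt(board_state, history, legal_moves):
    """Assemble the prompt described above from engine-supplied facts."""
    return (
        f"This is the current chessboard: {board_state}, "
        f"these are the last few moves: {history}, "
        f"pick the most promising among the following: {legal_moves}"
    )

def next_move(board_state, history, list_legal_moves, ask_chatbot):
    """Use the chatbot as the heuristic, constrained to legal moves."""
    legal = list_legal_moves(board_state)
    choice = ask_chatbot(build_prompt(board_state, history, legal))
    # Reject hallucinated moves: fall back to the first legal one.
    return choice if choice in legal else legal[0]
```

Because the answer is validated against the engine's legal-move list, the chatbot can never make an illegal move, only a bad one.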

In fact, decades ago that's how some people have applied hidden Markov models to chess.

Similarly, I would expect that during training, the LLM would have been exposed to a large number of the games available online, and has some vague idea of what a "winning move" looks like in a given context.

It's not so much trying to simulate moves ahead as leveraging "experience" to know what's best next in a given context, exactly like the "chess engine + HMM" combo did in the past, but a lot less efficiently.

Comment Context window (Score 1) 34

I've had ChatGPT forget the current state of things with other stuff too. I asked it to do some web code, and it kept forgetting what state the files were in. I hear that some are better like Claude with access to a repo, but with ChatGPT even if you give it the current file as an attachment it often just ignores it and carries on blindly.

Yup, they currently have very limited context windows.

And it's also a case of "wrong tool for the job". Keeping track of very large code bases is well within the range of much simpler software (e.g. the thing that powers the "autosuggest" function of your IDE, which is fed from a database of all the function/variable/etc. names in the entire codebase).
For code, you would want such an exhaustive tool to give the list of possible suggestions, and then the language model to only pick from that pre-filtered list, rather than free-styling it.
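A toy illustration of that split, with a made-up symbol table standing in for the IDE's database: the exhaustive index supplies the candidates, and a (here trivially frequency-based) model only ranks within that pre-filtered list instead of inventing names:

```python
def suggest(prefix, symbol_table, usage_counts):
    """Rank known identifiers starting with `prefix` by usage frequency.

    symbol_table  -- exhaustive list of real identifiers in the codebase
    usage_counts  -- how often each identifier appears (the 'model')
    """
    # Step 1: exhaustive pre-filter -- only names that actually exist.
    candidates = [s for s in symbol_table if s.startswith(prefix)]
    # Step 2: the model only ranks within that list.
    return sorted(candidates, key=lambda s: -usage_counts.get(s, 0))
```

The model can rank candidates however badly it likes; it can never suggest a variable that doesn't exist, which is precisely the bug class described above.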

For chess, you would need a special "chess mode" trained to always dump the current content of the board and the most recent turns' moves into the scratchpad between each turn, so that the current state doesn't fall out of the context. Best would be to do it like people did with HMMs a long time ago: have a simple, actual chess program keep track of the board and generate a list of all possible next legal moves, and use the model to predict from that pre-filtered list(*).

(*): That could be doable currently with a piece of software that automatically generates a prompt: "The current status of the board is: {board description}. The last few moves were: {history}. Choose the best move from: {list of legal moves}".

Comment Already done with markov chains (Score 1) 34

I know it scanned and consumed like.. all of the great Chess games ever played. It can only predict the next word, or move.

...and this was already demonstrated eons ago using hidden Markov models.
(I can't manage to find the website with the exact example I had in mind, but it's by the same guy who had fun feeding both Alice in Wonderland and the Bible into a Markov model and using it to predict/generate funny walls of text.)

That seems like the nature of LLMs. If I can ever coax ChatGPT into playing a whole chess game, I will let you know the results.

The only limitation of both old models like HMMs and the current chatbots is that they don't have a concept of the state of the chess board.

Back in that example, the dev used a simple chess program to keep track of the moves and the board and generate a list of possible next moves, then used the HMM on that pre-filtered list to predict the best one.

Nowadays, you would need the chatbot to at least have a "chess mode" where it dumps the state of the board into its scratchpad, along with a list of the most recent moves by each player, so that it always has the entire game in context.

Otherwise, they should both do roughly the same thing (try to predict the next move from a model that has been fed all the history of chess games ever played), but with insane levels of added inefficiency in the case of the chatbot.
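A toy version of that old approach, using a plain Markov chain rather than a true HMM for brevity, trained on a tiny made-up corpus of games: count which moves historically followed which, then predict by picking the most frequently seen follower that is also in the engine-supplied legal-move list.

```python
from collections import Counter, defaultdict

def train(games):
    """games: lists of moves in played order -> bigram follower counts."""
    model = defaultdict(Counter)
    for game in games:
        for prev, nxt in zip(game, game[1:]):
            model[prev][nxt] += 1
    return model

def predict(model, last_move, legal_moves):
    """Most frequent historical follower of last_move that is legal now."""
    followers = model.get(last_move, Counter())
    ranked = [m for m, _ in followers.most_common() if m in legal_moves]
    # Unseen position: fall back to any legal move.
    return ranked[0] if ranked else legal_moves[0]
```

Same division of labour as with the chatbot: the chess program owns the board state and the legal-move list, and the statistical model only supplies the "experience".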

Comment Check the title: Norway (Score 1) 129

Tire particulate.

Check the /. item's title: It's Norway we're speaking about.
i.e.: a rich European country.

So: a country with a not-too-shabby public transport network.
Thus, compared to, say, the USA, it's already making quite an effort toward shifting traffic away from personal vehicles and toward much more efficient transportation systems that have far fewer problems per passenger than private vehicles.

Public transport is the best solution for reducing travel-related pollution, but it can't cover 100% of cases.
EVs are a "good enough" solution for reducing the problems caused by people who *MUST* and *CANNOT avoid* driving cars.

Comment Re:Whenever an outlier like Norway (Score 1) 129

That Norway has a very small population, primarily.

The only real difference that makes is whether there is sufficient supply. EVs are at over 20% of all car sales worldwide and climbing.

Doesn't alter the fact, however, that it's somewhat silly to point to Norway, hoping to demonstrate that, like them, we could all have 100% EVs right now if only we really wanted to, rather than stubbornly sticking with ICEs on general principle.

It debunks the claims about coldness and charging infrastructure, since those are per-capita issues, not size issues.

Comment Re:Eating the seed corn (Score 1) 226

You would have less illegal immigration if there were more legal ways to immigrate. Not just work visas, but family reunion visas too.

Work visas need to be for more than just skilled people. Americans don't want to do the hard, unpleasant work of picking crops for minimum wage. That's fine, it's a choice, but you need someone to do it.

Then there's the fact that your whole economy is based on the premise of never ending growth, and your birth rate is falling. Either you start with the handmaid bullshit, you make up the numbers with immigration, or you tell the billionaires that they need to adjust to a shrinking economy while still increasing your wages.

Comment Re:Guess what (Score 1) 26

Human beings are not machines, they do not produce a constant stream of output while they are working. Outside of simple manual jobs, at least. They get tired, they have lives outside the office, stress and overworking make them sick.

Turns out that 5 days a week is less efficient than 4 days a week for most people, i.e. they can get the same amount of work done in fewer hours if the duty cycle is reduced. It's a win-win - the employee has more free time and better quality of life, the employer loses nothing in terms of productivity and saves money on their energy bills.

Comment Re:2600 chess is better than you think (Score 1) 34

Its main advantage seems to be that it knows where the pieces are on the board.

I've had ChatGPT forget the current state of things with other stuff too. I asked it to do some web code, and it kept forgetting what state the files were in. I hear that some are better like Claude with access to a repo, but with ChatGPT even if you give it the current file as an attachment it often just ignores it and carries on blindly.

In fact one bug it created was due to it forgetting what it named a variable, and trying to use a similar but different name in some new code.
