
Comment Privately held; indie devs (Score 2) 43

nah, this will soon come crashing down as the enshittification of commercial games continues.

Valve itself is NOT publicly traded. There are no shareholders to whom the value needs to be shifted.
This explains (in part) why Valve has been a little bit less shitty than most other companies.
It also means Valve's own products (Steam, Steam Deck, the upcoming Deckard, etc.) are slightly less likely to be enshittified
(e.g.: whereas most corporations try to shove AI into every product they have, the only news you'll see regarding Valve and AI is Valve making it mandatory to label games that use AI-generated assets).

If I really want a game, I wait until the price seems reasonable and affordable, even if that means waiting for years. The side benefits are that there's more content, most of the bugs are squashed, and the drama is history. It seems unethical to support classist corporations in any fashion, especially financially, in my view.

Also indie games are a thing.
Indie-centric platforms like itch.io are a thing.
Unlike Sony and Microsoft, Valve isn't selling the Steamdeck at a loss, so they care less where you buy your games from -- hence the support for non-Steam software (the onboarding even includes fetching a browser flatpak from FlatHub).

Humble bundles are also a thing (with donation to charities in addition to lower prices).

So there are ways beyond "buy a rushed-to-market 'quadruple A' game designed-by-committee at some faceless megacorp".

Comment Heuristic (Score 1) 48

It's expected. At their core, all chess algorithms search a min-max tree, but instead of doing a width- or depth-first exhaustive search, they use heuristics to prioritize some branches of the tree (A*).

On the modest hardware of the older machine there isn't that much you can explore before the player gets bored waiting.
So obviously, you're going to use much more stringent rules: "Never take a branch where you lose a piece" prunes entire swaths of the tree, whereas "see if sacrificing piece XXX gives us a better path" would require exploring more of the tree.
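To make the pruning idea concrete, here's a toy sketch of my own (not from any real engine): a depth-limited minimax over a tiny hand-built game tree, where a strict "never lose a piece" rule prunes the sacrifice line that a full search would happily find.

```python
# Toy illustration: minimax with a cheap pruning heuristic.
# The game "tree" is a hypothetical hand-built structure, not real chess:
# each node is (material_delta, children), leaf when children is empty.

def minimax(node, maximizing, prune):
    """Return the minimax value of the node, skipping pruned branches."""
    delta, children = node
    if not children:
        return delta
    # Heuristic pruning: drop any child branch the rule forbids.
    kept = [c for c in children if not prune(c)]
    if not kept:  # everything pruned -> fall back to the static value
        return delta
    scores = [minimax(c, not maximizing, prune) for c in kept]
    return max(scores) if maximizing else min(scores)

# material_delta > 0 means we are up material after that move.
tree = (0, [
    (-1, [(+3, [])]),   # sacrifice: lose a piece now, win more later
    ( 0, [(+1, [])]),   # quiet move
])

strict = lambda child: child[0] < 0           # "never lose a piece"
print(minimax(tree, True, strict))            # only sees the quiet line -> 1
print(minimax(tree, True, lambda c: False))   # full search finds the sacrifice -> 3
```

The strict rule explores less of the tree (good on 1970s hardware) but can never find the sacrifice, which is exactly the trade-off described above.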

Since they are trained on everything the corporations could scrape from the internet, I would expect an LLM to have seen a lot of real-world chess games (e.g.: reports of games from online archives of chess magazines, etc.), so as long as the tokeniser is able to parse the notation found there (or at least distinguish the moves -- it doesn't really need to "understand" the notation in English), it has a large corpus of "lists of moves leading to a win", which would include real moves (sacrifices, as you mention).

And given a large-enough model to encode that, it would be in the same ballpark as the hidden Markov models of yore -- provided it keeps track of the current state of the game (which it currently does not).

Comment Devil's in the detail. (Score 1) 48

I wonder if you wouldn't win if you just told ChatGPT to write a chess AI and then used that chess AI to beat the Atari. Writing code is something text models are good at. Playing chess is not.

The devil is in the detail.
All chess algorithms are A*: they search a min-max tree, but use heuristics to prioritize some branches instead of doing a width- or depth-first exhaustive search.
Generating a template of a standard chess algorithm would probably be easy for a chatbot (these are prominently featured in tons of "introduction to machine learning" courses that its training could have ingested); writing the heuristic function to guide the A* search is more of an art form, and is probably where the chatbot is going to derail.

Funnily enough, though, I would expect that if you used the chatbot AS the heuristic, it wouldn't be a super bad player.
Have some classic chess software that keeps track of the board and lists all the possible legal moves, then prompt the chatbot with something like:
"This is the current chessboard: {state}, these are the last few moves: {history}, pick the most promising among the following: {list of legal moves}".
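A minimal sketch of that loop, with everything hedged: the board state, history and legal-move list are hard-coded placeholders here (a real setup would get them from an actual chess engine), and `query_chatbot` is a hypothetical stand-in for whatever LLM API you use.

```python
# Sketch of "chatbot as heuristic": classic software supplies the legal
# moves, the model only picks among them.

def build_prompt(state, history, legal_moves):
    return (
        f"This is the current chessboard: {state}, "
        f"these are the last few moves: {', '.join(history)}, "
        f"pick the most promising among the following: {', '.join(legal_moves)}"
    )

def pick_move(state, history, legal_moves, query_chatbot):
    answer = query_chatbot(build_prompt(state, history, legal_moves))
    # Crucial step: only accept an answer from the pre-filtered legal list;
    # if the model free-styles an illegal move, fall back to any legal one.
    return answer if answer in legal_moves else legal_moves[0]

# Hypothetical position after 1.e4 e5 2.Nf3:
state = "rnbqkbnr/pppp1ppp/8/4p3/4P3/5N2/PPPP1PPP/RNBQKB1R b"
history = ["e4", "e5", "Nf3"]
legal = ["Nc6", "Nf6", "d6", "Qe7"]

fake_llm = lambda prompt: "Nc6"   # stand-in for the real model
print(pick_move(state, history, legal, fake_llm))  # Nc6
```

The point of the wrapper is that the model can never make an illegal move: legality is enforced by dumb, reliable software, and the model only provides the "taste".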

In fact, that's how some people applied hidden Markov models to chess decades ago.

Similarly, I would expect that during training the LLM was exposed to a large share of all the games available online, and has some vague idea of what a "winning move" looks like in a given context.

Not so much trying to simulate moves ahead as leveraging "experience" to know what's best next for a given context -- exactly like the "chess engine + HMM" combo did in the past, but a lot less efficiently.

Comment Context window (Score 1) 48

I've had ChatGPT forget the current state of things with other stuff too. I asked it to do some web code, and it kept forgetting what state the files were in. I hear that some are better like Claude with access to a repo, but with ChatGPT even if you give it the current file as an attachment it often just ignores it and carries on blindly.

Yup, they currently have very limited context windows.

And it's also a case of "wrong tool for the job". Keeping track of very large code bases is well within the range of much simpler software (e.g.: the thing that powers the "autosuggest" function of your IDE, which is fed from a database of all the function/variable/etc. names in the entire codebase).
For code, you would want such an exhaustive tool to produce the list of possible suggestions, and then have the language model only pick from that pre-filtered list, rather than free-styling it.
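The same two-stage split can be sketched in a few lines (my own toy illustration; the symbol table and the scoring function are made-up stand-ins): an exhaustive, dumb index does the filtering, and the "model" only ranks the survivors.

```python
# Two-stage autosuggest sketch: exhaustive pre-filter + model ranking.

# Hypothetical symbol table, as an IDE would build it from the codebase.
symbols = ["parse_config", "parse_args", "print_report", "process_queue"]

def candidates(prefix):
    """Exhaustive pre-filter: every known symbol matching the prefix."""
    return [s for s in symbols if s.startswith(prefix)]

def rank(cands, score):
    """The model only re-orders the pre-filtered list; it cannot invent names."""
    return sorted(cands, key=score, reverse=True)

# Stand-in scoring function; a real setup would ask the language model
# how plausible each candidate is in the current context.
fake_score = len
print(rank(candidates("pars"), fake_score))  # ['parse_config', 'parse_args']
```

Because the candidate list comes from the index, the completion can never be a name that doesn't exist in the codebase -- the model only contributes the ordering.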

For chess, you would need a special "chess mode" trained to always dump the current content of the board and the most recent turns' moves into the scratchpad between turns, so that the current state doesn't fall out of the context. Best would be to do it like people did with HMMs a long time ago: have a simple, actual chess program keep track of the board and generate the list of all possible next legal moves, and use the Markov model to predict from that pre-filtered list(*).

(*): That could be doable currently with a piece of software that automatically generates a prompt: "The current status of the board is: {board description}. The last few moves were: {history}. Choose the best move from: {list of legal moves}".

Comment Already done with markov chains (Score 1) 48

I know it scanned and consumed like.. all of the great Chess games ever played. It can only predict the next word, or move.

...and this was already demonstrated eons ago using hidden Markov models.
(I can't manage to find the website with the exact example I had in mind, but it's by the same guy who had fun feeding both Alice in Wonderland and the Bible into a Markov model and using it to predict/generate funny walls of text.)

That seems like the nature of LLMs. If I can ever coax ChatGPT into playing a whole chess game.. I will let you know the results.

The only limitation of both old models like HMMs and the current chatbots is that they don't have a concept of the state of the chess board.

Back in that example, the dev used a simple chess program to keep track of the moves and the board and to generate a list of possible next moves, then used the HMM on that pre-filtered list to predict the best one.
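That setup can be sketched with an even simpler first-order Markov model (a deliberate simplification of the HMM, and the tiny "corpus" of games is made up): count which move tends to follow which in past games, then predict -- but only from the legal moves a plain chess program supplies.

```python
# First-order Markov "next move" predictor over a pre-filtered legal list.
from collections import Counter, defaultdict

games = [  # hypothetical tiny corpus of past games (SAN-ish tokens)
    ["e4", "e5", "Nf3", "Nc6"],
    ["e4", "e5", "Nf3", "Nf6"],
    ["e4", "e5", "Bc4", "Nf6"],
]

# Count bigram transitions: move -> how often each follow-up occurred.
trans = defaultdict(Counter)
for game in games:
    for prev, nxt in zip(game, game[1:]):
        trans[prev][nxt] += 1

def predict(last_move, legal_moves):
    """Most frequent follow-up in the corpus that is also legal right now."""
    counts = trans[last_move]
    legal_seen = [m for m in legal_moves if counts[m] > 0]
    if not legal_seen:
        return legal_moves[0]  # model has no opinion -> pick any legal move
    return max(legal_seen, key=lambda m: counts[m])

print(predict("e5", ["Nf3", "Bc4", "d4"]))  # Nf3 (seen twice vs once)
```

A real HMM adds hidden states, and an LLM adds a vastly bigger context, but the division of labour is the same: the chess program guarantees legality, the statistical model supplies the "experience".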

Nowadays, you would need the chatbot to at least have a "chess mode" where it dumps the state of the board into its scratchpad, along with a list of the most recent moves by each player, so that it always has the entire game in context.

Otherwise they should both do roughly the same thing (try to predict the next move out of a model that has been fed all the history of Chess games ever played), but with insane levels of added inefficiency in the case of the chatbot.

Comment Check the title: Norway (Score 1) 224

Tire particulate.

Check the /. item's title: It's Norway we're speaking about.
i.e.: a rich European country.

So: a country with a not-too-shabby public transport network.
Thus, compared to, say, the USA, it's already putting quite some effort into shifting traffic away from personal vehicles and toward much more efficient transportation systems that cause far fewer problems per passenger than private cars.

Public transport is the best solution to reduce travel-related pollution, but it can't cover 100% of cases.
EVs are a "good enough" solution for reducing the problems caused by the people who *MUST* drive and *CANNOT avoid* it.

Comment Re:Perpetual (Score 2, Interesting) 69

Having spent a whole hell of a lot of time lately on Gnome, configuring it and testing various configurations for rollout at the company I work for, all I can say is that it just works. There's a browser, and, bizarrely, printers just work on Linux now in the way they used to just work on Windows -- and it's now Windows, at least in an enterprise environment, where printing has become the technical equivalent of having your teeth filed down. Where work does need to be done is on accessibility, so we have one staff member who will stick with Windows 11 for now. LibreOffice's Calc is good enough about 90% of the time, and Writer about 95%. We remain open to Windows machines for special-use purposes, but most people, after mucking around for a bit, are able to navigate Gnome perfectly well, since once they're in the program they need to use, what's going on on the desktop is irrelevant.

On the enterprise back end, support for global authentication has been around a long time, and if you only have admins who know how to navigate a GUI, then you have idiots. The *nix home folder is infinitely superior in every way to the hellscape that is roaming profiles, so already you're ahead of the game.

Comment Re:So, yeah for microkernels? (Score 4, Interesting) 36

That just about sums it up. Moving drivers into user land definitely reduces the attack surface. As it stands, antivirus software in most cases is essentially a rootkit, just one we approve of because that low level access allows it to intercept virus activity at the lowest level. With a microkernel, nothing gets to run at that level anyways, so microkernels are inherently more secure.

Traditionally the objection to microkernels was that they were slower, since message passing has a processing cost in memory, IO bandwidth and CPU cycles. In the old days, when maybe you had a couple of MB of RAM, or even 8 or 16 MB (like my last 486), with a 16-bit ISA architecture and chips that at the high end might run at 40-60 MHz, a microkernel definitely was going to be a bit more sluggish, particularly when any part of that bandwidth was being taxed (i.e. running a web stack). So Windows and Linux both, while adopting some aspects of microkernel architecture over time (I believe Darwin is considered a hybrid), stuck with a monolithic architecture overall because it really is far less resource intensive.

But now we're in the age of 16 GB of RAM and pretty high-end CPUs, where even USB ports have more throughput than an old ISA bus, so I suspect it may be time to revive microkernels.
