Comment Re:People shouldn't get a high school degree (Score 1) 254

I would be happy if we pooled all education money and redistributed it as a voucher per pupil. A school's funding would come from the students who choose to attend it, and everyone would have the same amount to work with. Also, make it so a school can't accept donations or outside funding beyond the vouchers from its students. The amount of money even the poorest states spend per student is often much more than it would cost to simply send a child to private school.

The rich will never let that happen though.

Fundamentally, I agree with your concept. In an idealized world, this is the best method for public education. The main issue I have with it is how you transition to it. You have school districts and states that have been under-funded for years, even decades. They are in desperate need of massive infrastructure funding, well beyond what would otherwise be a fair per-student payout. Some of these districts need upwards of $1 billion just to get out of buildings that are over 100 years old and have not had an overhaul in over 60 years, with all the dangers that come with buildings of that age: asbestos, lead paint, lead pipes, and mold/mildew/rodents/insects....

Comment Wow, shocking! (Score 1) 59

I can't believe it took this long for them to realize people didn't want a camera in their own private homes that they could not fully control, one possibly recording and displaying everything to random people at the company that built and maintains the device, and potentially to anyone who found a security flaw in it.... I mean, really, what a smart item to stick into a TV that might go in, say, a bedroom, because nothing happens in there that shouldn't be broadcast across the world and saved forever....

Comment This lays bare one of the problems with LLMs.... (Score 4, Informative) 74

What too many people do not seem to understand about LLMs is that everything they spit out is simply the result of a probability distribution conditioned on the input you gave them. The model deconstructs the input you provided and, using the statistics baked into its trained network, spits out letters, words, phrases, and punctuation that statistically resemble the outputs it was trained to produce from its training materials.
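To make that concrete, here is a deliberately tiny Python sketch of what "it's all just a probability distribution over the next token" looks like. The vocabulary and probabilities are made up for illustration; a real LLM computes these numbers with an enormous neural network, but the generation step is the same idea: sample the next token from a distribution.

import numpy as np

# Toy sketch only: made-up vocabulary and made-up probabilities, not a real model.
next_token_probs = {
    "the":  {"cat": 0.4, "dog": 0.35, "moon": 0.25},
    "cat":  {"sat": 0.6, "ran": 0.4},
    "dog":  {"barked": 0.7, "sat": 0.3},
    "moon": {"landing": 0.9, "sat": 0.1},
    "sat": {".": 1.0}, "ran": {".": 1.0},
    "barked": {".": 1.0}, "landing": {".": 1.0},
}

def generate(start, rng, max_len=6):
    tokens = [start]
    while tokens[-1] != "." and len(tokens) < max_len:
        dist = next_token_probs[tokens[-1]]
        words, probs = zip(*dist.items())
        tokens.append(str(rng.choice(words, p=probs)))  # sample the next token
    return " ".join(tokens)

print(generate("the", np.random.default_rng(0)))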

Until this version, ChatGPT obviously lacked enough training material in its network to overcome written English's grammar conventions: either to discern that em dashes are not typically used in everyday conversation, or to let an instruction not to use them actually shift its underlying probabilities so it could adapt its output to avoid the em dash. This is a very difficult behavior to train into a neural network, because the model needs to have been trained on exactly this input/output case long enough for that training to override the base English grammar it has absorbed, which is a fundamental piece of knowledge an LLM requires to function and one of the very first things it learns.
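Whether the fix came from extra fine-tuning or from nudging the model at generation time, the end effect has to be the same: the em dash's share of the probability mass gets pushed down. Here is a toy sketch of that arithmetic in Python, with completely made-up logit values for a four-token punctuation "vocabulary"; this is not how OpenAI actually implemented it, just an illustration of what changing the distribution means.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Made-up logits for a tiny punctuation "vocabulary"; the em dash wins by default.
vocab = ["em_dash", "comma", "semicolon", "colon"]
logits = np.array([3.1, 2.4, 1.2, 0.8])
print(dict(zip(vocab, softmax(logits).round(3))))

# Pushing that one token's logit down (via enough training on "don't do this",
# or via an explicit penalty at decode time) reshapes the whole distribution.
penalized = logits.copy()
penalized[vocab.index("em_dash")] -= 100.0
print(dict(zip(vocab, softmax(penalized).round(3))))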

It also exposes a flaw in how neural networks typically work. There is a training/learning mode, and then there is the functional mode of simply using the trained network. In the functional mode, the network's links, nodes, and functions are effectively static. Without inputs built into the network that can flag certain behavior, it cannot change its underlying probabilities to effectively forget something it was trained to do. Once training has changed the underlying network, you cannot effectively untrain it (short of reverting to a backup copy of the network from before that training).

This is why it is so important to scrutinize every piece of data used to train the network. Once you have fed it some piece of garbage training input, you are stuck with the changes it made to the output probabilities. Any model trained against the content of the internet itself ingests so much bad information that its results can never really be trusted for anything better than the odds of asking a random person, because it will have trained on and absorbed phrases like "the earth is flat", "birds are not real", and "the moon landing was a hoax". It will have seen those things enough times that they show up as higher and higher-probability responses to questions about them....
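The "static once trained" point is easy to see in a toy forward pass (hypothetical weights, nothing to do with any real model): inference only reads the weights, so nothing you ask the network can make it forget what training wrote into them; the only practical undo is restoring an earlier copy.

import numpy as np

rng = np.random.default_rng(42)
W = rng.normal(size=(4, 2))       # weights produced by (hypothetical) training
checkpoint = W.copy()             # the only practical "undo" is a saved copy

def infer(x, weights):
    # functional mode: a forward pass only reads the weights, never writes them
    return np.tanh(x @ weights)

x = np.array([1.0, 0.0, -1.0, 0.5])
print(infer(x, W))                # same answer every time you ask...
print(infer(x, W))                # ...because inference never updates W

# If bad training data has already shifted the weights, you can't query it away;
# you either retrain on top of it or roll back to the earlier checkpoint.
W = W + rng.normal(scale=0.5, size=W.shape)   # simulated bad training update
W = checkpoint                                # revert to the pre-training backup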

Comment Re:Customized music is the future (Score 2) 68

You'd be surprised how difficult that actually is. Take the role of a DJ, for instance - a good DJ will know what genre they're playing, play well-known things from that genre, but also introduce new music to keep it fresh. They might also step outside the genre a little - not so far that it's dissonant with the rest of their set, but just far enough to give a break and a moment of "ah, that's nice/romantic/gnarly/metal" for the listener.

It's a skill, and if you haven't got that ability to start with then you're unlikely to be able to give the correct prompts to create it. You might well get a lot of identical things, but a listenable varied set is more than that.

Comment Re:*some* games (Score 1) 100

A worry might be SteamOS becoming a requirement, rather than simply being supported. You could imagine kernel modules being developed for 'anti-cheat' and running under SteamOS but not under some other distro that might (justifiably) block them.
