Comment Re:BSoD was an indicator (Score 1) 72
To crash the OS it would need to be in the kernel, so do a quick test of that RAM using something like March C.
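For the curious, the March C- element order looks roughly like this. It's a toy Python sketch over an ordinary buffer; a real test would have to run in the kernel against the actual physical pages, with caching out of the way:

def march_c_minus(buf):
    # March C- element sequence over a writable buffer.
    n = len(buf)
    for i in range(n):                 # M0: ascending, write 0
        buf[i] = 0x00
    for i in range(n):                 # M1: ascending, read 0, write 1
        if buf[i] != 0x00:
            return False
        buf[i] = 0xFF
    for i in range(n):                 # M2: ascending, read 1, write 0
        if buf[i] != 0xFF:
            return False
        buf[i] = 0x00
    for i in reversed(range(n)):       # M3: descending, read 0, write 1
        if buf[i] != 0x00:
            return False
        buf[i] = 0xFF
    for i in reversed(range(n)):       # M4: descending, read 1, write 0
        if buf[i] != 0xFF:
            return False
        buf[i] = 0x00
    for i in range(n):                 # M5: ascending, read 0
        if buf[i] != 0x00:
            return False
    return True

print(march_c_minus(bytearray(4096)))  # a healthy buffer passes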
Given that Linehan had spent months harassing her on Twitter, to which she didn't respond, and she was a child at the time... I can't see any justification.
I've read that MRI scanners require helium, and there is currently a shortage. It's not practically renewable or recyclable so far. If everyone gets an MRI it could turn that into a severe shortage.
I wonder if the tech can be reworked to not require helium or any other hard-to-find resource.
As I suggested back in 2007 in "Why Educational Technology Has Failed Schools":
https://patapata.sourceforge.n...
"... Ultimately, educational technology's greatest value is in supporting "learning on demand" based on interest or need which is at the opposite end of the spectrum compared to "learning just in case"
based on someone else's demand. Compulsory schools don't usually traffic in "learning on demand", for the most part leaving that kind of activity to libraries or museums or the home or business or the "real world". In order for compulsory schools to make use of the best of educational technology and what is has to offer, schools themselves must change.
But, history has shown schools extremely resistant to change.
That is not all technology has been asked to do in schools. It has been invited into the classroom in other ways, including educational simulations, Lego/Logo, web browsing, robotics, and computer-linked data collection from sensors. But assessment is mostly what technology does in schools that *matters*, where the other uses of it have been marginalized for various reasons. These "learning on demand" or "hands on learning" activities have been kept in their boxes so to speak (sometimes figuratively, sometimes literally). Or to recall from my own pre-computer elementary school experiences in the 1960s, there was a big fancy expensive "science kit" in the classroom closet -- but there was little time to use it or explore it -- we were too busy sitting at our desks.
Essentially, the conventional notion is that the compulsory schooling approach is working; it just needs more money and effort. Thus a push for higher standards, and pay and promotion tied to performance against those standards. Most of the technology then should be used to enforce those standards. That "work harder" and "test harder" approach has been tried now for more than twenty years in various ways, and not much has changed. Why is that? Could it be that schools were designed to produce exactly the results they do? [as John Taylor Gatto has suggested] And that more of the same by more hard work will only produce more of the same results? Perhaps schools are not failing to do what they were designed to do; perhaps in producing people fit only to work in highly structured environments doing repetitive work, they are actually succeeding at doing what they were designed for? Perhaps digging harder and faster and longer just makes a deeper pit?
However, over the past 150 years or so the world has changed, and we have entered a post-industrial information age, with cheaply copied songs and perhaps soon cheaply copied material goods in nanotech replicators.
Industry still matters of course, but only now in the sense that agriculture still matters, where an ever smaller part of the population is concerned directly with it, as innovation after innovation makes people in those fields ever more productive. If only a small percent of the people in the economy produce food, and now only an ever shrinking part of the population produces material goods, what is left for the rest to do?
So, [as Dr. David Goodstein, Vice Provost of Caltech pointed out] employment in conventional research is closed for most people [even with PhDs, due to funding issues]. Still, if you look at, say, the field of biology, there are endless opportunities for people to research millions of species of organisms and their biochemistry, ecology, and history. If you look at astrophysics, there are endless stars and solar systems to study. If you look at medicine, there is a vast amount we do not know, especially for chronic diseases of poor people. If you look at music, there are endless opportunities for people to make songs about their specific lives and families. If you look at writing, there are endless novels yet to be written. And if you look at programming, there is even a vast enjoyment to be had reinventing the wheel -- another programming language, another operating system, another application -- just for the fun of doing it for its own sake. The World Wide Web -- from blogs to YouTube to garage bands -- is full of content people made and published just because they wanted to. It is an infinite universe we live in, and it would take an infinite time to fill it up. However, there is practically no one willing to pay for those activities, so they are for the most part hobbies, or at best, "loss leaders" or "training" in business. And, as always, there are the endless demands of essentially volunteer parenting to invest in a future generation. And there are huge demands for community service to help less fortunate neighbors. So there are plenty of things that need doing -- even if they do not mesh well with our current economic system based around "work" performed within a bureaucracy, carefully reduced to measurable numbers (parts produced, lines of code generated, number of words written) producing rewards based on ration units (dollars).
But then, with so much produced for so little effort, perhaps the very notion of work itself needs to change? Maybe most people don't need to "work" in any conventional way (outside of home or community activities)?
But then is compulsory schooling really needed when people live in such a way? In a gift economy, driven by the power of imagination, backed by automation like matter replicators and flexible robotics to do the drudgery, isn't there plenty of time and opportunity to learn everything you need to know? Do people still need to be forced to learn how to sit in one place for hours at a time? When people actually want to learn something like reading or basic arithmetic, it only takes around 50 contact hours or less to give them the basics, and then they can bootstrap themselves as far as they want to go. Why are the other 10,000 hours or so of a child's time needed in "school"? Especially when even the poorest kids in India are self-motivated to learn a lot just from a computer kiosk -- or a "hole in the wall"...
Granted, if people want to send kids to a prison-like facility each day for security or babysitting, then the "free school" model makes a lot of sense for that.
So, there is more to the story of technology than it failing in schools. Modern information and manufacturing technology itself is giving compulsory schools a failing grade. Compulsory schools do not pass in the information age. They are no longer needed. What remains is just to watch this all play out, and hopefully guide the collapse of compulsory schooling so that the fewest people get hurt in the process.
Hmm, two decades ago perhaps. We must be getting old because it doesn't seem like that long ago.
It's more like "dictionary entry of the year". Some two-word phrases that have to be used together for a specific meaning get dictionary entries -- "speed bump", for example.
Then everyone will start using it as an excuse to attack other people. Especially the cops - they hate being filmed. They will claim it was "aggressive" and justified violence.
A guy called Graham Linehan was just convicted of smashing a girl's phone, when she filmed an interaction where she asked him why he called her a "groomer" on Twitter, and worse. It was the right judgement; he had no need to do it, and mere annoyance can't be enough or everyone will be smashing stuff they dislike.
If there were better APIs they could do that: an API for sensors, with a maximum safe limit supplied for each one. And when a BSOD happens, the OS could do a little memory test on the affected area.
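Something like this, purely as a sketch -- every name here (SensorReading, max_safe, over_limit) is made up for illustration, not an existing OS interface:

from dataclasses import dataclass

@dataclass
class SensorReading:
    name: str        # e.g. "CPU package temperature"
    value: float     # current reading
    unit: str        # e.g. "C", "RPM", "V"
    max_safe: float  # vendor-supplied safe limit for this sensor

def over_limit(readings):
    # Return the sensors currently above their vendor-supplied safe limit.
    return [r for r in readings if r.value > r.max_safe]

readings = [SensorReading("CPU package temperature", 97.0, "C", 95.0)]
for r in over_limit(readings):
    print(f"{r.name} is at {r.value} {r.unit}, above its safe limit of {r.max_safe}")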
This is "absolutely without question" incorrect. One of the most useful properties of LLMs is demonstrated in-context learning capabilities where a good instruction tuned model is able to learn from conversations and information provided to it without modifying model weights.
Your ignorance is showing. The model does not change as it's used. Full stop. Like many other terms related to LLMs, "in-context learning" is deeply misleading. Remove the wishful thinking and it boils down to "changes to the input cause changes to the output", which is obvious and not at all interesting.
Who cares?
People who care about facts and reality, not their preferred science-fiction delusion. I highlight the deterministic nature of the model proper and where the random element is introduced in the larger process to dispel some of the typical magical thinking you see from ignorant fools like you. The model does not and can not behave in the ways that morons like you imagine.
This is pure BS; key/value matrices are maintained throughout.
Do you get off on humiliation? While some caching is done as an optimization, this has absolutely no effect on the output. Give the same input at any point to a completely different instance of the model and you'll get the exact same results.
Again with determinism nonsense.
LOL! You think that the model isn't deterministic? Again, the only thing the model does is produce a list of next-token probabilities. It does this deterministically. The only non-deterministic part here is the final token selection, which is done probabilistically.
That you believe otherwise suggests that you're either even more ignorant than I thought possible, or you think that LLMs or NNs are magical. What a fucking joke you are.
These word games are pointless.
The only one playing 'word games' here is you, ignorant troll.
He's not nice, but he's also not wrong. You have some very odd ideas about what LLMs do.
LLMs absolutely, without question, do not learn the way you seem to think they do. They do not learn from having conversations. They do not learn by being presented with text in a prompt, though if your experience is limited to chatbots you could be forgiven for mistakenly thinking that was the case. Neural networks are not artificial brains. They have no mechanism by which they can 'learn by experience'. They 'learn' by having an external program modify their weights in response to the difference between their output and the expected output for a given input.
It might also interest you to know that the model itself is completely deterministic. Given an input, it will always produce the same output. The trick is that the model doesn't actually produce a next token, but a list of probabilities for the next token. The actual token is selected probabilistically, which is why you'll get different responses despite the model being completely deterministic. The model retains no internal state, so you could pass the partial output to a completely different model and it wouldn't matter.
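If it helps, here's a toy sketch of that split in Python. The "model" below is a made-up stand-in, not a real LLM; the point is just that the probabilities are a pure function of the input, and only the final pick is random:

import math, random

def toy_model(tokens):
    # Deterministic stand-in for the model: the same input always yields
    # the same scores, just as a real forward pass does.
    vocab = ["cat", "dog", "mat"]
    seed = sum(ord(c) for t in tokens for c in t)
    return {w: ((seed + sum(ord(c) for c in w)) % 7) + 1.0 for w in vocab}

def softmax(scores):
    m = max(scores.values())
    exps = {w: math.exp(v - m) for w, v in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

def pick(probs, rng):
    # The only non-deterministic step: sample a token by its probability.
    return rng.choices(list(probs), weights=list(probs.values()), k=1)[0]

context = ("the", "cat", "sat", "on", "the")
probs = softmax(toy_model(context))   # identical every time for this input
print(pick(probs, random.Random()))   # may differ from run to run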
I vividly remember a newspaper article that said AI performed better if you asked it to think things through and work it out step by step.
LLMs do not and can not reason, including so-called 'reasoning' models. The reason output improves when you ask for a 'step by step' response is that you end up with more relevant text in context. It really is that simple. Remember that each token is produced essentially in isolation. The model doesn't work out a solution first and carefully craft a response; it produces tokens one at a time, without retaining any internal state between them. Imagine a few hundred people writing a response where each person only sees the prompt and the partial output on their turn, and each can only suggest a few potential next words and their rank, with the actual next word selected probabilistically. LLMs work a bit like that, but without the benefit of understanding.
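In rough code terms, that token-at-a-time loop looks like this toy sketch, where next_token is a stand-in for "run the whole model on the context and sample one token", not any real model API:

import random

def next_token(context, rng):
    # Stand-in for the full model pass: any deterministic scoring of the
    # context would do; only the final pick is random.
    vocab = ["the", "cat", "sat", "down", "."]
    weights = [1 + (len(context) + i) % 3 for i in range(len(vocab))]
    return rng.choices(vocab, weights=weights, k=1)[0]

rng = random.Random(0)
context = ["the", "cat"]
for _ in range(6):
    # Each step sees only the prompt plus the partial output so far;
    # no hidden state carries over between steps.
    context.append(next_token(context, rng))
print(" ".join(context))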
I think LLMs resemble the phonological loop a bit.
I assure you that they do not. Not even a little bit.
Pretty sure at some point self awareness is needed to stabilize the output.
You probably realize by now that this is just silly nonsense.
The bloody thing hallucinates for Christ's sake!
That's a very misleading term. The model isn't on mushrooms. (Remember that the model proper is completely deterministic.) A so-called 'hallucination' in an LLM's output just means that the output is factually incorrect. As LLMs do not operate on facts and concepts but on statistical relationships between tokens, there is no operational difference between a 'correct' response and a 'hallucination'. Both kinds of output are produced the same way, by the same process. A 'hallucination' isn't the model malfunctioning, but an entirely expected result of the model operating correctly.
If people can buy a new car for not much more than a used one, and it's more efficient and comfortable, they just might.
Wow, you have a lot of trouble with basic reading comprehension!
Before you reply to another one of my posts, find an adult to help you understand it.
Nothing is easier than to denounce the evildoer; nothing is more difficult than to understand him. - Fyodor Dostoevski