More generally, this is a sign of a kernel-wide "cleanup" effort where stuff that's old is getting yoinked for no particular reason.
The particular reason is that the maintainers' time and effort are finite. If very few people are using a feature, it gets removed from future versions because they do not want to keep maintaining it. Remember that your current versions do not suddenly stop working. Also, if someone else wants to maintain it, they can; the kernel maintainers just will not.
On the flip side, trying to get a modern Linux distribution running on a K5 would be like a trip to the dentist. Actually, you could do both, because your system probably won't have finished booting by the time your dentist is done with your root canal.
Future versions of Linux will be more difficult to get running on a K5; current working versions are not affected. However, I cannot imagine anyone running a K5 now needs all the new features of 7.2 going forward. They can continue to use the 7.1 kernel and older.
I remember building a bunch of Cyrix boxes for in-house use at my second job out of uni. It was an interesting time for x86-compatible processors; it seemed like the x86-compatible ecosystem was going to expand, with a whole bunch of manufacturers getting in on it, some with very different approaches (remember Transmeta?). It didn't really work out that way, though.
The program is still deterministic: the output is determined *entirely* and deterministically by the input. (Where the input is the combination of the prompt, the sequence of numbers returned by the calls to random(), and the LLM model data itself.)
Your "mistake", if we want to call it that, is treating the random() function as an innate quality of the LLM. It isn't it is simply part of the input.
Provide the system with the same model, the same prompt, and the same sequence of numbers, and you WILL get the same answer, regardless of how complex the question is, or who asks it.
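To make that concrete, here is a toy sketch in plain Python (not a real LLM; the generate() function and its tiny vocabulary are invented purely for illustration) showing that once the sequence of numbers from random() is treated as part of the input, the output is pinned down:

import random

# Toy stand-in for an LLM's sampling loop: the only "randomness"
# comes from the rng object, which is handed in as part of the input.
def generate(prompt, rng):
    vocab = ["the", "cat", "sat", "on", "mat"]
    return prompt + ": " + " ".join(rng.choice(vocab) for _ in range(5))

# Same "model" (the function), same prompt, same sequence of random numbers:
a = generate("hello", random.Random(42))
b = generate("hello", random.Random(42))
assert a == b   # identical output, every single time

The "model" here is just a function, but the argument is the same: nothing is left to chance once the random sequence is fixed along with the prompt.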
It is insane that major US companies are making trillion-dollar bets that a single-source provider will remain operational.
You are aware TSMC has built two chip fabs in Arizona and is building a third one, right? You are also aware that Samsung built a chip fab in Texas over 20 years ago and is building a second one, right?
Except that they would presumably have to reengineer the old silicon for Intel's process, which kind of defeats the purpose of reusing old designs to save money, I would think.
Or, more likely, Intel will have to adapt their processes to Apple's designs, and to other companies' designs, if Intel wants to do business as a chip foundry.
Apple also uses CPUs in things like the Apple Watch.
Off the top of my head, here are the other processors Apple uses: Apple Watch and HomePod (S series), AirPods (H series), modems (C series). Apple will need millions to tens of millions of these processors each year.
Not at all. Apple used Intel CPUs for 15 years. The great "PC vs Mac" debate is about the user experience, not the hardware architecture of the CPU behind it, and certainly not what foundry a CPU comes from.
The main reason Apple left Intel was entirely on Intel, for not making progress on chips for years. This was the same reason Apple left IBM. Apple thought that by using Intel they would not be in the same situation again; little did anyone know how Intel would struggle at 10nm for years. It is unlikely that Apple will ever go back to using x86 for their main processors, though.
Intel these days is more open to being a chip foundry, like TSMC, than it was before. Apple using Intel to fabricate their chips as a secondary supplier makes business sense for everyone.
How great is it that Trump requires Apple to do business with Intel? The spin will be delightful.
And why is this "great"? The main reason Apple stopped sourcing chips from Intel had nothing to do with politics. It was due to Intel's stagnation in making chips. Intel was stuck for years while AMD passed them by. Apple finally had enough. Some would call that just business.
Please, please, please let it be Apple's main processors. A hysterical black eye to Intel and a kick in the balls to Apple fanboys. Win win!
Again, the issue was entirely Intel's incompetence at making progress for years. Apple would probably keep buying chips from Intel if they were good chips. After all, Apple bought Intel's entire modem business from them. More than likely, Intel will make other chips for Apple first. For example, every AirPod requires a chip, and every Apple Watch requires a chip. Apple's C1 modem chip could be fabricated by Intel.
You absolutely can, though. There is nothing stopping you from seeding the run of a single LLM, or even substituting the definition of random() with:
random() {
    return 5;
}
We can do this trivially.
And further, it seems you are now suggesting that replacing the above random() function with this one:
random() {
    input = ask-user-for-fair-dice-roll();
    return input;
}
and now you sit there rolling dice and inputting the results, and the computer program gains consciousness?
really?
The difference, of course, is that we currently DO actually know EXACTLY how an LLM works. We can snapshot the model and seed the random number generator to make it generate exactly the same output from exactly the same input every single time. We can pause it, set breakpoints, inspect and dump data structures.
It IS simply a program running on a CPU, and using RAM.
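As a rough illustration of that snapshot-and-seed point, here is a hedged sketch (it assumes the Hugging Face transformers library with GPT-2 as a stand-in model; the run_once() helper and the prompt are made up for this example, and bit-identical repeatability is only promised on the same hardware and software stack):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The AMD K5 was a CPU that", return_tensors="pt")

def run_once(seed):
    # Fixing the RNG seed fixes the sampled token sequence
    # for this snapshot of the weights and this prompt.
    torch.manual_seed(seed)
    out = model.generate(**inputs, do_sample=True, max_new_tokens=20,
                         pad_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(out[0])

# Same snapshot of the model + same prompt + same seed => same output, every run.
assert run_once(0) == run_once(0)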
Is it possible that's all humans are in the end? Sure, it's possible; I can't prove otherwise. But we are not remotely in a position to assert that it's the case.
You invoke philosophy, which is entirely appropriate. There are fairy tales, for example, of artists painting things so realistic that they come to life. And it poses an interesting question here: is there a difference between a simulation and the real thing? Can a simulation of life be "alive"? Or must it forever remain a simulation?
And a related, and perhaps ultimately simpler, question: can a *Turing machine simulation of life* be "alive"?
A lovely illustration of the question:
https://xkcd.com/505/
Can what you and I perceive as our lives, the universe around us, and everything REALLY be underpinned by some guy pushing pebbles around in a big desert somewhere?
Can the arrangement of stones in a desert, with some guy moving them around in a pattern he interprets as representing the information that describes our universe, actually "BE" our universe?
Or is the pattern of rocks JUST a pattern of rocks? Is the guy moving them around JUST moving them around? Is the interpretation of the pattern as a representation of the state of a universe just that, a representation?
Or do you truly think there is a galaxy with a planet with people on it having a conversation on slashdot, 'frozen in time', waiting for some guy to move the rocks into the next pattern, and that this somehow results in the experience we are sharing right now?
Or, put more succinctly: can an abstract representation of a thing be the thing? Be it bits in a DRAM module or pebbles arranged in the sand, can it be the thing it represents? Can the painting of a zebra, if it's done skilfully enough, be a zebra?