seriously, slashdot? It's 2025 and you still can't do the Euro sign?
Even so, the prices are excessive. If I want to upgrade the SSD in the current MBP from 512 GB to 2 TB, that's +750 €.
Meanwhile, I can get a Western Digital Red SN700 with 2 TB for a bit over 200 €.
A Samsung 990 PRO with 2 TB is 245 € (it was just rated the best M.2 SSD on the market by Tom's Hardware).
Whatever exact chips Apple is using, they're not 3x as expensive as other high-quality SSDs.
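A quick back-of-the-envelope check, using only the retail figures quoted above (Apple's actual per-chip cost isn't public, so this is just the markup relative to a whole retail drive):

    apple_upgrade_eur = 750   # surcharge to go from 512 GB to 2 TB
    retail_2tb_eur = 245      # Samsung 990 PRO 2 TB at retail
    ratio = apple_upgrade_eur / retail_2tb_eur
    print(f"Upgrade costs {ratio:.1f}x a whole retail 2 TB drive")  # ~3.1x, for 1.5 TB of extra capacity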
Even if "locked in place" is your underlying assumption, anyone who's even heard of the real world from their mom who has a friend whose father once visited it should know that there is no rule without exceptions and even if that is perfectly true, a small number of those particles will not be locked in perfectly.
I get the impression that a company like ADP requires that an employer employ at least some minimum number of employees in an area. Otherwise, ADP appears to fall back to printing paper checks for the employer to mail. I don't know the specifics; I just know that I got ADP paper at one job after a bunch of layoffs, and I got ADP paper when I was the only remote worker in a particular state.
You just know that some doofus at the gym would bring a 10 kg steel dumbbell into the MRI room and ruin things for everyone.
No... this would have to be a locked, restricted room that only MRI techs and their clients may access.
However, the equipment is: A. Very expensive. Your gym is not likely to buy it.
B. Due to the expense, it is likely that your nearest local hospitals will make sure you are not able to obtain the necessary permits, in order to protect their monopoly. They've got a huge investment to protect, and they actually have to successfully sell MRIs at those rates to make a return on that investment. Can't have some random additional facility installing one nearby. Guaranteed it would be blocked by the government due to the availability of another provider's location and the lack of public necessity for yours.
There are still places I write out checks because I get a discount for doing it.
In my experience at my last three jobs (in the midwestern USA), small businesses that don't have enough employees in an area have to print and mail paper payroll checks instead of paying their employees through direct deposit.
LLMs absolutely, without question, do not learn the way you seem to think they do. They do not learn from having conversations. They do not learn by being presented with text in a prompt, though if your experience is limited to chatbots you could be forgiven for mistakenly thinking that was the case. Neural networks are not artificial brains. They have no mechanism by which they can 'learn by experience'. They 'learn' by having an external program modify their weights in response to the difference between their output and the expected output for a given input.
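If it helps, the "external program modifies the weights" part boils down to something like this toy gradient-descent step (a sketch with a single made-up scalar weight, not any real training loop):

    w = 0.5                    # a single "weight"
    x, target = 2.0, 3.0       # an input and the expected output
    lr = 0.1                   # learning rate

    pred = w * x               # the model's output for this input
    error = pred - target      # difference between output and expected output
    grad = 2 * error * x       # gradient of the squared error with respect to w
    w -= lr * grad             # the training code, not the model, changes w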
This is "absolutely without question" incorrect. One of the most useful properties of LLMs is demonstrated in-context learning capabilities where a good instruction tuned model is able to learn from conversations and information provided to it without modifying model weights.
It might also interest you to know that the model itself is completely deterministic. Given an input, it will always produce the same output. The trick is that the model doesn't actually produce a next token, but a list of probabilities for the next token. The actual token is selected probabilistically, which is why you'll get different responses despite the model being completely deterministic.
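Roughly, the split looks like this (a minimal Python sketch with made-up logits standing in for the model's output; real samplers add temperature, top-k, and so on):

    import math, random

    # Hypothetical logits the (deterministic) model produced for the next token.
    # Same input -> same logits, every single time.
    logits = {"cat": 2.0, "dog": 1.5, "pizza": 0.1}

    # Softmax turns the logits into a probability distribution over tokens.
    total = sum(math.exp(v) for v in logits.values())
    probs = {tok: math.exp(v) / total for tok, v in logits.items()}

    # The randomness lives entirely in this sampling step, not in the model.
    tokens, weights = zip(*probs.items())
    next_token = random.choices(tokens, weights=weights)[0]

    # Greedy decoding (always take the argmax) would make the whole pipeline deterministic.
    greedy_token = max(probs, key=probs.get)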
Who cares? This is a rather specific and strange distinction without a difference, and it doesn't seem related to anything stated in this thread. Randomness in token selection impacts the KV matrix, which impacts the evaluation of subsequent tokens.
Remember that each token is produced essentially in isolation. The model doesn't work out a solution first and carefully craft a response; it produces tokens one at a time, without retaining any internal state between them.
This is pure BS; key-value matrices are maintained throughout.
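This is literally how inference loops are written. A minimal sketch using the Hugging Face transformers API, assuming gpt2 as a stand-in model and greedy decoding for brevity:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    input_ids = tok("The quick brown fox", return_tensors="pt").input_ids
    past = None                          # the key/value cache, empty before the first step

    with torch.no_grad():
        for _ in range(10):
            out = model(input_ids=input_ids, past_key_values=past, use_cache=True)
            past = out.past_key_values   # keys/values for every token so far, carried forward
            next_id = out.logits[:, -1].argmax(dim=-1, keepdim=True)  # greedy pick
            input_ids = next_id          # only the new token goes in; the cache holds the rest

Within a single forward pass there's no carried-over scratchpad, sure, but across the generation loop the cached keys and values for every earlier token are exactly the state that gets reused.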
That's a very misleading term. The model isn't on mushrooms. (Remember that the model proper is completely deterministic.)
Again with the determinism nonsense.
A so-called 'hallucination' in an LLM's output just means that the output is factually incorrect. As LLMs do not operate on facts and concepts but on statistical relationships between tokens, there is no operational difference between a 'correct' response and a 'hallucination'. Both kinds of output are produced the same way, by the same process. A 'hallucination' isn't the model malfunctioning, but an entirely expected result of the model operating correctly.
LOL, see, the program isn't malfunctioning, it's just doing what it was programmed to do. These word games are pointless.
AI will certainly provide some investors with a great return, while other, less savvy investors will lose their shirts. But AI is here to stay; it's not going to suddenly disappear because everybody realizes it's a scam. Just as with the dot-com bubble in the 1990s, the AI bubble will burst, leaving behind the technologies that are actually useful.
The dot-com bubble provided value in the form of useful infrastructure investments. When the AI bubble bursts, all you are going to be left with are rooms full of densely packed GPUs that will be scrapped and sold off for pennies on the dollar.
I agree completely that it's absurd to suggest that AI will "replace humanity." But that doesn't mean AI (or LLMs specifically) isn't useful.
AI is a tool. Used well, while understanding its limitations, it can be a tremendous time-saver. And time is money.
How much of a time saver is it to have a magical oracle at your fingertips that constantly lies to you? How much time is saved when you have to externally cross check everything it says? It only saves "tremendous" time when you can afford not to care about the results.
Unlike some of these other mergers, the Netflix-Warner merger (or a merger of Warner with any other major studio) would require approval from foreign regulators who (generally) can't just be bought off.
Allowing any existing studio (and with all the shows Netflix makes and has made, they are definitely a studio at this point) to merge with Warner Bros is anti-competitive.
Doesn't matter if it's Paramount, Disney, Netflix, Sony, Universal, Amazon or anyone else; it should be blocked on competition grounds.
All of these post-training bludgeons are inherently dishonest. They attempt to sell an intentional lie inserted by the model designers and ultimately make the technology worse and more difficult to use.
"I am your density." -- George McFly in "Back to the Future"