Elon just made a pertinent statement:
https://futurism.com/future-so...
Some possibilities:
-The agent buys the wrong thing and Amazon sees a substantially higher rate of returns or other bad customer feedback
-The agent buys one thing despite Amazon search results trying to push a different option
-Amazon's upsell for "you may also like" is tanked by the agentic purchasing.
Don't worry, they are probably getting paid $300B by Oracle, $250B by Microsoft, and $38B from Amazon, so it will all work out nicely.
A lot of the deals lately seem to be company A and company B paying each other X amount of money and pretending that's big revenue, despite relatively little net money changing hands.
You could, in theory, have a context that is entirely within the sandbox and still useful. Hence my comment about getting things in and out of the environment potentially negating many of the scenarios I can think of. But broadly speaking, if you have some local processing to do, you feed the environment a blob, the environment can then pretend it's a normal file as far as it's concerned, and you pull the blob back out when done. WASM can't touch anything real, but you can feed it whatever is within reach of JavaScript, which is itself still sandboxed; only specific network touch points and user-indicated files can be put within JavaScript's reach.
So if you wanted to apply some Linux utility to a file in the browser, the user has to indicate a file via the browser; that action lets JavaScript access the file, load it into memory you've allocated for the purpose, and, when done, move the data back out.
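That dance is simple enough to sketch. Here's a minimal TypeScript version of the flow above, assuming a WASM module with hypothetical `alloc` and `process` exports (illustrative names, not any real API) that writes its output in place, purely for brevity:

```typescript
// Minimal sketch of the file-in, file-out dance described above. The WASM
// module's `alloc` and `process` exports are hypothetical illustrative names,
// and the module is assumed to write its output in place over the input.
declare const wasmInstance: WebAssembly.Instance; // assumed already instantiated elsewhere

async function runWasmOnUserFile(instance: WebAssembly.Instance, file: File): Promise<Uint8Array> {
  // JavaScript is the only party with permission to read the user-indicated file.
  const input = new Uint8Array(await file.arrayBuffer());

  const { memory, alloc, process } = instance.exports as {
    memory: WebAssembly.Memory;
    alloc: (len: number) => number;                // reserve `len` bytes, return an offset
    process: (ptr: number, len: number) => number; // run the utility, return output length
  };

  // Copy the blob into memory allocated for this purpose; the module sees an
  // ordinary buffer and never touches the real filesystem.
  const ptr = alloc(input.length);
  new Uint8Array(memory.buffer, ptr, input.length).set(input);
  const outLen = process(ptr, input.length);

  // Pull the blob back out when done (slice() copies it out of WASM memory).
  return new Uint8Array(memory.buffer, ptr, outLen).slice();
}

// Wired to a user-indicated file, e.g. <input type="file" id="pick">:
document.querySelector<HTMLInputElement>("#pick")?.addEventListener("change", async (e) => {
  const file = (e.target as HTMLInputElement).files?.[0];
  if (file) console.log(await runWasmOnUserFile(wasmInstance, file));
});
```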
But the much-needed sandbox greatly complicates things, and for some sorts of files the resource usage would be prohibitive in this scenario.
So I have had a few scenarios where I really didn't have any business moving data between the browser and a backend service and would just as soon have done the operation client-side, but the ecosystem equipped for the task wasn't exactly trivial to get working in-browser. I can imagine some such use cases being easier to port if a Linux instance could live transiently in the browser runtime.
I've spent a fair amount of time trying to wrangle specific use cases into this scenario, but I can imagine a 'lazier' path if a Linux layer already abstracted away the browser-runtime weirdness that many libraries aren't equipped to deal with natively.
Broadly speaking, I think the people who impose these requirements on my team are thinking about it wrong, and there's generally a smarter way to do it, but it does mean I get exposed to some weird use cases where a more traditional software interface abstracts away the browser-specific environment. Though I wager moving data in and out of the WASM may wipe out most of the potential benefit...
I agree this is more like 'religion' than science, as it is not falsifiable, even if this 'proof' purports to do that. It's kind of a pointless exercise of no practical use, however...
the universe that simulation is running in would need to be infinitely more complex and large than the one we're in. That's non-sensical in itself
But it isn't non-sensical, because we would have no perspective from which to judge 'complexity' in absolute terms. We think quantum stuff is small and the speed of light is fast, but that's just an artifact of what we can possibly observe. If a 3D engine were hypothetically self-aware, it might conclude that triangles are the smallest possible things, and that some game-engine limitations dictate absolute limits of reality, limits the outside world sees as a significant simplification.
within a given universe that contains it.
That's the thing: by definition, in the hypothetical, the computation device is *not* within the given universe that contains it. Again, look at something like Minecraft, where people build logic devices in-game: on the scale of the target universe they are impossibly huge, because that's what the in-engine physics allow. Such a self-aware hypothetical being would conclude that even a simple calculator has to be the size of a large building, and would mock the concept of a handheld device simulating everything they observe, despite us knowing that such a game engine is in fact on the easier end of what a handheld computer can do.
so slow as to be pointless.
Which brings up another point: we have no absolute concept of time. If it took the hypothetical higher-order universe an hour to simulate a second of our universe, we'd be none the wiser. We do this sort of thing in simulations all the time, though we don't run them for long.
There's also the fact that we have no way of really *knowing* that everything we think we remember and observe was ever substantially simulated at all. In Half-Life, the in-universe characters would perceive a phenomenon as maddeningly complex physics, because that's what the game engine presents; we know it's just "special effects". We think we have memories spanning years and history spanning centuries, but plenty of games present themselves the same way without ever actually *running* that material, just preloading the memory/history into the scenario. Any individual can only speak to what they see in that instant of time and can't know there's really anything substantial directly behind them, let alone light-years away.
Trying to disprove it is pretty much a waste, because the goalposts can move freely.
Hey, if running on billions is good enough for Java, it's good enough for Chrome and Chrome-likes.
I think it was intended as a supremely milquetoast query that a search engine would answer by popping up the specific thing the user is after.
And the LLM-first approach is *really* bad at that. If you are looking for an existing, canned piece of content, the LLM is a letdown. A large chunk of what people want is an existing thing.
LLM as a readily available *option* for the sorts of inputs that it works with? Sure. As a replacement for internet search, not so much.
But the summary is misleading given the output from the calculator.
So the summary says 4K doesn't matter in the typical living room. OK: if I look up a typical 4K living-room set, the very cheap ones are about 43". I plug that into the calculator at a 2 m viewing distance (which seems pretty typical), and it says 99% of people could tell that it wasn't full resolution.
So I look up 8K sets, and they now seem to target 85" diagonals. I don't know that I want to devote 85" to a TV, but plugging in 85" for a 4K set shows it is apparently also distinguishable to 14% of the population.
Now, if you have a 27" TV viewed from a couch, absolutely none of this matters. But at the screen sizes actually associated with the higher resolutions, their own calculator says there is a perceptible difference, though subjectively I don't know that it should reasonably be considered worth it.
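For what it's worth, the geometry behind this is easy to sanity-check yourself. A rough TypeScript sketch of the pixels-per-degree math; the ~60 ppd threshold below is just the conventional 20/20 rule of thumb (1 arcminute per pixel), not the population model the linked calculator actually uses:

```typescript
// Sanity-check sketch of the size/distance/resolution geometry. The linked
// calculator's population model is NOT reproduced here; 60 ppd is just the
// conventional 20/20 rule of thumb.
function pixelsPerDegree(diagonalInches: number, horizontalPixels: number, distanceMeters: number): number {
  // Width of a 16:9 panel from its diagonal.
  const widthMeters = diagonalInches * 0.0254 * (16 / Math.hypot(16, 9));
  const metersPerPixel = widthMeters / horizontalPixels;
  // Arc length subtended by one degree at the viewing distance.
  const metersPerDegree = 2 * distanceMeters * Math.tan(Math.PI / 360);
  return metersPerDegree / metersPerPixel;
}

// 43" set at 2 m: 1080p lands at ~70 ppd (above the 60 ppd rule of thumb,
// so many viewers can still resolve more), while 4K packs ~141 ppd.
console.log(pixelsPerDegree(43, 1920, 2.0).toFixed(0)); // ~70
console.log(pixelsPerDegree(43, 3840, 2.0).toFixed(0)); // ~141
// 85" 4K at 2 m falls back to ~71 ppd, which squares with their calculator
// showing a slice of the population still distinguishing it from higher res.
console.log(pixelsPerDegree(85, 3840, 2.0).toFixed(0)); // ~71
```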
Oh, I'm fully aware that it might cut it off, and then I'll use the FreeSleep project or, failing that, the mod where someone replaced the base with another temperature controller.
The biggest problem is that a company like Eight Sleep has the marketing. So if someone really wants a temperature-controlled bed, it's hard to know what a credibly good one is. I *think* Chilipad is a good one, but it's a pretty pricey thing to evaluate, and thanks to internet-everything, it's not like you can see for yourself.
But yeah, Eight Sleep deserves every bit of bad press they can get for being such a douche company.
They have local controls on the most recent models; *however*, the controls deactivate unless the cloud has blessed the user in the last 24 hours. During setup, they'll talk to local phones without internet, but *only* for the purpose of getting the Wi-Fi configured. They know exactly how to make local phone control work.
It was never about cost savings; it was always about a path to forced recurring revenue. They opened with early adopters not having to pay a subscription fee, but still forced them through the servers. Early adopters also didn't pay too much and actually got a decent warranty. Once they had some good momentum, they cranked up the price, tanked the warranty, and forced the subscription.
The code was written for the purpose of forcing users into a monthly subscription.
The goal was not to deliver the best user experience. To the extent they have tried to accommodate demands for local control, they have predicated it on the device having been 'blessed' by the cloud within the last few hours. That takes explicit effort: implementing a local control loop and then making sure it checks for approval.
My wife insisted on it and we bought one when they were getting started and relatively cheap and the subscription was not yet required. We've been grandfathered in so we don't pay the subscription, and thanks to the leaks we have been upgraded to the latest model, so I have familiarity.
They are a shit company with a decent hardware design (now) that stops short of being good all around precisely to gouge users.
Even worse, they have a local control loop, but they deliberately cripple it.
If the bed is 'on' (a state only reachable through their cloud connection), then you can adjust things locally just fine. However, it will refuse to do this if the internet hasn't recently approved the device to operate locally.
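Put together, the behavior amounts to something like the gating below: a hypothetical reconstruction from the observed behavior described here, not their actual firmware.

```typescript
// Hypothetical reconstruction of the gating described above; the names and
// the time window come from these comments, not from actual firmware.
const CLOUD_LEASE_MS = 24 * 60 * 60 * 1000; // the "blessed in the last 24 hours" window

interface BedState {
  poweredOn: boolean;        // 'on' is only reachable through their cloud
  lastCloudBlessing: number; // epoch ms of the last successful cloud check-in
}

function localAdjustAllowed(bed: BedState, now: number = Date.now()): boolean {
  // A working local control loop exists...
  const leaseFresh = now - bed.lastCloudBlessing < CLOUD_LEASE_MS;
  // ...but it refuses to act unless the cloud has blessed the device recently.
  return bed.poweredOn && leaseFresh;
}
```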
This 'enhancement' was added after people demanded a local remote, buttons, or *anything*. They implemented an earbud-style gesture: tap the side of the bed N times to adjust the temperature or dismiss the alarm.
So they know precisely what they are doing, it's not dumb engineering, it's malicious engineering.
But what if no one is ever going to use the output anyway? Might not need to check it.
I've dealt with *way* too many business processes that have people generate obscene amounts of prose that no one will ever read or even skim or reference.
I remember one of these companies championing that they used an LLM to complete an important 'overhaul' of their source code. The 'overhaul' was generating a separate document detailing all these uncommented functions and what the LLM guessed they were supposed to do and how, in plain text. The theory was that if one day they actually wanted to start porting this code to something else, that document would be 'helpful'. And of course:
- They never will do that porting
- Even if they did, the developers will likely ignore that document.
"All we are given is possibilities -- to make ourselves one thing or another." -- Ortega y Gasset