Comment Re:WEBP is deprecated (Score 1) 11
Even when Chrome adds support, we'll have to wait ages before we can actually reliably use the format without having to implement fallback logic and fallback formats for legacy browsers.
AVIF is also painfully slow. And if I recall correctly, it's outperformed by JPEG XL at moderate and low compression levels (though - again, if I recall correctly - AVIF wins on heavily compressed images). AVIF also faces some patent threats, and it's missing a lot of JPEG XL's interesting features.
A practical issue with a circle is that it is not a circle until it is finished,
That's not the reason at all, AFAIK. The reasoning is: okay, we want people to be able to move from one place to some distant place in the city at the maximum comfortable speed, which is limited by G-forces. You have some guaranteed G-forces from first accelerating and then decelerating. But if the route is linear, those are your only G-forces. If it's curved, however, you also have radial G-forces on top of them.
The Line's train going from one end to the other (170 km) nonstop is supposed to do it in 20 minutes, i.e. a mean speed of ~510 kph. Let's say a peak of 800 kph. Now, if we bend that 170 km into a circle, that's a 54 km diameter, 27 km radius. From the centripetal acceleration formula a = v^2/r, that's 222.2^2 / 27000 ~= 1.83 m/s^2, or a constant ~0.2 g to the side. This is on top of the G-forces from your acceleration and deceleration. You can probably deal with ~0.2 g in a train without much discomfort if everyone is seated, though it's double what's acceptable for standing passengers. But you can eliminate it if the city is linear (at the cost of increasing the mean distance the average person has to travel to get from one arbitrary point in the city to another).
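If anyone wants to sanity-check those numbers, here's a quick back-of-the-envelope script (my own sketch; the 170 km route and 800 kph peak are just the figures assumed above, not official specs):

    import math

    # Back-of-the-envelope check of the lateral G-force figure above.
    # Assumes a 170 km route bent into a circle and an 800 kph peak speed.
    track_length_m = 170_000
    peak_speed_ms = 800 / 3.6                    # 800 kph ~= 222.2 m/s

    radius_m = track_length_m / (2 * math.pi)    # ~27 km
    accel = peak_speed_ms ** 2 / radius_m        # a = v^2 / r
    print(f"radius ~{radius_m / 1000:.0f} km, "
          f"lateral accel ~{accel:.2f} m/s^2 (~{accel / 9.81:.2f} g)")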
That's not to defend this concept, because the city doesn't need to be 170 km long; you can just make it more 2D and have the distances be vastly shorter (at the cost of needing some extra lateral travel within the city). Honestly, if I were building a "designer" city from the ground up, I'd use a PRT (Personal Rapid Transit) system rather than trying to make it super-elongated.
What got me is that I don't see why this isn't readily resolved by active damping, the same systems that many tall towers now use to resist earthquakes or resonant wind forces. Big heavy weight at the top (or in this case the bottom) hooked up to actuators that make it move in an inverse direction to the sway.
Again, this is not to defend this colossal waste of money. I just don't see why there aren't ready solutions for this specific problem.
Agreed - but that said, there are space elevator alternatives, like the Lofstrom Loop / Launch Loop, which at least theoretically can be built with modern materials (and have far better properties anyway - not latitude-constrained, provides dV, vastly higher throughput, far more efficient, stores energy / can add cheap energy at off-peak times, etc.). One could always "waste" money on them trying something new.
No. Like any software, AI requires maintenance, and that maintenance costs money, lots of money.
It does not. Models need nothing more than the storage of some gigs of weights, and a GPU capable of running them.
If you mean "the information goes stale": one, that doesn't happen at all with RAG. And two, updating information with a finetune or even a LoRA is not a resource-intensive task. It's making new foundation models that is immensely resource-intensive.
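To illustrate the RAG point with a toy sketch (the library and model name here are just common examples I'm assuming, not anyone's actual stack): you embed your current documents, look up the relevant one at query time, and paste it into the prompt - no retraining involved.

    from sentence_transformers import SentenceTransformer, util

    # Toy RAG: stale information is handled by retrieval at query time,
    # not by retraining the model.
    documents = [
        "Office hours moved to building 7 as of last Tuesday.",
        "The 2019 handbook says office hours are in building 3.",
    ]

    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vecs = embedder.encode(documents, convert_to_tensor=True)
    query_vec = embedder.encode("Where are office hours held?", convert_to_tensor=True)

    # Pick the most relevant document and stuff it into the prompt.
    best = int(util.cos_sim(query_vec, doc_vecs).argmax())
    prompt = f"Context: {documents[best]}\n\nQuestion: Where are office hours held?\nAnswer:"
    print(prompt)   # hand this prompt to whatever local model you're running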
Can you integrate it into your products and work flow?
Yes, with precisely the difficulty level of any other API.
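For example, most local inference servers (Ollama, llama.cpp's server, vLLM, etc.) expose an OpenAI-compatible HTTP endpoint, so "integration" is one POST request. The URL and model name below are assumptions about a typical local setup, not anyone's production config:

    import requests

    # Query a local OpenAI-compatible endpoint (e.g. Ollama's default port).
    resp = requests.post(
        "http://localhost:11434/v1/chat/completions",
        json={
            "model": "llama3",   # whatever model you've pulled locally
            "messages": [{"role": "user", "content": "Summarize this ticket: ..."}],
        },
        timeout=120,
    )
    print(resp.json()["choices"][0]["message"]["content"])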
Can you train it on your own data?
With much less difficulty than trying to do that with a closed model.
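As a rough sketch of what that looks like with open weights (model name and hyperparameters are placeholders, not a tested recipe): you attach a LoRA adapter and train only a tiny fraction of the parameters.

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    # Attach a LoRA adapter to an open-weights model; only the adapter
    # weights get trained, which is why this isn't a resource-intensive task.
    model = AutoModelForCausalLM.from_pretrained("your-open-model-of-choice")

    lora_cfg = LoraConfig(
        r=8,                                   # low-rank dimension
        lora_alpha=16,
        target_modules=["q_proj", "v_proj"],   # attention projections; model-dependent
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_cfg)
    model.print_trainable_parameters()         # typically well under 1% of the weights
    # ...then train on your own data with the usual transformers Trainer loop.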
And my point is that AI wouldn't just stop being used even if the bubble imploded so heavily that all of the major AI providers of today went under. It's just too easy to run today. The average person who wants something free would end up using a worse-quality model, but they're not going to just stop using models. And inference costs for higher-end models would crash if the big AI companies were no longer monopolizing the giant datacentres (which will not simply vanish just because their owners lose their shirts; power is only about a third of the cost of a datacentre, and it gets even cheaper if you idle datacentres during their local electricity peak-demand times).
Your scenario is impossible, so try again.
Because we're discussing a scenario where the big AI companies have gone out of business, remember? And the question is whether people just stop using the thing that they found useful, or whether they merely switch to whatever alternative still works.
It's like saying that if Amazon went out of business, people would just stop buying things online because "going to a different website is too hard". It's nonsensical.
They believed you could mimic intelligence with clockwork, etc. Why does it only count if it involves computers?
If you want to jump to the era of *modern* literature, the first generally accepted robot in (non-obscure) modern literature is Tik-Tok from the Oz books, first introduced in 1907. As you might guess from the name, his intelligence was powered by clockwork; he was described as no more able to feel emotions than a sewing machine, and was invented and built by Smith and Tinker (an inventor and an artist). Why not electronic intelligence? Because the concept of a programmable electronic computer didn't exist then; even ENIAC wasn't built until 1945. The best computers in the world in 1907 worked by... wait for it... clockwork. The most advanced "computer" in the world at the time was the Dalton Adding Machine (1902), the first adding machine to have a 10-digit keyboard. At best, some adding machines had electric motors to drive the clockwork, but most didn't even have that; they had to be wound. That gear-and-lever interior was the most advanced computer in the world in the era Tik-Tok was introduced. In the Greco-Roman era, the equivalent was geared clockwork mechanisms - technology that, to a distant land that heard of it, probably sounded so advanced that it fueled the later rumours that the Greco-Romans were building clockwork humans capable of advanced actions, even tracking and hunting down spies.
I think this is an oversimplification. Musk dreams of a sci-fi future. Isaacman does too (and is friends with Musk). Duffy wants to gut NASA. Hence, Musk strongly supported Isaacman. It's not too complicated; you don't need to search for subtext when what's out in the open makes perfect sense.
They don't have to "understand" anything. They just have to know that "If I go to this website, I can still ask the AI questions, even though ChatGPT shut down". Or that "If I click to install this app, I get an icon on my desktop and I can ask the AI questions there".
This is what got me. Why the hell are they calling a crypto auction something aimed at "the AI generation", when they clearly mean "Cryptobros"?
This is unscientific, but long ago I conducted a poll on the Stable Diffusion subreddit, and one of the questions asked about people's opinions of crypto and NFTs. Only a small percentage liked them. The most popular poll choice by far was one with wording along the lines of "Crypto and NFTs should both go drown in a ditch."
It's an entirely different market segment. Crypto and NFTs appeal to gamblers, criminals, and anarcho-libertarians. AI appeals to those who want to create things, to automate things, and to save time or accomplish more. There's no logical relation between "This high school kid wants to save time on her homework" and "this 42-year-old mechanic thinks this bad drawing of an ape is going to be worth millions some day because a hash somewhere links its checksum to his private key."
The GP's comment wasn't claiming there's a nuclear waste problem (there isn't). They were talking about how nuclear waste can be burned in a breeder reactor, producing orders of magnitude more energy than the burning of a couple tenths of a percent of the natural uranium in a conventional reactor does.
Despite the press hype about thorium (which is way more popular with the media and nerds on the internet than with actual nuclear engineers), nuclear power is already basically unlimited, even without breeder reactors (which are very much viable tech, and much more mature than thorium). Only under an incredibly weak definition is it in any meaningful way "limited": if you restrict yourself to currently quantified reserves, at current fuel prices, with current production mining tech, you have a bit over two centuries' worth at current burn rates. But this is obviously nonsense. Uranium production tech isn't going to advance in *two centuries*? Nobody is going to explore for more in *two centuries*? And as for "at current prices": fuel is only a very small percentage of the cost of fission power, so who cares if prices rise? Besides, rising prices or advancing production tech don't just put linearly more of a resource onto the market; they put exponentially more onto it. As an example with uranium: seawater uranium could power the world's current (overwhelmingly non-breeder) reactor fleet for 13,000 years, and current lab-scale tech is projected to be nearly as cheap as conventional uranium production at scale.
Also, if you switch to breeder reactors, you don't just extend the amount of fuel you have by two orders of magnitude - the cost of the raw mined uranium also becomes two orders of magnitude less relevant than its already very small percentage of the cost of fission power generation, because you need so much less per kWh.
As for any thoraboos in the comments section: thorium fuel is more complex and expensive to fabricate (fundamentally - thorium dioxide has a higher melting point and is much harder to sinter), it's more complex to reprocess (it's more difficult to dissolve), its waste is much more hazardous over human timescales, the claimed resistance to nuclear proliferation is bunk, the tech readiness level is low and the costs are very high, and it's unclear it'll ever be economically competitive - most in the nuclear industry are highly dubious (due to what's needed to actually burn it vs. uranium). Hence the lack of investment. And I say this with the acknowledgement that nuclear power is already a very expensive form of electricity generation.
Easy for you, a technical person familiar with LLMs and WebAssembly
I'm not talking about how to develop LLM inference servers. You don't have to understand WebAssembly in order to run a WebAssembly program in your browser any more than you have to understand Javascript to run Javascript in your browser. It's *less* technological knowledge than using the Play store. And installing Ollama is no more difficult than installing any other app.
Your difficulty conceptions are simply wrong.
The only difference between a car salesman and a computer salesman is that the car salesman knows he's lying.