Comment Re:PIaaS (Score 1) 64
And just to head you off: no, you can't "tag it yourself". Tags are denoted by special tokens. The tokenizer does not convert any supplied text into these tokens.
And just to head you off: no, you can't "tag it yourself". Tags are denoted by special tokens. The tokenizer does not convert any supplied text into these tokens.
Are you literally incapable of looking over an ebay listing before you click "buy" to verify that it meets your specs? Are you literally incapable of looking over a form or report to verify that it's filled out correctly? How do you dress yourself?
In general, most online tasks are vastly faster to verify than to implement.
No, it does not simply make it "harder". The LLM only looks at the instruction-tagged section for instructions. It doesn't look at other tags for instructions.
Gasoline is the older word, and FYI, it originated in London. It was a product of John Cassell called "Cazeline". A Dublin shopkeeper named Samuel Boyd got into a trademark dispute with Cassel, so changed the spelling to "Gazeline" as a dodge. The word "Gasoline" appears as a listed product taxed in the US in the 1860s. By contrast, the word "Petrol" didn't come into play until the 1890s, as a product created by Carless, Capel & Leonard. They tried to trademark it, but the trademark failed and it became a generic. So yes, "gasoline" is a decades-older word than "petrol".
Also, for the record, if you want historical lingustic accuracy: All Rs are rhotic, never pronounced like "uh"; the a in words like "bath", "path", etc doesn't sound like the o in "cot"; the suffix "-tary" (secretary, military, etc) is two syllables, not "trie"; and while Received Pronunciation has better preservation of central "t"s (in American English they're more like a d), increasingly Brits drop them outright (E.g. water: American "wadder", UK commonly "wa'uh").
Fall IS the historic name. Autumn is loanword originating in French that started taking over in common parliance in the late 1700s in Britain (before it was mainly used in poetic speech - for example, Shakespeare preferred it to fall).. And while we're at it: it's trash, not rubbish; the past participle of got is gotten; mad means angry (read the King James Bible); it's guess, not suppose; it's candy, not sweets; it's diaper, not nappy; etc.
While there certainly are plenty of elements in which British English remains more conservative than American English**, it's well more common for American English to be more conservative. In general, it's usually London to blame; often deviations arose in London and then (due to its cultural domination) spread to the rest of the UK.
That said, when it comes to spelling, British English is usually more historically conservative than American English. Webster sought to make words be spelled more like they sound - for example, colour to color, theatre to theater, etc. Though ironically the US in some cases ended up restoring past spellings - the change of the "-ize" ending of American English actually predates the American-British split; it had been lost under French influence to "-ise", only to be restored in the US.
** - Interestingly, it was the American retention of rhoticity that led to some of the vowel shifts that British English kept as more original than the US. The classic case is Mary, Marry, and Merry - in American English, the vowels are pronounced the same, whereas in British English, Mary's is the same as in American English (like in "fair", "stair", etc), Marry is more like the a in "cat", and Merry is more like the e in "pet" or "step". But British English lost the rhoticity of the R instead of unifying the vowels to be easier with a rhotic R.
That's why I mill my own flour, smelt my own steel, saw my own timber and synthesize my own plastics.
Be sure to ask for blink tags and a bunch of "under construction" gifs and webring banners
People enjoy clicking through dozens of ebay listings trying to see which product actually matches their particular set of specs? People enjoy filling out forms? People enjoy getting quotes from electricians and plumbers? People enjoy filing expense reports? What on Earth are you smoking?
Yeah, while there's a lot of excitement about Clawdbot, I've also seen a number of complaints about the... idiosyncratic decisions of the developer. I understand that there's fork projects underway.
Ugh, Slashdot messed up the italics.
Just to clarify: on the macs, the GPU operates on system memory. It has pretty awful FLOPS (~26 TFLOPS), but what matters for LLM inference is that its latency is low and bandwidth is high, and you can get versions with up to 512GB, for very sizable models
DGX Spark (formerly called "Digits") is a tiny desktop box from NVidia with 128GB (you can chain two together for 256GB). It has 4x the FLOPS than the macs (still well less than a modern gaming GPU!) but 1/3rd the memory bandwidth. In practice you get about the same performance per dollar of capital cost, and inference is a bit more power efficient. Another selling point for the DGX Spark is that the environment is *designed* for inference and training.
In practice, though, both are good options for home serving of large models. Again, it's inefficient compared to bulk serving of models on high-end servers with large-scale batching, but then you don't have the advantages of fully controlling your data.
Sure there is, structured input.
I haven't looked at what this thing is but why can't it be run on ordinary PC hardware? Either CPU or GPU, nvidia, etc? Why a a Mac?
If you're only going to run small models at home, your best option by far is a modern NVidia gaming GPU. The problem comes when you want to run a large model at home. And there's really only two good "home scale" options for this: macs like the M3 Ultra / Mac Studio, and the NVidia DGX Spark (1 or 2 linked together). You simply can't run these large models (even quantized) on regular gaming GPUs; they just don't have enough VRAM. You can run multiple GPUs in one computer, but the memory is still limited and now you're bandwidth limited across the bus. And yes, you can do it with any computer on CPU if you have sufficient RAM, but it's immensely slow and inefficient. You can combine running on CPU with running on GPU (layer offloading), but the CPU heavily bottlenecks the GPU and the GPU runs at near idle.
Even faster / more cost efficient / more power efficient are the high-end NVidia AI servers, but unless you've going to be serving out inference for a large number of people, there's no way you can justify the cost.
That said, you can do an awful lot with smaller models that fit on GPU. It's just a question of how much quality you care about for the sort of tasks you're wanting to run.
A glue layer is otherwise known as an agentic framework. Hence the GP's comment about "having an agent that they control and own".
Lasers are much brighter than sunlight
These aren't. That's the point you're missing. The beam divergence is so great from traveling nearly 36 thousand kilometers that by the time it hits the surface it's at typical solar intensities, spread out over a large area.
The sun also puts out a painful level of visible light, which kicks in the blink reflex if the sun enters your field of vision.
What you're freaking out about is that a space solar plant:
1) Aims in the wrong place
2) Keeps aiming in the wrong place
3) But *consistently* in the *same* wrong place
4) And you're staring up in the sky
5) And at the EXACT right spot
6) For long enough to cause permanently damage to your eyes
I'm sorry, but find something better to worry about.
A thousand beams
Let's say this is a thousand square kilometer solar array in space getting 1 kW/m^2 with 20% efficiency in turning that into a beam headed for Earth and then 50% of that energy is lost before hitting the ground. So 1 square meter of the array would produce about 100 Watts per square meter on the ground (averaged out, of course). Then there are 1000 of those beams
So your scenario is a *million square kilometers of solar panels*? Yes, obviously you can fry an area with that, but it's also not a single space solarplant. Even just one of those thousand square kilometer arrays you propose is a 200 GW solar plant.
Do you have any big what you're proposing is? It's ~40% bigger than Texas.
Nothing dumber than insisting to people, "No, you're enjoying the WRONG kind of music. That music was made with the WRONG tools - stop enjoying it! Enjoy THIS music - it's made from the RIGHT tools!"
"You know, we've won awards for this crap." -- David Letterman