
Comment Re:Don't say the quiet part out loud (Score 1) 51

Are you literally incapable of looking over an ebay listing before you click "buy" to verify that it meets your specs? Are you literally incapable of looking over a form or report to verify that it's filled out correctly? How do you dress yourself?

In general, most online tasks are vastly faster to verify than to implement.

Comment Re:Reuters used to be able to write an article... (Score 5, Interesting) 75

Gasoline is the older word, and FYI, it originated in London. It was a product of John Cassell called "Cazeline". A Dublin shopkeeper named Samuel Boyd got into a trademark dispute with Cassell, so he changed the spelling to "Gazeline" as a dodge. The word "gasoline" appears as a listed, taxed product in the US in the 1860s. By contrast, the word "petrol" didn't come into play until the 1890s, as a product created by Carless, Capel & Leonard. They tried to trademark it, but the trademark failed and it became a generic. So yes, "gasoline" is a decades-older word than "petrol".

Also, for the record, if you want historical linguistic accuracy: all Rs are rhotic, never pronounced like "uh"; the a in words like "bath", "path", etc. doesn't sound like the o in "cot"; the suffix "-tary" (secretary, military, etc.) is two syllables, not "-trie"; and while Received Pronunciation better preserves medial "t"s (in American English they're more like a d), Brits increasingly drop them outright (e.g. water: American "wadder", UK commonly "wa'uh").

Fall IS the historic name. Autumn is a loanword from French that started taking over in common parlance in Britain in the late 1700s (before that it was mainly used in poetic speech - for example, Shakespeare preferred it to fall). And while we're at it: it's trash, not rubbish; the past participle of get is gotten; mad means angry (read the King James Bible); it's guess, not suppose; it's candy, not sweets; it's diaper, not nappy; etc.

While there certainly are plenty of elements in which British English remains more conservative than American English**, it's far more common for American English to be the more conservative one. In general, London is usually to blame: deviations often arose in London and then (due to its cultural dominance) spread to the rest of the UK.

That said, when it comes to spelling, British English is usually more historically conservative than American English. Webster sought to make words be spelled more like they sound - for example, colour to color, theatre to theater, etc. Though ironically, in some cases the US ended up restoring older spellings: the "-ize" ending of American English actually predates the American-British split; it had been displaced by "-ise" under French influence, only to be restored in the US.

** - Interestingly, it was the American retention of rhoticity that led to some of the vowel shifts where British English stayed closer to the original than the US. The classic case is Mary, marry, and merry: in American English, the vowels are all pronounced the same, whereas in British English, Mary's is the same as in American English (like in "fair", "stair", etc.), marry's is more like the a in "cat", and merry's is more like the e in "pet" or "step". But rather than unifying the vowels to sit more easily before a rhotic R, British English instead lost the rhoticity of the R.

Comment Re:Knowing your (local) audience. (Score 1) 64

Ugh, Slashdot messed up the italics.

Just to clarify: on the Macs, the GPU operates on system memory. It has pretty awful FLOPS (~26 TFLOPS), but what matters for LLM inference is that its memory latency is low and its bandwidth is high, and you can get versions with up to 512GB, enough for very sizable models.

DGX Spark (formerly called "Digits") is a tiny desktop box from NVidia with 128GB (you can chain two together for 256GB). It has 4x the FLOPS of the Macs (still well below a modern gaming GPU!) but 1/3rd the memory bandwidth. In practice you get about the same performance per dollar of capital cost, and inference is a bit more power efficient. Another selling point for the DGX Spark is that the environment is *designed* for inference and training.
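To put rough numbers on why bandwidth matters more than FLOPS here: single-stream decoding has to stream every active weight from memory for each generated token, so tokens/sec is bounded by bandwidth divided by model size. A back-of-the-envelope sketch in Python (the bandwidth figures are the published specs; the 70B/4-bit model size and the one-pass-per-token rule of thumb are my own simplifying assumptions):

# Crude upper bound on single-stream decode speed for a bandwidth-bound LLM.
# Assumption: each generated token streams every weight from memory once.
MODEL_BYTES = 70e9 * 0.5  # hypothetical ~70B-param model at 4-bit quant (~35 GB)

machines = {
    "M3 Ultra (Mac Studio)": 819e9,  # ~819 GB/s unified memory bandwidth
    "DGX Spark":             273e9,  # ~273 GB/s LPDDR5x bandwidth
}

for name, bandwidth in machines.items():
    print(f"{name}: <= ~{bandwidth / MODEL_BYTES:.0f} tokens/sec")

Real throughput lands below these bounds (KV cache traffic, compute overhead), but it shows why the Spark's FLOPS advantage doesn't translate into faster decoding.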

In practice, though, both are good options for home serving of large models. Again, it's inefficient compared to bulk serving of models on high-end servers with large-scale batching, but with bulk serving you don't get the advantage of fully controlling your own data.

Comment Re:Knowing your (local) audience. (Score 1) 64

I haven't looked at what this thing is, but why can't it be run on ordinary PC hardware? Either CPU or GPU, nvidia, etc? Why a Mac?

If you're only going to run small models at home, your best option by far is a modern NVidia gaming GPU. The problem comes when you want to run a large model at home, and there are really only two good "home scale" options for this: Macs like the M3 Ultra / Mac Studio, and the NVidia DGX Spark (1 or 2 linked together). You simply can't run these large models (even quantized) on regular gaming GPUs; they just don't have enough VRAM. You can run multiple GPUs in one computer, but the memory is still limited, and now you're bandwidth limited across the bus. And yes, you can do it with any computer on CPU if you have sufficient RAM, but it's immensely slow and inefficient. You can combine running on CPU with running on GPU (layer offloading), but the CPU heavily bottlenecks the GPU, which runs at near idle.
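For what it's worth, layer offloading is a one-knob affair in the llama.cpp ecosystem. A minimal sketch with the llama-cpp-python binding (the model path and layer count are placeholders, not recommendations):

from llama_cpp import Llama  # pip install llama-cpp-python (built with GPU support)

# Offload as many transformer layers as fit in VRAM; the rest run on the
# CPU. As noted above, the CPU-resident layers become the bottleneck.
llm = Llama(
    model_path="models/some-large-model.Q4_K_M.gguf",  # hypothetical path
    n_gpu_layers=35,  # placeholder; tune to your GPU's VRAM
)

out = llm("Explain memory-bandwidth-bound inference in one sentence:", max_tokens=64)
print(out["choices"][0]["text"])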

Even faster / more cost efficient / more power efficient are the high-end NVidia AI servers, but unless you're going to be serving out inference to a large number of people, there's no way you can justify the cost.

That said, you can do an awful lot with smaller models that fit on GPU. It's just a question of how much quality you care about for the sort of tasks you're wanting to run.

Comment Re:Laser safety [Re:Beaming Gigawatts of IR] (Score 1) 58

Lasers are much brighter than sunlight

These aren't. That's the point you're missing. The beam divergence over the nearly 36 thousand kilometers from geostationary orbit is so great that by the time the beam hits the surface it's at typical solar intensities, spread out over a large area.
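A rough diffraction-limited estimate backs this up (the wavelength, aperture, and power below are my own illustrative picks, not figures from the article):

import math

# Diffraction-limited spot diameter: d ~= 2.44 * wavelength * distance / aperture
WAVELENGTH = 1.0e-6  # meters; ~1 micron IR laser (illustrative assumption)
DISTANCE   = 3.58e7  # meters; roughly geostationary altitude
APERTURE   = 0.1     # meters; transmitter optics diameter (illustrative)
POWER      = 1e9     # watts delivered in the beam (illustrative)

spot_d = 2.44 * WAVELENGTH * DISTANCE / APERTURE   # ~870 m across
intensity = POWER / (math.pi * (spot_d / 2) ** 2)  # W/m^2 at the ground
print(f"spot ~{spot_d:.0f} m across, ~{intensity:.0f} W/m^2")
# -> on the order of 1-2 kW/m^2, comparable to direct sunlight (~1 kW/m^2)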

The sun also puts out a painful level of visible light, which kicks in the blink reflex if the sun enters your field of vision.

What you're freaking out about is that a space solar plant:

1) Aims in the wrong place
2) Keeps aiming in the wrong place
3) But *consistently* in the *same* wrong place
4) And you're staring up at the sky
5) And at the EXACT right spot
6) For long enough to cause permanent damage to your eyes

I'm sorry, but find something better to worry about.

Comment Re:EEVBlog explaining why this is BS (Score 1) 58

A thousand beams

... each with the power output of 1/1000th of the power plant.

Let's say this is a thousand square kilometer solar array in space getting 1 kW/m^2, with 20% efficiency in turning that into a beam headed for Earth, and 50% of that energy lost before hitting the ground. So each square meter of the array would deliver about 100 Watts to the ground (averaged out, of course). Then there are 1000 of those beams.

So your scenario is a *million square kilometers of solar panels*? Yes, obviously you can fry an area with that, but it's also not a single space solar plant. Even just one of those thousand square kilometer arrays you propose is a 200 GW solar plant.

Do you have any idea how big what you're proposing is? It's ~40% bigger than Texas.
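Sanity-checking that arithmetic (the flux and efficiency figures come from the scenario above; Texas's land area is my own recalled number):

ARRAY_KM2 = 1_000   # one array in the proposed scenario
SOLAR     = 1_000   # W/m^2 incident in orbit
BEAM_EFF  = 0.20    # sunlight-to-beam conversion efficiency
ATMO_LOSS = 0.50    # fraction of beam energy lost before the ground

per_m2_of_array = SOLAR * BEAM_EFF * (1 - ATMO_LOSS)      # W reaching the ground
one_array_gw = ARRAY_KM2 * 1e6 * SOLAR * BEAM_EFF / 1e9   # GW of beam per array
total_km2 = ARRAY_KM2 * 1_000                             # 1000 beams

print(per_m2_of_array, one_array_gw)   # 100.0 W/m^2 of array, 200.0 GW
print(total_km2 / 695_662)             # ~1.44x the area of Texas (~695,662 km^2)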
