
Comment Science fiction is not about the new shiny things, (Score 1) 107

It's about how human beings react to new shiny things. I would ballpark science fiction as being about 50% cautionary tales and 50% hopeful inspiration. If you miss those lessons you're going to focus on the shiny things. Kurt Vonnegut wrote about ice-nine because his brother Bernard was helping figure out how to freeze clouds and create weather. A fable about how scientists don't always look at the full effects of what they create. If you want the phasers but don't want the society, it may not go well.

Comment Re:I never stop being amazed (Score 4, Informative) 48

"Matter-over-Thread" is actually a solid strategy compared to most 'cloud connected' wifi smart devices.

This is more akin to Zigbee/Z-Wave. It's a local, non-internet scheme for device communication and control. You can get a totally local, air-gapped Matter over Thread setup running without internet access. It's when you pick a cloud-connected Thread border router that you get into trouble, but you can roll your own, e.g. with the Home Assistant platform providing a way forward.

Comment no thanks (I'm an author) (Score 1) 30

Won't happen, at least not with my books.

There is a reason writing the last one took two years. Many of its passages have very carefully considered wordings. Intentional ambiguities. Alliterations. Words chosen because the other term for the same thing is too similar to another thing that occurs in the same paragraph. Names picked with intention, by the sound of them (harsher or softer, for example).

I've used AI extensively in many fields, including translations. It's pretty good for normal texts like newspaper articles or Wikipedia or something. But for a book, where the emotional impact of things matters, where you can't just substitute one word for a synonym and get the same effect - no, I don't think so.

This is one area where even I, with a generally positive attitude toward AI, want a human translator with whom I can discuss these things, and from whom I can get a feeling of "did she understand this part of the book and why it's described this way?".

Comment Re:If all of AI went away today (Score 1) 149

No. Like any software, AI requires maintenance, and that maintenance costs money, lots of money.

It does not. Models need nothing more than storage for a few gigabytes of weights and a GPU capable of running them.

If you mean "the information goes stale": one, that doesn't happen at all with RAG. And two, updating information with a finetune or even a LoRA is not a resource-intensive task. It's building new foundation models that is immensely resource-intensive.
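The RAG point is easy to illustrate: you keep the model frozen and just retrieve fresh documents into the prompt at query time. A minimal sketch, with a toy word-overlap scorer standing in for real embeddings (the document store, scoring function, and prompt layout here are all illustrative assumptions; a production setup would use learned embeddings and a vector database):

```python
from collections import Counter

def score(query, doc):
    """Word-count overlap -- a crude stand-in for embedding similarity."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query, docs, k=1):
    """Return the k documents that best match the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, docs):
    """Prepend retrieved context so a frozen model can answer with fresh facts."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical document store with one "fresh" fact the model never saw.
docs = [
    "The 2025 release renamed the config file to settings.toml.",
    "Clockwork automata fascinated the Greco-Roman world.",
]
prompt = build_prompt("What is the config file called in the 2025 release?", docs)
```

The model's weights never change; only the prompt does, which is why staleness isn't an issue.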

Can you integrate it into your products and work flow?

Yes, with precisely the difficulty level of any other API.
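"Any other API" can be taken almost literally: common local inference servers (llama.cpp's server, Ollama, vLLM) expose an OpenAI-compatible HTTP endpoint, so integration is one JSON POST. A sketch that assembles such a request - the URL, port, and model name are assumptions for a hypothetical local server, and the actual network call is left to the caller:

```python
import json

def build_chat_request(prompt, model="local-model",
                       base_url="http://localhost:8080/v1"):
    """Assemble an OpenAI-style chat-completion request for a local server."""
    url = f"{base_url}/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return url, json.dumps(payload)

url, body = build_chat_request("Summarise this changelog.")
# POST `body` to `url` with Content-Type: application/json,
# e.g. requests.post(url, data=body, headers={"Content-Type": "application/json"}).
```

Swapping providers, or swapping cloud for local, is then just a change of `base_url`.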

Can you train it on your own data?

With much less difficulty than trying to do that with a closed model.
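The reason a LoRA is cheap shows up directly in the parameter counts: instead of updating a full d-by-d weight matrix, you train two thin matrices whose product is a low-rank delta. A numpy sketch with illustrative sizes (the width and rank here are assumptions, not any particular model's):

```python
import numpy as np

d, r = 4096, 8                    # model width and LoRA rank (illustrative)
W = np.zeros((d, d))              # frozen pretrained weight (stays untouched)
A = np.random.randn(r, d) * 0.01  # trainable down-projection
B = np.zeros((d, r))              # trainable up-projection (initialised to zero)

full_params = W.size              # what a full finetune would have to update
lora_params = A.size + B.size     # what a LoRA updates instead

# The effective weight is W + B @ A; with B starting at zero the delta is
# exactly zero, so training begins from the pretrained behaviour.
delta = B @ A
```

Here the trainable parameters are under half a percent of the full matrix, which is why adapting an open model to your own data is feasible on modest hardware.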

Comment Re:If all of AI went away today (Score 1) 149

And my point is that AI wouldn't just stop being used even if the bubble imploded so heavily that all of today's major AI providers went under. It's just too easy to run today. The average person who wants something free would on average use a worse-quality model, but they're not going to just stop using models. And inference costs for higher-end models would crash if the big AI companies were no longer monopolizing the giant datacentres (which will not simply vanish just because their owners lose their shirts; power is only about a third of the cost of a datacentre, and it gets even cheaper if you idle datacentres during their local electricity peak-demand times).

Comment Digital Media and the Truth (Score 1) 71

I remember a paper I found interesting which was remarking on the changing landscape of photoshop and other digital tools transforming the 'validity' of media as evidence in court cases. How, previously, having a picture or security camera footage was considered 'definitive' proof, and how the march of technology was eroding a jury's confidence in such evidence and opening new doors into reasonable doubt. The paper's focus was that the idea of how 'accurate' such media was has always been evolving, as we play with digital compression and display technologies, and that there has always been a drift in digital media between the 'captured' image and an ever changing landscape of how we attempt to display and view it.

I don't see a problem in continuing to refine and introduce new tools that offer additional options in how we interact with such media. Colorization, upscaling, and other 'enhancements' are always going to be controversial as purists argue about how it is 'supposed' to be and connoisseurs adopt a growing list of preferences among the available technologies. The challenge comes when such technologies are not optional, and manufacturers must make choices about which to include in our phones, monitors, and televisions so that any given device has a chance of displaying media made today as well as media captured decades ago in a variety of formats, scalings, frame rates, and codecs.

As I get older and the gulf between my childhood and today continues to grow, I am continually introduced to new versions of nostalgic media that aren't quite right or just feel unsettling or wrong. Part of this I see as the cost of growing older, but part of it is forced obsolescence as the choices on offer change or dwindle. Added complexity is added cost. My fancy new television has a dizzying array of settings and options compared with the analog knobs of old. Deciding which to include, which to continue to support, and how many to offer or bombard customers with is still a continually shifting landscape.

And while I agree with those who say these new technologies should be optional, I am under no illusion that this can or will always be the case. How long will we continue to carry the ever-growing catalog of digital media? How much media has already been lost to time? How will copyright and licensing limbo continue to decide how much is available for streaming or 'purchase' via physical media releases? And that's before we get into the conflict between artistic vision and the ability to edit and rerelease new versions of media where Han shot first or not. I don't have any answers, except that Han definitely shot first.

Comment Re:If all of AI went away today (Score 1) 149

Because we're discussing a scenario where the big AI companies have gone out of business, remember? And the question is whether people just stop using the thing that they found useful, or whether they merely switch to whatever alternative still works.

It's like saying that if Amazon went out of business, people would just stop buying things online because "going to a different website is too hard". It's nonsensical.

Comment Re:What do they care? (Score 1) 44

I don't use an agent, but I use AI to find the exact thing I want on Amazon; it gives me the link and I buy it, without having to wade through the crap that Amazon's "search" throws at me.

Glad to see I'm not the only one who noticed that Amazon's search feature has enshittified over time. If that's the correct verb. It used to be fairly good. These days, nah - unless I'm looking for a book or another product sold by Amazon directly, as a search tool for the marketplace it's crap.

And since it used to be better, something must be responsible for that. Greed, most likely.

Comment Re: Cue the hate... (Score 1) 68

Not 99% but definitely some of the most useful ones. And yes, stack traces are one of the things that only Linux users send you without an explicit request.

And the advantage of debugging a (this specific exception) error in (this specific file) on (that specific line) over a "hey, the game crashed when I jumped out of the car" bug report cannot be overstated.

Comment Re: If all of AI went away today (Score 1) 149

They believed you could mimic intelligence with clockwork, etc. Why does it only count if it involves computers?

If you want to jump to the era of *modern* literature, the first generally accepted robot in (non-obscure) modern literature is Tik-Tok from the Oz books, first introduced in 1907. As you might guess from the name, his intelligence was powered by clockwork; he was described as no more able to feel emotions than a sewing machine, and was invented and built by Smith and Tinker (an inventor and an artist). Why not electronic intelligence? Because the concept of a programmable electronic computer didn't exist then. Even ENIAC wasn't built until 1945. The best computers in the world in 1907 worked by... wait for it... clockwork. The most advanced "computer" in the world at the time was the Dalton Adding Machine (1902), the first adding machine to have a 10-digit keyboard. At best some adding machines had electric motors to drive the clockwork, but most didn't even have that; they had to be wound. This is the interior of the most advanced computer in the world in the era Tik-Tok was introduced. While in the Greco-Roman era, it might be something like this (technology of the era that, to a distant land that heard of it, probably sounded so advanced that it fueled the later rumours that Greco-Romans were building clockwork humans capable of advanced actions, even tracking and hunting down spies).
