
Comment Re:Black box not useful for artists (Score 1) 107

All I need to do is point to one artist who judged his work by what the output of the model was, no differently than a complex filter in photoshop, which they also have no fucking clue what its technical details are, but a pretty fine grasp of what its non-technical details are- almost precisely analogous to how they'll use model-generated/augmented art.

It's not really the same. With the Photoshop filter the artist can choose which filter they want to use, apply it, check whether they like the end result, go back to the original if they don't, apply a different filter on top if it's not quite right, tweak the results of the filter by hand, and so on, and only after all that, if they're completely satisfied with the end result, will it go into the actual game binaries. Certain textures may benefit from one type of filter while others need another type, or nothing at all.

With the Nvidia filter, as I understand it, the decision is just whether to use the one filter for everything, or not. Apparently you can set the intensity of the filter (i.e. how much you want to AI-ify your texture) per object, but that's about all the fine control you get. And that decision may well be out of the artists' hands - if management mandates DLSS 5, there's not much you can do.

The results of generative AI, from what I've seen, are attractive but kind of bland, and once you've seen enough they all look pretty similar. That seems no less true for the examples they gave for DLSS 5. So for shovelware that uses all the standard assets it may improve the look, but for something where the artists were going for a specific atmosphere and a specific character look, it certainly won't.

Comment Re:good luck (Score 2) 45

It's not an unusual judgement call to make. Let's say you're deciding whether an essay plagiarizes another work. If it basically copies the whole thing verbatim, maybe changing a couple of words to disguise the fact it's been copy-pasted, it's clearly plagiarism. If it lays out some ideas expressed in that other work (clearly attributing those ideas) and then makes its own commentary and analysis of those ideas, it is not plagiarism.

I mean if someone takes AI generated art, for instance, and opens it in Photoshop/Affinity....and alters it...is it AI then?

That is very definitely AI.

Or reverse...start with human generated and sent to AI to finish it?

That depends. I think I would be fine with things like AI-powered cleanup of lines, changing tone or colour composition of the image, denoising, stuff like that. The kind of tools that already exist in something like Photoshop, but better. As soon as you have objects in the artwork actually drawn by AI, it's an AI artwork.

Comment Re:An unrestricted, unregulated (Score 2) 188

Gambling caused an arguably bigger problem in sports where athletes would deliberately underperform in exchange for a share of the gambling money. Football (or soccer) and a number of other sports have match-fixing scandals on a fairly regular basis in spite of all the efforts to stamp it out. Now this Polymarket brings the same problem to basically any other part of society. Imagine there being a bet on some sensitive information being leaked, or a high profile legal case being decided a certain way. Or some celebrity being involved in a car accident.

Comment Re:Would be way too expensive. (Score 1) 74

I'm not sure about the cost. I believe the main reason why nuclear fission reactors are so expensive is the risk of catastrophic meltdown and the cost of building all the safety features needed to mitigate that risk, as well as the cost of dealing with legal challenges etc. Fusion would not have that problem. For comparison, the cost of construction of the UK's Hinkley Point C nuclear reactor is projected to reach £33 billion. £2.5 billion is relative peanuts by comparison.

As far as I'm able to tell the engineering involved is indeed massively complex, but the prototype of anything is always expensive. If the technology can be made to work, then down the line people will find many ways to reduce the cost and simplify the construction.

Whether it can be made to work is a big question, and I do have my doubts as well. If they can pull it off though, the payoff would be huge. Fusion would provide power 24 hours a day, regardless of the weather, and be easily scalable, like nuclear, but without the danger of meltdown, without the need for exotic fuel (deuterium and tritium are far easier to obtain than uranium), and without the need to dispose of toxic reactor waste. All the benefits of nuclear fission, with almost none of the drawbacks.

And the benefits go far beyond providing reliable power and creating jobs. If commercial fusion power can be achieved, I imagine most developed countries will want to build their own reactors. And the UK at that point would be the only place with the expertise to build functional fusion reactors. There would be an absolute shitton of money to be made selling that expertise. But again, that's all dependent on whether it can be made to work. Still, of all the ways to spend £2.5 billion, not the worst by a long shot.

In the meantime though, we should continue building up renewable power.

Comment Re: Because of analphabetism? (Score 2) 33

Here when you buy something you typically pay by putting your bank card into a card reader, or nowadays by tapping the card or your phone for a contactless payment. The shopkeeper's card reading terminal has an LCD screen that will show whether the payment has been accepted or declined. Does it work differently in India? Do the shoppers have to enter the store's bank account into their phone to pay? In that case the shopkeeper would need some confirmation that the payment has been received.

Comment Re:Sounds familiar.... (Score 1) 24

As much as I dislike defence spending, weapons could at least be useful if Ireland invades, or the Martians, which has a similar likelihood. Or they could sell them to the Saudis or somebody similar. Datacenters otoh have zero value; all they do is raise electricity prices. I wouldn't be super surprised if they cause brownouts or water supply problems in this town; the UK's ancient utility grid may well struggle to cope with a sudden peak in demand.

Why can't they designate housing "critical national infrastructure"? That's what they should be building in these "gray belt" areas, if they build anything there. This would have an additional benefit of being actually true. Or actual infrastructure, as in roads and railways.

And the usual tortured wording they use

a new "gray belt" land designation that loosens building restrictions on underperforming greenbelt parcels

What the fuck does that even mean? How do areas of land "underperform"?

Comment Re:That's not basic income (Score 1) 121

Maybe, but over all of human history new technology has never taken away employment- it has always changed its nature while increasing productivity.

That's a pretty contentious claim. You can safely say that new technology has never taken away employment permanently, or at least not long term. The people made unemployed by new technology eventually found new employment, or at least their children or grandchildren did, but in the short term automation would often absolutely lead to an overall increase in unemployment. And just because a permanent reduction in employment has never happened in the past doesn't mean it can't happen in the future. There's always a first time for everything.

Besides, the binary employed/unemployed distinction isn't the only important one, the type of job you do is just as, if not more, important. Working some kind of job almost always beats being unemployed, but there is a huge difference between a comfortable respectable middle-class job that can pay for a house, a car and support your whole family and a minimum-wage or below McJob.

The 'increasing productivity' bit is also more complicated than that. How easily you can manufacture a good or provide a service (and usually, consequently, how cheap that good or service becomes) is only one variable. Another is the quality of that good or service - and that can obviously be subjective, but often it's pretty objective. Hand-produced nails are no more valuable than machine-produced ones (less valuable, normally, because a human will make mistakes and won't be able to reproduce the required shape perfectly every single time). But with food, for example, as you add more and more automation and chemicals, at some point quality and the effect on health can start to go down. Then food produced with less automation can become a premium product.

As far as art is concerned, the desirability difference between human art and mass-produced AI slop is pretty obvious. Besides, even before AI we were already getting way more art than we could possibly consume, and so human art was never that expensive (unless it was collector high art). The phrase 'starving artist' exists for a reason. So increase in productivity doesn't produce that much benefit here, whereas decrease in quality is certainly a huge drawback.

Having said that, the claim that 70% of middle class jobs have been replaced in the last 45 years due to automation does seem very dubious. It probably depends on how you define a 'middle class job'. Also, if automation replaced one middle class job with another comparable middle class job, would that count?

Comment Re:OK that seals the deal! (Score 1) 70

I'm pretty sure that wasn't the plan, as far as there has ever been any plan beyond hyping the tech and collecting investor money. I'm guessing they wanted to keep the basic service free to get as many people taking AI for granted as possible, and make their money off premium subscriptions sold to industry. After trying AI though, industry quickly figured out that it wouldn't be able to fire all its workers and replace them with AI agents, as promised, so industry uptake of AI has not been high at all. So now OpenAI and co are probably starting to panic about how they're ever going to repay all the money invested in datacenters, and 'just put some ads in' is a tried and trusted way of getting some revenue. Even if they planned to start enshittifying eventually, I imagine they didn't want to start so soon.

Comment Re:"hallucinating non-existent play-makers" (Score 1) 25

This is an often repeated but totally flawed argument in that it's trying to estimate AI's 'intelligence' by comparing the text it generates to text a human might generate - if it's similar enough, it must be about as intelligent as that human. If it makes mistakes - well, humans make mistakes too, right? That's exactly what all the firms peddling 'AI' want it to do, imitate human thought convincingly enough, and that is exactly how they want you to think. But we don't need to estimate anything, we know how AI works.

To us, language is a way to express our thoughts and feelings. To us, the word 'dog' is a handle to a cache of memories and mental images involving dogs we've encountered or read/heard about, as well as generalised ideas and beliefs about the concept of a 'dog'. AI does not have such a cache. To AI, 'dog' is an arbitrary linguistic token, and it is a handle to a mathematical model of which other arbitrary linguistic tokens go together with the 'dog' token. It's like, some physicist for some reason wants me to give a group of physics PhD students a lecture on very advanced quantum mechanics, so he types up the lecture and gives it to me to memorize. I do so and read it to the students. That means I'm very knowledgeable in the field of quantum mechanics, right? Wrong - I've just memorized the words without having a clue what they mean.

When people come across false information, they're capable of using their logic and their knowledge of the concepts that the words stand for to figure out that the information is false and they shouldn't repeat it to others (unless they want to fool others for some reason). Sometimes they fail to figure out that the information is false, because their logic is flawed, because they just don't feel like thinking, and so on. But humans are at least capable of figuring it out. For AI, all information in its training dataset is of exactly equal value; it all goes into building the model of how tokens go together. It isn't capable of telling a hallucination from a non-hallucination.

Comment Re:morons (Score 1) 181

The people inside or outside the car may be weak, ill or too panicked to break the glass, especially if the car has reinforced glass windows. It's an action that is a lot more complicated than just opening the door, in a situation where every second may count. The person inside the car may hesitate to break the glass because they're afraid of flying glass shards, or because they don't understand the urgency of the situation and don't want to damage the car any further. Yes, that would be extremely stupid, but people with that level of stupidity aren't exactly rare.

For example, imagine this situation: an adult driver with a young child (maybe early teens) in the back seat crashes the car. The car loses power and a fire starts. The adult is passed out in the driver's seat. With manual-only car doors the child can get out of the back seat, open the driver's door from the outside and drag the adult to safety, or if that's not physically possible, at least try to wake them up. Or a similar situation where a child, an elderly person, or someone who lacks the strength or presence of mind to break the glass witnesses a car crash with the driver passed out.

On the flipside, what are you gaining from fully retractable doors? Slightly better aerodynamics? The gain would be very marginal. Better aesthetics maybe? This would be absolutely fine if it didn't potentially compromise the safety of the people inside the car.
