Comment Re:People who will buy a camera bc a celeb uses it (Score 1) 61
Did you just pick a random person to say that to?
...probably aren't going to do their research, and will be willing to buy a shittier version for a higher price.
This camera is a fashion accessory for shallow people.
It's a good way of laying off the people who are good enough at what they do that they can find other jobs.
Found the office building real estate investor. Or the sociopath from upper management.
There have always been some critics like that, yes, but it's a lot more universal now. It wasn't nearly this bad in the 90s and 00s.
In the last fifteen years, critics have leaned more and more heavily into telling people what they ought to like, as opposed to how likely they are to like something.
Movies like the Mario movie that are enjoyable and escapist tend to get panned by critics. Conversely, The Last Jedi, which turned a formerly enjoyable, escapist series on its head (the 8th part of a 9-part series isn't the time to do that), has a critic rating in the 90s versus an audience rating in the 40s, because the critics straight up don't care whether the movie actually makes sense in the context of the previous 7 entries (8 counting Rogue One).
Critics were, as far as I can remember, generally against the release of the Snyder cut of Justice League, and also against the rework of Sonic's appearance in the Sonic movies (now widely considered a very wise decision by the studio).
It used to be that critics would catch the occasional good, intelligent movie that parts of the audience didn't really get. But these days what they like doesn't seem to have any correlation with what general audiences will like *or* with whether a movie is intelligent (The Last Jedi was quite stupid, but critics glazed it anyway).
So yeah, it's not so much that Rotten Tomatoes has brought in too many random critics; it's that today's critical "elite" enjoy fart-sniffing more than they enjoy actual entertainment.
In this kind of situation, it's smart to disallow its use until an evidence-based decision can be made about whether it actually works and performs at the level of human therapists. And any AI used for this purpose should have to go through an approval process, because not all AIs are created equal.
From my own experience:
> Was it mostly boilerplate?
A good bit of it was, but speeding that up without having to dig through a bunch of templates saves a ton of time (the sketch below shows the kind of boilerplate I mean).
> How much of the generated code was correct?
A lot of it, but I generally only use it to complete a few lines of code at a time and do the high-level thinking on my own. If you use AI that way, it's a great productivity tool.
> How much of the Copilot code makes it into production?
A lot of it.
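For illustration, here's a hypothetical example of the kind of boilerplate completion I'm talking about. None of this comes from an actual Copilot session; it's just the sort of scaffolding an assistant will fill in from a one-line comment, where the specific names are made up:

```python
# Typical boilerplate an assistant completes from a short prompt like
# "argparse CLI with an input file and a --verbose flag" -- the kind of
# code you'd otherwise copy out of a template.
import argparse

def main() -> None:
    parser = argparse.ArgumentParser(description="Process an input file.")
    parser.add_argument("input", help="path to the input file")
    parser.add_argument("-v", "--verbose", action="store_true",
                        help="print progress details")
    args = parser.parse_args()
    if args.verbose:
        print(f"Processing {args.input}...")

if __name__ == "__main__":
    main()
```

Nothing clever, and you still review it like any other code, but multiply that across a day and it adds up.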
...unless you can point to the *specific work* that was taken from *without knowing how it was made*.
In other words, just because you train an AI on a bunch of works, it doesn't follow that most of what the AI makes is violating copyright or plagiarizing or whatever. A lot of people are fundamentally pissed off because they don't feel special, or else they wouldn't be yelling at AI users in hobbyist communities who aren't affecting anyone's livelihood.
I have a lot of code out on GitHub, which LLMs have trained on. I have absolutely no right to tell anyone that an AI can't learn from my code, because my IP rights don't extend that far, and the same goes for art. I also think it's great that AI lets people do, on their local computers, something that until very recently was very hard to do.
No, you have a copyright, which allows you to control redistribution of your specific work. You do not, nor should you, have the right to prevent people from learning from it. If I want to disassemble Windows and learn from the code, I'm 100% allowed to do that.
> You can generate images with signatures from the works that were copied, without attribution.
I've never actually seen this happen. What actually happens is that the AI has generalized what a signature is: it has figured out that the artist's name often appears in cursive in the bottom corner of the image, so it writes a name down there (a particular artist's name, if someone prompts for an image "by so-and-so"). But I've never seen a case where the generated signature matches the artist's actual signature to any degree.
The existence of signatures is absolutely not proof that it's copying anything, because the signatures themselves aren't even copies.
Copyright allows transformative works.
Also, you're using weaselly wording here. It does train on entire works, but those entire works aren't saved in the AI itself (this is mathematically impossible given how many works an AI trains on versus the size of the AI). When it trains on an entire work, it generalizes on style, concepts, and ideas. Only if the entire work appears in the training data many times over does it memorize that work.
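To put rough numbers on "mathematically impossible" (these figures are assumptions for illustration: a LAION-5B-scale dataset of about 5 billion images, and a roughly 2 GB checkpoint, around the size of a Stable Diffusion fp16 checkpoint):

```python
# Back-of-envelope: how much model capacity exists per training image?
# Assumed, illustrative numbers: ~5 billion training images (LAION-5B scale)
# and a ~2 GiB model checkpoint (roughly Stable Diffusion at fp16).
num_images = 5_000_000_000
model_bytes = 2 * 1024**3

bytes_per_image = model_bytes / num_images
print(f"{bytes_per_image:.2f} bytes of model per training image")
# ~0.43 bytes per image: nowhere near enough to store the images,
# so the weights can only encode generalized patterns.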
That's what the article is going for, anyway. They want readers to feel like AI is fundamentally insecure, when that's not what happened here.
Fuck making people go to the office. It's a waste of everyone's time and energy; it causes pollution, worsens traffic, and reduces productivity, all so micromanaging twats can feel good about themselves.
> You have no idea what difference it will make in their lives.
It will make absolutely zero difference, because an image occurring a single time in a five-billion-image training dataset doesn't do shit. They're in a lot more danger from a human photoshopping the image directly.