
Artists Worry Adobe Could Track Their Design Processes to Train AI (fastcompany.com)

"A recent viral moment highlights just how nervous the artist community is about artificial intelligence," reports Fast Company: It started earlier this week, when French comic book author Claire Wendling posted a screenshot of a curious passage in Adobe's privacy and personal data settings to Instagram. It was quickly reposted on Twitter by another artist and campaigner, Jon Lam, where it subsequently spread throughout the artistic community, drawing nearly 2 million views and thousands of retweets. (Neither Wendling nor Lam responded to requests to comment for this story.)

The fear among those who shared the tweet was simple: that Photoshop and other Adobe products are tracking artists who use the apps to see how they work — in essence, stealing the processes and actions that graphic designers have developed over decades of work to mine for Adobe's own automated systems. The concern is that a complicated, convoluted artistic process becomes possible to automate — meaning "graphic designer" or "artist" could soon join the long list of jobs at risk of being replaced by robots....

The reality may be more complex. An Adobe spokesperson says that the company is not using customer accounts to train AI. "When it comes to Generative AI, Adobe does not use any data stored on customers' Creative Cloud accounts to train its experimental Generative AI features," said the company spokesperson in a written statement to Fast Company. "We are currently reviewing our policy to better define Generative AI use cases."

  • But, that's not how AI is trained. You just give it the finished product combined with a text description and run the network over and over until it figures out how to replicate it.

    The amount of paranoia that artists have suddenly leapt into is amazing. I wonder what we'll see as other industries fall.
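    To make that concrete, here is a minimal sketch of the usual text-to-image training step. It needs only (image, caption) pairs, not a recording of anyone's process. This is PyTorch-style pseudocode under my own assumptions; training_step and add_noise are illustrative names, not any product's API.

        import torch
        import torch.nn.functional as F

        def training_step(model, text_encoder, add_noise, images, captions, optimizer):
            # Corrupt the finished image with random noise at a random timestep.
            noise = torch.randn_like(images)
            t = torch.randint(0, 1000, (images.shape[0],))
            noisy = add_noise(images, noise, t)  # forward diffusion; schedule omitted
            # Condition on the caption and train the model to predict the noise.
            pred = model(noisy, t, text_encoder(captions))
            loss = F.mse_loss(pred, noise)
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
            return loss.item()

    Note that nothing in the loop ever sees brush strokes, undo history, or any other record of how the image was made.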
    • Not quite true. That's the current neural-network approach, but one could also train AI via an evolutionary algorithm. That of course has its own challenges, chiefly around producing a fitness score, but just because the current batch of AI is pushing one way (the easiest way to get satisfactory results) doesn't mean it's the only way.
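      As a toy illustration of that evolutionary route (everything here is illustrative; fitness and mutate are the hard, domain-specific parts, so they are left as parameters):

          import random

          def evolve(population, fitness, mutate, generations=100):
              # Score candidates, keep the fittest half, refill with mutated copies.
              for _ in range(generations):
                  ranked = sorted(population, key=fitness, reverse=True)
                  parents = ranked[: len(ranked) // 2]
                  children = [mutate(random.choice(parents)) for _ in parents]
                  population = parents + children
              return max(population, key=fitness)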

    • Re: (Score:3, Interesting)

      by CalgaryD ( 9235067 )
      AI can be trained to do different things. It can be trained to generate some output, yes, but it can also be trained to create a design pipeline that uses various other AIs to generate something within that pipeline. The input required for each, including the design AI, will be different. So... any data that is available can be used for something, including AI training. I do think there is reason to worry about it.
    • But, that's not how AI is trained.

      Not yet. As far as you know.

  • Haters commenting on Moran's tweet, and his earnest self-defense, provide scary examples of the new AI paranoia.
    https://mobile.twitter.com/benmoran_artist/status/1607760145496576003

    • Right. What is really going to happen is not AI imitating artists, but artists imitating AI, especially artists without a creative bone in their body.

  • Defects and UX (Score:4, Insightful)

    by Brownstar ( 139242 ) on Monday January 09, 2023 @04:09AM (#63191260)

    Most likely the information is used to help identify and resolve defects.

    It may also (though this is less likely) be used to identify common combinations of actions that could be streamlined in the future to provide a better workflow and user experience in the applications.

    • Re:Defects and UX (Score:5, Insightful)

      by Dutch Gun ( 899105 ) on Monday January 09, 2023 @05:16AM (#63191348)

      Adobe's explanation:

      Meanwhile, Adobe’s FAQ on its machine learning content analysis cites examples of how the company may use machine learning-based object recognition to auto-tag photographs of known creatures or individuals, such as dogs and cats. “In Photoshop, machine learning can be used to automatically correct the perspective of an image for you,” the company says. It also can be utilized to suggest context-aware options: If the company’s apps believe you’re designing a website, it might suggest relevant buttons to include.

      But you just know some manager read this and immediately thought "Ooh, that's a good idea. Can we actually do that?"

      • It also can be utilized to suggest context-aware options: If the company’s apps believe you’re designing a website, it might suggest relevant buttons to include.

        Oh god. Oh god oh god oh god.

        It looks like you're designing a website. Would you like help?
        It looks like you're designing a website. Would you like help?
        It looks like you're designing a website. Would you like help?
        It looks like you're designing a website. Would you like help?
        It looks like you're designing a website. Would you like help?

    • Checkbox says: "Adobe may analyze your content using techniques such as machine learning". That may mean anything...
      • by Anonymous Coward
        It means I need to fire up Photoshop and start drawing *lots* of dick pics.
  • by Miles_O'Toole ( 5152533 ) on Monday January 09, 2023 @04:17AM (#63191280)

    Adobe Spokesthing's Statement: "When it comes to Generative AI, Adobe does not use any data stored on customers' Creative Cloud accounts to train its experimental Generative AI features...We are currently reviewing our policy to better define Generative AI use cases."

    English Translation: "At the moment, we don't use data stored in Creative Cloud accounts for this purpose. However, there's quite a lot of information we gather and use that isn't stored in what we very specifically define as a 'Creative Cloud account', and we won't be discussing it any time soon. In future, our lawyers will use data we're gathering right now to help us create a legal definition of 'Generative AI use cases' that will allow us to 'harvest' artists' innovations cheaply, easily and, of course, legally."

    • by Pascoea ( 968200 )
      I thought that comment from Adobe felt very weasel-word-ish too. "Of course we're not using data stored on Creative Cloud." The quiet part: "We have another cloud for that."
  • by gillbates ( 106458 ) on Monday January 09, 2023 @04:19AM (#63191288) Homepage Journal

    Of course they're not using the data stored on the user's cloud. They're instead using the data created by the program itself, which will be stored in Adobe's cloud!

    AI is the death knell of digital artists. Yes, there will always be digital artists - hobbyists, and the like - but the jobs will be few and far between. As if it wasn't already hard enough for artists to make money.

    I have experimented with AI-generated art. For 90+ percent of illustration jobs, AI is good enough. For about 95-99% of the commercial animation market, AI will be a requirement. Why? Because AI can create, and recreate exactly, the same characters in different scenes. It won't make mistakes when rendering a character. It takes only a few minutes to do what takes a skilled artist hours to weeks. You'll still need an art director, of course, but he'll script the scenes and what the characters should do, and the AI will render the scene(s) in a matter of minutes or hours.

    Don't believe me? Check this [deviantart.com] out. It took me all of about 5 minutes from start to finish.

    And the real kicker? The AI models were trained on images submitted by artists to social media sites like Deviant Art and the like. The artists' own work was used to train the model which will replace them.

    I still work with real paint on canvas, and real pencils on real paper. The value of my work is that I, the artist, actually created the piece of work. Digital art doesn't have the same appeal - the buyer owns a printout, created by a machine, not the hand of the artist. At one time I considered getting into digital art, but with the recent advances in AI I have become convinced that selling physical pieces of art is the way of the future. Give it 5 years, and there won't be a market for digital artists.

    • by Rei ( 128717 ) on Monday January 09, 2023 @04:44AM (#63191326) Homepage

      It looks like it took you 5 minutes, too. Super low resolution, monster-hands, weird shapes, etc. Try harder.

      • by gillbates ( 106458 ) on Monday January 09, 2023 @05:49AM (#63191374) Homepage Journal

        Why? Why would I try harder when nobody - I mean, other than yourself - is going to care?

        That's the problem of AI. It's good enough for most people. And yes there are snobs who will notice these things, but they don't buy art anyway. And that's the real problem - it's good enough for the people who are buying art, or the services of an artist.

        It used to be that advances in technology put low-skill people out of work. Weavers, elevator operators, etc... But now we're seeing technology advance to the point where it is capable of putting even highly-skilled people out of work.

        Even ChatGPT can write code at the level of the average Indian contractor.

        This is not neo-luddism. This is not replacing low-skilled labor with higher skilled labor. This is, in essence, destroying the necessity of having humans do work at all.

        • Why? Why would I try harder when nobody - I mean, other than yourself - is going to care?

          Everyone notices the monster hands, and no AI program reliably gets hands right, so that will continue to be an issue until it's resolved. And it's a pretty hard problem as it turns out. The resolution matters too, and upscaling is itself an art.

          I think your general point is correct, but your example sucks.

        • by Rei ( 128717 ) on Monday January 09, 2023 @06:34AM (#63191400) Homepage

          Why would anybody care about art quality?

          And you consider yourself an artist?

          The mind boggles.

          • He also mentions the signature-looking blob in his about text as if it were relevant, proving that he doesn't understand the technology either.

            • The simple fact is, just as early on we put up with bad CG but now it's grating, early on some people will put up with bad AI art but their tolerance to it will drop pretty quickly. Monster hands, low resolution, disproportionate bodies, malformed background objects, etc - this stuff already doesn't fly on art forums that allow AI, as it's sheer artist laziness, and won't with the general public either once they've seen enough of it.

              Either put forward the effort to make something good, wherein you're maki

            • I think you're both missing something here. The buyers who care about quality are buying original works, but these represent a very small portion of the overall market. You might ask yourself, "Well, why would I care?" The reason is simple: when the digital artists get laid off from their day jobs in advertising, illustration, design, etc... they'll pick up a physical brush and start producing work which competes with mine. Granted, there will be a learning curve coming from the digital world, but it w

              • The buyers who care about quality are buying original works

                Buyers who only care about quality are judging works only on quality. They do not care about the origin. What you are saying is simply logically false; it does not make sense as written.

                As much as I would like to believe otherwise, the buyers of digital art really don't care about the quality of art that much.

                As much as you would like to believe your own nonsense, the buyers of digital art don't want malformed hands in the pictures they pay for. If they did, they wouldn't hire anyone to write prompts for them; they'd just plug their idea into Midjourney and use the result.

                You're acting like quality is a boo

                • by SomePoorSchmuck ( 183775 ) on Monday January 09, 2023 @12:43PM (#63192508) Homepage

                  The people who will be satisfied with a diffusion model's first grunt weren't going to pay an artist an amount they could live on, anyway. I'm not saying these tools aren't threats to "real" artists, but they also aren't going to replace them completely any time soon. They only make art cheaper; they don't eliminate humans.

                  Your post makes some good points.
                  In regard to the quoted paragraph specifically, I'd like you to consider the way automation causes a bottom-up community-collapse (over time) of labor markets.

                  Only a very tiny percentage of artists have ever been paid "an amount they could live on". The overwhelming majority of artists are hunter-gatherers gleaning scarce resources and piecing them together to maintain a level of life acceptable to them. And until now there has been a deep, smooth continuum of ways to be an artist, produce art, sell art, build a portfolio, learn new techniques, and so on. An artist starts - probably identifies their talent/interest somewhere in their adolescence - and then develops along that continuum until they reach a point where their talent/interest has produced a level of skill commensurate with what they want from it (money, respect, followers, personal satisfaction).

                  Automation almost always kills the starting point of the continuum first. It seems like no big deal because of what you said -- the early-adopter customers who are happy with the quality of automated outputs were already happy with the quality level from sending some 19yo on Twitter a $10 Venmo/Cashapp tip for their Bowser/Mario furry bara images.

                  Automation isn't going to replace established, experienced, highly skilled 35-50yo artists right now. But pretty much every full-time artist started off as that high school kid doing pen drawings of D&D monsters or manga characters or assembling funny photocollages of the creepy PE teacher. When we automate, we remove the bridge between that kid and the established professional. Instead of a 23yo competing against other similar 23yos, they'd be competing against automation that can undercut them completely - producing results faster and cheaper than even a young adult willing to live with 4 roommates and eat Maruchan ramen for dinner. And when it comes to software/media, it's all 1.7 million of the 23yos being displaced by ONE app.

                  If you want to collapse the butterfly population, you don't have to spray all pollen-bearing flowers that butterflies might use for food. You only have to spray the green leafy plants the caterpillars eat.

                  How do you foresee the career/skills ladder looking in the immediate future where the bottom rungs of the ladder are all automated? Why would anyone bother to sink 10-20 completely uncompensated years developing the skills and techniques so that later in life they can finally be above the automation minimum? Especially when that automation minimum level is a moving target, and 15 years from now the slow human brain will arrive only to find the AI beat them there too.

        • by DarkOx ( 621550 )

          This is not neo-luddism. This is not replacing low-skilled labor with higher skilled labor. This is, in essence, destroying the necessity of having humans do work at all.

          Sure it's neo-luddism - it's exactly that. It's also absolutely replacing low(er)-skilled labor with high(er)-skilled labor. You are replacing the average coder good enough to crank out some line-of-business CRUD thing with a relatively smaller number of people who have the chops to develop ML tools. Same with artists: you are replacing the average steady hand and pair of eyes with some training in theory behind them with a currently small group of individuals who know enough about how stable diffusion on other ima

    • I've got a feeling that the builders of these AI training sets are eventually going to have to seek every artist's explicit permission before hoovering up millions of pieces of copyrighted art to use for their for-profit services. To me it just seems like the law hasn't caught up with the technology yet.

      Anyhow, there will still be a market for digital artists for the near future at least. I work in the video game industry, and we're still likely to employ concept artists, because we're interested in their crea

      • But what if, instead, the AI learns how to paint by "watching" a skilled artist as they work? What if it can combine the talent of millions of Adobe users into an AI painter, which instead of simply melding existing works together, can understand human language and paint something, anything, new with the same precision as an experienced artist?

        The issue isn't copyright infringement as much as it is that Adobe users are actually training their computer replacement as they work. Adobe will, instead of se

        • What if it can combine the talent of millions of Adobe users into an AI painter

          Nothing that came out of Photoshop is a "painting", so nothing an AI will learn from it will make it a "painter". Words have meanings, and you are ignoring them for the sake of making a dumb argument.

          The issue isn't copyright infringement as much as it is that Adobe users are actually training their computer replacement as they work.

          The only purpose for having an AI create an image operation by operation like a human does it is if you're going to use it to control a robot that works in natural media, not digital. The way the human does it is to evaluate the composition, decide what needs to be added or subtracted, and then to use the tool.

        • Adobe already has good free AI models for image generation. There is no need to invent a new AI that mimics humans step by step. If they want, they can train their own models much more easily now.

          And they don't need to steal the data from their customers, because others have already collected it in the name of science: the group that builds the LAION dataset, Stability.ai, and the organization that crawls the web and publishes the Common Crawl dataset.
      • No they won't - behind the scenes there will be lobbying for one central clearing agency that can grant permission, like the ones the music industry has for royalties, whether the musician wants their services or not.
      • The amount an AI art tool knows about any given artist could fit in a tweet or two. If that little information isn't fair use, then nothing is.

        You don't have to train on a given artist for their style to exist in a checkpoint. Everyone's style exists as an interpolation of other styles (and every motif as an interpolation of other motifs), which is why it takes so little information to represent them. I actually wrote a program that can find any given style or object in an existing checkpoint in a fully automated
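        For scale, a back-of-the-envelope sketch of what one learned "style" costs under textual-inversion-style approaches. The dimensions here are illustrative (CLIP-sized embeddings), not from any particular product:

            import numpy as np

            # A style is typically one or a few learned pseudo-token embeddings,
            # not a copy of any image.
            style_tokens, embedding_dim = 2, 768
            style = np.zeros((style_tokens, embedding_dim), dtype=np.float16)
            print(style.nbytes)  # 3072 bytes: a few KB at most per style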

        • Very well said. The model just needs a pointer to know what style you want. It doesn't even need the artists' names, and it can reference styles and compositions that don't exist in the training set.
      • > An AI is only going to generate content based on what it's already seen

        Even for humans, everything new is a tweak on something old, or a novel recombination of older ideas.
    • by EvilSS ( 557649 )

      AI is the death knell of digital artists. Yes, there will always be digital artists - hobbyists, and the like - but the jobs will be few and far between. As if it wasn't already hard enough for artists to make money.

      Then obviously their work has no real value to begin with if a computer can do it. Like everyone else trying to rent-seek on imaginary property, if they want to make a living they can go work at Starbucks.

      • You are correct that their work has less value than ever, and whether copyright law gets fixed won't make much difference. Art is being democratized, like many of the trades surrounding it: printing on physical paper? You may no longer need a print shop to do the final output. A good-enough camera? No need, as most people have phones with plenty-good-enough cameras. Most artists will simply have to move elsewhere, as happens any time there is a major shift in technology.

        (And it's not like we're not swim

    • I still work with real paint on canvas, and real pencils on real paper. The value of my work is that I, the artist, actually created the piece of work. Digital art doesn't have the same appeal - the buyer owns a printout, created by a machine, not the hand of the artist.

      Well... I have bad news for you: this world is just a simulation! Yeah! We're just bits and bytes in somebody else's computer. Wut!?!?! See the big picture? (No pun intended.)

    • This is the closest image in the LAION dataset: https://images-wixmp-ed30a86b8... [wixmp.com]
    • Ahh yes, typical body horror, "but but AI is going to steal me jerb!!" because it can shit it out in minutes.

      Same with every shit artist's argument: "I don't understand this tool, but it's both so good it's going to replace me, and also so bad it's not even art".

      Then use the half a braincell you have left after huffing the turpentine and figure out how to leverage a real general-use generative AI, not the deviantart "sanic character"-only-trained crap, to, I don't know, speed up posing sketches and shit, or exp

    • I see your point, but her freaky foot-hand isn't great.
  • But I spend millions on original art because it's all about the genius of its creator?

    How can an automated process possibly recreate that?

    Hahahahaha.
    Now go and get a job.

  • AI shortcuts all that shit and goes straight to the completed image. Why would they want to know how to do it slower? What "AI" needs to watch is humans painting on canvas and the like; then it can learn how to do that too.

  • by SerpentMage ( 13390 ) on Monday January 09, 2023 @06:35AM (#63191406)

    As somebody who has done AI for a very long time, this is scary, and I'm not being paranoid. Let me explain, using a programming example.

    Imagine I am writing code, and there is some AI that follows me. I write a loop and some conditions, and the AI learns about those loops and conditions. Then the AI, in some other context, creates a loop and a condition, but entirely different ones. Is it copying me?

    The simplistic argument is no, it is not. I argue yes, it is, because of a major difference in how we and the AI learn. That new loop and condition could also have been written by me, but I don't actually copy, even when the result is similar or literally copy-and-pasted. I learned how to program from the basic elements of the programming language: I learned what loops are, I learned what conditions are, and I learned how to apply each of them. The AI, on the other hand, only learned my end result. If I were to tell the AI that a loop is a condition and vice versa, the AI would follow that, even though I would not be fooled.

    And therein lies the problem, the AI is not learning, it is a grand scheme copying machine hidden in supposedly learning abilities. And yes that is wrong and infringes on copyright.

    In the context of the artist it is extremely scary because the end result is the only thing the AI learns. It does not understand how to apply strokes, it just sees a stroke and copies it. Meaning if somebody did X technique of doing a stroke, then the AI will copy it and use it for something else.

    • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday January 09, 2023 @07:28AM (#63191464) Homepage Journal

      And therein lies the problem, the AI is not learning, it is a grand scheme copying machine hidden in supposedly learning abilities. And yes that is wrong and infringes on copyright.

      It doesn't learn, but it also doesn't copy. Diffusion models are just a big box of similarities between images and keywords, and when you shake them, something falls out. If they copied, then those apparent attempts at watermarks you see applied to images would be crisp and clear. But since the model doesn't know anything, any more than it learns, it doesn't know when or where a watermark is warranted. All it comes close to knowing is that there was a blob that looked like a watermark, and through iterations it made the blob look more like one. Give it more iterations, and it will go away again and turn into a different element of the work.

      There is less than 1 byte from each training image in a typical diffusion model, so it's clear that "copying" is not what is happening. And just try to get an original work back out of one: you will always fail. Even with something massively overrepresented in the set, like the Mona Lisa, you will get significant variance. Her face might come through pretty clearly, but the background will have multiple errors. Part of this is due to derivatives in the training set; part of it is due to incorrect recognition.

      Speaking of that, it's not clear whether an AI generated image can even be considered a derivative work, which requires recognizable elements to be copied. But that is generally taken to mean elements which were literally copied from a copyrighted work, and then insufficiently transformed to disguise their nature. If you're not even copying a single pixel, and you are not, then are you in fact copying anything? The model was derived from the training set, but there are zero recognizable elements from any of the training images in the model.
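      The "less than 1 byte" figure is easy to sanity-check in Python with public ballpark numbers (Stable Diffusion v1's U-Net has roughly 860 million parameters, and its LAION-derived training set runs to roughly 2 billion images; exact figures vary by version):

          unet_params = 860_000_000          # SD v1 U-Net, approximate
          bytes_per_param = 2                # fp16 weights
          training_images = 2_000_000_000    # LAION-derived set, approximate

          model_bytes = unet_params * bytes_per_param
          print(model_bytes / training_images)  # ~0.86 bytes per training image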

    • Now imagine I'm writing code. Then I create instructions to the AI and it generates code that's identical to what I created manually. Did it copy me?

      A question that needs answering: if you create code that's functionally identical to somebody else's code, did you violate the license/copyright of the original code? What if you created your code first but didn't publish it?

      The next question is if you can tell an AI to generate code in your style and is functionally identical to code you generate m
      • Copyright protects expression, but it doesn't protect the ideas themselves. You can't copyright an algorithm, or a short phrase that is used by many authors, or code that is the only obvious way to solve a problem, or the standard way to do it in an API. (An example of the "only obvious way" case follows below.)

        The problem is detecting which code snippets are worthy of protection, and making sure the model doesn't regurgitate them by mistake, or that it always quotes the reference and the license.
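        To illustrate the "only obvious way" case: a snippet like this is the standard idiom, so any two programmers solving it independently converge on essentially the same expression, and there is nothing protectable in it (the file name is just an example):

            # The idiomatic way to read a file line by line in Python.
            with open("data.txt") as f:
                for line in f:
                    print(line.rstrip())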
        • Copyright protects expression

          I understand that copyright protects expression, but how far must my expression be from somebody else's before it's considered a violation of copyright? Can you scan all of GitHub and find the code that's in violation of my copyright?

          The problem is detecting which code snippets are worthy of protection, and making sure the model doesn't regurgitate them by mistake, or that it always quotes the reference and the license.

          The first question you have to answer is what makes code worthy of protection. Second, how do you tell whether another piece of code violates that protection? And third, what is the remediation when a violation is detected?

          from what I know of machine, human, and dog learning there is not

    • If the end result is indistinguishable (i.e., by humans in the target audience), then it won't matter where it came from, to most of those forking over the money to obtain the graphics. The one paying for the end result will choose the cheaper source every time. (Except of course for those few remaining people actually collecting real art specifically made solely by human beings.)

    • Humans are "grand scheme copying machine".

      If it wasn't for stored prior knowledge we probably wouldn't have fire.

      And humans also "sees a stroke and copies it", many times to our detriment. The key difference is our ability to recognize results as a positive or negative and then incorporate those results into our body of knowledge.

      The feedback loop for existing AI is the requestor's "thumbs up", meaning the AI's results satisfied the requestor's request.

      And, you are not being paranoid at all.

    • by narcc ( 412956 )

      As somebody who has done AI for a very long time, this is scary [...] It does not understand how to apply strokes, it just sees a stroke and copies it

      That's not what's happening at all. This is a common misconception that I'm surprised anyone "who has done AI for a very long time" would promulgate. That isn't what programs like this do. It's not even something that they can do.

      Think of it this way: there is a maximum amount of information that the model can contain, because the number of parameters is fixed. If you divide the total size of the parameters by the number of images in the training set, you'll find there's room for littl

  • Anybody who uses Adobe and is unhappy with them has only themselves to blame. See, when there is only one vendor selling the only product that does some job, that's called a "monopoly", and monopolies can only come about because the user base supports and sustains them. Eventually the user base wakes up and discovers it has become the monopolist's bitch. I keep hoping that some start-up will do something disruptive, like take Gimp, put some serious money into power features and hand some serious whoop-ass to
  • Scholars, writers and screenwriters have been spied upon for decades.

  • It's a perfect time to recommend to artists that only Open Source is safe from theft (for AI etc.). It's a pity GIMP is so slow releasing Version 3. Version 2 is great, but the UI has some rough edges.
    • I don't think the copyright status, open or closed, has any influence on the ability to train. What the model can't do is copy the expression, but it can still learn the ideas themselves. Ideas and styles can't be copyrighted.
  • One thing occurs to me: artists could approach AI-generated art based on their work by asserting that, since the AI created it based on its training data, any work the AI generates is a derivative work of all of the artist's work that was included in the training set. Ideally you'd want to be able to prove the AI did in fact download at least one image for training, but given precedent that simply having access to the work is enough for copyright purposes to create a rebuttable presumption that the work wa

    • Let's see if the legal system moves faster than AI adoption. I bet in a few months they will be doing new tricks. I heard 30fps video generation is coming soon.
    • Assuming the AI isn't overfitting, there's no difference between an AI seeing someone's work in a training set and an artist seeing another artist's work in college. AI outputs are NOT derivative works of the training set.
  • Re: "When it comes to Generative AI, Adobe does not use any data stored on customers' Creative Cloud accounts to train its experimental Generative AI features," said the company spokesperson in a written statement...

    Because corporations have never lied or misled people before, have they?
  • We need legislation, because right now no privacy is assured by default. This shit has gone far enough.
  • by Murdoch5 ( 1563847 ) on Monday January 09, 2023 @11:05AM (#63192088) Homepage
    This is a classic problem where a closed-source program is doing janky, unwanted BS in the background, and people are getting upset because they can't know or verify what it's doing. If you want complete control over how the software operates, you have a few options:

    1. Remove any internet connectivity, so the program can't call home
    2. Review the source code so you know exactly what's going on
    3. Intercept the communication and examine it (a minimal sketch of this option follows at the end of this comment)

    If you can't inspect what's going on, and you can't verify what's going on, then you can't trust what is going on, regardless of who makes a heartfelt case about it. When you pick a closed and controlled platform, you have to accept the conditions of that platform, regardless of how annoying, nasty, unfair or slimy they appear, and that's exactly what is happening. Your options are:

    1. Blindly trust Adobe (don't), or
    2. Switch to an open platform like GIMP

    Artists can't complain if they won't put in the work; no one can. When you pick bad software, no matter how popular it is, you have to accept the outcome. Products like GIMP might lack the polish, but that lack of polish comes with the massive benefit of keeping what is yours, yours, and not giving it away for nothing.
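    As a minimal sketch of option 3: a tiny mitmproxy addon will print every request an application makes once you route its traffic through the proxy. What you actually see depends on the app, and certificate pinning may get in the way; the file name is just an example.

        # Save as log_hosts.py and run: mitmproxy -s log_hosts.py
        from mitmproxy import http

        def request(flow: http.HTTPFlow) -> None:
            # Print every outbound request so you can see where the app phones home.
            print(flow.request.method, flow.request.pretty_url)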
  • I've been designing graphics since around 1996 with pencil and paper. My first designs were some simple line art that was screen printed on shirts for my school. I followed that up 6 months later with my first digital painting that was also screen printed on a shirt (very badly...the halftones were all bad, screen mesh was wrong, etc.).

    I spent 16 years in the print industry both designing graphics for products and designing completed products. I have also done the prepress work for offset printing and even

  • You'd better believe any source material for AI learning is being collected feverishly, essentially stealing copyrighted works from people and "mixing" them into an AI soup such that they differ enough for plausible deniability when someone on a webpage generates a copy of your work and profits from it, all while the service collects a subscription fee.
  • The "creatives" take has always been how do you expect them to keep creating?
    Now we know. They're going to watch creatives work, and yes, that iterative process WILL be fed to AIs to improve the
    "feed the finished product in and let it run until duplicated" method. And it will become better. The rule engines will earn about dead ends and how to avoid them

  • I for one welcome our Artificial Artistic Overlords and will happily provide them with strokes.

  • Art is more complicated than just drawing an image via a method. Sure, this could have some limited use, like those programs that turn a JPEG into a "pencil sketch". But in reality, no AI is going to build a working complicated system any sooner than one will create the Mona Lisa by itself. And trust me, it's got to be a lot more lucrative replacing software coders than artists. There are a ton of high-level artists who will work for pennies. There aren't a lot of high-level developers who are willing to wo

    • Depending on how you set up the AI, it can even beat humans at building working, complicated systems. Some models solve competition-level coding problems or college-level science problems, and others fold proteins. Some even beat us at our own board games.
  • Why is this OK for humans to train on but not AI?

    It's okay if you don't want your stuff copied, but... your techniques?
