Will AI Just Turn All of Human Knowledge into Proprietary Products? (theguardian.com)

"Tech CEOs want us to believe that generative AI will benefit humanity," argues an column in the Guardian, adding "They are kidding themselves..."

"There is a world in which generative AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to benefit humanity, other species and our shared home. But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own, one that had as its purpose the meeting of human needs and the protection of the planetary systems that support all life... " AI — far from living up to all those utopian hallucinations — is much more likely to become a fearsome tool of further dispossession and despoilation...

What work are these benevolent stories doing in the culture as we encounter these strange new tools? Here is one hypothesis: they are the powerful and enticing cover stories for what may turn out to be the largest and most consequential theft in human history. Because what we are witnessing is the wealthiest companies in history (Microsoft, Apple, Google, Meta, Amazon ...) unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products, many of which will take direct aim at the humans whose lifetime of labor trained the machines without giving permission or consent.

This should not be legal. In the case of copyrighted material that we now know trained the models (including this newspaper), various lawsuits have been filed that will argue this was clearly illegal... The trick, of course, is that Silicon Valley routinely calls theft "disruption" — and too often gets away with it. We know this move: charge ahead into lawless territory; claim the old rules don't apply to your new tech; scream that regulation will only help China — all while you get your facts solidly on the ground. By the time we all get over the novelty of these new toys and start taking stock of the social, political and economic wreckage, the tech is already so ubiquitous that the courts and policymakers throw up their hands... These companies must know they are engaged in theft, or at least that a strong case can be made that they are. They are just hoping that the old playbook works one more time — that the scale of the heist is already so large and unfolding with such speed that courts and policymakers will once again throw up their hands in the face of the supposed inevitability of it all...

[W]e trained the machines. All of us. But we never gave our consent. They fed on humanity's collective ingenuity, inspiration and revelations (along with our more venal traits). These models are enclosure and appropriation machines, devouring and privatizing our individual lives as well as our collective intellectual and artistic inheritances. And their goal never was to solve climate change or make our governments more responsible or our daily lives more leisurely. It was always to profit off mass immiseration, which, under capitalism, is the glaring and logical consequence of replacing human functions with bots.

Thanks to long-time Slashdot reader mspohr for sharing the article.

Will AI Just Turn All of Human Knowledge into Proprietary Products?

  • Dishonest argument (Score:5, Insightful)

    by SoftwareArtist ( 1472499 ) on Sunday May 14, 2023 @12:52AM (#63519791)

    This is a very dishonest argument. They aren't "seizing the sum total of human knowledge". They aren't "walling it off inside proprietary products". All that data they used to train their models is still right where it was before, just as available to the world as it was before. They haven't made anything proprietary except the new models they created that didn't exist before they made them.

    That's how information works. It's available to everyone. One person using it doesn't stop another person from using it too.

    • Don't know that it is.

      Hard to say what data is being used, and certainly google has proprietary products they sell, even as I can't access the data they have collected about me.

      And there will be new data created by AI. Exactly who owns that, especially when the AI was trained on thousands of previously copyrighted works without attribution? And what if they lock that data away? (Recall the old philosophical question: is it right for someone who develops a cure for cancer to destroy it? We're no closer to answering that.)

      While

      • by m00sh ( 2538182 )
        I can read 1000 copyrighted works and then write my own work based on those. That's how textbooks work.
    • by Waffle Iron ( 339739 ) on Sunday May 14, 2023 @01:27AM (#63519813)

      True enough. TFS keeps talking about the "theft" of all that content, whereas anyone who has read enough FSF screeds over the years will realize that it is instead *copyright infringement*.

      Google has always been able to scrape the internet without legal consequences because up to now it only spit out little snippets small enough to be considered fair use, and you follow the link to get to the original content. These AI models, OTOH, digest all of the original content and spew out entire derived works based on them.

      The problem with derived works, however, is that they still infringe on the copyrights of the original authors. The courts need to acknowledge this and tell all of these AI projects to wipe out all of their current data and start over, but this time being sure to appropriately license all of the training material up front. They should also award the quadrillions of dollars of statutory damages due to all of the content producers on the internet for all of this infringement.

      Hopefully that would put enough of a wrench in the works to at least delay the day when the world becomes a dystopian scifi plot where a handful of oligarchs use their machines to subjugate everyone else.

      • by Bobknobber ( 10314401 ) on Sunday May 14, 2023 @01:50AM (#63519829)

        A big question is whether datasets can be copyrighted to begin with. There do seem to be some efforts towards companies trying to legally protect their datasets from being copied. For all intents and purposes these are basically the new trade secrets, especially if all LLMs really need is just training material and fine tuning to be improved.

        Bit of a legal trap imo, since some of these datasets no doubt contain copyrighted materials, be it images, text, or links. That would be like trying to claim copyright on the works of others by extension. Naturally this would not work unless all the data itself was owned by the party in question.
        It might partly explain why the big players are being very secretive with regards to their LLMs.

        • That's not a question unless the dataset was literally written by a human. Copyright is not available for machine-generated works, at least the way things stand right now.

            So there is no copyright on an executable generated by a compiler??
            • Does the compiler generate a different binary each time it is invoked?

              I don't know to what extent an AI has different outputs for the same input, but it is clearly not the same as a compiler. However, if it is deterministic then I'm not sure if that variance matters. Then again, some philosophers argue that humans are deterministic as well.
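
              (As a minimal sketch of what "deterministic" means here - assuming a C compiler available as "cc" on PATH and a hypothetical hello.c - build the same source twice and compare hashes:)

                  import hashlib
                  import subprocess

                  def build_hash(out_name):
                      # Compile the same source, then hash the resulting binary.
                      subprocess.run(["cc", "hello.c", "-o", out_name], check=True)
                      with open(out_name, "rb") as f:
                          return hashlib.sha256(f.read()).hexdigest()

                  # Prints True only if the build is byte-for-byte reproducible.
                  print(build_hash("out1") == build_hash("out2"))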
      • by cstacy ( 534252 ) on Sunday May 14, 2023 @02:12AM (#63519849)

        True enough. TFS keeps talking about the "theft" of all that content, whereas anyone who has read enough FSF screeds over the years will realize that it is instead *copyright infringement*.

        They didn't copy anything. These so-called AIs don't have any copy of the original material in them.

        What they have is an encoding of the probability that "blue" often comes after or before the word "sky". That came from being shown hundreds of millions of examples of phrases like "The sky is blue" and "Maybe someday we could make an AI or go to Mars, but those are blue sky projects."

        And of course, the AI has no idea what "blue" or "sky" means. Any more than your phone knows anything when it predicts the word "late" after you start typing the text message "running". It's just that kind of "knowledge" but on fantastic steroids.
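
        (To make that concrete: a toy next-word counter in Python - corpus and numbers entirely illustrative, nothing like a real model - showing what "the probability that 'blue' comes after 'sky'" boils down to:)

            from collections import Counter

            # Tiny stand-in corpus; count which word follows "sky".
            corpus = "the sky is blue . those are blue sky projects . the sky is blue".split()
            following = Counter(b for a, b in zip(corpus, corpus[1:]) if a == "sky")
            total = sum(following.values())
            for word, n in following.most_common():
                print(f"P({word!r} | 'sky') = {n/total:.2f}")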

        There isn't one sentence, or even one phrase, in that AI that has been copied from any particular thing it was "trained" on.

        TFS is in effect claiming that when a person reads some stuff on the web and learns something, that copyrights have been violated, and that something has been taken from everyone. Even though the person doesn't remember the actual words or where they read them and can't quote anything.

        I wish someone would invent an AI that could recall where it got its facts from, recite the facts accurately, and perhaps even give quotations. That's impossible with these current AIs.

        However, Google and Microsoft will try to fool you into thinking that it is possible: by misleading layouts of web pages to make it look like the AI is citing search results, for example. That, and bald-faced lies about what the AI is doing.

        • by evanh ( 627108 )

          "TFS is in effect claiming that when a person reads some stuff on the web and learns something, that copyrights have been violated, and that something has been taken from everyone. Even though the person doesn't remember the actual words or where they read them and can't quote anything."

          I think you'll find that's exactly how copyright is interpreted. Hence the reason everyone who works on reimplementing an API must never have read the original sources.

          • "TFS is in effect claiming that when a person reads some stuff on the web and learns something, that copyrights have been violated, and that something has been taken from everyone. Even though the person doesn't remember the actual words or where they read them and can't quote anything."

            I think you'll find that's exactly how copyright is interpreted. Hence the reason everyone who works on reimplementing an API must never have read the original sources.

            I think you'll find that software is special. For other kinds of works the standard is recognizable elements, not being tainted.

            • by evanh ( 627108 )

              I doubt it. The only thing special might be the zeal of the corporations in pursuing copyright infringements.

            • by cstacy ( 534252 )

              "TFS is in effect claiming that when a person reads some stuff on the web and learns something, that copyrights have been violated, and that something has been taken from everyone. Even though the person doesn't remember the actual words or where they read them and can't quote anything."

              I think you'll find that's exactly how copyright is interpreted. Hence the reason everyone who works on reimplementing an API must never have read the original sources.

              I think you'll find that software is special. For other kinds of works the standard is recognizable elements, not being tainted.

              I think you'll find that when someone reads something in a book or the internet or whatever, the fact that they now have knowledge in their brain does not in any way constitute a copyright infringement.

              Among other things, two elements of copyright are not in play. A brain is not a "tangible medium". Secondly, the information absorbed into the brain is not a "copy".

              In the case of the AIs, the NN is a tangible medium. However, nothing was copied into it. Well, nothing that you can identify. It just has genera

          • by ceoyoyo ( 59147 )

            Must be tough to implement an API without ever having seen it, in any form.

            Your example is highly specific, and more or less an exception to how copyright normally works. Even so, implementing an API with access only to observations about how it works is a pretty good analogy to how one of these systems learns.

        • They didn't copy anything. These so-called AIs don't have any copy of the original material in them

          Nonsense. If they didn't have some form of the original material, they couldn't do anything at all. Just because it's encoded in a form you don't recognize, it doesn't mean that it hasn't been rerecorded.

          You could also go back to the early 20th century with a PC, grab a book and type in the text. Then you could make your same claim: "Look! All the computer has is a bunch of electrical pulses arranged in 7-bit numbers. There aren't any letters in there! No copyright has been infringed!!!" Of course, in this

          • by cstacy ( 534252 )

            They didn't copy anything. These so-called AIs don't have any copy of the original material in them

            Nonsense. If they didn't have some form of the original material, they couldn't do anything at all. Just because it's encoded in a form you don't recognize, it doesn't mean that it hasn't been rerecorded.

            Actually, that's exactly the requirement for a copyright infringement claim: that you can recognize what was copied.

            You could also go back to the early 20th century with a PC, grab a book and type in the text. Then you could make your same claim: "Look! All the computer has is a bunch of electrical pulses arranged in 7-bit numbers. There aren't any letters in there! No copyright has been infringed!!!" Of course, in this century, we know that's bunk.

            That's not how these AIs work. In your example, there are e

      • by bradley13 ( 1118935 ) on Sunday May 14, 2023 @02:31AM (#63519867) Homepage

        I disagree. There are two correct ways forward. One, as you say, is to force AI models to license fscking everything. Not realistic - only a way to cripple the models.

        The better approach imho (though arguably just as impossible) is to fix copyright. Copyright needs to be short and extremely limited.

        • by greytree ( 7124971 ) on Sunday May 14, 2023 @04:58AM (#63520003)
          This.

          Copyright is a *privilege* we as a society *give* to creators to encourage creativity.
          If we extend that privilege for more than a few years then it no longer encourages creativity but instead *stifles* new works based on the old.

          To benefit humanity, copyright terms must be cut to five years or shorter.
          • To benefit humanity, copyright terms must be cut to five years or shorter.

            The big players, such as Disney, would never allow that. They prefer to lobby for perpetual copyright, in slices of 20 years at a time. And they have the power to get their way, while the population at large doesn't. So perpetual copyright it is.

            IMHO a better alternative would be to give the big players what they want, in a way only they themselves benefit, thus playing onto their infinitely large greed, while at the same time twisting things such that most copyrightable stuff have shorter-term copyrights, t

          • We could also have more exceptions to copyright. We could add one just for generative AI, for example. Maybe for some uses the term is short, for some it's long, and for some there is no protection at all.

      • by Zocalo ( 252965 )
    Yeah, and precedent on derived works and copyright is *extremely* well established in most courts, regardless of whether you agree with the specifics of the legislation or not (and most here almost certainly do not). Whether you use "AI" or not, try to produce a derivative work based on, say, Mickey Mouse post-"Steamboat Willie", or wherever the public domain has gotten to by now; I think we all know what'll happen once Disney finds out about it.

        As usual though, the creative sector is split
      • by swell ( 195815 )

        "TFS keeps talking about the "theft" of all that content, whereas anyone who has read enough FSF screeds over the years will realize that it is instead *copyright infringement*."

        This is only partially true. Note that many data hoarders such as Facebook, Twitter and Slashdot do not create content. It is created by users even though the company may claim ownership.

        OTOH some publishers and university presses may have a legitimate claim to own the content on their servers (according to local laws).

        I believe tha

    • With the same knowledge, one person or a collective thereof (a company, e.g.) could never surpass the whole of humanity. But with AI tools - and they are just that, tools - there is now the opportunity to steam ahead, and with the same knowledge one person or company could surpass the whole: say they use it to start a news column site or programmer-as-a-service. At least some parts of our lives could be replaced/monopolized by AI, because it enables such monopolization by being quicker, better, and cheaper. Not everything: just
      • Yeah, but who wants to actually be the guy who cleans the floors in the company? Janitorial work does not exactly pay well, and you are generally not acknowledged or respected by most people.
        I would rather not see the labor market become a bigger race to the bottom than it already is.

    • Some of the information they're using, in fact quite a lot, is copyrighted. Just because it is free to view does not make it free to use without permission.
    • by jcdick1 ( 254644 )

      It's not dishonest at all. What it alludes to but doesn't specifically address is the simple fact that humanity will almost always utilize whatever tools make things easiest, regardless of the ultimate cost. And the AI developers are literally banking on this. Yes, the information scraped and collated for AI training is still out there. But the vast majority of humanity would rather pay X amount to simply ask their question and get a result than go digging through multiple sources and work out their solution.

    • A good example of this is the old web forums of yore being migrated to proprietary silos like Reddit. If all data was held on open web forums like independent PHPBB instances, with webmasters keeping their sites around forever, you could claim the argument to be disingenuous, but that is simply not the case. For example, OpenAI had free and unfettered access to Reddit and many other proprietary online silos to scrape data. Going forward, they are no longer permitting companies or individuals to do what Ope
    • Agree... we all do this. We seize the knowledge presented to us by, among others, copyright-protected works! People dedicated a large portion of their lives to making those, and we just use them and make money out of them. Billions of people!
      There is a lot of competition in that market - a large motivator to keep prices reasonable. Open source projects are popping up on the topic. I do not think it will become an us-and-them situation. But yes, it is scary, and may have a large impact on our future. Fingers crossed.
    • Dishonest post at best. When a person learns something, it is bound to that person. A person can't beam what they learned to 5,000 running instances of themselves. This is akin to encoding the book as an inheritable trait. They should have purchased a copy of the book for each instance they ever launch.

    • I'm not sure that argument worked well with Napster. If I copy the latest movie and start charging people to see it, did I infringe on the original holder's copyright? A lot of these AI companies know they are stealing copyrighted works to train their AI models; they just don't care, and will make a lot of money if they ignore the law.
    • by ceoyoyo ( 59147 )

      Information should be free!!!

      But not for you. Or for that.

  • Regulatory capture (Score:3, Insightful)

    by crotron ( 7617930 ) on Sunday May 14, 2023 @12:58AM (#63519797)
    I suspect that any regulation on AI will end up being twisted by lobbyists into something that only makes it economically feasible to use for the same wealthiest companies that the article writer wants it to be kept away from. That's not to say there shouldn't be any regulation, just that my hopes aren't too high if it does come to pass.
    • by cstacy ( 534252 )

      I suspect that any regulation on AI will end up being twisted

      Since it's impossible to define "AI" and it's impossible to "regulate" math, there cannot be any regulation of AI.

      It will be "interesting times" to see what laws are passed and how they will be abused. I predict for the immediate future, a bunch of silly platitudes from the Government to assure people that "something is being done".

      • by gweihir ( 88907 )

        Nonsense. It is quite possible to define it by looking at the systems in use and the parameter set is not "math", it is business data.

        • by cstacy ( 534252 )

          Nonsense. It is quite possible to define it by looking at the systems in use and the parameter set is not "math", it is business data.

          Good luck with that, especially the "computing and algorithms are not math" part.

      • Since it's impossible to define "AI" and it's impossible to "regulate" math, there cannot be any regulation of AI.

        When the contents cannot be regulated, what gets regulated are causal chains. For example, years ago a group invented what they thought was a clever way to transfer movies. You took the movie file, another, short file of your own, and the app did something akin to a XOR on them, so you had as a result a file that technically wasn't the movie. Anyone with the first file, but without the second, couldn't reconstruct the movie, so the group thought they had a win, after all, the resulting file "mathematically"
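
        (The scheme described is a plain XOR split; a minimal sketch in Python, with stand-in bytes and a same-length key for simplicity:)

            import os

            movie = b"stand-in bytes for a movie file"
            key = os.urandom(len(movie))                       # the "file of your own"
            blob = bytes(m ^ k for m, k in zip(movie, key))    # "technically not the movie"
            # Alone, the blob is statistically indistinguishable from noise...
            assert bytes(b ^ k for b, k in zip(blob, key)) == movie  # ...with the key, it's the movie again.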

        • by cstacy ( 534252 )

          Since it's impossible to define "AI" and it's impossible to "regulate" math, there cannot be any regulation of AI.

          When the contents cannot be regulated, what gets regulated are causal chains

          Court: "True. But how did you get that specific number on your storage device? Explain it to me step-by-step."

          Naive hacker: "Er..."

          Court: "That'll be $200 million, to be deducted from your wages until it's paid in full, or you die, whichever comes first.

          Could you please cite the case you are thinking of?

          The closest thing I can guess is that you're thinking of a case involving an encryption key. That's not because you can copyright a number. The number is not copyrightable. Instead, the charge is the circumvention of a copyright enforcement mechanism, theft of trade secrets, etc. The number is still not copyrighted.

          But I don't see how that relates to the discussion of LLMs, which do not encode any identifiable phrases or sentences. You have to identify the

          • by cstacy ( 534252 )

            The number was partial evidence that someone was circumventing a copyright enforcement method. The copyright violation was not copying the number. It was about using the number to facilitate making infringing copies.

            If you think that passwords can be copyrighted, then I want a nickel for every time someone writes the following words: "master", "susan", and "1234", which is the one I use on my luggage!

          • Could you please cite the case you are thinking of?

            Sorry, it's been several years and I don't remember enough details to find it. It didn't involve passwords though, but using two distinct files, one of which functioned as a coding and decoding key to convert the other file between playable and encrypted.

            The number is still not copyrighted.

            Consider, for argument's sake, a Base64 encoding of the MP3 version of a song. The Base64 is merely a big number. The song is copyrighted, and by extension so is that number, as long as it's not considered a number, but a copy of that song. But how can yo
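
            (Concretely, in Python - the bytes below merely stand in for an MP3; int.from_bytes gives "the number":)

                data = b"pretend these bytes are a copyrighted song"
                as_number = int.from_bytes(data, "big")              # the file, as one huge integer
                assert as_number.to_bytes(len(data), "big") == data  # and back again, bit for bit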

  • Machines can't do anything as long as robotics tech sucks. They could appoint themselves geniuses, but there is no dextrous robotic technology that could enable them to do anything.

    • by dvice ( 6309704 )

      1. Make an AI
      2. Ask the AI to create a virus that kills all Americans, but not Russians.
      3. Make the virus and release it.
      (Note: current AI cannot yet do this, but that is not the fault of the AI so much as of our limited knowledge of DNA.)

      There are multiple methods by which weak AI can be used to kill humans, and this AI is currently in the hands of everyone: Russia, ISIS, criminals, your ex. This is why AI is a threat - not because of a Skynet scenario.

      • +1 there was an article here on /. the other day about work in progress to make a genomic map of all of humanity... Yum, data.

    • by Whibla ( 210729 )

      Machines can't do anything as long as robotics tech sucks.

      Machines can't do anything alone. Yet.

      ... there is no dextrous robotic technology that could enable them to do anything.

      There's no existing tech that enables them to do everything, but if you think current generation robots can't do anything you're still living in the previous century. Closing the last arc of the circle is, broadly, only a matter of will, on our parts. Sure, it might be a mistake, but that's something human beings are incredibly good at making.

    • Machines can't do anything as long as robotics tech sucks. They could appoint themselves genius but there is no dextrous robotic technology that could enable them to do anything.

      Apparently, hackers can do a lot of damage & harm remotely. Why would that not be possible for a malicious AI bot?

    • Machines can't do anything as long as robotics tech sucks.

      They don't need robotic bodies. All they need is email.

      With that a sufficiently intelligent AI could devise a means to start a business, open a bank account, earn money, use that money to purchase computing on AWS, devise new mathematical tools to calculate the effect of proteins able to damage humanity on the long term, order bits and pieces of those proteins from protein-manufacturing labs, send the resulting protein orders to 3rd-world contractors in countries with lax biological controls, the contractor

  • by Bobknobber ( 10314401 ) on Sunday May 14, 2023 @01:30AM (#63519815)

    IMO this is another symptom of our lack of comprehensive regulation on data collection.

    Simply put, companies, governments, and even independent actors (e.g. various online communities) have been taking, sharing, and selling online data between each other for the better part of several decades. Your name, home address, financial status, medical history, spending habits, educational background, career path, family members, social media history, etc. are being traded like commodities between companies and people alike. Whether it is private or publicly available, it does not matter.

    The big kicker is that we have effectively been pressured into giving all that up as a prerequisite for living in modern society. For all intents and purposes, you basically need to have a digital presence if you want to have a job, apply for a loan, purchase a house, use medical services, attend school, etc. You could try to get by without one, but only the Amish or a Ron Swanson could maybe live completely off the grid.

    Now with generative AIs being trained off the works of various creative and technically minded individuals, a major line has been crossed. Professionals have been pushed into putting their work online as a means of building up their reputation and getting jobs. Now AIs are being trained on all that work to, in effect, replace many of those individuals. As such, I can understand if some feel betrayed by this, especially considering that AIs can produce things at a much faster rate while also improving themselves. Millions of people may very well have trained their replacement without having been notified of it, and those products are being monetized as we speak. That alone has serious implications for our economic, cultural, and societal development.

    So really, with this line crossed, what is next? Are companies going to start taking our thoughts so that they can train the next generation of AIs on them? We already have chatbots trained on social media and forum boards, so it is not far-fetched that with MMI tech, human thoughts will be collected for various uses.

    I hope that we can draw a solid redline on this matter, but really this is something that should have been done a long while ago.

  • by Petersko ( 564140 ) on Sunday May 14, 2023 @02:01AM (#63519837)

    Now it's here. The Great Social Upset. Now all those fringe-sounding things like Universal Basic Income are going to have to get considered. It might take a while to play out, but it's very hard to stop a landslide in motion.

    While we're rejigging jobs, can we disperse agriculture a bit? Would be nice to buy tomatoes that didn't take an 800 mile trip to get here.

    • It's cheap and easy to deliver tomatoes 800 miles. It's expensive and stupid to grow tomatoes en masse in Minnesota.

      • You missed my point. It's cheaper under the current paradigms. We're undergoing a shift. Maybe we can rethink some things during that disruption. Tomatoes grow fine near me - they just can't compete with the economic advantages of other regions. A resurgence of local agriculture where plants do well would not be a bad direction.

      • It's cheap and easy to deliver tomatoes 800 miles.

        Sure, if you're willing to deliver shitty tomatoes. It's often cheap and easy to do a shitty job.

    • "Buy Local to Go Green" is one of those silly ideas that somehow a lot of people believe.

      It takes less CO2 to deliver sheep from Australia to London by boat than from Scotland to London by bus.

      The distance has very little to do with anything.

    • by ceoyoyo ( 59147 )

      What's stopping you from growing tomatoes? You don't want to? Exactly.

  • Correction (Score:2, Informative)

    by LeeLynx ( 6219816 )

    "Tech CEOs want us to believe that generative AI will benefit humanity," argues an column in the Guardian, adding "They are lying so they can rob humanity blind ..."

    Ah, much better.

  • If an AI made it, then I don't recognize it as IP. I'm sure my government will eventually try to persuade me otherwise, but I'm not willing to concede this easily.

    • That's nice. But if "society" (as represented by the legal system and the government) deems otherwise, "I don't agree" isn't a great defense if you're in court over the matter.

      • Is this a democratic society that represents the views of its people, or is it an oligarchy that represents the interests of a moneyed class? I'm not under any moral obligation to cooperate with the latter.

  • I think there is a question to be answered: is teaching a model on a dataset different from *copyright infringement*? You are not directly using the content. What you're doing is teaching a model on it. Isn't a human a learning model? Now what about a person with eidetic memory? Arts, philosophy, economics, IT - in every field you can think of, all of the great people were inspired one way or another. By knowing the date of a great person and their teacher, you can pretty much assume 50% of their work. Is that *c
    • The claim that LLMs think like humans has yet to be conclusively proven. And until the matter is resolved, does it make sense to assume they do?

      At best you have what might be a fragment of a human mind, but that alone is not sufficient to say it is exactly like a human. Monkeys are effectively related to humans, and have demonstrated the ability to wield tools, form complex communities, and even wage war on one another. I would argue they are much closer to humans than LLMs are, but we do

    • There is a difference. A human learning by heart copyrighted material, or also, a human learning from copyrighted material, then trying to make a living out of his knowledge, is "fair use." Sure, the human can go and tell his human friends of all the cool things he learnt. He could even post some of his heart-learned knowledge on a web forum (possibly then sued for copyright infringement if he starts posting a whole book he had learnt, typing it word by word on his keyboard). The possibilities are endless,

  • All those AI projects need big computational power, so currently most of them are run by private enterprises with deep pockets, which make the results proprietary. If society wants/needs non-proprietary results, then it needs to create non-proprietary efforts for AI projects: distributed projects, projects run by trusted NGOs, projects run with public (government) money, and such. It's that simple.

    And yes, I expect it to be a problem for FOSS: various proprietary software will leverage the power of genera

  • by Qbertino ( 265505 ) <moiraNO@SPAMmodparlor.com> on Sunday May 14, 2023 @02:35AM (#63519875)

    ... of the "I" in "AI".

    If the cost of an original painting or novel or movie/videogame/newsreport/essay/piece of software/innovative car design/song/whatever drops to fractions of a eurocent because AI can spit out an infinite and endless stream of those at a rate no human can consume for pennies worth of electricity, we can rest assured that the concept of "intellectual ownership" will become null and void and more laughable than a gorilla suing humanity because they "stole" his innovative method of nose-picking.

    In fact I'm pretty certain that quite soon now the very concept of "human creativity" and "wise insight" will become way less foo-foo wha-wha esthetically spiritually romanticized in a very sobering way, once machines can spit out premium quality ideas and their implementations like some fully automated conveyor belt spits out rivets or strands of spaghetti.

    This writer just hasn't quite grasped what's actually about to happen.... Maybe he should ask an AI to revise his piece. I'm not even joking.

  • by StikyPad ( 445176 ) on Sunday May 14, 2023 @03:25AM (#63519913) Homepage

    Generative AI might be able to produce passable works of prose and "art," but that only shows how low our bar is for what is "art," and it's helped by the fact that art is an experience of the viewer more than a work of the creator. If Andy Warhol had never existed, and an AI generated Campbell's Soup NFTs today, they would not suddenly be worth millions. Andy's (assistant's) silkscreens are worth millions because they are associated with Andy Warhol and his "transformative" effect on culture (allegedly, or whatever non-disprovable ideas people want to believe about him), but not because there is something inherently valuable about a depiction of the mundane.

    Further, generative AI cannot perform problem-solving in any meaningful way, nor was it designed to. It might be coaxed in that direction -- and people have tried -- but it's far from the most efficient method. It cannot make connections between seemingly disparate pieces of information, or find new applications for existing technology, or "ponder" the training data it received in order to "understand" the world/universe and its place in it. It does not even have beliefs, per se.

    Indeed, without human intervention to keep it "on the rails," generative AI quickly goes awry, weighting the significance of some training or input too heavily, or not heavily enough. This is an epistemological problem, by the way, not a design question. Humans suffer from the same limitations, and we rely on one another, to some extent, to validate our ideas or experiences, even (and especially) in the scientific community. (Non-generative) AI can certainly apply existing knowledge/beliefs in known ways, but only because we tweak it until it does, not because of its (non-existent) ability to understand what problem it's solving or what training data is most suitable for the task. That level of understanding is at least an order of magnitude more complex than anything AI does. I'm not saying it can't happen one day, but it's not going to be anytime soon.

    • but that only shows how low our bar is for what is "art,"

      Are you talking philosophically or physically? Because AI algorithms have been used to produce far nicer works of art than much of what has been called art.

      Note what I said though: "have been used to produce". AI isn't free-thinking on its own. As it stands it is a tool. It cannot come up with the idea of an Andy Warhol, but a human mind certainly could, and thus could direct it to be created. Is a film director not an artist, on account of telling other people how to make his vision happen? Sure right now you can sa

    • by gweihir ( 88907 )

      Indeed. Artificial Ignorance cannot even perform on the level of an average human, and these things are as dumb as bread.

    • Famous last words.

      Humanity's last tweet: "ChatGPT 5 isn't all that impressive."

      To get you up to current affairs:

      The human brain has a neural capacity equivalent to roughly 10^18 bits, plus all the sensory input and the neural-network capacity of our evolved naked-monkey body.

      AI engines are at (roughly) the equivalent of 10^12 bits right now. Give or take. And rising. Fast. And they don't have two eyes and ears, but millions of cameras, image and audio sources plus other sensory input that humans don't

  • by DMJC ( 682799 ) on Sunday May 14, 2023 @03:32AM (#63519917)
    We don't actually have to make all AI output proprietary. We could use it to maintain and dramatically expand the scale of open source. Technical debt and maintenance burden are two of the largest problems in open source. If AI can fix them, then open source could become very widespread.
    • by gweihir ( 88907 )

      AI cannot fix technical debt. Fixing technical debt requires insight into why the original solution was or is bad and how it can be done much better. AI cannot do "insight".

      • That's not what I meant. AI can analyse code for API changes and provide the code necessary to modernise API usage. E.g., I contribute to the Vegastrike game engine. The lead developer left and no one maintains the ffmpeg code. AI is perfectly suited to this: it can analyse the difference in API calls between ffmpeg versions and migrate the codebase from the outdated ffmpeg API to the one current distros maintain.
        • by gweihir ( 88907 )

          Ah, no. It may be able to adjust an API signature, but as soon as the functionality is not the same, AI is worse than useless.

    • I think the article is a bit confused & it's not really making clear what the issues are. A few cases have already found that AI generated works can't be copyrighted. If that view holds, then it's all basically public domain. The generated works aren't the issue here, it's the technology to generate the works. That's what's proprietary & that's what Microsoft, Apple, Google, Meta, & Amazon are going to sell to us.

      The biggest danger that I think might unfold is that AI can & is generating
  • by nikkipolya ( 718326 ) on Sunday May 14, 2023 @04:36AM (#63519975)

    It's sounding like a humanity-vs-backhoe-loader argument to me. Companies watched humans digging and picking up dirt with small spades and shovels, and then they went ahead and invented a large artificial scooping hand that can do the job more efficiently and quickly. Oh no! They are ripping off humanity by learning how to dig and pick up dirt after watching humans. They certainly did not have the consent of humans to do so.

    To me the brain is just another, albeit more complex, mechanism than a backhoe. And companies are trying to build a bigger and more efficient one to profit from it!

    Like me, generative models read this article too. Just like me, they learnt something too. If the author didn't intend others to read and learn from their ideas, there is a way out: don't publish.

    • It's more like the AI was trained on multiple companies' trade-secret backhoe designs that were leaked on the internet, created a backhoe design by combining those designs in a trivial way, and then declared that it designed a backhoe and that the nearly identically copied designs are fair use.
  • Of course Microsoft, Apple, Google, Meta, & Amazon aren't evil & in no way would they ever knowingly do anything that would harm people or restrict their liberty. All their founders & CEOs are philanthropists, right? At least the media companies tell us so. They're all making the world a better place... making the world a better place... making the world a better place...
  • Sometimes I think I live in another universe. In my universe the big companies are in a desperate race to upgrade their products with AI, and concurrently there are literally millions being funneled into dozens of AI startups pitching the "next big thing". Now, I'm no friend of capitalism - I really am a socialist - but even I can see that multiple players collecting human knowledge is far better than a single government controlling it all. Put it another way: I'll take my chances with greed any day.
  • For decades we have all been brainwashed by the usual suspects to equate copying with theft. No wonder that now some ignorant scribbler presents the training of an AI as "theft" when in reality it is only copying. Building, training and running that AI costs money, so it is understandable that using the AI will not be free. But with so many players competing, that price will be reasonable. The article is based on a series of wrong assumptions.
  • AI will turn knowledge into proprietary products, just as encyclopedia salesmen did,
    in the same way as newspapers like the Guardian turn human lives into proprietary products,
    and the same way as {hospitals, farmers, miners, etc. etc.} turn human effort into proprietary products.

    • by lenski ( 96498 )

      "It's just life" is a nearly perfect analogy of how products and projects based on the techniqes we call "generative AI" will affect the ways we live.

      Generative AI systems are evolving under control of their owners. The problem is that these systems evolve at a far far faster rate than anything we have seen before, and I believe we (humans) are not ready.

      The arrival of information automation eliminated lots of drudge jobs: Data entry, clerical work, secretarial pools, etc. The arrival of powered excavators

  • When AI can drive a car, I'll start to think that there may be something to it. As long as it's just regurgitating what humans have created, I'll think it's just a lot of overhyped nonsense.

  • by Opportunist ( 166417 ) on Sunday May 14, 2023 @07:27AM (#63520157)

    If we don't stop them, yes, they'll grab what they can and leave a desert in their wake.

  • "further dispossession and despoliation."
  • Anyone with a major website should just put a footer under it saying that they don't consider copying the data into memory for machine training fair use, and explicitly denying any rights to use the content for it. Also have a whitelist for spiders (sketched below). Reddit and Stack Overflow could have easily made a couple hundred million if they had done this a couple of years ago, but better late than never.

    I know Reddit and Stack Overflow are negotiating right now, but there's no advantage in leverage to playing nice. OpenAI thought it easier
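
    (The whitelist idea above, sketched as a robots.txt - bot names are examples only, and this is advisory rather than legally binding; it only asks crawlers to comply:)

        # Deny all spiders by default...
        User-agent: *
        Disallow: /

        # ...then whitelist the ones you want.
        User-agent: Googlebot
        Disallow: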

  • by lpq ( 583377 ) on Sunday May 14, 2023 @10:06AM (#63520337) Homepage Journal

    How can AI steal anything -- it can't own anything:
    The US Copyright Office says AI-generated material can't receive copyright, and the courts have said an AI can't be a patent inventor.

    When an AI technology determines the expressive elements of its output, the generated material is not the product of human authorship. As a result, that material is not protected by copyright and must be disclaimed in a registration application

    I.e., generative AI output can't receive copyright or patent protection under US law.

  • by zuki ( 845560 ) on Sunday May 14, 2023 @10:26AM (#63520361) Journal
    ...appears to contradict a large part of what the writer of this article built to support their argument.

    Here's one of many articles [calcalistech.com] commenting on this now-famous "We have no moat, and neither does OpenAI" memo, which acknowledges that nimble open-source LLM projects are scaring the bejesus out of these companies because they're able to do much more with smaller training sets and resources. This entirely contradicts the article's main point, which appears to be that proprietary datasets will only help multinationals enslave people (a sentiment I don't generally disagree with, but in this case I'm not sure it holds).

    So even though I may not be qualified to pass judgment on what this scare piece in The Guardian implies, it feels to me that the copy editors may not have done anywhere near the required amount of research before allowing this writer's article to be published. Or it might have made sense three days ago based on the data we had, but doesn't anymore today.

    One thing is for sure: things are moving very, very fast, and this speed is only increasing - not sure if exponentially, but it sure feels like it. This implies many of us (Google employees and managers included) are left behind, unable to grasp what it actually all means.
    • From your article:

      “But this is a short-term view. When you look at the future, ambitious applications such as making long-term investments in the stock market, managing complex international logistical systems, even dealing with and solving political problems and crises, could result in the creation of a moat by the bigger and better-funded players that the open source community has no chance of overcoming.”

      While the open-source community can squeeze out more performance from anyone particular m

    • Despite OpenAI's lead, they can't have a moat at all.

      They are owned by a non-profit promising to democratise AI... trying to enforce NDAs and invoke trade-secret law against, say, Anthropic would be very dangerous in court because of that. They might keep a secret from the plebs, but their researchers are leaking them all over the place commercially.

  • What the discussion about AI is doing is highlighting the problem with "intellectual property". It is the idea that humans somehow create the knowledge they gain and therefore own and control it. AI, like any computer program, is just a mathematical model that calculates outcomes. The idea that it is "intelligence" is just anthropomorphizing it. In fact, human intelligence is not just about calculation and we don't make decisions based on reason. But you can observe the outcomes and use the probabilities f
    • >In fact, human intelligence is not just about calculation and we don't make decisions based on reason

      You will need to very carefully define your terms here... because I would absolutely argue human intelligence is 100% calculation (and therefore reason), it's just on a fuzzy platform with inputs and values of which we barely have a beginning of an understanding.

      Essentially, though, we're calculators calculating the best way to propagate our genes. Everything beyond that is an interesting emergent prope

      • We don't make decisions based on calculation. We make decisions based on emotional responses and our genes drive those emotions not calculation. Moreover the survival of genes is just a random result, not based on any calculation. I don't mean to suggest that calculation isn't used to inform decisions. But the final weighing of the decision is done by emotions applying what we often call values.
        • >We make decisions based on emotional responses and our genes drive those emotions not calculation

          Those are calculations; your neural net taking inputs, applying weighted decision processes, and coming up with an output.

          That you call the results 'emotions' is just linguistic flavor.

  • All this argument boils down to is A) a bunch of lawyers on retainer whose sole job is to B) ensure that their clients keep getting money for content that they created 10, 20, 30, 40, 50 years ago and who haven't had a real job since. Screw them.

  • > [W]e trained the machines. All of us. But we never gave our consent. They fed on humanity's collective ingenuity, inspiration and revelations (along with our more venal traits).

    As a writer, wasn't I inspired by all the works I've read? Aren't past writers or past artists precisely what we study if we learn writing or art?

    Why wouldn't AI generated art work this way?

    • Likely for the same reasons why we have human rights and animal rights.

      Would you consider a monkey a human? A monkey can demonstrate novel problem-solving skills, wield tools, create societies/tribes with its kind, and even wage war over territory. Does that make a monkey a human? For much of society (outside of eco-hippie communities), we do not. Hence why we do not grant monkeys copyright for selfies.

      Thus far, we have no conclusive evidence that LLMs are sentient, let alone think like humans. We just equa

  • For years, Slashdolts, often with very low user IDs, blathered on about how "copying is not theft". Well, what's good for the goose is good for the gander, eh? If Joe Sixpack can download MP3s and claim "it's not stealing because you didn't lose the song", then huge corporations can do the same. If you're one of the people who staked that moral claim, don't come crying. The chickens are coming home to roost, but it's all rather foul.

  • Knowledge was always put in proprietary products. They are called books and they have copyright.
