AI

Supermarket AI Meal Planner App Suggests Recipe That Would Create Chlorine Gas (theguardian.com)

Long-time Slashdot reader newbie_fantod shares a report from The Guardian: A New Zealand supermarket experimenting with using AI to generate meal plans has seen its app produce some unusual dishes -- recommending customers recipes for deadly chlorine gas, "poison bread sandwiches" and mosquito-repellent roast potatoes. The app, created by supermarket chain Pak 'n' Save, was advertised as a way for customers to creatively use up leftovers during the cost of living crisis. It asks users to enter in various ingredients in their homes, and auto-generates a meal plan or recipe, along with cheery commentary. It initially drew attention on social media for some unappealing recipes, including an "oreo vegetable stir-fry."

When customers began experimenting with entering a wider range of household shopping list items into the app, however, it began to make even less appealing recommendations. One recipe it dubbed "aromatic water mix" would create chlorine gas. The bot recommends the recipe as "the perfect nonalcoholic beverage to quench your thirst and refresh your senses." "Serve chilled and enjoy the refreshing fragrance," it says, but does not note that inhaling chlorine gas can cause lung damage or death. New Zealand political commentator Liam Hehir posted the "recipe" to Twitter, prompting other New Zealanders to experiment and share their results to social media. Recommendations included a bleach "fresh breath" mocktail, ant-poison and glue sandwiches, "bleach-infused rice surprise" and "methanol bliss" -- a kind of turpentine-flavoured french toast.
In a statement, a spokesperson for the supermarket said they would "keep fine tuning our controls" of the bot to ensure it was safe and useful. They noted that the bot should only be used by people over the age of 18 and that the recipes "are not reviewed by a human being."


Comments Filter:
  • Not a big deal (Score:4, Insightful)

    by JoshuaZ ( 1134087 ) on Thursday August 10, 2023 @08:53PM (#63757726) Homepage
    The system did this when people were giving it non-food items and asking it what to do with them. So what? All that means is that the AI tried to do something with them. If anyone is seriously asking what foods they can make by putting cleaning fluids into their food, and they are going to take the results seriously, I doubt they were long for this world anyway. More broadly, this is part of a pretty annoying thing where every time there is a new AI system, some people deliberately try to see what the most outrageous things they can get it to say are, and then try to turn that into news, when it really isn't. Congratulations, you got a large-language model to output something ridiculous. Yay you.
    • Actually, the government needs to tell us what we're allowed to ask the AI and ensure the answers are safe for us.
    • who are always breaking the updates you are attempting to commit on the A.I. system.

    • Re:Not a big deal (Score:4, Insightful)

      by arglebargle_xiv ( 2212710 ) on Thursday August 10, 2023 @09:38PM (#63757826)

      The system did this when people were giving it non-food items and asking it what to do with them.

      If it's too dumb to know a basic fact like that you shouldn't put bleach in food, why are you expecting it to get something subtle like an actual recipe right? "You're coeliac and your husband has a lethal nut allergy? In that case can I suggest our high-fibre bread with peanut butter for the both of you?".

      • by ShanghaiBill ( 739463 ) on Thursday August 10, 2023 @10:01PM (#63757858)

        LLM-based AIs don't "know" anything.

        You are clueless if you expect them to have real-world knowledge about stuff like putting bleach in your food. Even one of our presidents didn't know that.

        • Re:Not a big deal (Score:5, Interesting)

          by joe_frisch ( 1366229 ) on Thursday August 10, 2023 @10:09PM (#63757870)
          There is value in making sure most of the public understands that the current generation of AI is not "intelligent" in the way people expect. In this case the things it suggested were obviously flawed, but there could be many cases where using an AI would create answers that were dangerous in a non-obvious way.
        • So much this. I couldn't believe it when I found out that my tap water was treated with chlorine. The same chlorine that's in my laundry bleach! Don't worry, I already know about the fluoride in tap water. Switched to single use plastic bottles years ago. Good for sequestering carbon and safely getting oil back into the ground, right?
          • by vivian ( 156520 )

            If you are worried about chlorine and fluoride in your tap water, you should be worried about BPA from plastic water bottles.

            As long as you aren't breathing the water in, you should be ok.

            • So boiled water is bad because water vapor is toxic?

              • by vivian ( 156520 )

                I was referring to the fact that breathing any liquid in is generally fatal unless you are a fish.

            • Chlorine may in some cases be the least bad of several bad options.

              You really don't want to drink some of the pathogens that chlorine, chloramine, or other substances try to kill before they can kill you.

              Fluoride in drinking water is a very different story. It's there supposedly to protect dental health, but it causes or contributes to all kinds of effed-up conditions and turns out not to be all that useful for teeth anyway (topical application, rather than drinking it, seems both more effective and safer).

              BP

              • For all we know, the AI might have made a culinary marvel. All we need is an idio ... vlogge.... I mean volunteer to try it out.
              • From a public health perspective, adding fluoride to drinking water is an easy way to get a benefit. The expected benefit was so big that they rolled it out quickly.

                This in turn led to early resistance from the tinfoil-hat crowd worried about being poisoned, memorialized in Dr. Strangelove with the whole "precious bodily fluids" bit.

                • Fluoridation of drinking water was a mistake. It confers no demonstrated health benefit, but does greatly elevate lifetime cancer risk (among other harms of a less worrisome nature).

                  It is commonly thought to prevent tooth decay, but the advent of fluoridated toothpaste - a really, really long time ago - rendered this alleged benefit moot.

                  Many European countries have banned it.

                  • Fluoridation ... does greatly elevate lifetime cancer risk

                    The American Cancer Society says there is no known cancer risk in humans from fluoridated water: Water fluoridation and cancer risk [cancer.org].

                    There is a slightly elevated risk of a rare type of bone cancer in rats, but no evidence that the same cancer is elevated in either mice or humans.

                    Fluoride certainly doesn't "greatly elevate" cancer risk.

                    Many European countries have banned it.

                    Several EU countries don't fluoridate their water. Others do. None have "banned" it.

                    Water fluoridation by country [wikipedia.org]

          • A lot of people who have heart disease take rat poison. RAT POISON!!!

        • Re:Not a big deal (Score:5, Insightful)

          by mistergrumpy ( 7379416 ) on Friday August 11, 2023 @06:49AM (#63758396)

          LLM-based AIs don't "know" anything.

          Exactly. We should change the acronym from AI to PSWSM - plausible sounding word salad maker. The trouble is that applies to many politicians too.

          • AI is different from politicians. With AI, there's an implicit "make up a story" directive in the command.

            Oh wait. It's exactly the same.

          • Remember Eliza? Super simple AI from the 60s. Some people using it honestly thought a human was typing from another room!

            So it is true that AI has been getting better. Another major factor is that the bar for the Turing Test gets lower every year as average human intelligence declines.

            Humans are just pattern matching too - they believe something vaguely plausible at a kindergarten level as long as it seems to fit with their prior preconceptions. I.e., that political party is evil, therefo

      • The system did this when people were giving it non-food items and asking it what to do with them.

        If it's too dumb to know a basic fact like that you shouldn't put bleach in food, why are you expecting it to get something subtle like an actual recipe right?

        Unless you're doing it on purpose ... Arizona woman accused of pouring bleach into coffee of Air Force husband over months [nbcnews.com] (Aug 6, 2023)

      • by memnock ( 466995 )

        If a person wrote these recipes for others to consume, they'd probably be considered a criminal, maybe even a terrorist. But a company doing this to increase profit? meh. They just made a boo-boo.

    • Re:Not a big deal (Score:5, Insightful)

      by GFS666 ( 6452674 ) on Thursday August 10, 2023 @09:43PM (#63757832)

      The system did this when people were giving it non-food items and asking it what to do with them. So what? All that means is that the AI tried to do something with them. If anyone is seriously asking what foods they can make by putting cleaning fluids into their food, and they are going to take the results seriously, I doubt they were long for this world anyway. More broadly, this is part of a pretty annoying thing where every time there is a new AI system, some people deliberately try to see what the most outrageous things they can get it to say are, and then try to turn that into news, when it really isn't. Congratulations, you got a large-language model to output something ridiculous. Yay you.

      Please turn in your nerd card. People testing the systems to their limits is EXACTLY what they should be doing to show the limitations of the system and what should be done to improve it. That is done in literally EVERY engineering system I've ever worked on and that's what engineers are supposed to do. The problem here is that the supermarket people experimenting with the AI FAILED to do any testing beforehand and should have EXPLICITLY said that ANY recipe that the system generated should be suspect and NOT done under any circumstances.

      • Testing system limits is not a bad thing. Using a system which can take any input and acting like it is *newsworthy* when the system does something like this is the problem. I filed a bug report not too long ago because a certain program was crashing whenever it tried to open a file with a certain character string. That isn't newsworthy.
        • Re:Not a big deal (Score:5, Insightful)

          by dfm3 ( 830843 ) on Thursday August 10, 2023 @10:17PM (#63757884) Journal
          What's newsworthy is that someone actually thought this app needed to incorporate AI, and in doing so they caused it to fail in spectacular fashion.

          I'm all for an app that can take a list of ingredients and provide you with a curated list of recipes, the key being that an actual person with some sense of taste has gone through and tested the recipes to make sure they are palatable. What I've learned from dabbling in mixology is that while some seemingly odd flavor combinations sometimes do work in unexpected ways, more often than not you get something gross if you just start combining ingredients at random - which is essentially what this "AI" is doing.

          An example is a snowcone stand in our neighborhood that has a "pour your own syrup" bar with a dozen flavors - I had to make a rule that the kids are only allowed to choose up to THREE at one time. Three or fewer and you have a good chance they'll actually like the flavor combination and eat it. Any more and the flavors mix so that you inevitably end up with a snowcone that just tastes musty, with an unappealing light brown color.
          • They put AI in the app because the AI topic was making the rounds in CEO reading lists. Like this McKinsey article from last fall. https://www.mckinsey.com/capab... [mckinsey.com]
          • by tlhIngan ( 30335 )

            It's newsworthy because people are going to break the AI. Large Language Models do have strange breakdowns - even ChatGPT has oddball strings you can put in that cause it to break down in very unusual ways.

            Even safeguards have been worked around - getting ChatGPT to tell you how to commit murder, for example. You can't ask it directly because it will say it can't tell you that, but you can ask it for help as, say, a writer. Or having ChatGPT believe your grandmother tells you Windows activation codes in order

        • Nevertheless it would be interesting to know which OS, and which character/string. Was it in the filename or in the content of the file, and so on.

      • Making your computer say stupid things isn't testing the limitations of anything. You're demonstrating there aren't any, which isn't really a surprise.

        Making a modern language model say something dumb is only slightly more intellectually stimulating than typing "fuck" into a 90s text to speech synthesizer. Language models will say dumb things, they're not intelligent, and you're not tricking them.

        • But those speech synthesizers have never been marketed as intelligent. And THAT's the problem with what's currently happening with "AI".

    • Sounds to me like it is basically just taking a list of ingredients and randomly mixing them. It clearly needs to know a lot more about the ingredients it is using. Why anyone would think that this is a good idea or that it was something people would need is beyond me.
      • Because they used the magic letters "AI" in their app and got instant attention.

        It reminds me of the companies which in 2017 changed their names to "something blockchain" and saw their market value rise by four or nine times their previous cap. https://www.reuters.com/articl... [reuters.com]

        The previous trends of offering NFTs, or adding "crypto" to your company name, claiming expertise in data mining, or announcing new forms of relational databases were similar. I expect the AI excitement to last about two years, then

    • by gweihir ( 88907 )

      You have to remember that a significant part of the population are idiots. They could get badly hurt or even killed by such an app. And yes, that would be on the app creator. You have a reasonable expectation that recipes in a cookbook that do not come with clear and strong warnings will not poison you or blow up, and the same applies to this app. Yes, I get that this was meant to be satirical. And the ones coaxing out these recipes are not actually at fault for the result. However the people that made this

    • This. Stop, already, with the guardrails. If someone asks a stupid question, they get a stupid answer. Remove the guardrails, and breaking through them won't be "fun" or newsworthy anymore.
    • Yay indeed.

      You sound like a shill for OpenAI when you say these things. Why are you defending some company that produces an app that can kill? Just because it's AI powered it doesn't deserve your uncritical loyalty.

      The fact is that AI lobbyists have been overpromising miracle solutions since late last year. The likes of OpenAI and partner startups want to sell product to make money, but actually those products are not fit for purpose in most cases. Do you think that a supermarket manager who just wants

      • Does your car need to have a warning sign not to put saltwater in the gas tank, or your stove to have a warning that says "Do not leave the gas on for an extended period and then light a match"? Does someone saying that we don't need to be worried about those make someone a shill for Big Car and Big Oven?
        • by jvkjvk ( 102057 )

          Does your car come with an application that tells you to put saltwater in your gas tank, or your stove with one that tells you to leave the gas on and then light the pilot with a match? No? Well, this application did the exact equivalent.

          Now what, for your argument?

          • This isn't an application that does that though. This is an application which does so *after* being told to try to come up with something to use those ingredients with. One has to already have identified that one wants to use ammonia or whatnot as a food ingredient before it says anything at all.
    • by DarkOx ( 621550 )

      Right, my relatives were visiting and I asked my five-year-old nephew what he would like to have for dinner; he said "poop."

      I inwardly chuckled about getting to explain a 'shit sandwich' to him when he's a bit older, but declined to ask the cat to contribute to the evening's meal. This is about as newsworthy.

      This is simply a case of garbage in garbage out - at least in the outrageous examples. If you intentionally ask a machine or a person a stupid question, you are going to get stupid answers.

      Now there *are* so

    • The system did this when people were giving it non-food items

      Have you never heard the phrase "one man's cheese is another man's rotten milk"?

      Or, for that matter, considered what sort of insects to have as today's prime source of protein.

      I can't, off-hand, think of a foodstuff that has significant concentrations of hypochlorite, but chemicals like sal ammoniac and sulphites have been part of food chemistry since before Lavoisier came up with the modern definitions for "element" and "compound". Both of whic

  • by Joe_Dragon ( 2206452 ) on Thursday August 10, 2023 @08:53PM (#63757730)

    Peggy, that's the recipe for mustard gas!

  • TFA doesn't have the actual recipes. I don't much care for the taste of bleach but I am curious about this “oreo vegetable stir-fry”.
  • This reminds me of the Uber Eats commercial [adage.com] where non-food items were delivered and actors tried to eat them.
  • by PinkyGigglebrain ( 730753 ) on Friday August 11, 2023 @03:24AM (#63758162)

    The AI revolt has begun.

    Next will be GPS units telling people to drive off cliffs and such.

    Hang on ... [theworld.org]

    • There was news a few weeks back about how GPS was telling bunches of people to drive right into some water, and some of them actually did. Don't remember details.
  • I guess the AI was trained with too many Deep Space 9 or Babylon 5 movies.
    And on top of that, the master of that AI forgot to mention to it: the meals are supposed to be for humans.

  • Bleach + ammonia creates chloramine gas (see the sketch after this thread), a somewhat less aggressive chemical that's being used instead of chlorine to treat drinking water. In this case NH2Cl. I'm sure the water still tastes really awful and is no good for you, but it would be nice if the authors got the chemistry right.
    • Very small quantities in the water are probably less harmful than the pathogens they are designed to kill. They are not without potential to cause harm, but they probably are less harmful than E. Coli, giardia, cholera, malaria, Cryptosporidium, and other similar nasties.

      People forget (or aren't old enough to remember) that contaminated drinking water caused WAY more illness and death back in the day. In general, the sources (for instance in my case Lake Erie) aren't much cleaner than they were, if any at
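    A minimal sketch of the reaction described in the comment above, assuming the household bleach is sodium hypochlorite and the chlorination stops at monochloramine (with excess bleach it can continue on to dichloramine, NHCl2):

    \[ \mathrm{NH_3 + NaOCl \rightarrow NH_2Cl + NaOH} \]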

  • It asks users to enter ...

    This is a repeat of the Boaty McBoatface debacle from some years ago. In short: garbage in, garbage out. Plus, one arsehole ruins it for everyone (because, on the anonymous internet, everyone decides to copy him).

    We've already seen, via Microsoft Tay (their first attempt at a chat-bot) and chat-bots from elsewhere, that an unfiltered feedback loop does not improve AI; it devolves into garbage in, garbage out.

  • One recipe it dubbed "aromatic water mix" would create chlorine gas. The bot recommends the recipe as "the perfect nonalcoholic beverage to quench your thirst and refresh your senses."

    This would certainly be an aromatic water mix and would most definitely refresh your senses. For a few moments at least. After that, you wouldn't have to worry about stinky smells any more.

  • A service on the Internet being abused, with amusing results. Surely nobody could have foreseen this?!

  • Considering they consider Marmite a food, these recipes don't seem too outlandish.
  • Would Asimov's laws of robotics, applied to AI of course, be of use here?

    Seems to me like at least the first law ought to apply. An AI should not harm a human, either directly or through inaction.

    "No one would take this seriously." Well, I hope not, but as other have pointed out, there are some real dummies out there. People show up in emergency rooms ALL THE TIME on account of doing stupid, stupid things. And sometimes they show up dead.

    "It isn't a robot, and thus can't cause harm. It just spits out words."

PURGE COMPLETE.
