When Two Years of Academic Work Vanished With a Single Click (nature.com) 132

Marcel Bucher, a professor of plant sciences at the University of Cologne in Germany, lost two years of carefully structured academic work in an instant when he temporarily disabled ChatGPT's "data consent" option in August to test whether the AI tool's functions would still work without providing OpenAI his data. All his chats were permanently deleted and his project folders emptied without any warning or undo option, he wrote in a post on Nature.

Bucher, a ChatGPT Plus subscriber paying $20 per month, had used the platform daily to draft grant applications, prepare teaching materials, revise publication drafts and create exams. He contacted OpenAI support, first receiving responses from an AI agent before a human employee confirmed the data was permanently lost and unrecoverable. OpenAI cited "privacy by design" as the reason, telling Nature it does provide a confirmation prompt before users permanently delete a chat but maintains no backups.

Bucher said he had saved partial copies of some materials, but the underlying prompts, iterations, and project folders -- what he describes as the intellectual scaffolding behind his finished work -- are gone forever.
  • FAFO (Score:5, Insightful)

    by SlashbotAgent ( 6477336 ) on Friday January 23, 2026 @11:43AM (#65944346)

    Fucked Around And Found Out(FAFO)

    Scientist wanted to know something and decided to perform an experiment and observe the results. He got his results. The experiment was a resounding success.

    That he didn't actually like the results or that he wants to whine about it now, doesn't really matter in the least.

    • Re:FAFO (Score:5, Interesting)

      by gweihir ( 88907 ) on Friday January 23, 2026 @12:03PM (#65944422)

      Indeed. Come to think of it, they did run a safety test on Chernobyl Block 4, with equally nice and clear test results.

      Idiots with no clue how things work are really everywhere, and they only learn if things go spectacularly wrong. Sometimes not even then.

      On the plus side, while this person completely failed the IT aspects, he at least did the right thing by publishing this so at least some others get a warning.

    • Re:FAFO (Score:5, Interesting)

      by thegarbz ( 1787294 ) on Friday January 23, 2026 @12:10PM (#65944462)

      That's some grade A gaslighting there. If the scientist had been given a clear indication of the intended action, rather than the modern software version of "Something will disappear, yes/no?", there are reasonable grounds to assume he wouldn't have proceeded with the experiment.

      Stop excusing shitty fucking modern software that is opaque to the user to the point of being actively harmful. That goes for Windows "Something went wrong." error messages too.

      If you need to follow 20 fucking prompts to close your Amazon Prime subscription, then OpenAI should have been able to fully and clearly explain the exact impact of clicking yes, and then ask you again.

      • Yes, agreed, it was a bad design on the part of ChatGPT. But on the other hand, ChatGPT was never designed to be a permanent repository of everything you ever asked it to do for you.

        Let's do a thought experiment. You're a researcher digging into what happens when you use Google to search for various combinations of prompts. You should not trust Google to retain a log of your searches and the results. Instead, a careful researcher would log them *somewhere else.*

        So I see both sides.

        • Re:FAFO (Score:4, Insightful)

          by thegarbz ( 1787294 ) on Saturday January 24, 2026 @04:44AM (#65946264)

          While I agree with that, there is a difference between Google Search's history retention and ChatGPT's. Google's is incidental. Google's isn't reviewable. Google doesn't store your search history in specific project folders.

          ChatGPT does. The implication of a system that is explicitly designed to retain logs for later review, and to sort them into different concurrently active interactions, is that they will be there for continued use while you are paying for the service. This isn't some incidental history function; it's core to the product.

          The better comparison would be Google Docs / Drive. No doubt Google would also erase your drive after you cease being a paying member and the system reverts to its default limits, but they won't do so without setting it to read only and giving you ample opportunity to download your files. In fact, Google gives you TWO YEARS to do so.

          The user in this case changed a setting that should have had point-forward implications only. The data used previously had already been used, and was used under a different configuration, so there's no reason to delete it with the change in question.

          • Your search history on Google is actually reviewable: https://myactivity.google.com/... [google.com]

            Sure, it doesn't let you put it into folders, but it's there. And if you go to the top of the page and uncheck the buttons enabling Google to retain your history, it too will disappear, forever. This is essentially what this researcher did: he turned off the history tracking feature in ChatGPT. Sure, it should have warned him. But because he didn't do it accidentally, but wanted to see what would happen, he would have likely

    • Re:FAFO (Score:5, Insightful)

      by unrtst ( 777550 ) on Friday January 23, 2026 @12:12PM (#65944470)

      100%, and I can hear all the comments about backups stampeding towards us. But this is even more straightforward than general backups. If you're going to pull a switch that you know may be risky and you're doing it to find out what happens, at least do a quick "cp -a existing_dir new_dir" first! Or take a filesystem or VM snapshot first.

      This story could easily have been, "When Two Years of Academic Work Vanished With a Single Drive Failure".
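      A quick safety copy like the one described above is a one-liner; a minimal sketch, with hypothetical directory names standing in for wherever the exported chats live:

```shell
# Stand-in working directory (in real life: your exported chats and notes).
mkdir -p project_notes
echo "draft grant text" > project_notes/draft.txt

# Snapshot it before flipping any risky setting.
# -a preserves permissions, timestamps, and symlinks.
cp -a project_notes project_notes.bak
```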

      • or, or... (Score:3, Insightful)

        by Anonymous Coward

        Hear me out -- if the dude cared about the answer that much, why not spend $20 on a second OpenAI account to see what happens? Why would he do this on an account that has so much not-backed-up data?

        Like if I was going to test out GMail's kiddie porn detection, I wouldn't take a photo of my kid's dong and e-mail it to myself on my 20 year old account that has all my mail saved. You have no way of knowing whether Google will immediately and permanently delete his entire account (that's apparently what does ha

    • Docker? Venv? There are ways to test this stuff safely.
      That said, if you want to test natively and without safeguards, make an f'ing backup on a flash drive or multiple cloud services.

  • by allo ( 1728082 ) on Friday January 23, 2026 @11:44AM (#65944348)

    If it's important, have a backup.

  • Surprise! It did what they told it to do.

    • by HiThere ( 15173 )

      If this is the case I remember from a few months ago, then no, it didn't do what he told it to do. One of the basic instructions was "don't change anything in my folders". (That's an extreme paraphrase, as I read about it several months ago. I think on Slashdot. But it's the essence of the original instructions.)

      OTOH, I read about it several months ago, and it had happened several months previous to that. So it was one of the really early LLM chatBots. And notice that he wasn't a computer technologist

  • Color me shocked (Score:5, Informative)

    by DeanonymizedCoward ( 7230266 ) on Friday January 23, 2026 @11:45AM (#65944352)

    Put important data in a cloud platform, i.e. someone else's computer. Then just decide on a whim to tell the platform it can't store or access your data anymore. Don't bother making copies of anything before clicking...

    If turning off the "data consent" option deletes everything without confirmation, that's maybe mediocre UI design, but not completely unexpected.

    • by thegarbz ( 1787294 ) on Friday January 23, 2026 @12:13PM (#65944472)

      Actually it very much is unexpected. Removing historical output on a setting change isn't obvious. Most software does *not* behave that way. Something this dangerous should not be behind a single prompt.

      It actually saddens me to see so many Slashdotters here not criticise OpenAI for this. I thought this site had the *good* programmers, but the way you're all acting makes me wonder if you secretly moonlight for Microsoft's UX design team.

      • There isn't really functionality "behind a prompt." It can generate and run any command you can when the guardrails are off. If you can rm -rf /, so can it.
      • by know-nothing cunt ( 6546228 ) on Friday January 23, 2026 @12:19PM (#65944508)

        the way you're all acting I wonder if you secretly moonlight for Microsoft's UX design team.

        I applied, but they decided I wasn't cruel enough.

      • by NaiveBayes ( 2008210 ) on Friday January 23, 2026 @01:22PM (#65944666)
        *Good* programmers have long since taken responsibility for their own actions. They know pretty much every piece of software out there has a way to screw you over, and you've got to protect yourself and your data well in advance. Yes, this is terrible UI, but terrible UI is everywhere. Also, ChatGPT is still a relatively new product that's still trying to evolve. As such, "not completely unexpected" isn't so much giving them a pass for bad UI design as an accurate assessment of the situation. But yeah, it does suck for the user, I hope the guy recovers from it okay, and it should be fixed (the article may encourage that to happen), but for most programmers the lesson is less "this specific company should improve this specific piece of UX design" and more "never put all your reliance on a single piece of software, ESPECIALLY when it's cloud-based, and DOUBLE ESPECIALLY if it doesn't have regular backups."
        • *Good* programmers have long since taken responsibility for their own actions.

          But we're not talking about programmers as end users. We're talking about normal users. No doubt everyone here on Slashdot understands the concept of backups and checking data retention. But the thing many here seem to be missing is that the knowledge of the people here is *NOT* normal, and you can't expect normal users to understand the same thing most of us here understand.

          That's the crux of the shitty UX design. These days it seems to be more about making assumptions and alienating users or providing min

  • by aglider ( 2435074 ) on Friday January 23, 2026 @11:51AM (#65944370) Homepage

    Could you please spell "backup"?

    C'mon! That's your work and you've never done a backup in two years? C'mon!

    • Seems like they didn't do much work in years either...
    • by Junta ( 36770 )

      To be *fair*, a lot of the providers have been going out of their way to make it difficult to extract data in a manner amenable to backups.

      Making the data both available for their own services and catering to backup/restore concerns is against their business interest.

      • Good point. Like cloud providers using proprietary formats so you can't move machine images up/down... only data.
        However, as The X-Files says, Trust No One. That's my motto.
        I've told my clients forever: do not use email as a filing system. I have old fart clients that see the entire computing world thru Outlook. Don't Do That!!
        You cannot rely on anyone or anything. Copy your data, or the important parts of your data, to private local storage. Back the fuck up out of that. But never, never trust your service
  • by caseih ( 160668 ) on Friday January 23, 2026 @11:57AM (#65944388)

    When working with AI, the first thing to always do is commit everything to git, then make a branch and see what the AI can do. Once it's done, merge it back in. While the concept of diffs doesn't make as much sense when it comes to spreadsheets and documents[1], git is still a decent framework for managing projects that contain them. If you use LibreOffice you can store files in the "flat" versions of the formats (such as .fodt), which are single-file, uncompressed XML files. But the diff won't be very readable to humans.

    [1] I've long thought git should be able to better deal natively with modern document formats that are really just xml inside of zip files. Although a diff of an XML file is probably not too useful for humans. Still, it would be nice if the parts of the document that didn't change (style sheet etc) would be stored in a way to avoid duplication. Can anyone point me at resources to do this better? Surely there's a git-based utility out there to make this happen.
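    A minimal sketch of that commit/branch/merge loop (repo and file names are hypothetical; the echo stands in for whatever AI tool you let loose on the working tree):

```shell
# Baseline commit before the AI touches anything.
mkdir demo-project && cd demo-project
git init -q
git config user.email "you@example.com" && git config user.name "You"
echo "original draft" > notes.fodt
git add -A && git commit -q -m "baseline before AI edits"

# Confine the AI's work to a throwaway branch.
git switch -q -c ai-edits
echo "AI-revised draft" > notes.fodt        # stand-in for the AI's changes
git commit -q -am "AI changes"

# Back on the original branch: inspect the diff, merge only if it looks good.
git switch -q -
git diff HEAD ai-edits
git merge -q ai-edits
```

    If the AI's branch turns out to be garbage, you delete it and your baseline is untouched.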

    • by caseih ( 160668 ) on Friday January 23, 2026 @12:07PM (#65944448)

      There's nothing like posting to slashdot to get one to do some research after posting!

      There's a utility called ReZipDoc [github.com] to assist in using git with OpenDocument formats. By converting the file to an uncompressed zip file, git can deduplicate the parts of the file that didn't change, even though they are still inside the zip file. And it sets up git to give you a plain-text diff for human consumption. Pretty slick. And on a modern filesystem like ZFS, BtrFS, or Bcachefs, you get compression at the filesystem level, so you don't need zip compression anyway.
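      The human-readable-diff half of that setup is just git's textconv mechanism; a hand-rolled sketch, assuming `odt2txt` is installed (ReZipDoc layers the zip-recompression filter on top of this):

```shell
# In a fresh repo, route .odt diffs through a text converter.
mkdir odt-demo && cd odt-demo && git init -q
echo '*.odt diff=odt' > .gitattributes
# git runs this command to render the file as text before diffing.
git config diff.odt.textconv odt2txt
```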

      • by caseih ( 160668 )

        I wasn't happy with having to use the Java sledge hammer (and all the maven-downloaded dependencies) to make such a simple thing work, so I had Claude make a simple python equivalent, which only needs odt2txt. It seems to work nicely, but probably has edge cases I haven't addressed yet. https://github.com/torriem/rez... [github.com]. Claude produced some rather reasonable documentation for it I thought. Use at your own risk of course, but works for me!

    • by unrtst ( 777550 )

      it would be nice if the parts of the document that didn't change (style sheet etc) would be stored in a way to avoid duplication. Can anyone point me at resources to do this better?

      There are document management practices that handle that even better. I.e.: don't use a Word doc as the working copy; store body copy separately from style sheets, embedded images, and such. Choosing to shove it all into a packed blob in the first place is, IMO, the misstep, and now you're trying to solve a self-induced problem.

      • Totally agree, don't use Word docs to store your info. Store your "doc" as HTML (or MHT if you must have 1 file at the end), or LaTeX, or AsciiDoc, or whatever text based format you want ... just don't store it as a binary blob. Honestly, HTML4 is good enough for most docs that I've seen which are almost all text plus the occasional picture/image, and it can do tables too as well as stylesheets. There are even WYSIWYG programs for creating and editing HTML and LaTeX if you need that.

        • by BKX ( 5066 )

          Ironically, modern MS Office formats are no longer true "binary blobs". They're ZIP files with a different extension. Inside are copies of all the resources, and the main file in the ZIP is an XML file. The actual name of the MS Office formats is OOXML, that is, Office Open XML. That isn't to say that .DOCX files are the way to go: sometimes they are; sometimes they aren't, but they aren't the completely proprietary garbage that OG DOC files were. You can even edit the XML in the ZIP d

    • by CubicleZombie ( 2590497 ) on Friday January 23, 2026 @12:36PM (#65944560)

      I recently noticed that GitHub Copilot for Visual Studio has "checkpoints" where you can restore the state from past chat history. That's pretty cool. I've been using a script to back everything up each time I turn Copilot Agent loose.

    • This should work unless the researcher gives the agent enough access to delete the repo.

      • by caseih ( 160668 )

        Yes, there's no way I'd let Claude anywhere near my github account. I can't fathom why anyone would want to connect Claude Code directly to github. Note that Claude can do a bit of its own revision history and roll things back if you ask it to.

    • Yes to version control with LLM tools! FYI, Aider already has it integrated. In fact, use version control for everything you do on the computer. (Then backup your repo, need I say it?)
  • by gweihir ( 88907 ) on Friday January 23, 2026 @11:58AM (#65944394)

    This is a complete data-management failure at a low amateur level. It routinely happens to people thinking they do not need IT experts because it "is all easy". Zero compassion from me; this amateur did it to himself.

    • by thegarbz ( 1787294 ) on Friday January 23, 2026 @12:16PM (#65944484)

      Routinely happens to people thinking they do not need IT experts

      Not surprised you'd make a user-toxic response. The reality is, if we design systems where end users require "IT experts" to safeguard their data, then you've designed an incredibly shitty fucking system.

      Also you, *specifically you*, are always the first to criticise OneDrive autobackups of work documents in the cloud, precisely the kind of software setting that can help save an end user who isn't an IT expert given the automatic retention policy of cloud data. But no, all you're able to do is criticise users, and criticise software that supports users. I actively hope your job is to be some sys admin hidden in a basement and that you never interact with users.

      • by esme ( 17526 )

        No, this is not some random user who was farting around with a chatbot. This was a researcher at a major research university. He has access to probably a dozen options for local and cloud storage that he could have used to back up his important data, and he chose not to. He has a grants office, and access to data management instruction and support. He chose not to use it or follow their advice.

        I work at a US research university, and I know that we have several units on campus providing this kind of support,

        • No, this is not some random user who was farting around with a chatbot. This was a researcher at a major research university.

          I just looked it up, and it turns out that researchers at a major university are *checks notes* not IT experts and are in fact random users. Just like nearly every person who works with a computer at any company anywhere.

          Now you're just as bad as the OP. You're blaming the user for not knowing something, when recognizing that gap in his knowledge would require the very expertise he lacks. Just like the OP, you're an IT expert who has completely lost touch with normal computer users.

          • by esme ( 17526 )

            "The user can do no wrong" is just as stupid as "everything is the user's fault" -- I'm not taking either of those positions. If you actually read what I wrote, you'll notice I am not blaming him for his IT skills or making a mistake with the software he was using. I'm blaming him for not meeting his responsibilities as a researcher, or taking advantage of the teams of people he had access to help him meet those responsibilities. He had a responsibility to preserve his data and working materials, he ha

            • by gweihir ( 88907 )

              Indeed. The core failure of this person was, as I already wrote, not to consult an actual expert. At this stage, IT systems _cannot_ be made "user-safe". With a look at tech history, my estimate is that achieving this will take another 100 - 200 years. Yet this guy just assumes he was competent and skilled enough to have a handle on things. That is a personal failure and he does not have the excuses of being uneducated or not having access to expert help.

        • by Mitreya ( 579078 )

          I work at a US research university, and I know that we have several units on campus providing this kind of support, encouraging best practices at data organization and preservation.

          I work at a big US-based university (R2 but I am guessing that you may be at R1?).
          I can confirm that there is absolutely no such thing in my university. IT is mostly concerned about blocking off external access (sans VPN) and adding 2FA to absolutely everything. I had to argue for them to allow port 80 on one of my NAS devices in a lab

          If there is a single group or person who would help with data organization and preservation, I am yet to meet them (in my 14 years).

      • It says a lot about our industry how almost all the comments in this thread revolve around blaming the end user. As an old-fashioned UI designer, this pisses me off to no end... and yet, nothing ever changes.

        • by gweihir ( 88907 )

          I am not blaming the end-user for his action. I am blaming the end-user for not asking for help. And that is a completely valid criticism.

          At this time, IT cannot be made safe for regular users. The technology has not advanced to that level. It will probably require another 100-200 years to get there.

      • by gweihir ( 88907 )

        No surprise you do not get it. You never do. You would also call it "toxic" if I call somebody out for doing brain-surgery on themselves and it ends up failing.

        Please realize you are nothing but a problem and start working on yourself. The first step should be to shut up when you have nothing of value to say. But my guess is you are not even capable of even mastering that one.

  • by MpVpRb ( 1423381 ) on Friday January 23, 2026 @11:59AM (#65944398)

    ...one rule remains
    Always Make Backups! Preferably multiple backups
    Tech fails, especially when it's on someone else's computer

    • Yup, the story isn't new.

      Famously about 25 years ago, on the movie Toy Story 2 someone accidentally wiped the project servers. No working backups. It was only absolute luck that someone with a baby did an unauthorized data dump to work from home. I'm sure it was a bit of a sheepish admission but it saved the film. ("I know the servers were wiped out, but, well, I know it was against policy to have an unauthorized, off-site copy, but I did make a copy of all of it, and I have it here...")

      Or go back about

  • Type 'rm -rf /' to see what happens....

  • by methano ( 519830 ) on Friday January 23, 2026 @12:15PM (#65944480)
    I've been doing research for close to 50 years. I've never seen a situation where, if you wipe out 2 years work, it takes anything close to 2 years to recapitulate it. Actually, I don't even understand how this could happen to a plant scientist. Was all the data in one document? Did ChatGPT kill his plants? Are there no notebooks where the data is recorded? This kind of thing doesn't happen to a good scientist. Oh, I see. It's 2 years of work, not 2 years of research. This is just another sad "lack of backup" story. The modern day "my dog ate it".
    • by tlhIngan ( 30335 )

      Yeah, I don't see the real loss here. He already has the final output - he used ChatGPT for grant applications and such, and probably used it for drafts.

      He lost his draft work and notes. He still has the final results. People lose notes all the time - it happens. You write them on scraps of paper that later get lost. Or once the final paper is done, you chuck the notes in a box and never look at them again until they get tossed out as garbage during a clean.

      All he lost were his intermediate notes and stuff h

    • Exactly, why are his "chats" that important? If it produces something useful, like a new idea to pursue or a good summary, you'd think that'd be copied somewhere, not left in his chats only. Or his chats should have been "here's my data, synthesize it for me," and the output should have been copied somewhere else. Chats should be thrown away at the end of each day. If the work was important, it should be in his research books or docs, as you say.

      I find it hard to be sympathetic for him and do hope he's learned

  • by snarfaliptic ( 2148020 ) on Friday January 23, 2026 @12:18PM (#65944502)

    "I don't usually test, but when I do, it is in production."

  • Not interested in any research from this "scientist". Not having backups of your 2 years of research is just criminal, and he should be liable for defrauding public funds. Posting it on ChatGPT is another offense, making any of his findings basically meaningless. But then asking the platform to delete your data is just borderline stupid. I'd like to thank ChatGPT and apologize: AI is a useful tool in science, specifically in getting rid of really bad science.
    • This "public funds" bullshit is a uniquely American thing.

      This was in Germany. He is an employee of his employer, which happens to be a university. He receives an ordinary wage, like any other university employee. Calling that public funds is extremely misleading.

      And fraud: fraud requires intention. Intention to enrich yourself. Fraud is a clever(?) form of theft.

      You want to tell us a guy who is crying in his bedroom that he lost 2 years of work and fails his PhD now, because he can not catch up until the time

    • by gwolf ( 26339 )

      I find the issue to be even worse... Presenting two years of slop as if it were research carried out with actual knowledge and intelligence is even worse. Of course, I won't be surprised to learn the researcher had submitted this work, and had it approved for publication, at various journals.

  • Seriously, I'm tempted to believe this guy actually did very little of the work he claims to have done. The degree of sleepwalking stupidity required to do two years' worth of work without backups is - at its very best - an uncomfortable fit in the mind of someone engaged in what was supposedly a quality academic effort.

  • That's the joke I was hoping to see by now, but written by someone who can write a joke. Obviously the data must have been backed up during this period, and saying that nothing can be recovered, even for a paying customer, is a lie.

  • or Fuck up.

  • That is all.

  • by jpatters ( 883 ) on Friday January 23, 2026 @01:19PM (#65944662)

    We're gonna need IBM to craft a very tiny violin out of Xenon atoms.

  • Scientist: please delete my data.
    AI: no problem. Data deleted.
    Scientist: wait where's my data?
    AI: we deleted it as you requested.
    Scientist: aren't you supposed to be an evil thing that keeps my data regardless? I keep hearing this from mass media and pundits.
    AI: ...
    Scientist: well, I'm going to complain about you to the same media and pundits who told me you wouldn't delete my data when asked. I'll be famous!

    And so, yet another "AI sucks" story is born by yet another "it did exactly what I asked, and I didn'

    • by whitroth ( 9367 )

      You are an ignorant idiot. Where in the story did it say that the option told the user that if they did that, not only would it stop doing this, but ALSO DELETE ALL THEIR DATA FOLDERS?

      Your reading comprehension is F.

      • by Luckyo ( 1726890 )

        "You are an idiot. When you tell the system you delete all your DATA. Why would it delete the DATA folders?"

        You cannot make this shit up. If you did, people would tell you you're lying.

  • Well,
    bottom line: people who use computers professionally should learn how to use a version control system.
    You have your prompts under version control.
    And in this case also the output, obviously. As the output varies from day to day, the same prompt won't give you the same output next time.

    I guess most academic computer users are kind of self-trained ... so they can click around, and perhaps use this or that program ... but they refuse to acknowledge that a computer is a kind of complicated machine, which yo

  • We can start a gofundme so he can make his own backup of his work.
    Round up those pennies you won't be using and help him get this backup device:
    https://www.amazon.com/Epson-E... [amazon.com]

  • > a human employee confirmed the data was permanently lost and unrecoverable.

    I find that suspect. You can wipe your entire hard drive and *still* recover most of what got deleted (source: I've done it).

    It's painful, and all your files might be in one giant folder with meaningless names, but at least with a bit of effort you can find/recover the key files.

    I strongly suspect whoever this guy asked just wasn't aware of recovery options.

    • My guy, ChatGPT isn't running on a single hard drive with a file system that just deletes the reference to the files in the file allocation table and doesn't happen to overwrite the data for a while.
    • by Junta ( 36770 )

      You had a needle in a handful of hay. This would be a needle in mountains of hay.

      Relative likelihood of it being intact is low, these systems are churning a whole lot more than drive activity after you erase (if you wiped your entire drive, the good news is that IO probably stopped and recovery is vaguely more plausible, if you deleted a small text file and then proceeded to write a few gigabytes to disk, well your chances of getting that small text file back are small. This is ignoring the abstractions a

  • Using an LLM is like walking a husky - which is to say the husky walks you, mostly.

    I had problems like this, went through an evolutionary process, and finally ended up building an app to enter data, because of endless token burning for a simple task. The LLM got read-only access via Postgres, and it chafed for weeks when it discovered it was no longer able to edit anything.

    LLMs can do cool stuff, looks like my startup is gonna get to $2m - $3m in series A funding, but constraining bad impulses is still a quarter

  • I use ChatGPT all the time. As a matter of routine, every few days, I nudge it to put all the info into a downloadable ZIP file. I download the ZIP file, extract the contents, zap the zip file, and let my backup program go through the contents and deduplicate it. Sometimes I get a lot of cruft, so I nuke all chats. However, other than the actual AI state of the chat, I can load everything back in from the backup. To save headaches, I also use Chatkeeper, which does a solid job in pulling out and separating all the prev

  • This is like leaving all your work in an unsaved Word document and hoping your computer never shuts down.

    It's so much worse than not having backups. This guy never even saved it in the first place.

    I guess some people have to learn the hard way.

  • Maybe the service should offer a full export before deleting all files. Something tells me it was not obvious they would be deleted. Yes backups, non-commercial account, fafo, etc. but now a lot of people are starting to depend on these things which have horrendous UI and operated by aloof companies.

  • This is typical for Germans using modern technology, they even called the Internet Neuland just a few years back under Merkel. So much uncharted, new land they do not understand.
    Germany has truly become a weak shadow of the economic and technological powerhouse it used to be.
