AI Japan

New AI Model Fills in Blank Spots in Photos (nikkei.com)

A new technology uses artificial intelligence to generate synthetic images that can pass as real. From a report, shared by a reader (the link may be paywalled): The technology was developed by a team led by Hiroshi Ishikawa, a professor at Japan's Waseda University. It uses convolutional neural networks, a type of deep learning, to predict missing parts of images. The technology could be used in photo-editing apps. It can also be used to generate 3-D images from real 2-D images. The team at first prepared some 8 million images of real landscapes, human faces and other subjects. Using special software, the team generated numerous versions for each image, randomly adding artificial blanks of various shapes, sizes and positions. With all the data, the model took three months to learn how to predict the blanks so that it could fill them in and make the resultant images look identical to the originals. The model's learning algorithm first predicts and fills in blanks. It then evaluates how consistent the added part is with its surroundings.
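The training-data step described in the summary — taking real images and punching random blanks into them so the network learns to fill the blanks back in — can be sketched as follows. This is a minimal illustration, not the team's actual pipeline: the function name is invented, and a rectangular mask is used for brevity where the team reportedly used blanks of various shapes.

```python
import numpy as np

def make_training_pair(image, rng, max_frac=0.3):
    """Blank out a random rectangle of an image, mimicking the described
    data-generation step.  Returns the corrupted image and the mask; the
    original image itself serves as the training target."""
    h, w = image.shape[:2]
    mh = rng.integers(1, max(2, int(h * max_frac)))   # random mask height
    mw = rng.integers(1, max(2, int(w * max_frac)))   # random mask width
    top = rng.integers(0, h - mh)                     # random position
    left = rng.integers(0, w - mw)
    mask = np.zeros((h, w), dtype=bool)
    mask[top:top + mh, left:left + mw] = True
    corrupted = image.copy()
    corrupted[mask] = 0      # the "blank" the network must predict
    return corrupted, mask
```

Run over the 8 million source images with many random masks each, this yields the "numerous versions for each image" the summary mentions.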
Comments Filter:
  • by Mal-2 ( 675116 ) on Sunday February 18, 2018 @10:50AM (#56146960) Homepage Journal

    I bet it will be pretty good in some contexts, and most likely an improvement overall compared to content-aware fills. However, when it completely falls on its face I bet it will be even funnier than the way content-aware fill blows up. Lower rate of occurrence, but much more hilarity when it happens.

    • However, when it completely falls on its face I bet it will be even funnier than the way content-aware fill blows up.

      I tried the prototype, and it filled all the holes with pictures and outrageously fake quoted statements from Donald and Hillary. The system must have been hacked by Russians already.

      The pictures will probably be updated really soon with pictures of the new presidential candidates, Oprah Winfrey and Zuckerberg.

    • when it completely falls on its face I bet it will be even funnier than the way content-aware fill blows up.

      Just put the failures into the training set, re-run backprop, and deploy the upgraded NN.

  • Alternate Headline (Score:4, Insightful)

    by OzPeter ( 195038 ) on Sunday February 18, 2018 @11:04AM (#56147006)

    The alternate headline is:

    Computer program analyzes data and based on that analysis invents new data that seems plausible to most people

    • I think it really goes deeper than that, though. If you see just the tip of a football an NFL running back is holding, your brain has a plausible idea that he's holding a football (not a puppy with a butt that looks like a football), and that will be confirmed when he spikes it in the end zone. Building most-probable world models from partial information is a key thing human intelligence does.

      • by OzPeter ( 195038 )

        I think it really goes deeper than that, though. If you see just the tip of a football an NFL running back is holding, your brain has a plausible idea that he's holding a football (not a puppy with a butt that looks like a football), and that will be confirmed when he spikes it in the end zone. Building most-probable world models from partial information is a key thing human intelligence does.

        Nice straw man bro!

        There is a difference between inferring that a complete football is attached to the tip and creating (from scratch) an image of the hidden parts of the football. Because until you actually see it, the hidden parts of the football may well be painted to look like a puppy.

        As in the case of TFA, where they created from scratch a shelf full of books without knowing if that shelf actually contained books or a bust of Plato.

        • The feeling of surprise is when your brain encounters something different from the world model *it already built* from partial information and held subconsciously. It's a part of intelligence.

          I mean yeah, if someone peddles this as purely accurate info, it's BS. But you can't deny that statements about what's probably in a concealed area, based on lots of experience/training, are useful if taken for what they are.

  • by Anonymous Coward

    Many celebrities like to show off underboob, side boob, cleavage and the occasional nip slip.

    Does this software allow us to piece together a whole CELEBRITY BREAST?

    • by Anonymous Coward

      that's the spirit of old slashdot.

      Good to see it back.

      • that's the spirit of old slashdot.

        Good to see it back.

        Yeah...

        I can't wait 'til the internet teaches the AI about nazis and racism and stuff.

  • Honestly, that's the first application I thought of if or when this tech becomes commonly available.

    Where it gets more scary is when it used to manipulate pictures that may be used in the future to ascertain the veracity of particular circumstances or events, possibly even for legal reasons.

  • by Anonymous Coward

    Photos and videos are becoming so easily manipulated that they will soon be useless as proof of anything.

    • by OzPeter ( 195038 )

      Photos and videos are becoming so easily manipulated that they will soon be useless as proof of anything.

      This is nothing new. Photos have been manipulated from the time of their invention. The very act of pointing a camera in a particular direction manipulates the image and hence proof of something.

      • In ancient times it used to be hard to do video spoofing, for any lengthy footage. I remember Forrest Gump being the first time that I really got the message that, just like photos, a video of an event will no longer be sure proof.

        What's the saying? Don't believe anything you hear, and only half of what you see?

    • Security cameras need verifiable cryptographic signatures on both the timestamp and the frame contents right now. Yes, those can still be faked, but not as easily as image manipulation.

      • But if they can be faked, then you can't trust them.

        You'll need multiple independent video sources (like different people with different cameras).

        The example photo sucks- it's obvious where it was filled. I bet it would be very obvious for video footage but I'm sure the tech will improve further over time.

      • Security cameras need verifiable cryptographic signatures on both the timestamp and the frame contents right now.

        Says who? My company has security cameras, and I can assure you that they do not encrypt anything. Are we breaking the law?

        Amazon sells dozens of security cameras. None of the descriptions mention any encryption other than for the Wifi connection.

        • by HiThere ( 15173 )

          He didn't say encrypted, but rather cryptographically signed. That's essentially a modified hash of the picture, or well-defined parts of the picture. It generally wouldn't be visible if you were looking at the picture.

          OTOH, I also doubt that the general run of cameras used for security do this signing. I suspect that he had a specific variety (not brand) of camera that he was thinking of when he said "security camera" which is a lot different from the normal web-cam that is mounted on a porch.
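A minimal sketch of what "cryptographically signed" could mean here: binding the timestamp to the frame bytes with a keyed hash, so neither can be altered without detection. The function names are illustrative, and HMAC with a shared key is the simplest possible scheme; a real camera would more plausibly use an asymmetric key held in tamper-resistant hardware.

```python
import hashlib
import hmac

def sign_frame(frame_bytes, timestamp, key):
    """Produce an HMAC-SHA256 tag over the timestamp plus pixel data.
    Editing either the frame or the timestamp invalidates the tag."""
    msg = timestamp.to_bytes(8, "big") + frame_bytes
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_frame(frame_bytes, timestamp, key, tag):
    """Recompute the tag and compare in constant time."""
    msg = timestamp.to_bytes(8, "big") + frame_bytes
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

Note this is signing (integrity/authenticity), not encryption: the footage itself stays viewable, which is the distinction the parent comment draws.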

          • by Cederic ( 9623 )

            I interpreted him as suggesting that non-repudiation of video footage is now fucking hard to achieve and needs building into the initial capture.

            • This. And there are few products even on the market yet, if any, that are doing this. I don't think anyone realized how soon this would be needed and nobody is willing to pay for it until it is.

  • by lurker412 ( 706164 ) on Sunday February 18, 2018 @11:59AM (#56147186)
    The example shown in the linked article doesn't hold up under scrutiny. Look at the blue-green books on the center-right--the convergence of the shelves is wrong and the corner is not rendered correctly. Assuming this was a one-step edit, it's probably better than Photoshop's current content aware fill, but it still requires additional work to escape detection.
    • by epine ( 68316 )

      The example shown in the linked article doesn't hold up under scrutiny.

      The story submission did not say the approach was good enough to evade focussed scrutiny.

      Confucius Says: What was never held up is not able to not hold up. Then he pats himself contentedly on the belly.

  • ...but not necessarily _be_ real.

    Depending on what's missing from a photo of Long John Silver, it might create one with two parrots and two eye patches.

  • Hopefully this can be applied to anti-shake filters where existing solutions do a really poor job of inventing blurry missing data.

    • Hopefully this can be applied to anti-shake filters where existing solutions do a really poor job of inventing blurry missing data.

      The data is actually still there, not missing. It's just smeared across several pixels instead of each point in space corresponding to a single pixel. You can use a deconvolution filter [ox.ac.uk] to un-smear it. Same goes for out-of-focus photos.

      In the case of camera shake, if the camera would record its movements while the photo was being taken and included that info in the photo, you could apply a perfect deconvolution filter that would almost completely eliminate the blur (it becomes less accurate near the edges because you've permanently lost info when parts of the photo moved out of the frame).
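The un-smearing described above can be sketched with Wiener-style deconvolution: divide by the blur kernel in the frequency domain, with a regularisation term so frequencies where the kernel's spectrum is near zero don't blow up. This is a 1-D toy assuming the blur kernel is known exactly; real camera shake gives a 2-D kernel that at best is estimated (e.g. from the motion data the commenter proposes recording).

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, noise_power=1e-3):
    """Invert a known circular blur via frequency-domain division.
    `noise_power` is a hand-tuned regularisation constant: larger values
    suppress noise amplification at the cost of less sharpening."""
    n = len(blurred)
    K = np.fft.fft(kernel, n)    # kernel spectrum, zero-padded to n
    B = np.fft.fft(blurred)
    # Wiener filter: conj(K) / (|K|^2 + noise_power)
    restored = np.fft.ifft(B * np.conj(K) / (np.abs(K) ** 2 + noise_power))
    return np.real(restored)
```

With an exact kernel and low noise this recovers the sharp signal almost perfectly, which is the point of the thread: the information is smeared, not gone.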

      • In the case of camera shake, if the camera would record its movements while the photo was being taken and included that info in the photo, you could apply a perfect deconvolution filter that would almost completely eliminate the blur

        It's easier and probably more robust to compensate by moving the sensor.

        • Why not do both? You could use acceleration data to do OIS but at the same time store the acceleration data in the image so that in the future when image processing improved you could get an even clearer image.

      • In the case of camera shake, if the camera would record its movements while the photo was being taken and included that info in the photo, you could apply a perfect deconvolution filter that would almost completely eliminate the blur (it becomes less accurate near the edges because you've permanently lost info when parts of the photo moved out of the frame).

        That's a really excellent idea. Best thing is that if we started doing it now, even with an imperfect filter, then later on as filters improved the images would get sharper, so long as you had access to the original image with the 'acceleration as a function of time' data in it.

        It's almost as good as Blade Runner's 'Zoom. Enhance.' image processing.

  • I saw these in my RSS feed...

    US's Greatest Vulnerability is Ignoring the Cyber Threats From Our Adversaries, Foreign Policy Expert Says
    New AI Model Fills in Blank Spots in Photos

    and misread the second as...

    New Al Gore Fills in Blank Spots in Photos

    I was very disappointed.

  • ... the Trump statues.

  • Algorithms and people process images differently. This AI/ML technique creates plausible fill that will fool most people, most of the time.

    Any Slashdot regular has seen the postings about image manipulation techniques that can fool a neural net image recognition system into thinking an AK-47 is a bunny rabbit, or vice versa. But the manipulated images look pretty much the same to people. Why do the altered images get misidentified? We can't really say, given that we can't really explain why the neural net identifies anything the way it does.

  • Content-aware fill has been around for a decade now. Adobe acquired a small team that worked on it in the early 2000s, and I believe Photoshop CS4 was the first version to ship the tech, as Content Aware Resize. CS5 added Content Aware Fill (which seems to be what this article is describing). These were released in 2008 and 2010 respectively.

    • Umm, okay? This is just a bit of a step-up from Photoshop's Content-Aware Fill.

      I presume, to you, that the fax machine was nothing but a waffle iron with a phone attached.

    • No, this is nothing like the content-aware algorithms used in Photoshop (the function is the same, but the underlying technology is radically different). A content-aware algorithm simply analyses what is left of the picture and uses that information to guess what should go into the missing parts. The algorithm described here is a deep learning system trained on thousands of similar images. It can then use this data to help it guess how to fill in the missing pieces. This means it is capable of data synthesis.
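The distinction drawn above can be illustrated with a toy copy-based fill: it can only reuse pixels that already exist elsewhere in the same image, whereas a trained model can synthesise content it has never seen in this particular photo. The function name and brute-force patch search below are illustrative; real copy-based tools (e.g. PatchMatch-style algorithms) are vastly more sophisticated.

```python
import numpy as np

def patch_fill(image, mask, r=1):
    """Fill each masked pixel by copying the centre of the fully-known
    patch whose surroundings best match the hole's known surroundings.
    Pure copying: nothing new is ever synthesised."""
    h, w = image.shape
    out = image.astype(float).copy()
    known = ~mask
    for y, x in zip(*np.nonzero(mask)):
        best_val, best_cost = 0.0, np.inf
        for sy in range(r, h - r):
            for sx in range(r, w - r):
                # candidate source patch must contain no hole pixels
                if not known[sy - r:sy + r + 1, sx - r:sx + r + 1].all():
                    continue
                cost, n = 0.0, 0
                for dy in range(-r, r + 1):
                    for dx in range(-r, r + 1):
                        ty, tx = y + dy, x + dx
                        # compare only against known pixels around the hole
                        if 0 <= ty < h and 0 <= tx < w and known[ty, tx]:
                            cost += (out[ty, tx] - out[sy + dy, sx + dx]) ** 2
                            n += 1
                if n and cost / n < best_cost:
                    best_cost, best_val = cost / n, out[sy, sx]
        out[y, x] = best_val
    return out
```

A learned inpainter replaces the inner search with a network forward pass, and its "source material" is everything it saw during training rather than the surviving pixels of this one image.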

  • The extremely short linked article has a single, low-res, unzoomable image, and no link to any further information. There must be a better source than this.

  • I've wanted to come up with AI software for old Academy-ratio movies like The Wizard of Oz.

    If a camera pans, use leading and trailing info to create edges.

    If it's a still shot use something like this to generate probable edges.

    Make Academy ratio movies 16:9 and do the same to some old TV shows. If I could pull this off I could potentially be the most hated man since the onset of colorization of old movies....
