
Inpainting - The Simpsons

My experiments with GANs haven't led to much success, but I think it's really interesting to see the improvements made over time. It's also pretty incredible to watch the models being publicly released by the AI industry get better with every iteration.

When I look back on my trials and tribulations, I think it's pretty interesting to see the choices certain models made in contrast to others, or to more traditional tools. The standard Content-Aware Fill tool in Photoshop actually works quite well in certain settings. But most of the time it just copies the pixels in the direct vicinity, producing a stamping/cloning effect that only works with specific scenes.
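That "copy the pixels in the direct vicinity" behaviour is easy to reproduce in a few lines. Here's a minimal, hypothetical sketch in Python/NumPy — not Photoshop's actual algorithm (which, as far as I know, is PatchMatch-based), just a toy that fills each masked pixel from an adjacent known pixel, which is exactly the kind of rule that produces the stamping effect:

```python
import numpy as np

def naive_vicinity_fill(img, mask):
    """Toy fill: repeatedly copy each masked pixel from an adjacent
    already-known pixel. NOT Photoshop's real algorithm; just a sketch
    of the 'copy the nearby pixels' idea. Note that np.roll wraps
    around at the edges, so this only makes sense for interior holes."""
    img = img.astype(float).copy()
    known = ~mask
    while not known.all():
        # Try each of the four neighbours in turn.
        for shift in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            neighbour_known = np.roll(known, shift, axis=(0, 1))
            neighbour_vals = np.roll(img, shift, axis=(0, 1))
            fill = ~known & neighbour_known
            img[fill] = neighbour_vals[fill]
            known = known | fill
    return img
```

Run this on an image where a solid bar sits right next to the hole and the bar just gets duplicated straight into it — the same kind of doubling artifact I ran into.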


Also, sometimes the easiest way is best; I don't think machine learning is necessary for a lot of use cases.

I also think that any system that improves over time, given enough time and access/resources, will surpass systems that may currently be ahead but are not improving or not improving at the same clip.



Input


So Photoshop just tries to copy the solid bar, because that's what's in the pixel space near the area it wants to fill, and it recreates this weird double-bar effect.


Whereas the generative inpainting takes what is directly in line with the area it's being asked to fill and tries to continue or extend it, rather than just copying what was next to it. The result is this weird open-window effect that doesn't match any real car.







© 2025 by LUCY
