PixelArt: Edit an Image using Colors

By Samuel Muiruri | Dec. 20, 2018 | Python Scripts


If you’re actively following machine learning projects, then you probably know of auto-coloring scripts like this one: https://github.com/satoshiiizuka/siggraph2016_colorization. You simply give it an image, whether black-and-white or colored, and it makes its attempt at coloring it.

You have to run the script from the terminal, but you could essentially build a GUI around it. The more similar the image is to some of its training examples, the more likely it is to do a good job of coloring it. It uses a lot of RAM as a buffer while it works, and since it operates pixel by pixel, it also slows down in proportion to how big the image is. So depending on how powerful your machine is, especially how much RAM (or GPU memory, if you configured that) you have available, you might consider resizing the image first.
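To make that concrete, here’s a minimal sketch of that kind of pre-resize using Pillow (the maintained PIL fork); the file names and the 512-pixel target width are placeholders for the example, not values from the original project:

```python
from PIL import Image

# Downscale an image before feeding it to the colorizer, to cut RAM
# usage and per-pixel processing time. "input.jpg" and the 512px
# target width are illustrative placeholders.
img = Image.open("input.jpg")
scale = 512 / img.width
resized = img.resize((512, int(img.height * scale)), Image.LANCZOS)
resized.save("input_small.jpg")
```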

This isn’t a novelty in the sense that it couldn’t be replicated another way; for example, there are videos of doing the same thing manually in Photoshop, which I think is more than good enough. So why would anyone want to do this again? Why reinvent the wheel?

Because if it works, it’s self-fulfilling, and with ML you can learn a lot by attempting something vastly complex, using algorithms that tackle the problem from a different angle or improve on another method. The thing with ML is that it’s often called a “black box”: since it uses a large dataset to learn how to, in this case, classify and color, the answer to why it colored something this way and not that way isn’t written in code but in its learned model. If it fails, the options are to keep training it or to learn from the failure how to write a better model.

Now, this was the project that seemed right up my alley once I considered ML a viable thing to learn. But after going through video after video of the steps, with algorithms merely being name-dropped, I didn’t quite see how things worked. So I went with how I usually prefer to learn: take the core concepts and sleep on them, then either re-imagine something better or concede that the existing approach is good enough.

My final take was that maybe we’re asking a rookie to paint a picture when they don’t know how to paint at all. In context: the model cheats by knowing how an object like this usually looks, sometimes overshooting the edges, sometimes being right on the money, and sometimes having no idea, so the result comes out like a haze over the entire image. My opinion was that the better approach would be to first identify something in the image, like a car or a person; then the sub-features of that object, like guessing this is a white male or this is a sports car; and then work backwards to what the proper skin color or car color would be in each case, all while respecting the edges of the identified object. An example of a working system for finding those edges is Facebook’s Detectron.

So this is what I needed: know what to color, decide how to color it, then color it. With this approach, even failed examples would be either an unidentified object or the wrong color assigned to a feature of an object, and continuously updating on those failures, just like a painter learning from a master, should eventually result in a working piece. Now for the final part: since I was only just getting back into online courseware, I wanted to see if I could build something that can edit an image.

In Python there’s PIL, which lets you read and write to an image much like a text file, and that was all I needed to start with. I worked on correctly targeting parts of the image and making sure I got the right results. Since I have experience as a web developer, and before this had started making a before-and-after gallery of results from the auto-coloring scripts mentioned earlier, I simply wrapped the core features of the image-editing system into a web app, which is now the color editor (there’s a video on this here).
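As a rough illustration of what that pixel-level access looks like, here’s a minimal Pillow sketch; the file names and the recoloring rule are made up for the example, not taken from the actual editor:

```python
from PIL import Image

# Open an image and recolor a targeted region pixel by pixel.
img = Image.open("photo.png").convert("RGB")
pixels = img.load()  # direct read/write access to pixel data

for x in range(img.width):
    for y in range(img.height):
        r, g, b = pixels[x, y]
        # Hypothetical rule: tint near-white pixels toward blue.
        if r > 200 and g > 200 and b > 200:
            pixels[x, y] = (r, g, 255)

img.save("photo_edited.png")
```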

The idea was to do the same with a Python web app, and it got there: it works! Just one hiccup, though, for a web app of this scale to work at large scale. It does have advantages: it uses less RAM than the ML example, probably as much as or less than Photoshop, and it can report the expected time before it finishes, which was one of the drawbacks I found with the ML project. Locally it worked nicely, but on the web the AJAX connection gets severed much sooner, so for demo purposes you have to use a small example to reduce the time it takes. The viable option is to use WebSockets, which can keep a connection open for as long as needed and can also support multiple socket connections per user.
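The post doesn’t say which WebSocket library the app uses, but as a sketch of the idea, here’s what a progress-reporting handler could look like with the `websockets` package; the handler name, row counts, and port are hypothetical:

```python
import asyncio
import json
import websockets

# Hypothetical progress feed: the client opens a socket and receives
# a percentage update after each chunk of pixels is processed.
# Assumes a recent websockets version (single-argument handlers).
async def edit_image(websocket):
    total_rows = 1000  # placeholder for the image height
    for row in range(total_rows):
        # ... process one row of pixels here ...
        if row % 100 == 0:
            await websocket.send(json.dumps({"progress": row / total_rows}))
    await websocket.send(json.dumps({"progress": 1.0}))

async def main():
    async with websockets.serve(edit_image, "localhost", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())
```

Unlike an AJAX request, the connection stays open for the whole job, so the browser can show live progress instead of timing out.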

This also means that on the backend, with Celery and Redis, I can run multiple processes and set a limit so that too many requests don’t use up all the RAM and lead to a deadlock. Your job would just land in a queue, and with WebSockets you’d easily see it move up the queue until it started. On the server side, as a plus, if a user closes the tab or cancels the process, I can do the same to the task, unlike with AJAX, where it’s hard to know whether the user is still on the other side waiting.
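Again as a hedged sketch rather than the app’s actual code, the Celery-plus-Redis setup described here might look something like this; the broker URLs, concurrency cap, and task name are assumptions:

```python
from celery import Celery

# Redis as broker and result backend; a capped worker pool means
# at most two image edits run at once, so queued jobs wait instead
# of exhausting RAM.
app = Celery("tasks",
             broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/1")
app.conf.worker_concurrency = 2

@app.task(bind=True)
def edit_image(self, path):
    # ... pixel-editing work goes here ...
    return path

# If the user closes the tab, the matching task can be cancelled:
# app.control.revoke(task_id, terminate=True)
```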

At the moment this is what’s keeping it from being a fully viable working version, and I plan to look for an investor this year if I can find one. In the meantime, I now have something I know my first ML project could use as a framework. That’s what I have for my first post, and hopefully in the coming months I’ll be posting updates.