DarkRoseofHell said:
Web server one is even more lawlsy than the one used in either koroshell or caffe.
If you are talking about your https://dl.dropboxusercontent.com/u/35156/test/tooikirameki_Page_01.jpg image, the web server doesn't seem to support CMYK JPEGs (the colors come out inverted). The tanakamura and caffe versions don't seem to do a proper CMYK->RGB conversion either (oversaturated output), so you would really need to re-save it as an RGB image in Photoshop first before filtering: http://i.imgbox.com/PKA0iPCL.png
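If you'd rather script the conversion than round-trip through Photoshop, Pillow can flatten a CMYK JPEG to RGB before filtering. A minimal sketch (file names are placeholders; note Pillow's built-in conversion is not color-managed, so it won't exactly match a Photoshop export):

```python
from PIL import Image

def cmyk_to_rgb(path, out_path):
    # Pillow's convert() does a naive CMYK->RGB mapping (no ICC profile),
    # so the result may differ slightly from a color-managed conversion.
    img = Image.open(path)
    if img.mode == "CMYK":
        img = img.convert("RGB")
    img.save(out_path)

# cmyk_to_rgb("tooikirameki_Page_01.jpg", "tooikirameki_Page_01_rgb.png")
```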

Even so, that image does show that waifu2x's denoising is less than perfect and can still get confused when JPEG artifacts are intense enough. It does a decent job, but there is a lot of low-to-medium-frequency residual that would need to be cleaned up some other way afterwards: http://i.imgbox.com/90icit2D.png

Fooling around a bit: with three passes of waifu2x high denoising plus some manual patching and the history brush, you end up with something decent given the source: http://i.imgbox.com/0Wm8Rh6t.png
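The multi-pass part is just repeated application of the denoiser. Sketched below with Pillow's median filter standing in for waifu2x's network pass (purely illustrative — the actual waifu2x pass is a trained CNN, not a median filter):

```python
from PIL import Image, ImageFilter

def multi_pass_denoise(img, passes=3):
    """Run a denoising pass repeatedly, as with 3x waifu2x high denoise.

    MedianFilter is only a stand-in here for waifu2x's neural-net pass.
    """
    for _ in range(passes):
        img = img.filter(ImageFilter.MedianFilter(size=3))
    return img

# cleaned = multi_pass_denoise(Image.open("page_rgb.png"))
```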

If you spent even more time with the history brush you could probably make it look even better, but *effort*. It's definitely not a one-stop shop for general image denoising, but it does seem useful in certain cases.

DarkRoseofHell said:
Just going to note that CUDA isn't any actual routine or algorithm, it's a framework used for gpu accelerated processing.
I wasn't referring to the CUDA framework, but to the CUDA kernel (i.e. the GPGPU code itself). I originally thought that, since the waifu2x docs list CUDA+Torch7 as a requirement, it might be running a higher-quality filtering routine on the GPU than on the CPU for speed reasons, or doing some kind of CPU+GPU hybrid processing, but that turned out not to be the case. In reality, waifu2x was designed to produce byte-identical output between GPU and CPU-only processing. That's not always a given with GPGPU, but I'm happy it is here.
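Byte-identical output is easy to check yourself by hashing the CPU and GPU results; a quick sketch (file names are placeholders):

```python
import hashlib

def file_digest(path):
    # SHA-256 over the raw file bytes; equal digests mean the
    # two outputs are byte-identical.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# assert file_digest("out_cpu.png") == file_digest("out_gpu.png")
```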