Outline
Related Work
- Here we discuss the paper “Visual Attribute Transfer through Deep Image Analogy” in more detail. Since this paper borrows several ideas from it, such as computing the NNF in deep feature space and the coarse-to-fine reconstruction scheme, the authors compare their method with the “Analogy” paper in both approach and results.
- In fact, this paper implements a more principled reconstruction method instead of simply upsampling the PatchMatch result. In addition, the local affine transform (locally linear model) helps produce sharper edges in the final images.
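The locally linear model can be sketched as a per-pixel affine color map: each output pixel is a*source + b, with the coefficients a, b estimated by least squares over a local window against the guidance image and then smoothed, guided-filter style, so edges follow the source. This is a simplified stand-in for the paper's formulation; the function names, window radius, and regularizer eps are illustrative assumptions, not the authors' code.

```python
import numpy as np

def box(x, r):
    """Mean over (2r+1)x(2r+1) windows via padded cumulative sums."""
    k = 2 * r + 1
    p = np.pad(x, r, mode='edge')
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def local_linear_transfer(source, guidance, radius=4, eps=1e-4):
    """Per-pixel linear color model out = a * source + b (single channel),
    fitted per local window by least squares -- a sketch of the paper's
    locally linear model, not the exact algorithm."""
    mean_s = box(source, radius)
    mean_g = box(guidance, radius)
    cov = box(source * guidance, radius) - mean_s * mean_g   # cov(source, guidance)
    var = box(source * source, radius) - mean_s * mean_s     # var(source)
    a = cov / (var + eps)          # slope of the local linear fit
    b = mean_g - a * mean_s        # intercept
    # Smooth the coefficients so the output stays locally linear in the
    # source -- this is what preserves the source's edges.
    return box(a, radius) * source + box(b, radius)
```

Because the output is linear in the source within each window, color changes follow the guidance while edge structure is inherited from the source, which is why the locally linear model yields cleaner edges than directly copying guidance colors.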
Algorithm
- As the figure shows, PatchMatch in deep feature space better captures the correspondences we need, so it is natural to reconstruct the image from coarse to fine.
- This figure gives an intuitive sense that color transfer in the image domain is much more efficient than in the feature domain.
- How to merge: for every pixel in G, find the pixel in Gi that matches it best. We can then merge all the guidance maps into a single multi-reference guidance map. IL can be seen as a record of these merge choices: each color in it indicates which guidance map a pixel was taken from.
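The merge step above can be sketched as a per-pixel argmin over candidate guidance maps, returning both the merged map and the per-pixel choice (the analogue of the index map IL). The matching cost here is plain squared color distance, which is an illustrative simplification of whatever cost the paper actually uses.

```python
import numpy as np

def merge_guidance(G, guidance_maps):
    """For every pixel of G, pick the best-matching pixel among the
    candidate guidance maps G_i (squared-distance cost, a simplifying
    assumption). Returns the merged multi-reference guidance map and the
    per-pixel index of the chosen map (the analogue of I_L)."""
    stack = np.stack(guidance_maps)          # (N, H, W, C)
    err = ((stack - G[None]) ** 2).sum(-1)   # (N, H, W) matching cost
    choice = err.argmin(axis=0)              # (H, W): which G_i wins per pixel
    H, W = choice.shape
    merged = stack[choice, np.arange(H)[:, None], np.arange(W)[None, :]]
    return merged, choice
```

Visualizing `choice` with one color per index reproduces the kind of map shown in the figure, where each color marks which guidance map a pixel was drawn from.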
Experiments
Discussion
Reference
- Neural Color Transfer between Images
- Visual Attribute Transfer through Deep Image Analogy
- Deep Photo Style Transfer