Neural knitworks: patched neural implicit representation networks

Czerkawski, Mikolaj and Cardona, Javier and Atkinson, Robert and Michie, Craig and Andonovic, Ivan and Clemente, Carmine and Tachtatzis, Christos (2021) Neural knitworks: patched neural implicit representation networks. Other. Ithaca, N.Y.

Text: Czerkawski_etal_ArXiv_2022_Neural_knitworks_patched_neural_implicit.pdf (Final Published Version). License: Creative Commons Attribution 4.0.



Coordinate-based Multilayer Perceptron (MLP) networks, despite being capable of learning neural implicit representations, are not performant for internal image synthesis applications. Convolutional Neural Networks (CNNs) are typically used instead for a variety of internal generative tasks, at the cost of a larger model. We propose Neural Knitwork, an architecture for neural implicit representation learning of natural images that achieves image synthesis by optimizing the distribution of image patches in an adversarial manner and by enforcing consistency between the patch predictions. To the best of our knowledge, this is the first implementation of a coordinate-based MLP tailored for synthesis tasks such as image inpainting, super-resolution, and denoising. We demonstrate the utility of the proposed technique by training on these three tasks. The results show that modeling natural images using patches, rather than pixels, produces results of higher fidelity. The resulting model requires 80% fewer parameters than alternative CNN-based solutions while achieving comparable performance and training time.
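The core idea above is a coordinate-based MLP that predicts an entire image patch per coordinate rather than a single pixel. The sketch below illustrates that patch-wise formulation under stated assumptions: the `PatchMLP` class, its layer sizes, the patch size `k`, and the Fourier-feature encoding are illustrative choices (positional encodings are common in coordinate MLPs), not the authors' implementation, and the adversarial patch-distribution loss and cross-patch consistency terms are omitted.

```python
import numpy as np

def positional_encoding(coords, num_freqs=4):
    """Fourier-feature encoding of (x, y) coordinates, as commonly
    used to help coordinate MLPs represent high-frequency detail."""
    feats = [coords]
    for i in range(num_freqs):
        feats.append(np.sin((2.0 ** i) * np.pi * coords))
        feats.append(np.cos((2.0 ** i) * np.pi * coords))
    return np.concatenate(feats, axis=-1)  # shape (N, 2 + 4*num_freqs)

class PatchMLP:
    """Tiny two-layer MLP mapping an encoded (x, y) coordinate to a
    flattened k x k RGB patch -- a patch-wise implicit representation.
    All sizes here are illustrative assumptions."""

    def __init__(self, k=3, hidden=64, num_freqs=4, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = 2 + 2 * 2 * num_freqs  # raw coords + sin/cos per frequency
        self.k = k
        self.num_freqs = num_freqs
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, k * k * 3))
        self.b2 = np.zeros(k * k * 3)

    def forward(self, coords):
        """coords: (N, 2) array in [0, 1]^2 -> (N, k, k, 3) patches."""
        h = np.maximum(positional_encoding(coords, self.num_freqs) @ self.W1 + self.b1, 0.0)
        out = h @ self.W2 + self.b2
        return out.reshape(len(coords), self.k, self.k, 3)

# Querying 4 coordinates yields 4 overlapping 3x3 patch predictions;
# in the full method these overlaps are what the consistency term constrains.
coords = np.array([[0.1, 0.2], [0.5, 0.5], [0.9, 0.1], [0.3, 0.8]])
patches = PatchMLP().forward(coords)
print(patches.shape)  # (4, 3, 3, 3)
```

Because neighboring coordinates produce overlapping patches, enforcing agreement on the overlaps (the consistency objective mentioned in the abstract) couples the per-coordinate predictions into a coherent image.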

