CUD-NET: Color Universal Design Neural Filter for the Color Weakness

NeurIPS 2021  ·  Sunyong Seo, Hana Kim, Jinho Park

Information in images should be visually understandable to anyone, including people with color weakness. However, such information becomes unrecognizable when a color that appears distorted to color-weak viewers meets an adjacent object of a similar color. We propose CUD-NET, a convolutional deep neural network that generates color universal design (CUD) images satisfying both color preservation and color distinguishability for input images. CUD-NET regresses the node points of a piecewise linear function from the information of the input image, constituting an image-specific color filter. We present the following methods to generate CUD images for the color weak. First, we refine the CUD dataset according to criteria set by color experts. Second, the input image information is expanded through pre-processing specialized for color-weak vision. Third, we propose a multi-modal feature fusion architecture that combines the features of the expanded images. Finally, we propose a deformable loss function based on the composition of the predicted image through the model, which avoids the one-to-many mapping problem of the dataset.
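To make the core filtering mechanism concrete (a network regressing the node points of a piecewise linear function that is then applied to the image), here is a minimal PyTorch sketch. It is not the authors' implementation: the network `NodeRegressor`, the node count `K`, the equally spaced node positions, and the monotonicity trick are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): a small CNN predicts K node values
# per RGB channel, defining a per-image piecewise linear tone curve that is
# then applied to every pixel. Names (NodeRegressor, K) are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 8  # number of piecewise-linear node points per channel (assumed)

class NodeRegressor(nn.Module):
    def __init__(self, k=K):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 3 * k)  # K node values for each RGB channel

    def forward(self, x):
        z = self.features(x).flatten(1)
        # Softmax + cumulative sum keeps each curve monotonically
        # increasing with values in (0, 1].
        nodes = torch.cumsum(F.softmax(self.head(z).view(-1, 3, K), dim=-1), dim=-1)
        return nodes  # shape (B, 3, K)

def apply_piecewise_linear(img, nodes):
    """Map each pixel through its channel's curve by linear interpolation.

    img:   (B, 3, H, W) with values in [0, 1]
    nodes: (B, 3, K) curve values at equally spaced inputs 0, 1/(K-1), ..., 1
           (the equal spacing is an assumption of this sketch)
    """
    b, c, h, w = img.shape
    k = nodes.shape[-1]
    pos = img.clamp(0, 1) * (k - 1)           # fractional node index per pixel
    lo = pos.floor().long().clamp(max=k - 2)  # index of the left-hand node
    frac = pos - lo.float()                   # position between the two nodes
    flat_nodes = nodes.view(b * c, k)
    flat_lo = lo.view(b * c, h * w)
    left = torch.gather(flat_nodes, 1, flat_lo)
    right = torch.gather(flat_nodes, 1, flat_lo + 1)
    out = left + frac.view(b * c, h * w) * (right - left)
    return out.view(b, c, h, w)

model = NodeRegressor()
x = torch.rand(2, 3, 64, 64)                 # dummy batch of input images
filtered = apply_piecewise_linear(x, model(x))
print(filtered.shape)                         # torch.Size([2, 3, 64, 64])
```

Keeping each curve monotone, as this sketch does via the cumulative softmax, is one plausible way to reconcile the paper's two stated goals: relative intensity ordering within a channel is preserved while the curve shape re-spaces colors to improve distinguishability.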
