inpainting_ct (Operator)

Name

inpainting_ct — Perform an inpainting by coherence transport.

Signature

inpainting_ct(Image, Region : InpaintedImage : Epsilon, Kappa, Sigma, Rho, ChannelCoefficients)

Herror inpainting_ct(const Hobject Image, const Hobject Region, Hobject* InpaintedImage, double Epsilon, double Kappa, double Sigma, double Rho, double ChannelCoefficients)

Herror T_inpainting_ct(const Hobject Image, const Hobject Region, Hobject* InpaintedImage, const Htuple Epsilon, const Htuple Kappa, const Htuple Sigma, const Htuple Rho, const Htuple ChannelCoefficients)

void InpaintingCt(const HObject& Image, const HObject& Region, HObject* InpaintedImage, const HTuple& Epsilon, const HTuple& Kappa, const HTuple& Sigma, const HTuple& Rho, const HTuple& ChannelCoefficients)

HImage HImage::InpaintingCt(const HRegion& Region, double Epsilon, double Kappa, double Sigma, double Rho, const HTuple& ChannelCoefficients) const

HImage HImage::InpaintingCt(const HRegion& Region, double Epsilon, double Kappa, double Sigma, double Rho, double ChannelCoefficients) const

static void HOperatorSet.InpaintingCt(HObject image, HObject region, out HObject inpaintedImage, HTuple epsilon, HTuple kappa, HTuple sigma, HTuple rho, HTuple channelCoefficients)

HImage HImage.InpaintingCt(HRegion region, double epsilon, double kappa, double sigma, double rho, HTuple channelCoefficients)

HImage HImage.InpaintingCt(HRegion region, double epsilon, double kappa, double sigma, double rho, double channelCoefficients)

Description

The operator inpainting_ct inpaints a missing region Region of an image Image by transporting image information from the region's boundary along the coherence direction into this region.

Since this operator's basic concept is inpainting by continuing broken contour lines, the image content and the inpainting region must be such that this idea makes sense: if a contour line hits the region to inpaint at a pixel p, there should be an opposite point q where this contour line continues, so that the continuation of contour lines from two opposite sides can succeed. In cases where the image contains little geometry, a diffusion-based inpainter, e.g., harmonic_interpolation, may yield better results. Alternatively, Kappa can be set to 0. An extreme case with little global geometry is a pure texture: there the idea behind this operator fails to produce good results (think of a checkerboard with an inpainting region that is large relative to the checker fields). For these kinds of images, a texture-based inpainting, e.g., inpainting_texture, can be used instead.
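For illustration, the following HDevelop fragment contrasts the two approaches on the image used in the example at the end of this page. The parameter values are examples only, not recommendations.

* Geometric content (broken contour lines): coherence transport.
read_image (Image, 'claudia')
gen_circle (Region, 333, 164, 35)
inpainting_ct (Image, Region, InpaintedCt, 15, 25, 1.5, 3, 1.0)
* Smooth content with little geometry: diffusion-based inpainting.
harmonic_interpolation (Image, Region, InpaintedHarmonic, 0.001)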

The operator uses a so-called upwind scheme to assign gray values to the missing pixels, i.e., the pixels to inpaint are processed in the order of increasing distance from the region's boundary, and the gray value of a pixel p is computed as a weighted average of the already known gray values u(q) in its neighborhood (the disc of radius Epsilon around p, restricted to the already known pixels):

u(p) = ( sum_{q} w(p,q) u(q) ) / ( sum_{q} w(p,q) ).

The weights w(p,q) are described below.

The initially used image data comes from a stripe of thickness Epsilon around the region to inpaint. Thus, Epsilon must be at least 1 for the scheme to work, but should usually be chosen larger. The maximum useful value for Epsilon depends on the gray values that should be transported into the region. Choosing Epsilon = 5 works well in many cases.
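The following sketch shows calls with a thin and with the default-sized neighborhood; the remaining parameters are kept at their default values and serve as examples only.

read_image (Image, 'claudia')
gen_circle (Region, 333, 164, 35)
* Thin stripe: only the pixels directly at the boundary are used initially.
inpainting_ct (Image, Region, InpaintedThin, 1, 25, 1.41, 4, 1.0)
* Thicker stripe (default): more boundary information is available.
inpainting_ct (Image, Region, InpaintedDefault, 5, 25, 1.41, 4, 1.0)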

Since the goal is to close broken contour lines, the direction of the level lines must be estimated and used in the weight. This estimated direction is called the coherence direction and is computed by means of the structure tensor S:

S = G_{rho} * ( (D u_{sigma}) (D u_{sigma})^T ),   u_{sigma} = G_{sigma} * u,

where * denotes the convolution, u denotes the gray value image, D the derivative, and G_{sigma}, G_{rho} Gaussian kernels with standard deviations sigma and rho. These standard deviations are defined by the operator's parameters Sigma and Rho. Sigma should have the size of the noise or of unimportant little objects, which are then suppressed by the pre-smoothing and not considered in the estimation step. Rho gives the size of the window around a pixel that is used for the direction estimation. The coherence direction c is then given by the eigendirection of S with respect to the minimal eigenvalue lambda_{min}, i.e.,

S c = lambda_{min} c,   |c| = 1.
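For readers who want to inspect the structure tensor themselves, its entries can be approximated with standard filter operators. This is only an illustrative sketch with the example values Sigma = 1.41 and Rho = 4.0; it is not the operator's internal implementation.

read_image (Image, 'claudia')
* Derivatives of the pre-smoothed image u_sigma (Sigma = 1.41).
derivate_gauss (Image, Dx, 1.41, 'x')
derivate_gauss (Image, Dy, 1.41, 'y')
* Entries of (D u_sigma)(D u_sigma)^T.
mult_image (Dx, Dx, Sxx, 1.0, 0)
mult_image (Dx, Dy, Sxy, 1.0, 0)
mult_image (Dy, Dy, Syy, 1.0, 0)
* Averaging with a Gaussian of standard deviation Rho = 4.0 yields S.
derivate_gauss (Sxx, SxxRho, 4.0, 'none')
derivate_gauss (Sxy, SxyRho, 4.0, 'none')
derivate_gauss (Syy, SyyRho, 4.0, 'none')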

For multichannel or color images, the scheme above is applied to each channel separately, but the weights must be the same for all channels in order to propagate information in the same direction. Since the weight depends on the coherence direction, the common direction is given by the eigendirection of a composite structure tensor. If u_{1},...,u_{n} denote the n channels of the image, the channel structure tensors S_{1},...,S_{n} are computed and then combined to the composite structure tensor

S = a_{1} S_{1} + ... + a_{n} S_{n}.

The coefficients a_{i} are passed in ChannelCoefficients, which is a tuple of length n or of length 1. If the tuple's length is 1, the arithmetic mean is used, i.e., a_{i} = 1/n. If the length of ChannelCoefficients matches the number of channels, the a_{i} are set to

a_{i} = c_{i} / (c_{1} + ... + c_{n}),

where c_{i} denotes the i-th entry of ChannelCoefficients, in order to get a well-defined convex combination. Hence, the ChannelCoefficients must be greater than or equal to zero and their sum must be greater than zero. If the tuple's length is neither 1 nor the number of channels, or the requirement above is not satisfied, the operator returns an error message.

The purpose of using ChannelCoefficients other than the arithmetic mean is to adapt to different color codes. The coherence direction is geometric information of the composite image, which is carried by high contrasts such as edges. Thus, the more contrast a channel has, the more geometric information it contains, and consequently the greater its coefficient should be chosen (relative to the others). For RGB images, [0.299, 0.587, 0.114] is a good choice.
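As an illustration, the following fragment applies the operator to a 3-channel RGB image once with the mean weighting and once with the RGB weights mentioned above. The file name 'my_color_image' is only a placeholder for an arbitrary RGB image.

* 'my_color_image' is a placeholder for any RGB image.
read_image (ColorImage, 'my_color_image')
gen_circle (Region, 100, 100, 20)
* Length-1 tuple: arithmetic mean of the channels.
inpainting_ct (ColorImage, Region, InpaintedMean, 5, 25, 1.41, 4, 1.0)
* Length-3 tuple: luminance-like weighting for RGB.
inpainting_ct (ColorImage, Region, InpaintedRGB, 5, 25, 1.41, 4, [0.299,0.587,0.114])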

The weight in the scheme is the product of a directional component and a distance component. If p is the 2D coordinate vector of the current pixel to be inpainted and q the 2D coordinate of a pixel in the neighborhood (the disc restricted to already known pixels), the directional component measures the deviation of the vector p-q from the coherence direction: a large deviation results in a low directional component, a small deviation in a high one. The deviation is scaled exponentially by beta, which is controlled by Kappa (in percent):

beta = 20 * Epsilon * Kappa / 100.

Kappa defines how important it is to propagate information along the coherence direction: a large Kappa yields sharp edges, while a low Kappa allows for more diffusion.

A special case is Kappa = 0: in this case the directional component of the weight is constant (one). The direction estimation step is then skipped to save computational costs, and the parameters Sigma, Rho, and ChannelCoefficients become meaningless, i.e., the propagation of information is not based on the structures visible in the image.

The distance component is 1/|p-q|. Consequently, if q is far away from p, a low distance component is assigned, whereas if it is near p, a high distance component is assigned.
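The following sketch contrasts a strongly directional, a weakly directional, and a purely distance-based (Kappa = 0) inpainting; the values are examples only.

read_image (Image, 'claudia')
gen_circle (Region, 333, 164, 35)
* Large Kappa: transport strictly along the coherence direction, sharp edges.
inpainting_ct (Image, Region, InpaintedSharp, 5, 75, 1.41, 4, 1.0)
* Small Kappa: weak directional preference, more diffuse result.
inpainting_ct (Image, Region, InpaintedDiffuse, 5, 5, 1.41, 4, 1.0)
* Kappa = 0: direction estimation is skipped; Sigma, Rho, and
* ChannelCoefficients have no effect.
inpainting_ct (Image, Region, InpaintedIsotropic, 5, 0, 1.41, 4, 1.0)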

Attention

Note that filter operators may return unexpected results if an image with a reduced domain is used as input. Please refer to the chapter Filters.

Execution Information

Parameters

Image (input_object)  (multichannel-)image(-array) object (byte / uint2 / real)

Input image.

Region (input_object)  region object

Inpainting region.

InpaintedImage (output_object)  (multichannel-)image(-array) object (byte / uint2 / real)

Output image.

Epsilon (input_control)  number (real)

Radius of the pixel neighborhood.

Default value: 5.0

Typical range of values: 1.0 ≤ Epsilon ≤ 20.0

Minimum increment: 1.0

Recommended increment: 1.0

Kappa (input_control)  number (real)

Sharpness parameter in percent.

Default value: 25.0

Typical range of values: 0.0 ≤ Kappa ≤ 100.0

Minimum increment: 1.0

Recommended increment: 1.0

Sigma (input_control)  number (real)

Pre-smoothing parameter.

Default value: 1.41

Typical range of values: 0.0 ≤ Sigma ≤ 20.0

Minimum increment: 0.001

Recommended increment: 0.01

Rho (input_control)  number (real)

Smoothing parameter for the direction estimation.

Default value: 4.0

Typical range of values: 0.001 ≤ Rho ≤ 20.0

Minimum increment: 0.001

Recommended increment: 0.01

ChannelCoefficients (input_control)  number(-array) (real)

Channel weights.

Default value: 1

Example (HDevelop)

* Read the image and generate the region to be inpainted.
read_image (Image, 'claudia')
gen_circle (Circle, 333, 164, 35)
* Inpaint the circular region by coherence transport.
inpainting_ct (Image, Circle, InpaintedImage, 15, 25, 1.5, 3, 1.0)

Alternatives

harmonic_interpolation, inpainting_aniso, inpainting_mcf, inpainting_ced, inpainting_texture

References

Folkmar Bornemann, Tom März: “Fast Image Inpainting Based On Coherence Transport”; Journal of Mathematical Imaging and Vision; vol. 28, no. 3; pp. 259-278; 2007.

Module

Foundation