
inpainting_ct (Operator)

Name

inpainting_ct — Perform an inpainting by coherence transport.

Signature

inpainting_ct(Image, Region : InpaintedImage : Epsilon, Kappa, Sigma, Rho, ChannelCoefficients : )

Herror inpainting_ct(const Hobject Image, const Hobject Region, Hobject* InpaintedImage, double Epsilon, double Kappa, double Sigma, double Rho, double ChannelCoefficients)

Herror T_inpainting_ct(const Hobject Image, const Hobject Region, Hobject* InpaintedImage, const Htuple Epsilon, const Htuple Kappa, const Htuple Sigma, const Htuple Rho, const Htuple ChannelCoefficients)

Herror inpainting_ct(Hobject Image, Hobject Region, Hobject* InpaintedImage, const HTuple& Epsilon, const HTuple& Kappa, const HTuple& Sigma, const HTuple& Rho, const HTuple& ChannelCoefficients)

HImage HImage::InpaintingCt(const HRegion& Region, const HTuple& Epsilon, const HTuple& Kappa, const HTuple& Sigma, const HTuple& Rho, const HTuple& ChannelCoefficients) const

HImageArray HImageArray::InpaintingCt(const HRegion& Region, const HTuple& Epsilon, const HTuple& Kappa, const HTuple& Sigma, const HTuple& Rho, const HTuple& ChannelCoefficients) const

void InpaintingCt(const HObject& Image, const HObject& Region, HObject* InpaintedImage, const HTuple& Epsilon, const HTuple& Kappa, const HTuple& Sigma, const HTuple& Rho, const HTuple& ChannelCoefficients)

HImage HImage::InpaintingCt(const HRegion& Region, double Epsilon, double Kappa, double Sigma, double Rho, const HTuple& ChannelCoefficients) const

HImage HImage::InpaintingCt(const HRegion& Region, double Epsilon, double Kappa, double Sigma, double Rho, double ChannelCoefficients) const

void HOperatorSetX.InpaintingCt(
[in] IHUntypedObjectX* Image, [in] IHUntypedObjectX* Region, [out] IHUntypedObjectX* InpaintedImage, [in] VARIANT Epsilon, [in] VARIANT Kappa, [in] VARIANT Sigma, [in] VARIANT Rho, [in] VARIANT ChannelCoefficients)

IHImageX* HImageX.InpaintingCt(
[in] IHRegionX* Region, [in] double Epsilon, [in] double Kappa, [in] double Sigma, [in] double Rho, [in] VARIANT ChannelCoefficients)

static void HOperatorSet.InpaintingCt(HObject image, HObject region, out HObject inpaintedImage, HTuple epsilon, HTuple kappa, HTuple sigma, HTuple rho, HTuple channelCoefficients)

HImage HImage.InpaintingCt(HRegion region, double epsilon, double kappa, double sigma, double rho, HTuple channelCoefficients)

HImage HImage.InpaintingCt(HRegion region, double epsilon, double kappa, double sigma, double rho, double channelCoefficients)

Description

The operator inpainting_ct inpaints a missing region Region of an image Image by transporting image information from the region's boundary along the coherence direction into this region.

Since this operator's basic concept is inpainting by continuing broken contour lines, the image content and inpainting region must be such that this idea makes sense. That is, if a contour line hits the region to inpaint at a pixel p, there should be some opposite point q where this contour line continues, so that the continuation of contour lines from two opposite sides can succeed. In cases where there is less geometry in the image, a diffusion-based inpainter, e.g., harmonic_interpolation, may yield better results. Alternatively, Kappa can be set to 0. An extreme case with little global geometry is a pure texture: there the idea behind this operator fails to produce good results (think of a checkerboard with a region to inpaint that is big relative to the checker fields). For these kinds of images, a texture-based inpainting, e.g., inpainting_texture, can be used instead.

The operator uses a so-called upwind scheme to assign gray values to the missing pixels: the pixels of the inpainting region are processed in the order of increasing distance to the region's boundary, and the gray value of the current pixel p is computed as a weighted average of the gray values of the already known pixels q in the disc of radius Epsilon around p,

u(p) = ( sum_{q} w(p,q) u(q) ) / ( sum_{q} w(p,q) ),

where w(p,q) is the weight described below.

The initially used image data comes from a stripe of thickness Epsilon around the region to inpaint. Thus, Epsilon must be at least 1 for the scheme to work, but should typically be larger. The maximum useful value for Epsilon depends on the gray values that should be transported into the region. Epsilon = 5 is a suitable choice in many cases.
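As a sketch, the width of the information stripe can be varied via Epsilon; the image and region below are reused from the example at the end of this page, and the specific values are only illustrative (all other parameters are the defaults listed below):

read_image (Image, 'claudia')
gen_circle (Circle, 333, 164, 35)
* Minimal stripe of known pixels around the inpainting region
inpainting_ct (Image, Circle, InpaintedNarrow, 1, 25, 1.41, 4, 1)
* Default stripe width, suitable in many cases
inpainting_ct (Image, Circle, InpaintedDefault, 5, 25, 1.41, 4, 1)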

Since the goal is to close broken contour lines, the direction of the level lines must be estimated and used in the weight. This estimated direction is called the coherence direction and is computed by means of the structure tensor S:

S = G_{rho} * ( D u_{sigma} (D u_{sigma})^T )   with   u_{sigma} = G_{sigma} * u,

where * denotes the convolution, u the gray value image, D the derivative, and G_{sigma}, G_{rho} Gaussian kernels with standard deviations sigma and rho. These standard deviations are defined by the operator's parameters Sigma and Rho. Sigma should have the size of the noise or of unimportant little objects, which are then not considered in the estimation step because of the pre-smoothing. Rho gives the size of the window around a pixel that is used for the direction estimation. The coherence direction c is then given by the eigendirection of S with respect to the minimal eigenvalue lambda_{min}, i.e.,

S c = lambda_{min} c,   |c| = 1.

For multichannel or color images, the scheme above is applied to each channel separately, but the weights must be the same for all channels to propagate information in the same direction. Since the weight depends on the coherence direction, the common direction is given by the eigendirection of a composite structure tensor. If u_{1},...,u_{n} denote the n channels of the image, the channel structure tensors S_{1},...,S_{n} are computed as above and then combined into the composite structure tensor

S = a_{1} S_{1} + ... + a_{n} S_{n}.

The coefficients a_{i} are passed in ChannelCoefficients, which is a tuple of length n or of length 1. If the tuple's length is 1, the arithmetic mean is used, i.e., a_{i} = 1/n. If the length of ChannelCoefficients matches the number of channels, the a_{i} are set to

a_{i} = c_{i} / (c_{1} + ... + c_{n}),

where c_{i} denotes the i-th entry of ChannelCoefficients, in order to obtain a well-defined convex combination. Hence, the ChannelCoefficients must be greater than or equal to zero and their sum must be greater than zero. If the tuple's length is neither 1 nor the number of channels, or if these requirements are not satisfied, the operator returns an error.

The purpose of using ChannelCoefficients other than the arithmetic mean is to adapt to different color codes. The coherence direction is geometrical information of the composite image, which is carried by high contrasts such as edges. Thus, the more contrast a channel has, the more geometrical information it contains, and consequently the greater its coefficient should be chosen relative to the others. For RGB images, [0.299, 0.587, 0.114] is a good choice.
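For example, the following sketch assumes a three-channel RGB input image; the image name and region are reused from the example at the end of this page for illustration only, and all other parameters are the defaults listed below:

read_image (Image, 'claudia')
gen_circle (Circle, 333, 164, 35)
* Luminance-like channel weights for an RGB image
inpainting_ct (Image, Circle, InpaintedImage, 5, 25, 1.41, 4, [0.299,0.587,0.114])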

The weight in the scheme is the product of a directional component and a distance component. If p is the 2D coordinate vector of the current pixel to be inpainted and q the 2D coordinate of a pixel in its neighborhood (the disc of radius Epsilon restricted to already known pixels), the directional component measures the deviation of the vector p-q from the coherence direction c. The deviation is penalized exponentially, scaled by a parameter beta: a large deviation yields a low directional component, a small deviation a high one. beta is controlled by Kappa (in percent):

beta = 20 * Epsilon * Kappa / 100
Kappa defines how important it is to propagate information along the coherence direction: a large Kappa yields sharp edges, while a low Kappa allows for more diffusion.

A special case is Kappa = 0: the directional component of the weight is then constant (equal to one). The direction estimation step is skipped to save computational costs, and the parameters Sigma, Rho, and ChannelCoefficients become meaningless, i.e., the propagation of information is not based on the structures visible in the image.
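For illustration, the following sketch inpaints the same region with a strong directional weighting and with Kappa = 0; the image and region are reused from the example at the end of this page, Kappa = 75 is an arbitrary illustrative choice, and all other parameters are the defaults listed below:

read_image (Image, 'claudia')
gen_circle (Circle, 333, 164, 35)
* Strong directional weighting: broken edges are continued sharply
inpainting_ct (Image, Circle, InpaintedSharp, 5, 75, 1.41, 4, 1)
* Kappa = 0: constant directional component, diffusion-like fill;
* Sigma, Rho, and ChannelCoefficients have no effect in this case
inpainting_ct (Image, Circle, InpaintedSmooth, 5, 0, 1.41, 4, 1)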

The distance component is 1/|p-q|. Consequently, if q is far away from p, a low distance component is assigned, whereas if it is near to p, a high distance component is assigned.

Attention

Note that filter operators may return unexpected results if an image with a reduced domain is used as input. Please refer to the chapter Filters.

Parallelization

Parameters

Image (input_object)  (multichannel-)image(-array)  object (byte / uint2 / real)

Input image.

Region (input_object)  region  object

Inpainting region.

InpaintedImage (output_object)  (multichannel-)image(-array)  object (byte / uint2 / real)

Output image.

Epsilon (input_control)  number  (real)

Radius of the pixel neighborhood.

Default value: 5.0

Typical range of values: 1.0 ≤ Epsilon ≤ 20.0

Minimum increment: 1.0

Recommended increment: 1.0

Kappa (input_control)  number  (real)

Sharpness parameter in percent.

Default value: 25.0

Typical range of values: 0.0 ≤ Kappa ≤ 100.0

Minimum increment: 1.0

Recommended increment: 1.0

Sigma (input_control)  number  (real)

Pre-smoothing parameter.

Default value: 1.41

Typical range of values: 0.0 ≤ Sigma ≤ 20.0

Minimum increment: 0.001

Recommended increment: 0.01

Rho (input_control)  number  (real)

Smoothing parameter for the direction estimation.

Default value: 4.0

Typical range of values: 0.001 ≤ Rho ≤ 20.0

Minimum increment: 0.001

Recommended increment: 0.01

ChannelCoefficients (input_control)  number(-array)  (real)

Channel weights.

Default value: 1

Example (HDevelop)

read_image (Image, 'claudia')
* Circular region to be restored
gen_circle (Circle, 333, 164, 35)
* Inpaint the region by coherence transport
inpainting_ct (Image, Circle, InpaintedImage, 15, 25, 1.5, 3, 1.0)

Alternatives

harmonic_interpolation, inpainting_aniso, inpainting_mcf, inpainting_ced, inpainting_texture

References

Folkmar Bornemann, Tom März: “Fast Image Inpainting Based On Coherence Transport”; Journal of Mathematical Imaging and Vision; vol. 28, no. 3; pp. 259-278; 2007.

Module

Foundation

