Object Detection, Instance Segmentation


This chapter explains how to use object detection based on deep learning.

With object detection we want to find the different instances in an image and assign each of them to a class. The instances may partially overlap and are nevertheless distinguished from one another. This is illustrated in the following schema.
A possible example for object detection: Within the input image three instances are found and assigned to a class.
Instance segmentation is a special case of object detection, where the model also predicts an instance mask marking the specific region of the instance within the image. This is illustrated in the following schema. In general, the explanations for object detection also apply to instance segmentation. Possible differences are pointed out in the specific sections.
A possible example for instance segmentation: Within the input image three instances are found. Each instance is assigned to a class and obtains a mask marking its particular region.
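To make the distinction concrete, the following sketch shows how detection and instance segmentation results could be represented as plain data. The structures and field names are hypothetical illustrations, not the HALCON result format.

```python
# Illustrative only: hypothetical result structures, not the HALCON API.

# An object detection result: one entry per found instance, each with a
# class label, a confidence, and an axis-aligned bounding box
# (row1, col1, row2, col2).
detection_result = [
    {"class": "apple", "confidence": 0.9, "bbox": (10, 15, 80, 90)},
    {"class": "apple", "confidence": 0.7, "bbox": (60, 70, 130, 150)},
    {"class": "lemon", "confidence": 0.9, "bbox": (20, 160, 75, 220)},
]

# An instance segmentation result additionally carries a binary mask per
# instance, marking the instance's region within its bounding box.
def add_dummy_mask(instance):
    r1, c1, r2, c2 = instance["bbox"]
    h, w = r2 - r1, c2 - c1
    # Here simply a full-box mask; a real model predicts the object shape.
    mask = [[1] * w for _ in range(h)]
    return {**instance, "mask": mask}

segmentation_result = [add_dummy_mask(inst) for inst in detection_result]
```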

Object detection leads to two different tasks: Finding the instances and classifying them. In order to do so, we use a combined network consisting of three main parts.

The first part, called backbone, consists of a pretrained classification network whose classifying layer has been removed. Its task is to generate various feature maps. These feature maps encode different kinds of information at different scales, depending on how deep they lie in the network, see also the chapter Deep Learning. Thereby, feature maps with the same width and height are said to belong to the same level.

In the second part, backbone layers of different levels are specified as docking layers and their feature maps are combined. As a result, we obtain feature maps containing information of lower and higher levels. These are the feature maps we will use in the third part. This second part is also called feature pyramid, and together with the first part it constitutes the feature pyramid network.

The third part consists of additional networks, called heads, for every selected level. They get the corresponding feature maps as input and learn how to localize and classify potential objects, respectively. Additionally, this third part includes the reduction of overlapping predicted bounding boxes.

An overview of the three parts is shown in the following figure.
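The combination step of the second part can be sketched as follows. This is a minimal illustration of the top-down pathway of a feature pyramid network, assuming feature maps whose height and width halve from one level to the next; it is not HALCON's internal implementation.

```python
import numpy as np

# Sketch of the feature pyramid combination (part 2): upsample the higher
# (coarser) level and add it to the lower (finer) level, so every resulting
# feature map contains information of lower and higher levels.

def upsample2x(fmap):
    """Nearest-neighbor upsampling by a factor of 2 in height and width."""
    return fmap.repeat(2, axis=0).repeat(2, axis=1)

def build_pyramid(backbone_maps):
    """backbone_maps: list of 2D feature maps, finest level first."""
    pyramid = [backbone_maps[-1]]          # start at the coarsest level
    for fmap in reversed(backbone_maps[:-1]):
        pyramid.append(fmap + upsample2x(pyramid[-1]))
    return pyramid[::-1]                   # finest level first again

# Backbone levels with spatial sizes 8x8, 4x4, 2x2 (channels omitted).
levels = [np.ones((8, 8)), np.ones((4, 4)), np.ones((2, 2))]
combined = build_pyramid(levels)
# Each combined map keeps its level's resolution but now also contains
# information propagated down from the higher levels.
```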

A schematic overview of the mentioned three parts: (1) The backbone. (2) Backbone feature maps are combined and new feature maps generated. (3) Additional networks, called heads, which learn how to localize and classify, respectively, potential objects. Overlapping bounding boxes are suppressed.
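The suppression of overlapping bounding boxes mentioned for part (3) is commonly done with non-maximum suppression. The following greedy sketch is illustrative only, not HALCON's implementation.

```python
# Greedy non-maximum suppression: keep the highest-scoring box and drop
# any remaining box that overlaps a kept box too strongly.

def iou(a, b):
    """Intersection over union of two boxes (row1, col1, row2, col2)."""
    r1, c1 = max(a[0], b[0]), max(a[1], b[1])
    r2, c2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, r2 - r1) * max(0, c2 - c1)
    area = lambda x: (x[2] - x[0]) * (x[3] - x[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def nms(boxes, scores, threshold=0.5):
    """Return the indices of the boxes that survive suppression."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= threshold for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
# The second box overlaps the first strongly and is suppressed.
```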

Let us have a look at what happens in this third part. In object detection, the location of an instance in the image is given by a rectangular bounding box. Hence, the first task is to find a suitable bounding box for every single instance. To do so, the network generates reference bounding boxes and learns how to modify them to fit the instances as well as possible. These reference bounding boxes are called anchors. The better these anchors represent the shapes of the different ground truth bounding boxes, the easier the network can learn them. For this purpose the network generates a set of anchors on every anchor point, thus on every pixel of the used feature maps of the feature pyramid. Such a set consists of anchors of all combinations of shapes, sizes, and, for instance type 'rectangle2' (see below), also orientations. The shape of those boxes is affected by the parameter 'anchor_aspect_ratios', the size by the parameter 'anchor_num_subscales', and the orientation by the parameter 'anchor_angles', see the illustration below and get_dl_model_param. If the parameters generate multiple identical anchors, the network internally ignores those duplicates.
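A hedged sketch of such anchor generation: for each anchor point, every combination of aspect ratio, subscale, and angle yields one anchor, and duplicates are ignored. The parameter names mirror 'anchor_aspect_ratios', 'anchor_num_subscales', and 'anchor_angles', but the geometry below (subscales filling the octave between two levels, the ratio applied via square roots) is a common convention assumed here, not necessarily HALCON's internal formula.

```python
import itertools
import math

def generate_anchors(base_size, aspect_ratios, num_subscales, angles):
    """Generate the anchor set for one anchor point.

    Returns (height, width, angle) triples; a set is used so that
    parameter combinations yielding identical anchors are ignored.
    """
    anchors = set()
    for ratio, k, angle in itertools.product(
            aspect_ratios, range(num_subscales), angles):
        # Subscales fill the octave between one pyramid level and the next.
        size = base_size * 2 ** (k / num_subscales)
        h = size * math.sqrt(ratio)   # ratio = height / width
        w = size / math.sqrt(ratio)
        anchors.add((round(h, 3), round(w, 3), angle))
    return sorted(anchors)

# 3 aspect ratios x 3 subscales x 1 angle = 9 distinct anchors per point.
anchors = generate_anchors(
    base_size=32, aspect_ratios=[0.5, 1.0, 2.0],
    num_subscales=3, angles=[0.0])
```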
