Available with Image Analyst license.
Available with Spatial Analyst license.
All supervised deep learning tasks depend on labeled datasets: humans must apply their knowledge to teach the neural network what to identify. The labeled objects are used to train a model that can then perform inferencing on new data.
Image annotation, or labeling, is vital for deep learning tasks such as computer vision. A large amount of labeled data is required to train a good deep learning model. When the right training data is available, deep learning systems can be highly accurate in feature extraction, pattern recognition, and complex problem solving. The Label Objects for Deep Learning pane can be used to label data.
The Label Objects for Deep Learning button is found in the Deep Learning Tools drop-down menu, in the Image Classification group on the Imagery tab. Once the tool is launched, choose whether to use an existing layer or create an image collection. For a new image collection, browse to the location of your imagery folder and a layer will be created with your image collection.
Once the imagery collection location has been specified, the Label Objects pane appears. The pane is divided into two parts. The upper part of the pane is for managing classes, and the lower part of the pane is for managing the collection of the samples and for exporting the training data for the deep learning frameworks.
Create classes and label objects
The upper portion of the pane allows you to manage object classes and manually create the objects used for training the deep learning model. There are many tools available to help you create labeled objects.
Tool | Function
---|---
Rectangle | Create a labeled object by drawing a rectangle around a feature or object in the raster.
Polygon | Create a labeled object by drawing a polygon around a feature or object in the raster.
Circle | Create a labeled object by drawing a circle around a feature or object in the raster.
Freehand | Create a labeled object by drawing a freehand shape around a feature or object in the raster.
Auto Detect | Automatically detect and label the feature or object. A polygon will be drawn around the feature or object. This tool is only available if the deep learning frameworks libraries are installed.
Segment Picker | Create a feature by selecting a segment from a segmented layer. This option is only available if there is a segmented layer in the Contents pane. Activate the Segment Picker by highlighting the segmented layer in the Contents pane, then select the layer from the Segment Picker drop-down list.
Label Image | Assign the selected class to the current image. This is only available in Image Collection mode.
Edit | Select and edit a labeled object.
New Schema | Create a classification schema.
Schema Options | Choose a classification schema option.
Save | Save changes to the schema.
Save As | Save a new copy of the schema.
Add Class | Add a class category to the schema. Select the name of the schema first to create a parent class at the highest level. Select the name of an existing class to create a subclass.
Delete Class | Remove the selected class or subclass category from the schema.
- Click one of the sketch tools, such as Rectangle, Polygon, Circle, or Freehand, to begin collecting object samples.
- Using a sketch tool, delineate the image feature representing the object on the map.
- If you create a feature without a class specified, the Define Class dialog box appears. For more information about this dialog box, see the Define Class section.
- Continue to create and label objects as described in the steps above.
- You can use the Labeled Objects tab (at the bottom of the pane) to delete and organize your labeled object samples.
- Once you are satisfied with all your labeled objects, save your samples by clicking the Save button on the Labeled Objects tab.
Now that you have manually labeled a representative sample of objects, these can be used to export your training data.
Auto Detect
The Auto Detect tool is used to automatically draw a rectangle around a feature: click the feature and a rectangular bounding box will be drawn. If you want a polygon boundary of the feature, hold down the Shift key and click the feature; this will draw a perimeter around the shape of the feature. For the tool to work well, the feature must cover a significant number of pixels on the map, so you may need to zoom in closer to the features.
Auto Detect works well on distinct feature cases. It is not recommended when you have continuous features in close proximity to each other.
Define Class
The Define Class dialog box allows you to create a new class or assign an existing class. If you choose Use Existing Class, choose the appropriate Class Name option for that object. If you choose Add New Class, you can optionally edit the class information, then click OK to create the class.
Label image collections
If you have a collection of images that you want to label, use the mosaic dataset or mosaic layer to label each of the images. The Image Collection tab shows the drop-down list of images. The selected image is drawn on the map, and you can then label it with the appropriate class. Use the arrow buttons to choose the next image you want to view and label.
When your image is in an Image Coordinate System (ICS), the image may be in an unusual orientation, especially when dealing with oblique or perspective imagery. To view your image in pixel space, check the Label in pixel space check box. This will draw the image in an orientation more conducive to intuitive image interpretation.
Label the entire image
For instances when you don't want to draw a boundary around an object, you can use the Label Image button to label the entire image with the selected class, irrespective of the spatial aspect of the object.
Labeled Objects
The Labeled Objects tab is in the lower section of the pane and manages the training samples you have collected for each class. Collect representative sites, or training samples, for each class in the image. A training sample has location information (polygon) and an associated class. The image classification algorithm uses the training samples, saved as a feature class, to identify the land cover classes in the entire image.
You can view and manage training samples by adding, grouping, or removing them. When you select a training sample, it is selected on the map. Double-click a training sample in the table to zoom to it on the map.
Tool | Function
---|---
Open | Open an existing training samples feature class.
Save | Save edits made to the current labeled objects feature class.
Save As | Save the current labeled objects as a new feature class.
Delete | Delete the selected labeled objects.
Export Training Data
Once samples have been collected, you can export them into training data by clicking the Export Training Data tab. The training data can then be used in a deep learning model. Once the parameters have been filled in, click Run to create the training data.
Parameter | Description
---|---
Output Folder | Choose the output folder where the training data will be saved.
Mask Polygon Features | A polygon feature class that delineates the area where image chips will be created. Only image chips that fall completely within the polygons will be created.
Image Format | Specifies the raster format for the image chip outputs. The PNG and JPEG formats support up to three bands.
Tile Size X | The size of the image chips in the x dimension.
Tile Size Y | The size of the image chips in the y dimension.
Stride X | The distance to move in the x direction when creating the next image chips. When the stride is equal to the tile size, there will be no overlap. When the stride is equal to half the tile size, there will be 50 percent overlap.
Stride Y | The distance to move in the y direction when creating the next image chips. When the stride is equal to the tile size, there will be no overlap. When the stride is equal to half the tile size, there will be 50 percent overlap.
Rotation Angle | The rotation angle that will be used to generate additional image chips. An image chip will be generated with a rotation angle of 0, which means no rotation. It will then be rotated at the specified angle to create an additional image chip. The same training samples will be captured at multiple angles in multiple image chips for data augmentation. The default rotation angle is 0.
Output No Feature Tiles | Specifies whether image chips that do not capture training samples will be exported.
Metadata Format | Specifies the format that will be used for the output metadata labels. If the input training sample data is a feature class layer, such as a building layer or a standard classification training sample file, use the KITTI Labels or PASCAL Visual Object Classes option (KITTI_rectangles or PASCAL_VOC_rectangles in Python). The output metadata is a .txt file or an .xml file containing the training sample data contained in the minimum bounding rectangle. The name of the metadata file matches the input source image name. If the input training sample data is a class map, use the Classified Tiles option (Classified_Tiles in Python) as the output metadata format. For the KITTI metadata format, 15 columns are created, but only 5 of them are used by the tool. The first column is the class value. The next 3 columns are skipped. Columns 5 through 8 define the minimum bounding rectangle, which is composed of four image coordinate locations: left, top, right, and bottom pixels. The minimum bounding rectangle encompasses the training chip used in the deep learning classifier. The remaining columns are not used.
Blacken Around Feature | Specifies whether the pixels around each object or feature in each image tile will be masked out. This parameter applies only when the Metadata Format parameter is set to Labeled Tiles and an input feature class or classified raster has been specified.
Crop Mode | Specifies whether the exported tiles will be cropped so that they are all the same size. This parameter applies only when the Metadata Format parameter is set to either Labeled Tiles or Imagenet and an input feature class or classified raster has been specified.
Reference System | Specifies the type of reference system that will be used to interpret the input image. The specified reference system must match the reference system used to train the deep learning model.
Additional Input Raster | An additional input imagery source for image translation methods. This parameter is valid when the Metadata Format parameter is set to Classified Tiles, Export Tiles, or CycleGAN.
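The interaction between tile size, stride, and rotation angle determines how many chips an export produces. The arithmetic described in the table above can be sketched in plain Python (independent of ArcGIS); the chip-count estimate for rotation assumes the tool rotates through a full circle, which is an interpretation of the multiple-angles description rather than documented tool behavior:

```python
import math

def overlap_fraction(tile_size, stride):
    """Fraction of each chip shared with its neighbor along one axis.
    A stride equal to the tile size gives no overlap; a stride of half
    the tile size gives 50 percent overlap, as noted for Stride X/Y."""
    return max(0.0, (tile_size - stride) / tile_size)

def tiles_along_axis(extent_pixels, tile_size, stride):
    """Number of chip positions needed to cover one image axis."""
    if extent_pixels <= tile_size:
        return 1
    return math.ceil((extent_pixels - tile_size) / stride) + 1

def chips_with_rotation(base_chips, rotation_angle):
    """Estimated chip count with rotation augmentation, assuming one
    extra chip per rotation step up to a full 360 degrees (assumption;
    the tool's exact count may differ)."""
    if rotation_angle <= 0:
        return base_chips
    return base_chips * math.ceil(360 / rotation_angle)
```

For example, a 1000-pixel-wide image with 256-pixel tiles and a 128-pixel stride needs 7 chip positions per row instead of 4, at the cost of 50 percent overlap between neighbors.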
The exported training data can now be used in a deep learning model.
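The two rectangle metadata formats can also be inspected programmatically with standard-library Python. The sketch below follows the KITTI column layout described in the parameter table (class value in column 1, bounding box in columns 5 through 8) and the standard PASCAL VOC annotation layout; the sample strings are illustrative, not actual tool output:

```python
import xml.etree.ElementTree as ET

def parse_kitti_line(line):
    """Parse one line of a KITTI label file. Column 1 is the class
    value, columns 2-4 are unused, and columns 5-8 are the bounding
    box (left, top, right, bottom) in image coordinates."""
    cols = line.split()
    return {
        "class": cols[0],
        "bbox": tuple(float(v) for v in cols[4:8]),
    }

def parse_voc(xml_text):
    """Extract (class name, bounding box) pairs from a PASCAL VOC
    annotation document."""
    root = ET.fromstring(xml_text)
    objects = []
    for obj in root.iter("object"):
        box = obj.find("bndbox")
        bbox = tuple(float(box.find(tag).text)
                     for tag in ("xmin", "ymin", "xmax", "ymax"))
        objects.append((obj.find("name").text, bbox))
    return objects

# Illustrative sample data (not produced by the tool):
kitti_line = "1 0 0 0 48.0 20.0 96.0 60.0 0 0 0 0 0 0 0"
voc_xml = """<annotation><object><name>building</name>
<bndbox><xmin>48</xmin><ymin>20</ymin><xmax>96</xmax><ymax>60</ymax></bndbox>
</object></annotation>"""
```

Reading the labels this way is useful for sanity-checking an export, for example confirming that every chip's bounding boxes fall inside the tile dimensions before training.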