We propose two detection tasks. Task 1 uses the original oriented annotations as ground truth; Task 2 uses axis-aligned bounding boxes generated from them as ground truth. The results of Task 2 are of great practical value, and we recommend testing your algorithms on Task 1.
The aim of this task is to locate ground object instances with oriented bounding boxes. The oriented bounding box follows the same format as the original annotation.
You will be asked to submit a zip file containing results for all test images. The results are stored in 15 files, "Task1_plane.txt", "Task1_storage-tank.txt", ..., each containing all the results for one category. Each file is in the following format:
An example submission for Task 1:
imgname score x1 y1 x2 y2 x3 y3 x4 y4
imgname score x1 y1 x2 y2 x3 y3 x4 y4
...
The evaluation protocol for oriented bounding boxes differs slightly from the original PASCAL VOC protocol: the IoU is computed as the intersection area over the union area of the two polygons (ground truth and prediction). The rest follows PASCAL VOC.
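To make the polygon-based IoU concrete, here is a self-contained sketch that clips one quadrilateral against the other (Sutherland-Hodgman) and applies the shoelace formula. It assumes convex polygons with vertices in counter-clockwise order; the official evaluation code may handle more general cases:

```python
def polygon_area(poly):
    # Shoelace formula; poly is a list of (x, y) vertices.
    n = len(poly)
    return abs(sum(poly[i][0] * poly[(i + 1) % n][1]
                   - poly[(i + 1) % n][0] * poly[i][1]
                   for i in range(n))) / 2.0

def clip_polygon(subject, clip):
    # Sutherland-Hodgman: clip `subject` against convex, CCW polygon `clip`.
    def inside(p, a, b):
        # Point p is inside (left of) the directed edge a->b.
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0
    def intersect(p1, p2, a, b):
        # Intersection of segment p1-p2 with the line through a-b.
        dx1, dy1 = p2[0] - p1[0], p2[1] - p1[1]
        dx2, dy2 = b[0] - a[0], b[1] - a[1]
        denom = dx1 * dy2 - dy1 * dx2
        t = ((a[0] - p1[0]) * dy2 - (a[1] - p1[1]) * dx2) / denom
        return (p1[0] + t * dx1, p1[1] + t * dy1)
    output = subject
    for i in range(len(clip)):
        a, b = clip[i], clip[(i + 1) % len(clip)]
        input_list, output = output, []
        for j in range(len(input_list)):
            p1, p2 = input_list[j], input_list[(j + 1) % len(input_list)]
            if inside(p2, a, b):
                if not inside(p1, a, b):
                    output.append(intersect(p1, p2, a, b))
                output.append(p2)
            elif inside(p1, a, b):
                output.append(intersect(p1, p2, a, b))
        if not output:
            return []
    return output

def polygon_iou(poly1, poly2):
    inter = clip_polygon(poly1, poly2)
    inter_area = polygon_area(inter) if len(inter) >= 3 else 0.0
    union = polygon_area(poly1) + polygon_area(poly2) - inter_area
    return inter_area / union if union > 0 else 0.0
```

In practice a geometry library (e.g. Shapely's `Polygon.intersection`) does the same job with fewer edge cases.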
Detecting objects with horizontal bounding boxes is common in many previous object detection contests. The aim of this task is to accurately localize each instance with a horizontal bounding box in (xmin, ymin, xmax, ymax) format. In this task, the ground truths for training and testing are generated by computing the axis-aligned bounding box over each original annotated bounding box.
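The conversion described above is just a min/max over the four annotated corners; a minimal sketch (the function name is ours):

```python
def obb_to_hbb(coords):
    """Convert an oriented box given as [x1, y1, x2, y2, x3, y3, x4, y4]
    to the axis-aligned (xmin, ymin, xmax, ymax) used in Task 2."""
    xs = coords[0::2]  # x1, x2, x3, x4
    ys = coords[1::2]  # y1, y2, y3, y4
    return min(xs), min(ys), max(xs), max(ys)
```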
You will be asked to submit a zip file containing results for all test images. The results are stored in 15 files, "Task2_plane.txt", "Task2_storage-tank.txt", ..., each containing all the results for one category. Each file is in the following format:
An example submission for Task 2:
imgname score xmin ymin xmax ymax
imgname score xmin ymin xmax ymax
...
The evaluation protocol for horizontal bounding boxes follows the PASCAL VOC benchmark, which uses mean Average Precision (mAP) as the primary metric.
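For reference, the per-category AP underlying mAP can be sketched as the area under the precision/recall curve. The following is an illustrative implementation, not the official evaluation code (which may use the 11-point interpolation of older VOC protocols or differ in other details); it assumes detections have already been matched to ground truth at the IoU threshold:

```python
def average_precision(scored_matches, num_gt):
    """AP for one category.

    scored_matches: list of (score, is_true_positive) for every detection.
    num_gt: number of ground-truth instances of the category (must be > 0).
    Uses the area-under-the-PR-curve definition (VOC 2010 style).
    """
    scored_matches = sorted(scored_matches, key=lambda m: -m[0])
    tp = fp = 0
    recalls, precisions = [], []
    for _, is_tp in scored_matches:
        if is_tp:
            tp += 1
        else:
            fp += 1
        recalls.append(tp / num_gt)
        precisions.append(tp / (tp + fp))
    # Make precision monotonically non-increasing, then integrate over recall.
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recalls, precisions):
        ap += (r - prev_r) * p
        prev_r = r
    return ap
```

mAP is then the mean of this value over the 15 categories.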