DeepID-Net: Deformable Deep Convolutional Neural Networks for Object Detection
We propose deformable deep convolutional neural networks for generic object detection. This new deep learning object detection framework has innovations in multiple aspects. In the proposed new deep architecture, a new deformation-constrained pooling (def-pooling) layer models the deformation of object parts with geometric constraint and penalty. A new pre-training strategy is proposed to learn feature representations that are more suitable for the object detection task and that generalize well. By changing the net structures and training strategies, and by adding and removing key components in the detection pipeline, we obtain a set of models with large diversity, which significantly improves the effectiveness of model averaging. The proposed approach improves the mean average precision of RCNN, which was the state-of-the-art, from 31% to 50.3% on the ILSVRC2014 detection test set. It also outperforms the winner of ILSVRC2014, GoogLeNet, by 6.1%. Detailed component-wise analysis is also provided through extensive experimental evaluation, which gives a global view of the deep learning object detection pipeline.
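To make the def-pooling idea concrete, the sketch below is a minimal NumPy illustration, not the paper's implementation: in the paper the deformation penalty is a learned weighted sum of deformation basis functions, whereas here a fixed quadratic cost `a * (dx^2 + dy^2)` stands in for it, and the penalty weight `a` and window size are illustrative choices.

```python
import numpy as np

def def_pooling(scores, a=0.1, window=3):
    """Minimal sketch of a deformation-constrained (def-)pooling layer.

    For each output location, the part-filter response map `scores` is
    max-pooled over a local window, with each candidate placement
    penalized by a quadratic deformation cost a * (dx^2 + dy^2).
    """
    h, w = scores.shape
    r = window // 2
    out = np.full((h, w), -np.inf)
    for i in range(h):
        for j in range(w):
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        # response at the displaced position minus the
                        # penalty for deviating from the anchor position
                        val = scores[ii, jj] - a * (di * di + dj * dj)
                        out[i, j] = max(out[i, j], val)
    return out

pooled = def_pooling(np.random.randn(8, 8))
```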
If you use our code or dataset, please cite the following papers:
Figure: The motivation of this paper: a new pretraining scheme (a) and jointly learning the feature representation and deformable object parts shared by multiple object classes at different semantic levels (b). In (a), a model pretrained on image-level annotation is more robust to size and location change, while a model pretrained on object-level annotation is better at representing objects with tight bounding boxes. In (b), when the ipod rotates, its circular pattern moves horizontally at the bottom of the bounding box. Therefore, the circular pattern incurs a smaller penalty when moving horizontally but a higher penalty when moving vertically. The curvature part of the circular pattern is often at the bottom-right of the circular pattern.
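As a rough illustration of the two pretraining regimes in (a), the helpers below are hypothetical and only show how the training samples differ; the function names, input size, and resizing are assumptions, not the paper's recipe.

```python
from PIL import Image

def image_level_sample(img_path, label, size=224):
    """Image-level pretraining sample (hypothetical helper): the whole
    image is used, so the object's size and location within the input
    vary freely from sample to sample."""
    img = Image.open(img_path).resize((size, size))
    return img, label

def object_level_sample(img_path, bbox, label, size=224):
    """Object-level pretraining sample (hypothetical helper): crop the
    tight bounding box, so the network sees objects at a canonical size
    and location. `bbox` is (left, upper, right, lower) in pixels."""
    img = Image.open(img_path).crop(bbox).resize((size, size))
    return img, label
```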
Figure: Overview of the proposed detection pipeline. Text in red highlights the steps that are not present in RCNN.
Figure: Architecture of DeepID-Net: (a) the baseline deep model, which is ZF in our best-performing single-model detector; (b) layers of part filters with variable sizes and def-pooling layers; (c) the deep model that obtains 1000-class image classification scores, which are used to refine the 200-class bounding box classification scores.
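A minimal sketch of this refinement step, under the assumption that the 1000-class classification scores are concatenated with the 200-class detection scores as contextual features and a linear classifier is learned on top; the array shapes and random data here are placeholders.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
det_scores = rng.standard_normal((500, 200))   # placeholder 200-class detection scores
cls_scores = rng.standard_normal((500, 1000))  # placeholder 1000-class classification scores
labels = rng.integers(0, 2, size=500)          # placeholder positives/negatives for one class

# Concatenate the classification scores with the detection scores as
# contextual features, then learn a linear SVM on the combined vector.
features = np.hstack([det_scores, cls_scores])
svm = LinearSVC(C=1.0).fit(features, labels)
refined = svm.decision_function(features)      # refined per-box scores
```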
Table: The 10 models used for model-averaging selection. The selected models are highlighted in red. mAP is reported on val2 without bounding box regression or context. For net design, D-Def(O) denotes our DeepID-Net with def-pooling layers on an Overfeat baseline structure, D-Def(G) denotes DeepID-Net with def-pooling layers on a GoogLeNet baseline structure, and G-net denotes GoogLeNet. For pretraining, image denotes the image-centric pretraining scheme of RCNN, and object denotes the object-centric pretraining scheme (Scheme 1 in the paper). For the loss of the net, h denotes hinge loss and s denotes softmax loss. Bounding box rejection is used for all models. Selective search and EdgeBoxes are used for proposing regions.
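For intuition, the snippet below sketches one plausible greedy way of selecting models for averaging by val2 mAP; this is an assumption, not necessarily the paper's exact procedure, and `evaluate_map` is an assumed evaluation helper.

```python
def greedy_model_selection(val_scores, evaluate_map, k=4):
    """Greedily pick models for averaging (a sketch, assuming greedy
    forward selection): repeatedly add the model whose inclusion most
    improves the averaged detector's mAP on val2.

    val_scores:   dict mapping model name -> score array on val2 proposals
    evaluate_map: assumed helper mapping averaged scores -> mAP
    """
    selected, remaining = [], set(val_scores)
    for _ in range(min(k, len(remaining))):
        def avg_map(model):
            picked = selected + [model]
            avg = sum(val_scores[m] for m in picked) / len(picked)
            return evaluate_map(avg)
        best = max(remaining, key=avg_map)  # model with the best marginal gain
        selected.append(best)
        remaining.remove(best)
    return selected
```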
| Net Structure | Pre-training Scheme | Pretrained Model (on ImageNet Cls data) | Finetuned Model (on ImageNet Det data) |
| --- | --- | --- | --- |
| AlexNet | Image-level Annotations | Download | Download |
| AlexNet | Object-level Annotations | Download | Download |
| Clarifai | Image-level Annotations | Download | Download |
| Clarifai | Object-level Annotations | Download | Download |
| Overfeat | Image-level Annotations | Download | Download |
| Overfeat | Object-level Annotations | Download | Download |
| GoogLeNet | Image-level Annotations | Download | Download |
| GoogLeNet | Object-level Annotations | Download | Download |