THE FUTURE IS HERE

CVPR 2019 Challenges on Domain Adaptation in Autonomous Driving

We all dream of a future in which autonomous cars can drive us to every corner
of the world. Numerous researchers and companies are working day and night to
chase this dream by overcoming scientific and technological barriers. One of the
greatest challenges we still face is developing machine learning models that can
be trained in a local environment and also perform well in new, unseen
situations. For example, self-driving cars may utilize perception models to
recognize drivable areas from images. Companies in Silicon Valley can build and
perfect such a model using large local datasets from the Bay Area for training.
However, if the same model were deployed in a snowy area such as Boston, it
would likely perform miserably because it has never seen snow before. Boston in
winter and Silicon Valley at any time of year can be treated as separate domains
for perception models, since they differ clearly in climate and in the
perception challenges they pose. In other cases, domains may be much
closer in nature, such as a city street and a nearby highway. The process of
transferring knowledge and models between different domains in machine learning
is called domain adaptation.
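
To make this concrete, below is a minimal sketch of one popular approach,
domain-adversarial training (Ganin & Lempitsky, 2015), in PyTorch. This is an
illustration only, not part of the challenge toolkit; the tiny network sizes
and the train_step helper are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on backward."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

# Hypothetical toy networks, for illustration only.
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
task_head = nn.Linear(256, 10)    # e.g. a 10-class recognition task
domain_head = nn.Linear(256, 2)   # source vs. target domain

criterion = nn.CrossEntropyLoss()
params = (list(feature_extractor.parameters()) + list(task_head.parameters())
          + list(domain_head.parameters()))
optimizer = torch.optim.SGD(params, lr=1e-3)

def train_step(src_x, src_y, tgt_x, lamb=0.1):
    """One step: supervised loss on source, adversarial domain loss on both."""
    src_feat = feature_extractor(src_x)
    tgt_feat = feature_extractor(tgt_x)

    # Task loss uses labels from the source domain only.
    task_loss = criterion(task_head(src_feat), src_y)

    # The domain classifier tries to tell source from target; the reversed
    # gradient pushes the feature extractor toward domain-invariant features.
    feats = torch.cat([src_feat, tgt_feat])
    domains = torch.cat([torch.zeros(len(src_x), dtype=torch.long),
                         torch.ones(len(tgt_x), dtype=torch.long)])
    domain_loss = criterion(domain_head(GradReverse.apply(feats, lamb)), domains)

    loss = task_loss + domain_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random stand-in batches.
src_x, src_y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
tgt_x = torch.randn(8, 3, 32, 32)
print(train_step(src_x, src_y, tgt_x))
```

The key point is that the target domain contributes no labels: only raw images
are needed to align the feature distributions of the two domains.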

A large number of papers on domain adaptation of perception models have appeared
in top publishing venues for machine learning and computer vision. However, most
of these works focus on image classification and semantic segmentation. Hardly
any attention has been paid to instance-level tasks, such as object detection
and tracking, even though localization of nearby objects is arguably more
important for autonomous driving. To foster the study of domain adaptation of
perception models, Berkeley DeepDrive and Didi Chuxing are co-hosting two
competitions at the CVPR 2019 Workshop on Autonomous Driving. The challenges
focus on domain adaptation of object detection and tracking, based on the
BDD100K dataset from Berkeley DeepDrive and the D2-City dataset from Didi
Chuxing. BDD100K covers scenes in the US, while D2-City was collected on
streets in China. The competitions ask participants to transfer object
detectors from BDD100K to D2-City and object trackers from D2-City to BDD100K.
More information about the challenges can be found on our website and the
D2-City site.
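
A natural baseline for the detection track is to run a source-trained detector
directly on target-domain frames and measure how much accuracy drops. The
sketch below illustrates that protocol with torchvision; the COCO-pretrained
model merely stands in for a BDD100K-trained detector, and the frame path is a
hypothetical placeholder.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# A COCO-pretrained detector stands in for a BDD100K-trained one here.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

# Hypothetical target-domain frame (e.g. one extracted from a D2-City video).
image = to_tensor(Image.open("d2city_frame.jpg").convert("RGB"))
with torch.no_grad():
    prediction = model([image])[0]  # dict with "boxes", "labels", "scores"

# Keep only confident detections; in the challenge these would be scored
# against D2-City ground truth with standard detection metrics (e.g. AP).
keep = prediction["scores"] > 0.5
print(prediction["boxes"][keep], prediction["labels"][keep])
```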

[youtube https://www.youtube.com/watch?v=X5RjE--TUGs?rel=0]

Following our introduction of the BDD100K dataset, we have been busy working to
provide more temporal annotations. The video above shows an example of our
object tracking annotations, created with our open-source annotation platform
Scalabel. Some of the tracking labels are
used in the domain adaptation challenge for object tracking. More data will be
released this summer. Of course, we also have object tracking at night.

[youtube https://www.youtube.com/watch?v=gA7dvJW_il0?rel=0]