<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:academictorrents="http://academictorrents.com/" version="2.0">
<channel>
<title>Self Driving Cars - Academic Torrents</title>
<description>collection curated by joecohen</description>
<link>https://academictorrents.com/collection/self-driving-cars</link>
<item>
<title>Misbehaviour Prediction for Autonomous Driving Systems (Dataset)</title>
<description>@inproceedings{2020-icse-misbehaviour-prediction,
author= {Andrea Stocco and Michael Weiss and Marco Calzana and Paolo Tonella},
title= {Misbehaviour Prediction for Autonomous Driving Systems},
booktitle= {Proceedings of the 42nd International Conference on Software Engineering},
series= {ICSE '20},
publisher= {ACM},
pages= {12 pages},
year= {2020},
abstract= {These are reproduction artifacts of the paper "Misbehaviour Prediction for Autonomous Driving Systems" at ICSE 2020.

For more information, check out https://github.com/testingautomated-usi/selforacle
},
keywords= {},
terms= {},
license= {CC BY-SA 4.0},
superseded= {},
url= {https://github.com/testingautomated-usi/selforacle}
}

</description>
<link>https://academictorrents.com/download/221c3c71ac0b09b1bb31698534d50168dc394cc7</link>
</item>
<item>
<title>comma.ai driving dataset (Dataset)</title>
<description>@article{,
title= {comma.ai driving dataset},
keywords= {},
journal= {},
author= {Comma AI},
year= {},
url= {https://github.com/commaai/research},
license= {Attribution-Noncommercial-Share Alike 3.0},
abstract= {This dataset contains more than seven hours of highway driving for you to use in your projects.

Details included within the dataset are:

- The speed of the car
- The acceleration
- The steering angle
- GPS coordinates

You won’t need to register on the site to download the dataset; it can be downloaded with a single click. For your convenience, we’ve included the direct download links below so you can instantly download and use them!

![](https://i.imgur.com/X6LA8Qm.gif)

45 GB compressed, 80 GB uncompressed

```
dog/2016-01-30--11-24-51 (7.7G)
dog/2016-01-30--13-46-00 (8.5G)
dog/2016-01-31--19-19-25 (3.0G)
dog/2016-02-02--10-16-58 (8.1G)
dog/2016-02-08--14-56-28 (3.9G)
dog/2016-02-11--21-32-47 (13G)
dog/2016-03-29--10-50-20 (12G)
emily/2016-04-21--14-48-08 (4.4G)
emily/2016-05-12--22-20-00 (7.5G)
frodo/2016-06-02--21-39-29 (6.5G)
frodo/2016-06-08--11-46-01 (2.7G)
```

Dataset referenced on this page is copyrighted by comma.ai and published under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. This means that you must attribute the work in the manner specified by the authors, you may not use this work for commercial purposes and if you alter, transform, or build upon this work, you may distribute the resulting work only under the same license.

## Dataset structure
The dataset consists of 10 video clips of variable length recorded at 20 Hz
with a camera mounted on the windshield of an Acura ILX 2016. In parallel with the videos,
we also recorded measurements such as the car's speed, acceleration,
steering angle, GPS coordinates, and gyroscope angles. See the full `log` list [here](Logs.md).
These measurements are transformed into a uniform 100 Hz time base.

The dataset folder structure is the following:
```bash
+-- dataset
|   +-- camera
|   |   +-- 2016-04-21--14-48-08
|   |   ...
|   +-- log
|   |   +-- 2016-04-21--14-48-08
|   |   ...
```

All files are in hdf5 format and are named by the time they were recorded.
The camera dataset has shape `number_frames x 3 x 160 x 320` and dtype `uint8`.
One of the `log` hdf5 datasets, `cam1_ptr`, records the alignment
between camera frames and the other measurements.},
superseded= {},
terms= {}
}
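The `cam1_ptr` alignment described in the abstract can be illustrated with a small sketch. This is a toy reconstruction, not dataset code: the arrays are synthetic, only the camera shape and the `cam1_ptr` name come from the description above, and the 5:1 ratio simply reflects 100 Hz logs against 20 Hz video. The real hdf5 files can be opened with, e.g., h5py.

```python
import numpy as np

# Toy stand-ins for the hdf5 contents (synthetic data, illustrative only).
frames = np.zeros((3, 3, 160, 320), dtype=np.uint8)  # number_frames x 3 x 160 x 320, uint8
cam1_ptr = np.repeat(np.arange(3), 5)                # frame index for each 100 Hz log sample
speed = np.linspace(10.0, 24.0, 15)                  # a synthetic measurement stream

# All measurement samples recorded while camera frame 1 was current:
speed_for_frame1 = speed[cam1_ptr == 1]
print(speed_for_frame1.size)  # 5 log samples per camera frame at 100 Hz vs 20 Hz
```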

</description>
<link>https://academictorrents.com/download/58c41e8bcc8eb4e2204a3b263cdf728c0a7331eb</link>
</item>
<item>
<title>Camvid: Motion-based Segmentation and Recognition Dataset (Dataset)</title>
<description>@article{,
title= {Camvid: Motion-based Segmentation and Recognition Dataset},
keywords= {fastai},
journal= {},
author= {Brostow et al., 2008},
year= {},
url= {https://pdfs.semanticscholar.org/08f6/24f7ee5c3b05b1b604357fb1532241e208db.pdf},
license= {},
abstract= {Segmentation dataset with per-pixel semantic segmentation of over 700 images, each inspected and confirmed by a second person for accuracy.},
superseded= {},
terms= {}
}

</description>
<link>https://academictorrents.com/download/890e6716827f31cbd096c5aee7f777e30df7094a</link>
</item>
<item>
<title>CamVid - The Cambridge-driving Labeled Video Database (Dataset)</title>
<description>@article{,
title= {CamVid - The Cambridge-driving Labeled Video Database},
keywords= {},
journal= {},
author= {},
year= {},
url= {http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/},
license= {},
abstract= {The Cambridge-driving Labeled Video Database (CamVid) is the first collection of videos with object class semantic labels, complete with metadata. The database provides ground truth labels that associate each pixel with one of 32 semantic classes.

The database addresses the need for experimental data to quantitatively evaluate emerging algorithms. While most videos are filmed with fixed-position CCTV-style cameras, our data was captured from the perspective of a driving automobile. The driving scenario increases the number and heterogeneity of the observed object
classes. 

Over ten minutes of high quality 30Hz footage is provided, with corresponding semantically labeled images at 1Hz and, in part, 15Hz. The CamVid Database offers four contributions that are relevant to object analysis researchers. First, the per-pixel semantic segmentation of over 700 images was specified manually, and was then inspected and confirmed by a second person for accuracy. Second, the high-quality and large resolution color video images in the database represent valuable extended duration digitized footage to those interested in driving scenarios or ego-motion. Third, we filmed calibration sequences for the camera color response and intrinsics, and computed a 3D camera pose for each frame in the sequences. Finally, in support of expanding this or other databases, we offer custom-made labeling software for assisting users who wish to paint precise class-labels for other images and videos. We evaluated the relevance of the database by measuring the performance of an algorithm from each of three distinct domains: multi-class object recognition, pedestrian detection, and label propagation.

![](https://i.imgur.com/q5UYxWa.png)


#### Citation Request 

Segmentation and Recognition Using Structure from Motion Point Clouds, ECCV 2008
Brostow, Shotton, Fauqueur, Cipolla

Semantic Object Classes in Video: A High-Definition Ground Truth Database
Pattern Recognition Letters
Brostow, Fauqueur, Cipolla},
superseded= {},
terms= {}
}

</description>
<link>https://academictorrents.com/download/a6431e7dd33615194c5936fd8a35db043ab51058</link>
</item>
<item>
<title>Holistic Recognition of Low Quality License Plates (HDR dataset) (Dataset)</title>
<description>@article{,
title= {Holistic Recognition of Low Quality License Plates (HDR dataset)},
keywords= {},
author= {},
abstract= {This dataset focuses on the recognition of license plates in low-resolution, low-quality images.

![](https://i.imgur.com/4y2lGaX.png)

### Citation Request

J. Špaňhel, J. Sochor, R. Juránek, A. Herout, L. Maršík and P. Zemčík, "Holistic recognition of low quality license plates by CNN using track annotated data," 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Lecce, 2017, pp. 1-6.
doi: 10.1109/AVSS.2017.8078501},
terms= {},
license= {Attribution-NonCommercial-ShareAlike 4.0 International},
superseded= {},
url= {https://medusa.fit.vutbr.cz/traffic/research-topics/general-traffic-analysis/holistic-recognition-of-low-quality-license-plates-by-cnn-using-track-annotated-data-iwt4s-avss-2017/}
}

</description>
<link>https://academictorrents.com/download/8ed33d02d6b36c389dd077ea2478cc83ad117ef3</link>
</item>
<item>
<title>Udacity Didi $100k Challenge Dataset 1 (Dataset)</title>
<description>@article{,
title= {Udacity Didi $100k Challenge Dataset 1},
keywords= {},
journal= {},
author= {Udacity and Didi},
year= {},
url= {https://challenge.udacity.com/home/},
license= {},
abstract= {First Full Dataset Release - Udacity/Didi $100k Challenge

One of the most important aspects of operating an autonomous vehicle is understanding the surrounding environment in order to make safe decisions. Udacity and Didi Chuxing are partnering together to provide incentive for students to come up with the best way to detect obstacles using camera and LIDAR data. This challenge will allow for pedestrian, vehicle, and general obstacle detection that is useful to both human drivers and self-driving car systems.

Competitors will need to process LIDAR and camera frames to output a set of obstacles, removing noise and environmental returns. Participants will be able to build on the large body of work that has been put into the KITTI datasets and challenges, using existing techniques and their own novel approaches to improve the current state of the art.

Specifically, students will be competing against each other on the KITTI Object Detection Evaluation Benchmark. While a current leaderboard exists for academic publications, Udacity and Didi will be hosting our own leaderboard specifically for this challenge, and we will be using the standard object detection development kit, which enables us to evaluate approaches as they are done in academia and industry.

IMPORTANT NOTICE

There are some major differences between this Udacity dataset and the Kitti datasets. It is important to note that positions are recorded with respect to the base station, not the capture vehicle. The NED positions in the ‘rtkfix’ topic are therefore in relation to a FIXED POINT, NOT THE CAPTURE OR OBSTACLE VEHICLES. The relative positions can be calculated easily, as the NED frame is Cartesian space, not polar.
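Because the NED frame is Cartesian, the relative position follows from a per-axis subtraction of the two fixes. A minimal sketch; all coordinate values here are made up for illustration, not taken from the dataset:

```python
# Hypothetical NED fixes (north, east, down) in metres, both expressed
# relative to the same base station, as in the '/gps/rtkfix' topic.
capture_ned = (105.2, -13.7, 0.4)   # capture vehicle
obstacle_ned = (118.9, -10.1, 0.5)  # obstacle vehicle

# NED is Cartesian, so the obstacle's position relative to the
# capture vehicle is a plain component-wise difference.
relative = tuple(o - c for o, c in zip(obstacle_ned, capture_ned))
print(relative)
```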

The XML tracklet files will, however, be in the frame of the capture vehicle. This means that the capture vehicle is also included in the recorded positions, and is denoted by the ROS topic '/gps/rtkfix' in this first dataset. The single obstacle vehicle in this dataset is located in the 'obs1/' topic namespace, but this will be changed to '/obstacles/obstacle_name' in future releases to accommodate the creation of XML tracklet files for multiple obstacles. 

Orientation of obstacles is not evaluated in Round 1, but will be evaluated in Round 2. The pose section of the ROS bags included in this release IS NOT A VALID QUATERNION, and does not represent either the pose of the capture vehicle or the obstacle.

There is no XML tracklet file included with these datasets. They will be released as soon as they are available, in conjunction with the opening of the online leaderboard.},
superseded= {},
terms= {}
}

</description>
<link>https://academictorrents.com/download/76352487923a31d47a6029ddebf40d9265e770b5</link>
</item>
<item>
<title>CH03_002.bag.tar.gz (Dataset)</title>
<description>@article{,
title= {CH03_002.bag.tar.gz},
keywords= {},
journal= {},
author= {},
year= {},
url= {},
license= {},
abstract= {Continuous recording of a down/back trip on El Camino Real with two GPS fix / IMU sources.
Contains LIDAR data from a Velodyne HDL-32E (previous runs used the VLP-16)

https://github.com/udacity/self-driving-car/tree/master/datasets/CH3},
superseded= {},
terms= {}
}

</description>
<link>https://academictorrents.com/download/48a5a65231a3ad51939251c94f92716ef337d9c1</link>
</item>
<item>
<title>Ch2_001: Udacity Self Driving Car (Dataset)</title>
<description>@article{,
title= {Ch2_001: Udacity Self Driving Car},
keywords= {},
journal= {},
author= {},
year= {},
url= {},
license= {},
abstract= {https://github.com/udacity/self-driving-car/tree/master/datasets/CH2},
superseded= {},
terms= {}
}

</description>
<link>https://academictorrents.com/download/69343f173317f847e489d7272920c86795029434</link>
</item>
<item>
<title>CHX_001: Udacity Self Driving Car (Dataset)</title>
<description>@article{,
title= {CHX_001: Udacity Self Driving Car},
keywords= {},
journal= {},
author= {},
year= {},
url= {},
license= {},
abstract= {https://github.com/udacity/self-driving-car/tree/master/datasets/CHX},
superseded= {},
terms= {}
}

</description>
<link>https://academictorrents.com/download/989ad2fd8bb95c18e07ada27cb100e5b9c14224d</link>
</item>
<item>
<title>CH3_001: Udacity Self Driving Car (Dataset)</title>
<description>@article{,
title= {CH3_001: Udacity Self Driving Car},
keywords= {},
journal= {},
author= {},
year= {},
url= {},
license= {},
abstract= {https://github.com/udacity/self-driving-car/tree/master/datasets/CH3},
superseded= {},
terms= {}
}

</description>
<link>https://academictorrents.com/download/769f5a4641d0d26f64d4b550bdfd88b3f4582b11</link>
</item>
<item>
<title>Ch2_002: Udacity Self Driving Car (Dataset)</title>
<description>@article{,
title= {Ch2_002: Udacity Self Driving Car},
keywords= {},
journal= {},
author= {},
year= {},
url= {},
license= {},
abstract= {https://github.com/udacity/self-driving-car/tree/master/datasets/CH2},
superseded= {},
terms= {}
}

</description>
<link>https://academictorrents.com/download/72dfc74e541cb2c53c027d233007808bc42bf103</link>
</item>
<item>
<title>Ch2_001: Udacity Self Driving Car (Dataset)</title>
<description>@article{,
title= {Ch2_001: Udacity Self Driving Car},
keywords= {},
journal= {},
author= {},
year= {},
url= {},
license= {},
abstract= {https://github.com/udacity/self-driving-car/tree/master/datasets/CH2},
superseded= {},
terms= {}
}

</description>
<link>https://academictorrents.com/download/692ee7e0c63fb2212bfe4a62a39ce71ee9b16fb3</link>
</item>
<item>
<title>Udacity SDC Dataset: 2016-11-07 (Dataset)</title>
<description>@article{,
title= {Udacity SDC Dataset: 2016-11-07},
keywords= {},
journal= {},
author= {},
year= {},
url= {},
license= {},
abstract= {Conditions: Overcast Evening / Night
Sensors: 3 camera streams, 1 VLP-16 LIDAR packet stream, 1 Xsens IMU positional/GPS fix data stream, standard vehicle location/state information

This is a continuous recording of El Camino from Mountain View to South San Francisco (and back). The trip is separated into two ROS bag files corresponding with the direction of the trip. New in this dataset is the presence of the more accurate GPS fix from the Xsens IMU. This more accurate GPS location is available in the '/fix' topic. Accelerometer information is in the '/imu/data' topic.

This was generated for Challenge Three and LIDAR point cloud creation (non-challenge related). Those using this dataset for Challenge 2 need to prune out lane changes and stops.},
superseded= {},
terms= {}
}

</description>
<link>https://academictorrents.com/download/d03edc3fd5320a9a0436a4750a95730783a13d5d</link>
</item>
<item>
<title>Challenges 2 &amp; 3: El Camino Training Data (Dataset)</title>
<description>@article{,
title= {Challenges 2 &amp; 3: El Camino Training Data},
keywords= {},
journal= {},
author= {Udacity},
year= {},
url= {},
license= {},
abstract= {NOTE: LEFT AND CENTER CAMERAS WERE ACCIDENTALLY SWAPPED IN THIS CONFIGURATION DUE TO PERMANENT WINDSHIELD MOUNTING

This torrent contains training data in rosbag format for Challenges 2 &amp; 3. It includes the ROS bags corresponding to the previously released official training set (PNG format), as well as some extra unreleased data from that day.},
superseded= {},
terms= {}
}

</description>
<link>https://academictorrents.com/download/e9b47deb3391e33df794e5ec4399d38ef8767c07</link>
</item>
<item>
<title>Udacity Self Driving Car Dataset 3-1: El Camino (Dataset)</title>
<description>@article{,
title= {Udacity Self Driving Car Dataset 3-1: El Camino},
keywords= {},
journal= {},
author= {},
year= {},
url= {},
license= {},
abstract= {Dataset of two drives from the Udacity office to San Francisco up (and down) El Camino Real, the path of the final drive and where the test sets of Challenges 2 &amp; 3 will take place. These are sunny afternoon and evening drives; an attempt was made to stay in the same lane, but obstacles and construction sometimes required lane changes. While this is an official dataset for Challenge 3, it has all the information required for use in Challenge 2. Note that only the center camera feed will be available in the test set.

Also, this dataset includes Velodyne VLP-16 LIDAR packets. This is so that you may see the format of the LIDAR we will be publishing, but it is not useful (or allowed) in Challenges 2 &amp; 3.

# To utilize compressed image topics
You need to install a dependency:

```
$ sudo apt-get install ros-indigo-image-transport*
```

# To play back data
Copy the udacity_launch package from our GitHub project to your catkin workspace, then compile and source it so that it is reachable.

Location of launch files: https://github.com/udacity/self-driving-car/tree/master/datasets/udacity_launch

```
$ cd udacity-dataset-2-1
$ rosbag play --clock *.bag
$ roslaunch udacity_launch bag_play.launch
```

# For visualization

```
$ roslaunch udacity_launch rviz.launch
```

# Dataset Info
MD5:13f107727bed0ee5731647b4e114a545

file:udacity-dataset_2016-10-20-13-46-48_0.bag
duration:1hr 25:26s (5126s)
start:Oct 20 2016 13:46:48.34 (1476996408.34)
end:Oct 20 2016 15:12:15.15 (1477001535.15)

file:udacity-dataset_2016-10-20-15-13-30_0.bag
duration:1hr 58:44s (7124s)
start:Oct 20 2016 15:13:30.91 (1477001610.91)
end:Oct 20 2016 17:12:15.64 (1477008735.64)},
superseded= {},
terms= {}
}

</description>
<link>https://academictorrents.com/download/c9dae89d2e3897e6aa98c0c8196348c444998a2a</link>
</item>
<item>
<title>Udacity Dataset 2-3 Compressed (Dataset)</title>
<description>@article{,
title= {Udacity Dataset 2-3 Compressed},
keywords= {},
journal= {},
author= {Udacity, Auro Robotics},
year= {},
url= {},
license= {MIT},
abstract= {3 hours of daytime driving in highway and city locations. Includes three cameras, CAN, diagnostic, and other data.

We’re Building an Open Source Self-Driving Car, and we want your help!

At Udacity, we believe in democratizing education. How can we provide opportunity to everyone on the planet? We also believe in teaching really amazing and useful subject matter. When we decided to build the Self-Driving Car Nanodegree program, to teach the world to build autonomous vehicles, we instantly knew we had to tackle our own self-driving car too.

Together with Google Self-Driving Car founder and Udacity President Sebastian Thrun, we formed our core Self-Driving Car Team. One of the first decisions we made? Open source code, written by hundreds of students from across the globe!

https://github.com/udacity/self-driving-car

To play back data
=================
Copy the udacity_launch package to your catkin workspace,
then compile and source it so that it is reachable.

```
cd udacity-dataset-2-1
rosbag play --clock *.bag
roslaunch udacity_launch bag_play.launch
```

# For visualization

```
roslaunch udacity_launch rviz.launch
```},
superseded= {},
terms= {}
}

</description>
<link>https://academictorrents.com/download/1d7fa5116a809b1537bf521fd19897de5d69b7a3</link>
</item>
<item>
<title>Udacity Self-Driving Car Driving Data 10/3/2016 (dataset-2-2.bag.tar.gz) (Dataset)</title>
<description>@article{,
title= {Udacity Self-Driving Car Driving Data 10/3/2016 (dataset-2-2.bag.tar.gz)},
keywords= {},
journal= {},
author= {Udacity},
year= {},
url= {https://github.com/udacity/self-driving-car},
license= {},
abstract= {
| Date      | Lighting Conditions | Duration | Compressed Size | Uncompressed | MD5                              |
|-----------|---------------------|----------|-----------------|--------------|-----------------|
| 10/3/2016 | Overcast            | 58:53    | 124G            | 183G        | 34362e7d997476ed972d475b93b876f3 |},
superseded= {},
terms= {}
}

</description>
<link>https://academictorrents.com/download/5ac7e6d434aade126696666417e3b9ed5d078f1c</link>
</item>
<item>
<title>Udacity Self-Driving Car Driving Data 9/29/2016 (dataset.bag.tar.gz) (Dataset)</title>
<description>@article{,
title= {Udacity Self-Driving Car Driving Data 9/29/2016 (dataset.bag.tar.gz)},
keywords= {},
journal= {},
author= {Udacity},
year= {},
url= {https://github.com/udacity/self-driving-car},
license= {},
abstract= {| Date      | Lighting Conditions | Duration | Compressed Size | Uncompressed | MD5                              |
|-----------|---------------------|----------|-----------------|--------------|-----------------|
| 9/29/2016 | Sunny               | 12:40    | 25G             | 40G          | 33a10f7835068eeb29b2a3274c216e7d |},
superseded= {},
terms= {}
}

</description>
<link>https://academictorrents.com/download/6011d0e932970efc999809e9cafab8e791c93bb8</link>
</item>
<item>
<title>Udacity Self-Driving Car Dataset 2-2 (Dataset)</title>
<description>@article{,
title= {Udacity Self-Driving Car Dataset 2-2},
keywords= {},
journal= {},
author= {Udacity, Auro Robotics},
year= {},
url= {},
license= {MIT},
abstract= {We’re Building an Open Source Self-Driving Car, and we want your help!

At Udacity, we believe in democratizing education. How can we provide opportunity to everyone on the planet? We also believe in teaching really amazing and useful subject matter. When we decided to build the Self-Driving Car Nanodegree program, to teach the world to build autonomous vehicles, we instantly knew we had to tackle our own self-driving car too.

Together with Google Self-Driving Car founder and Udacity President Sebastian Thrun, we formed our core Self-Driving Car Team. One of the first decisions we made? Open source code, written by hundreds of students from across the globe!

https://github.com/udacity/self-driving-car},
superseded= {},
terms= {}
}

</description>
<link>https://academictorrents.com/download/bcde779f81adbaae45ef69f9dd07f3e76eab3b27</link>
</item>
<item>
<title>Udacity Self-Driving Car Dataset 2-1 (Dataset)</title>
<description>@article{,
title= {Udacity Self-Driving Car Dataset 2-1},
keywords= {},
journal= {},
author= {Udacity, Auro Robotics},
year= {},
url= {},
license= {MIT},
abstract= {We’re Building an Open Source Self-Driving Car, and we want your help!

At Udacity, we believe in democratizing education. How can we provide opportunity to everyone on the planet? We also believe in teaching really amazing and useful subject matter. When we decided to build the Self-Driving Car Nanodegree program, to teach the world to build autonomous vehicles, we instantly knew we had to tackle our own self-driving car too.

Together with Google Self-Driving Car founder and Udacity President Sebastian Thrun, we formed our core Self-Driving Car Team. One of the first decisions we made? Open source code, written by hundreds of students from across the globe!

https://github.com/udacity/self-driving-car},
superseded= {},
terms= {}
}

</description>
<link>https://academictorrents.com/download/f2666220bb74417dfc43815b710a1565cd1a6b76</link>
</item>
</channel>
</rss>
