Detecting Road Surface Wetness from Audio with Recurrent Neural Networks

This blog post presents the dataset, a video, and supplementary information for the paper “Detecting Road Surface Wetness from Audio: A Deep Learning Approach” (click the link to get the PDF). First, here’s a video about this work:



If you find this paper/page helpful in your work, please cite:

  @inproceedings{abdic2016detecting,
    title={Detecting road surface wetness from audio: A deep learning approach},
    author={Abdi{\'c}, Irman and Fridman, Lex and Brown, Daniel E and Angell, William
            and Reimer, Bryan and Marchi, Erik and Schuller, Bj{\"o}rn},
    booktitle={Pattern Recognition (ICPR), 2016 23rd International Conference on},
    year={2016}
  }

PS: You can find this paper on Google Scholar.


The dataset used for this paper is made available as a ZIP archive (click the link to download it). This is the dataset used in the paper to train and evaluate the proposed LSTM model.
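The paper feeds frame-level audio features to an LSTM for binary wet/dry classification. As a rough, hedged sketch of the preprocessing step (the exact feature set and window sizes used in the paper are not reproduced here; the 25 ms window / 10 ms hop below are common speech/audio defaults, assumed purely for illustration), framing the mono audio signal into overlapping windows might look like:

```python
import numpy as np

def frame_audio(signal, sr, frame_ms=25.0, hop_ms=10.0):
    """Split a mono signal into overlapping frames.

    Each row of the returned array is one frame, ready for
    per-frame feature extraction (e.g., log-energy, spectra)
    before being fed to a sequence model such as an LSTM.
    """
    frame_len = int(sr * frame_ms / 1000)  # samples per frame
    hop = int(sr * hop_ms / 1000)          # samples between frame starts
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    return np.stack([signal[i * hop : i * hop + frame_len]
                     for i in range(n_frames)])

# Example: 1 second of audio at 16 kHz (rate chosen for illustration)
sr = 16000
frames = frame_audio(np.zeros(sr, dtype=np.float32), sr)
```

With a 25 ms window and 10 ms hop, one second of 16 kHz audio yields 98 frames of 400 samples each.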

The ZIP archive contains 3 dry trips and 3 wet trips, each in a separate directory. Each trip contains 3 data streams synchronized with frame-level accuracy:

  • Audio: File audio_mono.wav is the audio of the trip and the main data stream used in the paper.
  • Telemetry: File synced_data_fps30.csv describes the movement of the vehicle, sampled at a fixed rate of 30 Hz. This file is useful for lining up the speed of the vehicle with the audio of the tire’s interaction with the road.
  • Video: File video_front.mkv is the 30fps synchronized (to the other data streams) video of the forward roadway. This is useful for validation by visually confirming the approximate speed of the vehicle and wetness of the road at any moment in time.
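Because the telemetry is sampled at 30 Hz while the audio runs at the WAV file’s own sample rate, lining up a moment in the audio with the corresponding telemetry row is a simple index conversion. A minimal sketch (the 44.1 kHz rate below is only an assumption for illustration; read the actual rate from the audio_mono.wav header, e.g. with scipy.io.wavfile.read):

```python
TELEMETRY_FPS = 30        # synced_data_fps30.csv rate, per the post
AUDIO_SR = 44100          # assumed sample rate; take it from the WAV header

def telemetry_row_for_sample(sample_idx, audio_sr=AUDIO_SR, fps=TELEMETRY_FPS):
    """Return the 30 Hz telemetry CSV row index covering a given audio sample."""
    return int(sample_idx / audio_sr * fps)

def sample_for_telemetry_row(row_idx, audio_sr=AUDIO_SR, fps=TELEMETRY_FPS):
    """Return the first audio sample index covered by a telemetry row."""
    return int(row_idx / fps * audio_sr)
```

For example, one second into the recording (sample 44100 at 44.1 kHz) falls on telemetry row 30; the same mapping lets you pull the vehicle speed for any audio frame, or (since the video is also 30 fps) jump to the matching video frame to visually confirm road wetness.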


Irman Abdic, MIT
Lex Fridman, MIT (contact author)
Erik Marchi, TUM
Daniel E. Brown, MIT
William Angell, MIT
Bryan Reimer, MIT
Björn Schuller, TUM and Imperial College London