
The race for autonomous driving is in full swing, and there are a number of approaches to computer vision to choose from. The way Tesla has approached it is quite different from Waymo's or Uber's.

Tesla relies on a neural network that is fed images from the car's cameras, supplemented by data from its radar and ultrasonic sensors. These inputs are then processed in much the same way that a human brain processes the visible light coming in through the eyes.
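To make that idea concrete, here is a minimal, hypothetical sketch of a camera-based detection step using an off-the-shelf detector from torchvision. It is emphatically not Tesla's network or code (neither is public); it only illustrates the general pattern of feeding a camera frame through a neural network and getting labelled objects back.

```python
# Hypothetical illustration only -- not Tesla's network or code.
# Feeds a single camera frame through an off-the-shelf object detector.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Any pretrained detector serves the illustration; Tesla's own
# architecture and training data are not public.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = Image.open("camera_frame.jpg").convert("RGB")  # placeholder file name

with torch.no_grad():
    detections = model([to_tensor(frame)])[0]

# Print every detection above a simple confidence threshold.
for box, label, score in zip(detections["boxes"],
                             detections["labels"],
                             detections["scores"]):
    if score > 0.5:
        print(f"class {label.item()} at {box.tolist()} (score {score.item():.2f})")
```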

In this article, we get to see what Tesla's Autopilot sees as it drives through the streets of Paris. The footage was captured and processed by Tesla hacker 'verygreen' and TMC user DamianXVI, who teamed up to purchase an unlocked developer Autopilot Hardware 2.5 computer from eBay.

This bespoke system and the results were presented to the public by the duo on the Tesla subreddit.

This is what Green initially wrote on the page:

“So keep in mind our visualizations are not what Tesla devs see out of their car footage, and we do not fully understand all the values either (though we have decent visibility into the system now as you can see). Since we don’t know anybody inside Tesla development, we don’t even know what sort of visual output their tools have.”

Green went on to explain the data visualization:

“The green fill at the bottom represents “possible driving space,” lines denote various detected lane and road boundaries (colors represent different types, the actual meaning is unknown for now). Various objects detected are enumerated by type and have coordinates in 3D space and depth information (also 2D bounding box, but we have not identified enough data for a 3D one), correlated radar data (if present) and various other properties.”
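Green's description suggests that each processed frame amounts to a structured set of detections plus drivable-space and boundary geometry. Below is a rough, hypothetical reconstruction of what such a per-frame record might look like; the field names and types are our guesses for illustration, not Tesla's actual data format.

```python
# Hypothetical reconstruction of the kind of per-frame data Green describes.
# Field names and types are guesses for illustration; the real format is unknown.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RadarReturn:
    distance_m: float          # range to the object reported by radar
    relative_speed_mps: float  # closing speed relative to the car

@dataclass
class DetectedObject:
    object_type: str                         # e.g. "car", "truck", "pedestrian", "cyclist"
    position_3d: tuple[float, float, float]  # x, y, z in the car's frame of reference
    depth_m: float                           # distance estimate from vision
    bbox_2d: tuple[int, int, int, int]       # pixel-space bounding box (x1, y1, x2, y2)
    radar: Optional[RadarReturn] = None      # correlated radar data, if present

@dataclass
class FrameOutput:
    drivable_space: list[tuple[float, float]]         # polygon of "possible driving space"
    lane_lines: dict[str, list[tuple[float, float]]]  # boundary type -> polyline points
    objects: list[DetectedObject] = field(default_factory=list)
```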

They presented a video of what Autopilot was seeing as it drove, and added timestamped commentary:

01:17 – traffic cones shape driveable space.

01:31 – construction equipment recognized as a truck (shows they have quite a deep library of objects they train against?). It's not perfect, though; we saw some common objects go undetected too, notably a pedestrian pushing a cart (not present in this video).

02:23 – false positive, a container mistaken for a vehicle.

03:31 – a pedestrian in a red(ish?) jacket is not detected at all. (Note to self: don't wear red jackets in Norway and California, where Teslas are everywhere.)

04:12 – one example of lines showing a right turn while there are no road markings for it.

06:52 – another false positive – a poster mistaken for a pedestrian.

08:10 – another, more prominent example of a left turn lane being shown with no actual road markings.

09:25 – close up cyclist.

11:44 – roller skater.

14:00 – we nearly got into an accident with that car on the left. AP did not warn.

19:48 – 20 pedestrians at once (not that there was a shortage of them before, of course).

The duo also presented a video of the same process on a highway, which Green highlighted:

3:55 – even on “highways” the gore area is apparently considered driveable? While technically true, it's probably not something that should be attempted.

4:08 – gore zone surrounded by bollards is correctly showing up as undriveable.

11:47 – you can see a bit of a hill crest with the path over it (Paris is not super hilly, it appears, so this is hard to demonstrate with this particular footage).

What the duo actually did to show Autopilot's view more faithfully was strip out the background camera imagery, which is not part of the system's output. What remains is exactly what the computer itself sees while driving.
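Rendering such a view is essentially a choice of canvas: draw the detections on a black image instead of on the camera frame. Here is a minimal sketch of that idea with OpenCV, reusing the hypothetical FrameOutput structure from the earlier example. This is not the duo's actual tooling, and it assumes the geometry has already been projected into pixel coordinates.

```python
# Sketch of rendering only the overlay, without the camera background.
# Assumes the hypothetical FrameOutput structure from the earlier example,
# with all geometry already in pixel coordinates.
import numpy as np
import cv2

def render_overlay_only(frame_out, width=1280, height=720):
    canvas = np.zeros((height, width, 3), dtype=np.uint8)  # black background

    # Fill the "possible driving space" polygon in green.
    if frame_out.drivable_space:
        pts = np.array(frame_out.drivable_space, dtype=np.int32)
        cv2.fillPoly(canvas, [pts], color=(0, 180, 0))

    # Draw lane/road boundary polylines (a real tool would colour by type).
    for points in frame_out.lane_lines.values():
        pts = np.array(points, dtype=np.int32)
        cv2.polylines(canvas, [pts], isClosed=False, color=(255, 255, 0), thickness=2)

    # Draw the 2D bounding box and label for each detected object.
    for obj in frame_out.objects:
        x1, y1, x2, y2 = obj.bbox_2d
        cv2.rectangle(canvas, (x1, y1), (x2, y2), color=(0, 0, 255), thickness=2)
        cv2.putText(canvas, obj.object_type, (x1, max(y1 - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)

    return canvas
```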

Our take:

This is perhaps the first uncurated third-party review of AV technology. Waymo, Cruise Automation, and Uber have all released curated videos, so there is no real way to corroborate their original footage.

This is an exciting inside view of how AVs see the world around them. Since it is imperative that their performance be as close to perfect as possible, this kind of third-party proofing is useful for raising trust and perhaps improving the acceptance of AVs into society.