There is a strong rumor that Tesla will be switching from a single-camera Autopilot system to a triple-camera system at some point. This is based on a change to the 2016 service manual schematics, which reportedly show three cameras.
Checking some brand-new Model X and S vehicles with the new refresh on 8-May-2016 shows no change so far, so it is not clear when or if this change will be made. The current module in front of the rear-view mirror holds the rain sensor and the camera. The camera is indented and faces directly forward.
Mobileye makes the EyeQ3 chip, which processes the camera video and other inputs such as the radar to understand the car's environment. Tesla produces the software and hardware that take the analysis from the EyeQ3 chip, combine it with Tesla's high-definition maps, and turn it into the set of Autopilot features.
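To make that division of labor concrete, here is a minimal Python sketch of the data flow. Everything in it (the names, the fields, the control logic) is my own illustrative invention, not Tesla's or Mobileye's actual software.

```python
from dataclasses import dataclass

# Hypothetical stand-in for the scene model the EyeQ3 produces after
# fusing camera video and radar (the real output is far richer).
@dataclass
class SceneAnalysis:
    lane_offset_m: float          # lateral offset from lane center
    lead_car_distance_m: float    # distance to the car ahead
    lead_car_speed_kph: float     # speed of the car ahead
    sign_speed_limit_kph: int     # limit read via traffic-sign recognition

def autopilot_step(scene: SceneAnalysis, map_speed_limit_kph: int) -> dict:
    """Combine the chip's scene analysis with high-definition map data
    to produce speed and steering targets. Purely illustrative."""
    # Trust the more conservative of the sign reading and the map.
    target_kph = min(scene.sign_speed_limit_kph, map_speed_limit_kph)
    # Keep roughly a two-second following gap behind the lead car.
    lead_mps = scene.lead_car_speed_kph / 3.6
    if scene.lead_car_distance_m < 2.0 * max(lead_mps, 0.1):
        target_kph = min(target_kph, scene.lead_car_speed_kph)
    # Nudge steering to cancel the lateral offset (a real system uses
    # a proper feedback controller, not a fixed gain).
    steering = -0.1 * scene.lane_offset_m
    return {"target_speed_kph": target_kph, "steering": steering}
```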
Since late 2014, the Model S has had one front-facing camera. While one camera does a great job, three cameras, combined with the related software changes, may enable additional features.
From a Mobileye presentation back in early 2015, we learn that:
- The main 50-degree camera handles object detection, traffic-sign recognition, adaptive high-beam assist, lane detection, traffic lights, path delimiters, and lateral control assist.
- The narrow 25-degree camera also detects objects, lanes, traffic lights, and debris, but farther ahead than the 50-degree camera (the sketch after this list shows why a narrower view reaches farther).
- The 150-degree fisheye camera is used to detect cars cutting in, lanes on tight curves, pedestrians/cyclists/large animals, and first-in-row traffic lights.
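Why does the narrow camera reach farther? With the same sensor, a narrower field of view packs more pixels into each degree, so a distant object covers more pixels and is easier for the software to recognize. A rough back-of-the-envelope sketch in Python (the 1280-pixel sensor width here is an assumption for illustration, not a published spec):

```python
import math

def pixels_on_target(object_width_m, range_m, fov_deg, sensor_width_px=1280):
    """Approximate horizontal pixels an object covers at a given range,
    using a simple pinhole-camera model and small-angle approximation."""
    angular_size_rad = object_width_m / range_m
    px_per_rad = sensor_width_px / math.radians(fov_deg)
    return angular_size_rad * px_per_rad

# A 1.8 m wide car seen at 100 m:
print(pixels_on_target(1.8, 100, fov_deg=50))   # ~26 px in the main camera
print(pixels_on_target(1.8, 100, fov_deg=25))   # ~53 px in the narrow camera
```

So if the software needs, say, 25 pixels across a car to classify it, the narrow camera can manage that at roughly twice the range of the main camera.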
The narrow and main cameras also provide some redundancy to each other in case of an issue with either camera or interference with the visibility of one of them. The radar provides yet another source of redundancy and helps in conditions where the cameras are weak.
I should point out that none of these cameras would be all that suitable for a dashcam. They are optimized for Autopilot functions and may be monochrome and/or only medium definition. The frame rate also tends to be non-standard, such as 36 fps.
The cameras and the related electronics such as the EyeQ3 chip are in the housing in front of the rear-view mirror, so a retrofit to older single-camera vehicles may be possible without too much difficulty. It's unclear whether Tesla will offer such a retrofit, or what it would cost.
Another interesting point is that Tesla works directly with Mobileye, while most other automakers work through third-party suppliers. The supplier integrates the Mobileye chip with other hardware and software into a component, which is then sold to the automaker, which in turn must integrate it with the car and the car's software. This extra layer adds a great deal of effort and time, and will likely keep most automakers 1-3 years behind Tesla. Fixes and improvements will be quite slow, and rarely applied to vehicles already sold.
With the speed of change in this area, I'm not sure I'd trust any automaker other than Tesla to produce a safe and reliable Autopilot-style system for years to come. The ability to do over-the-air updates, as Tesla does, is critical in this fast-evolving field.
6 comments
How far away can an object be for each of the three cameras to recognize it?
Although I have not seen any published ranges, it obviously depends on the size of the object, the resolution of the camera, and of course the software and how it interprets the scene. Other factors include how much contrast the object has against the background and the lighting conditions. For example, the cameras may have a far more limited range at night than during the day. Radar helps fill in object detection at night or in other conditions where the cameras are weak.
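To put rough numbers on it, you can invert the pinhole geometry from the sketch earlier in the post: assume the software needs some minimum number of pixels on an object to recognize it, and solve for the range. The sensor width and pixel threshold below are assumptions for illustration, not published Mobileye specs, and real-world factors like lighting and contrast would reduce these figures considerably.

```python
import math

def max_detection_range_m(object_width_m, fov_deg,
                          sensor_width_px=1280, min_px=20):
    """Best-case range at which an object still covers `min_px` pixels,
    ignoring lighting, contrast, and software limits."""
    px_per_rad = sensor_width_px / math.radians(fov_deg)
    return object_width_m * px_per_rad / min_px

# A 1.8 m wide car, assuming 20 pixels are needed to classify it:
for fov in (150, 50, 25):
    print(f"{fov:>3}-degree camera: ~{max_detection_range_m(1.8, fov):.0f} m")
# Prints roughly 44 m (fisheye), 132 m (main), and 264 m (narrow).
```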
Interesting article. I thought the Mobileye 3-camera system was still under development. Is it possible to have a production unit now for the current Tesla?
The EyeQ3 chip has been in production since mid-2014 (and is the chip used in the Model S and X). It does a lot, but far from everything; considerable additional software was developed (and continues to be developed) by Tesla. The chip has always had three camera inputs, but it only requires one camera. Additional features do require more cameras.
My understanding is the EyeQ4 chip is currently in development, with samples going to manufacturers this summer. It will offer the next level of autonomous driving abilities, but will not show up in cars until 2018. I have no idea when or if Tesla will switch to the EyeQ4 chip, seeing how it's not even available today.
Would a 3-camera system still be considered by many to be a first-generation system? Perhaps a generation-1.5 system? Some people have said that a second-generation system is still a couple of years away. What's the likely development timeline for a more advanced Autopilot system?
There is no formal definition of what the different generations might be or contain. I've seen many different timelines and evolutions, but little consistency. On top of that, the hardware and software are constantly being improved. Some even consider ordinary cruise control a first-generation system, so perhaps we are already in the 2nd generation? Even with hardware improvements, they don't matter much until the software catches up. Sorry, I don't have a clearer answer.