
No More Car Accidents in Tucson

No More Vehicle Accidents?

Tucson Tech, a tech startup run through the University of Arizona, just announced the development of a set of inexpensive, high-performing Luneburg lenses. These spherical lenses could replace several of the sensors that today’s autonomous cars depend on.

It’s possible that these 3-D printed Luneburg lenses will replace the complex (and expensive) sensor hardware in today’s autonomous cars. That would make it possible to produce much cheaper autonomous cars, and it may bring us that much closer to a driverless future.

Fully autonomous cars aren’t here yet, but many people are already concerned about how safe they will be and what hazards they might create.

What are the dangers?

In 2016, Joshua Brown of Canton, Ohio, died when his 2015 Tesla, operating on Autopilot, plowed into a tractor trailer. The truck’s driver reported that Brown was watching a Harry Potter movie when the crash occurred. (According to Tesla Motors Inc., it is not possible to watch videos on the car’s touchscreen.)

Tesla reports that this incident was the first fatality out of a total of 130 million miles driven with the Autopilot system. Autopilot is not meant to make the vehicle “driverless.” A spokesman for Tesla noted, “Autopilot is getting better all the time, but it is not perfect and still requires the driver to remain alert.”

Google’s self-driving cars have also been involved in a series of minor traffic incidents. Google acknowledges 11 such accidents over the course of its testing program.

Another source of concern is the possibility that the software responsible for driving autonomous cars may be faulty — or even that it may get hacked. Until these concerns are addressed, it is difficult to say what the future of self-driving cars may be.

So are autonomous cars safe or not?

One of the most important questions when it comes to the safety of self-driving cars is figuring out just what we mean by “safe.”

This isn’t just a question of semantics. Is it necessary to create software that drives perfectly, or is it enough to create a program that will get into fewer accidents than human drivers?

Until we understand what we mean when we say we want a safe self-driving car, we can’t say if self-driving cars will ever be safe.

It’s also worth noting that these cars should drive better as time goes by. Part of the beauty of the system is that the driving software gets smarter as it drives. Just like a human driver, it improves with practice. But no human driver has ever been able to practice on hundreds (or even thousands) of vehicles at the same time.
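
To make that fleet-learning idea concrete, here is a minimal sketch of how experience from many cars might be folded into one shared model. It is only an illustration under assumed, made-up names (DrivingUpdate, update_shared_model, a single “braking threshold” parameter); it is not how Tesla, Google, or any real manufacturer actually implements their systems.

```python
# A minimal, hypothetical sketch of "fleet learning": many vehicles contribute
# driving experience to one shared model, so the whole fleet improves at once.
# This is an illustration only -- every name below is made up for the example.

from dataclasses import dataclass
import random

@dataclass
class DrivingUpdate:
    """What one car reports after a trip: miles driven and the braking
    threshold its local software settled on during that trip."""
    miles: float
    braking_threshold: float

def drive_one_trip(shared_threshold: float) -> DrivingUpdate:
    # Each car starts from the shared model and nudges it based on its own
    # (here, randomly simulated) experience on the road.
    local_threshold = shared_threshold + random.uniform(-0.05, 0.05)
    return DrivingUpdate(miles=random.uniform(5, 50),
                         braking_threshold=local_threshold)

def update_shared_model(updates: list[DrivingUpdate]) -> float:
    # Weight each car's suggestion by how many miles it drove, so cars with
    # more experience count for more in the shared model.
    total_miles = sum(u.miles for u in updates)
    return sum(u.braking_threshold * u.miles for u in updates) / total_miles

# One "day" of learning across a simulated fleet of 1,000 cars.
shared = 0.5  # arbitrary starting value for the shared braking threshold
fleet_updates = [drive_one_trip(shared) for _ in range(1000)]
shared = update_shared_model(fleet_updates)
print(f"Shared braking threshold after one fleet-wide update: {shared:.3f}")
```

The point of the design is simply that every mile driven by any car in the fleet can feed back into the model that all of them share, which is why the software can gain experience far faster than any individual human driver.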