Our second project is to create a UI to control features in an autonomous vehicle. I am honestly not the biggest fan of cars, so I wasn't sure how to feel when I was first given this brief, but as it is a more futuristic project I knew there would be potential to have a bit of fun generating ideas during the discovery process.
An autonomous vehicle is a driverless car or vehicle that operates by itself, sensing its surroundings and external conditions without the need for human intervention; some people would even say they are robots disguised as cars. They operate using remote sensing technology such as radar, GPS and cameras to create a 3D map of the surrounding environment, including the street, pedestrians, cars, road signs and traffic lights. The data continuously collected by this sensing technology is used to make decisions about vehicle operations, continually adjusting steering, cruising speed, acceleration and braking depending on the external environment.
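To get my head around that continuous sense-and-adjust loop, here is a tiny toy sketch of it in Python. Everything in it (the SensorSnapshot fields, the thresholds, the decide_controls function) is made up for illustration and is nothing like how a real vehicle is programmed:

```python
from dataclasses import dataclass


@dataclass
class SensorSnapshot:
    # Hypothetical combined reading from radar, GPS and cameras
    distance_to_obstacle_m: float   # nearest detected object ahead
    current_speed_kmh: float
    speed_limit_kmh: float


def decide_controls(snapshot: SensorSnapshot) -> dict:
    """One pass of the sense-and-adjust loop: read the surroundings,
    then adapt braking and acceleration accordingly."""
    controls = {"brake": 0.0, "accelerate": 0.0}
    if snapshot.distance_to_obstacle_m < 10:
        controls["brake"] = 1.0        # something is close, so brake hard
    elif snapshot.current_speed_kmh < snapshot.speed_limit_kmh:
        controls["accelerate"] = 0.3   # otherwise gently cruise up to the limit
    return controls


# One tick of the loop; in a real vehicle this would run many times a second
print(decide_controls(SensorSnapshot(8.0, 40.0, 50.0)))
```

The point of the sketch is simply that the car never stops reading its environment and re-deciding what to do with the controls.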
It is thought that with the rise of artificial intelligence and the assistance of machine learning, vehicles will learn from this data, improve their algorithms and expand their ability to navigate the road, as well as start to make better decisions in certain situations without needing specific instructions. Vehicles will also be able to connect to each other, detecting one another and other obstacles in the road whether or not they are in a vehicle's direct view, leading to a safer environment for drivers, pedestrians and cyclists (a rough sketch of that idea is below).
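Just to illustrate that vehicle-to-vehicle idea, here is a very small Python sketch: each car shares what it can see, so an obstacle hidden from one car can still be accounted for. The names and data here are entirely invented, not a real protocol:

```python
def merge_detections(own_view: set, shared_views: list) -> set:
    """Combine a vehicle's own detections with those broadcast by nearby vehicles."""
    combined = set(own_view)
    for view in shared_views:
        combined |= view
    return combined


own = {"cyclist ahead"}
from_other_cars = [{"pedestrian behind van"}, {"stalled car around the corner"}]
print(merge_detections(own, from_other_cars))
# the car now 'knows' about obstacles it cannot directly see
```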
There are five levels of autonomous vehicles: Level 1 (driver assistance), Level 2 (partial automation), Level 3 (conditional automation), Level 4 (high automation) and Level 5 (full automation).
The use of autonomous vehicles may provide certain advantages over human-driven cars, such as increased road safety: automated vehicles could potentially decrease the number of casualties, as the software used in them is likely to make fewer errors than human drivers. This would have a knock-on effect of improving traffic congestion, reducing stop-and-go traffic and shortening commute times. Over time it would also impact the environment positively by saving fuel, reducing greenhouse gases and encouraging car sharing.
Autonomous vehicles would also help those who are not able to drive due to health conditions, age or disabilities, as they could be a more convenient form of transport and allow those people to be more self-sufficient. They would also prevent accidents caused by driver fatigue and falling asleep at the wheel, as passengers could sleep on overnight journeys or do other activities such as reading or working.
However, there are many worries around autonomous vehicles, such as car hacking: because cars would use the same network to talk to each other, even a small hack could cause large collisions. Another worry is their limited ability to make judgements; for example, a car could have to decide between veering right and hitting pedestrians or going left and hitting a wall, which puts the passengers at risk, and neither is a desirable outcome. It would also cause a lot of jobs to be lost in industries such as taxi and Uber-style driving, bus driving, or even fast food delivery. In a world where the cost of living is so high, would we really want this?
The developers of autonomous cars use a large amount of data from image recognition systems and machine learning to help the car drive autonomously: cameras to identify traffic lights, trees, pavements and people; radar, which uses radio waves to detect close objects; and LiDAR, which uses light waves to detect objects further away, even in conditions such as fog. All this data is then combined to identify the vehicle's surroundings and whether the objects are likely to move or become an obstruction, meaning the more the car travels and drives, the smarter and more mature it becomes.
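As a rough picture of what "combining all this data" might mean, here is a small Python sketch that merges camera, radar and lidar detections into one view of the surroundings and flags which objects might move. The object labels, distances and the "likely to move" rule are all my own illustrative assumptions, not any manufacturer's real logic:

```python
# Toy detections: object name -> estimated distance in metres
CAMERA = {"traffic light": 40.0, "pedestrian": 12.0}
RADAR = {"car ahead": 8.0}            # radio waves, good for close objects
LIDAR = {"lorry in fog": 55.0}        # light waves, reaches further even in fog

MOVABLE = {"pedestrian", "car ahead", "cyclist"}   # things that might move


def fuse(*sensor_maps: dict) -> list:
    """Merge detections from every sensor, keep the closest estimate per object,
    and flag whether each object is likely to move into the vehicle's path."""
    merged = {}
    for readings in sensor_maps:
        for obj, dist in readings.items():
            merged[obj] = min(dist, merged.get(obj, float("inf")))
    return [(obj, dist, obj in MOVABLE)
            for obj, dist in sorted(merged.items(), key=lambda item: item[1])]


for obj, dist, moves in fuse(CAMERA, RADAR, LIDAR):
    print(f"{obj}: {dist} m away, likely to move: {moves}")
```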
Google's Waymo, which grew out of Google's autonomous car project, works by a driver or passenger setting the destination, after which the car calculates a suitable route. A rotating lidar sensor mounted on the roof then creates a 3D map of the environment around the vehicle within a 60-metre radius. A sensor on the left rear wheel calculates the car's position in relation to that map, radars on the front and rear bumpers calculate the distance to obstacles, and the AI software in the car is connected to all the sensors and also collects input from Google Street View. The AI simulates human perceptual and decision-making processes using deep learning and controls actions in the driver control systems such as steering and brakes, while the car's software consults Google Maps for things such as landmarks or traffic in the environment. If worst comes to worst, an override function is available to let a human take control of the vehicle.
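To summarise that sequence for myself, here is a very rough Python sketch of the pipeline: route planning, lidar mapping, obstacle checks, then a human override. Every function name and value is a made-up stand-in; this is purely illustrative and not Waymo's actual software or API:

```python
def plan_route(destination: str) -> list:
    """The car calculates a suitable route once the destination is set."""
    return ["leave driveway", f"follow main road to {destination}", "pull in"]


def build_3d_map(lidar_radius_m: int = 60) -> dict:
    """The roof-mounted rotating lidar maps the surroundings within ~60 m."""
    return {"radius_m": lidar_radius_m,
            "obstacles": {"cyclist": 15.0, "parked van": 30.0}}


def drive_to(destination: str, human_override: bool = False) -> str:
    for step in plan_route(destination):
        if human_override:
            return "override engaged: human driver is in control"
        surroundings = build_3d_map()
        nearest = min(surroundings["obstacles"].values())
        braking = nearest < 20   # slow down when something is close
        print(f"{step}: nearest obstacle {nearest} m, braking={braking}")
    return "arrived"


print(drive_to("the station"))
```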