ChatGPT To Boost Self-Driving Cars
In a Rush? Here are the Quick Facts!
- Engineers found LLMs like ChatGPT can enhance AV driving capabilities.
- LLMs help AVs interpret commands naturally, improving user experience.
- AVs using LLMs were rated more comfortable than traditional models.
Purdue University engineers have reported that autonomous vehicles (AVs) can leverage ChatGPT and other chatbots, powered by artificial intelligence algorithms known as large language models (LLMs), to enhance their driving capabilities.
Their study, to be presented Sept. 25 at the 27th IEEE International Conference on Intelligent Transportation Systems, explores how LLMs help AVs interpret passenger commands more naturally, potentially marking a breakthrough in human-vehicle interaction.
Unlike current AV systems, which require precise inputs, LLMs are trained to interpret human speech in a more flexible, conversational manner.
Dr. Wang, the study’s lead researcher, explains that traditional vehicle interfaces often involve pressing buttons or issuing explicit voice commands, whereas LLMs enable a more intuitive and natural dialogue with passengers.
Although LLMs don’t directly control the vehicle, the researchers explained that LLMs can be used to assist the AV’s existing systems, making the driving experience more personalized and responsive to passenger needs.
For their experiment, the research team trained ChatGPT with a variety of commands, both direct and indirect. Examples include “Drive faster” and “I feel motion sick,” teaching the model to adapt to different situations.
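To make the idea concrete, here is a minimal sketch of how direct and indirect commands might map to driving-style adjustments. This is an illustration, not the Purdue team’s code: a small lookup table stands in for the LLM, and the function name, parameter names, and adjustment values are all hypothetical.

```python
# Hypothetical sketch: a lookup table stands in for the LLM's
# interpretation of direct ("Drive faster") and indirect
# ("I feel motion sick") passenger commands.

def interpret_command(command: str) -> dict:
    """Return driving-parameter adjustments for a passenger command."""
    intents = {
        # Direct command: names the change explicitly.
        "drive faster": {"speed_delta_mph": 5, "smoothness": 0.0},
        # Indirect command: the desired change is implied.
        "i feel motion sick": {"speed_delta_mph": -5, "smoothness": 0.3},
    }
    # Unrecognized commands leave the driving style unchanged.
    return intents.get(command.strip().lower(),
                       {"speed_delta_mph": 0, "smoothness": 0.0})

print(interpret_command("Drive faster"))        # direct command
print(interpret_command("I feel motion sick"))  # indirect command
```

The point of the flexibility described above is that an LLM, unlike this table, generalizes to phrasings it has never seen.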
The researchers also tested other chatbots, like Google’s Gemini and Meta’s Llama AI, but found that ChatGPT performed the best.
The model processed these commands while taking into account real-time traffic conditions, weather, and data from the vehicle’s sensors.
The vehicle, which operated at level four autonomy (just one step below fully autonomous), used LLM-generated instructions to control its throttle, brakes, gears, and steering.
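As a rough illustration of that pipeline, the sketch below turns a high-level target (such as a requested speed) into throttle and brake setpoints while folding in one piece of context (weather). Everything here is an assumption for illustration: the function name, the rain rule, and the proportional-control gain are not from the study.

```python
# Illustrative sketch (not the Purdue system): converting a high-level
# target speed into throttle/brake setpoints, with real-time context
# (here, weather) adjusting the plan.

def plan_actuation(target_speed_mph: float, current_speed_mph: float,
                   weather: str) -> dict:
    # Hypothetical context rule: rain caps the allowed speed.
    if weather == "rain":
        target_speed_mph = min(target_speed_mph, 50.0)
    error = target_speed_mph - current_speed_mph
    # Simple proportional control: positive error opens the throttle,
    # negative error applies the brakes; both are clamped to [0, 1].
    throttle = max(0.0, min(1.0, 0.05 * error))
    brake = max(0.0, min(1.0, -0.05 * error))
    return {"throttle": throttle, "brake": brake,
            "target_mph": target_speed_mph}

print(plan_actuation(65, 60, "clear"))  # modest throttle, no brake
print(plan_actuation(65, 60, "rain"))   # target capped at 50, so it brakes
```

A real level-four stack would use far more sophisticated planning and control, but the division of labor is the same: the LLM proposes, the vehicle's control layer executes.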
In some experiments, Wang’s team tested a memory module they added to the system. This allowed the large language models to store information about the passenger’s past preferences. The models then used that data to personalize their responses to future commands.
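The memory module can be pictured as a small store of per-passenger preferences consulted on later commands. The class and method names below are hypothetical; the article does not describe the module's internals.

```python
# Hedged sketch of a preference "memory module": past passenger
# preferences are stored and reused to personalize later responses.

class PreferenceMemory:
    def __init__(self):
        self._prefs = {}  # passenger name -> preferred cruising speed

    def record(self, passenger: str, preferred_speed_mph: float) -> None:
        """Store a preference learned from a past ride."""
        self._prefs[passenger] = preferred_speed_mph

    def personalize(self, passenger: str, default_mph: float) -> float:
        """Use the stored preference, or the default if none exists."""
        return self._prefs.get(passenger, default_mph)

memory = PreferenceMemory()
memory.record("alice", 55.0)
print(memory.personalize("alice", 65.0))  # 55.0 — uses stored preference
print(memory.personalize("bob", 65.0))    # 65.0 — no history yet
```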
Experiments were conducted in a controlled environment, including a former airport runway in Columbus, Indiana, where the AV’s responses to commands were tested at highway speeds and intersections.
The researchers reported that participants found their rides in the LLM-assisted AV more comfortable than in traditional AV systems. The vehicle also consistently outperformed baseline safety standards, even when responding to new commands.
This is especially relevant as self-driving cars are increasingly used as taxis, where personalized experiences may enhance passenger satisfaction.
The large language models used in this study took an average of 1.6 seconds to process a passenger’s command, an acceptable delay in most situations but one that Wang noted must be reduced before the system could handle emergencies.
While this study didn’t focus on it, large language models like ChatGPT can sometimes “hallucinate,” meaning they generate plausible-sounding but incorrect responses.
To address this, the team set up safety measures to protect passengers when the models misunderstood commands. The models improved at interpreting commands over the course of a ride, but the hallucination problem must be resolved before LLMs can be deployed in AVs.
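The article does not describe those safety measures in detail, but one common pattern is a guard layer that clamps model suggestions to safe bounds and ignores responses that arrive too late. The sketch below is an assumption, not the study’s code: the 2.0-second deadline and 70 mph limit are invented values (the study reports only the 1.6-second average response time).

```python
# Illustrative safety guard (an assumption, not the study's code):
# LLM suggestions are clamped to safe bounds, and missing or slow
# responses fall back to the vehicle's current behavior.

RESPONSE_DEADLINE_S = 2.0   # assumed cutoff; the study reports a 1.6 s average
SPEED_LIMIT_MPH = 70.0      # assumed posted limit

def apply_with_guard(suggested_mph, latency_s, current_mph):
    # Ignore LLM output that is missing or arrives after the deadline.
    if suggested_mph is None or latency_s > RESPONSE_DEADLINE_S:
        return current_mph
    # Clamp whatever the model suggests to a safe range.
    return max(0.0, min(SPEED_LIMIT_MPH, suggested_mph))

print(apply_with_guard(80.0, 0.9, 60.0))  # 70.0 — clamped to the limit
print(apply_with_guard(65.0, 2.5, 60.0))  # 60.0 — response too slow, ignored
```

A guard like this bounds the damage a hallucinated command can do, but it does not eliminate the underlying problem, which is why further work is needed.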
Car manufacturers will also need to run more tests beyond the research already done by universities, and they will need regulatory approval before large language models can be fully integrated into AVs to control the vehicle’s driving functions, said Wang.