By Michael Pearson
A new study from the Georgia Institute of Technology School of Public Policy harnesses machine learning techniques to provide the best insight yet into the attitudes of electric vehicle (EV) drivers toward the existing charger network. The study findings could help policymakers focus their efforts.
In the paper, which is featured on the cover of the June 2020 issue of Nature Sustainability, a team led by Assistant Professor Omar Isaac Asensio trained a machine learning algorithm to analyze unstructured consumer data from 12,270 electric vehicle charging stations across the United States.
The study demonstrates how machine learning tools can be used to quickly analyze streaming data for policy evaluation in near real-time (see sidebar). Streaming data refers to data that arrives continuously in a feed, such as user reviews from an app. The study also revealed surprising findings about how EV drivers feel about charging stations.
For instance, it turns out that the conventional wisdom that drivers prefer private stations to public ones appears to be wrong. The study also finds potential problems with charging stations in the bigger cities, presaging challenges yet to come in creating a robust charging system that meets drivers' needs.
“Based on evidence from consumer data, we argue that it is not enough to just invest money into increasing the quantity of stations, it is also important to invest in the quality of the charging experience,” Asensio wrote.
Perceived Lack of Charging Stations a Barrier to Adoption
Electric vehicles are considered a crucial part of the solution to climate change: transportation is now the leading source of climate-warming emissions. But one major barrier to broader adoption of electric vehicles is the perception of a lack of charging stations, and the attendant “range anxiety” that makes many drivers nervous about buying an EV.
While that infrastructure has grown to be quite robust in recent years, the work hasn’t taken into account what consumers actually want, Asensio said.
“In the early years of EV infrastructure development, most policies were geared to using incentives to increase the quantity of charging stations,” Asensio said. “We haven’t had enough focus on building out reliable infrastructure that can give confidence to users.”
This study helps rectify that shortcoming by offering evidence-based national analysis of actual consumer sentiment, as opposed to indirect travel surveys or simulated data used in many analyses.
Asensio directed the study with a team of five students in public policy, engineering, and computing. Two were from Georgia Tech: Catharina Hollauer, a recent graduate of the H. Milton Stewart School of Industrial and Systems Engineering, and Sooji Ha, a dual Ph.D. student in the School of Civil & Environmental Engineering and the School of Computational Science & Engineering.
The other three were participants in the 2018 Georgia Tech Civic Data Science Fellows program, which draws talented students from around the country to the Georgia Tech campus for a summer of research and learning. They are Kevin Alvarez of North Carolina State University, Arielle Dror of Smith College, and Emerson Wenzel of Tufts University.
EV Charging Sore Spots Revealed
Asensio’s team used deep-learning text classification algorithms to analyze data from a popular smartphone app for EV users. Conventional methods would have taken most of a year; the team’s approach cut the task down to minutes while classifying sentiment with accuracy similar to that of human experts.
The study found that workplace and mixed-use residential stations get low ratings, with frequent complaints about lack of accessibility and signage. Fee-based charging stations tend to receive poorer reviews than free ones. But it is stations in dense urban centers that really draw complaints, according to the study.
When researchers controlled for location and other characteristics, stations in dense urban areas showed a 12% to 15% increase in negative sentiment compared to non-urban locations.
The finding could indicate a broad range of service quality issues in the largest EV markets, including things like malfunctioning equipment and an insufficient number of chargers, Asensio said.
The highest-rated stations are often located at hotels, restaurants, and convenience stores, a finding Asensio said argues in favor of incentive-based management practices in which chargers are installed to draw customers. Stations at public parks and recreation facilities, RV parks, and visitor centers also do well, according to the study.
But, contrary to theories predicting that private stations should provide more efficient services, the study found no statistically significant difference in user preferences when it comes to public versus private chargers.
That finding could be an inducement to more heavily invest in public charging infrastructure to meet future growth, Asensio said. Such a network was cited in a consensus study by the National Research Council as key to helping overcome barriers to EV adoption.
Improving Policy Evaluation Beyond EVs
Overall, Asensio said the study points to the need to prioritize consumer data when considering how to build out infrastructure, especially when it comes to increasingly popular requirements for charging stations in new buildings.
But EV policy is not the only area where the study’s deep learning techniques can be used to analyze this kind of material. Such techniques could be adapted to a broad range of energy and transportation issues, allowing researchers to deliver near real-time analysis with just minutes of computation, compared to time lags sometimes measured in months or years using more traditional methods.
“The follow-on potential for energy policy is to move towards automated forms of infrastructure management powered by machine learning, particularly for critical linkages between energy and transportation systems, and smart cities,” Asensio said.
The article, “Real-time Data from Mobile Platforms to Evaluate Sustainable Transportation Infrastructure,” was published in Nature Sustainability on June 1. The article is available at https://doi.org/10.1038/s41893-020-0533-6.
The research was supported by National Science Foundation Award No. 1931980, the Civic Data Science REU program at Georgia Tech (NSF Award No. IIS-1659757), the Anthony and Jeanne Pritzker Family Foundation, and the Sustainable LA Grand Challenge.
About the Deep Learning Model
To perform their analysis, Asensio and his team used a convolutional neural network (CNN), a deep learning method often used in image recognition but also increasingly used for processing language. CNNs, designed to mimic the human brain, can learn what words to focus on in unstructured text reviews.
In this study, for instance, the model automatically learned the importance of the term “ICE’d,” which electric vehicle drivers use to describe their frustration when an internal combustion engine vehicle blocks access to a charging station.
Human reviewers, on the other hand, did not reliably recognize the term as negative when asked to classify it in a nationally representative sample of reviews in an experiment conducted for Asensio’s team.
The study results show that artificial intelligence can accurately classify text describing complex norms, such as charging etiquette, at a level comparable to human experts.
In fact, the model implemented by Asensio’s team achieved sentiment prediction accuracy of 84.7% when compared to human experts.
The algorithm used by Asensio’s team converts the review text into a series of numbers representing words based on their similarity to one another. These numeric word representations are tuned using billions of words from Google News, which helps researchers reconstruct patterns of natural language. Asensio’s model then uses filters to scan text segments of varying lengths, called “n-grams,” looking for the words most likely to predict sentiment or identify topics of interest.
Take, for instance, the review “got a charging error first few times, got av to reset it remotely and it worked finally.” The filter examining three-word n-grams broke the review down into a series of phrases, such as “got a charging,” “a charging error,” and “charging error first.” After processing was complete, the model identified “a charging error” as the most predictive phrase.
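That windowing step can be sketched in a few lines of Python. The simple punctuation-stripping tokenizer here is an assumption for illustration only; the paper’s actual preprocessing pipeline is not described in this article.

```python
import string

def three_grams(review):
    """Break a review into overlapping three-word phrases (n-grams),
    the windows the model's filters scan for sentiment cues."""
    # Naive tokenization: drop punctuation, split on whitespace.
    tokens = review.translate(str.maketrans("", "", string.punctuation)).split()
    return [" ".join(tokens[i:i + 3]) for i in range(len(tokens) - 2)]

review = ("got a charging error first few times, "
          "got av to reset it remotely and it worked finally")
print(three_grams(review)[:3])
# -> ['got a charging', 'a charging error', 'charging error first']
```

The trained model scores each of these windows; the highest-scoring one is what the article calls the most predictive phrase.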
The model then weighed these outputs to produce a prediction classifying the review as either positive or negative.
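A minimal sketch of the scan-and-pool step described above, assuming random stand-ins for both the pretrained word embeddings and the learned filter (so which phrase wins here is arbitrary, unlike in the trained model):

```python
import random

random.seed(0)
DIM = 8  # toy embedding size; the real pretrained vectors are much larger

# Random vectors stand in for embeddings pretrained on Google News text.
words = "got a charging error first few times".split()
embed = {w: [random.gauss(0, 1) for _ in range(DIM)] for w in words}

def scan(tokens, filt, n=3):
    """Slide one n-gram filter across the token sequence, the core
    convolution of a CNN text classifier: each window of n word
    vectors is concatenated and scored against the filter by a
    dot product."""
    out = []
    for i in range(len(tokens) - n + 1):
        window = [x for t in tokens[i:i + n] for x in embed[t]]
        out.append(sum(a * b for a, b in zip(window, filt)))
    return out

filt = [random.gauss(0, 1) for _ in range(DIM * 3)]  # one untrained 3-gram filter
responses = scan(words, filt)

# Max-pooling keeps the strongest window -- the phrase this filter treats
# as most predictive. A trained output layer would then map the pooled
# responses to a positive/negative label.
best = max(range(len(responses)), key=responses.__getitem__)
print(" ".join(words[best:best + 3]))
```

A real model learns many such filters from labeled reviews rather than using a single random one; this sketch only shows the mechanics of scanning and pooling.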
That’s where most machine learning analysis ends. But, being an economist and policy scholar, Asensio further analyzed the data for social meaning. So, he conducted an econometric analysis that adjusts for potential bias in the review data. That resulted in a policy analysis suggesting, among other things, the need for greater attention to the quality of charging stations (see main story).
The model's speed and accuracy suggest the technique is ready for applications involving rapid social and policy analysis of emerging infrastructure, Asensio said.
According to the paper, it would have taken 32 weeks to classify the 127,000 reviews by hand, work the model built by Asensio’s team was able to perform in minutes once trained.
“We show that using computational tools, it is possible to develop more sophisticated performance indicators from unstructured data that offer the potential to update in near real-time,” Asensio said. “This is a major step forward from current practice that relies on indirect travel surveys or simulations, which can be costly and time-intensive to administer.”