Artificial Intelligence is Changing the Conversation

Dr. Aaron C. Elkins

Artificial intelligence, or AI, is positioned to restructure every aspect of the world within the next few years. Regular and ongoing collaboration between humans and intelligent machines will enable solutions for a variety of pressing global issues. In these early stages of artificial intelligence, it is unknown precisely how machines will integrate into our daily lives. Dr. Aaron C. Elkins, researcher, professor, and director of the Artificial Intelligence Lab at San Diego State University (SDSU), has been conducting research in AI for over ten years. He recently spoke to us to share his perspective on the AI landscape and outline some of the larger implications of adopting this technology.

Applications for Artificial Intelligence

I think the most obvious way AI can help is by addressing what will be a very serious problem of the future–an extra few billion people crossing international borders through transit systems that are not prepared to secure them. It's not obvious now, because we are not yet in 2035, when it's anticipated that demand will completely outstrip our current screening process. In that sense, it could be pretty critical to have tools and technologies, like the system I'm working on, in place to help keep our airports and borders more secure–but also operating.

There are a few projects we research at SDSU, and in my lab. The one I focus on, and have been working on for ten years, is what we call the AVATAR. The AVATAR is essentially an automated, artificial intelligence-based interviewing system, built as a kiosk. The research behind it has focused on anticipating future travel traffic at airports and borders around the world.

Deception detection is one area of research, but the underlying science and technology are more widely applicable. I view agent/human interaction and human/robot collaboration as the future of society, and I see security as just one specific area of that larger interaction. I'm always very interested in the underlying tech; I think natural human/computer interaction is the future we are moving towards.

“I think natural human/computer interaction is the future we are moving towards.”


Even the most conservative projections say that transit systems are going to double in the traffic they have to serve. The fact is that more and more people are going to come through, and there isn't much that can be done about it. You can't just add another runway or build new terminals; it's not possible physically or operationally. In anticipation of that, the goal of this system was to build a smarter airport.

The AVATAR kiosk.


People can check themselves in at a kiosk without seeing an agent–and do the same for the security process as well. You can come in, check in, interact with this intelligent agent, answer some questions, and have it conduct an interview in a more efficient manner. Beyond the crowded airport of the future, we are also looking at more complex and harder-to-predict security problems like terrorism, smuggling, drugs, and human trafficking.

One thing that is becoming patently clear to the security agencies is that threats are becoming more versatile at getting around sensors. You can't just rely on x-rays, chemical detectors, or other signatures that look for explosives. There needs to be some sort of intervention to interact with a potential threat, and to elicit enough information and feedback to determine whether it is one. It's not enough to send people through a metal detector or scan their bags. The system is meant to deliver the efficiency of speedier travel through airports while also increasing the effectiveness of the existing human manpower.

One of the things that we focus on is imposters–detecting whether the person holding a legitimate passport is actually the person on it. You wouldn't catch it by looking for a fraudulent document, because the document itself is genuine; the traveler just isn't the person it belongs to. That happens quite a lot: a person has a legitimate document and simply looks similar enough to the photograph.


We are also heavily invested in cybersecurity. With cybersecurity we have tons of data about the network infrastructure, the existing vulnerabilities, and what assets exist, and we use AI to analyze and reconcile that data to predict what is most likely to be attacked. We work with the US Navy to help protect their battleships, which are essentially mobile, highly complex networks susceptible to different kinds of cyberthreats. The ships cannot simply run updates once they are on a mission, so we are using AI to identify potential threats and vulnerabilities.


I also work with data from patients who have Parkinson's Disease. One of the things we are working on is how we can reduce their tremors. We work with patients who have deep brain stimulation implants. They have electrodes in their brains that deliver a certain amount of electrical voltage, which affects their motor functioning and tremors. We developed an AI algorithm that can automatically calibrate these deep brain stimulators, finding the optimal settings to reduce tremors and physical discomfort.
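The calibration problem Dr. Elkins describes can be pictured as a simple optimization loop: try a stimulation setting, measure the resulting tremor, and search for the setting that minimizes it. The sketch below is purely illustrative–the lab's actual algorithm, the voltage ranges, and the `measure_tremor` sensor function are all hypothetical stand-ins:

```python
import random

def measure_tremor(voltage):
    """Hypothetical stand-in for a noisy tremor measurement (e.g. from an
    accelerometer). Assumes tremor is minimized near some unknown optimum."""
    optimal = 2.6  # unknown to the calibration routine
    return (voltage - optimal) ** 2 + random.gauss(0, 0.01)

def calibrate(lo=0.0, hi=5.0, steps=20, trials=3):
    """Coarse grid search: average repeated noisy measurements at each
    setting and return the voltage with the lowest observed tremor."""
    best_v, best_score = None, float("inf")
    for i in range(steps + 1):
        v = lo + (hi - lo) * i / steps
        score = sum(measure_tremor(v) for _ in range(trials)) / trials
        if score < best_score:
            best_v, best_score = v, score
    return best_v

print(round(calibrate(), 2))  # settles near the (unknown) optimum
```

A real system would of course use safer, more sample-efficient search than a raw grid sweep, since each "measurement" involves a patient; the point is only that auto-calibration reduces to optimizing a setting against sensor feedback.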

I’ve talked with hospitals about using a system like AVATAR for triage in the emergency room. I could see that changing things–making the check-in process at a hospital more efficient. We have also been asked to incorporate some of the behavioral sensors used in the AVATAR into therapy, to evaluate the mental state of someone undergoing treatment.

Artificially Intelligent Systems

In some ways, you could say they are all just data analytics projects. They are big data, and we use different kinds of machine learning and statistical learning algorithms to model some phenomena and make predictions. AI isn’t a really well-defined term. There are a few trends in nomenclature. Usually, when we are talking about AI, we are really saying we are using big data distributed file systems to analyze extreme amounts of data at the same time, and using deep learning and machine learning to analyze those data and make predictions.

When I started as a student, it was called business intelligence. That was what we now mean by AI. Then, it became big data. From big data, over the course of a few years, with the addition of deep learning and deep neural networks, it started to become AI. You can think of AI as really just a synonym for having large amounts of data and applying machine learning to derive insights or predictions that mimic human actions or expertise.
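Stripped of the branding, the workflow he describes–collected examples plus a learning algorithm that mimics expert judgment–fits in a few lines. This is a toy sketch, not the lab's actual pipeline: the features, the made-up "expert" labels, and the use of k-nearest neighbors are all illustrative assumptions.

```python
import math

# Toy dataset: each row is (pupil_change_mm, speech_rate_change), labeled
# with a hypothetical expert judgment the model learns to mimic (1 = flagged).
data = [
    ((0.1, -0.2), 0), ((0.2, 0.1), 0), ((0.0, 0.0), 0),
    ((0.9, 0.8), 1), ((1.1, 0.7), 1), ((0.8, 1.0), 1),
]

def predict(x, k=3):
    """Minimal 'machine learning': vote among the k nearest past examples."""
    nearest = sorted(data, key=lambda row: math.dist(x, row[0]))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

print(predict((1.0, 0.9)))  # resembles the flagged examples
print(predict((0.1, 0.0)))  # resembles the unflagged examples
```

Swap the six hand-written rows for millions of sensor readings and the nearest-neighbor vote for a deep network, and you have the "big data plus machine learning" recipe the interview describes.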

Machines are Better at Analyzing Behavior

Machines are better at things that we are not that good at. They are good at picking up on millimeter changes in pupil dilation, changes in speech or tone or speed. We are really good at language and plausibility, unpacking a story and making sure that it is internally consistent. Computers are not good at that. They don’t really understand language the way we do. Whenever our team develops a system, it’s usually to highlight the strength of the system, and pair it with the strength of humans who are good at listening to a story and assessing what seems plausible or implausible–while the system analyzes the things we aren’t attending to, because we are listening to the words.

In developing the AVATAR for detecting deception or risk, we found that people are more overconfident than they should be about how well they can lie, and how well they can detect deception. Over and over, thousands of studies have replicated that humans, even experts, tend to achieve detection rates of around 54%. I feel good about how much better we can do in terms of detection. Regarding people being able to keep their feelings hidden, there’s only so much you can control in an interaction before it starts to really affect your behavior and performance.

If you think about the mindset of a person in a conversation who isn’t even lying, it is a pretty involved interaction. You’re talking; you’re trying to vocalize words and construct your thoughts, interpret responses from your speaking partner, keep your consistency and your message, gauge their reaction, try to embellish your words or behavior, try to build an impression of yourself–and this is just normal conversation. Then, add in that they are now lying and having to keep their story straight–and also gauge the suspicion of their speaking partner to decide whether to suppress their emotions or fake them. It causes them to leak certain kinds of behaviors that we can pick up on.

Controlling your emotions is not easy. When people do it, they often restrict all emotion and become very rigid, and we can see that. When people try to fake emotions, we can see the difference between fake and real emotions–there is a pretty big difference. But maybe they are amazingly talented at hiding their emotions because they are a trained actor. Then, it will be in their pupils, their speech pattern, language, or posture. There will be something; you can’t control it all.

Building Trust between Humans and Machines

I think people need to be more honest about what AI can and cannot do. We need to focus on what AI does great and what humans do great–and mesh those together. There are a lot of questions about what causes humans to reject artificial intelligence technology. I think something that has led to more resistance is that the machines have gotten smarter, to the point where AI is becoming almost a pseudo-expert rather than a fancy calculator. I think people tend to trust systems that stick to simple rote calculations. There is also a lot of science fiction on television. That is not to say there aren’t valid criticisms; I think it’s just the popular consciousness right now. The computing hasn’t changed–it’s mostly branding.

I don’t know if people aren’t trusting machines or are feeling threatened by them. That’s what we are looking at right now in research to try to explain. One way I’ve explained it is that people felt threatened–the system made a decision that a human didn’t agree with, and the human responded defensively. It is the same way you would respond defensively to someone with an opposing political opinion. I think we need to change the way we interface with the decision-making process: it’s not just a “super smart” AI that dictates while you listen–it needs to be more of a collaboration.

From the perspective of stereotypes and discrimination, you can be guaranteed the machine has no stereotypes imprinted. I’ve done experiments with Mexican and American participants, and the Mexican participants really preferred dealing with the technology, because they felt it was judging them objectively from their behaviors and sensor data. There was no preconceived notion. From the perspective of a robot, I don’t think anyone is going to feel they are being stereotyped or discriminated against. It didn’t judge them based on their accent or apply some mental model; everything was analyzed equally. That could be an equalizer in fairness and perceived fairness.

Robotics Are the Next Big Step

Dr. Elkins facilitates an interaction between robots Pepper and AVATAR.


Ultimately, the fact that we are using a kiosk [for AVATAR] is a limitation of the technology. The true end goal is to have an embodied agent moving around. You can think of AI as the brain, and robotics as the body. Right now, we have disembodied voices speaking to us–that’s not real presence. When we finally have physically present robots, things will change. Robotics and virtual or augmented reality are the two areas to focus on; robotics is the tipping point. People had to get comfortable with microphones and listening devices in their homes, and now they have cameras. People will get accustomed to these technologies if they provide value.

You’re going to see them converge, where we’re no longer talking about AI as software–we will be talking about it as robotics. I think what makes the technology intelligent is pairing it with physicality. Most of the research shows that people don’t have strong reactions to disembodied voices. In general, it needs a face, expressions, an identity. It needs to seem like an intelligent social actor. Then, things change, and people start to consider them differently. We see faces everywhere; it’s natural. If you look at a cloud long enough, you’re going to see a face. Our brains are hard-wired for it. The other side of the coin is how these agents will look, and that will probably be the subject of a lot of debate. It could very well be that everyone has their own personal agent that looks a certain way that’s ideal for them.

Anytime we can automate something to let us focus on more creative work, or to take better advantage of our expertise–that’s a good place for automation. That is why autonomous cars are going to be the first major milestone. What is more tedious than driving in traffic?

Concerns and Hopes for the Future

My biggest concern has nothing to do with my research, and everything to do with my career–it’s education. We need to keep the quality of baseline public education a priority for society, and not let computers take over critical thinking and decision-making–i.e., learned helplessness. Computers and drones make it so easy; they are our transactive memory, and they allow us to offload so much, but we need to make sure that we are simultaneously educating people. I worry that education has become less of a priority and that critical thinking hasn’t caught up with the new informational and technological landscape.

These are good problems to have, to be perfectly frank. It’s amazing the world we live in. New generations are coming up with new ideas. They are growing up with this tech as a common understanding of the world and building on it. I think all of this is exciting, even while we are wrestling with what AI is–and how it will become a part of our lives. I think everyone’s life will be better for it. We will have fewer car accidents, and more people using their time to focus on creative endeavors. And better computers. Computers are going to continue to evolve beyond advanced calculators, actually helping us live better lives.

“Computers are going to continue to evolve beyond advanced calculators, actually helping us live better lives.”

Aaron C. Elkins is an Assistant Professor in the Department of Management Information Systems and Director of the SDSU Artificial Intelligence Lab. Before joining the faculty at SDSU, Dr. Elkins was a postdoctoral AI researcher at Imperial College London; he earned his PhD from the University of Arizona. Elkins’ research focuses on developing AI models that fuse physiological and behavioral sensor data to predict human emotion and deception. He conducts experiments investigating automated deception detection in the laboratory, at borders, and in airports. Complementary to the development of advanced AI systems is their impact on the people using them to make decisions: Elkins also investigates how human decision makers are psychologically affected by, perceive, use, and incorporate next-generation technologies into their lives.
