Sixteen years ago, Tim Pleskac came to PBS as a postdoctoral fellow in the lab of Distinguished Professor Jerome Busemeyer. After holding several other academic positions in the U.S. and abroad, Pleskac returns to Bloomington as one of three new PBS faculty hired under the “Faculty 100” initiative to “amplify IU’s success in core research areas,” among them “human and artificial intelligence.”
As Pleskac sees it, his work “bridges the divide between human and artificial intelligence” by using and developing computational models to understand and improve human decision making. Simply put, these models “are at some level a set of equations or computer code,” he explains, “that take information from a scene to show how that information is processed and transformed into a decision.” Their aim is to mirror, in some way, what goes on in a decision maker’s mind. As he describes it, “We always talk about opening the black box of the human mind. Can we open it up and say, ‘How do we take information and turn it into a response and specify the process by which information is used?’”
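To make the idea of such a model concrete, the sketch below simulates one widely used family of decision models of this kind, in which noisy evidence is accumulated over time until it crosses a threshold and triggers a response. It is offered only as an illustration of what “equations or computer code” for a decision might look like; the function, parameter values, and thresholds are illustrative assumptions, not Pleskac’s actual models.

```python
# A minimal sketch of an evidence-accumulation decision model.
# All names and numbers are illustrative assumptions, not published models or parameters.
import random

def simulate_decision(drift=0.1, noise=1.0, threshold=2.0, dt=0.01, max_time=5.0):
    """Accumulate noisy evidence over time until it crosses a decision threshold.

    Returns the choice ("A", "B", or "no decision") and the simulated response time.
    """
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold and t < max_time:
        # Each time step adds a small signal (drift) plus random noise,
        # standing in for information sampled from the scene.
        evidence += drift * dt + random.gauss(0, noise) * dt ** 0.5
        t += dt
    if evidence >= threshold:
        choice = "A"
    elif evidence <= -threshold:
        choice = "B"
    else:
        choice = "no decision"
    return choice, t

# Simulate a few decisions to observe both the choice made and how long it took,
# the two behavioral signatures such models are typically fit to.
for _ in range(3):
    print(simulate_decision())
```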
Yet, in addition to bridging the divide between human and artificial intelligence in this way, his work cuts across other boundaries too, from disciplinary divides to those that separate the controlled lab environment from the more chaotic real-world scenarios in which human decisions take place.
Take, for example, his ongoing analysis of the factors that lead to a police officer’s decision to use deadly force against a suspect and of how race figures into that process. While in other projects he has collaborated with clinical psychologists, neuroscientists, behavioral psychologists, and others, for this project he collaborates with experts in social psychology and social cognition so as to incorporate social dynamics into the models. “Police don’t make the decision to shoot alone. Other police who are with them are also making this decision, so I’m interested in how one person’s decision affects another.” And while people make decisions in groups in many situations, more and more they are also making decisions with the help of machines, so that one day, he believes, his work may help improve both human teams and human-machine teams.
In these projects and others, Pleskac seeks to incorporate as much of the real-world situation into the lab experiment as possible. In the “decision to shoot” studies, for example, he and his team work with a video scenario, while police officer participants hold a modified handgun that still “gives a kick,” as he says, when they pull the trigger. Across this and other projects, Pleskac thus seeks to achieve what he calls “translational modeling”: translating models for understanding behavior inside the lab into models that explain behavior in real-world environments. “People have done a wonderful job modeling behavior in a controlled laboratory task,” he says. Now he wants “to take these models outside the lab, to see how far we can push them to predict behavior outside the lab.”