Sean Breeden

Full Stack PHP, Python, AI/ML Developer
"AI won’t replace you, but people who are using AI will replace you" --Anonymous

MIT’s New AI Model Predicts Human Behavior With Uncanny Accuracy

Sunday, April 21st, 2024

A new technique can be used to predict the actions of human or AI agents that behave suboptimally while working toward unknown goals.

Researchers at MIT and elsewhere developed a framework that models the irrational or suboptimal behavior of a human or AI agent based on its computational constraints. Their technique can help predict an agent's future actions, for instance, in chess matches.

To build AI systems that can collaborate effectively with humans, it helps to have a good model of human behavior to start with. But humans tend to behave suboptimally when making decisions.

This irrationality, which is especially difficult to model, often boils down to computational constraints. A human can’t spend decades thinking about the ideal solution to a single problem.

Development of a New Modeling Approach

Researchers at MIT and the University of Washington developed a way to model the behavior of an agent, whether human or machine, that accounts for the unknown computational constraints that may hamper the agent’s problem-solving abilities.

Their model can automatically infer an agent’s computational constraints by seeing just a few traces of their previous actions. The result, an agent’s so-called “inference budget,” can be used to predict that agent’s future behavior.
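To make the idea concrete, here is a toy sketch of budget inference (an illustration of the general approach, not the authors' actual algorithm or code). We model an agent that examines only a limited number of options before choosing, observe a few of its choices, and then pick the budget that makes those observed choices most likely:

```python
import math
import random

def act(values, budget, rng):
    """A boundedly rational agent: examines only `budget` options,
    sampled at random, and picks the best option it saw."""
    order = list(range(len(values)))
    rng.shuffle(order)
    return max(order[:budget], key=lambda i: values[i])

def infer_budget(traces, max_budget, samples=2000):
    """Infer an agent's inference budget by maximum likelihood:
    for each candidate budget, estimate by simulation how likely an
    agent with that budget is to reproduce each observed choice,
    and return the budget with the highest total log-likelihood."""
    best_b, best_ll = 1, float("-inf")
    for b in range(1, max_budget + 1):
        ll = 0.0
        for values, chosen in traces:
            rng = random.Random(0)  # fixed seed for reproducible estimates
            hits = sum(act(values, b, rng) == chosen for _ in range(samples))
            ll += math.log(max(hits / samples, 1e-6))  # avoid log(0)
        if ll > best_ll:
            best_b, best_ll = b, ll
    return best_b
```

Watching an agent that consistently picks good-but-not-best options, the likelihood peaks at a small budget, while an agent that always picks the optimum is best explained by the full budget. The paper's actual method infers a latent budget for an anytime planning algorithm (for example, a truncated chess search) rather than this toy option-sampling model.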

Practical Applications and Model Validation

In a new paper, the researchers demonstrate how their method can be used to infer someone’s navigation goals from prior routes and to predict players’ subsequent moves in chess matches. Their technique matches or outperforms another popular method for modeling this type of decision-making.

Ultimately, this work could help scientists teach AI systems how humans behave, enabling these systems to respond better to their human collaborators. Being able to understand a human's behavior, and then to infer their goals from that behavior, could make an AI assistant much more useful, says Athul Paul Jacob, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this technique.


(image generated by DALL-E on ChatGPT-4)
