To reach the level of intelligence that humans possess, artificial agents must be able to interact autonomously with other agents and humans, building rich models of how other minds work through these interactions. This learning process should mirror aspects of human development, in which infants and children take an active role in their own social and cognitive growth by playing with parents, siblings, and friends in ways that expand their mental capacities. Recent research in artificial intelligence has built this active learning process into autonomous machines, which consequently acquire cognitive abilities through developmental milestones analogous to a child's. We propose to extend this line of research to embodied multi-agent settings, where agents must actively learn not only how the world works but also the social dynamics of interacting with other agents. This project will simultaneously tackle the algorithmic goal of building agents that can interact with humans in realistic ways and contribute to psychology and neuroscience by comparing these models against behavioral and neural data from social tasks.