Over the past 15-20 years, logic has become an increasingly popular and successful framework for modelling and analysing 'multi-agent systems': scenarios in which several human or artificial agents act autonomously and interact in a common environment in pursuit of individual and collective goals. Multi-agent systems come in many diverse forms, including social, computer, and robotic networks, businesses, markets, and entire societies. More abstractly, they include and extend multi-player games.

Several logical systems have been proposed and studied for modelling, and reasoning about, various aspects of multi-agent systems and their interaction, including knowledge, beliefs, desires, intentions, social and legal norms, actions, and strategic abilities.
 
Besides the purely technical and intrinsically logical problems arising in these studies, a multitude of new conceptual, and sometimes truly philosophical, challenges have emerged from them.
 
In this talk I will start with a brief general overview of the area and will then focus on issues relating to the knowledge, actions, and strategic abilities of agents and groups (coalitions) of agents to achieve objectives, particularly in the context of incomplete information. I will present some formal models of these concepts and some logical systems designed to reason about them. I will briefly mention some technical results and will then concentrate on some of the more conceptual problems arising in the analysis of the interaction between knowledge, actions, and strategic abilities of agents.
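
As a small illustration (not part of the abstract itself, but representative of the kind of logical systems it describes): in alternating-time temporal logic (ATL) and its epistemic extensions, coalition modalities and knowledge operators can be combined to express strategic abilities under incomplete information, for example:

```latex
% Coalition A has a joint strategy to eventually bring about \varphi:
\langle\!\langle A \rangle\!\rangle \Diamond \varphi

% Agent a knows that coalition A can maintain \varphi forever:
K_a \, \langle\!\langle A \rangle\!\rangle \Box \varphi
```

The interplay between the knowledge operator $K_a$ and the coalition modality $\langle\!\langle A \rangle\!\rangle$ is precisely where many of the conceptual subtleties mentioned above arise, e.g. whether agents need to know a strategy in order to be said to have one.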