Theory of mind (ToM), the ability to mentalize the beliefs, preferences, and goals of other entities, plays a crucial role in successful collaboration in human groups [56], human-AI interaction [57], and even in multi-agent LLM systems [15]. Consequently, the capacity of LLMs for ToM has been a major research focus. Recent literature on evaluating ToM in large language models has shifted from static, narrative-based testing to dynamic agentic benchmarking, exposing a critical "competence-performance gap" in frontier models. While models like GPT-4 demonstrate near-ceiling performance on basic literal ToM tasks, explicitly tracking higher-order beliefs and mental states in isolation [95], [96], they frequently fail to operationalize this knowledge in downstream decision-making, a failure formally characterized as Functional ToM [97]. Interactive coding benchmarks such as Ambig-SWE [98] further illustrate this gap: agents rarely seek clarification under vague or underspecified instructions and instead proceed with confident but brittle task execution. (Of course, this limited use of ToM resembles many human operational failures in practice!) The disconnect is quantified by the SimpleToM benchmark, where models achieve robust diagnostic accuracy regarding mental states but suffer significant performance drops when predicting the behaviors those states imply [99]. In situated environments, the ToM-SSI benchmark identifies a cascading failure in the Percept-Belief-Intention chain: models struggle to bind visual percepts to social constraints and often perform worse than humans in mixed-motive scenarios [100].