Partnership on AI has published a progress report on post-deployment governance practices for foundation models. The document, titled “2026 Transparency Report on Foundation Model Impacts”, measures the progress of 13 foundation model providers* in publicly documenting the impacts of their foundation models. In carrying out their analysis, authors Jacob Pratt and Albert Tanjaya reviewed more than 150 papers, articles, websites, and reports.
The report focuses on how well the providers have improved in four practices:
Share usage information
Enable and share research on post-deployment societal impact indicators
Report incidents and policy violations
Share user feedback
For assessment, these four practices were broken down into 19 processes, or activities, that support providers in adopting each practice.
The authors highlight two key findings from their investigations:
Although several leading organizations are defining what information to share and how, the rest are slow to adopt information-sharing practices.
The landscape of public impact data is scattered and incomplete. For example, we know more about usage and potential labour market impacts than ever, but we don’t know the prevalence of harms, serious incidents, or broader societal impacts.
This report builds on previous work from Partnership on AI, which covered guidance for safe foundation model deployment (published in 2023) and progress-tracking methodology (2025).
*The 13 foundation model providers are: Alibaba, Allen Institute for AI, Amazon, Anthropic, Cohere, DeepSeek, Google, IBM, Meta, Mistral, Microsoft, OpenAI, Stability AI.
Find out more
2026 Transparency Report on Foundation Model Impacts
Guidance for Safe Foundation Model Deployment
Documenting the Impacts of Foundation Models
