Exploring institutions for global AI governance

New white paper investigates models and functions of international institutions that could help manage opportunities and mitigate risks of advanced AI.
Growing awareness of the global impact of advanced artificial intelligence (AI) has inspired public discussions about the need for international governance structures to help manage opportunities and mitigate the risks involved.
Many discussions have drawn on analogies with the ICAO (International Civil Aviation Organisation) in civil aviation; CERN (European Organisation for Nuclear Research) in particle physics; IAEA (International Atomic Energy Agency) in nuclear technology; and intergovernmental and multi-stakeholder organisations in many other domains. And yet, while analogies can be a useful start, the technologies emerging from AI will be unlike aviation, particle physics, or nuclear technology.
To succeed with AI governance, we need a better understanding of:
- What specific benefits and risks we need to manage internationally.
- What governance functions those benefits and risks require.
- What organisations can best provide those functions.
Our latest paper, with collaborators from the University of Oxford, Université de Montréal, University of Toronto, Columbia University, Harvard University, Stanford University, and OpenAI, addresses these questions and investigates how international institutions could help manage the global impact of frontier AI development, and make sure AI’s benefits reach all communities.
The critical role of international and multilateral institutions.
Access to certain AI technology could greatly enhance prosperity and stability, but the benefits of these technologies may not be evenly distributed or focused on the greatest needs of underrepresented communities or the developing world. Inadequate access to internet services, computing power, or availability of machine learning training or expertise, may also prevent certain groups from fully benefiting from advances in AI.
International collaborations could help address these issues by encouraging organisations to develop systems and applications that address the needs of underserved communities, and by ameliorating the education, infrastructure, and economic obstacles to such communities making full use of AI technology.
Additionally, international efforts may be necessary for managing the risks posed by powerful AI capabilities. Without adequate safeguards, some of these capabilities – such as automated software development, chemistry and synthetic biology research, and text and video generation – could be misused to cause harm. Advanced AI systems may also fail in ways that are difficult to anticipate, creating accident risks with potentially international consequences if the technology isn’t deployed responsibly.
International and multi-stakeholder institutions could help advance AI development and deployment protocols that minimise such risks. For instance, they might facilitate global consensus on the threats that different AI capabilities pose to society, and set international standards around the identification and treatment of models with dangerous capabilities. International collaborations on safety research would also further our ability to make systems reliable and resilient to misuse.
Lastly, in situations where states have incentives (e.g. deriving from economic competition) to undercut each other's regulatory commitments, international institutions may help support and incentivise best practices and even monitor compliance with standards.
We explore four complementary institutional models to support global coordination and governance functions:
- An intergovernmental Commission on Frontier AI could build international consensus on opportunities and risks from advanced AI and how they may be managed. This would increase public awareness and understanding of AI prospects and issues, contribute to a scientifically informed account of AI use and risk mitigation, and be a source of expertise for policymakers.
- An intergovernmental or multi-stakeholder Advanced AI Governance Organisation could help internationalise and align efforts to address global risks from advanced AI systems by setting governance norms and standards and assisting in their implementation. It may also perform compliance monitoring functions for any international governance regime.
- A Frontier AI Collaborative could promote access to advanced AI as an international public-private partnership. In doing so, it would help underserved societies benefit from cutting-edge AI technology and promote international access to AI technology for safety and governance objectives.
- An AI Safety Project could bring together leading researchers and engineers, and provide them with access to computation resources and advanced AI models for research into technical mitigations of AI risks. This would promote AI safety research and development by increasing its scale, resourcing, and coordination.
Many crucial open questions around the viability of these institutional models remain. For example, a Commission on Frontier AI will face significant scientific challenges given the extreme uncertainty about AI trajectories and capabilities and the limited scientific research on advanced AI issues to date.
The rapid rate of AI progress and limited capacity in the public sector on frontier AI issues could also make it difficult for an Advanced AI Governance Organisation to set standards that keep up with the risk landscape. The many difficulties of international coordination raise questions about how countries will be incentivised to adopt its standards or accept its monitoring.
Likewise, the many obstacles to societies fully harnessing the benefits from advanced AI systems (and other technologies) may keep a Frontier AI Collaborative from optimising its impact. There may also be a difficult tension to manage between sharing the benefits of AI and preventing the proliferation of dangerous systems.
And for the AI Safety Project, it will be critical to carefully consider which elements of safety research are best conducted through collaborations versus the individual efforts of companies. Moreover, a Project could struggle to secure adequate access to the most capable models to conduct safety research from all relevant developers.
Given the immense global opportunities and challenges presented by AI systems on the horizon, greater discussion is needed among governments and other stakeholders about the role of international institutions and how their functions can further AI governance and coordination.
We hope this research contributes to growing conversations within the international community about ways of ensuring advanced AI is developed for the benefit of humanity.
Injecting domain expertise into your AI system

When starting their AI initiatives, many companies are trapped in silos and treat AI as a purely technical enterprise, sidelining domain experts or involving them too late. They end up with generic AI applications that miss industry nuances, produce poor recommendations, and quickly become unpopular with users. By contrast, AI systems that deeply understand industry-specific processes, constraints, and decision logic have the following benefits:
- Increased efficiency — The more domain knowledge AI incorporates, the less manual effort is required from human experts.
- Improved adoption — Experts disengage from AI systems that feel too generic. AI must speak their language and align with real workflows to gain trust.
- A sustainable competitive moat — As AI becomes a commodity, embedding proprietary expertise is the most effective way to build defensible AI systems (cf. this article to learn about the building blocks of AI’s competitive advantage).
Domain experts can help you connect the dots between the technicalities of an AI system and its real-life usage and value. Thus, they should be key stakeholders and co-creators of your AI applications. This guide is the first part of my series on expertise-driven AI. Following my mental model of AI systems, it provides a structured approach to embedding deep domain expertise into your AI.
Overview of the methods for domain knowledge integration.
Throughout the article, we will use supply chain optimisation (SCO) as a running use case to illustrate these different methods. Modern supply chains are under constant strain from geopolitical tensions, climate disruptions, and volatile demand shifts, and AI can provide the kind of dynamic, high-coverage intelligence needed to anticipate delays, manage risks, and optimise logistics. However, without domain expertise, these systems are often disconnected from operational reality. Let’s see how we can solve this by integrating domain expertise across the different components of the AI application.
1. Data: The bedrock of expertise-driven AI.
AI is only as domain-aware as the data it learns from. Raw data isn’t enough — it must be curated, refined, and contextualised by experts who understand its meaning in the real world.
Data understanding: Teaching AI what matters.
While data scientists can build sophisticated models to analyse patterns and distributions, these analyses often stay at a theoretical, abstract level. Only domain experts can validate whether the data is complete, accurate, and representative of real-world conditions.
In supply chain optimisation, for example, shipment records may contain missing delivery timestamps, inconsistent route details, or unexplained fluctuations in transit times. A data scientist might discard these as noise, but a logistics expert could have real-world explanations of these inconsistencies. For instance, they might be caused by weather-related delays, seasonal port congestion, or supplier reliability issues. If these nuances aren’t accounted for, the AI might learn an overly simplified view of supply chain dynamics, resulting in misleading risk assessments and poor recommendations.
Experts also play a critical role in assessing the completeness of data. AI models work with what they have, assuming that all key factors are already present. It takes human expertise and judgment to identify blind spots. For example, if your supply chain AI isn’t trained on customs clearance times or factory shutdown histories, it won’t be able to predict disruptions caused by regulatory issues or production bottlenecks.
✅ Actionable step: Run joint Exploratory Data Analysis (EDA) sessions with data scientists and domain experts to identify missing business-critical information, ensuring AI models work with a complete and meaningful dataset, not just statistically clean data.
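As a minimal sketch of what such a session could start from, the snippet below surfaces missing values and outliers for experts to explain rather than silently dropping them. It assumes a pandas DataFrame of shipment records; the column names (delivery_ts, transit_days, route, carrier) are hypothetical.

```python
# Minimal expert-facing EDA pass over shipment records.
import pandas as pd

def eda_report(shipments: pd.DataFrame) -> pd.DataFrame:
    """Summarise gaps and cardinality for review with domain experts."""
    report = pd.DataFrame({
        "missing_pct": (shipments.isna().mean() * 100).round(1),
        "n_unique": shipments.nunique(),
    })
    return report.sort_values("missing_pct", ascending=False)

shipments = pd.DataFrame({
    "delivery_ts": ["2024-01-05", None, "2024-01-09", None],
    "transit_days": [12, 48, 11, 13],   # 48 looks anomalous: ask an expert, don't drop
    "route": ["CN-DE", "CN-DE", None, "VN-DE"],
    "carrier": ["A", "A", "B", "B"],
})
print(eda_report(shipments))

# Flag outliers for expert review instead of treating them as noise.
q_hi = shipments["transit_days"].quantile(0.95)
print(shipments[shipments["transit_days"] > q_hi])
```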
Data source selection: Start small, expand strategically.
One common pitfall when starting with AI is integrating too much data too soon, leading to complexity, congestion of your data pipelines, and blurred or noisy insights. Instead, start with a couple of high-impact data sources and expand incrementally based on AI performance and user needs. For instance, an SCO system may initially use historical shipment data and supplier reliability scores. Over time, domain experts may identify missing information — such as port congestion data or real-time weather forecasts — and point engineers to the sources where it can be found.
✅ Actionable step: Start with a minimal, high-value dataset (normally 3–5 data sources), then expand incrementally based on expert feedback and real-world AI performance.
Data annotation: Creating the learning signals AI needs.

AI models learn by detecting patterns in data, but sometimes, the right learning signals aren’t yet present in raw data. This is where data annotation comes in — by labelling key attributes, domain experts help the AI understand what matters and make more effective predictions. Consider an AI model built to predict supplier reliability. The model is trained on shipment records, which contain delivery times, delays, and transit routes. However, raw delivery data alone doesn’t capture the full picture of supplier risk — there are no direct labels indicating whether a supplier is “high risk” or “low risk”.
Without more explicit learning signals, the AI might make the wrong conclusions. It could conclude that all delays are equally bad, even when some are caused by predictable seasonal fluctuations. Or it might overlook early warning signs of supplier instability, such as frequent last-minute order changes or inconsistent inventory levels.
Domain experts can enrich the data with more nuanced labels, such as supplier risk categories, disruption causes, and exception-handling rules. By introducing these curated learning signals, you can ensure that AI doesn’t just memorise past trends but learns meaningful, decision-ready insights.
You shouldn’t rush your annotation efforts — instead, think about a structured annotation process that includes the following components:
- Annotation guidelines: Establish clear, standardised rules for labelling data to ensure consistency. For example, supplier risk categories should be based on defined thresholds (e.g., delivery delays over 5 days + financial instability = high risk).
- Multiple expert review: Involve several domain experts to reduce bias and ensure objectivity, particularly for subjective classifications like risk levels or disruption impact.
- Granular labelling: Capture both direct and contextual factors, such as annotating not just shipment delays but also the cause (customs, weather, supplier fault).
- Continuous refinement: Regularly audit and refine annotations based on AI performance — if predictions consistently miss key risks, experts should adjust labelling strategies accordingly.
✅ Actionable step: Define an annotation playbook with clear labelling criteria, involve at least two domain experts per critical label for objectivity, and run regular annotation review cycles to ensure AI is learning from accurate, business-relevant insights.
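To make the playbook concrete, here is a small illustrative sketch: the first function encodes the example threshold rule from the guidelines above, and the second computes a simple agreement rate between two expert annotators. Label names and the exact thresholds are assumptions for illustration.

```python
# Sketch: an annotation guideline encoded as a reusable labelling function.
def supplier_risk_label(avg_delay_days: float, financially_unstable: bool) -> str:
    # Mirrors the example rule: delays over 5 days + financial instability = high risk.
    if avg_delay_days > 5 and financially_unstable:
        return "high_risk"
    if avg_delay_days > 5 or financially_unstable:
        return "medium_risk"
    return "low_risk"

def agreement(labels_a: list[str], labels_b: list[str]) -> float:
    """Fraction of items where two expert annotators agree."""
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

expert_1 = ["high_risk", "low_risk", "medium_risk"]
expert_2 = ["high_risk", "medium_risk", "medium_risk"]
print(agreement(expert_1, expert_2))  # ~0.67 -> discuss disagreements, refine guidelines
```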
Synthetic data: Preparing AI for rare but critical events.
So far, our AI models learn from real-life historical data. However, rare, high-impact events — like factory shutdowns, port closures, or regulatory shifts in our supply chain scenario — may be underrepresented. Without exposure to these scenarios, AI can fail to anticipate major risks, leading to overconfidence in supplier stability and poor contingency planning. Synthetic data solves this by creating more datapoints for rare events, but expert oversight is crucial to ensure that it reflects plausible risks rather than unrealistic patterns.
Let’s say we want to predict supplier reliability in our supply chain system. The historical data may have few recorded supplier failures — but that’s not because failures don’t happen. Rather, many companies proactively mitigate risks before they escalate. Without synthetic examples, AI might deduce that supplier defaults are extremely rare, leading to misguided risk assessments.
Experts can help generate synthetic failure scenarios based on:
- Historical patterns — Simulating supplier collapses triggered by economic downturns, regulatory shifts, or geopolitical tensions.
- Hidden risk indicators — Training AI on unrecorded early warning signs, like financial instability or leadership changes.
- Counterfactuals — Creating “what-if” events, such as a semiconductor supplier suddenly halting production or a prolonged port strike.
✅ Actionable step: Work with domain experts to define high-impact but low-frequency events and scenarios, and focus on these when you generate synthetic data.
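A minimal sketch of what this could look like in practice: experts parameterise a handful of failure scenarios, and a generator produces labelled synthetic records from them. Scenario names, ranges, and probabilities below are illustrative assumptions, not real data.

```python
# Sketch: expert-parameterised synthetic data for rare supplier failures.
import random

SCENARIOS = [
    {"cause": "economic_downturn", "delay_days": (20, 60), "default_prob": 0.30},
    {"cause": "port_strike",       "delay_days": (10, 30), "default_prob": 0.10},
    {"cause": "factory_shutdown",  "delay_days": (30, 90), "default_prob": 0.50},
]

def synth_failure_records(n: int, seed: int = 42) -> list[dict]:
    rng = random.Random(seed)
    records = []
    for _ in range(n):
        s = rng.choice(SCENARIOS)
        records.append({
            "cause": s["cause"],
            "delay_days": rng.randint(*s["delay_days"]),
            "supplier_defaulted": rng.random() < s["default_prob"],
            "synthetic": True,  # always mark synthetic rows for traceability
        })
    return records

for r in synth_failure_records(3):
    print(r)
```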
Data makes domain expertise shine. An AI initiative that relies on clean, relevant, and enriched domain data will have an obvious competitive advantage over one that takes the “quick-and-dirty” shortcut to data. However, keep in mind that working with data can be tedious, and experts need to see the outcome of their efforts — whether it’s improving AI-driven risk assessments, optimising supply chain resilience, or enabling smarter decision-making. The key is to make data collaboration intuitive, purpose-driven, and directly tied to business outcomes, so experts remain engaged and motivated.
Once AI has access to high-quality data, the next challenge is ensuring it generates useful and accurate outputs. Domain expertise is needed to:
- Define clear AI objectives aligned with business priorities.
- Ensure AI correctly interprets industry-specific data.
- Continuously validate AI’s outputs and recommendations.
Let’s look at some common AI approaches and see how they can benefit from an extra shot of domain knowledge.
For structured problems like supply chain forecasting, predictive models such as classification and regression can help anticipate delays and suggest optimisations. However, to make sure these models are aligned with business goals, data scientists and knowledge engineers need to work together. For example, an AI model might try to minimise shipment delays at all costs, but a supply chain expert knows that fast-tracking every shipment through air freight is financially unsustainable. They can formulate additional constraints on the model, making it prioritise critical shipments while balancing cost, risk, and lead times.
✅ Actionable step: Define clear objectives and constraints with domain experts before training AI models, ensuring alignment with real business priorities.
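As an illustration, the sketch below layers an expert-defined business rule on top of a model's predicted delay probability: expedite by air only for critical shipments or when the expected delay cost exceeds the air-freight premium. All figures and thresholds are assumptions.

```python
# Sketch: an expert constraint wrapped around a model's delay-probability output.
def shipping_decision(p_delay: float, delay_cost: float, air_premium: float,
                      is_critical: bool) -> str:
    expected_delay_cost = p_delay * delay_cost
    if is_critical and p_delay > 0.5:
        return "air"                      # expert rule: critical shipments come first
    if expected_delay_cost > air_premium:
        return "air"                      # only expedite when it actually pays off
    return "sea"

print(shipping_decision(p_delay=0.7, delay_cost=10_000, air_premium=4_000,
                        is_critical=False))  # -> "air"
print(shipping_decision(p_delay=0.1, delay_cost=10_000, air_premium=4_000,
                        is_critical=False))  # -> "sea"
```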
For a detailed overview of predictive AI techniques, please refer to Chapter 4 of my book The Art of AI Product Management.
While predictive models trained from scratch can excel at very specific tasks, they are also rigid and will “refuse” to perform any other task. GenAI models are more open-minded and can be used for highly diverse requests. For example, an LLM-based conversational widget in an SCO system can allow users to interact with real-time insights using natural language. Instead of sifting through inflexible dashboards, users can ask, “Which suppliers are at risk of delays?” or “What alternative routes are available?” The AI pulls from historical data, live logistics feeds, and external risk factors to provide actionable answers, suggest mitigations, and even automate workflows like rerouting shipments.
But how can you ensure that a huge, out-of-the-box model like ChatGPT or Llama understands the nuances of your domain? Let’s walk through the LLM triad — a progression of techniques to incorporate domain knowledge into your LLM system.
Figure 2: The LLM triad is a progression of techniques for incorporating domain- and corporation-specific knowledge into your LLM system.
As you progress from left to right, you can ingrain more domain knowledge into the LLM — however, each stage also adds new technical challenges (if you are interested in a systematic deep-dive into the LLM triad, please check out chapters 5–8 of my book The Art of AI Product Management). Here, let’s focus on how domain experts can jump in at each of the stages:
1. Prompting: Prompting out-of-the-box LLMs might seem like a generic approach, but with the right intuition and skill, domain experts can fine-tune prompts to extract the extra bit of domain knowledge out of the LLM. Personally, I think this is a big part of the fascination around prompting — it puts the most powerful AI models directly into the hands of domain experts without any technical expertise. Some key prompting techniques include:
- Few-shot prompting: Incorporate examples to guide the model’s responses. Instead of just asking “What are alternative shipping routes?”, a well-crafted prompt includes sample scenarios, such as “Example of past scenario: A previous delay at the Port of Shenzhen was mitigated by rerouting through Ho Chi Minh City, reducing transit time by 3 days.”
- Chain-of-thought prompting: Encourage step-by-step reasoning for complex logistics queries. Instead of “Why is my shipment delayed?”, a structured prompt might be “Analyse historical delivery data, weather reports, and customs processing times to determine why shipment #12345 is delayed.”
- Providing further background information: Attach external documents to improve domain-specific responses. For example, prompts could reference real-time port congestion reports, supplier contracts, or risk assessments to generate data-backed recommendations. Most LLM interfaces already allow you to conveniently attach additional files to your prompt.
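The sketch below shows how these techniques can be combined into one prompt template; the scenario strings echo the examples above, and the template layout itself is just one possible choice.

```python
# Sketch: assembling a few-shot, chain-of-thought prompt with background context.
FEW_SHOT = (
    "Example of past scenario: A previous delay at the Port of Shenzhen was "
    "mitigated by rerouting through Ho Chi Minh City, reducing transit time by 3 days."
)

def build_prompt(question: str, context_docs: list[str]) -> str:
    context = "\n".join(f"- {doc}" for doc in context_docs)
    return (
        f"{FEW_SHOT}\n\n"
        f"Background information:\n{context}\n\n"
        f"Question: {question}\n"
        "Reason step by step over the historical data, weather reports, and "
        "customs processing times before giving a recommendation."
    )

print(build_prompt(
    "Which suppliers are at risk of delays?",
    ["Port congestion report: Shenzhen at 92% capacity.",
     "Supplier X on-time rate dropped from 97% to 81% this quarter."],
))
```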
2. RAG (Retrieval-Augmented Generation): While prompting helps guide AI, it still relies on pre-trained knowledge, which may be outdated or incomplete. RAG allows AI to retrieve real-time, firm-specific data, ensuring that its responses are grounded in current logistics reports, supplier performance records, and risk assessments. For example, instead of generating generic supplier risk analyses, a RAG-powered AI system would pull real-time shipment data, supplier credit ratings, and port congestion reports before making recommendations. Domain experts can help select and structure these data sources and are also needed when it comes to testing and evaluating RAG systems.
✅ Actionable step: Work with domain experts to curate and structure knowledge sources — ensuring AI retrieves and applies only the most relevant and high-quality business information.
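For intuition, here is a deliberately minimal RAG sketch: it retrieves the most relevant expert-curated snippets and prepends them to the prompt. The embed() function is a stand-in for a real embedding model (here a toy bag-of-words vectoriser so the example runs as-is), and the documents are invented.

```python
# Minimal RAG sketch: retrieve relevant snippets, then ground the prompt in them.
import numpy as np

DOCS = [
    "Supplier X credit rating downgraded in Q3.",
    "Port of Shenzhen congestion at 92% capacity.",
    "Route CN-DE average transit time: 31 days.",
]

VOCAB = sorted({w for d in DOCS for w in d.lower().split()})

def embed(text: str) -> np.ndarray:
    # Toy bag-of-words stand-in for a real embedding model.
    words = text.lower().split()
    return np.array([words.count(w) for w in VOCAB], dtype=float)

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    def score(doc: str) -> float:
        v = embed(doc)
        denom = np.linalg.norm(q) * np.linalg.norm(v) or 1.0
        return float(q @ v) / denom     # cosine similarity
    return sorted(DOCS, key=score, reverse=True)[:k]

query = "What is the congestion at the port of Shenzhen?"
prompt = "Context:\n" + "\n".join(retrieve(query)) + f"\n\nQuestion: {query}"
print(prompt)  # the grounded prompt is then sent to the LLM
```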
3. Fine-tuning: While prompting and RAG inject domain knowledge on-the-fly, they do not inherently embed domain-specific workflows, terminology, or decision logic into your LLM. Fine-tuning adapts the LLM to think like a logistics expert. Domain experts can guide this process by creating high-quality training data, ensuring AI learns from real supplier assessments, risk evaluations, and procurement decisions. They can refine industry terminology to prevent misinterpretations (e.g., AI distinguishing between “buffer stock” and “safety stock”). They also align AI’s reasoning with business logic, ensuring it considers cost, risk, and compliance — not just efficiency. Finally, they evaluate fine-tuned models, testing AI against real-world decisions to catch biases or blind spots.
✅ Actionable step: In LLM fine-tuning, data is the crucial success factor. Quality trumps quantity, and fine-tuning on a small, high-quality dataset can give you excellent results. Thus, give your experts enough time to figure out the right structure and content of the fine-tuning data and plan for plenty of end-to-end iterations of your fine-tuning process.
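As a sketch of what such a dataset might look like, the snippet below writes a few expert-reviewed examples in a common JSONL prompt/completion layout; the records are illustrative, and the exact schema depends on the fine-tuning framework you use.

```python
# Sketch: a small expert-curated fine-tuning set in JSONL format.
import json

examples = [
    {
        "prompt": "Supplier has a 6-day average delay and a recent credit downgrade. Risk?",
        "completion": "High risk: delay exceeds the 5-day threshold and financial instability is present.",
    },
    {
        "prompt": "Distinguish buffer stock from safety stock.",
        "completion": "Illustrative expert answer distinguishing the two terms, written and reviewed by domain experts.",
    },
]

with open("finetune_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

print(f"Wrote {len(examples)} expert-reviewed examples.")
```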
Encoding expert knowledge with neuro-symbolic AI.
Every machine learning algorithm gets it wrong from time to time. To mitigate errors, it helps to set the “hard facts” of your domain in stone, making your AI system more reliable and controllable. This combination of machine learning and deterministic rules is called neuro-symbolic AI.
For example, an explicit knowledge graph can encode supplier relationships, regulatory constraints, transportation networks, and risk dependencies in a structured, interconnected format.
Figure 3: Knowledge graphs explicitly encode relationships between entities, reducing the guesswork in your AI system.
Instead of relying purely on statistical correlations, an AI system enriched with knowledge graphs can:
- Validate predictions against domain-specific rules (e.g., ensuring that AI-generated supplier recommendations comply with regulatory requirements).
- Infer missing information (e.g., if a supplier has no historical delays but shares dependencies with high-risk suppliers, AI can assess its potential risk).
- Improve explainability by allowing AI decisions to be traced back to logical, rule-based reasoning rather than black-box statistical outputs.
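A tiny illustrative sketch of the first two capabilities: a knowledge graph stored as plain tuples, a hard regulatory rule to validate recommendations against, and a one-hop inference of inherited risk. The entities and the rule itself are invented for the example.

```python
# Sketch: validating an AI recommendation against symbolic knowledge.
GRAPH = {
    ("supplier_a", "located_in"): "country_x",
    ("supplier_b", "located_in"): "country_y",
    ("supplier_b", "depends_on"): "supplier_a",
}
SANCTIONED_COUNTRIES = {"country_x"}  # hard regulatory constraint

def violates_regulation(supplier: str) -> bool:
    return GRAPH.get((supplier, "located_in")) in SANCTIONED_COUNTRIES

def inherited_risk(supplier: str) -> bool:
    """Infer risk from dependencies even without direct evidence."""
    dep = GRAPH.get((supplier, "depends_on"))
    return dep is not None and violates_regulation(dep)

recommendation = "supplier_b"  # e.g. produced by a statistical model
if violates_regulation(recommendation) or inherited_risk(recommendation):
    print(f"Blocked: {recommendation} fails a hard rule or inherits risk.")
else:
    print(f"OK: {recommendation} passes symbolic checks.")
```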
How can you decide which knowledge should be encoded with rules (symbolic AI), and which should be learned dynamically from the data (neural AI)? Domain experts can help you pick those bits of knowledge where hard-coding makes the most sense:
- Knowledge that is relatively stable over time.
- Knowledge that is hard to infer from the data, for example because it is not well-represented.
- Knowledge that is critical for high-impact decisions in your domain, so you can’t afford to get it wrong.
In most cases, this knowledge will be stored in separate components of your AI system, like decision trees, knowledge graphs, and ontologies. There are also some methods to integrate it directly into LLMs and other statistical models, such as Lamini’s memory fine-tuning.
Generating insights and turning them into actions is a multi-step process. Experts can help you model workflows and decision-making pipelines, ensuring that the process followed by your AI system aligns with their tasks. For example, the following pipeline demonstrates how the AI components we considered so far can be combined into a workflow for the mitigation of shipment risks:
Figure 4: A combined workflow for the assessment and mitigation of shipment risks.
Experts are also needed to calibrate the “labor distribution” between humans and AI. For example, when modelling decision logic, they can set thresholds for automation, deciding when AI can trigger workflows versus when human approval is needed.
✅ Actionable step: Involve your domain experts in mapping your processes to AI models and assets, identifying gaps vs. steps that can already be automated.
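As a sketch of such a threshold, the gate below auto-executes a rerouting workflow only when the model's confidence clears an expert-set bar and the mitigation is cheap; everything else is queued for human review. Both thresholds are illustrative assumptions.

```python
# Sketch: an expert-calibrated automation gate between AI and human approval.
AUTO_APPROVE_CONFIDENCE = 0.90   # bar set jointly with logistics experts
MAX_AUTO_COST = 5_000            # never auto-approve expensive mitigations

def route_decision(confidence: float, mitigation_cost: float) -> str:
    if confidence >= AUTO_APPROVE_CONFIDENCE and mitigation_cost <= MAX_AUTO_COST:
        return "auto_execute"
    return "human_review"

print(route_decision(confidence=0.95, mitigation_cost=1_200))  # auto_execute
print(route_decision(confidence=0.95, mitigation_cost=9_000))  # human_review
print(route_decision(confidence=0.70, mitigation_cost=500))    # human_review
```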
Especially in B2B environments, where workers are deeply embedded in their daily workflows, the user experience must be seamlessly integrated with existing processes and task structures to ensure efficiency and adoption. For example, an AI-powered supply chain tool must align with how logistics professionals think, work, and make decisions. In the development phase, domain experts are the closest “peers” to your real users, and picking their brains is one of the fastest ways to bridge the gap between AI capabilities and real-world usability.
✅ Actionable step: Involve domain experts early in UX design to ensure AI interfaces are intuitive, relevant, and tailored to real decision-making workflows.
Ensuring transparency and trust in AI decisions.
AI thinks differently from humans, which makes us humans skeptical. Often, that’s a good thing since it helps us stay alert to potential mistakes. But distrust is also one of the biggest barriers to AI adoption. When users don’t understand why a system makes a particular recommendation, they are less likely to work with it. Domain experts can define how AI should explain itself — ensuring users have visibility into confidence scores, decision logic, and key influencing factors.
For example, if an SCO system recommends rerouting a shipment, it would be irresponsible on the part of a logistics planner to just accept it. She needs to see the “why” behind the recommendation — is it due to supplier risk, port congestion, or fuel cost spikes? The UX should show a breakdown of the decision, backed by additional information like historical data, risk factors, and a cost-benefit analysis.
⚠️ Mitigate overreliance on AI: Excessive dependence of your users on AI can introduce bias, errors, and unforeseen failures. Experts should find ways to balance AI-driven insights with human expertise, ethical oversight, and strategic safeguards to ensure resilience, adaptability, and trust in decision-making.
✅ Actionable step: Work with domain experts to define key explainability elements — such as confidence scores, data sources, and impact summaries — so users can quickly assess AI-driven recommendations.
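For illustration, a recommendation could carry an explanation payload like the one sketched below, combining a confidence score, the top influencing factors, and the data sources behind them. All field names and values are invented.

```python
# Sketch: an explanation payload attached to an AI recommendation.
recommendation = {
    "action": "reroute via Ho Chi Minh City",
    "confidence": 0.87,
    "top_factors": [
        {"factor": "port_congestion_shenzhen", "impact": 0.45,
         "source": "live port congestion feed"},
        {"factor": "supplier_delay_history", "impact": 0.30,
         "source": "historical shipment records"},
        {"factor": "fuel_cost_spike", "impact": 0.12,
         "source": "market price index"},
    ],
    "estimated_cost_delta": "+$2,400",
    "estimated_time_saved_days": 3,
}

# Render the "why" behind the recommendation for the logistics planner.
for f in recommendation["top_factors"]:
    print(f"{f['factor']}: impact {f['impact']:.0%} (from {f['source']})")
```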
Simplifying AI interactions without losing depth.
AI tools should make complex decisions easier, not harder. If users need deep technical knowledge to extract insights from AI, the system has failed from a UX perspective. Domain experts can help strike a balance between simplicity and depth, ensuring the interface provides actionable, context-aware recommendations while allowing deeper analysis when needed.
For instance, instead of forcing users to manually sift through data tables, AI could provide pre-configured reports based on common logistics challenges. However, expert users should also have on-demand access to raw data and advanced settings when necessary. The key is to design AI interactions that are efficient for everyday use but flexible for deep analysis when required.
✅ Actionable step: Use domain expert feedback to define default views, priority alerts, and user-configurable settings, ensuring AI interfaces provide both efficiency for routine tasks and depth for deeper research and strategic decisions.
Continuous UX testing and iteration with experts.
AI UX isn’t a one-and-done process — it needs to evolve with real-world user feedback. Domain experts play a key role in UX testing, refinement, and iteration, ensuring that AI-driven workflows stay aligned with business needs and user expectations.
For example, your initial interface may surface too many low-priority alerts, leading to alert fatigue where users start ignoring AI recommendations. Supply chain experts can identify which alerts are most valuable, allowing UX designers to prioritize high-impact insights while reducing noise.
✅ Actionable step: Conduct think-aloud sessions and have domain experts verbalize their thought process when interacting with your AI interface. This helps AI teams uncover hidden assumptions and refine AI based on how experts actually think and make decisions.
Vertical AI systems must integrate domain knowledge at every stage, and experts should become key stakeholders in your AI development:
- They refine data selection, annotation, and synthetic data.
- They guide AI learning through prompting, RAG, and fine-tuning.
- They support the design of seamless user experiences that integrate with daily workflows in a transparent and trustworthy way.
An AI system that “gets” the domain of your users will not only be useful and adopted in the short- and middle-term, but also contribute to the competitive advantage of your business.
Now that you have learned a bunch of methods to incorporate domain-specific knowledge, you might be wondering how to approach this in your organizational context. Stay tuned for my next article, where we will consider the practical challenges and strategies for implementing an expertise-driven AI strategy!
Note: Unless noted otherwise, all images are the author’s.
Google Cloud: Driving digital transformation

Google Cloud empowers organizations to digitally transform themselves into smarter businesses. It offers cloud computing, data analytics, and the latest artificial intelligence (AI) and machine learning tools. Using our AI research, we’re making these solutions more effective for Google Cloud clients all over the world. Our research is deciphering written documents, enhancing the value of wind energy, and making it easier to use AlphaFold — our breakthrough AI system designed to more accurately predict protein structures.
Expanding product innovation across Document AI

From cuneiform tablets to the printing press, countless ways of sharing written knowledge have been developed throughout history. Modern documents vary across countries, languages, and industries — making it hard to extract and use that information, particularly at scale. Google Cloud’s Document AI enables customers to make digital, printed, or handwritten information contained inside a document — like an invoice or tax form — extractable and queryable. Before Document AI, industries looking to use AI tools for document understanding needed vast amounts of training data to perform well. But this data is often unavailable, incomplete, or lacks proper annotation, preventing widespread AI adoption. Working together with the Google Cloud Document AI team, we developed innovative machine learning models that need 50-70% less training data than others to parse documents like utility bills and purchase orders. We’re also working to improve Document AI’s performance in languages with smaller datasets. That way, we can help more clients across different industries and geographies leverage the benefits of Document AI.
Enhancing the value of wind energy

As part of our efforts to use AI for achieving net-zero emissions by 2030, we partnered with Google Cloud Professional Services to advance the wind energy sector — and to help build a carbon-free future for all. Wind farms are an essential source of carbon-free electricity, but their output can fluctuate depending on the weather. To balance supply and demand in the electricity grid, operators rely on energy generation forecasts. If operators can commit to selling a certain amount of electricity based on the next day’s forecast, they can get a better price. In collaboration with Google Cloud, we helped develop a custom AI tool to better predict wind power output. This tool was trained on weather forecasts and the customer’s historical wind turbine data. An additional model recommends how much energy an operator can commit to delivering to the electricity grid, a day in advance. The global energy and renewables supplier ENGIE is now piloting this technology in Germany. If the pilot is successful, ENGIE might apply the technology across Europe. Making wind energy more economically attractive — and improving its reliability — will encourage the uptake of renewables. That’s a win for everyone.
Market Impact Analysis
Market Growth Trend
| 2018 | 2019 | 2020 | 2021 | 2022 | 2023 | 2024 |
|---|---|---|---|---|---|---|
| 23.1% | 27.8% | 29.2% | 32.4% | 34.2% | 35.2% | 35.6% |
Quarterly Growth Rate
| Q1 2024 | Q2 2024 | Q3 2024 | Q4 2024 |
|---|---|---|---|
| 32.5% | 34.8% | 36.2% | 35.6% |
Market Segments and Growth Drivers
| Segment | Market Share | Growth Rate |
|---|---|---|
| Machine Learning | 29% | 38.4% |
| Computer Vision | 18% | 35.7% |
| Natural Language Processing | 24% | 41.5% |
| Robotics | 15% | 22.3% |
| Other AI Technologies | 14% | 31.8% |
Competitive Landscape Analysis
| Company | Market Share |
|---|---|
| Google AI | 18.3% |
| Microsoft AI | 15.7% |
| IBM Watson | 11.2% |
| Amazon AI | 9.8% |
| OpenAI | 8.4% |
Future Outlook and Predictions
The global AI technology landscape is evolving rapidly, driven by technological advancements, changing risk profiles, and shifting business requirements. Based on current trends and expert analyses, we can anticipate several significant developments across different time horizons:
Year-by-Year Technology Evolution
Based on current trajectory and expert analyses, we can project the following development timeline:
Technology Maturity Curve
Different technologies within the ecosystem are at varying stages of maturity, influencing adoption timelines and investment priorities:
Innovation Trigger
- Generative AI for specialized domains
- Blockchain for supply chain verification
Peak of Inflated Expectations
- Digital twins for business processes
- Quantum-resistant cryptography
Trough of Disillusionment
- Consumer AR/VR applications
- General-purpose blockchain
Slope of Enlightenment
- AI-driven analytics
- Edge computing
Plateau of Productivity
- Cloud infrastructure
- Mobile applications
Technology Evolution Timeline
- Improved generative models
- Specialized AI applications
- AI-human collaboration systems
- Multimodal AI platforms
- General AI capabilities
- AI-driven scientific breakthroughs
Expert Perspectives
Leading experts in the AI tech sector provide diverse perspectives on how the landscape will evolve over the coming years:
"The next frontier is AI systems that can reason across modalities and domains with minimal human guidance."
— AI Researcher
"Organizations that develop effective AI governance frameworks will gain competitive advantage."
— Industry Analyst
"The AI talent gap remains a critical barrier to implementation for most enterprises."
— Chief AI Officer
Areas of Expert Consensus
- Acceleration of Innovation: The pace of technological evolution will continue to increase
- Practical Integration: Focus will shift from proof-of-concept to operational deployment
- Human-Technology Partnership: Most effective implementations will optimize human-machine collaboration
- Regulatory Influence: Regulatory frameworks will increasingly shape technology development
Short-Term Outlook (1-2 Years)
In the immediate future, organizations will focus on implementing and optimizing currently available technologies to address pressing AI tech challenges:
- Improved generative models
- Specialized AI applications
- Enhanced AI ethics frameworks
These developments will be characterized by incremental improvements to existing frameworks rather than revolutionary changes, with emphasis on practical deployment and measurable outcomes.
Mid-Term Outlook (3-5 Years)
As technologies mature and organizations adapt, more substantial transformations will emerge in how AI is approached and implemented:
- AI-human collaboration systems
- Multimodal AI platforms
- Democratized AI development
This period will see significant changes in system architecture and operational models, with increasing automation and integration between previously siloed functions. Organizations will shift from reactive to proactive operating postures.
Long-Term Outlook (5+ Years)
Looking further ahead, more fundamental shifts will reshape how AI is conceptualized and implemented across digital ecosystems:
- General AI capabilities
- AI-driven scientific breakthroughs
- New computing paradigms
These long-term developments will likely require significant technical breakthroughs, new regulatory frameworks, and evolution in how organizations approach AI as a fundamental business function rather than a technical discipline.
Key Risk Factors and Uncertainties
Several critical factors could significantly impact the trajectory of AI tech evolution:
Organizations should monitor these factors closely and develop contingency strategies to mitigate potential negative impacts on technology implementation timelines.
Alternative Future Scenarios
The evolution of technology can follow different paths depending on various factors including regulatory developments, investment trends, technological breakthroughs, and market adoption. We analyze three potential scenarios:
Optimistic Scenario
Responsible AI driving innovation while minimizing societal disruption
Key Drivers: Supportive regulatory environment, significant research breakthroughs, strong market incentives, and rapid user adoption.
Probability: 25-30%
Base Case Scenario
Incremental adoption with mixed societal impacts and ongoing ethical challenges
Key Drivers: Balanced regulatory approach, steady technological progress, and selective implementation based on clear ROI.
Probability: 50-60%
Conservative Scenario
Technical and ethical barriers creating significant implementation challenges
Key Drivers: Restrictive regulations, technical limitations, implementation challenges, and risk-averse organizational cultures.
Probability: 15-20%
Scenario Comparison Matrix
| Factor | Optimistic | Base Case | Conservative |
|---|---|---|---|
| Implementation Timeline | Accelerated | Steady | Delayed |
| Market Adoption | Widespread | Selective | Limited |
| Technology Evolution | Rapid | Progressive | Incremental |
| Regulatory Environment | Supportive | Balanced | Restrictive |
| Business Impact | Transformative | Significant | Modest |
Transformational Impact
Redefinition of knowledge work, automation of creative processes. This evolution will necessitate significant changes in organizational structures, talent development, and strategic planning processes.
The convergence of multiple technological trends—including artificial intelligence, quantum computing, and ubiquitous connectivity—will create both unprecedented challenges and innovative new capabilities.
Implementation Challenges
Ethical concerns, computing resource limitations, talent shortages. Organizations will need to develop comprehensive change management strategies to successfully navigate these transitions.
Regulatory uncertainty, particularly around emerging technologies like AI, will require flexible architectures that can adapt to evolving compliance requirements.
Key Innovations to Watch
Multimodal learning, resource-efficient AI, transparent decision systems. Organizations should monitor these developments closely to maintain competitive advantages and effective adoption strategies.
Strategic investments in research partnerships, technology pilots, and talent development will position forward-thinking organizations to leverage these innovations early in their development cycle.