Why AI-Washing Is a Growing Governance Risk for Corporate Boards

Across sectors, artificial intelligence (AI) is transforming how companies operate, helping to automate processes and drive efficiencies. However, as companies race to adopt AI and monetize its potential, they may overhype AI-powered processes or features to showcase innovation and attract investors. This has led to “AI-washing,” a term widely used by regulators and commentators to describe situations where companies misrepresent or exaggerate their use of artificial intelligence. Just as “greenwashing”—making unsubstantiated environmental claims—has attracted regulatory attention over the past few years, AI-related disclosures are now receiving similar oversight.
This article outlines what AI-washing is, why it is a board-level issue and how companies can mitigate its risks.
What Is AI-Washing?
AI-washing refers to false, misleading or exaggerated claims about the use or capabilities of artificial intelligence. For instance, companies may call software “AI-powered” when it actually relies on rule-based automation or conventional algorithms; they may use statements claiming AI “optimizes performance” or “reduces risk” without being able to substantiate these claims through data; they may overstate the commercial readiness of experimental AI tools or pilot projects; or they may highlight AI as a major revenue driver in commercial messaging, even though its true business impact is minimal.
AI-washing may occur when companies feel under pressure to appear technologically advanced. Investors often favor companies that use or develop AI, viewing it as a signal of strong growth potential. As such, organizations seeking to be market leaders may exaggerate their AI capabilities to remain competitive. Even without deliberate intent to mislead, companies may unintentionally overstate, oversimplify or mislabel their AI use since there is no universally accepted legal or regulatory AI definition.
Why AI-Washing Is a Board-Level Issue
Boards depend on accurate information to fulfill their fiduciary duties, and investors, regulators and consumers rely on truthful representations of technological tools to make informed decisions. As such, challenges can arise from several scenarios, including the following:
- Should boards approve investments or acquisitions based on inflated assumptions about AI capabilities that later prove false, investors may suffer financial losses and pursue legal action.
- If management overstates the maturity of AI tools, boards may make strategic decisions (e.g., restructuring teams or adjusting budgets) that do not align with the organization’s actual risk exposure.
- If management overplays AI’s decision-making capacity, staff may over-rely on systems that are not equipped to replace human judgment, potentially introducing errors, bias or compliance issues.
- If AI-enabled product performance is exaggerated, the gap between expectation and reality can undermine product reliability and safety, increasing operational exposure.
Additionally, because regulators expect companies to disclose material information truthfully, misleading statements about AI may violate the U.S. Securities and Exchange Commission’s (SEC) disclosure requirements. Overall, AI-washing can cost organizations credibility with investors, customers and regulators, a risk that falls squarely within the board’s oversight responsibilities.
Regulators Are Paying Closer Attention
Companies that make public statements about their AI capabilities are facing increased scrutiny from regulatory agencies. The SEC has a newly constituted emerging technologies unit tasked with “rooting out” AI-related fraud and has warned that AI-related statements must be accurate and supported by evidence. The Federal Trade Commission has likewise increased enforcement against deceptive marketing claims, such as falsely labeling products as “AI-powered” or overstating the performance of AI-enabled features.
Alongside potential regulatory penalties, litigation stemming from AI-washing could further expose organizations. In particular, investors are increasingly filing lawsuits when companies mislead the market about AI capabilities.
Potential Liability Implications
Securities litigation from investors and regulatory investigations are not the only potential liability implications associated with AI-washing. Directors and officers may also face shareholder derivative actions alleging inadequate oversight, misleading disclosures or weaknesses in internal controls related to AI governance. There may also be consumer protection exposure: consumers could bring claims if AI‑related marketing statements are misleading or if products do not perform as advertised.
Governance Practices That May Reduce AI-Washing Risk
To minimize their exposure to AI-washing and related ramifications, boards and risk managers should take several steps to strengthen oversight, including the following:
- Establish AI governance policies. Boards should create a clear policy that articulates the scope and maturity of their AI deployment, including transparent statements about its strengths and limitations. This policy should include a definition of “AI” to prevent misunderstandings.
- Strengthen disclosure controls. Boards should review all AI claims (e.g., those in regulatory filings, on company websites and in marketing materials) under existing disclosure frameworks to assess accuracy, and forward-looking statements should be accompanied by cautionary language. Companies should provide consumer-facing disclaimers that outline any AI limitations and how they will keep personal data safe.
- Require evidence-based claims. Boards must ensure that all AI claims are specific, measurable and evidence-based. Documentation detailing the technical architecture of AI systems can be used to support capability statements and should cover data sources, model governance and human-in-the-loop processes.
- Integrate AI into risk management. Boards should integrate AI into risk management practices, treating it with the same rigor, controls and oversight as other enterprise risks. Organizations should undertake robust risk assessments before launching new AI capabilities or making public AI claims.
- Periodically review AI claims. Boards should regularly review AI claims, as even slight changes in AI technology can affect its reliability. Organizations must also monitor regulatory developments and update statements as necessary to maintain compliance.
Conclusion
As AI and other emerging technologies continue to evolve, AI-washing remains a significant concern for organizations across sectors. Boards must implement a range of robust governance practices to reduce AI-washing risk, strengthen oversight and avoid liability.