Aimed at boards of directors of companies considering or dabbling in AI, as well as those that have already integrated AI into their corporate strategy, Mayer Brown’s “Generative Artificial Intelligence and Corporate Boards: Cautions and Considerations” explains the use of AI in the context of directors’ fiduciary duties and provides practical guidance to mitigate risks.
Among the key risk considerations are AI’s detachment from personal liability, the integrity of its output, and data privacy and cybersecurity exposures. Each should prompt responsible precautions, such as refraining from entering company-specific, confidential, or proprietary data into prompts or chats with generative AI, where it could inadvertently “escape.”
For companies that have not already done so, the firm suggests identifying a point person within management to oversee the company’s use of AI and its attendant opportunities and risks; facilitating opportunities for the board to learn about generative AI first-hand; and placing the topic on an upcoming board meeting agenda to solicit input from management and outside experts. The memo also provides an overview of initiatives to regulate AI within and outside the US.
The guidance is equally beneficial for corporate secretaries, other governance professionals, and members of management supporting the board.
Relatedly, Corporate Board Member’s “AI In The Era Of ESG: Nine Steps Boards Can Take Now” presents an AI oversight framework for boards, developed by one of the authors of the foregoing Mayer Brown memo together with The Conference Board’s Paul Washington and informed by a recent Digital Trust Summit. The sound recommendations include ensuring that management takes appropriate steps to leverage the opportunities and mitigate the risks associated with generative and other forms of AI as the technology continues to evolve.