AI in the Boardroom: A Cross-Sector Reflection from Singapore’s Corporate Leaders

Board members across Singapore’s leading institutions reflect on the strategic, ethical, and societal implications of AI—and what it means for leadership, governance, and talent in a rapidly evolving world.

  • October 2025

In a recent breakfast gathering in Singapore, a remarkable group of board members—collectively holding over 30 board positions across the country’s most reputable institutions—came together to reflect on the implications of artificial intelligence (AI) for their organizations, industries, and society at large.  

The conversation, rich in insight and spanning sectors from finance and education to defense and industry, revealed a shared recognition: AI is no longer a distant frontier. It is a present force, reshaping the very foundations of strategy, governance, and leadership.

The Defense Sector’s Cautious Approach to AI

The tone of the discussion was not one of hype, but of sober curiosity and strategic concern. Perhaps the most ethically charged reflections came from the defense sector, where the stakes are existential. Brigadier General Ng Pak Shun, Commissioner of the Global Commission on Responsible AI in the Military Domain, drew on the Commission's strategic guidance report to illustrate parallels for good practice in the boardroom, noting that the application of AI in weapons and decision-support systems can have life-or-death consequences that are not yet well understood. Who is responsible when a machine makes such a decision: the designer, the coder, or the operator who delegates authority to the algorithm? For the defense sector, Responsible AI is not just compliance; it is conscience. Bias, hallucinations, and explainability are central to the legitimacy and safety of AI deployment.

Education's Re-skilling Challenge

A university board member posed a question that resonated across the room: How do we bring our alumni up to speed with developments in AI, and what does this mean for the future of university education? This was not merely a pedagogical concern; it reflected the broader challenge facing all institutions: the need to re-skill and re-orient talent in a world where AI is rapidly altering the nature of work and knowledge.

A Call for Value Creation from Financial Services

From the financial services sector came a more technical lens. AI is already being deployed for macroeconomic modeling and quantitative analysis, and its accuracy improves dramatically when trained on high-quality data and tailored through vertically specialized large language models (LLMs). Yet this precision also raises existential questions. One board member wondered aloud whether "bit industries" like banking and telecommunications would even require human operators in a decade. The implication was clear: AI is not just augmenting decision-making; it may soon replace it in domains once considered too complex or sensitive for automation.

But the conversation did not dwell solely on efficiency. Several board members emphasized the need to reframe AI's value proposition, from productivity gains to top-line growth. The question is no longer just how AI can reduce costs, but how it can generate new revenues, open new markets, and create differentiated customer experiences. This shift in mindset, from defensive to offensive strategy, marks a critical evolution in board-level thinking.

Crucially, participants stressed that boards must resist being dazzled by AI's technical capabilities. Instead of focusing on what AI can do as an application or tool, boards need to return to fundamentals: What need does it fulfill, or what opportunity is it positioned to capture? Before signing off on significant AI investments, boards must ensure the technology aligns with strategic priorities and customer needs, and is not deployed simply because it is available or fashionable. This discipline emerged as a hallmark of mature governance.

Beyond Business: Broader Board Responsibilities

Yet, as the discussion deepened, it became clear that AI’s impact extends far beyond the enterprise. One participant, offering a macroeconomic and political perspective, raised a profound concern: In democratic societies, where job creation is central to political legitimacy, how will AI-induced job displacement reshape the political agenda? The potential for widespread unemployment is not just an economic issue—it is a political one. And in an era of rising populism and geopolitical volatility, the consequences of ignoring this dimension could be severe. 

This led naturally to a broader reflection on the responsibilities of boards. Strategy, risk, governance, and people—these are the pillars of board oversight, and AI touches each one. Boards are now asking themselves a series of critical questions:  

  • Composition: Do we have the right expertise in the room, and how technical should we get? How do we bring existing board members up to speed?
  • Strategy: What is the balance between risk and opportunity? How much should we push the organization toward AI transformation? What does success look like? And what are the implications for workforce models in the medium to long term?
  • Risk and Governance: What guardrails and decision-support systems are needed once the organization's risk appetite is ascertained?

A particularly pressing concern emerged around design accountability: when there is risk, to what extent do boards or policymakers need clarity on the design of the AI systems they sign off to deploy? This strikes at the core of governance. If boards cannot comprehend the architecture, logic, and limitations of the AI systems they authorize, how can they meaningfully oversee risk? The conversation revealed a tension between the need for technical fluency and the practical limits of board expertise, underscoring the importance of bridge-builders who can translate complex AI design into strategic implications.

These questions cut to the heart of responsible AI governance—establishing not only guardrails but also the metrics by which AI initiatives should be evaluated and held accountable. 

Approaching AI from a Sustainability Lens: Voices from the Industrial Sector

The industrial sector brought a sustainability lens to the conversation. AI's energy consumption, its lifecycle management, and the need for responsible-by-design frameworks were highlighted as urgent concerns. The board's role, it was argued, is not just to enable innovation but to navigate strategic trade-offs: balancing growth with resilience, and agility with trust. Cybersecurity risks, geopolitical tensions, and fragmented regulatory landscapes across data governance and AI ethics are no longer merely operational issues; they are board-level imperatives.

Redefining Board Leadership in the Age of AI

One analogy offered during the session captured the essence of AI’s trajectory. AI, like electricity, will eventually become invisible—embedded in everything, taken for granted. We no longer marvel at electricity, yet it powers our entire world. AI is on a similar path. As it integrates into every facet of life and business, we may stop talking about AI per se, even as it continues to disrupt industries, societies, and global systems. 

This invisibility, however, does not absolve boards of their responsibility. On the contrary, it demands greater vigilance. Traditional organizational models, such as RACI (Responsible, Accountable, Consulted, Informed), may need to be redefined in a world where decision-making is increasingly automated. Explainability becomes a cornerstone of trust. Without it, governance falters, and accountability erodes. The idea of “AI regulating AI” was floated—a recursive model of oversight that may become essential as systems grow in complexity and autonomy. 

In sum, the breakfast session revealed a shared understanding among Singapore’s board leaders: AI is not just a technological trend—it is a societal shift, a strategic imperative, and a governance challenge. The questions being asked are the right ones. The challenge now is to translate these reflections into action—ensuring that AI serves not only shareholders but also employees, communities, and society at large. 

Boards must rise to the occasion. This means investing in education, fostering cross-sector dialogue, and building resilient, ethical frameworks for AI integration. It means embracing AI with both ambition and caution—recognizing its potential to transform, but also its power to disrupt. And above all, it means leading with foresight, integrity, and responsibility in a world that is being reshaped before our eyes.  

Leadership in the age of AI will require a new kind of fluency—one that blends technological literacy with ethical sensitivity and strategic imagination. Board members and executives must not only understand the capabilities of AI but also cultivate the judgment to guide its responsible deployment. This calls for a renewed focus on talent: attracting, developing, and retaining individuals who can bridge the gap between data science and business strategy, between innovation and governance. Organizations must rethink their leadership pipelines, ensuring that future leaders are equipped to navigate complexity, ambiguity, and rapid change. 

Ultimately, the integration of AI is not just a technological challenge—it is a human one. The organizations that thrive will be those that invest in their people as much as their platforms, and that recognize leadership as the most critical enabler of responsible transformation. 

These reflections encapsulate the essence of the discussion and serve as guiding principles as we navigate the challenges of governance in the AI age.
