In large public institutions, technological innovation rarely arrives as a dramatic disruption. It tends to emerge more gradually, shaped by the structures and responsibilities those institutions already carry. Agencies that manage high-volume services operate on a foundation of regulatory compliance, secure transaction processing, identity verification, and structured workflow management. These competencies allow governments to process millions of legally binding interactions each year while maintaining public trust and legal accountability.
Those same strengths, however, can cut both ways when new technologies arrive. Leadership advisor Ram Charan once warned that core competencies are not static assets. As he wrote, “A core competence that once provided advantage can become a liability if it no longer fits the competitive environment.” While the public sector does not compete for customers in the traditional sense, it faces its own evolving environment, defined by rising expectations for digital access, responsiveness, and efficiency.
In recent years, generative artificial intelligence has emerged as one of the most significant technological inflection points for both public and private organizations. Governments across the United States have begun experimenting with AI tools that can draft communications, summarize policy guidance, assist with research, and accelerate internal knowledge work. One state government recently announced a pilot program deploying ChatGPT across parts of its executive branch to explore these capabilities in a controlled environment.
For agencies built on structured processes and strict governance frameworks, AI adoption intersects directly with their existing competencies. Policy-driven organizations already rely on documented procedures, formal review cycles, and layered oversight. In theory, these structures provide an ideal environment for responsible experimentation. AI systems can support internal productivity while operating within defined guardrails for security, compliance, and accuracy.
In practice, however, the relationship between institutional discipline and innovation is more complicated.
Generative AI systems are powerful but imperfect. They produce outputs that can appear authoritative while occasionally containing subtle inaccuracies or unsupported conclusions. In an environment where regulatory interpretation, legal documentation, or identity verification may carry statutory consequences, these risks cannot be ignored. If staff members rely on AI-generated material without adequate verification, credibility can erode quickly. The issue is not technological capability. It is governance.
This is where core competencies matter most.
Organizations accustomed to rigorous oversight already possess many of the mechanisms needed to manage emerging technologies responsibly. Review procedures, audit trails, data protection requirements, and formal training programs can serve as safeguards against misuse. When applied effectively, these structures allow innovation to proceed deliberately rather than reactively. Instead of adopting AI as a novelty, agencies can integrate it as a carefully governed capability.
Seen through this lens, procedural rigor does not impede innovation. It shapes it.
Technology leaders sometimes frame innovation as a race toward rapid deployment. That mindset works in some sectors, but public institutions operate under a different social contract. Their mission is not simply to move fast, but to move responsibly while maintaining public trust. In many ways, the challenge echoes a lesson familiar from modern science fiction: powerful tools require careful stewardship. Whether the reference point is the artificial intelligence systems of contemporary space opera or the cautionary tales of earlier speculative fiction, the narrative lesson is consistent. Power without governance rarely ends well.
For public organizations exploring generative AI, the strategic question is therefore not whether innovation should occur. It is how existing competencies are used to guide it.
Agencies that treat compliance, risk management, and procedural accountability as barriers will struggle to adopt new technology effectively. Agencies that treat those competencies as guardrails, however, gain a powerful advantage. They can experiment, evaluate outcomes, and scale adoption while preserving accuracy, legality, and institutional credibility.
In that sense, the future of artificial intelligence in government will not be defined solely by the sophistication of the technology itself. It will be defined by the leadership decisions that determine how that technology is integrated into the mission of public service.
Innovation, in the public sector, is rarely about moving fastest. It is about moving wisely.

