The current conversation around artificial intelligence often gravitates toward the benefits of generative AI. Its ability to create art, generate human-like text, and even simulate complex decision-making is captivating, but also concerning. As we propel ourselves into an AI-powered future, it's crucial to expand our focus beyond the immediate benefits of generative AI and consider the broader ecosystem in which these technologies will operate. This involves not only recognizing the risks inherent in AI but also critically examining how we might mitigate those risks.
Much of the dialogue around risk mitigation has revolved around the concept of AI safety, a set of practices and policies aimed at ensuring that AI systems do not pose undue harm to society. But what if the key to AI safety lies not just in creating better, more controlled generative AI, but in fostering something entirely counterintuitive: a pathological AI?
At first glance, the notion of a pathological AI may seem entirely negative and even dangerous. The very word "pathological" conjures images of dysfunction, harm, and fear—qualities we would instinctively want to avoid in any system, let alone one as powerful as AI. But when viewed within the context of a broader AI ecosystem, a pathological AI could paradoxically serve as a necessary component for ensuring overall safety and balance.
To understand the sense in which I use the term "pathological," we need to turn to Ron Westrum's model of organizational typology, traditionally applied to human organizations. Westrum categorizes organizations into three distinct types: pathological, bureaucratic, and generative. In his model, pathological organizations are power-oriented and hoard information, bureaucratic organizations are rule-oriented and guard their processes and turf, and generative organizations are performance-oriented and let information flow freely.
While most organizations might aspire to be generative, a well-functioning organization often requires elements of all three typologies. For example, it might be hazardous to openly share sensitive information, such as upcoming layoffs or merger plans, necessitating a more pathological approach for such topics. Strict boundaries, and their enforcement, may be critical to the organization's survival, protecting it from internal and external threats. Human resources, compliance teams, and legal departments serve as the bureaucratic elements of an organization, ensuring that laws and processes are followed even if doing so slows down the generative side of the organization.
Westrum's framework, though designed for human organizations, offers valuable insights that can be applied to the realm of AI as well. Just as in human organizations, a safe and effective AI ecosystem might need to incorporate aspects of all three typologies. Bureaucratic AI could serve to slow down or regulate the actions of generative AI, ensuring that its outputs are carefully vetted and aligned with society's broader goals. But the ultimate check on generative AI may be a pathological AI that is intentionally designed to be secretive, or even oppositional.
A pathological AI would operate with a primary focus on safeguarding against the risks associated with AI outputs. Its role would be to critically assess, challenge, and limit the outputs of generative AI systems. While generative AI is designed to produce novel solutions, content, and ideas, a pathological AI would serve as a gatekeeper with the power to halt or adjust potentially harmful or unauthorized outputs before they reach the end user.
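To make these roles more concrete, here is a minimal Python sketch of how such a layered review might look. Everything in it is hypothetical: the class names, the risk score, and the veto threshold are illustrations of the division of labor, not references to any existing system or library.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Draft:
    """An output produced by a generative model, awaiting review."""
    content: str
    risk_score: float  # 0.0 (benign) to 1.0 (clearly harmful), from some separate evaluator


class BureaucraticReviewer:
    """Rule-oriented layer: applies fixed, explicit policies to every output."""

    def __init__(self, banned_topics: set[str]):
        self.banned_topics = banned_topics

    def review(self, draft: Draft) -> Optional[Draft]:
        if any(topic in draft.content.lower() for topic in self.banned_topics):
            return None  # policy violations never pass
        return draft


class PathologicalGatekeeper:
    """Oppositional layer: defaults to distrust and vetoes anything it deems risky."""

    def __init__(self, veto_threshold: float = 0.3):
        # Deliberately low threshold: this layer is valued for blocking, not producing.
        self.veto_threshold = veto_threshold

    def review(self, draft: Draft) -> Optional[Draft]:
        if draft.risk_score >= self.veto_threshold:
            return None  # halt the output before it reaches the end user
        return draft


def release_pipeline(draft: Draft,
                     bureaucrat: BureaucraticReviewer,
                     gatekeeper: PathologicalGatekeeper) -> Optional[str]:
    """Only outputs that survive both the rule-based and the oppositional review are released."""
    for reviewer in (bureaucrat, gatekeeper):
        checked = reviewer.review(draft)
        if checked is None:
            return None
        draft = checked
    return draft.content
```

The point of the sketch is the separation of duties: the bureaucratic layer enforces explicit policies, while the pathological layer defaults to distrust and can halt an output on its own authority.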
Much as the branches of government check one another in a well-functioning democracy, generative AI needs checks and balances. Without them, we risk creating a generative-only AI ecosystem as dysfunctional as Enron or FTX, companies that failed spectacularly due to a lack of proper oversight and balance within their governance structures. In AI, the stakes are even higher: a lack of balance could lead not just to financial ruin but to potentially catastrophic consequences on a global scale.
As we continue to develop and deploy AI technologies, we should keep in mind that focusing only on generative systems may create an imbalance in our AI ecosystem. A safe AI ecosystem may be neither purely generative nor entirely controlled by bureaucratic rules. It may also include reticent, pathological AI systems that are incentivized to initiate a full shutdown of other AI systems if those systems start causing human harm.
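As a rough sketch of that shutdown incentive, the snippet below imagines a pathological component whose only job is to watch a harm signal and stop everything once it crosses a threshold. The harm-rate telemetry and the stop_all_systems hook are placeholders for whatever monitoring and control plane a real deployment would expose; none of this corresponds to an existing mechanism.

```python
import time
from typing import Callable


class ShutdownAuthority:
    """Hypothetical pathological component: it produces nothing and is valued only for halting."""

    def __init__(self, harm_threshold: float = 0.01):
        # Deliberately conservative: a small observed rate of harm is enough to act.
        self.harm_threshold = harm_threshold

    def should_shut_down(self, observed_harm_rate: float) -> bool:
        return observed_harm_rate >= self.harm_threshold


def monitoring_loop(authority: ShutdownAuthority,
                    read_harm_rate: Callable[[], float],
                    stop_all_systems: Callable[[], None],
                    poll_seconds: float = 1.0) -> None:
    """Poll a harm signal and trigger a full shutdown the moment it crosses the threshold."""
    while True:
        if authority.should_shut_down(read_harm_rate()):
            stop_all_systems()  # halt every downstream AI system, generative and bureaucratic alike
            return
        time.sleep(poll_seconds)
```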
We may need an ecosystem where different types of AI (generative, bureaucratic, and pathological) work in concert to create a balanced whole. By embracing the counterintuitive notion of pathological AI, we may unlock a critical component of AI safety, ensuring that as we move forward into an AI-powered future, we do so with the necessary safeguards in place.