Artificial Intelligence (AI) is increasingly described as a modern, technological "Leviathan": a supreme, centralized authority managing security, surveillance, and societal order, much like the sovereign entity that Thomas Hobbes described in Leviathan (1651). As algorithmic systems, rather than people, take on roles such as governing, predicting, and automating, they are likely to reshape global power structures.
- Technological Sovereign: Similar to Hobbes's concept, AI acts as a "digital sovereign" to which society grants power (through data and usage) in exchange for order, efficiency, and safety.
- "Algorithmic Leviathan": This term highlights AI's role in governing via code, data, and infrastructure, bypassing traditional political, legal, and bureaucratic institutions.
- Surveillance and Control: AI systems, such as advanced cameras and predictive analytics, enable a new form of surveillance that regulates behaviour and ensures security, much like a 17th-century sovereign ensuring order.
- Shifting Power: The rise of AI as a Leviathan signals a transition in which authority moves from nation-states to private tech conglomerates.
While this "AI Leviathan" offers immense efficiency, it brings challenges concerning privacy, bias, and accountability. Whether AI is an "uncontrolled leviathan", a massive entity beyond human steering, is a subject of intense debate among experts, with growing concern that AI is rapidly evolving beyond human control. Some researchers and industry leaders argue that the potential for a "superintelligent" system to become uncontrollable is a real, existential threat; others counter that the term is alarmist and that the real danger lies with the humans who control AI's development.
Increased Insecurity and Uncertainty
AI presents a dual-use paradox: it significantly increases risks to global security and uncertainty while simultaneously offering major potential benefits. It amplifies cybersecurity threats, deepfake misinformation, and autonomous-weapon risks, while fuelling societal anxiety over data privacy, job displacement, and opaque, biased decision-making.
Impact on Security
- Cyber and Physical Threats: AI empowers more sophisticated and frequent cyberattacks, threatening critical infrastructure, while autonomous systems could lead to unpredictable, rapid escalation of conflicts. Cyber and physical threats are increasingly merging into combined, hybrid attacks, often referred to as cyber-physical systems (CPS) risks, where digital breaches cause tangible, real-world damage. These threats target integrated infrastructure, such as manipulating industrial control systems (ICS), disabling security cameras, or using compromised IoT devices to breach networks.
- Weaponisation: AI can be used to develop novel chemical weapons or enable autonomous weapon systems, raising fears of catastrophic outcomes.
- Surveillance and Control: AI increases the ability of regimes to implement pervasive surveillance, eroding civil liberties.
- Risks of Over-reliance: Over-automation reduces human vigilance, leaving systems vulnerable to novel attacks that AI fails to detect.
Impact on Uncertainty
- Distrust and Disinformation: The inability to distinguish between synthetic and real content (deepfakes) erodes trust in media and democratic institutions.
- Ethical/Legal Ambiguity: AI bias in hiring, policing, and credit, along with "black-box" decision-making, creates uncertainty about fairness and accountability. ["Black-box" decision-making refers to systems, particularly in artificial intelligence (AI) and machine learning, where inputs are processed to produce outputs without disclosing, or allowing humans to understand, the internal logic, algorithms, or reasoning behind the results. While these systems achieve high performance and accuracy on complex tasks, their opacity makes them difficult to audit, debug, or trust; a minimal sketch after this list illustrates the problem.]
- Unpredictability of the Future: The rapid evolution of AI makes it difficult for regulators and society to predict long-term impacts, and raises the prospect of "flash war" or "flash crash" scenarios, in which automated systems escalate faster than humans can intervene (see the toy simulation after this list).
Mitigation and Future Outlook
- Human-AI Symbiosis: Experts suggest that combining AI capabilities with human judgment, often called "Authentic Intelligence", is crucial for managing risks and reducing uncertainty. [Authentic Intelligence (AQ) refers to the unique, innate human capacity for emotional intelligence, empathy, creativity, and ethical judgment, which complements artificial intelligence (AI). Unlike AI, which analyzes data patterns, AQ focuses on lived experience, moral reasoning, and genuine human connection to drive innovation and understanding. Key aspects of Authentic Intelligence include:
  - Human-Centric Skills: Emphasizes qualities such as curiosity, perspective, and moral choice.
  - Relationship Management: Focuses on building, nurturing, and managing human relationships, which cannot be automated.
  - Complementing AI: Rather than replacing AI, authentic intelligence serves as a guiding force to ensure AI is used ethically and effectively, often described as a "symbiotic relationship".
  - Contextual Understanding: Involves interpreting situations based on experience, context, and emotion rather than just probabilistic data.
  Why this matters now: as AI becomes more prevalent, the demand for human-centric skills increases. The World Economic Forum highlights that authentic intelligence is essential to harness AI for growth while maintaining human values. In business, it helps align teams, ensuring technology acts as an amplifier rather than a replacement for human judgment.]
- Regulation: Active regulation is needed to address the security risks posed by autonomous systems and to establish accountability in decision-making.
- Proactive Security: While AI is a tool for attackers, it is also necessary for building robust, automated defenses that keep pace with the speed of modern threats (a minimal sketch follows this list).
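As one hedged illustration of such automated defenses, the sketch below applies a standard anomaly detector to made-up network-flow statistics; the feature names and traffic values are assumptions for demonstration, not a production design.

    # Minimal sketch: anomaly detection as an automated defense
    # (hypothetical traffic data, scikit-learn assumed available).
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(1)

    # Invented per-flow features: [bytes_sent, packets_per_sec, distinct_ports]
    normal_traffic = rng.normal(loc=[500.0, 20.0, 3.0],
                                scale=[50.0, 5.0, 1.0],
                                size=(1000, 3))
    detector = IsolationForest(contamination=0.01, random_state=1)
    detector.fit(normal_traffic)

    # A burst touching many ports stands far outside the learned baseline.
    suspicious_flow = np.array([[5000.0, 300.0, 60.0]])
    print(detector.predict(suspicious_flow))  # -1 = flagged as anomalous

    # Such detectors triage events at machine speed, but a model trained
    # only on past traffic can still miss genuinely novel attack patterns,
    # echoing the over-reliance risk noted earlier.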
Conclusion
The perception of AI as an uncontrolled leviathan is driven by its unpredictable, fast-moving development and the difficulty of aligning it with human ethics. Many experts, such as Geoffrey Hinton and Yoshua Bengio, consider this a serious threat, while others emphasize that it is not inevitable and depends heavily on how AI is developed, regulated, and managed. Although some dismiss these fears as "neophobia" or irrational alarmism, many researchers agree that the potential for AI to become an uncontrollable force is real enough to require immediate, global safety standards.