Governments must prepare for “The Singularity”. They don’t.

Dennis Hillemann
12 min read · Jan 18, 2024

As a leading expert in administrative law, I view the approaching "Singularity" with deep concern. What was once confined to the realm of science fiction is rapidly becoming a tangible reality. The term describes the moment when artificial intelligence surpasses human intelligence, triggering profound and potentially cataclysmic changes to our society. Yet governments around the globe appear largely unprepared for this momentous event. In this article, I define the Singularity, examine past instances of governmental unpreparedness, critically scrutinize current regulatory frameworks such as the EU AI Act, and set out the measures governments must take to brace themselves for this unprecedented shift.

Understanding the Singularity

A. Defining the Singularity

The Singularity, a concept popularized by futurists like Ray Kurzweil, is the elusive tipping point in time when artificial intelligence will outpace human intelligence, leading to an unstoppable, exponential growth of technology and ultimately transforming human civilization beyond recognition. The rapid advancements in AI, particularly in areas like deep learning and quantum computing, have raised the possibility that this event could happen sooner than initially predicted, potentially within our own lifetimes. It’s a prospect both thrilling and unnerving, as humanity stands on the brink of a revolutionary era where anything seems possible.

B. The Imminence of the Singularity

The advances of 2022 and 2023 underscore how rapidly AI technologies are progressing. Ever-evolving capabilities, such as autonomous decision-making and problem-solving that rival or even exceed human performance in some domains, leave little doubt that the Singularity is not a far-off legend but a looming possibility. The rapid pace of AI research, paired with the exponential growth of computational power, adds weight to forecasts that the Singularity may arrive within mere decades. Even seasoned researchers have been surprised by the progress made so far.

Historical Precedents of Governmental Unpreparedness

A. Lessons from Past Catastrophes

1. The COVID-19 Pandemic

The COVID-19 pandemic, which caused widespread devastation and upheaval across the globe, serves as a stark reminder of governmental unpreparedness for catastrophic events. Despite warnings from the scientific community and previous outbreaks such as SARS and MERS, many governments were ill-equipped to handle the rapid spread of the virus and its unprecedented impact on society.

2. The Financial Crisis of 2008

The global financial crisis of 2008 is another example of governmental unpreparedness. Many governments failed to regulate the banking sector effectively, leading to reckless lending practices and ultimately resulting in a devastating economic downturn. The consequences were severe, with millions losing their jobs and homes, highlighting the need for governments to be proactive in addressing potential threats before they escalate into crises.

3. Natural Disasters

Governments have also been caught off-guard by natural disasters, such as hurricanes, earthquakes, and wildfires. These events can have catastrophic consequences on people’s lives and economies if not adequately prepared for or mitigated.

B. Implications for the Singularity

1. The Consequences of Governmental Unpreparedness for the Singularity

As we approach the Singularity, it is essential to consider the potential consequences of governmental unpreparedness. The rapid advancements in AI and its potential to surpass human intelligence raise several concerns, including economic disruption, social unrest, ethical dilemmas, and legal challenges. Governments must be proactive in preparing for the Singularity to avoid being overwhelmed by the pace of change and its far-reaching implications.

2. Economic Disruption

The Singularity could dramatically alter our economy as machines increasingly replace human labor across various industries. While this could lead to increased efficiency and productivity, it could also result in widespread job loss and income inequality if not managed properly. Governments will need to address these challenges by implementing policies that support retraining and upskilling workers for jobs that require uniquely human skills, such as creativity and emotional intelligence.

3. Social Unrest

The displacement of human workers by advanced AI could also lead to social unrest, particularly if governments fail to address issues such as income inequality and access to education. This unrest could be further exacerbated by discrepancies between different regions or countries in their readiness for the Singularity. For example, developing countries may struggle even more than developed countries in adapting to the changes brought about by AI.

4. Ethical Dilemmas

The Singularity raises complex ethical questions that governments must grapple with, such as how to ensure moral decision-making by AI systems or whether machines should have rights similar to humans. Failure to adequately address these dilemmas could result in societal conflicts or unethical practices being carried out under the guise of AI.

5. Legal Challenges

As AI becomes increasingly autonomous and capable of making decisions without human intervention, governments must update their legal frameworks accordingly. This includes determining liability for accidents or errors caused by AI systems and ensuring ethical standards are upheld in areas like healthcare and criminal justice where decisions made by machines can have profound implications on individuals’ lives.

The Singularity presents a unique and unprecedented set of challenges for governments across the globe, and the failures described above show how costly it can be to face such challenges unprepared.

The EU AI Act — A Step Forward, But Not Enough

A. Overview of the EU AI Act

In April 2021, the European Commission proposed the AI Act, a significant step towards regulating artificial intelligence (AI) in the EU. Following the political agreement reached between the European Parliament and the Council in December 2023, the legislation is on course for formal adoption and will apply across all EU member states. Its goal is to ensure that AI systems are safe, adhere to EU principles and guidelines, and do not violate fundamental rights. It also addresses concerns regarding transparency, accountability, and the need for human oversight in the development and use of AI technology.

Categorization of AI Systems

The AI Act categorizes AI systems by risk level: unacceptable risk, high risk, limited risk, and minimal risk. Unacceptable-risk systems are those that infringe on fundamental rights or have a significant potential for manipulation, and they are prohibited outright. High-risk systems are those that could have serious consequences for individuals or society if they malfunction or are used inappropriately. Limited-risk systems are subject mainly to transparency obligations, while minimal-risk systems face no additional regulatory requirements.
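To make the tiered structure concrete, here is a minimal sketch in Python that models it as a simple lookup. The tier names mirror the Act's categories, but the example use cases, the `classify_risk` helper, and the default tier are illustrative assumptions, not classifications taken from the legislation.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers broadly following the EU AI Act's structure."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict requirements before market entry or use
    LIMITED = "limited"            # mainly transparency obligations
    MINIMAL = "minimal"            # no additional obligations

# Illustrative mapping only: real classification depends on the Act's annexes
# and on how a system is actually deployed, not on a keyword lookup.
EXAMPLE_USE_CASES = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "CV screening for hiring": RiskTier.HIGH,
    "creditworthiness scoring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def classify_risk(use_case: str) -> RiskTier:
    """Look up the illustrative tier; default to MINIMAL for unknown cases."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)

for case, tier in EXAMPLE_USE_CASES.items():
    print(f"{case}: {tier.value}")
```

The point of the sketch is simply that obligations scale with risk: the higher the tier, the heavier the compliance burden before a system may be placed on the market.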

Requirements for High-Risk AI Systems

High-risk AI systems must comply with strict requirements before being placed on the market or used in the EU. These include undertaking a conformity assessment procedure, ensuring high-quality datasets, implementing technical safety measures, providing proper documentation and labeling, and conducting post-market monitoring.

Prohibitions for Unacceptable Risk AI Systems

The AI Act prohibits certain types of AI systems that present an unacceptable level of risk to individuals or society. These include social scoring of individuals by public authorities, real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (subject only to narrowly defined exceptions), and systems that manipulate human behavior in ways likely to cause harm.

Transparency and Human Oversight

The EU AI Act places a strong emphasis on transparency and human oversight in the development and use of high-risk AI systems. This includes ensuring clear communication with users about the capabilities and limitations of these systems, providing information about how decisions are made by AI algorithms, and establishing mechanisms for human intervention or review in cases where decisions made by machines could impact individuals’ lives.
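To illustrate what such a human-intervention mechanism could look like in practice, the following sketch routes low-confidence automated decisions to a human review queue. The `Decision` structure, the 0.85 confidence threshold, and the queue itself are assumptions made for this example; the Act does not prescribe any particular implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

CONFIDENCE_THRESHOLD = 0.85  # assumed policy value, not prescribed by the Act

@dataclass
class Decision:
    subject_id: str
    outcome: str        # e.g. "approve" / "reject"
    confidence: float   # the system's own confidence estimate, 0..1
    rationale: str      # explanation shown to the affected person
    reviewed_by_human: bool = False

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, decision: Decision) -> None:
        self.pending.append(decision)

def gate(decision: Decision, queue: ReviewQueue) -> Optional[Decision]:
    """Release high-confidence decisions; escalate the rest to a human reviewer."""
    if decision.confidence >= CONFIDENCE_THRESHOLD:
        return decision
    queue.submit(decision)  # a human must confirm or override before release
    return None

# Usage: a low-confidence rejection is held back for human review.
queue = ReviewQueue()
auto = Decision("applicant-42", "reject", 0.61, "income below threshold")
released = gate(auto, queue)
print(released, len(queue.pending))  # None 1 -> escalated
```

The design choice worth noting is that the rationale travels with the decision, so both the affected person and the human reviewer see why the system decided as it did.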

B. Shortcomings of the EU AI Act

1. Ignoring the Singularity

The EU AI Act primarily focuses on current AI applications and does not take into account the potential for exponential growth and transformative impacts of AI as envisioned by some experts. The concept of the Singularity, where AI becomes vastly superior to human intelligence, is not adequately addressed in the legislation.

2. Lack of Specific Guidelines

While the Act provides general principles and requirements, it lacks specific guidelines for compliance. This leaves room for interpretation and potential loopholes in implementation.

3. Inadequate Enforcement Mechanisms

The EU AI Act relies heavily on self-assessments by developers and manufacturers, which may not be sufficient to ensure compliance with regulations. There is a lack of clear enforcement mechanisms to hold those who violate the Act accountable.

4. Unrealistic Timelines

The implementation timelines set by the Act may be too long, given the rapid evolution and pace of development in AI technology. This could result in outdated regulations unable to keep up with advancements in AI.

5. Potential Bias

There is a risk that bias in datasets used to train high-risk AI systems could perpetuate existing discrimination or inequalities when making decisions that impact individuals or groups.
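One widely used heuristic for surfacing this kind of bias is the disparate impact ratio, sometimes called the four-fifths rule from US employment practice: compare favorable-outcome rates across groups and flag ratios below 0.8. The sketch below applies it to made-up audit figures; the numbers and the 0.8 threshold are illustrative, and the EU AI Act does not mandate this particular test.

```python
def disparate_impact_ratio(favorable_a: int, total_a: int,
                           favorable_b: int, total_b: int) -> float:
    """Ratio of the lower group's favorable-outcome rate to the higher group's."""
    rate_a = favorable_a / total_a
    rate_b = favorable_b / total_b
    low, high = sorted((rate_a, rate_b))
    return low / high

# Made-up audit figures: loan approvals for two demographic groups.
ratio = disparate_impact_ratio(favorable_a=180, total_a=400,   # 45% approved
                               favorable_b=270, total_b=450)   # 60% approved
print(f"disparate impact ratio: {ratio:.2f}")  # 0.75
if ratio < 0.8:  # four-fifths rule of thumb
    print("potential disparate impact -- review the training data and features")
```

A single ratio cannot prove or disprove discrimination, but checks of this kind are the sort of measurable evidence that regulators and developers could build into conformity assessments.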

6. Need for International Cooperation

As AI knows no borders, regulating it effectively requires international collaboration and cooperation among governments. The EU AI Act does not address this need for global coordination and could hinder efforts towards a harmonized approach to regulating AI worldwide.

While the EU AI Act is a significant step towards regulating artificial intelligence, it falls short in several areas and may not adequately address future challenges posed by rapidly advancing technology. Governments must continue to review and update regulations to keep pace with AI developments and ensure responsible use of this powerful technology.

Preparing for the Singularity: Steps for Governmental Preparation

Establishing a Dedicated Singularity Task Force

1. Monitoring and Assessing AI Advancements

A dedicated Singularity task force would closely track and analyze developments in AI technology, including breakthroughs in machine learning, natural language processing, and robotics. This would allow governments to stay informed about the latest advancements and assess their potential impact on society.

2. Conducting Risk Assessments

The task force would also be responsible for conducting risk assessments of current and emerging AI applications. This includes identifying potential risks and threats posed by advanced AI systems, such as bias, privacy breaches, and job displacement.

3. Developing Strategic Responses

Based on risk assessments, the task force would collaborate with experts from various fields to develop strategic responses to mitigate potential risks associated with the Singularity. This could include policy recommendations, guidelines for developers, and investment in research and development of safe AI technologies.

4. Facilitating Public Dialogue

To ensure that the public is involved in discussions about the Singularity, the task force could organize forums or public consultations to gather feedback and concerns from citizens. This would allow governments to consider public perspectives when developing policies related to advanced AI.

5. Coordinating International Efforts

Given the global nature of AI research and development, a dedicated task force could also facilitate collaboration with other countries’ governments and international organizations to foster a coordinated approach towards addressing the challenges posed by advanced AI systems.

Investing in Research and Development

To effectively prepare for the Singularity, governments must prioritize research and development (R&D) in AI technology. This includes not only advancing the capabilities of AI, but also ensuring its safe and ethical implementation. Below are some steps that governments can take to invest in R&D for the Singularity.

A. Supporting Universities and Research Institutions

1. Increase funding for AI research

Governments should prioritize funding for universities and research institutions engaged in AI research. This could include increasing grants or creating dedicated funding programs specifically for AI projects.

2. Encouraging collaboration between academia and industry

Collaboration between academia and industry is crucial for advancing AI research and developing practical applications. Governments can facilitate this by creating partnerships or providing incentives for joint projects.

3. Promoting interdisciplinary research

AI is a multi-faceted field that requires expertise from various disciplines such as computer science, psychology, ethics, and law. Governments can promote interdisciplinary research by providing funding for collaborations between different departments or offering grants for interdisciplinary projects.

B. Providing Grants for Innovative Projects

1. Funding ethical and safety-focused initiatives

Governments can provide grants specifically aimed at supporting innovative projects that address potential risks associated with advanced AI systems. This could include developing tools to detect bias in algorithms, creating ethical guidelines for developers, or designing safety protocols for autonomous systems.

2. Supporting startups and small businesses

Governments can allocate funds towards supporting startups and small businesses focused on developing safe and ethical AI technologies. This could include providing grants, tax breaks, or other financial incentives to encourage innovation in this area.

3. Offering prizes/contests

Prizes or contests can be a powerful way to stimulate R&D efforts in specific areas of concern related to the Singularity, such as explainable AI or control mechanisms for superintelligent systems.

C. Collaboration with International Partners

Given the global nature of advanced AI development, collaboration with international partners is essential. Governments can establish partnerships with other countries and international organizations to share knowledge, expertise, and resources, and to align on common safety standards.

Preparing for the Singularity: Investing in Education

With the rapid advancement of AI technology, it is essential to invest in education programs that prepare future generations for a world in which advanced AI systems will play a significant role. This includes not only technical skills, but also critical thinking and ethics training to ensure the safe and responsible development and use of AI.

Below are some key steps that governments can take to invest in education for the Singularity.

1. Incorporating AI into school curriculum

To prepare students for a world where AI will be ubiquitous, governments should incorporate AI-related topics into school curricula at all levels. This could include introducing basic concepts of AI and coding starting from elementary school, as well as offering more advanced courses in high school and university.

2. Supporting STEM (Science, Technology, Engineering, and Mathematics) education

AI is a multidisciplinary field that requires a strong foundation in STEM subjects. Governments should prioritize funding and resources towards supporting STEM education programs to foster interest and proficiency in these areas.

3. Promoting ethical literacy

As AI raises complex ethical questions, schools should incorporate ethical literacy into their curriculum. This could involve teaching students how to identify and analyze ethical issues related to AI development and use.

4. Offering specialized degrees/certificates

Governments can provide funding or grants for universities to develop specialized degrees or certificates focused on AI-related fields such as machine learning or robotics.

5. Collaborating with industry

Partnerships between educational institutions and industry can provide students with real-world experience working with advanced AI systems while also allowing companies to identify potential talent early on.

6. Providing ongoing training for professionals

It is crucial to ensure that current professionals have the necessary skills to work with advanced AI systems effectively. Governments can facilitate this by offering training programs or subsidies for professionals looking to upgrade their skills in this area.

Fostering International Collaboration and Dialogue

A. Establishing Global AI Standards

As AI technology continues to advance, it is essential to establish global standards to ensure safe and responsible development and use of AI systems. Governments can work together to develop these standards, drawing on the expertise of international organizations such as the United Nations and the Organization for Economic Cooperation and Development (OECD).

B. Coordinating Research and Development Efforts

Governments can also coordinate their research and development efforts to share knowledge and resources in developing advanced AI systems. This could involve establishing joint research initiatives or providing funding for collaborative projects.

C. Facilitating Data Sharing

Data is a crucial component in training advanced AI systems, but access to data can be limited due to privacy concerns or proprietary restrictions. Governments can facilitate data sharing by establishing legal frameworks that protect individual privacy while allowing for broader sharing of data for research purposes.
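One technical mechanism often discussed alongside such legal frameworks is differential privacy, in which calibrated noise is added to aggregate statistics before release so that no individual record can be reliably inferred from the output. The sketch below uses the standard Laplace mechanism; the epsilon values and the example count query are assumptions chosen purely for illustration.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count via the Laplace mechanism: noise scale = sensitivity / epsilon."""
    return true_count + laplace_noise(sensitivity / epsilon)

# Example: release how many records in a shared research dataset match a condition.
true_count = 1_284
for eps in (0.1, 1.0, 10.0):  # smaller epsilon -> stronger privacy, noisier result
    print(eps, round(private_count(true_count, eps), 1))
```

Smaller epsilon values give stronger privacy guarantees at the cost of noisier statistics, which is exactly the trade-off a legal framework for research data sharing would need to calibrate.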

D. Promoting Ethics in AI Development

As advanced AI systems become more prevalent, it is crucial to promote ethical principles in their development and use. Governments can collaborate on ethical guidelines for AI and establish mechanisms for monitoring compliance with these principles.

E. Addressing Economic Impacts

The Singularity will bring significant economic impacts, including potential job displacement due to automation. Governments should engage in discussions with other countries on strategies for addressing these impacts, such as retraining programs or universal basic income.

F. Hosting International Conferences

Governments can also host international conferences focused on discussing the implications of the Singularity and promoting collaboration among experts from different countries.

G. Building Diplomatic Channels

Establishing diplomatic channels specifically focused on discussing advanced AI development can facilitate communication between governments on this critical issue.

H. Supporting Developing Countries

Governments should also consider supporting developing countries in their efforts towards advancing their own AI capabilities through partnerships, training programs, or technology transfer initiatives.

As we hurtle towards the Singularity, most governments have barely begun to prepare for the upheaval it will bring. The looming Singularity presents a daunting mix of formidable challenges and enormous potential, making it crucial for governments to take decisive action now. Their choices in the coming years will help determine whether society is propelled towards something like utopia or dragged into a dystopian outcome by a force it can no longer control. Failure to acknowledge and prepare for the Singularity's arrival would be a catastrophic mistake with irreversible consequences.


Dennis Hillemann

Lawyer and partner with a track record of successful litigation and a passion for innovation in the legal field