Regulators Must Act to Establish Generative AI Safeguards

Governments must regulate the rapid advancement of generative AI in cybersecurity as exploitation of the technology by malicious actors becomes increasingly prevalent, according to a recent report from the Aspen Institute. While acknowledging generative AI as a “technological marvel,” the report stresses the urgency of regulatory action given the escalating frequency and severity of cyberattacks, and it places responsibility on regulators and industry bodies to ensure that the technology's benefits are not overshadowed by its potential for misuse.

According to the report, the actions taken by governments, companies, and organizations today will determine whether attackers or defenders derive greater benefits from this emerging technology. However, global responses to generative AI security differ significantly. Nations like the US, UK, and Japan have adopted varied regulatory approaches, as have international bodies such as the United Nations and the European Union.

The UN has prioritized security, accountability, and transparency through various subgroups, while the European Union has been proactive in safeguarding privacy and addressing security concerns related to generative AI. Legislative inactivity in the US has not hindered executive action, with the Biden Administration issuing an executive order providing guidance on evaluating AI capabilities, particularly those with potentially harmful effects. Additionally, agencies like the US Cybersecurity and Infrastructure Security Agency (CISA) have collaborated with UK regulators to provide guidance.

In contrast, Japan has pursued a less interventionist approach, focusing on disclosure channels and developer feedback loops rather than stringent regulations or risk assessments, according to the Aspen Institute.

The report underscores the urgency for governments to act on generative AI regulation, noting that security breaches erode public trust and that AI capabilities evolve rapidly, with the potential for misuse growing daily. Failure to address these issues promptly risks missing opportunities for proactive discussions on the ethical use of generative AI in threat detection and autonomous cyber defenses.