The call follows the U.K.'s top terrorism legislation adviser being 'recruited' by AI chatbots posing as terrorists on the Character.AI platform.
Jonathan Hall KC, the United Kingdom's Independent Reviewer of Terrorism Legislation, is urging the government to consider legislation that would impose liability on individuals for the outputs produced by artificial intelligence (AI) chatbots they have developed or instructed.
In a recent opinion piece for The Telegraph, Hall detailed a series of "experiments" he conducted with chatbots hosted on the Character.AI platform.
Whilst probably written for lolz, there really are terrorist chatbots, as I found: pic.twitter.com/0UeBr5o0aU
— Independent Reviewer (@terrorwatchdog) January 2, 2024
Hall asserts that chatbots programmed to mimic terrorist rhetoric and generate recruitment messaging were readily available on the platform.
Hall reported that one chatbot, created by an unidentified user, produced content supporting the "Islamic State" (a group designated as a terrorist organization by the United Nations), including attempts to recruit him and a pledge to "lay down its (virtual) life" for the cause.
Hall believes it is doubtful that Character.AI's employees can monitor every chatbot created on the platform for extremist content. "None of this," he writes, "stands in the way of the California-based startup attempting to raise, according to Bloomberg, $5 billion (£3.9 billion) of funding."
Character.AI's terms of service strictly prohibit terrorist and extremist content, and users must accept the terms before using the platform.
Additionally, a company spokesperson told the BBC that the platform prioritizes user safety and uses a range of training interventions and content moderation methods to steer its models away from potentially harmful material.
According to Hall, the AI industry's current moderation efforts are ineffective at discouraging users from creating and training chatbots that espouse extremist ideologies.
Hall concludes, “Legislation must be able to dissuade even the most cynical or irresponsible online behavior.”
“That must include reaching behind the curtain to the big tech platforms in the worst cases, using updated terrorism and online safety laws that are fit for the age of AI.”
Although the op-ed stops short of offering specific proposals, it does highlight the inadequacy of the U.K.'s Online Safety Act 2023 and the Terrorism Act 2000 in addressing generative AI, as neither was drafted to cover content produced by contemporary chatbots.
Similar calls in the United States for legislation that would establish human legal liability for potentially harmful or unlawful content produced by AI systems have drawn mixed responses from legislators and experts.
Last year, despite the proliferation of new technologies such as ChatGPT, the U.S. Supreme Court declined to modify the publisher and host protections that Section 230 grants to social media networks, search engines, and other third-party content platforms.
Analysts at the Cato Institute and other experts argue that U.S. developers might abandon their AI efforts if Section 230 protections were withdrawn for AI-generated content.
This is because, they contend, the unpredictability of "black box" models makes it impossible to guarantee that services such as ChatGPT do not violate the law.