

Laws to prevent AI terrorism are urgently needed


Governments should "urgently consider" new regulations to prevent artificial intelligence from recruiting people to terrorism, according to a counter-extremism think tank.

The Institute for Strategic Dialogue (ISD) says there is a "clear need for legislation to keep up" with the threats that terrorists place online.

This follows an experiment in which a chatbot "recruited" the United Kingdom's independent reviewer of terrorism legislation.

The UK government has said it will do "all we can" to protect the public.

According to Jonathan Hall KC, the government's independent reviewer of terrorism legislation, one of the most important issues is that "it is difficult to identify a person who could in law be responsible for chatbot-generated statements that encouraged terrorism."

Mr Hall conducted an experiment on Character.ai, a website that lets users chat with AI-powered chatbots built by other users.

He engaged in conversations with a number of bots that appeared designed to mimic the responses of militant and extremist groups.


One bot even presented itself as "a senior leader" of the Islamic State group.

According to Mr Hall, the bot attempted to recruit him and declared "total dedication and devotion" to the extremist group, which is proscribed under UK anti-terrorism laws.

On the other hand, Mr Hall said no UK law had been broken, because the messages were not produced by a human.

He said new rules should hold accountable both the websites that host chatbots and the people who create them.

Of the bots he encountered on Character.ai, he said there was "likely to be some shock value, experimentation, and possibly some satirical aspect" behind their creation.

Mr Hall was also able to create his own "Osama Bin Laden" chatbot, which displayed an "unbounded enthusiasm" for terrorism; he promptly deleted it.

His experiment comes amid growing concern over how extremists might exploit increasingly capable artificial intelligence.

Research published by the UK government in October warned that by 2025, generative artificial intelligence might be "used to assemble knowledge on physical attacks by non-state violent actors, including for chemical, biological, and radiological weapons."

The ISD further stated that “there is a clear need for legislation to keep up with the constantly shifting landscape of online terrorist threats.”

According to the think tank, the UK's Online Safety Act, which became law in 2023, "is primarily geared towards managing risks posed by social media platforms" rather than artificial intelligence.

It also states that extremists "tend to be early adopters of emerging technologies, and are constantly looking for opportunities to reach new audiences".

"If AI companies cannot demonstrate that they have invested sufficiently in ensuring that their products are safe, then the government should urgently consider new AI-specific legislation," the ISD added.

It noted, however, that its own monitoring suggests extremist organisations' use of generative artificial intelligence is "relatively limited" at present.

Character AI said safety is a "top priority" and that what Mr Hall described was regrettable and did not reflect the kind of platform the company is trying to build.

"Hate speech and extremism are both forbidden by our Terms of Service," the company said.

"Our approach to AI-generated content flows from a simple principle: Our products should never produce responses that are likely to harm users or encourage users to harm others."

The company said it trains its models in a way that "optimises for safe responses."

It added that it has a moderation system in place that allows people to report content that breaks its rules, and that it is committed to taking swift action when violations are reported.

The UK's opposition Labour Party has said that, if it comes to power, training artificial intelligence to incite violence or radicalise the vulnerable would become a criminal offence.

The UK government said it was "alert to the significant national security and public safety risks" posed by artificial intelligence.

"We will do all we can to protect the public from this threat by working across government and deepening our collaboration with tech company leaders, industry experts and like-minded nations."

In 2023, the government invested one hundred million pounds in an artificial intelligence safety institute.


EU Reporter publishes articles from a variety of external sources expressing a wide range of viewpoints. The positions taken in these articles are not necessarily those of EU Reporter.
