A law to keep AI under check

The European Union last week reached a broad political agreement on a new law governing the use of Artificial Intelligence (AI) technologies. The final text is yet to be unveiled and will need to be voted on by the European Parliament, but the general approach has been to lay down guardrails deep enough to prevent the abuse of AI while remaining flexible enough to allow innovation, especially in a world where European tech companies will compete with rivals from regions with fewer restrictions on what AI can and cannot do. The EU’s efforts are part of a global race to lay down some rules of the road, especially since the boom in generative AI products last year. On October 30, the US government issued a sweeping executive order laying down its first principles for promoting the “safe, secure, and trustworthy development and use of AI.”

The launch of the ChatGPT chatbot and image creation tools such as Dall-E and Midjourney last year brought centre stage the immense scope of AI, which offers new tools for productivity, invention and discovery, but also warps basic notions of truth, leaving people unable to discern whether the text, images and videos they see are real or synthetic. The large language models behind ChatGPT and Midjourney can carry out general tasks and can be adapted for purposes ranging from writing poetry to creating financial strategies, while more specialised models such as DeepMind’s AlphaFold can predict how proteins take shape just by looking at molecular arrangements, a veritable scientific breakthrough.

The rush to regulate AI encompasses all these facets — development as well as use. While the specifics of the EU law are yet unknown, there is some concern that it has not addressed certain risky use cases, such as emotion recognition and unrestricted predictive policing. It does, however, ban systems that categorise people by their biometrics — an ability that can have deep, population-level profiling implications. The Biden administration’s executive order illustrates how wide the scope of such regulations can be. The US order not only lays down safety testing requirements and the need for companies to make policies consistent with principles of equity and civil rights, but also recognises the work needed to build an AI industry, mitigate harms to the economy, and protect privacy and consumer rights. The proposed EU law and the Biden administration’s executive order are good starting points for the conversations on AI regulation that India must now begin to have.