
Wipro Chief Hails India’s Key Role in AI Ethics


Bartoletti, founder of the ‘Women Leading in AI Network,’ highlighted that India has established a distinct and robust digital data protection bill, differing from the European Union’s GDPR…reports Nishant Arora

As the global debate over artificial intelligence (AI) and user harm gains momentum amid the week-long OpenAI saga, India is set to play a crucial role in shaping responsible AI with its first-of-its-kind data privacy legislation and draft regulations on deepfakes, says Ivana Bartoletti, Global Privacy Officer, Wipro Limited.

In an interaction with IANS, Bartoletti, who is also the founder of the ‘Women Leading in AI Network’, said that India has carved out a safe and robust digital data protection bill that differs from the European Union’s General Data Protection Regulation (GDPR).

“Deepfakes are not a new phenomenon. These existed before but the generative AI dimension has brought new factors to it because it now takes two seconds to create a fake image. However, we now have a global alignment to tackle these AI threats,” she told IANS.

“The UK AI Summit at Bletchley Park, Buckinghamshire, earlier this month was a turning point where leaders, including Indian Minister of State for Electronics and IT, Rajeev Chandrasekhar, vouched for robustness, safety and global governance around AI,” she said.

India, along with 27 other countries including the US and the UK, as well as the European Union, signed a declaration at the summit, hosted by UK Prime Minister Rishi Sunak, pledging to work on assessing the risks linked with AI.

After the successful AI Safety Summit in the UK, the Global Partnership on Artificial Intelligence (GPAI) summit in New Delhi next month will, in the presence of world leaders, further deliberate on the risks associated with AI before a global framework is reached in Korea next year, according to Chandrasekhar.

“We have been talking about openness, safety and trust and accountability. We have always argued that innovation must not get ahead of regulation. We have spoken about the need to have safe and trusted platforms,” he said.

The minister said that the future of tech ought to be architected by countries coming together and working to mitigate the potential risks associated with technologies like AI.

At the UK summit, “we have proposed, and this will certainly be a theme at the GPAI and the India AI summit, that technology should not be demonised to a point that we regulate it out of existence and innovation,” the minister noted.

According to Bartoletti, there is, to an extent, a global alignment that AI has to be used in a responsible manner.

“We’re trying to achieve a global agreement around what we’re going to use AI for, but in particular, what we are not going to use AI for. We’re not governing AI or regulating it. We are governing the behaviour of people around AI. So the way that humans develop and deploy AI is really important,” Bartoletti told IANS.

According to her, responsible AI means understanding the risks, training people so that they know how to code, develop and use AI, and then governing AI through a more process-based approach, embedding new controls into the existing governance constructs.

