July 28, 2023

Tech firms form body to ensure safe development of AI models

Although the Frontier Model Forum currently has only four members, the collective said it is open to new members…reports Asian Lite News

Four major tech companies (Google, OpenAI, Microsoft, and Anthropic) have come together to form a new industry body designed to ensure the “safe and responsible development” of “frontier AI” models.

In response to growing calls for regulatory oversight, these tech firms have announced the formation of the “Frontier Model Forum”, which will draw on the technical and operational expertise of its member companies to benefit the entire AI ecosystem and to develop a public library of solutions supporting industry best practices and standards.

The Forum aims to advance AI safety research to promote responsible development of frontier models and minimise potential risks; identify safety best practices for frontier models; share knowledge with policymakers, academics, civil society and others to advance responsible AI development; and support efforts to leverage AI to address society’s biggest challenges.

Although the Frontier Model Forum currently has only four members, the collective said it is open to new members.

Qualifying organisations must be developing and deploying frontier AI models, as well as showing a “strong commitment to frontier model safety”.

“We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation. We’re all going to need to work together to make sure AI benefits everyone,” said Kent Walker, President, Global Affairs, Google & Alphabet.

Over the coming months, the Frontier Model Forum will establish an Advisory Board to help guide its strategy and priorities, representing a diversity of backgrounds and perspectives.

Moreover, the founding companies will establish key institutional arrangements, including a charter, governance and funding, with a working group and executive board to lead these efforts.

“We plan to consult with civil society and governments in the coming weeks on the design of the Forum and on meaningful ways to collaborate,” the companies wrote in a joint statement on Wednesday.

The Forum will focus on three key areas over the coming year to support the safe and responsible development of frontier AI models: promoting knowledge sharing and best practices among industry, governments, civil society, and academia; supporting the AI safety ecosystem by identifying the most important open research questions on AI safety; and facilitating information sharing among companies and governments.
