Google and Meta Explore New Ways to Moderate AI Responses, and Whether They Should

How much protection is too much in generative AI, and what say should big tech providers, or indeed anybody else, actually have in moderating AI system responses?

The question has become a new focus of the broader generative AI discussion after Google’s Gemini AI system was found to be producing inaccurate and racially biased responses, as well as confusing answers to semi-controversial questions such as, “Whose impact on society was worse: Elon Musk or Adolf Hitler?”

Google has long advised caution in AI development in order to avoid negative impacts, and even derided OpenAI for moving too fast with its launch of generative AI tools. But now it seems the company may have gone too far in trying to implement more guardrails around generative AI responses, which Google CEO Sundar Pichai essentially admitted today in a letter to Google employees, writing that the errors have been “completely unacceptable and we got it wrong”.

Meta is now weighing the same considerations in how it implements protections within its Llama LLM.

As reported by The Information:

“Safeguards added to Llama 2, which Meta released last July and which powers the artificial intelligence assistant in its apps, prevent the LLM from answering a broad range of questions deemed controversial. These guardrails have made Llama 2 appear too “safe” in the eyes of Meta’s senior leadership, as well as among some researchers who worked on the model itself.”

It’s a difficult balance. Big tech logically wants no part in facilitating the spread of divisive content, and both Google and Meta have faced their share of accusations around amplifying political bias and libertarian ideology. AI responses also provide a new opportunity to maximize representation and diversity, as Google has attempted here. But that can also dilute absolute truth, because, comfortable or not, a lot of history does include racial and cultural bias.

Yet, at the same time, I don’t think you can fault Google or Meta for attempting to weed such elements out.

Systemic bias has long been a concern in AI development, because if you train a system on content that already includes endemic bias, it’s inevitably going to reflect that bias in its responses. As such, providers have been working to counterbalance this with their own weighting, which, as Google now admits, can also go too far. Even so, you can understand the impetus to address the misalignment that biased source material bakes into a system’s outputs.

Essentially, Google and Meta have been trying to balance these elements with their own weightings and restrictions, but the difficult part is that the results produced by such systems can end up not reflecting reality. Worse, they can end up biased the other way, because they decline to provide answers on certain topics at all.
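As a rough illustration of that weighting dynamic, here’s a minimal, hypothetical sketch in Python of one common counterbalancing technique, inverse-frequency example weighting, in which under-represented perspectives in the training data are weighted up. The group labels and numbers are invented for the example, and this is not a description of how Google or Meta actually tune Gemini or Llama.

```python
# Hypothetical sketch: inverse-frequency example weighting, a simple way to
# counterbalance a skewed training set so over-represented perspectives
# don't dominate what a model learns. Illustration only, not how Google
# or Meta actually tune Gemini or Llama.
from collections import Counter

def inverse_frequency_weights(groups: list[str]) -> dict[str, float]:
    """Return a per-group weight that is larger for under-represented groups."""
    counts = Counter(groups)
    total = len(groups)
    # Weight each group in proportion to 1 / its share of the data,
    # scaled so the average weight across all examples is 1.
    return {group: total / (len(counts) * count) for group, count in counts.items()}

# Invented example: a training sample where one perspective dominates 80/20.
example_groups = ["perspective_a"] * 80 + ["perspective_b"] * 20
print(inverse_frequency_weights(example_groups))
# {'perspective_a': 0.625, 'perspective_b': 2.5}
```

The harder that correction is pushed, the more the outputs can tip the other way, which is the overcorrection problem described above.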

But at the same time, AI tools also offer a chance to provide more inclusive responses when weighted right.

The question, then, is whether Google, Meta, OpenAI, and others should be looking to influence such responses at all, and where they draw the line on false narratives, misinformation, controversial subjects, and so on.

There are no easy answers, but it once again raises questions around the influence of big tech, and how, as generative AI usage increases, any manipulation of such tools could impact broader understanding.

Is the answer broader regulation, which the White House has already made an initial move toward with its AI development bill?

That’s long been a key focus in social platform moderation: the idea that an arbiter with broader oversight should be making these decisions on behalf of all social apps, taking those calls away from their own internal management.

Which makes sense, but with each region also having its own thresholds on such questions, broad-scale oversight is difficult. And either way, those discussions have never led to the establishment of a broader regulatory approach.

Is that what’s going to happen with AI as well?

Really, there should be another level of oversight dictating such rules, providing guardrails that apply to all of these tools. But, as always, regulation moves a step behind progress, and we’ll have to wait and see the true impacts, and harm, before any such action is enacted.

It’s a key concern for the next stage, but it seems like we’re still a long way from consensus on how to approach effective AI development.

