X Releases Back-End Code and Model Weights for Grok LLM

Yeah, I don’t really understand the value of most of the generative AI tools being rolled out in social apps, especially given that they’re gradually eroding the human “social” elements via bot replies and engagement. But they’re there, and they can do stuff. So that’s something, I guess.

Today, X (formerly Twitter) has released the underlying code and model weights for its “Grok” AI chatbot, which gives X Premium+ users sarcastic, edgy responses to their questions, based on X’s ever-growing corpus of real-time posts.

Grok chatbot

As explained by xAI:

“We are releasing the base model weights and network architecture of Grok-1, our large language model. Grok-1 is a 314 billion parameter Mixture-of-Experts model trained from scratch by xAI. This is the raw base model checkpoint from the Grok-1 pre-training phase, which concluded in October 2023. This means that the model is not fine-tuned for any specific application, such as dialogue.”
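
For context, “Mixture-of-Experts” means the model’s parameters are split across a set of expert sub-networks, with a small router deciding which experts actually run for each token, so only a fraction of that 314 billion parameter total is active on any given input. Here’s a minimal toy sketch of the idea in Python. To be clear, this is illustrative only, not xAI’s code; the layer sizes, the tanh expert, and the top-2 routing are assumptions chosen for readability.

import numpy as np

def moe_layer(tokens, expert_weights, router_weights, top_k=2):
    """Route each token to its top_k experts and mix their outputs.

    tokens:         (n_tokens, d_model)
    expert_weights: (n_experts, d_model, d_model), one toy transform per expert
    router_weights: (d_model, n_experts), the learned routing projection
    """
    # Router scores: how strongly each token "wants" each expert.
    logits = tokens @ router_weights  # (n_tokens, n_experts)
    probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)

    out = np.zeros_like(tokens)
    for i, tok in enumerate(tokens):
        # Only the top_k experts run for this token; the rest of the
        # parameters exist but stay idle, which is why a large MoE model
        # is far cheaper per token than a dense model of the same size.
        top = np.argsort(probs[i])[-top_k:]
        gate = probs[i][top] / probs[i][top].sum()  # renormalize the gates
        for g, e in zip(gate, top):
            out[i] += g * np.tanh(tok @ expert_weights[e])
    return out

# Tiny demo with made-up sizes.
rng = np.random.default_rng(0)
d_model, n_experts, n_tokens = 16, 8, 4
y = moe_layer(
    rng.normal(size=(n_tokens, d_model)),
    rng.normal(size=(n_experts, d_model, d_model)) * 0.1,
    rng.normal(size=(d_model, n_experts)) * 0.1,
)
print(y.shape)  # (4, 16)

The practical upshot is that while the full checkpoint is enormous to store, only a couple of experts’ worth of weights are used for any given token, which is how models of this size keep per-token compute costs manageable.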

The release is part of X’s commitment to being more open about how its systems operate, in order to help weed out bias, and to enable third parties to explore those systems for themselves.

AI development, in particular, has become a focus for X owner Elon Musk, who’s taken to criticizing every other chatbot, from OpenAI’s ChatGPT, to Google’s Gemini, to Meta’s Llama, for reportedly being “too woke” to produce accurate responses.

Which, in Musk’s view at least, could pose a risk to humanity.

Which is probably jumping a few steps ahead of reality, given that we’re still a long way off from actual machine “intelligence” as such. But his explanation provides insight into the principle that Musk is standing on, as he looks to promote his own, non-biased AI bot.

Which, given that it’s trained on X posts, would likely be very far from “woke”, or anything like it.

Under X’s “freedom of speech, not reach” approach, the platform now leaves many offensive and harmful posts up in the app, but reduces their reach if they’re deemed to be in violation of its policies. Posts that break the law are removed, but everything else remains viewable, just harder to find in the app.

So if Grok is being trained on the entire corpus of X posts, these highly offensive, but not illegal, comments would be included, which would likely mean that Grok produces misleading, offensive, incorrect responses based on wacky conspiracy theories and long-standing racist, sexist, and otherwise harmful tropes.

But at least it’s not “woke”, right?

Really, Grok is a reflection of Elon’s flawed approach to content moderation more broadly, with X now relying more on its users, via Community Notes, to police what’s acceptable and what’s not, while also removing less content under the banner of “freedom of speech”. But the end result of that will be more misinformation, more conspiracy theories, and more misguided fear and angst.

But it also takes the onus off Musk and Co. to make hard moderation calls, which is exactly what he continues to criticize other platforms for.

So, based on this, is Grok already producing more misleading, incorrect responses? Well, we probably don’t have enough data, because very few people can actually use it.

Fewer than 1% of X users have signed up to X Premium, and Grok is only available in its most expensive “Premium+” package, which is double the price of the basic subscription. So only a tiny fraction of X users actually have access, which limits the amount of insight we have into its actual outputs.

But I would hazard a guess that Grok is just as susceptible to “woke” responses as other AI tools, depending on the questions posed to it, while also being far more likely to produce misleading answers, given that X posts are its input.

You can dig into the Grok code, which is available here, to learn exactly how all of these elements apply, but you would have to assume, based on its inputs, that Grok reflects the increasing array of alternative theories going mainstream on X.

And as noted, I don’t really see what bots like this contribute anyway, considering the focus of “social” media apps.

Right now, you can get in-stream AI bots to create posts for you on Facebook, LinkedIn, and Snapchat, with other platforms also experimenting with caption, reply, and post generation tools. Through these tools, you could create a whole alternative persona, entirely powered by bots, one that sounds more like what you want to be than what you actually are.

Which will inevitably mean that more and more content, over time, will be bots talking to bots on social apps, eliminating the human element, and moving these platforms further away from that core social purpose.

Which, I guess, has already been happening anyway. Over the past few years, the number of people posting on social media has declined significantly, with more conversation instead moving to private messaging chats. That trend was ignited by TikTok, which took the emphasis off who you follow, and put more reliance on AI recommendations based on your activity, pushing social apps to reinvent themselves as entertainment platforms in their own right, as opposed to connection tools.

Every app has followed suit, and now it’s less about being social at all. AI bots are set to take that to the next level, to the point where no one will even bother engaging, due to skepticism about who, or what, they’re actually interacting with.

Is that a good thing?

I mean, engagement is up, so the platforms themselves are happy. But do we really want to be moving to a scenario where the social elements are just side notes?

Either way, that seems to be where we’re headed, though I still don’t see how AI bots add any value to the experience. They simply degrade the original purpose of social apps faster.



