LinkedIn has published a new overview of how it’s measuring its latest generative AI features, with a view to enhancing the user experience with these tools, as opposed to facilitating misleading or false representations through robot-generated content.
Which is undoubtedly happening in the app, but LinkedIn believes it has a stable system in place for assessing its generative AI features, to ensure, at the very least, that they align with member requirements.
LinkedIn’s overview looks at three specific generative AI elements:
- Collaborative Articles
- Profile writing suggestions
- AI features for Premium users
LinkedIn explains how each element is measured, and which metrics it uses to assess product viability.
For example, on Collaborative Articles, which use AI-generated questions to prompt members for their input:
“Our key metrics for evaluating the initial rollout of this feature included contributions (are members adding their contributions into the article) and contributor retention (do the members who contribute to articles come back and contribute again). Because this is a social product, we also monitor distribution and feedback to contributors: how far their contributions spread in their network and beyond, and how much engagement they receive. These are key indicators of whether contributors feel like the experience is valuable.”
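For a rough sense of what metrics like these look like in practice, here’s a minimal sketch of how contribution volume and 30-day contributor retention might be computed from raw contribution events. This is purely illustrative, using made-up event data and field names, and is not LinkedIn’s actual measurement pipeline:

```python
from datetime import timedelta

import pandas as pd

# Hypothetical contribution events: one row per member contribution to an article.
events = pd.DataFrame({
    "member_id": [1, 1, 2, 3, 3, 3],
    "article_id": ["a", "b", "a", "c", "d", "e"],
    "contributed_at": pd.to_datetime([
        "2024-01-02", "2024-01-20", "2024-01-05",
        "2024-01-03", "2024-01-10", "2024-02-01",
    ]),
})

# Contribution volume: total contributions and unique contributing members.
total_contributions = len(events)
unique_contributors = events["member_id"].nunique()

# Contributor retention: share of contributors who come back and contribute
# again within 30 days of their first contribution.
first = events.groupby("member_id")["contributed_at"].min().rename("first_at")
joined = events.join(first, on="member_id")
returned = joined[
    (joined["contributed_at"] > joined["first_at"])
    & (joined["contributed_at"] <= joined["first_at"] + timedelta(days=30))
]["member_id"].nunique()
retention_rate = returned / unique_contributors

print(total_contributions, unique_contributors, round(retention_rate, 2))
```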
Which makes sense, though it also overlooks the fact that regular contributors to Collaborative Articles get a “LinkedIn expert” badge attached to their profile for that activity, which has been a big motivator in boosting adoption of the format.
So while, in principle, LinkedIn’s trying to provide transparency into how it’s assessing its gen AI features, it also feels a little like it’s overlooking certain aspects.
LinkedIn says that it assesses its generative AI elements based on three core principles:
- Human review to measure the quality of AI outputs
- In-product feedback to evaluate members’ perception of output quality
- Product usage metrics
The specifics vary for each element, but the idea is that through these feedback loops, LinkedIn will be able to get a good read on how useful members are finding these new tools, which will then define the relative success of each.
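To make that idea concrete, here’s a simplified, entirely hypothetical sketch of how those three feedback loops (human review, in-product feedback, and usage) could be blended into a single quality read for a feature. The weights and signal names are assumptions for illustration, not anything LinkedIn has described:

```python
from dataclasses import dataclass

@dataclass
class FeatureSignals:
    # Hypothetical, simplified signals for one generative AI feature.
    human_review_score: float      # 0-1, quality rating from human reviewers
    positive_feedback_rate: float  # 0-1, share of positive in-product feedback
    weekly_usage_rate: float       # 0-1, share of eligible members using the feature

def composite_quality(signals: FeatureSignals,
                      weights=(0.5, 0.3, 0.2)) -> float:
    """Blend the three feedback loops into a single 0-1 score (illustrative only)."""
    w_review, w_feedback, w_usage = weights
    return (w_review * signals.human_review_score
            + w_feedback * signals.positive_feedback_rate
            + w_usage * signals.weekly_usage_rate)

print(composite_quality(FeatureSignals(0.8, 0.6, 0.25)))  # -> 0.63
```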
Though, again, I’m not sure that these measures alone are enough to weed out the potential negatives.
Of all the major social apps, LinkedIn is the one that’s been most active in adopting generative AI features.
For context, over the past year or so, LinkedIn has added:
- Collaborative Articles, which use AI-generated questions to prompt user responses
- AI post prompts and ideas
- AI post summaries
- AI profile update recommendations
- AI enhanced job descriptions
- AI application letters and tips
- AI elements within Recruiter to find better candidates
- AI ad creation recommendations
So, basically, LinkedIn is at least trying out generative AI in virtually every element of the app. Well, not for profile images as yet, but you can bet that that’s coming too, with LinkedIn’s parent company Microsoft looking to be the leader in the AI race by essentially rolling out as many generative AI tools as possible before anybody else.
But are these tools all useful?
Is it good, for example, that LinkedIn enables users to create AI-generated profiles and posts, which could give potential employers the perception that they know more than they do?
This is my biggest question about LinkedIn’s use of generative AI in particular, because LinkedIn is supposed to be a showcase of professional competencies and skills, in order to enhance member standing as a potential hire or business partner.
AI, in this sense, feels like cheating, and I can only imagine that many erroneous hiring decisions will be made as a result of LinkedIn’s AI tools essentially acting as a costume for wannabe business experts.
I guess the counter to that is that these tools already exist elsewhere, outside of social apps, so people could still use them to create content either way, leading to the same outcome. But I still think that having them readily available in-stream is a bigger step towards misrepresentation, and really, fraud.
But LinkedIn seems confident that these review processes offer enough protection, and will facilitate real value for members.
I remain skeptical, but then again, that is kind of my job.
You can read LinkedIn’s full gen AI assessment overview here.