AI is changing work and how to look for work. A top LinkedIn executive explains how his service is adapting
Hello and welcome to Eye on AI. In this edition: LinkedIn chief product officer Tomer Cohen talks about the future of work and how the Microsoft-owned professional social network is using AI to make the lives of recruiters and job seekers, hopefully, better…OpenAI closes the largest venture capital funding round ever…Big Pharma learns to share data…and London startup Synthesia grants actors equity in exchange for their likeness. Is it a model for solving AI’s IP conundrum?
If you want to know how AI is changing the nature of work, LinkedIn offers a good vantage point. The Microsoft-owned professional social network is a key hub for job seekers and recruiters—every minute, 10,000 people apply for a job through the platform and seven people are successfully hired on it, according to the company. That means it has lots of data on what roles companies are hiring for and the skills they are looking for. LinkedIn is also a good lens through which to examine how AI is altering the nature of looking for work.
The person ultimately responsible for rolling out AI product features at LinkedIn is Tomer Cohen, the company’s chief product officer. I recently sat down with Cohen at LinkedIn’s London office to chat about AI’s impact on job seekers, recruiters, and on LinkedIn’s own platform.
70% of skills in most jobs will change by 2030

Cohen started out by telling me that the company’s research suggests that 70% of skills used in most jobs will change by 2030, with AI being a big driver of those changes. That’s only five years from now. And there are already signs of big shifts happening. LinkedIn also publishes an annual report called “Jobs on the Rise” about which roles are seeing the most growth in job listings in specific geographies. This year, 70% of the roles seeing the fastest growth were new to the list. And what was the most in-demand role on the list? Well, perhaps not surprisingly, it was “artificial intelligence engineer.”
With roles potentially morphing so quickly, Cohen says, wise employers are starting to think less about the specific roles they need to fill—and in fact, are deconstructing some traditional roles—and more about what skills they need their employees to have both today and in the future. So this year, LinkedIn produced a new report called “Skills on the Rise.” Again, not surprisingly, it turns out “AI literacy” ranks as one of the most sought-after skills. But so too do broad, human-oriented skills such as “innovative thinking,” “problem solving,” “strategic thinking,” “public speaking,” “conflict mitigation,” and “relationship building.”
For Cohen, the most striking stat from LinkedIn’s research is that people entering the workforce now will likely have twice as many roles in their career as someone who entered the workforce 15 years ago. “If there was ever a time to build a growth mindset and emphasis on adaptability and agility and the ability to learn and shift between roles, it’s now, right,” he says. Formal college and university education is going to matter much less than it did before—at least in terms of what degree people actually get. Instead, smart employers, he says, are going to be looking for life-long learners who can quickly acquire new skills and adapt to new responsibilities.
Learning to let employees learn to learn

Cohen used the example of how AI was rapidly enabling the creation of a new role that he calls “the full stack builder”—by which he means someone who can, with the help of AI, perform functions that were previously siloed into different roles, including research and development, design, engineering, and product.
He says the most successful companies during this AI transition will be those that give their employees the time to learn skills and experiment with building things with AI. He also notes that there is a tension because time spent learning is often time away from actually doing the day-to-day work and because not all experiments in trying to build things with AI will be successful. But he says companies need to find this balance. If anything, he says, they should tip the scale in favor of helping employees learn AI skills.
“If you are over-indexing on performing [as opposed to learning], you will be behind,” he says. “Giving people space to learn is critical. You have to transform your own workforce. If in one year’s time, you are disappointed that your workforce is not ‘AI native,’ it is your fault [for not giving them time to learn AI skills.]”
Recruitment becomes an AI vs. AI game

I asked Cohen about complaints that AI is having a detrimental effect on the recruitment process. I’ve heard companies say candidates are using generative AI to apply for many more jobs than in the past, inundating employers with applications. What’s more, more people are using generative AI to burnish their CVs and cover letters, making applicants appear more homogenous and making the screening process more difficult—forcing employers in many cases to turn to AI to do the initial screening of applicants.
Job seekers, on the other hand, complain that the way recruiters are using AI may not give candidates a fair shake—especially if those AI tools are not set up to take into account the shifting emphasis towards softer, harder-to-assess skills that Cohen talked about. The use of AI tools for initial screening interviews, something many companies now use, can feel dehumanizing for job seekers—and might unfairly disadvantage candidates who would be good hires but are flustered by doing the video interview with an AI bot. (Worse, in some cases the AI screening tools may harbor hidden biases that even the companies using them may not be aware of.)
Cohen acknowledged that these were problems. But he said LinkedIn’s AI tools were hopefully designed to help counteract some of these trends. For instance, he says it is a tough job market right now in most of the developed world. As a result, many job seekers are feeling a bit desperate and generative AI has in some ways made it easier for people to apply for jobs that might not be the best fit for them. LinkedIn now has AI-powered tools that help a candidate decide how good a match their skills are for a role, providing them with a percentage for how closely they match what the employer is seeking. Cohen says that more than a third of job seekers on LinkedIn use this tool. LinkedIn has also revamped its search process using generative AI, so job seekers no longer need to use keywords that might match what is in the job description and instead can simply describe in plain English what sorts of jobs they are looking for.
The company has also debuted an AI-powered coaching tool that people can use to practice work conversations and receive AI-generated feedback from a coaching model specifically trained to give the sort of feedback that an executive coach might provide. The tool, which works with both voice and text, is mostly designed for the kinds of interactions that an employee and a manager might have: giving challenging feedback, conducting a performance review, or discussing work-life balance. But it could also be used to practice for a job interview. The tool is available in English to LinkedIn Premium subscribers.
When it comes to recruitment, LinkedIn has used generative AI to power outreach to candidates. These AI-crafted messages result in a 40% higher response rate, and candidates also respond 10% faster than without AI assistance, Cohen says. And just this month the company launched its first “AI agent”—called “Hiring Assistant”—that is designed to do many of the tasks that a junior recruiter might perform. “Everything from sourcing all the way to reaching out to candidates will be automated for [recruiters], so they can focus on those phone calls and interactions and meetings with the candidates,” he said.
The agent has been piloted by some big companies, including SAP, Siemens, and Verizon. Digital infrastructure company Equinix, which was one of the initial users, reported that using the AI agent allowed each of its human recruiters to increase the number of open roles they can handle at a given time from an average of five to an average of 15.
That’s the kind of productivity boost that makes business executives grin. But I’m not convinced companies are taking on board Cohen’s message about life-long learning and finding ways to transform their existing workforces for a future where work is organized around a dynamic set of skills, not roles. Too many companies, particularly in a job market that favors employers, find it easier to fire workers and then hire new ones with experience that seems to exactly match a job description—rather than figure out how to reskill their existing workforce. What’s more, existing recruitment processes are generally poor at assessing people for the kinds of soft skills—adaptability, learning efficiency, flexibility, and resilience—Cohen says will matter most in this brave new world. There’s an opportunity there for companies that can develop and deploy such assessments first.
With that, here’s the rest of this week’s AI news.
Jeremy Kahn jeremy.kahn@fortune.com @jeremyakahn
Before we get to the news, if you’re interested in learning more about how AI will impact your business, the economy, and our societies (and given that you’re reading this newsletter, you probably are), please consider joining me at the Fortune Brainstorm AI London 2025 conference. The conference is being held May 6-7 at the Rosewood Hotel in London. Confirmed speakers include Mastercard chief product officer Jorn Lambert, eBay chief AI officer Nitzan Mekel, Sequoia partner Shaun Maguire, noted tech analyst Benedict Evans, and many more. I’ll be there, of course. I hope to see you there too. You can apply to attend here.
And if I miss you in London, why not consider joining me in Singapore on July 22 and 23 for Fortune Brainstorm AI Singapore. You can learn more about that event here.
AI: Speed matters more, scale matters less, innovation matters most

As businesses embrace AI-driven models, they’ll need to rethink everything from workforce strategies to innovation processes. Critical shifts in strategy will emphasize speed more, scale less, and innovation most of all. The time to embrace AI is now. Read more
OpenAI completes record fundraise at $300 billion valuation. The AI startup raised a record $40 billion venture capital round led by Japan’s SoftBank, which boosted OpenAI’s valuation to $300 billion, Bloomberg reports. The amount is nearly double the company’s previous valuation, which it achieved during a funding round in October. The deal includes an initial $10 billion investment, with a second tranche of $30 billion due by the end of 2025, contingent on OpenAI completing a corporate restructuring to separate its for-profit arm from its parent nonprofit entity. If that doesn’t happen, SoftBank could reduce this second tranche to $20 billion, although OpenAI would also gain the right to seek other outside investors.
Judge says the New York Times’ copyright lawsuit against OpenAI can go forward. A federal judge has ruled that the New York Times and other newspapers can proceed with a copyright lawsuit against OpenAI and Microsoft for allegedly using their articles to train AI chatbots without consent, the AP reports. While some claims were dismissed, the core copyright infringement allegations remain, meaning the case may be heading to a jury trial. OpenAI maintains that its AI models are built using publicly available data in a manner grounded in fair use. Microsoft declined to comment on the ruling.
CoreWeave IPO disappoints. The AI data center provider’s stock market debut was closely watched for signs of the health of the AI boom and also the IPO market more broadly. But the company wound up raising just $1.5 billion, significantly less than it initially targeted, and its shares slumped in their first two days of trading—although they rebounded strongly today to climb above its IPO price. In a Fortune article, I argued that CoreWeave’s uneven trading debut says more about the company’s debt-fueled business model than anything about how impactful AI is as a technology, but a few critics argue otherwise. For the opposite take see Ed Zitron here and Gary Marcus here. And for CoreWeave CEO Mike Intrator’s own explanation for why the company scaled back its IPO, see his interview with my Fortune colleague Diane Brady here.
OpenAI restricts use of 4o Image Generation to create ‘Studio Ghibli’ effect. After social media was flooded by people using OpenAI’s new Image Generation AI capability—which is integrated into its ChatGPT service—to create images that mimicked the style of Japanese anime company Studio Ghibli, OpenAI decided to block people from continuing to use “Studio Ghibli” as a prompt. Animator Hayao Miyazaki, who cofounded Studio Ghibli, had been particularly disparaging about AI, and many artists, anime fans, and others in creative sectors voiced concerns that the 4o Image Generation-created Ghibli knock-offs would devalue Studio Ghibli’s original work, which depends on painstakingly hand-drawn and hand-painted animation. OpenAI has previously not allowed its image generation models to accept prompts using the names of living artists and it is not clear why it did not implement similar guardrails with 4o Image Generation. You can read more from Business Insider here.
Google DeepMind slows publication of research to avoid giving competitors an edge, FT reports. The Financial Times cited seven unnamed current and former researchers at the advanced AI lab as saying it has recently slowed the publication of AI research for commercial reasons—in some cases to avoid tipping off competitors to advances that Google is incorporating into AI-driven products, and in other cases because the research found that Google’s models did not perform as well as competitors’. The company is now enforcing a rigorous, multi-layered review process, with a mandatory six-month publication embargo on papers deemed “strategic,” the sources told the newspaper. The change is part of Google DeepMind’s shift in focus from pure research to innovations that are more closely aligned with Google’s products. It has led to several AI researchers leaving the company, the newspaper said. The company said it had “always been committed to advancing AI research and we are instituting updates to our policies that preserve the ability for our teams to publish and contribute to the broader research ecosystem.”
Google DeepMind spin-out Isomorphic Labs raises $600 million. The fundraise, led by Thrive Capital, is the first outside venture money that the Alphabet-owned company has taken. Helmed by the Nobel Prize-winning cofounder and CEO of Google DeepMind, Demis Hassabis, and using some of the same techniques that DeepMind pioneered with its AlphaFold protein structure prediction models, Isomorphic is targeting a number of unnamed targets in oncology and immunology. It has partnerships with Novartis and Eli Lilly. My Fortune colleague Allie Garfinkle spoke to Hassabis and has more on the deal in her Term Sheet newsletter today.
U.K. government developing homework grading AI, plans National Data Library. That’s according to a story in the Financial Times, based on an interview with U.K. Minister for Science, Innovation and Technology Peter Kyle. The U.K. government is developing an AI tool to mark schoolchildren’s homework using anonymized public education data, as part of a broader plan to commercialize public records, including health data, within the next decade. The pilot “content store” for educational data was created by London’s Faculty AI with £4 million ($5.16 million) in government funding, and will be a prototype for a proposed National Data Library that would aggregate and potentially monetize government datasets.
A model for solving science’s data problem—and possibly helping AI unlock scientific progress. For my book Mastering AI, I spent time talking with John Jumper. Jumper leads Google DeepMind’s protein structure team and shared this past year’s Nobel Prize in Chemistry with his boss, Google DeepMind’s Demis Hassabis, for their work on AlphaFold. That’s DeepMind’s AI model that can accurately predict a protein’s three-dimensional structure from its amino acid sequence. Jumper told me that while AI holds great promise as a tool for science, in many ways protein folding was a lucky problem. Why? Because there was a publicly available dataset out there called the Protein Data Bank where researchers published all the protein structures that they had been able to experimentally confirm. At the time the DeepMind team started work on this problem there were about 200,000 of these in the PDB. That gave the DeepMind team a good starting point when they set about trying to train AlphaFold. (Now, thanks to AlphaFold 2, DeepMind’s breakthrough protein structure prediction AI, the company has been able to expand the number of proteins for which there is structural data by 1,000-fold, to more than 200 million.)
The thing is, Jumper told me, for most scientific questions out there, there isn’t anything like the PDB. There simply isn’t a public data set of sufficient size to train a good AI model. This is even true in other areas of biology—for instance, as researchers move beyond protein structure prediction and start trying to predict other qualities that matter to drug design, such as toxicity, or off-target effects, or stability at room temperature (which is important for making pills). Well, as it turns out, there is a decent amount of data out there on some of this stuff, but it isn’t in the public domain. Instead it is locked away in the databases and archives of big pharmaceutical companies—most of which have not been willing to share it for commercial reasons.
Now, scientific journal Nature reports that several Big Pharma companies have decided that if any of them are going to realize the advantages of AI, it would behoove them to start sharing. AbbVie, Johnson & Johnson, Sanofi, and Boehringer Ingelheim have decided to create the AI Structural Biology Consortium and pool resources to train a model that they all will be able to benefit from. The model is based on OpenFold 3, an open-source model designed to mimic Google DeepMind’s AlphaFold 3 (which went beyond protein structure prediction to predict protein-protein interactions and the interaction of proteins with small molecules). The model should help all of them advance drug discovery. (The only criticism is that the data will not be available to academics outside the consortium.)
If the consortium works, it could become a model for how private companies can pool resources to create AI models that might advance science in other key areas (that also have commercial implications). One area might be material science, for example.
April 9-11: Google Cloud Next, Las Vegas
April 24-28: International Conference on Learning Representations (ICLR), Singapore
May 6-7: Fortune Brainstorm AI London. Apply to attend here.
May 20-21: Google IO, Mountain View, Calif.
July 13-19: International Conference on Machine Learning (ICML), Vancouver
July 22-23: Fortune Brainstorm AI Singapore. Apply to attend here.
Is equity a way to solve AI’s IP conundrum?

A lot of actors are worried about AI. If filmmakers and advertising firms can simply train an AI model to reproduce their likeness, they won’t necessarily need to ever hire the actor again. Actors just starting out in their careers or those who aren’t celebrities might be especially vulnerable and might be pressured into signing away their image rights in order to keep food on the table.
But one AI company is experimenting with an interesting possible solution to the ethical issues GenAI poses to actors. Synthesia is a London-based AI startup whose tech can create highly realistic AI avatars, mostly for use in corporate training and other internal corporate communication scenarios. While Synthesia’s AI allows anyone to create their own AI avatar, it also pays professional actors to create a catalogue of existing avatars that companies can choose to use. Synthesia has generally paid these actors a standard cash day rate for filming them and it does give them the right to opt out of having their likeness and voice used for the avatars at any point. Now Synthesia has also created a $1 million fund of company stock that it will give to the actors whose images it is using for these popular ready-made avatars.
It’s a creative and interesting way to try to compensate artists for the use of their work in training AI. And I wonder if it might be a model that could be used to solve AI’s IP issues more broadly. You can read more about Synthesia’s initiative here in the Financial Times.
Thanks for reading. If you liked this email, pay it forward. Share it with someone you know. Did someone share this with you? Sign up here. For previous editions, click here. To view all of Fortune's newsletters on the latest in business, go here.