The gen AI skills revolution: Rethinking your talent strategy

If every company needs to be a software company, do you have a software organization that can deliver? The answer to that question could be decisive for the future of many companies.

The ability to compete depends increasingly on how well organizations can build software products and services. Already, nearly 70 percent of top economic performers, versus just half of their peers, use their own software to differentiate themselves from their competitors. One-third of those top performers directly monetize software. Generative AI (gen AI) offers a tantalizing opportunity to expand this value by helping software talent create better code faster.

Promising experiments that use gen AI to support coding tasks show impressive productivity improvements. Gen AI has improved product manager (PM) productivity by 40 percent, while halving the time it takes to document and code. At IBM Software, for example, developers using gen AI saw 30 to 40 percent jumps in productivity.

Despite its promise, gen AI has barely revealed its full potential. While some 65 percent of respondents to the recent McKinsey Global Survey on the state of AI report that they are regularly using gen AI, only 13 percent are systematically using gen AI in software engineering. Our own experience working with companies reveals that gen AI tools currently help with about 10 to 20 percent of the coding activities of a developer.

Scaling gen AI capabilities requires companies to rewire how they work, and a critical part of that rewiring is developing the necessary talent. The gen AI landscape and how software teams work with the technology to build products and services are likely to stabilize in the next two to three years as the technology matures and companies gain experience. The skills and practices needed to succeed now may well change considerably over time. Until then, companies must navigate through an uncertain period of change and learning.

To help successfully plot the road ahead, this article identifies the new skills software teams will require, examines how their evolution will alter roles and risks, and reveals how companies can orient their talent management practices toward developing skills for greater flexibility and responsiveness.

How software development is changing

Any engineering talent rethink needs to begin with an understanding of how gen AI will affect the product development life cycle (PDLC). The changes are likely to be significant and affect every phase of the life cycle (exhibit). Recent McKinsey research suggests that gen AI tools have almost twice as much positive impact on content-heavy tasks (such as synthesizing information, creating content, and brainstorming) as on content-light tasks (for example, visualization).

Generative AI affects every phase of the software development life cycle.

To highlight just a few examples, we are already seeing gen AI technologies handle some simple tasks, such as basic coding and syntax, code documentation, and certain web and graphic design tasks. Initial progress is also being made with more complex functions, including generating test cases and backlogs, developing insights from market trends, automating log scraping, and estimating and resolving the impact of bugs.

Over time, gen AI should be able to generate insights from automatically created tests, system logs, user feedback, and performance data. Gen AI can use self-created insights and ideas for new features to create proofs of concept and prototypes, as well as to reduce the cost of testing and unlock higher verification confidence (for example, by testing multiple hypotheses and running A/B tests). These developments are expected to significantly reduce PDLC times from months to weeks or even days, improve code quality, and reduce technical debt.

New skills for a new age

While many leaders understand at a high level that new skills are required to work with gen AI, their sense of how these changes might create value is often vague and underinformed. So the decisions that seem bold on paper—such as buying hundreds of gen AI tool licenses for developers—are made without a clear understanding of the potential gains and with insufficient training of developers. The result: predictably poor outcomes.

Important roles throughout the enterprise—from data scientists and experience designers to cyber experts and customer service agents—will need to learn an array of new skills. Businesses hoping to operate like software companies will also need to pay special attention to two key roles: the engineer and the product manager.

Engineers

The skills engineers need to develop will likely fall into three areas:

  • Review. A significant percentage of code generated by the current generation of gen AI tools needs some correction. At one level, this requires developers to shift from doer to reviewer, which is not as basic as it sounds. Some proficient coders aren’t good reviewers. Good reviewers must be able to evaluate generated code for compatibility with existing code repositories and architectures, for example, and understand what is required so another team can easily maintain the code. These are skills that more experienced engineers often have but more junior colleagues need to build. Developers will need to do more than spot duplicates or obvious errors; they will need advanced forensic skills to identify and address deeper issues and ensure high-quality code. Even more complex will be the “training up” of gen AI tools, which have to learn on the job to get better. This will require engineers to understand how to give tools feedback and determine which sorts of tasks provide the best opportunity for a given tool to learn.
  • Connect. Integrating the capabilities of multiple AI agents can improve problem-solving speed and solution quality. Some organizations are already integrating gen AI with applied AI use cases, such as using applied AI systems to analyze the performance of gen-AI-created content by identifying patterns in user engagement, which are then fed back to the model. For example, Recursion, a biotech company, has developed a gen AI platform that enables scientists to access multiple machine learning models capable of processing large volumes of proprietary biological and chemical data. A critical skill that engineers must develop is how to select and combine gen AI applications and models (for example, how one model might be good at providing quality control for another; see the sketch after this list).
  • Design. As gen AI technology takes over more of the basic coding tasks, engineers can develop a new set of higher-value “upstream skills,” such as writing user stories, developing code frameworks (for instance, code libraries and support programs), understanding business outcomes, and anticipating user intent. Communication is also a critical emerging skill; engineers will need it to engage more effectively with teams, leaders, peers, and customers.
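
To make the quality-control pairing concrete, here is a minimal sketch of one model reviewing another model’s output. The helper names, model names, prompts, and review loop are illustrative assumptions rather than any specific vendor’s API; a team would adapt the pattern to whichever tools it has standardized on.

```python
# A minimal sketch of pairing two models so that one provides quality control
# for the other. call_model is a stand-in for whatever chat-completion API a
# team has standardized on; model names and prompts are illustrative.

from dataclasses import dataclass
from typing import Callable

ModelCall = Callable[[str, str], str]  # (model_name, prompt) -> response text


@dataclass
class Review:
    approved: bool
    comments: str


def review_code(call_model: ModelCall, task: str, code: str) -> Review:
    # A second model, prompted as a reviewer, checks the draft against the task.
    verdict = call_model(
        "reviewer-model",
        f"Task: {task}\nCode:\n{code}\nReply APPROVE or list the problems.",
    )
    return Review(approved=verdict.strip().upper().startswith("APPROVE"), comments=verdict)


def generate_with_quality_control(call_model: ModelCall, task: str, max_rounds: int = 3) -> str:
    """One model drafts code, a second critiques it; loop until approved or out of rounds."""
    code = call_model("generator-model", f"Write Python code for: {task}")
    for _ in range(max_rounds):
        review = review_code(call_model, task, code)
        if review.approved:
            break
        # Feed the reviewer's comments back to the generator for another pass.
        code = call_model(
            "generator-model",
            f"Revise the code for task '{task}'.\nReviewer feedback: {review.comments}\n{code}",
        )
    return code  # A human reviewer still makes the final call on the last draft.


if __name__ == "__main__":
    # Dummy model call so the sketch runs without any external service.
    def fake_model(model: str, prompt: str) -> str:
        return "APPROVE" if model == "reviewer-model" else "def add(a, b):\n    return a + b"

    print(generate_with_quality_control(fake_model, "add two numbers"))
```

The value of the pattern lies less in the code than in the judgment it demands: deciding which model plays which role, how feedback flows between them, and when a human reviewer still makes the final call.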

Product managers

Product managers face an equally complex skills shift, focused on the following areas:

  • Gen AI technology use. Like software engineers, PMs will need to develop new skills to work effectively with gen AI technologies. One hardware/software organization, in fact, assessed the skills of its tech employees and found that PMs needed just as much upskilling on AI as any other role did. As gen AI becomes better at building prototypes, for example, PMs will need to be proficient with low-code and no-code tools and with iterative prompting to refine model outputs. PMs will also need to understand and help develop “agentic” frameworks, in which large language models (LLMs) work together to complete a task. This will require PMs to plan how these LLMs will work together, taking into account considerations such as the costs incurred when models run inferences (see the sketch after this list).
  • Adoption and trust. Given significant concerns regarding trust—either not trusting gen AI or trusting it too much—standard adoption programs (for example, basic training on how to use a new tool) aren’t sufficient. PMs must develop strong empathy skills to identify implicit and explicit barriers to trust (such as not trusting the answers that gen AI solutions provide) and to address them. Significant concerns about risk mean PMs will need to work with risk experts to ensure the right checks and measures are incorporated into every stage of the PDLC.
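
Here is a minimal sketch of what that kind of inference-cost planning could look like for an agentic workflow. The step names, model names, token estimates, and per-token prices are illustrative assumptions, not real rate cards; the point is simply that a PM can reason about the cost of a workflow per run before it ships.

```python
# A minimal sketch of cost accounting for an "agentic" workflow: each step
# calls an LLM, and the plan is checked against an inference budget before it
# runs. Model names and per-token prices are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class AgentStep:
    name: str
    model: str
    est_input_tokens: int
    est_output_tokens: int


# Assumed prices in dollars per 1,000 tokens (input, output); replace with the
# actual rates for the models your organization has licensed.
PRICE_PER_1K = {
    "small-model": (0.0005, 0.0015),
    "large-model": (0.0050, 0.0150),
}


def step_cost(step: AgentStep) -> float:
    in_rate, out_rate = PRICE_PER_1K[step.model]
    return (step.est_input_tokens / 1000) * in_rate + (step.est_output_tokens / 1000) * out_rate


def check_plan(steps: list[AgentStep], budget_per_run: float) -> None:
    total = sum(step_cost(s) for s in steps)
    for s in steps:
        print(f"{s.name:<24} {s.model:<12} ~${step_cost(s):.4f}")
    print(f"Estimated cost per run: ~${total:.4f} (budget ${budget_per_run:.4f})")
    if total > budget_per_run:
        print("Over budget: consider routing simpler steps to the smaller model.")


if __name__ == "__main__":
    plan = [
        AgentStep("summarize user feedback", "small-model", 4000, 500),
        AgentStep("draft user stories", "large-model", 2000, 1500),
        AgentStep("review the draft", "small-model", 2500, 400),
    ]
    check_plan(plan, budget_per_run=0.05)
```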

Emerging—and merging—roles, with more leadership oversight

The new skills needed to use gen AI will affect how and what people do in their jobs, raising significant questions about how roles need to adapt and what oversight leadership must provide.

Emerging and merging roles

With gen AI helping people be more productive, it’s tempting to think that software teams will become smaller. That may prove true, but it may also make sense to maintain or enlarge teams to do more work. Too often, conversations focus on which roles are in or out, while the reality is likely to be more nuanced and messy. We can expect roles to absorb new responsibilities (such as software engineers using gen AI tools to take on testing activities) and some roles to merge with others. The product manager and developer roles, for example, could eventually merge into a single product developer role, in which one high-performing person uses an array of gen AI tools to create mock-ups, develop requirements, and generate code based on those requirements.

Given the unproven and unpredictable nature of gen AI over the short term, new roles will be needed, such as one that focuses on AI safety and data responsibility and that also reviews and approves code. Other areas of significant scope that could require new roles include LLM selection and management, gen AI agent training and management, third-party model liability, and LLM operations (LLMOps) capabilities to oversee model performance over time.

We anticipate that changes in the tech skills landscape will accelerate, requiring HR and tech teams to become much more responsive in defining (and redefining) how skills are bundled into roles.

Strong oversight

Determining what skills matter to the business and its strategy is a long-standing leadership responsibility. The unique uncertainties and opportunities associated with gen AI, however, require special leadership focus. Two areas stand out as particularly important:

  • Standardization. As groups and individuals roll out gen AI pilots, a proliferation of tools, platforms, and architectures emerges. Instead, companies should focus on a single set of standardized capabilities and develop consistency regarding the types of skills needed. Leadership will need to standardize the gen AI tools, models, processes, and approaches, and determine, for example, whether it’s best to license a capability, build it, or partner with a provider (largely driven by what skills are available within the business).
  • Risk. The abiding concerns about the risks related to gen AI require leadership to develop clear guidelines and expectations for employees. While software talent can’t be expected to become deep risk experts, they can be expected to develop basic skills, such as understanding what kinds of risk exist, developing the habit of integrating safeguards into their code, and knowing how to use code-scanning and testing tools (for example, SonarQube, Checkmarx, or Coverity). Some organizations are also putting in place incentives for frontline users to understand the opportunities, risks, and boundaries of gen AI, and are even making certain kinds of training mandatory. Because risk and compliance concerns are likely to shift as quickly as gen AI itself, leadership should invest in tools that automatically test code against designated policies (that is, policy as code; see the sketch after this list).
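
As a simple illustration of policy as code, the sketch below expresses a few rules as data that a script can enforce automatically in a CI pipeline. The specific rules and regular expressions are illustrative assumptions; in practice, organizations would typically encode policies in a dedicated scanning platform or policy engine rather than a hand-rolled script.

```python
# A minimal illustration of "policy as code": the rules code must satisfy live
# in version control and are checked automatically, rather than sitting in a
# document someone has to remember. The rules below are illustrative only.

import re
import sys
from pathlib import Path

# Each policy: a human-readable rule name and a regex that flags a violation.
POLICIES = [
    ("no hard-coded secrets", re.compile(r"(api_key|password)\s*=\s*['\"]\w+['\"]", re.I)),
    ("no use of eval()", re.compile(r"\beval\(")),
    ("no TODOs left in committed code", re.compile(r"#\s*TODO", re.I)),
]


def check_file(path: Path) -> list[str]:
    violations = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
        for rule, pattern in POLICIES:
            if pattern.search(line):
                violations.append(f"{path}:{lineno}: violates policy '{rule}'")
    return violations


if __name__ == "__main__":
    # Typical CI usage: python check_policies.py src/
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    findings = [v for f in sorted(root.rglob("*.py")) for v in check_file(f)]
    print("\n".join(findings) or "All policies satisfied.")
    sys.exit(1 if findings else 0)
```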

Talent management transformation built around skills

Current approaches to talent management tend to focus on how to integrate gen AI into existing programs. That will not work for long. The highly structured nature of HR systems in modern companies—tightly drawn roles with well-defined competencies, well-worn career paths, fixed compensation levels, and formal learning journeys—has already struggled to keep up with changes driven by digital capabilities. It is no match for the more volatile and unpredictable dynamics of gen AI.

HR leaders, working with CEOs and tech leadership, must instead transform how they find and nurture talent, with a focus on two areas in particular: strategic workforce planning and apprenticeship capabilities.

Grounding strategic workforce planning in business needs and skills

The talent transformation starts with HR leaders developing a strategic workforce plan that’s built around skills. Companies often focus on roles during workforce planning, but that’s insufficient. Identifying the need for a software engineer or senior data engineer role, for example, isn’t very useful when gen AI tools take over tasks rather than roles.

HR leaders can’t do this in a vacuum. They need to work with leaders in the business to understand goals—such as innovation, customer experience, and productivity—to help focus talent efforts. With this in hand, they can map out future talent demands.

This collaboration is critical for developing an inventory of skills, which provides companies with a fact base that allows them to evaluate what skills they have, which ones they need, and which ones gen AI tools can cover. This skills classification should use clear and consistent language (so it can be applied across the enterprise), capture expertise levels, and be structured as a hierarchy so the information is easier to organize and navigate.

To be useful, however, this classification must be treated as data rather than as a document. By adding skills with relevant tags (for example, expertise levels) to a database, companies can use AI and LLMs to determine relationships and connections between skills for reskilling, prioritize which skills to develop, enable workforce planning to determine specific skill needs by program or team, and develop tailored learning programs.
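
The following minimal sketch shows what treating skills as data might look like: each record carries an employee, a skill, its place in a hierarchy, and an expertise level, which makes simple reskilling queries possible. The taxonomy, level names, and example records are illustrative assumptions; a real program would hold this information in a skills database or HR system rather than in a script.

```python
# A minimal sketch of skills treated as data rather than as a document: each
# entry is tagged with an expertise level and a position in a skills hierarchy,
# so it can be queried for reskilling and workforce planning. All names and
# records below are illustrative.

from dataclasses import dataclass

LEVELS = {"aware": 1, "practitioner": 2, "expert": 3}


@dataclass
class SkillRecord:
    employee: str
    skill: str   # leaf node in the taxonomy, e.g. "prompt engineering"
    family: str  # parent node, e.g. "gen AI engineering"
    level: str   # one of LEVELS


def reskilling_candidates(
    records: list[SkillRecord], target_skill: str, family: str, min_level: str = "aware"
) -> list[str]:
    """People with a related skill in the same family (at min_level or above) who lack the target skill."""
    has_target = {r.employee for r in records if r.skill == target_skill}
    related = {
        r.employee
        for r in records
        if r.family == family and r.skill != target_skill and LEVELS[r.level] >= LEVELS[min_level]
    }
    return sorted(related - has_target)


if __name__ == "__main__":
    inventory = [
        SkillRecord("A. Patel", "prompt engineering", "gen AI engineering", "practitioner"),
        SkillRecord("A. Patel", "code review", "software engineering", "expert"),
        SkillRecord("B. Okafor", "LLM evaluation", "gen AI engineering", "aware"),
        SkillRecord("C. Nguyen", "code review", "software engineering", "practitioner"),
    ]
    # Who is a natural candidate to reskill toward LLM operations?
    print(reskilling_candidates(inventory, "LLM operations", "gen AI engineering"))
```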

One example is a life sciences company that is working to use an AI skills-inferencing tool to create a comprehensive skills view of its digital talent. The tool scans vacancies, role descriptions, HR data about roles, LinkedIn profiles, and other internal platforms (for example, Jira and code repositories) to develop a view of what skills are needed for given roles. The relevant employee can then review and confirm whether they have those skills and proficiencies. Once confirmed, those skills are added not only to the individual’s profile but also to the company’s skills database for future assessments.

For this approach to strategic workforce planning to be effective, companies have to continually measure progress against their identified skill gaps and revisit the strategy to determine whether other needs have emerged, especially as new gen AI tools and capabilities come online. HR teams will have to work with engineering leaders to evaluate new tools, understand which skills those tools can replace, and determine what new training is needed.

Building up apprenticeship capabilities as part of a broader talent program

There is no single path to victory in finding and keeping the talent a company needs. Our experience shows that companies need to implement a range of talent strategies, from more candidate-centered hiring practices to tailored training pathways. But because gen AI moves quickly and there is little clarity about which skills will be needed, upskilling will need to be front and center. Among the challenges in developing upskilling programs are the lack of codified best practices and workers’ potential resistance to learning new skills. While an engineer, for example, may be interested in becoming more proficient in coding, the need to learn different kinds of skills (such as effective communication or user story development) can seem less important or even threatening.

For this reason, companies should pay particular attention to apprenticeship models, which tend to be overlooked as part of a business’s upskilling repertoire. Apprenticing offers hands-on learning to demystify change and role modeling to demonstrate hard-to-teach skills, such as problem-solving mindsets and good judgment in evaluating code suitability. But for apprenticing to be effective, senior experts must be active participants rather than just checking a box. They have the credibility and, often, the institutional knowledge that can be useful, for example in navigating risk issues specific to the company. Experts will need to code and review code with junior colleagues, shadow them as they work, and set up go-and-see visits so junior colleagues can discover how other teams work with gen AI. They can also act as mentors, coaching new skills such as how to break down problems, deliver business goals, understand end-user needs and pain points, and ask relevant questions.

To ensure that apprenticeship programs succeed, companies should create incentives by making apprenticing part of performance evaluations and provide sufficient time for people to participate. One audio company, in fact, has made apprenticeship an explicit part of its learning program. It ran a boot camp covering gen AI skills for about a dozen top-performing engineers who volunteered for the program. In return for this training, participants were required to train others. Each agreed to lead a three- to four-day boot camp for ten to 15 engineers, followed by two sessions per week for three months, in which anyone could ask questions and share their own learnings.


While gen AI’s capabilities will eventually become more stable and proven, in the short term, companies will need to navigate a great deal of uncertainty. By zeroing in on skills and adapting their talent management approaches, and by being flexible enough to learn and adjust, companies can turn their talent challenges into competitive advantages.

