Threading the Needle: Where AI Should Play a Role in Leadership Assessment and Selection, and Where it Shouldn’t

Based on the prevalence of articles in the media, ChatGPT’s impact, and the recent unveiling of DeepSeek to the general public, it is apparent that technology and AI have arrived in organizations and everyday life with no signs of stopping. While the hype and concerns of the past several years have been intense and, at times, perhaps a little inflated on both sides, this transformation does feel different from those of the past. From an HR perspective, AI will continue to change how we operate as professionals. As a result, many professional associations (e.g., the Society for Industrial-Organizational Psychology, I4CP) and industry voices (e.g., Tomas Chamorro-Premuzic, JP Elliott, David Green, and Brian Heger; see additional readings) are sharing their perspectives on the pros, cons, and watchouts of this emerging technology. For example, Josh Bersin’s recent report on AI in HR codifies some of these perspectives, providing a valuable classification for assessing impact (i.e., emerging, first generation, second generation; see additional readings).

There are thousands of informative resources discussing the use of AI in HR. However, many take a cursory view of the landscape or focus on the technology itself rather than its application in specific HR practice areas. A deeper discussion is required regarding talent management (TM) applications, especially hiring and leadership assessment and development. Below, we provide four simple but essential recommendations for selecting and implementing AI in these areas. These recommendations harness the authors’ collective experience leading talent assessment and development efforts across major corporations, including PepsiCo, Novartis, SharkNinja, and PVH Corp., not to mention over 20 years of experience on the client side of TM in senior HRBP and CHRO roles. As industrial-organizational (I-O) psychologists, we also feel it’s important to remind ourselves and others of the science, research, best practices, and legal requirements that accompany the use of any testing in organizations, whether AI-based or not (see the SHRM report Selecting Leadership Talent for the 21st Century).

Implement AI Applications Where They Will Work Best – NOT to Replace Resources or Expertise.

AI promises increased efficiency and the allure of cost savings (read: headcount). While cost pressures are important, we don’t believe this is the primary lens through which to view AI’s utility.  At this point in its development, AI should not be used in place of a capable team of people but as an augmentation to them. Ignoring this may come at the expense of long-term strategic impact, including:

  • Applying critical thinking: Algorithm-driven outputs risk losing the nuance and future orientation necessary for effective decision-making. An educated population familiar with the limitations of AI models rooted in historical data is needed to make the proper judgment calls about talent.
  • Focusing on value-added activities: Redeploying and upskilling people resources freed by AI allows teams to prioritize tasks that drive innovation (i.e., AI as a complement to, not a replacement for, human expertise).
  • Providing the human touch in motivating leaders to improve their capabilities: While many firms are experimenting with virtual coaching and virtual feedback delivery, research has shown that human interaction matters when delivering feedback results, particularly for complex psychometric and behavioral assessment data that require contextual interpretation. Even with the best AI-driven reports, if participants know there isn’t a person on the other side of the discussion, the lack of human connection can detract from their social accountability and motivation to change.

Instead, practitioners focused on talent selection, assessment, and development must approach this topic thoughtfully to maximize the impact of technology (see suggested readings), with AI as the assistant, not the boss. As an HR professional’s assistant, AI can enhance efficiency, scalability, and impact in several meaningful ways, including:

  • Streamlining administrative processes like scheduling and automating “nudges” (e.g., targeted reminders that drive follow-up actions), freeing employees to focus on higher-value tasks (see the sketch after this list).
  • Distilling themes and drafting concise narratives from large amounts of data (e.g., assessment results, biodata from an HRIS, etc.) with the proper oversight to ensure accuracy and fairness and to minimize bias.
  • Recommending tailored development actions and resources based on assessment results and aligning those efforts with strategic organizational goals.
  • Providing summary-level analytics and insights on trends in the assessment and selection data collected (once again, with the proper oversight).
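
To make the “nudge” idea concrete, here is a minimal Python sketch of an automated reminder pass. It assumes a hypothetical send_reminder() delivery hook and a simple participant record; the field names, idle threshold, and delivery channel are all illustrative, not a reference to any particular product’s API.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Participant:
    name: str
    email: str
    last_action_date: date  # e.g., last update to a development plan

def send_reminder(email: str, message: str) -> None:
    # Hypothetical delivery hook; in practice this would call an
    # email, chat, or HRIS notification integration.
    print(f"To {email}: {message}")

def nudge_inactive(participants: list[Participant], max_idle_days: int = 14) -> None:
    """Send a targeted reminder to anyone idle longer than the threshold."""
    today = date.today()
    for p in participants:
        if (today - p.last_action_date) > timedelta(days=max_idle_days):
            send_reminder(
                p.email,
                f"Hi {p.name}, a quick nudge to revisit your development plan.",
            )

# Example run with made-up data.
nudge_inactive([Participant("Alex", "alex@example.com", date(2025, 1, 2))])
```

In a real deployment, the scheduling itself (e.g., a daily job) and the escalation rules would carry most of the design weight; the point is that this layer is routine automation, freeing practitioners for the judgment calls.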

It has already been stated that “AI won’t replace humans — but humans with AI will replace humans without AI” (see HBR article). By integrating these tools thoughtfully, organizations can enhance their talent management processes, drive meaningful outcomes, and build a workforce equipped to meet tomorrow’s challenges. AI is still nascent; human experience, judgment, and wisdom are needed to ensure its appropriate and best use until we have more confidence in the output of this technology.

Choose Assessment Tools for What They Measure – NOT Their Technology.

Advances in technology have made traditional measurement tools easier to administer at a larger scale (e.g., moving from paper-and-pencil Scantron forms to digitally based organization-wide employee surveys, or from in-person assessment centers to virtual simulations). The recent introduction of AI has opened a new world of possible applications in this space, with a spike in third-party AI applications. It is our responsibility as practitioners to help organizations determine what to invest in or avoid.

While these major shifts are occurring, the crux of the matter has remained the same: what you measure and why you are measuring it are key. Stated differently, if bright and shiny new tools measure questionable content (e.g., non-scientific, non-predictive, non-relevant), they will undoubtedly produce dubious results (see the WSR article by Ulrich, Church, Eichinger, and Pearman under suggested readings). In our experience, implementing a mix of external and internal tools is the best approach. These tools should measure key elements supporting your talent strategy and be validated in your organization to predict the desired outcomes. Because no single tool is perfect, taking a multi-trait (i.e., measure more than one concept), multi-method (i.e., use more than one type of tool) approach enables a more holistic and accurate view of your talent (see the WSR article by Church, Scrivani, and Graf). For maximum impact, we recommend using a combination of custom tools designed to target unique capabilities based on an internally developed model, coupled with other external tools (e.g., personality, cognitive, motivation, etc.).
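
To make the multi-trait, multi-method idea concrete, here is a minimal Python sketch that standardizes each tool’s raw scores and combines them into a weighted composite. The tools, weights, and scores are hypothetical; in practice, the weights should reflect your validated talent strategy rather than arbitrary choices.

```python
import statistics

def z_scores(raw: list[float]) -> list[float]:
    """Standardize raw scores so tools on different scales are comparable."""
    mean, sd = statistics.mean(raw), statistics.stdev(raw)
    return [(x - mean) / sd for x in raw]

def composite(scores_by_tool: dict[str, list[float]],
              weights: dict[str, float]) -> list[float]:
    """Weighted average of standardized scores across tools, per candidate."""
    standardized = {tool: z_scores(vals) for tool, vals in scores_by_tool.items()}
    n = len(next(iter(standardized.values())))
    return [sum(weights[t] * standardized[t][i] for t in standardized)
            for i in range(n)]

# Example: three candidates measured by a custom simulation and two external tools.
scores = {
    "custom_simulation": [78.0, 85.0, 62.0],
    "personality": [3.2, 4.1, 3.8],
    "cognitive": [24.0, 30.0, 27.0],
}
weights = {"custom_simulation": 0.5, "personality": 0.25, "cognitive": 0.25}
print(composite(scores, weights))
```

Standardizing first matters because the tools sit on different scales; without it, whichever tool happens to produce the largest raw numbers would dominate the composite.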

So, what is one to do the next time a third-party supplier tries to sell their new proprietary AI assessment to your C-suite? This is where the expression “caveat emptor” (let the buyer beware) applies. When deciding on a tool, start with the content. Always ask for the underlying measurement model or, in layman’s terms, “What does the tool actually measure (versus what it purports to measure)?” If someone tells you their tool measures leadership effectiveness or potential, that may sound good, but it is misleading. These are outcomes of what is measured, not the underlying knowledge, skills, and abilities assessed. Despite what some advocates of AI might suggest, correlation does not equal causation.

Second, ask how the vendor’s technology generates and sources its outputs. Is it harvested from the internet, based on proprietary data (from other clients?), licensed from some content source, and/or customizable to your organization? “Garbage in, garbage out” was coined in the 1950s to caution against blind reliance on computer output when data input quality was poor; the warning remains essential today. Third, keep your employment law team close if this tool is to be used in decision-making in any way (e.g., hiring and/or promotions), as the legal landscape is constantly changing (see American Bar Association in additional readings). Decisions take many forms (some you might not even realize), including rank-ordering lists of individuals based on a matching algorithm, deciding who to advance to interviews, or choosing which employees to invite to development programs.

Asking these questions and understanding the tradeoffs that must be considered regarding accuracy, user experience, price, and legal compliance will help prepare you to have the most informed discussion possible for your organization. We all want to ensure that our tools and processes are bias-free. Evaluating them against the above criteria will help distinguish useful tools from bright, shiny objects.
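
On the bias point, one common first screen (offered here as an illustrative addition, not something prescribed above) is the four-fifths rule: the selection rate for any group should be at least 80% of the rate for the group with the highest rate. Below is a minimal Python sketch with hypothetical counts; a real monitoring program requires appropriate statistical tests and your employment law team’s review.

```python
def selection_rates(counts: dict[str, tuple[int, int]]) -> dict[str, float]:
    """counts maps group -> (number selected, number who applied)."""
    return {g: sel / app for g, (sel, app) in counts.items()}

def four_fifths_flags(counts: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Flag groups whose selection rate falls below 80% of the highest rate."""
    rates = selection_rates(counts)
    benchmark = max(rates.values())
    return {g: (r / benchmark) < 0.8 for g, r in rates.items()}

# Hypothetical counts: group_b's rate (0.24) is 60% of group_a's (0.40) -> flagged.
print(four_fifths_flags({"group_a": (40, 100), "group_b": (24, 100)}))
```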

Choose Selection Tools for Their Validity – NOT Only for Simplicity or Sleek Experience.

Validity (i.e., whether a tool consistently and accurately predicts job performance) is the most critical factor when selecting a tool, regardless of format or mode of administration. Just because a tool has the latest AI engine behind it doesn’t mean it will be more valid than any other tool.
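
To make criterion-related validity concrete, here is a minimal Python sketch that computes the correlation between assessment scores and later performance ratings, the usual summary of predictive validity. The data are made up; a real local validation study needs adequate sample sizes, a defensible performance criterion, and professional and legal review.

```python
import statistics

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation between predictor scores and a criterion."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

assessment = [55.0, 72.0, 61.0, 80.0, 68.0]   # hypothetical pre-hire scores
performance = [2.9, 3.8, 3.1, 4.3, 3.5]       # hypothetical later ratings

print(f"validity coefficient r = {pearson_r(assessment, performance):.2f}")
```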

Many assessments on the market claim to be “valid,” but the first question you should ask is: Valid for what? If a vendor simply says, “Yes, it’s been validated,” that should be a huge red flag. It often signals one of two things:

  1. They don’t fully understand validation, or
  2. They’re oversimplifying to make a sale.

Moreover, just because a tool has been validated in one organization for a particular role, level, or function doesn’t mean it will work, or be legally compliant to use, in yours. Validation evidence is notoriously difficult to replicate, even within the same company. If a tool doesn’t effectively help you identify which candidates will be high vs. low performers in your organization, why would you want to use it to inform talent decisions at all?

Importantly, even if a tool does predict performance in your organization, without appropriately documented evidence, it’s not legally compliant. While debates around validity generalization (i.e., applying validation evidence to similar jobs or organizations) continue, the reality is that courts remain skeptical. If you don’t have direct evidence that a tool predicts performance in your organization, you could expose your company to significant risk. AI-related selection and assessment tools are even more prone to this issue, especially when proprietary algorithms are used and firms refuse to share their “black box” or predictive engine with you. This lack of transparency can put your organization at significant risk of a lawsuit (see ACLU article).

Beyond the validation aspect, when selecting candidates for your organization, it is critical to remember that first impressions count. Like your company website, the tools you use to assess external candidates powerfully convey your employer brand, organizational culture, and values. While candidates undoubtedly appreciate the streamlined experience that comes with many AI-driven selection tools, they also want to be evaluated fairly. If an assessment doesn’t feel job-relevant, even if it is scientifically sound, it can trigger adverse reactions. This is where the non-scientific yet useful concept of face validity comes in: it’s about how job-related a tool seems to candidates. While it does not tell us whether the tool predicts job performance, it cannot be ignored because it plays a crucial role in shaping candidate perceptions. And for many organizations, candidates are also consumers; a poor experience can have lasting negative consequences.

We’re not advocating for HR practitioners to be purists or cling to outdated methods. We want organizations to go beyond the hype and for their leaders to make informed decisions and hire the best talent. If you’re unwilling to invest in appropriately implementing and maintaining a selection tool, you should think twice about using one. A tool that seems “fun” and “streamlined” on the front end can quickly become a compliance nightmare if not rigorously validated and monitored. Similarly, if it isn’t clear what a tool is measuring, it can undermine your credibility, sending mixed signals to candidates about what truly matters in your organization.

We firmly believe in the power of technology and innovation when they are applied correctly. Ultimately, your selection tools should reflect your organizational priorities while reliably predicting performance. Investing in tools that communicate your values and deliver valid, actionable insights is not just best practice; it’s essential for driving immediate and long-term success.

Take Advantage of AI to Finally Address the Achilles’ Heel of Assessments. 

One of the biggest criticisms of assessment tools is that all the effort goes into the front end of the process. Selecting the right suite of assessment tools, administering them, analyzing and interpreting the data, and debriefing those who have been assessed is a massive undertaking. In our experience, this front end receives the lion’s share of the focus, and the Achilles’ heel is what happens next: participants receive feedback and are often left to figure out the following steps independently. Development rarely gets the same focus and investment. This phenomenon is likely not due to a lack of desire. Benchmarks of TM practices in top development companies (see additional readings by Church and colleagues) have consistently reported that development is the number one goal, over and above decision-making, even as the use of assessments continues to increase.

Still, resources are limited, and it’s much easier to provide assessment results than to support long-term development. Moreover, most organizations advocate for leaders to own their own development. However, in reality, most leaders are ill-equipped to do it alone. Development takes time, consistent effort, self-awareness, and resilience, which are difficult to sustain without structured support. Even the most motivated leaders struggle to turn insights into action without guidance.

From our perspective, this is an area where AI can fill the void. Technology can and should do more than assess gaps; it can help drive development by recommending targeted actions, nudging leaders to create development plans, and automating follow-ups to keep them on track. Imagine a world where, instead of feeling overwhelmed by their feedback and development needs, a leader receives a personalized roadmap: clear steps, practical resources, and timely nudges to help them make real progress. A system that guides, reinforces, and sustains change. Yes, some organizations are already successfully implementing these types of approaches. For many others, however, this remains an underutilized application (i.e., letting technology do some of the heavy lifting of development). It has enormous potential to keep leaders accountable, match them with high-impact resources, and create structured pathways that sustain motivation over time.
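
As one simple illustration of how such a roadmap might be generated, here is a minimal rule-based Python sketch that maps the largest assessment gaps to development actions. The competency names, target score, and catalog entries are hypothetical; a production system would draw on validated assessment data and a curated resource library, with the nudging and follow-up layers described earlier wrapped around it.

```python
# Hypothetical catalog mapping competencies to curated development actions.
CATALOG = {
    "strategic_thinking": "Enroll in the internal strategy simulation; shadow a planning cycle.",
    "influence": "Work with a coach on stakeholder mapping for one live initiative.",
    "talent_development": "Take on a mentee; review team development plans quarterly.",
}

def build_roadmap(scores: dict[str, float],
                  target: float = 3.5, top_n: int = 2) -> list[str]:
    """Return development actions for the largest gaps versus the target score."""
    gaps = sorted(((target - s, c) for c, s in scores.items() if s < target),
                  reverse=True)
    return [f"{c}: {CATALOG[c]}" for _, c in gaps[:top_n] if c in CATALOG]

# Example run with made-up assessment results on a 1-5 scale.
for step in build_roadmap({"strategic_thinking": 2.8,
                           "influence": 3.1,
                           "talent_development": 3.9}):
    print(step)
```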

In closing, there is no denying the benefits of leveraging technology to implement and utilize selection, assessment, and development tools in organizations. As many industry leaders have noted, the opportunities are wide open for processing information, shifting repetitive, non-value-added work off HR and talent management professionals, generating insights, and driving development actions. As practitioners, we are excited about what the future will bring, as it will make our jobs easier and, at the same time, deliver a greater impact on our talent management processes. The bottom line, however, is this: it is imperative that organizations make the right choices when implementing and using assessments for making decisions on people. The best way to ensure this happens is by following the science of assessments and leveraging the right types of expertise (e.g., I-O psychologists, employment lawyers, learning and development specialists), whether inside or outside the organization.

Suggested Readings

Allan H. Church, Ph.D., James Scrivani, Ph.D., Gina A. Seaton, Ph.D. & Janine Waclawski, Ph.D.

Allan H. Church, Ph.D.
Co-Founder and Managing Partner, Maestro Consulting, LLC

Allan H. Church, Ph.D., is Co-Founder and Managing Partner of Maestro Consulting, LLC. A widely recognized thought leader in the field, he has over 30 years of experience in global corporate executive positions and external consulting. He is also an Adjunct Full Professor at Columbia University Teachers College, where he teaches Strategic Talent Management. Before Maestro, Allan spent 21 years at PepsiCo, most recently as SVP of Global Talent Management. He received his Ph.D. in Organizational Psychology from Columbia University. Allan has authored seven books, 50 chapters, and over 190 practitioner and scholarly articles. He can be reached at https://www.linkedin.com/in/allanchurch.

James Scrivani, Ph.D.
Global Head of Assessment & Development at Novartis

James A. Scrivani, Ph.D., is the Global Head of Assessment & Development at Novartis, overseeing the enterprise's talent assessment and development strategy. He has over 20 years of talent management experience at Fortune 500 companies, including Apple, PepsiCo, and PwC. Scrivani specializes in talent management processes, including talent assessment and selection, high-potential identification and development, succession planning, executive coaching, and 360 feedback. He holds a Ph.D. in Industrial-Organizational Psychology from Alliant International University and an MA in Organizational Management from The George Washington University. He is an active member of the Society for Industrial and Organizational Psychology. He was previously an adjunct professor at Sacred Heart University. Scrivani has recently co-authored articles for Leadership Quarterly and Talent Quarterly. He can be reached at https://www.linkedin.com/in/jamesscrivani/

Gina A. Seaton, Ph.D.
VP, Head of Global Talent Acceleration at SharkNinja

Gina A. Seaton, Ph.D., is VP, Head of Global Talent Acceleration at SharkNinja, where she leads the enterprise's end-to-end talent management and learning & development. She is focused on building and scaling high-impact talent practices that fuel SharkNinja's relentless growth and innovation. Before joining SharkNinja, Gina was VP, Global Talent Management at PVH (Tommy Hilfiger, Calvin Klein), where she built global talent management and organizational effectiveness Centers of Expertise. She also led selection and executive assessment and development at PepsiCo, overseeing data-driven strategies for senior leader growth and driving external selection globally to ensure top-tier talent pipelines. Gina began her career in consulting, advising Fortune 500 companies on leadership and organizational effectiveness. Gina holds a Ph.D. in Industrial-Organizational Psychology from the University of Akron and an MS from Indiana University-Purdue University Indianapolis. She is passionate about unlocking human potential through science-backed talent strategies that drive business impact. She can be reached at linkedin.com/in/ginaseaton.

Janine Waclawski, Ph.D.
Co-Founder and Managing Partner, Maestro Consulting, LLC

Janine Waclawski, Ph.D., is a Co-Founder and Managing Partner at Maestro Consulting, LLC.  Janine is a human resources leader with over 30 years of experience. She is a Fellow of the Society for Industrial-Organizational Psychology. Before Maestro, Janine spent 20 years at PepsiCo as an HR leader. PepsiCo is a Fortune 50 company operating in over 200 countries with over $80 billion in annual revenues. At PepsiCo, Janine was able to reach the highest ranks of the HR organization as SVP CHRO Latin America (30 plus markets, 72,000 employees, and annual revenues of $8 billion), SVP and CHRO, Global Functions and Global Category Groups, and SVP Human Resources and Talent Management for North America Beverages. Before PepsiCo, Janine was a consultant for over a decade at PricewaterhouseCoopers and W. Warner Burke Associates. Janine has published numerous articles, book chapters, and books. She received her B.A. in psychology from Stony Brook University and her Ph.D. in Social and Organizational Psychology from Columbia University. She can be reached at https://www.linkedin.com/in/janinewaclawski/
