Tag: artificial intelligence

  • AI for Talent Development: Good, Scary, or Both? 

    AI is everywhere. Have you recently googled something and used the AI generated answer without ever clicking a link from the list? It’s one of my new favorite efficiencies…and it freaks me out a little. In the “if something scares you, it probably means you should do it” kind of way. We’ve been talking about AI for HR over the last few weeks in the context of our work and industries we support, and today I’m wrapping it up with my take on AI for talent development as a whole. It freaks me out a little, so it’s probably a good thing. Right? 

    Speaking of Google’s AI Overview, here’s what came up when I googled “is AI for talent development?”: 

    Yes, AI (artificial intelligence) is being used in talent management to improve efficiency and outcomes at every stage of the talent lifecycle. AI technologies like machine learning algorithms and natural language processing can help with:

    Recruitment 
    AI can help match candidates with roles based on their skills, experience, and cultural fit. AI-powered semantic search can also help recruiters find qualified candidates by running Google-like searches that go beyond keyword matching.
     

    Employee development 
    AI can help create personalized training plans, learning suggestions, and development goals for each employee. AI can also provide real-time feedback based on performance data, which can help employees make immediate improvements. 

    Performance management 
    AI can help set dynamic goals and provide actionable insights to drive employee performance and development. AI can also help with succession planning by analyzing employee data to identify high performers and predict future leadership needs. 

    Skills intelligence 
    AI-powered tools can help HR leaders gain an in-depth view of their workforce’s skills and potential knowledge gaps. This information can help leaders evolve job architectures as skill needs and business priorities shift.

    For each topic featured in the AI Overview, Google provided links to the source material used to inform the AI response. I wanted to know more about AI in Recruitment, so I clicked the link icon and found more detailed articles: 

    (Shoutout to Avature and HireRoad)

    If you’ve kept up with the latest news at Horizon Point, you likely know that I (Jillian) just returned from a 6-week paid sabbatical. During that time, I slept a lot, I made time for hobbies, and I let my brain slow way, way down. Now coming back to work, it’s nice to ease into slow productivity and learn to incorporate the good of AI into our talent development work. 

    I don’t think anyone can say for sure what the future of AI for HR holds, but for now, let’s be curious and explore AI for talent development with open minds. After all, the simple definition of development is the act of improving by expanding, enlarging, or refining, and AI can certainly help. 

  • Creating Actionable Insights from Open Text Survey Questions

    We are excited to feature a post by Dr. Larry Lowe with RippleWorx in our AI for HR series. We’ve been fortunate to work alongside RippleWorx with mutual clients, and Larry and I were classmates in Leadership Greater Huntsville’s Flagship Program. Larry is wicked smart, but better than that, he is a really great guy!

    We trust Larry’s in-depth insights on AI for HR and how RippleWorx (and you) can use it to understand your workforce’s needs and impact organizational culture. Enjoy!

    Guest Blogger: Dr. Larry Lowe, Chief Scientist at RippleWorx (larry.lowe@rippleworx.com)

    Major Changes Are Coming to Your Organization

    When your organization faces significant changes, a common first step is to send out a survey to understand your workforce’s views on specific topics. Your survey will likely include Likert scale questions, Net Promoter Score (NPS) questions, and some open-text questions. While Likert and NPS questions are straightforward to analyze, open-text questions pose a unique challenge. These responses can be messy in terms of length, sentiment, context, content, format, and spelling, and can even include emojis and text speak (SMH). Despite this messiness, open-text questions often provide the most context and insight. Distilling them into common subject categories is difficult and time-consuming. It is mentally draining to read and categorize thousands of responses, and keeping biases from influencing our decisions is challenging. If only there were a tool to help create structured insights from unstructured data…

    RippleWorx has cracked the code on turning real data into actionable insights that drive meaningful change in organizations. With years of experience analyzing customer feedback, RippleWorx has developed the right AI models to drive continual organizational performance improvements.

    The Power of Generative AI

    If the problem of analyzing a large amount of employee feedback data sounds familiar, good news! One of the greatest benefits of Generative AI in the workplace is its ability to create structured insights from unstructured data. Let’s clarify some terms.

    Structured Data: These are items that fit neatly into rows and columns, like a well-organized Excel spreadsheet where the columns contain consistently formatted data. With structured data, it is straightforward to calculate averages, count categories, or identify outliers. The structure naturally leads to clear insights.

    Unstructured Data: These are items that do not have a predefined format or structure, such as the varied responses to open-ended survey questions. The lack of structure makes deriving insights extremely challenging and sometimes misleading.

    The key to analyzing the open-ended feedback questions from your employees’ surveys is to generate structured, actionable insights from highly unstructured data. Different analytic approaches can be applied, but there are trade-offs. Let’s explore a few.
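    The distinction is easy to see in a small sketch (the survey rows and free-text responses below are invented for illustration):

```python
# Structured data: consistent rows and columns make summary statistics trivial.
structured = [
    {"question": "Q1", "likert": 4},
    {"question": "Q1", "likert": 5},
    {"question": "Q1", "likert": 3},
]
average = sum(row["likert"] for row in structured) / len(structured)
print(f"Average Likert score: {average:.1f}")

# Unstructured data: open-text responses vary in length, tone, spelling, and
# format, so there is no single obvious statistic to compute.
unstructured = [
    "Love my team, but the new tooling rollout was confusing SMH",
    "comms from leadership r inconsistent...",
    "Great culture!",
]
print(f"{len(unstructured)} free-text responses with no shared structure to aggregate")
```

    The first list drops straight into an average; the second needs interpretation before any number can come out of it.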

    Traditional Methods to Analyze Open Text Responses

    Traditional methods for analyzing open text responses include:

    • Manual Coding: Reading each response and categorizing it into predefined themes or codes.

    • Content Analysis: Reading the entire corpus to determine patterns, themes, and meanings.

    • Statistical Text Analysis: Counting word frequency or creating word clouds.

    While statistical text analysis is expedient, it often lacks understanding and semantic meaning across all responses. Manual coding and content analysis are both complex and time-consuming endeavors. When the unstructured data set is large, the human brain cannot equally consider all expressed thoughts. We often get tired and start “seeing” our biases in the data.
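    As a sketch of the statistical approach (the responses below are invented), a simple word-frequency count surfaces a theme like “communication” but cannot tell that “unclear” and “clearer” express the same underlying concern:

```python
from collections import Counter
import re

# Hypothetical open-text responses, invented for illustration.
responses = [
    "Communication from managers is unclear",
    "Better communication would help our team",
    "I want more training and clearer communication",
]

# Statistical text analysis: count word frequency, ignoring common filler words.
stop_words = {"from", "is", "would", "our", "i", "and", "more"}
words = [
    w
    for response in responses
    for w in re.findall(r"[a-z']+", response.lower())
    if w not in stop_words
]
frequency = Counter(words)
print(frequency.most_common(3))
```

    The count correctly flags “communication” as a theme, but every other signal in these responses is flattened to a tally of one.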

    A New Method: Generative AI

    By now, I hope everyone has experimented with the latest chat completion models, such as GPT-4, Claude 3.5, and Gemini 1.5. These models excel at summarizing large corpora of text into easily interpreted bullet points or narrative paragraphs. If the open text responses are saved as a PDF, follow these steps for effective summary insights:

    1. Attach the PDF in the prompt window.

    2. Write the following prompt into the chat window:

    “You are a helpful HR assistant. I have attached a document that includes open text survey questions along with all the responses aggregated across the entire organization. I need you to summarize the top three most mentioned themes in the open text responses. The summary output format should be bullet points, each less than 200 words.”
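    For readers who prefer an API to the chat window, the two steps above can be sketched as a small prompt builder. Extracting text from the PDF and sending the messages to a chat-completion endpoint are left out, since both depend on your vendor and SDK:

```python
# The instruction prompt quoted above, reused verbatim.
SUMMARY_PROMPT = (
    "You are a helpful HR assistant. I have attached a document that includes "
    "open text survey questions along with all the responses aggregated across "
    "the entire organization. I need you to summarize the top three most "
    "mentioned themes in the open text responses. The summary output format "
    "should be bullet points, each less than 200 words."
)

def build_messages(survey_text: str) -> list[dict]:
    """Pair the prompt with the extracted survey text in chat-message format."""
    return [{"role": "user", "content": f"{SUMMARY_PROMPT}\n\n{survey_text}"}]

# Tiny stand-in for the extracted PDF text.
messages = build_messages("Q: What should we improve?\nA: Communication.")
print(messages[0]["content"][:60])
```

    The resulting `messages` list is in the role/content shape most chat-completion APIs accept.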

    Two key benefits arise from this approach:

    1. Semantic Interpretation: The models semantically interpret all open text responses simultaneously, resolving the “messiness” of varied responses. This addresses human fatigue associated with processing large amounts of information, as the language model interprets every response equally and almost instantaneously.

    2. Coherent Output: The model connects extracted themes from the responses and generates a coherent summary following the provided instructions.

    These models’ ability to identify threads and concepts from numerous responses is remarkable. Adjusting your prompt can extract additional information from the PDF. For example, you can ask the model to summarize the top “positive” and “negative” themes mentioned or to develop an action plan addressing the top issues in the responses.

    While these models significantly improve and expedite the summarization of open text questions, there are important considerations. Uploading corporate information into a public chat completion model poses risks. Sensitive topics discussed may not be intended for public disclosure. This data could be used to train future models, or your prompts and attached data could potentially be hacked and published later. Ensuring data security should be paramount when using Generative AI in your workflows.

    An Even Better Method: Generative AI Mapped into a Performance Taxonomy

    For even greater insight, integrating an organizational performance taxonomy into the prompt allows the model to categorize responses into different dimensions of the organization before summarizing actionable insights. This approach provides more precise results by highlighting not just the overall organizational strengths and weaknesses but pinpointing strengths and weaknesses to specific areas within the organization.

    RippleWorx has created a model for organizational performance called the Performance Chain. In the Performance Chain, an individual addresses a role, roles combine to form teams, and teams combine to form the organization. A performance taxonomy accompanies each link in the chain. The taxonomy for the individual includes motivation and well-being concepts. The taxonomy for the role covers hard and soft skill proficiency and employee readiness. The taxonomy for the team covers collaboration and tactical task execution. The taxonomy at the organizational level covers strategic leadership, culture and climate setting, and key performance metrics.

    Embedding the performance taxonomy within the prompt flow results in more precise insights within the organization. For instance:

    General Prompt Response: “Communication is an issue in the organization.”

    Performance Taxonomy Prompt Response: “Multiple middle managers are having trouble communicating action plans with their teams.”

    The general prompt provides a broad level of actionable insight, but the prompt with the performance taxonomy offers deeper insights, such as the need for targeted training for middle managers. The primary goal of assessing actionable insights is to implement targeted interventions that increase organizational performance.
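    A minimal sketch of the idea, with taxonomy levels paraphrased from the Performance Chain description above (this is an illustration, not RippleWorx’s actual implementation):

```python
# Paraphrased taxonomy levels from the Performance Chain description.
PERFORMANCE_TAXONOMY = {
    "individual": ["motivation", "well-being"],
    "role": ["hard skills", "soft skills", "readiness"],
    "team": ["collaboration", "task execution"],
    "organization": ["strategic leadership", "culture and climate", "key metrics"],
}

def taxonomy_prompt(survey_text: str) -> str:
    """Embed the taxonomy so the model tags responses by level before summarizing."""
    levels = "\n".join(
        f"- {level}: {', '.join(concepts)}"
        for level, concepts in PERFORMANCE_TAXONOMY.items()
    )
    return (
        "You are a helpful HR assistant. Categorize each survey response below "
        "into one level of this performance taxonomy, then summarize the top "
        "themes for each level:\n"
        f"{levels}\n\nResponses:\n{survey_text}"
    )

prompt = taxonomy_prompt("My manager never explains the action plan.")
print(prompt)
```

    A response like the one in this example would land at the team level, which is what lets the summary point at middle managers rather than “communication” in general.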

    The Wrap-Up

    Organizing and analyzing open text survey responses is just one example of how RippleWorx is utilizing Generative AI to transform organizational performance. The Performance Chain framework also integrates external surveys, performance evaluations, and key performance indicator data into our Generative AI prompt workflows. Including this information along with a performance framework provides an even greater level of resolution for actionable insights. The additional resolution aids leaders across the organization in creating targeted action plans that keep individuals motivated and increase organizational momentum.

    www.rippleworx.com

  • AI Isn’t Replacing Jobs, Rather, It’s Writing Them

    This week we continue our exploration of AI. I must admit, I’ve been hesitant to give AI a chance. Given the ethical and legal concerns with its use and my own personal worries about whether it could perform for my needs, I saw no reason to engage with it. These past few weeks, however, I’ve been testing its applications in the workplace for HR-related tasks.

    Recently, I’ve been working on a compensation project that involved pulling market data and reviewing job descriptions. It felt like a good opportunity to test AI’s research and writing capabilities. In recent months, ChatGPT, a large language model developed by OpenAI, has undergone several updates that give it new capabilities beyond writing. One new feature is internet research, but how accurate is it?

    To test this, I enlisted my tech-savvy kids to ask ChatGPT for market data at the 25th percentile in Huntsville, AL for an Administrative Assistant. Below I’ve attached screenshots of their results.

    Though they asked the same question, they each got slightly different answers. And when we double-checked the results, it turned out that ChatGPT had provided inaccurate information. The link it cited shows that the range for the position as a whole is actually $46,530 to $58,286. See for yourself: https://www.salary.com/research/salary/listing/administrative-assistant-salary/huntsville-al

    Comparing the ChatGPT results to CompAnalyst (Salary.com’s paid wage database), I found that the 25th percentile salary for an Administrative Assistant in Huntsville is about $35,000. That aligns with the result one of my kids got; however, it doesn’t align with the source ChatGPT cited, so we’re unsure where the information came from. The result my other son got, $39,502, aligns with the median rate on CompAnalyst, which was $39,900.
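    A quick way to sanity-check any AI-quoted figure like this is to compute the percentiles yourself from raw salary samples, for example with Python’s standard library (the salaries below are invented, not real market data):

```python
import statistics

# Hypothetical salary samples for one job title in one metro area.
salaries = [31_000, 33_500, 35_000, 37_200, 39_900, 42_000, 45_500, 48_000]

# statistics.quantiles with n=4 returns the three quartile cut points:
# the first is the 25th percentile, the second is the median (50th).
q1, median, q3 = statistics.quantiles(salaries, n=4)
print(f"25th percentile: ${q1:,.0f}, median: ${median:,.0f}")
```

    Even a rough check like this makes it obvious when a chatbot hands you a median dressed up as a 25th percentile.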

    Next, I decided to see how well ChatGPT wrote job descriptions, so I asked it to write one for an entry-level GIS Analyst. The results were actually pretty decent. The job description had a well-written summary of the role, accurately outlined key responsibilities, and listed specific qualifications, including knowledge of GIS software such as ArcGIS and QGIS. ChatGPT also included the benefits offered by the employer and outlined the application process. My favorite part, though, is that ChatGPT even included an EEO statement. What it lacked was information on the physical requirements of the job and the work environment, so I tested it on a job that requires more physical ability: a police officer. But once again, ChatGPT didn’t include any information on the physical requirements or work environment of the role.

    These were just two simple tests of ChatGPT and how it could benefit HR professionals. Having given it a try, I do believe that AI can be beneficial to HR and help create a starting point for many HR tasks. The key takeaway for me, however, is that AI is exactly that: a starting point. It’s a tool to aid you, but you still have to do the work. Research the data you obtain through AI; review any document you have AI create for accuracy, compliance, and best practices; and remember that you are still responsible for any liability that using AI creates.

  • Be Creative Anyway: How ATD24 Made Me Feel Better About AI

    Attending the ATD24 International Conference made me feel so energized and prepared for another year around the sun in talent development. The obvious buzzword: Artificial Intelligence (AI). I walked away with pages and pages of notes on AI in training and development. Mary Ila kicked off our series on AI last week, so now I’m sharing a rundown (written in part using ChatGPT) of my key AI takeaways from ATD24.

    Generative AI: The Game-Changer in Scenario-Based Learning

    One of the sessions that really stood out to me was “Use Generative AI to Create Scenario-Based Learning” by Kevin Alster and Elly Henriksen from Synthesia. They showed us how generative AI can take the heavy lifting out of creating scenario-based learning (SBL). Imagine being able to quickly craft engaging, real-world scenarios that captivate your learners and improve retention.

    The tools and frameworks they demonstrated were incredibly user-friendly, making it feasible for anyone to enhance their courses without needing a PhD in AI. This session made it clear that SBL, powered by AI, is not just a future concept but a present-day reality that can significantly elevate our training programs.

    Navigating the Inclusion Maze with AI

    Then there was the eye-opening session by Mychal Patterson of The Rainbow Disruption, titled “AI Doesn’t Mean ‘Always Inclusive.’” This was a deep dive into the potential pitfalls of AI when it comes to diversity, equity, and inclusion (DEI). Mychal highlighted some serious risks, like biased data leading to exclusionary outcomes and the lack of diversity in AI development teams. These are real challenges that can undermine your DEI efforts if not addressed properly.

    This session was a reminder that while AI offers huge benefits, we need to implement it thoughtfully and inclusively to avoid reinforcing existing biases. We’ve written about inclusive training before, and now we are reminded to be more intentional with avoiding language and representation bias, with or without the use of AI.

    Demystifying AI for Leadership Development

    DDI also showed up strong with Patrick Connell’s session, “Demystify AI for Development: What’s Hype, What’s Real, and What to Do,” which struck a perfect balance between optimism and practicality. He debunked some common myths about AI (i.e. we’re not all losing our jobs) and showcased how it can be a real asset in leadership development.

    From using AI-driven assistants for data analysis to generating personalized content, Connell provided a roadmap for integrating AI into our strategies in a way that enhances, rather than overwhelms. This session made AI seem less daunting and more achievable. Since the conference, HPC has practiced using AI to write first drafts of program learning objectives, training outlines, and more.

    Redesigning Training Programs to Stay Relevant

    Another session that hit home for me was actually during the Chapter Leaders Conference that some of us from ATD Birmingham attended prior to the International Conference. The session was “Making it Competitive: Redesigning Your Chapter Programming to Offer Relevant Knowledge, Skills, and Abilities” by Miko Nino. Miko stressed the importance of continuously updating and evaluating our training programs to keep pace with the changing demands of employers and learners. Using technology to assess and enhance curriculum effectiveness was a major highlight.

    The session also covered developing marketing and financial plans to ensure these programs are not only impactful but also sustainable. It was a comprehensive guide to making our training offerings more competitive and relevant.

    Tackling AI Integration Challenges

    Of course, the conference didn’t shy away from discussing the challenges of integrating AI in training and development. But the consensus was clear: with careful planning and a commitment to ethical considerations, we can mitigate the risks.

    For us, an example might be clearly identifying when something we deliver is made with AI, even in small part. If we use AI to create graphics or images that we share in marketing or in training programs, we need to clearly label those as made with AI. We’re all still learning how to use AI ethically, and it starts with a good faith effort on the front end.

    So…What’s Next?

    ATD24 gave me so many insights on AI in training and development. The sessions highlighted how AI can help make learning more personalized, efficient, and inclusive. But they also underscored the need for thoughtful implementation; the future of T&D is not just about adopting new technologies, but about doing so in a human way that truly enhances learning for everyone.

    For now, my AI journey is all about “do it anyway”. Feel intimidated by AI and use it anyway. Don’t feel very creative? Create anyway. Using AI in my work helps me be creative anyway, and that’s a positive in my book.

    Image made with AI to illustrate the idea of “create anyway”
  • AI and HR- A Series

    How would your grandmother state your organizational values? Well, ChatGPT might give you some insights.

    As I sat down with a client to help them form their values statements after the values mapping session I facilitated, we decided there were a few words that just weren’t right. They were close, but we needed a better word or two, so we put what we had into ChatGPT. After various takes on the language, including how your southern grandmother would say it (with, of course, several “bless your hearts” thrown in from ChatGPT and some laughter from us), we landed on descriptors that resonated with the behaviors we were trying to articulate through shared language.

    There is a lot of talk about what AI (artificial intelligence) is going to do to this world, or has already done. Jillian highlighted how it was a focus at the annual ATD conference in her recent blog post. As she said, we are all relatively new to it and not very good at it, but we think it deserves some attention.

    Whereas many people want to make AI out to be the next major moral dilemma of our times, the way everyone is going to “cheat” in school and on the job, or the thing that is going to take all our jobs away, I think a more practical approach to what AI is and what it can do for business, and specifically for HR, deserves some focus. So we are going to spend some time learning and then sharing that learning with you in a series of blog posts.

    Over the next few weeks, we will be writing about how we and others are using AI to impact HR practices, which will hopefully provide insights into how you might use it at work as well. We will talk about the tools being used, give you some thoughts on how AI might make you a better practitioner and leader, and provide insights on what we see coming next.

    AI may not be right for your organization just yet, but it may help you get a good laugh in or channel the language of your inner grandmother when you are trying to find just the right words for your next job description, proposal, or values statement. Or, you could try Canva AI and let it illustrate your next blog post. Which illustration do you like better?