HR Insights
Quick Read
December 11, 2025

Achieving AI ROI: “How” matters as much as “what”

As AI adoption accelerates, leaders are learning that responsible implementation isn’t optional — it’s what drives sustained ROI. This blog outlines the critical “how” behind effective, ethical, and people-first AI use. 


AI is reshaping the way organizations operate — from smarter scheduling to personalized learning to transforming how HR teams support a diverse and dynamic workforce. Studying the market, we see organizations gradually moving away from the FOMO (fear of missing out) approach of the last three years, in which they implemented AI without a clear purpose, often at the cost of productivity and employee engagement.

As with any powerful tool, the value of AI doesn’t come solely from what it can do. It also depends heavily on how it's implemented. Privacy, security, accountability, and fairness aren’t buzzwords. They’re critical to making sure AI supports people rather than putting them at risk. They are foundational prerequisites for trust, which in turn brings sustainable AI adoption and results. Sounds straightforward, right? 

Well, not so fast: in our latest Pulse of Talent research, 58% of respondents said AI presents ethical challenges at work. This requires transparency, ownership, and action. Let’s look at how to get there. 

Three aspects of the “how” that can’t be ignored

1. Data privacy issues  

AI systems frequently process large volumes of personal or sensitive data, including employee records, performance metrics, health data (for benefits), and even behavioral analytics. According to the Stanford AI Index report, incidents involving AI-related data privacy rose 56% in a single year. That’s not incremental risk. That’s a wake-up call.  

Whether it's employees or customers, when their data isn’t safe or there is no apparent reason for its use, trust erodes. And in regions under strict data laws (e.g., GDPR in Europe), a privacy misstep means more than bad PR and long-term damage to the brand and internal morale; it can also mean hefty fines.

2. AI-specific security vulnerabilities 

AI introduces new security threats that traditional software did not. A recent government-commissioned study from the UK mapped vulnerabilities across every phase of the AI life cycle — from design and training to deployment and maintenance. These vulnerabilities can be exploited to steal sensitive information, manipulate models, or cause disruptions. 

To use a home security analogy, a powerful AI solution without robust security is like locking the front door but leaving the side door wide open. A break-in is expensive to recover from, in terms of business continuity, compliance, and reputation.

3. Bias, unfairness, and ethical failure 

Even the most sophisticated model is only as good as its training data and the safeguards surrounding its use. If you feed biased data into AI, the output can replicate — or even amplify — those biases. That can lead to unfair decisions and systemic discrimination. Besides eroding trust, this can be grounds for a lawsuit involving both organizations using AI and their technology providers.

Furthermore, many AI ethics professionals argue that the deployment of AI without accountability structures tends to institutionalize existing inequities. In our Pulse of Talent survey, we found that only 26% of organizations have a person or team responsible for the ethical use of AI. 

Best practices for managing the “how” of AI  

Implementing AI responsibly isn't simply an IT challenge. It’s a governance, culture, and leadership challenge. And leaders need to be aware of their potential biases when it comes to responsible AI. In our Pulse of Talent research, we found that executives are 20% more likely than managers and 29% more likely than workers to say they trust their employer to use AI responsibly. 

Here’s what a mature approach to the “how” of AI looks like — and why it belongs on the agenda of every leadership team, not just technology professionals:


  • Governance and accountability. Define clear policies about what data can be used, who can use it, for what purposes, and under what conditions. Decide who is ultimately accountable. 
  • Privacy-by-design and data minimization. Collect only what you need. Store only what is justified. Anonymize or pseudonymize data where possible. Ensure consent, transparency, and user rights. 
  • Security throughout the AI life cycle. From data ingestion to deployment and ongoing maintenance — apply rigorous protections. Use secure infrastructure, control access, and regularly monitor for potential misuse. 
  • Ethical and fairness review. Before deploying an AI-based decision system (e.g., for hiring, promotions, performance), test for bias. Consider whether decisions are explainable and whether outcomes align with organizational values, even if a model is technically correct. 
  • Transparency and explainability. Employees (and customers) should understand when and why AI is used. AI shouldn’t be a black box. 
  • Training, oversight, and continuous monitoring. AI is not “set and forget.” It evolves, and so should your oversight. Monitor outcomes, collect feedback, and iterate to improve. 

The “what” and “how” of AI together = Real value 

AI has enormous potential to transform work. But technology alone doesn’t create value; people and AI together do. If you treat AI as a magic wand — without considering privacy, security, ethics, and governance — it may yield short-term gains. But long term? You risk eroding trust, exposing the organization to risk, and harming people.

As you explore AI’s potential for your organization, remember: the question is no longer only “What can AI do for us?” It is also “How should we adopt and use AI in a way that protects, respects, and empowers people?” 

Leaders who prioritize both will shape the future of work — thoughtfully, responsibly, and with people at the center. 
