How Ernst & Young’s AI platform is ‘radically’ reshaping operations

Multinational consultancy Ernst & Young (EY) said generative AI (genAI) is “radically reshaping” the way it operates, and the company boasts a 96% employee adoption rate for the technology.

After spending $1.4 billion on a customized generative AI platform called EY.ai, the company said the technology is creating new efficiencies and allowing its employees to focus on higher-level tasks. Following an initial pilot with 4,200 EY tech-focused team members in 2023, the global organization released its large language model (LLM) to its nearly 400,000 employees.

Even so, the company’s executive leadership insists it’s not handing off all of its business functions and operations to an AI proxy and that humans remain at the center of innovation and development. Looking to the future, EY sees the next evolution as artificial general intelligence (AGI) — a neural network able to think for itself and capable of performing any intellectual task a human can. At that point, it will become a “strategic partner shifting the focus from task automation to true collaboration between humans and machines,” according to Beatriz Sanz Saiz, EY global consulting data and AI leader.

Computerworld interviewed Saiz about how genAI is changing the way the company operates and how its employees perform their jobs.

You launched EY.ai a year ago. How has that transformed your organization? What kinds of efficiencies and/or productivity gains have you seen? “Over the past year, we’ve harnessed AI to radically reshape the way we operate, both internally and in service to our clients. We’ve integrated AI into numerous facets of our operations, from enhancing client service delivery to improving our internal efficiencies. Teams are now able to focus more on high-value activities that truly drive innovation and business growth, while AI assists with complex data analysis and operational tasks.

“What is fascinating is the level of adoption: 96.4% of EY employees are users of the platform, which is enriching our collective intelligence. EY.ai is a catalyst for changing the way we work and re-skilling EY employees at pace.

“We’ve approached this journey by using ourselves as a perfect test case for the many ways in which we can provide transformational assistance to clients. This is central to our Client Zero strategy, in which we refine solutions and demonstrate their effectiveness in real-world settings — then adapt the crucial learnings from that process and apply them to driving innovation and growth for clients.”

How has EY.ai changed over the past year? “EY.ai has evolved in tandem with the rapid pace of technological advancement. Initially, we focused on testing and learning, but now we’re deeply embedding AI across every function of our business. This shift from experimentation to full-scale implementation is enabling us to be more agile, efficient, and responsive to our clients’ needs. In this journey, we’ve learned that AI’s potential isn’t just about isolated use cases — its true power lies in how it enables transformation at scale.

“The platform’s integration has been refined to ensure that it aligns with our core strategy — especially around making AI fit for purpose within the organization. It evolved from Fabric — an EY core data platform — to EY.ai, which incorporates a strong knowledge layer and AI technology ecosystem. In that sense, we’ve put a lot of effort into understanding the nuances of how AI can best serve each business, function and industry. We are rapidly building industry verticals that challenge the status quo of traditional value chains. We are constantly evolving its ethical framework to ensure the responsible use of AI, with humans always at the heart of the decision-making process.”

Can you describe EY.ai in terms of the model behind it, its size, and the number of instances you have (i.e., an instance for each application, or one model for all applications)? “EY.ai isn’t a one-size-fits-all solution; it operates as a flexible ecosystem tailored to the unique needs of different functions within our organization. We deploy a combination of models, ranging from [LLMs] to smaller, more specialized models designed for specific tasks. This multi-model approach allows us to leverage both open-source and proprietary technologies where they best fit, ensuring that our AI solutions are scalable, efficient, and agile across different applications.”
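To make that multi-model approach concrete, here is a minimal sketch of task-based model routing in Python. It is illustrative only: the task names, model labels and routing table are assumptions for the example, not a description of how EY.ai is built.

```python
# Hypothetical sketch of routing tasks to the model that fits them best.
# Model names and task types are placeholders, not EY.ai internals.
from dataclasses import dataclass


@dataclass
class ModelChoice:
    name: str     # which model to use
    hosted: bool  # True for a large hosted LLM, False for a small local model


# Route each task type to an appropriately sized model.
ROUTES = {
    "contract_summary": ModelChoice("large-general-llm", hosted=True),
    "invoice_tagging": ModelChoice("small-finetuned-classifier", hosted=False),
    "code_review": ModelChoice("large-general-llm", hosted=True),
}


def route(task_type: str) -> ModelChoice:
    """Pick a model for a task; default to the smallest model that can do the job."""
    return ROUTES.get(task_type, ModelChoice("small-finetuned-classifier", hosted=False))


if __name__ == "__main__":
    choice = route("invoice_tagging")
    print(f"Routing to {choice.name} (hosted={choice.hosted})")
```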

What advice do you have for other enterprises considering implementing their own AI instances? Go big with LLMs, or choose small language models built on open-source or proprietary foundations (such as Llama-3-type models)? What are the advantages of each? “My advice is to start with a clear understanding of your business goals. Large language models are incredibly powerful, but they’re resource-intensive and can sometimes feel like a sledgehammer for tasks that require a scalpel. Smaller models offer more precision and can be fine-tuned to specific needs, allowing for greater efficiency and control. It’s all about finding the right balance between ambition and practicality.”
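As a hedged illustration of the “scalpel” end of that trade-off, the snippet below runs a compact open-source classifier locally for a narrow, well-defined task. It assumes the Hugging Face transformers library is installed; the model shown is the library’s stock sentiment model, used purely as an example rather than a recommendation from EY.

```python
# A small, fine-tuned model handles a scoped task cheaply and locally,
# where a large hosted LLM would be overkill.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("The audit engagement closed ahead of schedule."))
```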

What is knowledge engineering and who’s responsible for that role? “Knowledge engineering involves structuring, curating, and governing the knowledge that feeds AI systems, ensuring that they can deliver accurate, reliable, and actionable insights. Unlike traditional data science, which focuses on data manipulation, knowledge engineering is about understanding the context in which data exists and how it can be transformed into useful knowledge.

“Responsibility for this role often falls to Chief Knowledge Officers or similar roles within organizations. These individuals ensure that AI is not only ingesting high-quality data, but also making sense of it in ways that align with the organization’s goals and ethical standards.”
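One way to picture what knowledge engineering adds on top of raw data is to attach provenance and governance metadata to every curated item an AI system is allowed to use. The sketch below is a generic illustration in Python; the field names are assumptions, not the schema of EY.ai’s knowledge layer.

```python
# Illustrative only: a curated "knowledge item" carrying the provenance and
# governance metadata that knowledge engineering implies.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class KnowledgeItem:
    content: str        # the curated statement the AI may draw on
    source: str         # where it came from (system, document, expert)
    domain: str         # business context, e.g. "tax" or "assurance"
    reviewed_by: str    # accountable owner, e.g. the CKO's team
    review_date: date   # when it was last validated
    tags: list[str] = field(default_factory=list)


item = KnowledgeItem(
    content="Revenue recognition for bundled contracts follows policy X.",
    source="internal-policy-library",
    domain="assurance",
    reviewed_by="knowledge-governance-team",
    review_date=date(2024, 9, 1),
    tags=["policy", "revenue"],
)
print(item.domain, "->", item.content)
```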

What kind of growth are you seeing in the number of Chief Knowledge Officers, and why are they growing in numbers? “The rise of the Chief Knowledge Officer (CKO) is directly tied to the increasing importance of knowledge engineering in today’s AI-driven world. We are witnessing a fundamental shift where data alone isn’t enough. Businesses need structured, actionable knowledge to truly harness AI’s potential.

“CKOs are becoming indispensable, because in the scenario of agent-based workflows in the enterprise, it is knowledge, not just data, that agents will deploy to accomplish an outcome — for example, customer service or back-office operations. The CKO’s role is pivotal in aligning AI’s capabilities with business strategy, ensuring that insights derived from AI are both accurate and actionable. It’s not just about managing information, it’s about driving strategic value through knowledge.”
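A minimal, hypothetical sketch of that idea: an agent answers from a small store of governed knowledge and hands off to a person when no approved knowledge applies. It is not EY’s agent design; the knowledge entries and routing rules are invented for illustration.

```python
# Hypothetical agent that consults curated knowledge rather than raw data.
KNOWLEDGE_BASE = {
    "refund_policy": "Refunds are issued within 14 days for undamaged returns.",
    "escalation": "Disputes above 10,000 EUR go to the regional service lead.",
}


def customer_service_agent(question: str) -> str:
    """Answer from governed knowledge; escalate when no approved knowledge applies."""
    if "refund" in question.lower():
        return KNOWLEDGE_BASE["refund_policy"]
    if "dispute" in question.lower():
        return KNOWLEDGE_BASE["escalation"]
    return "Escalating to a human colleague."  # a human stays in the loop


print(customer_service_agent("How long do refunds take?"))
```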

What kind of decline are you seeing in data science roles, and why? “We’re seeing a decline in roles focused purely on data wrangling or basic analytics, as these functions are increasingly automated by AI. However, this shift doesn’t mean data science is becoming obsolete — it means it’s evolving.

“Today, the focus is on data architects, knowledge engineering, agent development and AI governance — roles that ensure AI systems are deployed responsibly and aligned with business goals. We’re also seeing a greater emphasis on roles that do the vital job of managing the ethical dimensions of AI, ensuring transparency and accountability in its use and compliance as the new EU AI Act obligations become effective.”

Many companies have invested resources in cleaning up their unstructured and structured data lakes so that data can be used for generating AI responses. Why, then, do you see fewer and not more investments in data scientists? “Companies are prioritizing AI tools that can automate much of the data preparation and curation process. The role of the data scientist, over time, will evolve into one that’s more about overseeing these automated processes and ensuring the integrity of the knowledge being generated from the data, rather than manually analyzing or cleaning it. This shift also highlights the growing importance of knowledge engineering over traditional data science roles.
 
“The focus is shifting from manual data analysis to systems that can automatically clean, manage, and analyze data at scale. As AI takes on more of these tasks, the need for traditional data science roles diminishes. Instead, the emphasis is on data architects, knowledge engineering — understanding how to structure, govern, and utilize knowledge in ways that enhance AI’s performance and inform AI agent developers.”
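For a sense of the routine preparation work being automated away, the short pandas example below deduplicates records, drops rows missing a key and normalizes numeric text. The data is made up, and the steps are a generic illustration rather than EY’s pipeline.

```python
# Generic illustration of automating routine data-preparation steps.
import pandas as pd

raw = pd.DataFrame({
    "client_id": [101, 101, 102, None],
    "invoice_total": ["1,200", "1,200", "950", "430"],
})

cleaned = (
    raw.drop_duplicates()                      # remove exact duplicate rows
       .dropna(subset=["client_id"])           # drop rows missing a key field
       .assign(invoice_total=lambda d: d["invoice_total"]
               .str.replace(",", "")
               .astype(float))                 # normalize numeric text
)
print(cleaned)
```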

What do you see as the top AI roles emerging as the technology continues to be adopted? “We’re seeing a new wave of AI roles emerging, with a strong focus on governance, ethics, and strategic alignment. Chief AI Officers, AI governance leads, knowledge engineers and AI agent developers are becoming critical to ensuring that AI systems are trustworthy, transparent, and aligned with both business goals and human needs.

“Additionally, roles like AI ethicists and compliance experts are on the rise, especially as governments begin to regulate AI more strictly. These roles go beyond technical skills — they require a deep understanding of policy, ethics, and organizational strategy. As AI adoption grows, so too will the need for individuals who can bridge the gap between the technology and the focus on human-centered outcomes.”

How will artificial general intelligence (AGI) transform the enterprise long term? “AGI will revolutionize the enterprise in ways we can barely imagine today. Unlike current AI, which is designed for specific tasks, AGI will be capable of performing any intellectual task a human can, which will fundamentally change how businesses operate. AGI has the potential to be a strategic partner in decision-making, innovation, and even customer engagement, shifting the focus from task automation to true collaboration between humans and machines. The long-term impact will be profound, but it’s crucial that AGI is developed and governed responsibly, with strong ethical frameworks in place to ensure it serves the broader good.”

Many believe AGI is the more frightening AI evolution. Do you believe AGI has a place in the enterprise, and can it be trusted or controlled? “I understand the concerns around AGI, but with the right safety controls, I believe it has enormous potential to bring positive change if it’s developed responsibly. AGI will certainly have a place in the enterprise. It will fundamentally transform the way companies achieve outcomes. This technology is driven by goals, outcomes — not by processes. It will disrupt the pillar of process in the enterprise, which will be a game changer.

“For that reason, trust and control will be key. Transparency, accountability, and rigorous governance will be essential in ensuring AGI systems are safe, ethical, and aligned with human values. At EY, we strongly advocate for a human-centered approach to AI, and this will be even more critical with AGI. We need to ensure that it’s not just about the technology, but about how that technology serves the real interests of society, businesses, and individuals alike.”

How do you go about ensuring “a human is at the center” of any AI implementation, especially when you may some day be dealing with AGI? “Keeping humans at the center, especially as we approach AGI, is not just a guiding principle — it’s an absolute necessity. The EU AI Act is the most developed effort yet in establishing the guardrails to control the potential impacts of this technology at scale. At EY, we are rapidly adapting our corporate policies and ethical frameworks in order to, first, be compliant, but also to lead the way in showing the path of responsible AI to our clients.

“At EY, we believe that AI implementation should always be framed by ethics, human oversight, and long-term societal impacts. We actively work to embed trust and transparency into every AI system we deploy, ensuring that human wellbeing and ethical considerations remain paramount at all times. AGI will be no different: its success will depend on how well we can align it with human values, protect individual rights, and ensure that it enhances, rather than detracts from, our collective future.”