
Gartner expects 20 per cent of companies to use AI to flatten their organisational structures by 2026, eliminating 50 per cent of current middle management positions; challenges include job-security concerns among the wider workforce, employee distrust and a lack of development opportunities

October 24, 2024 (Business Line) –

Consulting firm Gartner has predicted that, through 2026, one in five organisations will use AI to flatten their organisational structure, eliminating more than half of current middle management positions. Organisations can capitalise on reduced labour costs, and AI deployment may enhance productivity and increase span of control by automating task assignment, scheduling and performance monitoring for the remaining workforce, freeing managers to focus on scalable, value-added activities.

However, this implementation will also present challenges for organisations, with the wider workforce concerned over job security and remaining employees reluctant to change or to adopt AI-driven ways of working. Additionally, mentoring and learning pathways may break down, and more junior workers could suffer from a lack of development opportunities. “It is clear that no matter where we go, we cannot avoid the impact of AI,” said Daryl Plummer, Distinguished VP Analyst, Chief of Research, and Gartner Fellow. “AI is evolving as human use of AI evolves. Before we reach the point where humans can no longer keep up, we must embrace how much better AI can make us.”

By 2029, 10 per cent of global boards will use AI guidance to challenge executive decisions, as AI-generated insights increasingly shape executive decision-making and empower board members to push back on it. “Impactful AI insights will at first seem like a minority report that doesn’t reflect the majority view of board members,” said Plummer. “However, as they prove effective, they will gain acceptance among executives competing for decision support data to improve business results.”

By 2028, 40 per cent of large enterprises will deploy AI to manipulate and measure employee mood and behaviours, all in the name of profit, noted Gartner. AI can perform sentiment analysis on workplace interactions and communications, providing feedback to ensure overall sentiment aligns with desired behaviours and supports an engaged workforce.
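
As an illustration of the kind of sentiment analysis described above, the sketch below aggregates a sentiment score over a handful of workplace messages using the open-source Hugging Face transformers pipeline. The model choice and the sample messages are assumptions for demonstration only and are not drawn from Gartner's research.

```python
# Illustrative sketch only: aggregate sentiment over workplace messages.
# The model and the messages below are assumptions for demonstration,
# not part of Gartner's research.
from transformers import pipeline  # pip install transformers

# A general-purpose sentiment classifier; any comparable model would do.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

messages = [
    "Great job on the release, team!",
    "I'm worried we won't hit the deadline again.",
    "The new process is confusing and slows everyone down.",
]

results = classifier(messages)

# Simple signed average: +score for POSITIVE labels, -score for NEGATIVE.
signed = [r["score"] if r["label"] == "POSITIVE" else -r["score"] for r in results]
overall = sum(signed) / len(signed)
print(f"Overall sentiment score: {overall:+.2f}")
```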

“Employees may feel their autonomy and privacy compromised, leading to dissatisfaction and eroded trust,” said Plummer. “While the potential benefits of AI-driven behavioural technologies are substantial, companies must balance efficiency gains with genuine care for employee wellbeing to avoid long-term damage to morale and loyalty.”  

By 2027, 70 per cent of new contracts for employees will include licensing and fair usage clauses for AI representations of their personas. Emerging large language models (LLMs) have no set end date, meaning employees’ data captured by enterprise LLMs will remain part of the model not only during their employment but also after it ends.

By 2027, 70 per cent of healthcare providers will include emotional-AI-related terms and conditions in technology contracts or risk billions in financial harm. Rising patient demand has increased healthcare workers’ workloads, driving clinician burnout and prompting workers to leave. Using emotional AI for tasks such as collecting patient data can free up healthcare workers’ time.

By 2028, 30 per cent of S&P companies will use GenAI labelling, such as “xxGPT,” to reshape their branding while chasing new revenue. 

CMOs view GenAI as a tool for launching both new products and new business models. It also opens new revenue streams by bringing products to market faster, delivering better customer experiences and automating processes. As the GenAI landscape becomes more competitive, companies are differentiating themselves by developing specialised models tailored to their industry.

By 2028, 40 per cent of CIOs will demand that “Guardian Agents” be available to autonomously track, oversee or contain the results of AI agent actions. As this new level of intelligence is added, GenAI agents are poised to expand rapidly in strategic planning for product leaders. “Guardian Agents” build on notions such as security monitoring, observability, compliance assurance, ethics, data filtering and log reviews to oversee AI agents. Through 2025, the number of product releases featuring multiple agents will rise steadily, with more complex use cases.
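
For readers wondering what such oversight might look like in practice, the sketch below shows a generic “guardian” wrapper that reviews, logs and optionally blocks actions proposed by another agent. The class names, policy rules and logging choices are illustrative assumptions, not Gartner’s definition or any vendor’s API.

```python
# Illustrative sketch of a "guardian" wrapper around an AI agent's actions.
# All names and rules here are hypothetical; no specific product is implied.
import logging
from dataclasses import dataclass
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("guardian")

@dataclass
class AgentAction:
    kind: str      # e.g. "send_email", "update_record"
    payload: str   # the content the agent wants to act on

class GuardianAgent:
    """Tracks, filters and logs actions proposed by another agent."""

    def __init__(self, blocked_keywords: list[str]):
        self.blocked_keywords = blocked_keywords

    def review(self, action: AgentAction) -> bool:
        """Return True if the action may proceed, False if it is contained."""
        if any(word in action.payload.lower() for word in self.blocked_keywords):
            log.warning("Blocked %s action: policy keyword found", action.kind)
            return False
        log.info("Approved %s action", action.kind)
        return True

def run_with_guardian(agent_step: Callable[[], AgentAction],
                      guardian: GuardianAgent) -> None:
    action = agent_step()
    if guardian.review(action):
        # In a real system the approved action would be executed here.
        print(f"Executing: {action.kind}")
    else:
        print(f"Contained: {action.kind}")

# Example usage with a stubbed agent step.
guardian = GuardianAgent(blocked_keywords=["confidential", "salary"])
run_with_guardian(lambda: AgentAction("send_email", "Quarterly salary data attached"),
                  guardian)
```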

“In the near term, security-related attacks of AI agents will be a new threat surface,” said Plummer. “The implementation of guardrails, security filters, human oversight, or even security observability is insufficient to ensure consistently appropriate agent use.”
