Generative AI will enhance human potential, not replace it, KPMG exec says

ChatGPT propelled generative AI into the spotlight, giving rise to several other chatbots such as Google’s Bard and Microsoft’s Bing. (Reuters/Sourced)
Updated 15 June 2023

  • Traditionally, technology has ‘freed us to focus on more complex work,’ Fady Kassatly says
  • Kingdom aims to be global leader in field, seeks $20bn in investment by 2030

DUBAI: Earlier this year, advisory firm KPMG announced an initiative to deploy a series of generative artificial intelligence investments and alliances to empower its workforce, offer new solutions for clients and redefine how it operates.

The company is not alone in doing so. Several others have adopted AI, particularly generative AI, both internally and externally. Deloitte, for example, launched its AI Institute in Riyadh last month, while in February, STEP Conference used ChatGPT to write copy for its outdoor adverts.

Simply put, generative AI is a subset of AI “that can produce new and original content,” Fady Kassatly, partner and head of digital and innovation at KPMG, told Arab News.

The content can include anything from text and images to music and even software code.

“While traditional AI focuses on categorizing or labeling existing data, generative AI pushes boundaries by producing new output,” Kassatly said.

ChatGPT, in particular, propelled generative AI into the spotlight for mass audiences, giving rise to several other chatbots such as Google’s Bard, Microsoft’s Bing and Snapchat’s My AI.

The generative AI hype went beyond ChatGPT, however, with the surge of other models like “GPT-4, PaLM2, Stable Diffusion and DALL-E, with applications like ChatGPT and Bard leveraging these models to achieve meaningful results,” Kassatly said.

Microsoft has already integrated its Copilot AI assistant across GitHub, Office 365, Teams and Windows, while software company Salesforce has launched Einstein GPT, which it describes as the world’s first generative AI customer relationship management technology, delivering AI-created content across every sales, service, marketing, commerce and IT interaction.

“Generative AI in business applications could lead to transformative results, sparking innovation, efficiency and growth in the region,” Kassatly said.

This was particularly evident in the UAE, “where the National AI Strategy 2031 aims to accelerate the adoption of emerging AI technologies and attract and nurture talent to develop solutions to complex problems using AI,” he added.

Similarly, Saudi Arabia launched its National Strategy for Data and Artificial Intelligence in October 2020, aimed at making the Kingdom a global leader in the field as it seeks to attract $20 billion in foreign and local investments by 2030.

The Kingdom also aims to transform its workforce by training and developing a pool of 20,000 AI and data specialists.

“Governments in the region have recognized the potential of these emerging technologies and are taking active steps to incorporate them into their economies and societies,” said Patrick Patterson, CEO of Level Agency.

“For example, the UAE, which is projected to gain the most from AI — accounting for close to 14 percent of its GDP by 2030 — has even appointed a minister of state for digital economy, AI and remote working systems to oversee this digital transformation,” he told Arab News.

As with any new technology, the growth of generative AI has raised concerns over its ability to replace humans.

“Looking at the history of transformative technologies like spreadsheets, graphical user interfaces, the internet and smartphones, we see a pattern where these technologies haven’t displaced humans but have instead made tasks more efficient, and freed us to focus on more complex work, leading to heightened productivity and innovation,” said Kassatly, who predicts a similar pattern with generative AI.

That said, the job market is likely to change in the future with new industries, businesses and jobs being created.

But this transformation would also lead to job losses, especially in the short term, in sectors like customer service, translation and interpretation, data entry and accounting, Patterson warned.

“In fact, as per some estimates, up to 45 percent of the current jobs in the Middle East could potentially be automated by 2030.

“(But) It is imperative to remember that every technological revolution, while displacing old jobs, also gives rise to new roles that we may not have even envisaged yet,” he said.

Patterson added that as AI systems became more prevalent, the job market would see a surge in demand for data scientists and other roles related to their development and maintenance, with the Middle East expected to “create approximately $366.6 billion in wage income from AI-related roles.”

Still, the growth of AI is not without risk.

“Bias is an inherent risk in AI systems since they learn from existing data. So if the training data contains biases, these biases can be reflected in the outputs generated by AI algorithms,” Kassatly said.

“Another pitfall is hallucinations, which occur when generative AI produces outputs that are entirely fabricated and lack factual or logical basis.”

In April, ChatGPT was found to be citing articles from The Guardian that never existed. The British newspaper was contacted separately by a student and a researcher asking about articles they had come across while using ChatGPT for research but could not find.

Kassatly said the reasons for this “are not yet fully understood, but advancements in tools and techniques are expected to reduce this occurrence over time.”

Another area of concern is data privacy.

“Ensuring enterprise-grade security and data privacy measures is essential when deploying generative AI systems,” he added.

“When using publicly available generative AI applications like ChatGPT or Bard for content creation, there is also a risk of generated text or images not complying with applicable IP or copyright laws, and human intervention or supervision is necessary to navigate these pitfalls and ensure compliance.”

Lastly, like any powerful technology, generative AI was susceptible to intentional misuse, Kassatly said.

Phishing, a technique used by online hackers to acquire sensitive data such as passwords or banking information, has been a leading concern for chief information security officers in Saudi Arabia, with 30 percent identifying such attacks as the most significant threat to their organizations, according to Proofpoint’s 2022 Voice of the CISO report.

The use of generative AI to craft “highly convincing phishing attacks” was a concern, Emile Abou Saleh, Proofpoint’s regional director for the Middle East and Africa, told Arab News.

“The rapid evolution of generative AI techniques poses a challenge for traditional security defenses, and the ability of AI models to generate realistic content and imitate trusted sources can undermine even the most robust security measures,” he added.

The rise of deepfakes, misinformation and misleading content was already a challenge, even before the popularity of generative AI.

Arthur Gregg Sulzberger, chairman of the New York Times, said during a panel discussion at the World Economic Forum Annual Meeting earlier this year that ChatGPT would exacerbate the global problem of disinformation.

“A lot of this will not be information that is created with the intent to mislead, but based on everything I’ve read, I suspect we are going to see huge amounts of content that is produced, none of which is particularly verified (and) the origins of which are not particularly clear,” he said.

In a speech in Washington last month on how best to regulate AI, Microsoft President Brad Smith said his biggest concern about the technology was deepfakes.

He called for steps to ensure that people knew when an image or video clip was real and when it was generated by AI.

Generative AI could help publishers by generating large volumes of content, enhancing user engagement through personalized experiences and streamlining operations with automation, Kassatly said.

But he, too, warned publishers of the concerns around “content authenticity and preserving journalistic integrity,” advising them to “ensure AI-generated content is clearly labeled to uphold journalistic standards.”