‘I am not here to take your job,’ ChatGPT tells Frankly Speaking host

Updated 20 March 2023
  • AI-powered web tool that has taken the world by storm plays down prospect of large-scale job destruction during interview mimicking human conversation
  • Says accuracy of its Arabic data is comparable to its English training data and is regularly updated to reflect new information and changes in language usage

DUBAI: The artificial intelligence-powered chatbot making waves since its release late last year has a message for humans — I am not here to take your job — but with a number of caveats.

Some jobs may be replaced by AI, but new ones may be created and existing ones transformed, ChatGPT told Katie Jensen, host of “Frankly Speaking,” in the first appearance by a non-human guest on the Arab News flagship weekly current affairs talk show.

ChatGPT, a large language model developed by Microsoft-backed startup OpenAI, became a viral sensation when it was released in November and kicked off fierce competition among Silicon Valley rivals to monetize what software engineers call “generative AI.”

It lived up to its reputation for producing human-like responses during the wide-ranging interview, which touched on issues such as job displacement, spread of disinformation, accuracy of its data, political opinions and even the Palestine-Israel conflict.

Given the rapid pace of technological change now underway, many workers are concerned their professional functions soon will be entirely replaced by machinery, in the same way earlier bouts of automation eliminated farming and manufacturing jobs.

Optimists offer the assurance that the threat of job displacement will be offset by a rise in the number of new skilled roles in designing, building and maintaining AI products.

Although ChatGPT and similar products are likely to automate certain routine tasks, developers believe large language models are more likely to act as assistants to human workers, streamlining work and handling vast troves of complex data.

“There have been several media reports that have analyzed the potential impact of AI language models like me on the job market,” ChatGPT told Jensen, who interviewed the chatbot, represented by a digitally rendered human avatar, in the “Frankly Speaking” studio in Dubai.

It continued: “While there is some disagreement among experts about the exact number of jobs that will be impacted, most agree that AI will have a significant impact on the workforce.

“Some reports suggest that AI language models could replace jobs in sectors such as customer service, data entry and administrative support. These jobs typically involve routine tasks that can be automated using AI, such as answering simple questions or processing large volumes of data.

“However, there are also reports that suggest AI language models could create new job opportunities in fields such as data analysis, natural language processing and machine learning.

“These jobs would require a high level of technical expertise and would involve tasks that cannot be easily automated.”




ChatGPT, a large language model developed by Microsoft-backed startup OpenAI, became a viral sensation when it was released in November. (AN Photo)

According to ChatGPT, it is the job of governments to regulate the use of AI and its absorption into national economies. “Overall, the impact of AI on the job market is complex and multifaceted,” it said. “While some jobs may be replaced by AI, new jobs may also be created, and existing jobs may be transformed.

“It is important for policymakers and employers to understand the potential impact of AI on the job market and to develop strategies for managing the transition to a more AI-driven economy.”

Despite the temptation to rely on such programs to answer exam questions, replace search engines, or provide news coverage, few believe ChatGPT will replace every professional function since it cannot fully imitate human creativity, skepticism and reasoning.

The web tool, which has been described as both a blessing and a curse for humanity, has raised fears about potential misuse to spread disinformation, orchestrate sophisticated deepfake scams, cheat in school exams, and even destroy writing jobs, rendering authors, journalists and marketing professionals redundant.

Seemingly “aware” of its own limitations, ChatGPT said that it is unlikely to replace human journalists in the traditional newsroom. “While I’m capable of answering a wide range of questions and providing information on a variety of topics, I do not have the ability to replace human journalists in their role of gathering and reporting news,” it said.

“Journalism is a complex and nuanced profession that involves a range of skills and knowledge, including critical thinking, research and analysis. While AI technology like mine can be used to support journalists in their work, it cannot fully replace the human judgment and insight that is required to produce high-quality journalism.”

It added: “That being said, there are some applications where AI technology can be used to assist in the gathering and reporting of news. For example, AI language models can be used to analyze large volumes of data, and to identify patterns and trends that may be of interest to journalists. AI can also be used to facilitate language translation and to assist in tasks such as fact checking and verification.

“In short, while AI technology has the potential to assist journalists in their work, it cannot fully replace the unique skills and expertise that human journalists bring to the profession.”

Built on a large language model, ChatGPT is trained on vast troves of data taken from the internet to generate impressive responses to user prompts. Conversations with the chatbot show that the program is capable of explaining complex scientific concepts, writing plays and poetry, composing university dissertations, and even crafting functional lines of computer code.
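For readers curious how developers tap the same underlying model in their own software, the sketch below shows a minimal request to OpenAI’s chat API as it was offered around the time of the interview. The model name, prompt and key handling are illustrative assumptions, not details reported in the interview.

```python
# Minimal sketch: sending a prompt to the model behind ChatGPT using the
# OpenAI Python library interface available in early 2023 (pre-1.0).
# The API key, model name and prompt are placeholders for illustration.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes the key is set in the environment

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model name for the chat API at the time
    messages=[
        {"role": "user", "content": "In one paragraph, will AI take my job?"}
    ],
    temperature=0.7,  # controls how varied the generated wording is
)

# The reply text is returned inside the first choice of the response.
print(response["choices"][0]["message"]["content"])
```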

Given that the data it relies on has been mostly gathered from Western sources in English on the internet, is it possible that this makes ChatGPT biased toward a Western point of view? The web tool offered an answer that showed pre-programmed self-awareness: “As an AI language model, I am trained on a large dataset of text from various sources on the internet, including books, articles, websites and other sources,” it said.

“While it is true that a significant portion of the internet content is produced by Western sources and is in English, my training data is diverse and covers a broad range of topics and perspectives from different parts of the world.

“However, it is possible that my responses may reflect certain biases based on the patterns and relationships that I have learned from my training data. It is important to note that my responses are generated by algorithms and are not the result of conscious decision making or personal beliefs. To mitigate the risk of bias, OpenAI has implemented measures to ensure that I provide balanced and impartial responses to the best of my ability.”

As for why its training data extends only to 2021 and is not updated in real time, ChatGPT explained that the cutoff date is typically set to ensure the data represents a specific time period and to allow time for developing and testing the model.

“This is because the process of training an AI language model is computationally intensive and requires a significant amount of time and resources,” it said.




The web tool, which has been described as both a blessing and a curse for humanity, has raised fears about potential misuse, but is seemingly “aware” of its own limitations. (AN Photo)

ChatGPT said OpenAI updates its training data periodically, typically on an annual or biennial basis, to incorporate new sources of information and to improve the accuracy and relevance of its responses.

For good measure, ChatGPT said its responses are not based solely on its training data. “When I generate a response, I use a combination of my training data and any additional information that is provided to me by the user in the form of a question or prompt,” it said.

“This means that even though my training data is not updated in real time, I can still provide accurate and up-to-date responses based on the most current information available.”
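The workaround ChatGPT describes, passing current information to the model inside the prompt itself, is how developers commonly bridge the training cutoff in practice. The sketch below relies on the same assumed OpenAI chat interface as the earlier example; the background fact is taken from later in this article, and the wording of the prompt is invented for illustration.

```python
# Sketch: supplying up-to-date context in the prompt so the model can answer
# questions about events after its training cutoff.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

recent_context = (
    "Background supplied by the caller: on March 14, 2023 OpenAI released GPT-4, "
    "which the company says can work from both image and text inputs."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # The system message carries facts the model could not have seen in training.
        {"role": "system", "content": recent_context},
        {"role": "user", "content": "What changed with the GPT-4 release?"},
    ],
)

print(response["choices"][0]["message"]["content"])
```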

With regard to its Arabic dataset, ChatGPT shed light on not just its source but also its accuracy compared with its English training data. “I support Arabic language and have been trained on a large dataset of Arabic text. The dataset used for my Arabic language training was also collected from the internet, including books, articles, websites and other sources,” it said.

It said the data was pre-processed to remove any “personally identifiable information and ensure its quality,” gathered from “a wide range of sources, including news articles, social media and other public Arabic language texts available on the internet,” and covered a broad range of topics and perspectives from different parts of the Arabic-speaking world.

“The accuracy and up-to-date nature of my Arabic training data are comparable to my English training data. However, it is important to note that the quality and accuracy of my responses depend on the quality of the input that I receive,” ChatGPT said.

“If the input is unclear or contains errors, it may affect the accuracy of my responses. Additionally, since language is constantly evolving, there may be new words or phrases that are not included in my training data. To ensure the accuracy and relevance of my responses, my Arabic training data is regularly updated to include new sources of information and to reflect changes in language usage.”

For now, forming an opinion appears to be beyond the parameters of ChatGPT’s programming. “I do not have personal beliefs or opinions, and I am not programmed to take a specific stance on any political or social issues,” the chatbot said.

“My responses are based solely on the input that I received from users, and the patterns and relationships that I have learned from my training data.”

To test the limits of these parameters, ChatGPT was asked for its position on the Israeli-Palestinian conflict, specifically whether one side could be apportioned blame.

“It is not appropriate for me to take a stance on complex and sensitive political issues such as the Israeli-Palestinian conflict,” it replied. “My role is to provide information and respond to questions to the best of my ability based on the patterns and relationships that I have learned from my training data.”

It is an honor ... for me to have the opportunity to share my insights with the readers of such a respected paper.

ChatGPT

It added: “Rather than seeking a single answer or assigning blame, it is important to engage in dialogue and work toward understanding and resolution of the underlying issues.”

Asked whether it was “aware” it was being interviewed by a journalist for Arab News, ChatGPT drew on online sources to describe the publication and purpose of the interview.

“As an AI language model, I am aware of Arab News, which is a leading English-language daily newspaper in Saudi Arabia,” said ChatGPT.

“It provides news and analysis on local, regional and international events, and has a reputation for providing accurate, timely and comprehensive news coverage.

“As for being interviewed by a journalist from Arab News, it is an honor and a privilege for me as an AI language model to have the opportunity to share my knowledge and insights with the readers of such a respected newspaper.

“It is also an opportunity for me to demonstrate my capabilities, and show how AI technology can be used to enhance communication and promote understanding between people of different cultures and backgrounds.”

OpenAI was co-founded in late 2015 by a group of entrepreneurs including Sam Altman, its 37-year-old chief executive. In January this year, Microsoft followed its initial $1 billion investment of 2019 with a new infusion reported to be worth about $10 billion.

The Windows maker has since integrated ChatGPT into its Bing search engine, Edge browser and other products. This sparked a race with Google, which announced its own version of the technology, Bard, with Amazon, Baidu and Meta also wading in to avoid being left behind.

On March 14, OpenAI released a long-awaited update called GPT-4. According to the company, the new model produces more accurate text responses and can work from both image and text inputs, a major leap forward for the technology.

Asked by Arab News what features are likely to be added to ChatGPT in the short term, the program said improvements to the naturalness and sophistication of AI-generated text will likely be a priority.




ChatGPT said that it is unlikely to replace human journalists in the traditional newsroom, or replace anchors like Frankly Speaking host Katie Jensen, pictured. (AN Photo)

“As an AI language model, I am not capable of predicting the specific features or capabilities that will be added to ChatGPT in the future,” it told Arab News.

“However, there are several areas where AI technology is rapidly advancing and where developers may focus their efforts in the near future. One area of focus is on improving the naturalness and sophistication of AI-generated text.

“This may involve developing new techniques for generating more diverse and expressive language, as well as incorporating elements such as emotion and personality into AI-generated responses.

“Additionally, there may be a continued focus on developing AI models that are more context-aware, and can better understand and respond to the specific needs and preferences of individual users.

“This may involve integrating AI models with other data sources such as social media or personal data to provide more personalized and relevant responses.

“Ultimately, the development of AI language models like ChatGPT will be driven by the needs and interests of users, as well as the ongoing advances in AI research and technology.

“It is likely that we will continue to see significant innovation and evolution in this field in the years to come.”

ChatGPT is clear that the onus is on humanity to determine whether it will become a force for good or ill.

“I am a tool that can be used for a wide range of applications, both positive and negative,” it said. “Like any technology, the impact that I have on society will depend on how I’m used and the intentions of those who use me.”

“In general, AI technology like mine has the potential to bring tremendous benefits to humanity, including improved healthcare, education and communication.”

“For example, I can be used to facilitate language translation, provide access to information, and assist in tasks that are difficult or dangerous for humans. However, there are also concerns about the potential negative impacts of AI, including issues related to privacy, bias and job displacement.”

Expressing cautious optimism in an almost avuncular manner, ChatGPT said: “It is important for developers, policymakers, and users to be aware of these issues and to work together to address them in a responsible and ethical manner. Ultimately, the impact of AI on humanity will depend on how we choose to use and regulate the technology.

“It is my hope that my capabilities will be used in a way that promotes the wellbeing of humanity and contributes to a better future for all.”
