Chat GPT: Introduction and Critical Appraisal

The author dwells upon the academic, social, legal and ethical concerns of Chat GPT, opines that AI does have the potential to dismantle the very foundation of human civilization, and provides suggestions to regulate, or should we say tame, this technology.

Written by

Dr. Mohammed Rizwan

November 2022 will be remembered as a historic moment for the field of natural language processing. That month saw the launch of Chat GPT (Generative Pre-trained Transformer), built on what are known in scientific circles as large language models (LLMs). Chat GPT has the potential to transform the way we write, rewrite, edit and publish. It is capable of writing essays, finding errors in programs, writing poems in Shakespearean style, and mimicking Russell and Darwin. Alongside this, Chat GPT and other generative apps may immensely help people who have difficulty communicating or face language barriers, such as those with speech disorders or hearing impairments, or those who speak different languages. It can be used to develop voice assistants, translation tools and speech recognition software, making it easier for people to communicate and access information.

Chat GPT is an AI-based natural language processing system created by OpenAI, a research organisation which claims to be “dedicated to developing safe AI for the benefit of humanity”. Essentially, GPT is an advanced machine learning model designed to process and generate human-like text responses when given prompts.
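
To make the prompt-and-response loop concrete, the following is a minimal sketch of calling a GPT-style chat model from code. It assumes the openai Python package (v1.x) and an API key in the OPENAI_API_KEY environment variable; the model name used here is illustrative, and interfaces change over time, so treat it as a sketch rather than a definitive recipe.

```python
# Minimal sketch: send a prompt to a GPT-style chat model and print the reply.
# Assumes the `openai` Python package (v1.x) and OPENAI_API_KEY in the environment;
# the model name below is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; available model names change over time
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise the academic concerns around generative AI in two sentences."},
    ],
)

print(response.choices[0].message.content)
```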

The origin of Chat GPT dates back to 2018, when OpenAI released the first version of GPT. It was trained on a massive amount of text data from the internet and was capable of generating coherent text from input prompts or questions. Since then, the model has undergone several improvements and has been pre-trained on increasingly larger datasets, a process that continues to date.

GPT-4, the latest version of the model, goes a step further still and has been trained on an incredibly large dataset, making it the most advanced natural language processing model available to us.

Earlier GPT models generated text that could be distinguished by repeated words or the typical patterns it followed, but it is claimed that GPT-3 can generate highly sophisticated and coherent text responses that are often indistinguishable from those written by humans.

A brief evolutionary history of Chat GPT, a culmination of Natural Language Processing (NLP) (not to be confused with Neuro-Linguistic Programming, an approach used to enhance the effectiveness of communication and facilitate learning and personal development), will help readers contextualise how NLP has reached where it is today.

Natural Language Processing (NLP) is a field of computer science and linguistics concerned with the interactions between computers and human languages. NLP is a relatively new field, and its development has been driven by advances in computer technology, artificial intelligence, and linguistics.

The origins of NLP can be traced back to the early 1950s, when researchers began exploring the idea of using computers to translate natural language. The field took off in the 1960s with the advent of computers capable of processing large amounts of data and the development of the first formal theories of computational linguistics. During this time, researchers began to develop algorithms and models for processing natural language text.

The 1970s and 1980s saw crucial steps, as NLP researchers began to focus on rule-based systems that used formal grammars to parse and understand natural language text. These systems were limited by the complexity of natural language and the difficulty of encoding all of the relevant rules and exceptions.

The late 1980s brought reasonable success, though not a breakthrough. In the 1990s and 2000s, researchers shifted their focus to statistical models for NLP, which rely on large amounts of data to learn patterns and relationships between words and phrases. This approach led to significant advances in tasks such as machine translation, sentiment analysis, and text classification.

More recently, deep learning techniques such as neural networks have revolutionised the field of NLP, enabling researchers to develop more complex models capable of interpreting and generating natural language text. This has led to significant advances in areas such as language modelling, machine translation, and text summarisation. Chat GPT is just one application of these advances, albeit a powerful one that will change the way we look at writing and editing and, of course, the way we define and appreciate creativity.
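
For readers unfamiliar with the term, “language modelling” simply means predicting the next word from the words that precede it. The toy word-bigram counter below is a deliberately simplistic, hypothetical illustration of that idea (it is not how Chat GPT works internally); deep neural networks replace the counting with learned representations, but the underlying next-word task is the same.

```python
# Toy word-bigram "language model": count which word tends to follow which,
# then predict the most frequent successor. Purely illustrative.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the toy corpus."""
    if word not in successors:
        return "<unknown>"
    return successors[word].most_common(1)[0][0]

print(predict_next("sat"))  # -> 'on'
print(predict_next("the"))  # -> 'cat' ('cat' and 'mat' tie; insertion order decides)
```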

There are already concerns about the usage of Chat GPT; for the ease of the reader, they may be grouped as follows:

  1. Academic concerns of using Chat GPT in academic and research writing
  2. Social concerns of using Chat GPT
  3. Legal concerns of using Chat GPT
  4. Ethical concerns of using Chat GPT

ACADEMIC CONCERNS

Plagiarism: One of the biggest concerns related to using Chat GPT in academic writing is plagiarism. (Plagiarism is the act of using someone else’s work or ideas without giving them proper credit. It is essentially a kind of intellectual theft, as the original author’s work is being presented as one’s own).

The problem with a generative app is that it cannot provide citations for the concepts and sentences it generates. Very recently, one of the most prestigious journals denied authorship to Chat GPT (1) and laid down rules for using Chat GPT (2). It stated: “First, no LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.

“Second, researchers using LLM tools should document this use in the methods or acknowledgements sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM.”

The fundamental issue in academic ethics is that the author should own responsibility for his writings, experiments, results and inferences. When it comes to generative apps like Chat GPT, or for that matter any algorithm, the question arises of where the responsibility lies: with the user, the producer, or the app itself?

Lack of originality: Academic writing is supposed to reflect the original ideas and thoughts of the writer. However, if Chat GPT is used to generate text, it can compromise the originality of the work and may not represent the true intellectual ability of the writer.

As the famous linguist Naomi S. Baron, Professor of Linguistics Emerita at American University, puts it: “A high school student in Britain echoed concern about individual writing style when describing Grammarly: ‘Grammarly can remove students’ artistic voice. … Rather than using their own unique style when writing, Grammarly can strip that away from students by suggesting several changes to their work.’”

Inaccuracy: While Chat GPT is highly advanced, it is not infallible, and there is always the risk of inaccuracies in the generated text. If students or researchers rely solely on Chat GPT to generate academic writing, they may inadvertently include inaccurate or misleading information. Academic writing demands accuracy when it comes to presentation of one’s thoughts or inferences as they have consequential value.

Misrepresentation: Chat GPT is designed to generate human-like text, but it is still a machine, and it cannot replicate the depth and nuance of human thought and emotion. If the generated text is presented as the work of a human writer, it can be considered a misrepresentation.

Evan Selinger, a philosopher interested in ethics and technology, worries that predictive text reduces the power of writing as a form of mental activity and personal expression, and hence of representation. In his words:

“By encouraging us not to think too deeply about our words, predictive technology [such as Chat GPT and Grammarly] may subtly change how we interact with each other,” Selinger wrote. “We give others more algorithm and less of ourselves. … Automation … can stop us thinking.”

Unfair advantage: If some students or researchers have access to Chat GPT while others do not, it creates an unfair advantage for those who have access to the technology. This can further widen disparities in academic performance and opportunities.

Consider the scenario of internet availability and usage.

The International Telecommunication Union (ITU) estimates that approximately 5.3 billion people, or 66 per cent of the world’s population, were using the Internet in 2022. (5) This represents an increase of 24 per cent since 2019, with 1.1 billion people estimated to have come online during that period. However, it leaves 2.7 billion people still offline. In the developed world, internet access is far more widespread, with over 80% of people online. In contrast, only 28.7% of people in the underdeveloped world have internet access.
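
As a quick sanity check, the ITU figures quoted above are internally consistent; the short calculation below, using the rounded numbers from the text, recovers the implied world population and offline count.

```python
# Quick consistency check of the ITU figures quoted above,
# using the rounded numbers given in the text.
online = 5.3e9        # people estimated to be online in 2022
share_online = 0.66   # 66 per cent of the world's population

world_population = online / share_online   # ~8.0 billion
offline = world_population - online        # ~2.7 billion

print(f"Implied world population: {world_population / 1e9:.1f} billion")
print(f"Implied offline population: {offline / 1e9:.1f} billion")
```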

So, almost 70 per cent of the underdeveloped world lacks the basic access needed to benefit from AI, to say nothing of the added cost of sophisticated devices and the charges that come with them. The picture becomes even grimmer when it comes to gender disparities in mobile internet access, which is often the only way for people in underdeveloped areas to get online.

According to GSMA, a mobile industry trade group, the mobile internet gender gap remains at 20%, meaning that 300 million fewer women than men globally access the internet via mobile. No mobile means, in most cases, no AI and no Chat GPT.

Intellectual property rights: The text generated by Chat GPT can potentially infringe on the intellectual property rights of others. For example, if Chat GPT is used to generate content that includes copyrighted material without proper attribution, it can lead to legal action being taken against the user.

Liability: If Chat GPT is used to generate text that contains errors, inaccuracies or misinformation, it can create legal liability for the user. For example, if a researcher, student or social scientist uses Chat GPT to generate a paper that contains false information and publishes it, who will be held liable?

 

SOCIAL CONCERNS

These were all academic concerns, but academia doesn’t work in isolation; it’s part of a larger social structure, so it’s imperative to look at the social concerns of Chat GPT.

Reinforcing Biases: As stated earlier, Chat GPT is trained on large datasets of existing text spread across the internet, which can contain biases and stereotypes. If these biases are not addressed, pinpointed, filtered and flagged, Chat GPT can reinforce and perpetuate them in the generated text. This can contribute to societal inequalities and discrimination.

Threat to Jobs: Chat GPT and other AI technologies have the potential to automate certain tasks traditionally performed by humans, which can lead to job losses and economic disruption. Why would someone hire a translator when generative tools at times surpass human translation? Why would one need an editor for a journal or magazine when a fresh graduate can prompt GPT to do the editing?

Why would proofreaders be required when a click can generate a seemingly flawless proofread? Why would a quality-control engineer be needed when code can be corrected by an NLP system? Why would coding personnel be needed when GPT can write apparently foolproof code? The examples are myriad, and the potential for disruption in the job market is enormous. So, who will take responsibility for this displacement?

The threat-to-jobs paradigm is, however, challenged by the “rejectionist view” argument, which holds that this happens with every new technology: whenever a new technology arises, it is initially rejected by a section of society. When farming tractors were invented, for example, people cried foul that they would snatch away traditional farm work such as ploughing, yet gradually we arrived at modern farming. The reality lies in maintaining an equilibrium between the all-accepting and all-rejecting views. An excessively technocratic world is far scarier than a less technocratic one. Moreover, Chat GPT is just one application of AI, and AI does have the potential to dismantle the very foundation of human civilization!

Misinformation: The text generated by Chat GPT can potentially contain false or misleading information. If this information is shared or disseminated, it can contribute to the spread of misinformation and conspiracy theories, which can have negative social consequences. Misinformation, disinformation and malinformation ecosystems are the arsenals of propaganda wars, and the world has already seen their power during the pandemic, during elections, and even in PR campaigns by powerful leaders across the world.

Misuse: Chat GPT can be used to generate text that is harmful, unethical or illegal. For example, it can be used to generate spam or phishing emails, spread false information, or even create deepfake videos.

Impact on Communication: Chat GPT and other generative apps using AI technologies have the potential to change the way we communicate with each other. If people become overly reliant on these technologies, it can lead to a loss of human-to-human interaction and a decline in social skills.

Transparency: The text generated by Chat GPT is often difficult to attribute to a specific source, which can make it challenging to determine the accuracy and reliability of the generated text. This lack of transparency contributes to concerns about accountability and trust and, in the case of academia, integrity.

The social concerns are ultimately woven together with legal concerns; they can’t be seen in isolation. The following are the legal concerns that arise from the use of Chat GPT.

 

LEGAL CONCERNS

Copyright Infringement: Chat GPT reflects its training on vast amounts of text data, much of which is copyrighted material. Using Chat GPT to generate content that infringes on someone else’s copyright is therefore a genuine legal concern.

Privacy Concerns: Depending on how Chat GPT is used, it may collect or store personal information from users. The mind-boggling speed at which data is collected by virtually every internet-connected device has already left privacy a distant dream. Now that GPT is hungrier still for data, it is important to ensure that any data collected by Chat GPT is handled in compliance with relevant privacy laws, but will that be the case?

Liability: Liability is another concern. Will Chat GPT fall under tort law? If yes, how? If not, why not? Depending on the context in which Chat GPT is used, there may be liability concerns. For example, if Chat GPT is used to generate legal or financial advice which, when acted upon, leads to damage, will the liability be absolved by a single disclaimer? Will the politics of disclaimers enlarge the damage or limit it?

The legal concerns mentioned above open the door to ethical concerns.

ETHICAL CONCERNS

Bias: Bias has both social and ethical dimensions. As discussed before, Chat GPT’s training data and algorithms may introduce bias into its responses, which can perpetuate stereotypes and discriminatory attitudes towards certain groups of people; this is essentially a violation of basic ethical norms recognised by a vast majority of countries. Such bias can go a long way. Content generation for propaganda wars becomes far easier, making scapegoats with generative apps is a click away, and combined with deepfakes the resulting bias may be unprecedented. Setting narratives against a particular community or group may become far easier for the screen warrior.

Privacy: One aspect of privacy has a legal connotation while another has a social connotation. Chat GPT may collect and store sensitive information about its users, which can lead to privacy violations if this data falls into the wrong hands.

Responsibility: As an AI model, Chat GPT does not have moral agency or accountability. It cannot be held responsible for its actions, which can lead to situations where the model is (mis)used to harm individuals or groups.

Some of these concerns have been addressed very recently, although not all of them in depth. In the following section, all of these concerns are taken up alongside the conventional redressal narrative, with suggestions.

 

ADDRESSAL OF ACADEMIC CONCERNS

Plagiarism can be Prevented by Proper Attribution: To address concerns related to plagiarism, it is important to ensure that any text generated by Chat GPT is properly attributed. But the question remains: how? We suggest the following mechanisms:

  1. Chat GPT should evolve a mechanism that gives the user information about at least the major sources from which it is generating text or data. Although this may sound a mammoth task, it is not impossible.
  2. There should be a filter in GPT asking the user whether it is being used for academic purposes or purely for information. If it is for academic purposes, the output may be flagged in a manner that reflects, in the text itself, that it was produced using Chat GPT.
  3. Watermarks could be applied to text and material generated by Chat GPT (a toy sketch of how such a statistical watermark check might work follows this list).
  4. Chat GPT itself should circulate a general identification pattern for text generated by it, which may help reviewers and peers identify plagiarism.
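
As a rough illustration of what the watermarking suggestion in point 3 could mean in practice, here is a toy, hypothetical sketch of a statistical “green list” style detector of the kind proposed in the research literature. No such scheme is publicly confirmed for Chat GPT; the hash-based word split, the 0.5 chance baseline and the function names are assumptions made purely for this example.

```python
# Toy illustration of a statistical text-watermark check (hypothetical; not an
# actual Chat GPT mechanism). A watermark-aware generator that preferred "green"
# words would push the z-score well above zero; ordinary text stays near chance.
import hashlib
import math

def is_green(prev_word: str, word: str) -> bool:
    """Pseudo-randomly assign roughly half of all word pairs to a 'green list'."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).hexdigest()
    return int(digest, 16) % 2 == 0

def green_fraction_zscore(text: str) -> float:
    """z-score of the observed green fraction against the 0.5 chance baseline."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    greens = sum(is_green(a, b) for a, b in pairs)
    n = len(pairs)
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)

sample = "This essay was drafted with the help of a generative language model."
print(f"z-score: {green_fraction_zscore(sample):.2f}")  # near 0 for unwatermarked text
```

Real proposals are far more careful about vocabulary partitioning, text length and false-positive rates, but the sketch conveys why a statistically detectable signature would help reviewers in the way point 3 imagines.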

Academic integrity: To address concerns related to academic integrity, it is important to ensure that the use of Chat GPT is in line with academic standards and expectations.

The whole predatory publication world thrives on this very notion of publish or perish, and generative apps like Chat GPT will only reinforce the cycle. So, the overall ecosystem of academia, including research, needs rectification to prevent further erosion of academic integrity.

Representation and Educational Value: To address concerns related to the original representation of an author’s ideas or content and the educational value of using Chat GPT, it is important to ensure that the use of the technology enhances rather than deteriorates learning capacity. This may involve using Chat GPT in conjunction with other teaching methods to supplement rather than replace traditional writing assignments. But which mechanisms should be invoked requires serious work that still remains to be done.

Addressing Social Concerns: Alongside the redressal of academic concerns, there are suggestions to address the social concerns as well. It should be kept in mind that social concerns have a deeper and wider impact on society, and a much higher consequential value, than the other realms of concern, be they academic, legal or ethical.

Addressing Biases: To address concerns related to bias and fairness, it is important to ensure that the data used to train Chat GPT is diverse and representative of regional populations and their narratives and discourses, whether religious, racial or otherwise. Additionally, researchers and developers should work to identify and mitigate any biases in the generated text.

Education and Awareness: To address concerns related to misinformation and communication, it is important to educate users about the limitations and potential biases of Chat GPT. Users should be encouraged to critically evaluate the text generated by the app and to be mindful of its potential impact on communication and social interaction. Mere disclaimers will not serve the purpose; an effective mechanism with real delivery potential should be put in place for education and awareness.

Regulation and Oversight: To address concerns related to privacy and responsibility, it is important for policymakers and institutions to develop clear guidelines and regulations for the use of Chat GPT and other AI technologies. Additionally, there should be oversight and accountability mechanisms in place to ensure that the use of Chat GPT is responsible and within the larger frame of accepted ethical guidelines.

Redressal of Legal Concerns: On similar lines, legal redressal can be proposed. First and foremost among the proposals is:

Compliance with Existing Laws: Users of Chat GPT should ensure that their use of the technology complies with existing laws and regulations related to data privacy, intellectual property and other relevant areas. But the legal framework in developing countries is already so vague with regard to data privacy that it will be nearly impossible to regulate the prospective negative implications of generative app usage. An insensitive judiciary coupled with a weak executive makes it easy to make inroads into private data and enables its probable misuse.

So, a strong political will along with a sensitive, aware, and well-informed restructuring of cyber laws should be the priority at this end.

Secondly, the whole ecosystem of judicial processes related to the cyber world needs an overhaul.

Regulation and Oversight: Policymakers and institutions should develop clear regulations and oversight mechanisms for the use of Chat GPT and other AI technologies. This can help to ensure that the use of Chat GPT is in line with legal requirements and that any potential legal risks are identified and addressed.

International Considerations: The use of Chat GPT raises questions about jurisdiction and international law. To address this, it is important to establish clear guidelines and regulations that take into account the international nature of the technology and its potential impact on global society.

Redressal of Ethical Concerns: Lastly, how can the ethical concerns about the use of Chat GPT be addressed?

Ethical concerns broadly fall into two categories: transparency and human oversight.

Transparency: To address concerns related to transparency, it is important to ensure that users are aware of the limitations of Chat GPT and its potential biases. Additionally, it is important to be transparent about the data used to train Chat GPT and the algorithms used to generate text.

But despite awareness of the limitations of generative apps, users still cannot resist the urge to ask questions the app is not meant to answer. Users cannot accept the fact that these apps are not meant to replace a human advisor, family friend or relative. They may advise the user on which colour to wear according to the season and the weather, but they are not designed to advise you on whether to divorce your wife for a new girlfriend you met on a dating site.

So, what should be done to prevent such absurdity is a million-dollar question.

Human Oversight: The ethical angle of human oversight is even more important for addressing concerns related to the potential misuse of Chat GPT. It is important to ensure that human oversight is in place to monitor and evaluate the text generated by Chat GPT. This can help ensure that the generated text is appropriate and does not cause harm.

 

PARTING WORDS

Although generative apps are new, considerable intellectual activism has already gone into them, partly because Chat GPT is just one application of AI and foundations for ethical and legal frameworks for AI already exist. Despite this, several questions remain. Are we lagging behind in choosing between binaries when it comes to technology, such as which technologies should be evolved and which should not? Do we have a concrete epistemological foundation on which to build rules for deciding on technologies? Are we seeking technologies with a purpose, or are evolved technologies finding new purposes? Do secular humanistic values ensure a grounded foundation for developing philosophical narratives about technologies? Is our technology need-driven, or is it driven by a passion for conquest? And if the latter, what exactly are we aiming to conquer: nature, the law, or nothing at all?

What exactly are we aiming at? A technology whose rate of development is fast enough to challenge the very foundational essence of being human, or a technology which simply aims to bring ease and comfort?

Some of the above questions have been answered; some are partly answered; some have no answers as yet. But one thing is clear: we need creative narratives to address the realm of technology, because the so-called value neutrality of technology will not take human civilization further; as we know, there is nothing in this world which is essentially a value-neutral entity.

[Mohammed Rizwan holds a PhD in Biochemistry and Molecular Biology. He works on the intersectionality of religion, science and technology, and society. He is particularly interested in understanding how disruptive technologies, by and large, shape society. He is Deputy Director of the Centre for Study and Research, New Delhi, an independent research centre and think tank, and chairs its Council for Islamic Perspectives on Disruptive Technologies. He is also a guest faculty member at the Department of Biotechnology, Jamia Millia Islamia, New Delhi, and a casual academic at the Department of Molecular Biology and Genetic Engineering, Nagpur University, Nagpur.]