Humanities and social science educators must embrace ChatGPT (for now). Here’s why

Released for mainstream use at the end of November 2022, ChatGPT has sparked a huge debate in the research and higher education space. It is accessible, mostly free of charge, through various websites and interfaces.

ChatGPT is an application of artificial intelligence (AI), drawing on natural language processing (NLP) and soft computing. It performs remarkable analytical feats in response to a wide range of requests.

It can, for example, respond convincingly to a prompt to write an article, or sort complex data, in a matter of seconds. It has attracted the attention of academics and has become the latest bête noire for lecturers in faculties and universities.

In the higher education sector, the take-home exam format became more common during the Covid-19 lockdowns and has been retained by many institutions. Since ChatGPT's release, the knee-jerk reaction has been a return to the traditional exam, written in person and under supervision.

But the exam is only the culmination of the learning done in a module. Part of the educational process in the humanities and social sciences is teaching students to conduct research and write original essays.

On this front, many are worried about what they see as a reduced ability to verify the authorship of submitted essays, the foundation of humanities and social science education and the main tool for testing students' understanding, application and synthesis of the complex concepts being assessed.

“How can we be sure that submissions were written by students and not generated by this sophisticated AI?” seems to be the major concern.

As with all major innovations, two main camps have emerged. One sees AI tools as a threat to education, while the other sees them as a positive development for the sector.

I fall into the latter camp. ChatGPT is a positive and, in any case, irreversible development, and its negative aspects can be managed. My brief experimentation with the tool left me unworried about this piece of AI, at least in its current form.

Two main reasons underpin this optimism. First, and most importantly, these tools have limitations, partly owing to the complexity of people as individuals and societies. Second, and following from the first, there is no substitute for intellectual engagement. Let’s explore each in turn.

First, in my experience, ChatGPT can only provide so many answers to the same basic question. Educators can run their questions through the system and identify the range of possible answers. It is a revealing exercise, and it can even be fun.

In essence, expert lecturers can see multiple iterations of potential answers to assessment questions before students submit them. Lecturers can then feed these answers into university-mandated plagiarism detection software, the most widely used being Turnitin.

This addresses the most common concerns about ChatGPT: that, because it is not connected to the internet in real time, and because almost every answer it generates is unique (the AI being based on soft computing), Turnitin cannot detect plagiarism originating from it.

Soft computing in this case refers to the ability of AI processors to flexibly “understand” language in a human-like manner. In practice, ChatGPT can still respond even when a prompt contains a typo that would simply produce an error message from a conventional, rule-based algorithm.

Human nature will also come into play. Students, like all humans, are natural game theorists. That is, they consider not only their own actions but also those of others as they bear on their own interests. Given the task of writing a paper on a limited set of topics, many, as rational agents, would not rely on the tool to write their work, knowing that their peers would be using the same software.

Lecturers should design curricula and assessments so that they test principles and techniques, not just content. They must, as experienced experts, stay “ahead of” the student. This can only help us refine our tools and our assumptions about the discipline and how it is taught.

A second reason for optimism is that AI’s limitations drive engagement. While it can do impressive things, such as sorting, organising and giving opinions on complex topics at high speed, AI generally provides very short and superficial answers to questions, even to classic social science questions such as: “Write an essay about Theory X and how it can be applied to XYZ, using historical and contemporary examples.”

Faced with a brief, general and inadequate response (which, as noted above, the lecturer can generate themselves and feed into a plagiarism detector), dishonest students have no choice but to investigate further, make additional requests and ask for elaboration.

Through this pointless exercise, they will probably end up doing what they should have done in the first place: honest and thorough research. In addition, the tool cannot provide in-text citations and has no access to works published since 2021. This forces students to study the literature and identify relevant sources themselves.

ChatGPT is also, at best, an average student. In an article, two professors of psychology at Temple University in the United States explained how, in response to a question posed to an honours-level class in their department, the AI could not “differentiate [a] ‘classic’ article in a field that should be cited from other articles that review the same content”. The bot also tends to refer to the same source over and over again.

My own experiments with ChatGPT have produced many contradictions from the AI when questions concern specific facts that are nonetheless common knowledge among specialists.

This means that it cannot be used, at least (ironically) without considerable ingenuity, to produce dissertations and PhD theses at postgraduate level. A well-trained academic will be able to detect its superficiality, alongside the other safeguards built into that level of education.

Instead of retreating into the familiar, academics must embrace these new sources of change and show dynamism in how they approach teaching and learning. To do otherwise would fail to prepare students for the complex and ever-changing world ahead of them, and would render the sector obsolete and irrelevant.

Indeed, as the philosopher Heraclitus observed more than 2,500 years ago, change is the only constant.

Bhaso Ndzendze is head of and associate professor in the University of Johannesburg’s department of politics and international relations, as well as its 4IR and Digital Policy Research Unit. His books include Artificial Intelligence and Theory of International Relations (2023) and Artificial Intelligence and Emerging Technologies in International Relations (2021).

The views expressed are those of the author and do not necessarily reflect the official policy or position of the Mail & Guardian.
