Artificial Intelligence: Challenges and Opportunities

Welcome to a Brave New World

Generative artificial intelligence (AI) is a relatively new technology that is developing quickly. Like the internet itself, AI tools such as ChatGPT and Google's Bard present both challenges and opportunities for finding and using information. Indeed, they represent a new way of interacting with information.

This guide is intended to help you thoughtfully consider the role of artificial intelligence and how it intersects with your academic work at CGI.

Understanding AI

Tools like ChatGPT, Bard, Copilot, and the dozens more that will follow are natural language processing tools driven by AI technology. They allow you to have human-like conversations with a chatbot and much more: the underlying language model can answer questions and assist you with tasks like composing emails, essays, and code.

ChatGPT, for example, uses a style we're all familiar with -- a single-line text entry field that returns text results. Its power is the ability to parse queries and produce fully fleshed-out answers drawn from most of the world's digitally accessible, text-based information -- at least the information that existed before its training cutoff in 2021. It uses natural language processing (NLP) to understand, interpret, and generate human language.

Google interacts with users in a similar fashion -- you type in a query and it returns a list of web pages and articles that will (fingers crossed) provide information related to the query. Its power is the ability to perform enormous database lookups and return a series of matches.

For the mathematicians out there, Wolfram Alpha is similar, but it provides answers related to mathematics and data analysis. Its power is the ability to parse data-related questions and perform calculations based on them.

An important consideration for ChatGPT and other natural language processing AIs is that their job is to generate language: to take what they've been fed and make sense of it.

And making sense of something does not always equate to making factual statements. 
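
To see concretely what "generating language" means, consider the toy sketch below. It is emphatically not how ChatGPT works internally (real systems use large neural networks trained on enormous amounts of text); it simply shows how a program can produce fluent-sounding sentences purely from statistics about which words tend to follow which -- with no notion of whether the result is true. The tiny training text and every name in it are invented for illustration.

    # Toy illustration: generating plausible-sounding text from word-pair
    # (bigram) statistics alone. The program tracks which words tend to
    # follow which -- it has no concept of whether a statement is true.
    import random
    from collections import defaultdict

    training_text = (
        "the library offers research help . the library offers citation help . "
        "chatgpt offers plausible answers . chatgpt offers confident answers ."
    ).split()

    # Record which words follow each word in the training text.
    followers = defaultdict(list)
    for current_word, next_word in zip(training_text, training_text[1:]):
        followers[current_word].append(next_word)

    # Generate a "sentence" by repeatedly sampling a likely next word.
    word = "chatgpt"
    output = [word]
    while word != "." and len(output) < 12:
        word = random.choice(followers[word])
        output.append(word)
    print(" ".join(output))

Run it a few times and it may print something like "chatgpt offers research help ." -- a perfectly fluent sentence that appears nowhere in the training text and was never checked against reality. That, in miniature, is the gap between generating language and stating facts.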

AI and the Truthiness of Its Citations

When ChatGPT answers a question, it will not immediately provide a reference for where the information came from. That is because it is assembling predictive language drawn from a wide variety of places, so the information usually doesn't come from a single source. As a result, you typically cannot trace a response back to a single parent source or know where the information came from.

I personally have had luck structuring queries that ask ChatGPT to provide "three APA-formatted references after 2018" to support the ideas presented in its response. However, the validity of those references has varied.

In one case, it provided the three references as requested. When I checked, each led to a real article by real authors in a real journal.

In another case, the references were not real. The articles could not be verified, and the authors either did not exist or had never written such an article. That tells us that while ChatGPT can provide references, in some cases it's just using language to construct something that plausibly meets the parameters you've given it. It's not giving you factual citations. It's hallucinating, which in the world of AI is a term for "a confident response by an AI that does not seem to be justified by its training data" (Hallucination (artificial intelligence), 2023).

Here's an example.

"Furthermore, institutionalization can also result in social isolation and a loss of connection to the outside world. Many nursing home residents may be separated from their families, friends, and communities, leading to social withdrawal and loneliness (Cohen-Mansfield & Jensen, 2018). The lack of meaningful social interaction and engagement can have detrimental effects on mental health, cognitive function, and overall quality of life for elderly individuals."

Cohen-Mansfield, J., & Jensen, B. (2018). Person-centered care for nursing home residents: The culture-change movement. Health Affairs, 37(3), 372-379.

Cohen-Mansfield and Jensen do exist and have published widely together on care for the elderly. Health Affairs is a real journal. However, Cohen-Mansfield and Jensen have never published this specific article, and they have never published in Health Affairs.

So How Do You Know?

As the student, it is your responsibility to ensure that the information you present in your scholarly work is factual. Checking the credibility of your sources is nothing new; you've probably been taught the CRAP test or similar strategies for evaluating source reliability since middle school.

So evaluating any information you intend to use in your work is always recommended, and that is especially true for anything produced by ChatGPT.

Here are two strategies for evaluating ChatGPT:

1. Lateral Reading

Don't take what ChatGPT tells you at face value. Check whether other reliable sources contain the same information and can confirm what ChatGPT says. This could be as simple as searching for a Wikipedia entry on the topic or doing a Google search to see whether a person ChatGPT mentions actually exists. Consulting multiple sources is the essence of lateral reading, and it helps you avoid the bias of relying on a single source.

Watch Crash Course's "Check Yourself with Lateral Reading" video (14 min) to learn more.

2. Verify Citations

If ChatGPT provides a reference, confirm that the source exists. Try copying the citation into a search tool like Google Scholar or the CORE Library's OneSearch box. Do a Google or PubMed search for the lead author. If the source is a journal article, go to the journal's website and search for the author there.

Second, if the source is real, check that it contains what ChatGPT says it does. Read the source or its abstract.
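
If you're comfortable with a bit of code, the first step -- checking whether a citation exists at all -- can even be automated. The short Python sketch below is one possible approach, not an official library tool: it sends the citation to Crossref's public API (a free index of scholarly publications) and prints the closest matches so you can compare titles and authors yourself. The helper name citation_exists and the number of results shown are my own choices for illustration, and a match still needs human judgment -- Crossref returns the closest items it has, not a verdict.

    # A minimal sketch for checking whether a citation appears in Crossref's
    # public index of scholarly works. Requires the "requests" package.
    import requests

    def citation_exists(citation: str, max_results: int = 5) -> bool:
        """Query the Crossref API for a bibliographic string and print the
        closest matches so a human can compare titles and authors."""
        response = requests.get(
            "https://api.crossref.org/works",
            params={"query.bibliographic": citation, "rows": max_results},
            timeout=10,
        )
        response.raise_for_status()
        items = response.json()["message"]["items"]
        for item in items:
            title = (item.get("title") or ["(no title)"])[0]
            year = item.get("issued", {}).get("date-parts", [[None]])[0][0]
            print(f"- {title} ({year}) DOI: {item.get('DOI')}")
        return bool(items)

    # Example: the fabricated reference shown earlier in this guide.
    citation_exists(
        "Cohen-Mansfield, J., & Jensen, B. (2018). Person-centered care for "
        "nursing home residents: The culture-change movement. Health Affairs."
    )

Run against the fabricated citation above, a check like this would turn up real Cohen-Mansfield and Jensen publications, but none matching that exact title and journal -- exactly the mismatch you are looking for.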