
Artificial Intelligence: What is AI?

Resources for research and use of AI tools such as ChatGPT, DALL-E, and others


What is Artificial Intelligence (AI)?
Here are explanations from some of our resources:

From Britannica Academic

From CREDO Reference

From The Gale Encyclopedia of Science

Login required for off-campus access. The off-campus access username and password can be found in Blackboard on the Institution Page. Students: look for "Student Tools to Stay Connected at Atlantic Cape." Faculty: look for "Key Information about Libraries and Tutoring." You can also contact the library for the username and password.


Artificial intelligence is used across all industries and academic subjects. The term describes everything from finding the best route in Apple or Google Maps, to self-driving cars, to the algorithms that order the items you see on a website or in a social media app, to the facial recognition software that unlocks a smartphone. It is part of our everyday lives at work, in school, and at home.

Computer scientist John McCarthy coined the term "Artificial Intelligence" in 1955 in connection with a proposed summer workshop at Dartmouth College, which many of the world's leading thinkers in computing attended. As part of refining his ideas about AI, he also invented the programming language Lisp in 1958. In 1965, McCarthy became the founding director of the Stanford Artificial Intelligence Laboratory (SAIL), where research was conducted into machine intelligence, graphical interactive computing, and autonomous vehicles.

Modern AI platforms are composed of multiple groups of algorithms with different goals. These platforms "learn" from training data, and what they learn is encoded in a model, which then uses that knowledge to generate some output.
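To make that train-then-generate pipeline concrete, here is a minimal sketch in Python. The tiny training sentence and the bigram (word-pair) approach are illustrative assumptions, not a description of any specific platform:

import random

# "Training": record which word follows which in the training data.
training_text = "the cat sat on the mat and the cat slept"
words = training_text.split()

model = {}  # the "model" is just a table of observed word-to-word transitions
for current, nxt in zip(words, words[1:]):
    model.setdefault(current, []).append(nxt)

# "Generation": use the learned transitions to produce new output.
word = "the"
output = [word]
for _ in range(6):
    if word not in model:
        break
    word = random.choice(model[word])
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the mat"

Real platforms replace this lookup table with billions of learned parameters, but the shape is the same: training data goes in, a model comes out, and the model generates the output.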


Milestones in AI

1950 - Turing Test: Alan Turing proposes the Turing Test to assess a machine's ability to exhibit intelligent behavior similar to a human's.

1955 - John McCarthy coins the term "Artificial Intelligence."

1966 - ELIZA: Joseph Weizenbaum creates ELIZA, the first chatbot, at MIT.

1986 - Mercedes-Benz demonstrates a driverless van. The same year, Carnegie Mellon University demonstrates a driverless Chevy van called Navlab 1.

1997 - Deep Blue Beats Kasparov: IBM's Deep Blue defeats world chess champion Garry Kasparov, showcasing AI's strategic abilities.

2011 - IBM's Watson wins on Jeopardy!, demonstrating its ability to answer questions posed in natural language.

2014 - Alexa: Amazon introduces Alexa, a virtual assistant that allows users to interact with devices using voice commands.

2016 - AlphaGo Beats Lee Sedol: Google DeepMind's AlphaGo defeats world champion Lee Sedol in the game of Go, a feat previously thought to be at least a decade away due to the game's complexity.

2017 - Amper becomes the first AI music composer, collaborating with human musicians to create music.

2020s - Generative AI and Large Language Models: OpenAI releases the GPT-3 large language model in 2020 and, in 2022, ChatGPT, a chatbot built on that technology for automated conversations. Both use natural language processing and deep learning to generate human-like text.

AI Vocabulary

  • AI Prompt Engineer: In prompt engineering, you choose the most appropriate formats, phrases, words, and symbols to guide the AI to interact with your users more meaningfully. Prompt engineers use creativity plus trial and error to create a collection of input texts so that an application's generative AI works as expected.
  • Algorithm: A set of instructions or sequence of steps that tells a computer how to perform a task or calculation. In some AI applications, algorithms tell computers how to adapt and refine processes in response to data, without a human supplying new instructions.
  • Artificial intelligence (AI): "the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable" (McCarthy, n.d.).
  • Autonomous: A system in which a machine makes independent, real-time decisions based on human-supplied rules and goals.
  • Bias: Assumptions made by a model that simplify the process of learning to do its assigned task. Most supervised machine learning models perform better with low bias, as these assumptions can negatively affect results.
  • Big Data: The massive amounts of data that are coming in quickly and from a variety of sources, such as internet-connected devices, sensors, and social platforms. 
  • Chatbot: An AI system that mimics human conversation. While some simple chatbots rely on pre-programmed text (a minimal sketch of this kind appears after this list), more sophisticated systems, trained on large data sets, are able to convincingly replicate human interaction.
  • Data Labeling: Often, human annotators are required to label, or describe, data before it can be used to train a machine learning system. In the case of self-driving cars, for example, human workers are required to annotate videos taken from dashcams, drawing shapes around cars, pedestrians, bicycles and so on, to teach the system which parts of the road are which. 
  • Deep Learning: A subset of machine learning. Deep learning uses machine learning algorithms but structures the algorithms in layers to create "artificial neural networks." These networks are modeled after the human brain and are most likely to provide the experience of interacting with a real human.
  • Foundation model: General-purpose AI is known as a foundation model or base model. GPT-3.5, for example, is a foundation model. ChatGPT is a chatbot: an application built on top of GPT-3.5, with specific fine-tuning to refuse dangerous or controversial prompts.
  • Generative AI: GPT is short for “Generative Pre-trained Transformer.” “Generative” means that it can create new data, such as text, in the likeness of its training data. “Pre-trained” means that the model has already been optimized based on this data, so it does not need to check back against its original training data every time it is prompted. And “Transformer” is a powerful type of neural network algorithm that is especially good at learning relationships between long strings of data, for instance sentences and paragraphs.
  • Hallucination: An incorrect response from an AI system, or false information in an output that is presented as factual.
  • Label: A part of training data that identifies the desired output for that particular piece of data.
  • Large Language Models (LLMs): AI that is trained on huge quantities of human language, sourced mostly from books and the internet. These models learn common patterns between the words in those datasets. The more data and computing power LLMs are trained on, the more novel tasks they tend to be able to perform. Language models can also be prone to severe problems such as bias and hallucination.
  • Machine learning: The capacity of computers to learn and adapt without following explicit instructions, by using algorithms and statistical models to analyze and infer from patterns in data (Oxford English Dictionary).
  • Model: The word “model” is shorthand for any single AI system. Examples of AI models include OpenAI’s ChatGPT and GPT-4, Google’s Bard and LaMDA, Microsoft’s Bing, and Meta’s LLaMA.
  • Multimodal system: A multimodal system is a kind of AI model that can receive more than one type of media as input, such as text and imagery, and output more than one type of signal.
  • Neural Networks: By far the most influential family of machine learning algorithms. Designed to mimic the way the human brain is structured, neural networks contain nodes, analogous to neurons in the brain, that perform calculations on numbers passed along the connective pathways between them. During training, large quantities of data are fed into the neural network, which then, in a process that requires large quantities of computing power, repeatedly tweaks the calculations done by the nodes. Via a clever algorithm, those tweaks are made in a specific direction, so that the outputs of the model increasingly resemble patterns in the original data (a single-node sketch appears after this list).
  • Reinforcement learning: A type of machine learning in which an algorithm learns by interacting with its environment and is then either rewarded or penalized based on its actions (sketched after this list).
  • Supervised learning: A technique for training AI systems, in which a neural network learns to make predictions or classifications based on a training dataset of labeled examples (a short sketch follows this list). The labels help the AI to associate, for example, the word “cat” with an image of a cat. With enough labeled examples of cats, the system can look at a new image of a cat that is not present in its training data and correctly identify it. Supervised learning is useful for building systems like self-driving cars, which need to correctly identify hazards on the roads, and content moderation classifiers, which attempt to remove harmful content from social media.
  • Training data: The data used to "teach" a machine learning system to recognize patterns and features. Typically, continual training results in more accurate machine learning systems. Conversely, biased or incomplete datasets can lead to imprecise or unintended outcomes.
  • Turing Test: In 1950, the computer scientist Alan Turing set out to answer a question: “Can machines think?” To find out, he devised a test he called the imitation game: could a computer ever convince a human that they were talking to another human, rather than to a machine? The Turing test, as it became known, was a way of assessing machine intelligence. If a computer could pass the test, it could be said to “think.” In recent years, as chatbots have become more powerful, they have become capable of passing the Turing test. But, their designers and plenty of AI ethicists warn, this does not mean that they “think” in any way comparable to a human.
    • Turing, writing before the invention of the personal computer, was not seeking to answer the philosophical question of what human thinking is, or whether our inner lives can be replicated by a machine; instead he was making an argument that was radical at the time: digital computers are possible, and there are few reasons to believe that, given the right design and enough power, they won’t one day be able to carry out all kinds of tasks that were once the sole preserve of humanity.
  • Unsupervised learning: One way that a neural network can be trained. Unlike supervised learning, in which an AI model learns from carefully labeled data, in unsupervised learning a trove of unlabeled data is fed into the neural network, which begins looking for patterns in that data without the help of labels (sketched below). This is the method predominantly used to train large language models like GPT-3 and GPT-4, which rely on huge datasets of unlabeled text. One of the benefits of unsupervised learning is that it allows far larger quantities of data to be ingested. A drawback is that reduced human supervision increases the likelihood of biases and harmful content being present in the training data. To minimize these problems, unsupervised learning is often used in conjunction with both supervised learning and reinforcement learning, by which models that were first trained unsupervised can be fine-tuned with human feedback.
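
The sketches below illustrate several of the terms above. All are minimal Python illustrations with invented data, not descriptions of any real product.

First, the "pre-programmed text" kind of chatbot, in the style of ELIZA (the patterns and replies here are made up; the real ELIZA used a much richer script):

import re

# Each rule pairs a text pattern with a canned reply; no learning is involved.
rules = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def reply(user_input):
    for pattern, template in rules:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default reply when no rule matches

print(reply("I feel anxious about exams"))  # Why do you feel anxious about exams?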
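
The "tweaks in a specific direction" described under Neural Networks can be shown with a single node trained by gradient descent. The toy task (fitting y = 2x + 1) and the learning rate are assumptions for illustration:

# One "node": prediction = weight * input + bias, trained on a tiny labeled set.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]  # points from y = 2x + 1
weight, bias, lr = 0.0, 0.0, 0.05        # lr = learning rate

for epoch in range(2000):
    for x, target in data:
        prediction = weight * x + bias
        error = prediction - target
        # Nudge each parameter in the direction that shrinks the error.
        weight -= lr * error * x
        bias -= lr * error

print(round(weight, 2), round(bias, 2))  # approaches 2.0 and 1.0

A real network stacks thousands of such nodes in layers and uses backpropagation to compute the nudges, but the principle is the same.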
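
Reinforcement learning's reward-and-penalty loop can be sketched with a two-choice slot machine problem (the hidden payout rates and the explore-10%-of-the-time strategy are assumptions for illustration):

import random

payout = {"A": 0.3, "B": 0.7}  # the "environment": hidden reward probabilities
value = {"A": 0.0, "B": 0.0}   # the agent's learned estimate of each action
counts = {"A": 0, "B": 0}

for step in range(5000):
    # Explore occasionally; otherwise exploit the best-known action.
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(value, key=value.get)
    reward = 1 if random.random() < payout[action] else 0  # reward or penalty
    counts[action] += 1
    # Update the running-average estimate for the chosen action.
    value[action] += (reward - value[action]) / counts[action]

print(value)  # estimates approach 0.3 and 0.7, so the agent learns to favor B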
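
Supervised learning, labels, and training data come together in the next sketch. It assumes the scikit-learn library (the guide does not name one), and the animal measurements are invented:

from sklearn.tree import DecisionTreeClassifier

# Training data: [weight in kg, ear length in cm], with human-supplied labels.
X = [[4.0, 6.5], [5.0, 7.0], [25.0, 10.0], [30.0, 11.0]]
y = ["cat", "cat", "dog", "dog"]

model = DecisionTreeClassifier().fit(X, y)  # learn from the labeled examples

# A new animal that is not in the training data:
print(model.predict([[4.5, 6.8]]))  # -> ['cat']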
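
Unsupervised learning, by contrast, gets the same data with no labels at all and must find structure on its own. Again assuming scikit-learn, here is clustering with k-means:

from sklearn.cluster import KMeans

# The same measurements as above, but unlabeled.
X = [[4.0, 6.5], [5.0, 7.0], [25.0, 10.0], [30.0, 11.0]]

clusters = KMeans(n_clusters=2, n_init=10).fit(X)
print(clusters.labels_)  # e.g. [0 0 1 1]: two groups found without "cat"/"dog" labels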