Artificial Intelligence
How technology is shaping everything we do
In 1994, Jimmy Lin ’97 predicted that artificial intelligence, or AI, would become a reality during his lifetime. He was 15 and competing against a small group of engineers and computer scientists for the Loebner Prize, awarded to the person who creates the most human-like computer program. Lin’s program was designed to recognize and respond to familiar key words in human speech. “I hesitate to call it artificial intelligence,” he told reporters at the time. “I like to refer to it as a bag of tricks.”
1726
Jonathan Swift’s satiric novel, Gulliver’s Travels, refers to the Engine, a large contraption used by scholars to generate new ideas, sentences and books.
1950
British mathematician Alan Turing publishes an academic paper addressing whether machines can think. In it he proposes the Turing Test, a way to measure machine intelligence by assessing a machine’s ability to mimic human conversation and behavior. (The Loebner Prize competition is based on the Turing Test.)
1956
Dartmouth College mathematics professor John McCarthy coins the term “artificial intelligence” during the Dartmouth Summer Research Project on Artificial Intelligence, a conference exploring how machines could simulate human intelligence.
1958
Perceptron, the first artificial neural network, is developed by American psychologist Frank Rosenblatt. The program makes decisions in a way similar to the human brain. It can distinguish between punch cards marked on the left and right and is described by its creator as the first machine capable of having an original idea.
1960
Adaline (Adaptive Linear Neuron), a single-layer artificial neural network, is developed by Stanford University professor Bernard Widrow and his student Marcian Hoff. An adaptive system for pattern recognition, it lays the foundation for future advances in neural networks and machine learning.
1997
Deep Blue, developed by IBM, is the first computer system to defeat a reigning world chess champion, Garry Kasparov. The computer’s underlying technology advances supercomputers’ ability to handle the complex calculations needed for tasks like uncovering patterns in databases.
2012
AlexNet, a deep-learning neural network with eight layers, marks a breakthrough in image recognition, identifying objects such as dogs and cars in photographs at a level approaching human performance.
2017
Google Research introduces the Transformer, a neural network architecture that can be trained to predict the next word in a sequence of words.
2019
OpenAI’s Generative Pre-trained Transformer 2 (or GPT-2) demonstrates the power of natural language processing. GPT-2 can predict the next item in a sequence and perform tasks such as summarizing and translating text. GPT-3, introduced in 2020, can produce text often indistinguishable from human writing.
2021
DALL-E, a neural network that creates pictures from language prompts, is introduced by OpenAI.
2022
ChatGPT, OpenAI’s chatbot built on a large language model, introduces the public to generative AI, which can create new content based on existing data, producing text, images, videos, audio and more.
2023
Google Labs releases NotebookLM, which summarizes up to 50 sources, including documents, videos and books.
2024
Using Google’s AI algorithms, Google Research and Harvard scientists publish the first synaptic-resolution map of a piece of human brain tissue. OpenAI releases Sora, an AI tool that creates videos from text, images and other video.
Debbie Kane is a longtime contributor to The Exeter Bulletin. Her work has also appeared in AIA NH Forum, New Hampshire Home and New Hampshire Magazine.
This article was originally published in the spring 2025 edition of The Exeter Bulletin.