
Podcast Transcript | From Myths to Machines

Welcome to the Sleeping Badly podcast. This is Ryan Spanier. Not too much housekeeping today; I'm a little bit closer to where I want to be in terms of the music for this podcast. More refinements are surely to come, and I'm feeling a little bit more comfortable with the project as a whole, but there's still a long way to go. At the moment, my thought is to create five episodes of reasonably high quality before I begin advertising the podcast. I also need to come up with a solution regarding the website; more on that to come. With all that out of the way, today I'm going to be giving a brief history of artificial intelligence. AI seems to be the talk of the town these days, so I want to get out at least a little bit of content about it. So let's get started!


00:01:02


Artificial intelligence, often shortened to AI, refers to the capability of machines or software to perform tasks that normally require human intelligence, such as reasoning, learning, and problem solving. While AI is often thought of as a 21st-century innovation, its conceptual roots reach deep into ancient mythology, medieval philosophy, and centuries of scientific exploration. AI has a rich and complex history, running from early ideas and mechanical prototypes through the birth of computer science to the modern era of machine learning, deep learning, and large-scale AI models. Although AI as we know it depends on modern digital computation, the dream of creating artificial beings is thousands of years old. Long before the invention of the computer, human cultures imagined artificial life. In Greek mythology, the god Hephaestus


00:02:02


was said to have built Talos, a giant bronze machine that guarded the island of Crete. In Jewish folklore, the golem, an animated being made of clay, was brought to life through mystical incantations. Ancient Chinese legends speak of mechanical servants built for kings. And in the Arab world, the medieval inventor Al-Jazari described programmable machine automata. These stories demonstrate that the idea of creating intelligence, whether through divine, magical, or mechanical means, has long been part of human imagination. By the Middle Ages, the myths began giving way to real mechanical devices. In 1206, Al-Jazari designed machines such as a programmable musical boat with moving figures. In 15th-century Europe, Leonardo da Vinci sketched plans for a humanoid robot knight that could move its arms and jaw. One of the most famous early examples of mechanical ingenuity came in the 18th century: Jacques de Vaucanson's mechanical duck, which could flap its wings, eat, and mimic digestion.


00:03:13


These devices were not intelligent in the modern sense, but they did demonstrate that human ingenuity could simulate aspects of life through mechanical engineering. Artificial intelligence depends on the idea that reasoning can be formalized into rules and symbols. In the 17th century, philosophers like Descartes and Leibniz proposed that human reasoning could be reduced to mathematical operations. Leibniz even dreamed of a universal characteristic language, a symbolic logic system that could represent all human thought and be manipulated by calculation. This idea foreshadowed the symbolic AI systems of the 20th century. In the 19th century, Charles Babbage designed the Analytical Engine, a mechanical general-purpose computer. Although it was never completed in his lifetime, it was the first design that could, in theory, be programmed for different tasks.


00:04:14


Ada Lovelace, who was working with Babbage, recognized the broader implications of such a machine, noting that it could manipulate not just numbers but symbols, laying the conceptual groundwork for computing as we know it. In the late 19th and early 20th centuries, symbolic logic developed by George Boole and later Gottlob Frege created formal systems for logical reasoning. These mathematical tools would later become critical in programming artificial reasoning. By the 1930s, Alan Turing's work on computation formalized the concept of an abstract machine that could perform any calculation given the right instructions. The Turing machine, introduced in his 1936 paper "On Computable Numbers," provided the theoretical foundation for all modern computers. World War II accelerated computing research; Turing worked at Bletchley Park on code-breaking machines like the Bombe, while in the United States, the ENIAC was built for ballistic calculations.


00:05:19


These machines demonstrated that electronics could handle complex tasks at unprecedented speed. In 1950, Turing proposed what became known as the Turing Test, a practical criterion for machine intelligence: if a computer could engage in conversation indistinguishable from that of a human, it could be said to 'think.' The term artificial intelligence itself was coined by John McCarthy at the 1956 Dartmouth Summer Research Project on Artificial Intelligence. This event is often considered the official birth of AI. Attendees included McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, many of whom became key figures in AI research. During AI's early decades, researchers were extremely optimistic. Programs were built to play chess, solve algebra problems, and prove mathematical theorems. The Logic Theorist, created in 1955 by Allen Newell and Herbert Simon, could prove theorems from Principia Mathematica.


00:06:29


ELIZA, created in 1966 by Joseph Weizenbaum, simulated a psychotherapist using pattern matching. However, enthusiasm often outpaced reality. Early AI struggled with the common-sense problem and the combinatorial explosion of possible solutions in complex problems. By the 1970s, funding slowed, leading to the first AI winter. In the late 1970s and early 1980s, AI research shifted toward "expert systems," programs that used large, hand-coded rule sets to mimic the decision-making of human specialists. Famous examples include MYCIN, which diagnosed bacterial infections and recommended antibiotics, and DENDRAL, which analyzed chemical mass spectrometry data. These systems showed commercial promise but were expensive to maintain, as updating knowledge bases required intensive manual labor. By the late 1980s, researchers began moving away from purely symbolic AI toward machine learning, where algorithms learn from data rather than relying on fixed rules. Neural networks, inspired by the structure of the brain, had been sidelined since Minsky and Papert's 1969 critique of the perceptron, but in the 1980s, backpropagation, a method for training multilayer networks, revived interest.
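If you're curious what "pattern matching" meant in practice for a program like ELIZA, here is a minimal sketch in Python, purely illustrative and not Weizenbaum's original script: each hand-written rule pairs a pattern with a response template that echoes part of the user's sentence back, which is also the spirit of the hand-coded rule sets behind later expert systems.

```python
import re

# Illustrative ELIZA-style rules (hypothetical examples, not the original 1966 script).
# Each rule pairs a regular expression with a response template that reuses captured text.
RULES = [
    (re.compile(r"i feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def respond(utterance: str) -> str:
    """Return a canned response by matching the first rule whose pattern fires."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

if __name__ == "__main__":
    print(respond("I feel anxious about work"))  # -> Why do you feel anxious about work?
    print(respond("Nice weather today"))         # -> Please go on.
```

There is no learning here at all; the apparent intelligence comes entirely from the rules a human wrote by hand, which is exactly why such systems were expensive to maintain.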


00:08:00


With the rise of the internet, vast amounts of data became available, and computing hardware, especially GPUs, allowed for more complex models. Speech recognition, computer vision, and machine translation saw rapid progress. In 2012, Geoffrey Hinton's team won the ImageNet competition with a deep convolutional neural network, AlexNet, which dramatically outperformed previous methods, triggering a massive shift toward deep learning. AlphaGo defeated world champion Lee Sedol at Go in 2016. Then the GPT series demonstrated powerful natural language processing, a line of progress that continues to this day. DALL-E and other generative models expanded AI into creative domains. By the 2020s, AI systems could generate text, images, audio, and even code. Models like GPT-4 and beyond combined vast datasets with sophisticated architectures to achieve broad capabilities. The growing power of AI raised pressing concerns about bias, misinformation, surveillance, job displacement, and control over autonomous systems.


00:09:13


The history of artificial intelligence is, above all, a story of human imagination and ambition. From ancient myths to deep neural networks, AI has evolved through cycles of optimism and setbacks. As the field continues to advance, the central question remains: how will humanity shape AI, and how will AI shape humanity? That's all for today on the Sleeping Badly podcast. If you like the show, please leave us a rating wherever you get your podcasts; it really helps! I'm your host, Ryan Spanier, and I hope you'll join us on the next episode. Thanks for listening.

