Intro
Humanity’s history is a process of constant transformation. Some epochs have allowed our civilization to change faster than others; classical Greece and the Enlightenment are two in which humans saw abrupt change. Today we find ourselves nearing a very special moment, a crux in the history of our civilization. Some people have named this moment the singularity, a term that originated in the 1950s and is as relevant today as it has ever been.
Technology has reached a point where computers can compose, converse, write, and paint as well as we do. We are teaching machines to think, and we are getting ever closer to a moment when AI will change this world. I argue that AI will be the final propeller of the singularity, the moment at which technological change becomes irreversible.
Imagine a world where machines can do what we do. They can think, create, and innovate just like us. That world might be closer than you think.
Origins of the singularity
The concept of the singularity traces its roots back to the 1950s. It was then that John von Neumann, in a conversation recalled by his friend and fellow mathematician Stanislaw Ulam, observed:
"The ever-accelerating progress of technology and changes in the mode of human life […] gives the appearance of approaching some essential singularity in the history of the race."
Von Neumann was not talking about artificial intelligence at the time. Although in those years the idea of machine intelligence was being discussed by other thinkers, notably Alan Turing, von Neumann used the term to describe a point in the future where technological growth becomes uncontrollable and irreversible, leading to unforeseeable changes.
The origins of AI
At about the same time, computing pioneer Alan Turing foresaw the plausibility of artificial intelligence. His path to the idea was the culmination of his work in several fields, including logic, mathematics, and cryptography. His work on the Entscheidungsproblem (decision problem) led him to define the universal Turing machine, which can be seen as a precursor to the modern computer. That work naturally extended to the question of whether such machines could replicate human thought processes.
Among Turing's most famous contributions to the field of AI is the "Turing Test". In his 1950 paper "Computing Machinery and Intelligence," Turing proposed a test to determine whether a machine can exhibit intelligent behavior indistinguishable from that of a human. Instead of asking "Can machines think?", he reframed the question as "Can machines do what we can do?" The test involves a human evaluator who holds a natural-language conversation with a machine and a human without knowing which is which. If the evaluator cannot reliably distinguish the machine from the human based on the conversation, the machine is said to have passed. We are already past that point: LLMs can pass that test; they can do what we can.
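To make the protocol concrete, here is a minimal Python sketch of a single trial of the imitation game; the evaluator and respondent functions are hypothetical stand-ins of my own, not anything Turing specified.

```python
# Hypothetical sketch of one imitation-game trial, assuming `evaluator`,
# `human_respond`, and `machine_respond` are callables supplied by the reader.
import random

def run_trial(evaluator, human_respond, machine_respond, questions):
    """Return True if the evaluator fails to identify the machine."""
    # Hide the respondents behind anonymous labels A and B, in random order.
    labeled = {"A": human_respond, "B": machine_respond}
    if random.random() < 0.5:
        labeled = {"A": machine_respond, "B": human_respond}

    # The evaluator sees only text transcripts, never the respondents themselves.
    transcripts = {label: [(q, respond(q)) for q in questions]
                   for label, respond in labeled.items()}

    guess = evaluator(transcripts)  # evaluator names the label it believes is the machine
    actual = "A" if labeled["A"] is machine_respond else "B"
    return guess != actual          # True means the machine fooled the evaluator this trial
```

Over many such trials, the machine passes when the evaluator's guesses are no better than chance.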
When singularity meets AI
Von Neumann and Turing met in 1935 at Cambridge, while von Neumann was lecturing on almost periodic functions. They met again during Turing’s first year at Princeton, in 1936–37. Von Neumann shared some research problems with Turing at the time (e.g., the approximation of continuous groups by finite groups), yet there is no record of them discussing machine intelligence or runaway technological progress. The mathematical community was tight-knit, and both men were giants within it, arguably two of the most intelligent people of their time.
The singularity and AI are not novel ideas; they were foreseen, and they are expected. Seventy years later, we are watching the two ideas meet. At the turn of the 21st century, Ray Kurzweil, a prominent futurist, inventor, and author who has written extensively about the technological singularity, argued that AI is central to this technological revolution. Let’s discuss just how Kurzweil thinks this will happen:
One of Kurzweil's central arguments is that technological progress, especially in information technology, is not linear but exponential. He often cites Moore's Law, the observation that the number of transistors on a microchip doubles approximately every two years, leading to an exponential increase in computing power. This exponential growth in compute, he argues, will drive rapid and unprecedented advances in AI. Moore's Law is a controversial example, as NVIDIA's CEO Jensen Huang, of all people, recently declared it dead, but I think Kurzweil is right to argue this. The market has never demanded more computing power than it does now, and companies like NVIDIA and AMD are meeting that demand. Every new GPU generation brings roughly a 70% to 80% increase in performance, which compounds exponentially; over the last four generations that has amounted to at least a 4x increase in speed, faster than developers can probably take advantage of.
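To see what that compounding looks like, here is a minimal Python sketch; the per-generation gain figures are illustrative placeholders I chose, not measured benchmark numbers.

```python
# Illustrative only: how per-generation performance gains compound.
# The gain values below are placeholders, not benchmark data.

def compound_speedup(gain_per_gen: float, generations: int) -> float:
    """Cumulative speedup after `generations` steps, each improving by `gain_per_gen`."""
    return (1.0 + gain_per_gen) ** generations

for gain in (0.4, 0.7, 0.8):
    print(f"{gain:.0%} per generation over 4 generations -> "
          f"{compound_speedup(gain, 4):.1f}x cumulative speedup")
```

The point is simply that steady per-generation gains stack up exponentially rather than linearly.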
Having compute capacity available removes a key constraint, but Kurzweil also introduced the concept of the "Law of Accelerating Returns," which posits that as technologies advance, they enable faster and more efficient ways to develop newer technologies. This feedback loop means that technological progress accelerates over time. In the context of AI, it suggests that once AI reaches a certain threshold of capability, it will be able to improve itself at an ever-increasing rate, leading to rapid advances. This sounds plausible, but we must not forget that it is we humans who drive technological change, and I can attest that we are doing so at full tilt. Watching the open source community find 100% efficiency gains in LLM training week over week, and reading that Google employees fear there is no moat in AI, makes me believe we are driving technological change at Moore's-law pace.
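In toy form, the claim is that not only does capability compound, the rate of compounding itself grows. Here is a minimal Python sketch of that feedback loop; the constants are arbitrary and only meant to show the shape of the curve, not to model anything real.

```python
# Toy model of accelerating returns: each step improves capability AND
# the rate at which capability improves. All constants are arbitrary.

def accelerating_returns(steps: int, capability: float = 1.0, rate: float = 0.10) -> list[float]:
    """Trajectory where better tools also speed up the building of the next tools."""
    history = [capability]
    for _ in range(steps):
        rate *= 1.10                 # the improvement rate itself improves
        capability *= 1.0 + rate     # so capability grows faster than exponentially
        history.append(capability)
    return history

print([round(x, 2) for x in accelerating_returns(steps=15)])
```

With a fixed rate this would be ordinary exponential growth; letting the rate grow is what produces the runaway curve Kurzweil describes.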
The third concept Kurzweil introduced is that of using the human brain as a benchmark, which is none other than the rise of artificial general intelligence. Kurzweil believes that once we have a sufficiently detailed understanding of the human brain, we will be able to create machines that replicate its functions. I believe that understanding arrived, in a sense, with the training of large language models.
Many philosophers have proposed the idea that language mirrors the logical structure of the world. In his "Tractatus Logico-Philosophicus," Wittgenstein put forward the picture theory of language, suggesting that propositions picture states of affairs in the world; he believed the logical structure of language reflects the logical structure of reality. Earlier, Bertrand Russell, in "Principia Mathematica," sought to develop a perfect logical language that would mirror the structure of the world, believing that ordinary language often contains ambiguities and confusions, whereas a logically perfect language would represent the world's structure without such issues. In ancient Greece, Aristotle held that there is a correspondence between the structure of language (particularly its subject-predicate structure) and the structure of the world; for him, categories of being are reflected in categories of language. This idea is not new; it has been with us for centuries.

A model like GPT-4 was reportedly trained on around 13 trillion tokens and has on the order of 1.7 trillion parameters. An enormous amount of logical structure is represented and encoded in that training. We did model the human mind; we did not need to understand it, we just needed to let machines read and learn. Even with a messy process and brute computational force, it worked.
On the road to AGI (artificial general intelligence), the missing piece might just be self-improvement; after all, humans are experts at learning. Once machines start doing this, however, they will quickly surpass us thanks to their ability to process information faster and without fatigue. If we then allow them to replicate and recreate themselves, they will be able to design and improve upon subsequent generations. This self-improvement capability will lead to explosive growth, eventually resulting in superintelligent AI that far surpasses human intelligence. This all sounds dangerous and dystopian, but what if it is the only way to save ourselves? What if it came to that?
Many technologies will drive the singularity; Kurzweil doesn't believe AI will do it on its own either. He envisions a convergence of various technologies, including nanotechnology, biotechnology, and AI, and these combined advances will lead to profound changes.
In the field of biotech I expect an explosion of innovation, driven not only by AI but also by genetics. We are already seeing incredible advances in aging research and in treatments for metabolic syndrome, cancer, and Alzheimer's. Our knowledge of the human body and its processes is increasing at a similar, Moore's-law-like pace. We are already seeing these technologies in action: mRNA vaccines, CRISPR-Cas9 gene editing, stem cell therapy, and artificial organ growth are a few that come to mind.
The singularity will not just be about machines surpassing human intelligence; our own augmented brains will begin to change too. Heavy users of ChatGPT run dozens of chats per day; they use it for everything. Kurzweil also predicts that humans will merge with machines, enhancing our own cognitive and physical abilities. This fusion of biology and technology will redefine what it means to be human. We are already different; we can't separate ourselves from our phones for even a minute. In a few years we will probably just implant a chip in our brains. If you want to see the future, follow Neuralink; at the time of this writing, it has already received approval for human trials.
In summary, the singularity will be driven by the exponential growth of technology, the self-improving nature of advanced AI, our increasing understanding of human logic, and the convergence of various revolutionary technologies. It is a pivotal point in the future where technological growth becomes so rapid and profound that it fundamentally changes our civilization.
The signs around us
Identifying the exact moment of the singularity might be challenging, as it's not a single event but a theoretical point of rapid, exponential technological growth. As we approach the singularity, there are several indicators we can watch for:
- Rapid Technological Advancements: A noticeable acceleration in technological breakthroughs and innovations.
- AI Self-improvement: Observing AI systems that can autonomously improve their capabilities without human intervention and do so at an accelerating rate.
- Economic Disruptions: Significant shifts in the job market and economy due to automation and AI advancements.
- Shifts in Human Identity: As humans merge more with technology, there might be profound discussions and shifts in our understanding of identity, consciousness, and what it means to be alive.
- Unprecedented Solutions: If we start seeing solutions to age-old problems (e.g., curing major diseases, solving complex mathematical problems) emerging at a pace and depth previously thought impossible, it might be an indicator of the singularity's onset.
I can already observe all of these signs to some degree. The speed of technological advancement, even within individual fields, is uncanny; even experts cannot keep up. In my opinion, self-improving AI has already begun to appear, and the key to it is autonomy. We already have agents like AutoGPT and BabyAGI, which appeared very early and which people use day to day. ChatGPT's code interpreter is a technology meant to produce code as good as the code it is made of. Finally, I expect the economic impact on the knowledge economy to be absolutely wild.
Wrapping up
Given these signs, it's clear that the singularity isn't just a distant concept; it's a reality we're steadily approaching. The singularity isn't just about machines; it's about us. It's about our adaptability, our resilience, and our capacity to evolve. This revolution will redefine our values, our institutions, and our identity. We must be courageous, because technology is the source of our progress as a civilization. The best way to solve our problems is with technology; that has always been the case in our history, and it will be in the future.