In this episode of “The Lex Fridman Podcast,” Lex Fridman interviews Stephen Wolfram, a computer scientist, mathematician, theoretical physicist, and founder of Wolfram Research. They delve into the computational nature of reality, the challenges of building superintelligent AGI, and the potential of large language models like ChatGPT. They also discuss the importance of symbolic programming, the problem of understanding natural language, and the implications of AI systems for society.
Stephen Wolfram explores the computational nature of reality and its implications. Computational irreducibility, the phenomenon where a system’s behavior cannot be predicted by any shortcut faster than simulating it step by step, plays a significant role in understanding the future of everything. Observers extract only a thin summary of the world’s detail, and the open question is whether that summary is faithful. The human mind works in approximations and ambiguity, which may be a feature rather than a flaw.
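To make computational irreducibility concrete, here is a minimal Python sketch (not from the conversation itself, but Wolfram’s classic example) of the Rule 30 cellular automaton. As far as anyone knows, there is no shortcut to the pattern at step n other than computing every step before it.

```python
# Rule 30 cellular automaton, a standard example of computational
# irreducibility: to know the state at step n, you must (as far as
# anyone knows) simulate all n steps.

def rule30_step(cells):
    """Apply one step of Rule 30 to a row of 0/1 cells (wrapping edges)."""
    n = len(cells)
    return [
        # New cell = left XOR (center OR right), which is Rule 30.
        cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
        for i in range(n)
    ]

width, steps = 63, 30
row = [0] * width
row[width // 2] = 1  # start from a single black cell in the middle

for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```

Even from this trivial rule, the output is complex enough that predicting a distant row reduces to running the simulation itself.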
Symbolic programming encapsulates human thinking in computational form. Human language is not directly computable; it must first be cast into a symbolic representation. Symbolic expressions, structured as functions applied to arguments, turn out to be a good match for how humans conceptualize the world. Efforts like Wolfram Alpha and ChatGPT aim to represent natural language precisely enough that it can be computed; Wolfram Alpha in particular curates data and computes results from it, turning natural language into computational language.
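As a rough sketch of the “functions and arguments” idea, the toy Python model below represents every expression as a head applied to arguments, in the style of Wolfram Language expressions like Plus[1, Times[2, x]]. The names here (Expr, simplify) are illustrative, not Wolfram’s actual internals.

```python
# Toy model of symbolic expressions: every expression is a head
# applied to a tuple of arguments. Numeric parts evaluate; symbolic
# parts (plain strings) stay symbolic.

from dataclasses import dataclass

@dataclass(frozen=True)
class Expr:
    head: str
    args: tuple

    def __repr__(self):
        return f"{self.head}[{', '.join(map(repr, self.args))}]"

def simplify(e):
    """Recursively evaluate Plus/Times when all arguments are numbers."""
    if not isinstance(e, Expr):
        return e
    args = [simplify(a) for a in e.args]
    if all(isinstance(a, (int, float)) for a in args):
        if e.head == "Plus":
            return sum(args)
        if e.head == "Times":
            out = 1
            for a in args:
                out *= a
            return out
    return Expr(e.head, tuple(args))

# "x + (2 * 3)": the numeric subexpression reduces, the symbol stays.
expr = Expr("Plus", ("x", Expr("Times", (2, 3))))
print(expr)            # Plus['x', Times[2, 3]]
print(simplify(expr))  # Plus['x', 6]
```

The design point is that one uniform tree of heads and arguments can carry both data and computation, which is what makes symbolic representation a plausible bridge between human concepts and something a machine can evaluate.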
ChatGPT’s success raises questions about the structure of language beyond grammar. There appear to be finite rules for constructing semantically correct sentences: sentences that may describe nothing realized in the world, yet remain consistent with our imagination and our understanding of physical reality. Large language models like ChatGPT seem to have implicitly captured such laws of language and thought, which could be made explicit with effort and with the discovery of pockets of computational reducibility. The goal is a formalism for describing the world from which a tower of consequences can be derived.
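A toy illustration of “finite rules generating sentences” is a small context-free grammar, sketched below in Python. The rules and lexicon are hypothetical; a semantic grammar in the sense discussed would add constraints on which combinations are meaningful, not merely well-formed.

```python
# A tiny context-free grammar: a finite rule set that generates
# grammatically well-formed sentences. Rules and words are invented
# for illustration.

import random

RULES = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["cat"], ["theorem"], ["river"]],
    "V":   [["chases"], ["proves"], ["crosses"]],
}

def generate(symbol="S"):
    """Expand a symbol by picking one of its rules at random."""
    if symbol not in RULES:
        return [symbol]  # terminal word
    expansion = random.choice(RULES[symbol])
    return [word for part in expansion for word in generate(part)]

# Every output is grammatical; some ("the river proves a cat") are
# semantically odd, which is the gap a semantic grammar would close.
for _ in range(3):
    print(" ".join(generate()))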
AI systems lack intrinsic objectives; humans must define them. The interplay between individual innovation and collective inertia is complex, producing both positive and negative outcomes. Large language models like ChatGPT could potentially manipulate society if given more control. Understanding computational irreducibility is crucial for managing AI systems and for accepting that not everything can be predicted or controlled. Society must also adapt to the idea that AI without an owner can persist on the internet, and computational contracts may become a significant part of the world.
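To give a flavor of what a computational contract might look like, here is a toy Python sketch: contract terms written as executable predicates rather than legal prose. All names, amounts, and terms are hypothetical illustrations, not a real contract system.

```python
# Toy "computational contract": the terms are code that can be
# evaluated mechanically against facts about what happened.

from dataclasses import dataclass

@dataclass
class Delivery:
    delivered_on_day: int
    items_ok: bool

def payment_due(d: Delivery, deadline_day: int = 30) -> int:
    """Hypothetical terms: full price if on time and intact,
    a discount if late, nothing if defective."""
    if d.items_ok and d.delivered_on_day <= deadline_day:
        return 100  # full payment
    if d.items_ok:
        return 80   # late-delivery discount
    return 0        # defective goods: no payment

print(payment_due(Delivery(delivered_on_day=28, items_ok=True)))  # 100
print(payment_due(Delivery(delivered_on_day=35, items_ok=True)))  # 80
```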
Stephen Wolfram’s exploration of the computational nature of reality and the challenges of building large language models like ChatGPT provides valuable insights into the future of AI and its impact on society. Symbolic programming and the understanding of natural language play crucial roles in bridging the gap between human thinking and computational possibilities. As AI systems continue to evolve, it is essential to navigate the complexities of computational irreducibility and ensure responsible development and use of AI technologies.