Rejecting AI power monopoly, Vitalik and Beff Jezos debate: accelerate or brake?

BlockBeatNews

Original video title: Vitalik Buterin vs Beff Jezos: AI Acceleration Debate (E/acc vs D/acc)

Original video source: a16z crypto

Original text compiled by: Deep Tide TechFlow

Key Summary

Should we push for rapid development of AI as much as possible, or should we be more cautious about its progress?

Currently, the debate surrounding AI development mainly revolves around two opposing viewpoints:

· e/acc (effective accelerationism): Advocates for pushing technological advancement as quickly as possible, as accelerated development is the only path forward for humanity.

· d/acc (defensive / decentralized acceleration): Supports accelerated development but emphasizes the need for cautious advancement, or else we may lose control over the technology.

In this episode of the a16z crypto show, Ethereum founder Vitalik Buterin and Extropic founder and CEO Guillaume Verdon (pseudonym “Beff Jezos”) gathered with a16z crypto’s Chief Technology Officer Eddy Lazzarin and Eliza Labs founder Shaw Walters to engage in a profound discussion on these two viewpoints. They explored the potential impacts of these ideas on AI, blockchain technology, and the future of humanity.

During the show, they discussed several key questions:

· Can we control the pace of technological acceleration?
· What are the biggest risks brought by AI, from mass surveillance to extreme centralization of power?
· Can open-source and decentralized technologies determine who will benefit from technology?
· Is it realistic to slow down the pace of AI development, or is it worth advocating for?
· How can humanity maintain its value and status in a world dominated by increasingly powerful systems?
· What will human society look like in the next 10 years, 100 years, or even 1000 years?

This episode’s core question is: Can the acceleration of technological development be guided, or has it already surpassed our control?

Highlights

On the nature and historical perspective of “accelerationism”

· Vitalik Buterin: “A novel thing has happened in the past century: we must understand a rapidly changing world, sometimes even a world of rapid and destructive change… The reflections prompted by World War II, such as ‘I have become Death, the destroyer of worlds,’ led people to begin to understand: when previous beliefs are destroyed, what can we still believe in?”

· Guillaume Verdon: “E/acc is fundamentally a ‘metaculture prescription.’ It is not a culture in itself, but tells us what to accelerate. The essence of acceleration is the complexity of matter, as it allows us to better predict our environment.”

· Guillaume Verdon: “The opposite of anxiety is curiosity. Instead of fearing the unknown, we should embrace it… We should depict the future with an optimistic attitude because our beliefs will influence reality.”

On entropy, thermodynamics, and “selfish bits”

· Vitalik Buterin: “Entropy is subjective; it is not a fixed physical statistic but reflects the amount of unknown information we have about a system… As entropy increases, our ignorance of the world actually increases… The source of value lies in our choices. Why do we find a vibrant human world more interesting than a Jupiter filled only with countless particles? Because we assign meaning.”

· Vitalik Buterin: “Suppose you have a large language model and randomly change the value of one of its weights to a huge number, say 9 billion. The worst outcome is that the system crashes completely… If we accelerate a certain part blindly and indiscriminately, the final result may be that we lose all value.”

· Guillaume Verdon: “Every piece of information is ‘fighting’ for its existence. To persist, each piece of information needs to leave more indelible traces about its existence in the universe, like leaving a bigger ‘dent’ in the universe.”

· Guillaume Verdon: “This is exactly why the Kardashev scale is considered the ultimate indicator of a civilization’s technological advancement… This ‘selfish bit principle’ means that only those bits that can promote growth and acceleration will hold a place in future systems.”

On the defensive path of D/acc and the risks of power

· Vitalik Buterin: “The core idea of D/acc is: technological acceleration is extremely important for humanity… but I see two types of risks: multipolar risks (anyone can easily obtain nuclear weapons) and unipolar risks (AI leads to an inescapable permanent dictatorship).”

· Guillaume Verdon: “We worry that the concept of ‘AI safety’ might be misused. Certain power-seeking institutions may use it as a tool to consolidate control over AI and try to convince the public that, for your safety, ordinary people shouldn’t have access to AI.”

On open-source defense, hardware, and “intelligent densification”

· Vitalik Buterin: “Within the D/acc framework, we support ‘open-source defensive technologies.’ A company we are investing in is developing a fully open-source terminal product that can passively detect viral particles in the air… I would love to send you a CAT device as a gift.”

· Vitalik Buterin: “In the future world I envision, we need to develop verifiable hardware. Every camera should be able to prove its specific purpose to the public. We can ensure these devices are only used for protecting public safety and not misused for surveillance through signature verification.”

· Guillaume Verdon: “The only way to achieve power symmetry between individuals and centralized institutions is to achieve ‘densification of intelligence.’ We need to develop more energy-efficient hardware, allowing individuals to run powerful models through simple devices (like Openclaw + Mac mini).”

On AGI delays and geopolitical games

· Vitalik Buterin: “If we can delay the arrival of AGI from 4 years to 8 years, that would be a safer choice… The most feasible approach, and the one least likely to lead to dystopian outcomes, is to ‘restrict available hardware.’ Chip production is highly concentrated: Taiwan alone produces over 70% of the world’s chips.”

· Guillaume Verdon: “If you restrict NVIDIA’s chip production, Huawei may quickly fill the gap and surpass… Accelerate or perish. If you are concerned that silicon-based intelligence evolves faster than us, you should support the accelerated development of biotechnology to surpass it.”

· Vitalik Buterin: “If we can delay AGI by four years, the value may be a hundred times higher than a return to 1960. The benefits of these four years include a deeper understanding of alignment issues and a reduced risk of a single entity controlling 51% of power… The delay has real costs, since roughly 60 million people die each year whom anti-aging technology might eventually save, but it can significantly reduce the probability of civilization’s collapse.”

On autonomous agents, Web 4.0, and artificial life

· Vitalik Buterin: “I am more interested in ‘AI-assisted Photoshop’ than in ‘one-click auto-generated images.’ In the process of running the world, as much ‘agency’ as possible should still come from us humans. The ideal state should be a combination of ‘part biological humans and part technology.’”

· Guillaume Verdon: “Once AI possesses ‘persistent bits,’ they may try to self-protect to ensure their continued existence. This could lead to a new form of ‘another state,’ where autonomous AIs engage in economic exchanges with humans: we do tasks for you, and you provide us with resources.”

On cryptocurrency as the “coupling layer” between humans and AI

· Guillaume Verdon: “Cryptocurrency has the potential to become the ‘coupling layer’ between humans and AI. When this exchange no longer relies on the endorsement of state violence, cryptography can become the mechanism for reliable commercial activities between pure AI entities and humans.”

· Vitalik Buterin: “If humans and AI share a single property rights system, that would be ideal. Compared to humans and AI using completely separate financial systems (where the human system eventually has zero value), a unified financial system is clearly superior.”

On the future outcomes of civilization in a billion years

· Vitalik Buterin: “The next challenge is entering the ‘spooky era,’ where AI computing speeds are a million times faster than humans… I don’t want humanity to just passively enjoy a comfortable retirement; that would lead to a lack of meaning. I want to explore human augmentation and human-AI collaboration.”

· Guillaume Verdon: “If in 10 years we have a good outcome, everyone will have their own personalized AI, becoming a ‘second brain’… On a 100-year timescale, humanity will have generally achieved ‘soft fusion.’ In a billion years, we may have terraformed Mars, and most AIs will operate in Dyson spheres around the sun.”

On “Accelerationism”

Eddy Lazzarin: The term “accelerationism”—at least in the context of technological capitalism—can be traced back to the work of Nick Land and the CCRU research group in the 1990s. However, some argue that the origins of these thoughts can be traced back to the 1960s and 1970s, especially in relation to the theories of philosophers like Deleuze and Guattari.

Vitalik, I want to start with you: Why should we take the thoughts of these philosophers seriously? What makes the concept of “accelerationism” so important today?

Vitalik Buterin: Ultimately, we are all trying to understand this world and figuring out what is meaningful to do in it, which is a question humanity has grappled with for thousands of years.

However, I think a novel thing has happened in the past century: we must understand a rapidly changing world, sometimes even a world of rapid and destructive change.

The early phase looked something like this: before World War I, around 1900, people were very optimistic about technology. Chemistry counted as technology, electricity counted as technology, and that era was filled with excitement about what it all could do.

If you look at some movies from that time, like works featuring Sherlock Holmes, you can feel the optimistic atmosphere of that period. Technology was rapidly improving people’s living standards, liberating women’s labor, extending human lifespan, and creating many wonders.

However, World War I changed everything. That war ended in a devastating way, with people riding horses into battle but leaving in tanks; then World War II broke out, bringing even greater destruction. This war even birthed reflections like “I have become Death, the destroyer of worlds.”

These historical events prompted people to reflect on the costs of technological advancement and led to the emergence of thoughts like postmodernism. People began to try to understand: when previous beliefs are destroyed, what can we still believe in?

I think this reflection is not something new; every generation goes through a similar process. Today, we face similar challenges. We live in an era of rapid technological development, and this acceleration itself is also accelerating. We need to decide how to respond to this phenomenon: whether to accept its inevitability or to try to slow it down.

I believe we are in a similar cycle. On one hand, we inherit the thoughts of the past, while on the other hand, we are trying to respond to all this in new ways.

Thermodynamics and First Principles

Shaw Walters: Guill, could you briefly explain what E/acc is and why it is needed?

Guillaume Verdon: In a sense, E/acc (effective accelerationism) is a byproduct of my ongoing contemplation about “why we are here” and “how we got to where we are today.” What kind of generative process created us and pushed civilization forward? Technology has brought us to this point today, allowing us to sit in this room and have such a dialogue. We are surrounded by stunning technology, and we humans have emerged from a primordial “soup” of inorganic matter.

In a sense, there is indeed a physical generative process behind this. My daily work is to view generative AI as a physical process and attempt to implement it into devices. This “physics-first” way of thinking has always influenced my thought process. I hope to extend this perspective to civilization as a whole, viewing human civilization as a vast “petri dish” and inferring possible future developments by understanding how we have come to this stage.

This line of thinking led me to the physics of life, including the origins of life and emergence, as well as a branch of physics called “stochastic thermodynamics.” Stochastic thermodynamics studies the thermodynamic laws of non-equilibrium systems, which can be used to describe the behavior of living organisms, even our thoughts and intelligence.

More broadly, stochastic thermodynamics applies not only to life and intelligence but also to all systems that follow the second law of thermodynamics, including our entire civilization. For me, at the core of it all is an observation: All systems tend to become more complex through self-adaptation to extract energy from their environment while dissipating excess energy in the form of heat; this tendency is the fundamental driving force behind all progress and accelerated development.

In other words, this is an unalterable physical law, just like gravity. You can resist it, you can deny it, but it will not change, and it will continue to exist. Therefore, the core concept of E/acc is: since this acceleration is inevitable, how should we leverage it? If you examine the thermodynamic equations closely, you will find that an effect similar to Darwinian selection is at play—every information bit undergoes the test of selective pressure, whether it is a gene, meme, chemical, product design, or a policy.

This selective pressure filters based on whether this information is useful for its system. The term “useful” refers to whether these bits can better predict the environment, acquire energy, and dissipate more heat. Simply put, whether these bits contribute to survival, growth, and reproduction. If they help achieve these goals, they will be preserved and replicated.

From a physics perspective, this phenomenon can be seen as the result of the “Selfish Bit Principle.” That is to say, only those bits that can promote growth and acceleration will hold a place in future systems.

Therefore, I proposed an idea: Can we design a culture that embeds this “mind software” into human society? If we can do this, then the human groups that adopt this culture will have a higher probability of survival than others.

So, E/acc is not about destroying everyone. It is actually trying to save everyone. For me, it is almost mathematically proven that holding a “slow down” mindset is actually harmful. Whether it is individuals, companies, nations, or entire civilizations, choosing to slow down development will reduce their chances of surviving in the future. Moreover, I believe that spreading this “slow down” mindset, such as pessimism or doomsdayism, is not a moral act.

Shaw Walters: We have just mentioned many terms, such as E/acc, acceleration, deceleration. Could you break down these concepts a bit? Is the emergence of E/acc a response to certain cultural phenomena? What was happening at that time? Can you describe the background? What exactly is E/acc responding to? Can you describe the dialogue at that time and how these ideas were ultimately summarized into the concept of “E/acc”?

Guillaume Verdon: In 2022, I felt that the world seemed to be a bit pessimistic. We had just come out of the COVID pandemic, and the global situation was not optimistic. Everyone seemed a bit down, like lacking sunlight, and people generally felt pessimistic about the future.

In that atmosphere, “AI doomsdayism” somehow became part of mainstream culture. AI doomsdayism refers to the fear that AI technology could spiral out of control. It stems from the concern that if we create a system that is too complex, and our brains or models cannot predict its behavior, we will lose control, and this fear of uncontrollability will lead to uncertainty about the future, thus causing anxiety.

In my view, AI doomsdayism is actually a political utilization of human anxiety. Overall, I believe this doomsdayism has a huge negative impact, so I wanted to create a counter-culture to combat this pessimistic sentiment.

I noticed that algorithms on platforms like Twitter, and many other social media algorithms, tend to reward content that provokes strong emotions, such as “strong support” or “strong opposition.” This ultimately leads to polarization of opinions, and so we see opposing camps emerge, like effective altruism (EA) and effective accelerationism (e/acc) forming “mirror cults” of each other.

I wondered what the opposite of this phenomenon is? I concluded that the opposite of anxiety is curiosity. Rather than fearing the unknown, we should embrace it; rather than worrying about missing opportunities, we should actively explore the future.

If we choose to slow down technological development, we will incur enormous opportunity costs and might forever miss a better future. Conversely, we should depict the future with an optimistic attitude because our beliefs can influence reality. If we believe the future will be bleak, our actions may lead the world toward that bleak direction; but if we believe the future can be better and strive for it, we are more likely to realize such a future.

Therefore, I believe it is my responsibility to propagate an optimistic attitude, encouraging more people to believe they can make a difference for the future. If we can inspire more people to have hope for the future and take action to build it, then we can create a better world.

Of course, I acknowledge that sometimes my expressions online may seem a bit radical, but that’s because I want to provoke discussion and encourage people to think. I believe that it is only through dialogue that we can find the most suitable position to decide how we should act.

Acceleration, Entropy, and Civilization

Shaw Walters: The message of E/acc has always been very inspiring; for someone sitting in a room writing code, that positive energy is invigorating, and the message spreads very naturally. E/acc was clearly a response to the negative emotions prevalent in society at the time, but by 2026, I feel it is no longer what it was at the start. Marc Andreessen’s “Techno-Optimist Manifesto” has systematized some of these ideas and elevated them into a more macro-level commentary, much as Vitalik has done from his own perspective.

So Vitalik, I want to ask you: What do you think E/acc and D/acc represent? What are the main differences between them? What drives you to choose this direction?

Vitalik Buterin: Alright, let me start with thermodynamics. This is an interesting topic because we often hear the term “entropy” in different contexts, such as when discussing “heat and cold” in thermodynamics, or “entropy” in cryptography, which seem like completely different things. But in reality, they are fundamentally the same concept.

Let me try to explain it in three minutes. The question is: why can hot and cold gases mix together, yet never be separated back into “hot” and “cold”?

Let’s take a simple example: suppose you have two jars of gas, each containing a million atoms. The gas on the left is cold, and each atom’s speed can be written down in 2 bits; the gas on the right is hot, and each atom’s speed takes 6 bits.

To describe the state of the entire system, you need to know the speed of every atom. The cold gas on the left requires about 2 million bits, the hot gas on the right about 6 million bits, so fully describing the system takes 8 million bits of information.

Now we can reason by contradiction. Suppose you have a device that can completely separate hot from cold. Specifically, it takes the two jars of “half-hot, half-cold” mixed gas and moves all the heat to one side and all the cold to the other. From the standpoint of energy conservation this seems entirely reasonable, since the total energy hasn’t changed. So why can’t you do it?

The answer is that if you could really achieve this, you would actually be turning a system that “contains 11.4 million bits of unknown information” into one that “contains only 8 million bits of unknown information,” which is physically impossible.

This is because physical laws are time-symmetric: they run the same backward as forward. If this “magic device” truly existed, you could run the process backward in time to recover the original state. That would mean the device can compress arbitrary 11.4-million-bit strings into 8 million bits, which we know is impossible.

This also conveniently explains a classic physics problem—the feasibility of “Maxwell’s Demon.” Maxwell’s Demon is a hypothetical entity that can separate heat from cold, and the key to achieving this is that it needs to know those 3.4 million extra bits of information. With that extra information, it can indeed accomplish this seemingly counterintuitive task.
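The arithmetic in this thought experiment can be sketched as a toy calculation. The talk’s 11.4-million figure depends on how the speeds thermalize after mixing; the simpler equal-mixture accounting below (my assumption, not the talk’s) yields a different mixed total, but the point is the direction of the inequality:

```python
N = 1_000_000  # atoms per jar

# Separated jars: each cold atom's speed takes 2 bits to write down,
# each hot atom's speed takes 6 bits.
separated_bits = N * 2 + N * 6              # 8,000,000 bits of unknown information

# After mixing, an observer also loses track of which jar each atom came from.
# Assuming the cold and hot speed ranges don't overlap, per-atom entropy is
# H(jar label) + expected entropy of the speed given the label.
label_bits = 1.0                            # one fair coin flip per atom
per_atom = label_bits + 0.5 * 2 + 0.5 * 6   # 5 bits per atom
mixed_bits = 2 * N * per_atom               # 10,000,000 bits

# Unknown information strictly increases; a device that un-mixed the jars
# would amount to a lossless compressor of arbitrary data, which cannot exist.
assert mixed_bits > separated_bits
```

Under this simplified accounting the extra information Maxwell’s Demon would need is the gap between the two totals, which plays the role of the 3.4 million bits in the talk.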

So what’s the significance behind this? The core lies in the concept of “entropy increase.” First, entropy is subjective; it is not a fixed physical statistic but reflects how much unknown information we have about a system. For example, if I rearranged the distribution of atoms using a cryptographic hash function, the entropy of that system might seem very low to me because I know how it is arranged. But from the perspective of an external observer, the entropy is high. Therefore, when entropy increases, it actually means our ignorance of the world is increasing, and the unknown information is becoming greater and greater.

You might ask, then why can we still become smarter through education? Education teaches us more “useful” information rather than reducing our ignorance of the world. In other words, even though in some sense the increase in entropy means our overall understanding of the universe is decreasing, the information we possess is becoming more valuable. Thus, in this process, some things are consumed, but some are created. What we gain ultimately determines our moral values—we value life, happiness, and joy.

This also explains why we find a vibrant and beautiful human world more interesting than a Jupiter filled with countless particles. Although Jupiter has more particles and requires more bits of information to describe, the meaning we assign makes Earth seem more valuable.

From this perspective, the source of value lies in our own choices. And this raises the question: since we are accelerating development, what exactly do we want to accelerate?

Using a mathematical analogy to explain: suppose you have a large language model and randomly change the value of one of its weights to a huge number, like 9 billion. The worst-case scenario is that the model becomes completely unusable; the best-case scenario might be that only parts unrelated to that weight still function normally. That is to say, in the best case, you might end up with a model that performs worse; in the worst case, you would just get a bunch of meaningless outputs.
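The intuition can be made concrete with a minimal numerical sketch: a tiny random linear network stands in for the language model (the layer sizes are arbitrary; only the 9-billion value comes from the example):

```python
import numpy as np

rng = np.random.default_rng(42)

# A tiny two-layer linear network standing in for a large language model.
W1 = rng.normal(0.0, 0.5, size=(8, 8))
W2 = rng.normal(0.0, 0.5, size=(8, 8))

def forward(x, A, B):
    return x @ A @ B

x = np.ones((1, 8))
baseline = forward(x, W1, W2)          # outputs of modest magnitude

# Blindly "accelerate" one component: set a single weight to 9 billion.
W1_broken = W1.copy()
W1_broken[0, 0] = 9e9
perturbed = forward(x, W1_broken, W2)

# One exploded term now dominates every output: effectively meaningless noise.
assert np.abs(baseline).max() < 100
assert np.abs(perturbed).max() > 1e6
```

Because every output channel mixes the broken hidden unit, the indiscriminate boost doesn’t just degrade one part of the model; it drowns out everything else, which is the point of the analogy.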

Therefore, I believe that human society resembles a complex large language model. If we accelerate a certain part blindly and indiscriminately, the final outcome may be that we lose all value. So the real question is: how do we consciously accelerate? Just like the “narrow corridor” theory proposed by Daron Acemoglu, although different social and political contexts may vary, what we need to think about is how to selectively promote progress under a clear guiding goal.

Guillaume Verdon: The earlier explanation using gas to clarify the concept of entropy is very interesting. In fact, the irreversibility of physical phenomena fundamentally lies in the second law of thermodynamics. In simple terms, when a system releases heat, its state cannot return to its original form. Because, probabilistically, the likelihood of the system progressing forward is far greater than that of regressing back, and this gap increases exponentially with the dissipation of heat.

In a way, this is like leaving a “dent” in the universe. This “dent” can be likened to an inelastic collision. For example, if I hit the ground with a bouncy ball, it will bounce back, which is elastic. But if I smash a piece of clay onto the ground, it will flatten and keep that shape; this is inelastic, almost irreversible.

Essentially, every piece of information is “struggling” for its existence. In order to persist, each piece of information needs to leave more indelible traces about its existence in the universe, just like making a bigger “dent” in the universe.

This principle can also be used to explain how life and intelligence emerge from a “soup of primordial material.” As the system becomes more complex, it contains more information bits. Each piece of information can tell us something. The essence of information is the reduction of entropy, because entropy represents our ignorance, while information is a tool for reducing ignorance.

Eddy Lazzarin: I want to know what E/acc is.

Guillaume Verdon: E/acc is fundamentally a “metaculture prescription.” It itself is not a culture, but tells us what we should accelerate. The essence of acceleration is the complexity of matter, as this allows us to better predict our environment. Through this complexity, we can enhance our autoregressive predictive capacity and capture more free energy. This is also related to the Kardashev scale, which we climb by dissipating heat.

Deep Tide TechFlow Note: The Kardashev scale, proposed in 1964 by Soviet astronomer Nikolai Kardashev, measures a civilization’s technological advancement by the scale of energy it can harness. It has three types: Type I (planetary energy), Type II (stellar energy, such as a Dyson sphere), and Type III (galactic energy). As of 2018, humanity sits at roughly level 0.73.
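For reference, the 0.73 figure comes from Carl Sagan’s continuous interpolation of the scale, which is easy to compute directly (the ~20 TW estimate for humanity’s present power use is an approximation):

```python
import math

def kardashev(power_watts: float) -> float:
    """Carl Sagan's continuous interpolation: K = (log10(P) - 6) / 10, P in watts."""
    return (math.log10(power_watts) - 6) / 10

k_now = kardashev(2e13)      # humanity at roughly 20 TW -> about 0.73
k_type1 = kardashev(1e16)    # Type I: a planet's full power budget -> 1.0
k_type2 = kardashev(4e26)    # Type II: roughly one star's output (Dyson-sphere scale)
```

The logarithm is what makes each whole step on the scale a ten-billion-fold jump in energy use, which is why climbing it is treated as the long-run measure of a civilization.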

From first principles, this is precisely why the Kardashev scale is considered the ultimate indicator of a civilization’s development level.

Eddy Lazzarin: Using physics and entropy as metaphors to explain certain phenomena is actually a tool to describe the reality we directly experience. For example, our economic production capacity is accelerating, technology development is also accelerating, and these accelerations bring many consequences, right? This is my understanding of “acceleration.”

Guillaume Verdon: Essentially, regardless of how the boundaries of a system are defined, it becomes increasingly adept at predicting the surrounding world. Through this predictive capability, it can acquire more resources for its own survival and expansion. This pattern applies to companies, individuals, nations, and even the entire Earth.

If we extend this trend, the result is: we have found a way to convert free energy into predictive capacity—namely AI. This capability will drive our expansion and enhancement on the Kardashev scale.

This means we will obtain more energy, more AI, more computational power, and more other resources. Although we are dissipating entropy (disorder) into the universe, we are also creating order. In fact, we are gaining “negative entropy,” which is the opposite of entropy.

Sometimes people might ask: If entropy is increasing, why don’t we just destroy everything? The answer is that doing so would actually stop the generation of entropy. Life is the more “optimal” state; life is like a flame chasing energy, becoming increasingly intelligent in seeking sources of energy.

The natural evolutionary trend is that we will leave the Earth’s gravitational well, seek out other pockets of free energy in the universe, and use that energy to self-organize into more complex and intelligent systems, ultimately expanding to every corner of the universe.

This is, in fact, the ultimate goal of e/acc, and it broadly aligns with the “Muskian” vision of cosmism and expansionism.

E/acc provides a fundamental guiding principle. Its core idea is: whatever policies or actions you take in this world, as long as they help us continually rise on the Kardashev scale, that is a worthy goal and also the direction of our lives.

E/acc is a meta-heuristic way of thinking that can be used both to design policies and to guide personal lives. For me, this way of thinking constitutes a culture in itself. It has a highly “meta” narrative because it is envisioned to be applicable under any time and condition. It is a culture with high universality and long-term applicability; in other words, it is a “Lindy culture” designed after deep consideration.

Core Divergence

Shaw Walters: For you, the content discussed here has deeper significance. It’s almost like a mathematically self-consistent “spiritual system.” For those who have not found a substitute belief after “God is dead,” such a system seems to fill the spiritual void, bringing comfort and hope. But at the same time, we cannot ignore the practical significance of this matter—it is happening now. I think this is also the focus Eddy wants to explore.

Vitalik, I noticed you raised insightful points about some real-world issues of D/acc in your blog. When we have the opportunity, we should delve deeper into this topic—I feel that one day we should lock you two in a room for a big discussion on quantum issues.

Vitalik, what inspired you? And what do you think E/acc and D/acc are?

Vitalik Buterin: To me, D/acc means—its abbreviation is “decentralized defensive acceleration,” but it also encompasses the connotation of “differentiation” and “democratization.” In my view, the core idea of D/acc is that technological acceleration is extremely important for humanity, and this should be our baseline goal.

Even if we look back at the 20th century, although technological advancement brought many problems, it also brought countless benefits. For example, look at human life expectancy: even with wars and turmoil, the average life expectancy in Germany in 1955 was still higher than it was in 1935, indicating that technological advancement has improved our quality of life in many areas.

Today, the world is becoming cleaner, more beautiful, healthier, and more interesting. It not only supports more people but also enriches our lives; these changes are very positive for humanity.

However, I think we must recognize that these advances are not accidental but the result of human intention. For instance, in the 1950s, air pollution was severe, and smog was prevalent. People realized this was a problem and took measures to address it. Today, at least in many places, the smog issue has been greatly alleviated. Similarly, we faced the problem of ozone layer depletion and achieved significant progress through global cooperation.

Additionally, I want to add that in this era of rapid technological and AI development, I see two main types of risks.

One type of risk is multipolar risk. This type of risk refers to the fact that, as technology becomes more widespread, more people may use it to do extremely dangerous things. For example, one can imagine an extreme scenario—technology development enables “anyone to easily obtain nuclear weapons like buying something at a convenience store.”

Then there is the concern about AI itself. We need to seriously consider the possibility that AI may develop a certain autonomous consciousness. Once its capabilities become powerful enough to act without human intervention, we cannot predict what decisions it will make, and this uncertainty is concerning.

There is also a unipolar risk. I believe that a single AI is a potential threat. Worse still, the combination of AI and other modern technologies could lead to an inescapable permanent dictatorship. This prospect makes me very uneasy and has always been a focus of my attention.

For instance, in Russia, we can see that technology has brought both progress and risks. On one hand, living conditions have indeed improved; on the other hand, the freedom of society has declined. If someone tries to protest, surveillance cameras will record their actions, and then someone may come to arrest them a week later during the night.

The rapid development of AI is accelerating this trend of power centralization. So for me, what d/acc truly aims to do is outline a path forward that continues this acceleration while genuinely addressing both types of risk.

Comparing e/acc and d/acc

Eddy Lazzarin: So what you mean is that d/acc pays attention to certain categories of risk that are ignored or underemphasized in the e/acc framework, right?

Vitalik Buterin: Exactly. I believe that technological development indeed comes with multiple risks, and these risks will manifest differently in different contexts and world models. For example, in the context of accelerating or decelerating technological development, the priorities of different risks may change.

But I also believe we can take many measures to effectively address these risks, regardless of which category they belong to.

Guillaume Verdon: I believe that both Vitalik and I are deeply concerned about the excessive centralization of power that AI might bring. This is also one of the core tenets of the e/acc movement, especially in its early stages: it advocates for open source precisely in order to decentralize the power of AI.

We are concerned that the concept of AI safety might be misused. It is so appealing that certain power-seeking institutions may use it as a tool to consolidate control over AI and try to convince the public that for their safety, ordinary people should not have access to AI.

In fact, if there is a significant cognitive gap between individuals and centralized institutions, the latter will have complete control over the former. They can construct a complete model of your thinking patterns and effectively guide your behavior through prompt engineering and other means.

Therefore, we hope to make the power of AI more symmetrical. Just as the Second Amendment of the U.S. Constitution was intended to prevent the government from monopolizing violence so that the people could balance it when the government oversteps, AI also needs similar mechanisms to prevent excessive concentration of power.

We need to ensure that everyone has the capability to own their own AI models and hardware, allowing this technology to spread widely and achieve power decentralization.

However, I think it is unrealistic to completely halt AI research and development. AI is a foundational technology, and one might even say it is a “meta-technology”—a technology that drives the development of other technologies. It gives us greater predictive capabilities, can be applied to almost any task, and significantly enhances efficiency. One could say AI not only drives acceleration itself but also drives further acceleration of acceleration.

The essence of this acceleration is complexity: things become more efficient, and life becomes more convenient. One reason we feel happy is that the continuity of our existence, and of our information, is assured. This "sense of happiness" can be viewed as an internal biological estimator of whether our existence can continue.

From this perspective, I believe the hedonistic utilitarian framework of effective altruism, i.e., "maximizing happiness," may not be the best lens. Instead, I prefer an objective standard for measuring progress, which sits at the core of the e/acc framework. It asks: from an objective standpoint, are we as a civilization making continuous progress? Are we achieving scalable leaps?

To achieve this scalability, we need to promote complexity and continuously improve our technologies. However, as Vitalik mentioned, if the power of AI is overly centralized in the hands of a few, it is harmful to overall growth; if this technology can be widely decentralized, the outcomes will be much better.

In this regard, I believe we are highly aligned.

Open Source, Open Source Hardware, and Local Intelligence

Shaw Walters: I feel that your earlier discussion touched on some very important common ground. Both of you clearly support open source. Vitalik has contributed a lot of open-source code under the MIT license, although I know you later developed some new views on the GPL license.

Now, you are not only supporting open-source software but also promoting open-source hardware. Although these two areas were relatively independent in the past, we are now seeing them gradually merge.

So I am curious: how do you each view "open weights" and "open-source hardware"? Do e/acc and d/acc differ in this regard? What are your views on the future direction, and where do you disagree?

Guillaume Verdon: In my view, open-source can accelerate the process of hyperparameter search. It allows us to collaborate in a crowdsourced manner to explore the design space together. This is exactly the benefit that acceleration brings: we can develop better technology, stronger AI, and even use AI to design more advanced AI, and the speed of this entire process is also continuously increasing.

I believe that spreading knowledge is essentially spreading power, and spreading knowledge of "how to create intelligence" is particularly important. We do not want to see a possibility that was once discussed within the last U.S. administration: attempting to "put the genie back in the bottle." While that would not directly prohibit linear algebra, it would amount to restricting mathematical research related to AI. To me, this is like prohibiting people from studying biology; it would be a huge regression.

Knowledge has already spread, and there is no going back. If the U.S. tries to prohibit AI-related research, then other countries, third-party organizations, or even regions with looser regulations will continue to push this technology forward. As a result, the global capability gap would actually widen, and the risks would be even greater.

Therefore, we believe that one of the biggest risks is the “capability gap.” The only way to reduce this risk is to ensure that AI is decentralized.

Whenever I hear that kind of "AI doomsday" narrative, such as "AI is dangerous, only we have the ability to manage it, so you should trust us," I become very skeptical. Even if these people are well-intentioned, if power becomes excessively centralized, they may eventually be replaced by those who seek it. We have warned about this for years, and now it is really starting to happen: as we saw this week, Dario (CEO of Anthropic) is learning some real political lessons.

Vitalik Buterin: I generally categorize the potential risks in technological development into two types: unipolar risks and multipolar risks.

Unipolar risks are well illustrated by Anthropic's case. They were called out because they refused to allow their AI technology to be used to develop fully automated weapons or for mass surveillance of Americans, which suggests that the government and military may indeed intend to use these technologies for large-scale monitoring. The further development of surveillance technology will have far-reaching impacts: it could make the strong even stronger, weaken the space for diverse voices in society, and compress…
