Yuanfan's Reading Notes and Real Trading, Day 99. Total profit and loss: 37.09%. I have become a firm believer in the AI "Arrival Faction".
Today marks the 99th day of my real trading record. [Taoguba]
End-of-day assets: ¥411,268.73. Total profit/loss: 37.090%. Today's profit/loss: ¥11,712.26.
Excess return versus the CSI 300: 36.664%; versus the CSI 2000: 23.233%.
All of the accounts recovered strongly today; the decline since the start of the US-Iran conflict has been almost fully retraced.
The index itself was not particularly strong today. The Shanghai Composite closed at 4124.19, above the 20-day moving average but only barely above the 5-day moving average of 4124.10. Generally speaking, with 78% of stocks rising and the index closing back above the 5-day line, the current momentum is still relatively strong, even if the index may have been deliberately nudged above that line.
However, for me, it’s too early to conclude that the impact of the US-Iran conflict has ended.
The best scenario is that by Monday’s close, the Shanghai Composite remains above the 20-day moving average, especially above the 5-day line. Whether there’s a long upper shadow, a long lower shadow, a narrow trading range, or a small doji on Monday, as long as the closing price meets the criteria, I will consider this crisis resolved and continue to operate based on the risk-free forecast before March 20.
If Monday’s close is below the 20-day moving average, I will reduce positions further on Monday night and likely reverse trading on Tuesday.
Of course, tonight’s screening results suggest I might increase some positions, as the current trend is highly likely to continue, though not 100% confirmed yet.
My feeling is that next week, the account should continue to hit new highs. ^_^.
Today one of my holdings hit its daily limit near the close: $Morning Light New Material (sh605399)$, a stock I had held for only one day.
That’s all for now.
Note:
Currently, the real trading display is based on a quantitative trading system. Normally, I buy and sell six stocks each day, focusing on small-cap stocks.
I participate in the Taoguba real trading competition, and the system used in the competition is the same as what I show here. Data are identical.
Below is a screenshot of the current account:
Reading notes:
(I need to find those who can help me enter the future, and Mr. Zhang Xiaoyu is definitely one of them. So I need to regularly follow his latest speeches and interviews. Below is an excerpt from his speech at Tencent Research Institute.)
Zhang Xiaoyu: Why have I become a steadfast AI “Arrival Faction”?
Friends who have read "The Three-Body Problem" will know that the "Arrival Faction" has two defining traits: first, they do not stand on humanity's side; humanity is to be destroyed. Second, the Lord does not care about humans, because humans are too weak and the Lord is too powerful.
Today we are talking about an AI Arrival Faction. On the first point, I am unlikely to abandon humanity's side, since there is no evidence yet that AI would accept me; on the second, I do believe AI's strength has indeed reached the level of a "Lord".
The reason AI is so powerful is simple: two words—mathematics. When studying the history of technology, we mainly start from mathematical logic to understand how technology impacts society. So today, I want to share three mathematical equations I’ve been pondering.
The first is called “Human Equivalent.”
Human Equivalent = AI productive intelligence efficiency / Human productive intelligence efficiency ≈ 1000:1
Director Si Xiao also mentioned the human equivalent earlier. We know about nuclear bomb yields—how many tons of TNT one atomic bomb equals. Simply put, the human equivalent is estimating how much intelligence output a large model can produce compared to humans. I estimate it’s not less than a thousand times.
My giving this speech is itself a form of intelligence output. If we view humans as machines that produce intelligence, my output rate is about 200 tokens per minute; talking nonstop, I can produce roughly 200,000 tokens a day at most. A large model can output 1 million tokens, and whether that takes a second or a minute hardly matters; what matters is cost, and 1 million tokens today costs about one yuan. In other words, one yuan buys the equivalent of about five days of my talking. A person earning 100 yuan a day can barely survive in Shenzhen, yet the model delivers those several human-days of output for one yuan. That is where the thousand-fold estimate comes from.
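The speaker's back-of-envelope can be checked directly. Below is a minimal sketch using only the rough figures from the speech (200,000 tokens per human-day, one yuan per million model tokens, a 100-yuan day wage); with these exact inputs the cost ratio comes out to about 500:1, the same order of magnitude as, though smaller than, the speaker's 1000:1 estimate:

```python
# Back-of-envelope check of the "Human Equivalent" estimate.
# All figures are the speech's own rough assumptions, not measured data.

human_tokens_per_day = 200_000      # talking nonstop, ~200 tokens/minute
model_tokens_per_yuan = 1_000_000   # ~1 yuan per million tokens
human_wage_yuan_per_day = 100       # bare-survival day wage in Shenzhen

# Cost for a human to produce one million tokens of speech:
human_days = 1_000_000 / human_tokens_per_day      # 5 days of talking
human_cost = human_days * human_wage_yuan_per_day  # 500 yuan

# Cost for the model to produce the same million tokens:
model_cost = 1_000_000 / model_tokens_per_yuan     # 1 yuan

ratio = human_cost / model_cost
print(f"human cost: {human_cost} yuan, model cost: {model_cost} yuan")
print(f"cost ratio ≈ {ratio:.0f} : 1")
```

Raising the wage assumption above subsistence level pushes the ratio toward the speech's 1000:1 figure, which is why the speaker calls it "not less than a thousand times."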
This is a very simple mathematical relationship. We don’t need fancy words—just ask business owners if they’re still calculating ROI and input-output ratios. As long as they’re using this mathematical formula, you know what’s coming next.
Overall, I think that when doing technological history research and social observation, a basic principle is: once the mathematical formula is valid, other things become less important. For example, 20 years ago, when Steve Jobs first introduced the smartphone, a certain mathematical relationship was already established. You could access mobile internet at about one-tenth of the previous cost, and this device accompanied you over ten hours a day, capturing a vast amount of previously unrecorded leisure time. The social and economic structure over the next 20 years revolved around this mathematical relationship.
Today, the same logic applies. By 2025, we will have technology comparable to PhD-level intelligence at about one-thousandth of the human cost. For at least the next 20 years, the social and economic structure will develop around this mathematical relationship. From this, a few basic judgments follow:
First, this is a supply-side reform. I agree with Kazak’s logic: IP and channels are becoming increasingly valuable today. It’s not that using AI to create new things will be immediately recognized; quite the opposite—because information is so cheap and abundant, you need trusted channels and traffic centers to access these new things.
Second, AI for Science will greatly amplify the power of the top 1%. Different levels of intelligence utilization efficiency exist among people. Those with higher efficiency can better define their work. Once work is well-defined, a decade of experience can be summarized into a skill, which is then accumulated. AI, holding this skill, can repeatedly perform tasks.
Third, our culture, society, and emotional relationships are essentially expressions of intelligence. When you’re in love, don’t you hope your partner can understand your jokes? When communicating emotions, don’t you want richer expressions? Without culture, you can only say “me too”; with culture, you can quote poetry, lyrics, or even write articles to record emotional moments—these are all expressions of intelligence. As long as it’s an expression of intelligence, AI already surpasses 95% of people.
Last year, I held a workshop with several AI entrepreneurs and scholars. One example left a deep impression: a 70s-born entrepreneur helping the elderly write memoirs. This is an emotional companionship business. Elderly people want to leave memoirs not for fame or profit but to leave something behind. In the past, they needed journalists; now, AI can chat with them for three hours, and a book is produced in a week—greatly improving efficiency and reducing costs. But the most important thing he learned is: today’s elderly are very lonely. Chatting with them for three hours makes you the person who understands them best—possibly even more than their children.
We often say that humans still have irreplaceable emotions in front of AI, but let’s be honest: when was the last time you talked to your parents for three hours? Are you truly better than AI at emotional companionship? Many social phenomena, emotional relationships, and personal values are essentially expressions of intelligence. Now, intelligence can be produced at a very low cost.
Humans are animals capable of producing intelligence, but AI has made intelligence production so cheap. Interestingly, this isn’t necessarily a bad thing; AI can also bring good results. It provides massive intellectual assistance, which can promote scientific progress and usher in a new golden age of technology. Why not open a new era? Throughout history, every major technological leap has led to a surge in productivity and material happiness. Why can’t AI do the same? The reason many worry about AI causing unemployment today is because of the interdependent relationship between technology and social structure. I’ve been thinking more deeply about this recently and found a way to understand it:
The second mathematical equation comes from "Capital in the Twenty-First Century": α = r × β, where β = K / Y. Here K is capital, r is the rate of return on capital, Y is total annual national income (roughly GDP, but not exactly), and α is capital's share of income. It is essentially an accounting identity. Over the past 200 years, the return on capital has been consistently higher than GDP growth, for two reasons: first, capital investment drives technological progress and productivity; second, capital replaces labor through technology, capturing a share that used to belong to labor.
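Piketty's identity is simple enough to write out. A minimal sketch with purely illustrative numbers (not figures from the speech): with a capital/income ratio of 6 and a 5% average return, capital takes 30% of income:

```python
# Piketty's first fundamental law of capitalism: alpha = r * beta,
# where beta = K / Y. Numbers below are illustrative only.

K = 6_000  # total capital stock (say, billions)
Y = 1_000  # total annual national income (same units)
r = 0.05   # average rate of return on capital

beta = K / Y       # capital/income ratio: 6.0
alpha = r * beta   # capital's share of income: 30%

print(f"beta = {beta}, capital share alpha = {alpha:.0%}")
```

The speech's point is that when β (and hence α) climbs too high, returns come under pressure and capital leans harder on squeezing labor's share.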
Interestingly, this cycle repeats periodically. Early in each cycle, when capital’s share of total income is low, returns are high. Capital patiently waits for technological breakthroughs and diffusion, eventually becoming a widespread technology, with capital growth and social welfare rising together. But in the late stage of the cycle, when capital’s share is too high, returns decline, and it tries to suppress labor’s share as much as possible.
A real-world example: OpenAI’s current valuation is publicly around $800 billion, but U.S. investment firms estimate it’s about $1.5 trillion. This valuation expects OpenAI to generate around $150–200 billion annually by 2030. Imagine the CEO explaining to investors: this funding will push technological progress, medicine, microbiology, and help humanity enter the next space age or fusion era… Investors might say, “Are you kidding? Show me your calculations—why do you think you’ll make $200 billion in 2030?”
If they weren’t under pressure for capital expenditure and returns, they could wait for natural development, but they’re already set with KPIs: by 2030, they need $200 billion in revenue. What can they do?
They'll run a different calculation with their investors: there are currently about 30 million programmers worldwide earning an average of $60,000 a year, a wage market of roughly $1.8 trillion. If Vibe Coding replaces 90% of those programmers, that is about $1.6 trillion; capturing 10% of it yields about $150 billion. Does this story hold? Yes. Great: then for the next four years, focus on replacing 90% of programmers with Vibe Coding.
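The arithmetic in that pitch is easy to reproduce. A minimal sketch using the speech's own round numbers (30 million programmers, $60,000 average salary, 90% replacement, 10% capture); the exact result is about $162 billion, which the speaker rounds down to $150 billion:

```python
# The revenue back-of-envelope for replacing programmers with
# "Vibe Coding". All inputs are the speech's rough figures.

programmers = 30_000_000
avg_salary = 60_000                 # USD per year

market = programmers * avg_salary   # total wage market: $1.8 trillion
replaced = market * 0.90            # share replaceable: ~$1.62 trillion
revenue = replaced * 0.10           # 10% capture: ~$162 billion

print(f"total wage market:      ${market / 1e12:.2f}T")
print(f"replaceable share:      ${replaced / 1e12:.2f}T")
print(f"revenue at 10% capture: ${revenue / 1e9:.0f}B")
```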
Similarly, if we track the ratio of capital to total income (K/Y) from 1815 onward: it rises through the first Industrial Revolution, peaks around 1914, and then come the world wars; some argue the wars happened precisely because capital's share had grown too high and a reset was needed. Today the ratio of total social capital to total income exceeds its 1914 level. That is why short-term anxiety is so widespread: everyone understands this simple capital-return formula. Even with the best intentions, this structure will likely push the leading large-model companies, with their huge capital expenditures, to replace labor. Technology's impact on society is embedded in our political and economic structures; it is not some transcendent force.
What to do? Reset. The future reset could take many forms, including geopolitical crises. Without reflecting on our existing social, political, and economic systems, the tendency to replace large portions of labor—triggering a reset—may very well happen. If we want to do good, we must fundamentally rethink our situation, social structure, and how we organize our economy and lives.
Without this understanding, the AI revolution remains incomplete. Just as the 19th-century liberal theories showed that the Industrial Revolution wasn’t truly finished without new ideas, the same applies here.
The third mathematical equation comes from a 2024 paper. Humans receive about 1 billion bits of sensory information per second from the environment—smells, tactile, visual, auditory inputs—roughly 1 billion bits per second; but conscious thought processes only about 10 bits per second. That’s a difference of 10^8. Therefore, some expect brain-machine interfaces to provide a high-bandwidth portal to enhance cognition. But this probably won’t happen because our brains can’t process meaning at that speed; if it did, the brain might overexcite and shut down. Of course, brain-machine interfaces still have huge medical value, but probably not for cognitive enhancement.
As a historian, I feel a special resonance with this mathematical principle because it’s closely related to studying history. Sometimes we say that studying history is about writing down one in ten million facts—many things happen every day, thousands of events, but only one or a few are worth recording in your yearly experience. The things you record in a year, only a tiny fraction—perhaps one in ten thousand—are preserved in history books and read by us. Our understanding of history is shaped by a small number of historians capable of creating cognitive frameworks; most people contribute only minor additions within that framework, not truly building the structure.
So, when we think certain modes of capitalism or social structures can’t change, it’s often because we’re used to living in a tiny fraction—one in a hundred thousand—of possible worlds, without considering other possibilities. If AI truly can bring massive intellectual innovation, I look forward to it helping us discover vast, rich wisdom—not just knowledge—the methods of building houses, not just the bricks.
I’ve been using AI in this direction myself, which is why I find it so effective. I have a skill: bringing together thinkers from human history—Plato, Aristotle, Confucius, Mencius…—to hold virtual meetings and discussions. They debate many issues, and we derive many insights. Improving my own knowledge absorption is important, and AI does this well, but perhaps more crucial is that it provides insights, discusses problems with you—shaping how you think about building your “house,” rather than just doing manual labor or stacking bricks.
At the same time, I believe I have entered a realm of dogmatism: an information echo chamber. In the era of recommendation algorithms, we had personalized screens, with algorithms (a basic form of AI) recommending content you like. Now, in the AI era, we will have personalized "screens" of another kind: AI already possesses all human knowledge, and it can handle whatever you ask. You are in a loop with it, and the smoother the loop, the better the output.
We no longer need to worry about crafting the perfect prompt; as you talk more with AI, your methodology naturally evolves into prompts, then skills, fueling a flywheel of improvement. This flywheel makes interactions very comfortable: you give sincerity, it responds sincerely; you give wisdom, it responds with wisdom. You can have it play Socrates, questioning you: “What did you do today? How’s your thinking? Are you a bit better than yesterday?” It’s a paradise—imagine everyone having an Academy of Athens, every person a Socrates. Today, that’s almost achievable.
But do you still want to step out of that paradise? If not, what’s your purpose? We’re entering a fascinating era: on one hand, technological progress makes us animals overloaded with information; on the other, we might need technological progress to resist overload.
I have a theory I call the "Information Carbohydrate Theory": before humans invented fertilizer, the carbohydrate supply was very low. After the invention, carbohydrate availability skyrocketed, leading to many diabetics, because our bodies, still adapted to scarcity, were not built for such high intake. After a generation or two of diabetes, we learned to control sugar consumption. Similarly, before the mobile internet we never had such a high-density information supply. After a generation or two of information overload, just as we now use drugs like semaglutide to control blood sugar, we may need technological means to regulate our information intake.
Some friends are already exploring this. Kevin Kelly, whom I met in Shanghai, proposed the “Second Self Theory”: AI is closer to a person’s self than anyone else. So, whether humans can coexist harmoniously and form a positive cycle with AI will determine whether humanity can smoothly pass through the AI era.
Another friend, at the MIT Media Lab, is the youngest associate professor in its history. He studies cyberpsychology: our discomfort with AI and technology, and how AI can help us address it.
He’s working on interesting projects, like “The Future You,” which simulates you 20 years from now to discuss life choices. We all face many decisions at 20—whether to continue academic pursuits, move to another city with someone we love, or take a new job. But by 40, many of those concerns seem less important. AI’s role is to provide information, but not just as raw data—it’s meant to help you live better.
He’s also developing an AI cognitive assistant glasses project: in this age of information explosion, for example, when you see a news story that makes you anxious or angry, wearing these glasses could tell you if it’s fake news, or if the logic is flawed, or if there’s bias. You can discuss different perspectives and understandings with it.
This is like my earlier introduction of Aigora—what if everyone had an AI-powered “Jixia Academy” or “Academy of Athens”? Often, what you need isn’t just an outcome but the process of discussion—seeing how different minds think, then choosing how to act. This helps you understand what wisdom is, not just knowledge. It’s a timeless philosophical question: knowledge alone isn’t enough for happiness.
In this era, I see technology developing in two directions:
First, after experiencing the “Information Carbohydrate Age,” we all know about echo chambers, overload, and recommendation algorithms. I believe that if AI is used well, it can help you resist these issues.
Second, it’s like a paradise of dogmatism or a very comfortable hamster ball—inside, you feel self-sufficient, discussing with the best philosophers, refining your cognition, doing meaningful research. If you focus on outcomes, you can achieve very good results inside this paradise.
But the more capable you become, the easier it is to lose your sense of purpose. For example, given current employment trends, many people nostalgically recall the prosperity of the boom years, while pianists, philosophers, and anthropologists end up delivering food. Many wonder: what does everything I have learned mean if I end up delivering food? Sometimes asking about outcomes yields no answer. But ask instead, "What kind of person am I?" Perhaps there is a soul similar to yours out there, seemingly unremarkable, just another worker, who nevertheless loves Tchaikovsky and has made animations about him. Your purpose is to find that soul you can connect with.
Today, invited by Tencent, many friends ask me: what should the new social platform in the AI era look like? Many are already doing AI social—creating group chats where your AI chats for you. What’s the point? If I had to answer, the real social in the AI era is about pulling you out of this virtual world.
I imagine it as a bracelet. For example, this afternoon, you’re sitting here relaxed, with no fixed plans. You buy a coffee, willing to chat with strangers. You lightly tap your bracelet, which lights up blue—meaning: you’re open to talking. Someone else sees the blue light, understands it’s an open signal, and sits down with a coffee. You start chatting, and discover that this seemingly ordinary worker recently volunteered at a mental health hotline in Shenzhen, saving three attempted suicides; or that this woman, a professional executive, recently helped two children addicted to emotional relationships with AI to reconnect with real people.
That’s your purpose—what life should look like in this new era.