I recently read AI 2027, a forecast by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean, which you can read here.
It was a great read for my flight between Zurich, Switzerland, and Larnaca, Cyprus. I have to give kudos to Edelweiss Air, because the food they served was amazing, way better than Lufthansa, which didn't even bother with coffee on a similar route a couple of days ago.
It's a small detail, but it got me thinking about how we notice differences in unexpected places. That's how I approach the future of artificial intelligence, work, and society: not from the usual angles, but from my own path, one shaped by building robots in 2005 in Colombia, writing the book Beat the Robots in 2017 in the UK, and designing AI algorithms to figure out how to give micro-loans of less than $50 to people at the bottom of the pyramid in Latin America. Add to that my current PhD in asset tokenization in Cyprus, where I'm digging into how digital currencies might shift the world in the next decade, and you've got a perspective that's a mix of industry grit and academic curiosity.
My take isn't just another forecast. It's about looking at the world differently. When I read papers or essays on AI like Daniel's, which I respect, they often project from a specific lens, usually a Western or tech-hub one. That's fine, and AI 2027 does a great job sparking conversation, which I think is its real strength. But I want to push that conversation further, maybe even stir some controversy, because I see things from a place most don't.
Back in 2005, I was building robots with the IEEE Computer Society: clunky, noisy things, programmed in assembly, that could barely move without a glitch. Even that didn't prepare me for 2017, when I wrote Beat the Robots and was already arguing that AI wouldn't just replace jobs; it'd force us to rethink what work even means. Take Latin America, where I built a micro-loan algorithm with NaaS. The bottom of the pyramid isn't some abstract concept; it's people selling fruit on the street, fixing bikes, or weaving baskets. My AI didn't judge them by credit scores (they don't have any); it looked at patterns, like how often they sold or who they traded with, tiny signals banks ignore. With loans as small as $33, they'd fix their cart or buy more supplies, and suddenly their income increased. AI didn't take their jobs; it gave them a boost. The future of work isn't all doom and automation; it's also about amplifying what people already do, especially in places the tech world forgets.
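To make that concrete, here's the kind of signal-based scoring I mean. This is a toy Python sketch, not the model I actually shipped; the feature names, weights, and thresholds are all made up for illustration:

```python
# Illustrative sketch of signal-based micro-loan scoring (NOT the real
# production model): score applicants on behavioral patterns instead of
# credit history. Features, weights, and cutoffs are hypothetical.

def score_applicant(sales_per_week, distinct_trade_partners, weeks_active):
    """Return a 0-1 score built from tiny behavioral signals banks ignore."""
    consistency = min(sales_per_week / 20.0, 1.0)        # how often they sell
    network = min(distinct_trade_partners / 10.0, 1.0)   # who they trade with
    tenure = min(weeks_active / 52.0, 1.0)               # how long they've operated
    return 0.5 * consistency + 0.3 * network + 0.2 * tenure

def loan_offer(score, max_loan=50.0):
    """Map a score to a small loan amount; decline below a minimum threshold."""
    return round(max_loan * score, 2) if score >= 0.3 else 0.0
```

A fruit seller with fifteen sales a week, six regular trade partners, and forty weeks of activity would qualify for a loan in the $30-40 range here, with no credit bureau involved. The point isn't the exact formula; it's that the signals exist and are scoreable.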
Now, let’s talk about digital currencies and asset tokenization, my PhD playground. In ten years, I see a world where value moves differently. Imagine a farmer in Peru tokenizing his land, not selling it, but splitting it into digital chunks people can invest in. He keeps farming, but now he’s got capital to buy better seeds. Or a small business in Colombia turning its inventory into tokens, so it doesn’t need a bank loan with crazy interest. AI will drive this, analyzing risks, matching buyers and sellers, making it work. It’s not just crypto hype; it’s about rewriting who gets access to money. The challenge? Governments and banks won’t like it. They’ll fight to keep control. That’s where the next decade gets messy.
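Here's a toy sketch of what that fractional tokenization looks like in code. To be clear, this is illustrative only: a real system would live on a blockchain with a legal wrapper binding the tokens to the asset, and every name and number below is invented.

```python
# Toy sketch of fractional asset tokenization. Illustrative only; a real
# deployment needs a blockchain ledger and a legal wrapper for the asset.

class TokenizedAsset:
    def __init__(self, name, valuation, total_tokens):
        self.name = name
        self.valuation = valuation
        self.total_tokens = total_tokens
        self.holders = {"owner": total_tokens}  # owner starts with every token

    def token_price(self):
        return self.valuation / self.total_tokens

    def sell_tokens(self, buyer, amount):
        """Owner sells a fraction to an investor; returns capital raised."""
        if amount > self.holders["owner"]:
            raise ValueError("owner does not hold that many tokens")
        self.holders["owner"] -= amount
        self.holders[buyer] = self.holders.get(buyer, 0) + amount
        return amount * self.token_price()

# The farmer keeps farming: he still holds 80% of the tokens,
# but now has capital for better seeds.
farm = TokenizedAsset("Peru farm plot", valuation=100_000, total_tokens=1_000)
raised = farm.sell_tokens("investor_a", 200)  # sell 20% of the plot
```

The mechanics are trivial; the hard parts are exactly the ones the prose mentions: risk analysis, matchmaking, and the regulators.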
And here's the controversial bit: universal basic income (UBI) won't fix this. People love UBI as the big AI solution: robots take jobs, so just give everyone cash. But it's a band-aid, and a lazy one. Handing out money doesn't solve the root issues: power, access, skills. In Latin America, $50 can change a life if it's a loan with purpose, not a handout. Just think about how foreign aid has failed so many countries; I remember the presidents of Uganda, Rwanda, and Ghana speaking out against simply being handed money.
UBI assumes people can’t adapt, that they’re just victims of AI. I’ve seen the opposite, give them tools, not charity, and they’ll build something. Plus, who pays for UBI? Taxes? Inflation? It’s a fantasy that ignores how economies actually work. AI should empower, not pacify.
So, what’s my point? The future isn’t one-size-fits-all. Daniel’s AI 2027 lit a spark, and I’m throwing fuel on it. From robots to micro-loans to tokenized assets, I see AI as a lever, prying open opportunities, but only if we use it right. My view comes from dirt under my nails and late nights with code, not just theory. The world’s changing, sure, but it’s not Zurich-to-Larnaca for everyone. Sometimes it’s a dirt road in Bolivia, and that’s where the real stories are.
Part 1: Open Source, Agents, and the Real Game-Changer
In AI 2027, they talk about how savvy people always find a way to automate their work, and I've seen this even before LLMs crashed into our lives. On Reddit, you can dig up tons of examples: people hacking together tools, scripting bots, automating grunt work way before ChatGPT was a thing. The essay pitches a fourth-generation super-agent, something smarter than a human, which they call "Agent-4", arriving by 2027 and outsmarting us all.
The essay rightly pits the US agent against the Chinese agent, but it doesn't talk about the agent that could change the game: the Open Source Agent.
Just recently, Meta launched Llama 4, the most powerful open-source LLM yet, and it runs on a single H100 GPU. This comes after Google dropped their own open-source LLM that you can even slap onto a regular GPU in your home rig, and I have to say Gemma 3 has been my favorite so far. I'm sitting here wondering why open-source LLMs weren't part of the conversation in the essay. After the DeepSeek moment shook things up, it's clear open source is stepping up as a main player in this revolution. I'd even bet that open-source LLMs might outmuscle centralized ones soon.
The essay’s core idea pits the US’s most powerful LLM or agent against China’s, spiraling into some world-ending clash. But they’re missing a third layer, open source doesn’t just juice up both sides, it’s rewriting the whole game. It’s not US versus China with their mega-agents, like OpenBrain or DeepCent, the fictional titans they dream up, it’s about how open source lets anyone build agents that weave into our world. They’re stuck on this logic: more GPUs equals a better agent. That’s the backbone of their story: OpenBrain emulating OpenAI or Grok, slugging it out with DeepCent, a Chinese mega-institution, all about who’s got the bigger stack of chips. I think they’ve got it wrong.
Look at what's happening now. Major research labs are flipping their playbooks after open-source agents started winning. Take Operator: someone launches it, and a couple hours later, I'm messing with the open-source version online. That speed, that access: it's nuts. The essay's betting that piling on GPUs is the path to superintelligence, but I'm not buying it. Here's the reason: one of the biggest pieces that makes us human isn't just raw smarts; it's creativity. When you're sizing up how someone solves a problem, it's not about who's the brainiest in the room. The highest IQ doesn't always crack it. You need IQ plus creativity, the knack for connecting dots that don't look connected. Experience counts too. Think about a lawyer: you could have one who's read every law book cover to cover, but in front of a jury, he gets smoked by some grizzled vet who knows which obscure statute to pair with a sob story to sway the room. Book smarts lose to street smarts there.
So this whole premise that the winner's the one with the most GPUs stacked up feels off. Open source is showing us you don't need a billion-dollar data center to make something brilliant. Llama 4 on one H100, DeepSeek V3 on a home GPU: these aren't just toys, they're proof. Agents won't rule the world because of hardware flexing; they'll do it by tapping into what we humans already have: the ability to think sideways, to improvise, to see what's not obvious. The essay's US-China showdown misses that third rail. Open source isn't just a tool for the big dogs; it's a wildcard letting anyone, anywhere, build something that could outthink us all.
Part 2: Super Coders, Agents, and the Limits of Automation
In this part, I wanna focus on something real, how tools like Claude 3.7 feel like an amazing coworker who knows how to code, or how other tech can whip up complete video games. Sure, those Vibe coding games are nice for a quick laugh, but they’re nowhere near triple-A titles. Still, we might get there someday. I’ve asked agents to help me find solutions when I’m coding on Google Cloud. As soon as I hit a bug or an error, I’m pinging them. But here’s the problem: sometimes they can’t even find the answers. So what’s going on in these companies? Are they scared of losing jobs? Do they not want to automate stuff? Or are things getting automated and they’re just not telling us? I lean toward the last one, stuff’s happening behind the curtain.
Here's my take: we've got access to everything: data, manuals, tutorials, every question on Stack Overflow, every forum post, every YouTube video explaining code. Feed that into agents, and they'll turn into super coders. I predict we'll see companies launch super coders tailored to specific tech stacks: a super coder for Oracle, a super coder for SAP, a super coder for Salesforce. Yeah, they'll be a big help, but hold up: what's the number one problem when developers lean hard on agents? They don't even know what they're doing anymore. Think about it: a developer's building something, using an agent to write the code, but then it's got bugs. He doesn't know how to spot or fix them, so he just asks the agent again. But the agent's the one that messed up in the first place! If you've been around coding long enough, you know the smart move is to roll back to the last working version, not beg the agent to patch its own mess (sorry, Cursor).
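That roll-back discipline fits in a few lines: snapshot the working code before applying an agent's patch, run the tests, and restore the snapshot if they fail. A toy Python version, with hypothetical names:

```python
# Toy sketch of the "roll back, don't re-prompt" discipline: snapshot the
# working code before applying an agent-generated patch, and restore it if
# the patched version fails its tests. All names here are illustrative.
import copy

def apply_agent_patch(codebase, patch, run_tests):
    """Apply a patch dict to a codebase dict; revert everything if tests fail."""
    snapshot = copy.deepcopy(codebase)   # last known working version
    codebase.update(patch)               # apply the agent's changes
    if run_tests(codebase):
        return True                      # tests pass: keep the patch
    codebase.clear()
    codebase.update(snapshot)            # tests fail: roll back, don't re-prompt
    return False
```

In real life this is just `git commit` before the agent touches anything and `git reset --hard` when it breaks, but the principle is the same: the human keeps a known-good state the agent can't talk you out of.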
By 2027, sure, we’ll have agents outcoding most humans, I don’t doubt that. But AI 2027 goes further, saying Agent 4 will train on Agent 3, kicking humans out of the loop. They paint this picture: smart engineers watching agents build more agents, scratching their heads, not getting it, until one day they’re useless. That’s a lot of sci-fi fluff to me. I’m not buying it, and here’s why.
Let me throw you a challenge, if you've stuck with me this far. I've been a huge fan of autonomous cars since 2013. I geeked out over them back then when I visited the Google I/O conference, when Sundar was still a VP, and when I met teams working on that tech, I was blown away. The cost of LiDAR crashed from $100,000 to under $500 today; insane progress. But try this: take a Waymo car out of San Francisco, a Xiaomi car out of Shenzhen, or a Tesla, and drop them in Bogota, Colombia. Stick the most advanced agent in there, US-made, Chinese-made, whatever. Those cars won't make it two blocks. The streets are a mess: potholes everywhere, bikes weaving around, motorbikes zipping way too close, people ignoring every rule. The most powerful LLM, no matter who built it, can't drive there, because it wasn't trained for that chaos. Now you? If you learned to drive in Germany or Japan, you could handle Bogota no problem. Why? Because humans adapt to stuff we've never seen; we're wired for it. Agents? They're not.
That’s the gap. Super coders, smart agents, they’re coming, and they’ll do a lot. But this idea that they’ll leave us in the dust by 2027, or that more GPUs or fancier training will make them unstoppable, misses what humans bring: the ability to roll with the punches, to figure out the unknown. The essay’s got its head in the clouds on that one.
Part 3: China’s Decentralized Edge
In AI 2027, they say after the US takes the lead and concentrates all its efforts, China will follow suit and do the same. But I think they're missing something big: they don't understand what happened to China after Deng Xiaoping. China didn't climb to where it is by piling everything into one spot. Actually, it did it by decentralizing its efforts. I know that sounds nuts coming from the CCP, but check the data. Compare that to the US, the land of the free: it runs a federal system, sure, but when it comes to innovation, it's all jammed into Silicon Valley. One region, one powerhouse. Yeah, you've got Boston, Texas, or New York doing their thing, but the real juice is still in California.
Now, hop a plane to China. You’ll be shocked at how advanced some of their mid-tier cities are. I remember my first trip there, landing in Dalian, China for the Summer Davos World Economic Forum Meeting. I couldn’t believe it, crazy skyscrapers, tech everywhere, everything so well organized. This isn’t some city you’d rattle off when you think of China, like Shanghai or Beijing. But that’s been the Party’s play for years, spreading investment across cities, sparking innovation in places you wouldn’t expect. Check out top AI conferences these days, researchers aren’t just from Shanghai, Beijing, or Shenzhen. They’re from cities with names I can barely pronounce, but their work’s on par with San Francisco’s best.
So when the 2027 authors say China's gonna concentrate all its efforts like the US, I think they're blind to what's already happening. China's been decentralizing inside a centralized system; it sounds counterintuitive, but it's real. In some ways, they've been more federated than the United States. Look at their nuclear plants scattered around the country, built to match energy needs for decades ahead. That's not random; it's strategy. China's not about to funnel everything into one mega-hub. They're already leaning on the infrastructure they've spread out and the sharing mechanisms they've locked in, all while sticking to those four cardinal principles from Deng Xiaoping's era, like "upholding the dictatorship of the proletariat," as they call it.
The essay’s got this US-China race pegged as a head-to-head, centralized slugfest. But China’s playing a different game, distributed, connected, and already rolling. That’s what they’re not seeing.
Part 4: The Spy Game and China’s Edge
The AI 2027 essay paints this picture that the Chinese are always trying to steal secrets from American agents, spinning out how it might all unfold. But let's break it down and see what's really going on. I mean, the FBI's already flagged that Chinese cybersecurity programs are more advanced than anyone else's globally; they're not kidding around. So when the essay talks about kicking Chinese spies out of the equation, it's not really digging into what both sides can actually do. It's skimming the surface, not comparing capabilities.
After President Trump started axing federal employees, some hints about China's spy game popped up. They're not slowing down: China's gonna keep pushing their HUMINT and SIGINT efforts, sniffing out the weakest links, compromising as many people as they can to siphon secrets from companies. I saw this coming. Recently, the Department of Justice dropped a new directive telling Americans to cut back on ties with Chinese nationals because they're scared of leaks. They're worried about who's got access to the good stuff.
Take this scenario: if China wanted to snatch the open weights from Agent-4, they could totally pull it off by zeroing in on the human factor. Think about it: top US officials have screwed up basic security before. Why'd the Secretary of Defense add a random journalist to a private Signal group? That's a head-scratcher. Now imagine China faking someone's identity, slipping them into that group. The new "member" sends a shady link, and bam, the Secretary or some bigwig clicks it. Even Jeff Bezos fell for that once on WhatsApp, hacked through a link. If a billionaire can mess up, anyone can.
China's been the champ at hoovering up data, way beyond what the US could dream of. They've got CCTV everywhere, SuperApps like WeChat, 5G hardware, and IoT devices. It's a goldmine. Their agents are sharp, and yeah, they open-source some models, but that's just flexing. The next war's gonna be 6G, and they're already gearing up. Meanwhile, in the spy game, Chinese nationals and descendants are working in top US labs and agencies. That's why the DOJ's freaking out, trying to dial back those relationships. China doesn't have that problem; they're not sweating their own people.
Part 5: Weights Theft, Jailbreaks, and Bioweapons
The AI 2027 paper goes on about weights theft, like China's gonna swipe the guts of some American agent. But I think most attacks won't need that. They'll just use jailbreak tricks, like the ones Pliny's been cranking out. That guy's a machine: hours after a new model drops, he's already cracked it wide open. Every time. I recommend following him on Twitter @elder_plinius and messing around with his jailbreak code yourself. Test those LLMs, see how they buckle. I asked him once how he pulls it off so fast, and he hit me with this: "Minimalist universal jailbreaks are tricky but doable if you can find some sort of backdoor; generally the more minimalist you go, the more targeted/custom-tailored to the query it tends to have to be." That's the game: finding the weak spot and prying it loose.
Then there's the bioweapons angle. The essay's fretting about access to that kind of know-how, but all the info's already out there; you can snag it on the black market if you know where to look. Having it doesn't jack up the attacks, though. Take CRISPR: the tech's been public for years, and we haven't seen a single big incident. Not one. The knowledge is floating around, sure, but it's not like everyone's suddenly cooking up plagues. The paper's hyping a threat that's more noise than reality on that front.
Part 6: AI Slowdowns, Trust, and Lobbying Limits
The AI 2027 essay sketches out this scene where the US and China negotiate to pump the brakes on AI development, some world-ending risk they're dodging. But I don't see it playing out like that. China's been stacking up nuclear weapons for the last two decades, testing hypersonic missiles to flex their deterrence. They're not messing around. So even if someone's waving a red flag about an AI going rogue straight out of Mission Impossible 8 (actually, I recommend watching Mission: Impossible – The Final Reckoning), countries aren't gonna stop researching. They can't trust each other. Look at Iraq: it got hit because it "supposedly" had weapons of mass destruction, and all we found was a pile of gold. Trust's a ghost in this game.
They say Agent-4 could optimize power plants, and yeah, that works on paper: tweaking the math, boosting efficiency. But then it's gotta get past the real world: approvals, tests, results. The US has been dragging its feet on new nuclear reactors forever; red tape's a beast. Agent-4 might have all the smarts, but it's not blasting through that mess. And this bit about it lobbying because it can read Slack and emails? That's missing the point. Lobbying isn't just data; it's dinners, parties, golf, and human-to-human schmoozing. Unless the NSA hands Agent-4 some privileged access to dig up dirt and blackmail committee members, I don't see it sweet-talking a politician into anything "in his best interest." It's not that slick.
Here’s the flip side: the second cabinet members get Agent-4 access, it’s a neon sign for espionage. Think about it, the Secretary of Defense added a journalist to his Signal group. One dumb move, and the door’s wide open. That’s the weak spot, not some mastermind AI lobbyist.
One thing from the essay I actually loved, though, was the Zoom-style avatar chat interface. That’s the kind of thing I’m excited to see come to life.
Part 7: Why Having More Robots Doesn't Mean Cleaner Streets
In the 2027 essay, they talk about how advanced superintelligence will create a world with cleaner cities by 2029. But that transformation argument presents several challenges. First, the assumption that increased technology inherently leads to cleaner cities is flawed. A walk through the streets of San Francisco reveals a decline in urban cleanliness, highlighting that technological advancement alone doesn't guarantee environmental improvements. In contrast, cities in Qatar maintain cleanliness through stringent anti-littering laws, demonstrating the effectiveness of cultural and legal frameworks over mere technological solutions.
Similarly, the deployment of Artificial Superintelligence (ASI) surveillance systems doesn’t necessarily equate to enhanced security. Despite London’s extensive CCTV network, incidents like phone thefts or stabbings in areas such as Mayfair persist, underscoring that technology alone cannot resolve societal issues without accompanying cultural and policy changes.
Addressing these problems requires cultural solutions. For instance, there have been cases in the UK where individuals faced legal consequences for online expressions, such as tweets or private WhatsApp messages. This indicates a focus on regulating digital speech, yet crime rates continue to rise, suggesting that technological oversight without cultural and systemic reforms is insufficient.
The essay also explores the integration of AI into defense. Insights from Palantir's collaboration with NATO allies during the Russian-Ukrainian conflict could shed light on AI's role in military decision-making. Palantir's software has been instrumental in Ukraine's targeting operations, analyzing vast data sources to aid in identifying and engaging targets. However, the military-industrial complex, responsible for producing traditional weaponry like tanks and airplanes, may pose challenges to the adoption of AI-driven weapons systems, potentially due to entrenched interests and existing production paradigms. Just ask how complicated it has been for Palmer Luckey to get contracts with the Department of Defense to develop AI-driven systems.
Moreover, the preference of world leaders for human translators over automated systems may stem from a desire to prevent miscommunications or concealed intentions, emphasizing the irreplaceable value of human judgment in sensitive communications.
The prevailing model suggests that increasing GPU capabilities lead to better algorithms and models. Yet, significant human innovation often arises from constraints. For example, researchers developed optimized data transmission methods for telemedicine in response to limited internet bandwidth, illustrating that limitations can drive creative solutions.
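The telemedicine example can be made concrete. Under a tight bandwidth budget, one classic constraint-driven trick is delta encoding: transmit the first reading, then only the changes. This is an illustrative sketch of the idea, not any real telemedicine protocol:

```python
# Minimal sketch of constraint-driven design: delta-encode a vital-signs
# stream so only the changes cross a slow link. The scheme is illustrative,
# not an actual telemedicine standard.

def delta_encode(samples):
    """Send the first sample, then only the differences between neighbors."""
    if not samples:
        return []
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(deltas):
    """Reconstruct the original stream by accumulating the deltas."""
    if not deltas:
        return []
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

# Heart-rate readings change slowly, so most deltas are tiny numbers
# (lots of 0s and 1s) that compress far better than the raw values.
hr = [72, 72, 73, 73, 74, 90, 90, 89]
assert delta_decode(delta_encode(hr)) == hr
```

Nobody invents this kind of thing because they have spare GPUs; they invent it because the pipe is narrow. That's the point about constraints.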
The paper does not address how Agent 5 manages memory issues. Questions arise: Is the memory shared? Can it be disconnected or restarted? Personal experiences, such as encountering memory limitations after discontinuing OpenAI services, highlight the importance of understanding and managing memory resources effectively.
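To show what those questions even mean in practice, here's a toy bounded, restartable memory for an agent. This is purely hypothetical; AI 2027 doesn't specify how Agent-5 would manage memory, which is exactly the gap:

```python
# Hypothetical sketch of one answer to the memory questions: a bounded,
# restartable agent memory. Not Agent-5's actual design, which the paper
# never specifies.
from collections import deque

class AgentMemory:
    def __init__(self, max_items=100):
        # Bounded: once full, the oldest entries silently drop off.
        self.items = deque(maxlen=max_items)

    def remember(self, fact):
        self.items.append(fact)

    def recall(self, keyword):
        """Return every remembered fact mentioning the keyword."""
        return [f for f in self.items if keyword in f]

    def restart(self):
        """Can it be disconnected or restarted? Here: yes, and it forgets everything."""
        self.items.clear()
```

Even this toy raises the paper's unanswered questions: what gets evicted when the buffer fills, whether memory is shared between agent instances, and what a restart actually erases.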
Part 8: Why Universal Basic Income Won’t Work And What Might
The AI 2027 essay talks about how universal basic income (UBI) will keep people in place, but I've gotta bring up what I was ranting about to a friend at the ACM conference in Catania, Sicily, in April 2025.
Every time I hear CEOs from top research labs, they’re all about UBI like it’s the magic fix to save humanity. It sounds nice on paper, sure. But back in 2017, I interviewed a global UBI expert for the Beat the Robots book, and the big issues they flagged still stick with me. I’m gonna explain why I think it’s a nonstarter. Right now, the world’s freaking out over Donald Trump’s tariffs, half the planet’s cheering, half’s cursing. It’s a divide: good for one country, bad for others. China’s jacking up tariffs in response, Vietnam’s begging to ditch them entirely. Point is, every country’s playing for its own team, not the world’s. They’re all about their own citizens’ welfare, not some global utopia.
Based on AI 2027's scenario: we've got advanced superintelligence running wild. It starts eating up human jobs, and unemployment spikes. Sure, new jobs might pop up, but if the AI forecasts are right (and the essay leans that way), human unemployment's gonna climb. So who's paying the taxes for UBI? No one wants to touch that. The essay imagines a supercompany in the US worth over $10 trillion and another in China with a matching valuation. Okay, say those giants are footing the tax bill in their home turf, but they've gutted jobs elsewhere, outside the US and China. Do you really think, out of the goodness of their hearts, they'll ship UBI cash to other countries? No way. Go to Yemen and ask folks if they'd fund Saudi Arabia. Ask Israelis if they'd help Palestinians. Swing by Ukraine and ask if they'd be cool with Russians getting a cut. UBI forgets humans are split by culture, religion, history, borders. It might work where welfare's already a thing, like in the EU with insurance or jobless benefits, but Latin America, Africa, South Asia? Not that simple.
So if UBI's not it, what is? That's a whole other essay, but let's shift gears. The 2027 essay missed a player I've been hyping: open-source agents. I think they're the real deal, valuable tech that could lift the rest of the world. Whether you're partying in Rio de Janeiro, Brazil, eating killer food in Cambodia, or strolling through Cape Town, South Africa, those places are gonna lean on open-source models to tackle their local headaches. They won't cook up state-of-the-art agents or steer the big outcomes, fair enough. But what they can do is grab those open-source models, pair them with open-source hardware, and build solutions that fit. That's gold. Imagine autonomous tractors in these countries, not imported from China or the US, but built locally. What if we rig up IoT systems to boost production right there? Or hook up hardware to predict weather shifts, or kickstart a robotics scene using the latest open-source agents? There's so much potential here, and it's about quality of life, not waiting for handouts or trillion-dollar giants. That's where we should aim. Maybe that's my next essay, or my next book.
In conclusion, I extend my gratitude to the authors of the 2027 essay. I highly recommend reading it, as it opens new avenues for discussing the future. The horizon is bright, my friends; let’s continue building towards it. Let me know your thoughts!
Originally published on LinkedIn.