Accepting Pandora's Box
It’s been over three years since ChatGPT kicked off the AI revolution. As a curious person I’ve been watching it for quite some time, but I still remember the first time I realized how we got here and what the impact would be. If you know me, you know I’ve always been reluctant about AI, what some would call a hater, although that’s far from reality. I loved the idea of AI, just not the timeline we live in.
This essay gathers my personal thoughts on the matter, something I’ve been pondering for a long time. It starts with quite a lot of doom, but hopefully it gets better in the later sections.
People see ChatGPT replying to them and think it’s AGI (artificial general intelligence). But if you know how the tech works you realize it’s just old tech but with tons of data and power thrown at it. It’s just a probability machine: a bunch of matrices doing expensive math.
It’s like a glorified autocomplete, although we have to admit that the word “glorified” is doing A LOT of work in that sentence. It’s autocomplete with an end goal and semantic knowledge, trying to follow a reasonable direction.
When I visualize the internals of an LLM as a huge multidimensional space of words and vectors, the engineer in me smiles. It’s actually pretty amazing that with math we can represent a vector that goes from man to woman, and if you apply it to king it gives back queen. This is the dumbest example and probably not something people are hyped about, but it’s the sort of thing that made me excited about AI tech.
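To make that concrete, here’s a toy sketch of that vector arithmetic. The three-dimensional vectors here are made up for illustration; real embeddings are learned from text and have hundreds of dimensions:

```python
# Toy word-vector arithmetic with made-up 3-d embeddings.
# Real models learn these vectors from text; the numbers here are
# illustrative only.
import numpy as np

embeddings = {
    "king":  np.array([0.8, 0.9, 0.1]),
    "queen": np.array([0.8, 0.1, 0.9]),
    "man":   np.array([0.2, 0.9, 0.1]),
    "woman": np.array([0.2, 0.1, 0.9]),
}

def closest(vec, exclude=()):
    """Return the word whose embedding is most similar (cosine) to vec."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    words = {w: v for w, v in embeddings.items() if w not in exclude}
    return max(words, key=lambda w: cos(words[w], vec))

# "king - man + woman" lands closest to "queen".
direction = embeddings["king"] - embeddings["man"] + embeddings["woman"]
print(closest(direction, exclude={"king", "man", "woman"}))  # queen
```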
But none of this was what surprised me in the past years. What surprised me is how dumb this whole tech is. It’s not smart, it’s not thinking, it doesn’t have knowledge or experience. It’s just an autocomplete.
And that was enough.
For me, that was the most surprising thing: something so simple at its core turned out to be enough to replicate humans in more ways than we could imagine. And yet that was not the end of the surprise, because the cherry on top is how realizing this made me question my own beliefs.
Look, I’m not a philosopher, but I was top of the class in high school philosophy and I’ve always been curious and learned things along the way. It also helps that my best friend is actually a philosopher. It’s not something I spend too much time thinking about, but when I do, I’ve always had the feeling that there is nothing special about humans. It’s probably my engineering background, but I’ve always thought that we are just emergent behavior. A powerful concept that I believed could explain everything.
And yet, here I was, watching a bunch of matrices behave like humans, and I started wondering if maybe we had something else they didn’t. We must, right?
But fear of AGI is not at all what made me dislike AI. Once I got past the technical awe, the real discomfort wasn’t intelligence, it was how it was built.
The biggest heist in history
Let’s go back to how these things got here. With data. A. LOT. OF. DATA. To get the current models where they are nowadays, they’ve had to be trained on vast amounts of human text. For all intents and purposes, they have been trained on the entirety of human knowledge: imagine the entire internet, plus everything written on paper. And here lies one of my biggest gripes with this technology. It wouldn’t exist if a privileged few hadn’t decided to ignore all laws and morality and steal all of it without giving anything back to the authors. We’re talking about terabytes of books, uncountable images, songs, movies… anything a human has created was up for grabs for these thieves of progress.
And the fact that the top few can get away with this without real consequences, while somebody goes to jail for downloading a bad movie, is insane. But that’s the world we live in.
People will try to defend the morality of this, and to that I call bullshit. We all want progress, we all love the ideal future that AI promises. But let’s not be hypocrites here: we’re building this future by stealing from millions and enslaving others.
Modern slavery
Look, AI was not the problem, it just exacerbated it. It’s been clear for years that the late capitalism we live in doesn’t work. I mean, it’s the best thing we have, just like democracy, but the systems are falling apart. While a few billionaires keep getting richer, the rest of the world is in a place we shouldn’t be in 2026. Look around at the current world: we’re killing Earth, letting rich people destroy democracies, and we keep dealing with wars that fuel the egos of a few. And then realize it’s 2026. These things shouldn’t happen two decades into the 21st century. It could have been solved. What are we doing?
Well, it’s just how the system works: for some to win, others have to lose. I don’t believe that’s an immutable reality, but it’s what humans seem to desire.
Enter AI, with all its promises and the impact it will have. Jobs will be lost, not changed. This is not impacting a specific role or sector. This is changing everything. The industrial revolution will be nothing compared to this. And jobs won’t “evolve”, they will disappear. But not for free: businesses will now have to pay the billionaires instead of the hard workers. Because after all the stealing, this is not something that is given back to society; it’s kept in the pockets of the privileged few.
I can’t believe that the ones actually giving back to society are the Chinese, releasing open models while the Americans keep pretending they are the saviors and convincing politicians to let them mess with the world as they desire. What a world we live in!
And to the people who think jobs won’t disappear: I hope you are right. I’ve heard many versions of this.
“Arts won’t ever die cause humans like human art.” Yes, sure, the Banksys will survive this. But how many of those do we have, and how many can sustain themselves? Because for every master there are millions trying to survive with their art. And if that was already difficult, what do you think is gonna happen now? And no, this is not like the revolutions of photography, cinema, or even Photoshop. It could have been, but corporate greed will make sure it’s not the case.
“Chess was solved by AI years ago, and now chess is more popular than ever before.” This is the worst take ever, because how many chess players do it for a living? Again, we’re not talking about hobbies disappearing, we’re talking about ways of surviving disappearing.
And what happens when the youngsters growing up accept that watching a game of FIFA, machine vs. machine, is just as exciting as watching real players? Or a race in the F1 videogame? I’ve been thinking about this for years, even before the current AI generation, back when football videogame graphics and audio commentary were already getting hard to distinguish from a broadcast. Imagine now, or in a few years, at the current pace of AI advancements. And have you seen the recent robots, with an agility and human-like movement that surpass the best gymnasts? Not even sports are safe.
But that’s only the economic end of it. The social end is what happens to privacy when a few companies own the interface to everything.
The end of privacy
And the other consequence of these AIs being owned by a few is that if you want to participate in society you will have to not only pay those folks, but also give them all your information. This has already been a trend for years, but again, AI turns it up to the max. We’re already seeing people hand over their bank accounts, their medical records, and their therapy conversations to these companies. And the only reassurance is a line of small print saying they won’t train on it and don’t store it. Something nobody can verify, and a line that can easily be removed in any update.
Pandora’s box
And with all of this in mind, already years ago, I realized that was it. Pandora’s box was opened. As soon as there was proof that LLM capabilities were possible, nothing else mattered. People asking to slow progress, to do a hard reset that respects morality and laws, people insisting this won’t be such a big deal… all illusions and impossibilities. Pandora’s box was opened. Nothing would ever be the same. It was just a question of time.
And that was before 2025, when things really changed.
I won’t pretend all this darkness went away. The theft, the inequality, the privacy decline… they remain real and unresolved. But something else happened too that we can’t deny. The technology crossed a tipping point. It got better, much better. Not just incrementally, but in ways that forced me to stop just watching from the sidelines and actually engage publicly with what this means for my work, my writing, and my future. What follows are the written thoughts of that reckoning.
Before 2025
During those early days I struggled with the internal fight between hating the world AI was being born into and the excitement of progress. Of course, the constant conversation and everybody trying to fit AI everywhere ultimately burned out the excitement for many. But deep down it was still there.
The psychology of having to resist the impending progress was not the problem. The problem was rejecting something I knew could be used for good, because it was tainted from the start. And that’s why from time to time I felt like shit for using ChatGPT and Claude to help me fix typos in a couple of my posts, or to criticize my writing. I refused to use them for anything more, because of the hypocrisy it would imply, but it was hard to deny that it was a powerful tool that could be used for good.
It was a time for play. Things were not that good yet. Sure, we could pay private companies to generate images and illustrations instead of paying humans to do it, and there was some sort of fun in it. The fun you have when playing with a new toy. But besides using it to show your friends, there was not much value in it. And yet, I could feel my own shame growing, as if I was betraying my fellow humans just by playing with it.
The future we all dreamed of, where machines would do the hard jobs and humans could live a better life, was turned upside down. We had machines doing the fun and creative work, and humans still breaking their backs with manual labor. Who had that on their century bingo card?
But the worst was yet to happen. I could see artists and writers fighting back. It was just a gesture to show the world that these things mattered. Of course it wouldn’t change anything; individuals have little power in today’s world. But at least the feeling that things were not okay was there.
And then the agentic revolution started. And programmers were the first ones to praise this new overlord. This is something I never understood. How could they all be paving the path to their own demise?
Before we get to 2025, one quick detour into my own frustrations with the industry, because it explains why AI felt like it fit so easily.
The decline of software
Software has been in decline for years, even before AI. In a world ruled by corporate greed, where quality doesn’t matter, it’s just a question of time before things turn for the worse. And I think it happened in three ways.
First, mediocrity became the norm. AI can take over very easily in a world where the job is already “copy paste from Stack Overflow and ship.” And that was praised. The industry even started pushing back on being able to ask certain things in interviews. We were forced to disregard the experience and quality of people, just because we needed more people typing letters on a screen. These might sound like harsh words, but it’s the reality that all of us in the industry know and try not to talk about. Because you don’t need hundreds of developers to make a product. And the proof is how many of these businesses create unrelated and unnecessary things. Time was free, until it wasn’t.
Second, output mattered more than craft, because capitalism was the driver of everything. Look, we’re not stupid here. Everything is a business. We all need to put food on our tables and buy those yachts. But when that is the only goal, instead of the consequence of doing a good job, it’s a question of time before things go sideways. The lack of care and quality reached incredible lows. I recently saw a video of Microsoft Word opening on an old Pentium in less than a second. I still remember when macOS apps opened before their dock icon even bounced once. Or when loading a website became instant once we got ADSL. Now people are happy if anything takes 10 seconds. Insanity. But as long as the bottom line is fine, we accept it.
Third, knowledge and control stopped mattering. And this is the worst. Very few care about really understanding how things work, or having control of the thing they are building. Their tools, their frameworks, whatever it is. We’ve just accepted that not knowing is fine.
All of this was happening before AI, in a world where software was already in decline. When AI arrived, it was the perfect storm.
And yet, since it’s here to stay, and is clearly the future, you better accept it. At some point it stopped being a question of if, and became a question of when — and the when was now.
End of 2025, the moment things really changed
2025 saw more evolution in AI than ever before. That’s when I had to make a personal decision and just accept Pandora’s box was here. Luckily, I’ve always been watching from the side, so when I decided to get into AI it didn’t take me long to catch up. After all, my brain has a certain facility for this sort of thing, one I wish I had for other real-life stuff :D
During 2025 new AI models became actually really good, not only at simulating human text, but also at interacting with external systems. In my mind, this is what happened:
- First, LLMs became real, but their utility was capped by their static knowledge. Training and inference are still separate steps; an LLM can’t continuously learn.
- Then we connected them to the internet. We taught them how to request help, and classic software around them helped by searching and feeding them results. Real-time, up-to-date knowledge that they can load into the inference context and use to be way more accurate.
- Then we generalized that request for help by giving them tools. A way to interact with external systems, with their surroundings. Now they could read and write files, search for things, hit APIs, and manage machines. Not just talk, but do.
- But text was not enough, so we gave them senses. Models that could understand images or other media used to be siloed. Then multimodal models showed up and now they could understand images, your screen, your voice, and even generate them.
- And then the biggest trick of them all was born. Reasoning, fake thinking. Models can just autocomplete, but if they autocomplete to themselves first, not for humans, it changes the outcome. They could now talk to themselves. Fill the inference context with steps, constraints, intermediate thoughts. It’s expensive, but it makes them feel smarter. This quickly became where most generated tokens are used (it’s estimated that 80% of tokens generated nowadays are just for reasoning, insane).
- And finally, agentic harnesses became good. Classic software that wraps the model with tools, retries, state, and structure, so it can do real work end-to-end, not just answer a question. (A minimal sketch of such a loop follows this list.)
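Since the harness is the piece people find most mysterious, here’s a minimal sketch of the loop at its core, under some loud assumptions: `call_model` is a hypothetical stand-in for any LLM API, and the two tools are deliberately trivial. Real harnesses add retries, sandboxing, permissions, and much smarter state management:

```python
# A minimal agentic loop: the model either answers or asks for a tool,
# and tool results are fed back into the context until the task is done.
import json
import subprocess

def read_file(path: str) -> str:
    with open(path) as f:
        return f.read()

def run_command(cmd: str) -> str:
    done = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return done.stdout + done.stderr

TOOLS = {"read_file": read_file, "run_command": run_command}

def call_model(messages: list) -> dict:
    """Hypothetical stand-in for an LLM API call. Assume it returns either
    {"answer": "..."} or {"tool": "read_file", "args": {"path": "..."}}."""
    raise NotImplementedError("wire up your model provider here")

def agent(task: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(messages)
        messages.append({"role": "assistant", "content": json.dumps(reply)})
        if "answer" in reply:            # the model is done
            return reply["answer"]
        tool = TOOLS[reply["tool"]]      # the model asked for a tool...
        result = tool(**reply["args"])   # ...so run it with its arguments
        messages.append({"role": "tool",
                         "content": json.dumps({"result": result})})
    return "Gave up after too many steps."
```

Everything else on the list above (web search, multimodality, reasoning traces) plugs into this same loop, either as context going in or as tools being called.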
With this, AI left the chat interface. The chat box is just a UI. The real thing is when the model lives inside your workflow: your editor, your terminal, your files, your browser. AI isn’t useful stuck in a chat, it’s useful as part of your computing life.
This is when it was clear that programming was done.
AI writing all the code
So I’ve been seriously using AI for a few months now, more seriously in the last couple. And I’m still grasping the reality of its impact. It’s still frustrating that I can only do this thanks to employers riding the hype and providing AI tools for free at work. For personal stuff I’m paying the lowest tier available and thinking of it as paying for a learning course. But let’s not forget some people can’t afford it.
AI now can write full applications on its own.
Are they good? No. Does anybody care? No.
But remember, it’s all a bubble. So we keep hearing how people are letting AI write 100% of the code. And then we see Windows being broken in a thousand new innovative ways. And it’s not just memes. Executives are saying it out loud. Firing thousands of people, saying “I need less heads” just to hire them again later.
Even the AI labs are doing it. The head of Anthropic’s Claude Code said he hasn’t written code by hand in months, and that Claude Code writes most of Claude Code. And OpenAI engineers have said similar things about their own day-to-day coding too. And I believe them, because the tools they are releasing are the worst apps I’ve seen. I’ve never seen my M4 MacBook Pro suffer so much just by having their apps open doing nothing. But again, it doesn’t matter.
Today, AI is not as good as me. That’s the truth. I don’t think it’s even faster when you think about all the wasted time. But that doesn’t matter anymore. Because in a few weeks, months, or years in the worst case, it will. Because AI can do a thousand tasks at a time while I can only do one, or half if I keep being interrupted with meetings.
And in practice, it’s messy. You ask for a tiny fix and it adds three new files. You push back and it rewrites half the module, always saying “yes, you are right.” It keeps adding code and abstractions when a couple of precise incisions would cure the patient. You ask for a test and it papers over the symptom instead of the cause. It’s great at a level that would have been unimaginable just a few months ago, but it gets brittle fast the bigger the codebase gets.
So what is the future? I don’t know, but I do have a hope. A hope for people to realize that you don’t go from a prompt to something decent. That you still need an engineer, with experience and a clear goal, to guide the AI to do the actual work. Because otherwise, the AI will surely do something, but it might have spent your monthly budget for nothing useful.
Are these just illusions and hopes? Maybe. I still want to have a job after all. But it’s also the reality after having worked with it. Is it good? Yes. But it needs somebody who knows what they’re doing. Without that, the AI is just a machine that vomits words. And guess what the goal of good software engineering is? Reducing complexity. AI is not doing that, at all, at least not now. This might make good programmers even better, and mediocre developers disappear.
But the reality is that all the magic wins the bubble keeps announcing are mostly made up. Not all of them. It’s true that people are doing more than before; people have multiple agents coding and shipping. And that’s only doable by not caring about the output. Because the solution to AI’s problems is just more AI. But the ones to learn from are those that do the same and still care. They still make sure things work and are safe. Maybe they do it themselves, maybe they run other AIs to do it, or maybe they have a proper testing infrastructure in place. Whatever it is, if you still care, thank you.
But there are two categories where I find that AI excels, and there is no denying that it has already changed many things in my day to day.
First is everything that doesn’t matter. All those random scripts, all those tasks that are useless, all those ideas that you wanted to prototype but didn’t have the time or energy for. Just let AI burn the atmosphere for a while and see what comes out. You’ll probably use it once and forget it, or trash it and start over. Who cares! (The planet does, but it’s fine, it will be a problem for our kids.) This has already allowed me to clean up a backlog of personal tasks that I’ve wanted to do for a long time and never thought I would get to. Some of these things are now cheap to try.
The second category, really a specific case of the first, is anything that has to do with manipulating your computer. This is actually where I’ve found AI most useful. For coding I still find it a couple of models away from satisfying me personally, but for this it’s already a game changer. Configuring your zsh, managing the AI agents’ setup themselves, fixing or working on home servers, etc. There are so many things it can do once it lives in your machine that are not really related to making a product. This is actually my recommendation for where to start. It’s such a game changer that even Anthropic is now, months later, including this sort of functionality in their consumer app.
What is also very curious is how I don’t think even the bubble realizes the impact of this. Because if everybody can have the piece of software they need, for their specific task and taste, just by asking for it, who do you think is gonna pay for all these SaaS products you are all pumping out? Yes, exactly. Nobody will. And if your argument is that there will still be a need, well, then the bubble is lying. You can’t have your cake and eat it too. So in the end we’re going back to the main issue: the money will only go to the billionaires that own these models. And if nobody gets paid, nobody spends money. Checkmate.
Before leaving the programming aspect aside: this is the reality. We could argue about whether where we’re going makes any sense, but we can’t argue that we aren’t going there. So my recommendation is to get on the bandwagon and learn how to use these tools. Become as effective with them as you are with your framework of choice. But still be an engineer: be curious, learn what the AI is doing, understand it. Don’t lose yourself. But with that done, go ahead and ship a thousand projects that you could have never done before!
And, as I always preach, self-reflect, don’t be an automaton. AI is clearly making people lazier and dumber; I’ve already seen it happen. This technology has something that tickles our human brains. The dopamine of getting impressive outputs just by writing a sentence is undeniable. We like quick rewards, and AI coding is like TikTok for developers. Don’t let it rot your brain. Be one of the few that still has curiosity and the desire to learn and grow as a human. If you do, AI is an amazing tool that will facilitate your learning and growth. Use it for good.
And if code was my professional dilemma, writing was the personal one.
Writing
But then, a huge part of my life is also writing. Sure, it’s just a hobby, so I’m less concerned about it than about programming, but I still care, and I empathize with my fellow authors, whom I admire so much. Let’s leave aside how their work has been stolen and focus on how AI impacts me.
In the beginning I tried it and saw that it could write something, but it was not good. It could, however, point out a few things that helped me learn how to write better. I still want control and ownership, so I always make the final decisions and the words are my own, but it was clear it could help in the learning process.
Nowadays, with a full agentic setup, the questions come back. How helpful can it be? Because the limitations are almost gone. So it’s all about using it for good. We have to set aside the hypocrisy of how AI was born; as I said, there’s not much we can do at this point, as much as it pains me. But what does “good” mean here? I still want every single word to be mine. I don’t mind it fixing typos, because we’ve done that with classic automated tools for years. But I don’t want it writing, or even rewriting.
But I worry: how long is that gonna last? I do feel it inside me. I still love authorship, and the whole point for me is to express myself and use my own words, as I’ve been doing for the 3500 words of this essay. But if those feelings change at some point, if the utility of having it help me put my thoughts into text grows exponentially, I don’t know what will happen.
And I hate it, and hate myself for it.
Music, my own hypocrisy
The reason I can feel the pressure on writing is because music has made me confront an uncomfortable truth: I’m a hypocrite.
I can’t judge music the same way; I’m not a musician and have never had music as a creative output. But my point is that I’ve reflected on myself and I see how I care a lot about programming, writing, visual arts… and I would never accept something without a human behind it. And yet for programming the industry is pressuring me to change exactly that.
But then we have music. I listen to music a lot. It helps me focus and enjoy myself. But since I was a teen I noticed I didn’t care much about the artist behind it. Sure, I loved their music, and I knew who the artists were, so I got excited when they did something cool, or sad if the band changed a member. But I’ve never been a fanboy of the bands. It was the same for authors, by the way, though that has actually changed over time, although not as much as for others: I feel like I can enjoy the art even if I despise the artist.
And I’m not sure if it’s that, or something else, but I’ve found myself realizing that I wouldn’t mind a future where all the music I listen to is AI generated. And that realization scared me. I really don’t want the future promised in Black Mirror, where we turn on the TV and get real-time AI-generated movies made just for us. That sounds disgusting. But for music? I don’t know why, but I don’t care. I’ve spent some time generating my own songs, giving it lyrics I wrote myself, inspired by my own fantasy stories, and the output… is literally the kind of music I listen to, so I don’t see a difference.
I don’t feel a difference.
And now I realize this must be how 90% of society feels with everything else, from art to books, to movies. If I, as a person who cares deeply about authorship, can’t be concerned with fully AI created music, why would others care about AI art, AI books, AI anything? That’s the terrifying part. Not that I might accept AI music. But that I now understand why everyone else will accept everything else.
Is this what society is gonna devolve into? I don’t know, but I can’t deny my own feelings.
Bubbles exploding
So is this a bubble? Yes.
Will it explode? I don’t think so.
I’m far from an expert, but my feeling is that when we think about economic bubbles we keep thinking about how they were in the past. But looking at this one, with its self-fueling cycle of made-up money kept in the hands of the very powerful who are already very rich, I don’t see how the bubble breaks.
Even if things don’t go as they are promising, it doesn’t matter. They already have the deals, they have the hardware, they have the control, and they have the politicians. Sure, maybe some cryptobros cry about it, maybe the bubble is not visible in the sky anymore… but it won’t explode. It will just fade under the vast ocean of wealth that you can’t even imagine.
AGI
This is a funny one, because it all depends on your definition of AGI. I don’t think we’re even close to AGI. But I do think we’ll reach a point where the label stops mattering, because it will feel like it. As I said, back when the first versions of ChatGPT came out, I was surprised by how such a simple technology was enough to replicate human behavior. I think the same will happen again, but this time with what will feel like AGI: a few improvements, maybe a bit better models, and suddenly the debate becomes just semantic masturbation.
Just look at things like the Claude constitution, and have a conversation with Opus. Sure, it’s still just matrices doing expensive math, but then… those things that we still don’t fully understand start to feel like something more.
My ideal future
Given everything I’ve said so far, it’s not that difficult to imagine what my ideal future for this would look like. If I were in charge of designing the future with pervasive AI, I would do it like this.
- AI needs to be open, not kidnapped by the billionaires. So the future is open models competing for improvements. Thank god some are already doing this.
- AI needs to be local. Hardware and software need to make huge improvements. Is this even possible? I’m not sure, because right now AI just works by wasting resources like nothing I’ve ever seen before. But hopefully it can be solved.
- AI needs to be integrated. I’m sorry, but as much as OpenAI, and especially Anthropic, are trying to keep the usage of their models exclusive to their own shitty tools, I don’t think that’s the future that will work. My impression is that AI agents want to live where everything else lives: the operating system. So it wouldn’t surprise me if the long-term winners end up being Apple and Microsoft, integrating AI agents into the operating system PROPERLY. It reminds me of that Steve Jobs story where he basically told Dropbox they were just a feature. But this needs a big mindset shift. Stop putting useless AI chats in every app and corner; instead, bring the AI agents developers are already using to the masses. This is probably a bad prediction, but it’s where I think AI should go. Imagine Siri, and the promises made at WWDC with Apple Intelligence and app integrations, but done properly.
- AI needs to be private and safe. And this is where I have the least hope. Even if all the above happened, I would still be reticent without privacy and security. Right now this is not feasible; we are very far from it. Hallucinations are part of the technology, not a bug. So will the industry be able to figure out how to make it safe and secure while still giving it access to everything? Because if there’s one thing we’ve learned in the past few months, it’s that AI is way more capable the more tools you give it. Keeping it sandboxed is not the future.
- AI needs to empower humans, not replace them. And this is something that surely won’t happen as long as our impostor democracies keep being fueled by late capitalism.
I know I’m leaving a bunch of things out, as this just reflects my current state of mind. But at least that’s the minimum bar for my ideal AI future.
So what?
But that’s not the world we have. So here’s where I am right now.
I don’t know why I felt like I had to write this (this is what an AI would also say lol), but it came from deep down.
I’m not an AI hater anymore. I’m not an AI-crypto-bro either. But the reality has changed in the past few months, and denying it won’t help me.
So don’t be surprised if the day I come back with tech articles or videos, they are about AI.
As for my writing, as I said, that will still be me. Maybe it’s a mistake and society will embrace AI books. But I still want to express myself and put MY thoughts on paper. I’m still okay with typo fixes and critiques, so I can learn and improve. Time will tell how wrong this is.
This essay starts with doom, and a lot of that doom stays. But I’m done pretending I can sit this out. I’m choosing to engage anyway and enjoy it along the way.