This Is Your Brain on Generative AI
🗓️ 2025-06-04

Let's talk about generative AI. The anecdotes are piling up left and right. They must now be shared with you in a probably-vain effort to prevent at least one person from pressing enter on their next ChatGPT query without a careful re-examination of the current state of AI.
Around this time two years ago, I was starting my last class for my master's at UPenn: Natural Language Processing (NLP). I had just finished the intro Artificial Intelligence course in the prior semester and was itching to dive deeper into what makes a system like ChatGPT tick, among other related topics like the history of linguistics.
I'm not sure if we were quite at the point yet where folks had, for the most part, "taken sides" in the AI discourse; I think we were still closer to something like the early internet phase, where we're all just trying to figure out what the heck this thing does and what it really means for our daily lives and futures. (There's no doubt that early warning signs regarding AI were already blaring from creative communities like artists and writers, and we should recognize that their concerns have largely borne out in reality.)
I figured that in the best case after taking this NLP course, I would be advancing in knowledge that would serve me well in the rest of my career endeavors; in the worst case, I was placing the bet that knowing how generative AI worked under the hood would serve me well in making moral determinations and helping others understand what we're dealing with. This was hardly an existential range of scenarios for me personally, but I'm confident now that something closer to the worst case is playing out.
I have three huge concerns that I keep seeing manifest in the wild:
- The energy/productivity tradeoff of generative AI is nowhere near being an even trade
- People are forfeiting the ability to think critically (we should be treating our brains like muscles)
- People are further losing the ability to communicate with other humans, in an era where the internet + smartphone combo has already led us to all-time highs in isolation 🚨
Some of this post is anecdotal because I want to convey how inundated I am with generative AI stories, both from tech-forward and tech-neutral friend circles, in an effort to show real-world examples of these three concerns. I will defer to the real researchers in drawing scientific conclusions that back up these stories (spoiler: they already are). If you're looking for more of a primer, many others have done great work in explaining the inner workings of ChatGPT and other generative AIs at this point. This recent podcast episode by Alie Ward is a great place to start.
Forgetting How to Think
If you're unfamiliar with GitHub Copilot, it can write code snippets for you based on a natural language request. If I say "write a small Python function that adds two numbers", it could capably give something like this (which was not AI-generated):
def add_two_numbers(a: int, b: int) -> int:
    return a + b
This isn't limited to Copilot: ChatGPT and most other agents can generate code like this as well now.
I already know too many software engineers who partially regret their use of these tools. For the sake of brevity, if I were to take the "average" of these stories and combine them into a single experience (not entirely unlike a Large Language Model would do 😉), it would go something like this:
- One month in: "man, this thing was really helpful in writing some boilerplate starter code that I could build on top of. Saved me a bunch of time!"
- Three months in: "I'm using AI to generate a good chunk of my code now."
- One year in: "I'm so screwed. I got moved to a new team, I cannot get acclimated to their complex codebase, and AI snippets aren't working. Have I forgotten how to think?"
It would be a catastrophic outcome for society if generative AI were to un-train part of a working class that has already achieved all-time record levels of productivity. But in these stories, it sounds like that's precisely what's happening.
And the damage isnât limited to just workers. We also have ample evidence already that students are leaning on AI more than ever:
Absolutely cooked.
— Łink. (@link.spacelawshitpost.me) May 13, 2025 at 7:48 PM
And this is putting a strain on both the professors grading the work and the students not using AI, who now have to prove their work is genuine.
ChatGPT, like all of the capital innovations of the last, let's say twenty years, is better understood not as something that *eliminates* work, but rather as something that *moves* work onto someone else.
— (VFP) Strategic Gravitas Reserve (@braak.bsky.social) May 17, 2025 at 9:24 AM
Don't count on "AI checkers" to resolve this situation, either. They're still so ineffective that they claim that the literal Book of Genesis is AI-generated.
Corporate Wants to Hear from You
Since I finished the NLP course and my degree, the corporate environments I've worked in or heard about from friends have all been pushing AI at something like a parabolic pace. Management implementing a soft mandate that employees try to use agents for writing more code. A backend engineer receiving nothing but "complete machine-generated garbage" from a frontend team member, output that is actually "dampening our team's efficiency, if anything". What's going on here?
We have some clues to follow. I reside in the United States, so those of us living here have the luxury of always knowing why Corporate wants us to do something: profits.
We're approaching a point where tools untouched by AI are becoming rarer than tools with an AI assistant. A sampling:
- Gmail has Gemini built-in now, with an avalanche of features from speech translation to image generation.
- JIRA has a Rovo Chat button at the top right corner that can answer questions like "What should I work on next?" (Isn't this the entire job of management, to provide this kind of direction?)
- Slack recently launched "Slack AI", which promises to "Speed things up and save time with powerfully simple AI, right where you need it." It's effectively AI chat search (Slack already had excellent search capability; effective searching is just a tragically underrated skill), plus AI summarization (which research indicates is getting worse).
- Third-party tools like Unblocked can be called upon in Slack to "help you understand the nuances and specifics of your codebase — how it works, why it was written, and why it works the way it does." I'm somewhat heartened and surprised to at least see this disclaimer on the Unblocked page:
Disclaimer: Unblocked utilizes LLMs to generate responses, which may result in occasional inaccurate or nonsensical responses.
So to recap:
- we've traded human interaction and all of the social/coworking gains of knowledge exchange, for…more chatbots
- during recall or summarization tasks, the chatbots can be just as incorrect as a human colleague with below-average memory
- perhaps most importantly, the chatbot can be not just wrong, but confidently wrong, so hopefully you have some sharp colleagues who will catch the misinformation before it spreads (colleagues you probably should have just taken two seconds to bother for the right answer in the first place)
In practice, I've seen a few dozen colleagues call upon AI assistants at this point, whether in public channels or shared in DMs, and I always make a point to record their reactions (was this helpful or not?) and to attempt to understand the AI answer myself. There are cases where the response will have the tidbit of context someone was hunting for, and there are cases where it will discuss a different matter entirely, or just straight up lie.
No, seriously: ChatGPT will lie in fantastic ways, with abandon.
This would almost certainly be disqualifying or fireable behavior coming from any newly-hired employee…but for some reason, we have no problem deploying this behavior in robot form instead of human form.
So why on earth would we accept this awful trade agreement? Well, back to the opener: it's profits. Management feels that they are tantalizingly close to eliminating one of the greatest costs of doing business, which is paying employees. If they can convince employees to provide just a little more training data to these tools in the form of "this answer was great/this answer sucked" feedback, perhaps the AI can reach that critical point where it's at least as good as the average salaried employee, one who they no longer need to employ. But rest assured that generative AI is not quite there yet, which is why we're getting examples of companies exploding once it's revealed that their AI was just 700 employees in a gigantic trenchcoat. We as workers are still critical to this equation (and in my estimation, we will remain critical for years to come).
These issues with AI functionality are all in addition to the outsized energy cost of creating and using these models, which I personally referenced when answering a recent employee survey question asking "what's holding you back from using AI tools at work today?":
If they're to consume 10x-100x the energy of a normal search, they should be able to 10x-100x my productivity correspondingly (they do not). Otherwise, they need to be on a path to reduce energy consumption per query/training cycle, and I do not believe the US-based AI tools (or hardware providers like Nvidia) are on a trajectory to accomplish this anytime soon. Further developments like the recent DeepSeek efficiency gains are a prerequisite to me considering AI tool adoption in my workflow (I suspect that we'll require a new academic breakthrough like the transformer architecture discoveries that initially kicked off this hype cycle.)
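To make that breakeven logic concrete, here is a minimal sketch in Python. Every figure in it is an assumption for illustration only (the ~10x energy gap loosely reflects commonly cited LLM-query-versus-web-search estimates, and the function and variable names are mine), not a measurement:

# Back-of-envelope: how much productivity would "pay for" the extra energy?
# All figures below are illustrative assumptions, not measurements.

SEARCH_ENERGY_WH = 0.3    # assumed energy cost of a conventional web search
AI_QUERY_ENERGY_WH = 3.0  # assumed ~10x cost of a single LLM query

def breakeven_multiplier(ai_wh: float, baseline_wh: float) -> float:
    """Productivity multiplier needed for the energy trade to break even."""
    return ai_wh / baseline_wh

needed = breakeven_multiplier(AI_QUERY_ENERGY_WH, SEARCH_ENERGY_WH)  # 10.0
observed = 1.2  # assumed (generous) real-world productivity gain per query

print(f"needed: {needed:.0f}x, observed: {observed}x, even trade: {observed >= needed}")

Under these assumed numbers, the tool would need to make each query roughly 10x more productive just to break even on energy, and nothing close to that is what I've observed.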
Now with all of this AI hype in mind, you might be thinking that a company such as OpenAI must be extracting massive profits from other businesses at this point, as they continue their B2B crusade to push their AI models onto white-collar workers in every industry. And you'd be extremely wrong, according to Ed Zitron:
Based on previous estimates, OpenAI spends about $2.25 to make $1. At that rate, it's likely that OpenAI's costs in its rosiest revenue projections of $12.7 billion are at least $28 billion — meaning that it's on course to burn at least $14 billion in 2025.
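The arithmetic in that quote is easy to sanity-check with just the two figures Ed gives (the variable names here are mine, for illustration):

# Sanity check of the quoted figures: $2.25 of cost for every $1 of revenue
cost_per_revenue_dollar = 2.25
projected_revenue_b = 12.7  # OpenAI's rosiest 2025 revenue projection, in $B

projected_costs_b = projected_revenue_b * cost_per_revenue_dollar  # ~28.6
projected_burn_b = projected_costs_b - projected_revenue_b         # ~15.9

print(f"costs: ~${projected_costs_b:.1f}B, burn: ~${projected_burn_b:.1f}B")
# consistent with "at least $28 billion" in costs and "at least $14 billion" burned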
It should alarm us all that some of the world's richest investors continue to incinerate cash at this scale. The closest analog I can think of in recent memory is Uber, which also burned a tremendous amount of cash to give us all artificially cheap car rides for years. Then, once they finally had enough market control (and had successfully suppressed any significant driver earnings), they basically raised their prices to the level of a normal taxi cab…which in reality was always the price they needed to charge in order to be a profitable business.
But the generative AI takeover is much more far-reaching than ride-sharing at this point. The world's premier GPU supplier, Nvidia, has grown into a multi-trillion-dollar market cap thanks to generative AI's intense demand for powerful hardware. Google and Microsoft are baking AI into every nook and cranny of their office productivity suites, which are used across the vast majority of U.S. corporations. Investors are clearly banking on companies like OpenAI eventually reaching a critical mass of pricing power, when the world's most powerful companies form an unbreakable dependence on AI models that will yield unfathomable riches. The next phase of this market domination journey is "one app to rule them all," a vision of the future where all we'll need is a single app to access our money, conversations, documents…everything, undoubtedly with an AI assistant in tow. The biggest tech personalities share this vision, but the lesser names want in too.
Once more from Ed, with an apt summation not just of the state of generative AI, but of capitalism itself:
Its future is dependent — and this is not an opinion, but objective fact — on effectively infinite resources.
And yet…this isn't even my greatest concern with generative AI right now.
Losing our Humanity
In too many instances like the ones above, we seem perfectly willing to toss aside human interaction for the cold embrace of a ChatGPT "bestie". This is the most disturbing development to me. And it's being actively encouraged by the usual suspects: U.S. big tech overlords who want to continue consuming as much of the pie as they possibly can, and hype-train personalities looking for a boost in their social media impressions. Examples:
- Zuck wants most of your friends to be AI.
- Multi-millionaire entrepreneurs think "your grandkids will marry an AI human."
- Even the most bullish AI communities are having to ban deluded users who claim they're getting answers to the universe from their LLM of choice. "It's unfortunate how many mentally unwell people are attracted to the topic of AI. I can see it getting worse before it gets better."
We're way out over our skis, and yet we're still being bombarded by these kinds of stories from all angles. We've collectively signed humanity up for a "How to Get By Without Thinking" course at a time when we need thoughtfulness more than ever, all while some are claiming that AI will dominate academic research "pretty soon". (Who do we think will be doing the research to achieve such a feat in the first place???)
I'm astounded by how often I now see someone advocating for their own demotion, pay decrease, and/or job loss. Cory Doctorow described this phenomenon beautifully in a recent essay:
In modern automation/labor theory, this debate is framed in terms of "centaurs" (humans who are assisted by technology) and "reverse-centaurs" (humans who are conscripted to assist technology)…
There are plenty of workers who are excited at the thought of using AI tools to relieve them of some drudgework. To the extent that these workers have power over their bosses and their working conditions, that excitement might well be justified. I hear a lot from programmers who work on their own projects about how nice it is to have a kind of hypertrophied macro system that can generate and tweak little automated tools on the fly so the humans can focus on the real, chewy challenges. Those workers are the centaurs, and it's no wonder that they're excited about improved tooling.
But the reverse-centaur version is a lot darker. The reverse-centaur coder is an assistant to the AI, charged with being a "human in the loop" who reviews the material that the AI produces. This is a pretty terrible job to have.
And yet, day after day, I see and hear examples of this "reverse-centaur" surrender. "It sure would be cool to get AI to do this thing for me, and then I can just review its work." (If you have an AI channel in your work chat, look there; I bet you'll find a case fitting this description within 60 seconds.) Even lawyers are already doing this, at their own peril.
Why are we advocating for our own disposability? Are people hearing themselves when they produce these "ideas", or even thinking about what happens one step further down this road? You are handing over the value of your humanity to ChatGPT and your employer, and they cannot wait to cash the check.
What Is Going On?
There must be an explanation, so I present what I presume will be an unpopular theory about why this is happening. The people most likely to advocate for their own destruction like this…are doing an unfulfilling, unimportant job. The things they create at work each day (whether it's spreadsheets, slide decks, chunks of code, or support tickets) are subsumed by the Corporate America machine and converted to profit. Once wages are paid, there's virtually nothing of value to society left behind. (Maybe a future employee gets a marginal benefit from the work being well documented with appropriate context.)
That's it. The dark reason that a viable "AI assistant" seems within reach to these employees is that they have no ethical reservations about using LLMs in their current state, and they do not care that what they create at work is no longer the product of their own humanity; they only care that it helps them achieve a passable enough work product to earn this month's salary, with no regard for their own future. In short, it's one-step thinking (where Step Two is job loss, and Step Three is trying to update your resume to convince a different employer to hire you instead of ChatGPT).
It's escapism. It's belief in a flawed fantasy that they suddenly have more agency over their workload; ironically, the opposite is true. They're handing away what little control they have.
By contrast, the fiery pushback against generative AI that we see from communities of artists, writers, photographers, and other creative professionals…is precisely because they DO care about what they create. It's not just a means to an end for them; it's everything. It's their art. Art is eternal, and life is short. The entire concept of generative AI being used for art is offensive enough (just ask Miyazaki) before we even start considering that many generative AI models only work at all because of the unauthorized use of artists' creative property in the first place.
Are we not supposed to create things anymore? Are all of our business proposals and essays destined to be regurgitated from a statistically significant sample of other people's words from the past, instead of from our own minds? Can we not be bothered to engage in the creative process? We were taught in the earliest ages of public school to absorb ideas, then bring them back out into the world with our own flair. The drawings we illustrated, the persuasive speeches we wrote, the games we invented at recess…does any of that matter anymore?
If society no longer values authentic creation of works by human hands, then what are we even doing here? I'm nowhere close to a philosophy buff, but even I can lean on Descartes for this one: it's "I think, therefore I am". If it becomes "I cannot think"…then what?
We must decide, both as individuals and as a collective, what we value. I've worked on projects centered around ChatGPT and interacted with the model through coursework. I don't completely rule out that there's some sort of responsible path forward with this tech, one that avoids further serious damage to society. But we are not on that path. I won't stop being an AI skeptic until the industry takes its potentially destructive footprint more seriously, both in the way it consumes the finite resources of our only Earth and in how it affects people through the ways we work, live, and interact with each other.
And even if I could put aside the natural and societal impacts…I've always wanted my own ideas documented for the world to see, ever since my earliest writings. This will never change. The alternative just doesn't seem as fun.