This Is Your Brain on Generative AI

🗓️ 2025-06-04

[Image: a drawing of my childhood persona, Mr. Man, wearing a long-sleeve tie-dye shirt and long green pants, making the classic Hand Behind Head anime pose with a single sweat drop on the left side of his forehead.]
me, ~2010

Let’s talk about generative AI. The anecdotes are piling up left and right. They must now be shared with you in a probably-vain effort to prevent at least one person from pressing enter on their next ChatGPT query without a careful re-examination of the current state of AI.

Around this time two years ago, I was starting my last class for my master’s at UPenn: Natural Language Processing (NLP). I had just finished the intro Artificial Intelligence course in the prior semester and was itching to dive deeper into what makes a system like ChatGPT tick, among other related topics like the history of linguistics.

I’m not sure if we were quite at the point yet where folks had, for the most part, “taken sides” in the AI discourse; I think we were still closer to something like the early internet phase, where we’re all just trying to figure out what the heck this thing does and what it really means for our daily lives and futures. (There’s no doubt that early warning signs regarding AI were already blaring from creative communities like artists and writers, and we should recognize that their concerns have largely borne out in reality.)

I figured that in the best case after taking this NLP course, I would be advancing in knowledge that would serve me well in the rest of my career endeavors; in the worst case, I was placing the bet that knowing how generative AI worked under the hood would serve me well in making moral determinations and helping others understand what we’re dealing with. This was hardly an existential range of scenarios for me personally, but I’m confident now that something closer to the worst case is playing out.

I have three huge concerns that I keep seeing manifest in the wild:

  1. The energy/productivity tradeoff of generative AI is nowhere near an even trade
  2. People are forfeiting the ability to think critically (we should be treating our brains like muscles)
  3. People are further losing the ability to communicate with other humans, in an era where the internet + smartphone combo has already led us to all-time highs in isolation 🚨

Some of this post is anecdotal because I want to convey how inundated I am with generative AI stories, both from tech-forward and tech-neutral friend circles, in an effort to show real-world examples of these three concerns. I will defer to the real researchers in drawing scientific conclusions that back up these stories (spoiler: they already are). If you’re looking for more of a primer, many others have done great work in explaining the inner workings of ChatGPT and other generative AIs at this point. This recent podcast episode by Alie Ward is a great place to start.

Forgetting How to Think

If you’re unfamiliar with GitHub Copilot, it can write code snippets for you based on a natural language request. If I say “write a small Python function that adds two numbers”, it could capably give something like this (which was not AI-generated):

def add_two_numbers(a: int, b: int) -> int:
    return a + b

This isn’t limited to Copilot: ChatGPT and most other agents can generate code like this as well now.

I already know too many software engineers who partially regret their use of these tools. For the sake of brevity, if I were to take the “average” of these stories and combine them into a single experience (not entirely unlike what a Large Language Model would do 🙂), it would go something like this:

  • One month in: “man, this thing was really helpful in writing some boilerplate starter code that I could build on top of. Saved me a bunch of time!”
  • Three months in: “I’m using AI to generate a good chunk of my code now.”
  • One year in: “I’m so screwed. I got moved to a new team, I cannot get acclimated to their complex codebase, and AI snippets aren’t working. Have I forgotten how to think?”

It would be a catastrophic outcome for society if generative AI were to un-train part of a working class that has already achieved all-time record levels of productivity. But in these stories, it sounds like that’s precisely what’s happening.

And the damage isn’t limited to just workers. We also have ample evidence already that students are leaning on AI more than ever:

And this is putting a strain both on professors grading the work and on students not using AI who are trying to prove their work is genuine.

ChatGPT, like all of the capital innovations of the last, let's say twenty years, is better understood not as something that *eliminates* work, but rather as something that *moves* work onto someone else.

— (VFP) Strategic Gravitas Reserve (@braak.bsky.social) May 17, 2025 at 9:24 AM

Don’t count on “AI checkers” to resolve this situation, either. They’re still so ineffective that they claim the literal Book of Genesis is AI-generated.

Corporate Wants to Hear from You

Since I finished the NLP course and my degree, the corporate environments I’ve worked in or heard about from friends have all been pushing AI at something like a parabolic pace. Management implementing a soft mandate that employees try to use agents for writing more code. A backend engineer receiving nothing but “complete machine-generated garbage” from a frontend team member, output that is actually “dampening our team’s efficiency, if anything”. What’s going on here?

We have some clues to follow. I reside in the United States, so those of us living here have the luxury of always knowing why Corporate wants us to do something: profits.

We’re approaching a point where tools untouched by AI are outnumbered by those with an AI assistant built in. A sampling:

  • Gmail has Gemini built-in now, with an avalanche of features from speech translation to image generation.
  • JIRA has a Rovo Chat button at the top right corner that can answer questions like “What should I work on next?” (Isn’t this the entire job of management, to provide this kind of direction?)
  • Slack recently launched “Slack AI”, which promises to “Speed things up and save time with powerfully simple AI, right where you need it.” It’s effectively AI chat search (Slack already had excellent search capability — effective searching is just a tragically underrated skill), plus AI summarization (which research indicates is getting worse).
  • Third-party tools like Unblocked can be called upon in Slack to “help you understand the nuances and specifics of your codebase — how it works, why it was written, and why it works the way it does.” I’m somewhat heartened and surprised to at least see this Disclaimer on the Unblocked page:

Disclaimer: Unblocked utilizes LLMs to generate responses, which may result in occasional inaccurate or nonsensical responses.

So to recap:

  • we’ve traded human interaction and all of the social/coworking gains of knowledge exchange, for…more chatbots
  • during recall or summarization tasks, the chatbots can be just as incorrect as a human work colleague with a below-average memory capacity
  • perhaps most importantly, the chatbot can be not just wrong, but confidently wrong, so hopefully you have some sharp colleagues that will catch the misinformation before it spreads (colleagues that you probably should have just taken two seconds to bother for the right answer in the first place)

In practice, I’ve probably seen a few dozen colleagues call upon AI assistants at this point, whether in public channels or shared in DMs, and I always make a point to record their reactions (was this helpful or not?) and attempt to understand the AI answer myself. There are cases where the response will have the tidbit of context someone was hunting for, and there are cases where the response will discuss a different matter entirely, or just straight up lie.

No, seriously: ChatGPT will lie in fantastic ways, with abandon.

This would almost certainly be disqualifying or fireable behavior coming from any newly-hired employee…but for some reason, we have no problem deploying this behavior in robot-form instead of human-form.

So why on earth would we accept this awful trade agreement? Well, back to the opener: it’s profits. Management feels that they are tantalizingly close to eliminating one of the greatest costs of doing business, which is paying employees. If they can convince employees to provide just a little more training data to these tools in the form of “this answer was great/this answer sucked” feedback, perhaps the AI can reach that critical point where it’s at least as good as the average salaried employee, one whom they no longer need to employ. But rest assured that generative AI is not quite there yet, which is why we’re getting examples of companies imploding once it’s revealed that their AI was just 700 employees in a gigantic trenchcoat. We as workers are still critical to this equation (and in my estimation, we will remain critical for years to come).

These issues with AI functionality are all in addition to the outsized energy cost of creating and using these models, which I personally referenced when answering a recent employee survey question asking “what’s holding you back from using AI tools at work today?”:

If they’re to consume 10x-100x the energy of a normal search, they should be able to 10x-100x my productivity correspondingly (they do not). Otherwise, they need to be on a path to reduce energy consumption per query/training cycle, and I do not believe the US-based AI tools (or hardware providers like Nvidia) are on a trajectory to accomplish this anytime soon. Further developments like the recent DeepSeek efficiency gains are a prerequisite to me considering AI tool adoption in my workflow (I suspect that we’ll require a new academic breakthrough like the transformer architecture discoveries that initially kicked off this hype cycle.)
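
To put that tradeoff in concrete terms, here’s a minimal back-of-the-envelope sketch in Python. The per-query energy figures are rough, often-cited ballpark assumptions chosen purely for illustration, not measurements of any particular tool:

# Back-of-the-envelope energy/productivity break-even check.
# Both figures are assumed ballpark values, not measurements.
search_energy_wh = 0.3     # assumed energy of a conventional web search, in Wh
llm_query_energy_wh = 3.0  # assumed energy of a single generative AI query, in Wh

energy_multiplier = llm_query_energy_wh / search_energy_wh
print(f"Energy cost per query: ~{energy_multiplier:.0f}x a normal search")

# For the trade to be even, productivity would have to scale by the same factor.
print(f"Break-even productivity gain required: ~{energy_multiplier:.0f}x")

Under those assumed numbers, the break-even bar is a ~10x productivity gain per query; at the high end of the 10x-100x range, it’s 100x.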

Now with all of this AI hype in mind, you might be thinking that a company such as OpenAI must be extracting massive profits from other businesses at this point, as they continue their B2B crusade to push their AI models onto white-collar workers in every industry. And you’d be extremely wrong, according to Ed Zitron:

Based on previous estimates, OpenAI spends about $2.25 to make $1. At that rate, it’s likely that OpenAI’s costs in its rosiest revenue projections of $12.7 billion are at least $28 billion — meaning that it’s on course to burn at least $14 billion in 2025.
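
The arithmetic behind that estimate is simple enough to check on the back of an envelope. A quick sketch using only the figures from the quote (the $2.25-of-cost-per-$1-of-revenue ratio and the $12.7 billion revenue projection):

# Back-of-the-envelope check of the quoted figures (2025 projections).
cost_per_revenue_dollar = 2.25   # ~$2.25 spent to make $1
projected_revenue_b = 12.7       # rosiest revenue projection, in $ billions

projected_costs_b = projected_revenue_b * cost_per_revenue_dollar   # ~28.6
projected_burn_b = projected_costs_b - projected_revenue_b          # ~15.9

print(f"Estimated costs: ~${projected_costs_b:.1f} billion")  # consistent with "at least $28 billion"
print(f"Estimated burn:  ~${projected_burn_b:.1f} billion")   # consistent with "at least $14 billion"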

It should alarm us all that some of the world’s richest investors continue to incinerate cash at this scale. The closest analog I can think of in recent memory is Uber, which also burned a tremendous amount of cash to give us all artificially cheap car rides for years. Then, once they finally had enough market control (and successfully suppressed any significant driver earnings), they basically raised their prices to the level of a normal taxi cab…which in reality was always the price they needed to charge in order to be a profitable business.

But the generative AI takeover is much more far-reaching than ride-sharing at this point. The world’s premier GPU supplier, Nvidia, has grown into a multi-trillion-dollar market cap thanks to generative AI’s intense demand for powerful hardware. Google and Microsoft are baking AI into every nook and cranny of their office productivity suites, which are used across the vast majority of U.S. corporations. Investors are clearly banking on companies like OpenAI eventually reaching a critical mass of pricing power, when the world’s most powerful companies form an unbreakable dependence on AI models that will yield unfathomable riches. The next phase of this market domination journey is “one app to rule them all,” a vision of the future where all we’ll need is a single app to access our money, conversations, documents…everything, undoubtedly with an AI assistant in tow. The biggest tech personalities share this vision, but the lesser names want in too.

Once more from Ed — an apt summarization of not just the state of generative AI, but capitalism itself:

Its future is dependent — and this is not an opinion, but objective fact — on effectively infinite resources.

And yet…this isn’t even my greatest concern with generative AI right now.

Losing our Humanity

In too many instances like the ones above, we seem perfectly willing to toss aside human interaction for the cold embrace of a ChatGPT “bestie”. This is the most disturbing development to me. And it’s being actively encouraged by the usual suspects: U.S. big tech overlords that want to continue consuming as much of the pie as they possibly can, and hype-train personalities that are looking for a boost in their social media impressions. Examples:

We’re way out over our skis, and yet we’re still being bombarded by these kinds of stories from all angles. We’ve collectively signed humanity up for a “How to Get By Without Thinking” course at a time when we need thoughtfulness more than ever, all while some are claiming that AI will dominate academic research “pretty soon”. (Who do we think will be doing the research to achieve such a feat in the first place???)

I’m astounded by how often I now see someone advocating for their own demotion, pay decrease, and/or job loss. Cory Doctorow described this phenomenon beautifully in a recent essay:

In modern automation/labor theory, this debate is framed in terms of “centaurs” (humans who are assisted by technology) and “reverse-centaurs” (humans who are conscripted to assist technology)…

There are plenty of workers who are excited at the thought of using AI tools to relieve them of some drudgework. To the extent that these workers have power over their bosses and their working conditions, that excitement might well be justified. I hear a lot from programmers who work on their own projects about how nice it is to have a kind of hypertrophied macro system that can generate and tweak little automated tools on the fly so the humans can focus on the real, chewy challenges. Those workers are the centaurs, and it’s no wonder that they’re excited about improved tooling.

But the reverse-centaur version is a lot darker. The reverse-centaur coder is an assistant to the AI, charged with being a “human in the loop” who reviews the material that the AI produces. This is a pretty terrible job to have.

And yet, day after day, I see and hear examples of this “reverse-centaur” surrender. “It sure would be cool to get AI to do this thing for me, and then I can just review its work.” (If you have an AI channel in your work chat, look there — I bet you’ll find a case fitting this description within 60 seconds.) Even lawyers are already doing this, at their own peril.

Why are we advocating for our own disposability? Are people hearing themselves when they produce these “ideas”, or even thinking about what happens one step further down this road? You are handing over the value of your humanity to ChatGPT and your employer, and they cannot wait to cash the check.

What Is Going On?

There must be an explanation, so I present to you what I presume will be an unpopular theory about why this is happening. The people most likely to advocate for their own destruction like this…are doing an unfulfilling, unimportant job. The things they create at work each day — whether it’s spreadsheets, slide decks, chunks of code, or support tickets — are subsumed by the Corporate America machine and converted to profit. Once wages are paid, there’s virtually nothing of value to society left behind. (Maybe a future employee gets a marginal benefit from the work being well documented with appropriate context).

That’s it. The dark reason that a viable “AI assistant” seems within reach to these employees is because they have no ethical reservations about using LLMs in their current state, and they do not care that what they are creating at work is no longer the product of their own humanity; they only care that it helps them achieve a passable enough work product to earn this month’s salary, with no regard for their own future. In short, it’s one-step thinking (where Step Two is job loss, and Step Three is trying to update your resume to convince a different employer to hire you instead of ChatGPT).

It’s escapism. It’s belief in a flawed fantasy that they suddenly have more agency over their workload — ironically, the opposite is true. They’re handing what little control they have away.

By contrast, the fiery pushback against generative AI that we see from communities of artists, writers, photographers, and other creative professionals…is precisely because they DO care about what they create. It’s not just a means to an end for them — it’s everything. It’s their art. Art is eternal, and life is short. The entire concept of generative AI being used for art is offensive enough (just ask Miyazaki) before we even start considering that many generative AI models only work at all because of the unauthorized use of artists’ creative property in the first place.

Are we not supposed to create things anymore? Are all of our business proposals and essays destined to be regurgitated from a statistically significant sample of other people’s words from the past, instead of from our own minds? Can we not be bothered to engage in the creative process? We were taught in the earliest ages of public school to absorb ideas, then bring them back out into the world with our own flair. The drawings we illustrated, the persuasive speeches we wrote, the games we invented at recess…does any of that matter anymore?

If society no longer values authentic creation of works by human hands, then what are we even doing here? I’m nowhere close to a philosophy buff, but even I can lean on Descartes for this one: it’s “I think, therefore I am”. If it becomes “I cannot think”…then what?

We must decide, both as individuals and as a collective, what we value. I’ve worked on projects centered around ChatGPT and interacted with the model through coursework. I don’t completely rule out that there’s some sort of responsible path forward with this tech, one that avoids further serious damage to society. But we are not on that path. I won’t stop being an AI skeptic until the industry takes its potentially destructive footprint more seriously, both in the way it consumes the finite resources of our only Earth and in how it affects people through the ways we work, live, and interact with each other.

And even if I could put aside the natural and societal impacts…I’ve always wanted my own ideas documented for the world to see, ever since my earliest writings. This will never change. The alternative just doesn’t seem as fun.