The (negative) Impact of AI on Humans
AI, probably the buzzword of the last few years, is at the forefront, for better or worse. Big tech wants us to believe that using their LLMs and coding agents makes us so much more productive that we no longer need many programmers and software engineers. With the click of a button, everyone can create prototypes, full-scale apps, or even a SaaS. While LLMs do, in my opinion, change the industry in some sense, they cannot replace everything, and humans are still the backbone of the development process. Much can be said about the downsides of LLMs and coding agents: security issues, code quality, cost, dependence, and so on. In this article, I want to focus on the point that matters most to me personally: learning.
GitHub CEO: manual coding remains key despite AI boom
Most tech CEOs don’t really want to say this, but the GitHub CEO recently did: LLMs are not going to replace (all) manual coding. If we see thousands of layoffs, only for the company to then announce an investment of tens of billions of dollars into countries in Asia, we can safely assume that even they know. Telling shareholders that it’s AI that saves them money is at least a bit dishonest.
The CEO of GitHub states:
The transformation aligns with historical patterns in software development where new tools and abstractions change how developers work without eliminating the need for human ingenuity
and
the most successful implementations will be those that augment rather than replace developer expertise
This is an opinion I also stand by.
Neutral opinions
One thing is for sure: trusting tech companies’ claims about how much time their products save is definitely not an objective way to tell how influential AI is on our workflow.
It’s difficult to determine how much we can actually benefit from LLMs, but looking at somewhat neutral sources is the best we can do.
Deloitte wrote on their blog:
Gen AI coding tools are often seen as more like autocorrect: A tool that is used dozens of times per day and enhances productivity by roughly 10 to 20 minutes per day.
This seems like a low number. Whether a parking spot was available in the morning, or even the quality of lunch, might have a bigger influence on someone’s day.
But still, 10 minutes are 10 minutes, so what’s the harm? Well, let’s look at the downside of this time saving.
Current research #1
Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task is a paper about the cognitive debt when using an LLM to write essays.
Participants were tasked with writing an essay and were divided into three groups: 1. brain-only, 2. search-engine usage, 3. LLM usage.
LLM diminished users’ inclination to critically evaluate the LLM’s output or ”opinions”
No surprise here: using LLMs for writing essays results in a decrease in critical thinking and critical evaluation of the written content.
Echo chamber effect: most of the essays were alike
Human teachers “closed the loop” by detecting the LLM-generated essays, as they recognized the conventional structure and homogeneity of the delivered points for each essay within the topic and group
Thinking critically and being creative are the most important assets we as humans have. Why would we let a machine take over?
Essays written with the help of LLM carried a lesser significance or value to the participants, as they spent less time on writing, and mostly failed to provide a quote from their essays
The study actually used EEG (electroencephalography) to measure brain activity, but this sentence alone should really make you think. They can’t quote a single sentence from their own essay? I’m sure they learned a lot.
Brain-only group reported higher satisfaction and demonstrated higher brain connectivity
This should be the goal. Not only learning while working, but also being satisfied with your work.
Current research #2
Need another source? Here we go!
The Impact of Large Language Models on Programming Education and Student Learning Outcomes
Our results reveal a significant negative correlation between increased LLM reliance for critical thinking-intensive tasks such as code generation and debugging and lower final grades. Furthermore, a downward trend in final grades is observed with increased average LLM use across all tasks.
Using LLMs for studying results in lower final grades. Probably not surprising after the last section. In my opinion, it really does make you think, though: after working for 5 years, do you really want to have only, let’s say, 2.5 years’ worth of experience? This should not be your goal, and it makes you less qualified and less satisfied overall.
My message to the world
You probably already know what I want to say, but here it goes anyway: AI can help, but it is not the single solution to all problems. Use it for what it’s good at, and leave the rest to the human brain.
Here are two questions to live by:
- Am I just lazy, or does using LLMs actually save me time in this process?
- Can or should I learn something by doing it myself?
Always ask yourself these two questions, and I guarantee you are not going to be replaced by AI. Developers who know their stuff will always be in demand.
Last quote:
GitHub CEO Thomas Dohmke: The companies that are the smartest are going to hire more developers