Please Shut Up About AI
"Please don't call it 'AI', these are 'Large Language Models', nothing more!" he screamed into the void, it being far too late to change
This article is intended to be sent to both AI evangelists and skeptics. The message is the same: please, shut up already.
I've seen a few trends rise and fall over the course of my career in tech, and for the first time I work full-time on a topic right in the hype cycle. Not only that, but I also work on a specific subject that happens to cause people to involuntarily start talking over me about how great or terrible my work is: AI legal research assistance.
If you take nothing else from this article, I want you to understand these two points:
- When reading any media on AI, start by telling yourself "It's definitely not as good as they say it is, but it's definitely better than they say it isn't."
- AI will make you faster at what you're doing, but it will not make you better.
To explain the first bullet point, let's start with the reality: AI technology is good, and it continues to get better. Large language models opened Pandora's box, and there's no going back. If you're a software engineer in a large portion of the industry and you start using Copilot, you will be able to write code faster.
But how good is AI really? Well, it can be difficult to say. That isn't because the answer is difficult to understand; it's because it is incredibly difficult to find a straight answer to anything online. The reason for this, I believe, is a unique mix of circumstances that makes AI stand out from the trends we've seen in the past.
Circumstance 1: AI Actually Works
The first reason for the confusion is straightforward. AI works. It does things that were considered science fiction less than 5 years ago. Our lizard brains are incredible at adapting to our surroundings, so it's easy to forget this. I remember when ChatGPT was first released, and I showed it to my friends. I remember opening the chatbox and typing "Write me a story about a beagle and bunny that go to the movies together, in the style of Dr. Seuss". People's jaws hit the floor.
Oh my god it even rhymes!
This is different from trends in the past. Remember the cryptocurrency hype cycle of the 2010s? Aside from being a convenient way to order drugs on the internet, the concepts of NFTs, DAOs, and other decentralized platforms never truly solidified. I know the price of Bitcoin is regularly skyrocketing, but this is caused by hyperstition: Bitcoin goes up because everyone believes it will go up. Crypto produces little, if any, value to the world, and the average person nowadays equates the word "crypto" with "scam" and "get-rich-quick scheme". I can spend months in the tech world without bumping into it, because it simply isn't a very disruptive piece of technology.
On the contrary, ask your local high school English teachers about AI. Hell, look at the academic dishonesty section on the syllabus of almost any college course. AI cannot be ignored in the same way, and this goes beyond academia. For better or worse, it's causing major disruptions in the way our society traditionally functions.
Circumstance 2: AI News Is Largely Run By Marketing
If you're unfortunate enough to have to browse LinkedIn and also work in tech, you will likely see no shortage of hot takes about how great AI is and its incredible "transformative nature" and other bullshit terminology that doesn't mean anything. You see allegedly credible people making grand claims. From the CTO of OpenAI claiming that ChatGPT will have PhD-level intelligence to Mark Zuckerberg claiming that AI will obsolete the need for mid-level engineers, it is easy to get sucked into the hype cycle and think that we're on the cusp of reaching some form of true artificial general intelligence. But this is where the first half of bullet point one applies.
It's definitely not as good as they say it is
Many CEO tech bros and billionaires in Silicon Valley desperately want to fire all their staff except the female secretary they find attractive and make bank from an army of AI minions. It's a utopian vision for them, and because of circumstance 1 (AI actually works) it's exceedingly easy to convince the venture capital part of the industry to give them money. This has fueled a vicious cycle: if the biggest players are all stating that AGI is right around the corner, then to secure funding your glorified ChatGPT wrapper needs to do the same. This effect, in my opinion, is reaching a critical point with the Stargate project the US government announced. We are essentially throwing $500 billion at OpenAI to boil the ocean even harder in pursuit of a better language model. While the Bitter Lesson teaches us that mountains of compute regularly win in the world of AI, models like DeepSeek R1 illustrate that the progress these companies are making is perhaps not as computationally expensive as they claim it needs to be. This inundation of over-promised results leads us to the third and final circumstance that makes learning about AI progress so difficult.
Circumstance 3: Marketing Claims Can Be Extremely Easy to Debunk
This, I feel, is also an uncommon circumstance in hype cycles. At the peak of the crypto cycle, if I had told you "Mt. Gox, the largest crypto platform, isn't secure!", that was not something that was easy to prove or disprove. However, let's look back at that PhD claim made by the former CTO of OpenAI and see what the exact quote was:
"If you look at the trajectory of improvement, systems like GPT-3 were maybe toddler-level intelligence," Murati says. "And then systems like GPT-4 are more like smart high-schooler intelligence. And then, in the next couple of years, we're looking at Ph.D. intelligence for specific tasks. Things are changing and improving pretty rapidly."
Smart high-schooler, you say? Then why can't it tell me how many r's are in "strawberry"?
Although models have since begun to overcome this particular anecdote, there's an endless wealth of these retorts, and they are completely valid given the claims from those in circumstance 2. The companies at the top also aren't the only ones the AI skeptics can easily point to.
Remember the Rabbit R1? With its wrapper-to-ChatGPT "Large Action Model" that would let you interface with technology entirely through natural language? Well, it barely worked, and it turns out you don't need an extra device in your pocket.
Ok, but what about software development? Didn't you see how Devin's wrapper-to-ChatGPT integrated development environment enables you to manage an entire software project without writing any code yourself? For the price of $500 a month, you can watch it spend 10 minutes trying to check out a branch, and then you have a 15% chance of it actually writing a meaningful snippet of code.
The current environment of AI products has created a wealth of punching bags, and a huge crowd has put on their gloves. This brings me to the second half of the first bullet point:
it's definitely better than they say it isn't.
Back when the "r's in strawberry" complaint was widely cited as proof that "AI is just a garbage stochastic parrot", there were many people who correctly pointed out why LLMs have a hard time solving that problem: models read text as subword tokens rather than individual letters, so counting characters is genuinely awkward for them. These people weren't trying to disprove the counter-narrative that AI is being overhyped; they are part of a third group who sit between the skeptics and the evangelists. This group is the smallest, and primarily composed of people who actually know what they're talking about, but because of the polarizing effect these circumstances have, the skeptics often immediately label them as apologists.
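To make the tokenization point concrete, here is a minimal sketch using OpenAI's open-source tiktoken tokenizer. The exact splits and token IDs in the comments are illustrative assumptions; they depend on which encoding you load:

```python
# Minimal sketch: what an LLM actually "sees" when you type "strawberry".
# Assumes the open-source tiktoken library (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models

tokens = enc.encode("strawberry")
print(tokens)                             # a short list of integer token IDs
print([enc.decode([t]) for t in tokens])  # e.g. chunks like 'str', 'aw', 'berry'

# The model operates on those opaque integer IDs, not on letters, so
# "how many r's are in strawberry" asks it to reason about characters
# it never directly observes.
```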
It can be exceedingly difficult to try and talk about the nuance of AI on the internet, because circumstance 3 has bred a rabid group of skeptics who love to jump into the discussion and immediately start screaming "hallucinations" like it's some kind of incredible "gotcha".
These three circumstances have created an extremely vocal minority of evangelists who make any incremental progress sound like a breakthrough. Then it is quickly debunked by people who are tired of hearing about OpenAI. However, people are debunking the grand claim, not the incremental progress. The truth lies between these boundaries: AI is not as good as people keep saying it is, but it's much better than these people like to say it isn't.
So How Good Is It Really?
I'm fortunate to work in a group that communicates the progress of generative AI through papers on arxiv.org instead of LinkedIn posts with emojis for bullet points. Working with LLMs across two radically different paradigms, legal research and software development, has led me to a simple conclusion:
AI will make you faster at what you're doing, but it will not make you better.
What does this mean? Well, at this point you can ask many a senior software engineer and hear frustrations about coworkers using Copilot to generate large volumes of garbage code. You can also find incidents of lawyers citing non-existent cases out of ChatGPT. If we apply the rationale above, the reason becomes obvious: the engineers sending in bad code weren't good engineers to begin with, and those lawyers were terrible lawyers to begin with. Sometimes you'll hear claims that sound like counter-examples of this, such as:
But look, I can't program, and with an LLM I made an HTML snake game in minutes!
The truth is, you actually could make that snake game. It could have been you that followed the tutorial whose output is being regurgitated into your chat window. It's just that it would have taken hours instead of minutes (assuming you don't blindly copy and paste). Software, and the information on how to write it, has developed to such a sophisticated point of abstraction that anyone can learn how to do it in less of their free time than they realize. The LLM didn't make you any better at writing software, and, though you may not have noticed, it also robbed you of gaining useful knowledge from the process. Circumstance 2 has caused the evangelists to do strange things because of this fact. When Mark Zuckerberg claimed that all of his mid-level engineers could be replaced by AI, he was saying you can have a human with near-zero experience writing the code, thus self-reporting to the world that mid-level engineers at Meta probably aren't very talented. I don't believe that's true, but because of circumstance 2 he's forced to make claims that are frankly insulting to many engineers in his own company.
AI sometimes makes people try to convince others that they're dumber than they really are, that they couldn't do the task without it, or that they're not needed for it. Other times it fools them into thinking they understand more than they do. They see problems get solved faster and work done sooner, and draw this false conclusion. This can have detrimental effects on the junior community, but going deeper on that topic warrants a separate post.
So, will installing Copilot make you a 10x engineer? No. Will it make you a worse engineer? No. You also won't become a great/terrible writer for using ChatGPT, or a great/terrible employee for trying the new AI feature that may or may not be useful. AI won't make someone worse and it won't make someone better; the increase in productivity will simply lead to higher visibility of their capabilities, because they're able to do more. If you're good at what you do, it will show. If you aren't good at what you do, it will also show.
Generative AI is an exciting new technology with a lot of future potential, and I want to explore it with everyone. So, evangelists and skeptics, please shut up. Let's make it easier to see what's really going on.
- Scronkfinkle