
🤐

Please Shut Up About AI

“Please don’t call it ‘AI’; these are ‘Large Language Models’, nothing more!” he screamed into the void, it being far too late to change the name

This article is intended to be sent to both AI evangelists and skeptics. The message is the same: please, shut up already.

I’ve seen a few trends rise and fall over the course of my career in tech, but for the first time I work full-time on a topic right in the middle of the hype cycle. Not only that, but I work on a specific subject that causes people to involuntarily start talking over me about how great or terrible my work is: AI legal research assistance.

If you take nothing else from this article, I want you to understand these two points:

  • When reading any media on AI, start by telling yourself: “It’s definitely not as good as they say it is, but it’s definitely better than they say it isn’t.”
  • AI will make you faster at what you’re doing, but it will not make you better.

To explain the first bullet point, let’s start with the reality: AI technology is good, and it continues to get better. Large language models opened Pandora’s box, and there’s no going back. If you’re a software engineer and you start using Copilot, you will, like a large portion of the industry, be able to write code faster.

But how good is AI really? Well, it can be difficult to say. Not because the answer is difficult to understand, but because it’s incredibly difficult to find a straight answer to anything online. The reason for this, I believe, is a unique mix of circumstances that makes this trend stand out from those we’ve seen in the past.

Circumstance 1: AI Actually Works

The first reason for the confusion is straightforward. AI works. It does things that were considered science fiction less than five years ago. Our lizard brains are incredibly good at adapting to our surroundings, so it’s easy to forget this. I remember when ChatGPT was first released and I showed it to my friends. I opened the chatbox and typed “Write me a story about a beagle and a bunny that go to the movies together, in the style of Dr. Seuss”. People’s jaws hit the floor.

Oh my god it even rhymes!

This is different from trends in the past. Remember the cryptocurrency hype cycle of the 2010s? Aside from being a convenient way to order drugs on the internet, the concepts of NFTs, DAOs, and other decentralized platforms never truly solidified. I know the price of bitcoin regularly skyrockets, but that’s hyperstition: bitcoin goes up because everyone believes it will go up. Crypto produces little, if any, value to the world, and the average person nowadays equates the word “crypto” with “scam” and “get rich quick scheme”. I can spend months in the tech world without bumping into it, because it simply isn’t a very disruptive piece of technology.

On the contrary, ask your local high school English teachers about AI. Hell, look at the academic dishonesty section on the syllabus of almost any college course. AI can’t be ignored to the same extent anywhere, and this goes beyond academia. For better or worse, it’s causing major disruptions in the way our society traditionally functions.

Circumstance 2: AI News Is Largely Run By Marketing

If you’re unfortunate enough to work in tech and have to browse LinkedIn, you will see no shortage of hot takes about how great AI is, its incredible “transformative nature”, and other bullshit terminology that doesn’t mean anything. You see allegedly credible people making grand claims. From the CTO of OpenAI claiming that ChatGPT will have PhD-level intelligence to Mark Zuckerberg claiming that AI will eliminate the need for mid-level engineers, it is easy to get sucked into the hype cycle and think we’re on the cusp of some form of true artificial general intelligence. But this is where the first half of bullet point one applies.

It’s definitely not as good as they say it is

Many CEO tech bros and billionaires in Silicon Valley desperately want to fire all their staff except the female secretary they find attractive and make bank from an army of AI minions. It’s a utopian vision for them, and because of circumstance 1 (AI actually works) it’s exceedingly easy to convince the venture capital side of the industry to give them money. This has fueled a vicious cycle: if the biggest players are all stating that AGI is right around the corner, then to secure funding your glorified ChatGPT wrapper needs to do the same. This effect, in my opinion, is reaching a critical point with the Stargate project the US government announced. We are essentially throwing $500 billion at OpenAI to boil the ocean even harder in pursuit of a better language model. While the bitter lesson teaches us that mountains of compute regularly win in the world of AI, models like DeepSeek R1 illustrate that the progress these companies are making is perhaps not as expensive as they claim it needs to be. This inundation of over-promised results leads us to the third and final circumstance that makes learning about AI progress so difficult.

Circumstance 3: Marketing Claims Can Be Extremely Easy to Debunk

This, I feel, is also an uncommon circumstance in hype cycles. At the peak of the crypto boom, if I had told you “Mt. Gox, the largest bitcoin exchange, isn’t secure!”, that was not something that was easy to prove or disprove. However, let’s look back at that PhD claim made by the former CTO of OpenAI and see what the exact quote was:

“If you look at the trajectory of improvement, systems like GPT-3 were maybe toddler-level intelligence,” Murati says. “And then systems like GPT-4 are more like smart high-schooler intelligence. And then, in the next couple of years, we’re looking at Ph.D. intelligence for specific tasks. Things are changing and improving pretty rapidly.”

Smart high-schooler, you say? Then why can’t it tell me how many r’s are in strawberry?

Although newer models have begun to put this particular anecdote to rest, there’s an endless wealth of these retorts, and they are completely valid given the claims from those in circumstance 2. The companies at the top also aren’t the only ones AI skeptics can easily point to.

Remember the Rabbit R1? With its wrapper-to-ChatGPT “Large Action Model” that would let you interface with technology entirely through natural language? Well, it barely worked, and it turns out you don’t need an extra device in your pocket.

Ok, but what about software development? Didn’t you see how Devin’s wrapper-to-ChatGPT integrated development environment lets you manage an entire software project without writing any code yourself? For the price of $500 a month you can watch it spend 10 minutes trying to check out a branch, and then there’s a 15% chance of it actually writing a meaningful snippet of code.

The current environment of AI products has created a wealth of punching bags, and a huge crowd has put on its gloves. This brings me to the second half of the first bullet point:

it’s definitely better than they say it isn’t.

Back when the “r in strawberry” complaint was widely cited as proof that “AI is just a garbage stochastic parrot”, there were many people who correctly pointed out why LLMs have a hard time solving that problem: the model sees text as multi-character tokens, not individual letters. They weren’t trying to disprove the counter-narrative that AI is being overhyped; they’re part of a third group of people who sit between the skeptics and evangelists. This group is the smallest, and primarily composed of people who actually know what they’re talking about, but because of the polarizing effect these circumstances have, the skeptics often immediately label them as apologists.
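To make that concrete, here’s a minimal sketch in Python using the open-source tiktoken tokenizer. The specific splits in the comments are illustrative assumptions; what you actually get depends on the encoding a given model uses:

    # Why "count the r's in strawberry" is hard for an LLM: the model never
    # sees letters, only byte-pair-encoded tokens.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # a GPT-4-era encoding

    token_ids = enc.encode("strawberry")
    pieces = [enc.decode([t]) for t in token_ids]

    print(token_ids)  # a short list of integer IDs, not ten characters
    print(pieces)     # multi-letter chunks, e.g. something like ['str', 'aw', 'berry']
    # The model reasons over chunks like these, so "how many r's?" asks about
    # a character-level representation it was never directly given.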

It can be exceedingly difficult to try and talk about the nuance of AI on the internet because circumstance 3 has bred a rabid group of skeptics who love to jump into the discussion and immediately start screaming “hallucinations” like it’s some kind of incredible “gotcha”.

These three circumstances have created an extremely vocal minority of evangelists who make any incremental progress sound like a breakthrough, whose claims are then quickly debunked by people who are tired of hearing about OpenAI. But those people are debunking the grand claim, not the incremental progress. The truth lies between these boundaries: AI is not as good as people keep saying it is, but it’s much better than these people like to say it isn’t.

So How Good Is It Really?

I’m fortunate to work in a group that communicates the progress of generative AI through papers on arxiv.org instead of LinkedIn posts with emojis for bullet points. Working with LLMs across two radically different paradigms, legal research and software development, has led me to a simple conclusion:

AI will make you faster at what you’re doing, but it will not make you better.

What does this mean? Well, at this point you can ask many a senior software engineer and hear frustrations about coworkers using Copilot to generate large volumes of garbage code. You can also find incidents of lawyers citing non-existent cases pulled out of ChatGPT. If we apply the rationale above, the reason becomes obvious: the engineers sending in bad code weren’t good engineers, and those were terrible lawyers to begin with. Sometimes you’ll hear claims that sound like counter-examples to this, such as:

But look I can’t program and with an LLM I made an HTML snake game in minutes!

The truth is, you actually could make that snake game. It could have been you following the tutorial whose output is being regurgitated into your chat window; it just would have taken hours instead of minutes (assuming you don’t blindly copy and paste). Software, and the information on how to write it, has reached such a sophisticated level of abstraction that anyone can learn to do it in less of their free time than they realize. The LLM didn’t make you any better at writing software, and, though you may not have noticed, it also robbed you of gaining useful knowledge from the process. Circumstance 2 has caused the evangelists to do strange things because of this fact. When Mark Zuckerberg claimed that all of his mid-level engineers could be replaced by AI, he was saying you can have a near-zero-experience human writing the code, thus self-reporting to the world that mid-level engineers at Meta probably aren’t very talented. I don’t believe that’s true, but because of circumstance 2 he’s forced to make claims that are frankly insulting to many engineers in his own company.

AI sometimes makes people try to convince others that they’re dumber than they really are, that they couldn’t do the task without it, or that they’re not needed for it. Other times it fools them into thinking they understand more than they do: they see problems solved faster and work done sooner, and draw the wrong conclusion. This can have detrimental effects on the junior community, but going deeper on that topic warrants a separate post.

So, will installing Copilot make you a 10x engineer? No. Will it make you a worse engineer? No. You also won’t become a great/terrible writer for using ChatGPT, or a great/terrible employee for trying the new AI feature that may or may not be useful. AI won’t make someone worse, and it won’t make someone better; the increase in productivity will simply lead to higher visibility of their capabilities, because they’re able to do more. If you’re good at what you do, it will show. If you aren’t good at what you do, it will also show.

Generative AI is an exciting new technology with a lot of future potential, and I want to explore it with everyone. So, evangelists and skeptics: please shut up. Let’s make it easier to see what’s really going on.

  • Scronkfinkle