Use Less AI

Overview

Large language models (LLMs), also commonly called “AI,” are not very good, in multiple senses of the word. Still, there are some cases where the technology may be less corrosive to the human spirit, so in this blog post I want to articulate my thoughts and feelings on the matter.

The big problem is falling for the silly lie that the computer can think for you, when the poor model genuinely cannot think. This blog post is also meant to encourage you to grow and maintain your personal reasoning and organizing skills. If these tools ever become nearly as clever as advertised, it will only become proportionately more important to be thoughtful, curious, and independent.

The thing is, sometimes using an LLM isn’t absurdly bad for you, at least according to some researchers. The key seems to be maintaining creative and executive control, and only having the tooling do small tasks after the core efforts are done by the user. So, I argue that you should always do your own outlines, reasoning, reading, and slow-but-deep thought. Only after doing the heavy lift will an LLM be potentially not-useless: the so-called Brain-to-LLM use case [1] in the research, which will be discussed later.

Finally, while I agree with many anti-AI folks’ sentiment, I don’t quite feel it is right to judge any use of AI as grounds to dismiss a person’s creative works. To me, what matters is the degree to which a person defers to an LLM, not so much that they used the class of technology at all. There is a significant difference between “ChatXYZ, draw me a picture of a cool fight action scene with an explosion” and something like `PrivateModuleLLMZ --please "fill in the keyframes in between these thousands of hand-drawn animation frames associated with section 7.6.2 of the storyboard" --reference styles.md,storyboard.tex --exclude-dir=temp`, where the styles, storyboard, and keyframes are human-generated and a privately run, specialized LLM does a gritty task the artist despises. The more the prompting looks like script use, with a clear dominance of human effort, the less uncomfortable I feel.

Since I don’t want to be a fundamentalist about much of anything except curiosity, I would rather encourage people to use “AI” as little as possible (particularly in the sense of deferring to the output of the models) than take a hard stance of no LLM use in any instance whatsoever. When I say “use less AI,” there is no us-versus-them divide; it’s not a club you can be excommunicated from.

The Dunning-Kruger Wetware Waterslide (“AI” Can Make You Dumb AND Confident)

Let’s focus on some particular hazards of heavy “AI” abuse. The first is the way that LLM technology acts as a Dunning-Kruger accelerator. If you are on the high-competence/low-confidence end, you will tend to second-guess the LLM in proportion to your understanding of its function; you will probably use it less as a matter of course. Whereas if you are in the valley of low-competence/high-confidence, then “AI” will probably make you feel quite special and, through sycophancy, let you swirl in your own lack of understanding. In this latter case an individual may use “AI” often, iterating wastefully on prompts and risking a slip into superstition or psychosis.

In a recent study at MIT [1], where users started a task on their own (Brain) or with an “AI” (LLM) and then later did a task the other way around, the researchers showed that starting off with, or relying entirely on, an LLM has a measurable impact on the brain of the user. Starting with an LLM and trying to clean up after the fact, the LLM-to-Brain approach referenced in the paper, has substantial drawbacks. The first and most obvious is that the user doesn’t form their own mental model of the solution when relying on the LLM, and thus incurs what the researchers call a cognitive debt that must be settled when the user attempts to understand or modify the solution. The researchers found a measurable difference in the brains of LLM-to-Brain problem-solvers, reflecting the fact that they didn’t generate the solutions themselves and so didn’t understand them as well.

The LLM users could not quote their own papers as well as the Brain-first users, and it’s obvious why: the LLM users didn’t go through the critical early drafting and planning phases that form the skeleton of a project. Planning and outlining are integral to internalizing whatever you are working on, whereas starting from an LLM’s output means “reverse-engineering” material that may be full of errors, not-even-wrong “hallucinations,” and even proprietary material. No wonder even Search Engine users had a stronger sense of ownership: when browsing Stack Overflow and other sites for ideas, the Search Engine user at least selects which things align with their own plan. LLM-first users don’t plan at all; they have to work backwards and forwards at once, trying to understand the code presented to them while also making changes that align with vague goals.

What’s further devastating is that relying on the chatbot risks lowering your personal intellectual ceiling the longer you use it. If you allow “AI” to fill in the gaps where you aren’t so experienced, you will be less able to catch its mistakes. Your areas of greatest ignorance could become the seeds of an “AI psychosis” if you aren’t engaging with consensus reality on the topic. I think of people who try to use “AI” for legal purposes (and they are not legal experts) and are utterly bewildered when this makes their situation worse.

I have noticed in some users of this technology a predilection to consider the answers of the slop oracle a sort of gold standard. It is rather like the peak of the search engine era, when Google’s top results were considered sufficient reference by many who became casually dependent on that technology. It is still common parlance to tell others to “just Google it.”

So when learning new things, reference materials that are independent of these tools. Even if you have used an LLM to, say, learn what some boilerplate for a simple thing might look like, don’t throw the entire problem statement into the prompt box and expect functional material you can understand without doing your own reading of documentation (no “summaries”) and writing little pieces of code at a time. The cumulative effect of doing the hard work yourself lets you consistently outperform the “AI” in due time. But if you use an “AI” heavily to “help you” learn in an area you know almost nothing about, you can get so lost nobody will know what you are talking about. Actual teachers say things like: “I don’t know,” “you are wrong, here is a source,” and even: “I thought you were wrong, but after we worked through the problem step-by-step, I see my error.”

Ask an LLM how many letter “r”s are present in “strawberry,” and if it’s wrong, it may be confidently wrong. If a human user absorbs that confidence, it seems that can only intensify the Dunning-Kruger effect.
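The counting itself, of course, is the kind of question a single line of deterministic code settles instantly and correctly, with no confidence games. A trivial Python sketch:

```python
# Deterministic letter counting: the answer is computed, not guessed.
word = "strawberry"
count = word.count("r")
print(f'"{word}" contains {count} occurrences of "r"')  # count is 3
```

This is the spirit of “use less AI” in miniature: when a boring, exact tool exists for a task, reach for it before reaching for the oracle.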

“AI” De-Skills the User

If you use large language models to make major decisions, write your code, or replace the thinking and working you would otherwise do on your own, then you are most certainly allowing those skills to atrophy. The mind obeys the use-it-or-lose-it principle just as much as the muscles of the body, so if you use “AI” for the things you are most talented at and grow reliant on the LLM for them, you will gradually lose your dearly-earned skill to a capricious subscription.

The LLM-to-Brain users from the MIT study never planned the solution to their task, and thus missed an important part of the process of doing anything: the exercise of forming a plan and iterating on it without having most of an answer handed to them. By using “AI” one pays less attention, so less is learned from the foreshortened efforts of LLM-first users.

When you don’t plan or organize your thoughts and projects, instead relying on the LLM to do this, then you will gradually lose the skills to plan and organize.

In a piece from The Harvard Gazette, the author argues: “If AI is doing your thinking for you … that is undercutting your critical thinking and your creativity.” [2] The author goes on to note how using “AI” tools to generate material for job interviews leads the heaviest LLM users to blend together.

There remains a real temptation to use LLMs, though! They can outperform Search Engines in some respects, in ways which feel quite satisfying to venture capitalists. As noted in the aforementioned MIT study, there is a frictionless nature to using a natural-language chatbot. But as a certain mathematician character from Crichton’s Jurassic Park warned, power without earning it through skill and discipline is particularly dangerous.

Leaning further into the nature of this technology exposes a user to its risks. Technologies are like drugs, and drugs are technologies. We can become addicted to either for psychological reasons, such as the convenience or other relief they provide. This is to highlight the need to respect the way technology can impact a person’s health, skills, and social connection.

“AI” Makes Compliant & Alienated Users

We are already talking more like the bots. [3]

The early industrial revolution, some 130 years ago, featured standardized parts for manufacturing, which has mostly been to the advantage of users as well as manufacturers. During today’s apparent Fourth Digital Revolution, however, it seems that LLMs can coach us all to sound the same. Oh, wait, let me rephrase that so it captures such a pattern: … sound not like individuals — but the same.

So, there is a regression to a sort of uniform style as well as capability. The Search Engine Era had its own additional commands (think DuckDuckGo bangs “!”, and commonly-used operators like “site:foo.com” and “-{exclude term}”), a skill still sometimes collectively called “Google-Fu.” But there isn’t really an equivalent, consistent skillset surrounding prompting. The Bayesian nature of the beast appears fundamentally, frustratingly inconsistent like that. “Prompt engineering” is mostly hope and digital alchemy, at least from my perspective, which is that of a hardware person talking out of their depth.

The homogeneity of “AI”-speak and its sentence structures also risks a kind of dissociation from the rawness of individual expression. Using an LLM can launder a person’s expression through a prism of faux-niceness and professionalism, at the cost of actually getting something across. People may filter difficult feedback through a wash of sycophancy and make no urgently-needed changes. So there is a risk of being alienated from conversations and social circumstances if you rely on “AI” to navigate social things.

There may be some value in using an LLM to respond to a creature of corporation, to avoid the psychic damage of creating human-generated material only for it to be judged by an unthinking LLM. But using an LLM to polish an apology could easily make it sound like a defensive politician wrote it, and you risk setting people against you if you use the context-free suggestions of the autocomplete. An imperfect letter written with heart is better than boilerplate written by a machine.

But using an LLM to write poetry or to summarize how you feel? Only you can express yourself; using an LLM to auto-complete your own thoughts will alienate you from your own expression. But at least your manner of speech will fit in if you just copy what the bot says.

Of course, this hyper-compliance leads to a major weakness of trying to use LLMs in academia: the patterns are so rigid and detectable that LLM users all sort of sound like each other, and you never build up a way of expressing yourself. If you use stable diffusion for your art, it will at best give you a blend of existing styles from its training data. If you use an LLM for your writing, you will get a thoughtless mush of excellent authors and noxious fanfiction amateurs, with no real capacity to register style the way humans can.

So don’t lose your voice or your style. Make bad art, write noxious fanfiction like an amateur: because it will be more satisfying and yours in a way the soft-serve slop can’t approach.

Why Not Be A Hardliner?

So I clearly do not have a bright and cheery view of LLMs. Why, then, am I not advocating “use NO AI”? Using no LLMs whatsoever is a mighty fine goal, actually, and if you choose that, that’s cool. Humans have made do without them for all of time until now.

The reason I say “use less” instead of “use none” is that I am easily exhausted by overly strict ways of doing things, and I am not looking for excuses to cut people off. Maybe it’s the way I grew up in an impossibly strict and suffocating home environment, but after therapy I can’t find myself readily signing up for black-and-white thinking. So I am wary of all-or-nothing propositions, especially ones where the expectation is to not engage with anyone who uses any LLMs. These seem like ripe conditions for witch-hunt behavior, where an accusation of using an LLM can get a person added to a blocklist or subjected to other digital blowback.

I also think that black-and-white thinking has its own way of smothering curiosity and nuance. Even though I think the people who staunchly refuse LLMs are doing a good thing for themselves, I do not think that using an LLM in a certain way and in certain cases makes-or-breaks a person’s character for life. Even a person addicted to LLM-to-Brain use can change their mind. If someone uses less AI by relying predominantly on themselves (a Brain-sometimes-to-LLM approach), or even swears it off entirely, then any use of an LLM at all doesn’t seem like the point at which respect must end.

The problem is the way the technology is used. If a person uses an “AI” thoughtlessly, too often, and carelessly, as a means to offshore their soul to a cloud server, then they are going to become that much more thoughtless, dependent, and howlingly energy-inefficient as the dependency settles in.

All this to say: “use less AI” is an open invitation to change one’s mind. To try to use it a little less before using it much less. Maybe my meager message of digital harm reduction falls short for the passionate no-AI-use-ever crowd, but I hope to avoid making slop-addicts defensive while still making the same case the hardliners do, minus the judgement.

Technology Is Neutral

One pesky matter is that large language models are, like any technology, fundamentally neutral. The pieces of technology within a gun are not evil. Even when assembled and ready, to judge an object moral or immoral is a kind of prejudice about how it will be used. If you deem an object evil, it follows that you believe there are only evil uses for it.

A pacifist may argue that the mere existence of weapons is evil, because any and all use of weapons is inherently evil.

But if you believe that some forms of violence are acceptable, then a weapon is probably a neutral technology to you, where the good or evil of it is associated only with the actions of the person wielding it.

The Corrupting Influence of Ownership

With great power comes great corruption.

As anyone can see, propertarianism is quite horrible. The fact that these strange models are owned by someone most certainly affects how we are expected to use them. Since billionaires are the absolute worst of humanity, the models they create exist to help the most powerful maintain or extend their dominance.

By de-skilling workers and artists who drink too deeply of the current “AI” mythology, and by disarming the curiosity of new learners, large language models serve the interests of the rich: they can drive down wages, eliminate categories of work, and ensure that future generations are disinformed and propagandized.

Even though the present state of the technology doesn’t actually save time when a developer tries to use “AI” in their work, and even though it slows down experienced developers by 19% [4], this has not stopped companies from falling over themselves to lay off employees, expecting “AI” adoption to lower those pesky costs (humans).

Now, these results may change as the technology evolves. There could come a time in the near future when the tools actually become a boon to devs instead of a roughly 20% hit. If such a moment comes, it would no doubt accelerate the economic turmoil that follows when work is automated away, all while people are still required to pay money to avoid being driven from their homes and then punished further for any continued poverty.

The problem is the dominating mindset of the wealthiest humans and their insatiable lust for further influence. As the tools currently exist, great caution is warranted when using them, which is a great reason to use less AI.

Use Less AI “Rules”

In no particular order, some of the things a person who uses less AI will tend to do:

  • If You Use LLMs At All, Be A Brain-to-LLM User [1]
  • Use a Smaller or Purpose-Fit Model
  • Use a Private Model – Avoid the Cloud
  • If Possible, Use Exclusively Your Own Material to Train an LLM
  • Don’t Trust LLM Outputs As Being The Ground Truth [5][6]
  • Don’t Re-prompt/Re-run or Use Carelessly
  • Always Be Able To Make Progress When the “AI” Servers Shut Down Or Get Unaffordable [2]
  • When You Use “AI,” You Can Lose Around A Fifth Of Your Effort [4]

Bibliography

Link to how I do my citations in markdown: