LLMs Are The New Software
When you can “speak” (almost) anything into existence, does that mean you should?
LLMs are fundamentally changing the way we interact with computers and technology.
You and I have adapted, adopted, contorted, and distorted ourselves to get anything done so far: punch cards, CD-ROMs, floppy disks, and command-line interfaces; learning to code and learning software for anything and everything; agonizing over the blue screen of death; moving a mouse around, clicking, tapping; staring at a screen for hours on end, destroying our joints, necks, and spines as we hunch over our phones.
So far, technology has been used on tech’s terms.
But with LLMs, we can now interact with and use technology on human terms. Through voice or text alone, you can prompt LLMs (and models for video, images, and audio) for whatever you want—and you’ll get it.
You don’t even need to know how to code anymore. Just describe the kind of app you want made and ChatGPT, for example, will code it for you.
We’re moving on from Software and into Large Language Models. Speak and it’ll happen. The parallels between that and, say, Yahweh’s “let there be light”, are uncanny.
In that same potential for unlimited creativity and creation through your words also lies the greatest danger:
Who controls the LLM? Whoever does is in charge of language—and your ability to express and formulate your thoughts as you’re using an LLM.
So far, the makers of the LLMs are making the calls, of course. Companies like OpenAI, Google, and Microsoft, which have spent billions of dollars and vast resources to build them, are also filtering them, fine-tuning them, and making sure certain biases remain.
But as LLMs become ubiquitous, and as Microsoft, Google, and everyone else integrate them into software everywhere, how do you feel about the political bias, censorship, and control of language that will show up inside, say, Microsoft Word?
One day you’ll be typing out something innocuous about, say, gender and politics. Next thing you know, a Clippy on steroids will pop up and ask if you really meant to say that. If you insist, eventually Word will refuse to let you type the sentence at all.
Through the filtering, blocking, and censoring of what LLMs are “allowed” to say, you’ll have command and control of what people can say.
Eventually, this makes its way into what people think (and don’t think). If you control the telegraph, along with the bridges and the banks, your revolution will succeed.
This is the bad news. The good news is that you have options. The better news is that LLMs, used properly, can help us end the often abusive relationship we’re stuck in with technology and, ironically enough, help us become more fully human.
Used well, you can sharpen your thinking, explore your creativity, create useful things, and automate writing your weekly reports for your boss (who will have an AI read your report anyway—this is the future, by the way; robots writing things for other robots).
And if you end up with your own, personal LLM (trained on your data, photos, emails, documents, notes and whole corpus) on your own device, you’ll maintain privacy, ownership, and free expression.
Still, and this is at the heart of how you can get the most out of LLMs for yourself: it’s not about how much potential these tools have. In many ways, the potential is already endless and we’re still in the early stages.
It’s about how you use them, as you’re using them. Their power is latent until a user with intent and imagination picks them up and makes something.
A hammer is always in search of a nail, and you’re the only one who can identify a nail and bang away. That same hammer, though, can be used to crack nuts, tenderize meat, or crush spices. A hammer can be used as a percussion instrument or, together with a chisel, to create sculptures.
With a hammer in your hand, what makes your whacks produce Michelangelo’s David or a deformed work like the new MLK sculpture, The Embrace, is the intent, imagination, sense of taste, and discernment guiding your hand, which in turn guides the tool.
There’s one more layer to this, and that’s where we started: Technology has spent a few hundred years forming us. We have yet to reform technology.
Now, with AI and LLMs, we can. For the first time in a few hundred years, tech can happen on our own, human terms.
The most obvious starting point is the one LLM that so far rules them all: ChatGPT. With the recent rollout of Plugins, you can get more done, more easily. There’s a library of more than 100 plugins, and the list is growing. They all perform discrete tasks, many aimed at cutting time and making you “more productive”, whatever that means.
But even something as simple as being able to “talk” to a bunch of PDFs has an endless amount of uses.
And if you don’t want to use ChatGPT, there are countless apps offering the same functionality.
Now, what’s so great about uploading PDFs and being able to “chat” with them?
Imagine you’re a lawyer with all the case histories, court transcripts, and documents that go with your case. You can use an LLM to understand it better, see connections, formulate and organize your arguments, and write your opening statements and closing arguments.
I’m not telling you to use it verbatim, but it could be an additional paralegal on your staff (that’s even cheaper than the peanuts you’re throwing them now).
Imagine you have records of your health, your medical history, doctor’s visits, prescriptions, and everything else. Upload. Talk to your documents. Uncover what might be actually wrong, new potential treatments, diets, supplements, and more.
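The mechanics behind “chatting” with documents are simpler than they sound: split the text into chunks, find the chunks most relevant to your question, and hand those chunks to an LLM as context. Here is a minimal sketch of that retrieval step in plain Python, with simple word-overlap scoring standing in for the embedding models a real app would use, and a made-up medical record as the document:

```python
# Minimal "chat with your documents" retrieval step, in plain Python.
# A real app would use an embedding model and a vector store; simple
# word-overlap scoring stands in for both here.
import math
import re
from collections import Counter

def tokens(text):
    """Lowercase words, punctuation stripped."""
    return re.findall(r"[a-z0-9]+", text.lower())

def chunk(text, size=10):
    """Split text into chunks of roughly `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(question, passage):
    """Cosine similarity between bag-of-words vectors."""
    q, p = Counter(tokens(question)), Counter(tokens(passage))
    overlap = sum(q[w] * p[w] for w in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in p.values())))
    return overlap / norm if norm else 0.0

def best_chunks(question, text, k=2):
    """Return the k chunks most relevant to the question."""
    return sorted(chunk(text), key=lambda c: score(question, c), reverse=True)[:k]

# Hypothetical medical record, purely for illustration.
doc = ("The patient was prescribed lisinopril for blood pressure in 2021. "
       "A follow-up visit noted improved readings. "
       "Unrelated notes discuss a sprained ankle from hiking.")

context = best_chunks("What was prescribed for blood pressure?", doc, k=1)
# `context` is what gets pasted into the LLM's prompt alongside the question.
```

Everything past retrieval is just prompting: the app prepends the retrieved chunks to your question, so the model answers from your documents rather than from its training data.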
Sure, it’s not a doctor, and you shouldn’t blindly follow the information you get. But what if you could go to your doctor with a little more understanding of your health, challenge them when they’re trying to hook you on SSRIs, or ask better questions?
What about your mental health? There’s an LLM for that: SafeguardGPT. It’s one of many, but as an example, it applies “psychotherapy” with four types of agents and a human moderator. It simulates a therapy session, and perhaps you can get a better understanding of yourself through it.
The use cases of LLMs are already unlimited, especially once you involve plugins and fine-tuning, among other ways of interacting with them.
This also brings me back to the point earlier: how you use LLMs. Just because you can use an LLM for therapy—should you? Is there one better way to use it, versus another? Is there a good use of it for you that won’t work for someone else? Remember, technology forms us. LLMs are forming us now and will only continue, as they’re integrated into every piece of software you use.
Can you use an LLM in a way that shapes you into a better version of yourself?
What if you could use LLMs for argument mapping (AM)?
This is Argdown: like Markdown, but for argumentation. You can write a pros and cons list as simply as if you were writing a tweet.
You can also “logically reconstruct complex relations between arguments or dive into the details of their premise-conclusion structures.”
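For a taste of the syntax: in Argdown, square brackets name a claim, angle brackets name an argument, and indented `+`/`-` lines mark support and attack. The claims below are invented for illustration:

```argdown
[Thesis]: LLMs should draft my weekly reports.
  + <Time saved>: A draft takes minutes instead of hours.
  - <Skill atrophy>: Outsourcing the writing erodes the skill of writing.
```

From a plain list like this, the Argdown tools can render a visual argument map.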
Can you imagine a scenario where an LLM like this could be useful for you? What if you become too dependent on this and lose your crafted skill of argumentation in the process?
There’s a dilemma at the heart of every LLM and it’s simply this:
How much, and how, do you use it? Because you will lose out on something in the process.
The more exposure you have to anything AI touches, the less resilient you become for the future, unless you figure out how to make it a partner in your formation as you’re using it. No technology, especially an LLM, is neutral. And usage will always form us in particular ways.
Is it better to see LLMs as aspects of reality, so that it’s not a question of whether we use them but what type of LLM we use—and what type we don’t use?
Another helpful perspective is Albert Borgmann’s “device paradigm”. Devices are things that take a complex task and make it happen at the push of a button.
Apply this to LLMs. Any time, and every time, we outsource a task to an LLM, we lose the kind of formation that the task provided. The path forward is to use LLMs with an awareness of what we gain and what we lose.
However, LLMs are malleable. You can train, fine-tune, or use embeddings in a way where you don’t lose out on the formation you’d otherwise miss.
You can have LLMs interrogate you, argue with you, challenge your assumptions, challenge what you’re saying and thinking.
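Setting that up takes little more than a system prompt. Here is a sketch using the message format most chat APIs share; the wording of the prompt, the function names, and the commented-out model call are all illustrative assumptions, not a prescribed recipe:

```python
# Sketch: a system prompt that turns an LLM into a sparring partner
# rather than a yes-machine. The message list mirrors the role-based
# format common to chat APIs; wire it to whichever provider you use.

SYSTEM_PROMPT = (
    "You are a sparring partner, not an assistant. "
    "For every claim I make: (1) name the strongest objection, "
    "(2) point out one hidden assumption, and "
    "(3) ask one question I have not considered. "
    "Never simply agree."
)

def build_messages(claim, history=None):
    """Assemble a chat-style message list around the user's claim."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages.extend(history or [])  # prior turns, if any
    messages.append({"role": "user", "content": claim})
    return messages

msgs = build_messages("Remote work is strictly better than office work.")
# `msgs` is ready to hand to a chat-completion endpoint, e.g. (hypothetical):
# client.chat.completions.create(model="gpt-4o", messages=msgs)
```

The point is that the formation you get out of an LLM is largely set by the role you write into that first message: the same model that flatters you by default will interrogate you if you ask it to.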
LLMs are even approaching human-level creativity. In one study, humans and five LLMs each generated ideas, and human raters, blind to the origin, scored them: there was no major difference between the LLM-generated and the human-generated ideas.
And this is still early. This is version 1 of LLMs. The decision of the decade is coming at you hard: which LLMs you use, and how you use them, now.
You might not care about LLMs, but LLMs care about you. They’re being integrated into everything, giving large corporations, and soon enough governments, full control and influence over the language you’re using.
In turn, this determines what kind of LLMs are available to use, and how. You can either make active choices now, or those choices will be made for you.
Talk again soon,
Samuel Woods
The Bionic Writer