Could Shakespeare Write This? Or Would You Click On This Ad?
Somewhere, a gaggle of writing snobs turn up their noses and say: “Well! That’s not very good now, is it?”
What if you could write a Shakespearean sonnet in seconds?
Someone, somewhere will say: “That’s not a very good Shakespearean sonnet!”
Of course not.
Pretty good on the very first try with no other input than the first line.
And it’s pretty good for a robot.
On the first, very simple prompt.
If you’re a poet or inclined to pen poems every now and then, you can use that as a starting point or edit it into something better.
How about this start of a short story:
Let’s get the hecklers out of the way:
“But Sam, this is not a very good short story!”
No, but it’s pretty good for a robot.
And it’s sort of funny. You could easily expand that into a short story. It has everything.
How about this Facebook ad:
Not very good.
But if you’re a half-decent copywriter, you can edit together something better.
On the topic of ads and generating ideas for ads:
Decent starting point. But not very good.
Let’s ramp up and really twist the knife:
Sit down to put your shoes on.
Man boobs ruining your summer.
Get your wife to take notice.
No energy to play with your kids.
Ouch.
Very specific scenarios you can imagine.
A half-decent copywriter can edit that together pretty easily.
With the examples above, do you see what you could do with just GPT-3?
In the meantime…
TECH & TOOLS
How does AI actually work, anyway?
I’m far from an expert, but let’s start with a few definitions:
AI is when machines demonstrate intelligence at or beyond the human level.
Machine learning (ML) is how machines learn to be intelligent by identifying patterns in data. ML models work with two artifacts:
Label: The output the model tries to predict (e.g., a video’s watch time).
Features: The inputs used to predict that output (e.g., the video’s views and likes).
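To make the label/features split concrete, here is a minimal sketch in Python using the article’s video example (the specific numbers and field names are made up for illustration):

```python
# One hypothetical training example for predicting video watch time.
# Features are the inputs the model learns from; the label is the
# output it tries to predict.
example = {
    "features": {"views": 12_000, "likes": 850},
    "label": {"watch_time_minutes": 4.2},
}

feature_names = list(example["features"])
print(feature_names)     # ['views', 'likes']
print(example["label"])  # {'watch_time_minutes': 4.2}
```

A real training set is just millions of rows shaped like this one.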
At a high level, machine learning works in 3 steps:
Prepare data: Machines need lots of quality data to learn. For example, to convert text to images, an ML model needs to learn from millions of images with text labels. ML engineers typically spend around 80% of their time cleaning data and shaping it into useful inputs, a process known as feature engineering.
Train model: Next, ML engineers split the data into a training set and a test set. The machine uses the training set to build the model, then uses the test set to check how accurately the model predicts data it hasn’t seen. The model’s algorithms can be:
Simple, like a linear regression (e.g., a person’s weight = 80 + 2 * height). The model adjusts the feature weights (e.g., changing 2 to 1.5) through repeated iterations to predict the label (output) more accurately.
Complex, like a neural network. The model not only assigns weights to features but also creates new features automatically. Most models based on images or natural language use neural networks.
Build user experience: After training the model, the team needs to build a UX where people can supply inputs to get their desired output. How the model works is a black box even to ML engineers, so the user experience needs to be clear, believable, and actionable.
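The three steps above can be sketched end to end in plain Python, using the weight = 80 + 2 * height regression from step 2 (the synthetic data, noise level, and learning rate here are assumptions for illustration, not a real ML pipeline):

```python
import random

random.seed(42)

# Step 1 -- Prepare data: a synthetic data set that roughly follows the
# article's toy formula, weight = 80 + 2 * height (height in meters),
# plus a little random noise.
heights = [random.uniform(1.5, 2.0) for _ in range(200)]
data = [(h, 80 + 2 * h + random.uniform(-1, 1)) for h in heights]

# Step 2 -- Train model: split into a training set and a test set, then
# adjust the feature weights (w0, w1) through repeated iterations
# (here, plain gradient descent on the mean squared error).
train, test = data[:150], data[150:]
w0, w1, lr = 0.0, 0.0, 0.4

for _ in range(5000):
    g0 = sum((w0 + w1 * h - w) for h, w in train) / len(train)
    g1 = sum((w0 + w1 * h - w) * h for h, w in train) / len(train)
    w0 -= lr * g0
    w1 -= lr * g1

# Use the held-out test set to check accuracy on unseen data.
test_error = sum(abs(w0 + w1 * h - w) for h, w in test) / len(test)

# Step 3 -- Use the model: supply an input to get the predicted output.
predicted = w0 + w1 * 1.8  # predicted weight for someone 1.8 m tall
print(f"test error: {test_error:.2f} kg, prediction: {predicted:.1f} kg")
```

After enough iterations the weights settle near the true values, and the test error shrinks to roughly the size of the noise in the data, which is exactly the “repeated iterations” idea in step 2.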
In short:
Machines need large, quality data sets to learn. They get better by identifying patterns and predicting outcomes through repeated iterations.
When GPT-3 writes a sonnet, it’s identifying patterns and predicting which words should come next. It looks like magic because it happens in seconds and the output reads as if a human wrote it.
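Here is the “predict the next word” idea in its most stripped-down form: a bigram model that counts which word follows which in a two-line corpus. (This toy is nothing like GPT-3’s architecture or scale; it only illustrates the pattern-counting intuition.)

```python
from collections import Counter, defaultdict

# A tiny corpus -- the opening of Sonnet 18. A real model learns
# from billions of words, not two lines.
corpus = (
    "shall i compare thee to a summers day "
    "thou art more lovely and more temperate"
).split()

# Count which word follows which (a bigram model: the simplest
# version of "identify patterns and predict what comes next").
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("shall"))  # in this corpus, "shall" is followed by "i"
```

GPT-3 does something far more sophisticated with neural networks, but the underlying job is the same: given what came before, predict what should come next.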
Talk to you again next week!
Cheers,
Samuel Woods
The Bionic Writer