Software Engineer, Story Teller
741 stories
·
16 followers

Translating Latin demonology manuals with GPT-4 and Claude

1 Comment

Translating Latin demonology manuals with GPT-4 and Claude

UC Santa Cruz history professor Benjamin Breen puts LLMs to work on historical texts. They do an impressive job of translating flaky OCRd text from 1599 Latin and 1707 Portuguese.

"It’s not about getting the AI to replace you. Instead, it’s asking the AI to act as a kind of polymathic research assistant to supply you with leads."

Via Hacker News

Read the whole story
WorldMaker
148 days ago
reply
I’m mostly sure this the premise to a 90s cyber-horror movie or five and also a tossed side plot from a Laundry Files novel for being too unbelievable in fiction.
Louisville, Kentucky
Share this story
Delete

Saturday Morning Breakfast Cereal - Sympathy

1 Comment and 2 Shares


Click here to go see the bonus panel!

Hovertext:
The real trick is when companies use it to get you to buy things like my new book Bea Wolf, which will change your life.


Today's News:
Read the whole story
WorldMaker
468 days ago
reply
“Douchey in word and deed”
Louisville, Kentucky
Share this story
Delete

Is the AI spell-casting metaphor harmful or helpful?

1 Comment and 3 Shares

For a few weeks now I've been promoting spell-casting as a metaphor for prompt design against generative AI systems such as GPT-3 and Stable Diffusion.

Here's an example, in this snippet from my recent Changelog podcast episode.

Relevant section towards the end (transcription assisted by Whisper):

When you're working with these, you're not a programmer anymore. You're a wizard, right? I always wanted to be a wizard. We get to be wizards now. And we're learning these spells. We don't know why they work. Why does Neuromancer work? Who knows? Nobody knows. But you add it to your spell book and then you combine it with other spells. And if you're unlucky and combine them in the wrong way, you might get demons coming out at you.

I had an interesting debate on Twitter this morning about whether or not this metaphor is harmful or helpful. There are some very interesting points to discuss!

The short version: I'm now convinced that the value of this metaphor changes based on the audience.

The key challenge here is to avoid implying that these systems are "magical" in that they are incomprehensible and mysterious. As such, I believe the metaphor is only appropriate when you're talking to people who are working with these systems from a firm technical perspective.

Expanding the spell-casting metaphor

When I compare prompts to spells and I'm talking to another software engineer, here's the message I am trying to convey:

Writing prompts is not like writing regular code. There is no API reference or programming language specification that will let you predict exactly what will happen.

Instead, you have to experiment: try different fragments of prompts and see what works. As you get a feel for these fragments you can then start exploring what happens when you combine them together.

Over time you will start to develop an intuition for what works. You'll build your own collection of fragments and patterns, and exchange those with other people.

The weird thing about this process is that no-one can truly understand exactly how each fragment works - not even the creators of the models. We've learned that "Trending on artstation" produces better images with Stable Diffusion - but we can only ever develop a vague intuition for why.

It honestly feels more like fictional spell-casting than programming. Each fragment is a new spell that you have learned and can add to your spell book.

It's confusing, and surprising, and a great deal of fun.

For me, this captures my experience working with prompts pretty accurately. My hope is that this is a useful way to tempt other programmers into exploring this fascinating new area.

The other thing I like about this metaphor is that, to my mind, it touches on some of the risks of generative AI as well.

Fiction is full of tales of magic gone wrong: of wizards who lost control of forces that they did not fully understand.

When I think about prompt injection attacks I imagine good wizards and evil wizards casting spells and counter-spells at each other! Software vulnerabilities in plain English totally fit my mental model of casting spells.

But in debating this on Twitter I realized that whether this metaphor makes sense to you relies pretty heavily on which specific magic system comes to mind for you.

I was raised on Terry Pratchett's Discworld, which has a fantastically rich and deeply satirical magic system. Incorrect incantations frequently produce demons! Discworld wizards are mostly academics who spend more time thinking about lunch than practicing magic. The most interesting practitioners are the witches, for who the most useful magic is more like applied psychology ("headalogy" in the books.)

If your mental model of "magic" is unexplained supernatural phenomenon and fairies granting wishes then my analogy doesn't really fit.

Magic as a harmful metaphor for AI

The argument for this metaphor causing harm is tied to the larger challenge of helping members of the public understand what is happening in this field.

Look behind the curtain: Don’t be dazzled by claims of ‘artificial intelligence’ by Emily M. Bender is a useful summary of some of these challenges.

In Technology Is Magic, Just Ask The Washington Post from 2015 Jon Evans makes the case that treating technology as "magic" runs a risk of people demanding solutions to societal problems that cannot be delivered.

Understanding exactly what these systems are capable of and how they work is a hard enough for people with twenty years of software engineering experience, let alone everyone else.

The last thing people need is to be told that these systems are "magic" - something that is permanently beyond their understanding and control.

These systems are not magic. They're mathematics. It turns out that if you throw enough matrix multiplication and example data (literally terabytes of it) at a problem, you can get a system that can appear to do impossible things.

But implying that they are magic - or even that they are "intelligent" - does not give people a useful mental model. GPT-3 is not a wizard, and it's not intelligent: it's a stochastic parrot, capable of nothing more than predicting which word should come next to form a sentence that best matches the corpus it has been trained on.

This matters to me a great deal. In conversations I have had around AI ethics the only universal answer I've found is that it is ethical to help people understand what these systems can do and how they work.

So I plan to be more intentional with my metaphors. I'll continue to enthuse about spell-casting with fellow nerds who aren't at risk of assuming these systems are incomprehensible magic, but I'll keep searching for better ways to help explain these systems to everyone else.

Read the whole story
WorldMaker
510 days ago
reply
My core metaphor has been “casinos” and I have to explain a lot what I mean here, but it feels ethically responsible to me. These massive ML models are amazing casinos. Input the right number of chips, spin the right wheels, things light up and noises go off and it’s exciting! As the Gambler’s Paradox tells us, humans are terrible in casinos and assume too many things will run in their favor.

When I do break into magic metaphors it is with terror and not with respect to ML models. I think it is very important the ethical bottom half here: people think magic means all sorts of things and can over assume things that aren’t correct. (Myself, I have horrific “dark fairy tale” nightmares that the internet may be a fairy space and a lot of our fairy tale traditions taught us to beware any magic mirrors. What would they think of an age where nearly everyone has a magic mirror in their pocket or their hands at all times? If there are demons on the internet they are made of people, but that doesn’t mean we should let our guard down.)
Louisville, Kentucky
Share this story
Delete

Quoting Ruha Benjamin

1 Comment

Feeding AI systems on the world’s beauty, ugliness, and cruelty, but expecting it to reflect only the beauty is a fantasy

Ruha Benjamin

Read the whole story
WorldMaker
542 days ago
reply
This is the plot and half the memes and recurring themes of Westworld in a nutshell. Even the Futurama Biblical Mythology Season 2 (you can’t feed AI all the parts of the bible and not expect them to reenact even the gruesome parts and build a Robot Devil or three).
Louisville, Kentucky
Share this story
Delete

Graphic Designers

1 Comment and 8 Shares
They might make it past that first line of defense. For the second, you'll need some picture frames, a level, and a protractor that can do increments of less than a degree.
Read the whole story
WorldMaker
704 days ago
reply
On JoCoCruise we had some Elevator carpets labeled “Comic San” in Papyrus and “right-aligned sans serif Helvetica” in none of those (it was some random serif font badly left aligned) and some graphic designer friends refused to use those specific elevators the entire week 😹
Louisville, Kentucky
Share this story
Delete

Saturday Morning Breakfast Cereal - Fair

1 Comment and 2 Shares


Click here to go see the bonus panel!

Hovertext:
I'm ready to start this cult if anyone wants to join me.


Today's News:
Read the whole story
WorldMaker
715 days ago
reply
Basically the plot of Severance. A thesis statement for the show delivered early-ish in the first episode: “I’m an atheist, I believe hell is a product of human imagination, unfortunately I also think humans are capable of building anything they can imagine.”
Louisville, Kentucky
Share this story
Delete
Next Page of Stories