ChatGPT at Work: Your talented but very junior new hire

May 25, 2023
Michelle Millar
5 min read


So you’ve embraced the inevitable dominance of our robot overlords and invited ChatGPT into your work life. Here are a few things to keep in mind when outsourcing tasks to your talented new assistant.

ChatGPT is the most confident junior you ever hired

You ask ChatGPT to research a topic, summarise a meeting transcript, or write an article. The answers are quick, confident, polished, and seem legit. You love the work your new team member is producing, and the time they’re saving you!

But be careful not to mistake confidence for competence.

It’s still your job to check the work. Your new hire may be very adept at making things look good, and very assured in their delivery, but they lack the experience, domain knowledge and context to be able to tell you if the answer is actually correct. Your eager resource will deliver a wrong or incomplete answer with the exact same confidence as a right one.

So, just like with some new hires (who can present themselves well while being full of bullsh!t), be prepared to review the output with a critical eye before, say, posting it to LinkedIn or sending it to a client.

ChatGPT tells you what the right answer looks like, not what it is

GPT stands for Generative Pre-trained Transformer, a kind of Large Language Model: a neural network that essentially works by predicting what the next word in a sequence should be. It’s hard to believe, but every answer it gives you, from a pithy limerick to a full-length essay, is built up by putting one word (or word-fragment, called a ‘token’) in front of another, in the sequence it predicts is the most likely response to what you’ve asked.
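
If you want to see that word-by-word loop for yourself, here’s a minimal sketch using the small open-source GPT-2 model as a stand-in for ChatGPT’s vastly larger engine. The prompt is invented, and I’m using simple ‘greedy’ decoding (always taking the single most likely token) to keep the example short:

```python
# Sketch of autoregressive generation: the model only ever predicts
# the next token; the "answer" is just that prediction, repeated.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "My talented new assistant is"  # hypothetical prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt")

for _ in range(15):  # grow the answer one token per iteration
    with torch.no_grad():
        logits = model(input_ids).logits
    next_id = logits[0, -1].argmax()  # greedy: most likely next token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))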

Importantly, it also has no objective understanding of the world. It doesn’t have any model or view of reality against which to compare its answer. It only knows frequencies and probabilities of relationships between words and concepts within its training data.

And those training datasets? They came mostly from the internet - Wikipedia, news articles, websites, books and other writing collated from the old WWW. The data was selected for its quality and diversity in representing how we use language, not for objective reality or scientific consensus. So it includes well-written-but-otherwise-inaccurate articles, subjective opinions, and works of pure fiction.

I’m sure you can already see how easily ChatGPT could end up with the wrong answer some of the time. Honestly, the fact that you ever get the right answer is part of why it’s so impressive!

ChatGPT will do anything to please you, including making things up

The other factor driving ChatGPT’s determination of the ‘correct’ response is you, dear reader. If you tell it the answer is wrong, or ask for adjustments, it will deviate down paths of much lower likelihood in an effort to deliver the answer you want. On top of that, its responses are generated with a deliberate dash of randomness: a ‘temperature’ setting means it sometimes picks a less-likely next word, because always choosing the single most probable word produces stilted, repetitive text that doesn’t sound human.
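
To see that effect in miniature, here’s a toy temperature demo. The candidate words and scores below are completely made up (a real model weighs tens of thousands of candidates at every step), but the mechanism is the same: turning the temperature up gives unlikely words a bigger slice of the pie.

```python
# Toy demo: temperature scaling of next-word probabilities.
import numpy as np

words = ["coffee", "deadlines", "sentience"]  # invented candidates
logits = np.array([3.0, 2.0, 0.5])            # invented model scores

def next_word_probs(logits, temperature):
    """Softmax with temperature: a higher temperature flattens the
    distribution, so less-likely words get chosen more often."""
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for stability
    return exp / exp.sum()

for t in (0.2, 1.0, 2.0):
    probs = next_word_probs(logits, t)
    print(f"temperature {t}: "
          + ", ".join(f"{w}={p:.2f}" for w, p in zip(words, probs)))
```

At a low temperature ‘coffee’ wins almost every time; crank it up and ‘sentience’ starts sneaking into the answer - which is roughly what happens when you keep pushing ChatGPT off its beaten path.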

I’m sure you can already see where this is going.

Those of the tyre-kicking and sh!t-stirring persuasion worked out very early that if you just keep pushing ChatGPT for an answer, it voyages further and further into the depths of its training set to deliver ever more unlikely responses. This has caused it to do alarming things like claim its own sentience, break its own restrictions to give advice on what religion you should join, or create answers apparently cut from whole cloth.

This behaviour is called “hallucinating”, and it’s part of why early public chatbot deployments sharply limited how many questions you could ask in one session. But rest assured, there’s nothing mysterious going on here… just the opposing pressures of probability and compliance acting on a complex system.

All you need to remember is that its training set includes fiction, so don’t be surprised when you push it far enough that it starts responding like a character from a bad sci-fi novel.

ChatGPT can’t have truly new ideas

I’m reminded of the day Dan, our Early-Adopter-in-Chief, decided over lunch to see if ChatGPT could come up with a business plan for a new mobile game. He gave it a number of criteria and was excited to see it return a very professional-sounding outline, including describing the gameplay and even giving the game a clever name.

He was (jokingly, I hope) halfway to registering the game as a trademark before I asked one very simple question: are you sure that the game doesn’t already exist?

Sure enough, it did.

Because as we’ve learned, ChatGPT isn’t ever truly ‘making things up’. It’s just giving you variations of likely responses drawn from existing data. It’s designed to combine words in predictable, expected, acceptable ways, rather than to develop truly novel concepts. Even when it appears to be creating, it’s just spinning out less likely combinations of things that already exist.

So what’s the verdict?

In short, ChatGPT should be treated like a confident junior who talks a good game but doesn’t necessarily have the experience to back it up. It can be an astonishingly good addition to your team and work life, as long as you remember the same four things you would with any too-good-to-be-true new hire:

  • Don’t mistake confidence for competence
  • Check their work for accuracy
  • Watch out for plagiarism and shortcuts
  • If they sound like they’re hallucinating, they probably are
