ChatGPT Prompt Engineering for Developers (A Summary of the Course)

Raghavi_bala
4 min read · May 18, 2023

To start with, I didn’t use ChatGPT to write this. Well, I would have if ChatGPT could summarise a video for us, and we do have AI tools that’ll do that, but this was written by ME.

DeepLearning.AI recently released a ChatGPT Prompt Engineering for Developers course. Things are getting serious if Andrew Ng himself is getting involved. So, I’ve tried to condense the course into a concise article. We’re focusing on prompts for instruction-tuned LLMs, which is ChatGPT in our case.

The course itself is divided into seven sessions, and so is this article.

Guidelines

There are a few general guidelines that developers are asked to follow to help the models understand the task better.

Principle 1: Write Clear and Specific Prompts.

Simply put, be specific and detailed. This is where delimiters come into play. If your prompt happens to contain input text, enclose it inside delimiters. A delimiter could be anything of this sort: ```, “””, < >, <tag> </tag>. This helps the model differentiate the actual instructions from the input text.
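The course does all of this with the openai Python library. Here’s a minimal sketch of a delimited prompt, assuming the pre-1.0 openai SDK (openai.ChatCompletion) and an API key of your own:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # replace with your own key

def get_completion(prompt, model="gpt-3.5-turbo", temperature=0):
    """Send a single user prompt and return the model's text reply."""
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message["content"]

text = "The passage you actually want summarised goes here..."

# The <text> tags fence the input off from the instructions.
prompt = f"""
Summarize the text delimited by <text> tags into a single sentence.

<text>{text}</text>
"""
print(get_completion(prompt))
```

The examples below reuse this same get_completion helper.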

In addition to this, we can also specify the format we want our output to be in. You could ask for HTML or JSON format as well.
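For instance, the course asks the model to invent book titles and return them as JSON:

```python
# Reuses the get_completion helper defined above.
prompt = """
Generate a list of three made-up book titles along with \
their authors and genres.
Provide them in JSON format with the following keys:
book_id, title, author, genre.
"""
print(get_completion(prompt))  # the reply can be parsed with json.loads
```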

Principle 2: Give the model time to think.

Models are just like us: given a complex task, they might need more than one shot at it. These LLMs work better when given specific, step-by-step instructions.

As a specific example, when the model was asked to evaluate a student’s solution to a problem, it got it wrong. But when instructed to work out the problem by itself first and then compare results, it got it right. So, it’s always better practice to have the model work out its own solution before asking it to judge someone else’s.
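A sketch of that pattern (the grading instructions and the arithmetic problem here are my own paraphrase, not the course’s exact prompt):

```python
# Reuses the get_completion helper defined earlier.
prompt = """
Determine if the student's solution is correct.
First work out your own solution to the problem.
Then compare your solution to the student's and say if it is correct.
Don't decide until you have done the problem yourself.

Question: a loaf of bread costs 2 dollars and a bottle of milk costs \
3 dollars. How much do 4 loaves and 2 bottles of milk cost?
Student's solution: 4 * 2 + 2 * 3 = 16 dollars
"""
print(get_completion(prompt))  # should spot that 8 + 6 = 14, not 16
```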

Limitations: The biggest limitation of such LLMs is hallucination. This can be mitigated by asking the model to first find relevant quotes from a source text and base its answers on them.

Iterative

Getting proper output from such models requires an iterative approach, just like how we teach kids something.

Idea => Implementation => Experiment Result => Error Analysis

This is the lifecycle of iteratively refining your prompts to fit your specific requirements.

Summarizing

We all know that LLMs are very good at summarizing. But something I learnt in this course is that we can ask them to summarise based on the reader’s requirements.

Let’s take the example of a paragraph describing the specifications of a sofa. When we summarise it, the model might miss out on a few things, so we can customise the output based on the reader. A consumer might be interested in the look and cost of the sofa, while a manufacturer might be interested in the material and other technical specifications. So, we include that information in the prompt.
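Here’s what that might look like; the sofa description and the word limit are placeholders of my own:

```python
# Reuses the get_completion helper defined earlier.
sofa_specs = """
Mid-century three-seater sofa. Frame: kiln-dried hardwood. \
Upholstery: stain-resistant linen blend in grey or navy. \
Cushions: high-resilience foam. Price: $899. Ships flat-packed.
"""

# Same input, two readers, two different summaries.
for reader in ("a consumer deciding whether to buy it",
               "a manufacturer reviewing materials and construction"):
    prompt = f"""
    Summarize the product description in the <desc> tags
    in at most 30 words, for {reader}.

    <desc>{sofa_specs}</desc>
    """
    print(get_completion(prompt), "\n")
```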

Inferring

Inferring is another task that can be performed using LLMs. It could be as simple as giving the model some information and asking questions based on it. We could use this for the age-old sentiment-recognition problem. (This just came to my head while writing this blog.) So, we can even try to use LLMs to make labelled datasets.
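A quick sketch in the single-word-label style the course uses:

```python
# Reuses the get_completion helper defined earlier.
review = "The lamp arrived quickly, but the switch broke within a week."

prompt = f"""
What is the sentiment of the product review in the <review> tags?
Answer with a single word, either "positive" or "negative".

<review>{review}</review>
"""
print(get_completion(prompt))  # "negative" -- one cheap label for a dataset
```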

Transforming

Transforming could be anything that involves modifying the input. This could be as simple as translation. The cool thing about ChatGPT is that sometimes even when you don’t know the language you’re trying to translate, ChatGPT does. So basically, you can just specify the language you want the text translated into and it’ll work just fine.
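A sketch mirroring the course’s French example, reusing get_completion:

```python
# Reuses the get_completion helper defined earlier.
prompt = """
Tell me which language this is, then translate it to English:
"Combien coûte le lampadaire?"
"""
print(get_completion(prompt))  # should identify French and translate it
```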

This also includes tone transformation, where you want the content to stay the same while changing the way it’s said. You could make it angry, happy, or just formal with no particular tone.

Spelling and grammar checks are also part of this session.
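Both fit the same prompt pattern. Two quick sketches, reusing get_completion (the slang line is from the course; the text to proofread is my own made-up example):

```python
# Reuses the get_completion helper defined earlier.

# Tone transformation: same content, different register.
prompt = """
Translate the following from slang to a formal business letter:
"Dude, this is Joe, check out this spec on this standing lamp."
"""
print(get_completion(prompt))

# Spelling and grammar check.
prompt = """
Proofread and correct the following text:
"Their going to love they're new sofa, its arriving tomorow."
"""
print(get_completion(prompt))
```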

Expanding

Expanding can be viewed as the opposite of summarizing. We give the model a short input and ask it to generate a longer output based on it. A use case could be asking the model to write an email given some context. Or even better, you could give a short description of a story and ask the model to create an SOP (statement of purpose) out of it, which you could use for your college applications.
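A sketch of the customer-service email idea from the course, reusing get_completion:

```python
# Reuses the get_completion helper defined earlier.
review = "The lamp arrived quickly, but the switch broke within a week."

prompt = f"""
You are a customer service AI assistant.
Write a short, polite reply to the customer review in the <review> tags,
thanking them and apologising for the faulty switch.
Sign the email as "AI customer agent".

<review>{review}</review>
"""
print(get_completion(prompt))
```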

When dealing with cases that require creativity from the model, we can look into the temperature parameter. This controls how random, and hence how “creative”, the model’s output is: the higher the temperature, the more varied the suggestions.
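Our get_completion helper passes temperature straight through to the API (the cafe prompt is just a made-up example):

```python
# Reuses the get_completion helper defined earlier.
prompt = "Suggest one name for a cafe that serves only toast."

print(get_completion(prompt, temperature=0))    # deterministic, plays it safe
print(get_completion(prompt, temperature=0.7))  # more varied across runs
```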

Chatbot

ChatGPT itself is made for this purpose: to act like a chatbot. But you can customise it better using the system prompt. ChatGPT has two types of prompts, the system prompt and the user prompt.

The user prompt is nothing more than the general prompt we’ve been writing so far, but the system prompt is something new.

The system prompt basically lets you set a tone for ChatGPT; this is where you can add your personal touch. You can even ask it to roleplay as a doctor or an etymologist. You give the role and the character here.
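A sketch of how the system prompt fits into the chat format, using the same pre-1.0 openai SDK as above (the doctor persona and question are just examples):

```python
import openai

openai.api_key = "YOUR_API_KEY"  # replace with your own key

# For a chatbot we send the whole message history, not a single prompt.
# The system message sets the persona; user (and, on later turns,
# assistant) messages carry the conversation itself.
messages = [
    {"role": "system",
     "content": "You are a friendly doctor who explains things simply."},
    {"role": "user",
     "content": "Why do I sneeze when I look at the sun?"},
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
    temperature=0.7,
)
print(response.choices[0].message["content"])
```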

Conclusion

You can read 100 blogs like this, but the one golden rule is: “Prompt engineering is itself an iterative process that needs attention to detail.”

Hope you had a good read 👋
