You might think ChatGPT would be the best resource for learning how to design a good LLM prompt. Right now, though, humans are definitely still better at it.
I'm building automated market analysis for business ideas into the CompanyCraft product. I tried using ChatGPT (GPT-4) to get help with one of the prompts I'm using, and this post shows the results of that experiment.
This is the business idea I'm testing with (by the way, if anyone wants to build this product, let me know; I'll be your first customer!):
I queried OpenAI's GPT-4 chat API endpoint to have it generate the three best-fit market names for that business idea. With the prompt I wrote, it returned these results:
Not bad! But I was getting a bit too much variation in the results at times, so I figured I would ask ChatGPT to improve the prompt.
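For context, the query itself is just a chat-completions call with the prompt as the user message. Here's a minimal sketch of what that looks like (using the official `openai` Python client; the prompt text and settings are placeholders, not the actual code in CompanyCraft):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder prompt: the real one includes the business idea and a list of
# criteria that each market name must meet.
prompt = """Given the business idea below, suggest the 3 best-fit market names.

Business idea:
<business idea goes here>
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,  # higher values mean more run-to-run variation in the output
)

print(response.choices[0].message.content)
```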
I copied and pasted the prompt I had written and asked whether I should make any changes.
After making the changes it suggested, I got these results:
Almost useless.
The two prompts aren't dramatically different. Mine uses a bulleted list of criteria that the market name should meet; ChatGPT's version collapses that list into a single block of text (three sentences).
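To make the structural difference concrete, here's a hypothetical before/after (these are not my actual criteria, just the shape of the two formats):

```python
# Hypothetical criteria, shown only to illustrate the two prompt shapes.

bulleted_criteria = """The market name must meet these criteria:
- It is a recognized industry or market category
- It is specific enough to be useful for competitive research
- It is no more than a few words long
"""

# ChatGPT's suggestion collapses the same criteria into running prose:
single_paragraph_criteria = (
    "The market name must be a recognized industry or market category, "
    "specific enough to be useful for competitive research, "
    "and no more than a few words long."
)
```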
So, lessons learned: bullet points are good, even if ChatGPT doesn't think so. And humans are still smarter than AI at prompt engineering.