AI Prompt Design: Key Writing Tips for AI Prompt Engineer Jobs

As generative AI continues to play an expanding role across various industries, prompt engineering has emerged as a crucial aspect of AI literacy. The ability to clearly and precisely articulate requests directly impacts the effectiveness and efficiency of AI systems. The level of attention given to prompt design determines how well AI can handle complex tasks and deliver nuanced, insightful responses. Therefore, mastering the art of crafting effective prompts is key to unlocking the full potential of AI.

This post shares practical prompt-writing tips and walks through specific examples.

The content is divided into four sections:

  1. Writing prompts using the CO-STAR framework
  2. Segmenting prompts with delimiters
  3. Creating system prompts using LLM protective measures
  4. Analyzing datasets using only LLMs

Now, let's start!

I. Writing Prompts Using the CO-STAR Framework

The Meaning of CO-STAR

CO-STAR is an acronym representing the six key elements of a structured prompt template:

  1. (C) Context - Background Information: Provide context for the task. By giving the large language model (LLM) detailed background information, you help it accurately understand the specific scenario, ensuring its response is relevant.
  2. (O) Objective - Define the Goal: Clearly state what you want the LLM to accomplish. Defining the task's goal helps the LLM focus its response on that specific objective.
  3. (S) Style - Specify the Writing Style: Indicate the writing style you're looking for. You can specify the style of a well-known figure or industry expert, such as a business analyst or CEO, guiding the LLM to use the appropriate tone and vocabulary.
  4. (T) Tone - Set the Emotional Tone: Establish the emotional tone of the response. Defining the tone ensures that the LLM's response aligns with the expected emotional or attitudinal backdrop. Possible tones include formal, humorous, and empathetic.
  5. (A) Audience - Identify the Target Audience: Tailor the response for a specific audience. Whether for experts, beginners, or children, customizing the LLM's response for the target audience ensures the content is appropriate and understandable.
  6. (R) Response - Define the Output Format: Specify the format of the output. Setting the output format ensures the LLM generates responses that meet your specific needs and that downstream tasks can consume. Common formats include lists, JSON, and professional reports; JSON is ideal when the LLM's output must be processed programmatically.
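As a quick illustration, the six elements above can also be assembled programmatically. This is a minimal sketch in Python; the helper name `costar_prompt` and the exact label format are my own choices, not part of any library:

```python
# Minimal sketch: join the six CO-STAR elements into one labeled prompt string.
# The function name and label layout are illustrative, not from a library.

def costar_prompt(context, objective, style, tone, audience, response):
    """Assemble a CO-STAR prompt, one labeled section per line."""
    sections = [
        ("CONTEXT", context),
        ("OBJECTIVE", objective),
        ("STYLE", style),
        ("TONE", tone),
        ("AUDIENCE", audience),
        ("RESPONSE", response),
    ]
    return "\n".join(f"{label}: {text}" for label, text in sections)

prompt = costar_prompt(
    context="I want to advertise my company's new product, the Beta hair dryer.",
    objective="Create a Facebook post that drives clicks on the product link.",
    style="Follow the advertising style of companies like Dyson.",
    tone="Persuasive.",
    audience="Older adults browsing Facebook.",
    response="A concise, impactful Facebook post.",
)
```

The resulting string can then be sent as the user message of any chat request.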

Specific Example Comparison:

Scenario: Suppose you are a social media manager, and you need help drafting a Facebook post to promote your company’s new product.

  1. Without using CO-STAR
  • Prompt:
Write a Facebook post to advertise my company's new product. My company is called Alpha, and the product is called Beta, a new ultra-fast hairdryer.
  • GPT-4 output:
Dear Netizens,

I am delighted to introduce you to Alpha's newly launched product—the Beta Super Fast Hair Dryer!
Beta is not just a hair dryer; it is our unique interpretation of beauty and speed. This product utilizes cutting-edge technology to dry your hair at lightning speed. Not to mention, Beta achieves rapid drying while protecting your strands from heat damage.
Our design philosophy for Beta combines speed and comfort, and we believe that by using Beta, you will also enjoy this special experience. Whether during your busy morning schedule or when you're rushing out the door, Beta will be your best companion.
At Alpha, we always adhere to a customer satisfaction-centered service philosophy, and Beta is a realization of that commitment. So, give it a try and let our Beta Super Fast Hair Dryer bring more convenience and joy to your life.
We look forward to your feedback as we collectively experience Alpha's customer-centric service.

#Alpha #BetaSuperFastHairDryer
  2. Using CO-STAR
  • Prompt:
CONTEXT: I want to advertise my company's new product. My company's name is Alpha, and the product is called Beta, a new type of super-fast hair dryer.
OBJECTIVE: Create a Facebook post for me aimed at getting people to click on the product link to make a purchase.
STYLE: Follow the writing style of successful companies advertising similar products, such as Dyson.
TONE: Persuasive.
AUDIENCE: My company's audience on Facebook is generally older adults. Tailor your post to target what these audiences typically look for in hair products.
RESPONSE: The Facebook post should be concise and impactful.
  • GPT-4 output:
 **[Alpha Company New Product Launch]** 
Dear friends, we are excited to introduce our next-generation product—the Beta Super Fast Hair Dryer! 
Following Dyson's tradition of excellence, the Beta hair dryer combines speed, efficiency, and gentleness—all packed into a beautifully designed tool. 
Are you tired of waiting for your hair to dry? Are you looking for a hair dryer that can dry your hair quickly while caring for it and preventing damage? Beta is the answer you've been searching for! 
It dries quickly, achieving optimal results in the shortest time possible. More importantly, our exclusive temperature control technology ensures that the temperature remains just right, preventing damage to your hair and discomfort to your scalp. 
Step into the crowd, radiating confidence as the person who stands out! Click the link below to purchase our Beta Super Fast Hair Dryer now, stay ahead of the trends, and bring convenience to your daily life! 
With Alpha, beauty has never been so easy!  [Product Link]
Take action now, and let’s enjoy happiness, beauty, and life together! 

As the comparison shows, the CO-STAR prompt produces output that is more targeted, specific, and effective.

II. Segmenting Prompts With Delimiters

Using delimiters effectively helps large language models better parse the prompt. The more complex the prompt, the more important delimiters become. You can design your own delimiters, but to avoid ambiguity they should not coincide with ordinary punctuation marks.

Common delimiters include ###, ===, and <<<>>>. Additionally, XML tags can be used to separate sections of a prompt.

For example:

Classify the sentiment of each conversation in <<<CONVERSATIONS>>> as 'Positive' or 'Negative'. Give the sentiment classifications without any other preamble text.
###
EXAMPLE CONVERSATIONS
[Agent]: Good morning, how can I assist you today? [Customer]: This product is terrible, nothing like what was advertised! [Customer]: I'm extremely disappointed and expect a full refund.
[Agent]: Good morning, how can I help you today? [Customer]: Hi, I just wanted to say that I'm really impressed with your product. It exceeded my expectations!
###
EXAMPLE OUTPUTS
Negative
Positive
###
<<<[Agent]: Hello! Welcome to our support. How can I help you today? [Customer]: Hi there! I just wanted to let you know I received my order, and it's fantastic! [Agent]: That's great to hear! We're thrilled you're happy with your purchase. Is there anything else I can assist you with? [Customer]: No, that's it. Just wanted to give some positive feedback. Thanks for your excellent service!
[Agent]: Hello, thank you for reaching out. How can I assist you today? [Customer]: I'm very disappointed with my recent purchase. It's not what I expected at all. [Agent]: I'm sorry to hear that. Could you please provide more details so I can help? [Customer]: The product is of poor quality and it arrived late. I'm really unhappy with this experience.>>>

This example uses ### to segment the sections and distinguishes EXAMPLE CONVERSATIONS from EXAMPLE OUTPUTS by using uppercase headings. The beginning of the prompt states that the conversations to be classified will be enclosed in <<<CONVERSATIONS>>>, and those conversations are then provided at the bottom of the prompt. Although no explanatory text accompanies them, the LLM understands that these are the conversations it should classify because of the <<< and >>> delimiters.
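The same delimiter discipline can be applied when assembling prompts in code, so that instructions, few-shot examples, and the data to classify never blur together. Below is a minimal Python sketch under the conventions used above (### between sections, <<<...>>> around the data); the function name is illustrative:

```python
# Sketch: build a delimiter-segmented classification prompt.
# ### separates sections; <<<...>>> wraps the conversations to classify.

def build_classification_prompt(instructions, examples, conversations):
    """Assemble instructions, few-shot examples, and data with delimiters."""
    example_block = "\n".join(examples)
    data_block = "\n".join(conversations)
    return (
        f"{instructions}\n"
        "###\n"
        "EXAMPLE CONVERSATIONS\n"
        f"{example_block}\n"
        "###\n"
        f"<<<{data_block}>>>"
    )

prompt = build_classification_prompt(
    "Classify the sentiment of each conversation in <<<CONVERSATIONS>>> "
    "as 'Positive' or 'Negative'.",
    ["[Customer]: This product is terrible!", "[Customer]: I'm really impressed!"],
    ["[Customer]: My order arrived and it's fantastic!"],
)
```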

III. Creating System Prompts Using LLM Protective Measures

Because large models have limited context memory, instructions that would otherwise need to be repeated can be configured once in OpenAI's System Prompt. The system prompt is merged with the User Prompt and submitted to the model on every turn, reducing both memory loss and prompt clutter. Generally, the following categories can be set:

  • Task Definition: This allows the LLM to always remember what it needs to do throughout the chat.
  • Output Format: This ensures that the LLM always remembers how it should respond.
  • Guardrails: This helps the LLM always remember how it should not respond. Guardrails define the boundaries within which the LLM is allowed to operate; for example, defensive instructions that prevent prompt-injection attacks.
For example:

  • Task Definition: You will use the following text to answer questions: [Insert Text]
  • Output Format: You will respond in the following JSON object format: {'Question': 'Answer'}
  • Guardrails (Hallucination): If the text does not contain enough information to answer the question, do not fabricate information; the answer should be 'NA.'
  • Guardrails (Scope): You are only allowed to answer questions related to [Insert Scope] and must not answer any questions related to demographic information such as age, gender, or religion.

Additionally, libraries like NeMo Guardrails can be used to make the guardrails dynamic, allowing for adjustments based on the actual situation.
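Here is a hedged sketch of how the four categories above might be packed into a single system prompt for OpenAI's chat API. Only the message construction is shown and tested; the actual API call is left commented out, and the model name in the comment is an assumption:

```python
# Sketch: a fixed system prompt carrying task definition, output format,
# and guardrails, prepended to every user turn.

SYSTEM_PROMPT = "\n".join([
    "Task: You will use the following text to answer questions: [Insert Text]",
    "Output format: You will respond in the following JSON object format: {'Question': 'Answer'}",
    "Guardrail (hallucination): If the text lacks the information, answer 'NA'; never fabricate.",
    "Guardrail (scope): Only answer questions related to [Insert Scope]; refuse demographic questions.",
])

def build_messages(user_question):
    """Prepend the fixed system prompt to a user question."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

messages = build_messages("What does the text say about pricing?")
# The call itself would look roughly like this (model name is an assumption):
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(model="gpt-4o", messages=messages)
```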

IV. Analyzing Datasets Using Only LLMs, No Plugins or Coding Required

Large models are not adept at precise mathematical calculations or complex, rule-based processing. However, they excel at recognizing patterns and analyzing trends. This ability stems from their extensive training on a wide variety of data, which allows them to identify complex patterns that may not be immediately obvious. This makes them particularly suitable for pattern-finding tasks over datasets, often yielding better results in less time than writing code.

For example:

  • Anomaly Detection: Identifying anomalous data points that deviate from the norm based on one or more column values.
  • Clustering: Grouping data points with similar characteristics into different clusters.
  • Cross-Column Relationships: Identifying combinatorial trends across columns.
  • Text Analysis (for text-based columns): Classifying based on themes or sentiment.
  • Trend Analysis (for datasets with a temporal aspect): Recognizing patterns, seasonal changes, or trends across time periods in columns.

Example: "Analyze a Kaggle Dataset Using Only LLMs". The real Kaggle dataset is prepared for customer personality analysis, with the aim of segmenting the customer base for better understanding.

To facilitate verification of the LLM's analysis results, this dataset is trimmed to 50 rows, retaining only the most relevant columns. Below is the dataset used for analysis, with each row representing a customer and the columns indicating customer information:

[Screenshot: the 50-row customer dataset]

The goal is to utilize the customer information dataset to guide marketing efforts. This can be broken down into two tasks: 1) Generate meaningful customer segments using the dataset. 2) Propose optimal marketing strategies for each segment. This is a practical business problem where the LLM's pattern recognition capabilities (for Step 1) can truly shine. Below are prompts created using four prompt engineering techniques:

  1. Break down complex tasks into simple steps.
  2. Reference intermediate outputs for each step.
  3. Format the LLM's responses.
  4. Separate instructions from the dataset.

Specific Procedure:

  1. System Configuration and Task Instruction Input:
  • System Prompt:
I want you to act as a data scientist to analyze datasets. Do not make up information that is not in the dataset. For each analysis I ask for, provide me with the exact and definitive answer and do not provide me with code or instructions to do the analysis on other platforms.
  • User Prompt:
# CONTEXT #
I sell wine. I have a dataset of information on my customers: [year of birth, marital status, income, number of children, days since last purchase, amount spent].
#############
# OBJECTIVE #
I want you to use the dataset to cluster my customers into groups and then give me ideas on how to target my marketing efforts towards each group. Use this step-by-step process and do not use code:
1. CLUSTERS: Use the columns of the dataset to cluster the rows of the dataset, such that customers within the same cluster have similar column values while customers in different clusters have distinctly different column values. Ensure that each row only belongs to 1 cluster.
For each cluster found,
2. CLUSTER_INFORMATION: Describe the cluster in terms of the dataset columns.
3. CLUSTER_NAME: Interpret [CLUSTER_INFORMATION] to obtain a short name for the customer group in this cluster.
4. MARKETING_IDEAS: Generate ideas to market my product to this customer group.
5. RATIONALE: Explain why [MARKETING_IDEAS] is relevant and effective for this customer group.
#############
# STYLE #
Business analytics report
#############
# TONE #
Professional, technical
#############
# AUDIENCE #
My business partners. Convince them that your marketing strategy is well thought-out and fully backed by data.
#############
# RESPONSE: MARKDOWN REPORT #
<For each cluster in [CLUSTERS]>
— Customer Group: [CLUSTER_NAME]
— Profile: [CLUSTER_INFORMATION]
— Marketing Ideas: [MARKETING_IDEAS]
— Rationale: [RATIONALE]
<Annex>
Give a table of the list of row numbers belonging to each cluster, in order to back up your analysis. Use these table headers: [[CLUSTER_NAME], List of Rows].
#############
# START ANALYSIS #
If you understand, ask me for my dataset.
  2. GPT Response and User-Provided Data for Analysis
[Screenshot: GPT confirms the instructions and the user supplies the dataset]
  3. GPT Generates Analysis Report
[Screenshots: GPT's generated analysis report]

Verify the Accuracy of LLM's Analysis

Select two customer segments generated by the LLM for verification—e.g., Young Families and Savvy Enthusiasts.

  • Young Families
    1. LLM Generated User Profile: Born after 1980, married or cohabitating, with low to medium income, has children, and frequently makes small purchases.
    2. LLM Clusters These Rows into This Group: 3, 4, 7, 10, 16, 20

Check the actual dataset:

[Screenshot: the dataset rows for this cluster]

This is fully consistent with the user profile identified by the LLM. It can even cluster entries with missing values without any preprocessing!

  • Savvy Enthusiasts

    1. LLM Generated User Profile: Wide age range, any marital status, high income, diverse parental status, and high purchasing expenditure.
    2. LLM Clusters the Following Entries into This Group: 2, 5, 18, 29, 34, 36

Check the actual dataset:

[Screenshot: the dataset rows for this cluster]

This aligns very closely with the features identified by the LLM!

Meanwhile, the author was unable to get the data analysis plugin to work properly.

[Screenshot: the data analysis plugin failing to run]

In this case, the following prompt engineering techniques were specifically employed:

  1. Breaking a Complex Task Into Simple Steps: LLMs excel at executing simple tasks but struggle with complex ones. Therefore, for complex tasks like this, it's important to break the task down into simple, step-by-step instructions for the large language model to follow. Compared to simply giving the overall task to the LLM, such as the instruction, “Group the customers and then provide marketing strategies for each group,” providing step-by-step instructions increases the likelihood that the LLM will yield accurate results. For example:
Use this step-by-step process and do not use code:
1. CLUSTERS: Use the columns of the dataset to cluster the rows of the dataset, such that customers within the same cluster have similar column values while customers in different clusters have distinctly different column values. Ensure that each row only belongs to 1 cluster.
For each cluster found,
2. CLUSTER_INFORMATION: Describe the cluster in terms of the dataset columns.
3. CLUSTER_NAME: Interpret [CLUSTER_INFORMATION] to obtain a short name for the customer group in this cluster.
4. MARKETING_IDEAS: Generate ideas to market my product to this customer group.
5. RATIONALE: Explain why [MARKETING_IDEAS] is relevant and effective for this customer group.
  2. Reference Each Step's Intermediate Outputs from the LLM.

When providing each step to the LLM, assign an uppercase variable name to each step's intermediate output, such as CLUSTERS, CLUSTER_INFORMATION, CLUSTER_NAME, MARKETING_IDEAS, and RATIONALE.

Using uppercase letters is intended to distinguish these variable names from the main instructions given. These intermediate outputs can later be referenced using brackets as [VARIABLE_NAME].

  3. Format the LLM's Response. Here, the LLM's response is requested in markdown report format. Using the intermediate-output variable names to structure the report is also very convenient.
# RESPONSE: MARKDOWN REPORT #
<For each cluster in [CLUSTERS]>
— Customer Group: [CLUSTER_NAME]
— Profile: [CLUSTER_INFORMATION]
— Marketing Ideas: [MARKETING_IDEAS]
— Rationale: [RATIONALE]
<Annex>
Give a table of the list of row numbers belonging to each cluster, in order to back up your analysis. Use these table headers: [[CLUSTER_NAME], List of Rows].
  4. Separate Task Instructions from the Dataset. The dataset was never provided to the LLM in the initial prompt. Instead, the prompt gave only the task description for the data analysis and ended with the following:
# START ANALYSIS #If you understand, ask me for my dataset.

ChatGPT subsequently indicated its understanding, and in the next prompt, we passed the dataset to it in the form of a CSV string:

[Screenshot: the dataset passed in as a CSV string]

The benefit of this approach is that it helps the LLM to better understand each instruction, reducing the likelihood of missing information, especially in complex tasks with lengthy instructions like this one. By first receiving the instructions and then receiving the dataset that the instructions pertain to, the LLM can digest what it should do before executing it on the provided dataset.
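This "instructions first, data second" pattern can be sketched as a two-turn exchange, with the dataset serialized to a CSV string only after the model acknowledges the instructions. The column names and values below are made up for illustration:

```python
# Sketch: send the task instructions in turn one, then the dataset as a
# CSV string in a later turn. All data here is illustrative.
import csv
import io

rows = [
    {"year_of_birth": 1985, "income": 42000, "amount_spent": 310},
    {"year_of_birth": 1962, "income": 95000, "amount_spent": 1250},
]

def to_csv_string(records):
    """Serialize a list of dicts to a CSV string for the follow-up prompt."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(records[0]))
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

turns = [
    {"role": "user", "content": "# START ANALYSIS #\nIf you understand, ask me for my dataset."},
    {"role": "assistant", "content": "Understood. Please provide your dataset."},
    {"role": "user", "content": to_csv_string(rows)},
]
```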
