Throughout several articles on various uses for generative AI in construction safety, I have yet to devote many words to the topic of prompting. It isn't because I don't find value in crafting good prompts; I do. However, I've been more interested in discussing the different specific use cases of generative AI in the safety context than finding ways to get better outputs from a chat session.
Still, crafting good prompts leads to better output from generative AI. So let's cover some basic prompting techniques that will help anyone using generative AI get better results.
Be Specific, but Not Too Specific
The average person using generative AI doesn’t provide enough context to the model, and the prompts are often too simplistic. A useful rule of thumb is to think of the model as a fifth grader: provide enough detail that it understands what you want, but not so much that you confuse it.
Example prompt 1: Create a list of biases in construction safety
Output 1:

Example prompt 2: You are an expert in construction safety incident investigations. Provide a list of biases that can impact the quality of accident investigations in construction.
Output 2:

Result: The second prompt provides similar results to the first but includes information relevant to incident investigations. It also begins to discuss how these biases might affect the incident investigation process.
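For readers who call an LLM from code rather than a chat window, the same principle applies: put the expert role in a system message and the specific task in a user message. The sketch below is illustrative only; the helper name is hypothetical, and the message format follows the common role/content convention used by chat-style APIs rather than any particular SDK.

```python
# A minimal sketch of separating role (context) from task when building a
# chat-style prompt programmatically. The {"role": ..., "content": ...}
# structure is the common convention for chat APIs; build_messages is a
# hypothetical helper, and the strings mirror Example prompt 2 above.

def build_messages(role_description: str, task: str) -> list[dict]:
    """Pair an expert persona (context) with a specific task."""
    return [
        {"role": "system", "content": role_description},
        {"role": "user", "content": task},
    ]

messages = build_messages(
    "You are an expert in construction safety incident investigations.",
    "Provide a list of biases that can impact the quality of accident "
    "investigations in construction.",
)
```

Keeping the role and the task in separate messages makes it easy to reuse the same expert context across many different questions.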
Chain-of-thought (CoT) Prompting
I discussed CoT in a previous article. Improvements in LLMs have reduced the need for it, but it remains a powerful technique for more complex tasks because it encourages a step-by-step problem analysis.
Example:
Instead of asking, “What are the hazards in this task?” use:
“Let’s think through [the task] step by step. What hazards could arise during preparation, execution, and cleanup?”
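If you are generating these prompts programmatically, the step-by-step frame can be wrapped around any task description. This is a minimal sketch; the function name and the sample task are hypothetical, and the wording mirrors the example above.

```python
# A sketch of wrapping a task description in a chain-of-thought frame,
# asking the model to reason phase by phase. cot_prompt is a hypothetical
# helper; the phrasing follows the example prompt above.

def cot_prompt(task: str) -> str:
    """Build a step-by-step hazard-analysis prompt for a given task."""
    return (
        f"Let's think through {task} step by step. "
        "What hazards could arise during preparation, execution, and cleanup?"
    )

print(cot_prompt("replacing a damaged guardrail"))
```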
Analogical Prompting
This prompting method involves asking the LLM to compare one topic to another by analogy. Analogical prompting can be helpful when explaining a new topic to a team or crew that is familiar with one set of hazards but not another.
Example: “How is addressing confined space entry hazards similar to mitigating risks in high-voltage environments?”
Decomposition Prompting
This method involves breaking down a complex challenge into smaller components. Decomposition prompting is particularly useful when you want the LLM to follow a specific path of reasoning, such as one prescribed by company policy.
Example: Instead of asking, “What caused the incident?” use this: “What were the immediate causes, contributing factors, and underlying system failures?”
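When the reasoning path is prescribed, for example by company policy, the decomposition can be encoded as a fixed list of components and turned into one focused sub-question each. The sketch below is illustrative; the component list is taken from the example above, and the function name and incident description are hypothetical.

```python
# A sketch of decomposition prompting: break one broad question into the
# components an investigation policy might prescribe. The component list
# mirrors the example above; decomposition_prompts is a hypothetical helper.

COMPONENTS = [
    "the immediate causes",
    "the contributing factors",
    "the underlying system failures",
]

def decomposition_prompts(incident: str) -> list[str]:
    """One focused sub-question per component, asked in policy order."""
    return [f"For {incident}, what were {part}?" for part in COMPONENTS]

for question in decomposition_prompts("the scaffold collapse on Site B"):
    print(question)
```

Asking the sub-questions one at a time, in order, keeps the model on the prescribed path instead of letting it pick its own framing.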
Demonstration Ensembling (DENSE)
This approach is described in a paper that surveyed research into LLM prompting methods. I have yet to use it much myself, but it looks promising. The technique involves prompting the LLM to take multiple positions and then choose the best one.
Example: Instead of asking, “How should I manage silica dust exposure?” use: “Present three different solutions for managing silica dust exposure. Analyze the strengths and weaknesses of each, and then recommend the best approach.”
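The ensemble-style prompt above can also be parameterized if you use it often. This is a minimal sketch; the function name is hypothetical, the candidate count is illustrative, and the wording mirrors the example above.

```python
# A sketch of an ensemble-style prompt: ask for several candidate
# solutions, an analysis of each, and a final recommendation.
# ensemble_prompt is a hypothetical helper modeled on the example above.

def ensemble_prompt(problem: str, n: int = 3) -> str:
    """Request n candidate solutions, then a reasoned recommendation."""
    return (
        f"Present {n} different solutions for {problem}. "
        "Analyze the strengths and weaknesses of each, "
        "and then recommend the best approach."
    )

print(ensemble_prompt("managing silica dust exposure"))
```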
In Summary
Effective prompting techniques can enhance the value of generative AI in safety management and other applications. While improvements to the models themselves have made prompting less critical, understanding these methods can still lead to more nuanced and valuable outputs. For best results, try many prompts. Don’t be satisfied with the first answer you get; keep asking questions and trying different prompting techniques until you get the best possible answer for your use case.