How to Build Better AI Prompts for Coding Help

AI tools such as ChatGPT can prove helpful when you’re coding. In theory, you provide the chatbot with a description of what you need coded, and the platform returns the appropriate code snippet. With that in mind, let’s see how to take the help you get from ChatGPT (and other generative AI products, such as Google’s Gemini) to the proverbial next level. It could prove crucial to your next job.

Asking AI to Provide Code

For the past couple of decades, programmers have had a substitute for AI: the millions of fellow programmers all working together to provide collective help. If you’ve spent any time on websites like Stack Overflow, you know there’s an army of people out there who will quickly provide code examples and advice in response to your questions… if the answer hasn’t already been posted on the site for years.

But just because Stack Overflow or commentary in a GitHub repo says code is correct doesn’t mean it’s correct. Sure, if somebody posts an answer that’s wrong, others are usually quick to point out the error. Then, others might respond with a slight nuance that refines the answer even more. But even then, you can’t just paste the code into your project and hope for the best; you need to go through the code carefully, understand it, and see if it fits your particular application. And if not, refine the code for your case.

The collective online consciousness of worldwide programmers functions much like today’s AI. And that’s what you need to understand when working with AI: the same rules apply.

Suppose you’re writing code that needs to verify whether a string is a valid email address. Instead of simply asking ChatGPT to write code to verify an email address, you need to be much more specific, such as:

I need some C# code that uses regular expressions to verify whether a string is a valid email address using today’s well-known top-level domains.

For what it’s worth, I fed the above into ChatGPT 4 (the premium one, which costs a monthly fee). After waiting about a minute or so, ChatGPT responded with “Internal Server Error.” So, I tried it again with 3.5. Then it delivered, providing me with some very nice C# code.

But when I looked at the code, I noted a problem: It didn’t bother checking for current top-level domains. Instead, it simply checked whether the string matched the format of a word followed by @ followed by a string of letters and digits, followed by a period, followed by a string of letters and digits. Nowhere did it check whether the final string was in fact a valid top-level domain. (For what it’s worth, there are hundreds of top-level domains, and a regular expression would be quite huge to include them all. Maybe ChatGPT’s solution was better.)
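To make the problem concrete, here’s a minimal sketch (in Python rather than the C# the prompt asked for) of the kind of format-only check ChatGPT returned. Note that it validates only the shape of the address; nothing verifies that the final part is a real, currently registered top-level domain:

```python
import re

# Format-only email check: local part, an @, a domain, and a suffix of
# two or more letters. It does NOT confirm the suffix is an actual,
# currently valid top-level domain.
EMAIL_PATTERN = re.compile(
    r"^[\w.+-]+@[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)*\.[A-Za-z]{2,}$"
)

def looks_like_email(address: str) -> bool:
    return EMAIL_PATTERN.match(address) is not None
```

A string like `user@example.notarealtld` would pass this check, which is exactly the gap between what the prompt asked for and what the generated code delivered.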

In any case, whatever ChatGPT gives you, you need to verify it carefully. Don’t just drop it into your code. Be as specific as possible in your prompts, but still be skeptical. ChatGPT and other similar AI tools are not perfect.

Asking AI to Find Bugs

AI is pretty good at finding bugs in your code, but you need to give it detailed prompts. You can paste code into your prompt without any description of what the code is supposed to do and simply ask ChatGPT to find bugs, but that approach rarely gives it enough to go on.

For example, suppose you have a function that takes an integer and returns the string “odd” if the number is even, and vice versa. (This function might be part of a game that intentionally gives the wrong answer.) If you just ask ChatGPT to find any bugs, it will spot the mismatch and report it as a bug. It’s impressive that ChatGPT can analyze code, including the strings, but you really need to give it more to go on.
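As a sketch in Python, the intentionally swapped function might look like this. Without a comment or prompt explaining the game’s intent, an AI reviewer has no way to know the swap is deliberate:

```python
def describe_number(n: int) -> str:
    # Deliberately swapped labels: the game is supposed to give the
    # wrong answer. Lacking that context, ChatGPT will flag this
    # as a bug.
    if n % 2 == 0:
        return "odd"   # even numbers are labeled "odd" on purpose
    return "even"      # odd numbers are labeled "even" on purpose
```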

You can place bug queries in two places: in comments in your code (ChatGPT will read them!) and in the prompt itself. Your prompt might be:

Please find bugs in this function which returns ‘even’ when given an odd number, and ‘odd’ when given an even number.

Of course, this is a very contrived example, but it makes the point. The adage “I’m not a mind reader” applies here: ChatGPT can’t distinguish what you intend your code to do from what your code actually does. The lesson: be as detailed as possible in your prompts.

Here’s an example of a prompt for a more sophisticated function:

This function takes a US address as a string. If the zip code contains only five digits, the function appends ‘-0000’ to the zip code and returns the entire address as a string with the updated zip code. If instead a nine-digit zip code is provided, it checks whether the zip code is in the format NNNNN-NNNN, where N is a single digit. If that format is not present, it reformats the zip code accordingly, removing any spaces and inserting a dash. Please find any bugs.

You would then paste in your code along with the prompt.
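A hypothetical implementation of that description (sketched here in Python, with an assumed function name) shows the sort of code you’d paste in alongside the prompt. It assumes the zip code comes at the end of the address string:

```python
import re

def normalize_zip(address: str) -> str:
    """Pad a five-digit zip to NNNNN-0000, or reformat a nine-digit
    zip into NNNNN-NNNN. Assumes the zip code ends the address."""
    # Nine digits at the end, possibly separated by a space or a dash:
    # reformat into NNNNN-NNNN.
    m = re.search(r"(\d{5})[\s-]?(\d{4})$", address)
    if m:
        return address[:m.start()] + m.group(1) + "-" + m.group(2)
    # Exactly five digits at the end: append the -0000 suffix.
    m = re.search(r"(?<!\d)(\d{5})$", address)
    if m:
        return address + "-0000"
    return address  # no recognizable zip; leave the address unchanged
```

Note the ordering matters: checking for five trailing digits first would mis-handle a nine-digit zip with no separator, which is exactly the kind of subtle bug you’d want ChatGPT to look for.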

Even then, ChatGPT might identify a bug when the code is actually correct. In that case, you can continue the conversation: “For that particular set of use cases, the code is actually correct. Can you try again?” And ChatGPT will dutifully do so.

Unit Tests

Now let’s consider unit tests. AI tools like ChatGPT can be great for generating them. However, there’s an important factor to consider: if you paste your code into an AI tool and ask it to generate unit tests, it will do so. But it doesn’t know whether the code you’ve written is actually correct. In fact, it will start out by assuming your code is correct, and from there it will build unit tests that all pass. If your code has bugs, the generated unit tests won’t catch them.

When asking an AI tool to build unit tests, follow the standard test-first procedure: create the tests before you write a line of code. That means describing in as much detail as possible what the code will do, and then asking the AI for the unit tests without actually providing code.

For example, suppose you’re planning to write a function that tests whether a number is within a certain range, say 0 to 10. If the number is less than 0, it will return the string “Less than 0.” If the number is greater than 10, it will return the string “Greater than 10.” For all others, it will return the string “Between 0 and 10 inclusive.”

A good prompt would be:

I’m going to write a function that returns the string ‘Less than 0’ for integers passed in that are less than 0. It returns ‘Greater than 10’ for integers passed in that are greater than 10. For all other integers, it returns ‘Between 0 and 10.’ What unit tests should I create?
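The tests the AI suggests for that prompt should cover both boundaries and both out-of-range cases. Here’s a sketch using Python’s unittest module, with a hypothetical function name and an implementation written afterward, per the tests-first workflow:

```python
import unittest

def classify_range(n: int) -> str:
    # Hypothetical implementation, written after the tests below.
    if n < 0:
        return "Less than 0"
    if n > 10:
        return "Greater than 10"
    return "Between 0 and 10"

class ClassifyRangeTests(unittest.TestCase):
    def test_below_range(self):
        self.assertEqual(classify_range(-1), "Less than 0")

    def test_above_range(self):
        self.assertEqual(classify_range(11), "Greater than 10")

    def test_lower_boundary(self):
        self.assertEqual(classify_range(0), "Between 0 and 10")

    def test_upper_boundary(self):
        self.assertEqual(classify_range(10), "Between 0 and 10")

if __name__ == "__main__":
    unittest.main()
```

The boundary tests (0 and 10) are the ones most worth having: off-by-one comparisons like `n <= 0` are exactly the bugs this kind of suite catches.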

This is where the AI tool can help you twice over. After you have your unit tests, and then after you write your code, you can ask the AI tool whether the code is correct. And in many cases it will catch the bugs for you.

Remember, most AI tools such as ChatGPT can maintain a conversation thread. You could provide the previous prompt about the unit tests; later, in the same thread, you could ask: I have written a function that is supposed to satisfy the above requirements. Does it have any bugs in it? And then you can paste your code in.

Yes, your unit tests should find the bugs, but ChatGPT will find them faster and show you how to fix them.

And then you can implement those fixes and run the repaired code through your test suite, which includes the tests ChatGPT provided. As your app grows and your team adds new features, you’ll run those unit tests with every full test pass. (Remember, unit tests aren’t run just once. With every run of your automation tools, you typically run all the unit tests for the entire application to make sure new code didn’t break existing code.)

Help with Concepts

ChatGPT doesn’t only help you write code; it can also help you understand concepts you’ll need in building your app.

Suppose you’re building a billing app for a small utility company. You’ve gathered the requirements from your client. You fully understand what the app needs to do, and you’re ready to start modeling the database.

But this is not the time to just ask ChatGPT for a list of tables in a phone billing app and run with it; most likely it will be missing things. For example, the FCC has strict rules on what reports need to be sent to it on a regular basis. Your tables need to include data for those reports. If you miss that and your client is unable to deliver those reports, they’re going to be very unhappy when they get fined by the FCC.

That means not relying only on ChatGPT, and instead doing a lot of research in other places, such as the FCC’s own site. Assuming you’ve done that step, then you can ask ChatGPT for help.

In the case of utilities such as phone services, it’s easy to make a lot of mistakes if you haven’t already been involved in building such an app before. For example, services such as caller ID will have a price associated with them. It would seem logical to put service name and price into a table... except services change price over time, usually going up every year or two, and that means you need to have multiple rows for a single service, and along with service name and price, you also need start date and end date.
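To make that point concrete, here’s a small Python sketch (the table layout and names are hypothetical) of a pricing lookup that models price changes with start and end dates, rather than a single price per service:

```python
from datetime import date
from typing import Optional

# Each row: (service_name, price, start_date, end_date).
# An end_date of None means the price is currently in effect.
PRICING_ROWS = [
    ("caller_id", 4.99, date(2022, 1, 1), date(2023, 12, 31)),
    ("caller_id", 5.49, date(2024, 1, 1), None),
]

def price_on(service: str, when: date) -> Optional[float]:
    """Return the price in effect for a service on a given date."""
    for name, price, start, end in PRICING_ROWS:
        if name == service and start <= when and (end is None or when <= end):
            return price
    return None  # service wasn't offered at that price on that date
```

This is why a naive two-column (service, price) table falls short: billing a January 2023 invoice in 2024 needs the price that was in effect then, not the current one.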

This is where ChatGPT can help. Ask it something like:

When building a pricing table in my phone billing app, what columns should I include, and also explain to me in detail the columns people are most likely to forget to include?

You can be even broader by asking it:

What tables should I include in a phone billing app, and tell me about tables people might forget to include.

From there you can build a dialog:

Tell me about that table called ServiceLocation. Why do I need that?

And as ChatGPT answers, you’ll learn more about it. Remember to mention the requirements you gathered. Two important prompts might be:

Here are the requirements the client gave me. Did they forget anything?

Do the tables you provided support all these requirements?

Conclusion

Always remember: ChatGPT isn’t perfect. It (and similar generative AI tools) might give you incorrect information, which is why you need to carefully review and test anything it might give you. And don’t only use AI tools: Look up government regulations, interview people in the field, and look at competing apps. Once you have everything in order, generative AI has a better chance of becoming a truly trusty assistant.