How Leaders Can Integrate AI into Their Team’s Workflow

If you’re a team lead, manager, or product owner, you’ve probably wondered how to effectively integrate AI into your development team’s workflow. You recognize that AI can’t (yet) replace developers. You also recognize that developers may resist bringing AI into their workflows. What can you do?

Let’s tackle the latter issue first: developers who are resistant to integrating AI into their workflow. If you have such developers on your team, they’re probably mid-to-senior level, likely older, and very good at what they do. They haven’t really embraced AI themselves: for example, instead of asking ChatGPT for help with a problem, they might still go to Stack Overflow.

They might not trust AI. They know in their hearts that computers cannot write code as well as people can. And that’s still true, even as AI’s code-generation abilities improve. But they might be surprised at how AI can help them code faster and more efficiently, despite some core flaws.

We want them to realize that these tools really can help them be more productive. Here’s a suggestion:

  • Add GitHub Copilot to VS Code and activate it.
  • Go to Leetcode and pick a coding challenge in a particular language.
  • Open up VS Code and create a new file in that same language.
  • Bring the senior dev by to watch.
  • Type in (don’t paste!) the Leetcode function name.
  • Watch Copilot immediately suggest a solution.
  • Let the senior developer analyze the solution to make sure it’s correct.

Then ask the senior dev for a function name. Ask them to make sure the name describes exactly what the function should do (such as ConvertPigLatinBackToEnglish). Let the senior dev type it in and watch their reaction. Then here’s the key: have them analyze the code Copilot created and make sure it’s correct.

Bingo! We want them to understand that AI can help write code, but it still needs a human to verify it’s correct. And in the end, developing code this way is usually much faster and more productive than the old way.
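You can preview this exercise at your own desk before trying it with the team. Typing a well-known Leetcode signature such as `two_sum` (a hypothetical stand-in here; any problem works) is usually enough for Copilot to suggest something like the following. The suggestion still needs a human to verify it:

```python
# Illustrative of the kind of completion Copilot often produces from
# a descriptive name alone; a developer must still confirm it's correct.
def two_sum(nums, target):
    """Return indices of the two numbers in nums that add up to target."""
    seen = {}  # value -> index of an earlier occurrence
    for i, n in enumerate(nums):
        if target - n in seen:
            return [seen[target - n], i]
        seen[n] = i
    return []
```

The point of the exercise isn’t that this solution is clever; it’s that verifying it takes the senior dev a minute, while writing it from scratch takes longer.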

Whatever the age and composition of your developer team, the key is helping them understand that AI is a power tool, akin to using a cordless drill to put a shelf together rather than doing everything by hand. Start by explaining what AI won’t do for them. It will not:

  • Debug every error
  • Rewrite their legacy code base
  • Finish building what’s required for the current sprint

Instead, it will be their trusty assistant and helper. For the older devs, remind them of how they’ve embraced syntax highlighting and popup suggestions, when in days past such things simply didn’t exist. AI is just the next step in that sequence.

For the junior devs, you’ll likely have the opposite problem: they’re going to want AI to write all their code. And that’s okay… as long as they realize that they still need to verify that the code is correct and plays well with the broader codebase.

In essence, explain to everyone that AI can:

  • Write boilerplate code faster (with the understanding that it’s just starter code and needs to be revised by a human).
  • Explain unfamiliar code patterns. Many of us who have embraced AI find ourselves spending less time asking it to write code and more time asking it to explain things. Did you find an intimidating regular expression that you’d like to use, but you’re concerned it’s too complex? Drop it into GitHub Copilot’s chat (or ChatGPT) and ask for an explanation.
  • Summarize design documents, because your team will still need to document every aspect of its process.
  • Generate unit tests. This one comes with a very important caveat: provide the AI tool with the specifications for the function and ask it to help build test cases; do not simply drop your code in and ask for test cases. Otherwise, the AI tool will assume the provided code is correct and build test cases that automatically pass.
  • Create README.md files. This can be tricky, because ChatGPT and other AI tools can struggle to produce clean copy. Treat the output as a starting point for revision, and always check every line for technical accuracy.
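The unit-test caveat is worth making concrete. In the sketch below, the tests are derived from a hypothetical written spec (“return the string reversed; empty input allowed; case preserved”), not from the implementation. Tests written this way would fail against a buggy implementation, whereas tests an AI reverse-engineers from the code itself tend to pass no matter what the code does:

```python
def reverse_string(s):
    """Implementation under test (imagine it was AI-generated)."""
    return s[::-1]

# Tests written from the spec, before looking at the code above.
# Asking an AI to "write tests for this code" instead tends to
# produce tests that mirror the code's behavior, bugs included.
def test_reverse_string():
    assert reverse_string("abc") == "cba"
    assert reverse_string("") == ""          # spec: empty input allowed
    assert reverse_string("AbC") == "CbA"    # spec: case preserved
```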

It takes time to learn how to effectively prompt an AI for a response you can actually use. Those who don’t know their way around prompting usually neglect to provide the system with all the necessary information. For example:

  • “Why would a list not sort faster?”

While that might be okay to get started, a much better prompt provides far more detail:

  • “I’m using Python and I need to sort 10,000 objects, each stored in a dictionary with id, firstname, lastname, and location keys. I want to sort them by location first, lastname second, and firstname third. I’m using Python’s built-in sorted() function but it’s not giving me the expected results. Here’s the code I wrote…”

The latter is far more specific, obviously, and far more likely to deliver the results you need. Encourage developers to interact frequently with AI and follow guides to expand their prompting knowledge.

We’re at a point where we need to start building official policies around AI, especially in industries that handle sensitive data, such as finance and healthcare. An official AI policy can help ensure that all AI usage is secure and accurate. For example, you might create a policy stating that:

  • Code pasted from AI must go through the same set of tests as code written manually.
  • Tests must be developed independently of the code itself so that false positives don’t occur.

That’s a great start. But what about code documentation? People have been using automated documentation tools for years. Is it okay to drop the code into ChatGPT and ask it to produce documentation for the API your developers are building? Again, there’s no right or wrong answer; rather, a policy must be in place.

Policy starts with assessing current and anticipated AI use; from there, it’s helpful to implement “pilot policies” to see what will work in terms of AI and actual workflows. Based on the data and feedback from those early tests, you’ll need to figure out how to scale AI policy across the organization.

Here’s the harsh reality: although senior developers might be resistant to AI, the younger developers will most likely embrace it. They’re going to use it in their work. They’re going to use it to answer questions; they’re going to use it to speed up their development. What’s important, then, is to allow them to do so, but move them forward on the right path:

  • Encourage them to use it; consider purchasing team licenses for tools like GitHub Copilot.
  • Allow them to practice with it.
  • Ensure they’re developing proper tests; one possibility is to let the senior developers build the unit tests.
  • Set expectations that AI should assist in writing code, not replace the developer.
  • Continue with regular code reviews; even AI-generated code needs good reviews.
  • Allow the senior devs (including those who resist AI!) to mentor the junior devs as they explore how AI can help them.

AI is certainly here to stay. With the above guidelines, you can use AI to help boost your team’s development process while ensuring the tech stack stays secure.