During the summer, GitHub released Copilot, a coding autocomplete tool it claimed would help software developers “quickly discover alternative ways to solve problems, write tests, and explore new APIs.” Copilot leveraged an algorithm called OpenAI Codex, itself adapted from OpenAI’s GPT-3 natural-language generator.
Four months later, do developers actually use (and like) Copilot? For some programming languages, GitHub claims that up to 30 percent of newly written code is suggested by Copilot. “We hear a lot from our users that their coding practices have changed using Copilot,” Oege de Moor, VP of GitHub Next, recently told Axios. “Overall, they're able to become much more productive in their coding.”
However, Copilot isn’t the software Terminator doomed to destroy your software-development job. As Axios pointed out, a recent study by New York University found that “approximately 40 percent” of code generated by Copilot had cybersecurity vulnerabilities. There’s also the question of accuracy: On its FAQ page for the tool, GitHub claims Copilot was 43 percent correct in filling in blanked-out sections of a Python function (rising to 57 percent when the tool was allowed 10 attempts); while that’s significant, it also suggests Copilot could produce buggy (and vulnerable) code without close human monitoring.
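To see what that fill-in-the-blank benchmark measures, here’s a minimal sketch in Python. GitHub hasn’t published this harness; the `median` function, the candidate completions, and the `check_completion` helper are all invented for illustration. The idea is the same, though: blank out a function body, take a model’s suggested completion, and run unit tests to decide whether it counts as “correct.”

```python
def check_completion(candidate_body: str) -> bool:
    """Execute a candidate function body and test it for correctness.

    Hypothetical harness: the prompt is a signature plus docstring with
    the body blanked out; the candidate body is what a model suggests.
    """
    namespace = {}
    source = (
        "def median(values):\n"
        '    """Return the median of a non-empty list of numbers."""\n'
        + candidate_body
    )
    try:
        exec(source, namespace)
        median = namespace["median"]
        # A correct completion must pass all of these checks.
        return (
            median([1, 2, 3]) == 2
            and median([4, 1, 3, 2]) == 2.5
            and median([7]) == 7
        )
    except Exception:
        return False  # Completions that crash count as failures.


# A plausible correct suggestion: sort, then pick the middle element(s).
good = (
    "    s = sorted(values)\n"
    "    n = len(s)\n"
    "    mid = n // 2\n"
    "    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2\n"
)

# A plausible buggy suggestion: forgets to sort before indexing.
bad = (
    "    n = len(values)\n"
    "    return values[n // 2]\n"
)
```

Under this kind of scoring, `good` passes and `bad` fails, which is why a single correctness percentage can hide a lot: a suggestion that merely looks right still fails the moment its tests run.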
But as with other A.I. tools, the goal is to improve Copilot’s accuracy over time. There’s every possibility that coding autocomplete platforms will advance in coming years; if they become highly accurate, they’ll play an ever-larger role in coding, especially in areas such as legacy-code maintenance. That said, there are other aspects of a developer’s job—including abstract thinking and creativity—that machines likely won’t replace anytime soon, if ever. In the meantime, keep an eye on how the A.I. and machine-learning markets are evolving, because these technologies will shape multiple industries.