
While many tech professionals focus on AI's technological limitations and how to develop more sophisticated models, AI ethics remains the proverbial elephant in the room. We're all enamored with the technology's possibilities, but we haven't yet agreed on when AI is a good option to use, or why.
We've been using AI to create code for years and may not have realized it. Even simple features like code auto-completion hint at an AI model buried in the IDE. However, what's coming next will put AI front-and-center in the creation of apps and services.
We asked experts for their insights on AI and coding to better understand when you should (and shouldn't) use AI-generated code from an ethical standpoint.
When is it a smart idea to use AI-generated code?
Max Shak, CEO and founder of zapiy.com and nerDigital, has experience using AI-generated code and is keenly aware of its upsides and pitfalls: “AI-generated code can be a lifesaver when automating repetitive tasks that don’t require complex problem-solving.” For example, AI can generate boilerplate code for setting up frameworks, APIs, or data pipelines, freeing developers to focus on more strategic aspects of their projects.
Michael Berlet, senior software engineer at The Walt Disney Company, works on automation daily. Berlet agrees with Shak that AI can be great for boilerplate implementations such as “database connections, encryption/decryption workflow templates, and context-specific configurations.”
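To make Berlet's point concrete, here's a sketch of the sort of database-connection boilerplate an AI assistant is well suited to produce. The helper below is illustrative, not the output of any particular tool; it uses Python's built-in sqlite3 for a self-contained example.

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def get_connection(db_path=":memory:"):
    """Open a SQLite connection, commit on success, roll back on error."""
    conn = sqlite3.connect(db_path)
    try:
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise
    finally:
        conn.close()

# Typical boilerplate usage: create a table, insert a row, read it back.
with get_connection() as conn:
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))
    rows = conn.execute("SELECT name FROM users").fetchall()
```

Code like this is tedious to retype across projects but easy to review, which is exactly the profile Shak and Berlet describe.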
Berlet adds that AI can be helpful “when you're trying to learn data structures and algorithms and understand the time complexity and tradeoffs of particular implementations.” This is another way AI-generated code and services can save you time.
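The time-complexity tradeoffs Berlet mentions are easy to demonstrate with a quick experiment, the kind an AI assistant might generate for a learner. This sketch (our own illustration, not from the interview) times membership checks in a list versus a set:

```python
import timeit

n = 100_000
data_list = list(range(n))  # membership check is O(n): a linear scan
data_set = set(data_list)   # membership check is O(1) on average: a hash lookup

# Check for the worst-case element (the last one) many times.
list_time = timeit.timeit(lambda: n - 1 in data_list, number=100)
set_time = timeit.timeit(lambda: n - 1 in data_set, number=100)

print(f"list: {list_time:.4f}s, set: {set_time:.6f}s")
```

Running snippets like this and asking the model to explain the gap is a low-risk way to build intuition before touching production code.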
Shak says: “When developers need to rapidly prototype an idea or build a minimum viable product (MVP), AI can accelerate the process by generating code snippets or entire functions based on provided requirements. This can be particularly useful when time to market is critical.”
When is it a bad idea to use AI-generated code?
“AI lacks the deep contextual understanding required for tasks that involve intricate business logic,” Shak tells Dice. “Using AI-generated code in these situations may lead to functionality not aligning with business needs or objectives, requiring extensive revisions.”
Berlet agrees: “If you work in an organization that uses confidential business logic, exposing company practices to an LLM could be fed back into a training model that competitors might use.”
AI is known to “hallucinate,” Berlet reminds us. “[If] you are not technical and cannot validate the correctness of any generated code, AI should be avoided. LLMs can hallucinate on more nuanced use cases, so you can easily get into trouble if the user can't do due diligence on edge cases with generated code.”
“It’s dangerous to blindly copy and paste AI-generated code into production environments without thoroughly reviewing it,” Shak adds. “AI tools may offer shortcuts, but without proper testing and validation, these can lead to unpredictable behavior, inefficiencies, or even outages.”
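One practical hedge against that risk is to exercise any AI-generated function with quick assertions before it ships. In this sketch, the hypothetical slugify function stands in for arbitrary AI-generated output:

```python
import re

# Imagine this function was produced by an AI assistant.
def slugify(title):
    """Turn a title into a lowercase, URL-friendly slug."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Don't paste and ship: probe ordinary inputs and edge cases first.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  spaces  ") == "spaces"
assert slugify("") == ""  # edge case: empty input
```

A few minutes of this kind of validation is far cheaper than debugging an outage caused by an unreviewed snippet.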
Shak and Berlet agree that AI should be avoided when security and confidentiality are in play. Because AI is so new, weaving AI-powered tools and services through a security stack might compromise users, and the model you're using may be more public than you realize, which can result in a sneaky data leak.
When is AI-generated code most useful?
“Over the course of any given workday or project—and depending on deadline—developers have limited mental bandwidth to provide value to an organization in a timely manner,” Berlet tells Dice. “AI-generated code can significantly reduce the amount of boilerplate code a developer might write, make it easier to traverse API documentation for a specific business requirement, and provide context on industry best practice for common problems found across companies.”
“[It’s great for] efficiency and productivity,” Shak adds. “AI can take on mundane coding tasks, which allows developers to focus on creative and strategic problem-solving. It can generate snippets in seconds that would otherwise take minutes or hours, significantly boosting productivity.”
Shak also suggests that AI can be useful when learning a new programming language or framework: “For developers looking to learn a new programming language or framework, AI-generated code can offer quick examples and help them understand syntax and best practices more quickly. It’s a practical way to experiment and learn through real-time feedback.”
Should AI-generated code be avoided altogether?
“AI can generate code based on patterns, but it doesn’t truly understand the context behind the problem it’s trying to solve,” adds Shak, underscoring that AI is not a technology you can granularly control. “This can lead to code that functions on the surface but fails to meet the nuanced requirements of the project.
“Many AI models are trained on publicly available code, and there’s always the risk that AI-generated snippets could resemble copyrighted or proprietary code. This could lead to legal issues if not handled properly.”
“You should never use AI-generated code in a production environment without understanding what's happening with it and having tested it,” Berlet adds. “If you're sharing company-proprietary implementations, you can also get into legal trouble or expose company secrets to a competitor.”
Is AI-generated code ethical?
“Generally speaking, yes,” Berlet says. “There are common reproducible patterns in software development that span across languages, frameworks, and use cases, and a major part of why software has exploded in use over the past few decades is that software developers benefit heavily from sharing useful implementations. Like practical mathematicians, software developers are more interested in whether their calculations lead to a correct answer than if they had devised the underlying algorithm. Code reuse can be very efficient and, if understood by the end-user, encouraged.”
But Shak adds: “It depends on how it’s applied. Factors like intellectual property, the security implications of the generated code, and transparency should be considered. For instance, if AI-generated code introduces copyright issues or developers don’t fully understand the implications of the code they’re implementing, it can raise ethical concerns.”
Conclusion
AI is too new to rely on entirely for most coding projects. Most developers and engineers are happy using AI for automated tasks such as testing or code auto-completion. And that’s likely where the line should be drawn unless you’re going to do the work to verify AI-generated code or results manually.
When verifying results, remember that AI might not save you time. If verification and fine-tuning take longer than simply doing it yourself, AI has failed at its true mission: to help humans be more efficient and productive.