Great code isn’t easy to write. Despite the many blog posts that promise people can learn to code in a few weeks with the latest boot camp, most developers take years to hone their craft. Computers speak in absolutes — 1s and 0s. Humans speak in messy, imprecise, and emotional language. Figuring out a way to bridge the gap between the two is no small feat.
ChatGPT and other AI-powered coding platforms are poised to change all that. Any would-be developer can ask the large language model chatbot to write an application, and it will produce line after line of code. Projects that would otherwise take hours or days to complete can be finished in minutes. Developers can type in a few sentences, kick their feet up, and spend the rest of the day learning to play the guitar. Or not.
The issue with using AI to write code is that it does so like humans do, with wildly inconsistent and frequently error-riddled results. This is not to say developers should avoid using AI to write code. That ship has sailed. Though it’s only been slightly over a year since ChatGPT was released, almost 85% of developers have already used at least one AI tool to write code, an unprecedented adoption rate. In the right circumstances and for the right tasks, AI can be a helpful tool for speeding up the development cycle. The key to using AI tools for coding is to realistically understand their strengths and weaknesses and know how to mitigate their risks.
🤔 The Uncertain Regulatory Landscape
Given the seductive ease of writing code with AI, it’s unrealistic to expect developers to avoid doing so. Even the most die-hard AI opponents recognize that the genie is out of the bottle. The White House’s recent executive order on AI outlines both the possibilities and risks of artificial intelligence. While there isn’t yet comprehensive legislation addressing the commercial use of generative AI tools, there’s good reason to believe it’s on the horizon. The European Union’s AI Act is expected to pass before the end of the year, making it the world’s first standalone AI regulation.
Much like the EU’s General Data Protection Regulation (GDPR), the effects of the AI Act will be far-reaching because it will affect any business that provides services to any EU citizen. What this means for developers is that their software applications — whether AI-generated or not — will have to comply with the new and existing laws. In addition to AI-specific regulations, the software must also comply with existing cybersecurity and data protection regulations that are rapidly expanding in scope.
⚠️ Problems With AI-Generated Code
The gravest danger of coding with AI is its security vulnerabilities. According to a 2022 study, people who used AI assistants to write code believed their code was more secure than code written without assistance, when in reality it was not. This false sense of security is a serious issue for businesses that are attempting to write compliant code for mission-critical operations.
The speed at which generative AI produces code is also a security risk. AI can create code to be released into production in a fraction of the time humans require, eliminating many of the safety measures implicit in the slower, more contemplative human pace.
In addition to security concerns, developers need to be aware of quality issues with AI code. Much of the code created by AI models is simply poor, for several reasons. Part of the problem lies with users who either don’t write effective prompts or don’t know enough to evaluate the results they get.
Another part of the problem is the nature of AI models themselves. Over time, AI models tend to get dumber, a process called degenerative learning. Models initially trained on diverse datasets that included rare and outlier events gradually lose their grasp of those unusual cases. As a result, their responses become flatter and narrower. As the models continue to train on their own responses, the effect compounds.
👨‍💻 Best Practices for Writing Code With AI
Despite the drawbacks, there’s no doubt a place for AI-generated code. It’s unrealistic to expect AI to generate clean code for an entire application. Similarly, people who don’t know how to write code won’t have good results. To coax high-quality code from platforms like ChatGPT, developers need to understand what they’re asking for and what good results look like.
AI does better with narrow, specific requests rather than vague, general ones. As with any output by generative AI tools, teams should never blindly trust AI code. Generative AI is prone to hallucinations and errors, so AI output needs to be verified by experienced developers. Organizations should train developers on how AI works and implement best practices for ensuring the quality and security of the finished codebase.
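One lightweight way to verify AI output is to pin its behavior down with tests before accepting it. As a sketch, suppose an assistant produced the following (hypothetical) helper in response to a narrow, specific prompt; the reviewer’s assertions, not the model’s confidence, decide whether it ships:

```python
import re

# Hypothetical AI-generated helper, produced from a narrow prompt:
# "Write a slugify function: lowercase, collapse runs of
# non-alphanumerics into '-', and trim leading/trailing dashes."
def slugify(title: str) -> str:
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Verification the human reviewer adds before trusting the code,
# including an edge case the original prompt never mentioned.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  --AI & Code--  ") == "ai-code"
assert slugify("") == ""
```

The function name and prompt here are illustrative, but the practice is the point: small, explicit checks catch the plausible-looking mistakes that generative models are prone to.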
Code security tools can help teams use AI-generated code to their best advantage by testing code for security vulnerabilities. Static application security testing (SAST) automatically scans the codebase to find security flaws introduced by human programmers or AI. SAST flags security and quality concerns so developers can clean up the codebase and eliminate vulnerabilities and bugs.
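To make this concrete, here is an illustrative (not Kiuwan-specific) sketch of the kind of flaw a SAST scan flags: SQL built by string concatenation is injectable, while a parameterized query is not.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Typically flagged by SAST: untrusted input concatenated
    # directly into a SQL statement (CWE-89, SQL injection).
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'"
    ).fetchone()

def find_user_safe(conn, username):
    # The fix: bind the value as a parameter; the driver
    # treats it as data, never as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchone()
```

With the unsafe version, an input like `x' OR '1'='1` rewrites the query and matches every row; the parameterized version simply finds no user by that literal name.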
🔒 Secure Your Code With Kiuwan
Kiuwan makes coding with AI safer by providing an end-to-end security platform that empowers developers to shift left and find security flaws earlier in the software development lifecycle. Code Security (SAST) helps teams comply with the most stringent cybersecurity frameworks, including OWASP, NIST, and CWE.
Insights, Kiuwan’s software composition analysis (SCA) tool, scans codebases for open-source components. Teams may not realize that code generated by AI contains open-source components. Insights can identify hidden open-source components, including their associated versions and licenses.
Taken together, Kiuwan’s comprehensive toolset provides complete application security. It works with all major programming languages and frameworks. Contact us today for a free trial or click the link below for a quick demo!