Two person rule used in missile control. U.S. Air Force photo by Senior Airman Jason Wiese
🔥 Hot topic: What's your strategy for AI-assisted coding? Today it's an incredible productivity and learning boost, but the more we use it and think it through, the clearer it becomes that establishing safeguards today is essential. We can't simply rely on our friendly cloud provider - they're also just figuring it out, and there is a lot to figure out.
Three no-brainer safeguards to check with your team are:
Double-down on test-driven development practices
Template your boilerplate (as much as possible)
Start the conversation now on when and how your team should use tools like Copilot
Like many of us, I've been on a crash course in AI and AI-assisted coding over the past months. Having tested the current capabilities and talked with or listened to experts on what may come next, I still have more questions than answers.
However, what's clear to me is:
💖 The technology is phenomenal and irresistible. We will use it for larger and larger coding jobs.
💰 The tsunami of products and marketing will continue as an arms race among tech giants.
🤷 Those closest to the field are openly cautioning that it's not fully understood and will unexpectedly hallucinate/make stuff up/be wrong.
🤔 For software teams, AI working alongside us means we need to rethink our responsibility model.
What I'm less clear on:
If AI is language-agnostic, will today's programming skills give way to higher-order computational prompt languages?
We already need help understanding large unstable codebases and nests of microservices. If AI creates exponentially more code, will we need to surrender the expectation that we understand how the systems truly operate and behave?
Many other what-ifs raised in The AI Dilemma (well worth a watch)
Back to the present. Following this month's product announcements from Google's I/O and Microsoft's Build conferences (and we can assume Amazon will follow), someone on your team will ask to use Copilot or similar tools to help with their job. So what should your AI coding strategy or policy be?
GitHub Copilot FAQs. Treat generated code as you would any other internet copy-and-paste
tl;dr
If you're using an AI to work on an important codebase:
Treat it like weaponized Stack Overflow. What you ask may be seen by others. What you get back may have glitches. At a minimum, be sure you understand it before committing the code change.
Ask: how will you verify its work?
Ask: why should you trust the organisation behind the AI?
🛟Test-Driven Development: Gives you options
Not all changes are equal. The early use cases might be coding individual functions, creating tests, and so on. Sure, easy. Go for it. The pair-programming benefits of productivity and personal learning are immense. But what about other changes:
Infrastructure as code?
Features of your authentication flow?
Systems handling of personally identifiable data?
Core algorithm tweaks?
Firmware changes?
The blast radius of some changes going wrong or behaving unexpectedly is vast. To help manage this responsibility, software teams have for years relied successfully on test-driven development (TDD), yet many teams have still not adopted it. The rise of AI coding has moved TDD from a sound idea to a fundamental requirement for all software teams.
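To make the TDD point concrete, here's a minimal sketch of the discipline in Python's standard unittest framework. The function name and behaviour are purely illustrative - the point is that the tests pin down expected behaviour independently, so an AI-generated (or human-written) implementation has to satisfy them before it's trusted:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Illustrative implementation - in a TDD flow, the tests below
    would be written first, and this body (possibly AI-generated)
    must satisfy them before it is committed."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    # These tests encode the intended behaviour, regardless of
    # who or what wrote the implementation.
    def test_basic_discount(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(200.0, 0), 200.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

With tests like these in place first, "the AI wrote it" stops being a leap of faith - the change either passes the agreed behaviour or it doesn't.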
Trust but verify
Stephen Wolfram speaking with Lex Fridman on AI and the future of computation
In Stephen Wolfram's recent interview, he repeatedly points out that a large-language model is fundamentally just concerned with working out the next best word (or part of one) to add to a response. When it makes a wrong choice, you can get an answer that looks good and is coherent but is entirely inaccurate or false. Ideas are underway to help with this; many involve one AI governing another AI. However, that's all still up in the air.
In the same interview, Stephen discusses the challenge of making a safe sandbox and a confronting moment where he consciously allowed an AI to run code it generated on his machine. The mind boggles at the security and compliance duct tape that such a scenario makes null and void.
These scenarios underscore the importance of establishing your team's practice early to remain accountable for the software being deployed.
In addition to TDD, another change we can make immediately is to be deliberate when and how we use AI. A software team variation on the two-person rule for launching missiles would be:
AI when implementing code changes 🆗
AI to generate tests 🆗
AI to make the change AND verify it ❌
Combining TDD with agreed usage of AI will, at a minimum, give Engineers and their teams options to ensure we don't slowly lose control and understanding of the changes entering our systems.
⚙️ Template your boilerplate
Custom snippets in VS Code
When using tools like ChatGPT or GitHub Copilot, a common delight is the boilerplate coding - instantly getting the base structure of a function or component in place. Normally, this code is written, copied, or re-generated repeatedly across CRUD operations and everyday UI interactions. If you haven't already, start putting this boilerplate code into your approved project templates and IDE snippets. Regardless of whether it is human- or AI-generated, reviewing this code once and making it standard is a good idea. It will rarely change, so save yourself the toil of constantly reviewing it. It will also be easier to automate detection that the approved boilerplate code is being applied.
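As a sketch of what this looks like in practice, VS Code lets you define custom snippets in a JSON file (per-language, or shared via a `.code-snippets` file checked into the repo). The snippet name, prefix, and body below are illustrative placeholders, not a recommended standard:

```json
{
  "Approved GET route handler": {
    "prefix": "route-get",
    "body": [
      "app.get('/${1:path}', async (req, res) => {",
      "\t$0",
      "});"
    ],
    "description": "Team-reviewed boilerplate for a GET route"
  }
}
```

Typing the prefix and accepting the snippet stamps out the reviewed boilerplate verbatim, with tabstops (`$1`, `$0`) for the parts that genuinely vary - no model, no surprises.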
🪄 My AI Wishlist
On a positive note, I still have high hopes for how AI can truly transform software engineering in the coming years. As of today, those are:
Writing the code is no longer the major constraint on Engineers' delivery and operations capability
Performance tuning expert is available 24x7
Pair programming will become the norm (AI or human).
TDD will become the norm
Less reliance on pull requests
Fewer dependencies on key experienced people
Private enterprise-scale refactoring - training private custodian models on your repositories atop large-scale public LLMs
Thank you to Ákos Muráti, Declan Mungovan, Eugene "Oldman" Starodedov, Oliver Hyde, and Paul Meyrick for helping with the chats and grounding for this piece.
Finally, if you're still on the positive side of AI and see it all as upside, I do encourage you to watch the below piece from the same people as the Social Dilemma on Netflix.
Originally published on LinkedIn.