ArcTouch CTO explains how generative AI can ‘make us better software developers’
As ArcTouch employee No. 3, Paulo Michels has been building lovable apps for clients since the dawn of the App Store. Now as chief technical officer, Paulo has seen the rise — and sometimes the fall — of many technologies over the course of his 15 years here.
So, with generative AI dominating our industry’s collective consciousness, we thought it would be useful to hear Paulo’s position on technologies like ChatGPT. We’ve recently written about how adding “AI inside” can advance our clients’ apps and websites, but there’s another use for generative AI — as a tool to accelerate and optimize commercial software development.
“In the end, it’s not about human vs. AI — but how AI can make us better software developers.”
We sat down with Paulo, and here’s what he told us.
Do you feel comfortable using generative AI tools to write code?
Yes. I believe generative AI will be very beneficial to developers when used in the right context and with the proper safeguards in place. At ArcTouch, we’re already using them in controlled environments, where we can properly measure their impact and manage the risks.
We see these tools as assistants, doing some of the more repetitive and tedious work, or quickly making suggestions for how to write algorithms that solve common problems. Most of the largest developer platforms now have AI-based code generation tools available. Microsoft has GitHub Copilot and Amazon has CodeWhisperer. Google recently added co-developer functionality to Android Studio at Google I/O, and I would expect Apple to announce something similar for Xcode at their upcoming developer conference.
It’s still the responsibility of the developers to review the work of this assistant and identify bugs and inefficiencies. This responsibility isn’t new for experienced developers like the ones we have at ArcTouch. We understand that any code or tool from external sources used in our projects becomes our responsibility — whether it’s a third-party library, contributions from other developers, or references found online on sites like Stack Overflow. Simple techniques of mature development teams, such as peer code reviews and unit testing, can help protect individual developers from potential mistakes caused by using these sources.
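The unit-testing safeguard Paulo describes can be sketched with a small hypothetical example (the function and its bug are invented for illustration): an AI assistant suggests a plausible-looking date helper, and a reviewer’s edge-case test catches the mistake before it ships.

```python
def days_in_month(year: int, month: int) -> int:
    """Hypothetical AI-suggested helper: looks right, but has a bug."""
    if month == 2:
        # Bug: ignores the century rule (e.g., 1900 was not a leap year).
        return 29 if year % 4 == 0 else 28
    return 31 if month in (1, 3, 5, 7, 8, 10, 12) else 30

def days_in_month_fixed(year: int, month: int) -> int:
    """Reviewer's correction applying the full Gregorian leap-year rule."""
    if month == 2:
        leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
        return 29 if leap else 28
    return 31 if month in (1, 3, 5, 7, 8, 10, 12) else 30

# Unit tests written during peer review: the common case passes either way,
# but the 1900 edge case exposes the AI version's mistake.
assert days_in_month(2024, 2) == 29
assert days_in_month_fixed(2024, 2) == 29
assert days_in_month_fixed(1900, 2) == 28
assert days_in_month(1900, 2) != days_in_month_fixed(1900, 2)
```

The point isn’t that AI suggestions are usually wrong; it’s that the same review and testing discipline teams already apply to third-party code applies here, too.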
What are the benefits of using generative AI to write software?
Generative AI tools can help developers write code more quickly and efficiently. They can also help with overall productivity, quality, performance, and security.
Can AI eventually replace human developers?
I don’t believe so — but there’s no question that generative AI will change our jobs, just as it will many types of work. It has the potential to be an incredibly useful tool. Ultimately, a developer’s job is to solve technical challenges as a builder — not just to write code. If AI can take care of some of the more mundane tasks and low-level code generation, then a developer’s time can be freed up to focus on higher-value tasks. These include understanding user needs, solving the right business problems, thinking through the user experience, and so on. That’s where the real value is.
Would you consider using AI to check code for errors?
At the moment, generating unit tests for existing code is actually one of the most useful things that AI coding tools can do for us. However, it shouldn’t be the only method used to check for code errors.
We also see a lot of opportunity in applying generative AI to assist in creating test scenarios, writing automated tests, and simulating human behavior when interacting with the software. Generative AI excels at transforming human language into executable code and vice versa, which is at the core of practices like behavior-driven development (BDD) and acceptance test-driven development (ATDD).
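That natural-language-to-code translation at the heart of BDD can be sketched in miniature. This is a hypothetical, framework-free example (real BDD teams would typically use a tool like Cucumber or behave): a human-readable scenario maps line by line onto executable steps.

```python
# A plain-English scenario, the kind a product owner might write.
scenario = """
Given a cart with 2 items at $10.00 each
When the user applies a 10% discount code
Then the total should be $18.00
"""

class Cart:
    """Minimal cart model used by the step implementations below."""
    def __init__(self, quantity: int, unit_price: float):
        self.total = quantity * unit_price

    def apply_discount(self, percent: float):
        self.total *= (1 - percent / 100)

def run_scenario() -> float:
    """Step implementations a developer (or AI assistant) writes per line."""
    cart = Cart(quantity=2, unit_price=10.00)   # Given
    cart.apply_discount(10)                     # When
    assert round(cart.total, 2) == 18.00        # Then
    return cart.total

run_scenario()
```

Translating each Given/When/Then line into a step implementation is exactly the kind of repetitive, well-patterned work where an AI assistant can save time, while the human-authored scenario remains the source of truth.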
Would you be comfortable using a program or app that AI had written?
Yes, if it was partially written by AI, under human supervision. I certainly would not want to run an app on my device if it was entirely written by AI, and wasn’t reviewed and tested by a human.
What are the risks of using AI to write software?
Here are some of the common risks when using AI-generated code:
- Quality: The generated code may not be optimized or efficient. This could lead to issues with performance, scalability, or maintainability.
- Security: AI tools generate code based on patterns learned from existing code and programming styles, which means they may reproduce security vulnerabilities in their output.
- Code consistency: It’s important for a code base to be consistent, following well-defined coding standards, and aligned with the overall project architecture and abstractions. Generated code doesn’t always consider the context of where it’ll be used and doesn’t follow these project-level best practices.
Besides the technical risks, there are some other indirect (but no less important) risks:
- Intellectual property: The AI tools may be trained on open-source code that is not necessarily licensed for commercial use. Certain tools use your code as input to improve the AI model, which could expose your intellectual property. The more mature tools provide granular control over some of these aspects, which is something we carefully consider before deciding what is appropriate for each project.
- Legal: The use of AI tools may raise legal questions about the ownership of generated code, and potential liability for errors in the generated code.
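The quality risk above is easy to illustrate with a deliberately simple, hypothetical example: an AI tool may suggest a textbook-style recursive Fibonacci that is correct but exponential in time, when an iterative version is linear and just as readable.

```python
def fib_naive(n: int) -> int:
    """Plausible AI suggestion: correct, but makes O(2^n) recursive calls."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_iterative(n: int) -> int:
    """Reviewer's replacement: same results in O(n) time, O(1) space."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Both agree on their results; only the second one scales.
assert fib_naive(20) == fib_iterative(20) == 6765
```

Both functions pass the same correctness tests, which is precisely why performance and scalability issues in generated code can slip past a review that only checks outputs.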
Will AI make software development more accessible to people who don’t have coding skills?
Certainly. AI can play the role of a personal tutor for those who are new to coding or even experienced developers who are new to a programming language or framework.
Do you think AI models designed to generate code are more likely to make mistakes than human developers?
It depends on the complexity of the software and the seniority of the software developer. In the end, it’s not about human vs. AI — but how AI can make us better software developers.
Do you see any flaws in how large-language-model (LLM) generative AI is trained?
An LLM cannot test for software correctness — it can only replicate what it has learned. From this perspective, LLMs are susceptible to repeating mistakes and may not always understand the whole context as they make decisions.
Are there risks that a code-generating AI might ingest and then output bad code?
Yes. Many are concerned about the lack of transparency in AI model training, which may include ingesting low-quality code or even malicious code. Development teams can mitigate this through proper code review.
For custom AI models — where companies may train AI on their data — development teams can be selective about the ingested code and avoid error-inducing input.
What are the chances that an AI might train on and then output proprietary code?
Existing tools claim that the chances are minimal or even zero if configured to use only properly licensed open-source code as input. This is still a concern and something that we take into consideration when deciding to use such tools. We are also transparent about it with our clients — prior to using generative AI for anything, not just for software code — so they understand the risks involved.
How else can generative AI make us better software builders?
AI can streamline somewhat tedious tasks, reducing human error and increasing efficiency. But it can also help development teams ideate — to generate new concepts and build prototypes — and to aid in UX/UI design. It’s still early days, but there’s little doubt in my mind that AI will be a powerful tool for builders throughout all stages of software product development.
Need help with your AI product strategy?
Adding generative AI to your existing apps and websites offers your customers new, intelligent experiences. Our “AI Inside” design and development sprints can help. Want to talk about your AI strategy? Contact us for a free consultation.