AI tools and techniques are rapidly expanding in software as organisations aim to streamline large language models for practical applications, according to a recent report by tech consultancy Thoughtworks. However, improper use of these tools can still pose challenges for companies.

In the firm’s latest Technology Radar, 40% of the 105 identified tools, techniques, platforms, languages, and frameworks labeled as “interesting” were AI-related.

Sarah Taraporewalla, Thoughtworks’ APAC Chief Technology Officer, explained in an exclusive interview with TechRepublic that AI tools and techniques are proving their worth beyond the AI hype in the market.

Sarah Taraporewalla, APAC Chief Technology Officer, Thoughtworks.

“To get onto the Technology Radar, our own teams have to be using it, so we can have an opinion on whether it’s going to be effective or not,” she explained. “What we’re seeing across the globe in all of our projects is that we’ve been able to generate about 40% of these items we’re talking about from work that’s actually happening.”

New AI tools and techniques are moving fast into production

Thoughtworks’ Technology Radar is designed to track “interesting things” that the consultancy’s global Technology Advisory Board has found emerging in the software engineering space. The report also assigns each a rating that tells technology buyers whether to “adopt,” “trial,” “assess,” or “hold” these tools or techniques.

According to the report:

  • Adopt: “Blips” that companies should strongly consider.
  • Trial: Tools or techniques that Thoughtworks believes are ready for use, but not as proven as those in the adopt category.
  • Assess: Things to look at closely, but not necessarily trial yet.
  • Hold: Proceed with caution.

The report gave retrieval-augmented generation an “adopt” status, calling it “the preferred pattern for our teams to improve the quality of responses generated by a large language model.” Meanwhile, “using LLM as a judge,” a technique that leverages one LLM to evaluate the responses of another LLM and requires careful setup and calibration, was given a “trial” status.
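The judging pattern itself is simple to sketch. Below is a minimal, illustrative version in Python using the openai client, where one model generates an answer and a second model scores it against a fixed rubric; the model names, rubric, and 1-to-5 scale are assumptions for demonstration, not details from the report.

```python
# Minimal LLM-as-judge sketch. Model names, rubric, and scoring scale are
# illustrative assumptions; any chat-completion API works the same way.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

JUDGE_RUBRIC = (
    "You are an impartial judge. Score the ANSWER to the QUESTION from 1 "
    "(poor) to 5 (excellent) for factual accuracy and relevance. "
    "Reply with the number only."
)

def generate_answer(question: str) -> str:
    """First LLM: produce the response that will be evaluated."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice of generator model
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def judge_answer(question: str, answer: str) -> int:
    """Second LLM: score the first model's response against the rubric."""
    resp = client.chat.completions.create(
        model="gpt-4o",   # a stronger model is often used as the judge
        temperature=0,    # part of the careful calibration the report notes
        messages=[
            {"role": "system", "content": JUDGE_RUBRIC},
            {"role": "user", "content": f"QUESTION: {question}\n\nANSWER: {answer}"},
        ],
    )
    return int(resp.choices[0].message.content.strip())

question = "Why pair retrieval-augmented generation with a large language model?"
answer = generate_answer(question)
print(f"Judge score: {judge_answer(question, answer)}/5")
```

In practice, the calibration the report mentions means checking the judge’s scores against human ratings on a sample set before trusting them at scale.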

Though AI agents are new, the GCP Vertex AI Agent Builder, which allows organisations to build AI agents using a natural-language or code-first approach, was also given a “trial” status.

Taraporewalla said tools or techniques must have already progressed into production to be recommended for “trial” status, meaning they represent success in real, practical use cases.

“So when we’re talking about this Cambrian explosion in AI tools and techniques, we’re actually seeing those within our teams themselves,” she said. “In APAC, that’s representative of what we’re seeing from clients, in terms of their expectations and how ready they are to cut through the hype and look at the reality of these tools and techniques.”

Rapid AI tool adoption is causing concerning antipatterns

According to the report, the rapid adoption of AI tools is starting to create antipatterns: bad practices across the industry that lead to poor outcomes for organisations. With coding-assistance tools, a key antipattern is overreliance on the suggestions they generate.

“One antipattern we are seeing is relying on the answer that’s being spat out,” Taraporewalla said. “So while a copilot will help us generate the code, if you don’t have that expert skill and the human in the loop to evaluate the response that’s coming out, we run the risk of overbloating our systems.”

The Technology Radar pointed out concerns about the quality of generated code and the rapid growth of codebases. “The code quality issues in particular highlight an area of continued diligence by developers and architects to make sure they don’t drown in ‘working-but-terrible’ code,” the report read.

The report issued a “hold” on replacing pair programming practices with AI, with Thoughtworks noting the stance aims to ensure AI is helping rather than burying codebases in complexity.

“Something we’ve been a strong advocate for is clean code, clean design, and testing that helps decrease the overall total cost of ownership of the code base; where we have an overreliance on the answers the tools are spinning out … it’s not going to help support the lifetime of the code base,” Taraporewalla warned.

She added: “Teams just need to double down on those good engineering practices that we’ve always talked about — things like unit testing, fitness functions from an architectural perspective, and validation techniques — just to make sure that it’s the right code that is coming out.”
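Architectural fitness functions of the kind Taraporewalla mentions can be expressed as ordinary unit tests that fail the build when a design rule is broken. A minimal sketch, assuming a hypothetical layered codebase where the app/web layer must never import from app/db (the paths and the rule itself are illustrative):

```python
# An architectural fitness function written as a pytest test: the build
# fails if the (hypothetical) web layer imports directly from the db layer.
# Paths and the layering rule are illustrative; adapt them to your codebase.
import pathlib
import re

FORBIDDEN_IMPORT = re.compile(r"^\s*(?:from|import)\s+app\.db\b")

def test_web_layer_does_not_import_db_layer():
    violations = [
        f"{path}:{lineno}"
        for path in pathlib.Path("app/web").rglob("*.py")
        for lineno, line in enumerate(path.read_text().splitlines(), start=1)
        if FORBIDDEN_IMPORT.match(line)
    ]
    assert not violations, f"web layer imports db layer: {violations}"
```

Run in continuous integration, a check like this catches AI-generated code that quietly violates the architecture, regardless of who or what wrote it.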

How can organisations navigate change in the AI toolscape?

Focusing on the problem first, rather than the technology solution, is key for organisations to adopt the right tools and techniques without being swept up by the hype.

“The advice we often give is work out what problem you’re trying to solve and then go find out what could be around it from a solutions or tools perspective to help you solve that problem,” Taraporewalla said.

AI governance will also need to be an ongoing process. Organisations can benefit from establishing a team that defines their AI governance standards, educates employees, and continuously monitors changes in the AI ecosystem and regulatory environment.

“Having a group and a team dedicated to doing just that is a great way to scale it across the organisation,” Taraporewalla said. “So you get the guardrails put in place the right way, but you are also allowing teams to experiment and see how they can use these tools.”

Companies can also build AI platforms with integrated governance features.

“You could codify your policies into an MLOps platform and have that as the foundation layer for the teams to build off,” Taraporewalla added. “That way, you’ve then constrained the experimentation, and you know what parts of that platform need to evolve and change over time.”
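As an illustration of what codifying policy might look like, the sketch below models a deployment gate that blocks a model from being promoted unless its metadata satisfies governance checks. The field names, thresholds, and policies are hypothetical assumptions, not any particular MLOps platform’s API.

```python
# Hypothetical deployment gate: governance policies codified as checks that
# run before a model is promoted. Field names, thresholds, and policies are
# illustrative, not any specific MLOps platform's API.
from dataclasses import dataclass

@dataclass
class ModelMetadata:
    name: str
    owner: str
    eval_accuracy: float
    bias_audit_passed: bool
    training_data_approved: bool

POLICIES = [
    ("has a named owner", lambda m: bool(m.owner)),
    ("meets the accuracy floor", lambda m: m.eval_accuracy >= 0.90),
    ("passed the bias audit", lambda m: m.bias_audit_passed),
    ("uses approved training data", lambda m: m.training_data_approved),
]

def deployment_gate(model: ModelMetadata) -> list[str]:
    """Return the list of violated policies; an empty list means deploy."""
    return [name for name, check in POLICIES if not check(model)]

candidate = ModelMetadata("churn-predictor", "data-science", 0.93, True, True)
violations = deployment_gate(candidate)
print("Deploy" if not violations else f"Blocked: {violations}")
```

Because the policies live in code, teams can experiment freely within the guardrails, and the platform team can evolve the checks as regulations change.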

Experimenting with AI tools and techniques could pay off

Organisations that are experimenting with AI tools and techniques may have to shift what they use, but they will also be building their platform and capabilities over time, according to Thoughtworks.

“I think when it comes to return on investment … if we have the testing mindset, not only are we using these tools to do a job, but we’re looking at what are the elements that we will continue to just build on our platform as we go forward, as our foundation,” Taraporewalla said.

She noted that this approach could enable organisations to drive greater value from AI experiments over time.

“I think the return on investment will pay off in the long run — if they can continue to look at it from the perspective of, what parts are we going to bring to a more common platform, and what are we learning from a foundation’s perspective that we can make that into a positive flywheel?”
