
Op-Ed: How generative AI can empower developers to shift left

2023 is shaping up to be a year of artificial intelligence (AI) milestones, with the rapid pace of change impacting almost every industry.

Peter Marelas
Mon, 05 Jun 2023

Within the field of generative AI in software development alone, we are already starting to glimpse the potential for transformation.

McKinsey defines generative AI as algorithms that can be used to create new content, including audio, code, images, text, simulations, and videos. Once trained, generative AI can then generate new examples similar to those it was trained on. Generative AI is set to revolutionise how engineers communicate with technology and how technologies interact with each other.

According to IDC Group vice president Stephen Elliot, the broader goal for AI advancement should “go beyond chatbots and language processors, to holistic intelligence solutions that ensure a single source of truth, break down silos, ensure knowledge workers work smarter and deliver clear business results”.


The recently announced New Relic Grok represents a breakthrough in AI-powered assistants. New Relic Grok is the world’s first generative AI observability assistant powered by large language models (LLMs). By harnessing the capabilities of large language models, New Relic Grok has the potential to dramatically amplify an engineer’s productivity across every stage of the software development life cycle.

The evolution of observability with generative AI

The breadth of new possibilities for using AI in any field can seem immense. Having established the foundation model for AI advancement in observability, we now have a clear roadmap ahead of us, steered by the goal of empowering engineers to identify and resolve issues faster. What lies ahead is a phased evolution where each application of generative AI improves and enhances the user experience.

Phase 1: Optimise user experience with contextual AI assistance

Today’s generative AI solutions are largely out-of-band. That is, a user must context switch from a domain-specific product to an artificial general intelligence assistant like ChatGPT to benefit from generative AI. An example of this is asking ChatGPT a “How to ...” question about a specific product and then switching to that product to perform the instruction. While this approach is useful, it is not optimal.

In the next phase, the user experience will improve as products introduce domain-specific, in-band assistants that behave like ChatGPT but are available directly within the product experience. These assistants will be fine-tuned to answer domain-specific questions and perform domain-specific tasks. They will offer the added advantage of taking the user’s full context into consideration without requiring the user to supply that context with each question, so the assistant can respond to questions and tasks in a way that reflects the user’s current context and state. And because these assistants will have a much narrower scope, they will be far less likely to produce plausible but factually incorrect responses, which plague existing general-purpose AI assistants. Providing factual, correct responses will be necessary before customers trust generative AI to automate the complex task of troubleshooting and, eventually, remediating incidents, a common use case for observability solutions.
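To make the idea of in-band, context-aware assistance concrete, here is a minimal sketch in Python of how a product might fold the user’s current view into a prompt before handing it to a language model. The field names and structure are hypothetical illustrations, not a description of New Relic Grok’s implementation.

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    """Hypothetical snapshot of what the user currently has open in the product."""
    account_id: str
    entity_name: str        # e.g. the service selected in the UI
    time_window: str        # e.g. "last 30 minutes"
    recent_errors: list[str]

def build_contextual_prompt(question: str, ctx: UserContext) -> str:
    """Wrap the user's question with context the product already knows,
    so the user does not have to restate it with every query."""
    error_summary = "; ".join(ctx.recent_errors) or "none"
    return (
        "You are an in-product observability assistant.\n"
        f"Account: {ctx.account_id}\n"
        f"Entity in view: {ctx.entity_name}\n"
        f"Time window: {ctx.time_window}\n"
        f"Recent errors: {error_summary}\n\n"
        f"User question: {question}"
    )

if __name__ == "__main__":
    ctx = UserContext(
        account_id="1234567",
        entity_name="checkout-service",
        time_window="last 30 minutes",
        recent_errors=["HTTP 502 from payment-gateway", "Redis connection timeout"],
    )
    print(build_contextual_prompt("Why is latency spiking?", ctx))
```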

Phase 2: Improve decision making and efficiency with predictive/prescriptive intelligence

The next phase of the AI assistant will be to produce insights and advice without being asked. For example, a user browsing an APM-enabled application may receive unsolicited advice to “adjust settings in the Java VM to improve performance”. If the user accepts the recommendation, the assistant can schedule a task to implement it; if the user rejects it, the assistant learns to avoid recommendations of that kind in the future. For engineers, these automated recommendations will deliver immediate value without requiring years of experience to unlock it.
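As an illustration of that accept/reject feedback loop, the following Python sketch shows one way a product could record decisions and suppress categories of advice a user keeps rejecting. It is a simplified, hypothetical model, not any vendor’s actual recommendation engine.

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class Recommendation:
    """A hypothetical unsolicited recommendation surfaced by the assistant."""
    rec_id: str
    category: str           # e.g. "jvm-tuning"
    summary: str
    remediation_task: str   # task the assistant would schedule if accepted

@dataclass
class RecommendationEngine:
    rejections: Counter = field(default_factory=Counter)
    scheduled_tasks: list = field(default_factory=list)

    def record_feedback(self, rec: Recommendation, accepted: bool) -> None:
        if accepted:
            # Acceptance: schedule the remediation task for execution.
            self.scheduled_tasks.append(rec.remediation_task)
        else:
            # Rejection: count it so similar advice is suppressed later.
            self.rejections[rec.category] += 1

    def should_surface(self, rec: Recommendation, threshold: int = 3) -> bool:
        # Stop surfacing a category the user has repeatedly rejected.
        return self.rejections[rec.category] < threshold

if __name__ == "__main__":
    engine = RecommendationEngine()
    rec = Recommendation(
        rec_id="rec-001",
        category="jvm-tuning",
        summary="Increase the JVM heap to reduce GC pauses",
        remediation_task="update deployment config: -Xmx2g -> -Xmx4g",
    )
    if engine.should_surface(rec):
        engine.record_feedback(rec, accepted=True)
    print(engine.scheduled_tasks)
```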

One may ask: how does an AI assistant acquire such knowledge? Does it learn through observation, or does it learn from someone else’s experience? Both scenarios are plausible, and platforms that acquire knowledge by observing users will need transparent controls, guidelines and governance to prevent data and prompts from being used to enhance models without appropriate consent. It is also worth noting that generative AI is only as good as the data at its disposal. By combining large language models with the breadth of a unified telemetry data platform, New Relic Grok is designed to provide more reliable, higher-quality AI responses.

Phase 3: Shift left with autonomous discovery and automation

The next and perhaps final phase will see the introduction of greater autonomy and automation to support the entire practice of shifting left in observability, with AI assistants acting on behalf of users under varying degrees of autonomy and human supervision.

In this phase, engineers will be able to task the assistant with an objective, along with constraints and guardrails it must follow to achieve it. For example, the assistant may be tasked with seeking out opportunities to improve the performance of a particular service without increasing resource allocations. In this scenario, the assistant might draw on prior knowledge of identifying and solving N+1 query pattern problems by analysing distributed traces. To identify these opportunities, devise a plan and validate the approach, the assistant will have a variety of telemetry data, tools and environments at its disposal, not dissimilar to a human engineer.

These may include development, testing, automation, simulation and experimentation tools, followed by development, testing, staging and, eventually, production environments. Through a process of self-awareness and self-supervision, the assistant will devise a plan covering the tasks and sequencing needed to achieve the objective within the allowed constraints and guardrails. The plan, along with the results used to validate the approach in non-production environments, will be shared with a human for approval before the assistant implements it in production.
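To ground the N+1 example mentioned above, here is a small Python sketch of how repeated, identical database calls within a single request could be flagged from span data. The span fields are hypothetical, loosely modelled on common distributed-tracing schemas, and real detection logic would be considerably more involved.

```python
from collections import defaultdict

# Toy trace data: 25 identical per-order lookups under one web request,
# plus one unrelated query. Field names are illustrative only.
spans = [
    {"trace_id": "t1", "parent_id": "web-1", "kind": "db",
     "statement": "SELECT * FROM orders WHERE customer_id = ?"},
] * 25 + [
    {"trace_id": "t1", "parent_id": "web-1", "kind": "db",
     "statement": "SELECT * FROM customers WHERE id = ?"},
]

def find_n_plus_one(spans, threshold: int = 10):
    """Flag (trace, parent span, statement) groups where the same parameterised
    query repeats many times within a single request -- the classic N+1 pattern."""
    counts = defaultdict(int)
    for span in spans:
        if span["kind"] != "db":
            continue
        key = (span["trace_id"], span["parent_id"], span["statement"])
        counts[key] += 1
    return {key: n for key, n in counts.items() if n >= threshold}

if __name__ == "__main__":
    for (trace_id, parent_id, stmt), n in find_n_plus_one(spans).items():
        print(f"trace {trace_id}: '{stmt}' executed {n}x under span {parent_id}")
```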

Giving an assistant the freedom to explore, discover, experiment and iterate on objectives it has knowledge of will lead to an inflection point that finally frees engineers to focus on more complex and unique situations that demand human-level intelligence.

While reaching the stage of autonomous discovery and automation may seem unrealistic today, there are more challenging autonomous systems already in existence, namely autonomous vehicles. Embracing the evolution of generative AI across the entire software development life cycle will help organisations reach new levels of efficiency and performance.

Peter Marelas is chief architect, Asia-Pacific and Japan, at New Relic.
