Modernizing Legacy Systems with LLMs
Practical Strategies for Smarter, Leaner Digital Operations
Brighterwork.com features "Brighter Work," a newsletter where tech professionals share expertise on the latest business technology challenges. Stay up-to-date with modern insights covering a wide range of relevant tech topics.
Legacy digital systems are essential yet increasingly burdensome for modern organizations. While replacing them outright can be risky and expensive, doing nothing exposes businesses to security, scalability, and productivity risks. Large Language Models (LLMs) offer a new approach — one that enables incremental modernization through augmentation rather than disruptive overhaul. This paper explores pragmatic, AI-assisted modernization strategies that unlock immediate value while managing risk and ensuring business continuity.
Legacy systems remain the operational backbone for many organizations, yet they present clear challenges. High maintenance costs arise as specialized talent and outdated technologies drive up support expenses. Limited agility hampers the ability to integrate and scale operations effectively. Security vulnerabilities become more pronounced as unsupported platforms leave businesses exposed to risks. Additionally, older systems often result in data silos, making valuable information difficult to access or analyze, further inhibiting strategic decision-making.
Organizations are aware of these issues but often avoid full-scale replacement due to cost, complexity, and risk of disruption. However, recent advances in large language models have shifted the modernization landscape. Far beyond chatbots, LLMs can now understand and process unstructured and semi-structured data, automate human-like tasks with context awareness, and integrate seamlessly via APIs into existing software ecosystems.
By acting as cognitive "co-pilots," LLMs add value to legacy systems without demanding wholesale replacement, and organizations across industries are already demonstrating this in practice. For example, Morgan Stanley has integrated OpenAI-powered assistants to help financial advisors quickly access information from decades-old document repositories, improving knowledge retrieval and client service. SAP introduced Joule, a generative AI copilot embedded into its ERP suite, enhancing users' ability to navigate complex workflows and surface actionable insights without replacing core systems. Virtusa, an AWS partner, uses generative AI tools to automate the conversion of mainframe COBOL programs into Java services, preserving business logic while transitioning to a cloud-friendly architecture. Canadian Tire Corporation has leveraged Microsoft Azure and OpenAI Service to create digital assistants that help over 3,000 employees save valuable time daily, streamlining tasks across legacy systems. Meanwhile, the U.S. Office of Personnel Management is employing AI to assist in modernizing COBOL-based retirement systems by analyzing and categorizing millions of lines of legacy code, facilitating a more efficient modernization process.
These emerging use cases demonstrate how LLMs are being practically applied to deliver high returns on investment. Many organizations are adding natural language interfaces to legacy applications, making them more accessible and user-friendly. Others are deploying LLMs to handle complex data wrangling tasks, allowing them to extract, summarize, and report on difficult-to-access legacy data with ease. In support environments, LLMs are automating routine maintenance tasks, from generating technical documentation to triaging tickets and providing contextual recommendations to support staff. Consider, for example, a legacy ERP system that has been revitalized by integrating an LLM-powered assistant. This assistant can answer user queries, produce reports, and offer intelligent suggestions, breathing new life into otherwise rigid and dated infrastructure.
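The ERP assistant described above can be sketched in miniature. In the sketch below, a keyword matcher stands in for the LLM's intent-recognition step so the flow runs offline, and all names (LegacyERP, its report methods, the sample data) are hypothetical illustrations rather than any real product's API; in practice the routing would be handled by an LLM with function calling.

```python
class LegacyERP:
    """Stand-in for a legacy ERP exposing a few rigid report functions."""

    def open_orders(self):
        return [{"id": 1001, "status": "open"}, {"id": 1002, "status": "open"}]

    def inventory_summary(self):
        return {"widgets": 412, "gadgets": 38}


# Map recognized intents to the legacy function that satisfies them.
INTENTS = {
    "orders": lambda erp: erp.open_orders(),
    "inventory": lambda erp: erp.inventory_summary(),
}


def assistant(query: str, erp: LegacyERP):
    """Pick the legacy function that answers the query and run it.

    A real deployment would replace this keyword scan with an LLM call
    that selects a tool and extracts parameters from the user's text.
    """
    for keyword, action in INTENTS.items():
        if keyword in query.lower():
            return action(erp)
    return "Sorry, I can't answer that yet."


print(assistant("How many open orders do we have?", LegacyERP()))
```

The value of the pattern is that the legacy report functions stay exactly as they are; only the thin assistant layer on top is new.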
Rather than advocating for risky and disruptive rip-and-replace approaches, LLM-powered modernization focuses on safe, incremental transformation. Organizations are increasingly introducing LLMs alongside existing applications to handle repetitive and knowledge-driven tasks, reducing strain on legacy systems without altering their core. Through secure API connections, legacy platforms can be extended and enhanced, enabling LLMs to act as intelligent intermediaries that bridge old and new. This typically involves exposing select functions or data sources from legacy systems through RESTful or GraphQL APIs, which LLM-powered services can call dynamically to retrieve context-specific information or execute business logic. Authentication and role-based access control ensure these integrations remain secure and auditable. Depending on the organization's infrastructure, the LLM can be deployed in the cloud, on-premises, or in a hybrid setup, and may be further enhanced with tools like vector databases, RAG pipelines, and fine-tuning based on domain-specific data. These integrations enable LLMs to act not only as user-facing assistants but also as internal workflow engines that augment processes with intelligence and automation.
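The integration pattern above (select legacy functions exposed as API endpoints that LLM-powered services call, guarded by role-based access control) can be illustrated with a small dispatcher. Every endpoint path, role name, and return value here is hypothetical; an in-memory table stands in for real HTTP routing so the sketch is self-contained.

```python
# Registry of legacy functions exposed to the LLM layer, each with the
# roles permitted to invoke it. In production these would be RESTful or
# GraphQL endpoints behind an API gateway, not an in-memory dict.
LEGACY_ENDPOINTS = {
    "/customers/balance": {
        "roles": {"finance", "support"},
        "handler": lambda params: {"customer": params["id"], "balance": 1250.0},
    },
    "/payroll/run": {
        "roles": {"hr_admin"},
        "handler": lambda params: {"status": "scheduled"},
    },
}


def call_legacy(endpoint: str, params: dict, caller_role: str):
    """Dispatch an LLM tool call to a legacy endpoint, enforcing RBAC."""
    entry = LEGACY_ENDPOINTS.get(endpoint)
    if entry is None:
        raise KeyError(f"unknown endpoint {endpoint}")
    if caller_role not in entry["roles"]:
        raise PermissionError(f"role {caller_role!r} may not call {endpoint}")
    return entry["handler"](params)


print(call_legacy("/customers/balance", {"id": "C42"}, "support"))
```

Because every call passes through one checkpoint, the integration stays auditable: logging the endpoint, parameters, and caller role at that point yields a complete record of what the LLM actually did against the legacy system.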
In addition, many are adopting sidecar architectures, where LLM-powered microservices complement legacy systems rather than replace them. In this pattern, the LLM operates as an adjacent process or service—often containerized and running in parallel to the legacy application—interacting via message queues, RESTful APIs, or shared databases. This allows the LLM to handle cognitive tasks such as summarization, classification, or data extraction without altering the core legacy codebase. These microservices can be deployed independently, scaled on demand, and maintained separately from the main application, minimizing risk while accelerating innovation. This design also supports modular experimentation: organizations can pilot AI features on the side before fully integrating them into more critical workflows. This approach preserves business continuity while unlocking new efficiencies and capabilities, ensuring modernization efforts remain pragmatic and low risk.
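The sidecar pattern can be sketched with a queue-driven worker. Python's in-process `queue.Queue` stands in for a real message broker such as RabbitMQ or SQS, and the `summarize` function is a placeholder for an actual LLM call; the point is that the legacy application only ever enqueues messages, while the sidecar does the cognitive work independently.

```python
import queue


def summarize(text: str) -> str:
    """Placeholder for an LLM summarization call; truncates for the demo."""
    return text[:40] + ("..." if len(text) > 40 else "")


def sidecar_worker(inbox: queue.Queue, outbox: queue.Queue) -> None:
    """Drain the inbox, process each message, publish results to the outbox.

    Runs as a separate service in practice, so it can be deployed, scaled,
    and retired without touching the legacy codebase.
    """
    while not inbox.empty():
        msg = inbox.get()
        outbox.put({"id": msg["id"], "summary": summarize(msg["body"])})


# The legacy app's only responsibility: drop a message on the queue.
inbox, outbox = queue.Queue(), queue.Queue()
inbox.put({"id": 7, "body": "Ticket: user cannot log in to the legacy portal after password reset."})
sidecar_worker(inbox, outbox)
print(outbox.get())
```

If a pilot shows the sidecar is not delivering value, it can be switched off and the legacy system continues exactly as before, which is what keeps this style of experimentation low risk.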
AI-driven modernization no longer requires expensive and risky rip-and-replace strategies. LLM-powered augmentation offers a scalable, pragmatic pathway to unlock the full potential of legacy systems. At Brighterwork, we specialize in helping companies craft responsible, ROI-focused AI strategies. From assessment to implementation, our team can help you explore how LLMs can revitalize your digital infrastructure.
Morgan Stanley. "Morgan Stanley Research Announces AskResearchGPT." Morgan Stanley, 2023, https://www.morganstanley.com/press-releases/morgan-stanley-research-announces-askresearchgpt.
"SAP Debuts Joule, a Generative AI Copilot for Intelligent Insights." SAP, 2023, https://www.sap.com/products/technology-platform/joule.html.
Masood, Adnan. "The Unreasonable Effectiveness of Generative AI in Legacy Application Modernization." Medium, 2023, https://medium.com/@adnanmasood/the-unreasonable-effectiveness-of-generative-ai-in-legacy-application-modernization-0e81d294f894.
"How Real-World Businesses Are Transforming with AI." Microsoft, 2024, https://blogs.microsoft.com/blog/2025/04/22/https-blogs-microsoft-com-blog-2024-11-12-how-real-world-businesses-are-transforming-with-ai.
"AI-Powered Legacy System Modernization." TechSur Solutions, 2024,https://techsur.solutions/ai-powered-legacy-system-modernization-transforming-federal-it-with-less-disruption.