Legacy System = Caution Comes First

Rebuild, maintain, refactor (not necessarily in that order)

A relationship that started with rebuilding a legal document routing app grew into maintaining the client's entire portfolio of systems. Over several years of maintenance we also rebuilt pieces small and large along the way, including a customer-facing website and other apps and modules. Eventually the client was in a position to initiate a comprehensive rebuild, but this story is about our investment with the client in getting to that point. It is a story of the journey, not the destination.

When the president of a leading litigation support services company contacted us to help rebuild their routing application, we were immediately interested. The application supported internal operations by enabling scheduling and tracking of routes to deliver and pick up legal and related documents. We do well in highly regulated, high-compliance industries like finance, medical devices, and student loans. The requirements of the litigation document services industry are similar in terms of privacy, audit preparedness, and the need for a meticulous approach and record keeping.

Most importantly, we liked the commitment and credibility of the company’s people. The president had been a pioneer in introducing technology and automation to the litigation support services industry. The IT manager had been with the company for over a decade and was a strong leader in the systems engineering space. He was responsible for the technological direction of the company, knew the technology inside and out, and knew the stakeholders and how to prioritize among departments, apps, and services.

As far as technical support went, the company had an infrastructure team but only one in-house developer. A previous developer, who had been largely responsible for developing the current application environment, was no longer with the company. The remaining developer had the tremendous responsibility of maintaining 10-20 years of code supporting 40 apps and 40 services.

We were confident this would be a rewarding project for us and them.

Team structure: a strong client-side asset

The Integrant team included a technical project lead (TPL), developers, and testers. The client team included a dedicated developer and a strong, astute IT manager. The IT manager served as the technical liaison between us and the business users and other stakeholders. On a daily basis he was able to prioritize our work, manage our backlog, and coordinate stakeholder expectations and requirements. He served as a client-side subject matter expert (SME) who worked with business users to rephrase requirements from “what I want” to “why this feature/requirement will help achieve our goal.”

In addition to these tasks, the IT manager also handled all infrastructure, connection, and internet issues. He relied on us to be systematic, thorough, and meticulous in anticipating and analyzing how changes to one app might impact others.

Rebuilds: operations routing app and customer website

We assigned a development team to rebuild the routing app. The client liked the result and subsequently asked us to maintain their entire environment, including all apps and services. We would serve as the client’s development department, handling ongoing maintenance with app rebuilds as time and budget permitted.

After two years of systems maintenance we were asked to rebuild the client’s customer-facing website. We began due diligence to ensure that the rebuild would positively impact all users and all modules connected to the website, whether directly or indirectly.

For requirements gathering and demos on both rebuild efforts, we joined the client’s IT manager and internal developer to interface directly with the business users. Our contacts included:

  • Account executives
  • External users of the public site (no direct relationship; account executives represented their interests)
  • Marketing
  • Operations
  • Lab management
  • IT
  • CEO

As is usually the case, most stakeholders had conflicting goals and were not technical. In order to truly partner with the IT manager, we needed to ensure that when we suggested ideas we did so with an understanding of the client’s business and with a focus on business implications and conflicts.

In preparing for, building, testing, and deploying both rebuild efforts, our TPL worked with the client onsite. The TPL analyzed what the client currently had and worked out high-level design documents and a requirements document. Functioning in some cases as an architect and business analyst, the TPL prepared documentation from which test cases, architecture, and development code could be derived.

Years later, the public site and the routing app were still working properly and supporting major browsers with only minor issues. We recently updated the sites to work more effectively with Edge and mobile browsers.

Helping in many areas, but to what end?

Behind the impressive bottom-line revenue success of a top-notch service provider was a back-end environment that was outdated and expensive to maintain. When we looked under the hood, we realized that over a 10-20 year period the applications had been developed piecemeal and weren’t based on any standard or accepted software development methodology, architecture, or best practices.

We were supporting 80 apps and services, or about 95-97% of their applications. The client’s term for it was accurate: we were “babysitting” the entire app environment. The client had database problems and server problems, and we were maintaining legacy apps that had been developed in such a way that they could not function without constant attention.

Maintaining a system with so many interdependent parts meant that when an issue was reported, it was our responsibility to identify the implications one change would have on other modules and to present alternatives when warranted. This included exploring interdependencies, extensibility, and ROI. The client’s IT manager ensured stakeholders understood conflicting requirements among departments and helped to resolve them, prioritize projects, and remove other obstacles.

The bigger issue became how much time we were investing in maintenance vs. doing the job we knew should be done. Maintenance was time-consuming and relied on exploratory and troubleshooting skills to work out where an issue originated, e.g.:

  • Was it an infrastructure issue?
  • Network related?
  • Server related?
  • User related?
  • Time period related (end of month, end of year)?
  • App related?

We referred to ourselves less often as programmers and more often as wizards, especially in the area of QC:

  • Sometimes the developer wore the hat of a tester and tested from within Visual Studio. He would modify the app to make it testable and test the code himself (a minimal sketch of this kind of testability refactor follows this list).
  • In some cases we were trying to guess the business logic, guessing how an app should behave and building test cases with no requirements.
  • In other cases we would resolve bugs even if this meant refactoring a legacy app. Our practices made it easier to trace and fix issues than in the past, but still, we were not moving the client forward, just maintaining a very tenuous status quo.
  • Nevertheless, we achieved impressive test coverage: at the beginning of our engagement coverage was at 10%; we grew that number to 90%.
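
As an aside on what “making the app testable” looked like: the client’s stack was .NET, but the pattern translates to most languages, so the sketch below uses Java purely as an illustrative analogue, with hypothetical names. It shows the usual first step, pulling a hard-wired dependency out behind an interface so legacy logic can be exercised by an automated test instead of only by hand from inside the IDE.

    // Minimal sketch (hypothetical names): extract the database dependency
    // behind an interface so the routing logic can be unit tested offline.
    import java.util.List;

    interface RouteRepository {
        List<String> findStopsForRoute(int routeId);
    }

    class RouteScheduler {
        private final RouteRepository repository;

        // The dependency is injected, so a test can pass a fake implementation.
        RouteScheduler(RouteRepository repository) {
            this.repository = repository;
        }

        // The logic under test: a route is dispatchable only if it has stops.
        boolean isDispatchable(int routeId) {
            return !repository.findStopsForRoute(routeId).isEmpty();
        }
    }

    class RouteSchedulerTest {
        public static void main(String[] args) {
            // A fake repository stands in for the production database.
            RouteRepository fake = routeId ->
                    routeId == 7 ? List.of("Courthouse", "Law office") : List.of();
            RouteScheduler scheduler = new RouteScheduler(fake);

            if (!scheduler.isDispatchable(7)) throw new AssertionError("route 7 should dispatch");
            if (scheduler.isDispatchable(8)) throw new AssertionError("route 8 should not dispatch");
            System.out.println("RouteScheduler tests passed");
        }
    }

Repeated app by app, this kind of seam extraction is what makes a climb like the 10% to 90% coverage growth described above achievable on legacy code.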

There were recurring issues that we knew would come up again because we hadn’t solved them, just bandaged them. The team would, for instance, take a module it was quite sure was about to die and just make sure it worked on the next version of Windows or the next version of IE. We were not able to address the improper design of the database or the architecture itself.

In addition, keeping very old modules functioning sometimes required historical technical knowledge that was not always present on the current, younger team.

We worked behind the scenes to refactor where we could to keep things running. Part of our team culture is sometimes called The Boy Scout Rule: leave the campground cleaner than you found it. We continued to support, inform, and protect until the timing was right for a more comprehensive rebuild.

We knew that the rapid increase in data associated with the day-to-day operations of this growing business couldn’t be supported by the current environment. Additionally, an outside climate of cutting-edge technologies, systems, and infrastructure meant that the client’s apps would soon become obsolete. The client’s IT manager wanted to make a strong case for investment, and we could help.

Software on the verge of obsolescence: red flags

As we began enterprise-wide app maintenance, we saw that the client’s systems were dying. Specific markers included those below; they are common markers that signal a rebuild may be warranted.

  • The database had been accumulating data since the ’90s and was growing at massive scale. It referenced tables that held data with no remaining use, which caused severe performance issues. When we tried to clean up some of the data, the whole thing was in danger of falling apart. No solution for managing this outsized growth was in place.
  • The app and service modules were all interconnected as part of one very large platform, with a single database working inside all modules. Business knowledge was not documented.
  • Business rules were included in the database and applied using triggers.
  • The schema itself included:
    • Redundancies and inconsistencies.
    • No proper transaction management.
    • No transaction rollback support.
    • No proper index design to support query performance on the tables.
    • Many business rules that were hard coded rather than configuration-based, e.g., “If invoice is over $200, add $5 fee,” buried inside the database itself (a configuration-driven version of this rule is sketched after this list).
  • The environment had multiple .NET Framework versions in use across different apps. Some DLLs were shared across those apps; 36 versions of the same DLL were in use in different apps, and within the core business DLL many versions of the same libraries were referenced in different projects. This was the result of 10 years of continuous development without updating related parts or reusing the same libraries to keep the code base consistent.
  • Some apps had no source code; in these cases we reverse engineered the binaries to recover it. Specific prerequisites for these apps made them almost impossible to deploy in our QC environment, so we built enhancements to support the testing team.
  • Some apps were not testable; some only worked with certain accounts, from certain machines, with certain IE versions.
  • Performing app work in our environment required either modifying the configuration inside the code and maintaining a separate build for QC, or having the client rebuild the configuration so we could deploy to multiple environments.
  • There was no disaster recovery plan, no load balancing in the production environment, no staging environment to test changes before deploying to production, no proper user management, no proper security, no proper server map, and no proper design for virtualization.
  • There was a dependence on IE 6, Windows XP, and very specific hardware for lab management, CD burners, and scanners.
  • The production environment still depended on Windows Server 2003 and .NET 2.0. Some of the associated libraries were obsolete and no longer supported by their vendors, which meant finding workarounds whenever we updated technology. Similarly, there were licensing issues with the current software.
  • All configuration was hard coded and buried inside the code, including:
    • Connection strings
    • Email accounts
    • Passwords
    • License keys
  • There was no unified logging and no event logging for some apps.
  • Business users encountered the same pain and the same issues every time, relying on the same back doors. Sometimes they would use a bug as a feature to get a workaround. Workarounds were everywhere.
  • The app environment was not scalable or extensible.
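
To make the “hard coded business rules” marker concrete, take the “$200 invoice, $5 fee” example above. In the client’s environment that kind of rule lived inside the database and its triggers; in a rebuilt environment it would be driven by configuration. The client’s stack was .NET, so the snippet below is only an illustrative analogue in Java, with hypothetical names, property keys, and file paths, showing the same rule read from a properties file rather than buried in code or schema.

    // Minimal sketch (hypothetical names): the fee rule from the example above,
    // loaded from external configuration instead of hard coded in a trigger.
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.math.BigDecimal;
    import java.util.Properties;

    public class InvoiceFeePolicy {
        private final BigDecimal threshold;
        private final BigDecimal fee;

        InvoiceFeePolicy(BigDecimal threshold, BigDecimal fee) {
            this.threshold = threshold;
            this.fee = fee;
        }

        // Reads a file such as fees.properties containing:
        //   surcharge.threshold=200
        //   surcharge.fee=5
        static InvoiceFeePolicy fromProperties(String path) throws IOException {
            Properties props = new Properties();
            try (FileInputStream in = new FileInputStream(path)) {
                props.load(in);
            }
            return new InvoiceFeePolicy(
                    new BigDecimal(props.getProperty("surcharge.threshold", "200")),
                    new BigDecimal(props.getProperty("surcharge.fee", "5")));
        }

        // "If invoice is over $200, add $5 fee" -- now a configurable policy.
        BigDecimal totalWithFee(BigDecimal invoiceAmount) {
            return invoiceAmount.compareTo(threshold) > 0
                    ? invoiceAmount.add(fee)
                    : invoiceAmount;
        }

        public static void main(String[] args) {
            // Demonstrate with in-memory values; in practice, load fromProperties("fees.properties").
            InvoiceFeePolicy policy = new InvoiceFeePolicy(new BigDecimal("200"), new BigDecimal("5"));
            System.out.println(policy.totalWithFee(new BigDecimal("250"))); // 255
            System.out.println(policy.totalWithFee(new BigDecimal("150"))); // 150
        }
    }

Changing the threshold or the fee then becomes a configuration edit rather than a database change, which is one small example of the extensibility and maintainability the rebuild described below would aim for.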

The most important aspect of the markers above was that these issues didn’t just impact the client internally; they impacted the public website and relations between the client and their end users.

Although a rebuild was definitely called for in theory, the client’s budget wouldn’t support a full rebuild all at once, and in any case it wasn’t possible to take the whole environment down and rebuild because too much of the business relied on the existing systems. So a per-app approach was warranted.

Rebuild plan approval

We had earned the client’s trust through two app rebuilds plus several years of exploring, refactoring, and maintaining. But the timing and budget had to be right for a complete rebuild.

Our proposal to the client included a complete redesign of every aspect of the environment using proper business process modeling. Putting aside individual modules, the proposal looked at the problem from the perspective of “What do you need to do to get your job done?” All tools, technologies, platforms, and apps would be developed based upon the answers to this question, incorporating the differing needs of all stakeholders.

The proposal included three phases. The first would identify apps that were redundant or no longer in use, and consolidate and eliminate accordingly. The second phase would pull out functions and modules that could run independently to reduce impact and interdependencies. The third phase would involve rebuilding the remaining apps that were dependent upon each other in an environment that was extensible and maintainable.

With a strong and well-respected IT manager as our internal champion, the proposal was approved. In addition, the IT manager asked the parent company’s VP of program development and VP of technology to join a meeting where we reviewed the proposal and our work with the client to date. They were impressed at the extent to which we had worked on our own initiative and behind the scenes to keep our client’s systems functioning.

Technical courage comes in many flavors

There is an awesome rebuild ahead, but there is still a lot of work to do maintaining an antiquated app environment. For this type of project, if the satisfaction is not in the journey, it does not exist. So how does our team stay motivated?

Most importantly, our client-side technical contact, the IT manager, is well respected within the client company and among all stakeholders. He is able to prioritize tasks and business needs within a disjointed technical environment. In addition, the Integrant-side technical project lead (TPL) empowers his team to dig in, explore, refactor, take ownership, adopt modules, and protect them. So if, say, the client calls at 3:00 a.m. with a server issue, the TPL will ping his team and invite them to join the troubleshooting call if they are able. More often than not, most if not all team members will jump on the call to be part of the solution.

The enterprise rebuild work ahead will be rewarding, and just as importantly, we have enjoyed every challenge associated with the rebuild and maintenance work to date.

If you’re looking to rebuild a legacy app or keep one going until then, we’re here to help. Reach out to us at info@integrant.com or http://www.integrant.com/contact/.

If you want more on legacy rebuilds, check out Desktop App Enhancement Leads to Rebuild or Legacy App Rebuild for Medical Manufacturer.