Improving application speed and performance when market value depends upon it

Our real estate client had tried with two other vendors to get their web application off the ground, and both times application speed had been a deal breaker. It took far too long for reports to generate. The president of the company knew he had something the market needed, but in the state it was in, no one was going to buy it.

The financial reporting platform would help real estate property managers and homeowners associations (HOAs) plan their reserve funding over a 30-year period to cover the planned, unplanned, and incidental expenses associated with real estate maintenance. Juggling known and unknown expenses in the short and long term requires both careful long-range planning and the ability to adapt.

Without the platform, funding scenarios were modeled using cumbersome Excel workbooks and pivot tables. This application would put real-time knowledge and planning power directly into the hands of the property managers who needed it to make short- and long-term decisions. 

Previous development teams struggled to properly architect, design, and build an application for complex reporting. The software we were tasked to rebuild suffered from performance issues and was not designed to scale.  

Speed matters 

When the client met Integrant, one of the previous vendors had created an application that functioned, but it was nowhere near ready for launch. The users explained, only half-jokingly, that running a report on the application required “requesting it, then going to the break room for a coffee while it was being generated.”

To launch the platform, performance needed to improve drastically. The goal was to move report generation speed from watching paint dry to the blink of an eye. Below we unpack the modifications we made in architecture and approach to hit the necessary performance goals.

Function  

The application would take user data inputs and produce funding projection reports. Reports would include expenses like landscaping, lighting, plumbing, roofing, pest control, fencing, stucco, pools, and pavement. Myriad variables would be modeled, including interest rates, inflation, and monthly fees per unit. For example, a user could back into the stepped increase in monthly fees necessary to plan for a complete roof replacement of all units over a 10-year period, while maintaining a 100% fully funded reserve throughout a 30-year span.

Importantly, the user would need to be able to easily play with variations in the data, including time frame, types of expenses, and values.
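To make the kind of calculation involved concrete, here is a minimal sketch of a year-by-year reserve projection. The class, field names, and the simplified model (annual compounding, a flat stepped fee increase) are illustrative assumptions, not the platform's actual reporting engine.

```python
# A minimal sketch of a year-by-year reserve funding projection.
# All names and the simplified model are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Scenario:
    starting_reserve: float      # current reserve balance
    units: int                   # number of units paying monthly fees
    monthly_fee: float           # starting monthly fee per unit
    annual_fee_increase: float   # e.g. 0.03 for a 3% stepped increase
    interest_rate: float         # annual interest earned on the reserve
    inflation_rate: float        # annual inflation applied to expenses
    expenses: dict               # planned expenses keyed by year offset

def project(scenario: Scenario, years: int = 30) -> list:
    """Return the projected end-of-year reserve balance for each year."""
    balance = scenario.starting_reserve
    fee = scenario.monthly_fee
    balances = []
    for year in range(years):
        contributions = fee * 12 * scenario.units
        expense = scenario.expenses.get(year, 0.0) * (1 + scenario.inflation_rate) ** year
        balance = balance * (1 + scenario.interest_rate) + contributions - expense
        balances.append(balance)
        fee *= 1 + scenario.annual_fee_increase   # stepped fee increase
    return balances

# Example: a hypothetical full roof replacement planned in year 10
plan = Scenario(starting_reserve=250_000, units=120, monthly_fee=35.0,
                annual_fee_increase=0.03, interest_rate=0.02,
                inflation_rate=0.025, expenses={10: 600_000})
print([round(b) for b in project(plan)[:3]])
```

A user exploring scenarios would, in effect, be adjusting these inputs and re-running the projection until the reserve stays fully funded across the whole horizon.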

Performance as an afterthought 

The application had not previously been built to handle these myriad moving parts. It lived on a single server, which meant the reporting engine sat in the same place as the data input engine, along with every other business domain, including finance, help desk, and project management. Built this way, the system could not be divided, which hurt response time, server load, and scalability. Every time a report was generated, the system would pull all of the data from across the system and calculate everything in real time. This took forever!

Also, every report request was treated as a brand-new job. So if 100 users generated a report, the report would be generated 100 times.

Similarly, a change in a single data point would trigger recalculation of every report in the database, regardless of whether that data point actually affected them.

Applying business logic to optimize performance 

To allow customized report generation on the fly, we built in pre-calculation. We created two separate modules, one for reading and one for writing/reporting, each hosted on its own server. To support the reporting module we used Azure Service Bus and cloud technology. When a user submitted new data, it would be published to the service bus. The reporting module would run 24/7, pulling data from the service bus and generating every report the system needed, so that by the time a user navigated to a report it had already been calculated. The result: if a million users requested a report, it would be generated only one time. This made the application not only fast, but scalable.
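The publish/consume pattern looks roughly like the sketch below, written here against the azure-servicebus Python SDK for illustration. The queue name, message payload shape, and the generate_reports_for helper are assumptions made for this example, not the client's actual implementation.

```python
# Sketch of the publish side (write module) and the consume side
# (reporting worker). Queue name, payload, and generate_reports_for()
# are illustrative assumptions.
import json, os
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = os.environ["SERVICEBUS_CONNECTION_STRING"]
QUEUE = "report-data-changes"   # hypothetical queue name

def publish_data_change(property_id: str, changed_fields: dict) -> None:
    """Called by the write module whenever a user submits new data."""
    payload = json.dumps({"property_id": property_id, "changes": changed_fields})
    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_queue_sender(queue_name=QUEUE) as sender:
            sender.send_messages(ServiceBusMessage(payload))

def run_reporting_worker() -> None:
    """Long-running reporting module: pre-calculates reports as data arrives."""
    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_queue_receiver(queue_name=QUEUE) as receiver:
            for msg in receiver:                      # blocks; runs 24/7
                change = json.loads(str(msg))
                generate_reports_for(change["property_id"], change["changes"])
                receiver.complete_message(msg)        # remove from the queue

def generate_reports_for(property_id: str, changes: dict) -> None:
    # Placeholder: recalculate and cache the affected reports so that
    # user requests only ever read pre-computed results.
    ...
```

Because report requests read cached results while the worker keeps them fresh in the background, the read path stays fast no matter how many users hit it at once.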

We also added modules for domains beyond reading and writing, including users, financial, property, and help center. This allowed us to scale out further without impacting the rest of the system.

On top of this, we added logic that changed how new data was handled for report preparation. When a data point changed, the system would look at its impact: is the change related to a specific property or a specific report? If so, only the affected reports would be recalculated, and those reports would remain fully customizable for the user. If a new piece of data affected just one report, that would be the only report recalculated. This saved both time and I/O and disk resources, which became even more important because, as we dug into user goals with the client, we expanded the report types from the previous 5 to 12. A sketch of this selective recalculation follows below. (More information here about what your vendor needs to know for complex reporting.)
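Conceptually, selective recalculation only needs a mapping from data inputs to the reports that consume them. The dependency map and report names below are illustrative assumptions, not the platform's real report catalog.

```python
# Sketch of selective recalculation: only reports that depend on the
# changed data point are rebuilt. The mapping and names are assumptions.

# Which reports consume which kinds of data (hypothetical mapping).
REPORT_DEPENDENCIES = {
    "cash_flow":        {"monthly_fee", "interest_rate", "expense"},
    "funding_plan":     {"monthly_fee", "inflation_rate", "expense"},
    "component_detail": {"expense"},
}

def impacted_reports(changed_fields: set) -> set:
    """Return only the reports whose inputs overlap the changed fields."""
    return {name for name, deps in REPORT_DEPENDENCIES.items()
            if deps & changed_fields}

def recalculate(property_id: str, changed_fields: set) -> None:
    for report in impacted_reports(changed_fields):
        # Rebuild and cache just this report for just this property,
        # instead of recalculating every report in the database.
        build_and_cache_report(property_id, report)

def build_and_cache_report(property_id: str, report: str) -> None:
    ...  # placeholder for the actual calculation and storage

# Example: changing an interest rate touches only the cash-flow report.
print(impacted_reports({"interest_rate"}))   # {'cash_flow'}
```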

Also, as a side note, using domain-driven design (DDD) for the development supported the separation of domains from a business perspective. This helped streamline database calls and made the system more scalable and flexible from the ground up.
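As a rough illustration of what that separation can look like in code, each bounded context gets its own module and talks to storage through its own interface. The package layout and the ReportRepository interface here are assumptions about structure, not the actual codebase.

```python
# Illustrative bounded contexts (hypothetical package layout):
#   reporting/   -> report aggregates, pre-calculation logic
#   financial/   -> fees, interest, and inflation settings
#   property/    -> properties, components, expense schedules
#   users/       -> accounts and permissions
#   helpcenter/  -> support tickets
from abc import ABC, abstractmethod

class ReportRepository(ABC):
    """The reporting domain reaches storage only through its own
    interface, so its database calls stay independent of other domains."""

    @abstractmethod
    def save(self, property_id: str, report_name: str, payload: dict) -> None: ...

    @abstractmethod
    def load(self, property_id: str, report_name: str) -> dict: ...
```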

What speed requires 

Unlike in the B2C or e-commerce space, where performance is easily linked to page views, conversions, and revenue, the business impact of B2B application performance issues is hard to quantify because the connection between platform performance and revenue is usually not direct [source]. But we know the stakes for web application performance are high. In a recent survey, 80% of respondents indicated that slow business-critical applications negatively affect business performance [source]. More specifically, poor application performance results in:

  • Loss of Employee Productivity 
  • Diminished Quality of Service 
  • Hindered Collaboration 
  • Lack of Innovation [source]

In this case, improving platform performance required a complete redesign of the application architecture. The result was a decrease in report generation time that went above and beyond the client’s expectations. For the client, this meant the application was released and this die-hard entrepreneur’s vision finally became a reality!

 

 

