Wednesday, 22 November 2017

Chatty Applications and Simple SQL

One type of "poor performance" scenario I have come across a few times is due to what I call "chatty applications". These are applications that execute a disproportionately high number of what look like very simple SQL queries for every business transaction they do. Often this is a deliberate design choice by the application architects and developers, on the assumption that simple SQL statements on single tables using indexed columns always lead to efficient execution plans, and so must be better than anything else. Unfortunately it is the sheer volume of executions of such "simple SQL queries" that causes the poor performance these applications often experience, both in terms of poor response times for individual users and poor total throughput across all users. Why then are such "chatty applications" the cause of poor performance and not the solution?

First, there are network round trips needed for each SQL statement executed, and each of these typically costs on the order of a millisecond of elapsed time. The SQL statement text is sent from the client to the database server, and the resultant data is then sent back from the server to the client. When there are hundreds of SQL statements being executed per business transaction, these round trips all add up. Executing a "business transaction" in such a "chatty application" can end up spending more time transferring data back and forth over the network than actually doing work with that data on either the client or the database server.
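To put rough numbers on it: 200 statements at around a millisecond of network round trip each is 0.2 seconds per business transaction spent purely on the network, before any real work is done. The sketch below is my own illustration of the difference in execution counts, using Python's built-in sqlite3 module purely so it runs self-contained; sqlite3 is in-process, so the gap it shows is only statement execution overhead, and against a real client-server database each execute() would also pay that network round trip.

# Illustrative only: sqlite3 is in-process, so no real network is involved here.
# On a client-server database each execute() below would also add a round trip.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("create table customers (id integer primary key, name text)")
conn.executemany("insert into customers values (?, ?)",
                 [(i, f"name {i}") for i in range(1, 201)])

wanted = list(range(1, 201))

# Chatty: one indexed single-row query per customer, 200 separate executions.
t0 = time.perf_counter()
rows_chatty = [conn.execute("select id, name from customers where id = ?",
                            (i,)).fetchone() for i in wanted]
t_chatty = time.perf_counter() - t0

# Set-based: the same 200 rows returned by a single execution.
t0 = time.perf_counter()
placeholders = ",".join("?" * len(wanted))
rows_set = conn.execute(
    f"select id, name from customers where id in ({placeholders}) order by id",
    wanted).fetchall()
t_set = time.perf_counter() - t0

print(f"200 executions: {t_chatty:.4f}s, 1 execution: {t_set:.4f}s")
print("same rows returned:", rows_chatty == rows_set)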

Second, the high SQL execution volume results in high library cache usage and contention for access to it inside Oracle. For each SQL statement executed, Oracle will check whether that SQL is already in the library cache (in the shared pool in the SGA) and, if so, will go ahead and use the associated execution plan. Such library cache lookups involve various temporary internal locks on the library cache, to make sure it doesn't change while your session is examining it. The more SQL statements executed, the more library cache lookups there are, and the more contention for the locks that control access to it.
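To observe this on a real system, a hedged monitoring sketch like the one below can be used; it assumes the python-oracledb driver, placeholder connection details, and an account with SELECT privilege on the V$LIBRARYCACHE view.

# A monitoring sketch, not from the original post: connection details are
# placeholders and must be replaced with your own.
import oracledb

conn = oracledb.connect(user="perf_user", password="change_me",
                        dsn="dbhost/ORCLPDB1")
cur = conn.cursor()

# Every SQL execution performs library cache gets and pins; very high volumes
# of simple statements show up here as high GETS/PINS for the SQL AREA
# namespace, and the associated contention appears as library cache latch or
# mutex waits in the wait interface views.
cur.execute("""
    select namespace, gets, gethitratio, pins, pinhitratio, reloads
    from   v$librarycache
    where  namespace = 'SQL AREA'
""")
for namespace, gets, gethitratio, pins, pinhitratio, reloads in cur:
    print(namespace, gets, round(gethitratio, 3), pins,
          round(pinhitratio, 3), reloads)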

Third, the volume of data transferred to the client and then processed on it can be much higher than if a single, more complex SQL statement were used instead. This is also about network delays, but this time due to the amount of intermediate result data being transferred to the client as it works its way through the data from the different tables involved in its "business transaction".
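The sketch below, again my own illustration using sqlite3 so that it runs self-contained, shows the difference in data volume: dragging every intermediate row back to the client to total it there, versus letting the database aggregate and send back only the final few rows.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table order_lines (order_id integer, amount integer)")
conn.executemany("insert into order_lines values (?, ?)",
                 [(i % 100, i) for i in range(10_000)])

# Data-heavy: transfer all 10,000 intermediate rows to the client and total
# them there.
rows = conn.execute("select order_id, amount from order_lines").fetchall()
totals_client = {}
for order_id, amount in rows:
    totals_client[order_id] = totals_client.get(order_id, 0) + amount

# Set-based: the server aggregates and sends back only 100 result rows.
totals_server = dict(conn.execute(
    "select order_id, sum(amount) from order_lines group by order_id"))

assert totals_client == totals_server
print(len(rows), "rows transferred versus", len(totals_server))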

Ultimately this is all a variation on the "row by row" processing mindset, where all the raw data is pulled back into the client, which then does further processing on it, such as joining together data from multiple tables. A "row set" based approach can be more efficient: executing fewer but more complex SQL statements on the database server, so that only the final resultant data is sent back to the client.
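Here is a minimal sketch of the two mindsets, also an illustration of my own in sqlite3: the first version issues one extra query per order row and joins the data in client code, while the second does the join in a single SQL statement so only the final result set comes back.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table customers (id integer primary key, name text);
    create table orders (id integer primary key, cust_id integer, amount integer);
    insert into customers values (1, 'Ann'), (2, 'Bob');
    insert into orders values (10, 1, 250), (11, 2, 90), (12, 1, 60);
""")

# Row by row: one query per order row, joining to customers in the client.
result_rbr = []
for order_id, cust_id, amount in conn.execute(
        "select id, cust_id, amount from orders"):
    name = conn.execute("select name from customers where id = ?",
                        (cust_id,)).fetchone()[0]
    result_rbr.append((order_id, name, amount))

# Row set: one join on the server, only the final result comes back.
result_set = conn.execute("""
    select o.id, c.name, o.amount
    from   orders o join customers c on c.id = o.cust_id
    order  by o.id
""").fetchall()

assert sorted(result_rbr) == result_set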

I have seen this kind of scenario in action at a number of customers, where it was the application's own design that was slowing down and limiting its performance. In some cases we were able to identify the most heavily used parts of the application and successfully modify them to use fewer, more complex SQL queries instead, so that more work was done per execution by far fewer SQL statements. In other cases it was not as straightforward, because the application developers were using a "glue layer" of third-party software to handle the application-to-database mappings and construct the SQL queries executed in each case. Fixing this requires reconfiguring the "glue layer" to issue more complex SQL queries involving multi-table joins. However, in at least one case the customer did not at that time have enough expertise in the "glue layer" to achieve that. They had just used it in an "out of the box" fashion, and were now suffering the consequences.

Probably the worst case I saw was an application using another "glue layer" that only ever issued SQL queries for a single column of data at a time, of the form "select one-column from table-name where pkid_col = value", and consequently issued one such query for each individual column it needed. Obviously this led to horrible performance and scalability. Luckily the application had been written to use a well-defined internal API for data access, which in turn wrapped the "glue layer". We were able to insert an extra layer between the application's API and the "glue software", intercepting the data access calls before the "glue software" was invoked. We implemented a caching layer there, keyed on the primary key of each table's records, which was supplied on every query. For a query with a new primary key value, the API now did a "select *" of the whole data record into a local record buffer, and then returned the one individual column requested. If the requested primary key matched the one already in the current record buffer, the code returned the individual column value directly from that buffer, avoiding any further SQL execution and not executing any code in the "glue layer" at all.
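The sketch below is a much simplified, hypothetical reconstruction of that caching layer in Python; the real system used its own internal data access API and "glue layer", and the table name and get_column() entry point here are purely illustrative.

import sqlite3

class RecordCache:
    """Cache the last whole record fetched per table, keyed by primary key."""

    def __init__(self, conn):
        self.conn = conn
        self.current = {}   # table name -> (pk value, {column: value})

    def get_column(self, table, pk_col, pk_value, column):
        cached = self.current.get(table)
        if cached is None or cached[0] != pk_value:
            # New primary key: fetch the whole record once with "select *".
            # Table and column names come from the internal API, not end users.
            cur = self.conn.execute(
                f"select * from {table} where {pk_col} = ?", (pk_value,))
            row = cur.fetchone()
            record = {desc[0]: val for desc, val in zip(cur.description, row)}
            self.current[table] = (pk_value, record)
            cached = self.current[table]
        # Answer this and subsequent single-column requests from memory.
        return cached[1][column]

conn = sqlite3.connect(":memory:")
conn.execute("create table emp (id integer primary key, name text, dept text)")
conn.execute("insert into emp values (7, 'Kim', 'Support')")

cache = RecordCache(conn)
print(cache.get_column("emp", "id", 7, "name"))  # one "select *" happens here
print(cache.get_column("emp", "id", 7, "dept"))  # served from the record buffer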

As you might expect, performance increased significantly as a result of this change: replacing the multiple single-column SQL queries with just one whole-record SQL query followed by in-memory, local accesses avoided all those network round trips. The net load on the Oracle database server also dropped significantly, with less contention on the library cache, and CPU and disk utilisation fell as well. Consequently the performance of all the other SQL statements being executed on that database server improved too, because those resources were freed up. A definite "win-win" all round, achieved by accessing the data in the database in a much more efficient way.
