Previous optimizations included converting dynamic arrays to dimensioned arrays when handling the TD_ITEM_PEG_ACT_RESULT file items. But even with attributes stored in separate dimensioned elements, the immense number of values that real-world data can generate overwhelms the UniData box, and then moving the gigantic business object to the web server overwhelms that box as well. The resulting user experience "just sucks".
The real detail work of analyzing pegged detail is done in SUB.BUILD.ITEM.ACT.PEGGED, and it entails building a number of attributes for each detail and sorting by date and by a peculiar transaction-type order. There is no good way to separate the selection and sorting of keys from the generation of detail data, as described in the PAGED.BTO document, because the whole point is to keep a running total of availability along with a number of calculated and summarized values.
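To make the cost concrete, the traditional approach splices every detail into one big sorted multivalued set and then makes another pass for the running totals; this is the LOCATE and INSERT loop the next paragraph replaces. A rough, illustrative sketch only: the attribute layout, variable names, and Pick-flavor LOCATE syntax are assumptions, not the production code.

      * Illustrative only: the sorted multivalued build this post moves
      * away from.  DTL.DATE, DTL.QTY, DTL.TYPE and DTL.TYPE.ORD are
      * assumed dimensioned arrays holding the raw detail fields.
      PEG.DATES = '' ; PEG.QTYS = '' ; PEG.TYPES = '' ; SORT.KEYS = ''
      FOR I = 1 TO DETAIL.CNT
         KEY = FMT(DTL.DATE(I),'R%7'):'*':FMT(DTL.TYPE.ORD(I),'R%2')
         LOCATE KEY IN SORT.KEYS<1> BY 'AL' SETTING POS ELSE NULL
         INS KEY BEFORE SORT.KEYS<1,POS>
         INS DTL.DATE(I) BEFORE PEG.DATES<1,POS>
         INS DTL.QTY(I) BEFORE PEG.QTYS<1,POS>
         INS DTL.TYPE(I) BEFORE PEG.TYPES<1,POS>
      NEXT I
      * Every LOCATE rescans the growing value list, so total cost grows
      * roughly with the square of the detail count.  The running
      * availability total still needs a full second pass:
      RUN.TOT = 0 ; RUN.TOTS = ''
      FOR I = 1 TO DETAIL.CNT
         RUN.TOT = RUN.TOT + PEG.QTYS<1,I>
         RUN.TOTS<1,I> = RUN.TOT
      NEXT I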
To meet these requirements and scale to many thousands of lines of detail, I created a process work file keyed by date and sequence number and wrote a simple flat record for each detail. Then SSELECT the work file and update the small, simple records with the running-total fields; this replaces a LOCATE and INSERT loop whose cost hockey-sticks once the value count exceeds a thousand. Finally, write the results in pages to the WEB_COOKIE_DATA file, where each page record contains only, say, 25 values for each attribute (the user's current items-per-page preference).
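A minimal sketch of that work-file pipeline, under stated assumptions: the work file is called PEG.WORK here, and the subroutine arguments, key format, and attribute layout are invented for illustration; the real code will differ in naming and error handling.

      SUBROUTINE BUILD.PEG.PAGES(DETAIL.DATES, DETAIL.QTYS, DETAIL.TYPES, PAGE.SIZE, COOKIE, FIRST.PAGE)
      * Hypothetical sketch of the work-file approach.  PEG.WORK, the
      * argument names and the attribute layout are illustrative.
      OPEN 'PEG.WORK' TO F.WORK ELSE RETURN
      OPEN 'WEB_COOKIE_DATA' TO F.COOKIE ELSE RETURN
      CLEARFILE F.WORK
      *
      * Pass 1: one small flat record per detail.  Keys are zero padded
      * so a plain SSELECT returns them in date/sequence order; the
      * sequence portion can encode the transaction-type ordering.
      DETAIL.CNT = DCOUNT(DETAIL.DATES, @VM)
      FOR I = 1 TO DETAIL.CNT
         ID = FMT(DETAIL.DATES<1,I>,'R%7'):'*':FMT(I,'R%6')
         REC = ''
         REC<1> = DETAIL.DATES<1,I>
         REC<2> = DETAIL.QTYS<1,I>
         REC<3> = DETAIL.TYPES<1,I>
         WRITE REC ON F.WORK, ID
      NEXT I
      *
      * Pass 2: let the database do the sort, stamp each flat record
      * with the running availability total, and page the results out
      * to WEB_COOKIE_DATA in PAGE.SIZE chunks keyed cookie*page-number.
      EXECUTE 'SSELECT PEG.WORK'
      RUN.TOT = 0
      PAGE.NO = 1 ; PAGE.REC = '' ; LINE = 0 ; FIRST.PAGE = ''
      DONE = 0
      LOOP
         READNEXT ID ELSE DONE = 1
      UNTIL DONE DO
         READ REC FROM F.WORK, ID THEN
            RUN.TOT = RUN.TOT + REC<2>
            REC<4> = RUN.TOT
            WRITE REC ON F.WORK, ID        ;* keep the updated flat record
            LINE = LINE + 1
            PAGE.REC<1,LINE> = REC<1>      ;* date
            PAGE.REC<2,LINE> = REC<2>      ;* quantity
            PAGE.REC<3,LINE> = REC<3>      ;* transaction type
            PAGE.REC<4,LINE> = REC<4>      ;* running total
            IF LINE = PAGE.SIZE THEN
               WRITE PAGE.REC ON F.COOKIE, COOKIE:'*':PAGE.NO
               IF PAGE.NO = 1 THEN FIRST.PAGE = PAGE.REC
               PAGE.NO = PAGE.NO + 1 ; PAGE.REC = '' ; LINE = 0
            END
         END
      REPEAT
      IF LINE THEN
         WRITE PAGE.REC ON F.COOKIE, COOKIE:'*':PAGE.NO
         IF PAGE.NO = 1 THEN FIRST.PAGE = PAGE.REC
      END
      RETURN
      END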
The business object returns only the first page of pegged detail plus the cookie under which the rest of the pages may be found. The web page may then use another business object to read whichever page of the result the user wishes to view.
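That pager business object can be little more than a keyed read against WEB_COOKIE_DATA. A minimal sketch, again with hypothetical names:

      SUBROUTINE GET.PEG.PAGE(COOKIE, PAGE.NO, PAGE.REC)
      * Hypothetical pager: given the cookie returned with the first
      * page, fetch any other page of pegged detail for display.
      PAGE.REC = ''
      OPEN 'WEB_COOKIE_DATA' TO F.COOKIE ELSE RETURN
      READ PAGE.REC FROM F.COOKIE, COOKIE:'*':PAGE.NO ELSE PAGE.REC = ''
      RETURN
      END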
Where the business object is handling 3,000-4,000 details, the whole-view multivalued-set approach was taking 6+ seconds on a modern Itanium UNIX box. The work-file approach reduced this to less than half a second, and testing shows throughput holding at about 7,000 details per second up to 20,000 details on our development box.
On the web side the performance improvement is even more dramatic, because dataset and grid processing over a large local dataset simply overwhelms the web server processor. Eliminating the large business-object result, and asking the web server to create objects representing only a single page of the pegged detail, results in near-instantaneous page changes even with the round trip back to the application server to pick up the page data.
Conventional wisdom says memory is fast, disk is slow. But in this particular scenario it is much more efficient to create a work file, populate it, SSELECT it, and work with small flat dynamic arrays than to follow the standard path and attempt in-memory sorting of large, deep dynamic arrays.