Wednesday, July 21, 2010

JSON Conversions

So, I am integrating a provided web page into a Manage 2000 site, and I need to supply this external page with a JSON array of data on the querystring, built from the contents of a Manage 2000 TM Table.

How to get a JSON serialization out to the client world?

There is a very nice little namespace that I had not previously run across: System.Web.Script.Serialization. And in it you will find the JavaScriptSerializer class (read: JSON serializer!).

With the JavaScriptSerializer you can convert a .NET Hashtable or Dictionary to or from a JSON object, or a .NET System.Array to or from a JSON array, and it supports a bunch of other mappings, including your own custom converters.
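For example, a minimal sketch of both directions (the status codes and descriptions here are made up for illustration):

    Dim serializer As New System.Web.Script.Serialization.JavaScriptSerializer

    ' A generic Dictionary round-trips as a JSON object
    Dim status As New System.Collections.Generic.Dictionary(Of String, String)
    status.Add("10", "Open")
    status.Add("20", "Closed")
    Dim jsonObject As String = serializer.Serialize(status)
    ' jsonObject is {"10":"Open","20":"Closed"}

    ' An array round-trips as a JSON array
    Dim jsonArray As String = serializer.Serialize(New String() {"10", "20"})
    ' jsonArray is ["10","20"]

    ' And Deserialize goes the other direction
    Dim back As System.Collections.Generic.Dictionary(Of String, String) = serializer.Deserialize(Of System.Collections.Generic.Dictionary(Of String, String))(jsonObject)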

In my case I want to end up with a JSON array of elements, with each element being a code/description pair (itself a two-element array).

Private Function GetTMTableAsJSONArray(ByVal TableNbr As String) As System.Text.StringBuilder
    Dim result As New System.Text.StringBuilder
    Dim TM As New System.Collections.Generic.List(Of Array)

    ' Pull the TM table into a dataset (no need for New here; GetTable supplies the instance)
    Dim ds As ROISystems.Components.roiDataSet = TableMaster.GetTable(TableNbr)

    ' Collect each entry as a two-element code/description array
    For Each entry As DataRow In ds.Tables("VALIDATION_Validation_Info").Rows
        Dim row() As String = {entry.Item("Code").ToString(), entry.Item("Desc").ToString()}
        TM.Add(row)
    Next

    ' Serialize the list as a JSON array of two-element arrays
    Dim JSONSerializer As New System.Web.Script.Serialization.JavaScriptSerializer
    result.Append(JSONSerializer.Serialize(TM))
    Return result
End Function
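From there, getting the JSON out to the external page is just querystring plumbing. Something like this (the table number, page name, and parameter name here are hypothetical):

    Dim json As String = GetTMTableAsJSONArray("102").ToString()
    ' json looks like [["10","Open"],["20","Closed"]]
    Dim url As String = "ExternalPage.aspx?tm=" & System.Web.HttpUtility.UrlEncode(json)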

Yes, the JavaScriptSerializer is my new favorite toy for transforming data during client-side AJAX activity.

Friday, July 9, 2010

Hyper Activity

A recent treasure from our 1st live Manage 2000 7.3 site led me back to researching performance issues in the Pegged Detail page of ItemActivity. This has been a long-standing issue in the field for certain customers, on certain parts, under certain circumstances.

Previous optimizations have included converting dynamic arrays to dimensioned arrays when handling the TD_ITEM_PEG_ACT_RESULT file items. But even with attributes stored in separate dimensioned elements, the immense number of values that may be generated in the real world overwhelms the UniData box, and then moving the gigantic business object to the web server overwhelms that box too. The resulting user experience "just sucks".

The real detail work of analyzing pegged detail is done in SUB.BUILD.ITEM.ACT.PEGGED, and entails building a number of attributes for each detail and sorting by date and by a peculiar transaction type order. And there is no good way of separating the selection and sorting of keys from the generation of detail data, as described in the PAGED.BTO document, because the whole point is to keep a running total of availability plus a number of calculated and summarized values.

To meet these requirements and scale to many thousands of lines of detail, I created a process work file keyed by date and by sequence number and wrote simple flat records for each detail. Then SSELECT the work file and update the small, simple records with the running total fields. This replaces a LOCATE and INSERT loop that hockey sticks as the value count exceeds a thousand. And finally, write the results in pages to the WEB_COOKIE_DATA file, where each page record contains only, say, 25 (the user's current items-per-page preference) values for each attribute.
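The effect is easy to see in miniature. This is not the UniData code, just a .NET analogue of the pattern: sort the flat records once and make a single running-total pass, instead of LOCATEing an insert position for every value as the array grows.

    ' Each flat record: {sort key (date*sequence), quantity, running total}
    Dim details As New System.Collections.Generic.List(Of String())
    details.Add(New String() {"20100709*0002", "-3", ""})
    details.Add(New String() {"20100708*0001", "5", ""})

    ' One sort replaces a LOCATE and INSERT on every add
    details.Sort(Function(a, b) String.Compare(a(0), b(0)))

    ' A single ordered pass fills in the running availability total
    Dim available As Integer = 0
    For Each rec As String() In details
        available += Integer.Parse(rec(1))
        rec(2) = available.ToString()
    Next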

The business object returns only the 1st page of pegged detail and the cookie where the rest of the pages may be found. The web page may then use another business object to read any page of the result the user wishes to view.
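In outline, the paging side looks something like this. It is a toy in-memory stand-in with invented names, not the actual WEB_COOKIE_DATA implementation (the real store is a UniData file and the real page records are built by the business object):

    Module PagingSketch
        ' Pages of detail keyed by cookie and page number
        Private ReadOnly Store As New System.Collections.Generic.Dictionary(Of String, System.Collections.Generic.List(Of String()))

        ' Slice the full detail set into fixed-size pages; return the cookie that locates them
        Public Function WritePages(ByVal details As System.Collections.Generic.List(Of String()), ByVal pageSize As Integer) As String
            Dim cookie As String = System.Guid.NewGuid().ToString()
            Dim pageNbr As Integer = 1
            For i As Integer = 0 To details.Count - 1 Step pageSize
                Store(cookie & "*" & pageNbr.ToString()) = details.GetRange(i, Math.Min(pageSize, details.Count - i))
                pageNbr += 1
            Next
            Return cookie
        End Function

        ' Any later request touches only the one small page record
        Public Function GetPage(ByVal cookie As String, ByVal pageNbr As Integer) As System.Collections.Generic.List(Of String())
            Return Store(cookie & "*" & pageNbr.ToString())
        End Function
    End Module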

With the business object handling 3,000-4,000 details, the whole-view multi-valued set approach was taking 6+ seconds on a modern Itanium UNIX box. The work file approach reduced this to less than half a second. And testing shows the performance curve holding up to 20,000 details, running at about 7,000 details per second on our development box.

On the web side the performance improvement is even more dramatic, as dataset and grid processing over a large local result just overwhelms the web server processor. Eliminating the large business object result, and asking the web server to create objects representing only a single page of the pegged detail, results in near-instantaneous page changes even with the trip back to the application server to pick up the page data.

Conventional wisdom says memory is fast and disk is slow. But in this particular scenario it is much more efficient to create a work file, populate it, SSELECT it, and work with small flat dynamic arrays than to follow the standard path and attempt in-memory sorting of large, deep dynamic arrays.