Here's the solution I came up with for use with the domlog.nsf file.
Since stat requests are made far less often than hits are recorded, adding extra recording / parsing to the capture event didn't seem like a worthwhile server load / cost / complexity tradeoff. Instead, we're implementing an on-demand query of the domlog that will parse data from a specified date range, produce a basic set of results, and return it to a web browser in a simple graph form similar to AWStats.
A list of unique URLs will be used to look up the top n hits, so we can fetch titles (with more information available) as a separate request to our parsing agent, which will be written in Java. I'll be adding a single view with UNID and title columns to support this.
Sorry I don't have a sample to show, as I haven't developed it yet... it's just planned out for now.
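That said, here's a rough sketch of what the date-range filter and top-n tally might look like. All the names here are hypothetical, and an in-memory list of (date, URL) pairs stands in for the fields the agent would actually read off the domlog.nsf documents:

```java
import java.time.LocalDate;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch of the on-demand tally the agent would run. In the real agent,
// each Hit would be built from a domlog.nsf request document; here we
// use an in-memory list so the logic can be shown on its own.
public class TopHits {
    record Hit(LocalDate date, String url) {}

    // Count hits per URL within [from, to] and return the n busiest URLs,
    // most-hit first, as (url, count) entries.
    static List<Map.Entry<String, Long>> topN(List<Hit> hits,
                                              LocalDate from, LocalDate to, int n) {
        return hits.stream()
                .filter(h -> !h.date().isBefore(from) && !h.date().isAfter(to))
                .collect(Collectors.groupingBy(Hit::url, Collectors.counting()))
                .entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                .limit(n)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Hit> hits = List.of(
                new Hit(LocalDate.of(2024, 1, 1), "/home.nsf"),
                new Hit(LocalDate.of(2024, 1, 2), "/home.nsf"),
                new Hit(LocalDate.of(2024, 1, 2), "/blog.nsf"),
                new Hit(LocalDate.of(2024, 2, 9), "/home.nsf")); // outside range
        System.out.println(topN(hits,
                LocalDate.of(2024, 1, 1), LocalDate.of(2024, 1, 31), 2));
        // prints [/home.nsf=2, /blog.nsf=1]
    }
}
```

The resulting top-n URL list is what would then drive the second step: a lookup against the planned UNID/title view to attach page titles before graphing.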