Delays in the region of ten seconds to open a page make an application almost useless; most users just won't wait that long for something to happen. Needless to say, I set about trying to work out why it was happening.
From the off, the problem appeared to be "their side". Performance on the server I'd been developing on was as good as you'd expect from an Ext-based application. Not having access to their servers meant I couldn't debug it directly, so, resisting the temptation to simply say "Works fine here!" (hey, I'm a pro, I'd never say that to a paying customer), I got the customer to log the browser-server transactions so I could take a closer look.
To record the HTTP transaction log I had them install the free Basic Edition of HttpWatch. All they had to do then was start recording, open the application, open a new form, open a new document, edit the document, save the document and so on, and then stop recording. The result is a .HWL file, which I can then open in my version of HttpWatch's "Studio" and see all the headers, response codes and the amount of time taken for every transaction involved.
On opening the HWL file it was almost immediately obvious there was a problem. Here's a view of the transactions involved:
Notice the Result column, which shows the HTTP status code returned. A code of 200 means everything was "ok". That's fine in itself, but not what we'd expect to see for every transaction, especially on subsequent requests for the same resource. There we'd expect a code of 304, which tells the browser the resource hasn't changed and it can use its local cached copy instead. Because this isn't happening the cache is never used, and so the application runs unacceptably slowly.
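The 200-versus-304 exchange works via "conditional" requests: the browser sends back the validators it stored (ETag and/or Last-Modified) and the server replies 304 if nothing changed. Here's a minimal sketch of that logic in Python; the function names and dictionary shape are my own illustration, not anything from HttpWatch or Domino:

```python
def build_conditional_headers(cached):
    """Given a previously cached response's validators, build the
    headers a browser would send when revalidating that resource."""
    headers = {}
    if cached.get("etag"):
        headers["If-None-Match"] = cached["etag"]
    if cached.get("last_modified"):
        headers["If-Modified-Since"] = cached["last_modified"]
    return headers

def handle_response(status, cached, fresh_body=None):
    """A 304 means 'unchanged': reuse the cached body with no
    re-download. A 200 means the server sent the whole thing again."""
    if status == 304:
        return cached["body"]   # cheap: body served from local cache
    return fresh_body           # expensive: full transfer repeated
```

If the server never answers 304, every request falls into the expensive branch, which is exactly what the log was showing.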
Why is it doing this though?
Drilling down into each individual transaction, I noticed something common to them all: for each GET request the server was returning a Cache-Control header of no-cache.
While we'd expect this for certain URLs, such as those ending in ?OpenAgent or ?OpenForm, we certainly wouldn't expect it of requests for files ending in .css, .js or .gif. The image above shows a request for a JS file which is 500KB in size. You definitely want something like that cached browser-side!
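You can automate this kind of check rather than eyeballing each transaction. The helper below is a hypothetical sketch (my own function names and data shape, fed from whatever transaction list your tooling exports): it flags any static resource that came back marked no-cache, while letting dynamic Domino URLs like ?OpenAgent and ?OpenForm pass:

```python
# File extensions we'd always want the browser to cache.
STATIC_EXTENSIONS = (".css", ".js", ".gif")

def should_be_cached(url: str) -> bool:
    """Static files should be cacheable; dynamic URLs (query-string
    commands like ?OpenAgent / ?OpenForm) are expected not to be."""
    path = url.split("?")[0].lower()
    return path.endswith(STATIC_EXTENSIONS)

def flag_misconfigured(transactions):
    """transactions: list of (url, cache_control_header) tuples.
    Returns the URLs of static resources served with no-cache."""
    return [url for url, cc in transactions
            if should_be_cached(url) and "no-cache" in (cc or "")]
```

Run against the customer's log, every .css, .js and .gif request would have been flagged.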
What could be causing this?
As far as I know the only way for this to happen would be if a developer or administrator had previously added a Web Site Rule to the server's directory which looked something like this:
The effect of applying this rule, which I verified by testing on my own server (following a restart), is that the server tells the browser not to cache anything from it.
While I imagine this might have solved whatever caching problem the developer was having at the time, it has, in turn, created other problems, as they're finding now.
It should go without saying, but I'll say it anyway: never, ever use a system-wide configuration change like this as a quick fix to a problem you're having. If you do, then you at least need to be mindful of any adverse effects on other applications.
My advice to the customer was that the only real solution was to remove the rule. Although, as I warned them, this could then "break" any systems that were developed with it in place.
An alternative would be to add subsequent rules which tell the browser to use caching if the URL ends in *.js and so on, and hope that they trump the rule already in place.
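Either way, the end goal for static files is a header like Cache-Control: max-age=86400 instead of no-cache. As a rough sketch of why that matters, here's a much-simplified version of the freshness check a browser performs (the real rules, per the HTTP caching spec, have more directives than this):

```python
def is_fresh(cache_control: str, stored_at: float, now: float) -> bool:
    """Very abridged browser freshness check.
    no-cache / no-store force a round trip to the server every time;
    max-age lets the cached copy be reused with no request at all."""
    directives = [d.strip() for d in (cache_control or "").split(",")]
    if "no-cache" in directives or "no-store" in directives:
        return False
    for d in directives:
        if d.startswith("max-age="):
            return (now - stored_at) < int(d.split("=", 1)[1])
    return False  # no explicit lifetime: play safe and revalidate
```

With no-cache the browser hits the server on every page open; with a sensible max-age, that 500KB JS file is fetched once and then served locally for the lifetime you chose.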