Jeff, and anyone else with an interest:
There is definitely still an issue with SUnitBrowser slowing down the running of tests due to its updating of the display.

Sorry about the length of this post, but I felt I needed to present some of the details of what I found in my investigation. Something more resembling conclusions can be found at the bottom.

(For reference, this was done on a Win2K 667MHz machine.)

Starting with a single suite that has 33 TestCases in it:

    [DateAndTimeTest suite run] millisecondsToRun  ---> 24

So it takes about 0.7ms each to run them. Actually, running a test takes about 0.2ms. The rest of the time is building the suite, half of which (at least in larger groups of suites) is in sorting the selectors.

Running the same in SUnitBrowser (which displays it as 33 separate lines with 1 TestCase each):

    [SUnitBrowser allInstances first runAll] millisecondsToRun  ---> 307

or more than 12 times as long! And it only gets worse with more tests or larger suites. :-(

I temporarily replicated 10 copies of that suite, with 33 tests in each. Running without the UI took 232ms. In SUnitBrowser it took 9.4 seconds, or 40 times longer!!

For the rest of my investigation I used my current full suite of tests. They are quite a mixed bag, with many of them being more like integration or acceptance tests than simple unit tests. To focus just on the UI slowdown on otherwise fast tests, I made them all fast <g> by commenting out TestCase>>runCase.

With that I was able to check out 1017 tests in 155 suites. That averages 6.5 tests per suite, with the largest having 57. "Running" the tests without the UI took 2.3 seconds, of which 97% was in #buildSuite, with 51% in sorting the selectors.

Running the tests in SUnitBrowser took 54 seconds. Even assuming, say, 1ms per test to actually run a typical unit test once one has the suite built, it's something like a factor of 50 slowdown to update the UI.
Doing a profile of it using Ian's profiler points out pretty clearly what's going on:

    Profiler profile: [SUnitBrowser allInstances first runAll]

One is struck that 56% of the time is spent doing Collection>>inject:into: for one reason or another. Of that, 50% of the total time is in ListViewColumn>>textFromRow:, and a whopping 47% is in SUnitBrowserResult>>numberCorrectFor:.

As quick confirmation of this, I made #numberCorrectFor: simply return a constant, and the time to run dropped from 54s to 28.6s.

So the biggest problem is, as others have pointed out, that of repeatedly scanning collections to determine the current status of a test suite. It's at least O(n-squared): the number of tests in the current suite times the total number of tests that have been run so far. You can see a definite slowdown on large suites, and also over time on equally sized suites. Seems that the model change you spoke of is definitely in order -- where the model incrementally updates the status of a suite. That should knock half the time off.

The second hotspot was the 19% spent in ListModel>>findFirst:. Given a testCase, it's looking for which index in the list should be updated. Given that we're running the tests for each line of the list sequentially, there should be a way to completely dispense with this time.

But even with all those time wasters eliminated, it still seems like a high overhead for the UI update when running a bunch of quick tests. The real question is how much of a slowdown is it worth to get the "eye candy" of continually updated status while all of the tests are run. I'm starting to think not very much. It seems the real value of the detail list comes in selecting which tests to run, and in reviewing the status after running some or all of them.

Ah, just tried an experiment - if "display all results" is off (and the bug fixed), then it runs in 4.3 seconds rather than 54. Now we're talking. That's just updating the status bar at the bottom.
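[Editor's note: the incremental-update idea above might look roughly like the following sketch. All selectors other than #numberCorrectFor: are invented for illustration, and this is not the actual SUnitBrowser model code -- just a sketch of the technique under those assumptions.]

```smalltalk
"Sketch only -- invented selectors, not the real SUnitBrowser model.
Each result object keeps running tallies, bumped once per finished
test (O(1)), so painting a row never rescans the result collections."

SUnitBrowserResult >> initializeTallies
    passed := failed := errors := 0

SUnitBrowserResult >> noteCompleted: aTestCase in: aTestResult
    "Called from the run loop after each test case finishes.
    TestResult>>failures and TestResult>>errors are standard SUnit."
    (aTestResult failures includes: aTestCase) ifTrue: [^failed := failed + 1].
    (aTestResult errors includes: aTestCase) ifTrue: [^errors := errors + 1].
    passed := passed + 1

SUnitBrowserResult >> numberCorrectFor: aSuite
    "Answer the cached tally rather than doing an inject:into: scan
    over everything that has run so far."
    ^passed
```

The point of the sketch is only that the per-cell cost drops from O(total tests run) to O(1), which is what turns the quadratic behaviour described above back into linear.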
Even there, there's not really a need to update that status 250 times a second.

As far as displaying the details while running, perhaps the ideal solution would be to update at most a few times per second, plus when moving on to a new suite. Seems like we should be able to get it down to a reasonable overhead for fast-running tests, while still providing progress feedback even on slower-running tests.

This last is something I'll try to explore when I get some more time, as it's actually independent of changing the model, since the overhead can be reduced by doing fewer updates as well as by speeding up each one.

Regards,
-Bill

-------------------------------------------
Bill Dargel            [hidden email]
Shoshana Technologies
100 West Joy Road, Ann Arbor, MI 48105 USA
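[Editor's note: the "at most a few times per second" throttle described above could be sketched as follows. The selector and instance variables are invented for illustration; only Time>>millisecondClockValue is a real system call.]

```smalltalk
"Sketch only -- invented names. Repaint at most every 250ms,
but always repaint when the run moves on to a new suite."
SUnitBrowserModel >> noteProgressFor: aSuite
    | now |
    now := Time millisecondClockValue.
    (aSuite ~~ lastDisplayedSuite or: [now - lastUpdateTime >= 250])
        ifTrue:
            [lastDisplayedSuite := aSuite.
             lastUpdateTime := now.
             self refreshStatusDisplay]
```

With something like this, a run of thousands of fast tests costs only a handful of repaints per second, while a slow test still gets a repaint as soon as its suite changes.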
Thanks for your detailed analysis. I had not done it on Dolphin, but I was aware of the model inefficiencies from profiling it in VA. I also appreciate the feedback on the list update.

If I was a vendor, I'd rename that "Display All Results" menu item to "Run High Speed Tests" ;>.

I've been lazy about worrying about the speed because of my work style - I'm either working on a particular class, in which I'm running only a handful of tests, or I'm wrapping things up, in which case I'm rerunning all the tests, and I turn the Display All Results off. So I'm living with it.

I'm planning on creating a new version of the model that does a faster job of reporting the status. I wanted to wait until SUnit 3.1 was squared away. That is a model-level thing that will translate to all ports.

Thanks again - You probably got me moving on it - jlo

"Bill Dargel" <[hidden email]> wrote in message news:[hidden email]...
<snip>
Jeffrey Odell wrote:
> If I was a vendor, I'd rename that "Display All Results" menu item to
> "Run High Speed Tests" ;>.

:-)

> I've been lazy about worrying about the speed because of my work style -
> I'm either working on a particular class, in which I'm running only a
> handful of tests, or I'm wrapping things up, in which case I'm rerunning
> all the tests, and I turn the Display All Results off. So I'm living
> with it.

It's good to be aware of the speed difference between DisplayAllResults and not. I guess now that I'm no longer blissfully ignorant, I'll either have to change my style or throw in some kind of hack.

As background, I had taken Steve Waring's IDE Extension and added F5 as an accelerator for 'Run Tests'. Also added a few extensions to ClassDescription so that any class could build a testSuite (if there happens to be an appropriately named TestCase class). With that, no matter where I happen to be working in a System or Class Browser, I'm usually just one keystroke away from running a set of relevant tests. Incidentally, I also hooked F5 to #runAll while in the SUnitBrowser, so the same tactile memory works for me there as well.

To run the tests for more than a single class, I usually just move up the TestCase hierarchy in the browser and hit F5. (Some main distinctions drawn there are Unit Tests for the current development vs. older projects vs. Acceptance Tests.) If I want to run everything, I can simply click on TestCase and then hit F5. I've been thinking about adding something that would run all the tests associated with selected Package(s), but haven't gotten around to that.

Incidentally, if anyone's interested in the details of how I set up F5 to run the associated tests, let me know.

So my current working style involves hitting F5 a lot. :-) Heck, I'll even use it to get from a class to the associated TestCase class -- just double-click on one of the tests after the SUnitBrowser comes up.
So now I'm wondering about the tradeoffs between the speed of not showing all results vs. the utility of seeing them. Can't decide if changing the global default would really be worth it.

Ah ha! Might not have to. Just hacked the SUnitBrowserModel>>runAll method. It remembers the #showAll status at the beginning, sets it false, runs the tests and then, if it had initially been on, toggles it back on at the end. Net result: the #runAll always goes fast, and shows all the detail when it's done (if that's the way the toggle's set in the UI). This seems, for the time being, like a very workable solution for me. The only thing missing is the updates to the detail list while the tests are running.

Regards,
-Bill

-------------------------------------------
Bill Dargel            [hidden email]
Shoshana Technologies
100 West Joy Road, Ann Arbor, MI 48105 USA
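[Editor's note: the hack described above would look roughly like this. It is reconstructed from the description, not Bill's actual code; #showAll/#showAll: and #basicRunAll are assumed names for the model's toggle accessors and its original run logic.]

```smalltalk
"Reconstruction of the hack described above -- not the actual code.
Suppress per-test display updates for the duration of the run, then
restore the toggle so all the detail shows once the run completes."
SUnitBrowserModel >> runAll
    | wasShowingAll |
    wasShowingAll := self showAll.
    self showAll: false.
    [self basicRunAll "assumed name for the original run logic"]
        ensure: [wasShowingAll ifTrue: [self showAll: true]]
```

Using #ensure: here means the toggle is restored even if a test raises an unhandled error mid-run, which a plain sequential restore would not guarantee.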
I spent some time this morning removing the two hotspots you identified earlier. I have a couple niggles to stamp out, and then I can release it. The time to run under each case is reduced by a large factor. Fortunately, I had unit tests on the SUnitBrowserResult class, so I was able to get it running pretty reliably without too much trouble.

If you can send me the change set for your changes, the next step I had in mind was to incorporate Steve's IDE enhancements into the base. Once I release all that, I can turn my attention to SUnit 3.1 and add some capabilities based on that.

Thanks for all the feedback - jlo

"Bill Dargel" <[hidden email]> wrote in message news:[hidden email]...
<snip>
Jeffrey Odell wrote:
> I spent some time this morning removing the two hotspots you identified
> earlier. I have a couple niggles to stamp out, and then I can release it.
> The time to run under each case is reduced by a large factor. Fortunately,
> I had unit tests on the SUnitBrowserResult class, so I was able to get it
> running pretty reliably without too much trouble.

The model changes sure help the speed. The slowest mode - showAllResults and showRunningStatusInList - runs nearly 5 times faster than it did yesterday! And with the new option to turn off showRunningStatusInList, it runs 32 times faster. Great job!

One tweak would be to include showRunningStatusInList in publishedAspects so that an initial default can be set from the User Preferences.

> If you can send me the change set for your changes, the next step I had in
> mind was to incorporate Steve's IDE enhancements into the base. Once I
> release all that, I can turn my attention to SUnit 3.1 and add some
> capabilities based on that.

Okay, I'll email you what I did there.

> Thanks for all the feedback -

Thank you for acting on it so quickly!

Regards,
-Bill

-------------------------------------------
Bill Dargel            [hidden email]
Shoshana Technologies
100 West Joy Road, Ann Arbor, MI 48105 USA
<snip>
> One tweak would be to include showRunningStatusInList in publishedAspects
> so that an initial default can be set from the User Preferences.
<snip>

Oops - left that out - will put in Release Candidate 2 later today - jlo
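[Editor's note: under Dolphin's usual published-aspects convention, the tweak being discussed might look something like the following sketch. The exact hook method and aspect list in SUnitBrowser may differ; this is an assumption about the convention, not the released code.]

```smalltalk
"Sketch of the Dolphin published-aspects convention: exposing the
toggle lets an initial default be set from the User Preferences."
SUnitBrowserModel >> publishedAspects
    ^super publishedAspects
        add: (Aspect boolean: #showRunningStatusInList);
        yourself
```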