Recommended SUnit testing tools?


Recommended SUnit testing tools?

Conrad Taylor
Hi, in VW 7.8.1 there are two different superclasses for defining TestCases after loading some popular parcels from the Parcel Manager:  XProgramming.SUnit.TestCase and Smalltalk.SUnit.TestCase.  So I was wondering: what is the recommended approach to unit testing in VW these days?

Think different and code well,

-Conrad





Re: Recommended SUnit testing tools?

Sherry Michael
Hi Conrad,

If you need to stay compatible with the original SUnit class hierarchy, you should continue to
subclass XProgramming.SUnit.TestCase for your testing.  The UI tools for this framework are
contained in the RBSUnitExtensions package.

For VisualWorks we currently use a combination of both the original and Smalltalk.SUnit.TestCase
[SUnitToo], often supported by SUnit-Bridge2SU2, which allows the two frameworks to reasonably
coexist within the same image.  The bridge package is available in the public Store, though I
note that I need to replicate some versions to that repository.

There are some differences in how the two frameworks manage tests, particularly with the
inheritance of abstract superclass tests, but a number of us have migrated our tests to the new
framework in order to take advantage of the improved visual tools there [SUnitToo(ls)].  The
semi-persistent state represented by icons in the browser is quite nice, as are the spawned
browsers and debug buttons with which to investigate errors and test failures.

Regards,
Sherry


Re: Recommended SUnit testing tools?

Steffen Märcker
In reply to this post by Conrad Taylor
I stick to SUnitToo and the related tools. I think that's the facility  
recommended by Cincom.

Regards, Steffen


Re: Recommended SUnit testing tools?

Niall Ross
In reply to this post by Conrad Taylor
Dear Conrad,

The 'Unit Testing' chapter of doc/ToolGuide.pdf has a section at the end,
'Extensions and Variants of SUnit in VisualWorks', which covers the other
tools.  Briefly:

 - SUnit, with the VW UI RBSUnitExtensions, is the cross-dialect utility we
know and love (a minimal sketch of a test case follows this list).

 - SUnitToo, with the VW UI SUnitToo(ls), is a mature tool created by Travis
Griggs.  It is VW-specific and permits exploration of VW-specific ideas
for SUnit that are not - or not yet - in the cross-dialect framework.
(Some ideas trialled in SUnitToo, and in Andres' Assessments framework,
have migrated to SUnit and are now also in Pharo, VASmalltalk and
Dolphin SUnit.  Others may always be too VW-specific, or may remain an
area of debate.)

 - The SUnit2SU2-Bridge parcel reparents SUnit tests as SUnitToo tests
when the bridge is deployed (done automatically on loading the parcel,
and can be done programmatically) and back again when the bridge is
retracted (done automatically on unloading, and can be done
programmatically).  This can be useful if, for example, you want to keep
your tests as SUnit tests, e.g. because the utility is cross-dialect or for
easier historic comparison, but want to use the SUnitToo(ls) UI or wish to
run these tests in a single suite with other SUnitToo tests.
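
As a minimal illustration (AccountTest, Account and the test selectors are
invented for this sketch, and the class definition is shown in the classic
subclass:... form rather than VW's defineClass: file-out format), a test
class for either framework differs only in the superclass it names; the
test methods themselves are plain SUnit assertions:

    "cross-dialect SUnit; for SUnitToo, reparent under Smalltalk.SUnit.TestCase"
    XProgramming.SUnit.TestCase subclass: #AccountTest
        instanceVariableNames: 'account'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'MyApp-Testing'

    AccountTest >> setUp
        account := Account new

    AccountTest >> testNewAccountHasZeroBalance
        self assert: account balance = 0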

There is a very high degree of similarity between the frameworks:  a
test case should run the same under either.  Some minor differences are:

    - SUnitToo(ls) keeps an image-wide memory of the last result of each run
test:  open a fresh window on a test and you will see it with an icon of the
last-time-run result.  RBSUnitExtensions remembers only within each
window:  open another RB, or move off the test pane in the same RB, and
the knowledge of test outcomes is discarded.

    - SUnit has optimistic locking of TestResources by default, with an
optional pessimistic locking pattern.  Thus, for example, if your system
can only be logged in to one database at a time and your overall test
suite includes two database login resources that log in to two databases,
you must tag them as belonging to a CompetingResource set.  SUnitToo has
pessimistic locking and (IIRC) no pattern for escaping from it at this
time.  Thus you need never tag competing resources, but if you have a
resource that takes 5 minutes to setUp and tearDown (e.g.
installing/uninstalling a complex product), and use it in a suite of
thousands of tests with tens of other resources (e.g. your code
integration suite), then SUnitToo could turn that 5 minutes into an hour
and 5 minutes as it is repeatedly torn down and set up again in
pessimistically-calculated competing resource sets.

    - SUnit provides the TestCase API on TestResources as well, so e.g. if code
in a test case's setUp method starts being too slow as test numbers
grow, it can be refactored to a test resource's setUp method, to be run
once per suite instead of once per test, without needing to be rewritten
(see the sketch after this list).

    - SUnitToo randomises each test run.  On the plus side, this means
repeated runs may well find order-dependent errors that SUnit's
consistent run order does not expose.  On the minus side, the run order
is not remembered or reproducible, so just such order-dependent failures
may haunt you as intermittent failures.  (FYI I wish to add a
randomise-run-order feature to SUnit, but with memory of the order and
only used when the user selects it.)
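
To make the resource point above concrete, here is a minimal sketch
(DatabaseResource, Database, connectTo: and AccountTest are invented names
for this example): the slow work moves from the test case's setUp into a
TestResource's setUp, the case lists the resource on its class side, and the
framework then runs that setUp once per suite rather than once per test:

    TestResource subclass: #DatabaseResource
        instanceVariableNames: 'connection'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'MyApp-Testing'

    DatabaseResource >> setUp
        "expensive work, done once per suite"
        connection := Database connectTo: 'testdb'

    DatabaseResource >> tearDown
        connection close

    DatabaseResource >> connection
        ^connection

    AccountTest class >> resources
        "tell the framework which resources this case depends on"
        ^Array with: DatabaseResource

    AccountTest >> testConnectionIsOpen
        self deny: DatabaseResource current connection isNil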

Travis may be able to list other differences.  Generally, the intent is
to keep behaviour the same except for areas where ideas for developing
the frameworks are being trialled.

                    HTH
                         Niall Ross


Re: Recommended SUnit testing tools?

Conrad Taylor
Niall/Sherry/Steffen, thank you for the feedback on my question.  I feel that Smalltalk.SUnit.TestCase [SUnitToo] provides excellent visual feedback on the status of individual tests that XProgramming.SUnit.TestCase does not.  Also, the randomization of the tests is great and works much like Ruby's MiniTest.  Furthermore, the SSpec BDD framework integrates seamlessly with the RB.  It would be good to see the order in which the tests were run, as well as the seed value of the random number generator, so that a test run can be repeated.  All in all, the tools in VW are very impressive and make a developer very productive.  Last but not least, I like to run all my tests from the keyboard.  Are there any hot keys for interacting with the test runner?

--

Think different and code well,

-Conrad




Re: Recommended SUnit testing tools?

Andres Valloud-6
You might also want to look at Assessments (a bundle of the same name in
the public Store repository, MIT license).  It offers a much more
flexible implementation of a basic test framework, which is then used to
execute tests from three different variants of SUnit (SUnit, SUnitToo,
and SUnitVM) without needing to modify, override, or reparent existing
test classes.  In addition, it implements test-based validation, as well
as test-based benchmarks and performance measurements.  For references,
see Chapter 4 of "A Mentoring Course on Smalltalk" here:

http://www.lulu.com/shop/search.ep?contributorId=441247

as well as several conference talk slides about these, for example:

http://www.youtube.com/watch?v=jeLGRjQqRf0

and also see for example the paper Extreme Validation here:

http://www.caesarsystems.com/resources/caesarsystems/files/Extreme_Validation.pdf


Assessments' flexible architecture also allows extending Assessments
without having to override the framework itself.  Overriding is a problem
when extending SUnit, and it leads to various extensions being
incompatible with each other.


Re: Recommended SUnit testing tools?

Conrad Taylor
Andres, thanks for all the information; I appreciate it.  Regarding the
installation of Assessments, does the top-level Assessments bundle install
everything that's needed?  Also, if I understand you correctly, installing
Assessments requires no code changes.  Everything is working as expected,
so thanks again for Assessments.
 



--

Think different and code well,

-Conrad




Re: Recommended SUnit testing tools?

Niall Ross
In reply to this post by Andres Valloud-6
Dear Conrad, Andres et al,

>Assessments' flexible architecture also allows extending Assessments
>without having to override the framework itself.  This is a problem with
>extending SUnit, and it will lead to various extensions being
>incompatible with each other.

FYI, the mention in my earlier email of SUnit having taken some ideas
from Assessments, as well as from SUnitToo, refers to some of these ways
of extending.  For example, the 'Extensions and Variants of SUnit in
VisualWorks' subsection (in the test chapter of doc/ToolGuide.pdf)
mentions the pattern for skipping tests.  This is an example of using
ClassifiedTestResult, which owes its inspiration to Assessments.

In 7.9, the RBSUnitExtensions tool allows plugging in TestSuite
subclasses, by doing
    RBSUnitExtension suiteClass: MyTestSuite.
During 7.10, we expect to publish some examples of TestSuite subclasses
(and TestResult subclasses referenced by them) in the OR so people can
use the above to experiment, and maybe some in the community will also
do so:  thus we can find out which subclassing patterns work best, and
make any (very minor, purely refactoring) changes in the top-level
framework that would help support them.
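
As a sketch of the kind of TestSuite subclass that could be plugged in this
way (OrderLoggingTestSuite is an invented name; the sketch assumes only the
standard SUnit TestSuite protocol of tests and run:), here is one that
records the order in which its tests are run:

    TestSuite subclass: #OrderLoggingTestSuite
        instanceVariableNames: 'runOrder'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'MyApp-Testing'

    OrderLoggingTestSuite >> run: aResult
        "remember the order in which the tests will run, then defer to normal behaviour"
        runOrder := self tests collect: [:each | each printString].
        super run: aResult

    OrderLoggingTestSuite >> runOrder
        ^runOrder

    "hook it into the RB test tool"
    RBSUnitExtension suiteClass: OrderLoggingTestSuite.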

You are right that SUnitToo(ls) gives better visual feedback than
RBSUnitExtensions (having the SUnit2SU2-Bridge, which lets us use the
SUnitToo UI for SUnit tests just by loading it, has made me careless
about porting these UI features; I will try to do that for the next
release).  By contrast, RBSUnitExtensions is a bit ahead when it comes
to handling tests that are extensions in other packages (maybe those
features will also get ported).

             Yours faithfully
                   Niall Ross



Re: Recommended SUnit testing tools?

Conrad Taylor
Niall, thanks for describing the current state and future direction of
testing and the associated test frameworks in VW.
 


--

Think different and code well,

-Conrad




Re: Recommended SUnit testing tools?

Andres Valloud-6
In reply to this post by Niall Ross
Niall et al,

On 5/5/2012 9:09 AM, Niall Ross wrote:

> Dear Conrad, Andres et al,
>
>> Assessments' flexible architecture also allows extending Assessments
>> without having to override the framework itself. This is a problem
>> with extending SUnit, and it will lead to various extensions being
>> incompatible with each other.
>
> FYI, the mention in my earlier email of SUnit having taken some ideas
> from Assessments, as well as from SUnitToo, refers to some of these ways
> of extending. For example, the 'Extensions and Variants of SUnit in
> VisualWorks' subsection (in the test chapter of doc/ToolsGuide.pdf)
> mentions the pattern for skipping tests. This is an example of using
> ClassifiedTestResult, which owes its inspiration to Assessments.

Yes.  And another improvement to SUnit I really liked was to reference
classes via dedicated methods so that subclassing for similar purposes
is easier.  I'm really glad those kinds of ideas have made it to SUnit,
and that they are deemed useful by others.  I still think integrating
multiple extensions seamlessly will remain a bit problematic though,
because sooner or later you will want some sort of multiple inheritance.
  I don't claim to have the crystal ball of SUnit :), but if you want we
can talk about this more offline and see if a few changes to SUnit in
this regard might be helpful.
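
A minimal sketch of that pattern (the selectors resultClass and newTestResult,
and the ClassifyingTestCase subclass, are invented names for illustration):
the framework asks itself which collaborator class to use through a dedicated
method, so a subclass that wants, say, a ClassifiedTestResult overrides only
that one method:

    TestCase >> resultClass
        "dedicated method naming the result class to instantiate"
        ^TestResult

    TestCase >> newTestResult
        ^self resultClass new

    "a subclass wanting richer result classification overrides only the reference method"
    ClassifyingTestCase >> resultClass
        ^ClassifiedTestResult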

Andres.

Re: Recommended SUnit testing tools?

stephane ducasse-2
Hi Niall,

At ESUG in Brest we discussed some important enhancements for SUnit, and I never saw them happen.
I remember there were three points, but not exactly what they were.  Do you remember?

Stef





Re: Recommended SUnit testing tools?

Andres Valloud-6
In reply to this post by Andres Valloud-6
Oh, one thing I forgot.  The design rationale for Assessments is in
chapter 5 of the Fundamentals book here:

http://www.lulu.com/shop/andres-valloud/fundamentals-of-smalltalk-programming-technique-volume-1/paperback/product-5299835.html


Re: Recommended SUnit testing tools?

Conrad Taylor
Andres, thanks for the reference; I'm looking forward to reading it.
 



--

Think different and code well,

-Conrad


