Canonical test cases?


Canonical test cases?

Phil B
My code migration to Cuis 4.2 has gone from a saga to an ordeal... lots
of breakage.  As I've been going through and fixing things, I suspect
I'm going from doing some things in a broken way in Squeak/earlier Cuis
to a broken way in current Cuis, simply because I don't know what the
'correct' way is.

One example is compression: there aren't any canonical examples of how
to compress/decompress files/streams, so I poke around the classes
until I get to something that works for me right now, with no idea
whether what I come up with will still work a few versions from now.
Another example is package loading: I've been calling CodePackageFile
installFileStream:packageName:fullName: but that no longer appears to be
the correct entry point if I want dependency checking.  While I
understand that things are subject to change, would it make sense to
have some documented, fixed entry points that would attempt to make the
most sane choices by default going forward?
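
To illustrate the compression case, here's roughly what I'd hope the
canonical in-memory round trip looks like, guessed from the
Squeak-lineage GZipWriteStream/GZipReadStream classes (whether these
classes and creation messages are the blessed Cuis entry points is
exactly what I can't tell):

  | buffer |
  buffer := WriteStream on: String new.
  (GZipWriteStream on: buffer)
      nextPutAll: 'some text to compress';
      close.
  "buffer contents now holds the compressed data; reading it back:"
  (GZipReadStream on: buffer contents) upToEnd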

My thought was: why not use test cases to help document the preferred
(and therefore longer-term supported) way of doing things? They could
be put in Tests-Examples or something similar and serve double duty:
they would not only point people in the right direction on how to use a
given class, but also serve as a warning/reminder to anyone wanting to
make changes that these are the core calls that need to be maintained
for code compatibility, or explicitly called out when compatibility
needs to be broken.  I imagine this would be a fairly small subset of
classes/methods, not trying to anticipate all the edge cases, just
dealing with the most common ones.
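
For instance, the compression example above, recast as a test (the
class name is invented, and the compression sends are my current best
guess rather than gospel):

  ApiCompressionTest >> testGzipRoundTrip
      "Documents the supported way to compress and decompress in memory."
      | buffer |
      buffer := WriteStream on: String new.
      (GZipWriteStream on: buffer)
          nextPutAll: 'the quick brown fox';
          close.
      self assert: (GZipReadStream on: buffer contents) upToEnd
                  = 'the quick brown fox'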

Does what I'm suggesting make sense / seem worthwhile?



Re: Canonical test cases?

Wolfgang Eder
hi,
I believe you have an excellent point here!
I believe that tests are a very valuable and fitting
way of documenting an API.

And I believe that a distinction needs to be made
along the lines of blackbox vs. whitebox testing,
where the distinction is whether the internal functioning
is tested or not.

Blackbox tests should test just the API, treating the code
as a "black" box: they do not assume or know much
about its internal workings.

Whitebox tests should test the internal workings, and these
may break, or not even compile, when refactorings or
restructurings are done.

My point is, every test needs to be documented as either
blackbox or whitebox, because a user of the
code and its tests needs to know whether a test is
testing the API or the inner workings of the code.
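
for example (class and method names invented for illustration;
instVarNamed: is the squeak-lineage reflection message):

  testSortedCollectionAPI
      "blackbox: relies only on the public protocol"
      self assert: #(3 1 2) asSortedCollection asArray = #(1 2 3)

  testSortedCollectionInternals
      "whitebox: pokes at internal state, so it may break under refactoring"
      self assert:
          (#(3 1 2) asSortedCollection instVarNamed: 'sortBlock') isNil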

kind regards
wolfgang




Re: Canonical test cases?

Hannes Hirzel
Hi Phil and Wolfgang

Test cases mentioned by Phil are

T1. canonical examples of how to compress/decompress files/streams

T2. package loading. Phil writes "I've been calling CodePackageFile
installFileStream:packageName:fullName: but that no longer appears to be
the correct entry point if I want dependency checking"

Are there more?

There is also a workspace called 'useful expressions' to which a few
of these test cases might be added.

And updates to these files

https://github.com/Cuis-Smalltalk/Cuis-Smalltalk-Dev/tree/master/Documentation
(just fork the Cuis repo, update the md file and issue a pull request)

You can also add an issue:
https://github.com/Cuis-Smalltalk/Cuis-Smalltalk-Dev/issues

like I just did for T1 and T2
https://github.com/Cuis-Smalltalk/Cuis-Smalltalk-Dev/issues/39

--Hannes



Re: Canonical test cases?

Phil B
Hi Hannes,

On Sun, 2015-02-22 at 08:08 +0000, H. Hirzel wrote:

> Hi Phil and Wolfgang
>
> Test cases mentioned by Phil are
>
> T1. canonical examples of how to compress/decompress files/streams
>
> T2. package loading. Phil writes "I've been calling CodePackageFile
> installFileStream:packageName:fullName: but that no longer appears to be
> the correct entry point if I want dependency checking"
>
> Are there more?
>

Definitely.  But rather than adding more details right now, I was
hoping to stay on the larger question of whether trying to get to a
documented, stable set of APIs makes sense and, if so, how (where
possible... I understand there are areas that are still works in
progress)

> There is also a workspace called 'useful expressions' to which a few
> of these test cases might be added.
>
> And updates to these files
>
> https://github.com/Cuis-Smalltalk/Cuis-Smalltalk-Dev/tree/master/Documentation
> (just fork the Cuis repo, update the md file and issue a pull request)
>
> You can also add an issue:
> https://github.com/Cuis-Smalltalk/Cuis-Smalltalk-Dev/issues
>
> like I just did for T1 and T2
> https://github.com/Cuis-Smalltalk/Cuis-Smalltalk-Dev/issues/39
>

I thought about that, and the documentation is excellent general
primer/tutorial material.  However, I think we'd want to keep it short
and sweet, both to encourage people to actually read it and to minimize
the risk of 'documentation drift', where the docs no longer match the
reality of the code (or am I the only one who's ever seen that? :-)

The advantage of doing the API documentation in test cases is that it
ensures the example code both works and keeps working. Anyone making a
change, provided they re-run the tests, will know immediately if
they've broken anything; and if the breakage is in one of the 'black
box' tests, it should get a bit more consideration (or at least
communication of what's changing), since changing it will likely break
user code.

Pull requests are definitely the way to handle whatever, if anything,
gets decided, as it's not something that would be reasonable to ask Juan
to do himself.  However, he would need to review said test cases to
validate that they are, in fact, correct in terms of where he sees
things going. (i.e. I can easily produce some test cases for the
compression package but, since they involve a bit of guesswork on my
end, they may not in fact be canonical examples)

(P.S. thanks for the SQLite package... I'm using it quite a bit
recently)



Re: Canonical test cases?

Casey Ransberger-2
2 cents on the subject...

Tests shouldn't be doIts. Tests should live with SUnit.

We really will need some work on both SUnit and the tests themselves soon enough. Cuis has been changing at a rate that outpaces the automation of testing, at least with a community of this size. That's sort of been part of the deal with Cuis in recent times.

Andreas also pointed out that our SUnit implementation is a bit behind the curve. We might consider adapting a newer version from Squeak.

That said, now that we've seen demos of the Morphic 3 interface, I do expect the system to stabilize a bit. Right now would be a great time for some brave soul to bring our SUnit implementation a bit closer to that of the other distributions, and backfill some tests.

The hard part is time and energy to do so.

Still, I feel your pain. Updating *any* Smalltalk image has proven fraught with peril in my experience.

A good place to start would be to turn your (unfortunate) experiences into bug reports containing steps to reproduce, expected behavior, and actual (erroneous) behavior. From there we'll at least have somewhere to start.

--C


Re: Canonical test cases?

Hannes Hirzel
Casey writes

"Cuis has been changing at a rate that outpaces the automation of
testing, at least with a community of this size. That's sort of been
part of the deal with Cuis in recent times."

Nobody disputes that we need tests. All that has been said is OK.

The challenge is to _do_ it.

Suggestions:

1. Add a gitter chatroom, as Amber Smalltalk has, and just ask questions
instead of spending a long time finding an API entry point.

2. Keep chronologically organized notes about API entry points.

3. Add a small number of tests (because of limited resources)

4. Use issues in github

5. Maintain an FAQ

6. Run a gh-pages web site

As Casey writes "The hard part is time and energy to do so."

So what can be done with minimal effort and big impact?

--Hannes


Re: Canonical test cases?

KenDickey
On Mon, 23 Feb 2015 08:38:44 +0000
"H. Hirzel" <[hidden email]> wrote:

> As Casey writes "The hard part is time and energy to do so."
>
> So what can be done with minimal effort and big impact?

Looking around, I see that I could make some simple naming changes for test cases.

E.g. in package Crypto-Nacl there are test cases (class NaclTests):
  testExampleHighlevel -> testNaclHighLevelAPI
  testExampleLowLevel -> testNaclLowLevelAPI

We could use other words, but API is shorter than Usage, Example, Interface.

Having specific test names end in 'API' would clue me in.

Adding 'Nacl' means I can use the existing message finder to search for 'api' and note testNacl*API methods.  This lets me find API tests specific to Nacl.
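
As a quick check, a workspace doIt along these lines should list them
all (hedged: assumes the Squeak-style endsWith: and allSubclasses are
present):

  | apiTests |
  apiTests := OrderedCollection new.
  TestCase allSubclasses do: [:cls |
      cls selectors do: [:sel |
          (sel asString endsWith: 'API')
              ifTrue: [apiTests add: cls name, '>>', sel]]].
  apiTests inspect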

And we can do this as I get the chance.

Other suggestions?
--
-KenD


Re: Canonical test cases?

Hannes Hirzel
Other idea: regularly (e.g. bimonthly) export all the methods (plus
comments) whose names contain 'test' and 'API' to a markdown file in the
'Documentation' folder.
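
A sketch of such an export (firstCommentAt: and endsWith: are
Squeak-lineage messages; whether current Cuis still carries them would
need checking):

  | md |
  md := WriteStream on: String new.
  TestCase allSubclasses do: [:cls |
      cls selectors asSortedCollection do: [:sel |
          (sel asString endsWith: 'API') ifTrue: [
              md nextPutAll: '## '; nextPutAll: cls name;
                  nextPutAll: '>>'; nextPutAll: sel; nextPut: Character lf.
              (cls firstCommentAt: sel) ifNotNil: [:comment |
                  md nextPutAll: comment; nextPut: Character lf]]]].
  md contents  "then save as e.g. Documentation/ApiTests.md"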

Check the comments from time to time and generate issues to ensure the
comments are instructive.

We need a process which is quite verbose and which allows us to jump
in after long periods of inactivity and immediately do something
productive, i.e. one that provides enough context for the next
activities.


Re: Canonical test cases?

Phil B
In reply to this post by KenDickey
On Mon, 2015-02-23 at 07:53 -0800, Ken.Dickey wrote:

> [...]
>
> Other suggestions?

I started playing around with a couple of example test cases to see what
I ran into, and came up with a distinct class to store all of these per
test category (i.e. under Test-Files a class ApiFile which could have a
method testWriteTextFile).  The rationale was that it might make sense to
keep these test cases separate from traditional test cases, which are
free to make calls that users of the class (i.e. callers of the
supported API) should not.
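
Roughly like this (a sketch; the FileStream messages are the
Squeak-lineage ones and may well not be the final Cuis answer):

  TestCase subclass: #ApiFile
      instanceVariableNames: ''
      classVariableNames: ''
      poolDictionaries: ''
      category: 'Test-Files'

  ApiFile >> testWriteTextFile
      "Documents the supported way to write a text file and read it back."
      | stream contents |
      stream := FileStream forceNewFileNamed: 'api-example.txt'.
      [stream nextPutAll: 'Hello, Cuis!'] ensure: [stream close].
      stream := FileStream readOnlyFileNamed: 'api-example.txt'.
      [contents := stream upToEnd] ensure: [stream close].
      self assert: contents = 'Hello, Cuis!'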



Re: Canonical test cases?

Hannes Hirzel
Phil

I have put your note here
https://github.com/Cuis-Smalltalk/Cuis-Smalltalk-Dev/issues/39


Re: Canonical test cases?

KenDickey
In reply to this post by Phil B
On Mon, 23 Feb 2015 14:43:26 -0500
"Phil (list)" <[hidden email]> wrote:

> I started playing around with a couple of example test cases [...] The
> rationale was that it might make sense to keep these test cases separate
> from traditional test cases [...]

You bring up a good point.

I suspect there are three basic cases:
  [1] A number of small test cases for a wide, shallow API.  E.g. Collection classes.  Many "small" usage tests.
  [2] A few deep calls which demonstrate a usage protocol.  E.g. open/read/close files, sockets; use library wrapper.
  [3] Sample/example UI code which illustrates how to build apps/applets/tools  [Color Picker, File List, ..]

A test class, as you point out, is probably appropriate for case [1], where a zillion test<collection><access>API kinds of things would swamp a name search.  A class naming convention would be good here.

Individual test methods, with a searchable name convention, would be best for [2] where there would be few illustrative usage tests.

We could call out example code in a README doc for [3] where examples might have auxiliary architectural documentation.

--
-KenD


Re: Canonical test cases?

Phil B
On Tue, 2015-02-24 at 17:51 -0800, Ken.Dickey wrote:

> I suspect there are three basic cases: [...]

Agreed.

Case 2 was what originally got me going on this subject, as that's
where my code usually seems to break, due to guessing wrong about the
longer-term entry points.  I wasn't envisioning a kitchen-sink
approach to tests, just some basic 'here's the recommended way to use
this area of functionality' tests.

Unless Cuis evolves into a completely separate Smalltalk dialect, case 1
can probably be handled via traditional unit tests / categorization and
the numerous intro-to-Smalltalk tutorials out there.  Case 3 seems like
the point where written tutorials (along the lines of what Hannes
pointed out earlier in the thread) and/or videos would best come into play.



Re: Canonical test cases?

Juan Vuletich-4
Hi Folks,

(below)
On 2/25/2015 12:22 AM, Phil (list) wrote:

> [...]

I essentially agree with all you say. And obviously, I support any and
all efforts to enhance Cuis (including docs, external tools, etc).

WRT using tests to document stable or standardized APIs, another reason
for marking them or making them identifiable as such is managing change.
When I change an API in Cuis, I update all senders in the image and core
packages, including those in tests. So, I need to know that a test
documents a standardized API, to avoid the change, or at least discuss it
with you here, or move the old API to SqueakCompatibility.pck.st, etc.

Another alternative is to have some symbol, like #standardAPI or
#stableAPI, to be called from such methods. This makes it easy to spot
them with 'senders', and it is obvious to anyone modifying them.

In any case, Phil, given that all this is not yet done, when you face
such breakage, please don't just fix it yourself (unless you think
that's best). Please use that frustration to open a specific discussion
on that particular API. Ideally, we would agree on a reasonable API
(maybe the old one, maybe a better new one), implement it, and document
it. Just remember that we can and should discuss decisions.

Cheers,
Juan Vuletich



Re: Canonical test cases?

Phil B
On Thu, 2015-02-26 at 11:18 -0300, Juan Vuletich wrote:

> [...]
>
> Another alternative is to have some symbol, like #standardAPI or
> #stableAPI to be called from such methods. This makes it easy to spot
> them with 'senders', and it is obvious for anyone modifying them.
>

self flag: #stableAPI? (assuming you're talking about doing this in the
test cases)  I don't really have strong feelings on symbols vs.
categories vs. something else for indicating them, other than that it
should be consistent, so that all concerned know how to identify/find them.
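
e.g. something like this (assuming Object>>flag: and the Squeak-style
browseAllCallsOn: both survive in Cuis):

  testWriteTextFileAPI
      "A stable-API test carries the marker as its first statement
      (placeholder assertion; the real body would exercise the file API)."
      self flag: #stableAPI.
      self assert: 'Hello' reversed = 'olleH'

and then all of them are one doIt away:

  Smalltalk browseAllCallsOn: #stableAPI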

> In any case, Phil, given that all this is not yet done, when you face
> such breakage, please don't just fix it yourself (unless you think
> that's best). Please use that frustration to open a specific discussion
> on that particular API. Ideally, we would agree on a reasonable API
> (maybe the old one, maybe a better new one), implement it, and document
> it. Just remember that we can and should discuss decisions.
>

That's why I started this thread.  As I've been going through this round
of upgrade fixes I kept thinking 'there's got to be a better way'.
One of the things I can do to help is to generate some test cases for
things I'm having to fix in my code where it seems like there should be
a standard API. Then you would be able to look at them and say 'yes, that
looks right' or 'no... here's how it should be done instead', and the
result could live on in the tests, so that hopefully that particular
issue wouldn't recur in future releases.





Re: Canonical test cases?

KenDickey
On Thu, 26 Feb 2015 15:32:39 -0500
"Phil (list)" <[hidden email]> wrote:

> One of the things I can do to help is to generate some test cases for
> things I'm having to fix in my code where it seems like there should be
> a standard API. Then you would be able to look at them and say 'yes, that
> looks right' or 'no... here's how it should be done instead', and the
> result could live on in the tests, so that hopefully that particular
> issue wouldn't recur in future releases.

This works for algorithmic things.

A pattern for visual aspects of Morphs is an 'examples' category, typically on the class side, with message names including 'example'.  Slightly different usage.
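
E.g. (SampleMorph standing in for a real morph class):

  SampleMorph class >> example
      "Open an instance so its visual behavior can be seen and played with."
      | m |
      m := self new.
      m openInWorld.
      ^ m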

I keep thinking of an "Exploring Cuis" kind of tutorial which investigates in brief how things are put together.  Hence my Morphic-Misc1 package, which includes e.g. a LayoutMorphEditPanel and LayoutSpecEditPanel, so one can select a morph and edit its layout properties from the Morph's menu, to play with and see how layout works.

It would be cool to do this in StyledTextEditor.

If only we had 72 hour days!  ;^)

-KenD


Re: Canonical test cases?

Phil B
On Thu, 2015-02-26 at 13:46 +0800, Ken.Dickey wrote:

> > [...]
>
> This works for algorithmic things.
>

For traditional unit tests, sure.  What we're talking about here veers
into small- to medium-scale integration testing.  True, integration
tests can be for purely algorithmic things as well.  But they don't
need to be strictly limited to purely testable functionality.
Sometimes merely runnable is a better test than nothing: i.e. if a test
attempts to call a method and that method no longer exists...
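
e.g. even a bare existence check buys something (canUnderstand: is
standard Behavior protocol; it's sent to CodePackageFile class because
the entry point is class-side):

  testPackageInstallEntryPoint
      "Fails as soon as the supported entry point disappears or is renamed."
      self assert: (CodePackageFile class
          canUnderstand: #installFileStream:packageName:fullName:)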

> A pattern for visual aspects of Morphs is an 'examples' category, typically on the class side, with message names including 'example'.  Slightly different usage.
>
> I keep thinking of an "Exploring Cuis" kind of tutorial which investigates in brief how things are put together.  Hence my Morphic-Misc1 package, which includes e.g. a LayoutMorphEditPanel and LayoutSpecEditPanel, so one can select a morph and edit its layout properties from the Morph's menu, to play with and see how layout works.
>
> It would be cool to do this in StyledTextEditor.
>
> If only we had 72 hour days!  ;^)
>

That would be very cool indeed!

> -KenD




Re: Canonical test cases?

bpi
In reply to this post by Wolfgang Eder
Hi Wolfgang,

Great to see you are reading this list!

I think your suggestion is a great one. What would be the simplest workable way to separate the two kinds of tests?

I can think of:
a) Use different test classes with a naming convention, e.g. GnagnaApiTest versus GnagnaInternalTest
b) Put test methods in different protocols, e.g. API testing versus internal testing

b) seems simpler to me and it has the advantage that one could reuse test fixtures (setUp and tearDown).
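
For b), the classification itself is a one-liner (classify:under: is the
Squeak-lineage ClassOrganizer message, and GnagnaTest is made up as above):

  GnagnaTest organization classify: #testSomethingAPI under: 'API testing'

and the browser then groups the API tests without needing any extra classes.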

What do you think?

Cheers,
Bernhard


Re: Canonical test cases?

Wolfgang Eder
hi bernhard,
yes i agree that b) is a practical way to do it.

the hard part imho is to work out what the api actually is,
since it carries the expectation that it should not break/change.
and being able to change everything is part of the fun
of working with smalltalk.

working in an image with everything loaded would help
with these cases, as the code could be fixed the moment
the change happens.

kind regards,
wolfgang




How to find out the API of a class? was: Canonical test cases?

bpi
Hi Wolfgang,

Thanks for the quick answer. See below:

Cheers,
Bernhard

> On 01.03.2015 at 20:45, Wolfgang Eder <[hidden email]> wrote:
>
> hi bernhard,
> yes i agree that b) is a practical way to do it.
>
> the hard part imho is to work out what the api actually is,
> since it carries the expectation that it should not break/change.
> and being able to change everything is part of the fun
> of working with smalltalk.
Yes indeed. It would be great if public methods could be distinguished more easily from private ones, as in other programming languages. If I remember correctly, Squeak's convention is that all methods in categories/protocols other than private should be considered part of the API. However, I am pretty sure this is not honored right now, in either Squeak or Cuis. Methods with names starting with pvt are still handled in a special way in Cuis, see isPvtSelector. This feature is almost unused – there are only 5 such methods – so it should probably be removed to make the system simpler.

> working in an image with everything loaded would help
> with these cases, as the code could be fixed the moment
> the change happens.
I agree this would be best, especially for such a small community. It increases the risk of introducing unwanted dependencies between packages, though. So a tool to identify these would be most welcome. Better yet would be a runtime system which enforces these package dependencies. I chatted with Juan about this the other day. He thinks this would require VM changes to implement cleanly. Pity.


Re: Canonical test cases?

Hannes Hirzel
In reply to this post by Phil B
On 2/23/15, Phil (list) <[hidden email]> wrote:

> [...]
>
> I started playing around with a couple of example test cases to see what
> I ran into, and came up with a distinct class to store all of these per
> test category (i.e. under Test-Files a class ApiFile which could have a
> method testWriteTextFile).  The rationale was that it might make sense to
> keep these test cases separate from traditional test cases, which are
> free to make calls that users of the class (i.e. callers of the
> supported API) should not.

Phil,

where do you keep this class ApiFile?

regards

Hannes

