SUnit: Skipping tests?


SUnit: Skipping tests?

Andreas.Raab
Hi Folks -

I am in the interesting situation that I'm writing a few tests that
require large data sets for input, and I don't want to require people
to download those data sets. My problem is that while it's easy to
determine that the data is missing and skip the test, there isn't a good
way of relaying this to the user. From the user's point of view "all
tests are green" even though that statement is completely meaningless.
I'd rather communicate this in a way that says "X tests skipped", so
that one can look at that and decide whether it's useful to re-run the
tests with the data sets or not.

Another place where I've seen this happen is with platform-specific
tests. A test which cannot be run on some platform should be skipped
meaningfully (e.g., by telling the user it was skipped) rather than
appearing green and working.

Any ideas?

Cheers,
   - Andreas


Re: SUnit: Skipping tests?

Markus Gälli-3
Hi Andreas,

> I am in the interesting situation that I'm writing a few tests that  
> require large data sets for input and where I don't want people to  
> require to download the data sets. My problem is while it's easy to  
> determine that the data is missing and skip the test there isn't a  
> good way of relaying this to the user. From the user's point of  
> view "all tests are green" even though that statement is completely  
> meaningless and I'd rather communicate that in a way that says "X  
> tests skipped" so that one can look at and decide whether it's  
> useful to re-run the tests with the data sets or not.

> Another place where I've seen this to happen is when platform  
> specific tests are involved. A test which cannot be run on some  
> platform should be skipped meaningfully (e.g., by telling the user  
> it was skipped) rather than to appear green and working.
>
> Any ideas?
>
> Cheers,
>   - Andreas
>

If it's not possible to put the data zipped into a method because it
would be too big somehow, I'd consider your two examples logically
equivalent to "If the moon is made out of green cheese anything is
allowed". So it is kind of ok that these tests are green.
But you are right: one usually does not think of tests as having
prerequisites; one thinks of them as commands which "always" bring
their necessary context along.

And you are suggesting that we clearly indicate which tests depend on
some external resource?
I'd suggest using (and introducing into Squeak in general)
preconditions using blocks (*1), like:

testCroquetOnXBox
        self precondition: [SmalltalkImage current platformName = 'XBox'].
        (...)

Having that in place, one could easily collect and indicate all the
tests whose precondition failed.
Such tests should be rare, and they should all depend on an external
resource that is too cumbersome to recreate as a scenario.
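
A minimal sketch of what such a helper could look like on TestCase
(TestPreconditionFailure is a made-up Error subclass; how the runner
reports these tests is left open):

TestCase >> precondition: aBlock
        "Evaluate aBlock; if it answers false, signal a dedicated exception
        so that a suitably extended test runner can report this test as
        skipped instead of green. TestPreconditionFailure is hypothetical."
        aBlock value
                ifFalse: [TestPreconditionFailure signal: 'test precondition not met']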

Cheers,

Markus

(*1) Apart from backwards compatibility, I wouldn't have a problem
with using method properties/pragmas to introduce pre- and
postconditions either.



Re: SUnit: Skipping tests?

Andres Valloud
In reply to this post by Andreas.Raab
Hello Andreas,

Sunday, March 26, 2006, 5:08:17 PM, you wrote:

AR> I am in the interesting situation that I'm writing a few tests
AR> that require large data sets for input and where I don't want
AR> people to require to download the data sets. My problem is while
AR> it's easy to determine that the data is missing and skip the test
AR> there isn't a good way of relaying this to the user. From the
AR> user's point of view "all tests are green" even though that
AR> statement is completely meaningless and I'd rather communicate
AR> that in a way that says "X tests skipped" so that one can look at
AR> and decide whether it's useful to re-run the tests with the data
AR> sets or not.

Would it make sense to make separate test case classes for the tests
that take a lot of data, and then users would only get those test
cases if they download the large data sets?

AR> Another place where I've seen this to happen is when platform
AR> specific tests are involved. A test which cannot be run on some
AR> platform should be skipped meaningfully (e.g., by telling the user
AR> it was skipped) rather than to appear green and working.

Personally, I'd make a subclass to hold the platform-specific tests and
make the whole thing pass if the platform does not match the one for
which the tests were designed.  If the test does not even apply, why
should I be concerned about why something was skipped when it's ok?

--
Best regards,
 Andres                            mailto:[hidden email]



Re: SUnit: Skipping tests?

Andreas.Raab
In reply to this post by Markus Gälli-3
Markus Gaelli wrote:
> If it's not possible to put the data zipped into a method because it
> would be too big somehow, I'd consider your two examples logically
> equivalent to "If the moon is made out of green cheese anything is
> allowed". So it is kind of ok that these tests are green.

It's 8MB a pop so no, I think it's not really feasible to stick that
test data into a method ;-)

> And you are suggesting to indicate clearly, which tests depend on some
> external resource?

Well, really, what I'm looking for is something that instead of saying
"all tests are green, everything is fine" says "all the tests we ran
were green, but there were several that were *not* run, so YMMV". I think
what I'm really looking for is something that instead of saying "x
tests, y passed" either says "x tests, y passed, z skipped" or simply
doesn't include the "skipped" ones in the number of tests being run. In
either case, looking at something that says "19 tests, 0 passed, 19
skipped" or simply "0 tests, 0 passed" is vastly more explicit than "19
tests, 19 passed" when in reality 0 were run.
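
For instance, the result summary could be extended along these lines (a
sketch only; the #skipped collection does not exist in SUnit and would
have to be added to TestResult):

TestResult >> printOn: aStream
        "Sketch: include a skipped count in the summary. #skipped is hypothetical;
        runCount and passedCount are existing TestResult accessors."
        aStream
                print: self runCount; nextPutAll: ' run, ';
                print: self passedCount; nextPutAll: ' passed, ';
                print: self skipped size; nextPutAll: ' skipped'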

Like, what if a test which doesn't have any assertion is simply not
counted? Such a test doesn't make sense to begin with, and then all a
precondition needs to do is bail out and the test doesn't count...

In any case, my complaint here is more about the *perception* of "these
tests are all green, everything must be fine" when in fact none of them
has tested anything.

Cheers,
   - Andreas


Re: SUnit: Skipping tests?

Adrian Lienhard
Maybe the "expected failures" feature of SUnit would do the job? You  
let the tests in question fail but mark them as expected failures  
depending on whether the resources are loaded or not. Visually, the  
test runner will run yellow but explicitly state that it expected to so.

Adrian


On Mar 27, 2006, at 11:09 , Andreas Raab wrote:

> Markus Gaelli wrote:
>> If it's not possible to put the data zipped into a method because  
>> it would be too big somehow, I'd consider your two examples  
>> logically equivalent to "If the moon is made out of green cheese  
>> anything is allowed". So it is kind of ok that these tests are green.
>
> It's 8MB a pop so no, I think it's not really feasible to stick  
> that test data into a method ;-)
>
>> And you are suggesting to indicate clearly, which tests depend on  
>> some external resource?
>
> Well, really, what I'm looking for is something that instead of  
> saying "all tests are green, everything is fine" says "all the  
> tests we ran were green, but there were various that were *not* run  
> so YMMV". I think what I'm really looking for is something that  
> instead of saying "x tests, y passed" either says "x tests, y  
> passed, z skipped" or simply doesn't include the "skipped" ones in  
> the number of tests being run. In either case, looking at something  
> that says "19 tests, 0 passed, 19 skipped" or simply "0 tests, 0  
> passed" is vastly more explicit than "19 tests, 19 passed" where in  
> reality 0 were run.
>
> Like, what if a test which doesn't have any assertion is simply not  
> counted? Doesn't make sense to begin with, and then all the  
> preconditions need to do is to bail out and the test doesn't count...
>
> In any case, my complaint here is more about the *perception* of  
> "these tests are all green, everything must be fine" when in fact,  
> none of them have tested anything.
>
> Cheers,
>   - Andreas
>



Re: SUnit: Skipping tests?

Markus Gälli-3
In reply to this post by Andreas.Raab

On Mar 27, 2006, at 11:09 AM, Andreas Raab wrote:

> Markus Gaelli wrote:
>> If it's not possible to put the data zipped into a method because  
>> it would be too big somehow, I'd consider your two examples  
>> logically equivalent to "If the moon is made out of green cheese  
>> anything is allowed". So it is kind of ok that these tests are green.
>
> It's 8MB a pop so no, I think it's not really feasible to stick  
> that test data into a method ;-)
>
>> And you are suggesting to indicate clearly, which tests depend on  
>> some external resource?
>
> Well, really, what I'm looking for is something that instead of  
> saying "all tests are green, everything is fine" says "all the  
> tests we ran were green, but there were various that were *not* run  
> so YMMV". I think what I'm really looking for is something that  
> instead of saying "x tests, y passed" either says "x tests, y  
> passed, z skipped" or simply doesn't include the "skipped" ones in  
> the number of tests being run. In either case, looking at something  
> that says "19 tests, 0 passed, 19 skipped" or simply "0 tests, 0  
> passed" is vastly more explicit than "19 tests, 19 passed" where in  
> reality 0 were run.
Yeah, and I think my precondition mechanism could do just that. I
mean, you want to annotate your test somehow as being that kind of
beast, so that the TestRunner can know about such tests and indicate
them as you suggest, no?
I was banging on the "external resource" point a bit, because until now
this is the only reason I can see for writing such kinds of tests, and
I wanted to make that very explicit... ;-)
>
> Like, what if a test which doesn't have any assertion is simply not  
> counted? Doesn't make sense to begin with, and then all the  
> preconditions need to do is to bail out and the test doesn't count...
I don't understand this remark within that context.

I know a guy who is using the
        shouldnt: aBlock raise: anExceptionalEvent
idiom a lot ;-) , which is good for knowing what is really tested ;-)
but otherwise does not provide any real assertion in the test. (See
most of the BitBltClipBugs tests, which should be platform independent.)

Also, tests without any assertions could still execute lots of
methods which have nice postconditions attached.
So besides being good smoke tests, they could also be seen as tests
of those very methods.
>
> In any case, my complaint here is more about the *perception* of  
> "these tests are all green, everything must be fine" when in fact,  
> none of them have tested anything.

Fine for me, all I proposed was a mechanism to denote them. :-)

Cheers,

Markus



Re: SUnit: Skipping tests?

Markus Gälli-3
In reply to this post by Adrian Lienhard

On Mar 27, 2006, at 11:18 AM, Adrian Lienhard wrote:

> Maybe the "expected failures" feature of SUnit would do the job?  
> You let the tests in question fail but mark them as expected  
> failures depending on whether the resources are loaded or not.  
> Visually, the test runner will run yellow but explicitly state that  
> it expected to so.
...which also could be denoted in the test by

BarTest >> testFoo
        self precondition: [Bar includesSelector: #foo]

Tests which are known not to be implemented yet could then easily be
selected by asking for preconditions that include #includesSelector...

Cheers,

Markus

>
> Adrian
>
>
> On Mar 27, 2006, at 11:09 , Andreas Raab wrote:
>
>> Markus Gaelli wrote:
>>> If it's not possible to put the data zipped into a method because  
>>> it would be too big somehow, I'd consider your two examples  
>>> logically equivalent to "If the moon is made out of green cheese  
>>> anything is allowed". So it is kind of ok that these tests are  
>>> green.
>>
>> It's 8MB a pop so no, I think it's not really feasible to stick  
>> that test data into a method ;-)
>>
>>> And you are suggesting to indicate clearly, which tests depend on  
>>> some external resource?
>>
>> Well, really, what I'm looking for is something that instead of  
>> saying "all tests are green, everything is fine" says "all the  
>> tests we ran were green, but there were various that were *not*  
>> run so YMMV". I think what I'm really looking for is something  
>> that instead of saying "x tests, y passed" either says "x tests, y  
>> passed, z skipped" or simply doesn't include the "skipped" ones in  
>> the number of tests being run. In either case, looking at  
>> something that says "19 tests, 0 passed, 19 skipped" or simply "0  
>> tests, 0 passed" is vastly more explicit than "19 tests, 19  
>> passed" where in reality 0 were run.
>>
>> Like, what if a test which doesn't have any assertion is simply  
>> not counted? Doesn't make sense to begin with, and then all the  
>> preconditions need to do is to bail out and the test doesn't count...
>>
>> In any case, my complaint here is more about the *perception* of  
>> "these tests are all green, everything must be fine" when in fact,  
>> none of them have tested anything.
>>
>> Cheers,
>>   - Andreas
>>
>
>



Re: SUnit: Skipping tests?

Bert Freudenberg-3
In reply to this post by Andreas.Raab

On 27.03.2006, at 11:09, Andreas Raab wrote:

> Markus Gaelli wrote:
>> If it's not possible to put the data zipped into a method because  
>> it would be too big somehow, I'd consider your two examples  
>> logically equivalent to "If the moon is made out of green cheese  
>> anything is allowed". So it is kind of ok that these tests are green.
>
> It's 8MB a pop so no, I think it's not really feasible to stick  
> that test data into a method ;-)
>
>> And you are suggesting to indicate clearly, which tests depend on  
>> some external resource?
>
> Well, really, what I'm looking for is something that instead of  
> saying "all tests are green, everything is fine" says "all the  
> tests we ran were green, but there were various that were *not* run  
> so YMMV". I think what I'm really looking for is something that  
> instead of saying "x tests, y passed" either says "x tests, y  
> passed, z skipped" or simply doesn't include the "skipped" ones in  
> the number of tests being run. In either case, looking at something  
> that says "19 tests, 0 passed, 19 skipped" or simply "0 tests, 0  
> passed" is vastly more explicit than "19 tests, 19 passed" where in  
> reality 0 were run.
>
> Like, what if a test which doesn't have any assertion is simply not  
> counted? Doesn't make sense to begin with, and then all the  
> preconditions need to do is to bail out and the test doesn't count...
>
> In any case, my complaint here is more about the *perception* of  
> "these tests are all green, everything must be fine" when in fact,  
> none of them have tested anything.

Other unit test frameworks support skipping tests. One pattern is to
raise a SkipTest exception, in which case the test is added to the
"skipped" list.

The good thing about implementing this with exceptions is that it  
would work nicely even if the particular test runner does not yet  
know about skipping.

Another nice XPish thing is to mark tests as ToDo - it's an expected  
failure, but you communicate that you intend to fix it soon.

See, e.g., http://twistedmatrix.com/projects/core/documentation/howto/policy/test-standard.html#auto6
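
In Squeak this could look roughly like the following (TestSkipped and a
#skipped collection on TestResult are assumptions, not existing SUnit
classes; FileDirectory>>fileExists: does exist):

FloatConformanceTest >> testBitPatterns
        "Made-up test: bail out with the hypothetical TestSkipped exception
        when the large sample file has not been downloaded."
        (FileDirectory default fileExists: 'float-samples.dat')
                ifFalse: [^ TestSkipped signal: 'sample data not downloaded'].
        "... run the real assertions against the sample data here ..."

TestResult >> runCase: aTestCase
        "Sketch of the runner side: record skipped tests separately. A real
        change would extend the existing implementation, and #skipped would
        have to be added to TestResult."
        [aTestCase runCase. self passed add: aTestCase]
                on: TestSkipped
                do: [:ex | self skipped add: aTestCase]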

- Bert -



Re: SUnit: Skipping tests?

Markus Gälli-3

On Mar 27, 2006, at 12:28 PM, Bert Freudenberg wrote:

>>> And you are suggesting to indicate clearly, which tests depend on  
>>> some external resource?
>>
>> Well, really, what I'm looking for is something that instead of  
>> saying "all tests are green, everything is fine" says "all the  
>> tests we ran were green, but there were various that were *not*  
>> run so YMMV". I think what I'm really looking for is something  
>> that instead of saying "x tests, y passed" either says "x tests, y  
>> passed, z skipped" or simply doesn't include the "skipped" ones in  
>> the number of tests being run. In either case, looking at  
>> something that says "19 tests, 0 passed, 19 skipped" or simply "0  
>> tests, 0 passed" is vastly more explicit than "19 tests, 19  
>> passed" where in reality 0 were run.
>>
>> Like, what if a test which doesn't have any assertion is simply  
>> not counted? Doesn't make sense to begin with, and then all the  
>> preconditions need to do is to bail out and the test doesn't count...
>>
>> In any case, my complaint here is more about the *perception* of  
>> "these tests are all green, everything must be fine" when in fact,  
>> none of them have tested anything.
>
> Other Unit Test frameworks support skipping tests. One pattern is  
> to raise a SkipTest exception, in which case the test it added to  
> the "skipped" list.
>
> The good thing about implementing this with exceptions is that it  
> would work nicely even if the particular test runner does not yet  
> know about skipping.
>
> Another nice XPish thing is to mark tests as ToDo - it's an  
> expected failure, but you communicate that you intend to fix it soon.
>
> See, e.g., http://twistedmatrix.com/projects/core/documentation/ 
> howto/policy/test-standard.html#auto6

So the circle is closing... exceptions and preconditions again! ;-)
So Andreas, do you want to introduce some ResourceNotAvailable and ToDo
exceptions ;-) , or do we get away without them and just throw the
PreconditionError that I was suggesting in an earlier thread?

As said in the previous mail, ToDos could easily be identified by just
sticking to the convention of not even starting the method under test,
which is a good idea in that case anyhow.
As a nice side effect, one would not even have to touch the tests later
when the method under test gets implemented.

Cheers,

Markus


Re: SUnit: Skipping tests?

Bert Freudenberg-3

On 27.03.2006, at 12:54, Markus Gaelli wrote:

>
> On Mar 27, 2006, at 12:28 PM, Bert Freudenberg wrote:
>
>>>> And you are suggesting to indicate clearly, which tests depend  
>>>> on some external resource?
>>>
>>> Well, really, what I'm looking for is something that instead of  
>>> saying "all tests are green, everything is fine" says "all the  
>>> tests we ran were green, but there were various that were *not*  
>>> run so YMMV". I think what I'm really looking for is something  
>>> that instead of saying "x tests, y passed" either says "x tests,  
>>> y passed, z skipped" or simply doesn't include the "skipped" ones  
>>> in the number of tests being run. In either case, looking at  
>>> something that says "19 tests, 0 passed, 19 skipped" or simply "0  
>>> tests, 0 passed" is vastly more explicit than "19 tests, 19  
>>> passed" where in reality 0 were run.
>>>
>>> Like, what if a test which doesn't have any assertion is simply  
>>> not counted? Doesn't make sense to begin with, and then all the  
>>> preconditions need to do is to bail out and the test doesn't  
>>> count...
>>>
>>> In any case, my complaint here is more about the *perception* of  
>>> "these tests are all green, everything must be fine" when in  
>>> fact, none of them have tested anything.
>>
>> Other Unit Test frameworks support skipping tests. One pattern is  
>> to raise a SkipTest exception, in which case the test it added to  
>> the "skipped" list.
>>
>> The good thing about implementing this with exceptions is that it  
>> would work nicely even if the particular test runner does not yet  
>> know about skipping.
>>
>> Another nice XPish thing is to mark tests as ToDo - it's an  
>> expected failure, but you communicate that you intend to fix it soon.
>>
>> See, e.g., http://twistedmatrix.com/projects/core/documentation/ 
>> howto/policy/test-standard.html#auto6
>
> So the circle is closing... exceptions and preconditions again! ;-)
> So Andreas, want to introduce some ResourceNotAvailable and ToDo  
> exceptions ;-) , or do we get away without them and just throw a  
> PreconditionError that I was suggesting in an earlier thread?

It's all about communicating the test writer's intent to the test  
runner. And I think I'd prefer "x tests, y passed, z skipped" as  
Andreas suggested.

> As said in the previous mail ToDo's could be easily figured by just  
> sticking to the convention not to even start the method under test,  
> which is a good idea in that case anyhow.
> As a nice effect one would not even have to touch the tests later  
> when the method under test gets implemented.

However, you wouldn't get the "unexpected success" mentioned in the  
link above.

- Bert -



Re: SUnit: Skipping tests?

David T. Lewis
In reply to this post by Adrian Lienhard
On Mon, Mar 27, 2006 at 11:18:58AM +0200, Adrian Lienhard wrote:
> Maybe the "expected failures" feature of SUnit would do the job? You  
> let the tests in question fail but mark them as expected failures  
> depending on whether the resources are loaded or not. Visually, the  
> test runner will run yellow but explicitly state that it expected to so.

Can you give an example of how to mark a test that is expected to
fail?  I'm looking for something like the following, but I must be
missing something obvious.

  (SmalltalkImage current platformName = 'unix') ifFalse: [self expectFailure]

Thanks,

Dave



Re: SUnit: Skipping tests?

Göran Krampe
In reply to this post by Bert Freudenberg-3
Hi!

Bert Freudenberg <[hidden email]> wrote:
> It's all about communicating the test writer's intent to the test  
> runner. And I think I'd prefer "x tests, y passed, z skipped" as  
> Andreas suggested.

But as Andres Valloud implied, there is a difference between:

- "skipped because I could not run them due to missing resources but I
wanted to run them"
- "skipped because the tests do not even apply to this platform and
there is no point in even trying to run them"

So perhaps even:

"x tests, y passed, z skipped, k not applicable"

:)

regards, Göran


Re: SUnit: Skipping tests?

Adrian Lienhard
In reply to this post by David T. Lewis
You can override TestCase>>#expectedFailures in the test case.
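
For example, something along these lines (the class and test selector
names are made up; #expectedFailures answers the selectors of tests
that are allowed to fail):

UnixPipeTest >> expectedFailures
        "On non-unix platforms these tests are expected to fail; they will
        show up as expected failures rather than as red failures."
        ^ (SmalltalkImage current platformName = 'unix')
                ifTrue: [#()]
                ifFalse: [#(testNamedPipes testSignalHandling)]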

HTH,
Adrian

On Mar 27, 2006, at 13:41 , David T. Lewis wrote:

> On Mon, Mar 27, 2006 at 11:18:58AM +0200, Adrian Lienhard wrote:
>> Maybe the "expected failures" feature of SUnit would do the job? You
>> let the tests in question fail but mark them as expected failures
>> depending on whether the resources are loaded or not. Visually, the
>> test runner will run yellow but explicitly state that it expected  
>> to so.
>
> Can you give an example of how to mark a test that is expected to
> fail?  I'm looking for something like the following, but I must be
> missing something obvious.
>
>   (SmalltalkImage current platformName = 'unix') ifFalse: [self  
> expectFailure]
>
> Thanks,
>
> Dave
>
>



Re: SUnit: Skipping tests?

Diego Fernández
In reply to this post by Andreas.Raab

On 3/27/06, Andreas Raab <[hidden email]> wrote:
> It's 8MB a pop so no, I think it's not really feasible to stick that
> test data into a method ;-)


A test that needs 8MB of data doesn't look like a unit test to me.
Why do you need 8MB? Can't you test the same thing with only a few bytes?

I think the problem is that sometimes it is very useful to use the assertion framework and the test runner to write other kinds of tests ("user story tests", "stress tests", etc.). But the SUnit framework depends on classification to detect all available test cases (unit tests or not).
When I run all the unit tests on the system, I expect unit tests, that is, small and fast tests.

Maybe we could have a Test trait or something like that to reuse the assertion framework and the test runners, and keep the TestCase hierarchy as a classification for unit tests.




Re: SUnit: Skipping tests?

Markus Gälli-3
In reply to this post by Bert Freudenberg-3

On Mar 27, 2006, at 1:23 PM, Bert Freudenberg wrote:

>> So the circle is closing... exceptions and preconditions again! ;-)
>> So Andreas, want to introduce some ResourceNotAvailable and ToDo  
>> exceptions ;-) , or do we get away without them and just throw a  
>> PreconditionError that I was suggesting in an earlier thread?
>
> It's all about communicating the test writer's intent to the test  
> runner. And I think I'd prefer "x tests, y passed, z skipped" as  
> Andreas suggested.

Right. I still fail to see why this wouldn't be possible using
preconditions and basically putting all tests whose precondition fails
into the skipped section.

>
>> As said in the previous mail ToDo's could be easily figured by  
>> just sticking to the convention not to even start the method under  
>> test, which is a good idea in that case anyhow.
>> As a nice effect one would not even have to touch the tests later  
>> when the method under test gets implemented.
>
> However, you wouldn't get the "unexpected success" mentioned in the  
> link above.

Hmmm, right!
Maybe these to-do tests should not be handled via failed
preconditions but via some idiom like:

FooTest >> testBar

        self should: [Foo new bar] stillRaiseButIDoPromiseToFixItReallySoonNowTM: Error

TestRunner could be tweaked so that failing tests sending the above
message - or a shorter one ;-) - land in a special section, which
would be "not yet implemented" / "unexpected success" respectively.

Just trying to keep the number of concepts small.

Cheers,

Markus



Re: SUnit: Skipping tests?

stéphane ducasse-2
In reply to this post by Andreas.Raab
Maybe we could use method annotations to carry this kind of behavior.
Lukas started to do something in that direction.
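
Just to make the idea concrete, such an annotation might look something
like this (the <skipUnless:> pragma and the condition selector are
purely hypothetical; a tweaked test runner would have to look the
pragma up on the compiled test method and honour it):

FloatConformanceTest >> testBitPatterns
        <skipUnless: #largeDataSetsPresent>
        "Hypothetical pragma: a modified runner could read it from the compiled
        test method and count the test as skipped when the named condition
        method answers false."
        "... the real assertions against the sample data would go here ..."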

> Well, really, what I'm looking for is something that instead of  
> saying "all tests are green, everything is fine" says "all the  
> tests we ran were green, but there were various that were *not* run  
> so YMMV". I think what I'm really looking for is something that  
> instead of saying "x tests, y passed" either says "x tests, y  
> passed, z skipped" or simply doesn't include the "skipped" ones in  
> the number of tests being run. In either case, looking at something  
> that says "19 tests, 0 passed, 19 skipped" or simply "0 tests, 0  
> passed" is vastly more explicit than "19 tests, 19 passed" where in  
> reality 0 were run.
>
> Like, what if a test which doesn't have any assertion is simply not  
> counted? Doesn't make sense to begin with, and then all the  
> preconditions need to do is to bail out and the test doesn't count...
>
> In any case, my complaint here is more about the *perception* of  
> "these tests are all green, everything must be fine" when in fact,  
> none of them have tested anything.

For that I would extend SUnit, because this idea of skipped tests is
nice to have.


Re: SUnit: Skipping tests?

David T. Lewis
In reply to this post by Adrian Lienhard
Thanks!

On Mon, Mar 27, 2006 at 02:32:59PM +0200, Adrian Lienhard wrote:

> You can override TestCase>>#expectedFailures in the test case.
>
> HTH,
> Adrian
>
> On Mar 27, 2006, at 13:41 , David T. Lewis wrote:
> >
> > Can you give an example of how to mark a test that is expected to
> > fail?  I'm looking for something like the following, but I must be
> > missing something obvious.
> >
> >   (SmalltalkImage current platformName = 'unix') ifFalse: [self  
> > expectFailure]


Re: SUnit: Skipping tests?

Lukas Renggli
In reply to this post by stéphane ducasse-2
> May be we could use method annotation to carry this kind of behavior.
> Lukas started to do something in that direction.

Yes, but what I did was in the context of SmallLint, so that methods
could be annotated to expect or ignore certain SmallLint rules. The
class LintTestCase (a subclass of TestCase) then queries these pragmas
when performing the rules and raises errors only in appropriate cases.

> For that I would extend SUnit because this idea of skipped tests is
> nice to have.

While developing the new test-runner and this SmallLint extension I
struggled with SUnit several times. Even though there are always ways
to subclass and configure it to suit special needs, it very soon gets
ugly and cumbersome.

After 3 years of not having touched Java, I decided to have a quick
look at JUnit 4 [1] and I must say that they changed a lot for the
better. It makes me sad to see that SUnit still looks more or less
the same as the first time I used it, about 4 years ago. We should
really try to improve it! Imagine a newbie coming from Java and seeing
a testing framework that looks like JUnit 1.0 :-/

A new test runner should be the second step (I've done that already,
because the old one was simply causing too much pain); an improved
test model should be the first step! ;-)

Lukas

[1] <http://junit.sourceforge.net/javadoc_40>

--
Lukas Renggli
http://www.lukas-renggli.ch


Re: SUnit: Skipping tests?

Andreas.Raab
In reply to this post by Diego Fernández
Diego Fernandez wrote:
> On 3/27/06, *Andreas Raab* <[hidden email]
> <mailto:[hidden email]>> wrote:
>
>     It's 8MB a pop so no, I think it's not really feasible to stick that
>     test data into a method ;-)
>
>
> A test that needs 8mb of data doesn't look like a unit test to me.
> Why you need 8mb, you can't test the same with a few bytes only?

It's test data to guarantee that floating point operations create the
same bit patterns across platforms (1 million samples per operation).
BTW, there is a "common" variant of those tests that runs with a few
bytes only (by MD5-hashing the results and comparing the hash to the
expected one), but that doesn't help you understand what is going wrong
and where. In any case, my inquiry wasn't about this concrete test but
rather about the issue of skipped tests in general. This example just
reminded me of the issue again (I had thought about it before when I
wrote certain platform tests).

Cheers,
   - Andreas


Re: SUnit: Skipping tests?

stéphane ducasse-2
In reply to this post by Lukas Renggli
> While developing the new test-runner and this SmallLint extension I
> struggled with SUnit several times. Even-tough there are always ways
> to subclass and configure it to suit special needs, it gets very soon
> ugly and cumbersome.
>
> After 3 years of not having touched Java, I decided to have a quick
> look at JUnit 4 [1] and I must say that they changed a lot to the
> positive. It makes me sad to see that SUnit still looks more or less
> the same as the first time I've used it about 4 years ago. We should
> really try to improve it! Imagine a newbie coming from Java and seeing
> a testing-framework looks like JUnit 1.0 :-/

Excellent idea. Any takers? I think that backward compatibility
between the dialects is a good idea as long as it does not get in our
way.

So if you have suggestions, please share them.

> A new test runner should be the second step (I've done that already,
> because the old one was simply causing too much pain), an improved
> test-model should be the first step! ;-)

