destructive / regression tests

destructive / regression tests

Camillo Bruni-3
I've been thinking these days about how to implement regression tests.
Obviously the core requirement is to launch a separate image and capture terminal output and/or result files.

For now I will put them into a separate repository:

        http://smalltalkhub.com/#!/~Pharo/regression-testing

Some questions:
- Is there a test ConfigurationOf* which can be used to try loading different versions / groups?
- Is it ok to rely on OSProcess for now to launch a separate new image?
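For the first question, a minimal sketch of the usual Gofer/Metacello dance, assuming a ConfigurationOfRegressionTesting gets published to the repository above (the configuration name and the 'Tests' group are assumptions, not something the thread confirms):

	"load the configuration from the repository, then a specific version / group"
	Gofer it
		url: 'http://smalltalkhub.com/mc/Pharo/regression-testing/main';
		package: 'ConfigurationOfRegressionTesting';
		load.
	((Smalltalk at: #ConfigurationOfRegressionTesting) project version: #stable)
		load: 'Tests'.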

Re: destructive / regression tests

Igor Stasenko

+1, we need such tests.
And actually it would be even useful to know which code crashes the
system; e.g. we could even have tests which succeed only if they crash the VM.
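A minimal sketch of such a crash-expecting test, assuming OSProcess's waitForCommand: convenience (child.image and crash.st are hypothetical names; exitStatus 0 means the child exited cleanly):

	testChildImageCrashes
		"Succeed only when the child image dies instead of exiting cleanly."
		| child |
		child := OSProcess waitForCommand: './pharo child.image crash.st'.
		self deny: child exitStatus = 0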


--
Best regards,
Igor Stasenko.

Re: destructive / regression tests

Camillo Bruni-3

On 2013-01-24, at 18:16, Igor Stasenko <[hidden email]> wrote:

> +1, we need such tests.
> And actually it would be even useful to know which code crashes the
> system; e.g. we could even have tests which succeed only if they crash the VM.


Yes, exactly!!

I even thought about doing random code-change validation ;)

1. choose a random method (minus some uninteresting packages, or only those with methods)
2. make it return a wrong value
3. run all tests
4. make sure we get at least one assertion failure ;)
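A rough sketch of steps 1 and 2, purely illustrative: it picks any class in the image (no package filtering), assumes the message pattern fits on the first source line, and simply replaces the whole method body:

	| method pattern |
	method := Smalltalk allClasses atRandom methods atRandom.
	pattern := method sourceCode lines first.
	"recompile the method so its entire body becomes '^ nil'"
	method methodClass compile: pattern , String cr , '	^ nil'.

Running the full suite afterwards (step 3) should then report at least one failure or error (step 4); if it stays green, the mutated method is not covered.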

Re: destructive / regression tests

Marcus Denker-4
In reply to this post by Igor Stasenko

On Jan 24, 2013, at 6:16 PM, Igor Stasenko <[hidden email]> wrote:

> +1, we need such tests.
> And actually it would be even useful to know which code crashes the
> system; e.g. we could even have tests which succeed only if they crash the VM.

Yes!

We did a first (ad-hoc) regression tester for Opal:

        https://ci.inria.fr/rmod/job/OpalRegression/

We just have the tests in different packages and take care which to run when.
The regression-test job first runs slow tests that cannot do any harm
(compiling all methods, but not installing them), then as a second step runs
tests that recompile the whole image; this image we save, and we then run all
tests on it:


./vm.sh $JOB_NAME.image test --junit-xml-output "OpalCompiler-RegressionTests.*"
./vm.sh $JOB_NAME.image test --junit-xml-output "OpalCompiler-DestructiveRegressionTests.*" --save
./vm.sh $JOB_NAME.image test --junit-xml-output ".*"

With that in place we can be quite sure that a change done to the compiler is not doing anything wrong.
(and that speeds up development quite a lot…)
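For illustration, a sketch of what a test in the destructive second stage could look like (the class and selector are hypothetical, not taken from the actual Opal packages); since it mutates the image, it only makes sense together with --save and a throwaway copy:

	OpalDestructiveTest >> testRecompileWholeImage
		"Recompile every method with the currently installed compiler;
		this changes the image, hence the fresh copy afterwards."
		Smalltalk allClasses do: [ :each | each compileAll ]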

	Marcus

Re: destructive / regression tests

Camillo Bruni-3
I think I got it! Last night's hacking session was fruitful!
For now I have:
- simple command-line tests that run command-line arguments in a separate
  image and capture stdout/stderr
- complete test cases that are run in a separate image (including the UI)
  by forking off images at the right point during the test case

To accomplish that I require OSProcess and redirect stderr/stdout to files.
Furthermore I added a new save method to SmalltalkImage which does not transfer
control to the new image but rather sticks to the original image.

I will prepare a Configuration now.
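A minimal sketch of the first kind of test, assuming OSProcess's waitForCommand: plus plain shell redirection for the stdout/stderr capture (the file names and the eval expression are illustrative):

	testEvalPrintsResult
		| output |
		OSProcess waitForCommand:
			'./pharo child.image eval "3 + 4" > stdout.txt 2> stderr.txt'.
		output := 'stdout.txt' asFileReference contents.
		self assert: (output includesSubstring: '7')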


Re: destructive / regression tests

Sean P. DeNigris
Camillo Bruni-3 wrote:
> I think I got it! Last night's hacking session was fruitful!
> For now I have:
> ...
Nice work, cami! Thanks :)
Cheers,
Sean