Hi all,
What results do you get when running all tests in Pharo and PharoCore 1.0 on Mac/Windows/Linux?

Lukas has MCPackageTest>>#testUnload failing on Hudson (a Linux server, I assume). When running the tests locally on the Mac, the test BlockContextTest>>#testTrace fails, but only the first time. Apparently ContextPart>>#trace: is broken [1].

When I run all tests in Pharo 1.0 on a Mac VM 4.2.3beta1U, the first time I get no failures. When running them a second time I get the following failures:

MCInitializationTest>>testWorkingCopy
MCSnapshotTest>>testInstanceReuse
ProcessspecificTest>>testDynamicVariable
ProcessspecificTest>>testLocalVariable
ReleaseTest>>testObsoleteClasses

If I then run only the tests above, they all pass. So there is certainly something fishy with the tests or with how they are run.

I would like to have a reliable, green test suite for Pharo 1.0. Can you report the results you get, including the VM/platform? Any idea why the results are not stable?

Thanks,
Adrian

[1] http://code.google.com/p/pharo/issues/detail?id=2210
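A minimal workspace sketch of the comparison described above, assuming the standard SUnit API available in PharoCore 1.0 (TestCase class>>suite and isAbstract, TestSuite named:/addTests:/run): build one suite from all concrete TestCase subclasses and run it twice in the same image, printing the result of each pass.

	"Run the full suite twice in a row; if the instability is reproducible,
	the second result shows failures that the first did not."
	| suite |
	suite := TestSuite named: 'All tests'.
	TestCase allSubclasses do: [ :each |
		each isAbstract ifFalse: [ suite addTests: each suite tests ] ].
	2 timesRepeat: [ Transcript show: suite run printString; cr ]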
On Sat, Mar 27, 2010 at 6:17 PM, Adrian Lienhard <[hidden email]> wrote:
> Hi all,
> [...]

Same result on Linux with the VM I've built.

Laurent Laffont
Hi Adrian,

Who is working on what?

On Mar 27, 2010, at 6:17 PM, Adrian Lienhard wrote:
> [...]
> When I run all tests in Pharo 1.0 on a Mac VM 4.2.3beta1U, the first time I get no failures. When running them a second time I get the following failures:
>
> MCInitializationTest>>testWorkingCopy

When I run this test twice it is green: "self debug: #testWorkingCopy"

> MCSnapshotTest>>testInstanceReuse

Same here: "self debug: #testInstanceReuse"

> ReleaseTest>>testObsoleteClasses

Probably another test creates a class and it is not cleaned up.

I think that these tests do not show bugs but are probably badly designed tests. So why don't we move them to expected failures and release 1.0?

> ProcessspecificTest>>testDynamicVariable
> ProcessspecificTest>>testLocalVariable
> [...]
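For reference, "move them to expected failures" would rely on SUnit's #expectedFailures hook: a TestCase subclass can return a list of selectors whose failures are counted as expected rather than turning the run red. A sketch of what that would look like for one of the classes above (illustrative only; this change was not actually made):

	MCInitializationTest >> expectedFailures
		^ #(testWorkingCopy)

As Adrian notes in his reply below, the catch is that an expected failure which then passes is itself reported, so this only helps for tests that fail consistently.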
On Mar 27, 2010, at 20:56, Stéphane Ducasse wrote:
> Hi Adrian,
>
> Who is working on what?

- I was working on the website and change log, and checked the tests. I can also check the workspace contents in the Pharo image. I plan to put the change log on the website and simply link from the workspace to that page.
- Marcus said he would do a one-click image.
- I assume Mariano is going to build a Pharo RC4 when we have a new PharoCore RC4 (there have been some updated packages like Metacello and OB, if I understood correctly).

[...]

> I think that these tests do not show bugs but are probably badly designed tests.
> So why don't we move them to expected failures and release 1.0?

Then there would be a failure when these tests don't fail, which happens every second run. So we would need to remove them. But if somebody has some cycles to figure out why they fail, we could probably fix them.

Adrian
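If somebody does find the cycles, the ReleaseTest>>testObsoleteClasses case would presumably come down to the cleanup Stéphane suspects: some test defines a class and does not remove it cleanly. The usual shape of such a fix is a tearDown along these lines (hypothetical test class and class name, shown only to illustrate the pattern):

	SomeClassCreatingTest >> tearDown
		"Remove the class created during the test so nothing is left
		behind once the whole suite has run."
		Smalltalk at: #MyTemporaryTestClass ifPresent: [ :cls | cls removeFromSystem ].
		super tearDown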
PharoCore RC4 or PharoDev RC4? Because Metacello and OB are for Dev... not Core :)
Or will there be a PharoCore RC4?

[...]
On Mar 29, 2010, at 09:22, Mariano Martinez Peck wrote:
>> - I assume Mariano is going to build a Pharo RC4 when we have a new PharoCore RC4 (there have been some updated packages like Metacello and OB, if I understood correctly).
>
> PharoCore RC4 or PharoDev RC4? Because Metacello and OB are for Dev... not Core :)
> Or will there be a PharoCore RC4?

Yes, since we have to do something to fix the situation with the tests, we'll have to do an update, which will be declared PharoCore RC4.

Adrian
For now
>> I think that these tests do not show bugs but are probably badly designed tests.
>> So why don't we move them to expected failures and release 1.0?
>
> Then there would be a failure when these tests don't fail, which happens every second run. So we would need to remove them. But if somebody has some cycles to figure out why they fail, we could probably fix them.

We should evaluate if we have the time and, if not, push 1.0 out of the door.

Stef