argh, tests are failing!


argh, tests are failing!

Guillermo Polito

Re: argh, tests are failing!

Pavel Krivanek-3
Of course you are right, the Integrators should really take care of it.

Before, there were only the usual random failures, but the main problem is with this PR:
https://github.com/pharo-project/pharo/pull/264

The new test testHashMethodNeedsToBeInComparingProtocol fails on the class IceSemanticVersion.

-- Pavel

2017-09-10 10:13 GMT+02:00 Guillermo Polito <[hidden email]>:
Hi all,

For the last couple of builds the following tests have been failing consistently:

- GT.EventRecorder.Tests.Core.GTEventRecorderTest.testDeliverNow2
- GT.EventRecorder.Tests.Core.GTEventRecorderTest.testNotDeliveredDataShouldBeResent
- Kernel.Tests.Processes.MutexTest.testFailedCriticalSectionShouldUnblockWaitingOne
- ReleaseTests.Categorization.ProperMethodCategorizationTest.testHashMethodNeedsToBeInComparingProtocol (repeated across several build configurations)

Green builds are the only hard metric to say if the build is healthy or not.

- We should not integrate anything until the build is green again...

Also, Pablo and I spent a lot of time getting a green build on all platforms... I'd like to spend my time on other, more fun stuff than the CI :/

--
Guille Polito
Research Engineer
French National Center for Scientific Research - http://www.cnrs.fr
Web: http://guillep.github.io
Phone: +33 06 52 70 66 13



Re: argh, tests are failing!

Stephane Ducasse-3
Yes, we know, but we have no idea how to recategorise the Iceberg protocols.


Re: argh, tests are failing!

Stephane Ducasse-3
Give me a system that I can fully change and you will see fewer trivial failing tests :)


Re: argh, tests are failing!

CyrilFerlicot
In reply to this post by Stephane Ducasse-3
On 10/09/2017 at 18:24, Stephane Ducasse wrote:
> Yes, we know, but we have no idea how to recategorise the Iceberg protocols.

The protocol is already recategorised in the Iceberg dev branch. Now we need a new stable release of Iceberg, and to integrate this release into Pharo.


--
Cyril Ferlicot
https://ferlicot.fr

http://www.synectique.eu
2 rue Jacques Prévert 01,
59650 Villeneuve d'ascq France



Re: argh, tests are failing!

Stephane Ducasse-3
Ah, thanks. So far I do not know how to integrate external projects with the new process, so I need to learn.


Re: argh, tests are failing!

Pavel Krivanek-3
Iceberg is managed very differently from the other packages: Esteban needs to create a tag for a specific version, which is then referenced in BaselineOfIDE>>#loadIceberg.
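For reference, that method (quoted in full later in this thread by Guillermo) has roughly this shape, so integrating a new Iceberg means tagging a release in the iceberg repository and updating the tag referenced here:

BaselineOfIDE >> loadIceberg
	Metacello new
		baseline: 'Iceberg';
		repository: 'github://pharo-vcs/iceberg:v0.5.7';
		load.
	(Smalltalk classNamed: #Iceberg) enableMetacelloIntegration: true.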

-- Pavel


Re: argh, tests are failing!

Juraj Kubelka
In reply to this post by Guillermo Polito
Hi,

I have checked the EventRecorderTests and they work on my computer. The only reason they might not work in other environments is that there is an assert on 'semaphore waitTimeoutMSecs: 200'. I can put a higher timeout if necessary. The 200 ms timeout has worked for a couple of years, right? There might be another issue.
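For context, the assertion in question has roughly this shape (a sketch, assuming waitTimeoutMSecs: answers true when the wait timed out; the surrounding test setup is not reproduced here), and the quick fix mentioned above is simply a larger literal:

	"current: fail if the recorder has not signalled the semaphore within 200 ms"
	self deny: (semaphore waitTimeoutMSecs: 200).

	"possible quick fix: give slower machines more headroom"
	self deny: (semaphore waitTimeoutMSecs: 1000).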

Cheers,
Juraj


Re: argh, tests are failing!

Torsten Bergmann
In reply to this post by Stephane Ducasse-3
Hi,

to explain:

We have lots of uncategorized methods and uncommented classes. For some time we had to accept this ugly situation as old and legacy code. For Pharo 4 and 5 I invested a lot of time to clean this up right before the release date. But for 6.0 such a round was not done, and in 6.0 and 6.1 it looks like a mess again.

Why? Because we introduced new features and code and never defined a certain level of quality for the code we include. But our initial goal with Pharo was (and hopefully still is) to clean things up and do better than before.

So we should not forget about quality, and I spent some time on this again recently - now for Pharo 7. For instance, #setUp and #tearDown in SUnit should be in "running", as defined for TestCase subclasses. Also #hash and #= should be in the "comparing" protocol. The goal is also to come back to an image where all methods are categorized and all classes have a comment.

But for me it absolutely makes no sense to reinvest the time over and over into the same cleanups while others can easily make a mess again.

So we should RAISE THE QUALITY BAR to ensure that we keep to such a quality level. Especially as I see each build as a release, and the real release once a year is often more or less a snapshot.

So I also wrote a new test class called "ProperMethodCategorizationTest", and after cleaning up #hash it was green. ProperMethodCategorizationTest>>#testHashMethodNeedsToBeInComparingProtocol showed no problem, so I committed and sent a pull request. This looked like any other new contribution.
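The idea behind such a check can be sketched roughly like this (an illustration only, not the actual ProperMethodCategorizationTest source; the iteration details are assumptions):

testHashMethodNeedsToBeInComparingProtocol
	"Every class that defines #hash locally should keep it in the 'comparing' protocol."
	| offenders |
	offenders := Smalltalk globals allClassesAndTraits select: [ :each |
		(each includesSelector: #hash) and: [ (each >> #hash) protocol asString ~= 'comparing' ] ].
	self assert: offenders isEmpty description: offenders asString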

What I did not know at the time was that the Iceberg code is not in the Pharo git repository - it is managed externally. So while I fixed the wrongly categorized hash method of an Iceberg class (IceSemanticVersion) in my image, the system did not show me that this change did not make it to GitHub.

Now I know - but this has bad side effects when we change packages that are in the image but not managed in the pharo repository.

Long story short: one cannot fix this situation with a simple PR, as the Iceberg code is not under the "pharo" repo umbrella.

I already submitted a PR for Iceberg
 
 https://github.com/pharo-vcs/iceberg/pull/458

and Esteban has already merged it - but only in the Iceberg repository. We now need to integrate the new Iceberg version into Pharo.

Several lessons learned from my side:
  - doing changes and having green tests in the image is not enough
  - a commit and a PR do not mean that all of your changes are on GitHub
  - it is not a good situation that part of the image code is managed in the "pharo" repository while another part is managed externally - I see this as a problem of the new process that we should discuss and address
  - we should nonetheless try to define tests that show the edges where we can improve and clean up (without the test it would not have been noticed that, while I did my best to clean up, the current way of managing Iceberg prevented my fix from getting into the image in the first place)

I don't know what needs to be done to include a new Iceberg version, or when this will happen on Esteban's side - but as soon as it is done this test should be green again.

Thanks
T.



Re: argh, tests are failing!

Stephane Ducasse-3
Torsten, for the recategorisation can you use the categoriser? I improved it a bit, and that way we will get the same convention everywhere, instead of yours and mine managed differently. For example, an "initialize - release" protocol does not exist.

In the automaticRecategoriser package I published a while ago (not the one in the image) you can find an analyser for a given selector. It shows the common usage and the outliers. There is also a method to automatically fix the outliers, and it handles * (extension) categories.

MCHttpRepository
	location: 'http://smalltalkhub.com/mc/StephaneDucasse/AutomaticMethodCategorizer/main'
	user: ''
	password: ''



Tell me what you think.

Stef



Re: argh, tests are failing!

Stephane Ducasse-3
In reply to this post by Torsten Bergmann
Hi Torsten,

From a process point of view, I think we decided that external packages should be managed like the following:

- a PR (from people) to change the code in Pharo, plus a PR to Iceberg.

- then the Iceberg team merges the Iceberg PR and issues a new PR for the Pharo integration.

So I think this is what you describe too?

Like that we do not have a deadlock and we can always have a stable version.

Stef

Re: argh, tests are failing!

Denis Kudriashov
In reply to this post by Guillermo Polito
I will take a look into Mutex


Re: argh, tests are failing!

Guillermo Polito
In reply to this post by Juraj Kubelka
Hi Juraj,

I think that it may really depend on the machine and the state of the system. Slower slave machines may simply not cope with that timeout...

The question is how we could make such a test more robust.
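One possible direction, sketched here purely as an illustration (the #deliveryTimeout selector is hypothetical, not existing GTEventRecorder code), is to stop hard-coding the 200 ms literal and share one generous timeout across the tests:

GTEventRecorderTest >> deliveryTimeout
	"Milliseconds to wait for a delivery. Generous on purpose, because CI slaves can be much slower than developer machines."
	^ 2000

GTEventRecorderTest >> testDeliverNow2
	"... set up the recorder and trigger the delivery as before ..."
	self deny: (semaphore waitTimeoutMSecs: self deliveryTimeout)
		description: 'data was not delivered before the timeout'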


Re: argh, tests are failing!

Guillermo Polito
In reply to this post by Torsten Bergmann


On Sun, Sep 10, 2017 at 7:16 PM, Torsten Bergmann <[hidden email]> wrote:
So we should RAISE THE QUALITY BAR to ensure that we keep to such a quality level. Especially as I see each build as a release, and the real release once a year is often more or less a snapshot.

So I also wrote a new test class called "ProperMethodCategorizationTest", and after cleaning up #hash it was green. ProperMethodCategorizationTest>>#testHashMethodNeedsToBeInComparingProtocol showed no problem, so I committed and sent a pull request. This looked like any other new contribution.

Yes, yes, I can't agree more. We also need to add lint rules running by default.
 

What I did not know at the time was that the Iceberg code is not in the Pharo git repository - it is managed externally. So while I fixed the wrongly categorized hash method of an Iceberg class (IceSemanticVersion) in my image, the system did not show me that this change did not make it to GitHub.

Now I know - but this has bad side effects when we change packages that are in the image but not managed in the pharo repository.

Yep, this should be fixed as soon as we have support for multiple subdirectories/subprojects in Iceberg. As soon as that is supported, we will probably move Iceberg to a subtree structure, as explained in here:


 

Long story short: one cannot fix this situation with a simple PR, as the Iceberg code is not under the "pharo" repo umbrella.

I already submitted a PR for Iceberg

 https://github.com/pharo-vcs/iceberg/pull/458

and Esteban has already merged it - but only in the Iceberg repository. We now need to integrate the new Iceberg version into Pharo.

Several lessons learned from my side:
  - doing changes and having green tests in the image is not enough
  - a commit and a PR do not mean that all of your changes are on GitHub
  - it is not a good situation that part of the image code is managed in the "pharo" repository while another part is managed externally - I see this as a problem of the new process that we should discuss and address
  - we should nonetheless try to define tests that show the edges where we can improve and clean up (without the test it would not have been noticed that, while I did my best to clean up, the current way of managing Iceberg prevented my fix from getting into the image in the first place)

While the test is an improvement over the current situation, I ask myself if this particular validation of the protocols shouldn't be a Renraku rule.
 

I don't know what needs to be done to include a new Iceberg version, or when this will happen on Esteban's side - but as soon as it is done this test should be green again.

As Pavel said, the Iceberg version that is included in the system is the one described in

BaselineOfIDE >> loadIceberg
	Metacello new
		baseline: 'Iceberg';
		repository: 'github://pharo-vcs/iceberg:v0.5.7';
		load.
	(Smalltalk classNamed: #Iceberg) enableMetacelloIntegration: true.

So updating that method with the new Iceberg version and pushing that change to Pharo should be enough.
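Concretely, and purely as an illustration (v0.5.8 below is a placeholder, not an actual released tag), the integration change would amount to bumping the tag in that method:

BaselineOfIDE >> loadIceberg
	Metacello new
		baseline: 'Iceberg';
		repository: 'github://pharo-vcs/iceberg:v0.5.8'; "placeholder for the next tagged release"
		load.
	(Smalltalk classNamed: #Iceberg) enableMetacelloIntegration: true.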

In any case, my mail was there just to raise the issue of the failing tests. We should care about failing tests, and I'll cry out loud every time they are broken, just to make it explicit.



Re: argh, tests are failing!

Guillermo Polito
In reply to this post by Denis Kudriashov
Thanks Denis


Re: argh, tests are failing!

Guillermo Polito
In reply to this post by Stephane Ducasse-3


On Mon, Sep 11, 2017 at 7:19 AM, Stephane Ducasse <[hidden email]> wrote:
From a process point of view, I think we decided that external packages should be managed like the following:

- a PR (from people) to change the code in Pharo, plus a PR to Iceberg.

- then the Iceberg team merges the Iceberg PR and issues a new PR for the Pharo integration.

So I think this is what you describe too?

Yes, that's what I described at ESUG too :)

On the way back from ESUG, Esteban and I discussed a way to support this kind of scenario cleanly and without needing too much of a refactor ;)
 

Like that we do not have deadlock and can always have a stable version.

Stef






On Sun, Sep 10, 2017 at 7:16 PM, Torsten Bergmann <[hidden email]> wrote:
> Hi,
>
> to explain:
>
> we have lots of uncategorized methods and non commented classes. For some time we had to accept this ugly situation that old and legacy code. For Pharo 4 and 5
> I invested a lot of time to clean this up right before the release date. But with 6.0 such a round was not done and in 6.0 and 6.1 it looks like a mess again.
>
> Why? Because we introduced new features and code and never defined a certain level of quality for code we include.  But our initial goal with Pharo was (and
> hopefully still is) to cleanup things up and do better than before.
>
> So we should not forget about quality and so I spend some time last on this again - now for Pharo 7. For instance #setUp and #tearDown in SUnit should be
> in "running" as it was defined for TestCase subclasses. Also #hash and #= should be in "comparing" protocol. Also the goal is coming back to an image where
> all methods are categorized and where all classes have a comment.
>
> But for me it absolutely makes no sense to reinvest the time over and over into the same cleanups while others can easily make a mess again.
>
> So we should RAISE THE QUALITY BAR to ensure that we keep with such a quality level. Especially as see each build to be a release and the real release once
> a year often is more or less a snapshot.
>
> So I also wrote a new test called "ProperMethodCategorizationTest" and while cleaning up #hash it was green. So ProperMethodCategorizationTest.testHashMethodNeedsToBeInComparingProtocol
> showed no problem and I committed and have sent a Pull request. This looked like any other new contribution.
>
> What I did not knew at this time was that Iceberg code is not in git repository - but managed externally. So while I fixed the code in my image with wrongly categorized hash
> method in an Iceberg class (IceSemanticVersion) the system did not show me that this change did not move to GitHub.
>
> Now I know - but this has such bad side effect when we do changes and these packages are in the image - but not managed with the pharo repository.
>
> Long story short: one can not fix this situation with a simple PR as Iceberg code is not under the "pharo" repo umbrella.
>
> I already submitted a PR for Iceberg
>
https://github.com/pharo-vcs/iceberg/pull/458
>
> and Esteban included that already - but only in Iceberg. We now need to include Iceberg.
>
> Several lessons learned from my side:
>   - doing changes and having green tests in the image is not enough
>   - also doing a commit and PR does not mean all of your changes are on GitHub
>   - it is not a good situation when a part of the image code is managed with "pharo" repository and another part is managed externally - I see this as a problem of
>     the new process we should discuss and address
>   - we should nonetheless try to define tests to show the edges where we can improve and cleanup (without a test it would not have been noticed that while I did my best
>     to cleanup the current way of managing Iceberg prevented my fix to be into the image in the first place)
>
> I dont know what need to be done to include a new iceberg version or when this will happen from Estebans side - but as soon as it is done this test should be green again.
>
> Thanks
> T.
>
>
> Gesendet: Sonntag, 10. September 2017 um 18:24 Uhr
> Von: "Stephane Ducasse" <[hidden email]>
> An: "Pharo Development List" <[hidden email]>
> Betreff: Re: [Pharo-dev] argh, tests are failing!
>
> Yes we know but have no idea how to recategorise Iceberg protocols
>
> On Sun, Sep 10, 2017 at 1:03 PM, Pavel Krivanek <[hidden email][mailto:[hidden email]]> wrote:
>
> Of course you are right, the Integrators should really take care on it.
>
> Before it were only common random failures but the main problem is with this PR:
> <a href="https://github.com/pharo-project/pharo/pull/264[https://github.com/pharo-project/pharo/pull/264]" rel="noreferrer" target="_blank">https://github.com/pharo-project/pharo/pull/264[https://github.com/pharo-project/pharo/pull/264]
>
> The new test testHashMethodNeedsToBeInComparingProtocol fails on classIceSemanticVersion
>
> -- Pavel
>
>
> 2017-09-10 10:13 GMT+02:00 Guillermo Polito <[hidden email][mailto:[hidden email]]>:
> Hi all,
>
> Since a couple of builds we have consistently failing the following tests:
>
>
> GT.EventRecorder.Tests.Core.GTEventRecorderTest.testDeliverNow2
> GT.EventRecorder.Tests.Core.GTEventRecorderTest.testNotDeliveredDataShouldBeResent
> Kernel.Tests.Processes.MutexTest.testFailedCriticalSectionShouldUnblockWaitingOne
> ReleaseTests.Categorization.ProperMethodCategorizationTest.testHashMethodNeedsToBeInComparingProtocol (7 occurrences, one per platform)
>
> (full test report: https://ci.inria.fr/pharo-ci-jenkins2/job/Test%20pending%20pull%20request%20and%20branch%20Pipeline/job/development/110/testReport/)
>
>
> Green builds are the only hard metric to say if the build is healthy or not.
>
> - We should not integrate anything until the build is green again...
>
> Also, we spent with Pablo a lot of time to have a green build in all platforms...  I'd like to spend my time in other fun stuff than the CI :/
>
>  --
>
>
> Guille Polito
>
> Research Engineer
> French National Center for Scientific Research - http://www.cnrs.fr
>
>
> Web: http://guillep.github.io
> Phone: +33 06 52 70 66 13
>




--

   

Guille Polito


Research Engineer

French National Center for Scientific Research - http://www.cnrs.fr



Web: http://guillep.github.io

Phone: +33 06 52 70 66 13

Reply | Threaded
Open this post in threaded view
|

Re: argh, tests are failing!

Pavel Krivanek-3
In reply to this post by Guillermo Polito


2017-09-11 12:07 GMT+02:00 Guillermo Polito <[hidden email]>:


On Sun, Sep 10, 2017 at 7:16 PM, Torsten Bergmann <[hidden email]> wrote:
Hi,

to explain:

we have lots of uncategorized methods and uncommented classes. For some time we had to accept this ugly situation for old and legacy code. For Pharo 4 and 5
I invested a lot of time to clean this up right before the release date. But with 6.0 such a round was not done, and in 6.0 and 6.1 it looks like a mess again.

Why? Because we introduced new features and code and never defined a certain level of quality for the code we include. But our initial goal with Pharo was (and
hopefully still is) to clean things up and do better than before.

So we should not forget about quality, and so I spent some time on this again - now for Pharo 7. For instance, #setUp and #tearDown in SUnit should be
in "running", as defined for TestCase subclasses. Also #hash and #= should be in the "comparing" protocol. The goal is also to come back to an image where
all methods are categorized and all classes have a comment.
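
To make this concrete, a quick snippet along these lines (just a sketch; I assume here that CompiledMethod>>#protocol answers the method category - the exact selectors may differ between Pharo versions) lists the classes whose #hash is not in the "comparing" protocol:

    | offenders |
    offenders := OrderedCollection new.
    Smalltalk allClassesAndTraits do: [ :each |
        (each includesSelector: #hash) ifTrue: [
            (each >> #hash) protocol = #comparing
                ifFalse: [ offenders add: each name ] ] ].
    offenders inspect.

The test mentioned below automates exactly this kind of check.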

But for me it absolutely makes no sense to reinvest the time over and over into the same cleanups while others can easily make a mess again.

So we should RAISE THE QUALITY BAR to ensure that we stay at such a quality level. Especially as I see each build as a release, and the real release once
a year is often more or less a snapshot.

So I also wrote a new test called "ProperMethodCategorizationTest", and after cleaning up #hash it was green. So ProperMethodCategorizationTest.testHashMethodNeedsToBeInComparingProtocol
showed no problem, and I committed and sent a pull request. This looked like any other new contribution.

Yes yes, I can't agree more. We also need to add Lint rules that run by default.

I will try to prepare a small tool that collects information about the health status of a build and stores it as a STON file next to the image (at least for the development versions). Then we can generate the same information for incoming PRs and display the changes. This way we will be able to see the effects of a PR, like what new dependencies it adds, what new rules fail, and so on.
It should not reject them automatically but serve as support for the PR review.
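
Just to illustrate the idea, a minimal sketch of what such a dump could look like (the file name, keys and numbers below are invented for the example, not a proposal for the actual format):

    | health |
    health := Dictionary new.
    health at: #failingTests put: #('GTEventRecorderTest.testDeliverNow2').
    health at: #uncategorizedMethods put: 42.
    'build-health.ston' asFileReference
        writeStreamDo: [ :stream | stream nextPutAll: (STON toString: health) ].
    "Reading it back later, e.g. to compare against the data of a PR build:"
    STON fromString: 'build-health.ston' asFileReference contents.

The PR job could then load both files and simply diff the two dictionaries.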

-- Pavel
 
 

What I did not know at the time was that the Iceberg code is not in the Pharo git repository - it is managed externally. So while I fixed the wrongly categorized hash
method of an Iceberg class (IceSemanticVersion) in my image, the system did not show me that this change did not move to GitHub.

Now I know - but this has a bad side effect when we change packages that are in the image but not managed with the pharo repository.

Yep, this should be fixed as soon as we have multiple subdirectories/subprojects in iceberg. As soon as that is supported we will probably move iceberg to a subtree structure as explained in here:


 

Long story short: one cannot fix this situation with a simple PR, as the Iceberg code is not under the "pharo" repo umbrella.

I already submitted a PR for Iceberg

 https://github.com/pharo-vcs/iceberg/pull/458

and Esteban included that already - but only in Iceberg. We now need to include the new Iceberg version in Pharo.

Several lessons learned from my side:
  - doing changes and having green tests in the image is not enough
  - also doing a commit and PR does not mean all of your changes are on GitHub
  - it is not a good situation when a part of the image code is managed with the "pharo" repository and another part is managed externally - I see this as a problem of
    the new process that we should discuss and address
  - we should nonetheless try to define tests to show the edges where we can improve and clean up (without a test it would not have been noticed that, although I did my best
    to clean up, the current way of managing Iceberg prevented my fix from getting into the image in the first place)

While the test is an improvement over the current situation, I ask myself if this particular validation of the protocols shouldn't be a Renraku rule.
 

I don't know what needs to be done to include a new Iceberg version or when this will happen on Esteban's side - but as soon as it is done this test should be green again.

As Pavel said, the iceberg version that is included in the system is the one described in 

BaselineOfIDE >> loadIceberg
    Metacello new
        baseline: 'Iceberg';
        repository: 'github://pharo-vcs/iceberg:v0.5.7';
        load.
    (Smalltalk classNamed: #Iceberg) enableMetacelloIntegration: true.

So updating that method with the new iceberg version and pushing that to Pharo should be enough.
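
For example, the change would be something like this (v0.5.8 is only a placeholder tag for the example; whoever does the update needs to put the actual tag that contains the categorization fix):

BaselineOfIDE >> loadIceberg
    Metacello new
        baseline: 'Iceberg';
        repository: 'github://pharo-vcs/iceberg:v0.5.8';  "hypothetical new tag"
        load.
    (Smalltalk classNamed: #Iceberg) enableMetacelloIntegration: true.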

In any case, my mail was there just to raise the issue of the failing tests. We should care about failing tests, and I'll cry out loud every time they are broken, just to make it explicit.


Thanks
T.


Sent: Sunday, 10 September 2017 at 18:24
From: "Stephane Ducasse" <[hidden email]>
To: "Pharo Development List" <[hidden email]>
Subject: Re: [Pharo-dev] argh, tests are failing!

Yes we know but have no idea how to recategorise Iceberg protocols 
 
On Sun, Sep 10, 2017 at 1:03 PM, Pavel Krivanek <[hidden email]> wrote:

Of course you are right, the Integrators should really take care on it. 
 
Before it were only common random failures but the main problem is with this PR:
https://github.com/pharo-project/pharo/pull/264
 
The new test testHashMethodNeedsToBeInComparingProtocol fails on classIceSemanticVersion
 
-- Pavel

 
2017-09-10 10:13 GMT+02:00 Guillermo Polito <[hidden email]>:
Hi all,
 
Since a couple of builds we have consistently failing the following tests:
 

 
 
Green builds are the only hard metric to say if the build is healthy or not.
 
- We should not integrate anything until the build is green again...
 
Also, we spent with Pablo a lot of time to have a green build in all platforms...  I'd like to spend my time in other fun stuff than the CI :/

 --

   
Guille Polito
 
Research Engineer
French National Center for Scientific Research - http://www.cnrs.fr
 
 
Web: http://guillep.github.io
Phone: +33 06 52 70 66 13




--

   

Guille Polito


Research Engineer

French National Center for Scientific Research - http://www.cnrs.fr



Web: http://guillep.github.io

Phone: +33 06 52 70 66 13


Reply | Threaded
Open this post in threaded view
|

Re: argh, tests are failing!

Stephane Ducasse-3
In reply to this post by Guillermo Polito


As Pavel said, the iceberg version that is included in the system is the one described in 

BaselineOfIDE >> loadIceberg
    Metacello new
        baseline: 'Iceberg';
        repository: 'github://pharo-vcs/iceberg:v0.5.7';
        load.
    (Smalltalk classNamed: #Iceberg) enableMetacelloIntegration: true.

So updating that method with the new iceberg version and pushing that to Pharo should be enough.

Good to know that.
 
In any case, my mail was there just to raise the issue of the failing tests. We should care about failing tests, and I'll cry out loud every time they are broken, just to make it explicit.

I agree. 
Reply | Threaded
Open this post in threaded view
|

Re: argh, tests are failing!

Juraj Kubelka
In reply to this post by Guillermo Polito
Hi Guillermo,

I have not found a better solution. Waiting without a timeout threshold is not nice and makes it difficult to run all tests.
If you have a better idea of how to sync and test two processes, I will appreciate it.
Otherwise I will use a higher timeout.

Cheers,
Juraj

On 11-09-2017, at 06:55, Guillermo Polito <[hidden email]> wrote:

Hi Juraj,

I think that it may really depend on the machine and the state of the system. Slower slave machines might not really work with that timeout...

The question is how we could make such a test more robust.

On Sun, Sep 10, 2017 at 6:41 PM, Juraj Kubelka <[hidden email]> wrote:
Hi,

I have checked the EventRecorderTests and they work on my computer. The only reason they might not work in other cases is that there is an assert on 'semaphore waitTimeoutMSecs: 200'. I can put a higher timeout if necessary. The 200 ms timeout has worked for a couple of years, right? There might be another issue.

Cheers,
Juraj

On 10-09-2017, at 10:13, Guillermo Polito <[hidden email]> wrote:

Hi all,

Since a couple of builds we have consistently failing the following tests:



Green builds are the only hard metric to say if the build is healthy or not.

- We should not integrate anything until the build is green again...

Also, we spent with Pablo a lot of time to have a green build in all platforms...  I'd like to spend my time in other fun stuff than the CI :/

--
   
Guille Polito

Research Engineer
French National Center for Scientific Research - http://www.cnrs.fr


Phone: +33 06 52 70 66 13




--
   
Guille Polito

Research Engineer
French National Center for Scientific Research - http://www.cnrs.fr


Phone: +33 06 52 70 66 13

Reply | Threaded
Open this post in threaded view
|

Re: argh, tests are failing!

Guillermo Polito
But the thing is that those processes you are creating for delivery are running at priority 30. This means they may not run any time soon (not even within those 200 ms) if there are processes scheduled with higher priorities.

So the thing is that this test is not a unit test at all. It depends a lot on the running environment. Solutions for that:
 - you change the priority of the delivery process for test purposes
 - Or, for testing purposes you don't create a new process, you just execute synchronously

If you really want to test the fact that you are creating and running a separate process, then you should also explicitly let the processes you created run (see the sketch after this list). This means that if the active process running the tests (usually priority 40) does not suspend itself, no process of priority 30 will be able to run. Ways to suspend the active process and let lower-priority ones run are:
 - calling suspend (Processor activeProcess suspend), but this is dangerous because somebody has to resume it afterwards from a separate process
 - using a delay
 - some I/O like sockets or async files
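
Here is a self-contained sketch of that last point, written as it would appear inside a test method (this is not the actual GTEventRecorder code; the forked block just simulates the priority-30 delivery process):

    | done |
    done := false.
    [ done := true ] forkAt: 30.         "simulated delivery process, lower priority than the test process"
    self deny: done.                      "the priority-40 test process has not yielded yet, so the fork has not run"
    (Delay forMilliseconds: 50) wait.     "suspends the test process and lets the priority-30 process run"
    self assert: done.

In the real tests the semaphore wait plays the role of the delay; the point is just that the test process has to suspend itself at some point before asserting, and even then a busy machine with higher-priority processes can still starve the delivery process.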

On Mon, Sep 11, 2017 at 7:40 PM, Juraj Kubelka <[hidden email]> wrote:
Hi Guillermo,

I have not found a better solution. Waiting without a timeout threshold is not nice and makes it difficult to run all tests.
If you have a better idea of how to sync and test two processes, I will appreciate it.
Otherwise I will use a higher timeout.

Cheers,
Juraj

On 11-09-2017, at 06:55, Guillermo Polito <[hidden email]> wrote:

Hi Juraj,

I think that it may really depend on the machine and the state of the system. Slower slave machines might not really work with that timeout...

The question is how we could make such a test more robust.

On Sun, Sep 10, 2017 at 6:41 PM, Juraj Kubelka <[hidden email]> wrote:
Hi,

I have checked the EventRecorderTests and they work on my computer. The only reason they might not work in other cases is that there is an assert on 'semaphore waitTimeoutMSecs: 200'. I can put a higher timeout if necessary. The 200 ms timeout has worked for a couple of years, right? There might be another issue.

Cheers,
Juraj

On 10-09-2017, at 10:13, Guillermo Polito <[hidden email]> wrote:

Hi all,

Since a couple of builds we have consistently failing the following tests:



Green builds are the only hard metric to say if the build is healthy or not.

- We should not integrate anything until the build is green again...

Also, we spent with Pablo a lot of time to have a green build in all platforms...  I'd like to spend my time in other fun stuff than the CI :/

--
   
Guille Polito

Research Engineer
French National Center for Scientific Research - http://www.cnrs.fr


Phone: +33 06 52 70 66 13




--
   
Guille Polito

Research Engineer
French National Center for Scientific Research - http://www.cnrs.fr


Phone: +33 06 52 70 66 13




--

   

Guille Polito


Research Engineer

French National Center for Scientific Research - http://www.cnrs.fr



Web: http://guillep.github.io

Phone: +33 06 52 70 66 13
