Failing tests in #development


Pavel Krivanek-3
It was a bad idea to integrate PR 606 [1] about the world menu tooltips before it was green. Now we have failing tests in the development branch, and validations of new pull requests will fail immediately.

Btw, I do not think this test should be in ReleaseTest, because these tests should be valid for the minimal image too.

[1] https://github.com/pharo-project/pharo/pull/606

-- Pavel

Re: Failing tests in #development

EstebanLM
But this is because we still need to stabilise the CI: right now we know that some tests fail randomly, so we do not trust the CI to tell us everything is ok.
Then we accept PRs in a red state.
Of course, this will fail eventually, and that’s what we are seeing.

The most important thing, IMO, is to fix the CI: the tests that fail randomly (I think most of them are networking problems) need to be fixed or removed. You might say “you cannot remove a test of X”, but in fact a test that fails for reasons we cannot control is a bad test.

Then we need to have a “no red integration” policy.

This will fix these problems, but of course we need to work to make this happen ;)

Esteban 
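
One practical aside, offered here as a sketch rather than as something proposed in the thread: while such a randomly failing test is being fixed or removed, SUnit's <expectedFailure> pragma can keep it from turning the whole build red. The class, selector and URL below are made up for illustration only:

MyNetworkServiceTest >> testRemoteLookup
	"Known to fail intermittently because it depends on an external server;
	marked as an expected failure until it is rewritten against a local service."
	<expectedFailure>
	| response |
	response := ZnEasy get: 'http://example.com/api/lookup'.
	self assert: response isSuccess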






Re: Failing tests in #development

Stephane Ducasse-3
In reply to this post by Pavel Krivanek-3
Yes, sorry, I did not think that it could crash.
I do not really understand why the tooltip can break only for one tool.



Re: Failing tests in #development

CyrilFerlicot

On Tue, 19 Dec 2017 at 18:27, Stephane Ducasse <[hidden email]> wrote:
Yes, sorry, I did not think that it could crash.
I do not really understand why the tooltip can break only for one tool.

Hi,

With the tooltips I added a test to ensure that all tools have an explanation. Currently the one for Iceberg is missing, since it will come with its next release.


--
Cyril Ferlicot
https://ferlicot.fr

http://www.synectique.eu
2 rue Jacques Prévert 01,
59650 Villeneuve d'ascq France

Re: Failing tests in #development

Stephane Ducasse-3
Ahhh, ok, so this is really nothing then. It just blurs any further tests.
So in that case we should do two PRs, one without Iceberg and one for Iceberg, and change the test afterwards...
Seems a bit overkill to me.



Re: Failing tests in #development

demarey
In reply to this post by EstebanLM

> On 19 Dec 2017 at 10:50, Esteban Lorenzano <[hidden email]> wrote:
>
> The most important thing, IMO, is to fix the CI: the tests that fail randomly (I think most of them are networking problems) need to be fixed or removed.

I think we should not rely on external services for our tests.
We should mock every network service call so that we have a local version (still using the network). Then you will not have tests failing because of a timeout or a network problem.
With Zinc or Teapot it is quite easy to set up a network service on a local port of the machine running the tests.
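
A minimal sketch of that idea using plain Zinc; the test class name, port and payload are illustrative assumptions, not code from the image:

ZnLocalServiceTest >> testClientAgainstLocalService
	"Start a local ZnServer and point the client at it,
	so the test never depends on an external service."
	| server contents |
	server := ZnServer on: 1701.
	"Answer a canned payload for every request."
	server onRequestRespond: [ :request | ZnResponse ok: (ZnEntity text: 'hello') ].
	server start.
	[ contents := ZnClient new
		url: 'http://localhost:1701/anything';
		get.
	self assert: contents equals: 'hello' ]
		ensure: [ server stop ]

Everything stays on the loopback interface, so the test still exercises the HTTP stack but cannot fail because a remote host is slow or down. Teapot would make a fake service with several routes even shorter to write.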