Moving Monkey quality checks to Renraku infrastructure


Moving Monkey quality checks to Renraku infrastructure

Uko2
Hi,

At the moment I am moving the Pharo quality tools to the Renraku model. This is a quality model that I’ve been working on and that has so far been used by QualityAssistant.

At the moment I’m stuck while trying to make changes in Monkey, as it is really hard to understand how quality checks are made there. @Guille maybe you can advise something.

In the old model, rules were responsible both for checking code and for storing the entities that violate them. In Renraku, rules are responsible only for checking; for each violation they produce a critique object, which is a mapping between the rule and the entity that violates it. Critiques can also provide plenty of additional information, such as suggestions on how to fix the issue.

For now we can skip the critiques altogether, just run the rules and store the classes and methods that violate them. But at the moment I cannot understand how Monkey is implemented, i.e. where the rules come from and what output should be provided.
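[Editor's note: to make the rule/critique split concrete, here is a minimal sketch of the described design, written in Python purely for illustration; `Critique`, `LongMethodRule`, and `check` are hypothetical names, not actual Renraku API.]

```python
# Sketch of the Renraku-style split: a rule only checks, a critique
# carries the rule/entity mapping plus extra info such as a fix hint.
# All names here are illustrative, not real Renraku classes.

class Critique:
    def __init__(self, rule, entity, hint=None):
        self.rule = rule      # the rule that was violated
        self.entity = entity  # the class/method that violates it
        self.hint = hint      # optional suggestion on how to fix it

class LongMethodRule:
    """Toy rule: flags methods with too many lines."""
    def check(self, method):
        # A rule only checks; it keeps no state about violators.
        if method["lines"] > 10:
            return [Critique(self, method,
                             hint="consider extracting helper methods")]
        return []

rule = LongMethodRule()
short = {"name": "size", "lines": 2}
long_ = {"name": "initialize", "lines": 25}

assert rule.check(short) == []          # no critique for a clean method
critiques = rule.check(long_)
assert len(critiques) == 1 and critiques[0].entity is long_
```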

Cheers.
Uko

Re: Moving Monkey quality checks to Renraku infrastructure

Nicolai Hess-3-2


2016-08-04 18:39 GMT+02:00 Yuriy Tymchuk <[hidden email]>:
> Hi,
>
> At the moment I am moving the Pharo quality tools to the Renraku model. This is a quality model that I’ve been working on and that has so far been used by QualityAssistant.
>
> At the moment I’m stuck while trying to make changes in Monkey, as it is really hard to understand how quality checks are made there. @Guille maybe you can advise something.
>
> In the old model, rules were responsible both for checking code and for storing the entities that violate them. In Renraku, rules are responsible only for checking; for each violation they produce a critique object, which is a mapping between the rule and the entity that violates it. Critiques can also provide plenty of additional information, such as suggestions on how to fix the issue.
>
> For now we can skip the critiques altogether, just run the rules and store the classes and methods that violate them. But at the moment I cannot understand how Monkey is implemented, i.e. where the rules come from and what output should be provided.
>
> Cheers.
> Uko

Hi Uko,

As far as I know (from looking at CICommandLineHandler), the CIValidator collects a set of rules when validating an issue. Look at
CIValidator class>>#pharo60
The rules are
CISUnitTestsRule and
PharoCriticRules pharoIntegrationLintRule harden.
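[Editor's note: the idea Nicolai describes — a validator that collects a set of rules and runs them when validating an issue — can be sketched roughly as below. This is a Python illustration only, not the actual CIValidator code; the two toy rules merely stand in for things like CISUnitTestsRule and the lint rules.]

```python
# Illustrative sketch: a CI-style validator aggregates a set of rules
# and reports which entities violate them. Names are hypothetical.

class Validator:
    def __init__(self, rules):
        self.rules = rules

    def validate(self, entities):
        # Run every rule over every entity; collect (rule, entity) pairs.
        violations = []
        for rule in self.rules:
            for entity in entities:
                if not rule(entity):
                    violations.append((rule.__name__, entity))
        return violations

# Two toy rules standing in for the real unit-test and lint rules.
def has_tests(pkg):
    return pkg.get("tests", 0) > 0

def no_lint_issues(pkg):
    return pkg.get("lint_issues", 0) == 0

validator = Validator([has_tests, no_lint_issues])
pkg = {"name": "Kernel", "tests": 5, "lint_issues": 1}
violations = validator.validate([pkg])
assert violations == [("no_lint_issues", pkg)]  # only the lint rule fails
```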





Re: Moving Monkey quality checks to Renraku infrastructure

Guillermo Polito
In reply to this post by Uko2
Hi!


-------- Original Message --------
> Hi,
>
> At the moment I am moving the Pharo quality tools to the Renraku model. This is a quality model that I’ve been working on and that has so far been used by QualityAssistant.
Cool, I’m interested in that. Do you have some examples, docs?
>
> At the moment I’m stuck while trying to make changes in Monkey, as it is really hard to understand how quality checks are made there. @Guille maybe you can advise something.
>
> In the old model, rules were responsible both for checking code and for storing the entities that violate them. In Renraku, rules are responsible only for checking; for each violation they produce a critique object, which is a mapping between the rule and the entity that violates it. Critiques can also provide plenty of additional information, such as suggestions on how to fix the issue.
>
> For now we can skip the critiques altogether, just run the rules and store the classes and methods that violate them. But at the moment I cannot understand how Monkey is implemented, i.e. where the rules come from and what output should be provided.
Well, so far I have not yet investigated that part. I'm actually experimenting with a new Monkey implementation that has the following objectives:
  - easy to configure and run locally
  - faster: it should be able to run build validations in parallel (e.g., in my prototype, tests run by 4 parallel Pharo images complete in 2 minutes)
  - it should enforce the same process used for issue validation and integration
  - integrated with the bootstrap :)
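[Editor's note: the parallel-validation objective can be sketched as follows. This is a generic Python illustration of fanning independent validation jobs out to a pool of workers, not Guille's actual prototype; in the real setup each job would be a separate Pharo image running a slice of the tests, i.e. separate OS processes rather than threads.]

```python
# Illustrative only: run independent build-validation jobs in parallel,
# the way several Pharo images could each run a slice of the test suite.
from concurrent.futures import ThreadPoolExecutor

def run_validation(test_slice):
    # Stand-in for "launch an image and run these tests"; here we just
    # report how many tests the slice contained.
    return {"slice": test_slice["id"], "ran": len(test_slice["tests"])}

def validate_in_parallel(slices, workers=4):
    # A real build validator would use separate processes/images; a
    # thread pool is enough to show the fan-out/collect structure.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_validation, slices))

slices = [{"id": i, "tests": list(range(100))} for i in range(4)]
results = validate_in_parallel(slices)
assert sorted(r["slice"] for r in results) == [0, 1, 2, 3]
assert all(r["ran"] == 100 for r in results)
```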

Are you going to ESUG?
>
> Cheers.
> Uko



Re: Moving Monkey quality checks to Renraku infrastructure

Uko2

> On 05 Aug 2016, at 10:49, Guille Polito <[hidden email]> wrote:
>
> Hi!
>
>
> -------- Original Message --------
>> Hi,
>>
>> At the moment I am moving the Pharo quality tools to the Renraku model. This is a quality model that I’ve been working on and that has so far been used by QualityAssistant.
> Cool, I’m interested in that. Do you have some examples, docs?

There are multiple places, but at the moment I’m focusing on the in-Pharo help. If you open the Help Browser, there is a book called “Renraku Quality Rules”. Let me know if something is unclear or missing, as it is hard to write good documentation in one shot :)

>>
>> At the moment I’m stuck while trying to make changes in Monkey, as it is really hard to understand how quality checks are made there. @Guille maybe you can advise something.
>>
>> In the old model, rules were responsible both for checking code and for storing the entities that violate them. In Renraku, rules are responsible only for checking; for each violation they produce a critique object, which is a mapping between the rule and the entity that violates it. Critiques can also provide plenty of additional information, such as suggestions on how to fix the issue.
>>
>> For now we can skip the critiques altogether, just run the rules and store the classes and methods that violate them. But at the moment I cannot understand how Monkey is implemented, i.e. where the rules come from and what output should be provided.
> Well, so far I have not yet investigated that part. I'm actually experimenting with a new Monkey implementation that has the following objectives:
> - easy to configure and run locally
> - faster: it should be able to run build validations in parallel (e.g., in my prototype, tests run by 4 parallel Pharo images complete in 2 minutes)
> - it should enforce the same process used for issue validation and integration
> - integrated with the bootstrap :)
>
> Are you going to ESUG?

Yes

>>
>> Cheers.
>> Uko
>
>



Re: Moving Monkey quality checks to Renraku infrastructure

stepharo
In reply to this post by Uko2


On 4/8/16 at 18:39, Yuriy Tymchuk wrote:
> Hi,
>
> At the moment I am moving the Pharo quality tools to the Renraku model. This is a quality model that I’ve been working on and that has so far been used by QualityAssistant.
>
> At the moment I’m stuck while trying to make changes in Monkey, as it is really hard to understand how quality checks are made there. @Guille maybe you can advise something.
>
> In the old model, rules were responsible both for checking code and for storing the entities that violate them. In Renraku, rules are responsible only for checking; for each violation they produce a critique object, which is a mapping between the rule and the entity that violates it. Critiques can also provide plenty of additional information, such as suggestions on how to fix the issue.

It is good that you are improving this part.
The critics browser was done by someone who was told to do it, not by someone who believed it was important :)
Still, it was a good first step.
Now it is good to revisit the LintRules; they have already helped us a lot.
Rules are really important for Pharo (and you know it), so it is super cool that you are pushing this.
Thanks a lot Yuriy.

> For now we can skip the critiques altogether, just run the rules and store the classes and methods that violate them. But at the moment I cannot understand how Monkey is implemented, i.e. where the rules come from and what output should be provided.
>
> Cheers.
> Uko
>



Re: Moving Monkey quality checks to Renraku infrastructure

stepharo
In reply to this post by Guillermo Polito
> Well, so far I have not yet investigated that part. I'm actually experimenting with a new Monkey implementation that has the following objectives:
>  - easy to configure and run locally
>  - faster: it should be able to run build validations in parallel (e.g., in my prototype, tests run by 4 parallel Pharo images complete in 2 minutes)
>  - it should enforce the same process used for issue validation and integration
>  - integrated with the bootstrap :)
This is super cool: it will change the way we work and make our lives a lot simpler.


Stef

PS: I'm happy to work with motivated, visionary, smart and efficient guys.