Did we drop PNG support?


Casey Ransberger-2
…or did it end up in one of the external packages?

I've been goofing with InkScape and thinking about trying my hand at some new icons. I suppose I could bring them in via JPEG, but I like having transparency for these kinds of things, and I don't seem to recall JPEG ever supporting transparency.

Any advice?

TIA,

Casey

_______________________________________________
Cuis mailing list
[hidden email]
http://jvuletich.org/mailman/listinfo/cuis_jvuletich.org

Re: Did we drop PNG support?

Juan Vuletich-4
Hi Casey,

Please _never_ use JPEG for icons and the like. JPEG is only good for
photographs. You can use BMP, which is lossless. Or just load
Graphics-Files-Additional.pck.st.

I would have liked to have PNG in the kernel, but it depends on the
Compression package. Maybe JPEG should be moved to the external package
too. BMP is dog simple, and is enough for a very basic kernel.
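Juan's point about PNG depending on Compression is easy to see from the format itself: the chunk layout is trivial, but the pixel data must be zlib-deflated. A stdlib-only Python sketch (an editor's illustration, not Cuis code) that writes a minimal RGBA PNG:

```python
# Minimal PNG writer using only the standard library. The chunk framing is
# a few lines, but the IDAT payload *must* be zlib-compressed -- which is
# why PNG support necessarily drags a Compression package into the kernel.
import struct
import zlib

def chunk(tag: bytes, payload: bytes) -> bytes:
    # Each PNG chunk: big-endian length, 4-byte tag, payload, CRC32 over tag+payload.
    return (struct.pack(">I", len(payload)) + tag + payload
            + struct.pack(">I", zlib.crc32(tag + payload)))

def rgba_png(width, height, pixels):
    # pixels: list of rows, each row a list of (r, g, b, a) tuples.
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 6, 0, 0, 0)  # 8-bit RGBA
    raw = b"".join(
        b"\x00" + bytes(c for px in row for c in px)  # filter byte 0 per scanline
        for row in pixels)
    return (b"\x89PNG\r\n\x1a\n"            # PNG signature
            + chunk(b"IHDR", ihdr)
            + chunk(b"IDAT", zlib.compress(raw))
            + chunk(b"IEND", b""))

# A 1x1 image that is 50% transparent red -- something JPEG cannot express.
data = rgba_png(1, 1, [[(255, 0, 0, 128)]])
```

The half-transparent pixel is exactly the kind of image JPEG cannot represent, while PNG carries the alpha channel for free, at the price of the zlib dependency.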

Cheers,
Juan Vuletich


Re: Did we drop PNG support?

Hannes Hirzel
Hello
I prefer to keep PNG in Graphics-Files-Additional.pck.st, to maintain a
small kernel. And, as Juan writes, JPEG might be moved there too.

--Hannes



Re: Did we drop PNG support?

Casey Ransberger-2
I don't care for JPEG, *ever*. For the record. My thinking was PNG support and that's it, but I can see now why you decided against using it (compression). I didn't know that BMP actually supported transparency. Yes, that will do nicely. (I don't care much for GIF either, though sometimes the animated ones can be kind of a fun retro laugh.)

I'm okay with getting rarely used image formats out of the core of the system, FWIW. I do worry, though, that with this stuff living outside the image we usually develop in, we'll break things and not notice. Maybe soonish would be a good time to do some kind of continuous integration setup. That would help to ensure external packages keep working. An adventure for another day, though.



Re: Did we drop PNG support?

Hannes Hirzel
On 12/20/13, Casey Ransberger <[hidden email]> wrote:
...
> Maybe soonish
> would be a good time to do some kind of continuous integration set up. That
> would help to ensure external packages keep working. An adventure for
> another day though.

Yes, for the moment let's rely on people complaining if an external
package stops working, since the rate of change of the core is
moderate.

--Hannes


Re: Did we drop PNG support?

Juan Vuletich-4
In reply to this post by Casey Ransberger-2
On 12/20/2013 11:04 AM, Casey Ransberger wrote:
> I don't care for JPEG, *ever.* For the record. My thinking was PNG
> support and that's it, but I can see now why you decided against using
> it (compression.)

Oh, but I am not against using it! It is the best option in many
situations. I just think it is better for the kernel not to depend on it.

> I didn't know that BMP actually supported transparency. Yes, that will
> do nicely (I don't care much for GIF either, though sometimes the
> animated ones can be kind of a fun retro laugh.)

Given that we have an external package for this stuff, bringing back
GIF might be a good idea.

> I'm okay with getting rarely used image formats out of the core of the
> system, FWIW. I worry with having this stuff outside the image we
> usually develop in that we'll break stuff and not notice it though.
> Maybe soonish would be a good time to do some kind of continuous
> integration set up. That would help to ensure external packages keep
> working. An adventure for another day though.

I'm not sure of the value of a continuous integration server. But I
usually do stuff for Cuis updates in an image with all the "official"
packages loaded, so I reduce the risk of breaking them. Sometimes, when
committing updates, I also need to update some package.

Cheers,
Juan Vuletich


Re: Did we drop PNG support?

Frank Shearar-3
On 20 December 2013 14:36, Juan Vuletich <[hidden email]> wrote:

> I'm not sure of the value of a continuous integration server. But I usually
> do stuff for Cuis updates in an image with all the "official" packages
> loaded, so I reduce the risk of breaking them. Sometimes, when committing
> updates, I also need to update some package.

On every commit, CI runs your entire test suite. When things fail, you
immediately know which commit broke the test, which gives you much
less code (hopefully) to analyse to find the breakage. CI is all
about closing the feedback loop.

Further, in GitHub's case, you can use Travis CI to handle the CI
part. Because the test suite runs against every commit, a pull request
will be validated (or not), and Travis CI can update the pull
request's commit status. This gives you, the reviewer of the pull
request, a green/red light, telling you that the PR doesn't break
anything.

(For the seriously pedantic, the thing you _really_ want is to merge
the PR into the master and run the test suite on that. _That_ green
light should inform the commit status of the PR. Alas, no one actually
does that.)
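As a concrete sketch of the setup Frank describes: with the repository on GitHub, a minimal `.travis.yml` runs a script on every push and pull request. The two script names below are invented placeholders; a real setup would have them fetch a Cuis VM plus image and run the SUnit suite headless.

```yaml
# Hypothetical .travis.yml -- the two shell scripts are placeholders, not real files.
language: generic          # no built-in toolchain; everything is driven by scripts
install:
  - ./download_cuis.sh     # placeholder: fetch VM, image, and external packages
script:
  - ./run_tests.sh         # placeholder: load packages and run the test suite headless
```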

frank


Re: Did we drop PNG support?

Juan Vuletich-4
On 12/20/2013 1:16 PM, Frank Shearar wrote:

> On every commit, CI runs your entire test suite. When things fail, you
> immediately know which commit broke the test, which gives you much
> less code (hopefully) to analyse, to find the breakage. CI is all
> about closing the feedback loop.

But all this assumes that:
a) Tests will catch every possible bug
b) Immediately after a test starts failing, someone will review the code

And people start being less careful about quality, believing that the
CI server can compensate for it.

(Do I need to name names?)

Cheers,
Juan Vuletich


Re: Did we drop PNG support?

Frank Shearar-3
On 20 December 2013 18:15, Juan Vuletich <[hidden email]> wrote:

> But all this assumes that:
> a) Tests will catch every possible bug
> b) Immediately after a test starts failing, someone will review the code

No? Well, I don't assume that. The point is that when someone issues a
pull request against your codebase, _something/one_ needs to run the
tests. Why would you want to _have_ to do that manually, when a
machine can do it for you?

Tests should catch every _found_ bug. As in: if you find a bug, you
write a test that demonstrates the bug. Then you fix the bug.

If I cared about catching _every possible_ bug, I'd program in Coq or ATS.

> And people start being less careful about quality, believing that the CI
> server can compensate for it.

I would far rather have tests, CI, and code coverage tools than not :)
_My_ experience clearly doesn't match yours.

> (Do I need to say names?)

If you mean _me_, you're mistaken. But tests that are not run might as
well not exist.

frank


Re: Did we drop PNG support?

Juan Vuletich-4
On 12/20/2013 3:34 PM, Frank Shearar wrote:

> On 20 December 2013 18:15, Juan Vuletich <[hidden email]> wrote:
>> But all this assumes that:
>> a) Tests will catch every possible bug
>> b) Immediately after a test starts failing, someone will review the code
> No? Well, I don't assume that. The point is that when someone issues a
> pull request against your codebase, _something/one_ needs to run the
> tests. Why would you want to _have_ to do that manually, when a
> machine can do it for you?

Because I do it at the same time I carefully review the code myself.
Then I can accept it or not considering my review and the tests.

> Tests should catch every _found_ bug. As in: if you find a bug, you
> write a test that demonstrates the bug. Then you fix the bug.
>
> If I cared about catching _every possible_ bug, I'd program in Coq or ATS.

Yes. But the stuff that constantly breaks is stuff that was never the
subject of a bug, and has no tests.

>> And people start being less careful about quality, believing that the CI
>> server can compensate for it.
> I would far rather have tests, CI, code coverage tools than not :)
> _My_ experience clearly doesn't match with yours.
>
>> (Do I need to say names?)
> If you mean _me_, you're mistaken. But tests that are not run might as
> well not exist.

Apologies, I didn't mean that, and I didn't mean to be rude. Everything
I said (failed assumptions, lower quality, no good code review, broken
stuff that was never a bug, etc.) applies to Pharo. Or at least that's
what I see on the Pharo mailing list and what I heard from former Pharo
users.


Cheers,
Juan Vuletich


Re: Did we drop PNG support?

Frank Shearar-3
On 20 December 2013 18:49, Juan Vuletich <[hidden email]> wrote:

> Because I do it at the same time I carefully review the code myself. Then I
> can accept it or not considering my review and the tests.

I suppose I see CI as a tool to support the reviewer. If I see a green
commit status on a pull request, I at least know the code's not
obviously broken and hasn't introduced a regression. It certainly
doesn't absolve me, the reviewer, of the duty to actually read the code.
Using CI, in my opinion, simply frees the reviewer to consider the code
more deeply, because the basic automated stuff's already taken care of.

>> Tests should catch every _found_ bug. As in: if you find a bug, you
>> write a test that demonstrates the bug. Then you fix the bug.
>>
>> If I cared about catching _every possible_ bug, I'd program in Coq or ATS.
>
>
> Yes. But the stuff that constantly breaks is stuff that was never the
> subject of a bug, and has no tests.

Well, sure, most of the time. Ideally of course no line of code gets
written without a test "pulling it into existence", so to speak. With
legacy code the lack of tests is a serious problem. Certainly that's
true in Squeak's case.

>>> And people start being less careful about quality, believing that the CI
>>> server can compensate for it.
>>
>> I would far rather have tests, CI, and code coverage tools than not :)
>> _My_ experience clearly doesn't match yours.
>>
>>> (Do I need to say names?)
>>
>> If you mean _me_, you're mistaken. But tests that are not run might as
>> well not exist.
>
>
> Apologies, I didn't mean that, and I didn't mean to be rude. Everything I
> said (failed assumptions, lower quality, no good code review, broken stuff
> that was never a bug, etc.) applies to Pharo. Or at least that's what I see
> on the Pharo mailing list and what I heard from former Pharo users.

Ok, phew. Having been yelled at just the other day for breaking
things, I may still be a bit sensitive!

frank


Re: Did we drop PNG support?

Casey Ransberger-2
In reply to this post by Juan Vuletich-4
I hear what you're saying, Juan.

Counter arguments:

a) You're right, automated tests will not catch every possible bug. But they'll catch some, sometimes, and that's more information than no information.

b) I think your statement here misses the point. The point is that someone is *alerted* right away, generally by email, which is never a bad thing.

And I disagree with your statement that "people start being less careful about quality, believing that the CI server can compensate for it." Obviously the CI server isn't going to fix bugs. IMHO, implementing CI is by definition being *more* careful about quality, not less.

Let me put it another way: people won't run the tests. Even if you pay them, they probably won't run the tests. Even if you convince them that they're probably going to hell if they don't run the tests, people still won't run the tests before they check in. I'm passionate about testing, but you know when the last time I ran the tests was? It was right before the last time we shipped a major version.

Running the tests as often as every commit is never a bad thing; having to do it manually, though, is a pain in the ass.

To be clear: I know how much you care about quality. I know that you read every code submission very carefully, and you're keen to reject code that doesn't meet your standards for quality, which are very high. I've even seen you rewrite bits of submissions to improve them before including them in the system. You're right about one thing, too, which is that no amount of automation can come close to the work of a dedicated, passionate, and able human being.

It's still worth doing, when the time is right. Eventually -- we should hope -- the number of submissions will become burdensome. Automated testing can reduce some of that burden, and let us know when something has become provably broken, even if we volunteers are too busy surviving to pay close attention that day.

Juan does have a point here. He's controlling checkins to the core system right now, and there aren't yet so many submissions that he has to deputize other people to be able to merge submissions. I think it will be when we reach the point where the number of submissions exceeds Juan's available time that we will very seriously need to think about CI. I'm also glad to hear that he works in an image with the standard non-core packages loaded. That alone greatly assuages the particular concern I voiced.

One last advantage of CI: the CI system can build multiple image targets. This means it can cook us a core image and it can also cook us a standard image. The latter is advantageous because it's probably better that the easy-to-grab image we work in has "stuff we might break" loaded already; it's arguably burdensome to load all the external packages before beginning to work (I, for example, don't.)

Anyway, these are just my views, and they aren't carved on stone tablets.

Cheers,

Casey



Re: Did we drop PNG support?

Frank Shearar-3
On 21 Dec 2013, at 4:03, Casey Ransberger <[hidden email]> wrote:

> Juan does have a point here. He's controlling checkins to the core system right now, and there aren't yet so many submissions that he has to deputize other people to be able to merge submissions. I think it will be when we reach the point where the number of submissions exceeds Juan's available time that we will very seriously need to think about CI. I'm also glad to hear that he works in an image with the standard non-core packages loaded. That alone greatly assuages the particular concern I voiced.

Exactly. So why waste a dedicated, tasteful person's time doing something a computer can do? CI will tell you the tests claim to work (leaving you to ponder their rightness), will tell you the submission hasn't reintroduced old bugs or changed existing behaviour, will highlight untested code through coverage analysis, and can take on any other useful task that's scriptable. That leaves the reviewer's precious time for what's important: does the submission suck or not?

frank


Re: Did we drop PNG support?

Juan Vuletich-4
In reply to this post by Casey Ransberger-2
Ok. I see your point.

Cheers,
Juan Vuletich
