One of my goals for 3.10 is to improve the quality of the image. Our
first release (coming soon!) will have only green tests, and each following
release will have only green tests. But there are many other things that could
be checked automatically. For example, there should be no unimplemented methods
in the released image. Unfortunately, there are a lot right now, so we can't
make that rule. But I would like to have all these fixed by the end of the 3.10
cycle and to be able to enforce the rule that no release has any unimplemented
methods.

Jerome Peace has been working on getting rid of unimplemented methods and has
a lot of fixes. You can find them at http://bugs.impara.de/view.php?id=4544
This is the original Mantis issue that he has been working on. Most of the
fixes are in "child" issues, but you can find them from that page.

You can help by checking Jerome's fixes. If you are familiar with the code he
is changing, read it and see whether you can spot anything wrong. If you can,
post a note. If you can't, please post a note that, as far as you can tell, it
looks good. If you aren't that familiar with the code, but are working on an
application that uses it, please file in the changes and try out your
application. Again, report on the results!

If two or three people try out some changes and everybody thinks they are OK,
the release team will mark the issue as "resolved" and put it in the next
release.

We will make sure the code doesn't break any tests. But if you don't try out
the code then we'll have to try it as well, and that will take a lot more
time. So, you can help us get more work done by checking out these fixes.

Thanks!

-Ralph Johnson
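The automated check Ralph describes can already be approximated from a
workspace. This is only a sketch: the selector names below are the ones from
3.9-era images as I recall them, so treat them as assumptions and browse
SystemNavigation in your own image if they differ:

    "Open a browser on every sender of a message that has no
     implementation anywhere in the image."
    SystemNavigation default browseAllUnimplementedCalls.

    "Or just collect the raw list instead of browsing it."
    SystemNavigation default allUnimplementedCalls.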
On 1/26/07 10:10 AM, "Ralph Johnson" <[hidden email]> wrote:
> Jerome Peace has been working on getting rid of unimplemented methods
> and has a lot of fixes. You can find them at
> http://bugs.impara.de/view.php?id=4544 This is the original Mantis
> issue that he has been working on. Most of the fixes are in "child"
> issues, but you can find them from that page.
>
> You can help by checking Jerome's fixes. [...]

I have to look again, but what I discussed with Jerome works. Pavel has few
unimplemented methods in his Kernel, but I don't have his code. Maybe all of
Pavel's and Jerome's work on unimplemented methods could give a cleaner image.

Edgar
In reply to this post by Ralph Johnson
2007/1/26, Ralph Johnson <[hidden email]>:
> One of my goals for 3.10 is to improve the quality of the image. Our
> first release (coming soon!) will have only green tests, and each
> following release will have only green tests.

How does removing failing tests improve the quality?

Cheers
Philippe
On Jan 26, 2007, at 16:03, Philippe Marschall wrote:

> 2007/1/26, Ralph Johnson <[hidden email]>:
>> One of my goals for 3.10 is to improve the quality of the image. Our
>> first release (coming soon!) will have only green tests, and each
>> following release will have only green tests.
>
> How does removing failing tests improve the quality?

Whoa, where does that hostility come from? There is another way to ensure all
tests are green, besides removing the failing ones.

- Bert -
Some of the bugs got fixed, some of the tests were wrong and got
deleted, and the rest of the tests got moved to a to-do list.

It is important that people be able to run the tests to see if they broke
anything. This cannot happen if tests that are known to fail are mixed with
tests that are supposed to succeed.

Perhaps you have never worked on a project in which all tests worked all the
time. This is an important tool in achieving high-quality software.

-Ralph Johnson
In reply to this post by Bert Freudenberg
2007/1/26, Bert Freudenberg <[hidden email]>:
> > On Jan 26, 2007, at 16:03, Philippe Marschall wrote:
> >
> > > How does removing failing tests improve the quality?
> >
> > Whoa, where does that hostility come from? There is another way to
> > ensure all tests are green, besides removing the failing ones.

What hostility? I could not see why this improves the quality, because to me
the first step to fixing a problem is to admit that you have a problem.
Failing tests are pointers to problems for me. Removing failing tests because
they cannot be fixed today or tomorrow looked to me like an attempt to hide a
problem. So I asked, and now I know the reason why it was done.

Philippe
Philippe Marschall wrote:
> 2007/1/26, Bert Freudenberg <[hidden email]>:
>> Whoa, where does that hostility come from? There is another way to
>> ensure all tests are green, besides removing the failing ones.
>
> What hostility? I could not see why this improves the quality, because
> to me the first step to fixing a problem is to admit that you have a
> problem. Failing tests are pointers to problems for me. Removing
> failing tests because they cannot be fixed today or tomorrow looked to
> me like an attempt to hide a problem. So I asked, and now I know the
> reason why it was done.
>
> Philippe

Philippe, where did you read that failing tests will be removed? "First
release will have only green tests" means that all tests remain and will
pass, not fail. There will be no test removal at all! I'm pretty sure you
misunderstood something.

Elod
Hi folks!
>>> > How does removing failing tests improve the quality?
>>>
>>> Whoa, where does that hostility come from? There is another way to
>>> ensure all tests are green, besides removing the failing ones.

etc etc.

Ok, IMHO this is all about classification of tests. Perhaps someone has
already proposed the following, but how about:

- If there is a bug and someone authors a test to show it and it enters the
image (let's ignore *that* particular question for 3 seconds - many of course
argue that it should not enter the image unless accompanied by a fix), why
not mark it as "has never worked" - or something along those lines?

That way we can keep all *working* (not marked) tests green and at the same
time have a list of non-working tests (those marked) that typically are all
red. When something is fixed and turns green we remove the marker.

Sure, this may be a daft idea :), just wanted to mention it.

regards, Göran
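For what it's worth, Göran's marker can be sketched with plain SUnit objects.
Everything named below (MyKernelTest and the two test selectors) is
hypothetical; TestResult>>failures and TestCase>>selector are stock SUnit:

    "Partition one run's reds into 'known, has never worked' and
     genuine regressions."
    | known result knownRed regressions |
    known := #(testCopyNonLocal testWeakRegistry).
    result := MyKernelTest suite run.
    knownRed := result failures
        select: [:each | known includes: each selector].
    regressions := result failures
        reject: [:each | known includes: each selector].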
In reply to this post by Elod Kironsky
2007/1/29, Elod Kironsky <[hidden email]>:
> Philippe, where did you read that failing tests will be removed? "First
> release will have only green tests" means that all tests remain and
> will pass, not fail. There will be no test removal at all! I'm pretty
> sure you misunderstood something.

http://bugs.impara.de/view.php?id=5527

Philippe
Philippe Marschall wrote:
> 2007/1/29, Elod Kironsky <[hidden email]>:
>> Philippe, where did you read that failing tests will be removed?
>> "First release will have only green tests" means that all tests
>> remain and will pass, not fail. There will be no test removal at all!
>> I'm pretty sure you misunderstood something.
>
> http://bugs.impara.de/view.php?id=5527
>
> Philippe

Then I'd support Göran's proposition to classify the tests; removing them is
not a good solution, I think.

Elod
Failing tests do not belong in the image. They belong in Mantis. If
you have a test that ought to run but does not, then post it to Mantis.

We should not release an image with failing tests. In general, the 3.10
release team will not accept changes that break tests. If the image contains
failing tests then it is hard for people to tell that their proposed change
breaks them. It is crucial that there be NO failing tests in the image, so
that anyone can easily check whether their change caused any of the tests to
fail. Go to the test runner, select all tests, run them, and make sure
everything is green.

There is no need to make some new way of classifying tests. The way we
classify tests is that tests that do not work are in Mantis as bug reports.
If you want to work on them then you can easily file them in.

-Ralph Johnson
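Ralph's "select all tests, run them" step can also be scripted from a
workspace. A rough equivalent, assuming stock SUnit (TestCase class>>
isAbstract is present in 3.9-era SUnit; check your image if it complains):

    | suite result |
    suite := TestSuite new.
    TestCase allSubclasses do: [:each |
        each isAbstract ifFalse: [suite addTest: each suite]].
    result := suite run.
    "Anything but zero failures and zero errors means a red bar."
    Transcript cr; show: result printString.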
Ralph Johnson wrote:
> Failing tests do not belong in the image. They belong in Mantis. If
> you have a test that ought to run but does not, then post it to Mantis.
>
> [...]
>
> There is no need to make some new way of classifying tests. The way we
> classify tests is that tests that do not work are in Mantis as bug
> reports. If you want to work on them then you can easily file them in.

Is there not a difference between an image under development and a released
image? I think so. A released image should not have failing tests, but it
should also not contain the code that the test failed on.

So, defect #5527 should read: "Some of the tests in the final released 3.9
image fail. You should be able to run all tests in a *RELEASED* image and get
a green bar in the TestRunner."

Also, what do you mean by this: "... and I deleted some of them and moved the
rest of them to subclasses in a category FailingTests, which I then deleted.
I have attached a change file that makes all the tests green in the final
released image."

Did you delete the tests AND the code that it tests? Just removing the tests
doesn't really help much if the code that it tests is still there. I'm sure
you agree with this, so that is why I'm a bit stumped.

--
brad fuller
www.bradfuller.com
> Did you delete the tests AND the code that it tests? Just removing the
> tests doesn't really help much if the code that it tests is still there.
> I'm sure you agree with this, so that is why I'm a bit stumped.

You completely miss the point. Leaving broken tests in the image does not
cause the tests to be fixed. Broken tests have been in the image for years
and they did not help.

There are a lot of bugs in Squeak. Squeak is, in general, of "experimental"
quality. It is amazing to me that people actually develop commercial products
with it, but they do. I would like to raise the quality of Squeak. Part of
this is making sure that there is a regression test suite that actually
works, and preventing people from breaking it. There are lots of other parts,
of course. The image should be smaller, it should have more tests, more bugs
should be fixed, there should not be any unimplemented messages. No one thing
will improve quality, but a lot of little things will.

In general, it is not easy to find the code that a test exercises. What you
are asking does not make sense.

Please run the tests in 3.9, look at the ones that are broken, and try to
figure out what code is bad and should be deleted. It will be an enlightening
experience.

-Ralph Johnson
Ralph Johnson wrote:
> You completely miss the point. Leaving broken tests in the image does
> not cause the tests to be fixed. Broken tests have been in the image
> for years and they did not help.
>
> [...]
>
> Please run the tests in 3.9, look at the ones that are broken, and try
> to figure out what code is bad and should be deleted. It will be an
> enlightening experience.

I agree with you that there are a lot of bugs in Squeak and they should be
fixed. I'm not suggesting that they should be left! I am thankful that you
want to raise the quality of Squeak; I'm sure everyone does. But you are
taking an active role. Thanks.

Forgive me for missing your point, and please bear with me while I ask
another question. There's a difference between the tests being broken and the
code being broken. You said: "Leaving broken tests in the image does not
cause the tests to be fixed. Broken tests have been in the image for years
and they did not help."

So, you are only talking about removing tests that don't work correctly? Not
tests that work correctly and fail because the code is buggy. Ok, that makes
sense.

--
brad fuller
www.bradfuller.com
> So, you are only talking about removing tests that don't work correctly?
> Not tests that work correctly and fail because the code is buggy.

If we know that a test is improper then we just delete it. We did some of
that. If we know how to fix it then we fix it. We did a little of that.

Sometimes it is not clear whether a test is correct. Sometimes we know there
is a bug but we don't know how to fix it. In both of those cases, we moved
the tests to Mantis. Moving a test out of the image does not mean it has gone
forever.

A test that fails is a bug: either a test that shows a bug in the code, or a
buggy test. Bug reports are supposed to be in Mantis. The tests in the image
should all work, so we can tell that if we run them after we make a change
and some of them fail, then we made a mistake.

-Ralph
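"Filing in" a fix parked on Mantis is a one-liner once you have saved the
attached change set locally; the file name here is made up for illustration:

    "Load a change set downloaded from a Mantis issue."
    (FileStream readOnlyFileNamed: 'FailingTests-5527.cs') fileIn.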
In reply to this post by Göran Krampe
Dear Goran,
I am with you on this one. Having broken tests visible, but correctly
categorised, is key to getting things sorted, IMHO. I don't think hiding
things away on Mantis is visible enough.

You might like to try my improvements to TestRunner and SUnit which implement
these changes. You can try it by executing:

Installer fixBug: 5639.

best regards

Keith

> Ok, IMHO this is all about classification of tests. Perhaps someone has
> already proposed the following, but how about:
>
> [...]
>
> Sure, this may be a daft idea :), just wanted to mention it.
>
> regards, Göran
In reply to this post by Ralph Johnson
Ralph Johnson wrote:
> ...[several excellent points]...
> There are a lot of bugs in Squeak. Squeak is, in general, of
> "experimental" quality. It is amazing to me that people actually
> develop commercial products with it, but they do.
> ...[more excellent points]...

Relative to what I'd like to see, there are indeed a lot of bugs in Squeak.
While I don't want to take anything away from the thrust of Ralph's plan, I'd
like the record to show that there is not necessarily consensus that Squeak
is buggy relative to other platforms - commercial or otherwise.

For example, I spend far too much time on the following technology stack:

- A financial application costing tens of $M and responsible for processing
  several $B each year
- PeopleSoft
- Oracle
- Windows

Relative to that, Squeak is of absolute perfect quality.

Cheers,
-Howard
In reply to this post by keith1y
I actually side with Ralph on this one. It is very satisfying to see
the test runner turn green. With tests in the image that you cannot fix, you
will never get this satisfaction.

You should get used to using the bug tracker. If you have a little time to
spare, go there, pick an issue that looks interesting or easy, fix it. Or
better yet, write a test for it if there is none, run it, then fix it, and
enjoy the soothing green of the test run. It'll make you smile :)

- Bert -
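The write-a-test-first routine Bert describes is small in SUnit. A minimal
sketch - the class name is invented here, but String>>withBlanksTrimmed is a
real Squeak selector, so this one should come up green:

    TestCase subclass: #StringTrimTest
        instanceVariableNames: ''
        classVariableNames: ''
        poolDictionaries: ''
        category: 'MyBugTests'

    "Then, on the instance side:"
    testWithBlanksTrimmed
        "If this ever turns red, that is a bug report for Mantis."
        self assert: '  hello  ' withBlanksTrimmed = 'hello'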
In reply to this post by keith1y
Keith Hodges wrote:
> I am with you on this one. Having broken tests visible, but correctly
> categorised, is key to getting things sorted, IMHO. I don't think
> hiding things away on Mantis is visible enough.

Just to clarify: we will have the ability to release more than one image - a
'refined', all-tests-green, slimmed-down image for public consumption, and a
'this is what needs working on' image for those interested in fixing things.

Keith
In reply to this post by Howard Stearns
Hello Howard,
+1, just replace Oracle with the other really big players in the software
market.

And to Ralph: the plan of running the tests with the knowledge that *every*
red was introduced by me is very good!

Herbert
mailto:[hidden email]