Randal L. Schwartz wrote:
>>>>>> "Josh" == Josh Gargus <[hidden email]> writes: > > Josh> I believe that the two visions are fundamentally at odds. I don't think > Josh> that it is a technical shortcoming of Sake/Packages, I just think that > Josh> any attempt to have universal cross-fork compatibility is fundamentally > Josh> doomed to either: > > Josh> 1) fail, or > > Josh> 2) "succeed", but at the cost of preventing fundamental improvements to > Josh> the programming model > > Indeed. One of the problems of non-trunk development is that the barrier > to contribution is far higher, because each individual contributor has > to understand how to make his idea *work* with *all* base images. > > Whereas the model we have now, the Squeak base gets better by local commits > and by borrowing things that make sense from Pharo and Cuis, even though the > Pharo and Cuis committers didn't even know or care that Squeak may want to > borrow it. > > And Pharo is getting better by borrowing *relevant* commits from > Squeak. > > And I, as an individual committer to Squeak, don't have to know or care > whether my patch will work on Pharo. It's up to the Pharo guys to > figure that out. > > This is a far better system. More commits, more progress has been made in the > past six months than the previous 18 months. What's boggling me about this whole brouhaha is this: surely our situation - several similar-but-not-identical Smalltalks - is pretty much like the BSD world? It's not up to, say, the FreeBSD developers to make sure that the ports stay working. That's what port maintainers are for. The port maintainer of, say, curl, then needs to make sure that curl works nicely on FreeBSD. Ditto for the NetBSD maintainer (who might, of course, be the same guy). People who actually write the packages - the curl developers, in this example - either care about their software running everywhere, in which case they stick to standards and try minimise platform specific stuff, or they don't. Of course the various roles don't just blindly muck about, but I hope we don't need to keep actually saying "and the person tries to communicate with the other people in the ecosystem". frank |
In reply to this post by Josh Gargus
>> This is what happened in the case of ifNotNil: ifNotNilDo: merger.
>
> I'm not sure what you're referring to. The old method still exists,
> so no packages that you load can break from it. What am I missing?

A new package that does not know that ifNotNil: [ :value | ] is invalid in 3.8 will not load or compile in 3.8. So you promote compatibility and the ability to migrate by fixing the OLD image, and migrating the code to the new API there. The advantage of this is that your code base can move forward in situ, your packages don't have to use the old API, and you can maintain one codebase for all Squeak images ever.

> You were probably just unaware that #ifNotNilDo: still existed.

I know it still exists.

Keith
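To make the point concrete, here is a sketch of the API difference (someObject is a placeholder; in 3.8, #ifNotNil: took a zero-argument block and the one-argument form was spelled #ifNotNilDo:):

    "3.8-era spelling: works on old and merged images alike."
    someObject ifNotNil: [ Transcript show: 'not nil'; cr ].
    someObject ifNotNilDo: [ :value | Transcript show: value printString; cr ].

    "Merged spelling: a 3.8 image rejects the one-argument block,
     at compile time or when the block runs, depending on inlining."
    someObject ifNotNil: [ :value | Transcript show: value printString; cr ].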
In reply to this post by Josh Gargus
Josh wrote:
The irony being that I can't use my build system to build my own images, because they are Pier-based, and Pier keeps its data in-image.

Keith
In reply to this post by keith1y
keith wrote:
>>> This is what happened in the case of ifNotNil: ifNotNilDo: merger.
>>
>> I'm not sure what you're referring to. The old method still exists,
>> so no packages that you load can break from it. What am I missing?
>
> A new package that does not know that ifNotNil: [ :value | ] is
> invalid in 3.8 will not load or compile in 3.8. So you promote
> compatibility and the ability to migrate by fixing the OLD image, and
> migrating the code to the new API there.
>
> The advantage of this is that your code base can move forward in situ,
> your packages don't have to use the old API, and you can maintain one
> codebase for all Squeak images ever.
>
>> You were probably just unaware that #ifNotNilDo: still existed.
>
> I know it still exists.
>
> Keith

Hi Folks,

(reposted, in the hope of not being ignored)

Package developers want their work to run on various Squeak versions and variants, without needing a rewrite. The same goes for app developers.

Base image builders want to be free of the need to provide backwards compatibility.

This is what I suggest: a package assumes it can use a set of APIs of the Squeak (/Pharo/Cuis/Etoys/Tweak/Cobalt/etc.) environment. Those assumptions should be made explicit, in the form of tests. So, for example, for collections, some package developer might require the "Common Collection API tests" to pass. Then, if his package fails to run, let's say in Cuis, he would run the tests for the APIs he needs. If some test fails, he could say "Cuis developers, you're not supporting API XXX", and expect them to fix the issue. But if no test fails, he needs to either modify his code so it doesn't use non-standardized APIs, or negotiate with (all) base image developers the addition of a new API or use case to the test suite and the base images.

Building these suites is quite some work, mostly to be done by package developers. But it can easily point out responsibilities and duties. It frees package developers of the need to have deep knowledge of various base images. And it frees base image developers from needing to know details about an unbounded set of external packages. Besides, it puts popular packages that everybody wants to support on an equal footing with less-known packages. It also lets base image developers say "we support Common APIs xxx, yyy, zzz, etc.".

All that I say about base images could also apply to packages that offer services to other packages: there could also be test suites to specify their services, allowing users to switch versions of the packages they use knowing what to expect.

What do you think?

Cheers,
Juan Vuletich
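To give a feel for what such a suite could look like, here is a minimal sketch as an ordinary SUnit TestCase. The class name CommonCollectionAPITest and the particular assertions are invented for illustration; no such suite exists yet:

    TestCase subclass: #CommonCollectionAPITest
        instanceVariableNames: ''
        classVariableNames: ''
        poolDictionaries: ''
        category: 'CommonAPI-Tests'

    "Each test pins down one use case that some package relies on."
    testCollectPreservesSize
        | result |
        result := #(1 2 3) collect: [ :each | each * 2 ].
        self assert: result size = 3.
        self assert: (result includes: 6)

    testDictionaryAtIfAbsent
        | d |
        d := Dictionary new.
        d at: #key put: 42.
        self assert: (d at: #key) = 42.
        self assert: (d at: #missing ifAbsent: [ #none ]) = #none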
In reply to this post by keith1y
Sure, and you then object-file-out the Pier data, as I do for WikiServer on the iPhone, kill the VM, start the VM with the read-only image, and read the Pier data back in.
On 2010-01-25, at 8:25 AM, keith wrote:
--
===========================================================================
John M. McIntosh <[hidden email]>   Twitter: squeaker68882
Corporate Smalltalk Consulting Ltd.   http://www.smalltalkconsulting.com
===========================================================================
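A sketch of John's object-file-out workaround using Squeak's ReferenceStream serializer ('pier-data.obj' and pierKernel are placeholders for the real file name and for whatever root object holds the Pier data):

    "Before shutting down: serialize the Pier data to a file."
    | out |
    out := ReferenceStream fileNamed: 'pier-data.obj'.
    out nextPut: pierKernel.
    out close.

    "After restarting from the read-only image: read it back in."
    | in pierKernel |
    in := ReferenceStream fileNamed: 'pier-data.obj'.
    pierKernel := in next.
    in close.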
In reply to this post by Juan Vuletich-4
2010/1/25 Juan Vuletich <[hidden email]>:
> keith wrote:
>> [...]
>
> Hi Folks,
>
> (reposted, in the hope of not being ignored)
>
> [... Juan's Common API test proposal, quoted in full ...]
>
> What do you think?
>
> Cheers,
> Juan Vuletich

A quick-cheap analysis could be performed:
- a list of classes extended by your packages
- a list of classes subclassed by your packages
- a list of methods used but not implemented by your packages
With type inference (Roel's or other), it could be possible to get more.

This could lead to tests like:

    self assertHasClassNamed: #Array.
    self assertClassNamed: #Array canUnderstand: #collect:. "If you can infer the type"
    self assertHasMessage: #at:put:. "If you cannot..."
    etc...

Doesn't that already exist?

Of course, it should operate on a set of packages...

Nicolas
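Nicolas's first analysis can be roughed out in a workspace with PackageInfo and SystemNavigation as found in recent Squeak images (the package name is a placeholder, and the assertHas... selectors above remain hypothetical):

    "Collect every selector the package sends, then keep those that
     no class in this image implements."
    | pkg sent missing |
    pkg := PackageInfo named: 'MyPackage'.
    sent := Set new.
    pkg methods do: [ :ref |
        sent addAll: (ref actualClass compiledMethodAt: ref methodSymbol) messages ].
    missing := sent reject: [ :selector |
        (SystemNavigation default allImplementorsOf: selector) notEmpty ].
    missing  "selectors used but not implemented; candidates for API tests"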
In reply to this post by Juan Vuletich-4
On Jan 25, 2010, at 12:36 PM, Juan Vuletich wrote:
> Hi Folks,
>
> (reposted, in the hope of not being ignored)
>
> [... Juan's Common API test proposal, quoted in full ...]
>
> What do you think?

The overall concept makes sense to me re: getting to a common set of APIs. It would be nice to have it in the form of formal protocols eventually, but tests would provide a simpler starting point.

This wouldn't just be helpful to image and package maintainers, but also to developers in general, as API documentation is often lacking, and an effort like this would (hopefully) encourage developers to better document what they produce.

> Cheers,
> Juan Vuletich

Thanks,
Phil
In reply to this post by Nicolas Cellier
On Mon, Jan 25, 2010 at 08:47:20PM +0100, Nicolas Cellier wrote:
> 2010/1/25 Juan Vuletich <[hidden email]>:
>>
>> [... Juan's Common API test proposal, quoted in full ...]
>
> A quick-cheap analysis could be performed:
> - a list of classes extended by your packages
> - a list of classes subclassed by your packages
> - a list of methods used but not implemented by your packages
> With type inference (Roel's or other), it could be possible to get more.
>
> This could lead to tests like:
>     self assertHasClassNamed: #Array.
>     self assertClassNamed: #Array canUnderstand: #collect:. "If you can infer the type"
>     self assertHasMessage: #at:put:. "If you cannot..."
>     etc...
> Doesn't that already exist?
>
> Of course, it should operate on a set of packages...

I like Juan's idea a lot, but I lost some enthusiasm when I got to the part about it being a lot of work ;-)

Maybe by starting with the "quick-cheap" analysis that Nicolas suggests, it might be manageable.

I think it would be important that the work be done in small chunks that can be contributed easily. We need to consider who is doing the work, and why they would be motivated to spend time on it. For example, the OSProcess package that I maintain (and I don't know if this is a good example) already has a large set of unit tests that fail right away if an expected interface changes. I would be willing to put some work into writing new tests that document just the API expectations alone, but I would not want to sink a large amount of time into it, because it's likely to be boring work that does not provide much additional benefit to me.

So I like the idea, but let's keep it as simple and easy as possible.

Dave
In reply to this post by Juan Vuletich-4
Agreed wholeheartedly.

For this vision to have a chance, one thing is absolutely, 100% essential: SUnit must be common between forks, and there must be some way of flagging known exceptions for different target images. This is something I attempted to add to SUnit in August 2006, in eager anticipation.

The second essential thing is for the package loading tools to also be in common. That means Monticello (in my book, though probably not in yours).

However, most forks IMHO are keeping all of their libraries too close to their chests. All efforts to change this, to move obvious loadable libraries like SUnit and MC out to be externally managed, have up to now failed. The weakness of my attempts so far has been on the testing side of things. (Matthew Fulmer is worth his weight in gold on that one.)

However, Monticello is a complicated beast. I may have made 400 more commits, merging 3 forks, but one or two bugs is all it takes to reject the entire refactoring of the repositories code, the improved, more uniform UI implementation, the password manager, the dual change sorter, the orphanage for out-of-order loading, public PackageInfo properties for package managers, scripting of commits, memory analysis per package, the atomic loader, cleanUp code, improved version numbering, integrated Configurations, separated tests, default PackageInfo package types, etc. etc. etc.

I always needed others who are more rigorous to join in and help, but so far the vision hasn't caught on. I now think it is going to fall to the forks for whom the libraries are already genuinely optional to pioneer this process, i.e. Cuis.
As I said, if you try to treat what is perceived as an integral library as an external package, to be maintained by a package developer, with the API maintained by "actual conversation" between the fork leaders and the package maintainers, the fork controllers won't have any of it. They forked for the purposes of retaining control, and wild horses won't shift them. I tried, I asked, I begged, I cried, I explained, and I ranted, in the belief that it was now or never.

Up until Pharo, all "forks" were basically differing applications on the same evolving kernel. With Pharo this is different: they are moving the kernel in a different direction on purpose. However, for some reason they believe that forking SUnit, an obviously loadable package, is necessary too!

Correct me if I am wrong, but to my thinking, if SUnit is forked, your vision is pretty much doomed. SUnit is forked.
However, may I point out that we don't need to do that here; we have a shared repository you can commit your changes to. It's called "trunk".

Keith
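On keith's flagging point: some SUnit versions already ship a TestCase>>expectedFailures hook that comes close; a sketch of per-fork exceptions built on it (the version-string probe and the test name are illustrative only, not an agreed mechanism):

    expectedFailures
        "Flag tests known to fail on a particular fork."
        (SystemVersion current version beginsWith: 'Pharo')
            ifTrue: [ ^ #( testSqueakSpecificBehaviour ) ].
        ^ #()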
In reply to this post by David T. Lewis
On 2010-01-25, at 1:39 PM, David T. Lewis wrote:

> I like Juan's idea a lot, but I lost some enthusiasm when I got to the
> part about it being a lot of work ;-)
>
> Maybe by starting with the "quick-cheap" analysis that Nicolas suggests,
> it might be manageable.
>
> I think it would be important that the work be done in small chunks
> that can be contributed easily. We need to consider who is doing the
> work, and why they would be motivated to spend time on it. For example,
> the OSProcess package that I maintain (and I don't know if this is a
> good example) already has a large set of unit tests that fail right away
> if an expected interface changes. I would be willing to put some work
> into writing new tests that document just the API expectations alone,
> but I would not want to sink a large amount of time into it, because
> it's likely to be boring work that does not provide much additional
> benefit to me.

I think you've hit the nail on the head here. Tests are indeed useful, but they work best when they test the functionality of interest. The base-level APIs are only interesting insofar as they affect the functionality that OSProcess provides. If you have a solid set of tests for OSProcess, and they all pass, who cares about the APIs?

From a more practical perspective, writing tests for OSProcess directly is simply easier. You can pin down the functionality you're after. (If you can't, why the heck are you writing it?) The environment that OSProcess expects to run in is much harder to specify. Should you, say, test that Dictionary implements #at:put:? Or is that assumed to be so universal in a Smalltalk implementation that it's not worth testing? Trying to specify exactly what OSProcess expects from its environment is an exercise in frustration. The only way to do it is to do a port and see what breaks. This is what the Grease developers have done, and even limited to things that have proven to be portability issues, it's a big task.

In summary, I think a better approach is to write lots of good tests for your package, and rely on them to tell you if the environment isn't what is needed.

Colin
On Jan 25, 2010, at 8:02 PM, Colin Putney wrote:

> On 2010-01-25, at 1:39 PM, David T. Lewis wrote:
>
>> [... Dave's message, quoted in full above ...]
>
> I think you've hit the nail on the head here. Tests are indeed useful,
> but they work best when they test the functionality of interest. The
> base-level APIs are only interesting insofar as they affect the
> functionality that OSProcess provides. If you have a solid set of tests
> for OSProcess, and they all pass, who cares about the APIs?
>
> [...]
>
> In summary, I think a better approach is to write lots of good tests
> for your package, and rely on them to tell you if the environment isn't
> what is needed.

I agree with this. If many people are writing tests for their respective codebases, then it is very likely that someone's test will notice breakages in the libraries that they rely on. Plus, every test will be in the context of a real use case; as Colin notes, it's difficult to reliably anticipate which use cases to write tests for, and to avoid wasting time on trivial and unnecessary tests.

Cheers,
Josh

> Colin
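To make the contrast concrete: a functional test exercises the package's own behaviour and implicitly verifies every base-image API it touches, while an environment test checks one API in isolation. A sketch, with MyParser and its protocol entirely hypothetical:

    "Functional test: pins down the behaviour users depend on; the base
     APIs it rests on (streams, collections) are checked as a side effect."
    testSimpleExpression
        self assert: (MyParser evaluate: '1 + 2') = 3

    "Environment test: checks a single base API in isolation. Colin's
     point is that these are hard to enumerate and mostly redundant."
    testDictionaryAtPut
        | d |
        d := Dictionary new.
        d at: #k put: 1.
        self assert: (d at: #k) = 1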
In reply to this post by keith1y
keith wrote:
>> Hi Folks,
>>
>> [... Juan's Common API test proposal, quoted in full ...]
>
> Agreed wholeheartedly.
>
> For this vision to have a chance, one thing is absolutely, 100%
> essential: SUnit must be common between forks, and there must be some
> way of flagging known exceptions for different target images. This is
> something I attempted to add to SUnit in August 2006, in eager
> anticipation.

Why? All that is needed is to be able to run the same tests on all forks. That is asking a lot less than that the SUnit package be exactly the same... See Julian's recent message about Seaside and Grease. It even works across Smalltalk dialects.

> The second essential thing is for the package loading tools to also be
> in common. That means Monticello (in my book, though probably not in
> yours).

Why? This has nothing to do with how code is loaded into each environment. Package developers might choose between ChangeSets, Monticello, or possibly other options.

> However, most forks IMHO are keeping all of their libraries too close
> to their chests.

This initiative (actually Grease) allows each fork to do exactly that, while having guaranteed compatibility. It is the best of both worlds.

> All efforts to change this, to move obvious loadable libraries like
> SUnit and MC out to be externally managed, have up to now failed.
>
> [... keith's account of his Monticello merge work, quoted in full ...]

Those are package-specific problems. I suggest getting in touch with the Monticello developers to merge your changes.

> Correct me if I am wrong, but to my thinking, if SUnit is forked, your
> vision is pretty much doomed.

As I said above, I see no reason for this.

Cheers,
Juan Vuletich
In reply to this post by Colin Putney
Colin Putney wrote:
> On 2010-01-25, at 1:39 PM, David T. Lewis wrote:
>
>> [... Dave's message, quoted in full above ...]
>
> [... Colin's reply, quoted in full above ...]
>
> In summary, I think a better approach is to write lots of good tests
> for your package, and rely on them to tell you if the environment isn't
> what is needed.
>
> Colin

You're right, but there's Grease. If other packages besides Seaside adopt it, it's a win-win.

Cheers,
Juan Vuletich
On Tue, Jan 26, 2010 at 11:49:08AM -0300, Juan Vuletich wrote:
> You're right, but there's Grease. If other packages besides Seaside
> adopt it, it's a win-win.

+1

Dave
In reply to this post by Juan Vuletich-4
Matthew and I were the Monticello maintainers for 3 years, after there had been none for at least a year. That was the whole point of setting up a shared repository, squeaksource.com/mc, so that Monticello could be maintained and worked on by anyone who knew how.

Most of us work with the latest of established packages on a day-to-day basis. Yet for some reason, both Pharo and "trunk" adopted the ancient version. There are no more bugs in the new version; the existing bugs are just in slightly different places. The new version passes Lukas' "difficult test case", whereas the old one doesn't.

Keith
In reply to this post by Juan Vuletich-4
So when Magma, written in Squeak, requires one variant with complex facilities such as remote invocation of images, and Seaside, written in Pharo, requires another, the integrator who wishes to test both in one image may find irreconcilable differences. Not all testing code uses the lowest common denominator.

So what will happen is that multiple variants of SUnit will exist in a creative tension, to the extent that evolving any of them will become virtually impossible.

A trivial example: I prefer that shouldInheritSelectors be specified explicitly; most implementations set it automatically for abstract classes. An "improvement" as simple as this will never happen. Another trivial example: there are no users of LongTestCase in the Squeak image; having a general test categorisation mechanism would provide the same facility. Write one test case that requires LongTestCase, and you force me to remain compatible.

What is so wrong with treating SUnit as a loadable package with maintainers and conversations to discuss its future, so that it may actually evolve? You seem to think it is a bad thing.

Keith

p.s. I think Cuis will be great for Squeak, because...

1. as long as it loads in Cuis, it will load in most places.
2. The Cuis versions are likely to be simpler than others.
keith wrote:
>> Why? All that is needed is to be able to run the same tests on all forks.
>
> So when Magma, written in Squeak, requires one variant with complex
> facilities such as remote invocation of images, and Seaside, written in
> Pharo, requires another, the integrator who wishes to test both in one
> image may find irreconcilable differences. Not all testing code uses the
> lowest common denominator.

I see. So there are actually several versions of SUnit maintained as external packages by different teams? I didn't know about that... If those external packages already exist and have maintainers, I have nothing against that.

> So what will happen is that multiple variants of SUnit will exist in a
> creative tension, to the extent that evolving any of them will become
> virtually impossible.
>
> A trivial example: I prefer that shouldInheritSelectors be specified
> explicitly; most implementations set it automatically for abstract
> classes. An "improvement" as simple as this will never happen. Another
> trivial example: there are no users of LongTestCase in the Squeak image;
> having a general test categorisation mechanism would provide the same
> facility. Write one test case that requires LongTestCase, and you force
> me to remain compatible.

Hey, I'll never force you to do anything at all.

> What is so wrong with treating SUnit as a loadable package with
> maintainers and conversations to discuss its future, so that it may
> actually evolve? You seem to think it is a bad thing.

Not at all. I just didn't know those packages and their teams actually existed.

> Keith
>
> p.s. I think Cuis will be great for Squeak, because...
>
> 1. as long as it loads in Cuis, it will load in most places.
> 2. The Cuis versions are likely to be simpler than others.

I'm not that sure about either of those, but you might be right. Just please keep in mind that in the "Cuis Manifesto", or whatever it should be called, I say: "This means that there are no guarantees of compatibility between Cuis and anything else, including the various releases and derivatives of Squeak, or even other releases of Cuis itself."

Cheers,
Juan Vuletich
In reply to this post by keith1y
keith wrote:
> Most of us work with the latest of established packages on a day-to-day
> basis. Yet for some reason, both Pharo and "trunk" adopted the ancient
> version.

Simple answer: the old version works, and it had tons of mileage. When I asked for feedback on who had been using MC 1.5 and 1.6, I drew blanks from anyone but you and Matthew. When I then tried to see whether one of these versions could do everything that the current shipping version can do, I ran into the issues described here:

http://lists.squeakfoundation.org/pipermail/squeak-dev/2009-October/140345.html

The point being that if the new version can't deal with all the cases that the old version could, then it's probably not ready for adoption yet. If the issues listed above have been addressed since, I'd be happy to repeat the experiment.

Cheers,
  - Andreas
On 27 Jan 2010, at 02:15, Andreas Raab wrote:
Hi Andreas,

MC1.5 has quite a few users out there: anyone who uses LPF, which includes Randal for a start. I would expect MC1.5 to be stable enough; this is the one with the atomic loading preference turned OFF.

The email you reference above is referring to MC1.6 (MCPackageLoader2). This is the experimental, atomic-loading loader, which everyone knows isn't finished; no one ever claimed it was stable. We only ever claimed it would be really worth finishing, and I had been asking for help with it for more than 18 months, because it is not my area of expertise at all, and Matthew had got stuck AFAIK.

So, the point being: if you test the wrong thing, you won't get the results you hoped for.

cheers

Keith
2010/1/27 keith <[hidden email]>:
> On 27 Jan 2010, at 02:15, Andreas Raab wrote:
>
> keith wrote:
>> Most of us work with the latest of established packages on a day-to-day
>> basis. Yet for some reason, both Pharo and "trunk" adopted the ancient
>> version.
>
> Simple answer: the old version works, and it had tons of mileage. [...]
>
> The point being that if the new version can't deal with all the cases
> that the old version could, then it's probably not ready for adoption
> yet. If the issues listed above have been addressed since, I'd be happy
> to repeat the experiment.

I tried to look at the SystemEditor code a while ago. It would be cool, first, to make all its tests green. But I found some inconsistencies between what the tests say and what the code actually does. I don't think that it would be possible to fix Traits support without the author of SystemEditor. Only then could we move on and try using it for atomic loading (in MC, DS or whatever).

> [... keith's reply about MC1.5 and MC1.6, quoted in full above ...]

--
Best regards,
Igor Stasenko AKA sig.