On Sun, 21 Feb 2021 at 20:33, Eliot Miranda <[hidden email]> wrote:

>> +10. Don’t let the perfect be the enemy of the good. Incremental
>> progress benefits from amplifying feedback. An absence of progress
>> can’t.

> The problem with this is that it leads to a rather toxic final
> conclusion...
> .. WorseIsBetter

The problem is "final". _Evolution_ is relentless and continuous.
Thinking back to the arguments in _The Blind Watchmaker_, one needs to
continually observe and refine / replace / clarify.

One does not hear much about dynamic language usage in commercial
settings because [1] evolving features faster than your competition is
a strategic advantage and [2] investors get distracted by "why are you
using language X" rather than focusing on the strategic advantages of
particular development practices within a business context.

While OS features have tended to evolve slowly, placing ourselves in a
position to evolve our systems should lead to better outcomes.

One aspect we have not touched on yet is the modelling of the HW
systems themselves. I like to keep in mind Dan Ingalls' dictum:

"Reactive Principle. Every component accessible to the user should be
able to present itself in a meaningful way for observation and
manipulation."

Being able to present and observe the CPU/GPU/GPIO/USART/USB/.. seems a
useful exercise for a meta/self-knowledgeable system, especially one
concerned with hot-plug devices and live updates.

Placed in this context, device drivers and memory systems can be much
more interesting.

IMHO, C has been a fine language for device drivers but got into
trouble when trying to scale up to large systems. Perhaps the time has
come when it is again useful to look at ways to scale, e.g. Smalltalk,
down to smaller/finer use cases.

-KenD

[Note also:
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.7354&rep=rep1&type=pdf ]
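In Smalltalk terms, the Reactive Principle applied to a hardware
component might look something like the minimal sketch below: a
hypothetical GpioPin object that can present itself to the standard
tools. The class, variables and selectors here are illustrative
assumptions only, not taken from any existing Squeak package.

    "A minimal sketch, assuming a hypothetical GpioPin class."
    Object subclass: #GpioPin
        instanceVariableNames: 'number direction value'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'HardwareModel'

    GpioPin >> printOn: aStream
        "Observation half of the Reactive Principle: present this pin
         in a meaningful, human-readable way."
        aStream
            nextPutAll: 'GPIO pin ';
            print: number;
            nextPutAll: ' direction: ';
            print: direction;
            nextPutAll: ' value: ';
            print: value

    GpioPin >> value: aBit
        "Manipulation half: remember the requested level. Actually
         touching a hardware register is left out of this sketch."
        value := aBit

With only that much in place, sending #inspect or #explore to a GpioPin
instance already gives a live view of the pin that can be observed and
poked at, which is most of what the Reactive Principle asks for.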
On Mon, 22 Feb 2021 at 21:03, <[hidden email]> wrote:
> I like to keep in mind Dan Ingalls' dictum:
>
> "Reactive Principle. Every component accessible to the user should be
> able to present itself in a meaningful way for observation and
> manipulation."
>
> Being able to present and observe the CPU/GPU/GPIO/USART/USB/.. seems a
> useful exercise for a meta/self-knowledgeable system, especially one
> concerned with hot-plug devices and live updates.
>
> Placed in this context, device drivers and memory systems can be much
> more interesting.

Well, broadly keeping this principle in mind was why I ended up proposing
the Oberon system, and specifically A2, in the talk. Oberon is famed for
being a readable, comprehensible system, even by a single person, even if
that person is still a student. It is still being used for teaching at
ETH and, I believe, in Linz; it still has a community of users and fans,
and judging from the traffic levels and GitHub activity, there is quite a
lot of R&D going on in Russia. I have even been invited to an online
symposium on the language and OS, but my Russian is all but nonexistent.

> IMHO, C has been a fine language for device drivers but got into trouble
> when trying to scale up to large systems.

I agree. This is a profoundly heretical view to state these days. For
saying it, I have been told to kill myself on Twitter. I wish I were
joking.

> Perhaps the time has come when it is again useful to look at ways to
> scale, e.g. Smalltalk, down to smaller/finer use cases.

Well, yes, that's what I was getting at! :-)

--
Liam Proven – Profile: https://about.me/liamproven
Email: [hidden email] – gMail/gTalk/gHangouts: [hidden email]
Twitter/Facebook/LinkedIn/Flickr: lproven – Skype: liamproven
UK: +44 7939-087884 – ČR (+ WhatsApp/Telegram/Signal): +420 702 829 053