Hi all.
Is it the policy of the VM makers (whoever they currently are) to prevent the VM from crashing, particularly when given malicious bytecodes?

This is a general question, mostly related to http://bugs.squeak.org/view.php?id=1395, which is now closed. Is it considered a bug if I can crash the VM with a maliciously crafted method?

Which direction would the Squeak community want to go in? Should we aim to have a VM that would never seg fault and dump core (or blue screen under Windows), regardless of what rubbish is fed to it? Doing extra sanity checks and bounds checking would possibly have a performance penalty.

Regards,
Gulik.

--
http://people.squeakfoundation.org/person/mikevdg
http://gulik.pbwiki.com/
Michael van der Gulik wrote:
> Is it the policy of the VM makers (whoever they currently are) to
> prevent the VM from crashing, particularly when given malicious
> bytecodes?
>
> Doing extra sanity checks and bounds checking would possibly have a
> performance penalty.

The Squeak VM has neither dynamic bounds checks on temp, inst var, or literal accesses, nor static checks like the Java VM does. I am not a VM maker (although I played one a long time ago), so I can't speak for them, but given the dynamic nature of the Smalltalk environment it seems a bit difficult to design a Smalltalk VM that is absolutely safe against manipulation. Java does not allow many of the operations that make Smalltalk so powerful and malleable, making static checking much easier.

At the moment, I'd guess that a tamper-proof VM is not a primary goal for Squeak, although it would be nice to have one for certain applications.

Cheers,
Hans-Martin
In reply to this post by Michael van der Gulik-2
Hi,
On Dec 28, 2007, at 10:41 PM, Michael van der Gulik wrote:

> Is it the policy of the VM makers (whoever they currently are) to
> prevent the VM from crashing, particularly when given malicious
> bytecodes?

Perhaps one way to solve the problem is to avoid loading bytecode and instead load the source code, compiled with a trusted compiler. In Smalltalk the bytecode can easily be decompiled, so if the intention is to hide the code, it isn't worth loading bytecode.

Mth
In reply to this post by Michael van der Gulik-2
I think perhaps the SqueakELib project should tackle this.
Squeak is not secure and does not pretend to be secure, although there are attempts to lock down file/socket access to keep casual users from doing undesirable things.

However, other forks of the VM, like SqueakELib, want "a multithreaded VM for a secure, distributed object implementation". Note the word *secure*. Buffer overflows and bytecode hacks are all valid tactics against *secure* VMs, so go over there and ask: http://wiki.squeak.org/squeak/6011

Otherwise, if you can compile Smalltalk code that causes the VM to crash, then we are always interested, plus you get bonus points if it causes VisualWorks to crash too.

On Dec 28, 2007, at 1:41 PM, Michael van der Gulik wrote:

> Is it the policy of the VM makers (whoever they currently are) to
> prevent the VM from crashing, particularly when given malicious
> bytecodes?

--
John M. McIntosh <[hidden email]>
Corporate Smalltalk Consulting Ltd.
http://www.smalltalkconsulting.com
On Dec 28, 2007 5:08 PM, John M McIntosh <[hidden email]> wrote:

> so go over there and ask...
> http://wiki.squeak.org/squeak/6011

It'd be nice if the author(s) placed their name(s) on the wiki page. (If it's there, I didn't see it.)
In reply to this post by Michael van der Gulik-2
On 28-Dec-07, at 1:41 PM, Michael van der Gulik wrote:

> Is it the policy of the VM makers (whoever they currently are) to
> prevent the VM from crashing, particularly when given malicious
> bytecodes?

I think the truth would be somewhere around the area of "we try to make it reasonably safe and stable". There are simple practical problems, such as lack of time to do more. There are other problems, such as #become:, which can easily swap a perfectly sound method for some arbitrary object. In between those are simple mistakes in logic and code. We do our best given the constraints. If you collectively want noticeably better, you'll collectively have to come up with some serious resources to enable the work.

tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
Useful random insult:- Mind like a steel sieve.
In reply to this post by johnmci
On Dec 29, 2007 2:08 PM, John M McIntosh <[hidden email]> wrote:

> I think perhaps the SqueakELib project should tackle this.

Sure, so compiler-generated code that can crash the VM is considered a valid Squeak bug, but hand-crafted malicious bytecodes that crash Squeak are considered to be the programmer's fault.

My project's page is at http://gulik.pbwiki.com/SecureSqueak. I'm not ready to start on modifying the VM, but when I get that far, I'll let people like Ron Teitelbaum know.

Gulik.

--
http://people.squeakfoundation.org/person/mikevdg
http://gulik.pbwiki.com/
In reply to this post by Michael van der Gulik-2
[apologies if this is late in the discussion. I have spent
the last couple of hours battling with various verschluggener mail
profiles for reestablishing a setup that works for me to post on
squeak-dev [my problem and new ISP ;-]]
Michael van der Gulik <[hidden email]>
wrote...
> Is it the policy of the VM makers (whoever they currently are) to
> prevent the VM from crashing, particularly when given malicious
> bytecodes? Which direction would the Squeak community want to go in?
> Should we aim to have a VM that would never seg fault and dump core
> (or blue screen under Windows), regardless of what rubbish is fed to
> it? Doing extra sanity checks and bounds checking would possibly have
> a performance penalty.

This is an interesting topic, and at this point I can't presume
to answer where the community wants to go with it. I can say
that, as first designed, the VM was considered to be perfect if it
would execute every valid method perfectly, and ideal if it did so at
maximum speed.
To deal with garbage would require a lot of checks to be made,
all of them in frequent bytecodes, and many of them more expensive
than the actual operations themselves. For instance, it would
appear that you would want to do range checks on loads, stores and
jumps, which is just about all there is. I suppose you
could do less than this and still make it uncrashable, but if one is
going to pay this kind of attention to integrity, one would want to
tell the user at the first sign of error, rather than just keep making
errors but not crash. This would mean checking load bounds
rather than waiting until something weird happens later because you
loaded garbage from an invalid location.
Therefore, I suspect that things would be much more complicated,
and would run something like 2-3 times slower than before. And
what, exactly, is improved over VMs of the last 10 years? My
opinion, FWIW, is that execution of valid methods well compiled is
still a good success criterion for the VM. If someone wants to
feed the VM from a different source, then they should provide
appropriate integrity checks on their bytecodes. And if security
is the real agenda here, then it's probably best to acknowledge that
up front and look at the bigger system issues involved
(http://www.ERights.org being a good place to start ;-), before
worrying about malicious bytecodes.
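[Editorial note: Dan's cost argument can be made concrete with a toy sketch. The Python fragment below is an invented illustration, nothing like the real Squeak interpreter, with made-up opcode names; it shows the kind of per-bytecode range checks on loads, stores, and jumps he describes, which turn would-be memory corruption into catchable errors.]

```python
def run_checked(code, temps, literals, max_steps=10000):
    """Interpret a toy bytecode list, raising instead of corrupting memory."""
    stack, pc = [], 0
    for _ in range(max_steps):
        if pc >= len(code):                    # fell off the end: done
            return stack
        op, arg = code[pc]
        pc += 1
        if op == "pushTemp":                   # load: bounds-check the temp index
            if not 0 <= arg < len(temps):
                raise IndexError("temp index %d out of range" % arg)
            stack.append(temps[arg])
        elif op == "pushLit":                  # load: bounds-check the literal index
            if not 0 <= arg < len(literals):
                raise IndexError("literal index %d out of range" % arg)
            stack.append(literals[arg])
        elif op == "storeTemp":                # store: check index and stack depth
            if not 0 <= arg < len(temps) or not stack:
                raise IndexError("bad store")
            temps[arg] = stack.pop()
        elif op == "jump":                     # jump: target must stay in-method
            if not 0 <= arg <= len(code):
                raise IndexError("jump target %d out of range" % arg)
            pc = arg
        else:
            raise ValueError("unknown bytecode %r" % (op,))
    raise RuntimeError("step budget exhausted")

# A well-formed method runs to completion...
print(run_checked([("pushLit", 0)], temps=[], literals=["ok"]))
# ...while a malicious one is caught instead of seg-faulting:
try:
    run_checked([("pushLit", 99)], temps=[], literals=["ok"])
except IndexError as e:
    print("caught:", e)
```

Every one of those `if` tests sits in the hottest path of the interpreter loop, which is exactly why a fully checked VM is estimated above to run several times slower.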
In reply to this post by Mathieu SUEN
It shouldn't be difficult to verify the "well-formedness" of compiled
methods without the overhead of a complete recompilation. As with
Mathieu's suggestion, this can be done once at load-time so that there
is no performance penalty at run-time.

It seems like ByteSurgeon (http://www.iam.unibe.ch/~scg/Research/ByteSurgeon/) might be the right tool for the task; perhaps someone more familiar with it can comment?

Josh

On Dec 28, 2007, at 3:22 PM, Mathieu Suen wrote:

> Perhaps one way to solve the problem is to avoid loading bytecode and
> instead load the source code, compiled with a trusted compiler. In
> Smalltalk the bytecode can easily be decompiled, so if the intention
> is to hide the code, it isn't worth loading bytecode.
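[Editorial note: a load-time well-formedness check of the sort Josh and Mathieu describe can be sketched in a few lines. The Python below uses invented opcode names; the real Squeak bytecode set is far richer, so this only illustrates the idea of one linear pass over a method's bytecodes, checking every index against the method's declared temp and literal counts, so that execution itself needs no checks.]

```python
KNOWN_OPS = {"pushTemp", "storeTemp", "pushLit", "jump"}

def verify(code, num_temps, num_literals):
    """One linear pass at load time: accept only methods whose every
    load, store, and jump index is provably in range."""
    for op, arg in code:
        if op not in KNOWN_OPS:
            return False                       # unknown bytecode
        if op in ("pushTemp", "storeTemp") and not 0 <= arg < num_temps:
            return False                       # temp access out of range
        if op == "pushLit" and not 0 <= arg < num_literals:
            return False                       # literal access out of range
        if op == "jump" and not 0 <= arg <= len(code):
            return False                       # jump escapes the method
    return True

print(verify([("pushTemp", 0), ("jump", 2)], num_temps=1, num_literals=0))  # True
print(verify([("pushLit", 99)], num_temps=0, num_literals=1))               # False
```

Rejecting a bad method once, at load time, costs nothing per bytecode executed, which is the attraction over checking inside the interpreter loop.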
In reply to this post by Michael van der Gulik-2
If security is the goal, this seems not to be the first place to spend scarce developer time. What are the vectors by which an attacker can cause such malicious bytecodes to be executed? The first three that come to mind are:

- direct access to method dictionaries and/or unrestricted compiler access
- providing malicious input to a system-provided binary code loader
- exploiting bugs in the compiler

If the first attack vector is available, crashes due to malicious bytecodes are the least of your problems; arbitrary code execution is a bigger concern. Glancing at the SecureSqueak page, it seems like you probably have a plan for this. Have you already solved this problem? If not, there's no point in bulletproofing the VM against ill-formed bytecodes.

As I mentioned in response to Mathieu, it seems to me that the second attack vector can mostly be dealt with by load-time inspection. I'm not intimately familiar with Squeak's bytecodes, but I'd be surprised if there were more than a few where run-time checks are actually required.

The third case assumes that the compiler is restricted in some way (e.g., the attacker cannot simply "crash" the system by compiling a method containing "Smalltalk snapshot: false andQuit: true"); instead they have to find a way to write code such that the compiler accidentally generates invalid bytecodes. To provide an extra layer of security, we can always subject the newly-compiled method to the same inspection as we do above when loading binary code.

Does this sound reasonable?

Best,
Josh

On Dec 28, 2007, at 1:41 PM, Michael van der Gulik wrote:

> Hi all.
In reply to this post by Joshua Gargus-2
Joshua Gargus wrote:
> It shouldn't be difficult to verify the "well-formedness" of compiled
> methods without the overhead of a complete recompilation. As with
> Mathieu's suggestion, this can be done once at load-time so that there
> is no performance penalty at run-time.

GNU Smalltalk has a very simple bytecode verifier. Of course it cannot handle stuff like this:

    a := { #perform:withArguments:. nil }.
    a at: 2 put: a.
    self perform: #perform:withArguments: with: a

(or whatever it should be).

Paolo
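[Editorial note: a rough Python analogue of Paolo's point, invented for illustration and not from the thread. The program below is structurally well-formed, so an index- and shape-checking verifier would pass it, yet at run time it recurses through reflection until the stack is exhausted; catching this requires a runtime limit, not a load-time check.]

```python
# A self-referential argument list, like Paolo's array that contains itself.
args = ["__call__", None]
args[1] = args

def boom(method_name, payload):
    # Reflective dispatch: look the method up by name, then apply it to a
    # payload that turns out to be the original argument list itself,
    # so the call re-enters boom forever.
    return getattr(boom, method_name)(*payload)

try:
    boom(*args)
except RecursionError:
    print("the runtime caught what a static verifier could not")
```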
In reply to this post by Joshua Gargus-2
On Dec 29, 2007 9:23 PM, Joshua Gargus <[hidden email]> wrote:
Thanks for the input, Josh. I'll be starting this thread up again when I'm actually ready to submit changes to the VM or fork it. My developer time isn't scarce; I have about another 50 years left in me :-).

To provide more information about what I'm doing: I'm loading code remotely (and transparently, using a distributed object architecture) as bytecodes. The literals in CompiledMethods are rebound when the code is loaded. The code itself is stored in Namespaces, so named literals can only refer to a small set of objects that that code has access to. Remotely loaded code wouldn't usually have access to MethodDictionary-s or CompiledMethods, nor the Compiler.

My intention is that code is loaded into a sort of a browser, much like you could load a Project into Squeak now, meaning that code would be from a public source and could be malicious.

Gulik.

--
http://people.squeakfoundation.org/person/mikevdg
http://gulik.pbwiki.com/