Hello Ron,
RT> Nobody updated my list! I guess this could mean two things: either my
RT> list is perfect, or nobody cares. Please let me know if my comments
RT> are not helpful so that I don't spend too much time on it.

Thanks for going into detail. I'm interested, but not at the moment; still, I read everything you write carefully!

I'm of the hardware lock faction, mainly because the one I'm using offers interesting options for software payment. But that is a plan for the distant future.

The things that, as far as I know, software locking is bad at are:
- floating licenses
- moving licenses (for laptop use)
- stolen keys

Once you have a hardware lock, you just encrypt every patch you send with it, and only the owner of the right lock can install it.

We have about 1000 customers using AutoCAD. They complained about the dongle (which was trivial to break), but they really started complaining when Autodesk introduced soft locking, plus a piece of software to transfer licenses with.

We use a CA for email in another company. It's just a pain when something breaks (a certificate expires, the IT person has a belly ache, and accounting cannot send encrypted emails).

For anyone interested: www.codemeter.com .

Anyway, most of the discussion is also valid for dongle-protected software, so I'm awaiting your next posts.

Things to add to your list:
- Security measures have to be spread across many places in the system.
- Security measures should be diverse.
- You have to be very careful not to inconvenience your customers.

Herbert mailto:[hidden email]
Updated List
1) A system must be able to ensure that it is updating itself from a trusted location. *In Discussion*
2) A system must be able to ensure that only the trusted system can ask it to update itself.
3) A system must be able to ensure that it can securely store an update location.
4) A system must be able to securely change the update location.
5) All communications must be encrypted.
6) A system must be able to verify patches before applying them.
7) A system must be able to automatically load the patch.
8) A system should be able to update without restarting the application.
9) A system must be able to report back success or failure of patch installation.
10) A system must be able to recover from a failed patch.
11) Security measures have to be spread across many places in the system.
12) Security measures should be diverse.
13) You have to be very careful not to inconvenience your customers.

The second question is how you can limit functionality in your system unless and until payment is received.

1) A system must be able to enable features for a single instance and prevent those features from being shared with other systems.
2) A system could be able to detect features being used inappropriately.
3) A system could be able to periodically check for permission (trial software).

Herbert, I agree with your assessment that the topics are applicable to hardware. Here are the major differences as I see them.

Hardware encryption is more reliable because the encryption primitives are encased in a tamper-resistant medium. Hardware encryption is more costly than software. Hardware performance can be either greater or lower depending on the implementation.

Hardware to store keys can be very useful if it meets two-factor authentication: something you have and something you know. As long as your hardware requires a password for authentication to the device, and it allows only a limited number of password attempts (and the user doesn't write the password on the device), then a hardware key significantly reduces the vulnerabilities of authentication.

Dongles have some issues. They are usually, but not always, only one factor (if you have the dongle, the system works); they break or can be lost; and some are easily cracked (so it's important that the value of the software be less than the amount of work to make your own, or that the dongles be unique per installation, so that selling a cracked dongle is not profitable). Also, because the dongle links the computer, rather than the user, to the software, unauthorized users can still access the software. A good example is when a user leaves the dongle attached to the computer and goes to lunch.

I do think that having hardware authentication is a good idea, and it does make things much easier to verify when the crypto code is in the hardware. I still wonder why it is that they are not more widely used.

As for email, until certificates are free and the software does all the work for you (hardware or not), I doubt we will see much more acceptance. In the system that I'm building it is all automatic. If you use my software and then write an email to your doctor, it is automatically sent encrypted from your regular email program. Or if you fill out a personalized template online to communicate with your doctor, it is also sent encrypted with your certificate, so that the doctor (and the insurance company) knows they are talking to the real patient.

Very good comments!
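As a concrete illustration of point 6 (verify patches before applying them), here is a minimal Smalltalk sketch. Note that trustedServerKey and verify:of: are hypothetical placeholder names, not an existing Squeak API; only the shape of the check is the point:

    installPatch: patchString signature: signatureBytes
        "Refuse to file in a patch that fails signature verification (point 6).
         trustedServerKey and verify:of: are hypothetical placeholder names."
        (self trustedServerKey verify: signatureBytes of: patchString)
            ifFalse: [^ self error: 'Patch signature invalid; refusing to install'].
        (ReadStream on: patchString) fileIn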
Thank you,

Ron Teitelbaum
President / Principal Software Engineer
US Medical Record Specialists
www.usmedrec.com
Squeak Cryptography Team Leader
In reply to this post by Ron Teitelbaum
Well, it's basically just
    #myFirst:secret:selector: become: #a:a:a:

and then rehash the method dictionaries where it was used, and class Symbol. The VM does not care; it only looks at identity.

- Bert -

On Mar 7, 2007, at 17:11, Ron Teitelbaum wrote:

> Hey Bert,
>
> This sounds pretty interesting, can you share more about how to mangle
> names. Does it require a change in the VM to de-mangle?
>
> Ron Teitelbaum
>
>> From: Bert Freudenberg
>>
>> On Mar 7, 2007, at 8:57, [hidden email] wrote:
>>
>>> Hi!
>>>
>>> Just a note - decompiling from bytecodes is very easy in Squeak. The
>>> only thing missing is the original indentation and any comments. But
>>> everything else is there. Just so you know.
>>
>> Well, if you're really concerned about decompiling, just mangle the
>> selectors. As long as you are not constructing Symbols at runtime
>> (#asSymbol, #intern:) this works perfectly well. Same for class names
>> and instance variable names.
>>
>>> Locking down the image is of course doable - so that you can't easily
>>> get to the tools etc - but there are of course ways to go around that
>>> too. For example, I guess you can use an image file analyzer (there is
>>> at least one I think) or hack a VM to do stuff when the image is loaded.
>>
>> Sure. But if the names are mangled this is about as much fun as
>> reverse engineering machine code. No wait, the tool support is still
>> better ;)
>>
>>>> But doesn't this imply that the source is downloaded, making it easy
>>>> (easier) to hack the system? I could make the private Monticello
>>>> connection secure, update the system and then delete the source...
>>>> just thinking out loud.
>>>
>>> Yes - a Monticello package is just a zip file of source code. Sure,
>>> you can make the transfer "secure" using SSL or whatever - and you
>>> can apply it and throw it away
>>
>> Well, you certainly would want to encrypt and sign the patch. If you
>> are *that* paranoid I'd not even use MC but just image segments.
>>
>> It's all a question of cost/value. I for one would be more concerned
>> about preventing malicious code injection than the possibility of
>> reverse engineering. But you have to weigh that yourself.
Hey Bert,
Ohh, that is too cool! It makes perfect sense, too. I wonder if we shouldn't build a package for that in Cryptography. I'll have to play with it. It could be fun, too, because we could use a key and the name to hash the selector.

Still, I suppose that not knowing the name of the selector doesn't prevent you from stepping through the code. It may be harder to read, but it is still readable. It's like reading the decompiled code of VW with all the A1's. But it is at least one more step toward protecting code.

How hard would it be to add a decryption algorithm to the VM, so that we could mangle and encrypt the selectors and ivars? It would be neat if we could have the VM take the selector and/or code, decrypt it, then verify an HMAC on the method before calling it.

Great tip Bert, you are terrific!!

Ron Teitelbaum
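A minimal sketch of that key-plus-name idea, using the SecureHashAlgorithm (SHA-1) class that ships with Squeak. The method below is a sketch rather than an existing API, and per Bert's recipe the affected method dictionaries and the class-side Symbol table still have to be rehashed after the swap, since both hash by identity:

    mangle: aSelector key: aKeyString
        "Answer an obfuscated selector derived from a keyed SHA-1 hash of the
         original name, keeping the argument count so existing sends still work.
         Sketch only: handles unary and keyword selectors, not binary ones like
         #+, and the key must of course stay secret."
        | digest |
        digest := SecureHashAlgorithm new hashMessage: aKeyString , aSelector.
        ^ (String streamContents: [:stream |
            stream nextPutAll: 'm'; nextPutAll: (digest printString: 16).
            aSelector numArgs timesRepeat: [stream nextPutAll: 'x:']]) asSymbol

Applying it would then look like

    #myFirst:secret:selector: become: (self mangle: #myFirst:secret:selector: key: 'k3y')

followed by the rehashing step Bert describes.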
In reply to this post by Ron Teitelbaum
Hello All,
Ok, so next point:

2) A system must be able to ensure that only the trusted system can ask it to update itself.

The way that I do this is pretty simple. If an external system can only say "update", but cannot say update from where, then the incentive to break this message is lessened. You will notice that the next two points make this a circular argument, so we need to consider this carefully.

A system could check a server periodically to see if there is an update available. If you have a trusted server as in 1), then asking that server for updates periodically is safe.

A system could also accept messages from the server. This is very useful for business production environments, where we need to upgrade all of our clients at once. I've solved this problem with a combination of both pushing and pulling. First, you upload the patch to the server and set the system patch level, which the client checks when it starts up. This ensures that clients that are not connected will update when they finally do connect. Then I send a message to all running clients to update. This basically runs the same code that checks for a patch on startup (and possibly periodically), but since the message doesn't tell the client where to update from, the message is mostly harmless. (Although it is important that the message only be accepted n times, to prevent denial-of-service attacks.)

It is easy to imagine variations on this process: update when you can, update now, update after next commit, ask the user to update, shut down if not updated by a given date...

Another variation is a message that says: report home with your running patch level. Again, it does not accept a location to report to. This is very useful for finding dead or disabled clients.

Now, I didn't really answer the question of how to determine that only a trusted system can ask the client to update. This really depends on your operating environment. If you have access to the network facilities where the client is installed, then the obvious answer is a firewall rule. If you cannot control the network environment, then you can fix the problem by adjusting the message itself.

The first way to do this is to encrypt the message. The server encrypts the message with a random key. The key used to encrypt the message is itself encrypted with the public key of the client. The client can then decrypt the key, verify an HMAC on the message, and then decrypt it. Now, the public key in this case is not so public: the server knows the key of the client, so if the client receives a message that it can actually read, it can then process the message. The message is also sent over an encrypted SSL connection, so it is actually encrypted twice.

This can be done in the opposite direction: you could have the server use its private key to encrypt the random key, and then have the client decrypt the message with the server's public key. This reduces the number of keys needed, but it decreases security, since every message is readable by every system that knows the server's public key. It also increases the traffic on the server's private key, which limits its lifetime.

By the way, the reason for encrypting a random key using a public key is that it limits the usefulness of the random key (since it changes with each message), and it keeps the certificate from being used too often, which extends its life.

We need to add another point, and that is how to update client and server certificates (see 14 below).
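Here is a rough sketch of that message construction. Every class name in it (SecureRandom, Rijndael, HMACSHA1) and the key accessors are placeholders for whatever your crypto library actually provides; the shape of the protocol is the point:

    noticeFor: messageBytes client: aClient
        "Build a push notice: a fresh random session key wrapped with this
         client's public key, an HMAC over the ciphertext, and the ciphertext
         itself. All class names here are placeholders, not a specific API."
        | sessionKey wrappedKey ciphertext mac |
        sessionKey := SecureRandom new nextBytes: 16.          "fresh key per message"
        wrappedKey := aClient publicKey encrypt: sessionKey.   "only this client can unwrap it"
        ciphertext := (Rijndael key: sessionKey) encrypt: messageBytes.
        mac := (HMACSHA1 key: sessionKey) digestOf: ciphertext.
        ^ wrappedKey , mac , ciphertext                        "sent over SSL as well"

The client reverses the steps: unwrap the session key with its private key, check the HMAC, and only then decrypt and act on the message.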
In summary:

- Make the message safe in case it is received by an unfriendly user.
- Limit the communication in a networked environment if possible.
- Make the message content authenticate itself, by requiring decryption.
- Make the message content verifiable by adding an HMAC.

Again, your comments are welcome. I received a lot of nice comments about the last post offline; thank you. This is a lot of detail, so I really want to make sure you actually find it useful.

Ron Teitelbaum
President / Principal Software Engineer
US Medical Record Specialists
www.usmedrec.com
Squeak Cryptography Team Leader

> From: Ron Teitelbaum
>
> Updated List
>
> 1) A system must be able to ensure that it is updating itself from a
> trusted location. *In Discussion*
> 2) A system must be able to ensure that only the trusted system can ask it
> to update itself. *In Discussion*
> 3) A system must be able to ensure that it can securely store an update
> location.
> 4) A system must be able to securely change the update location.
> 5) All communications must be encrypted.
> 6) A system must be able to verify patches before applying them.
> 7) A system must be able to automatically load the patch.
> 8) A system should be able to update without restarting the application.
> 9) A system must be able to report back success or failure of patch
> installation.
> 10) A system must be able to recover from a failed patch.
> 11) Security measures have to be spread across many places in the system.
> 12) Security measures should be diverse.
> 13) You have to be very careful not to inconvenience your customers.
> 14) A system must be able to update client and server certificates.
>
> The second question is how you can limit functionality in your system
> unless and until payment is received.
>
> 1) A system must be able to enable features for a single instance and
> prevent those features from being shared with other systems.
> 2) A system could be able to detect features being used inappropriately.
> 3) A system could be able to periodically check for permission (trial
> software).
In reply to this post by Ron Teitelbaum
This one should have gone to the list.
Right now I'll start playing with my mail client so that I don't reply off-list inadvertently. Ron is not talking to himself. Sorry!

Hello Ron,

let's continue our totally newbie-ish discussion :-)

Ron, you're doing this nicely and systematically; be sure I will archive this thread for as long as it goes.

Thesis: name mangling, as Bert suggests, is a way to protect intellectual property, while the majority of points in this discussion are about protecting the income of the software supplier.

If a system is big enough (in lines of code) I would trust name mangling a lot. It is a bit compromised by polymorphism: identical method names must get identical mangled names if it is an automated process. I was very close to using it twice (in Lisp), so I gave it serious consideration.

RT> 1) A system must be able to enable features for a single instance and
RT> prevent those features from being shared with other systems.

If you combine name mangling with per-instance encryption, you can build modules which will only load into a single instance of the software.

RT> 2) A system could be able to detect features being used inappropriately

That will be unnecessary then.

RT> 3) A system could be able to periodically check for permission (trial
RT> software)

Smalltalk has one advantage here in being image based. If part (or all) of the user's data is always stored in the image, you can keep a timer in the system which detects a set-back system clock. Again, we run such a timer in the hardware lock, which also contains the end-of-trial date.
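A minimal image-side sketch of such a timer; lastSeenTime is assumed to be an instance variable that gets saved with the image, and signalClockTampering is a hypothetical hook:

    checkClockIntegrity
        "Detect a set-back system clock by remembering the latest time this
         image has ever seen. lastSeenTime is an instance variable persisted
         with the image; signalClockTampering is a hypothetical hook."
        | now |
        now := Time totalSeconds.
        (lastSeenTime notNil and: [now < lastSeenTime])
            ifTrue: [^ self signalClockTampering].
        lastSeenTime := now

Calling this periodically (and at image startup) catches the common trick of winding the clock back to extend a trial.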
RT> Hardware encryption is more costly than software.

Yes; the way to go is to have one medium into which several software suppliers put their security codes. I guess the people from the link I provided have exceeded their initial goal of selling 1 million of their devices. I'm unhappy to be advertising here, but those are serious guys, and we have done business with them for more than a decade.

Imagine a dongle combined with a USB stick. The software suddenly becomes a physical possession, and people have been dealing with valuables for millennia. As soon as a stolen copy of the software connects to the Internet, the dongle (with all the software it contains) can be invalidated.

RT> Dongles have some issues, they are usually but not always only one factor
RT> (if you have the dongle the system works), they break or can be lost, and
RT> some are easily cracked (so it's important that the value of the software is

Like some software locks too; I cracked one by accident. OTOH, I once worked for a man who replicated a dongle to learn how to use gate arrays :-))

RT> less than the amount of work to make your own, or that the dongles be unique
RT> per installation so that the selling of a cracked dongle is not profitable).

We have it this way, though I personally dislike the effort it takes to build updates and upgrades.

RT> Also because the dongle links the computer to the software and not the user
RT> to the software unauthorized users can still access the software. A good
RT> example is when a user leaves the dongle attached to the computer and goes
RT> to lunch.

I never tried, but I believe that I could go to a computer, start IE, and export any certificate to my USB stick with no one the wiser. That leaves the password, which in practice is easily hacked (easy in a statistical sense; as you already observed, people don't care about security until it's too late). Next week I'll try whether exporting a certificate already requires the password.

I would have to steal the dongle, though. At least that wouldn't go unnoticed. A call to the supplier could lock that dongle, and a replacement could be bought for the cost of the dongle.

RT> I do think that having hardware authentication is a good idea and it does
RT> make things much easier to verify when the crypto code is in the hardware.
RT> I still wonder why it is that they are not more widely used.

Here in Germany you can choose between several suppliers of dongles, many of them in the business for a long time. Autodesk used dongles for a very long time, until 2000 in Europe. They sell a lot :-)) I know of vendors moving towards a dongle and others giving up on the dongle.

RT> As for email, until the certificates are free and the software does all the
RT> work for you (hardware or not), I doubt we will see much more acceptance.

I totally agree.

RT> In the system that I'm building it is all automatic. If you use my software
RT> and then write an email to your doctor it automatically sends it encrypted
RT> from your regular email program. Or if you fill out a personalized template
RT> online to communicate with your doctor it is also sent encrypted with your
RT> certificate so that the doctor (and the insurance company) knows they are
RT> talking to the real patient.

How do you assure the identity of the patient the first time? How do you assure the correct initial recipient?

I always enjoy this line of thought; I got my first contract because I broke a protected piece of software in front of its protector :-)

Thank you for reading!

Cheers,
Herbert mailto:[hidden email]