Incremental deployment

Incremental deployment

Bill Dargel
I'm trying to figure out the best strategy for doing ongoing, fairly
small, incremental changes to a deployed application. Ideas and pointers
would be appreciated.

First a little background. I've got an application that's migrating to
be in the form of a client/server application using the Internet for
connection. I'd like to be able to think of it as a single application,
where part of it just happens to run on the client machine. I'm looking
for the advantages of managing a server application, such as centralized
control, quick and easy updates on a frequent basis, etc. To do so, I'd
like to have the client apps updated in concert with making changes
on the server, and make that update transparent to the end users. From
their perspective, it would be the same as using a server app that keeps
smoothly evolving and getting better on an ongoing basis.

As the updates could potentially occur every day or so, I don't want it
to have any discernible impact on the users. With a reasonable size
application, say 1 MB, it would take 1 minute to download on my slow
DSL. A user stuck with a dialup would take closer to 5 minutes. Seems
like too long a wait to force on the user when they need an updated
client to work with the new version of the server.

So I'm looking for an incremental approach. The thought is to put
together the small set of changes needed to go from version n to n+1,
and create a binary package with them. A client, say when logging into
or connecting with the server, would get whatever incremental update(s)
it needed and apply them.
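The shape of that scheme, stripped of anything Dolphin-specific, could look like the following sketch (the delta format and the method-keyed "image" are my own illustrative stand-ins, not Dolphin structures):

```python
# Illustrative sketch only: an "image" here is just a dict mapping method
# names to versions, and a delta is a dict of changes -- both stand-ins
# for whatever binary format the real scheme would use.

def deltas_needed(client_version, server_version):
    """The (n, n+1) delta steps needed to bring the client up to date."""
    return [(v, v + 1) for v in range(client_version, server_version)]

def apply_delta(image, delta):
    """Apply one n -> n+1 change set to a client image."""
    updated = dict(image)
    updated.update(delta.get("changed", {}))   # new/overriding methods
    for name in delta.get("removed", []):
        updated.pop(name, None)                # methods stripped in n+1
    return updated

def bring_up_to_date(image, version, deltas, server_version):
    """On connect, fetch and apply each needed delta in sequence."""
    for step in deltas_needed(version, server_version):
        image = apply_delta(image, deltas[step])
    return image
```

A client at version 3 connecting to a version-5 server would apply the (3, 4) and (4, 5) deltas in order; keeping that chain short is exactly what saving a snapshot after patching would buy.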

I'd want to determine those incremental differences by comparing
stripped versions of the application (or logs produced by each of the
stripping processes). That seems like the only way to keep the whole endeavor
robust. For example, a (base) method stripped out in version n may now
be needed in version n+1 due to other changes.

Is there a handy way to get an inventory of what the stripper has left
in the image when it's done? Or is it a matter of processing a log of
what the stripper removes against the unstripped image? I'm using the
Source Tracking System, so I should be able to get from it suitable
annotations as to what versions of methods, etc. are in such an
inventory. I'd then need to create a suitable binary change set that
could be applied to stripped image n to create version n+1. This might
be patterned roughly after what BinaryPackage in the Web Deployment Kit
is doing. Though would need to be different, such as being able to have
loose methods override the current methods.
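As a language-neutral sketch of that comparison (the inventory shape, a method name mapped to its source version, is my assumption about what the STS annotations could provide):

```python
def change_set(old_inventory, new_inventory):
    """Diff two stripped-image inventories into an n -> n+1 change set.

    Each inventory maps a method name to its source version, e.g.
    {"Foo>>bar": 3}. The result says what a binary patch must carry.
    """
    return {
        # methods newly needed in n+1 (e.g. a base method stripped from n)
        "added":   {m: v for m, v in new_inventory.items()
                    if m not in old_inventory},
        # methods the stripper now removes
        "removed": sorted(m for m in old_inventory if m not in new_inventory),
        # methods whose source changed: ship as loose overriding methods
        "changed": {m: v for m, v in new_inventory.items()
                    if m in old_inventory and old_inventory[m] != v},
    }
```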

Has anyone done something like this? Or something similar? Any pointers
as to what (else) I should be looking at?

After a client has had the appropriate delta(s) applied to bring it up
to the current version level, seems like I'd want to be able to save it,
so that typically only one delta would be needed at a time. (As opposed
to an ever increasing chain of deltas). Are there any issues involved
with doing a snapshot of a deployed image? It's not something I've tried
yet. Doing it with the ToGo format might be a bit tricky, but even
there, should be able to grab the first part of the current .exe and
concatenate on a new image snapshot to create the new .exe. BTW, what
does the "snapshotType" argument do on the
SessionManager>>primSnapshot:backup:type: method? One thing that it
appears to affect is whether the snapshot is compressed or not? I've not
been able to find any documentation on it though.

Come to think of it, snapshotting the updated image wouldn't be
essential. Could always transfer the full image for the current version
in the background, once applying the delta had taken care of the
time-critical part of getting the user working. Though snapshotting
might still be nice, reserving the full transfer for setting up a new
baseline.

Anyone else dealing with these issues? Or does being on a LAN, or
needing less frequent updates, make it so that the transfer of the full
image is no big deal?

thanks,
-Bill

-------------------------------------------
Bill Dargel            [hidden email]
Shoshana Technologies
100 West Joy Road, Ann Arbor, MI 48105  USA



Re: Incremental deployment

Christopher J. Demers
"Bill Dargel" <[hidden email]> wrote in message
news:[hidden email]...
> I'm trying to figure out the best strategy for doing ongoing, fairly
> small, incremental changes to a deployed application. Ideas and pointers
> would be appreciated.
...
> So I'm looking for an incremental approach. The thought is to put
> together the small set of changes needed to go from version n to n+1,
> and create a binary package with them. A client, say when logging into
> or connecting with the server, would get whatever incremental update(s)
> it needed and apply them.
...

I think I would consider a generic binary patch solution (non-Smalltalk
specific).  Try creating a delta patch based on two final EXE's.  The
patches could be applied in series if needed.  You can probably use an
existing tool (check Google); you just need to request the correct patches
in the correct order from the server.  This approach also avoids the need to
snapshot the image at runtime.  Essentially the user ends up with the same
EXE after each patch as if they downloaded the complete new version.
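The core of such a tool is roughly a copy/insert delta encoding. Here is a deliberately naive sketch of the idea (not any particular product's format; real binary-diff tools are far more sophisticated):

```python
import difflib

def make_patch(old, new):
    """Encode `new` as copy-ranges from `old` plus literal data to ship."""
    ops = []
    matcher = difflib.SequenceMatcher(None, old, new, autojunk=False)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))       # bytes the client already has
        elif tag in ("replace", "insert"):
            ops.append(("data", new[j1:j2]))   # only these bytes travel
        # "delete": old bytes are simply not copied; nothing to ship
    return ops

def apply_patch(old, ops):
    """Rebuild the new file from the old file plus the patch."""
    out = bytearray()
    for op in ops:
        out += old[op[1]:op[2]] if op[0] == "copy" else op[1]
    return bytes(out)
```

Patches in series are then just repeated `apply_patch` calls, each producing the exact bytes of the next version.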

I don't know how well Dolphin EXE's would patch like this (size wise).  You
might want to do some testing to make sure the generated patches are not too
large relative to the changes made.

I have never had to do this, though I may consider this in the future, so
let us know what you choose and how it works out.

Chris



Re: Incremental deployment

Bill Dargel
Christopher J. Demers wrote:

> I think I would consider a generic binary patch solution (non-Smalltalk
> specific).  Try creating a delta patch based on two final EXE's.  The
> patches could be applied in series if needed.  You can probably use an
> existing tool (check Google), you just need to request the correct patches
> in the correct order from the server.  This approach also avoids the need to
> snapshot the image at runtime.  Essentially the user ends up with the same
> EXE after each patch as if they downloaded the complete new version.

Sounded good in theory. I did some Googling, and the first thing that I found
that sounded appropriate was http://www.astatech.com/products/binarypatcher/ so
I downloaded their evaluation version and gave it a shot ...

> I don't know how well Dolphin EXE's would patch like this (size wise).  You
> might want to do some testing to make sure the generated patches are not too
> large relative to the changes made.

I tried it on a couple of different deltas with nearly identical (bad) results.
The simpler case I tried was (I believe) confined to a few changes in a single
package, so I used the Source Tracking System to compare the two package
editions. Dragging the source of any method that had been changed to a file to
tally the character count, I found that a total of 3 KB of source had been
touched. I
expect that a binary of the compiled methods would be of similar order. The ToGo
EXE files went from 1055 KB to 1056 KB. I created the patch file that would turn
one into the other and it was <drum roll> 945 KB :-(

In thinking about it, I guess it's not that surprising. A Dolphin ToGo EXE is
not your typical EXE. It's really a 215 KB exe with some form of compressed
snapshot of the image memory concatenated on. Even without the compression, I
suspect that what's in the image could move around enough to play havoc with the
diff. But the compression would seem to be the real killer.
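That intuition is easy to demonstrate in a few lines (an illustration of the general effect, not of the specific patcher tried above):

```python
import difflib
import zlib

def similarity(a, b):
    """Rough fraction of content two byte strings share (0.0 - 1.0)."""
    return difflib.SequenceMatcher(None, a, b, autojunk=False).ratio()

base   = b"someMethod source text " * 200                 # stand-in for image contents
edited = base.replace(b"someMethod", b"someMethodX", 1)   # one tiny edit

raw_sim        = similarity(base, edited)   # very close to 1.0
compressed_sim = similarity(zlib.compress(base), zlib.compress(edited))
# compressed_sim comes out much lower: the single edit reshapes the
# compressed stream, so a byte-level diff of two compressed snapshots
# sees mostly "new" data even though almost nothing really changed.
```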

Okay. Just tried an experiment. I had saved the working .img files for each of
the deployed versions. They had been freshly built from source at the time, so
there should be about as much consistency as one could hope for. The image files
had gone from 10401 KB to 10402 KB, and the patch file created on the difference
was 4569 KB. A better percentage than trying to patch the compressed exe, but
still not a viable solution. At least not when compared to a KB or so of changed
CompiledMethods.

> I have never had to do this, though I may consider this in the future, so
> let us know what you choose and how it works out.

Will do.

regards,
-Bill

-------------------------------------------
Bill Dargel            [hidden email]
Shoshana Technologies
100 West Joy Road, Ann Arbor, MI 48105  USA



Re: Incremental deployment

Bill Schwab-2
Bill, Chris,

This discussion reminds me of an energetic debate in Denver CO a few years
ago.  There's something to be said for a locked-down development image that
can simply download/install patches, and save a new image.  The advocate of
that basic design was (IIRC) Stefan Matthias Aust (and one and the same as
the gentleman in the center of the photo below?), and he was doing a nice
job of defending a locked Squeak image with a shortcut that loads it (not
all that different from an exe with lots of DLLs that need to be available),
and also pointing out that more monolithic options had been created (though
I have to admit they appear to have been forgotten just about as quickly).

Anyway, I mention it not to suggest that you run off to Squeak, but to raise
the question of a Dolphin exe that has just enough development capability to
load patches, ideally save an image, and yet not allow dishonest folks to
turn the thing into a bootleg IDE.

Either way, the point is that the source code (which one could encrypt to
protect it, to a point) would make for very small downloads.

For various reasons, I considered using a Squeak image to download and
configure ToGo exe Dolphin apps that would do the real work.  It fell flat
because: (1) Squeak on Windows running as a service (AFAICT) can't use
sockets; (2) I ran out of the time I had allocated to it (for now); (3) I
finally figured out how to configure Apache to the point that I have
idiot-proof "pull" for installations (good enough for now); (4) InnoSetup
isn't hurting.

Have a good one,

Bill


http://analgesic.anest.ufl.edu/anest4/bills/BOFPostBuckhornExchangeSmall.jpg

--
Wilhelm K. Schwab, Ph.D.
[hidden email]



Re: Incremental deployment

Christopher J. Demers
In reply to this post by Bill Dargel
"Bill Dargel" <[hidden email]> wrote in message
news:[hidden email]...
> Christopher J. Demers wrote:
>
> > I think I would consider a generic binary patch solution (non-Smalltalk
> > specific).  Try creating a delta patch based on two final EXE's.  The
> > patches could be applied in series if needed.  You can probably use an
> > existing tool (check Google), you just need to request the correct patches
> > in the correct order from the server.  This approach also avoids the need
> > to snapshot the image at runtime.  Essentially the user ends up with the
> > same EXE after each patch as if they downloaded the complete new version.
>
...
> expect that a binary of the compiled methods would be of similar order. The
> ToGo EXE files went from 1055 KB to 1056 KB. I created the patch file that
> would turn one into the other and it was <drum roll> 945 KB :-(

I was wondering if something like that might be a problem.  I wonder how
good that particular diff tool is.  If the image is compressed I guess that
could cause some big differences for minor changes.

One thing I should mention: you might consider not deploying a ToGo EXE.
With a non-ToGo install you would not have to make the user effectively
download the runtime DLL's as part of the EXE for each upgrade, though the
initial install would have to include them.  I think that might chop a few
hundred KB off the EXE size.

It is too bad the patch sizes are so big, that would have been an easy
solution.

Chris



Re: Incremental deployment

Chris Uppal-3
In reply to this post by Bill Dargel
Bill Dargel wrote:

> First a little background. I've got an application that's migrating to
> be in the form of a client/server application using the Internet for
> connection. I'd like to be able to think of it as a single
> application, where part of it just happens to run on the client
> machine. I'm looking for the advantages of managing a server
> application, such as centralized control, quick and easy updates on a
> frequent basis, etc. To do so, I'd like to have the client apps
> updated in concert with making changes on the server, and make that
> update transparent to the end users. From their perspective, it would
> be the same as using a server app that keeps smoothly evolving and
> getting better on an ongoing basis.

It strikes me that you may be approaching this from an unproductive direction.
You are trying to ensure that all end-users are running the version of the
client software that corresponds to the running version of the server software.
If you accept that you may have up to N (for some smallish N) versions of the
client supported at the same time, then you can approach the problem in a
different way (albeit at the cost of more complexity in the server).

When a client connects, part of the dialog (hidden from the user) is a version
check: the client effectively says "I am version x.y.z, can you handle me, and
am I the most recent version you can support?", and the server replies yes or
no to each question.  If the client is up-to-date then there's no problem; if
not, but the server can still support it, then the client (with or without
consulting the user) downloads the newest version in the background while the
user carries on with the older version.  If the version is too old for the
server, then the user is told "Sorry, too many changes have been made to the
server software; I'm afraid you'll have to wait while we download the newest
client -- do you wish to continue now?".
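Server-side, that handshake needs very little. As a sketch (the version numbers and the supported window are invented for illustration):

```python
CURRENT_VERSION  = (2, 5, 0)   # what the server now ships (hypothetical)
OLDEST_SUPPORTED = (2, 3, 0)   # anything older must upgrade before working

def check_version(client_version):
    """Answer both of the client's questions in one reply."""
    return {
        "supported":  client_version >= OLDEST_SUPPORTED,
        "up_to_date": client_version >= CURRENT_VERSION,
    }

def client_action(reply):
    """What the client does with the server's answer."""
    if not reply["supported"]:
        return "upgrade now"            # user must wait for the download
    if not reply["up_to_date"]:
        return "work, upgrade quietly"  # background download, no interruption
    return "work"
```

The supported window (`OLDEST_SUPPORTED`) is the knob: widening it buys the users convenience at the cost of the server carrying compatibility code for more client versions at once.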

Yes, that does put more complexity into the server, which may be too much for
your particular application.  But it does bring some advantages too -- such as
the ability to "beta" a new client against the production server, to roll-back
client changes that didn't work, etc.

If that doesn't work for you, then I'd be inclined to try one of:

1)  Use a specially-developed ToGo .exe that contained *all* of the
non-development packages you are likely to need.  Ship the "real" code as
binary packages that are always loaded into the runtime at startup.  You can
then ship patches to the binary packages (which will probably be small even
without using a binary-diff, though the diffing should work a lot better than
it does on images).  This is essentially the Java "WebStart" architecture.  I
think something like this would be a valuable (i.e. worth paying for) addition
to the Dolphin 'Pro' product.

2)  Ship your application as a normal ToGo .exe, but have it check for binary
packages as it starts up (in the session manager, say) and load any patches.
It'll have to do that every time it starts up.  Provide the option of
downloading a fresh .exe with all patches pre-applied.  Notice that this
doubles your testing load.  The most error-prone bit is probably working out
what's "in" the ToGo .exe so as to know what to ship patches for.  If the
stripper's log is incomplete (from memory it's OK), or too hard to parse
reliably, then you could add a command-line flag to the .exe that simply dumps
a list of what's in the image to a file.  Makes the deployed image a little bigger,
but not -- I suspect -- by very much.

3) Same as (2) but ship the patches as code (a filein) rather than compiled
binary.  Requires you to ship the compiler DLL as part of the deployment.

Of the three, I like (1) best -- it looks as if it'd give the best return of
functionality on work.  All of them *are* quite a bit of work, though, which is
why I suggested that it might be easiest to fix the deployment problem in the
server.

    -- chris



Re: Incremental deployment

Blair McGlashan
In reply to this post by Bill Dargel
Bill

You wrote in message news:[hidden email]...
>...
> Is there a handy way to get an inventory of what the stripper has left
> in the image when it's done? Or is it a matter of processing a log of
> what the stripper removes against the unstripped image? ...

D6 will feature an XML deployment "log" that includes a manifest that
details all remaining methods and classes. Emitting such an XML manifest is
a pretty straightforward thing to do, and will add little to the size of the
deployed executable since it uses basic reflective methods that would mostly
be there anyway.

Given an XML manifest you would be in a position to determine what you
needed to add into your patch, however I would be concerned about the burden
of testing.
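Given two such manifests, working out the patch contents is a small set comparison. A sketch of that step (the `<method name="..."/>` schema here is my invention for illustration, not the actual D6 manifest format):

```python
import xml.etree.ElementTree as ET

def methods_in(manifest_xml):
    """Collect the method names a deployment manifest says survived stripping."""
    return {m.get("name") for m in ET.fromstring(manifest_xml).iter("method")}

def patch_contents(old_manifest, new_manifest):
    """What a version n -> n+1 patch must ship, and what it may drop."""
    old, new = methods_in(old_manifest), methods_in(new_manifest)
    return {"ship": sorted(new - old), "drop": sorted(old - new)}
```

Note that presence alone misses methods that exist in both images but were recompiled; catching those needs version annotations in the manifest (or from the STS), i.e. the "changed" third of the diff.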

>....Are there any issues involved
> with doing a snapshot of a deployed image? It's not something I've tried
> yet. Doing it with the ToGo format might be a bit tricky, ....

Very tricky indeed, since the image saving code is not included in the ToGo
stub :-).

I think you should go with Chris Uppal's suggestion (if I understand it
correctly), and deploy a ToGo application which is substantially complete in
that it is an unstripped image with all the base packages you need loaded.
The application itself (or at least that part of it likely to change) would
then be loaded from binary packages. In effect this amounts to the Dolphin
browser plug-in architecture, the plug-in being a fairly complete Dolphin
image omitting only the development classes. Obviously this will increase
the size of the initial distribution, but not as much as you might think.
The browser plugin was only a couple of Mb as I recall.

Regards

Blair



Re: Incremental deployment

Randy Coulman-2
In reply to this post by Bill Dargel
Bill Dargel wrote:
> I'm trying to figure out the best strategy for doing ongoing, fairly
> small, incremental changes to a deployed application. Ideas and pointers
> would be appreciated.
>
[... rest of description deleted ...]

Something in Chris Uppal's response twigged this thought.  Maybe it will
help, maybe not.

If it is "expensive" to upgrade the client, you want the client to be as
stable as possible.

Is there a way to make the client much more flexible and generic (at the
cost of some complexity) so that you can confine the bulk of the changes
to the server?

The extreme example would be web apps, where the browser doesn't have to
change very often, but you can change the server code hourly, if you like.

Randy
--
Randy Coulman
NOTE: Reply-to: address is spam-guarded.  Reassemble the following to
reply directly:
rvcoulman at acm dot org



Re: Incremental deployment

Bill Schwab
In reply to this post by Blair McGlashan
Blair,

> I think you should go with Chris Uppal's suggestion (if I understand it
> correctly), and deploy a ToGo application which is substantially complete in
> that it is an unstripped image with all the base packages you need loaded.
> The application itself (or at least that part of it likely to change) would
> then be loaded from binary packages. In effect this amounts to the Dolphin
> browser plug-in architecture, the plug-in being a fairly complete Dolphin
> image omitting only the development classes. Obviously this will increase
> the size of the initial distribution, but not as much as you might think.
> The browser plugin was only a couple of Mb as I recall.

Sounds reasonable.  I've thought about it for my systems, but it _probably_
would not buy me anything.  However, I might change my tune in the near
future.

One question: is OA committed to supporting binary packages in future
versions?  IIRC, the only official statement has been that they are
"unlikely to be removed".

Have a good one,

Bill

--
Wilhelm K. Schwab, Ph.D.
[hidden email]



Re: Incremental deployment

Joseph Pelrine-2
In reply to this post by Blair McGlashan


Blair McGlashan wrote:

> Bill
>
> You wrote in message news:3ECE83EE.7ACC49C4@shoshana.com...
> > ...
> > Is there a handy way to get an inventory of what the stripper has left
> > in the image when it's done? Or is it a matter of processing a log of
> > what the stripper removes against the unstripped image? ...
>
> D6 will feature an XML deployment "log" that includes a manifest that
> details all remaining methods and classes. Emitting such an XML manifest is
> a pretty straightforward thing to do, and will add little to the size of the
> deployed executable since it uses basic reflective methods that would mostly
> be there anyway.
>
> Given an XML manifest you would be in a position to determine what you
> needed to add into your patch, however I would be concerned about the burden
> of testing.
>
> > ....Are there any issues involved
> > with doing a snapshot of a deployed image? It's not something I've tried
> > yet. Doing it with the ToGo format might be a bit tricky, ....
>
> Very tricky indeed, since the image saving code is not included in the ToGo
> stub :-).
>
> I think you should go with Chris Uppal's suggestion (if I understand it
> correctly), and deploy a ToGo application which is substantially complete in
> that it is an unstripped image with all the base packages you need loaded.
> The application itself (or at least that part of it likely to change) would
> then be loaded from binary packages. In effect this amounts to the Dolphin
> browser plug-in architecture, the plug-in being a fairly complete Dolphin
> image omitting only the development classes. Obviously this will increase
> the size of the initial distribution, but not as much as you might think.
> The browser plugin was only a couple of Mb as I recall.
>
> Regards
>
> Blair
Since I've designed, worked on and used incremental deployment schemes in a number of ST dialects, there are a few ideas I'd like to add to this discussion.

Principally, the question of incremental deployment - or updating a packaged, runtime image - is closely related to the dichotomy between the stripping (packaging) and building (from a minimal base image) approaches used to construct a deployment image. The tools needed for one technique are different from the tools needed for the other. Incremental deployment is essentially the same as the building paradigm, with the major difference being the point in time when (and the technique with which) a module is loaded and linked to the deployment image. Since almost all ST dialects are based on the stripping paradigm of deployment, with VSE being the notable exception, dynamic update is going to be difficult to implement without building significant and basic functionality to support the building paradigm.

How much (and which) functionality is needed in the base image/package? This depends on the deployment paradigm. The size of the base package plays less of a role in a stripping paradigm, where unused functionality is removed before deployment, but in a building paradigm (where only whole packages are loaded), the size of the base package has a direct effect on the size of both the minimal base image, and the in-memory working set.

Here, Dolphin suffers from a problem that most other dialects do - something I call (pardon the expression) "Slut Base Package", or SBP. The base package is just too big and monolithic. Even VSE suffered from this one. The only dialect I'm aware of that has a lean and mean base package is OTI/IBM Smalltalk, although VisualAge makes up for it by having 5000 packages all starting with 'Abt'. Also, even S#, which builds an image on the fly, has a base package (hidden in the aos.dll) that's too big for its own good.

OTOH, Dolphin has an exceptional prereq mechanism, which is elegant in its simplicity. Using this, along with some analysis tools I've developed, one should be able to divide the Dolphin base (kernel) up into a number of nice little chunks, and end up with a minimally-sized base image.

Once the base image size problem described above is solved, one must deal with the question of how to load new code into the base image. There are a number of schemes for this, each with its advantages and disadvantages in terms of which additional packages must be already loaded, and what infrastructure is required:

1. Loading and compilation of source code in text format from file (the standard way)
requires: kernel, file system, compiler

2. Loading and compilation of source code in text format via TCP
requires: kernel, sockets, compiler

3. Loading and compilation of source code in text format from file with tethered (external) compilation
requires: kernel, file system, sockets, some extra functionality needed (external tethered compiler image/tool)

4. Loading and compilation of source code in text format via TCP with tethered (external) compilation
requires: kernel, sockets, some extra functionality needed (external tethered compiler image/tool)

5. Loading of binary (pre-compiled) code from file
requires: kernel, file system, some extra functionality needed

6. Loading of binary (pre-compiled) code via TCP
requires: kernel, sockets, some extra functionality needed

As you can see, there are a number of alternatives. Some of them depend on the EULA allowing you to ship the compiler in a deployment image. I've worked with all of them, and I prefer nr. 5 for incremental deployment. When you change a module, you just ship out the new binary version of the module. This is similar to the classic Digitalk approach, where there was an (albeit not small) base image and a binding file listing the file names of the binary packages to be loaded. The old Digitalk approach loaded the files in the order that they were listed in the binding file, which created problems with prereqs and circularity. My later implementation read in all packages, analyzed their prereq requirements, and deduced the proper load order before loading and binding the packages.
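That load-order deduction is a topological sort over the prerequisite graph. A minimal sketch (package names invented for the example):

```python
def load_order(prereqs):
    """Topologically sort packages so every prerequisite loads first.

    `prereqs` maps package name -> list of prerequisite package names.
    Raises ValueError on a circular prerequisite chain.
    """
    order, done, visiting = [], set(), set()

    def visit(pkg):
        if pkg in done:
            return
        if pkg in visiting:
            raise ValueError("circular prerequisite involving %r" % pkg)
        visiting.add(pkg)
        for dep in prereqs.get(pkg, ()):
            visit(dep)           # bind prerequisites before the package itself
        visiting.discard(pkg)
        done.add(pkg)
        order.append(pkg)

    for pkg in prereqs:
        visit(pkg)
    return order

# load_order({"App": ["GUI"], "GUI": ["Kernel"], "Kernel": []})
# -> ["Kernel", "GUI", "App"]
```

Detecting the cycle up front, rather than looping or loading partially, is what distinguishes this from the old binding-file approach Joseph describes.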

Although it would be great to have these capabilities in Dolphin, someone has to implement them, and it's probably not going to be me. If anyone is interested in taking on such an implementation challenge, though, I'd be willing to pass on advice based on my experience. Just let me know.

Cheers
--
Joseph Pelrine
MetaProg GmbH
Email: [hidden email]
Web:   http://www.metaprog.com

"If you don't live on the edge, you're taking up too much space" -
Doug Robinson