Ci automated deploy of seaside to Digital Ocean


Ci automated deploy of seaside to Digital Ocean

Tim Mackinnon
Hi - having run through the simple tutorial of getting a Seaside app up and running on Digital Ocean, I’m interested in the next steps of doing it more continuously - does anyone have any tips?

E.g. I know how to build an image with CI (like GitLab); having produced one, I guess I can ssh in to copy it, but do I really need to use ps to find my running image, kill it, and restart it with the new replacement? I’m wondering if there are some simple tricks I’m missing?

Tim

Sent from my iPhone
_______________________________________________
seaside mailing list
[hidden email]
http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside

Re: Ci automated deploy of seaside to Digital Ocean

Esteban A. Maringolo
How would you know when to kill the running vm+image? It might have
active sessions.

If you don't care about that, what I did was manage a bunch of "worker
images" (process groups) with supervisord [1]; in that case you'd
stop all the related workers, copy the new image, and start the
workers again.

Regards,

[1] http://supervisord.org/introduction.html
Esteban A. Maringolo



Re: Ci automated deploy of seaside to Digital Ocean

Esteban A. Maringolo


My supervisord.conf entry for a pool of worker images was:

[program:psworker]
command=/home/trentosur/perfectstore/pharo-vm/pharo --nodisplay /home/trentosur/perfectstore/ps.image worker.st 818%(process_num)1d
process_name=%(program_name)s_%(process_num)02d ; process_name expr (default %(program_name)s)
numprocs=2
directory=/home/trentosur/perfectstore
autostart=false
autorestart=true
user=trentosur
stopasgroup=true
killasgroup=true


Part of the worker.st file handling the port number was:

"Seaside server start"
Smalltalk isHeadless
  ifTrue: [
    Smalltalk commandLine arguments
      ifEmpty: [
        Transcript show: 'No port parameter was specified.'; cr.
        Smalltalk quitPrimitive ]
      ifNotEmpty: [ :args |
        | port |
        port := args first asNumber asInteger.
        Transcript show: 'Starting worker image at port ', port asString; cr.
        ZnZincServerAdaptor startOn: port.
        ZnZincServerAdaptor default server debugMode: false ] ]
  ifFalse: [
    | port |
    port := 8080.
    Transcript show: 'Starting worker image at port ', port asString; cr.
    ZnZincServerAdaptor startOn: port.
    ZnZincServerAdaptor default server debugMode: true ].
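For reference, supervisord expands the trailing 818%(process_num)1d argument once per worker, with process_num running from 0 to numprocs-1, so with numprocs=2 the two workers listen on ports 8180 and 8181. The expansion can be mimicked by hand in plain shell (this is just an illustration, not supervisord's own code):

```shell
# Mimic supervisord's %(process_num)1d expansion for numprocs=2:
# process_num runs 0..1, so "818%(process_num)1d" yields 8180 and 8181.
ports=$(for n in 0 1; do printf '818%d\n' "$n"; done)
echo "$ports"
```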


I hope it helps.

Best regards,



--
Esteban A. Maringolo

Re: Ci automated deploy of seaside to Digital Ocean

Tim Mackinnon
Ah - I see (I hadn’t thought about having a configurable port number).

So if I’ve understood correctly, I could install supervisord with a config file like yours.

Then on completion of my build, I could sftp my new image up to DO, and then execute a command like: supervisorctl restart psworker?

A quick check does seem to show that lots of people use either it or monit (the latter being a bit more complicated).

Thanks for the help - this is my next learning step.

Tim


Re: Ci automated deploy of seaside to Digital Ocean

Tim Mackinnon
I forgot to mention - you are correct that this isn’t a good solution for a widely used production system, as I think you would want to use a load balancer to stop traffic to one image while its sessions complete before terminating it. OR - there must be some cloud-based solution for this these days - presumably using docker…

Still for hobby experiments - this and Digital Ocean seems ideal.

Tim


Re: Ci automated deploy of seaside to Digital Ocean

Tim Mackinnon
In reply to this post by Esteban A. Maringolo
Hi - I’ve been trying supervisord (which looks good) - however I can’t get pharo to start. I just get "pharo: ERROR (spawn error)", and tailing the log in supervisorctl just shows me pharo spewing out the command-line help. It’s like the image parameter isn’t being passed along - very weird and frustrating.

Tim


Re: Ci automated deploy of seaside to Digital Ocean

Esteban A. Maringolo
In reply to this post by Tim Mackinnon
Hi Tim,

On 10/05/2018 18:17, Tim Mackinnon wrote:
> Ah - I see (hadn’t thought about having a configurable port number),

I had a configurable port number, and at the peak of use of the now
decommissioned app I had up to 8 worker images running, also defined as
upstream servers in nginx, so changing numprocs in supervisord.conf
also required modifying the nginx site conf.
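For context, a minimal sketch of what those nginx upstream entries might look like; the ports assume the two workers from the supervisord config above, and all names here are hypothetical:

```nginx
# Two supervisord workers behind nginx; adding a worker means adding a
# server line here as well as bumping numprocs in supervisord.conf.
upstream psworkers {
    server 127.0.0.1:8180;
    server 127.0.0.1:8181;
}

server {
    listen 80;
    location / {
        proxy_pass http://psworkers;
        proxy_set_header Host $host;
    }
}
```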


> So if Ive understood correctly I could install supervisord with a config file like yours.
>
> Then on completion of my build, I could sftp my new image up to DO, and then execute a command like: supervisorctl restart psworker?

Exactly.
I didn't have a CI, but I did build images from scratch, and my script
did exactly that, but using ssh and scp instead.
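Such a post-build script can be sketched roughly as below. The host name and paths are hypothetical (the paths echo the supervisord config earlier in the thread), and DRY_RUN (on by default) prints the commands instead of executing them, so the sketch can be sanity-checked without a server:

```shell
#!/bin/sh
# Sketch of a post-build deploy: stop the worker group, copy the fresh
# image over, start the group again. supervisorctl runs on the droplet
# via ssh. Set DRY_RUN=0 to actually execute the commands.
HOST="root@my-droplet"                 # hypothetical droplet login
DEST="/home/trentosur/perfectstore"    # path from the supervisord config

run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "$*"; else "$@"; fi
}

run ssh "$HOST" "supervisorctl stop psworker:*"
run scp build/ps.image "$HOST:$DEST/ps.image"
run ssh "$HOST" "supervisorctl start psworker:*"
```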

> Doing a check does seem to show lots of people use either it or monit (the latter being a bit more complicated).

I found the same thing; supervisord seemed simpler to get up and
running. And once it was working reliably... why change it? :)

Regards,

--
Esteban A. Maringolo

Re: Ci automated deploy of seaside to Digital Ocean

Esteban A. Maringolo
In reply to this post by Tim Mackinnon
I don't know if something changed; the setup I shared used Pharo 4 and
supervisord v3.0b2.

There might have been changes in the VM parameters, although the image
parameter is still there, and it is the most important one.

You should ask in the pharo-list.


On 11/05/2018 07:41, Tim Mackinnon wrote:

> Hi - I’ve been trying supervisord (which looks good) - however I can’t
> get pharo to start - I just get "pharo: ERROR (spawn error)”, and
> tailing the log in supervisorctl just shows me pharo spewing out the
> command line help. Its like the image parameter isn’t being passed along
>  - very weird and frustrating.
>
> Tim
>
>> On 10 May 2018, at 19:47, Esteban A. Maringolo <[hidden email]
>> <mailto:[hidden email]>> wrote:
>>
>>
>>
>> On 10/05/2018 15:30, Esteban A. Maringolo wrote:
>>> How would you know when to kill the running vm+image? It might have
>>> active sessions.
>>>
>>> If you don't care about that, what I used to manage a bunch of "worker
>>> images" (process groups) with supervisord [1], so in that case you'd
>>> stop all the related workers, copy the new image, and start the
>>> workers again.
>> My supervisord.conf entry for a pool of working images was:
>>
>> [program:psworker]
>> command=/home/trentosur/perfectstore/pharo-vm/pharo --nodisplay
>> /home/trentosur/perfectstore/ps.image worker.st <http://worker.st/>
>> 818%(process_num)1d
>> process_name=%(program_name)s_%(process_num)02d ; process_name expr
>> (default %(program_name)s)
>> numprocs=2
>> directory=/home/trentosur/perfectstore
>> autostart=false
>> autorestart=true
>> user=trentosur
>> stopasgroup=true
>> killasgroup=true
>>
>>
>> Part of the worker.st <http://worker.st/> file handling the port
>> number was:
>>
>> "Seaside server start"
>> Smalltalk isHeadless ifTrue: [
>>  Smalltalk commandLine arguments
>>    ifEmpty: [
>>      Transcript show: 'No port parameter was specified.'; cr.
>>      Smalltalk quitPrimitive. ]
>>    ifNotEmpty: [:args |
>>      | port |
>>      port := args first asNumber asInteger.
>>      Transcript show: 'Starting worker image at port ', port asString; cr.
>>      ZnZincServerAdaptor  startOn: port.
>>      ZnZincServerAdaptor default server debugMode: false.
>>    ]
>>  ]
>>  ifFalse: [
>>      | port |
>>      port := 8080.
>>      Transcript show: 'Starting worker image at port ', port asString; cr.
>>      ZnZincServerAdaptor  startOn: port.
>>      ZnZincServerAdaptor default server debugMode: true.
>>    ].
>>
>>
>> I hope it helps.
>>
>> Best regards,
>>
>>
>>
>> --
>> Esteban A. Maringolo
>> _______________________________________________
>> seaside mailing list
>> [hidden email]
>> <mailto:[hidden email]>
>> http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside
>
>
>
> _______________________________________________
> seaside mailing list
> [hidden email]
> http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside
>

--
Esteban A. Maringolo

Re: Ci automated deploy of seaside to Digital Ocean

Tim Mackinnon
Good to know you were pleased with it - I notice the Enterprise Pharo chapter uses monit, so perhaps I may have to try that instead (but supervisord seemed much simpler). One small thought - I’ve been using my default root login account (the tutorials showing Digital Ocean all seem to do this), and I have a nagging feeling this isn’t so cool - but I’m not sure what the general practice is these days with cloud infrastructure?

Presumably supervisord needs to run as root? But my image doesn’t - still, I would have expected that it would all run fine as root (if a bit dangerously?)

Tim


Re: Ci automated deploy of seaside to Digital Ocean

Stephan Eggermont-3
In reply to this post by Tim Mackinnon

DO offers load balancers too. I never checked how easy they are to
configure.

Stephan




Re: Ci automated deploy of seaside to Digital Ocean

Esteban A. Maringolo
In reply to this post by Tim Mackinnon


On 11/05/2018 08:45, Tim Mackinnon wrote:
> Good to know you were pleased with it - I notice the Enterprise pharo
> chapter uses monit - perhaps I may have to try that in instead (but
> supervisord seemed much simpler). One small thought - I’ve been using my
> default root login account  (the tutorials showing Digital Ocean all
> seem to do this) - I have a nagging feeling this isn’t so cool - but I’m
> not sure what the general practice is these days with cloud infrastructure? 

I normally don't, but mostly because the VPS I use are not "disposable",
so I lock it down as if it were a physical server.

It is not "cloud" in the sense that my servers have a name, not a number :)
Using the analogy [1], my VPSs are pets, not cattle.


> Presumably supervisord needs to run as root? But my image doesn’t -
> still I would have expected that it would all run fine as root (if not a
> bit dangerously?)

supervisord will run as root, and in the process config you can specify
the user for it (for some reason I omitted that last line, sorry).

In my previous example, you should add this to the config:
user=trentosur   ; setuid to this UNIX account to run the program

I avoid running my software as root, in particular if the server where
it is hosted contains information that shouldn't be read by anybody but me.



[1] https://cloudscaling.com/blog/cloud-computing/the-history-of-pets-vs-cattle/
(there are also "insects" now, with microservices/lambda).



--
Esteban A. Maringolo

Re: Ci automated deploy of seaside to Digital Ocean

Tim Mackinnon
Hey, thanks for the prompts - I figured out my starting problem: I was using --no-default-preferences as a VM parameter and it’s not; it’s an image parameter, so it comes AFTER the image name on the command line… subtle, but it’s the kind of thing that drives you crazy.
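In other words, the command line splits at the image name (paths here are hypothetical, echoing the supervisord config earlier in the thread):

```shell
# VM options (e.g. --nodisplay) go BEFORE the image name; image options
# (e.g. --no-default-preferences) and script arguments go AFTER it.
# Built as a variable here purely to show the ordering.
cmd='./pharo-vm/pharo --nodisplay ps.image --no-default-preferences worker.st 8180'
echo "$cmd"
```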

I will look at what it takes to add users to my DO account and run that way (it’s better, as you suggest).

Tim


Re: Ci automated deploy of seaside to Digital Ocean

Sven Van Caekenberghe-2
Have you also looked at systemd? It has been the new Linux thing for a while now. I use it successfully (https://github.com/svenvc/pharo-server-tools). It takes some getting used to, but it does work.
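A hypothetical systemd template unit in that spirit might look like this (user and paths borrow from the supervisord example earlier in the thread; the instance name %i carries the worker's port, so this is a sketch, not what pharo-server-tools actually generates):

```ini
# hypothetical /etc/systemd/system/psworker@.service
[Unit]
Description=Pharo Seaside worker on port %i
After=network.target

[Service]
User=trentosur
WorkingDirectory=/home/trentosur/perfectstore
ExecStart=/home/trentosur/perfectstore/pharo-vm/pharo --nodisplay ps.image worker.st %i
Restart=always

[Install]
WantedBy=multi-user.target
```

Each worker would then be started with something like `systemctl enable --now psworker@8180`, with its Transcript output collected by the journal.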


Re: Ci automated deploy of seaside to Digital Ocean

NorbertHartl
In reply to this post by Tim Mackinnon


Am 10.05.2018 um 23:55 schrieb Tim Mackinnon <[hidden email]>:

I forgot to mention - you are correct that this isn’t a good solution for a widely use production system - as I think you would want to use a load balancer to stop traffic to one image while sessions complete before terminating it. OR - there must be some cloud based solution for this these days - presumably using docker…

All of this can be done. It just depends on the learning curve you want to take. I successfully escaped the unix/linux hell (I used daemontools, monit, systemd before) for my services and entered docker hell :P
I use docker swarm, and that has everything you describe above. I can deploy any number of images across all my machines with ease. There is a zero-downtime option, so there is no service interruption. But the amount of time I had to put into it was massive. So while I would encourage the usage of docker, I’m not sure it is for everyone.
A medium-weight approach for you would be:

1. nginx as the frontend server. This proxies all requests to traefik (https://traefik.io/), running as a docker service.
2. traefik listens to docker events; whenever a container is started, it is added to the load-balancing rotation.
3. You deploy any number of instances of your application and they are automatically load balanced.
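A rough compose-file sketch of that setup, for swarm mode; the application image name and the router label are hypothetical, and the exact traefik flags depend on its version:

```yaml
services:
  traefik:
    image: traefik
    command: --providers.docker        # watch docker for started containers
    ports: ["80:80"]
    volumes: ["/var/run/docker.sock:/var/run/docker.sock"]
  app:
    image: my-seaside-app              # hypothetical application image
    deploy:
      replicas: 3                      # three load-balanced workers (swarm)
    labels:
      - traefik.http.routers.app.rule=PathPrefix(`/`)
```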

Norbert

Still for hobby experiments - this and Digital Ocean seems ideal.

Tim

On 10 May 2018, at 22:17, Tim Mackinnon <[hidden email]> wrote:

Ah - I see (hadn’t thought about having a configurable port number),

So if Ive understood correctly I could install supervisord with a config file like yours.

Then on completion of my build, I could sftp my new image up to DO, and then execute a command like: supervisorctl restart psworker?

Doing a check does seem to show lots of people use either it or monit (the latter being a bit more complicated).

Thanks for the help - this is my next learning step.

Tim

On 10 May 2018, at 19:47, Esteban A. Maringolo <[hidden email]> wrote:



On 10/05/2018 15:30, Esteban A. Maringolo wrote:
How would you know when to kill the running vm+image? It might have
active sessions.

If you don't care about that, what I used to manage a bunch of "worker
images" (process groups) with supervisord [1], so in that case you'd
stop all the related workers, copy the new image, and start the
workers again.
My supervisord.conf entry for a pool of working images was:

[program:psworker]
command=/home/trentosur/perfectstore/pharo-vm/pharo --nodisplay
/home/trentosur/perfectstore/ps.image worker.st 818%(process_num)1d
process_name=%(program_name)s_%(process_num)02d ; process_name expr
(default %(program_name)s)
numprocs=2
directory=/home/trentosur/perfectstore
autostart=false
autorestart=true
user=trentosur
stopasgroup=true
killasgroup=true


Part of the worker.st file handling the port number was:

"Seaside server start"
Smalltalk isHeadless
  ifTrue: [
    Smalltalk commandLine arguments
      ifEmpty: [
        Transcript show: 'No port parameter was specified.'; cr.
        Smalltalk quitPrimitive. ]
      ifNotEmpty: [ :args |
        | port |
        port := args first asNumber asInteger.
        Transcript show: 'Starting worker image at port ', port asString; cr.
        ZnZincServerAdaptor startOn: port.
        ZnZincServerAdaptor default server debugMode: false ] ]
  ifFalse: [
    | port |
    port := 8080.
    Transcript show: 'Starting worker image at port ', port asString; cr.
    ZnZincServerAdaptor startOn: port.
    ZnZincServerAdaptor default server debugMode: true ].


I hope it helps.

Best regards,



--
Esteban A. Maringolo
_______________________________________________
seaside mailing list
[hidden email]
http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside


Re: Ci automated deploy of seaside to Digital Ocean

Esteban A. Maringolo
On 11/05/2018 10:39, Norbert Hartl wrote:

> All of this can be done. It just depends on the learning curve you want
> to take. I successfully escaped the unix/linux hell (used daemontools,
> monti, systemd before) for my services and entered docker hell :P 
> I use docker swarm and that has everything you describe above. I can
> deploy any number of images across all my machines with ease. There is a
> zero-downtime option so there is no service interruption. But the amount
> of time I had to put into there was massive. So while I would encourage
> the usage of docker I’m not sure it is for everyone. 
> The medium heavy approach for you should be:
>
> 1. nginx as frontend server. This proxies all requests to a docker
> service traefik (https://traefik.io/)
> 2. traefik listens on docker whenever a container is started and adds it
> to the load balancing routine.
> 3. You deploy any number of your application and they will automatically
> load balanced.

Why nginx+traefik? Isn't the latter supposed to provide reverse proxying
as well?

A blog post/tutorial about how this was made would save many from
investing massive time into it. :)

I'm interested in how you implement the zero-downtime.

Regards!


--
Esteban A. Maringolo

Re: Ci automated deploy of seaside to Digital Ocean

Tim Mackinnon
This has proved a very interesting exercise, and Norbert - I will add docker swarm to my todo list… thanks. Sven - I think I’ve just learned about systemd when learning how to autostart supervisord (how ironic).

Just for completeness, to run supervisord here’s the systemd script ;) - https://gist.github.com/macta/0f30e73a6f5670f301795eacd25f6f0e
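For the archive, a unit of that general shape looks roughly like the sketch below. The paths and options are assumptions; the linked gist has the actual version used.

```ini
# Minimal systemd unit to run supervisord in the foreground (-n),
# so systemd can supervise it. Paths are assumptions.
[Unit]
Description=supervisord daemon
After=network.target

[Service]
ExecStart=/usr/bin/supervisord -n -c /etc/supervisord.conf
ExecStop=/usr/bin/supervisorctl shutdown
Restart=on-failure

[Install]
WantedBy=multi-user.target
```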

Tim

On 11 May 2018, at 14:54, Esteban A. Maringolo <[hidden email]> wrote:

On 11/05/2018 10:39, Norbert Hartl wrote:

All of this can be done. It just depends on the learning curve you want
to take. I successfully escaped the unix/linux hell (used daemontools,
monti, systemd before) for my services and entered docker hell :P 
I use docker swarm and that has everything you describe above. I can
deploy any number of images across all my machines with ease. There is a
zero-downtime option so there is no service interruption. But the amount
of time I had to put into there was massive. So while I would encourage
the usage of docker I’m not sure it is for everyone. 
The medium heavy approach for you should be:

1. nginx as frontend server. This proxies all requests to a docker
service traefik (https://traefik.io/)
2. traefik listens on docker whenever a container is started and adds it
to the load balancing routine.
3. You deploy any number of your application and they will automatically
load balanced.

Why nginx+traefik? Isn't the latter supposed to provide reverse proxying
as well?

A blog post/tutorial about how this was made would save many from
investing massive time into it. :)

I'm interested in how you implement the zero-downtime.

Regards!


--
Esteban A. Maringolo

Re: Ci automated deploy of seaside to Digital Ocean

NorbertHartl
In reply to this post by Esteban A. Maringolo


> Am 11.05.2018 um 15:54 schrieb Esteban A. Maringolo <[hidden email]>:
>
> On 11/05/2018 10:39, Norbert Hartl wrote:
>
>> All of this can be done. It just depends on the learning curve you want
>> to take. I successfully escaped the unix/linux hell (used daemontools,
>> monti, systemd before) for my services and entered docker hell :P
>> I use docker swarm and that has everything you describe above. I can
>> deploy any number of images across all my machines with ease. There is a
>> zero-downtime option so there is no service interruption. But the amount
>> of time I had to put into there was massive. So while I would encourage
>> the usage of docker I’m not sure it is for everyone.
>> The medium heavy approach for you should be:
>>
>> 1. nginx as frontend server. This proxies all requests to a docker
>> service traefik (https://traefik.io/)
>> 2. traefik listens on docker whenever a container is started and adds it
>> to the load balancing routine.
>> 3. You deploy any number of your application and they will automatically
>> load balanced.
>
> Why nginx+traefik? Isn't the latter supposed to provide reverse proxying
> as well?
>
Yes, maybe you don’t need nginx then. I use it because nginx is fine for caching content, rewriting requests and such; traefik isn’t as good at that. I also usually do SSL offloading in nginx so traefik does not need to handle SSL.
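The split described here could look like the following nginx server block. This is only a sketch: hostnames, ports, and certificate paths are assumptions.

```nginx
# SSL terminates in nginx; plain HTTP is proxied on to traefik.
server {
    listen 443 ssl;
    server_name myapp.example.com;
    ssl_certificate     /etc/ssl/myapp.pem;
    ssl_certificate_key /etc/ssl/myapp.key;

    location / {
        proxy_pass http://127.0.0.1:8080;   # traefik entrypoint
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```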

> A blog post/tutorial about how this was made would save many from
> investing massive time into it. :)
>
Yes, I know. But I hardly find time to release all the stuff we do to the public, so writing a blog post is out of reach for me at the moment, sorry. Maybe Marcus and I will give a session on that at ESUG. And one thing is sure… a single blog post won’t keep you from spending a lot of time on it.

> I’m interested in how you implement the zero-downtime.
>
docker swarm provides it. It starts a new container and lets it run health checks for about a minute. Then it takes that container online and slightly later takes one of the old containers offline. I’m not sure it works 100% of the time, but I did not see any HTTP errors from the frontend server while upgrading.
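In compose-file terms, the rolling behaviour described here corresponds roughly to a deploy section like the one below. The values and the healthcheck URL are illustrative assumptions, not the actual configuration.

```yaml
# Illustrative swarm service fragment: start the new task first,
# wait for its healthcheck, then retire an old one.
deploy:
  replicas: 2
  update_config:
    order: start-first   # new container comes up before the old stops
    delay: 10s
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:8080/status"]
  interval: 10s
  retries: 5
```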

Norbert

> Regards!
>
>
> --
> Esteban A. Maringolo

Re: Ci automated deploy of seaside to Digital Ocean

Tim Mackinnon
In reply to this post by NorbertHartl
This is very interesting - as I was taking the hobbyist view I completely overlooked what docker solutions there are.

Would like to hear more - and whether there is something that’s easy/cheap to play with part time. I would hope for something I can just stick my Pharo images in that takes care of restarts etc. and could scale if my hobby grew.

I’m understanding your experience is that it’s still not quite that simple?

I really love the simplicity of Lambda - and it’s pretty easy to run Pharo there too - but could you write a web app that way? They say no - but there are some attractions to escaping this nonsense we still face today.

You’ve really got me thinking now...

Tim

Sent from my iPhone

On 11 May 2018, at 14:39, Norbert Hartl <[hidden email]> wrote:



Am 10.05.2018 um 23:55 schrieb Tim Mackinnon <[hidden email]>:

I forgot to mention - you are correct that this isn’t a good solution for a widely used production system - as I think you would want a load balancer to stop traffic to one image while sessions complete, before terminating it. OR - there must be some cloud-based solution for this these days - presumably using docker…

All of this can be done. It just depends on the learning curve you want to take. I successfully escaped the unix/linux hell (used daemontools, monti, systemd before) for my services and entered docker hell :P 
I use docker swarm and that has everything you describe above. I can deploy any number of images across all my machines with ease. There is a zero-downtime option so there is no service interruption. But the amount of time I had to put into there was massive. So while I would encourage the usage of docker I’m not sure it is for everyone. 
The medium heavy approach for you should be:

1. nginx as frontend server. This proxies all requests to a docker service traefik (https://traefik.io/)
2. traefik listens on docker whenever a container is started and adds it to the load balancing routine.
3. You deploy any number of your application and they will automatically load balanced.

Norbert

Still for hobby experiments - this and Digital Ocean seems ideal.

Tim

On 10 May 2018, at 22:17, Tim Mackinnon <[hidden email]> wrote:

Ah - I see (hadn’t thought about having a configurable port number),

So if Ive understood correctly I could install supervisord with a config file like yours.

Then on completion of my build, I could sftp my new image up to DO, and then execute a command like: supervisorctl restart psworker?

Doing a check does seem to show lots of people use either it or monit (the latter being a bit more complicated).

Thanks for the help - this is my next learning step.

Tim

On 10 May 2018, at 19:47, Esteban A. Maringolo <[hidden email]> wrote:



On 10/05/2018 15:30, Esteban A. Maringolo wrote:
How would you know when to kill the running vm+image? It might have
active sessions.

If you don't care about that, what I used to manage a bunch of "worker
images" (process groups) with supervisord [1], so in that case you'd
stop all the related workers, copy the new image, and start the
workers again.
My supervisord.conf entry for a pool of working images was:

[program:psworker]
command=/home/trentosur/perfectstore/pharo-vm/pharo --nodisplay
/home/trentosur/perfectstore/ps.image worker.st 818%(process_num)1d
process_name=%(program_name)s_%(process_num)02d ; process_name expr
(default %(program_name)s)
numprocs=2
directory=/home/trentosur/perfectstore
autostart=false
autorestart=true
user=trentosur
stopasgroup=true
killasgroup=true


Part of the worker.st file handling the port number was:

"Seaside server start"
Smalltalk isHeadless ifTrue: [
Smalltalk commandLine arguments
 ifEmpty: [
   Transcript show: 'No port parameter was specified.'; cr.
   Smalltalk quitPrimitive. ]
 ifNotEmpty: [:args |
   | port |
   port := args first asNumber asInteger.
   Transcript show: 'Starting worker image at port ', port asString; cr.
   ZnZincServerAdaptor  startOn: port.
   ZnZincServerAdaptor default server debugMode: false.
 ]
]
ifFalse: [
   | port |
   port := 8080.
   Transcript show: 'Starting worker image at port ', port asString; cr.
   ZnZincServerAdaptor  startOn: port.
   ZnZincServerAdaptor default server debugMode: true.
 ].


I hope it helps.

Best regards,



--
Esteban A. Maringolo

Re: Ci automated deploy of seaside to Digital Ocean

NorbertHartl


Am 11.05.2018 um 17:48 schrieb Tim Mackinnon <[hidden email]>:

This is very interesting - as I was taking the hobbyist view I completely overlooked what docker solutions there are.

Would like to hear more - and whether there is something that’s easy/cheap to play with part time. I would hope for something I can just stick my Pharo images in and it takes care of restarts etc and could scale if my hobby grew.

I’m understanding your experience is that it’s still not quite that simple?

It can be quite simple. Creating a simple docker container running a Pharo image is very easy, and docker provides a restart policy for containers, so this is by far easier than those unix services (which I find more and more annoying over the years). But most things are easy while you have a single runtime and get really complicated as soon as you have two; going from two to three is easy again ;)
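The single-container case with docker doing the supervision could be sketched as below. The image name and ports are assumptions; `--restart unless-stopped` is docker's built-in replacement for a unix service manager here.

```shell
# Sketch: run a Pharo/Seaside image in a container and let docker's
# restart policy handle crashes and reboots.
docker run -d \
  --name seaside-worker \
  --restart unless-stopped \
  -p 8080:8080 \
  myregistry/my-seaside-app:latest
```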

Norbert


I really love the simplicity of lambda - and it’s pretty easy to run Pharo there too - but could you write a web app that way? They say no - but there are some attractions to escape this nonsense we still face today.

You’ve really got me thinking now...

Tim

Sent from my iPhone

On 11 May 2018, at 14:39, Norbert Hartl <[hidden email]> wrote:



Am 10.05.2018 um 23:55 schrieb Tim Mackinnon <[hidden email]>:

I forgot to mention - you are correct that this isn’t a good solution for a widely use production system - as I think you would want to use a load balancer to stop traffic to one image while sessions complete before terminating it. OR - there must be some cloud based solution for this these days - presumably using docker…

All of this can be done. It just depends on the learning curve you want to take. I successfully escaped the unix/linux hell (used daemontools, monti, systemd before) for my services and entered docker hell :P 
I use docker swarm and that has everything you describe above. I can deploy any number of images across all my machines with ease. There is a zero-downtime option so there is no service interruption. But the amount of time I had to put into there was massive. So while I would encourage the usage of docker I’m not sure it is for everyone. 
The medium heavy approach for you should be:

1. nginx as frontend server. This proxies all requests to a docker service traefik (https://traefik.io/)
2. traefik listens on docker whenever a container is started and adds it to the load balancing routine.
3. You deploy any number of your application and they will automatically load balanced.

Norbert

Still for hobby experiments - this and Digital Ocean seems ideal.

Tim

On 10 May 2018, at 22:17, Tim Mackinnon <[hidden email]> wrote:

Ah - I see (hadn’t thought about having a configurable port number),

So if Ive understood correctly I could install supervisord with a config file like yours.

Then on completion of my build, I could sftp my new image up to DO, and then execute a command like: supervisorctl restart psworker?

Doing a check does seem to show lots of people use either it or monit (the latter being a bit more complicated).

Thanks for the help - this is my next learning step.

Tim

On 10 May 2018, at 19:47, Esteban A. Maringolo <[hidden email]> wrote:



On 10/05/2018 15:30, Esteban A. Maringolo wrote:
How would you know when to kill the running vm+image? It might have
active sessions.

If you don't care about that, what I used to manage a bunch of "worker
images" (process groups) with supervisord [1], so in that case you'd
stop all the related workers, copy the new image, and start the
workers again.
My supervisord.conf entry for a pool of working images was:

[program:psworker]
command=/home/trentosur/perfectstore/pharo-vm/pharo --nodisplay
/home/trentosur/perfectstore/ps.image worker.st 818%(process_num)1d
process_name=%(program_name)s_%(process_num)02d ; process_name expr
(default %(program_name)s)
numprocs=2
directory=/home/trentosur/perfectstore
autostart=false
autorestart=true
user=trentosur
stopasgroup=true
killasgroup=true


Part of the worker.st file handling the port number was:

"Seaside server start"
Smalltalk isHeadless ifTrue: [
Smalltalk commandLine arguments
 ifEmpty: [
   Transcript show: 'No port parameter was specified.'; cr.
   Smalltalk quitPrimitive. ]
 ifNotEmpty: [:args |
   | port |
   port := args first asNumber asInteger.
   Transcript show: 'Starting worker image at port ', port asString; cr.
   ZnZincServerAdaptor  startOn: port.
   ZnZincServerAdaptor default server debugMode: false.
 ]
]
ifFalse: [
   | port |
   port := 8080.
   Transcript show: 'Starting worker image at port ', port asString; cr.
   ZnZincServerAdaptor  startOn: port.
   ZnZincServerAdaptor default server debugMode: true.
 ].


I hope it helps.

Best regards,



--
Esteban A. Maringolo

Re: Ci automated deploy of seaside to Digital Ocean

Pierce Ng-3
In reply to this post by Tim Mackinnon
On Fri, May 11, 2018 at 12:45:55PM +0100, Tim Mackinnon wrote:
> One small thought - I’ve been using
> my default root login account  (the tutorials showing Digital Ocean
> all seem to do this) - I have a nagging feeling this isn’t so cool -
> but I’m not sure what the general practice is these days with cloud
> infrastructure?

I use daemontools. It runs as root and starts Pharo under a dedicated uid,
configured directly in its run file:

  #!/bin/sh
  /usr/bin/setuidgid app1 \
      /pkg/pharo5vm/gopharo -vm-display-null -vm-sound-null smallcms1.image --no-quit

gopharo is a script:

  #!/bin/sh
  PHAROVMPATH=$(dirname `readlink -f "$0"`)
  LD_LIBRARY_PATH="$PHAROVMPATH" exec "$PHAROVMPATH/pharo" $@ &

I use gopharo because I place my application-specific shared libraries
(such as custom-built libsqlite.so) in the Pharo VM directory, which is
/pkg/pharo5vm/ in my setup.

Pierce
