Hi,
I've studied Sean's excellent post on faking an https client from Gemstone: http://www.monkeysnatchbanana.com/posts/2010/06/22/faking-a-https-client-for-glass.html . The gist seems to be: why install stunnel when the web server you're already running supports proxying to https? The blog post describes the setup for nginx; however, I've picked lighttpd and can't work out an equivalent proxying setup, or even whether it's possible. Has anyone achieved a similar effect in lighttpd? In my case I want to call out from Gemstone to PayPal.
Thanks Nick
After quickly reviewing the lighttpd proxy docs, I don't think that is possible.
However, my lighttpd knowledge is nowhere near as good as my nginx knowledge, so I could be wrong.
Hi Sean,
> After quickly reviewing the lighttpd proxy docs, I don't think that is possible.

Yes, I think you're right - mod_proxy in lighttpd won't allow names to be used in the proxy, only IPs, and it doesn't seem like good practice to hard-code an IP for a service like PayPal. It also doesn't allow the protocol to be specified, i.e. https. For what it's worth, here's my best attempt to date:
# a request from Gemstone to localhost:9050 should be redirected via https to a specific server
$SERVER["socket"] == "127.0.0.1:9050" {
  proxy.server = ( "" =>
    ( (
      "host" => "an ip address",
      "port" => 443
    ) )
  )
}

Nick
I would say you can either:
set up stunnel.
use nginx for the https proxying (not very memory intensive).
switch from lighttpd to nginx.

If you need help with any of the above, let me know.
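For the nginx route, the https-proxying setup from the blog post boils down to a small local-only server block roughly like the sketch below (untested here; the 9050 port just mirrors your lighttpd attempt, and the PayPal hostname is only a placeholder for whatever endpoint you actually call - nginx also needs to be built with SSL support for this):

server
{
    # only listen on loopback so nothing outside the box can use this proxy
    listen 127.0.0.1:9050;

    location /
    {
        # Gemstone talks plain http to localhost:9050; nginx forwards the request out over https
        proxy_pass https://www.sandbox.paypal.com;
    }
}

That server block would sit inside the same http { } section as the rest of your nginx config.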
Hi Sean
I've set up stunnel - it was surprisingly straightforward to configure. However, I do like the simplicity of minimising the moving parts to Gemstone and the web server, so I'll have another look at nginx. Do you have a Gemstone-fastcgi-nginx configuration lying around?
Cheers
Nick
I use proxying rather than FastCGI; however, FastCGI is pretty easy to set up.
It is basically the same as how proxying works (which is covered by http://www.monkeysnatchbanana.com/posts/2010/06/23/reverse-proxying-to-seaside-with-nginx.html), but you would want to use the upstream directive to set up many FastCGI listeners easily.

Directions are here: http://wiki.nginx.org/NginxHttpFcgiModule

If you let me know what ports you are running Seaside on, I can easily whip one up for you. Quick questions if I do that:

Do you want custom error pages for certain 4xx and 5xx error messages from Seaside, or just pass them back to the client?

Do you want a setup where nginx first checks whether a file exists and, if not, passes the request to Seaside? This is the easiest to set up, but you get extra script-kiddie hits on your backend server, as it assumes that any URI that isn't found should be handled by the backend.

Or a setup where all dynamic content has a URL like /seaside/XXXXX, so we know that any ^/seaside URL gets handed to the backend and everything else is handled in nginx?
Hi Sean,
Thanks for the offer - I'll very happily take you up on it. Answers inline:
On 17 August 2010 15:02, Sean Allen <[hidden email]> wrote:

> If you let me know what ports you are running Seaside on, I can easily whip one up for you.

Ports 9001, 9002, 9003.

> Do you want custom error pages for certain 4xx and 5xx error messages from Seaside, or just pass them back to the client?

Custom error pages would be great.

> Do you want a setup where nginx first checks whether a file exists and, if not, passes the request to Seaside? Or a setup where all dynamic content has a URL like /seaside/XXXXX?

I'd prefer anything under /files to search a specified directory. Note: I'm using Seaside 3.0, so I don't have a ^/seaside prefix - the dynamic content comes straight from the root. In my lighttpd config I'd protected the ^/config and ^/tools URLs so that they were forbidden unless the request came from localhost.

Thanks a lot
Nick
Untested, but I think in general this should work. There are several things that aren't set up:

gzip isn't set up. Error pages aren't set up.
server_name and root need to be modified.
Content in /files is set to have an expiry of 7 days.
And more fun stuff could be set up.

If it doesn't work, let me know the error message you get on startup / in the error log and I can help you figure out what I left out, typo'd, etc. I would have tested, but I don't have an environment set up right now that I can easily drop this into. For the https proxying, you would add another server entry a la my blog post.

If it does work, there is some duplication that can be removed.

worker_processes 1;

events
{
    worker_connections 1024;
}

http
{
    include mime.types;
    default_type application/octet-stream;

    upstream seaside
    {
        server localhost:9001;
        server localhost:9002;
        server localhost:9003;
    }

    server
    {
        server_name www.example.com;

        root /var/www/www.example.com/;

        location /files
        {
            expires 7d;
        }

        location /config
        {
            allow 127.0.0.1;
            deny all;
            error_page 404 = @seaside;
        }

        location /tools
        {
            allow 127.0.0.1;
            deny all;
            error_page 404 = @seaside;
        }

        location /
        {
            error_page 404 = @seaside;
        }

        location @seaside
        {
            fastcgi_intercept_errors on;
            fastcgi_pass seaside;
        }
    }
}
OK, there are a couple of serious omissions in that...
Two things: I left out the FastCGI param setup... dur. And the main index wouldn't get sent to Seaside.

Corrections below - this seems to work for me, as I set up a system to give it a go.

worker_processes 1;

events
{
    worker_connections 1024;
}

http
{
    include mime.types;
    default_type application/octet-stream;

    upstream seaside
    {
        server localhost:9001;
        server localhost:9002;
        server localhost:9003;
    }

    server
    {
        server_name www.example.com;

        root /var/www/www.example.com/;

        location /files
        {
            expires 7d;
        }

        location /config
        {
            allow 127.0.0.1;
            deny all;
            error_page 403 = @seaside;
        }

        location /tools
        {
            allow 127.0.0.1;
            deny all;
            error_page 403 = @seaside;
        }

        location /
        {
            error_page 403 404 = @seaside;
        }

        location @seaside
        {
            include fastcgi_params;
            fastcgi_intercept_errors on;
            fastcgi_pass seaside;
        }
    }
}
One more thing, Nick...

Using /files/ does bad things to the default Seaside setup, as it can't get the /files/ that it expects from the file library. You might want to change that to /static/, so

location /files
{
    expires 7d;
}

becomes

location /static
{
    expires 7d;
}
Hi Sean,
Thanks for all the help. The good news is that the FastCGI to Gemstone is working well - that's a great help. However, nginx still responds to requests for /config or /tools from non-localhost clients by passing them to Gemstone.

My plan with the /files directive is to export all the file libraries into /files so that nginx deals with static content rather than Gemstone - this seems to work. Any thoughts on hiding /config and /tools from non-localhost clients?
Thanks again
Nick
Hi Sean
I found the problem: I guess I'm not handling the 403 in Seaside. If I comment out error_page 403 = @seaside then all is well - nginx puts up its 403 Forbidden page. So it looks like I'm good to go.
Thanks again Nick
Are you connecting from a browser on the same machine?
If you are, then it might see that as localhost and allow it.

Sorry, that should be error_page 404 = @seaside in those tools and config locations, like:

location /config
{
    allow 127.0.0.1;
    deny all;
    error_page 404 = @seaside;
}

location /tools
{
    allow 127.0.0.1;
    deny all;
    error_page 404 = @seaside;
}
Hi Sean,
I'm slowly beginning to grasp nginx configuration; adding the debug directive to error_log really helps:

error_log /var/log/nginx/error.log debug;

My understanding, based on the debug output for a request of http://www.mysite.com/ with:

location /
{
    error_page 403 404 = @seaside;
}

is that nginx looks for root_path/index.html, which fails; it then tries a directory listing of root_path, which I haven't enabled, so that produces a 403, and the result is to forward the request to the named location @seaside. Cunning. Here's the relevant portion of the debugging output:

2010/08/18 06:43:26 [debug] 8850#0: *1 try files phase: 11
2010/08/18 06:43:26 [debug] 8850#0: *1 content phase: 12
2010/08/18 06:43:26 [debug] 8850#0: *1 open index "/var/nginx/www/index.html"
2010/08/18 06:43:26 [debug] 8850#0: *1 stat() "/var/nginx/www/index.html" failed (2: No such file or directory)
2010/08/18 06:43:26 [debug] 8850#0: *1 http index check dir: "/var/nginx/www"
2010/08/18 06:43:26 [debug] 8850#0: *1 content phase: 13
2010/08/18 06:43:26 [debug] 8850#0: *1 content phase: 14
2010/08/18 06:43:26 [debug] 8850#0: *1 content phase: 15
2010/08/18 06:43:26 [debug] 8850#0: *1 content phase: 16
2010/08/18 06:43:26 [error] 8850#0: *1 directory index of "/var/nginx/www/" is forbidden, client: 172.16.181.1, server: _, request: "GET / HTTP/1.1", host: "www.mysite.com"
2010/08/18 06:43:26 [debug] 8850#0: *1 http finalize request: 403, "/?" 1
2010/08/18 06:43:26 [debug] 8850#0: *1 http special response: 403, "/?"
2010/08/18 06:43:26 [debug] 8850#0: *1 test location: "@seaside"
2010/08/18 06:43:26 [debug] 8850#0: *1 using location: @seaside "/?"

I read a little around the nginx docs and found the try_files directive. Changing the location / to:

location /
{
    try_files $uri @seaside;
}

results in a slightly terser output:

2010/08/18 06:38:06 [debug] 8841#0: *1 try files phase: 11
2010/08/18 06:38:06 [debug] 8841#0: *1 http script var: "/"
2010/08/18 06:38:06 [debug] 8841#0: *1 try to use file: "/" "/var/nginx/www/"
2010/08/18 06:38:06 [debug] 8841#0: *1 try to use file: "@seaside" "/var/nginx/www@seaside"
2010/08/18 06:38:06 [debug] 8841#0: *1 test location: "@seaside"
2010/08/18 06:38:06 [debug] 8841#0: *1 using location: @seaside "/?"
2010/08/18 06:38:06 [debug] 8841#0: *1 generic phase: 3

Any thoughts on the advantages and disadvantages of these two approaches? All appears to be working equally well with either configuration.

Cheers
Nick
try_files is clearer. I did it with error_page because it is more explicit about what is going on.

If you do try_files as

try_files $uri $uri/ @seaside;

so that it still works if someone leaves off the trailing / for a directory, then you are back to the 403 error that doesn't get passed back to @seaside.
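If you want the trailing-slash behaviour and still have that 403 end up at Seaside, one possible combination - untested, just putting together the two directives already discussed in this thread - would be:

location /
{
    # serve the file (or directory) if it exists, otherwise hand the request to Seaside
    try_files $uri $uri/ @seaside;

    # a directory hit with no index file generates a 403; send that to Seaside
    # as well instead of letting nginx show its own Forbidden page
    error_page 403 = @seaside;
}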