[Girder-users] Deploy Girder on AWS with Elastic Beanstalk
Carlos Agüero
caguero at osrfoundation.org
Thu Apr 27 13:33:25 EDT 2017
On Thu, Apr 27, 2017 at 7:04 AM, Michael Grauer <michael.grauer at kitware.com>
wrote:
> [...]
>
> I hope you don't mind if I ask you some follow-up questions :) Just
> trying to understand your setup and choices in more detail.
>
Of course!
> I agree that your security setup sounds reasonable, HTTPS to load balancer,
> HTTP from load balancer to Girder (though I have more questions on this
> below), assuming the instances are not visible to the outside world and
> only to the LB via the VPN, and Mongo/Instances talking to each other
> inside the same VPN. Out of curiosity (rather than suggesting a policy),
> how do you handle ssh, are each of the machines accessible to ssh or do you
> have a VPN ssh gateway machine?
>
Both the Girder instances and the EC2 machine with the Mongo database have
public IP addresses, share the same VPC (private network), and allow SSH
access from the outside. On the instances under Elastic Beanstalk (the
Girder instances), this is the default behavior (you can create a key pair
.pem file for sshing). On the EC2 instance with the Mongo DB, you have to
attach a security group that configures the access. The inbound rule for
port 22 is set to 0.0.0.0/0, allowing access from machines outside the
private network. The rule for port 27017, on the other hand, restricts
access to machines belonging to the same VPC.
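For reference, the two inbound rules I described could be expressed in a
CloudFormation template roughly like this (the resource name, VPC ID, and
VPC CIDR are placeholders, not my actual setup):

```yaml
# Sketch of the security group attached to the Mongo EC2 instance.
# MongoSecurityGroup, vpc-xxxxxxxx and 10.0.0.0/16 are hypothetical.
MongoSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: MongoDB host rules
    VpcId: vpc-xxxxxxxx
    SecurityGroupIngress:
      # SSH reachable from anywhere
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: 0.0.0.0/0
      # MongoDB reachable only from inside the VPC
      - IpProtocol: tcp
        FromPort: 27017
        ToPort: 27017
        CidrIp: 10.0.0.0/16
```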
> When you say load balancer, does that mean Elastic Load Balancer or
> something else? I'm confused about how you use Nginx, are you using ELB +
> Nginx, and if so how does ELB hand off to Nginx? Where does Nginx live, is
> it in a separate Docker container that redirects to the Girder instances?
>
I meant the load balancer provided by Elastic Beanstalk. If I'm not wrong,
the request first hits the Nginx running on the load balancer. From there it
is forwarded to one of the Girder instances, which runs another Nginx, and
that Nginx in turn forwards the request to the HTTP server running inside
the Docker container. You can SSH into all the machines (including the load
balancer) if you want to poke around and see configurations, logs, etc. I
was playing with the /etc/nginx/conf.d/ configurations and restarting the
service, just as on any regular machine.
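As a sketch, the per-instance Nginx config in /etc/nginx/conf.d/ basically
just proxies to the container; something like the following (the file name
and container port are placeholders, not what EB actually generates):

```nginx
# /etc/nginx/conf.d/girder-proxy.conf (sketch; port 8080 is a placeholder)
server {
    listen 80;

    location / {
        # Forward to the HTTP server inside the Docker container
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```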
EB offers a way (although a bit convoluted, in my opinion) to override the
Nginx configuration files when you deploy new versions of your code. I'm
doing this to tweak the Nginx configuration on the Girder instances so that
non-HTTPS requests are redirected to HTTPS. The solution involves creating
files under the .ebextensions directory with some specific syntax. I managed
to solve the http-->https redirection on a non-dockerized instance, but I
still have some issues when using Docker for Girder. In particular, EB
includes a health checker that monitors the instances. When I enable the
redirection, the load balancer receives a 301 response with the https URL
instead of the 200 OK it expects, and that makes the load balancer think
that the instances are not behaving correctly. Maybe you have more
experience than me dealing with Nginx configurations.
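One fix I'm considering is to exempt the health check from the redirect, or
to redirect based on the X-Forwarded-Proto header the load balancer sets.
A sketch of the Nginx side (the /health path and the container port are
assumptions; the actual health check URL is whatever EB is configured with):

```nginx
# Redirect only real HTTP traffic; let the ELB health check through.
# /health and port 8080 are placeholders for my actual setup.
server {
    listen 80;

    # The health checker hits this over plain HTTP; answer 200 directly
    location = /health {
        return 200 'OK';
    }

    location / {
        # Only redirect requests that originally arrived over HTTP;
        # X-Forwarded-Proto is set by the load balancer
        if ($http_x_forwarded_proto != 'https') {
            return 301 https://$host$request_uri;
        }
        proxy_pass http://127.0.0.1:8080;
    }
}
```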