We’ve been running Unicorn for more than a month.

Unicorn is an HTTP server for Ruby, similar to Mongrel or Thin. It uses Mongrel’s Ragel HTTP parser but has a dramatically different architecture and philosophy.

In the classic setup you have nginx sending requests to a pool of mongrels using a smart balancer or a simple round robin. Eventually you want better visibility and reliability out of your load balancing situation, so you throw haproxy into the mix.

We ran this setup for a long time and were very happy with it. But it wasn’t without problems.

When actions take longer than 60s to complete, Mongrel will try to kill the thread. This has proven unreliable due to Ruby’s threading: mongrels will often get into a “stuck” stage and need to be killed by some external process.

Yes, this is a problem with our application. But we have a complicated application with many moving parts, and things go wrong. Our production environment needs to handle errors and failures gracefully.

We restart mongrels that hit a certain memory threshold. This is often a problem with parts of our application; Engine Yard has a great post on memory bloat and how to deal with it. We don’t kill app servers often due to memory bloat, but it happens. You need to be prepared for things to not always be perfect, and so does your production environment.

When your server’s CPU is pegged, restarting 9 mongrels hurts. Each one has to load all of Rails, all your gems, all your libraries, and your app into memory before it can start serving requests. They’re all doing the exact same thing but fighting each other for resources. During that time, you’ve killed your old mongrels, so any users hitting your site have to wait for the new mongrels to be fully started. If you’re really overloaded, this can result in 10s+ waits. There are some complicated solutions that automate “rolling restarts” with multiple haproxy setups and restarting mongrels in different pools. But, as I said, they’re complicated and not foolproof.
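The classic nginx-to-mongrel-pool setup described above can be sketched as an nginx upstream block; the port numbers and names here are illustrative, not the actual production configuration:

```nginx
# Minimal sketch of the classic setup: nginx round-robins requests
# across a pool of mongrels listening on local ports.
upstream mongrel_pool {
  server 127.0.0.1:8000;
  server 127.0.0.1:8001;
  server 127.0.0.1:8002;
}

server {
  listen 80;
  location / {
    proxy_pass http://mongrel_pool;
  }
}
```

Adding haproxy means pointing `proxy_pass` at haproxy instead, which then balances across the mongrels and gives you its stats page for visibility.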