This Thursday we installed apt-cacher-ng on our project’s webserver. The idea was to test [under a ‘relative amount of stress’] both:
- how fabric performs parallel execution
- how apt-cacher-ng copes with fetching the packages, caching them, and serving them on request, in parallel
As the ‘relative amount of stress’ we chose: parallel installation of GNOME onto 20 Ubuntu 11.04 machines. Running our fabfile, which installed GNOME on all 20 workstations in the LAN, accomplished nothing, i.e. no GNOME got installed. The output showed that packages could not be found; apt-cacher-ng crashed every time we tried to install GNOME on all the machines at once. Changing its configuration to allow up to 25 parallel threads did not help.
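For reference, the thread-count change we tried lives in apt-cacher-ng’s main configuration file. A sketch of the relevant excerpt, with option names as given in the apt-cacher-ng documentation (treat the exact values as assumptions, not our verified settings):

```shell
# /etc/apt-cacher-ng/acng.conf (excerpt; values are illustrative)
Port: 3142
CacheDir: /var/cache/apt-cacher-ng
# raise the cap on parallel connection threads
# (this is the change that did NOT stop the crashes for us)
MaxConThreads: 25
```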
The workaround which did help was to first run the GNOME installation on one machine, and only then (after that operation completed) let fabric install GNOME on all the machines in parallel.
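Since reproducing the fabric calls themselves needs a live LAN of hosts, here is a minimal, library-free Python sketch of just the ordering logic behind the workaround: run the installer against one host first (so the proxy fetches and caches every package once), then fan out to the rest in parallel. The host names and the `install` callback are made up for illustration; in our setup the callback would be the fabric task that runs `apt-get install`:

```python
from concurrent.futures import ThreadPoolExecutor

def install_everywhere(hosts, install):
    """Run `install` on the first host alone so the proxy fetches and
    caches the packages once, then fan out to the rest in parallel."""
    first, rest = hosts[0], hosts[1:]
    plan = [("serial", first)]
    install(first)  # warm-up: the proxy downloads and caches everything
    with ThreadPoolExecutor(max_workers=len(rest) or 1) as pool:
        # the remaining hosts are served straight from the warm cache
        for host in pool.map(lambda h: (install(h), h)[1], rest):
            plan.append(("parallel", host))
    return plan

# hypothetical stand-in for the real fabric task: just records the host
done = []
plan = install_everywhere(["ws%02d" % i for i in range(1, 21)], done.append)
```

The point is only the sequencing: one serial warm-up run, then nineteen parallel runs that never force the proxy to fetch and serve at the same time.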
Conclusion: apt-cacher-ng could not simultaneously fetch packages, cache them, and serve 20 machines; it could, however, fetch and cache the packages while serving one machine, and then serve [from its cache] the other 19 machines in parallel.
This leaves us with two options:
- constructing our fabfile so that every time we want to install packages in parallel we first do it on one machine and then on the rest
- replacing apt-cacher-ng with some other proxy-caching tool; our project supervisor Tero Karvinen suggested Squid, a well-known, well-developed, well-supported, and widely used web cache / proxy server.
The group has decided on the latter, so tests with Squid have begun.
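For the Squid tests, an almost-default squid.conf needs little more than a larger object-size cap, since Squid’s default maximum cacheable object size is only a few megabytes and many .deb packages (GNOME pulls in plenty) are larger. A sketch of the kind of excerpt involved; the subnet and sizes are assumptions, not our measured configuration:

```shell
# /etc/squid/squid.conf (excerpt; values are illustrative)
http_port 3128
# allow clients on the lab LAN (subnet is an assumption)
acl lan src 192.168.0.0/24
http_access allow lan
# raise the per-object cap so full .deb packages are cacheable
maximum_object_size 512 MB
# on-disk cache: 10 GB, default directory layout
cache_dir ufs /var/spool/squid 10000 16 256
</imports>
```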
#1 by Doctor Who on December 3, 2011 - 2:45 pm
Why don’t you use more imagination before you start finger-pointing?
Why do you think it’s a problem with local proxy server? Why don’t you look for other bottlenecks?
The more likely candidate is the upstream mirror, which you might be flooding with hundreds of requests from the same host, so it starts limiting your bandwidth.
#2 by armens movsesjans on December 3, 2011 - 9:17 pm
It must be a problem with the local proxy because:
– as I mentioned in the post, once the packages were cached and served to one machine there was no problem serving the other 19; caching and serving all 20 at the same time was what failed
– Squid with an almost-default configuration has performed the same task flawlessly, without having to cache first and serve afterwards.
But what do you mean by an “upstream mirror flooded with hundreds of requests from the same host”? What requests, to install GNOME? If I tell 20 GNOME-less machines to go directly to an Ubuntu mirror and install GNOME [at the same time], the mirror might well get defensive, if only because of the NAT. But if I tell these machines to do the same via my local proxy, like apt-cacher-ng, only the proxy contacts the mirror; isn’t that the whole point? One machine goes and gets the packages. Once. Should that be a problem [for a mirror]?
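To spell out that routing: each workstation only ever talks to the local proxy, via a one-line apt setting. A sketch of what that looks like on a client; the proxy host name is an assumption (3142 is apt-cacher-ng’s usual port, 3128 Squid’s):

```shell
# /etc/apt/apt.conf.d/01proxy on each workstation
# (host name is hypothetical; pick the port of whichever proxy is in use)
Acquire::http::Proxy "http://proxy.lan:3142";
```

With this in place, 20 parallel `apt-get install` runs produce 20 requests to the proxy, but at most one download of each package from the mirror.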
Anyway, no one is finger-pointing here. Our project’s focus is automated system configuration management; apt proxying comes in only to provide the infrastructure for tests of fabric’s parallel execution, e.g. when we simultaneously tell 20 machines to go and install GNOME. So we needed a workable proxying solution with the least resources spent, and apt-cacher-ng didn’t turn out to be the one.