Already existing package files can be imported into apt-cacher-ng's cache pool instead of downloading them again. There are some restrictions: only files which are referenced by index data already present in the cache can be identified, and the candidate files need to be placed in the _import directory.
HOWTO:
1. Make sure the relevant index data is present and up-to-date in the cache, e.g. by running "apt-get update" on a client configured to use this apt-cacher-ng instance.
2. Put the files into the _import subdirectory in the cache, i.e. in /var/cache/apt-cacher-ng/_import/. The files may be hard links or symlinks, it does not matter. When done, apt-cacher-ng will move those files to its own internal locations. Example:
cd /var/cache
mkdir apt-cacher-ng/_import
cp -laf apt-proxy apt-cacher /var/cache/apt-cacher-ng/_import
chown -R apt-cacher-ng apt-cacher-ng/_import
3. Trigger the import operation on apt-cacher-ng's web control interface (the command-and-control page).
4. When the import run has finished, look at the _import directory again. All files that could be identified as referenced by archive metadata should no longer be there if they have been successfully moved. If some files have been left behind, check whether a client can actually use them, e.g. with "apt-cache policy ..." and/or by checking checksums with the md5sum/sha1sum tools. Probably they are no longer needed by anyone, and apt-cacher-ng therefore just left them behind. If they are still needed, follow the instructions in step 1 (or do similar things for your distribution) and retry the import operation. Setting the verbosity flag (see the checkbox on the command-and-control page) can also help to discover why particular files were refused.
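For example, to check a leftover file (the package name and file name here are placeholders):
apt-cache policy somepackage
md5sum /var/cache/apt-cacher-ng/_import/somepackage_1.0-1_amd64.deb
The checksum can then be compared with the one listed for that package version in the repository's index data.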
NOTE: APT is pretty efficient at avoiding unnecessary downloads, which can make a proxy blind to some relevant files. ACNG makes some attempts to guess the remote locations of missed (not downloaded) files, but these heuristics may fail, especially on non-Debian systems. When some files are permanently ignored, check the process output for messages about the update of Packages/Sources files. When some relevant package sources are missing there, there is a brute-force method for Debian/Ubuntu users to force their download on the client side. To do that, run:
rm /var/cache/apt/*cache.bin
rm /var/lib/apt/lists/*Packages
rm /var/lib/apt/lists/*Sources
on the client to purge APT's internal cache, and then rerun "apt-get update" there.
To get a basic overview of the cache contents, the distkill.pl script may be used. See section 7.2 for details and warnings.
# /usr/lib/apt-cacher-ng/distkill.pl
Scanning /var/cache/apt-cacher-ng, please wait...
Found distributions:
1. testing (6 index files)
2. sid (63 index files)
3. etch-unikl (30 index files)
4. etch (30 index files)
5. experimental (505 index files)
6. lenny (57 index files)
7. unstable (918 index files)
8. stable (10 index files)
WARNING: The removal action would wipe out whole directories containing
index files. Select d to see detailed list.
Which distribution to remove? (Number, 0 to exit, d for details): d
Directories to remove:
1. testing:
/var/cache/apt-cacher-ng/debrep/dists/testing
2. sid:
/var/cache/apt-cacher-ng/localstuff/dists/sid
/var/cache/apt-cacher-ng/debrep/dists/sid
4. etch:
/var/cache/apt-cacher-ng/ftp.debian-unofficial.org/debian/dists/etch
5. experimental:
/var/cache/apt-cacher-ng/debrep/dists/experimental
6. lenny:
/var/cache/apt-cacher-ng/security.debian.org/dists/lenny
/var/cache/apt-cacher-ng/debrep/dists/lenny
7. unstable:
/var/cache/apt-cacher-ng/debrep/dists/unstable
/var/cache/apt-cacher-ng/localstuff/debian/dists/unstable
8. stable:
/var/cache/apt-cacher-ng/debrep/dists/stable
It's possible to use an apt-cacher-ng repository URL with the jigdo-lite utility, although with some limitations. It is, however, possible to feed jigdo-lite with the package contents from your mirror. To do that, first start jigdo-lite as usual, something like:
jigdo-lite http://cdimage.debian.org/.../...-DVD-1.jigdo
When asked about the Debian mirror, enter something like:
http://proxy.host:3142/ftp.de.debian.org/debian/
i.e. construct the same URL as it would appear in the sources.list of a regular apt-cacher-ng user.
That's all; jigdo-lite will fetch the package files through the apt-cacher-ng proxy.
Sometimes clients might need to access a remote site directly, e.g. to do some work which is not file-transfer oriented, while the regular transfers still pass through the configured apt-cacher-ng proxy. Such remote hosts can be marked for direct access in the apt configuration, e.g. in /etc/apt/apt.conf:
Acquire::HTTP::Proxy::archive.example.org "DIRECT";
// or: Acquire::HTTP::Proxy::archive.example.org "other.proxy:port";
Sometimes clients need to download through apt-cacher-ng, but the data shall not be stored on the hard disk of the server. To achieve this, use the DontCache directive (see the configuration examples for details) to define such files.
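A minimal sketch for acng.conf, assuming a pattern matched against the URL as in the shipped configuration examples; the path fragment here is purely hypothetical:
DontCache: .*local-experiments.*
Files matching the pattern are passed through to the clients without being stored in the cache.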
Symptom: a common situation is a periodic download of hundreds of files through apt-cacher-ng where just half of them are present in the cache. Although caching works fine, there are visible delays on some files during the download.
Possible cause and relief: the download from the real mirror gets interrupted while apt-cacher-ng delivers a set of files from the internal cache. While the remote connection is suspended, it times out and needs to be recreated as soon as a cache miss occurs, i.e. when apt-cacher-ng has to fetch more data from the remote mirror. A workaround for this behaviour is simple, provided that the remote mirror can handle long request queues: set the pipelining depth to a very high value in the apt.conf file or in one of the files in /etc/apt/apt.conf.d/. With something like:
Acquire::http { Pipeline-Depth "200"; }
there is a higher chance of getting the server connection "preheated" before a stall occurs.
First, it should be clear what actually needs to be done. To integrate the packages stored on a DVD or ISO image, read on in section 8.8.
The situation with the import of ISO files themselves is complicated: they are not supported by the cache, and there is no expiration mode for them either. The feature might be considered for addition in some future release of apt-cacher-ng.
What is possible right now is publishing a directory containing ISO files through the built-in web server mode; see the LocalDirs config option for details.
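A minimal sketch for acng.conf, assuming the images are stored in /srv/isos (a hypothetical path); this publishes them to clients as http://proxyhost:3142/isos/:
LocalDirs: isos /srv/isos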
Integrating package files from DVD or ISO images is not much different from the usual import operation, see above for instructions.
One possible way to get the files into the _import directory is simply mounting the image there:
mount -o loop /dev/cdrom /var/cache/apt-cacher-ng/_import
After running the import operation, the disc can be unmounted and removed.
A possible variation is an import via symlinks. This can make sense when space consumption must be reduced and the ISO image is to stay on the server for a long time. To achieve this, mount the image at some mount point outside of the _import directory and make this setup permanent via an /etc/fstab entry (don't forget the loop option), then create a symlink tree pointing to the mount point inside the _import directory (something like cp -as /mnt/image_sarge_01/pool /var/cache/apt-cacher-ng/_import). The subsequent import operation should pick up the symlinks and keep them as symlinks instead of making file copies.
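A minimal sketch of such a setup, with hypothetical image and mount point paths:
# /etc/fstab entry keeping the image mounted across reboots (note the loop option)
/srv/isos/sarge-dvd1.iso /mnt/image_sarge_01 iso9660 loop,ro 0 0
mount /mnt/image_sarge_01
cp -as /mnt/image_sarge_01/pool /var/cache/apt-cacher-ng/_import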
It is possible to configure custom commands which are executed before an internet connection attempt and a certain time after the connection was closed. The commands are bound to a remapping configuration, and the hook file is named after that remapping config, like debrep.hooks for Remap-debrep. See section 4.3.2 and the conf/*.hooks and /usr/share/doc/apt-cacher-ng/examples/*.hooks files for details.
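A sketch of a conf/debrep.hooks file, assuming the PreUp/Down/DownTimeout keys used in the shipped examples; the script names are placeholders for whatever establishes or tears down the connection:
# executed before a connection attempt
PreUp=/usr/local/sbin/link-up.sh
# executed after the connection was idle for DownTimeout seconds
Down=/usr/local/sbin/link-down.sh
DownTimeout=60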
Unless configured explicitly, the server listens on any interface with the IPv4 or IPv6 protocol. To restrict this, use the BindAddress option. It should contain a space-separated list of IP addresses associated with particular network interfaces. When the option is set, the server won't listen on addresses or protocols not included there.
To limit the server to one IP protocol, specify the addresses in the protocol-specific syntax only (like 192.0.43.10 for IPv4).
The usual wildcard addresses can also be used to match all interfaces configured for the specific protocol, like 0.0.0.0 for IPv4.
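A sketch for acng.conf, restricting the listener to localhost and one hypothetical LAN address (and thereby to IPv4):
BindAddress: 127.0.0.1 192.168.0.10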
Usually, remote hosts are accessed with the protocol and target IP reported as the first candidate by the operating system facilities (getaddrinfo). It is possible to change this behaviour, i.e. to skip IPv6 or IPv4 addresses, or to try an IPv6 connection first and use IPv4 as the alternative (or vice versa). See the ConnectProto option in the configuration examples.
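For example, to prefer IPv6 and fall back to IPv4, a line like the following can be used in acng.conf:
ConnectProto: v6 v4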
There is a general use case where the data-storing behaviour of APT is not so fortunate. Imagine an old laptop with a slow and small hard disk but a modern network connection (e.g. a CardBus-attached WLAN card). There is not enough space for APT to store the downloaded packages on the local disk, or not enough to perform the upgrade afterwards.
A plausible workaround in this case is moving the contents of the /var/cache/apt/archives directory to a mounted NFS share and replacing the original directory with a symlink (or a bind mount to the mentioned share). However, this solution would transfer all data at least three times over the network. Another plausible workaround might be the use of curlftpfs, which mounts a remote FTP share that can then be specified as a file:// URL in sources.list. However, this solution won't work with a local HTTP proxy like apt-cacher-ng (and httpfs http://sourceforge.net/projects/httpfs/ is not an alternative because it works only with a single file per mount).
As a real alternative, apt-cacher-ng comes with its own implementation of an HTTP filesystem called acngfs. It makes some assumptions about the proxy's behaviour in order to emulate a real directory structure. Directories can be entered but not browsed (i.e. content listing is not possible because of HTTP protocol limitations). Anyhow, this solution is good enough for APT: when it checks the contents of a data source located on an acngfs share, it reads just the files required for the update, which makes the apt-cacher-ng server download them on the fly.
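For illustration, assuming the share was mounted so that the debrep repository shown elsewhere in this manual appears under /mnt/acngfs/debrep (a hypothetical mount point), a client's sources.list entry might look like:
deb file:/mnt/acngfs/debrep unstable main
APT then reads the index and package files through the mount point, and apt-cacher-ng fetches and caches them on demand.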
And finally, acngfs usage can be optimized for local access. This works best if the proxy daemon runs on the same machine as acngfs, there are hundreds of packages to update, and filesystem access costs are negligible. Here the cache directory can be specified in the acngfs parameters, and acngfs then reads files directly from the cache whenever they are completely downloaded and don't have volatile contents.
It is possible to create a partial local mirror of a remote package repository. The method of doing this is usually known as pre-caching. Such a mirror would contain all files available to apt through apt-cacher-ng, making the cache server suitable for pure off-line use.
The configuration uses index files in the local cache to declare which remote files shall be mirrored. The choice of the relevant files decides which branch, which architecture or which source tree is to be mirrored. For convenience, it's possible to use glob expressions to create a semi-dynamic list. The format is shell-like and relative to the cache directory; a shell running in the cache directory can be helpful to verify the correctness of an expression.
Example:
PrecacheFor: debrep/dists/unstable/*/binary-amd64/Packages*
PrecacheFor: emacs.naquadah.org/unstable/*
Assuming that the debrep repository is configured with a proper remapping setup (see above), this would download all Debian packages listed for the amd64 architecture in the unstable branch.
There is also support for faster file updates using deltas, see Debdelta for details. The delta_uri URL mentioned there needs to be added as the deltasrc option, see section 4.3.2 for details.
The operation is triggered using the web interface; various options and an estimation mode can also be configured there. The CGI URL generated by the browser can be called with other clients to repeat the job, for example in a daily executed script. Another possible command-line client is the expire-caller.pl script shipped with this package (which replaces the CGI parameters with environment variables, see section 7.1.2). For regular tools like wget or curl, remember the need for quoting and the secrecy of user/password data: command calls might expose them to local users.
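A sketch of such a script call; the query string is a placeholder to be copied from the URL the browser generates, and user/password are hypothetical credentials:
curl -s 'http://user:password@proxyhost:3142/acng-report.html?<parameters copied from the browser>'
The quotes keep the shell from interpreting the special characters in the URL.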