We have by far the largest RPM repository with NGINX module packages and VMODs for Varnish. If you want to install NGINX, Varnish, and lots of useful performance/security software with smooth yum upgrades for production use, this is the repository for you. An active subscription is required.
You might have read our previous post on the best tools for optimizing images, and may now be wondering how to automate those optimizations.
An instinctive solution is a cron job that runs optimization utilities against recently added website files, e.g. every Monday at 01:00 over files modified within the past week:
0 1 * * 1 find /path/to/dir/ -name '*.jpg' -type f -mtime -7 -exec jpegoptim -q -s -p --all-progressive -m 65 {} \; >/dev/null 2>&1
Why is this approach wrong?
- Files may be uploaded with their modification time preserved, so cron's mtime filter will never pick them up
- A missed cron run will skip the files added in the meantime
- There is no “memory” of the optimization – repeated batch processing of the same directory is either impossible or results in extreme quality degradation (think of reducing quality to 65% twice, three times, etc.)
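The compounding effect behind the last point is easy to quantify. If each pass is modeled as multiplying quality by 65% (a rough illustration, not jpegoptim's exact behavior), quality collapses after just a few runs:

```shell
# Rough model: each "jpegoptim -m 65" pass multiplies quality by 0.65.
q=100
for n in 1 2 3; do
  q=$(( q * 65 / 100 ))
  echo "after pass $n: ~${q}% of original quality"
done
```

Three passes, and the image is at roughly a quarter of its original quality, which is exactly why the optimizer needs a memory of what it has already processed.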
So what is the proper solution? Meet the xattr and inotify Linux kernel features:
- xattr lets us efficiently store the optimization level in an image file's extended attributes, so that subsequent runs can check it and prevent re-optimizing the same files over and over
- inotify allows for real-time optimization, as soon as files are added
Leveraging xattr
#!/bin/bash
# Optimize JPEG files under the given directory (or the current one),
# marking each processed file with a user.q extended attribute so that
# repeated runs skip it.
# The attr package provides the getfattr/setfattr commands:
yum -y install attr
find "${1:-.}" -iname '*.jpg' -exec sh -c '
for i do
  # getfattr prints "No such attribute" when the file has not been
  # marked yet; only then do we optimize it and record the quality
  getfattr -n user.q --only-values "$i" 2>&1 | grep -q "No such attribute" \
    && jpegoptim --strip-all --force --all-progressive "$i" \
    && setfattr -n user.q -v 100 "$i"
done' sh {} +
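The find ... -exec sh -c '...' sh {} + idiom above is worth isolating, since both snippets in this post rely on it: {} + hands many filenames to a single shell invocation, and the trailing sh fills $0 so that the loop's "for i do" iterates over the actual files. A minimal sketch with the optimizer commands stripped out:

```shell
# Demonstrate the batching idiom on throwaway files.
dir=$(mktemp -d)
touch "$dir/a.jpg" "$dir/b.jpg" "$dir/notes.txt"

# The trailing 'sh' becomes $0; {} + appends all matches as "$@".
found=$(find "$dir" -iname '*.jpg' -exec sh -c '
for i do
  basename "$i"
done' sh {} + | sort)

echo "$found"
rm -rf "$dir"
```

Only a.jpg and b.jpg are printed; notes.txt never reaches the inner loop because find filtered it out before the shell ran.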
find . -type f -iname '*.png' -exec sh -c '
for i do
  # Skip files already marked as optimized; mark each file only
  # after zopflipng has successfully recompressed it in place
  getfattr -n user.optimized --only-values "$i" 2>&1 | grep -q "No such attribute" \
    && zopflipng -m -y "$i" "$i" \
    && setfattr -n user.optimized -v 1 "$i"
done' sh {} +
Leveraging inotify
This script optimizes images automatically, in real time, as they appear in the watched directory.
#!/bin/bash
# Name of the backup subdirectory created next to each image before it is compressed.
BACKUPDIR=backup
if [ "$#" -eq 1 ]; then
WATCH_DIR=$1
else
WATCH_DIR=$(pwd)
echo -e "\e[1;31mNo watch directory provided as a script parameter. Using the current working directory as the watch dir\e[0m"
fi
# Note: -m (monitor) keeps events on stdout for the pipe below; the -d (daemon)
# flag would detach and send events only to the log file, leaving the loop idle.
inotifywait -mr --timefmt '%d/%m/%y %H:%M' --format '%T %w %f' \
-e close_write "$WATCH_DIR" | tee -a /var/log/compressImage.log | while read -r date time DIR FILE; do
NEW_FILE=${DIR}${FILE}
echo "${NEW_FILE} found"
# Use the appropriate if clause to optimize other formats; beware that the output is always PNG
#if [[ ${FILE} = *.png ]] || [[ ${FILE} = *.gif ]] || [[ ${FILE} = *.bmp ]] || [[ ${FILE} = *.tiff ]]; then
if [[ ${FILE} = *.png ]]; then
echo "processing $NEW_FILE"
# The optipng run below fires a new close_write for the same file,
# but there is no infinite loop: optipng cannot optimize the file any
# further on the second pass, so processing stops there.
optipng -o2 -keep -preserve -quiet "${NEW_FILE}"
# Move the .bak copy that optipng kept into the backup subdirectory
# (%w ends with a slash, so the paths concatenate cleanly).
BACKUPDIR_PATH=${DIR}${BACKUPDIR}
mkdir -p "${BACKUPDIR_PATH}"
mv "${DIR}${FILE}.bak" "${BACKUPDIR_PATH}/${FILE}"
elif [[ ${FILE} = *.jpg ]] || [[ ${FILE} = *.jpeg ]]; then
echo "processing $NEW_FILE"
# jpegoptim likewise fires a new close_write for the same file, but a
# second pass cannot optimize it any further, so there is no loop.
jpegoptim -p -t --strip-all --strip-icc --strip-iptc "$NEW_FILE"
fi
done
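To keep the watcher running across reboots, it can be wrapped in a systemd unit. A minimal sketch, assuming the script above is saved as /usr/local/bin/compress-images.sh and watches an uploads directory (the path, unit name, and watch directory here are illustrative, not fixed by the script):

```ini
# /etc/systemd/system/compress-images.service
[Unit]
Description=Real-time image optimization watcher
After=network.target

[Service]
ExecStart=/usr/local/bin/compress-images.sh /var/www/html/uploads
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl daemon-reload followed by systemctl enable --now compress-images.service.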