One aspect where your server can be optimized for performance is making its memory management more efficient.
By default, programs operate on memory in small chunks (pages). When large blocks of memory have to be accessed and written in small chunks, things slow down.
The Huge Pages mechanism lets programs work with memory in much larger chunks, which is faster.
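For perspective, a regular page on x86_64 Linux is 4 KiB, while a typical huge page is 2 MiB. You can confirm the regular page size with:
getconf PAGESIZE
4096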
There are basically two types of Huge Pages available.
1. Explicit Huge Pages (Hugetlbfs)
First are explicit huge pages (Hugetlbfs). These require programs to be compiled with support for them, and they are considered the older huge pages mechanism.
2. Transparent Huge Pages (THP)
Second are transparent huge pages (THP). These are enabled via kernel settings, and even programs that are not aware of huge pages can leverage them when THP is enabled.
THP are enabled by default starting from CentOS/RHEL 6.
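You can check which THP mode is currently active by reading the corresponding sysfs file; the value in brackets is the active mode:
cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never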
Transparent vs Explicit
A nice explanation of explicit (old) vs transparent huge pages can be found here and here.
Checking the huge pages status can be done via grep Huge /proc/meminfo, e.g.:
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
HugePages_Total: 128
HugePages_Free: 128
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 262144 kB
The above tells you that there are no transparent huge pages in use (AnonHugePages), while there is a pool of 128 explicit huge pages (HugePages_Total), all of which are free (HugePages_Free) and none reserved (HugePages_Rsvd).
As you can see, the HugePages_* parameters mostly reflect the status of explicit huge pages.
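If you want to know how much RAM the explicit huge page pool sets aside, a quick one-liner that multiplies HugePages_Total by Hugepagesize does the math for you:
awk '/HugePages_Total/ {t=$2} /Hugepagesize/ {s=$2} END {print t*s/1024 " MB"}' /proc/meminfo
256 MB
That matches the Hugetlb figure above: 128 pages of 2048 kB each.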
Some folks don’t get it
I have seen some performance enthusiasts skip over these details and go as far as saying:
… automatically enables Hugepages support if you have a Linux kernel that supports it + CentOS 7 and are not using Redis server. If you use Redis server, hugepages support is disabled for best Redis server performance and memory usage.
Obviously, without giving end users any distinction between transparent and explicit huge pages. These can be enabled and disabled independently of each other, not to mention that a careful approach is needed when adjusting the configuration values for each.
So what should we really do and what should you disable or keep?
It depends on the situation, but in most cases, you would want to:
- disable Transparent Huge Pages for apps that may have problems with them
- enable explicit huge pages for apps which require them to work faster
Redis and Huge Pages
Redis only has a problem with transparent huge pages, so if you run Redis, disable THP.
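A minimal sketch of disabling THP at runtime looks like this (it does not survive a reboot; to make it permanent, add the same commands to a startup script or pass transparent_hugepage=never on the kernel command line):
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
This is essentially the remedy Redis itself suggests in its startup warning when THP is enabled.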
PHP and Huge Pages
The PHP OPcache extension can actually benefit from huge pages. PHP 7, by default, is compiled with support for explicit huge pages.
If you use the OPcache file cache only, then none of the below applies.
But if you plan on the standard approach of having OPcache store compiled scripts in memory, you may want to configure a few things for smooth operation.
Your specific PHP build might have been compiled without support for huge pages. Run this to confirm:
php -i | grep huge_code_pages
Empty output means you're out of luck: PHP was compiled without the --enable-huge-code-pages switch (or is old enough not to support it).
A result like this:
opcache.huge_code_pages => On => On
…means that huge pages are supported and enabled in OPcache.
If you want to enable huge pages support for OPcache (provided it was compiled with it), add the following to your php.ini:
opcache.huge_code_pages=1
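After restarting PHP-FPM (or your web server), re-run the check from above to confirm the directive now reports On:
php -i | grep huge_code_pages
opcache.huge_code_pages => On => On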
The next thing to know is that the number of huge pages is controlled via the kernel setting /proc/sys/vm/nr_hugepages. What's the proper setting value?
Suppose that you want to allocate 256 MB of RAM to PHP OPcache (YMMV). Pages that are set aside as huge pages are reserved inside the kernel and cannot be used for other purposes, so we don't want to allocate too many explicit huge pages.
Let’s find out the size of a huge page on your system:
grep "Hugepagesize:" /proc/meminfo
Hugepagesize: 2048 kB
So each huge page equals 2 MB, and we need a total of 256 / 2 = 128 huge pages allocated by the kernel. Edit /etc/sysctl.conf and add:
# Allocate 128*2MiB for HugePageTables
vm.nr_hugepages = 128
Apply changes with:
sysctl -p
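If you want the arithmetic done for you for a different target size, a small sketch like this prints the value to put into /etc/sysctl.conf (WANT_MB is just a placeholder for the amount of memory you want to dedicate):
WANT_MB=256
PAGE_KB=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)
echo "vm.nr_hugepages = $(( WANT_MB * 1024 / PAGE_KB ))"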
As per the wiki here:
if grep "Huge" /proc/meminfo doesn't show all the pages, you can try to free the cache with sync; echo 3 > /proc/sys/vm/drop_caches (where "3" stands for "purge pagecache, dentries and inodes"), then try sysctl -p again.
As a last resort, you can reboot the server so that the kernel can allocate things properly. (Without a reboot, you may end up with fewer huge pages reserved than requested.)
Confirm that your huge pages have been allocated with:
grep -E 'AnonHugePages|HugePages_Total' /proc/meminfo
Sample output:
AnonHugePages: 0 kB
HugePages_Total: 128
In this sample output, the kernel successfully allocated all huge pages (HugePages_Total). Depending on the programs starting up during boot, you may not be so lucky, so the kernel documentation gives a helpful hint:
The administrator can allocate persistent huge pages on the kernel boot command line by specifying the “hugepages=N” parameter, where ‘N’ = the
number of huge pages requested. This is the most reliable method of allocating huge pages as memory has not yet become fragmented.
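On CentOS/RHEL 7 with GRUB2, a sketch of doing that would be to append the parameter to the existing GRUB_CMDLINE_LINUX line in /etc/default/grub, rebuild the GRUB configuration (the grub.cfg path differs on EFI systems), and reboot:
GRUB_CMDLINE_LINUX="... hugepages=128"
grub2-mkconfig -o /boot/grub2/grub.cfg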