[php] What does "zend_mm_heap corrupted" mean

All of a sudden I've been having problems with my application that I've never had before. I decided to check Apache's error log, and I found an error message saying "zend_mm_heap corrupted". What does this mean?

OS: Fedora Core 8, Apache: 2.2.9, PHP: 5.2.6

This question is tagged: php, heap, fedora, php-internals

The answers are:


For me the problem was pdo_mysql. The query returned 1960 results; when I limited it to 1900 records it worked, so the problem was pdo_mysql and a too-large result array. I rewrote the query with the original mysql extension and it worked.

// Connect with the legacy mysql extension instead of PDO (credentials are placeholders)
$link = mysql_connect('localhost', 'user', 'xxxx') or die(mysql_error());
mysql_select_db("db", $link);
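For completeness, a sketch of how the rest of the rewrite might look with the legacy mysql extension; the table name here is a placeholder, not something from the original query:

$result = mysql_query("SELECT * FROM some_table", $link) or die(mysql_error());

$rows = array();
while ($row = mysql_fetch_assoc($result)) {
    $rows[] = $row;   // all ~1960 rows collected without the crash
}
mysql_free_result($result);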

Apache did not report any previous errors.

zend_mm_heap corrupted
zend_mm_heap corrupted
zend_mm_heap corrupted
[Mon Jul 30 09:23:49 2012] [notice] child pid 8662 exit signal Segmentation fault (11)
[Mon Jul 30 09:23:50 2012] [notice] child pid 8663 exit signal Segmentation fault (11)
[Mon Jul 30 09:23:54 2012] [notice] child pid 8666 exit signal Segmentation fault (11)
[Mon Jul 30 09:23:55 2012] [notice] child pid 8670 exit signal Segmentation fault (11)

I just had this issue as well on a server I own, and the root cause was APC. I commented out the "apc.so" extension in the php.ini file, reloaded Apache, and the sites came right back up.
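If your setup loads APC the same way, the change amounts to one commented-out line in php.ini (or in whichever conf.d file loads it); the snippet below is only illustrative:

; Disable APC while troubleshooting zend_mm_heap corrupted
;extension=apc.so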


I've tried everything above, and zend.enable_gc = 0 was the only config setting that helped me.

PHP 5.3.10-1ubuntu3.2 with Suhosin-Patch (cli) (built: Jun 13 2012 17:19:58)


I was in the same situation here; nothing above helped. Looking more closely, I found my problem: the code called die(header()) after some output had already been sent to the buffer. The person who wrote it ignored CakePHP's own facilities and did not simply do return $this->redirect($url).
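A minimal sketch of the anti-pattern versus what CakePHP expects (the URL here is just an example):

// Problematic: output has already been sent, then header() plus a hard die()
echo 'some partial output';
die(header('Location: /users/login'));

// Inside a CakePHP controller action, simply return the framework redirect instead:
return $this->redirect('/users/login');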

Trying to re-invent the wheel was the problem.

I hope this helps someone!


I had zend_mm_heap corrupted along with child pid ... exit signal Segmentation fault on a Debian server that had been upgraded to jessie. After a long investigation it turned out that XCache had been installed back before Zend OPcache was generally available.

After apt-get remove php5-xcache and service apache2 restart, the errors vanished.


On the off chance that somebody else has this problem in the same way that I do, I thought I'd offer the solution that worked for me.

I have PHP installed on Windows on a drive other than my system drive (H:).

In my php.ini file, the values of several file-system settings were written like \path\to\directory, which would have worked fine if my installation were on C:.

I needed to change them to H:\path\to\directory. Adding the drive letter in several different places in my php.ini file fixed the problem right away. I also made sure (though I don't think this is necessary) to fix the same problem in my PEAR config, as several values excluded the drive letter there as well.
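For illustration, the kind of change involved looks like this; the directive names and paths are examples, not the exact ones from my php.ini:

; Before - paths resolve against the current drive, which only works when PHP lives on C:
include_path = "\php\pear"
extension_dir = "\php\ext"

; After - explicit drive letter for an installation on H:
include_path = "H:\php\pear"
extension_dir = "H:\php\ext"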


I had this error using the Mongo 2.2 driver for PHP:

$collection = $db->selectCollection('post');
$collection->ensureIndex(array('someField', 'someOtherField', 'yetAnotherField')); 

^^DOESN'T WORK

$collection = $db->selectCollection('post');
$collection->ensureIndex(array('someField', 'someOtherField')); 
$collection->ensureIndex(array('yetAnotherField')); 

^^ WORKS! (?!)


For me none of the previous answers worked, until I tried:

opcache.fast_shutdown=0

That seems to work so far.

I'm using PHP 5.6 with PHP-FPM and Apache proxy_fcgi, if that matters...


If you are on a Linux box, try this on the command line:

export USE_ZEND_ALLOC=0

I had this same issue when I had an incorrect IP in session.save_path for memcached sessions. Changing it to the correct IP fixed the problem.
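For reference, the relevant php.ini lines look roughly like this (host and port are placeholders; the exact save_path format differs between the memcache and memcached extensions):

session.save_handler = memcached
session.save_path    = "192.0.2.10:11211"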


I don't think there is one answer here, so I'll add my experience. I saw this same error along with random httpd segfaults. This was a cPanel server. The symptom in question was that Apache would randomly reset the connection ("No data received" in Chrome, or "The connection was reset" in Firefox). These were seemingly random: most of the time it worked, sometimes it did not.

When I arrived on the scene, output buffering was off. Since this thread hinted at output buffering, I turned it on (= 4096) to see what would happen. At that point, the requests all started showing the error, which was good, because the error was now repeatable.

I went through and started disabling extensions. Among them were eAccelerator, PDO, the ionCube loader, and plenty of others that looked suspicious, but none of that helped.

I finally identified the naughty PHP extension as "homeloader.so", which appears to be some kind of cPanel easy-installer module. After removing it, I haven't experienced any other issues.

On that note, this appears to be a generic error message, so your mileage will vary with all of these answers. The best course of action you can take:

  • Make the error repeatable (what conditions?) every time
  • Find the common factor
  • Selectively disable any PHP modules, options, etc (or, if you're in a rush, disable them all to see if it helps, then selectively re-enable them until it breaks again)
  • If this fails to help, many of these answers hint that it could be code related. Again, the key is to make the error repeatable on every request so you can narrow it down. If you suspect a piece of code is doing this, then once the error is repeatable, remove code until the error stops. The last piece of code you removed was the culprit.

Failing all of the above, you could also try things like:

  • Upgrading or recompiling PHP. Hope whatever bug is causing your issue is fixed.
  • Move your code to a different (testing) environment. If this fixes the issue, what changed? php.ini options? PHP version? etc...

Good luck.


I've also noticed this error, along with SIGSEGVs, when running old code which uses '&' to explicitly force references under PHP 5.2+.


I think a lot of things can cause this problem. In my case, I gave two classes the same name, and one tried to load the other:

class A {}   // in file a.php

class A      // in file b.php
{
    public function foo()
    {
        require 'a.php';   // load a.php, which declares the other class A
    }
}

That caused the problem in my case.

(I was using the Laravel framework; this happened while actually running php artisan db:seed.)


"zend_mm_heap corrupted" means there is a problem with memory management. It can be caused by any PHP module. In my case, installing APC sorted it out. In theory other packages like eAccelerator, XDebug, etc. may help too. Or, if you have that kind of module installed, try switching it off.


In my case, the cause of this error was that one of the arrays was getting very big. I set my script to reset the array on every iteration and that sorted the problem.


A lot of people are mentioning disabling XDebug to solve the issue. This obviously isn't viable in a lot of instances, as it's enabled for a reason - to debug your code.

I had the same issue, and noticed that if I stopped listening for XDebug connections in my IDE (PhpStorm 2019.1 EAP), the error stopped occurring.

The actual fix, for me, was removing any existing breakpoints.

One possible reason this is a valid fix is that PhpStorm is sometimes not very good at removing breakpoints that no longer reference valid lines of code after files have been changed externally (e.g. by git).

Edit: Found the corresponding bug report in the xdebug issue tracker: https://bugs.xdebug.org/view.php?id=1647


For me it was RabbitMQ together with Xdebug in PhpStorm. The fix: Settings > Languages & Frameworks > PHP > Debug > Xdebug > untick "Can accept external connections".


Check your unset()s. Make sure you don't unset() references to $this (or equivalents) in destructors, and that unset()s in destructors don't cause the reference count of the same object to drop to 0. I've done some research and found that this is what usually causes the heap corruption.
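Purely as an illustration of the shape of code this advice is about (class and property names are invented; whether it actually corrupts the heap depends on the PHP version and loaded extensions):

class Node
{
    public $self;

    public function __construct()
    {
        $this->self = $this;   // the object keeps a reference back to itself
    }

    public function __destruct()
    {
        // unsetting a reference back to $this during destruction is the
        // pattern to double-check
        unset($this->self);
    }
}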

There is a PHP bug report about the zend_mm_heap corrupted error. See the comment [2011-08-31 07:49 UTC] by f dot ardelian at gmail dot com for an example of how to reproduce it.

I have a feeling that all the other "solutions" (change php.ini, compile PHP from source with fewer modules, etc.) just hide the problem.


As per the bug tracker, set opcache.fast_shutdown=0. Fast shutdown uses the Zend memory manager to clean up its mess; this setting disables that.


I am writing a PHP extension and ran into this problem as well. When I called an external function with complicated parameters from my extension, this error popped up.

The reason was that I had not allocated memory for a parameter (char *) in the external function. If you are writing the same kind of extension, pay attention to this.


If you are using traits and the trait is loaded after the class (i.e. in the case of autoloading), you need to load the trait beforehand.
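A sketch of the work-around (file names are made up): load the trait's file explicitly before the class that uses it, instead of relying on autoload ordering.

// bootstrap.php (sketch)
require_once __DIR__ . '/MyTrait.php';   // contains: trait MyTrait { ... }
require_once __DIR__ . '/MyClass.php';   // contains: class MyClass { use MyTrait; }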

https://bugs.php.net/bug.php?id=62339

Note: this bug is very, very random due to its nature.


I was getting this same error under PHP 5.5 and increasing the output buffering didn't help. I wasn't running APC either, so that wasn't the issue. I finally tracked it down to OPcache; I simply had to disable it for the CLI. There is a specific setting for this:

opcache.enable_cli=0

Once it was switched off, the zend_mm_heap corrupted error went away.


Because I never found a solution to this, I decided to upgrade my LAMP environment. I went to Ubuntu 10.04 LTS with PHP 5.3.x. This seems to have stopped the problem for me.


On PHP 5.3, after lots of searching, this is the solution that worked for me:

I've disabled the PHP garbage collection for this page by adding:

<?php gc_disable(); ?>

to the end of the problematic page; that made all the errors disappear.

source.


Setting

assert.active = 0 

in php.ini helped me (it turned off type assertions in the php5UTF8 library and the zend_mm_heap corrupted error went away).


Look for any module that uses buffering, and selectively disable it.

I'm running PHP 5.3.5 on CentOS 4.8, and after doing this I found eaccelerator needed an upgrade.


Some tips that may help someone:

fedora 20, php 5.5.18

public function testRead() {
    $ri = new MediaItemReader(self::getMongoColl('Media'));

    foreach ($ri->dataReader(10) as $data) {
       // ...
    }
}

public function dataReader($numOfItems) {
    $cursor = $this->getStorage()->find()->limit($numOfItems);

    // here is the first place where "zend_mm_heap corrupted" error occurred
    // var_dump() inside foreach-loop and generator
    var_dump($cursor); 

    foreach ($cursor as $data) {
        // ...
        // and this is the second place where "zend_mm_heap corrupted" error occurred
        $data['Geo'] = [
            // try to access [0] index that is absent in ['Geo']
            'lon' => $data['Geo'][0],
            'lat' => $data['Geo'][1]
        ];
        // ...
        // Generator is used  !!!
        yield $data;
    }
}

Using var_dump() is not actually the error; it was placed there just for debugging and would be removed from production code. But the real place where zend_mm_heap corrupted happened was the second one.
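A possible defensive rewrite of that second spot (just a sketch; it only avoids reading an index that may be absent):

$data['Geo'] = [
    'lon' => isset($data['Geo'][0]) ? $data['Geo'][0] : null,
    'lat' => isset($data['Geo'][1]) ? $data['Geo'][1] : null,
];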


I wrestled with this issue for a week. At first, the following seemed to work for me.

In php.ini make these changes

report_memleaks = Off  
report_zend_debug = 0  

My setup is:

Linux ubuntu 2.6.32-30-generic-pae #59-Ubuntu SMP  
with PHP Version 5.3.2-1ubuntu4.7  

It didn't work, though.

So I tried using a benchmark script and recorded where the script was hanging. I discovered that just before the error, a PHP object was instantiated, and it took more than 3 seconds to complete what it was supposed to do, whereas in previous loop iterations it took at most 0.4 seconds. I ran this test quite a few times, and every time the same thing happened. I thought that instead of making a new object every time (there is a long loop here), I should reuse the object. I have tested the script more than a dozen times so far, and the memory errors have disappeared!
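In outline, the change was from constructing inside the loop to constructing once and reusing the instance (the class and method names below are invented for the sketch):

// Before: a fresh object on every pass of a long loop
foreach ($items as $item) {
    $processor = new ItemProcessor();
    $processor->handle($item);
}

// After: one instance, reused on every pass
$processor = new ItemProcessor();
foreach ($items as $item) {
    $processor->handle($item);
}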


Really hunt through your code for a silent error. In my Symfony app I got the zend_mm_heap corrupted error after removing a block from a Twig base template, forgetting that it was referenced in sub-templates. No error was thrown.


Since none of the other answers addressed it: I had this problem in PHP 5.4 when I accidentally ran an infinite loop.


For me, it was ZendDebugger that caused the memory leak and made the memory manager crash.

I disabled it and I'm currently searching for a newer version. If I can't find one, I'm going to switch to Xdebug...


This option has already been mentioned above, but I want to walk you through the steps of how I reproduced this error.

In brief, this is what helped me:

opcache.fast_shutdown = 0

My legacy configuration:

  1. CentOS release 6.9 (Final)
  2. PHP 5.6.24 (fpm-fcgi) with Zend OPcache v7.0.6-dev
  3. Bitrix CMS

Step by step:

  1. Run phpinfo()
  2. Find "OPcache" in the output. It should be enabled; if not, this solution will definitely not help you.
  3. Execute opcache_reset() anywhere (thanks to the bug report, comment [2015-05-15 09:23 UTC] nax_hh at hotmail dot com), then load several pages of your site. If OPcache is to blame, a line with the following text will appear in the nginx logs:

104: Connection reset by peer

and in the php-fpm logs

zend_mm_heap corrupted

and on the next line

fpm_children_bury()

  4. Set opcache.fast_shutdown=0 (for me, in the /etc/php.d/opcache.ini file)
  5. Restart php-fpm (e.g. service php-fpm restart)
  6. Load some pages of your site again. Execute opcache_reset() and load some pages again (a throwaway script for this is sketched below). Now there should be no errors.
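For steps 3 and 6, a throwaway script like the following can be dropped somewhere web-reachable and requested over HTTP; note that opcache_reset() only clears the cache of the SAPI it runs in, so running it from the CLI will not touch php-fpm's cache:

<?php
// reset.php (name is just an example) - clears the whole opcode cache for this SAPI
var_dump(opcache_reset());   // true if a reset was scheduled, false if OPcache is disabled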

By the way, in the output of phpinfo() you can find OPcache statistics and then tune the parameters (for example, increase the memory limit). There are good instructions for tuning OPcache available online (in Russian, but you can use a translator).


For me the problem was a crashed memcached daemon, as PHP was configured to store session information in memcached. It was eating 100% CPU and acting weird. After a memcached restart the problem was gone.


In my case, I had forgotten the following in the code:

);

I played around and forgot it in the code here and there; in some places I got heap corruption, in other cases just a plain old segfault:

[Wed Jun 08 17:23:21 2011] [notice] child pid 5720 exit signal Segmentation fault (11)

I'm on Mac OS X 10.6.7 and XAMPP.


This is not a problem that is necessarily solvable by changing configuration options.

Changing configuration options will sometimes have a positive impact, but it can just as easily make things worse, or do nothing at all.

The nature of the error is this:

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

int main(void) {
    void **mem = malloc(sizeof(char)*3);
    void *ptr;

    /* read past end */
    ptr = (char*) mem[5];   

    /* write past end */
    memcpy(mem[5], "whatever", sizeof("whatever"));

    /* free invalid pointer */
    free((void*) mem[3]);

    return 0;
}

The code above can be compiled with:

gcc -g -o corrupt corrupt.c

Executing the code with valgrind you can see many memory errors, culminating in a segmentation fault:

krakjoe@fiji:/usr/src/php-src$ valgrind ./corrupt
==9749== Memcheck, a memory error detector
==9749== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==9749== Using Valgrind-3.10.1 and LibVEX; rerun with -h for copyright info
==9749== Command: ./corrupt
==9749== 
==9749== Invalid read of size 8
==9749==    at 0x4005F7: main (an.c:10)
==9749==  Address 0x51fc068 is 24 bytes after a block of size 16 in arena "client"
==9749== 
==9749== Invalid read of size 8
==9749==    at 0x400607: main (an.c:13)
==9749==  Address 0x51fc068 is 24 bytes after a block of size 16 in arena "client"
==9749== 
==9749== Invalid write of size 2
==9749==    at 0x4C2F7E3: memcpy@@GLIBC_2.14 (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==9749==    by 0x40061B: main (an.c:13)
==9749==  Address 0x50 is not stack'd, malloc'd or (recently) free'd
==9749== 
==9749== 
==9749== Process terminating with default action of signal 11 (SIGSEGV): dumping core
==9749==  Access not within mapped region at address 0x50
==9749==    at 0x4C2F7E3: memcpy@@GLIBC_2.14 (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==9749==    by 0x40061B: main (an.c:13)
==9749==  If you believe this happened as a result of a stack
==9749==  overflow in your program's main thread (unlikely but
==9749==  possible), you can try to increase the size of the
==9749==  main thread stack using the --main-stacksize= flag.
==9749==  The main thread stack size used in this run was 8388608.
==9749== 
==9749== HEAP SUMMARY:
==9749==     in use at exit: 3 bytes in 1 blocks
==9749==   total heap usage: 1 allocs, 0 frees, 3 bytes allocated
==9749== 
==9749== LEAK SUMMARY:
==9749==    definitely lost: 0 bytes in 0 blocks
==9749==    indirectly lost: 0 bytes in 0 blocks
==9749==      possibly lost: 0 bytes in 0 blocks
==9749==    still reachable: 3 bytes in 1 blocks
==9749==         suppressed: 0 bytes in 0 blocks
==9749== Rerun with --leak-check=full to see details of leaked memory
==9749== 
==9749== For counts of detected and suppressed errors, rerun with: -v
==9749== ERROR SUMMARY: 4 errors from 3 contexts (suppressed: 0 from 0)
Segmentation fault

If you didn't know already, you have now figured out that mem is heap-allocated memory. The heap refers to the region of memory available to the program at runtime, because the program explicitly requested it (with malloc, in our case).

If you play around with this terrible code, you will find that not all of those obviously incorrect statements result in a segmentation fault (a fatal, terminating error).

I explicitly made those errors in the example code, but the same kinds of errors happen very easily in a memory-managed environment: if some code doesn't maintain the refcount of a variable (or some other symbol) in the correct way, for example if it frees it too early, another piece of code may read from already-freed memory; if it somehow stores the address wrong, another piece of code may write to invalid memory; memory may be freed twice; and so on.

These are not problems that can be debugged in PHP, they absolutely require the attention of an internals developer.

The course of action should be:

  1. Open a bug report on http://bugs.php.net
    • If you have a segfault, try to provide a backtrace
    • Include as much configuration information as seems appropriate; in particular, if you are using OPcache, include the optimization level.
    • Keep checking the bug report for updates, more information may be requested.
  2. If you have opcache loaded, disable optimizations
    • I'm not picking on OPcache; it's great, but some of its optimizations have been known to cause faults.
    • If that doesn't work, even though your code may be slower, try unloading OPcache first.
    • If any of this changes or fixes the problem, update the bug report you made.
  3. Disable all unnecessary extensions at once.
    • Begin to enable all your extensions individually, thoroughly testing after each configuration change.
    • If you find the problem extension, update your bug report with more info.
  4. Profit.

There may not be any profit... as I said at the start, you may be able to find a way to change your symptoms by messing with configuration, but this is extremely hit and miss, and it doesn't help the next time you get the same zend_mm_heap corrupted message; there are only so many configuration options.

It's really important that we create bug reports when we find bugs. We cannot assume that the next person to hit the bug is going to do it... more likely than not, the actual resolution is in no way mysterious, if you make the right people aware of the problem.

USE_ZEND_ALLOC

If you set USE_ZEND_ALLOC=0 in the environment, this disables Zend's own memory manager. Zend's memory manager ensures that each request has its own heap and that all memory is freed at the end of a request, and it is optimized for allocating chunks of memory of just the right size for PHP.

Disabling it will disable those optimizations; more importantly, it will likely create memory leaks, since there is a lot of extension code that relies upon the Zend MM to free memory for it at the end of a request (tut, tut).

It may also hide the symptoms, but the system heap can be corrupted in exactly the same way as Zend's heap.

It may seem to be more tolerant or less tolerant, but fix the root cause of the problem, it cannot.

The ability to disable it at all is for the benefit of internals developers; you should never deploy PHP with the Zend MM disabled.
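If you do need to disable it for debugging, the usual pattern is to do so only for a single run under valgrind, so that valgrind can see the real allocations (the script name below is a placeholder):

USE_ZEND_ALLOC=0 valgrind --track-origins=yes php failing-script.php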


The issue with zend_mm_heap corrupted boggled me for a couple of hours. First I disabled and removed memcached and tried some of the settings mentioned in this question's answers; after testing, it seemed to be an issue with the OPcache settings. I disabled OPcache and the problem went away. After that I re-enabled OPcache, and for me the

core notice: child pid exit signal Segmentation fault

and

zend_mm_heap corrupted

are apparently resolved with changes to

/etc/php.d/10-opcache.ini

I have included the settings I changed here; opcache.revalidate_freq=2 remains commented out, I did not change that value.

opcache.enable=1
opcache.enable_cli=0
opcache.fast_shutdown=0
opcache.memory_consumption=1024
opcache.interned_strings_buffer=128
opcache.max_accelerated_files=60000



I experienced this issue in local development while using Docker and PHP's built-in dev server with Craft CMS.

My solution was to use Redis for Craft's sessions.

PHP 7.4


There was a bug fixed in PHP on Nov 13, 2014:

Fixed bug #68365 (zend_mm_heap corrupted after memory overflow in zend_hash_copy).

The fix went into versions 5.4.35, 5.5.19 and 5.6.3. In my case, when I switched from Ubuntu's official trusty package (5.5.9+dfsg-1ubuntu4.14) to the 5.5.30 version packaged by Ondrej Sury, the problem went away. None of the other solutions worked for me, and I didn't want to disable opcache or suppress errors, since this really was causing segfaults (500 responses).

Ubuntu 14.04 LTS:

export LANG=C.UTF-8       # May not be required on your system
add-apt-repository ppa:ondrej/php5
apt-get update
apt-get upgrade


Many of the answers here are old. For me (PHP 7.0.10 via Ondrej Sury's PPA on Ubuntu 14.04 and 16.04) the problem appears to lie in APC. I was caching hundreds of small bits of data using apc_fetch() etc., and when invalidating a chunk of the cache I'd get the error. The workaround was to switch to filesystem-based caching.

More detail on github https://github.com/oerdnj/deb.sury.org/issues/452#issuecomment-245475283.

