[mysql] Lost connection to MySQL server during query?

Possible Duplicate:
Lost connection to MySQL server during query

I am importing some data from a large CSV file into a MySQL table. I am losing the connection to the server partway through the import.

What is going wrong?

The error code is 2013: Lost connection to MySQL server during query.

I am running these queries from an Ubuntu machine against a remote Windows server.

This question is related to mysql

The answers are:


This can also happen if someone or something kills your connection using the KILL command.
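
If you suspect that, here is a quick sketch of how to check from another session (the thread id 42 below is made up):

SHOW PROCESSLIST;   -- find the id of the connection in question
KILL 42;            -- killing it produces exactly this 2013 error on that client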


The same as @imxylz, but I had to use mycursor.execute('set GLOBAL max_allowed_packet=67108864') as I got a read-only error without using the GLOBAL parameter.

mysql.connector.__version__ 

8.0.16
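
For reference, a minimal sketch of that workaround with mysql-connector-python (credentials are placeholders, and the account needs the privilege to change GLOBAL variables):

import mysql.connector

cnx = mysql.connector.connect(user='root', password='...', host='127.0.0.1')
cur = cnx.cursor()
# the session-scoped variable is read-only, hence GLOBAL (64MB here)
cur.execute('set GLOBAL max_allowed_packet=67108864')
cur.close()
cnx.close()
# note: only connections opened after this see the new value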


1) You may have to increase the timeout on your connection.

2) You can get more information about the lost connections by starting mysqld with the --log-warnings=2 option. This logs some of the disconnect errors in the hostname.err file, which you can use for further investigation.

3) If you are trying to send data to BLOB columns, check the server's max_allowed_packet variable, which has a default value of 1MB. You may also need to increase the maximum packet size on the client end. More information on setting the packet size is given in the manual section "Packet too large". (A sketch of inspecting and raising it follows this list.)

4) You can check the following URL link.

5) You should check that your available disk space is bigger than the table you're trying to update (link).
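
A quick sketch for inspecting (and raising) the settings mentioned above directly on the server; the 64MB value is only an example, so size it to your own data:

SHOW VARIABLES LIKE 'max_allowed_packet';
SHOW VARIABLES LIKE '%timeout%';
SET GLOBAL max_allowed_packet = 67108864;   -- 64MB, needs a privileged account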


This was happening to me with MariaDB because I made a varchar(255) column a unique key. I guess that's too heavy for a unique index, as the insert was timing out.


I was getting this error with a "broken pipe" when I tried to do bulk inserts with millions of records. I ended up solving this by chunking my data into smaller batches and then running an executemany command with the MySQL cursor for each insert I needed to do. This solved the problem and didn't seem to affect performance in any noticeable way.

eg.

CHUNK_SIZE = 10000  # batch size; tune to your data

def chunks(data):
    # yield successive CHUNK_SIZE-sized slices of the data
    for i in range(0, len(data), CHUNK_SIZE):
        yield data[i:i + CHUNK_SIZE]


def bulk_import(update_list):
    # run one executemany per batch so no single statement grows too large
    for batch in chunks(update_list):
        # hypothetical table/columns: substitute your own parameterized INSERT
        cursor.executemany("INSERT INTO my_table (col1, col2) VALUES (%s, %s)", batch)
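
For completeness, a hedged usage sketch; the connection setup, table, and column names here are assumptions, not part of the original answer:

import mysql.connector

cnx = mysql.connector.connect(user='root', password='...', host='127.0.0.1', database='mydb')
cursor = cnx.cursor()                       # the module-level cursor used by bulk_import

rows = [(1, 'a'), (2, 'b'), (3, 'c')]       # millions of tuples in the real case
bulk_import(rows)
cnx.commit()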

I was running into the same problem. Because of some other issues I had tried to add a cnx.close() line to my other functions. Instead, I removed all these extraneous closes and set up my class like this:

import mysql.connector


class DBase:

    config = {
        'user': 'root',
        'password': '',
        'host': '127.0.0.1',
        'database': 'bio',
        'raise_on_warnings': True,
        'use_pure': False,
    }

    def __init__(self):
        self.cnx = mysql.connector.connect(**self.config)
        self.cur = self.cnx.cursor(buffered=True)
        print(self.cnx)

    def __enter__(self):
        # return this same instance so __exit__ closes the connection opened in __init__
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.cnx.commit()
        if self.cnx:
            self.cnx.close()

Any function that is called within this class connects, commits, and closes.
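
A hedged usage sketch (the query and table are hypothetical, just to show the with block driving __enter__ and __exit__):

with DBase() as db:
    db.cur.execute("SELECT COUNT(*) FROM some_table")   # hypothetical query
    print(db.cur.fetchone())
# leaving the block commits and closes the connection via __exit__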


I think you can use the mysql_ping() function.

This function checks whether the connection to the server is still alive. If it is not, you can reconnect and proceed with your query.
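
In mysql-connector-python the equivalent is the connection's ping() method; a minimal sketch, with placeholder credentials:

import mysql.connector

cnx = mysql.connector.connect(user='root', password='...', host='127.0.0.1', database='mydb')
# ... much later, before running the next query ...
cnx.ping(reconnect=True, attempts=3, delay=2)   # reconnects if the server dropped us
cur = cnx.cursor()
cur.execute('SELECT 1')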


In my case, I ran into this problem when sourcing an SQL dump which had placed the tables in the wrong order. The CREATE in question included a CONSTRAINT ... REFERENCES that referenced a table that had not been created yet.

I located the table in question, and moved its CREATE statement to above the offending one, and the error disappeared.

The other error I encountered relating to this faulty dump was ERROR 1005 (errno: 150, "Can't create table"), again a matter of tables being created out of order.


Make sure you close the cursor before the connection. I resolved my problem with this:

if cur and con:                        
    cur.close() 
    con.close() 

Set the 'max_allowed_packet' setting to 64M and restart your MySQL server. If that does not fix your issue, the problem may lie elsewhere.

I have a multi-threaded PHP CLI application that does simultaneous queries, and I recently noticed this issue. It now looks to me as if the MySQL server considers all connections from the same IP as a 'single' connection and therefore drops all of them whenever a single query finishes.

I wonder, though, whether there is a way to make MySQL allow, say, 100 connections from the same IP and treat each one as an individual connection.


I encountered similar problems too. In my case it was solved by getting the cursor in this way:

cursor = self.conn.cursor(buffered=True)

The MySQL docs have a whole page dedicated to this error: http://dev.mysql.com/doc/refman/5.0/en/gone-away.html

Of note are:

  • You can also get these errors if you send a query to the server that is incorrect or too large. If mysqld receives a packet that is too large or out of order, it assumes that something has gone wrong with the client and closes the connection. If you need big queries (for example, if you are working with big BLOB columns), you can increase the query limit by setting the server's max_allowed_packet variable, which has a default value of 1MB. You may also need to increase the maximum packet size on the client end. More information on setting the packet size is given in Section B.5.2.10, “Packet too large”.

  • You can get more information about the lost connections by starting mysqld with the --log-warnings=2 option. This logs some of the disconnected errors in the hostname.err file.


Very simple to solve: go to the control panel of your phpMyAdmin and click on config, then edit the .ini file you see there. Look for port 3306; if that is not the port you are using for your connection, change 3306 to the port you are using. On your login screen just put localhost for your server, plus your port if it is not the default (or, if you did not change the file name my.ini in the SQL configuration, leave it as is). Then put your username (root, or the one you created) and your password (1234, or the one you assigned). If you are connecting locally, do not check the URL option. Then type the name of the database you want to edit. Note: once you are connected you will see the list of databases you have on your server, or on the server you are connecting to.
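
For reference, the relevant entry in my.ini looks roughly like this (3306 is shown only because it is the usual default; your installation may use another port):

[mysqld]
# the port the server listens on; the client must connect to the same one
port=3306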


Try the following 2 things...

1) Add this to your my.cnf / my.ini in the [mysqld] section

max_allowed_packet=32M

(you might have to set this value higher based on your existing database).

2) If the import still does not work, try it like this as well...

mysql -u <user> --password=<password> <database_name> < file_to_import

This happened to me when one of my CONSTRAINT names was the same as another CONSTRAINT name.

Changing my CONSTRAINT name solved this.
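
A hedged sketch of the fix; the tables and constraint names below are made up, the point is simply that each constraint gets its own unique name:

ALTER TABLE orders
    ADD CONSTRAINT fk_orders_customer FOREIGN KEY (customer_id) REFERENCES customers (id);
ALTER TABLE invoices
    ADD CONSTRAINT fk_invoices_customer FOREIGN KEY (customer_id) REFERENCES customers (id);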


In my case, the reason for the

ERROR 2013 (HY000): Lost connection to MySQL server during query

error was that parts of my table were corrupted. I was also not able to mysqldump my table because some rows broke it. The error was not related to any memory issues etc. as mentioned above.

The nice thing was that MySQL returned the row number at which it first failed. It was something like

mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table mytable at row: 12723

The solution was to copy the data into a new table. In my case I lost 10 rows of data because I had to skip those corrupted rows. First I created a "tmp" table with the schema of the old one. SHOW CREATE TABLE is your friend here. E.g.

SHOW CREATE TABLE mydatabase.mytable;

With that I created the new table. Let's call it mytabletmp. Then copy the rows you are able to copy, e.g.

insert into mytabletmp select * from mytable where id < 12723;
insert into mytabletmp select * from mytable where id > 12733;

After that, drop the old table and rename the tmp table to the old table name.
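
That last step as a sketch, keeping the names from the example above:

DROP TABLE mytable;
RENAME TABLE mytabletmp TO mytable;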

There is also some nice information from Peter regarding this problem.


There are three ways to enlarge the max_allowed_packet of the MySQL server:

  1. Change max_allowed_packet=64M in the file /etc/mysql/my.cnf on the MySQL server machine and restart the server.
  2. Execute this SQL on the MySQL server: set global max_allowed_packet=67108864;
  3. Execute the SQL from Python after connecting to MySQL (as noted above, recent servers require the GLOBAL scope):
connection.execute('set global max_allowed_packet=67108864')

Usually that happens when you exhaust one resource for the DB session, such as memory, and MySQL closes the connection.

Can you break the CSV file into smaller ones and process them, or commit every 100 rows? The idea is that the transaction you're running shouldn't try to insert a large amount of data at once (see the sketch below).

I forgot to add: this error is related to the configuration property max_allowed_packet, but I can't remember the details of what to change.
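
A minimal sketch of the commit-every-N-rows idea, assuming mysql-connector-python and a hypothetical two-column table (file name, table, columns, and batch size are all made up):

import csv
import mysql.connector

cnx = mysql.connector.connect(user='root', password='...', host='127.0.0.1', database='mydb')
cur = cnx.cursor()

with open('big_file.csv', newline='') as f:
    reader = csv.reader(f)
    batch = []
    for row in reader:
        batch.append((row[0], row[1]))
        if len(batch) >= 100:               # commit every 100 rows
            cur.executemany("INSERT INTO my_table (col1, col2) VALUES (%s, %s)", batch)
            cnx.commit()
            batch = []
    if batch:                               # whatever is left over
        cur.executemany("INSERT INTO my_table (col1, col2) VALUES (%s, %s)", batch)
        cnx.commit()

cur.close()
cnx.close()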


Multiprocessing and the Django DB don't play well together.

I ended up closing the Django DB connection as the first thing in the new process, so that the child holds no reference to the connection used by the parent.

from multiprocessing import Pool


def _etl_(x):
    # drop the connection inherited from the parent; Django opens a fresh one on demand
    from django.db import connection
    connection.close()
    print(x)


if __name__ == '__main__':
    multi_core_arg = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    n_cpu = 4
    pool = Pool(n_cpu)
    pool.map(_etl_, multi_core_arg)
    pool.close()
    pool.join()

OR

Process.start() calls a function that starts with the same connection.close() call.

Some others suggest using

from multiprocessing.dummy import Pool as ThreadPool

It solved my (2013, Lost connection) problem, but threads are subject to the GIL, which is only released while waiting on I/O, so this mainly helps I/O-bound work.

Comparatively, multiprocessing spawns a group of worker processes that have to communicate with each other, which may be slower.

I recommend you time it. A side tip is to use joblib, which is backed by the scikit-learn project. Some performance results show it outperforming the native Pool(), although it leaves the responsibility on the coder to verify the true run-time cost.


You need to increase the timeout on your connection. If you can't or don't want to do that for some reason, you could try calling:

data = db.query(sql).store_result()

This will fetch all the results immediately, so your connection won't time out halfway through iterating over them.


You can also encounter this error with applications that fork child processes, all of which try to use the same connection to the MySQL server. This can be avoided by using a separate connection for each child process.

Forks might hit you. Beware, though that was not the case here.
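
A hedged sketch of the per-child-connection approach with multiprocessing and mysql-connector-python; credentials and the query are placeholders:

import mysql.connector
from multiprocessing import Process


def worker(n):
    # each child opens (and closes) its own connection instead of reusing the parent's
    cnx = mysql.connector.connect(user='root', password='...', host='127.0.0.1', database='mydb')
    cur = cnx.cursor()
    cur.execute('SELECT %s', (n,))
    print(cur.fetchone())
    cur.close()
    cnx.close()


if __name__ == '__main__':
    procs = [Process(target=worker, args=(i,)) for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()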


You might like to read http://dev.mysql.com/doc/refman/5.0/en/gone-away.html, which explains the reasons and fixes for "lost connection during query" scenarios very well.

In your case, it might be because of the max allowed packet size, as pointed out by Augusto. Or, if you've verified that isn't the case, it might be the connection wait timeout setting that is causing the client to lose the connection. However, I don't think the latter applies here, because it's a CSV file, not a file full of queries.


This happened to me when I tried to update a table whose size on disk was bigger than the available disk space. The solution for me was simply to increase the available disk space.