
How can I deal with very slow INSERT statements?

  • SOLVED

I have a client who has a very old site built with Symfony 1.0. They are unable to upgrade (big company, very small tech team, spread thin, limited resources, etc).

This is a MySQL database. I'm guessing it is MySQL 5.0 or newer, but there is a risk it is in the 4.x range.

They have a big table with millions of rows. To help with SELECT statements, they have 2 indexes on this table. However, they do lots of INSERT statements. The ratio of SELECT statements to INSERT statements is maybe 5 to 1.

The problem is, the INSERTs are very slow. They have turned on the slow query log, and they've got INSERTs that take 55 seconds to run.

What can they do to speed this up?

UPDATE:

My client sends me this:

"The thing is, cardinality is very high in those indexes, as the sess_id value is unique as a primary key. So the total number of sess_id values is essentially the same as the number of records in the table:

select count(1) from ole_sessions = 395,747
select count(distinct(sess_id)) from ole_sessions = 395,788"

UPDATE:

I am uploading an image that shows the output of the SHOW CREATE TABLE query.

Answers (6)

2011-11-16

Teemu Rainio answers:

I also have experience with a Symfony 1.0 installation (Propel) and MySQL 5.0. There are tables with millions of rows, but the INSERT statements do not take anywhere near as long as in your case.

I would first check that the indexes are defined properly and then make sure everything fits in RAM (check the innotop tool). The key configuration option for InnoDB is innodb_buffer_pool_size.
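For a quick sanity check, you can compare the current buffer pool size against the table's footprint from the MySQL console (a sketch; 'ole_sessions' is the table named in the question's update):

-- MySQL 5.0's default innodb_buffer_pool_size is only 8 MB
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- Data_length + Index_length is roughly the table's footprint; if it is
-- much larger than the buffer pool, raise the setting in my.cnf
SHOW TABLE STATUS LIKE 'ole_sessions';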

MySQL comes with some good starting-point configurations, such as my-innodb-heavy-4G.cnf.
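For context, those sample files mostly raise the InnoDB memory and log settings. The excerpt below is representative of the kind of values such a file sets, not a verbatim copy; check the file shipped with your MySQL version:

[mysqld]
innodb_buffer_pool_size = 2G
innodb_log_file_size = 256M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1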


Lawrence Krubner comments:

What do configurations such as my-innodb-heavy-4G.cnf do?

2011-11-16

Yura Rodchyn answers:

If the PHP client must return quickly, you can try the INSERT DELAYED statement (sketched below), but it does not solve the underlying problem. Or use some type of table splitting ("sharding"). There can be lots of solutions, but it depends on your system's structure. You can also try to tune the MySQL server.
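For reference, INSERT DELAYED hands the row to the server and returns to the client immediately, but it only works for MyISAM, MEMORY, and ARCHIVE tables, not InnoDB. A sketch, borrowing the column names that appear later in this thread:

INSERT DELAYED INTO ole_sessions (sess_id, sess_data, sess_time)
VALUES ('0123456789abcdef', 'session payload', 1321500000);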


Lawrence Krubner comments:

What kind of sharding do you mean? What might help with INSERTs?


Yura Rodchyn comments:

OK, say we have one table. If it is big, you can split it into several tables: write into a table that holds only fresh data, and sync old data to an archive table using cron, for example (see the sketch below). But this only works if you don't need all of the table's data all the time.
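A minimal sketch of that cron job, assuming an archive table already exists and using the sess_time column mentioned elsewhere in the thread:

-- run periodically from cron: move old rows out of the hot table
INSERT INTO ole_sessions_archive
  SELECT * FROM ole_sessions
  WHERE sess_time < UNIX_TIMESTAMP() - 86400;

DELETE FROM ole_sessions
WHERE sess_time < UNIX_TIMESTAMP() - 86400;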

The next solution is to disable index updates when you do bulk inserts:

ALTER TABLE table DISABLE KEYS;
-- ... your bulk INSERT statements here ...
ALTER TABLE table ENABLE KEYS;

This defers rebuilding the indexes until the insert block is finished. (Note that DISABLE KEYS only affects non-unique indexes on MyISAM tables.)


Also, make sure you are using InnoDB tables - MyISAM is not optimized for writes.

And if it is InnoDB, see http://dev.mysql.com/doc/refman/5.0/en/innodb-tuning.html


Yura Rodchyn comments:

I see you posted a response from the client that says something about sess_id. Make sure you use char(32) (or however long the value is), not varchar. This can save you some CPU ticks :)
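For example, if sess_id always holds a 32-character session hash, the change would look like this (a sketch; verify the real column length first):

ALTER TABLE ole_sessions MODIFY sess_id CHAR(32) NOT NULL;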


Yura Rodchyn comments:

Also, you can use an unsigned int instead of a double if the values fit within ~4294967295 (see the sketch below).
And maybe there is a way to avoid inserting so many rows in the first place (just an idea).
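A sketch, using a hypothetical numeric column currently stored as a DOUBLE:

-- INT UNSIGNED counts up to 4294967295 in half the width of a DOUBLE
-- (some_double_column is a placeholder, not a real column from the thread)
ALTER TABLE ole_sessions MODIFY some_double_column INT UNSIGNED NOT NULL;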

To say anything more about optimization, we would need more info about the project structure.

2011-11-17

Milena Dimitrova answers:

The attached SHOW CREATE TABLE output shows a table 'cao_session' where sess_id is the PRIMARY KEY, which means a unique identifier. However, the table you mentioned in your previous update is not called 'cao_session' but 'ole_sessions', and there it seems sess_id is NOT a primary key, because it appears not to be unique (395,747 < 395,788). So which of the two tables suffers from the slow INSERT problem? Or both?

The table 'cao_session' uses the InnoDB engine, which is good, but it means that INSERT DELAYED is not an option.

If you are inserting multiple rows from the same client at the same time, it helps to use INSERT statements with multiple VALUES lists, inserting several rows at a time.
This is considerably faster than using separate single-row INSERT statements. Additionally, in that case, you can tune the MySQL bulk_insert_buffer_size variable to make data insertion even faster (though note that this variable only affects MyISAM tables).

However, to implement this in the context of Symfony 1.0, you would have to rewrite the inserts as a raw query and bypass Propel.

Here is a sample action:

public function executeMultiSave()
{
    $data = array();

    // Collect the rows to insert; in practice a loop would build this array.
    // Note: real sess_data values must be escaped before being inlined in SQL.
    $data[0] = "('".uniqid()."', 'lorem ipsum dolor 1', ".time().")";
    $data[1] = "('".uniqid()."', 'lorem ipsum dolor 2', ".time().")";

    $connection = Propel::getConnection();

    /* optional - disable keys (only affects non-unique MyISAM indexes)
    $statement = $connection->prepareStatement('ALTER TABLE cao_session DISABLE KEYS');
    $statement->executeUpdate(); */

    // One multi-row INSERT instead of many single-row statements
    $query = 'INSERT INTO cao_session (sess_id, sess_data, sess_time) VALUES '.implode(', ', $data);
    $statement = $connection->prepareStatement($query);

    try {
        // executeUpdate() is the Creole call for statements that return no result set
        $statement->executeUpdate();
    } catch (Exception $e) {
        $this->setFlash('error', $e->getMessage());
    }

    /* optional - re-enable keys
    $statement = $connection->prepareStatement('ALTER TABLE cao_session ENABLE KEYS');
    $statement->executeUpdate(); */
}



Good luck!

2011-11-16

Luis Cordova answers:

Wow, can they move to faster machines? What does "unable to upgrade" mean?

I have shared hosting for $4 a month, PHP 5.3, etc.; it is certainly very constrained. This is more a database-system question than a Symfony issue, but it is interesting.

Perhaps, if more information about the system is known, some redesign of the tables can be worked out.


Lawrence Krubner comments:

"Unable to upgrade"

I meant they need to stay with Symfony 1.0.

The hardware is in the Amazon cloud. Sorry I did not mention that before. They can scale up a lot in terms of machines, but not so much with INSERTs.

2011-11-16

Baruch Peña answers:

They definitely need deeper changes to optimize that.

But meanwhile, I would suggest you check whether any triggers fire when rows are inserted into the table; you may be able to optimize the trigger. (See the sketch below.)
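A quick way to check (MySQL 5.0.10 or later; the table name is taken from the question's update):

SHOW TRIGGERS LIKE 'ole_sessions';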

2011-11-17

sn answers:

Is it possible to post the my.cnf? Which engine do these tables use (InnoDB, MyISAM, ...)?