We have a table with 9.3 million structures in MySQL, with Instant JChem as the frontend.
Server version: 5.5.25a MySQL Community Server (GPL)
IJC version 5.11.1 on CentOS 6.1 x86_64
Substructure searches from Instant JChem take a long time, longer than expected, and eventually time out.
Also, judging by the network usage on the client, it seems that if the search contains one or more ORs, the entire dataset is transferred to the client once for each branch of the query.
And then it folds up and goes on vacation:
INFO [com.im.commons.nbcommons.ErrorNotifier]: Error while Running query
java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction
Is there something I can do to
a) increase the timeout setting, so that the query can finish, or
b) speed up the query itself, perhaps by tuning the database?
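For (a), the MySQL-side knob that matches the error message is most likely `innodb_lock_wait_timeout` (default 50 s), which in 5.5 can be changed dynamically without a restart. A sketch, with 600 s as an arbitrary example value; note that IJC may also enforce its own client-side timeout, which would have to be raised separately in its settings:

```sql
-- Raise the InnoDB lock wait timeout (in seconds) for all new connections.
-- 600 is just an example; pick a value longer than your slowest search.
SET GLOBAL innodb_lock_wait_timeout = 600;

-- To persist the setting across server restarts, put it in my.cnf instead:
-- [mysqld]
-- innodb_lock_wait_timeout = 600
```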
The connection pool settings are at their defaults.
Using a cursor: no measurable effect.
MySQL tuning: the buffer pool is maxed out on the server, though the database itself (160 GB) is considerably larger than the pool (6 GB).
Since this is a structure search, the usual tips on indexes do not seem to apply.
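Since the data (160 GB) far exceeds the pool (6 GB), it may be worth confirming that the searches are actually disk-bound. The standard InnoDB status counters give a rough idea, a sketch:

```sql
-- Logical read requests vs. reads that missed the buffer pool and hit disk.
-- A high Innodb_buffer_pool_reads relative to read_requests suggests the
-- workload is disk-bound and a larger pool (or more RAM) would help.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
```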
Let me give some figures:
Searching a table of 370K structures -> 43 s. The query log shows that the chunks of data read from the database take about 20 ms per 2K rows.
Searching a table of 6,550K structures -> 900 s = 15 minutes. The query log shows about 115 ms per 2K rows, i.e. roughly 6 minutes of query time. Significantly slower per row!
Searching a table of 9,300K structures -> the query log shows the first 900 chunks take about 240 ms per 2K rows, which extrapolates to roughly 18 minutes of query time. The whole search takes about 40 minutes.
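To sanity-check those figures, a quick back-of-the-envelope calculation, assuming 2,000 rows per chunk as reported by the query log:

```python
# Back-of-the-envelope check of the per-chunk timings above.
# Assumes each chunk holds 2,000 rows, as the query log reports.

def query_time_seconds(total_rows, ms_per_chunk, rows_per_chunk=2000):
    """Estimated total database read time for a full pass over the table."""
    chunks = total_rows / rows_per_chunk
    return chunks * ms_per_chunk / 1000.0

print(query_time_seconds(370_000, 20))     # ~3.7 s   (of 43 s total)
print(query_time_seconds(6_550_000, 115))  # ~376.6 s (~6 min, of 900 s total)
print(query_time_seconds(9_300_000, 240))  # ~1116 s  (~18.6 min, of ~40 min total)
```

In every case the database read time accounts for well under half of the total wall time, which is consistent with the observation below that optimizing the select alone would win back less than half of the search time.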
Though it seems odd that a simple select on a larger table takes longer per chunk, the gain from optimizing it (if that is even possible) would be less than half of the time spent in the search.