Tuesday, November 28, 2006

How to keep db connections alive in FDS/Hibernate

Struggled with a good title on that one. Anyway... here's the deal. I noticed a while back that if I left my FDS server running overnight and tried to open the sample CRM application in the morning, well, it would fail. Fail so badly, in fact, that I'd have to restart or redeploy to get it working again. I'd get an error in the FDS log that looked something like this:
SEVERE: Communications link failure due to underlying exception:
** BEGIN NESTED EXCEPTION **
java.net.SocketException
MESSAGE: Software caused connection abort: socket write error
Communications link failure due to underlying exception:
...

So yesterday I decided to see if I could fix this. It turned out to be a very, very long day. Here are the lessons I learned.

Lesson 1: By default, MySQL terminates db connections after 8 hours of inactivity
Here are the default MySQL settings:

#The number of seconds the server waits for activity on a connection before closing it
wait_timeout=28800
#The number of seconds the server waits for activity on an interactive connection before closing it.
interactive_timeout=28800
For the remainder of my testing I changed these values to 300 in the my.ini file so that my connections would close after only 5 minutes of inactivity. A much shorter timeframe for testing! So the initial problem is that once MySQL closes the connection, Hibernate does not natively detect this and reopen it. If you connect directly over plain JDBC you may run into the same issue. The trick is to use some kind of connection pool manager to keep the connections alive... the question remains: how can we achieve this?
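You can see the problem with nothing but plain JDBC. Here's a minimal sketch that reproduces it; it assumes the 300-second wait_timeout from above plus the crm database, so adjust the URL and credentials for your own setup:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class StaleConnectionDemo {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver");
        Connection con = DriverManager.getConnection(
            "jdbc:mysql://127.0.0.1:3306/crm", "username", "password");

        // Works fine right away
        Statement stmt = con.createStatement();
        stmt.executeQuery("SELECT 1");
        stmt.close();

        // Sleep past the 5 minute wait_timeout we set in my.ini
        Thread.sleep(6 * 60 * 1000);

        // MySQL has closed the connection by now, so this throws a
        // "Communications link failure" just like the one in the FDS log
        stmt = con.createStatement();
        stmt.executeQuery("SELECT 1");
    }
}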

Lesson 2: Using App Server JNDI datasources gives you connection pooling
In JRun or JBoss, it is possible to create JNDI datasources, which are then managed by the application server. I created a JNDI datasource for the crm database that comes with FDS, which I had recreated in MySQL a while back. Creating this datasource involved adding the following configuration to my jrun-resources.xml in the \fds2\JRun4\servers\default\SERVER-INF folder:
<data-source>
  <dbname>crm</dbname>
  <driver>com.mysql.jdbc.Driver</driver>
  <url>jdbc:mysql://127.0.0.1:3306/crm</url>
  <username>username</username>
  <password>password</password>
  <encrypted>false</encrypted>
  <encryption-class>jrun.security.JRunCrypterForTwofish</encryption-class>
  <native-results>true</native-results>
  <remove-on-exceptions>true</remove-on-exceptions>
  <pool-statements>true</pool-statements>
  <initial-connections>1</initial-connections>
  <connection-timeout>1200</connection-timeout>
  <pool-retry>30</pool-retry>
  <transaction-timeout>20</transaction-timeout>
  <cache-enabled>false</cache-enabled>
  <cache-size>5</cache-size>
  <cache-refresh-interval>30</cache-refresh-interval>
  <jndi-name>crm</jndi-name>
  <poolname>Pool</poolname>
  <minimum-size>0</minimum-size>
  <maximum-size>2147483647</maximum-size>
  <user-timeout>20</user-timeout>
  <skimmer-frequency>420</skimmer-frequency>
  <shrink-by>5</shrink-by>
  <maximum-soft>true</maximum-soft>
  <debugging>false</debugging>
  <disable-pooling>false</disable-pooling>
  <description />
  <isolation-level>READ_COMMITTED</isolation-level>
</data-source>
The key thing to note here is the jndi-name I've given this datasource. In JBoss I added a file called mysql-ds.xml to my deploy folder, which looks like this:
<datasources>
  <local-tx-datasource>
    <jndi-name>crm</jndi-name>
    <connection-url>jdbc:mysql://localhost:3306/crm</connection-url>
    <driver-class>com.mysql.jdbc.Driver</driver-class>
    <user-name>user</user-name>
    <password>password</password>
    <min-pool-size>5</min-pool-size>
    <max-pool-size>20</max-pool-size>
    <idle-timeout-minutes>4</idle-timeout-minutes>
  </local-tx-datasource>
</datasources>
I've kept the JNDI names the same in both JRun and JBoss so that it's easy for me to deploy my apps from one to the other. I ran some tests after I did this. Hibernate, still using its own connection, would still fail permanently after the connection timed out. Yet the CRM code that was configured to use the JNDI datasource directly (ConnectionHelper.java):
javax.naming.Context context = new javax.naming.InitialContext();
return ((DataSource) context.lookup("java:crm")).getConnection();
would fail once and then reconnect successfully. It looked like I was on the right track.
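By the way, the reason the app server pool recovers is that close() on a pooled connection just returns it to the pool, and the pool can discard dead connections and hand out fresh ones. A minimal usage sketch (ConnectionHelper's method name here is my own stand-in for the lookup above):

import java.sql.Connection;

public class PoolUsageExample {
    public void doWork() throws Exception {
        // getConnection() is a hypothetical wrapper around the JNDI lookup
        Connection con = ConnectionHelper.getConnection();
        try {
            // ... run queries against the crm database ...
        } finally {
            // returns the connection to the app server's pool
            // rather than physically closing it
            con.close();
        }
    }
}

With the datasources behaving, it was time to reconfigure Hibernate.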

Lesson 3: Hibernate's built-in connection pooling is not recommended
I had to configure Hibernate to take advantage of my JNDI datasources instead. Here's what hibernate.cfg.xml looks like for both JRun and JBoss:
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
  "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
  "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
  <session-factory>
    <property name="connection.datasource">java:crm</property>
    <property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
    [...]
  </session-factory>
</hibernate-configuration>
I saved my changes, restarted the server, ran my app, waited for the timeout, and ran the app again. Even though it hiccupped on the first try, the connection was reestablished on the second. But why this initial hiccup? Shouldn't there be some way to configure things so that even this initial error can be avoided? I still had much work to do!
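Note that nothing changes on the Java side when you point Hibernate at a JNDI datasource; it picks up the connection.datasource property when the configuration is loaded. A minimal sketch using the common HibernateUtil pattern (the class name is mine, not from the CRM sample):

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateUtil {
    // Reads hibernate.cfg.xml from the classpath; because of the
    // connection.datasource property, Hibernate borrows connections from
    // the java:crm pool instead of opening its own
    public static final SessionFactory sessionFactory =
        new Configuration().configure().buildSessionFactory();
}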

Lesson 4: Make sure you have the latest MySQL JDBC drivers installed
It turns out that JBoss JNDI datasources can be configured to actively ping db connections and keep them alive. Yet I kept getting errors after configuring my datasource to do this 'pinging', and it turned out I was running older JDBC drivers. I upgraded to the latest ones and the errors went away. Here's the JBoss datasource file with the ping features enabled:
<datasources>
  <local-tx-datasource>
    <jndi-name>crm</jndi-name>
    <connection-url>jdbc:mysql://localhost:3306/crm</connection-url>
    <driver-class>com.mysql.jdbc.Driver</driver-class>
    <user-name>user</user-name>
    <password>password</password>
    <min-pool-size>5</min-pool-size>
    <max-pool-size>20</max-pool-size>
    <idle-timeout-minutes>4</idle-timeout-minutes>
    <exception-sorter-class-name>
      com.mysql.jdbc.integration.jboss.ExtendedMysqlExceptionSorter
    </exception-sorter-class-name>
    <valid-connection-checker-class-name>
      com.mysql.jdbc.integration.jboss.MysqlValidConnectionChecker
    </valid-connection-checker-class-name>
  </local-tx-datasource>
</datasources>
Since my production environment is JBoss, and it had already been a long day, I decided not to worry about figuring out how to do the same in JRun. If anyone out there knows this, please leave a comment!
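For what it's worth, a valid-connection-checker boils down to testing a pooled connection with a cheap query before handing it to the application; I believe the MySQL implementation uses the driver's internal ping instead. A rough sketch of the idea:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class NaiveConnectionChecker {
    // Returns true if the connection still works; a pool would discard
    // and replace any connection that fails this check
    public static boolean isValid(Connection con) {
        try {
            Statement stmt = con.createStatement();
            try {
                stmt.executeQuery("SELECT 1");
                return true;
            } finally {
                stmt.close();
            }
        } catch (SQLException e) {
            return false;
        }
    }
}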

Lesson 5: Hibernate must be told that its connection is now being managed
At this point I was able to retrieve data without error, despite connection timeouts. When I went to update a record, however, I got a cool new error:
java.sql.SQLException: You cannot commit during a managed transaction!
So, I did some digging and changed my hibernate.cfg.xml to this:
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
  "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
  "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
  <session-factory>
    <property name="connection.datasource">java:crm</property>
    <property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
    <property name="transaction.factory_class">
      org.hibernate.transaction.JTATransactionFactory
    </property>
    <property name="transaction.manager_lookup_class">
      org.hibernate.transaction.JBossTransactionManagerLookup
    </property>
    [...]
  </session-factory>
</hibernate-configuration>
As you can see, this is specific to JBoss. There are several other TransactionManagerLookup classes available; see the Hibernate documentation for the full list.
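With these settings, commits go through the JBoss transaction manager instead of straight to the JDBC connection, which is exactly what that error was complaining about. A minimal sketch of an update under this setup (Contact is a hypothetical mapped entity, and HibernateUtil is the sketch from Lesson 3):

import org.hibernate.Session;
import org.hibernate.Transaction;

public class ContactDao {
    // Commits are delegated to the JBoss transaction manager; never call
    // commit() on the underlying JDBC connection yourself
    public void updatePhone(Long contactId, String phone) {
        Session session = HibernateUtil.sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            // Contact is a hypothetical mapped entity
            Contact contact = (Contact) session.get(Contact.class, contactId);
            contact.setPhone(phone);
            // no explicit update() needed; dirty checking flushes the change
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }
}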

Summary
I spent a lot of time on this yesterday, so this post might seem a bit sparse and a bit rushed. Basically, all the configuration files are here for you to use and customize. If you have questions, please ask. Again, this is blogged here for my benefit as well as yours. I would not want to have to redo this from scratch 6 months from now!

8 comments:

Karennhs said...

I'm still having the problem where MySQL terminates db connections after 8 hours of inactivity. I don't know how to resolve it. I'm using Tomcat 5.5, Hibernate, and MySQL 5.0 with the mysql-connector-java-3.1.12-bin jar. I would like to know how to configure hibernate.cfg.xml to avoid this problem. Please, if you can help me...

Vic Rubba said...

I would suggest you upgrade your MySQL connector to the latest version! I'm running mysql-connector-java-5.0.4-bin.jar.

Karennhs said...

It wasn't enough. I've changed the jar to the latest version but I'm still having the same problem! :(
In my hibernate.cfg.xml I have these properties:

hibernate.connection.autoReconnect = true
hibernate.connection.autoReconnectForPool=true
hibernate.connection.is-connection-validation-required=true


Thanks!

Karennhs said...

Do you know of another solution?

Vic Rubba said...

To be honest, I only ever got this working with JBoss, after I figured out how to get it to keep the connections alive.
My guess would be to find the equivalent of this for Tomcat. Maybe this link will help:
http://dev.mysql.com/doc/refman/5.0/en/connector-j-usagenotes-j2ee.html#connector-j-usagenotes-tomcat

G#D#R said...

Connections are not getting released after 15 minutes, the idle timeout I set (idle-timeout = 15).

The connection just hangs, on both Teradata and JBoss.

Can you let me know the checklist?

FYI... I'm using JDBC.

Anonymous said...

Hi Vic,

This is a great post. Neither the MySQL documentation nor the Hibernate documentation covers this.

I have moved from Tomcat to JBoss. I feel the additional startup time and memory usage is acceptable, given the stability and transaction management that I get with JBoss.

I followed your steps for directing Hibernate to use a datasource managed by the server, and the connection termination problem went away!

thank you

Anonymous said...

Big thanks! This post helped me a lot in understanding my problem. But in my case, c3p0 connection pooling was no longer enabled after upgrading to Hibernate 3.

You have to set this option in your hibernate.cfg.xml to enable c3p0 connection pooling (just in case someone has the same problem too):

connection.provider_class = org.hibernate.connection.C3P0ConnectionProvider

Thanks again!