26 October 2009

PITA 4: MySQL Error Messages

Today, while connecting to my DB server, I got a strange error message:
Got error 28 from storage engine
I tried to check the .err file located in /var/lib/mysql (Xubuntu), but couldn't open it; the file size was more than 50 GB.

That was the cause of the above error (no free space left on the disk to write!). I stopped the MySQL server and deleted the .err file. After starting the server again, I checked the file and found this error repeated millions of times:
091026 ***** InnoDB: Error: page ***** log sequence number * *********
InnoDB: is in the future! Current system log sequence number 0 330424282.
InnoDB: Your database may be corrupt or you may have copied the InnoDB
InnoDB: tablespace but not the InnoDB log files. See
InnoDB: http://dev.mysql.com/doc/mysql/en/backing-up.html for more information.
Since I had copied the complete data folder from one computer to another, the InnoDB log files were corrupted, and I could not restore them.

The only option I had to avoid data loss or corruption was to change the engine type to MyISAM, which doesn't look for log files and can be repaired with less effort.
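A minimal sketch of that engine change, assuming a hypothetical table named news (substitute your own table names, and run it once per InnoDB table):

```sql
-- Convert one table from InnoDB to MyISAM (table name is a placeholder)
ALTER TABLE news ENGINE = MyISAM;

-- Verify the engine actually changed
SHOW TABLE STATUS LIKE 'news';
```

Note that MyISAM gives up InnoDB's transactions and foreign keys, so this is a trade-off, not a free fix.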

04 October 2009

Lucene and MySQL (Correction!)

NOTE: This is just a correction to the Lucene and MySQL post; please read that before running the following code.

Here is the corrected GetSqlData() method:
public void GetSqlData() {
        String IndexStoreDir = "F:/NewsIndex";
        StandardAnalyzer analyzer = new StandardAnalyzer();
        Connection con = null;
        Statement stmt = null;
        ResultSet rs = null;
        IndexWriter IndWri = null;
        try {
            String connUrl = "jdbc:mysql://localhost:3306/SQLDb";
            // put your mysql database name in place of SQLDb
            con = DriverManager.getConnection(connUrl, "root", "SQLPassword");
            // put your mysql password in place of SQLPassword
            stmt = con.createStatement();
            stmt.executeQuery("select * from table");
            IndWri = new IndexWriter(IndexStoreDir, analyzer, false, IndexWriter.MaxFieldLength.UNLIMITED);
            int numberOfUpdates = 0;
            rs = stmt.getResultSet();

            while (rs.next()) {
                // one Document per record, with all fields added to it
                Document docu = new Document();
                docu.add(new Field("title", rs.getString("title"), Field.Store.YES, Field.Index.ANALYZED));
                docu.add(new Field("content", rs.getString("content"), Field.Store.YES, Field.Index.ANALYZED));
                IndWri.addDocument(docu);
                numberOfUpdates++;
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        } finally {
            if (rs != null) { try { rs.close(); } catch (Exception ex2) {} }
            if (stmt != null) { try { stmt.close(); } catch (Exception ex1) {} }
            if (con != null) { try { con.close(); } catch (Exception e) {} }
            if (IndWri != null) { try { IndWri.close(); } catch (Exception e) {} }
        }
}


There is no need for the addDoc() method. In the previous version, the index ended up with (number of fields) * (number of records) documents, because addDoc() was called for every field and created a new Document each time. Adding all fields to a single Document per record, and calling addDocument() once per record, fixes that.

PITA 3: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

I was trying to update around 70k records in a MySQL table through a loop, and kept getting the following exception after 16k+ records:

com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
Last packet sent to the server was 0 ms ago.

I tried everything I could find on the web, but nothing worked. A few suggestions I found are:

1. Replace localhost with 127.0.0.1 in the MySQL connection string... In some cases this works because Windows Defender removes the localhost entry from the C:\Windows\system32\drivers\etc\hosts file, so MySQL can't resolve the host and throws that exception.

2. Modify the registry so that connections use TCP ports greater than 5000... This also didn't work for me; you can see the open connections by typing the command netstat -b.
Details about this are given here: http://support.microsoft.com/kb/196271

3. Add wait_timeout = X to my.ini (or my.cnf in a *nix env.)... This helps if you have problems with MySQL itself; to check, look at the MySQL error log located in the MySQL data directory. If it shows connection-related errors, setting wait_timeout may help, but in my case there were no errors on the MySQL side.
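For suggestion 3, the setting belongs under the [mysqld] section of my.ini (or my.cnf); the value below is MySQL's documented default of 28800 seconds and is only a placeholder:

```ini
[mysqld]
# seconds of inactivity before the server closes a non-interactive connection
wait_timeout = 28800
```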

How did I resolve it?... Instead of processing all the records in one execution, I processed only 1000 records at a time.

However, the exact reason is still unknown.
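The batching workaround can be sketched as follows. The connection URL, credentials, table name, and column are placeholders carried over from the earlier example, and the sketch assumes the records can be sliced by a numeric id; only the batch arithmetic is the point.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchedUpdate {

    // Number of batches needed to cover `total` records, `batchSize` at a time.
    static int batchCount(int total, int batchSize) {
        return (total + batchSize - 1) / batchSize;
    }

    // Update records in slices of batchSize, opening a fresh connection for
    // each slice so no single run holds the connection long enough to trip
    // the communications link failure. Table/column names are hypothetical.
    static void updateInBatches(int total, int batchSize) throws Exception {
        for (int batch = 0; batch < batchCount(total, batchSize); batch++) {
            Connection con = null;
            PreparedStatement ps = null;
            try {
                con = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/SQLDb", "root", "SQLPassword");
                ps = con.prepareStatement(
                        "UPDATE news SET processed = 1 WHERE id >= ? AND id < ?");
                ps.setInt(1, batch * batchSize);
                ps.setInt(2, (batch + 1) * batchSize);
                ps.executeUpdate();
            } finally {
                // close resources after every batch instead of at the very end
                if (ps != null) { try { ps.close(); } catch (Exception e) {} }
                if (con != null) { try { con.close(); } catch (Exception e) {} }
            }
        }
    }

    public static void main(String[] args) {
        // 70k records in slices of 1000 gives 70 batches
        System.out.println(batchCount(70000, 1000));
    }
}
```

With 70k records and a batch size of 1000, the loop above runs 70 short-lived connections instead of one long one.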