
osTicket attachments migration for huge databases

Recently I had some issues during the migration of files from the database to the filesystem for an instance of osTicket 1.12.x, but I think the same applies to the whole 1.10–1.14 range.

The instance had more than 60,000 files (40GB of data) pushed as attachments into the database, and the database was growing by 1GB/week, causing problems for the nightly backup procedure on a DB Cloud. Simply put, the procedure stopped working when the database reached 20GB, and the provider did not offer any alternative solution. So it was mandatory to migrate to the filesystem to keep the database lean and easy to back up.
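If you want to confirm which table is actually eating the space before migrating, a query against information_schema should show it. A minimal sketch (adjust the credentials and replace DB_NAME with your schema name):

mysql -u root -p -e "SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
  FROM information_schema.tables
 WHERE table_schema = 'DB_NAME'
 ORDER BY size_gb DESC;"

In a case like this one, the %_file_chunk table should sit at the top by a wide margin.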

After I had enabled the plugin and double-checked the procedure, I launched this command in the root of osTicket:

php manage.php file migrate --verbose --backend D --to F --limit=10000 (*)

(*) you can use any limit you want; in my case I just launched the same command 6 times
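If you prefer not to re-run the command by hand, a trivial shell loop does the same batching. A sketch (6 iterations of 10,000 matched my ~60,000 files; size yours accordingly):

# migrate files from the database backend (D) to the filesystem backend (F)
# in batches of 10,000, repeating enough times to cover all attachments
for i in 1 2 3 4 5 6; do
    php manage.php file migrate --verbose --backend D --to F --limit=10000
done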

I added the --verbose option, which lets you see the flow of files during the “download”. And here I hit the first issue, with 5 attachments, because I received an IOError like this:

“IOError: /var/www/osticket/attachments/e/..hash..: Unable to open for reading”

Checking inside the osTicket instance, I could see those attachments were really zero-size or corrupted files, but after googling through the forums it seems no one had really solved this: the migration simply fails to proceed. So I just forced the creation of the empty files on the filesystem with:

touch /var/www/osticket/attachments/e/..hash..

and after that I was able to restart and correctly finish the migration (except for those 5 critical attachments, of course).
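To spot these problematic rows up front, before the migration chokes on them, you could query the file table directly. A sketch, assuming the default ost_ table prefix (note that corrupted files with a non-zero recorded size will not show up this way):

mysql -u root -p DB_NAME -e "SELECT id, name, size FROM ost_file WHERE size = 0;"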

At the end of the migration all the other attachments were correctly dumped into the target folder and mapped in the %_file table, with the bk column now referencing the filesystem backend, but the %_file_chunk table was still 40GB. I found a specific thread on the osTicket forum about this problem, where someone accidentally resolved it just by renaming the %_file_chunk table, but that was not acceptable in my situation.

That led me to analyze the table: it had exactly the same number of rows it had at the beginning, but with the curious indication of a large amount of InnoDB “data free”, something I had never checked before. On Stack Overflow, some experts say it is the sum of several unused areas inside the InnoDB tablespace, and they suggest optimizing the table.
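You can read that value yourself from information_schema. A sketch (replace DB_NAME; the table name assumes the default ost_ prefix):

mysql -u root -p -e "SELECT table_name, table_rows,
       ROUND(data_free / 1024 / 1024 / 1024, 2) AS data_free_gb
  FROM information_schema.tables
 WHERE table_schema = 'DB_NAME' AND table_name = 'ost_file_chunk';"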

So I just tried this simple query:

OPTIMIZE TABLE %_file_chunk

but as anticipated in that thread, in my case it did not change anything. Anyway, I was sure that was the right way, so I launched a full optimization of the whole database with this command:

mysqlcheck -o DB_NAME
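On a typical setup you also need credentials, so the full invocation looks something like this (a sketch; -o is the short form of --optimize):

mysqlcheck -o -u root -p DB_NAME

Note that on InnoDB, OPTIMIZE TABLE is performed as a full table rebuild (“recreate + analyze”), which is what actually releases the unused space; with innodb_file_per_table enabled, the rebuild also shrinks the on-disk tablespace file.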

and at the end the database had shrunk from 40GB to 1.1GB!

So it seems this is not really an osTicket issue, but normal behaviour of MySQL (with InnoDB) related to its internal allocation of space, which needs to be checked and optimized manually.

2 replies on “osTicket attachments migration for huge databases”

Nice post Massimiliano. I have the same problem. I don’t have a huge database like yours, but taking a database backup with all the attachments is a pain in the ass. I am using osTicket 1.5.1 on shared web hosting. Any idea how to implement the commands you shared in your post on a shared web host? Thanks in advance.

Hi Bahri.

It should be possible to launch manage.php via the web interface, so you could try that (just do a backup first!). Surely you have an instance of phpMyAdmin for your shared web hosting, so you could launch the query from there. In the list of tables you will see an “Optimize” option for each table; click it for each table and wait until it finishes.
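Or paste something like this into the phpMyAdmin SQL tab (a sketch, assuming the default ost_ table prefix):

OPTIMIZE TABLE ost_file, ost_file_chunk;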

Alternatively, you could back everything up locally or to a VPS, and then migrate the upgraded version back online.
