MySQL (5.5)
The way things are going, I'm not sure the next DB guide I write for Debian is going to have MySQL in the title. Regardless, this hasn't changed too much over the past few years. There are still a few notes in here left over from when this was a MySQL 5.0 and 5.1 guide, but they're pointed out.
Discussion Elements
What follows is some notes I wrote some years ago (May 2011) on my old blog. Most of it is still relevant, though I've cleaned some things up a bit. My forums are certainly a lot larger now.
MyISAM versus InnoDB
I still see people insisting on using MyISAM. MyISAM is not garbage, but it has very few advantages, and only in very limited situations:
- MyISAM's overhead is smaller, meaning that tables with fixed-width data will be smaller.
- MyISAM doesn't require a shared tablespace to do tracking, which additionally reduces required disk I/O.
- While for variable-length tables, InnoDB's compression algorithm can make tables smaller than MyISAM... the superior solution is to compress the data on input.
- Setting concurrent_insert=2 means that MyISAM will process multiple inserts at a time without issue, no matter what rows are deleted. This isn't an 'advantage' so much as 'not as much of a disadvantage as some claim'.
This makes MyISAM viable for a limited set of activities. Logging, in particular, is a fairly attractive use for MyISAM - the data isn't terribly critical, and you are merely appending a lot of rows in sequential order, something MyISAM is well suited for.
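If you do go that route, here is a minimal sketch of such a log table - the table and column layout are purely illustrative:
CREATE TABLE access_log (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    logged_at DATETIME NOT NULL,
    message VARCHAR(255) NOT NULL,
    PRIMARY KEY (id)
) ENGINE=MyISAM;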
That is about all, however. If your table sees UPDATE statements (post editing, PM status updates, and so on), each one forces a table lock. If enough of these occur at once, your site is effectively down at its most active moments.
Never a good situation.
If the table in question is doing anything besides getting INSERT and DELETE statements, then you should be working with InnoDB.
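If you are converting an existing MyISAM table over, the change itself is a single statement - the table name is a placeholder, and the rebuild will lock the table while it runs:
ALTER TABLE table_name ENGINE=InnoDB;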
'New' InnoDB file formats and Compression
This was new four years ago, but it is still worth mentioning. It is available by default as of the version of InnoDB shipped in MySQL 5.4.2; in MySQL 5.1, you will have to load the InnoDB plugin instead:
# InnoDB
ignore_builtin_innodb
plugin-load=innodb=ha_innodb_plugin.so;innodb_trx=ha_innodb_plugin.so;innodb_locks=ha_innodb_plugin.so;innodb_lock_waits=ha_innodb_plugin.so;innodb_cmp=ha_innodb_plugin.so;innodb_cmp_reset=ha_innodb_plugin.so;innodb_cmpmem=ha_innodb_plugin.so;innodb_cmpmem_reset=ha_innodb_plugin.so
default-storage-engine = InnoDB
innodb_file_per_table
innodb_file_format = barracuda
Again, the plugin is not needed as of MySQL 5.4.2 in order to enable the new file format or take advantage of other new features in InnoDB. The new Barracuda format requires the use of the innodb_file_per_table option, as the shared tablespace is stored in the original Antelope format.
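If you want to confirm that the new file format is actually available and switched on, a quick check - exactly which variables appear will depend on your MySQL/plugin version:
SHOW VARIABLES LIKE 'innodb_file%';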
This format enables two additional row formats for you to store your data in.
The DYNAMIC row format stores long columns (BLOB, TEXT, VARCHAR) that don't fit into a single page as a twenty-byte pointer to an overflow page. When the row size exceeds the page size, the longest columns are moved off-page and replaced by the pointer, one by one, until the row fits. This makes InnoDB tables more efficient when queries aren't actually touching these columns, as they do not need to be loaded into the buffer pool. It differs from the previous COMPACT format, which would store the first 768 bytes of each such column in-table, with the 20-byte pointer following, regardless of whether or not that made a good fit.
You can change a table's row format to DYNAMIC by running:
ALTER TABLE table_name ROW_FORMAT=DYNAMIC;
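To verify which row format a table ended up with, the information schema carries it (substitute your own database name):
SELECT TABLE_NAME, ROW_FORMAT FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'your_database';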
Compressed tables require a bit more discussion. They are basically Dynamic tables, but InnoDB will attempt to compress the contents of each page and check whether the result fits within KEY_BLOCK_SIZE. If it doesn't, the data is pushed out to the overflow page as with Dynamic, although the overflow page is also compressed. Specifying too large a value of KEY_BLOCK_SIZE will waste some space, but specifying too small a value causes InnoDB to split nodes needlessly and forces a lot of data into the overflow pages - which, I have discovered, can dramatically reduce write performance.
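For reference, converting a table to the compressed format is also a single statement - the KEY_BLOCK_SIZE of 8 (kilobytes) is only an example, and as discussed, the right value depends on how well your data actually compresses:
ALTER TABLE table_name ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;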
mysql> select * from information_schema.INNODB_CMP;
+-----------+--------------+-----------------+---------------+----------------+-----------------+
| page_size | compress_ops | compress_ops_ok | compress_time | uncompress_ops | uncompress_time |
+-----------+--------------+-----------------+---------------+----------------+-----------------+
|      1024 |            0 |               0 |             0 |              0 |               0 |
|      2048 |            0 |               0 |             0 |              0 |               0 |
|      4096 |      3357554 |         1512129 |          1889 |      219238221 |           26632 |
|      8192 |      1039670 |         1031859 |           464 |       87310383 |           15538 |
|     16384 |            0 |               0 |             0 |              0 |               0 |
+-----------+--------------+-----------------+---------------+----------------+-----------------+
5 rows in set (0.00 sec)
Information on using the compression information schema tables can be found in the MySQL manual.
In a nutshell, you can see that, judged against MySQL's documentation, my situation isn't particularly ideal. I have three main compressed tables:
- My ajax_chat_messages table, with a KEY_BLOCK_SIZE of 4k. These compress at a somewhat better ratio than raw posts do at 4k, and compression still pays off - converting them back to DYNAMIC doubled the size of the table on-disk. Converting to 2k increased the size of the table by about 15% over 4k while decreasing performance massively; converting to 8k saw a slight increase in size again, with a massive benefit to performance.
- My private_messages table, with a KEY_BLOCK_SIZE of 8k. According to MySQL's documentation, compression is working perfectly. Only about half the messages are getting compressed, but it's still a net benefit.
- My smf_messages table (forum posts), with a KEY_BLOCK_SIZE of 4k. Unlike the above, compression is attempted on about three quarters of my members' posts, but only about half of those attempts succeed.
Converting the posts table to a key_block_size of 8k means that my posts table is 3,087,007,744 bytes rather than 2,889,875,456 bytes, but the number of successful compressions is far superior:
mysql> select * from information_schema.INNODB_CMP WHERE page_size=8192;
+-----------+--------------+-----------------+---------------+----------------+-----------------+
| page_size | compress_ops | compress_ops_ok | compress_time | uncompress_ops | uncompress_time |
+-----------+--------------+-----------------+---------------+----------------+-----------------+
|      8192 |      2365184 |         2351065 |           973 |       95024340 |           16636 |
+-----------+--------------+-----------------+---------------+----------------+-----------------+
1 row in set (0.00 sec)
In other words, about 200,000 fewer posts actually got compressed (about 1.3 million, leaving out the million compressed private messages), but the overflow pages are now far smaller - only about 8,000 private messages and 6,000 posts end up there. This means the overflow pages are read less often, less RAM is spent holding compressed and uncompressed versions of pages, and thread views may be slightly faster. For the most part, though, the difference is not terribly great.
Compression saves me a couple gigabytes over MyISAM, personally - but your mileage will vary. MySQL's documentation on this matter should be taken to heart - overcompression is really, really bad for MySQL. My sustained write rates were an order of magnitude lower than when compress_ops_ok was ~99% of compress_ops. Whether you see a performance benefit will thus largely depend on how well InnoDB is actually handling the compression. I use less disk space at a key_block_size of 4k on most tables, but it is also dramatically slower.
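A quick way to keep an eye on that success ratio per page size, using the same INNODB_CMP table as above:
SELECT page_size, compress_ops, compress_ops_ok,
       ROUND(100 * compress_ops_ok / compress_ops, 1) AS pct_ok
FROM information_schema.INNODB_CMP
WHERE compress_ops > 0;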
All things considered, I'm not exactly a fan. This sort of thing should be done in the forum script - not handled by the database. All tables should be stored as DYNAMIC or COMPACT, with their variable-length data gzip-compressed appropriately. This is a matter best addressed with developers, however, and in the meantime, compression is a good stopgap measure to improve your disk I/O.
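Purely to illustrate the idea - the table and column names here are made up - MySQL's built-in COMPRESS() and UNCOMPRESS() functions show what compressing at the application boundary looks like. A real forum script would more likely gzip the text in its own language before the INSERT and store it in a BLOB column:
-- 'messages' and 'body_compressed' are hypothetical; body_compressed is a BLOB
INSERT INTO messages (id, body_compressed) VALUES (1, COMPRESS('the post text'));
SELECT UNCOMPRESS(body_compressed) AS body FROM messages WHERE id = 1;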
The Query Cache
Suggest that this is still useful and get crucified! But... it can be. There are ways to check whether it is a net positive or a negative for you.
The easiest way to see whether the Query Cache is doing you any good is to enable it, give it enough space to work with, and then check its status, e.g.
mysql> show status like 'Qcache%';
+-------------------------+------------+
| Variable_name           | Value      |
+-------------------------+------------+
| Qcache_free_blocks      | 1839       |
| Qcache_free_memory      | 94658784   |
| Qcache_hits             | 1123738642 |
| Qcache_inserts          | 584027197  |
| Qcache_lowmem_prunes    | 1775317    |
| Qcache_not_cached       | 144840211  |
| Qcache_queries_in_cache | 105508     |
| Qcache_total_blocks     | 213789     |
+-------------------------+------------+
8 rows in set (0.00 sec)
If you are getting lowmem_prunes, you may first need to assign more space to the query cache. The space you have might also be too fragmented for MySQL to work with, in which case you will need to regularly run the MySQL command
FLUSH QUERY CACHE;
I run it three times an hour through cron. Beyond that, I find it's best to look for things I can avoid caching in the first place.
Finally, the default query_cache_limit is generally too high - large result sets are much less likely to be repeated, so they just waste space. The solution here is to set query_cache_limit to a lower value - I use 256K:
# Query cache
query_cache_type = 1
query_cache_limit = 256K
query_cache_size = 256M
query_cache_min_res_unit = 4K
After you've done this, and after you have assigned 'enough' RAM to the Query Cache - more than 256 megabytes is currently regarded as a bad idea (though 512MB might be okay in the future) - check the status, as above, once your server has been running for a while.
Qcache_hits / Qcache_inserts = Your Query Cache's effectiveness.
A number less than or equal to one basically means that you aren't actually getting much out of the query cache - on average, the entry gets pruned before it gets used. A number greater than 1 means that the query cache is worthwhile to you - for now.
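As a worked example, plugging in the status numbers shown above:
Qcache_hits / Qcache_inserts = 1123738642 / 584027197 ≈ 1.92
So on that server each cached entry is, on average, reused almost twice before it is invalidated or pruned - the cache is earning its keep.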
There is genuine talk of improving the Query Cache's design in future versions of MySQL - or removing it entirely. What is here should be taken as advice for MySQL versions up to 5.5, and possibly further if nothing improves. For my own 20 gigabyte database, however, the query cache is still giving me performance improvements via the above.
Setup and Configuration
What most people will probably skip to.
Preparation
- Unless your database server is completely incidental to everything else you are doing, set up hugepages. The performance gain is extremely noticeable, even for sub-gigabyte databases. The exact way to do this differs by operating system, so adjust accordingly; a rough Linux sketch follows after this list.
- Some distributions use a group to designate who can access Hugepages, e.g.
- usermod -a -G hugepager mysql
- Debian has its mysqld_safe syslog settings in a separate file (/etc/mysql/conf.d/mysqld_safe_syslog.cnf). I'm assuming here that it's gone.
- Make sure MySQL is shut down before doing the following tasks:
- If you will be changing the sizes of ib_logfiles, you will need to delete them:
- rm /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1
- Adjust directory location as needed.
- If, and only if, this is a fresh, virgin installation:
- Delete your ibdata1 file.
- If not, you will want to move it if you are using a multi-disk setup. ibdata1 is where the doublewrite buffer is written (along with a number of other things that get split out in later versions of MySQL), and it will account for roughly half of your database writes. Because these are all sequential writes, it's fine to put it on a platter drive - ideal, even.
- If you are running a master-slave setup, and you are mounting /tmp on tmpfs, you will want to make your own temp directory for MySQL specifically, lest a power failure result in data corruption:
mkdir /mytmp
chown mysql:mysql /mytmp
chmod 0750 /mytmp
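As promised in the preparation list, here is a rough sketch of what enabling hugepages looks like on a typical Linux box. The page count, group id, and file locations are assumptions - size the reservation to cover your InnoDB buffer pool and key buffer, and check your distribution's documentation:
# /etc/sysctl.conf (or a file under /etc/sysctl.d/)
# Reserve enough 2MB hugepages to cover the InnoDB buffer pool plus a margin
# (roughly 4.5GB here); apply with 'sysctl -p' or a reboot.
vm.nr_hugepages = 2304
# gid of the group allowed to use hugepage shared memory (your 'hugepager'
# or mysql group - the number is just an example)
vm.hugetlb_shm_group = 110

# /etc/security/limits.conf - let the mysql user lock that much memory
mysql soft memlock unlimited
mysql hard memlock unlimited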
my.cnf
Do not blindly copy and paste this into your my.cnf! Path names in here are for Debian, and might not apply to your Red Hat/CentOS distribution. Moreover, I segregate database load, and you might decide to partition things differently.
# Obviously you will want to preserve whatever connection settings you
# have for your distribution. port, socket, user, pid-file, basedir,
# and other directory settings should only be changed from what your
# distribution gives you if you know why they should change. In order
# to prevent silliness I've commented most paths out. Again:

# I HAVE COMMENTED OUT ALMOST ALL PATH NAMES. BE SURE TO SET YOUR OWN
# AND SET THEM PROPERLY WHERE YOU NEED TO.

# YOU SHOULD TAKE YOUR PATH DATA FROM YOUR DEFAULT my.cnf FILE UNLESS
# YOU KNOW YOU NEED TO CHANGE IT.

# Ahem.
# I don't want people blindly copying and pasting then watching things
# blow up because they are running CentOS or something. These paths
# won't work in default Debian either.
# Ideally, you should go from section to section and just pull in the
# optimizations as you need them.
[client]
port = 3306
#socket = /run/mysqld/mysqld.sock

# This was formerly known as [safe_mysqld]. Both versions are currently parsed.
[mysqld_safe]
# socket = /run/mysqld/mysqld.sock
nice = 0
# This open-files-limit is excessive. It's only needed if you insist on
# staying with MyISAM and then only to crazy levels. I just don't have
# any reason to remove it.
open-files-limit = 32768
# I use log-error rather than syslog. I like error segregation.
#syslog
#log-error = /var/log/mysql/mysql-error.log
[mysqld]
# * Basic Settings
user = mysql
# pid-file = /run/mysqld/mysqld.pid
# socket = /run/mysqld/mysqld.sock
port = 3306
# If you move your database to another directory, you will also have
# to set it here. You'll want your datadir and ibdata1 files on
# separate drives, with your ibdata1 preferably on a platter.
# datadir = /data/mysql
# If you make another temp directory just for MySQL, be sure to mount
# it tmpfs as with any other temporary directory if this is not a
# replication slave.
# If this is a replication slave, you should instead set this to an
# on-disk directory that survives reboot, so that temporary tables can
# be successfully replicated. See
# http://dev.mysql.com/doc/refman/5.1/en/temporary-files.html
tmpdir = /tmp
# language = /usr/share/mysql/english
skip-external-locking
# Security
# No community software I am aware of actually needs LOAD DATA LOCAL,
# so I just disable it, and probably would not use software that did. See:
# http://dev.mysql.com/doc/refman/5.1/en/load-data-local.html
local-infile = 0
# This prevents granting access to stored procedures automagically just
# for creating them.
automatic_sp_privileges = 0
# safe-user-create prevents autocreating users with the GRANT statement
# unless the user has the insert privilege on mysql.user
safe-user-create = 1
# secure-auth is probably not relevant if your server was built in the
# past decade. I just like it on.
secure-auth = 1
# Remove skip-show-database if you use phpMyAdmin or a similar tool to
# manage your databases, it will just frustrate you or your users.
skip-show-database
# This limits where the LOAD_FILE, LOAD DATA and SELECT INTO OUTFILE
# statements can read from or write to. This is a good option to set.
secure-file-priv = /tmp
# Networking
# I have replication set up on a separate interface, so I bind MySQL
# to a socket and the address for that interface. If you're just using
# a single machine, use
# skip-networking
# instead. Sockets are slightly faster than TCP connections.
# bind-address = 192.168.0.1
# There's no reason to waste time resolving domain names. If the ip
# changes, we'll know.
skip-name-resolve
# Tuning
# You will need to enable Hugepages on your server to make use of
# the large-pages option
large-pages
# The client default is 16M, while the server default is 1M. Setting
# the server to 16M can make some large operations easier.
# I've since hit 16M limits, so I use 32M now.
max_allowed_packet = 32M
# There's no serious reason to have a long interactive timeout. If you
# are low on connections, you shouldn't set this higher than
# wait_timeout
interactive_timeout = 3600
# I am of the opinion that the default value is far too high. If you
# use persistent connections, even a timeout of 300 may be too high.
wait_timeout = 300
# The following two are best set to the same size, because the size
# of temporary tables is limited by the lower of the two.
# I have not found any benefit in increasing the value past my
# tmp_table_size default.
tmp_table_size = 32M
max_heap_table_size = 32M
# The next two lines replace the basic table_cache value as of MySQL
# 5.1. table_definition_cache should be big enough for every table
# in your database, plus temporary tables, and table_open_cache
# should be a reflection of how many of these will be open in a live
# connection at once - it will likely exceed your definition cache.
# It doesn't hurt to set these to large values. They don't take a lot
# of RAM and it's better than hitting the limit.
table_definition_cache = 4096
table_open_cache = 16384
# The default optimizer_search_depth is 62. This causes MySQL to take
# an obscene amount of time planning a query, though when it finally
# executes, it is pretty close to optimal. Since the vast majority of
# queries my software runs involve four or fewer relations, I set it
# to four.
optimizer_search_depth = 4
# I'm not personally a fan of UTF8, multi-byte strings are less
# efficient and annoying, while websites perform just fine by using
# ampersand codes to represent nonstandard characters.
character-set-server = latin1
# Buffers
# 256k for everyone! 256k blocks will generally mean that we are not
# switching to the rather expensive mmap() operation most of the time.
# read_rnd_buffer can be pretty greedy; at least for my setup, 2M is
# closer to optimal.
join_buffer_size = 256K
sort_buffer_size = 256K
read_buffer_size = 256K
read_rnd_buffer_size = 2M
# Threads
# 256K is now the default for 64-bit systems, this line is just a
# legacy from when I had it configured on 32-bits and was following
# the above advice.
thread_stack = 256K
# max_connections is how many connections your server will tolerate at
# once, while thread_cache_size is how many of these your server will
# cache. There is no reason not to set these to an equal number; I have
# seen no evidence that the trivial amount of RAM a low
# thread_cache_size saves is worth the performance hit of opening up a
# new thread under load.
# In realistic terms, you should 'tune to failure' - you don't want
# to support more active connections than your system can feasibly
# handle. 256 is a good number for most low-end servers produced these
# days. Increase if you have multiple drive arrays or faster disks.
thread_cache_size = 256
max_connections = 256
# Query cache
# You may want to check the above info about how useful the
# Query cache may actually be for you.
# The only additional point I would make here is that changing
# query_cache_min_res_unit from the default is effectively useless,
# especially if you flush regularly like the above article describes.
query_cache_type = 1
query_cache_limit = 256K
query_cache_size = 128M
query_cache_min_res_unit = 4K
# MyISAM
# 16M is the most MySQL will store entirely in large-pages. Past that,
# it will start shunting some of it off to normal memory. Since almost
# everything I do is InnoDB, this isn't a problem. Otherwise, however,
# it can make calculating how much space you need difficult.
key_buffer = 16M
# Set this to the size of a filesystem block - e.g. 4k
key_cache_block_size = 4K
# These two should both be the default values. If you are bulk-loading
# data from a script, you may want to increase bulk_insert_buffer_size
# to speed up operation.
bulk_insert_buffer_size = 8M
myisam_sort_buffer_size = 64M
# Auto-creates a backup when running the recover operation.
myisam-recover = BACKUP
# By default, MyISAM tables will INSERT into deleted row space before
# appending to the end of the data file. In exchange for saving this
# trivial amount of space, once a row gets deleted, only one insert
# operation may occur at a time until the holes are filled. Setting
# concurrent_insert to 2 stops this silly behavior, at the cost of
# wasting a bit of disk space, for a significant performance
# improvement in MyISAM tables.
concurrent_insert = 2
# InnoDB
# The following three lines are only necessary in MySQL 5.1, for loading
# the plugin which supports the new InnoDB file format.
# If you have 5.5 or later, skip these lines.
#ignore_builtin_innodb
#plugin-load=innodb=ha_innodb_plugin.so;innodb_trx=ha_innodb_plugin.so;innodb_locks=ha_innodb_plugin.so;innodb_lock_waits=ha_innodb_plugin.so;innodb_cmp=ha_innodb_plugin.so;innodb_cmp_reset=ha_innodb_plugin.so;innodb_cmpmem=ha_innodb_plugin.so;innodb_cmpmem_reset=ha_innodb_plugin.so
#default-storage-engine = InnoDB
# In theory, you want this to encompass your entire database. In
# practice, you only have so much RAM available.
# Fortunately, as long as you can store a big enough chunk that it
# gets a 99% hit rate, you will be fine.
innodb_buffer_pool_size = 4096M
# As with everything, only move these if you know you need to.
# The log_group files belong with the rest of your database; they use
# very little I/O.
# innodb_log_group_home_dir = /data/mysql
# ibdata1, on the other hand, will be roughly half of your writes.
# innodb_data_home_dir = /innodb
# I set 128M as the size of ibdata1 because that's how big individual
# extent entries are in the ext4 filesystem.
innodb_autoextend_increment = 128
innodb_data_file_path = ibdata1:128M:autoextend
# Supposedly, smaller is better because it makes recovery faster, even
# if larger means slightly better performance. I have no idea what the
# logic of this is - if I have a crash, downtime is expected. InnoDB
# has yet to crash on me in six years.
innodb_log_files_in_group = 2
innodb_log_file_size = 512M
# No reason to make the buffers too large, however.
innodb_log_buffer_size = 2M
innodb_additional_mem_pool_size = 2M
# Setting innodb_flush_log_at_trx_commit to 0 causes InnoDB to only
# flush to disk once per second, improving performance considerably.
# In a community environment, you are more likely to lose a topic
# read entry or something equally trivial than a post, so the data
# is very rarely going to be critical, and users often have backups
# of their own posts.
innodb_flush_log_at_trx_commit = 0
# file_per_table makes checking out which tables are doing the heavy
# lifting a lot easier, for sure. It is also required for the
# barracuda file format.
innodb_file_per_table
innodb_file_format = barracuda
# Note that if you have an established database, and haven't used
# file_per_table, you'll need to ALTER TABLE each table you want to
# split off from your ibdata file.
# Enabling strict mode helps prevent messing up creating or altering
# a table to support the new row formats in barracuda.
innodb_strict_mode = 1
# I haven't played too much with innodb_thread_concurrency. In my
# experience, for web purposes at least, Intel architectures work best
# up to four times the number of cores. Your mileage may vary.
innodb_thread_concurrency = 8
# Read and write threads - should generally set each of these to
# four times your -physical- core count (keep in mind that many AMD
# processors advertise as having double their physical count).
innodb_read_io_threads = 64
innodb_write_io_threads = 64
# I set this to sixty because I have obsessive compulsive disorder.
# Don't fret over it.
innodb_lock_wait_timeout = 60
# O_DIRECT bypasses the operating system's handling of data. This
# can drastically improve how well your system handles large amounts
# of RAM by removing double buffers (once in InnoDB's cache, again
# in the filesystem's cache), at a slight cost to reliability.
# This appears to be more dramatic the more RAM you have.
# Note that if you've mounted /tmp to tmpfs like a good admin,
# InnoDB will whine at you regularly unless you've changed tmpdir to
# something that isn't tmpfs.
# Changing tmpdir in such a case only matters for your slave server,
# but it is a good idea if you do.
innodb_flush_method = O_DIRECT
# We like to be able to do cold backups
innodb_fast_shutdown = 0
# thread_concurrency is for Solaris only. It does not apply to your
# Linux box.
#thread_concurrency = 10
# * Logging and Replication
#
# As the default configuration file says, as of 5.1 you can
# enable the general log at runtime.
#general_log_file = /var/log/mysql/mysql.log
#general_log = 1
#
# Error logging is defined in the mysqld_safe entry.
#
# Here you can see queries with especially long duration. Yes,
# I consider one second to be inordinately long.
slow_query_log = 1
#slow_query_log_file = /var/log/mysql/mysql-slow.log
# You can set this to non-integer numbers now. One second is rare
# enough for me that I consider it 'long'.
long_query_time = 1
# In order to keep your sanity, you may want to only use this when
# developing software. It would be nice if developers of community
# software did track this more often.
#log-queries-not-using-indexes
#
# The binlog is mostly for replication, but you may want to set aside
# a 'small' recovery window like so.
server-id = 1
#log_bin = /var/log/mysql/mysql-bin.log
binlog_cache_size = 256K
# Setting sync_binlog fairly high is generally beneficial if the average
# value of data is fairly small.
sync_binlog = 16
expire_logs_days = 7
max_binlog_size = 512M
# Statement is the default, but if you turn on replication, you may
# wish to switch to row-based.
#binlog_format = STATEMENT
#binlog_do_db = include_database_name
#binlog_ignore_db = include_database_name
# The following items are largely defaults as set up by Debian.
# If you are largely using InnoDB, only the mysqldump section is of
# any concern.
[mysqldump]
quick
quote-names
max_allowed_packet = 32M
[mysql]
#no-auto-rehash # faster start of mysql but no tab completion
# Tab completion is a freaking sanity saver.
max_allowed_packet = 32M
[isamchk]
key_buffer = 16M
# Debian likes to put some conf.d nonsense at the end. Your OS might too. Kill it with fire.
Final Touches
- If you are using the Query Cache (and it seems that merely suggesting it is still useful is asking to be crucified), then you will need to set up a cron job to flush it regularly. First, make an appropriate user:
- In MySQL:
- GRANT RELOAD ON *.* TO qcmaint@localhost IDENTIFIED BY 'crazylongpass';
- In a script file, run
- /usr/bin/mysql -u qcmaint -p"crazylongpass" -e 'FLUSH QUERY CACHE;'
- via cron, two to three times an hour. My /etc/cron.d/mysql entry (the wrapper script itself is sketched at the end of this page):
- 4,24,44 * * * * root /root/flushqc.sh
- My experience is that somewhere between running it every half hour and every 15 minutes, flushing stops helping - if you reach that point, you should probably stop using the query cache entirely.
- Since root can always reset the MySQL password anyway, it can be handy just to stick a giant uncrackable mess in /root/.my.cnf rather than have something crackable:
[client]
user=root
password=yourrandompasshere
socket=/run/mysqld/mysqld.sock
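For reference, the /root/flushqc.sh called by the cron entry above is nothing more than a wrapper around the command already shown - something along these lines:
#!/bin/sh
# Flush (defragment) the query cache using the maintenance user created above.
/usr/bin/mysql -u qcmaint -p"crazylongpass" -e 'FLUSH QUERY CACHE;'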