START-INFO-DIR-ENTRY * mysql: (mysql). MySQL documentation. END-INFO-DIR-ENTRY Table of Contents ***************** General Information About This Manual Conventions Used in This Manual What Is MySQL? History of MySQL The Main Features of MySQL How Stable Is MySQL? How Big Can MySQL Tables Be? Year 2000 Compliance What Is MySQL AB? The Business Model and Services of MySQL AB Support Training and Certification Consulting Commercial Licenses Partnering Advertising Contact Information MySQL Support and Licensing Support Offered by MySQL AB Copyrights and Licenses Used by MySQL MySQL Licenses Using the MySQL Software Under a Commercial License Using the MySQL Software for Free Under GPL MySQL AB Logos and Trademarks The Original MySQL Logo MySQL Logos that may be Used Without Written Permission When do you need a Written Permission to use MySQL Logos? MySQL AB Partnership Logos Using the word `MySQL' in Printed Text or Presentations Using the word `MySQL' in Company and Product Names MySQL 4.x In A Nutshell Stepwise Rollout Ready for Immediate Use Embedded MySQL Other Features Available From MySQL 4.0 Future MySQL 4.x Features MySQL 4.1, The Following Development Release MySQL Information Sources MySQL Mailing Lists The MySQL Mailing Lists Asking Questions or Reporting Bugs How to Report Bugs or Problems Guidelines for Answering Questions on the Mailing List MySQL Community Support on IRC (Internet Relay Chat) How Standards-compatible Is MySQL? What Standards Does MySQL Follow? Running MySQL in ANSI Mode MySQL Extensions to ANSI SQL92 MySQL Differences Compared to ANSI SQL92 Sub`SELECT's `SELECT INTO TABLE' Transactions and Atomic Operations Stored Procedures and Triggers Foreign Keys Views `--' as the Start of a Comment Known Errors and Design Deficiencies in MySQL Errors in 3.23 fixed in later MySQL version Open bugs / Design Deficiencies in MySQL MySQL and The Future (The TODO) Things That Should be in 4.0 Things That Should be in 4.1 Things That Should be in 5.0 Things That Must be Done in the Near Future Things That Have to be Done Sometime Things We Don't Plan To Do How MySQL Compares to Other Databases How MySQL Compares to `mSQL' How to Convert `mSQL' Tools for MySQL How `mSQL' and MySQL Client/Server Communications Protocols Differ How `mSQL' 2.0 SQL Syntax Differs from MySQL How MySQL Compares to `PostgreSQL' MySQL and PostgreSQL development strategies Featurewise Comparison of MySQL and PostgreSQL Benchmarking MySQL and PostgreSQL MySQL Installation Quick Standard Installation of MySQL Installing MySQL on Linux Installing MySQL on Windows Installing the Binaries Preparing the Windows MySQL Environment Starting the Server for the First Time Installing MySQL on Mac OS X General Installation Issues How to Get MySQL Verifying Package Integrity Using `MD5 Checksums' or `GnuPG' Operating Systems Supported by MySQL Which MySQL Version to Use Installation Layouts How and When Updates Are Released MySQL Binaries Compiled by MySQL AB Installing a MySQL Binary Distribution Installing a MySQL Source Distribution Quick Installation Overview Applying Patches Typical `configure' Options Installing from the Development Source Tree Problems Compiling MySQL? 
MIT-pthreads Notes Windows Source Distribution Post-installation Setup and Testing Problems Running `mysql_install_db' Problems Starting the MySQL Server Starting and Stopping MySQL Automatically Upgrading/Downgrading MySQL Upgrading From Version 4.0 to Version 4.1 Upgrading From Version 3.23 to Version 4.0 Upgrading From Version 3.22 to Version 3.23 Upgrading from Version 3.21 to Version 3.22 Upgrading from Version 3.20 to Version 3.21 Upgrading to Another Architecture Operating System Specific Notes Linux Notes (All Linux Versions) Linux Notes for Binary Distributions Linux x86 Notes Linux SPARC Notes Linux Alpha Notes Linux PowerPC Notes Linux MIPS Notes Linux IA64 Notes Windows Notes Starting MySQL on Windows 95, 98 or Me Starting MySQL on Windows NT, 2000 or XP Running MySQL on Windows Connecting to a Remote MySQL from Windows with SSH Splitting Data Across Different Disks on Windows Compiling MySQL Clients on Windows MySQL-Windows Compared to Unix MySQL Solaris Notes Solaris 2.7/2.8 Notes Solaris x86 Notes BSD Notes FreeBSD Notes NetBSD notes OpenBSD 2.5 Notes OpenBSD 2.8 Notes BSD/OS Version 2.x Notes BSD/OS Version 3.x Notes BSD/OS Version 4.x Notes Mac OS X Notes Mac OS X 10.x Mac OS X Server 1.2 (Rhapsody) Other Unix Notes HP-UX Notes for Binary Distributions HP-UX Version 10.20 Notes HP-UX Version 11.x Notes IBM-AIX notes SunOS 4 Notes Alpha-DEC-UNIX Notes (Tru64) Alpha-DEC-OSF/1 Notes SGI Irix Notes Caldera (SCO) Notes Caldera (SCO) Unixware Version 7.0 Notes OS/2 Notes BeOS Notes Novell NetWare Notes Perl Installation Comments Installing Perl on Unix Installing ActiveState Perl on Windows Installing the MySQL Perl Distribution on Windows Problems Using the Perl `DBI'/`DBD' Interface Tutorial Introduction Connecting to and Disconnecting from the Server Entering Queries Creating and Using a Database Creating and Selecting a Database Creating a Table Loading Data into a Table Retrieving Information from a Table Selecting All Data Selecting Particular Rows Selecting Particular Columns Sorting Rows Date Calculations Working with `NULL' Values Pattern Matching Counting Rows Using More Than one Table Getting Information About Databases and Tables Examples of Common Queries The Maximum Value for a Column The Row Holding the Maximum of a Certain Column Maximum of Column per Group The Rows Holding the Group-wise Maximum of a Certain Field Using user variables Using Foreign Keys Searching on Two Keys Calculating Visits Per Day Using `AUTO_INCREMENT' Using `mysql' in Batch Mode Queries from Twin Project Find all Non-distributed Twins Show a Table on Twin Pair Status Using MySQL with Apache Database Administration Configuring MySQL `mysqld' Command-line Options `my.cnf' Option Files Installing Many Servers on the Same Machine Running Multiple MySQL Servers on the Same Machine General Security Issues and the MySQL Access Privilege System General Security Guidelines How to Make MySQL Secure Against Crackers Startup Options for `mysqld' Concerning Security Security issues with LOAD DATA LOCAL What the Privilege System Does How the Privilege System Works Privileges Provided by MySQL Connecting to the MySQL Server Access Control, Stage 1: Connection Verification Access Control, Stage 2: Request Verification Causes of `Access denied' Errors MySQL User Account Management `GRANT' and `REVOKE' Syntax MySQL User Names and Passwords When Privilege Changes Take Effect Setting Up the Initial MySQL Privileges Adding New Users to MySQL Limiting user resources Setting Up Passwords Keeping Your Password 
Secure Using Secure Connections Basics Requirements Setting Up SSL Certificates for MySQL `GRANT' Options Disaster Prevention and Recovery Database Backups `BACKUP TABLE' Syntax `RESTORE TABLE' Syntax `CHECK TABLE' Syntax `REPAIR TABLE' Syntax Using `myisamchk' for Table Maintenance and Crash Recovery `myisamchk' Invocation Syntax General Options for `myisamchk' Check Options for `myisamchk' Repair Options for myisamchk Other Options for `myisamchk' `myisamchk' Memory Usage Using `myisamchk' for Crash Recovery How to Check Tables for Errors How to Repair Tables Table Optimisation Setting Up a Table Maintenance Regimen Getting Information About a Table Database Administration Language Reference `OPTIMIZE TABLE' Syntax `ANALYZE TABLE' Syntax `FLUSH' Syntax `RESET' Syntax `KILL' Syntax `SHOW' Syntax Retrieving information about Database, Tables, Columns, and Indexes `SHOW TABLE STATUS' `SHOW STATUS' `SHOW VARIABLES' `SHOW LOGS' `SHOW PROCESSLIST' `SHOW GRANTS' `SHOW CREATE TABLE' `SHOW WARNINGS | ERRORS' `SHOW TABLE TYPES' `SHOW PRIVILEGES' MySQL Localisation and International Usage The Character Set Used for Data and Sorting German character set Non-English Error Messages Adding a New Character Set The Character Definition Arrays String Collating Support Multi-byte Character Support Problems With Character Sets MySQL Server-Side Scripts and Utilities Overview of the Server-Side Scripts and Utilities `safe_mysqld', The Wrapper Around `mysqld' `mysqld_multi', A Program for Managing Multiple MySQL Servers `myisampack', The MySQL Compressed Read-only Table Generator `mysqld-max', An Extended `mysqld' Server MySQL Client-Side Scripts and Utilities Overview of the Client-Side Scripts and Utilities `mysql', The Command-line Tool `mysqladmin', Administrating a MySQL Server Using `mysqlcheck' for Table Maintenance and Crash Recovery `mysqldump', Dumping Table Structure and Data `mysqlhotcopy', Copying MySQL Databases and Tables `mysqlimport', Importing Data from Text Files `mysqlshow', Showing Databases, Tables, and Columns `mysql_config', Get compile options for compiling clients `perror', Explaining Error Codes How to Run SQL Commands from a Text File The MySQL Log Files The Error Log The General Query Log The Update Log The Binary Update Log The Slow Query Log Log File Maintenance Replication in MySQL Introduction Replication Implementation Overview How To Set Up Replication Replication Features and Known Problems Replication Options in `my.cnf' SQL Commands Related to Replication Replication FAQ Troubleshooting Replication MySQL Optimisation Optimisation Overview MySQL Design Limitations/Tradeoffs Portability What Have We Used MySQL For? The MySQL Benchmark Suite Using Your Own Benchmarks Optimising `SELECT's and Other Queries `EXPLAIN' Syntax (Get Information About a `SELECT') Estimating Query Performance Speed of `SELECT' Queries How MySQL Optimises `WHERE' Clauses How MySQL Optimises `DISTINCT' How MySQL Optimises `LEFT JOIN' and `RIGHT JOIN' How MySQL Optimises `ORDER BY' How MySQL Optimises `LIMIT' Speed of `INSERT' Queries Speed of `UPDATE' Queries Speed of `DELETE' Queries Other Optimisation Tips Locking Issues How MySQL Locks Tables Table Locking Issues Optimising Database Structure Design Choices Get Your Data as Small as Possible How MySQL Uses Indexes Column Indexes Multiple-Column Indexes Why So Many Open tables? 
How MySQL Opens and Closes Tables Drawbacks to Creating Large Numbers of Tables in the Same Database Optimising the MySQL Server System/Compile Time and Startup Parameter Tuning Tuning Server Parameters How Compiling and Linking Affects the Speed of MySQL How MySQL Uses Memory How MySQL uses DNS `SET' Syntax Disk Issues Using Symbolic Links Using Symbolic Links for Databases Using Symbolic Links for Tables MySQL Language Reference Language Structure Literals: How to Write Strings and Numbers Strings Numbers Hexadecimal Values `NULL' Values Database, Table, Index, Column, and Alias Names Case Sensitivity in Names User Variables System Variables Comment Syntax Is MySQL Picky About Reserved Words? Column Types Numeric Types Date and Time Types Y2K Issues and Date Types The `DATETIME', `DATE', and `TIMESTAMP' Types The `TIME' Type The `YEAR' Type String Types The `CHAR' and `VARCHAR' Types The `BLOB' and `TEXT' Types The `ENUM' Type The `SET' Type Choosing the Right Type for a Column Using Column Types from Other Database Engines Column Type Storage Requirements Functions for Use in `SELECT' and `WHERE' Clauses Non-Type-Specific Operators and Functions Parentheses Comparison Operators Logical Operators Control Flow Functions String Functions String Comparison Functions Case-Sensitivity Numeric Functions Arithmetic Operations Mathematical Functions Date and Time Functions Cast Functions Other Functions Bit Functions Miscellaneous Functions Functions for Use with `GROUP BY' Clauses Data Manipulation: `SELECT', `INSERT', `UPDATE', `DELETE' `SELECT' Syntax `JOIN' Syntax `UNION' Syntax `HANDLER' Syntax `INSERT' Syntax `INSERT ... SELECT' Syntax `INSERT DELAYED' Syntax `UPDATE' Syntax `DELETE' Syntax `TRUNCATE' Syntax `REPLACE' Syntax `LOAD DATA INFILE' Syntax `DO' Syntax Data Definition: `CREATE', `DROP', `ALTER' `CREATE DATABASE' Syntax `DROP DATABASE' Syntax `CREATE TABLE' Syntax Silent Column Specification Changes `ALTER TABLE' Syntax `RENAME TABLE' Syntax `DROP TABLE' Syntax `CREATE INDEX' Syntax `DROP INDEX' Syntax Basic MySQL User Utility Commands `USE' Syntax `DESCRIBE' Syntax (Get Information About Columns) MySQL Transactional and Locking Commands `BEGIN/COMMIT/ROLLBACK' Syntax `LOCK TABLES/UNLOCK TABLES' Syntax `SET TRANSACTION' Syntax MySQL Full-text Search Full-text Restrictions Fine-tuning MySQL Full-text Search Full-text Search TODO MySQL Query Cache How The Query Cache Operates Query Cache Configuration Query Cache Options in `SELECT' Query Cache Status and Maintenance MySQL Table Types `MyISAM' Tables Space Needed for Keys `MyISAM' Table Formats Static (Fixed-length) Table Characteristics Dynamic Table Characteristics Compressed Table Characteristics `MyISAM' Table Problems Corrupted `MyISAM' Tables Clients is using or hasn't closed the table properly `MERGE' Tables `MERGE' Table Problems `ISAM' Tables `HEAP' Tables `InnoDB' Tables InnoDB Tables Overview InnoDB Startup Options Creating InnoDB Tablespace If Something Goes Wrong in Database Creation Creating InnoDB Tables Converting MyISAM Tables to InnoDB Foreign Key Constraints Adding and Removing InnoDB Data and Log Files Backing up and Recovering an InnoDB Database Checkpoints Moving an InnoDB Database to Another Machine InnoDB Transaction Model Consistent Read Locking Reads Next-key Locking: Avoiding the Phantom Problem Locks Set by Different SQL Statements in InnoDB Deadlock Detection and Rollback An Example of How the Consistent Read Works in InnoDB How to cope with deadlocks? 
Performance Tuning Tips The InnoDB Monitor Implementation of Multi-versioning Table and Index Structures Physical Structure of an Index Insert Buffering Adaptive Hash Indexes Physical Record Structure How an Auto-increment Column Works in InnoDB File Space Management and Disk I/O Disk I/O File Space Management Defragmenting a Table Error Handling Restrictions on InnoDB Tables InnoDB Change History MySQL/InnoDB-4.1.0, April x, 2003 MySQL/InnoDB-3.23.56, March xx, 2003 MySQL/InnoDB-4.0.12, March xx, 2003 MySQL/InnoDB-4.0.11, February 25, 2003 MySQL/InnoDB-4.0.10, February 4, 2003 MySQL/InnoDB-3.23.55, January 24, 2003 MySQL/InnoDB-4.0.9, January 14, 2003 MySQL/InnoDB-4.0.8, January 7, 2003 MySQL/InnoDB-4.0.7, December 26, 2002 MySQL/InnoDB-4.0.6, December 19, 2002 MySQL/InnoDB-3.23.54, December 12, 2002 MySQL/InnoDB-4.0.5, November 18, 2002 MySQL/InnoDB-3.23.53, October 9, 2002 MySQL/InnoDB-4.0.4, October 2, 2002 MySQL/InnoDB-4.0.3, August 28, 2002 MySQL/InnoDB-3.23.52, August 16, 2002 MySQL/InnoDB-4.0.2, July 10, 2002 MySQL/InnoDB-3.23.51, June 12, 2002 MySQL/InnoDB-3.23.50, April 23, 2002 MySQL/InnoDB-3.23.49, February 17, 2002 MySQL/InnoDB-3.23.48, February 9, 2002 MySQL/InnoDB-3.23.47, December 28, 2001 MySQL/InnoDB-4.0.1, December 23, 2001 MySQL/InnoDB-3.23.46, November 30, 2001 MySQL/InnoDB-3.23.45, November 23, 2001 MySQL/InnoDB-3.23.44, November 2, 2001 MySQL/InnoDB-3.23.43, October 4, 2001 MySQL/InnoDB-3.23.42, September 9, 2001 MySQL/InnoDB-3.23.41, August 13, 2001 MySQL/InnoDB-3.23.40, July 16, 2001 MySQL/InnoDB-3.23.39, June 13, 2001 MySQL/InnoDB-3.23.38, May 12, 2001 InnoDB Contact Information `BDB' or `BerkeleyDB' Tables Overview of `BDB' Tables Installing `BDB' `BDB' startup options Characteristics of `BDB' tables: Things we need to fix for `BDB' in the near future: Operating systems supported by `BDB' Restrictions on `BDB' Tables Errors That May Occur When Using `BDB' Tables MySQL APIs MySQL PHP API Common Problems with MySQL and PHP MySQL Perl API `DBI' with `DBD::mysql' The `DBI' Interface More `DBI'/`DBD' Information MySQL ODBC Support How To Install MyODBC How to Fill in the Various Fields in the ODBC Administrator Program Connect parameters for MyODBC How to Report Problems with MyODBC Programs Known to Work with MyODBC How to Get the Value of an `AUTO_INCREMENT' Column in ODBC Reporting Problems with MyODBC MySQL C API C API Datatypes C API Function Overview C API Function Descriptions `mysql_affected_rows()' `mysql_change_user()' `mysql_character_set_name()' `mysql_close()' `mysql_connect()' `mysql_create_db()' `mysql_data_seek()' `mysql_debug()' `mysql_drop_db()' `mysql_dump_debug_info()' `mysql_eof()' `mysql_errno()' `mysql_error()' `mysql_escape_string()' `mysql_fetch_field()' `mysql_fetch_fields()' `mysql_fetch_field_direct()' `mysql_fetch_lengths()' `mysql_fetch_row()' `mysql_field_count()' `mysql_field_seek()' `mysql_field_tell()' `mysql_free_result()' `mysql_get_client_info()' `mysql_get_server_version()' `mysql_get_host_info()' `mysql_get_proto_info()' `mysql_get_server_info()' `mysql_info()' `mysql_init()' `mysql_insert_id()' `mysql_kill()' `mysql_list_dbs()' `mysql_list_fields()' `mysql_list_processes()' `mysql_list_tables()' `mysql_num_fields()' `mysql_num_rows()' `mysql_options()' `mysql_ping()' `mysql_query()' `mysql_real_connect()' `mysql_real_escape_string()' `mysql_real_query()' `mysql_reload()' `mysql_row_seek()' `mysql_row_tell()' `mysql_select_db()' `mysql_shutdown()' `mysql_stat()' `mysql_store_result()' `mysql_thread_id()' `mysql_use_result()' C 
Threaded Function Descriptions `my_init()' `mysql_thread_init()' `mysql_thread_end()' `mysql_thread_safe()' C Embedded Server Function Descriptions `mysql_server_init()' `mysql_server_end()' Common questions and problems when using the C API Why Is It that After `mysql_query()' Returns Success, `mysql_store_result()' Sometimes Returns `NULL'? What Results Can I Get From a Query? How Can I Get the Unique ID for the Last Inserted Row? Problems Linking with the C API Building Client Programs How to Make a Threaded Client libmysqld, the Embedded MySQL Server Library Overview of the Embedded MySQL Server Library Compiling Programs with `libmysqld' Restrictions when using the Embedded MySQL Server Using Option Files with the Embedded Server Things left to do in Embedded Server (TODO) A Simple Embedded Server Example Licensing the Embedded Server MySQL C++ APIs Borland C++ MySQL Java Connectivity (JDBC) MySQL Python APIs MySQL Tcl APIs MySQL Eiffel wrapper Extending MySQL MySQL Internals MySQL Threads MySQL Test Suite Running the MySQL Test Suite Extending the MySQL Test Suite Reporting Bugs in the MySQL Test Suite Adding New Functions to MySQL `CREATE FUNCTION/DROP FUNCTION' Syntax Adding a New User-definable Function UDF Calling Sequences for simple functions UDF Calling Sequences for aggregate functions Argument Processing Return Values and Error Handling Compiling and Installing User-definable Functions Adding a New Native Function Adding New Procedures to MySQL Procedure Analyse Writing a Procedure Problems and Common Errors How to Determine What Is Causing Problems Common Errors When Using MySQL `Access denied' Error `MySQL server has gone away' Error `Can't connect to [local] MySQL server' Error `Host '...' is blocked' Error `Too many connections' Error `Some non-transactional changed tables couldn't be rolled back' Error `Out of memory' Error `Packet too large' Error Communication Errors / Aborted Connection `The table is full' Error `Can't create/write to file' Error `Commands out of sync' Error in Client `Ignoring user' Error `Table 'xxx' doesn't exist' Error `Can't initialize character set xxx' error File Not Found Installation Related Issues Problems When Linking with the MySQL Client Library How to Run MySQL As a Normal User Problems with File Permissions Administration Related Issues What To Do If MySQL Keeps Crashing How to Reset a Forgotten Root Password How MySQL Handles a Full Disk Where MySQL Stores Temporary Files How to Protect or Change the MySQL Socket File `/tmp/mysql.sock' Time Zone Problems Query Related Issues Case-Sensitivity in Searches Problems Using `DATE' Columns Problems with `NULL' Values Problems with `alias' Deleting Rows from Related Tables Solving Problems with No Matching Rows Problems with Floating-Point Comparison Table Definition Related Issues Problems with `ALTER TABLE'. 
How To Change the Order of Columns in a Table TEMPORARY TABLE problems Contributed Programs APIs Converters Utilities Credits Developers at MySQL AB Contributors to MySQL Supporters to MySQL MySQL Change History Changes in release 5.0.0 (Development) Changes in release 4.1.x (Alpha) Changes in release 4.1.0 Changes in release 4.0.x (Gamma) Changes in release 4.0.12 (not released yet) Changes in release 4.0.11 (20 Feb 2003) Changes in release 4.0.10 (29 Jan 2003) Changes in release 4.0.9 (09 Jan 2003) Changes in release 4.0.8 (07 Jan 2003) Changes in release 4.0.7 (20 Dec 2002) Changes in release 4.0.6 (14 Dec 2002: Gamma) Changes in release 4.0.5 (13 Nov 2002) Changes in release 4.0.4 (29 Sep 2002) Changes in release 4.0.3 (26 Aug 2002: Beta) Changes in release 4.0.2 (01 Jul 2002) Changes in release 4.0.1 (23 Dec 2001) Changes in release 4.0.0 (Oct 2001: Alpha) Changes in release 3.23.x (Stable) Changes in release 3.23.57 (not released yet) Changes in release 3.23.56 (13 Mar 2003) Changes in release 3.23.55 (23 Jan 2003) Changes in release 3.23.54 (05 Dec 2002) Changes in release 3.23.53 (09 Oct 2002) Changes in release 3.23.52 (14 Aug 2002) Changes in release 3.23.51 (31 May 2002) Changes in release 3.23.50 (21 Apr 2002) Changes in release 3.23.49 Changes in release 3.23.48 (07 Feb 2002) Changes in release 3.23.47 (27 Dec 2001) Changes in release 3.23.46 (29 Nov 2001) Changes in release 3.23.45 (22 Nov 2001) Changes in release 3.23.44 (31 Oct 2001) Changes in release 3.23.43 (04 Oct 2001) Changes in release 3.23.42 (08 Sep 2001) Changes in release 3.23.41 (11 Aug 2001) Changes in release 3.23.40 Changes in release 3.23.39 (12 Jun 2001) Changes in release 3.23.38 (09 May 2001) Changes in release 3.23.37 (17 Apr 2001) Changes in release 3.23.36 (27 Mar 2001) Changes in release 3.23.35 (15 Mar 2001) Changes in release 3.23.34a Changes in release 3.23.34 (10 Mar 2001) Changes in release 3.23.33 (09 Feb 2001) Changes in release 3.23.32 (22 Jan 2001: Stable) Changes in release 3.23.31 (17 Jan 2001) Changes in release 3.23.30 (04 Jan 2001) Changes in release 3.23.29 (16 Dec 2000) Changes in release 3.23.28 (22 Nov 2000: Gamma) Changes in release 3.23.27 (24 Oct 2000) Changes in release 3.23.26 (18 Oct 2000) Changes in release 3.23.25 (29 Sep 2000) Changes in release 3.23.24 (08 Sep 2000) Changes in release 3.23.23 (01 Sep 2000) Changes in release 3.23.22 (31 Jul 2000) Changes in release 3.23.21 Changes in release 3.23.20 Changes in release 3.23.19 Changes in release 3.23.18 Changes in release 3.23.17 Changes in release 3.23.16 Changes in release 3.23.15 (May 2000: Beta) Changes in release 3.23.14 Changes in release 3.23.13 Changes in release 3.23.12 (07 Mar 2000) Changes in release 3.23.11 Changes in release 3.23.10 Changes in release 3.23.9 Changes in release 3.23.8 (02 Jan 2000) Changes in release 3.23.7 (10 Dec 1999) Changes in release 3.23.6 Changes in release 3.23.5 (20 Oct 1999) Changes in release 3.23.4 (28 Sep 1999) Changes in release 3.23.3 Changes in release 3.23.2 (09 Aug 1999) Changes in release 3.23.1 Changes in release 3.23.0 (05 Aug 1999: Alpha) Changes in release 3.22.x (Old; discontinued) Changes in release 3.22.35 Changes in release 3.22.34 Changes in release 3.22.33 Changes in release 3.22.32 (14 Feb 2000) Changes in release 3.22.31 Changes in release 3.22.30 Changes in release 3.22.29 (02 Jan 2000) Changes in release 3.22.28 (20 Oct 1999) Changes in release 3.22.27 Changes in release 3.22.26 (16 Sep 1999) Changes in release 3.22.25 Changes in release 3.22.24 (05 Jul 1999) 
Changes in release 3.22.23 (08 Jun 1999) Changes in release 3.22.22 (30 Apr 1999) Changes in release 3.22.21 Changes in release 3.22.20 (18 Mar 1999) Changes in release 3.22.19 (Mar 1999: Stable) Changes in release 3.22.18 Changes in release 3.22.17 Changes in release 3.22.16 (Feb 1999: Gamma) Changes in release 3.22.15 Changes in release 3.22.14 Changes in release 3.22.13 Changes in release 3.22.12 Changes in release 3.22.11 Changes in release 3.22.10 Changes in release 3.22.9 Changes in release 3.22.8 Changes in release 3.22.7 (Sep 1998: Beta) Changes in release 3.22.6 Changes in release 3.22.5 Changes in release 3.22.4 Changes in release 3.22.3 Changes in release 3.22.2 Changes in release 3.22.1 (Jun 1998: Alpha) Changes in release 3.22.0 Changes in release 3.21.x Changes in release 3.21.33 Changes in release 3.21.32 Changes in release 3.21.31 Changes in release 3.21.30 Changes in release 3.21.29 Changes in release 3.21.28 Changes in release 3.21.27 Changes in release 3.21.26 Changes in release 3.21.25 Changes in release 3.21.24 Changes in release 3.21.23 Changes in release 3.21.22 Changes in release 3.21.21a Changes in release 3.21.21 Changes in release 3.21.20 Changes in release 3.21.19 Changes in release 3.21.18 Changes in release 3.21.17 Changes in release 3.21.16 Changes in release 3.21.15 Changes in release 3.21.14b Changes in release 3.21.14a Changes in release 3.21.13 Changes in release 3.21.12 Changes in release 3.21.11 Changes in release 3.21.10 Changes in release 3.21.9 Changes in release 3.21.8 Changes in release 3.21.7 Changes in release 3.21.6 Changes in release 3.21.5 Changes in release 3.21.4 Changes in release 3.21.3 Changes in release 3.21.2 Changes in release 3.21.0 Changes in release 3.20.x Changes in release 3.20.18 Changes in release 3.20.17 Changes in release 3.20.16 Changes in release 3.20.15 Changes in release 3.20.14 Changes in release 3.20.13 Changes in release 3.20.11 Changes in release 3.20.10 Changes in release 3.20.9 Changes in release 3.20.8 Changes in release 3.20.7 Changes in release 3.20.6 Changes in release 3.20.3 Changes in release 3.20.0 Changes in release 3.19.x Changes in release 3.19.5 Changes in release 3.19.4 Changes in release 3.19.3 Porting to Other Systems Debugging a MySQL server Compiling MYSQL for Debugging Creating Trace Files Debugging mysqld under gdb Using a Stack Trace Using Log Files to Find Cause of Errors in mysqld Making a Test Case When You Experience Table Corruption Debugging a MySQL client The DBUG Package Locking methods Comments about RTS threads Differences between different thread packages Environment Variables MySQL Regular Expressions GNU General Public License Preamble How to Apply These Terms to Your New Programs GNU Lesser General Public License Preamble How to Apply These Terms to Your New Libraries SQL command, type and function index Concept Index This is the Reference Manual for the `MySQL Database System'. This version refers to the 4.0.12 version of `MySQL Server' but it is also applicable for any older version as changes are always indicated. General Information ******************* The `MySQL (TM)' software delivers a very fast, multi-threaded, multi-user, and robust `SQL' (`Structured Query Language') database server. `MySQL Server' is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software. `MySQL' is a trademark of `MySQL AB'. The `MySQL' software is `Dual Licensed'. 
Users can choose to use the `MySQL' software as an `Open Source'/`Free Software' product under the terms of the `GNU General Public License' (`http://www.gnu.org/licenses/') or can purchase a standard commercial license from `MySQL AB'. *Note Licensing and Support::. The `MySQL' web site (`http://www.mysql.com/') provides the latest information about the `MySQL' software. The following list describes some sections of particular interest in this manual: * For information about the company behind the `MySQL Database Server', see *Note What is MySQL AB::. * For a discussion about the capabilities of the `MySQL Database Server', see *Note Features::. * For installation instructions, see *Note Installing::. * For tips on porting the `MySQL Database Software' to new architectures or operating systems, see *Note Porting::. * For information about upgrading from a Version 3.23 release, see *Note Upgrading-from-3.23::. * For information about upgrading from a Version 3.22 release, see *Note Upgrading-from-3.22::. * For a tutorial introduction to the `MySQL Database Server', see *Note Tutorial::. * For examples of `SQL' and benchmarking information, see the benchmarking directory (`sql-bench' in the distribution). * For a history of new features and bug fixes, see *Note News::. * For a list of currently known bugs and misfeatures, see *Note Bugs::. * For future plans, see *Note TODO::. * For a list of all the contributors to this project, see *Note Credits::. *Important*: Reports of errors (often called bugs), as well as questions and comments, should be sent to the mailing list at . *Note Bug reports::. The `mysqlbug' script should be used to generate bug reports. For source distributions, the `mysqlbug' script can be found in the `scripts' directory. For binary distributions, `mysqlbug' can be found in the `bin' directory (`/usr/bin' for the `MySQL-server' RPM package). If you have found a sensitive security bug in `MySQL Server', you should send an e-mail to . About This Manual ================= This is the `MySQL' reference manual; it documents `MySQL' up to Version 4.0.12. Functional changes are always indicated with reference to the version, so this manual is also suitable if you are using an older version of the `MySQL' software. Being a reference manual, it does not provide general instruction on `SQL' or relational database concepts. As the `MySQL Database Software' is under constant development, the manual is also updated frequently. The most recent version of this manual is available at `http://www.mysql.com/documentation/' in many different formats, including HTML, PDF, and Windows HLP versions. The primary document is the Texinfo file. The HTML version is produced automatically using a modified version of `texi2html'. The plain text and Info versions are produced with `makeinfo'. The PostScript version is produced using `texi2dvi' and `dvips'. The PDF version is produced with `pdftex'. If you have a hard time finding information in the manual, you can try our searchable version at `http://www.mysql.com/doc/'. If you have any suggestions concerning additions or corrections to this manual, please send them to the documentation team at . This manual was initially written by David Axmark and Michael (Monty) Widenius. It is currently maintained by Michael (Monty) Widenius, Arjen Lentz, and Paul DuBois. For other contributors, see *Note Credits::. The copyright (2003) to this manual is owned by the Swedish company `MySQL AB'. *Note Copyright::. 
Conventions Used in This Manual ------------------------------- This manual uses certain typographical conventions: `constant' Constant-width font is used for command names and options; SQL statements; database, table, and column names; C and Perl code; and environment variables. Example: "To see how `mysqladmin' works, invoke it with the `--help' option." `filename' Constant-width font with surrounding quotes is used for filenames and pathnames. Example: "The distribution is installed under the `/usr/local/' directory." `c' Constant-width font with surrounding quotes is also used to indicate character sequences. Example: "To specify a wildcard, use the `%' character." _italic_ Italic font is used for emphasis, _like this_. *boldface* Boldface font is used in table headings and to convey *especially strong emphasis*. When commands are shown that are meant to be executed by a particular program, the program is indicated by a prompt shown before the command. For example, `shell>' indicates a command that you execute from your login shell, and `mysql>' indicates a command that you execute from the `mysql' client program: shell> type a shell command here mysql> type a mysql command here Shell commands are shown using Bourne shell syntax. If you are using a `csh'-style shell, you may need to issue commands slightly differently. For example, the sequence to set an environment variable and run a command looks like this in Bourne shell syntax: shell> VARNAME=value some_command For `csh', you would execute the sequence like this: shell> setenv VARNAME value shell> some_command Often database, table, and column names must be substituted into commands. To indicate that such substitution is necessary, this manual uses `db_name', `tbl_name' and `col_name'. For example, you might see a statement like this: mysql> SELECT col_name FROM db_name.tbl_name; This means that if you were to enter a similar statement, you would supply your own database, table, and column names, perhaps like this: mysql> SELECT author_name FROM biblio_db.author_list; SQL keywords are not case-sensitive and may be written in uppercase or lowercase. This manual uses uppercase. In syntax descriptions, square brackets (`[' and `]') are used to indicate optional words or clauses. For example, in the following statement, `IF EXISTS' is optional: DROP TABLE [IF EXISTS] tbl_name When a syntax element consists of a number of alternatives, the alternatives are separated by vertical bars (`|'). When one member from a set of choices *may* be chosen, the alternatives are listed within square brackets (`[' and `]'): TRIM([[BOTH | LEADING | TRAILING] [remstr] FROM] str) When one member from a set of choices *must* be chosen, the alternatives are listed within braces (`{' and `}'): {DESCRIBE | DESC} tbl_name {col_name | wild} What Is MySQL? ============== `MySQL', the most popular `Open Source' SQL database, is developed, distributed and supported by `MySQL AB'. `MySQL AB' is a commercial company founded by the MySQL developers that builds its business providing services around the `MySQL' database. *Note What is MySQL AB::. The `MySQL' web site (`http://www.mysql.com/') provides the latest information about `MySQL' software and `MySQL AB'. `MySQL' is a database management system. A database is a structured collection of data. It may be anything from a simple shopping list to a picture gallery or the vast amounts of information in a corporate network. 
To add, access, and process data stored in a computer database, you need a database management system such as `MySQL' Server. Since computers are very good at handling large amounts of data, database management systems play a central role in computing, as stand-alone utilities or as parts of other applications.

MySQL is a relational database management system. A relational database stores data in separate tables rather than putting all the data in one big storeroom. This adds speed and flexibility. The tables are linked by defined relations, making it possible to combine data from several tables on request. The `SQL' part of "`MySQL'" stands for "`Structured Query Language'", the most common standardised language used to access databases.

MySQL software is `Open Source'. `Open Source' means that it is possible for anyone to use and modify the software. Anybody can download the `MySQL' software from the Internet and use it without paying anything. Anybody so inclined can study the source code and change it to fit their needs. The `MySQL' software uses the `GPL' (`GNU General Public License'), `http://www.gnu.org/licenses/', to define what you may and may not do with the software in different situations. If you feel uncomfortable with the `GPL' or need to embed `MySQL' code into a commercial application, you can buy a commercially licensed version from us. *Note MySQL licenses::.

Why use the MySQL Database Server?

The `MySQL Database Server' is very fast, reliable, and easy to use. If that is what you are looking for, you should give it a try. `MySQL Server' also has a practical set of features developed in close cooperation with our users. You can find a performance comparison of `MySQL Server' to some other database managers on our benchmark page. *Note MySQL Benchmarks::. `MySQL Server' was originally developed to handle large databases much faster than existing solutions and has been successfully used in highly demanding production environments for several years. Though under constant development, `MySQL Server' today offers a rich and useful set of functions. Its connectivity, speed, and security make `MySQL Server' highly suited for accessing databases on the Internet.

The technical features of MySQL Server

For advanced technical information, see *Note Reference::. The `MySQL Database Software' is a client/server system that consists of a multi-threaded `SQL' server that supports different backends, several different client programs and libraries, administrative tools, and a wide range of programming interfaces (`API's). We also provide `MySQL Server' as a multi-threaded library which you can link into your application to get a smaller, faster, easier-to-manage product. There is a large amount of contributed MySQL software available. It is very likely that you will find that your favorite application or language already supports the `MySQL Database Server'. The official way to pronounce `MySQL' is "My Ess Que Ell" (not "my sequel"), but we don't mind if you pronounce it as "my sequel" or in some other localised way.

History of MySQL
----------------

We once started out with the intention of using `mSQL' to connect to our tables using our own fast low-level (ISAM) routines. However, after some testing we came to the conclusion that `mSQL' was not fast enough nor flexible enough for our needs. This resulted in a new SQL interface to our database but with almost the same API interface as `mSQL'. This API was chosen to ease porting of third-party code. The derivation of the name `MySQL' is not perfectly clear.
Our base directory and a large number of our libraries and tools have had the prefix "my" for well over 10 years. However, Monty's daughter (some years younger) is also named My. Which of the two gave its name to `MySQL' is still a mystery, even for us. The Main Features of MySQL -------------------------- The following list describes some of the important characteristics of the `MySQL Database Software'. *Note MySQL 4.0 In A Nutshell::. Internals and Portability * Written in C and C++. Tested with a broad range of different compilers. * Works on many different platforms. *Note Which OS::. * Uses GNU Automake, Autoconf, and Libtool for portability. * APIs for C, C++, Eiffel, Java, Perl, PHP, Python, Ruby, and Tcl. *Note Clients::. * Fully multi-threaded using kernel threads. This means it can easily use multiple CPUs if available. * Very fast B-tree disk tables with index compression. * A very fast thread-based memory allocation system. * Very fast joins using an optimised one-sweep multi-join. * In-memory hash tables which are used as temporary tables. * SQL functions are implemented through a highly optimised class library and should be as fast as possible! Usually there isn't any memory allocation at all after query initialisation. * The `MySQL' code gets tested with Purify (a commercial memory leakage detector) as well as with Valgrind, a GPL tool (`http://developer.kde.org/~sewardj/'). Column Types * Many column types: signed/unsigned integers 1, 2, 3, 4, and 8 bytes long, `FLOAT', `DOUBLE', `CHAR', `VARCHAR', `TEXT', `BLOB', `DATE', `TIME', `DATETIME', `TIMESTAMP', `YEAR', `SET', and `ENUM' types. *Note Column types::. * Fixed-length and variable-length records. * All columns have default values. You can use `INSERT' to insert a subset of a table's columns; those columns that are not explicitly given values are set to their default values. Commands and Functions * Full operator and function support in the `SELECT' and `WHERE' parts of queries. For example: mysql> SELECT CONCAT(first_name, " ", last_name) -> FROM tbl_name -> WHERE income/dependents > 10000 AND age > 30; * Full support for SQL `GROUP BY' and `ORDER BY' clauses with expressions. Support for group functions (`COUNT()', `COUNT(DISTINCT ...)', `AVG()', `STD()', `SUM()', `MAX()', and `MIN()'). * Support for `LEFT OUTER JOIN' and `RIGHT OUTER JOIN' with ANSI SQL and ODBC syntax. * Aliases on tables and columns are allowed as in the SQL92 standard. * `DELETE', `INSERT', `REPLACE', and `UPDATE' return the number of rows that were changed (affected). It is possible to return the number of rows matched instead by setting a flag when connecting to the server. * The `MySQL'-specific `SHOW' command can be used to retrieve information about databases, tables, and indexes. The `EXPLAIN' command can be used to determine how the optimiser resolves a query. * Function names do not clash with table or column names. For example, `ABS' is a valid column name. The only restriction is that for a function call, no spaces are allowed between the function name and the `(' that follows it. *Note Reserved words::. * You can mix tables from different databases in the same query (as of Version 3.22). Security * A privilege and password system that is very flexible and secure, and allows host-based verification. Passwords are secure because all password traffic is encrypted when you connect to a server. Scalability and Limits * Handles large databases. 
We are using `MySQL Server' with some databases that contain 50 million records and we know of users who run `MySQL Server' with 60,000 tables and about 5,000,000,000 rows.
   * Up to 32 indexes per table are allowed. Each index may consist of 1 to 16 columns or parts of columns. The maximum index width is 500 bytes (this may be changed when compiling `MySQL Server'). An index may use a prefix of a `CHAR' or `VARCHAR' field.

Connectivity
   * Clients may connect to the `MySQL' server using TCP/IP Sockets, Unix Sockets (Unix), or Named Pipes (NT).
   * `ODBC' (Open-DataBase-Connectivity) support for Win32 (with source). All ODBC 2.5 functions and many others. For example, you can use MS Access to connect to your `MySQL' server. *Note ODBC::.

Localisation
   * The server can provide error messages to clients in many languages. *Note Languages::.
   * Full support for several different character sets, including ISO-8859-1 (Latin1), german, big5, ujis, and more. For example, the Scandinavian characters 'å', 'ä' and 'ö' are allowed in table and column names.
   * All data is saved in the chosen character set. All comparisons for normal string columns are case-insensitive.
   * Sorting is done according to the chosen character set (the Swedish way by default). It is possible to change this when the `MySQL' server is started. To see an example of very advanced sorting, look at the Czech sorting code. `MySQL Server' supports many different character sets that can be specified at compile time and runtime.

Clients and Tools
   * Includes `myisamchk', a very fast utility for table checking, optimisation, and repair. All of the functionality of `myisamchk' is also available through the SQL interface. *Note MySQL Database Administration::.
   * All `MySQL' programs can be invoked with the `--help' or `-?' options to obtain online assistance.

How Stable Is MySQL?
--------------------

This section addresses the questions "_How stable is MySQL Server?_" and "_Can I depend on MySQL Server in this project?_" We will try to clarify these issues and answer some important questions that concern many potential users. The information in this section is based on data gathered from the mailing list, which is very active in identifying problems as well as reporting types of use. The original code stems from the early 1980s, providing a stable code base, and the ISAM table format remains backward-compatible. At TcX, the predecessor of `MySQL AB', `MySQL' code has worked in projects since mid-1996, without any problems. When the `MySQL Database Software' was released to a wider public, we noticed that there were some pieces of "untested code" that were quickly found by new users who issued types of queries different from ours. Each new release has had fewer portability problems (even though each new release has had many new features). Each release of the `MySQL Server' has been usable. There have only been problems when users try code from the "gray zones." Naturally, new users don't know what the gray zones are; this section attempts to indicate those that are currently known. The descriptions mostly deal with Version 3.23 of `MySQL Server'. All known and reported bugs are fixed in the latest version, with the exception of those listed in the bugs section, which are design-related. *Note Bugs::. The `MySQL Server' design is multi-layered with independent modules.
Some of the newer modules are listed here with an indication of how well-tested each of them is:

*Replication - Gamma*
     Large server clusters using replication are in production use, with good results. Work on enhanced replication features is continuing in `MySQL' 4.x.

*`InnoDB' tables - Stable (in 3.23 from 3.23.49)*
     The `InnoDB' transactional storage engine has been declared stable in the `MySQL' 3.23 tree, starting from version 3.23.49. `InnoDB' is being used in large, heavy-load production systems.

*`BDB' tables - Gamma*
     The `Berkeley DB' code is very stable, but we are still improving the `BDB' transactional storage engine interface in `MySQL Server', so it will take some time before this is as well tested as the other table types.

*`FULLTEXT' - Beta*
     Full-text search works but is not yet widely used. Important enhancements are being implemented for `MySQL' 4.0.

*`MyODBC 2.50' (uses ODBC SDK 2.5) - Gamma*
     Increasingly in wide use. Some issues brought up appear to be application-related and independent of the ODBC driver or underlying database server.

*Automatic recovery of `MyISAM' tables - Gamma*
     This status regards only the new code in the `MyISAM' storage engine that checks on open whether the table was closed properly and executes an automatic check/repair of the table if it wasn't.

*Bulk-insert - Alpha*
     New feature in `MyISAM' tables in `MySQL' 4.0 for faster insertion of many rows.

*Locking - Gamma*
     This is very system-dependent. On some systems there are big problems using standard OS locking (`fcntl()'). In these cases, you should run `mysqld' with the `--skip-external-locking' flag. Problems are known to occur on some Linux systems, and on SunOS when using NFS-mounted filesystems.

`MySQL AB' provides high-quality support for paying customers, but the `MySQL' mailing list usually provides answers to common questions. Bugs are usually fixed right away with a patch; for serious bugs, there is almost always a new release.

How Big Can MySQL Tables Be?
----------------------------

`MySQL' Version 3.22 has a 4G limit on table size. With the new `MyISAM' table type in `MySQL' Version 3.23, the maximum table size is pushed up to 8 million terabytes (2^63 bytes). Note, however, that operating systems have their own file-size limits. Here are some examples:

*Operating System*          *File-Size Limit*
Linux-Intel 32 bit          2G, 4G or more, depends on Linux version
Linux-Alpha                 8T (?)
Solaris 2.5.1               2G (possible 4G with patch)
Solaris 2.6                 4G (can be changed with flag)
Solaris 2.7 Intel           4G
Solaris 2.7 UltraSPARC      512G

On Linux 2.2 you can get tables bigger than 2G by using the LFS patch for the ext2 filesystem. On Linux 2.4, patches also exist for ReiserFS to get support for big files. This means that the table size for `MySQL' databases is normally limited by the operating system. By default, `MySQL' tables have a maximum size of about 4G. You can check the maximum table size for a table with the `SHOW TABLE STATUS' command or with `myisamchk -dv table_name'. *Note SHOW::. If you need tables bigger than 4G (and your operating system supports this), you should set the `AVG_ROW_LENGTH' and `MAX_ROWS' parameters when you create your table. *Note CREATE TABLE::. You can also set these later with `ALTER TABLE'. *Note ALTER TABLE::. If your big table is going to be read-only, you could use `myisampack' to merge and compress many tables into one. `myisampack' usually compresses a table by at least 50%, so you can have, in effect, much bigger tables. *Note `myisampack': myisampack.
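To make the preceding options concrete, here is a minimal sketch of sizing a large `MyISAM' table and checking the resulting limit. The `big_log' table and its columns are hypothetical, and the exact limits you see depend on your `MySQL' version and filesystem:

     mysql> # "big_log" is a hypothetical table sized for about 10^9 rows of ~200 bytes each
     mysql> CREATE TABLE big_log (
         ->     id  INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
         ->     msg VARCHAR(200)
         -> ) TYPE=MyISAM MAX_ROWS=1000000000 AVG_ROW_LENGTH=200;
     mysql> # the sizing hints can be changed later without recreating the table
     mysql> ALTER TABLE big_log MAX_ROWS=2000000000 AVG_ROW_LENGTH=100;
     mysql> # inspect the effective limit for this table
     mysql> SHOW TABLE STATUS LIKE 'big_log';

If things behave as described above, the `Max_data_length' column in the `SHOW TABLE STATUS' output (or the corresponding value printed by `myisamchk -dv') reflects the limit computed from these options.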
You can go around the operating system file limit for `MyISAM' data files by using the `RAID' option. *Note CREATE TABLE::. Another solution can be the included `MERGE' library, which allows you to handle a collection of identical tables as one. *Note `MERGE' tables: MERGE. Year 2000 Compliance -------------------- The `MySQL Server' itself has no problems with Year 2000 (Y2K) compliance: * `MySQL Server' uses Unix time functions and has no problems with dates until `2069'; all 2-digit years are regarded to be in the range `1970' to `2069', which means that if you store `01' in a `year' column, `MySQL Server' treats it as `2001'. * All `MySQL' date functions are stored in one file, `sql/time.cc', and are coded very carefully to be year 2000-safe. * In `MySQL' Version 3.22 and later, the new `YEAR' column type can store years `0' and `1901' to `2155' in 1 byte and display them using 2 or 4 digits. You may run into problems with applications that use `MySQL Server' in a way that is not Y2K-safe. For example, many old applications store or manipulate years using 2-digit values (which are ambiguous) rather than 4-digit values. This problem may be compounded by applications that use values such as `00' or `99' as "missing" value indicators. Unfortunately, these problems may be difficult to fix because different applications may be written by different programmers, each of whom may use a different set of conventions and date-handling functions. Here is a simple demonstration illustrating that `MySQL Server' doesn't have any problems with dates until the year 2030: mysql> DROP TABLE IF EXISTS y2k; Query OK, 0 rows affected (0.01 sec) mysql> CREATE TABLE y2k (date DATE, -> date_time DATETIME, -> time_stamp TIMESTAMP); Query OK, 0 rows affected (0.00 sec) mysql> INSERT INTO y2k VALUES -> ("1998-12-31","1998-12-31 23:59:59",19981231235959), -> ("1999-01-01","1999-01-01 00:00:00",19990101000000), -> ("1999-09-09","1999-09-09 23:59:59",19990909235959), -> ("2000-01-01","2000-01-01 00:00:00",20000101000000), -> ("2000-02-28","2000-02-28 00:00:00",20000228000000), -> ("2000-02-29","2000-02-29 00:00:00",20000229000000), -> ("2000-03-01","2000-03-01 00:00:00",20000301000000), -> ("2000-12-31","2000-12-31 23:59:59",20001231235959), -> ("2001-01-01","2001-01-01 00:00:00",20010101000000), -> ("2004-12-31","2004-12-31 23:59:59",20041231235959), -> ("2005-01-01","2005-01-01 00:00:00",20050101000000), -> ("2030-01-01","2030-01-01 00:00:00",20300101000000), -> ("2050-01-01","2050-01-01 00:00:00",20500101000000); Query OK, 13 rows affected (0.01 sec) Records: 13 Duplicates: 0 Warnings: 0 mysql> SELECT * FROM y2k; +------------+---------------------+----------------+ | date | date_time | time_stamp | +------------+---------------------+----------------+ | 1998-12-31 | 1998-12-31 23:59:59 | 19981231235959 | | 1999-01-01 | 1999-01-01 00:00:00 | 19990101000000 | | 1999-09-09 | 1999-09-09 23:59:59 | 19990909235959 | | 2000-01-01 | 2000-01-01 00:00:00 | 20000101000000 | | 2000-02-28 | 2000-02-28 00:00:00 | 20000228000000 | | 2000-02-29 | 2000-02-29 00:00:00 | 20000229000000 | | 2000-03-01 | 2000-03-01 00:00:00 | 20000301000000 | | 2000-12-31 | 2000-12-31 23:59:59 | 20001231235959 | | 2001-01-01 | 2001-01-01 00:00:00 | 20010101000000 | | 2004-12-31 | 2004-12-31 23:59:59 | 20041231235959 | | 2005-01-01 | 2005-01-01 00:00:00 | 20050101000000 | | 2030-01-01 | 2030-01-01 00:00:00 | 20300101000000 | | 2050-01-01 | 2050-01-01 00:00:00 | 00000000000000 | +------------+---------------------+----------------+ 13 rows in set 
(0.00 sec) This shows that the `DATE' and `DATETIME' types will not give any problems with future dates (they handle dates until the year 9999). The `TIMESTAMP' type, which is used to store the current time, has a range up to only `2030-01-01'. `TIMESTAMP' has a range of `1970' to `2030' on 32-bit machines (signed value). On 64-bit machines it handles times up to `2106' (unsigned value). Even though `MySQL Server' is Y2K-compliant, it is your responsibility to provide unambiguous input. See *Note Y2K issues:: for `MySQL Server''s rules for dealing with ambiguous date input data (data containing 2-digit year values). What Is MySQL AB? ================= `MySQL AB' is the company of the `MySQL' founders and main developers. `MySQL AB' was originally established in Sweden by David Axmark, Allan Larsson, and Michael `Monty' Widenius. All the developers of the `MySQL' server are employed by the company. We are a virtual organisation with people in a dozen countries around the world. We communicate extensively over the Net every day with each other and with our users, supporters and partners. We are dedicated to developing the `MySQL' software and spreading our database to new users. `MySQL AB' owns the copyright to the `MySQL' source code, the `MySQL' logo and trademark, and this manual. *Note What-is::. The `MySQL' core values show our dedication to `MySQL' and `Open Source'. We want the `MySQL Database Software' to be: * The best and the most widely used database in the world. * Available and affordable for all. * Easy to use. * Continuously improving while remaining fast and safe. * Fun to use and improve. * Free from bugs. `MySQL AB' and the people at `MySQL AB': * Promote `Open Source' philosophy and support the `Open Source' community. * Aim to be good citizens. * Prefer partners that share our values and mind-set. * Answer e-mail and provide support. * Are a virtual company, networking with others. * Work against software patents. The `MySQL' web site (`http://www.mysql.com/') provides the latest information about `MySQL' and `MySQL AB'. The Business Model and Services of MySQL AB ------------------------------------------- One of the most common questions we encounter is: "_How can you make a living from something you give away for free?_" This is how. `MySQL AB' makes money on support, services, commercial licenses, and royalties, and we use these revenues to fund product development and to expand the `MySQL' business. The company has been profitable since its inception. In October 2001, we accepted venture financing from leading Scandinavian investors and a handful of business angels. This investment is used to solidify our business model and build a basis for sustainable growth. Support ....... `MySQL AB' is run and owned by the founders and main developers of the `MySQL' database. The developers are committed to giving support to customers and other users in order to stay in touch with their needs and problems. All our support is given by qualified developers. Really tricky questions are answered by Michael `Monty' Widenius, principal author of the `MySQL Server'. *Note Support::. For more information and ordering support at various levels, see `http://www.mysql.com/support/' or contact our sales staff at . Training and Certification .......................... `MySQL AB' delivers `MySQL' and related training worldwide. We offer both open courses and in-house courses tailored to the specific needs of your company. 
`MySQL Training' is also available through our partners, the `Authorised MySQL Training Centers'. Our training material uses the same example databases as our documentation and our sample applications, and it is always updated to reflect the latest `MySQL' version. Our trainers are backed by the development team to guarantee the quality of the training and the continuous development of the course material. This also ensures that no questions raised during the courses remain unanswered. Attending our training courses will enable you to achieve your goals related to your `MySQL' applications. You will also: * Save time. * Improve the performance of your application(s). * Reduce or eliminate the need for additional hardware, decreasing cost. * Enhance security. * Increase customers' and co-workers' satisfaction. * Prepare yourself for `MySQL Certification'. If you are interested in our training as a potential participant or as a training partner, please visit the training section at `http://www.mysql.com/training/' or contact us at: . For details about the `MySQL Certification Program', please see `http://www.mysql.com/certification/'. Consulting .......... `MySQL AB' and its `Authorised Partners' offer consulting services to users of `MySQL Server' and to those who embed `MySQL Server' in their own software, all over the world. Our consultants can help you design and tune your databases, construct efficient queries, tune your platform for optimal performance, resolve migration issues, set up replication, build robust transactional applications, and more. We also help customers embed `MySQL Server' in their products and applications for large-scale deployment. Our consultants work in close collaboration with our development team, which ensures the technical quality of our professional services. Consulting assignments range from 2-day power-start sessions to projects that span weeks and months. Our expertise not only covers `MySQL Server', but also extends into programming and scripting languages such as PHP, Perl, and more. If you are interested in our consulting services or want to become a consulting partner, please visit the consulting section of our web site at `http://www.mysql.com/consulting/' or contact our consulting staff at . Commercial Licenses ................... The `MySQL' database is released under the `GNU General Public License' (`GPL'). This means that the `MySQL' software can be used free of charge under the `GPL'. If you do not want to be bound by the `GPL' terms (like the requirement that your own application becomes `GPL' as well), you may purchase a commercial license for the same product from `MySQL AB'. See `http://www.mysql.com/products/pricing.html'. Since `MySQL AB' owns the copyright to the `MySQL' source code, we are able to employ `Dual Licensing' which means that the same product is available under `GPL' and under a commercial license. This does not in any way affect the `Open Source' commitment of `MySQL AB'. For details about when a commercial license is required, please see *Note MySQL licenses::. We also sell commercial licenses of third-party `Open Source GPL' software that adds value to `MySQL Server'. A good example is the `InnoDB' transactional storage engine that offers `ACID' support, row-level locking, crash recovery, multi-versioning, foreign key support, and more. *Note InnoDB::. Partnering .......... 
`MySQL AB' has a worldwide partner programme that covers training courses, consulting & support, publications plus reselling and distributing `MySQL' and related products. `MySQL AB Partners' get visibility on the `http://www.mysql.com/' web site and the right to use special versions of the `MySQL' trademarks to identify their products and promote their business. If you are interested in becoming a `MySQL AB Partner', please e-mail . The word `MySQL' and the `MySQL' dolphin logo are trademarks of `MySQL AB'. *Note MySQL AB Logos and Trademarks::. These trademarks represent a significant value that the `MySQL' founders have built over the years. Advertising ........... The `MySQL' web site (`http://www.mysql.com/') is popular among developers and users. In October 2001, we served 10 million page views. Our visitors represent a group that makes purchase decisions and recommendations for both software and hardware. Twelve percent of our visitors authorise purchase decisions, and only nine percent are not involved in purchase decisions at all. More than 65% have made one or more online business purchase within the last half-year, and 70% plan to make one in the next months. Contact Information ------------------- The `MySQL' web site (`http://www.mysql.com/') provides the latest information about `MySQL' and `MySQL AB'. For press service and inquiries not covered in our News releases (`http://www.mysql.com/news/'), please send e-mail to . If you have a valid support contract with `MySQL AB', you will get timely, precise answers to your technical questions about the `MySQL' software. For more information, see *Note Support::. On our website, see `http://www.mysql.com/support/', or send an e-mail message to . For information about `MySQL' training, please visit the training section at `http://www.mysql.com/training/'. If you have restricted access to the Internet, please contact the `MySQL AB' training staff at . *Note Business Services Training::. For information on the `MySQL Certification Program', please see `http://www.mysql.com/certification/'. *Note Business Services Training::. If you're interested in consulting, please visit the consulting section at `http://www.mysql.com/consulting/'. If you have restricted access to the Internet, please contact the `MySQL AB' consulting staff at . *Note Business Services Consulting::. Commercial licenses may be purchased online at `https://order.mysql.com/'. There you will also find information on how to fax your purchase order to `MySQL AB'. More information about licensing can be found at `http://www.mysql.com/products/pricing.html'. If you have questions regarding licensing or you want a quote for a high-volume license deal, please fill in the contact form on our web site (`http://www.mysql.com/') or send an e-mail message to (for licensing questions) or to (for sales inquiries). *Note MySQL licenses::. If you represent a business that is interested in partnering with `MySQL AB', please send e-mail to . *Note Business Services Partnering::. For more information on the `MySQL' trademark policy, refer to `http://www.mysql.com/company/trademark.html' or send e-mail to . *Note MySQL AB Logos and Trademarks::. If you are interested in any of the `MySQL AB' jobs listed in our jobs section (`http://www.mysql.com/company/jobs/'), please send an e-mail message to . Please do not send your CV as an attachment, but rather as plain text at the end of your e-mail message. 
For general discussion among our many users, please direct your attention to the appropriate mailing list. *Note Questions::. Reports of errors (often called bugs), as well as questions and comments, should be sent to the mailing list at . If you have found a sensitive security bug in the `MySQL Server', please send an e-mail to . *Note Bug reports::. If you have benchmark results that we can publish, please contact us at . If you have any suggestions concerning additions or corrections to this manual, please send them to the manual team at . For questions or comments about the workings or content of the `MySQL' web site (`http://www.mysql.com/'), please send e-mail to . `MySQL AB' has a privacy policy, which can be read at `http://www.mysql.com/company/privacy.html'. For any queries regarding this policy, please e-mail . For all other inquires, please send e-mail to . MySQL Support and Licensing =========================== This section describes `MySQL' support and licensing arrangements. Support Offered by MySQL AB --------------------------- Technical support from `MySQL AB' means individualised answers to your unique problems direct from the software engineers who code the `MySQL' database engine. We try to take a broad and inclusive view of technical support. Almost any problem involving `MySQL' software is important to us if it's important to you. Typically customers seek help on how to get different commands and utilities to work, remove performance bottlenecks, restore crashed systems, understand operating system or networking impacts on `MySQL', set up best practices for backup and recovery, utilise `API's, etc. Our support covers only the `MySQL' server and our own utilities, not third-party products that access the `MySQL' server, though we try to help with these where we can. Detailed information about our various support options is given at `http://www.mysql.com/support/', where support contracts can also be ordered online. If you have restricted access to the Internet, contact our sales staff at . Technical support is like life insurance. You can live happily without it for years, but when your hour arrives it becomes critically important, yet it's too late to buy it! If you use `MySQL' Server for important applications and encounter sudden troubles, it might take too long to figure out all the answers yourself. You may need immediate access to the most experienced `MySQL' troubleshooters available, those employed by `MySQL AB'. Copyrights and Licenses Used by MySQL ------------------------------------- `MySQL AB' owns the copyright to the `MySQL' source code, the `MySQL' logos and trademarks and this manual. *Note What is MySQL AB::. Several different licenses are relevant to the `MySQL' distribution: 1. All the `MySQL'-specific source in the server, the `mysqlclient' library and the client, as well as the `GNU' `readline' library is covered by the `GNU General Public License'. *Note GPL license::. The text of this license can also be found as the file `COPYING' in the distributions. 2. The `GNU' `getopt' library is covered by the `GNU Lesser General Public License'. *Note LGPL license::. 3. Some parts of the source (the `regexp' library) are covered by a Berkeley-style copyright. 4. Older versions of `MySQL' (3.22 and earlier) are subject to a more strict license (`http://www.mysql.com/products/mypl.html'). See the documentation of the specific version for information. 5. The manual is currently *not* distributed under a `GPL'-style license. 
Use of the manual is subject to the following terms: * Conversion to other formats is allowed, but the actual content may not be altered or edited in any way. * You may create a printed copy for your own personal use. * For all other uses, such as selling printed copies or using (parts of) the manual in another publication, prior written agreement from `MySQL AB' is required. Please e-mail for more information or if you are interested in doing a translation. For information about how the `MySQL' licenses work in practice, please refer to *Note MySQL licenses::. Also see *Note MySQL AB Logos and Trademarks::. MySQL Licenses -------------- The `MySQL' software is released under the `GNU General Public License' (`GPL'), which probably is the best known `Open Source' license. The formal terms of the `GPL' license can be found at `http://www.gnu.org/licenses/'. See also `http://www.gnu.org/licenses/gpl-faq.html' and `http://www.gnu.org/philosophy/enforcing-gpl.html'. Since the `MySQL' software is released under the `GPL', it may often be used for free, but for certain uses you may want or need to buy commercial licenses from `MySQL AB' at `https://order.mysql.com/'. See `http://www.mysql.com/products/licensing.html' for more information. Older versions of `MySQL' (3.22 and earlier) are subject to a more strict license (`http://www.mysql.com/products/mypl.html'). See the documentation of the specific version for information. Please note that the use of the `MySQL' software under commercial license, `GPL', or the old `MySQL' license does not automatically give you the right to use `MySQL AB' trademarks. *Note MySQL AB Logos and Trademarks::. Using the MySQL Software Under a Commercial License ................................................... The `GPL' license is contagious in the sense that when a program is linked to a `GPL' program all the source code for all the parts of the resulting product must also be released under the `GPL'. Otherwise you break the license terms and forfeit your right to use the `GPL' program altogether and also risk damages. You need a commercial license: * When you link a program with any `GPL' code from the `MySQL' software and don't want the resulting product to be `GPL', maybe because you want to build a commercial product or keep the added non-`GPL' code closed source for other reasons. When purchasing commercial licenses, you are not using the `MySQL' software under `GPL' even though it's the same code. * When you distribute a non-`GPL' application that *only* works with the `MySQL' software and ship it with the `MySQL' software. This type of solution is actually considered to be linking even if it's done over a network. * When you distribute copies of the `MySQL' software without providing the source code as required under the `GPL' license. * When you want to support the further development of the `MySQL' database even if you don't formally need a commercial license. Purchasing support directly from `MySQL AB' is another good way of contributing to the development of the `MySQL' software, with immediate advantages for you. *Note Support::. If you require a license, you will need one for each installation of the `MySQL' software. This covers any number of CPUs on a machine, and there is no artificial limit on the number of clients that connect to the server in any way. For commercial licenses, please visit our website at `http://www.mysql.com/products/licensing.html'. For support contracts, see `http://www.mysql.com/support/'. 
If you have special needs or you have restricted access to the Internet, please contact our sales staff at . Using the MySQL Software for Free Under GPL ........................................... You can use the `MySQL' software for free under the `GPL' if you adhere to the conditions of the `GPL'. For more complete coverage of the common questions about the `GPL' see the generic FAQ from the Free Software Foundation at `http://www.gnu.org/licenses/gpl-faq.html'. Some common cases: * When you distribute both your own application as well as the `MySQL' source code under the `GPL' with your product. * When you distribute the `MySQL' source code bundled with other programs that are not linked to or dependent on the `MySQL' system for their functionality even if you sell the distribution commercially. This is called mere aggregation in the `GPL' license. * If you are not distributing *any* part of the `MySQL' system, you can use it for free. * When you are an Internet Service Provider (ISP), offering web hosting with `MySQL' servers for your customers. However, we do encourage people to use ISPs that have MySQL support, as this will give them the confidence that if they have some problem with the `MySQL' installation, their ISP will in fact have the resources to solve the problem for them. Note that even if an ISP does not have a commercial license for `MySQL Server', they should at least give their customers read access to the source of the `MySQL' installation so that the customers can verify that it is patched correctly. * When you use the `MySQL' Database Software in conjunction with a web server, you do not need a commercial license (so long as it is not a product you distribute). This is true even if you run a commercial web server that uses `MySQL Server', because you are not distributing any part of the `MySQL' system. However, in this case we would like you to purchase `MySQL' support because the `MySQL' software is helping your enterprise. If your use of `MySQL' database software does not require a commercial license, we encourage you to purchase support from `MySQL AB' anyway. This way you contribute toward `MySQL' development and also gain immediate advantages for yourself. *Note Support::. If you use the `MySQL' database software in a commercial context such that you profit by its use, we ask that you further the development of the `MySQL' software by purchasing some level of support. We feel that if the `MySQL' database helps your business, it is reasonable to ask that you help `MySQL AB'. (Otherwise, if you ask us support questions, you are not only using for free something into which we've put a lot a work, you're asking us to provide free support, too.) MySQL AB Logos and Trademarks ----------------------------- Many users of the `MySQL' database want to display the `MySQL AB' dolphin logo on their web sites, books, or boxed products. We welcome and encourage this, although it should be noted that the word `MySQL' and the `MySQL' dolphin logo are trademarks of `MySQL AB' and may only be used as stated in our trademark policy at `http://www.mysql.com/company/trademark.html'. The Original MySQL Logo ....................... The `MySQL' dolphin logo was designed by the Finnish advertising agency Priority in 2001. The dolphin was chosen as a suitable symbol for the `MySQL' database since it is a smart, fast, and lean animal, effortlessly navigating oceans of data. We also happen to like dolphins. 
The original `MySQL' logo may only be used by representatives of `MySQL AB' and by those having a written agreement allowing them to do so. MySQL Logos that may be Used Without Written Permission ....................................................... We have designed a set of special _Conditional Use_ logos that may be downloaded from our web site at `http://www.mysql.com/press/logos.html' and used on third-party web sites without written permission from `MySQL AB'. The use of these logos is not entirely unrestricted but, as the name implies, subject to our trademark policy that is also available on our web site. You should read through the trademark policy if you plan to use them. The requirements are basically: * Use the logo you need as displayed on the `http://www.mysql.com/' site. You may scale it to fit your needs, but not change colours or design, or alter the graphics in any way. * Make it evident that you, and not `MySQL AB', are the creator and owner of the site that displays the `MySQL' trademark. * Don't use the trademark in a way that is detrimental to `MySQL AB' or to the value of `MySQL AB' trademarks. We reserve the right to revoke the right to use the `MySQL AB' trademark. * If you use the trademark on a web site, make it clickable, leading directly to `http://www.mysql.com/'. * If you are using the `MySQL' database under `GPL' in an application, your application must be `Open Source' and be able to connect to a `MySQL' server. Contact us at to inquire about special arrangements to fit your needs. When do you need a Written Permission to use MySQL Logos? ......................................................... In the following cases you need a written permission from `MySQL AB' before using `MySQL' logos: * When displaying any `MySQL AB' logo anywhere except on your web site. * When displaying any `MySQL AB' logo except the _Conditional Use_ logos mentioned previously on web sites or elsewhere. Out of legal and commercial reasons we have to monitor the use of MySQL trademarks on products, books, etc. We will usually require a fee for displaying `MySQL AB' logos on commercial products, since we think it is reasonable that some of the revenue is returned to fund further development of the `MySQL' database. MySQL AB Partnership Logos .......................... `MySQL' partnership logos may only be used by companies and persons having a written partnership agreement with `MySQL AB'. Partnerships include certification as a `MySQL' trainer or consultant. Please see *Note Partnering: Business Services Partnering. Using the word `MySQL' in Printed Text or Presentations ....................................................... `MySQL AB' welcomes references to the `MySQL' database, but note that the word `MySQL' is a trademark of `MySQL AB'. Because of this, you should append the trademark symbol (`TM') to the first or most prominent use of the word `MySQL' in a text and where appropriate, state that `MySQL' is a trademark of `MySQL AB'. Please refer to our trademark policy at `http://www.mysql.com/company/trademark.html' for details. Using the word `MySQL' in Company and Product Names ................................................... Use of the word `MySQL' in product or company names or in Internet domain names is not allowed without written permission from `MySQL AB'. 
MySQL 4.x In A Nutshell
=======================

Long promised by `MySQL AB' and long awaited by our users, MySQL Server 4.0 is now available in beta version for download from `http://www.mysql.com/' and our mirrors. The main new features of MySQL Server 4.0 are geared toward our existing business and community users, enhancing the MySQL database software as the solution for mission-critical, heavy-load database systems. Other new features target the users of embedded databases.

Stepwise Rollout
----------------

Starting from version 4.0.6, MySQL 4.0 is labelled gamma, which means that the 4.0.x series has been available for more than two months (first in alpha and then in beta) without any serious, hard-to-fix bugs being found, and should now be ready for production use. We will drop the gamma label when MySQL 4.0 has been out for more than a month without any serious bugs. Further new features are being added in MySQL 4.1, which is now available from our BitKeeper (bk) source tree and is targeted for an alpha release in the first quarter of 2003. *Note Installing source tree::.

Ready for Immediate Use
-----------------------

All binary releases pass our extensive test suite without any errors on any of the platforms we test on. MySQL 4.0 has been tested by a large number of users and is in production use at several big sites.

Embedded MySQL
--------------

`libmysqld' makes MySQL Server suitable for a vastly expanded realm of applications. Using the embedded MySQL server library, one can embed MySQL Server into various applications and electronic devices, where the end user has no knowledge of there actually being an underlying database. Embedded MySQL Server is ideal for use behind the scenes in Internet appliances, public kiosks, turnkey hardware/software combination units, high-performance Internet servers, self-contained databases distributed on CD-ROM, etc.

Many users of `libmysqld' will benefit from the MySQL _Dual Licensing_. For those not wishing to be bound by the GPL, the software is also made available under a commercial license. The embedded MySQL library uses the same interface as the normal client library, so it is convenient and easy to use. *Note libmysqld::.

Other Features Available From MySQL 4.0
---------------------------------------

* Version 4.0 further increases _the speed of MySQL Server_ in a number of areas, such as bulk `INSERT's, searching on packed indexes, creation of `FULLTEXT' indexes, and `COUNT(DISTINCT)'.

* The `InnoDB' storage engine is now offered as a feature of the standard MySQL server, including full support for `transactions' and `row-level locking'.

* Our German, Austrian, and Swiss users will note that we have a new character set, `latin1_de', which corrects the _German sorting order_, placing German umlauts in the same order as German telephone books.

* Features to simplify migration from other database systems to MySQL Server include `TRUNCATE TABLE' (as in Oracle) and `IDENTITY' as a synonym for automatically incremented keys (as in Sybase). Many users will also be happy to learn that MySQL Server now supports the `UNION' statement, a long-awaited standard SQL feature.

* In the process of building features for new users, we have not forgotten requests by the community of loyal users. We have multi-table `DELETE' and `UPDATE' statements. By adding support for `symbolic linking' to `MyISAM' at the table level (and not just the database level as before), as well as by enabling symlink handling by default on Windows, we hope to show that we take enhancement requests seriously.
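For illustration, here is a brief sketch of how the `UNION' statement and a multi-table `DELETE' might look in practice; the table and column names below are hypothetical and not taken from the manual's sample databases:

     SELECT user_id FROM customers_2001
     UNION
     SELECT user_id FROM customers_2002;

     DELETE customers_2001, customers_2002
     FROM customers_2001, customers_2002
     WHERE customers_2001.user_id = customers_2002.user_id;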
Functions like `SQL_CALC_FOUND_ROWS' and `FOUND_ROWS()' make it possible to know how many rows a query would have returned without a `LIMIT' clause. Future MySQL 4.x Features ------------------------- For the upcoming MySQL Server 4.x releases, expect the following features now still under development: * Mission-critical, heavy-load users of MySQL Server will appreciate the additions to our replication system and our online hot backup. Later versions of 4.x will include `fail-safe replication'; already existing in 4.0, the `LOAD DATA FROM MASTER' command will soon automate slave setup. The `online backup' will make it easy to add a new replication slave without taking down the master, and have a very low performance penalty on update-heavy systems. * A convenience feature for Database Administrators is that `mysqld' parameters (startup options) can soon be set without taking down the servers. * The new `FULLTEXT' search properties of MySQL Server 4.0 enable the use of `FULLTEXT' indexing of large text masses with both binary and natural-language searching logic. Users can customise minimal word length and define their own stop word lists in any human language, enabling a new set of applications to be built on MySQL Server. * Many read-heavy applications will benefit from further increased speed through the rewritten `key cache'. * Many developers will also be happy to see the `MySQL command help' in the client. MySQL 4.1, The Following Development Release -------------------------------------------- MySQL Server 4.0 lays the foundation for the new features of MySQL Server 4.1 and onward, such as `nested subqueries' (4.1), `stored procedures' (5.0), and `foreign key integrity rules' for `MyISAM' tables (5.0), which form the top of the wish list for many of our customers. After those additions, critics of the MySQL Database Server have to be more imaginative than ever in pointing out deficiencies in the MySQL Database Management System. For long already known for its stability, speed, and ease of use, MySQL Server will then match the requirement checklist of very demanding buyers. MySQL Information Sources ========================= MySQL Mailing Lists ------------------- This section introduces you to the MySQL mailing lists, and gives some guidelines as to how to use them. By subscribing to a mailing list, you will receive as e-mail messages all other postings on the list, and you will be able to send in your own questions and answers. The MySQL Mailing Lists ....................... To subscribe to the main MySQL mailing list, send a message to the electronic mail address . To unsubscribe from the main MySQL mailing list, send a message to the electronic mail address . Only the address to which you send your messages is significant. The subject line and the body of the message are ignored. If your reply address is not valid, you can specify your address explicitly, by adding a hyphen to the subscribe or unsubscribe command word, followed by your address with the `@' character in your address replaced by a `='. For example, to subscribe `your_name@host.domain', send a message to `mysql-subscribe-your_name=host.domain@lists.mysql.com'. Mail to or is handled automatically by the ezmlm mailing list processor. Information about ezmlm is available at the ezmlm web site (`http://www.ezmlm.org/'). To post a message to the list itself, send your message to `mysql@lists.mysql.com'. 
However, please *do not* send mail about subscribing or unsubscribing to because any mail sent to that address is distributed automatically to thousands of other users. Your local site may have many subscribers to . If so, it may have a local mailing list, so messages sent from `lists.mysql.com' to your site are propagated to the local list. In such cases, please contact your system administrator to be added to or dropped from the local MySQL list.

If you wish to have traffic for a mailing list go to a separate mailbox in your mail program, set up a filter based on the message headers. You can use either the `List-ID:' or `Delivered-To:' headers to identify list messages.

The following MySQL mailing lists exist:

`announce'
     This is for announcements of new versions of MySQL and related programs. This is a low-volume list all MySQL users should subscribe to.

`mysql'
     The main list for general MySQL discussion. Please note that some topics are better discussed on the more specialised lists. If you post to the wrong list, you may not get an answer!

`mysql-digest'
     The `mysql' list in digest form. That means you get all individual messages, sent as one large mail message once a day.

`bugs'
     On this list you should only post a full, repeatable bug report using the `mysqlbug' script (if you are running on Windows, you should include a description of the operating system and the MySQL version). Preferably, you should test the problem using the latest stable or development version of MySQL Server before posting! Anyone should be able to repeat the bug by just using `mysql test < script' on the included test case. All bugs posted on this list will be corrected or documented in the next MySQL release! If only small code changes are needed, we will also post a patch that fixes the problem.

`bugs-digest'
     The `bugs' list in digest form.

`internals'
     A list for people who work on the MySQL code. On this list one can also discuss MySQL development and post patches.

`internals-digest'
     A digest version of the `internals' list.

`java'
     Discussion about the MySQL server and Java, mostly about the JDBC drivers, including MySQL Connector/J.

`java-digest'
     A digest version of the `java' list.

`win32'
     All things concerning the MySQL software on Microsoft operating systems such as Windows 9x/Me/NT/2000/XP.

`win32-digest'
     A digest version of the `win32' list.

`myodbc'
     All things about connecting to the MySQL server with ODBC.

`myodbc-digest'
     A digest version of the `myodbc' list.

`mysqlcc'
     All things about the `MySQL Control Center' graphical client.

`mysqlcc-digest'
     A digest version of the `mysqlcc' list.

`plusplus'
     All things concerning programming with the C++ API to MySQL.

`plusplus-digest'
     A digest version of the `plusplus' list.

`msql-mysql-modules'
     A list about the Perl support for MySQL with msql-mysql-modules.

`msql-mysql-modules-digest'
     A digest version of the `msql-mysql-modules' list.

You subscribe or unsubscribe to all lists in the same way as described previously. In your subscribe or unsubscribe message, just put the appropriate mailing list name rather than `mysql'. For example, to subscribe to or unsubscribe from the `myodbc' list, send a message to or .

If you can't get an answer to your questions from the mailing list, one option is to pay for support from MySQL AB, which will put you in direct contact with MySQL developers. *Note Support::.

The following table shows some MySQL mailing lists in languages other than English.
Note that these lists are not operated by MySQL AB, so we can't guarantee their quality.

` A French mailing list'

` A Korean mailing list'
     E-mail `subscribe mysql your@e-mail.address' to this list.

` A German mailing list'
     E-mail `subscribe mysql-de your@e-mail.address' to this list. You can find information about this mailing list at `http://www.4t2.com/mysql/'.

` A Portuguese mailing list'
     E-mail `subscribe mysql-br your@e-mail.address' to this list.

` A Spanish mailing list'
     E-mail `subscribe mysql your@e-mail.address' to this list.

Asking Questions or Reporting Bugs
..................................

Before posting a bug report or question, please do the following:

* Start by searching the MySQL online manual at `http://www.mysql.com/doc/'. We try to keep the manual up to date by updating it frequently with solutions to newly found problems! The change history appendix (`http://www.mysql.com/doc/en/News.html') can be particularly useful since it is quite possible that a newer version already contains a solution to your problem.

* Search the bugs database at `http://bugs.mysql.com' to see whether the bug has already been reported and solved.

* Search the MySQL mailing list archives at `http://lists.mysql.com/'.

* You can also use `http://www.mysql.com/search/' to search all the web pages (including the manual) that are located at `http://www.mysql.com/'.

If you can't find an answer in the manual or the archives, check with your local MySQL expert. If you still can't find an answer to your question, go ahead and read the next section about how to send mail to .

How to Report Bugs or Problems
..............................

Writing a good bug report takes patience, but doing it right the first time saves time for us and for you. A good bug report containing a full test case for the bug will make it very likely that we will fix it in the next release. This section will help you write your report correctly so that you don't waste your time doing things that may not help us much or at all.

We encourage everyone to use the `mysqlbug' script to generate a bug report (or a report about any problem), if possible. `mysqlbug' can be found in the `scripts' directory in the source distribution, or, for a binary distribution, in the `bin' directory under your MySQL installation directory. If you are unable to use `mysqlbug', you should still include all the necessary information listed in this section.

The `mysqlbug' script helps you generate a report by determining much of the following information automatically, but if something important is missing, please include it with your message! Please read this section carefully and make sure that all the information described here is included in your report.

The normal place to report bugs and problems is . If you can make a test case that clearly demonstrates the bug, you should post it to the list. Note that on this list you should only post a full, repeatable bug report using the `mysqlbug' script. If you are running on Windows, you should include a description of the operating system and the MySQL version. Preferably, you should test the problem using the latest stable or development version of MySQL Server before posting! Anyone should be able to repeat the bug by just using "`mysql test < script'" on the included test case or by running the shell or Perl script that is included in the bug report. All bugs posted on the `bugs' list will be corrected or documented in the next MySQL release!
If only small code changes are needed to correct this problem, we will also post a patch that fixes the problem. If you have found a sensitive security bug in MySQL, you should send an e-mail to . Remember that it is possible to respond to a message containing too much information, but not to one containing too little. Often people omit facts because they think they know the cause of a problem and assume that some details don't matter. A good principle is: if you are in doubt about stating something, state it! It is a thousand times faster and less troublesome to write a couple of lines more in your report than to be forced to ask again and wait for the answer because you didn't include enough information the first time. The most common errors are that people don't indicate the version number of the MySQL distribution they are using, or don't indicate what platform they have the MySQL server installed on (including the platform version number). This is highly relevant information, and in 99 cases out of 100 the bug report is useless without it! Very often we get questions like, "Why doesn't this work for me?" Then we find that the feature requested wasn't implemented in that MySQL version, or that a bug described in a report has been fixed already in newer MySQL versions. Sometimes the error is platform-dependent; in such cases, it is next to impossible to fix anything without knowing the operating system and the version number of the platform. Remember also to provide information about your compiler, if it is related to the problem. Often people find bugs in compilers and think the problem is MySQL-related. Most compilers are under development all the time and become better version by version. To determine whether your problem depends on your compiler, we need to know what compiler is used. Note that every compiling problem should be regarded as a bug report and reported accordingly. It is most helpful when a good description of the problem is included in the bug report. That is, a good example of all the things you did that led to the problem and the problem itself exactly described. The best reports are those that include a full example showing how to reproduce the bug or problem. *Note Reproduceable test case::. If a program produces an error message, it is very important to include the message in your report! If we try to search for something from the archives using programs, it is better that the error message reported exactly matches the one that the program produces. (Even the case should be observed!) You should never try to remember what the error message was; instead, copy and paste the entire message into your report! If you have a problem with MyODBC, you should try to generate a MyODBC trace file. *Note MyODBC bug report::. Please remember that many of the people who will read your report will do so using an 80-column display. When generating reports or examples using the `mysql' command-line tool, you should therefore use the `--vertical' option (or the `\G' statement terminator) for output that would exceed the available width for such a display (for example, with the `EXPLAIN SELECT' statement; see the example later in this section). Please include the following information in your report: * The version number of the MySQL distribution you are using (for example, MySQL Version 3.22.22). You can find out which version you are running by executing `mysqladmin version'. `mysqladmin' can be found in the `bin' directory under your MySQL installation directory. 
* The manufacturer and model of the machine you are working on. * The operating system name and version. For most operating systems, you can get this information by executing the Unix command `uname -a'. * Sometimes the amount of memory (real and virtual) is relevant. If in doubt, include these values. * If you are using a source distribution of the MySQL software, the name and version number of the compiler used is needed. If you have a binary distribution, the distribution name is needed. * If the problem occurs during compilation, include the exact error message(s) and also a few lines of context around the offending code in the file where the error occurred. * If `mysqld' died, you should also report the query that crashed `mysqld'. You can usually find this out by running `mysqld' with logging enabled. *Note Using log files::. * If any database table is related to the problem, include the output from `mysqldump --no-data db_name tbl_name1 tbl_name2 ...'. This is very easy to do and is a powerful way to get information about any table in a database that will help us create a situation matching the one you have. * For speed-related bugs or problems with `SELECT' statements, you should always include the output of `EXPLAIN SELECT ...', and at least the number of rows that the `SELECT' statement produces. You should also include the output from `SHOW CREATE TABLE table_name' for each involved table. The more information you give about your situation, the more likely it is that someone can help you! For example, the following is an example of a very good bug report (it should of course be posted with the `mysqlbug' script): Example run using the `mysql' command-line tool (note the use of the `\G' statement terminator for statements whose output width would otherwise exceed that of an 80-column display device): mysql> SHOW VARIABLES; mysql> SHOW COLUMNS FROM ...\G mysql> EXPLAIN SELECT ...\G mysql> FLUSH STATUS; mysql> SELECT ...; mysql> SHOW STATUS; * If a bug or problem occurs while running `mysqld', try to provide an input script that will reproduce the anomaly. This script should include any necessary source files. The more closely the script can reproduce your situation, the better. If you can make a reproduceable test case, you should post this to for a high-priority treatment! If you can't provide a script, you should at least include the output from `mysqladmin variables extended-status processlist' in your mail to provide some information of how your system is performing! * If you can't produce a test case in a few rows, or if the test table is too big to be mailed to the mailing list (more than 10 rows), you should dump your tables using `mysqldump' and create a `README' file that describes your problem. Create a compressed archive of your files using `tar' and `gzip' or `zip', and use `ftp' to transfer the archive to `ftp://support.mysql.com/pub/mysql/secret/'. Then send a short description of the problem to . * If you think that the MySQL server produces a strange result from a query, include not only the result, but also your opinion of what the result should be, and an account describing the basis for your opinion. * When giving an example of the problem, it's better to use the variable names, table names, etc., that exist in your actual situation than to come up with new names. The problem could be related to the name of a variable or table! These cases are rare, perhaps, but it is better to be safe than sorry. 
After all, it should be easier for you to provide an example that uses your actual situation, and it is by all means better for us. In case you have data you don't want to show to others, you can use `ftp' to transfer it to `ftp://support.mysql.com/pub/mysql/secret/'. If the data is really top secret and you don't want to show it even to us, then go ahead and provide an example using other names, but please regard this as the last choice. * Include all the options given to the relevant programs, if possible. For example, indicate the options that you use when you start the `mysqld' daemon and that you use to run any MySQL client programs. The options to programs like `mysqld' and `mysql', and to the `configure' script, are often keys to answers and are very relevant! It is never a bad idea to include them anyway! If you use any modules, such as Perl or PHP, please include the version number(s) of those as well. * If your question is related to the privilege system, please include the output of `mysqlaccess', the output of `mysqladmin reload', and all the error messages you get when trying to connect! When you test your privileges, you should first run `mysqlaccess'. After this, execute `mysqladmin reload version' and try to connect with the program that gives you trouble. `mysqlaccess' can be found in the `bin' directory under your MySQL installation directory. * If you have a patch for a bug, that is good. But don't assume the patch is all we need, or that we will use it, if you don't provide some necessary information such as test cases showing the bug that your patch fixes. We might find problems with your patch or we might not understand it at all; if so, we can't use it. If we can't verify exactly what the patch is meant for, we won't use it. Test cases will help us here. Show that the patch will handle all the situations that may occur. If we find a borderline case (even a rare one) where the patch won't work, it may be useless. * Guesses about what the bug is, why it occurs, or what it depends on are usually wrong. Even the MySQL team can't guess such things without first using a debugger to determine the real cause of a bug. * Indicate in your mail message that you have checked the reference manual and mail archive so that others know you have tried to solve the problem yourself. * If you get a `parse error', please check your syntax closely! If you can't find something wrong with it, it's extremely likely that your current version of MySQL Server doesn't support the query you are using. If you are using the current version and the manual at `http://www.mysql.com/doc/' doesn't cover the syntax you are using, MySQL Server doesn't support your query. In this case, your only options are to implement the syntax yourself or e-mail and ask for an offer to implement it! If the manual covers the syntax you are using, but you have an older version of MySQL Server, you should check the MySQL change history to see when the syntax was implemented. In this case, you have the option of upgrading to a newer version of MySQL Server. *Note News::. * If you have a problem such that your data appears corrupt or you get errors when you access some particular table, you should first check and then try repairing your tables with `myisamchk' or `CHECK TABLE' and `REPAIR TABLE'. *Note MySQL Database Administration::. * If you often get corrupted tables you should try to find out when and why this happens. 
In this case, the `mysql-data-directory/'hostname'.err' file may contain some information about what happened. *Note Error log::. Please include any relevant information from this file in your bug report. Normally `mysqld' should *never* crash a table if nothing killed it in the middle of an update! If you can find the cause of `mysqld' dying, it's much easier for us to provide you with a fix for the problem. *Note What is crashing::. * If possible, download and install the most recent version of MySQL Server and check whether it solves your problem. All versions of the MySQL software are thoroughly tested and should work without problems. We believe in making everything as backward-compatible as possible, and you should be able to switch MySQL versions without any hassle. *Note Which version::. If you are a support customer, please cross-post the bug report to for higher-priority treatment, as well as to the appropriate mailing list to see if someone else has experienced (and perhaps solved) the problem. For information on reporting bugs in `MyODBC', see *Note ODBC Problems::. For solutions to some common problems, see *Note Problems::. When answers are sent to you individually and not to the mailing list, it is considered good etiquette to summarise the answers and send the summary to the mailing list so that others may have the benefit of responses you received that helped you solve your problem! Guidelines for Answering Questions on the Mailing List ...................................................... If you consider your answer to have broad interest, you may want to post it to the mailing list instead of replying directly to the individual who asked. Try to make your answer general enough that people other than the original poster may benefit from it. When you post to the list, please make sure that your answer is not a duplication of a previous answer. Try to summarise the essential part of the question in your reply; don't feel obliged to quote the entire original message. Please don't post mail messages from your browser with HTML mode turned on! Many users don't read mail with a browser! MySQL Community Support on IRC (Internet Relay Chat) ---------------------------------------------------- In addition to the various MySQL mailing lists, you can find experienced community people on `IRC' (`Internet Relay Chat'). These are the best networks/channels currently known to us: * *freenode* (see `http://www.freenode.net/' for servers) * `#mysql' Primarily MySQL questions but other database and SQL questions welcome. * `#mysqlphp' Questions about MySQL+PHP, a popular combo. * *EFnet* (see `http://www.efnet.org/' for servers) * `#mysql' MySQL questions. If you are looking for IRC client software to connect to an IRC network, take a peek at `X-Chat' (`http://www.xchat.org/'). X-Chat is available for Unix as well as for Windows platforms. How Standards-compatible Is MySQL? ================================== This section describes how MySQL relates to the ANSI SQL standards. MySQL Server has many extensions to the ANSI SQL standards, and here you will find out what they are and how to use them. You will also find information about functionality missing from MySQL Server, and how to work around some differences. Our goal is to not, without a very good reason, restrict MySQL Server usability for any usage. 
Even if we don't have the resources to do development for every possible use, we are always willing to help and offer suggestions to people who are trying to use MySQL Server in new territories. One of our main goals with the product is to continue to work toward ANSI 99 compliancy, but without sacrificing speed or reliability. We are not afraid to add extensions to SQL or support for non-SQL features if this greatly increases the usability of MySQL Server for a big part of our users. (The new `HANDLER' interface in MySQL Server 4.0 is an example of this strategy. *Note `HANDLER': HANDLER.) We will continue to support transactional and non-transactional databases to satisfy both heavy web/logging usage and mission-critical 24/7 usage. MySQL Server was designed from the start to work with medium size databases (10-100 million rows, or about 100 MB per table) on small computer systems. We will continue to extend MySQL Server to work even better with terabyte-size databases, as well as to make it possible to compile a reduced MySQL version that is more suitable for hand-held devices and embedded usage. The compact design of the MySQL server makes both of these directions possible without any conflicts in the source tree. We are currently not targeting realtime support or clustered databases (even if you can already do a lot of things with our replication services). We don't believe that one should have native XML support in the database, but will instead add the XML support our users request from us on the client side. We think it's better to keep the main server code as "lean and clean" as possible and instead develop libraries to deal with the complexity on the client side. This is part of the strategy mentioned previously of not sacrificing speed or reliability in the server. What Standards Does MySQL Follow? --------------------------------- Entry-level SQL92. ODBC levels 0-3.51. We are aiming toward supporting the full ANSI SQL99 standard, but without concessions to speed and quality of the code. Running MySQL in ANSI Mode -------------------------- If you start `mysqld' with the `--ansi' option, the following behaviour of MySQL Server changes: * `||' is string concatenation instead of `OR'. * You can have any number of spaces between a function name and the `('. This forces all function names to be treated as reserved words. * `"' will be an identifier quote character (like the MySQL Server ``' quote character) and not a string quote character. * `REAL' will be a synonym for `FLOAT' instead of a synonym for `DOUBLE'. * The default transaction isolation level is `SERIALIZABLE'. *Note SET TRANSACTION::. * You can use a field/expression in `GROUP BY' that is not in the field list. This is the same as using `--sql-mode=REAL_AS_FLOAT,PIPES_AS_CONCAT,ANSI_QUOTES, IGNORE_SPACE,SERIALIZE,ONLY_FULL_GROUP_BY'. MySQL Extensions to ANSI SQL92 ------------------------------ MySQL Server includes some extensions that you probably will not find in other SQL databases. Be warned that if you use them, your code will not be portable to other SQL servers. In some cases, you can write code that includes MySQL extensions, but is still portable, by using comments of the form `/*! ... */'. In this case, MySQL Server will parse and execute the code within the comment as it would any other MySQL statement, but other SQL servers will ignore the extensions. For example: SELECT /*! STRAIGHT_JOIN */ col_name FROM table1,table2 WHERE ... 
If you add a version number after the `'!'', the syntax will be executed only if the MySQL version is equal to or newer than the used version number: CREATE /*!32302 TEMPORARY */ TABLE t (a int); This means that if you have Version 3.23.02 or newer, MySQL Server will use the `TEMPORARY' keyword. The following is a list of MySQL extensions: * The field types `MEDIUMINT', `SET', `ENUM', and the different `BLOB' and `TEXT' types. * The field attributes `AUTO_INCREMENT', `BINARY', `NULL', `UNSIGNED', and `ZEROFILL'. * All string comparisons are case-insensitive by default, with sort ordering determined by the current character set (ISO-8859-1 Latin1 by default). If you don't like this, you should declare your columns with the `BINARY' attribute or use the `BINARY' cast, which causes comparisons to be done according to the ASCII order used on the MySQL server host. * MySQL Server maps each database to a directory under the MySQL data directory, and tables within a database to filenames in the database directory. This has a few implications: - Database names and table names are case-sensitive in MySQL Server on operating systems that have case-sensitive filenames (like most Unix systems). *Note Name case sensitivity::. - Database, table, index, column, or alias names may begin with a digit (but may not consist solely of digits). - You can use standard system commands to back up, rename, move, delete, and copy tables. For example, to rename a table, rename the `.MYD', `.MYI', and `.frm' files to which the table corresponds. * In SQL statements, you can access tables from different databases with the `db_name.tbl_name' syntax. Some SQL servers provide the same functionality but call this `User space'. MySQL Server doesn't support tablespaces as in: `create table ralph.my_table...IN my_tablespace'. * `LIKE' is allowed on numeric columns. * Use of `INTO OUTFILE' and `STRAIGHT_JOIN' in a `SELECT' statement. *Note `SELECT': SELECT. * The `SQL_SMALL_RESULT' option in a `SELECT' statement. * `EXPLAIN SELECT' to get a description on how tables are joined. * Use of index names, indexes on a prefix of a field, and use of `INDEX' or `KEY' in a `CREATE TABLE' statement. *Note `CREATE TABLE': CREATE TABLE. * Use of `TEMPORARY' or `IF NOT EXISTS' with `CREATE TABLE'. * Use of `COUNT(DISTINCT list)' where `list' is more than one element. * Use of `CHANGE col_name', `DROP col_name', or `DROP INDEX', `IGNORE' or `RENAME' in an `ALTER TABLE' statement. *Note `ALTER TABLE': ALTER TABLE. * Use of `RENAME TABLE'. *Note `RENAME TABLE': RENAME TABLE. * Use of multiple `ADD', `ALTER', `DROP', or `CHANGE' clauses in an `ALTER TABLE' statement. * Use of `DROP TABLE' with the keywords `IF EXISTS'. * You can drop multiple tables with a single `DROP TABLE' statement. * The `LIMIT' clause of the `DELETE' statement. * The `DELAYED' clause of the `INSERT' and `REPLACE' statements. * The `LOW_PRIORITY' clause of the `INSERT', `REPLACE', `DELETE', and `UPDATE' statements. * Use of `LOAD DATA INFILE'. In many cases, this syntax is compatible with Oracle's `LOAD DATA INFILE'. *Note `LOAD DATA': LOAD DATA. * The `ANALYZE TABLE', `CHECK TABLE', `OPTIMIZE TABLE', and `REPAIR TABLE' statements. * The `SHOW' statement. *Note `SHOW': SHOW. * Strings may be enclosed by either `"' or `'', not just by `''. * Use of the escape `\' character. * The `SET' statement. *Note `SET': SET OPTION. * You don't need to name all selected columns in the `GROUP BY' part. This gives better performance for some very specific, but quite normal queries. 
*Note Group by functions::.

* One can specify `ASC' and `DESC' with `GROUP BY'.

* To make it easier for users who come from other SQL environments, MySQL Server supports aliases for many functions. For example, all string functions support both ANSI SQL syntax and ODBC syntax.

* MySQL Server understands the `||' and `&&' operators to mean logical OR and AND, as in the C programming language. In MySQL Server, `||' and `OR' are synonyms, as are `&&' and `AND'. Because of this nice syntax, MySQL Server doesn't support the ANSI SQL `||' operator for string concatenation; use `CONCAT()' instead. Because `CONCAT()' takes any number of arguments, it's easy to convert use of the `||' operator to MySQL Server.

* `CREATE DATABASE' or `DROP DATABASE'. *Note `CREATE DATABASE': CREATE DATABASE.

* The `%' operator is a synonym for `MOD()'. That is, `N % M' is equivalent to `MOD(N,M)'. `%' is supported for C programmers and for compatibility with PostgreSQL.

* The `=', `<>', `<=', `<', `>=', `>', `<<', `>>', `<=>', `AND', `OR', or `LIKE' operators may be used in column comparisons to the left of the `FROM' in `SELECT' statements. For example:

     mysql> SELECT col1=1 AND col2=2 FROM tbl_name;

* The `LAST_INSERT_ID()' function. *Note `mysql_insert_id()': mysql_insert_id.

* The `REGEXP' and `NOT REGEXP' extended regular expression operators.

* `CONCAT()' or `CHAR()' with one argument or more than two arguments. (In MySQL Server, these functions can take any number of arguments.)

* The `BIT_COUNT()', `CASE', `ELT()', `FROM_DAYS()', `FORMAT()', `IF()', `PASSWORD()', `ENCRYPT()', `MD5()', `ENCODE()', `DECODE()', `PERIOD_ADD()', `PERIOD_DIFF()', `TO_DAYS()', or `WEEKDAY()' functions.

* Use of `TRIM()' to trim substrings. ANSI SQL only supports removal of single characters.

* The `GROUP BY' functions `STD()', `BIT_OR()', and `BIT_AND()'.

* Use of `REPLACE' instead of `DELETE' + `INSERT'. *Note `REPLACE': REPLACE.

* The `FLUSH', `RESET' and `DO' statements.

* The ability to set variables in a statement with `:=':

     SELECT @a:=SUM(total),@b:=COUNT(*),@a/@b AS avg FROM test_table;
     SELECT @t1:=(@t2:=1)+@t3:=4,@t1,@t2,@t3;

MySQL Differences Compared to ANSI SQL92
----------------------------------------

We try to make MySQL Server follow the ANSI SQL standard and the ODBC SQL standard, but in some cases MySQL Server does things differently:

* For `VARCHAR' columns, trailing spaces are removed when the value is stored. *Note Bugs::.

* In some cases, `CHAR' columns are silently changed to `VARCHAR' columns. *Note Silent column changes::.

* Privileges for a table are not automatically revoked when you delete a table. You must explicitly issue a `REVOKE' to revoke privileges for a table. *Note `GRANT': GRANT.

* `NULL AND FALSE' will evaluate to `NULL' and not to `FALSE'. This is because we don't think it's good to have to evaluate a lot of extra conditions in this case.

For a prioritised list indicating when new extensions will be added to MySQL Server, you should consult the online MySQL TODO list at `http://www.mysql.com/doc/en/TODO.html'. That is the latest version of the TODO list in this manual. *Note TODO::.

Sub`SELECT's
............

Up to and including version 4.0, MySQL Server supports only nested queries of the form `INSERT ... SELECT ...' and `REPLACE ... SELECT ...'. You can, however, use the function `IN()' in other contexts. Subqueries have been implemented in the 4.1 development tree.
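For example, the nested forms that are already supported in 4.0 and earlier can be used to copy rows from one table into another, and `IN()' can still be used with a list of literal values; the table and column names below are hypothetical and serve only as a sketch:

     INSERT INTO archived_orders (order_id, created)
     SELECT order_id, created FROM orders WHERE created < '2002-01-01';

     SELECT * FROM orders WHERE status IN ('open','pending');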
Meanwhile, you can often rewrite the query without a subquery: SELECT * FROM table1 WHERE id IN (SELECT id FROM table2); This can be rewritten as: SELECT table1.* FROM table1,table2 WHERE table1.id=table2.id; The queries: SELECT * FROM table1 WHERE id NOT IN (SELECT id FROM table2); SELECT * FROM table1 WHERE NOT EXISTS (SELECT id FROM table2 WHERE table1.id=table2.id); Can be rewritten as: SELECT table1.* FROM table1 LEFT JOIN table2 ON table1.id=table2.id WHERE table2.id IS NULL; For more complicated subqueries you can often create temporary tables to hold the subquery. In some cases, however, this option will not work. The most frequently encountered of these cases arises with `DELETE' statements, for which standard SQL does not support joins (except in subqueries). For this situation there are two options available until subqueries are supported by MySQL Server. The first option is to use a procedural programming language (such as Perl or PHP) to submit a `SELECT' query to obtain the primary keys for the records to be deleted, and then use these values to construct the `DELETE' statement (`DELETE FROM ... WHERE ... IN (key1, key2, ...)'). The second option is to use interactive SQL to construct a set of `DELETE' statements automatically, using the MySQL extension `CONCAT()' (in lieu of the standard `||' operator). For example: SELECT CONCAT('DELETE FROM tab1 WHERE pkid = ', "'", tab1.pkid, "'", ';') FROM tab1, tab2 WHERE tab1.col1 = tab2.col2; You can place this query in a script file and redirect input from it to the `mysql' command-line interpreter, piping its output back to a second instance of the interpreter: shell> mysql --skip-column-names mydb < myscript.sql | mysql mydb MySQL Server 4.0 supports multi-table deletes that can be used to efficiently delete rows based on information from one table or even from many tables at the same time. `SELECT INTO TABLE' ................... MySQL Server doesn't yet support the Oracle SQL extension: `SELECT ... INTO TABLE ...'. MySQL Server supports instead the ANSI SQL syntax `INSERT INTO ... SELECT ...', which is basically the same thing. *Note INSERT SELECT::. INSERT INTO tblTemp2 (fldID) SELECT tblTemp1.fldOrder_ID FROM tblTemp1 WHERE tblTemp1.fldOrder_ID > 100; Alternatively, you can use `SELECT INTO OUTFILE...' or `CREATE TABLE ... SELECT'. Transactions and Atomic Operations .................................. MySQL Server supports transactions with the `InnoDB' and `BDB' `Transactional table handlers'. *Note Table types::. `InnoDB' provides `ACID' compliancy. However, the non-transactional table types in MySQL Server such as `MyISAM' follow another paradigm for data integrity called "`Atomic Operations'." Atomic operations often offer equal or even better integrity with much better performance. With MySQL Server supporting both paradigms, the user is able to decide if he needs the speed of atomic operations or if he need to use transactional features in his applications. This choice can be made on a per-table basis. How does one use the features of MySQL Server to maintain rigorous integrity and how do these features compare with the transactional paradigm? 1. In the transactional paradigm, if your applications are written in a way that is dependent on the calling of `ROLLBACK' instead of `COMMIT' in critical situations, transactions are more convenient. 
Transactions also ensure that unfinished updates or corrupting activities are not committed to the database; the server is given the opportunity to do an automatic rollback and your database is saved. MySQL Server, in almost all cases, allows you to resolve potential problems by including simple checks before updates and by running simple scripts that check the databases for inconsistencies and automatically repair or warn if such an inconsistency occurs. Note that just by using the MySQL log or even adding one extra log, one can normally fix tables perfectly with no data integrity loss. 2. More often than not, fatal transactional updates can be rewritten to be atomic. Generally speaking, all integrity problems that transactions solve can be done with `LOCK TABLES' or atomic updates, ensuring that you will never get an automatic abort from the database, which is a common problem with transactional databases. 3. Even a transactional system can lose data if the server goes down. The difference between different systems lies in just how small the window of time is in which they could lose data. No system is 100% secure, only "secure enough." Even Oracle, reputed to be the safest of transactional databases, is reported to sometimes lose data in such situations. To be safe with MySQL Server, whether using transactional tables or not, you only need to have backups and have the update logging turned on. With this you can recover from any situation that you could with any other transactional database. It is, of course, always good to have backups, independent of which database you use. The transactional paradigm has its benefits and its drawbacks. Many users and application developers depend on the ease with which they can code around problems where an abort appears to be, or is, necessary. However, even if you are new to the atomic operations paradigm, or more familiar with transactions, do consider the speed benefit that non-transactional tables can offer: on the order of three to five times the speed of the fastest and most optimally tuned transactional tables. In situations where integrity is of highest importance, MySQL Server offers transaction-level or better reliability and integrity even for non-transactional tables. If you lock tables with `LOCK TABLES', all updates will stall until any integrity checks are made. If you only obtain a read lock (as opposed to a write lock), reads and inserts are still allowed to happen. The newly inserted records will not be seen by any of the clients that have a read lock until they release their read locks. With `INSERT DELAYED' you can queue inserts into a local queue, until the locks are released, without having the client wait for the insert to complete. *Note INSERT DELAYED::. "Atomic," in the sense that we mean it, is nothing magical. It only means that you can be sure that while each specific update is running, no other user can interfere with it, and there will never be an automatic rollback (which can happen with transactional tables if you are not very careful). MySQL Server also guarantees that there will not be any dirty reads. Following are some techniques for working with non-transactional tables: * Loops that need transactions normally can be coded with the help of `LOCK TABLES', and you don't need cursors when you can update records on the fly. * To avoid using `ROLLBACK', you can use the following strategy: 1. Use `LOCK TABLES ...' to lock all the tables you want to access. 2. Test conditions. 3. Update if everything is okay. 4.
Use `UNLOCK TABLES' to release your locks. This is usually a much faster method than using transactions with possible `ROLLBACK's, although not always. The only situation this solution doesn't handle is when someone kills the threads in the middle of an update. In this case, all locks will be released but some of the updates may not have been executed. * You can also use functions to update records in a single operation. You can get a very efficient application by using the following techniques: * Modify fields relative to their current value. * Update only those fields that actually have changed. For example, when we are doing updates to some customer information, we update only the customer data that has changed and test only that none of the changed data, or data that depends on the changed data, has changed compared to the original row. The test for changed data is done with the `WHERE' clause in the `UPDATE' statement. If the record wasn't updated, we give the client a message: "Some of the data you have changed has been changed by another user." Then we show the old row versus the new row in a window, so the user can decide which version of the customer record he should use. This gives us something that is similar to column locking but is actually even better because we only update some of the columns, using values that are relative to their current values. This means that typical `UPDATE' statements look something like these: UPDATE tablename SET pay_back=pay_back+'relative change'; UPDATE customer SET customer_date='current_date', address='new address', phone='new phone', money_he_owes_us=money_he_owes_us+'new_money' WHERE customer_id=id AND address='old address' AND phone='old phone'; As you can see, this is very efficient and works even if another client has changed the values in the `pay_back' or `money_he_owes_us' columns. * In many cases, users have wanted `ROLLBACK' and/or `LOCK TABLES' for the purpose of managing unique identifiers for some tables. This can be handled much more efficiently by using an `AUTO_INCREMENT' column and either the SQL function `LAST_INSERT_ID()' or the C API function `mysql_insert_id()'. *Note `mysql_insert_id()': mysql_insert_id. You can generally code around row-level locking. Some situations really need it, but they are very few. `InnoDB' tables support row-level locking. With MyISAM, you can use a flag column in the table and do something like the following: UPDATE tbl_name SET row_flag=1 WHERE id=ID; MySQL returns 1 for the number of affected rows if the row was found and `row_flag' wasn't already 1 in the original row. You can think of it as though MySQL Server changed the preceding query to: UPDATE tbl_name SET row_flag=1 WHERE id=ID AND row_flag <> 1; Stored Procedures and Triggers .............................. A stored procedure is a set of SQL commands that can be compiled and stored in the server. Once this has been done, clients don't need to keep re-issuing the entire query but can refer to the stored procedure. This provides better performance because the query has to be parsed only once, and less information needs to be sent between the server and the client. You can also raise the conceptual level by having libraries of functions in the server. A trigger is a stored procedure that is invoked when a particular event occurs. 
For example, you can install a stored procedure that is triggered each time a record is deleted from a transaction table and that automatically deletes the corresponding customer from a customer table when all his transactions are deleted. The planned update language will be able to handle stored procedures. Our aim is to have stored procedures implemented in MySQL Server around version 5.0. We are also looking at triggers. Foreign Keys ............ Note that foreign keys in SQL are not used to join tables, but are used mostly for checking referential integrity (foreign key constraints). If you want to get results from multiple tables from a `SELECT' statement, you do this by joining tables: SELECT * FROM table1,table2 WHERE table1.id = table2.id; *Note `JOIN': JOIN. *Note example-Foreign keys::. In MySQL Server 3.23.44 and up, `InnoDB' tables support checking of foreign key constraints. *Note InnoDB::. For other table types, MySQL Server does parse the `FOREIGN KEY' syntax in `CREATE TABLE' commands, but without further action being taken. The `FOREIGN KEY' syntax without `ON DELETE ...' is mostly used for documentation purposes. Some ODBC applications may use this to produce automatic `WHERE' clauses, but this is usually easy to override. `FOREIGN KEY' is sometimes used as a constraint check, but this check is unnecessary in practice if rows are inserted into the tables in the right order. In MySQL Server, you can work around the problem of `ON DELETE ...' not being implemented by adding the appropriate `DELETE' statement to an application when you delete records from a table that has a foreign key. In practice this is as quick (in some cases quicker) and much more portable than using foreign keys. In MySQL Server 4.0 you can use multi-table delete to delete rows from many tables with one command. *Note DELETE::. In the near future we will extend the `FOREIGN KEY' implementation so that the information will be saved in the table specification file and may be retrieved by `mysqldump' and ODBC. At a later stage we will implement the foreign key constraints for applications that can't easily be coded to avoid them. Do keep in mind that foreign keys are often misused, which can cause severe problems. Even when used properly, they are not a magic solution for the referential integrity problem, although they do make things easier in some cases. Some advantages of foreign key enforcement: * Assuming proper design of the relations, foreign key constraints will make it more difficult for a programmer to introduce an inconsistency into the database. * Using cascading updates and deletes can simplify the client code. * Properly designed foreign key rules aid in documenting relations between tables. Disadvantages: * Mistakes, which are easy to make in designing key relations, can cause severe problems, for example, circular rules or the wrong combination of cascading deletes. * A properly written application will make sure internally that it is not violating referential integrity constraints before proceeding with a query. Thus, additional checks on the database level will only slow down performance for such an application. * It is not uncommon for a DBA to make such a complex topology of relations that it becomes very difficult, and in some cases impossible, to back up or restore individual tables. Views ..... It is planned to implement views in MySQL Server around version 5.0. Views are mostly useful for letting users access a set of relations as one table (in read-only mode).
Many SQL databases don't allow one to update any rows in a view; you have to do the updates in the separate tables. As MySQL Server is mostly used in applications and on web systems where the application writer has full control over the database usage, most of our users haven't regarded views as very important. (At least no one has been interested enough in this to be prepared to finance the implementation of views.) One doesn't need views in MySQL Server to restrict access to columns, as MySQL Server has a very sophisticated privilege system. *Note Privilege system::. `--' as the Start of a Comment .............................. Some other SQL databases use `--' to start comments. MySQL Server has `#' as the start comment character. You can also use the C comment style `/* this is a comment */' with MySQL Server. *Note Comments::. MySQL Server Version 3.23.3 and above support the `--' comment style, provided the `--' is followed by a space. This is because this comment style has caused many problems with automatically generated SQL queries that have used something like the following code, where we automatically insert the value of the payment for `!payment!': UPDATE tbl_name SET credit=credit-!payment! Think about what happens if the value of `payment' is negative. Because `1--1' is legal in SQL, the consequences of allowing comments to start with `--' are terrible. Using our implementation of this method of commenting in MySQL Server Version 3.23.3 and up, `1-- This is a comment' is actually safe. Another safe feature is that the `mysql' command-line client removes all lines that start with `--'. The following information is relevant only if you are running a MySQL version earlier than 3.23.3: If you have a SQL program in a text file that contains `--' comments you should use: shell> replace " --" " #" < text-file-with-funny-comments.sql \ | mysql database instead of the usual: shell> mysql database < text-file-with-funny-comments.sql You can also edit the command file "in place" to change the `--' comments to `#' comments: shell> replace " --" " #" -- text-file-with-funny-comments.sql Change them back with this command: shell> replace " #" " --" -- text-file-with-funny-comments.sql Known Errors and Design Deficiencies in MySQL --------------------------------------------- Errors in 3.23 fixed in later MySQL version ........................................... The following known errors/bugs are not fixed in MySQL 3.23 because fixing them would involve changing a lot of code, which could introduce other even worse bugs. The bugs are also classified as 'not fatal' or 'bearable'. * One can get a deadlock when doing `LOCK TABLE' on multiple tables and then in the same connection doing a `DROP TABLE' on one of them while another thread is trying to lock the table. One can however do a `KILL' on any of the involved threads to resolve this. Fixed in 4.0.12. * `SELECT MAX(key_column) FROM t1,t2,t3...' where one of the tables is empty doesn't return `NULL' but instead the maximum value for the column. Fixed in 4.0.11. Open bugs / Design Deficiencies in MySQL ........................................ The following problems are known and have a high priority to get fixed: * `ANALYZE TABLE' on a BDB table may in some cases make the table unusable until one has restarted `mysqld'.
When this happens you will see errors like the following in the MySQL error file: 001207 22:07:56 bdb: log_flush: LSN past current end-of-log * Don't execute `ALTER TABLE' on a `BDB' table on which you are running multi-statement transactions until all those transactions complete. (The transaction will probably be ignored.) * `ANALYZE TABLE', `OPTIMIZE TABLE', and `REPAIR TABLE' may cause problems on tables for which you are using `INSERT DELAYED'. * Doing a `LOCK TABLE ...' and `FLUSH TABLES ...' doesn't guarantee that there isn't a half-finished transaction in progress on the table. * BDB tables are a bit slow to open. If you have many BDB tables in a database, it will take a long time to use the `mysql' client on the database if you are not using the `-A' option or if you are using `rehash'. This is especially notable when you have a big table cache. The following problems are known and will be fixed in due time: * When using the `RPAD()' function, or any other string function that ends up adding blanks to the right, in a query that has to use a temporary table to be resolved, all resulting strings will be RTRIM'ed. Here is an example of such a query: `SELECT RPAD(t1.field1, 50, ' ') AS f2, RPAD(t2.field2, 50, ' ') AS f1 FROM table1 as t1 LEFT JOIN table2 AS t2 ON t1.record=t2.joinID ORDER BY t2.record;' The final result of this bug is that the user will not be able to get blanks on the right side of the resulting field. The above behaviour exists in all versions of MySQL. The reason is that HEAP tables, which are used first for temporary tables, are not capable of handling VARCHAR columns. This behaviour will be fixed in one of the 4.1 releases. * When using `SET CHARACTER SET', one can't use translated characters in database, table, and column names. * One can't use `_' or `%' with `ESCAPE' in `LIKE ... ESCAPE'. * If you have a `DECIMAL' column with a number stored in different formats (+01.00, 1.00, 01.00), `GROUP BY' may regard each value as a different value. * `DELETE FROM merge_table' used without a `WHERE' will only clear the mapping for the table, not delete everything in the mapped tables. * You cannot build the server in another directory when using MIT-pthreads. Because this requires changes to MIT-pthreads, we are not likely to fix this. *Note MIT-pthreads::. * `BLOB' values can't "reliably" be used in `GROUP BY' or `ORDER BY' or `DISTINCT'. Only the first `max_sort_length' bytes (default 1024) are used when comparing `BLOB's in these cases. This can be changed with the `-O max_sort_length' option to `mysqld'. A workaround for most cases is to use a substring: `SELECT DISTINCT LEFT(blob,2048) FROM tbl_name'. * Calculation is done with `BIGINT' or `DOUBLE' (both are normally 64 bits long). Which precision one gets depends on the function. The general rule is that bit functions are done with `BIGINT' precision, `IF()' and `ELT()' with `BIGINT' or `DOUBLE' precision, and the rest with `DOUBLE' precision. One should try to avoid using unsigned long long values if they resolve to be bigger than 63 bits (9223372036854775807) for anything other than bit fields! MySQL Server 4.0 has better `BIGINT' handling than 3.23. * All string columns, except `BLOB' and `TEXT' columns, automatically have all trailing spaces removed when retrieved. For `CHAR' types this is okay, and may be regarded as a feature according to ANSI SQL92. The bug is that in MySQL Server, `VARCHAR' columns are treated the same way. * You can only have up to 255 `ENUM' and `SET' columns in one table.
* In `MIN()', `MAX()' and other aggregate functions, MySQL currently compares `ENUM' and `SET' columns by their string value rather than by the string's relative position in the set. * `safe_mysqld' redirects all messages from `mysqld' to the `mysqld' log. One problem with this is that if you execute `mysqladmin refresh' to close and reopen the log, `stdout' and `stderr' are still redirected to the old log. If you use `--log' extensively, you should edit `safe_mysqld' to log to `'hostname'.err' instead of `'hostname'.log' so you can easily reclaim the space for the old log by deleting the old one and executing `mysqladmin refresh'. * In the `UPDATE' statement, columns are updated from left to right. If you refer to an updated column, you will get the updated value instead of the original value. For example: mysql> UPDATE tbl_name SET KEY=KEY+1,KEY=KEY+1; This will update `KEY' with `2' instead of with `1'. * You can't use temporary tables more than once in the same query. For example, the following doesn't work: mysql> SELECT * FROM temporary_table, temporary_table AS t2; * `RENAME' doesn't work with `TEMPORARY' tables or tables used in a `MERGE' table. * The optimiser may handle `DISTINCT' differently depending on whether you are using 'hidden' columns in a join or not. In a join, hidden columns are counted as part of the result (even if they are not shown) while in normal queries hidden columns don't participate in the `DISTINCT' comparison. We will probably change this in the future to never compare the hidden columns when executing `DISTINCT'. An example of this is: SELECT DISTINCT mp3id FROM band_downloads WHERE userid = 9 ORDER BY id DESC; and SELECT DISTINCT band_downloads.mp3id FROM band_downloads,band_mp3 WHERE band_downloads.userid = 9 AND band_mp3.id = band_downloads.mp3id ORDER BY band_downloads.id DESC; In the second case you may in MySQL Server 3.23.x get two identical rows in the result set (because the hidden `id' column may differ). Note that this happens only for queries where you don't have the ORDER BY columns in the result, something that you are not allowed to do in ANSI SQL. * Because MySQL Server allows you to work with table types that don't support transactions, and thus can't `rollback' data, some things behave a little differently in MySQL Server than in other SQL servers. This is just to ensure that MySQL Server never needs to do a rollback for a SQL command. This may be a little awkward at times as column values must be checked in the application, but this will actually give you a nice speed increase as it allows MySQL Server to do some optimisations that otherwise would be very hard to do. If you set a column to an incorrect value, MySQL Server will, instead of doing a rollback, store the `best possible value' in the column: - If you try to store a value outside the range of a numerical column, MySQL Server will instead store the smallest or biggest possible value in the column. - If you try to store a string that doesn't start with a number into a numerical column, MySQL Server will store 0 into it. - If you try to store `NULL' into a column that doesn't take `NULL' values, MySQL Server will store 0 or `''' (empty string) in it instead. (This behaviour can, however, be changed with the -DDONT_USE_DEFAULT_FIELDS compile option.) - MySQL allows you to store some wrong date values into `DATE' and `DATETIME' columns (like 2000-02-31 or 2000-02-00). The idea is that it's not the SQL server's job to validate dates.
If MySQL can store a date and retrieve exactly the same date, then MySQL will store the date. If the date is totally wrong (outside the server's ability to store it), then the special date value 0000-00-00 will be stored in the column. - If you set an `ENUM' column to an unsupported value, it will be set to the error value `empty string', with numeric value 0. - If you set a `SET' column to an unsupported value, the value will be ignored. * If you execute a `PROCEDURE' on a query that returns an empty set, in some cases the `PROCEDURE' will not transform the columns. * Creation of a table of type `MERGE' doesn't check if the underlying tables are of compatible types. * MySQL Server can't yet handle `NaN', `-Inf', and `Inf' values in double. Using these will cause problems when trying to export and import data. We should as an intermediate solution change `NaN' to `NULL' (if possible) and `-Inf' and `Inf' to the minimum and maximum possible `double' values, respectively. * Negative numbers given to `LIMIT' are treated as big positive numbers. * If you use `ALTER TABLE' to first add a `UNIQUE' index to a table used in a `MERGE' table and then use `ALTER TABLE' to add a normal index on the `MERGE' table, the key order will be different for the tables if there was an old key that was not unique in the table. This is because `ALTER TABLE' puts `UNIQUE' keys before normal keys to be able to detect duplicate keys as early as possible. The following are known bugs in earlier versions of MySQL: * You can get a hung thread if you do a `DROP TABLE' on a table that is one among many tables locked with `LOCK TABLES'. * In the following case you can get a core dump: - Delayed insert handler has pending inserts to a table. - `LOCK table' with `WRITE'. - `FLUSH TABLES'. * Before MySQL Server Version 3.23.2 an `UPDATE' that updated a key with a `WHERE' on the same key may have failed because the key was used to search for records and the same row may have been found multiple times: UPDATE tbl_name SET KEY=KEY+1 WHERE KEY > 100; A workaround is to use: mysql> UPDATE tbl_name SET KEY=KEY+1 WHERE KEY+0 > 100; This will work because MySQL Server will not use an index on expressions in the `WHERE' clause. * Before MySQL Server Version 3.23, all numeric types were treated as fixed-point fields. That means you had to specify how many decimals a floating-point field should have. All results were returned with the correct number of decimals. For platform-specific bugs, see the sections about compiling and porting. MySQL and The Future (The TODO) =============================== This section lists the features that we plan to implement in MySQL Server. Everything in this list is approximately in the order it will be done. If you want to affect the priority order, please register a license or support us and tell us what you want to have done more quickly. *Note Licensing and Support::. The plan is that in the future we will support the full ANSI SQL99 standard, but with a lot of useful extensions. The challenge is to do this without sacrificing the speed or compromising the code. Things That Should be in 4.0 ---------------------------- All done. We now only do bug fixes for MySQL 4.0. *Note News-4.0.x::. Development has shifted to 4.1 and 5.0. Things That Should be in 4.1 ---------------------------- The following features are planned for inclusion into MySQL 4.1. For a list of what is already done in MySQL 4.1, see *Note News-4.1.x::.
* Stable OpenSSL support (MySQL 4.0 has rudimentary, not 100% tested, support for OpenSSL). * Character set casts and syntax for handling multiple character sets. * Help for all commands from the client. * More testing of prepared statements and multiple character sets for one table. Things That Should be in 5.0 ---------------------------- The following features are planned for inclusion into MySQL 5.0. Note that because we have many developers that are working on different projects, there will also be many additional features. There is also a small chance that some of these features will be added to MySQL 4.1. For a list of what is already done in MySQL 4.1, see *Note News-4.1.x::. * Stored procedures. * Foreign key support for all table types. * New text-based table definition file format (`.frm' files) and a table cache for table definitions. This will enable us to do faster queries of table structures and do more efficient foreign key support. * `SHOW COLUMNS FROM table_name' (used by the `mysql' client to allow expansions of column names) should not open the table, only the definition file. This will require less memory and be much faster. * Fail-safe replication. * Online backup with very low performance penalty. The online backup will make it easy to add a new replication slave without taking down the master. * `ROLLUP' and `CUBE' OLAP (Online Analytical Processing) grouping options for data warehousing applications. * Allow `DELETE' on `MyISAM' tables to use the record cache. To do this, we need to update the thread's record cache when we update the `.MYD' file. * When using `SET CHARACTER SET' we should translate the whole query at once and not only strings. This will enable users to use the translated characters in database, table, and column names. * Resolving the issue of `RENAME TABLE' on a table used in an active `MERGE' table possibly corrupting the table. * Add options to the client/server protocol to get progress notes for long running commands. * Implement `RENAME DATABASE'. To make this safe for all storage engines, it should work as follows: * Create the new database. * For every table do a rename of the table to another database, as we do with the `RENAME' command. * Drop the old database. * Add true `VARCHAR' support (there is already support for this in `MyISAM'). * Optimise `BIT' type to take 1 bit (now `BIT' takes 1 char). * New internal file interface change. This will make all file handling much more general and make it easier to add extensions like RAID. (The current implementation is a hack.) * Better in-memory (`HEAP') tables: * Dynamic size rows. * Faster row handling (less copying). Things That Must be Done in the Near Future ------------------------------------------- * Don't allow more than a defined number of threads to run MyISAM recover at the same time. * Change `INSERT ... SELECT' to optionally use concurrent inserts. * Return the original field types when doing `SELECT MIN(column) ... GROUP BY'. * Multiple result sets. * Make it possible to specify `long_query_time' with a granularity in microseconds. * Link the `myisampack' code into the server. * Port of the MySQL code to QNX. * Port of the MySQL code to BeOS. * Port of the MySQL clients to LynxOS. * Add a temporary key buffer cache during `INSERT/DELETE/UPDATE' so that we can gracefully recover if the index file gets full. * If you perform an `ALTER TABLE' on a table that is symlinked to another disk, create temporary tables on this disk.
* Implement a `DATE/DATETIME' type that handles time zone information properly so that dealing with dates in different time zones is easier. * FreeBSD and MIT-pthreads; do sleeping threads take CPU time? * Check if locked threads take any CPU time. * Fix configure so that one can compile all libraries (like `MyISAM') without threads. * Add an option to periodically flush key pages for tables with delayed keys if they haven't been used in a while. * Allow join on key parts (optimisation issue). * `INSERT SQL_CONCURRENT' and `mysqld --concurrent-insert' to do a concurrent insert at the end of the file if the file is read-locked. * Server-side cursors. * Check if `lockd' works with modern Linux kernels; if not, we have to fix `lockd'! To test this, start `mysqld' with `--enable-locking' and run the different fork* test suites. They shouldn't give any errors if `lockd' works. * Allow SQL variables in `LIMIT', like in `LIMIT @a,@b'. * Allow update of variables in `UPDATE' statements. For example: `UPDATE TABLE foo SET @a=a+b,a=@a, b=@a+c'. * Change when user variables are updated so that one can use them with `GROUP BY', as in the following example: `SELECT id, @a:=COUNT(*), SUM(sum_col)/@a FROM table_name GROUP BY id'. * Don't add automatic `DEFAULT' values to columns. Give an error when using an `INSERT' that doesn't supply a value for a column that has no `DEFAULT'. * Fix `libmysql.c' to allow two `mysql_query()' commands in a row without reading results, or give a nice error message when one does this. * Check why MIT-pthreads `ctime()' doesn't work on some FreeBSD systems. * Add an `IMAGE' option to `LOAD DATA INFILE' to not update `TIMESTAMP' and `AUTO_INCREMENT' fields. * Add `LOAD DATA INFILE ... UPDATE' syntax. * For tables with primary keys, if the data contains the primary key, entries matching that primary key are updated from the remainder of the columns. However, columns *missing* from the incoming data feed are not touched. * For tables with primary keys that are missing some part of the key in the incoming data stream, or that have no primary key, the feed is treated as a `LOAD DATA INFILE ... REPLACE INTO' now. * Make `LOAD DATA INFILE' understand syntax like: LOAD DATA INFILE 'file_name.txt' INTO TABLE tbl_name TEXT_FIELDS (text_field1, text_field2, text_field3) SET table_field1=CONCAT(text_field1, text_field2), table_field3=23 IGNORE text_field3 This can be used to skip over extra columns in the text file, or update columns based on expressions of the read data. * `LOAD DATA INFILE 'file_name' INTO TABLE 'table_name' ERRORS TO err_table_name'. This would cause any errors and warnings to be logged into the `err_table_name' table. That table would have a structure like: line_number - line number in datafile error_message - the error/warning message and maybe data_line - the line from the datafile * Automatic output from `mysql' to Netscape. * `LOCK DATABASES' (with various options). * Functions: ADD_TO_SET(value,set) and REMOVE_FROM_SET(value,set). * Add use of `t1 JOIN t2 ON ...' and `t1 JOIN t2 USING ...'. Currently, you can only use this syntax with `LEFT JOIN'. * Many more variables for `show status'. Record reads and updates. Selects on 1 table and selects with joins. Mean number of tables in select. Number of `ORDER BY' and `GROUP BY' queries. * If you abort `mysql' in the middle of a query, you should open another connection and kill the old running query. Alternatively, an attempt should be made to detect this in the server.
* Add a storage engine interface for table information so that you can use it as a system table. This would be a bit slow if you requested information about all tables, but very flexible. `SHOW INFO FROM tbl_name' for basic table information should be implemented. * Allow `SELECT a FROM crash_me LEFT JOIN crash_me2 USING (a)'; in this case `a' is assumed to come from the `crash_me' table. * Oracle-like `CONNECT BY PRIOR ...' to search hierarchy structures. * `mysqladmin copy database new-database'; requires `COPY' command to be added to `mysqld'. * Processlist should show number of queries/threads. * `SHOW HOSTS' for printing information about the hostname cache. * `DELETE' and `REPLACE' options to the `UPDATE' statement (this will delete rows when one gets a duplicate key error while updating). * Change the format of `DATETIME' to store fractions of seconds. * Add all missing ANSI92 and ODBC 3.0 types. * Change table names from empty strings to `NULL' for calculated columns. * Don't use `Item_copy_string' on numerical values to avoid number->string->number conversion in case of: `SELECT COUNT(*)*(id+0) FROM table_name GROUP BY id'. * Make it possible to use the new GNU regexp library instead of the current one (the GNU library should be much faster than the old one). * Change so that `ALTER TABLE' doesn't abort clients that execute `INSERT DELAYED'. * Fix so that when columns are referenced in an `UPDATE' clause, they contain the old values from before the update started. * Add simulation of `pread()'/`pwrite()' on Windows to enable concurrent inserts. * A logfile analyser that could parse out information about which tables are hit most often, how often multi-table joins are executed, etc. It should help users identify areas of table design that could be optimised to execute much more efficient queries. * Add `SUM(DISTINCT)'. * Add `ANY()', `EVERY()', and `SOME()' group functions. In ANSI SQL these work only on boolean columns, but we can extend these to work on any columns/expressions by applying: value == 0 -> FALSE and value <> 0 -> TRUE. * Fix it so that the type of `MAX(column)' is the same as the column type: mysql> CREATE TABLE t1 (a DATE); mysql> INSERT INTO t1 VALUES (NOW()); mysql> CREATE TABLE t2 SELECT MAX(a) FROM t1; mysql> SHOW COLUMNS FROM t2; * Come up with a nice syntax for a statement that will `UPDATE' the row if it exists and `INSERT' a new row if the row didn't exist (like `REPLACE' works with `INSERT' / `DELETE'). Things That Have to be Done Sometime ------------------------------------ * Implement function: `get_changed_tables(timeout,table1,table2,...)'. * Change reading through tables to use memmap when possible. Now only compressed tables use memmap. * Make the automatic timestamp code nicer. Add timestamps to the update log with `SET TIMESTAMP=#;'. * Use read/write mutex in some places to get more speed. * Full foreign key support for `MyISAM' tables, probably after the implementation of stored procedures with triggers. * Simple views (first on one table, later on any expression). * Automatically close some tables if a table, temporary table, or temporary file gets error 23 (not enough open files). * When one finds a field=#, change all occurrences of field to #. Now this is only done for some simple cases. * Change all const expressions with calculated expressions if possible. * Optimise key = expression. At the moment only key = field or key = constant are optimised. * Join some of the copy functions for nicer code.
* Change `sql_yacc.yy' to an inline parser to reduce its size and get better error messages (5 days). * Change the parser to use only one rule per different number of arguments in function. * Use of full calculation names in the order part (for ACCESS97). * `MINUS', `INTERSECT', and `FULL OUTER JOIN'. (Currently `UNION' [in 4.0] and `LEFT OUTER JOIN' are supported.) * `SQL_OPTION MAX_SELECT_TIME=#' to put a time limit on a query. * Make the update log write to a database. * Add to `LIMIT' to allow retrieval of data from the end of a result set. * Alarm around client connect/read/write functions. * Please note the changes to `safe_mysqld': according to FSSTND (which Debian tries to follow) PID files should go into `/var/run/.pid' and log files into `/var/log'. It would be nice if you could put the "DATADIR" in the first declaration of "pidfile" and "log", so the placement of these files can be changed with a single statement. * Allow a client to request logging. * Add use of `zlib()' for `gzip'-ed files to `LOAD DATA INFILE'. * Fix sorting and grouping of `BLOB' columns (partly solved now). * Stored procedures. Triggers are also being looked at. * A simple (atomic) update language that can be used to write loops and such in the MySQL server. * Change to use semaphores when counting threads. One should first implement a semaphore library for MIT-pthreads. * Don't assign a new `AUTO_INCREMENT' value when one sets a column to 0. Use `NULL' instead. * Add full support for `JOIN' with parentheses. * As an alternative to one thread per connection, manage a pool of threads to handle the queries. * Allow one to get more than one lock with `GET_LOCK'. When doing this, one must also handle the possible deadlocks this change will introduce. Time is given according to amount of work, not real time. Things We Don't Plan To Do -------------------------- * Nothing; we aim toward full ANSI 92/ANSI 99 compliancy. How MySQL Compares to Other Databases ===================================== Our users have successfully run their own benchmarks against a number of `Open Source' and traditional database servers. We are aware of tests against `Oracle' server, `DB/2' server, `Microsoft SQL Server', and other commercial products. Due to legal reasons we are restricted from publishing some of those benchmarks in our reference manual. This section includes a comparison with `mSQL' for historical reasons and with `PostgreSQL' as it is also an `Open Source' database. If you have benchmark results that we can publish, please contact us at . For comparative lists of all supported functions and types as well as measured operational limits of many different database systems, see the `crash-me' web page at `http://www.mysql.com/information/crash-me.php'. How MySQL Compares to `mSQL' ---------------------------- *Performance* For a true comparison of speed, consult the growing MySQL benchmark suite. *Note MySQL Benchmarks::. Because there is no thread creation overhead, a small parser, few features, and simple security, `mSQL' should be quicker at: * Tests that perform repeated connects and disconnects, running a very simple query during each connection. * `INSERT' operations into very simple tables with few columns and keys. * `CREATE TABLE' and `DROP TABLE'. * `SELECT' on something that isn't indexed. (A table scan is very easy.) Because these operations are so simple, it is hard to be better at them when you have a higher startup overhead. After the connection is established, MySQL Server should perform much better.
On the other hand, MySQL Server is much faster than `mSQL' (and most other SQL implementations) on the following: * Complex `SELECT' operations. * Retrieving large results (MySQL Server has a better, faster, and safer protocol). * Tables with variable-length strings, because MySQL Server has more efficient handling and can have indexes on `VARCHAR' columns. * Handling tables with many columns. * Handling tables with large record lengths. * `SELECT' with many expressions. * `SELECT' on large tables. * Handling many connections at the same time. MySQL Server is fully multi-threaded. Each connection has its own thread, which means that no thread has to wait for another (unless a thread is modifying a table another thread wants to access). In `mSQL', once one connection is established, all others must wait until the first has finished, regardless of whether the connection is running a query that is short or long. When the first connection terminates, the next can be served, while all the others wait again, etc. * Joins. `mSQL' can become pathologically slow if you change the order of tables in a `SELECT'. In the benchmark suite, a time more than 15,000 times slower than MySQL Server was seen. This is due to `mSQL''s lack of a join optimiser to order tables in the optimal order. However, if you put the tables in exactly the right order in `mSQL' 2 and the `WHERE' is simple and uses index columns, the join will be relatively fast! *Note MySQL Benchmarks::. * `ORDER BY' and `GROUP BY'. * `DISTINCT'. * Using `TEXT' or `BLOB' columns. *SQL Features* * `GROUP BY' and `HAVING'. `mSQL' does not support `GROUP BY' at all. MySQL Server supports a full `GROUP BY' with both `HAVING' and the following functions: `COUNT()', `AVG()', `MIN()', `MAX()', `SUM()', and `STD()'. `COUNT(*)' is optimised to return very quickly if the `SELECT' retrieves from one table, no other columns are retrieved, and there is no `WHERE' clause. `MIN()' and `MAX()' may take string arguments. * `INSERT' and `UPDATE' with calculations. MySQL Server can do calculations in an `INSERT' or `UPDATE'. For example: mysql> UPDATE tbl_name SET x=x*10+y WHERE x<20; * Aliasing. MySQL Server has column aliasing. * Qualifying column names. In MySQL Server, if a column name is unique among the tables used in a query, you do not have to use the full qualifier. * `SELECT' with functions. MySQL Server has many functions (too many to list here; see *Note Functions::). *Disk Space Efficiency* That is, how small can you make your tables? MySQL Server has very precise types, so you can create tables that take very little space. An example of a useful MySQL datatype is the `MEDIUMINT' that is 3 bytes long. If you have 100 million records, saving even 1 byte per record is very important. `mSQL2' has a more limited set of column types, so it is more difficult to get small tables. *Stability* This is harder to judge objectively. For a discussion of MySQL Server stability, see *Note Stability::. We have no experience with `mSQL' stability, so we cannot say anything about that. *Price* Another important issue is the license. MySQL Server has a more flexible license than `mSQL', and is also less expensive than `mSQL'. Whichever product you choose to use, remember to at least consider paying for a license or e-mail support. *Perl Interfaces* MySQL Server has basically the same interfaces to Perl as `mSQL' with some added features. *JDBC (Java)* MySQL Server currently has a lot of different JDBC drivers: * MySQL Connector/J is a native Java driver.
Version 3.x is released under dual licensing (GPL and commercial). * The Resin driver: this is a commercial JDBC driver released under open source. `http://www.caucho.com/projects/jdbc-mysql/index.xtp' * The gwe driver: a Java interface by GWE technologies (not supported anymore). * The jms driver: an improved gwe driver by Xiaokun Kelvin ZHU (not supported anymore). * The twz driver: a type 4 JDBC driver by Terrence W. Zellers . This is commercial but is free for private and educational use (not supported anymore). The recommended driver is the mm driver. The Resin driver may also be good (at least the benchmarks look good), but we haven't received that much information about this yet. We know that `mSQL' has a JDBC driver, but we have too little experience with it to compare. *Rate of Development* MySQL Server has a small core team of developers, but we are quite used to coding C and C++ very rapidly. Because threads, functions, `GROUP BY', and so on are still not implemented in `mSQL', it has a lot of catching up to do. To get some perspective on this, you can view the `mSQL' `HISTORY' file for the last year and compare it with the News section of the MySQL Reference Manual (*note News::). It should be pretty obvious which one has developed most rapidly. *Utility Programs* Both `mSQL' and MySQL Server have many interesting third-party tools. Because it is very easy to port upward (from `mSQL' to MySQL Server), almost all the interesting applications that are available for `mSQL' are also available for MySQL Server. MySQL Server comes with a simple `msql2mysql' program that fixes differences in spelling between `mSQL' and MySQL Server for the most-used C API functions. For example, it changes instances of `msqlConnect()' to `mysql_connect()'. Converting a client program from `mSQL' to MySQL Server usually requires only minor effort. How to Convert `mSQL' Tools for MySQL ..................................... According to our experience, it doesn't take long to convert tools such as `msql-tcl' and `msqljava' that use the `mSQL' C API so that they work with the MySQL C API. The conversion procedure is: 1. Run the shell script `msql2mysql' on the source. This requires the `replace' program, which is distributed with MySQL Server. 2. Compile. 3. Fix all compiler errors. Differences between the `mSQL' C API and the MySQL C API are: * MySQL Server uses a `MYSQL' structure as a connection type (`mSQL' uses an `int'). * `mysql_connect()' takes a pointer to a `MYSQL' structure as a parameter. It is easy to define one globally or to use `malloc()' to get one. `mysql_connect()' also takes two parameters for specifying the user and password. You may set these to `NULL, NULL' for default use. * `mysql_error()' takes the `MYSQL' structure as a parameter. Just add the parameter to your old `msql_error()' code if you are porting old code. * MySQL Server returns an error number and a text error message for all errors. `mSQL' returns only a text error message. * Some incompatibilities exist as a result of MySQL Server supporting multiple connections to the server from the same process. How `mSQL' and MySQL Client/Server Communications Protocols Differ .................................................................. There are enough differences that it is impossible (or at least not easy) to support both. The most significant ways in which the MySQL protocol differs from the `mSQL' protocol are listed here: * A message buffer may contain many result rows. 
* The message buffers are dynamically enlarged if the query or the result is bigger than the current buffer, up to a configurable server and client limit. * All packets are numbered to catch duplicated or missing packets. * All column values are sent in ASCII. The lengths of columns and rows are sent in packed binary coding (1, 2, or 3 bytes). * MySQL can read in the result unbuffered (without having to store the full set in the client). * If a single read/write takes more than 30 seconds, the server closes the connection. * If a connection is idle for 8 hours, the server closes the connection. How `mSQL' 2.0 SQL Syntax Differs from MySQL ............................................ *Column types* `MySQL Server' Has the following additional types (among others; *note `CREATE TABLE': CREATE TABLE.): * `ENUM' type for one of a set of strings. * `SET' type for many of a set of strings. * `BIGINT' type for 64-bit integers. MySQL Server also supports the following additional type attributes: * `UNSIGNED' option for integer and floating-point columns. * `ZEROFILL' option for integer columns. * `AUTO_INCREMENT' option for integer columns that are a `PRIMARY KEY'. *Note `mysql_insert_id()': mysql_insert_id. * `DEFAULT' value for all columns. `mSQL2' `mSQL' column types correspond to the MySQL types shown in the following table:
     `mSQL' type   Corresponding MySQL type
     `CHAR(len)'   `CHAR(len)'
     `TEXT(len)'   `TEXT(len)'. `len' is the maximal length. And `LIKE' works.
     `INT'         `INT'. With many more options!
     `REAL'        `REAL'. Or `FLOAT'. Both 4- and 8-byte versions are available.
     `UINT'        `INT UNSIGNED'
     `DATE'        `DATE'. Uses ANSI SQL format rather than `mSQL''s own format.
     `TIME'        `TIME'
     `MONEY'       `DECIMAL(12,2)'. A fixed-point value with two decimals.
*Index Creation* `MySQL Server' Indexes may be specified at table creation time with the `CREATE TABLE' statement. `mSQL' Indexes must be created after the table has been created, with separate `CREATE INDEX' statements. *To Insert a Unique Identifier into a Table* `MySQL Server' Use `AUTO_INCREMENT' as a column type specifier. *Note `mysql_insert_id()': mysql_insert_id. `mSQL' Create a `SEQUENCE' on a table and select the `_seq' column. *To Obtain a Unique Identifier for a Row* `MySQL Server' Add a `PRIMARY KEY' or `UNIQUE' key to the table and use this. New in Version 3.23.11: If the `PRIMARY' or `UNIQUE' key consists of only one column and this is of type integer, one can also refer to it as `_rowid'. `mSQL' Use the `_rowid' column. Observe that `_rowid' may change over time depending on many factors. *To Get the Time a Column Was Last Modified* `MySQL Server' Add a `TIMESTAMP' column to the table. This column is automatically set to the current date and time for `INSERT' or `UPDATE' statements if you don't give the column a value or if you give it a `NULL' value. `mSQL' Use the `_timestamp' column. *`NULL' Value Comparisons* `MySQL Server' MySQL Server follows ANSI SQL, and a comparison with `NULL' is always `NULL'. `mSQL' In `mSQL', `NULL = NULL' is TRUE. You must change `=NULL' to `IS NULL' and `<>NULL' to `IS NOT NULL' when porting old code from `mSQL' to MySQL Server. *String Comparisons* `MySQL Server' Normally, string comparisons are performed in case-independent fashion with the sort order determined by the current character set (ISO-8859-1 Latin1 by default). If you don't like this, declare your columns with the `BINARY' attribute, which causes comparisons to be done according to the ASCII order used on the MySQL server host.
`mSQL' All string comparisons are performed in case-sensitive fashion with sorting in ASCII order. *Case-insensitive Searching* `MySQL Server' `LIKE' is a case-insensitive or case-sensitive operator, depending on the columns involved. If possible, MySQL uses indexes if the `LIKE' argument doesn't start with a wildcard character. `mSQL' Use `CLIKE'. *Handling of Trailing Spaces* `MySQL Server' Strips all spaces at the end of `CHAR' and `VARCHAR' columns. Use a `TEXT' column if this behaviour is not desired. `mSQL' Retains trailing space. *`WHERE' Clauses* `MySQL Server' MySQL correctly prioritises everything (`AND' is evaluated before `OR'). To get `mSQL' behaviour in MySQL Server, use parentheses (as shown in an example later in this section). `mSQL' Evaluates everything from left to right. This means that some logical calculations with more than three arguments cannot be expressed in any way. It also means you must change some queries when you upgrade to MySQL Server. You do this easily by adding parentheses. Suppose you have the following `mSQL' query: mysql> SELECT * FROM table WHERE a=1 AND b=2 OR a=3 AND b=4; To make MySQL Server evaluate this the way that `mSQL' would, you must add parentheses: mysql> SELECT * FROM table WHERE (a=1 AND (b=2 OR (a=3 AND (b=4)))); *Access Control* `MySQL Server' Has tables to store grant (permission) options per user, host, and database. *Note Privileges::. `mSQL' Has a file `mSQL.acl' in which you can grant read/write privileges for users. How MySQL Compares to `PostgreSQL' ---------------------------------- When reading the following, please note that both products are continually evolving. We at MySQL AB and the PostgreSQL developers are both working on making our respective databases as good as possible, so we are both a serious alternative to any commercial database. The following comparison is made by us at MySQL AB. We have tried to be as accurate and fair as possible, but although we know MySQL Server thoroughly, we don't have a full knowledge of all PostgreSQL features, so we may have got some things wrong. We will, however, correct these when they come to our attention. We would first like to note that PostgreSQL and MySQL Server are both widely used products, but with different design goals, even if we are both striving toward ANSI SQL compliancy. This means that for some applications MySQL Server is more suited, while for others PostgreSQL is more suited. When choosing which database to use, you should first check if the database's feature set satisfies your application. If you need raw speed, MySQL Server is probably your best choice. If you need some of the extra features that only PostgreSQL can offer, you should use `PostgreSQL'. MySQL and PostgreSQL development strategies ........................................... When adding things to MySQL Server we take pride in doing an optimal, definitive solution. The code should be so good that we shouldn't have any need to change it in the foreseeable future. We also do not like to sacrifice speed for features but instead will do our utmost to find a solution that will give maximal throughput. This means that development will take a little longer, but the end result will be well worth this. This kind of development is only possible because all server code is checked by one of a few (currently two) persons before it's included in the MySQL server. We at MySQL AB believe in frequent releases to be able to push out new features quickly to our users.
Because of this we do a new small release about every three weeks, and a major branch every year. All releases are thoroughly tested with our testing tools on a lot of different platforms. PostgreSQL is based on a kernel with lots of contributors. In this setup it makes sense to prioritise adding a lot of new features, instead of implementing them optimally, because one can always optimise things later if there arises a need for this. Another big difference between MySQL Server and PostgreSQL is that nearly all of the code in the MySQL server is coded by developers that are employed by MySQL AB and are still working on the server code. The exceptions are the transaction engines and the regexp library. This is in sharp contrast to the PostgreSQL code, the majority of which is coded by a big group of people with different backgrounds. It was only recently that the PostgreSQL developers announced that their current developer group had finally had time to take a look at all the code in the current PostgreSQL release. Both of the aforementioned development methods have their own merits and drawbacks. We here at MySQL AB think, of course, that our model is better because our model gives better code consistency, more optimal and reusable code, and in our opinion, fewer bugs. Because we are the authors of the MySQL server code, we are better able to coordinate new features and releases. Featurewise Comparison of MySQL and PostgreSQL .............................................. On the `crash-me' page (`http://www.mysql.com/information/crash-me.php') you can find a list of those database constructs and limits that one can detect automatically with a program. Note, however, that a lot of the numerical limits may be changed with startup options for their respective databases. This web page is, however, extremely useful when you want to ensure that your applications work with many different databases or when you want to convert your application from one database to another. MySQL Server offers the following advantages over PostgreSQL: * `MySQL' Server is generally much faster than PostgreSQL. MySQL 4.0.1 also has a query cache that can boost up the query speed for mostly-read-only sites many times. * MySQL has a much larger user base than PostgreSQL. Therefore, the code is tested more and has historically proven more stable than PostgreSQL. MySQL Server is used more in production environments than PostgreSQL, mostly thanks to the fact that MySQL AB, formerly TCX DataKonsult AB, has provided top-quality commercial support for MySQL Server from the day it was released, whereas until recently PostgreSQL was unsupported. * MySQL Server works better on Windows than PostgreSQL does. MySQL Server runs as a native Windows application (a service on NT/2000/XP), while PostgreSQL is run under the `Cygwin' emulation. We have heard that PostgreSQL is not yet that stable on Windows but we haven't been able to verify this ourselves. * MySQL has more APIs to other languages and is supported by more existing programs than PostgreSQL. *Note Contrib::. * MySQL Server works on 24/7 heavy-duty systems. In most circumstances you never have to run any cleanups on MySQL Server. PostgreSQL doesn't yet support 24/7 systems because you have to run `VACUUM' once in a while to reclaim space from `UPDATE' and `DELETE' commands and to perform statistics analyses that are critical to get good performance with PostgreSQL. `VACUUM' is also needed after adding a lot of new rows to a table. 
On a busy system with lots of changes, `VACUUM' must be run very frequently, in the worst cases even many times a day. During the `VACUUM' run, which may take hours if the database is big, the database is, from a production standpoint, practically dead. Please note: in PostgreSQL version 7.2, basic vacuuming no longer locks tables, thus allowing normal user access during the vacuum. A new `VACUUM FULL' command does old-style vacuum by locking the table and shrinking the on-disk copy of the table. * MySQL replication has been thoroughly tested, and is used by sites like: - Yahoo Finance (`http://finance.yahoo.com/') - Mobile.de (`http://www.mobile.de/') - Slashdot (`http://www.slashdot.org/') * Included in the MySQL distribution are two different testing suites, `mysql-test-run' and `crash-me' (`http://www.mysql.com/information/crash-me.php'), as well as a benchmark suite. The test system is actively updated with code to test each new feature and almost all reproducible bugs that have come to our attention. We test MySQL Server with these on a lot of platforms before every release. These tests are more sophisticated than anything we have seen from PostgreSQL, and they ensure that the MySQL Server is kept to a high standard. * There are far more books in print about MySQL Server than about PostgreSQL. O'Reilly, SAMS, Que, and New Riders are all major publishers with books about MySQL. All MySQL features are also documented in the MySQL online manual because when a new feature is implemented, the MySQL developers are required to document it before it's included in the source. * MySQL Server supports more of the standard ODBC functions than `PostgreSQL'. * MySQL Server has a much more sophisticated `ALTER TABLE'. * MySQL Server has support for tables without transactions for applications that need all the speed they can get. The tables may be memory-based `HEAP' tables or disk-based `MyISAM' tables. *Note Table types::. * MySQL Server has support for two different storage engines that support transactions, `InnoDB' and `BerkeleyDB'. Because every transaction engine performs differently under different conditions, this gives the application writer more options to find an optimal solution for his or her setup, if need be per individual table. *Note Table types::. * `MERGE' tables give you a unique way to instantly make a view over a set of identical tables and use these as one. This is perfect for systems where you have log files that you order, for example, by month. *Note MERGE::. * The option to compress read-only tables, but still have direct access to the rows in the table, gives you better performance by minimising disk reads. This is very useful when you are archiving things. *Note `myisampack': myisampack. * MySQL Server has internal support for full-text search. *Note Fulltext Search::. * You can access many databases from the same connection (depending, of course, on your privileges). * MySQL Server is coded from the start to be multi-threaded, while PostgreSQL uses processes. Context switching and access to common storage areas is much faster between threads than between separate processes. This gives MySQL Server a big speed advantage in multi-user applications and also makes it easier for MySQL Server to take full advantage of symmetric multiprocessor (SMP) systems. * MySQL Server has a much more sophisticated privilege system than PostgreSQL.
While PostgreSQL only supports `INSERT', `SELECT', and `UPDATE/DELETE' grants per user on a database or a table, MySQL Server allows you to define a full set of different privileges on the database, table, and column level. MySQL Server also allows you to specify the privilege on host and user combinations. *Note GRANT::. * MySQL Server supports a compressed client/server protocol which improves performance over slow links. * MySQL Server employs a "storage engine" concept, and is the only relational database we know of built around this concept. This allows different low-level table types to be called from the SQL engine, and each table type can be optimised for different performance characteristics. * All MySQL table types (except `InnoDB') are implemented as files (one table per file), which makes it really easy to back up, move, delete, and even symlink databases and tables, even when the server is down. * Tools to repair and optimise `MyISAM' tables (the most common MySQL table type). A repair tool is only needed when a physical corruption of a datafile happens, usually from a hardware failure. It allows a majority of the data to be recovered. * Upgrading MySQL Server is painless. When you are upgrading MySQL Server, you don't need to dump/restore your data, as you have to do with most PostgreSQL upgrades. Drawbacks with MySQL Server compared to PostgreSQL: * The transaction support in MySQL Server is not yet as well tested as PostgreSQL's system. * Because MySQL Server uses threads, which are not yet flawless on many OSes, one must either use binaries from `http://www.mysql.com/downloads/', or carefully follow our instructions in *Note Installing source:: to get an optimal binary that works in all cases. * Table locking, as used by the non-transactional `MyISAM' tables, is in many cases faster than page locks, row locks, or versioning. The drawback, however, is that if one doesn't take into account how table locks work, a single long-running query can block a table for updates for a long time. This can usually be avoided when designing the application. If not, one can always switch the troublesome table to use one of the transactional table types. *Note Table locking::. * With UDF (user-defined functions) one can extend MySQL Server with both normal SQL functions and aggregates, but this is not yet as easy or as flexible as in PostgreSQL. *Note Adding functions::. * Updates that run over multiple tables used to be harder to do in MySQL Server. However, this has been fixed in MySQL Server 4.0.2 with multi-table `UPDATE' and in MySQL Server 4.1 with subqueries. In MySQL Server 4.0 one can use multi-table deletes to delete from many tables at the same time. *Note DELETE::. PostgreSQL currently offers the following advantages over MySQL Server: Note that because we know the MySQL roadmap, we have included in the following table the version in which MySQL Server should support each feature. Unfortunately, we couldn't do the same for the preceding comparison, because we don't know the PostgreSQL roadmap.
*Feature*                  *MySQL version*
Subqueries                 4.1
Foreign keys               5.0 (3.23 with InnoDB)
Views                      5.0
Stored procedures          5.0
Triggers                   5.0
Unions                     4.0
Full join                  4.1
Constraints                4.1 or 5.0
Cursors                    4.1 or 5.0
R-trees                    4.1 (for MyISAM tables)
Inherited tables           Not planned
Extensible type system     Not planned
Other reasons someone may consider using PostgreSQL: * Standard usage in PostgreSQL is closer to ANSI SQL in some cases. * One can speed up PostgreSQL by coding things as stored procedures.
* For geographical data, R-trees make PostgreSQL better than MySQL Server. (note: MySQL version 4.1 has R-trees for MyISAM tables). * The PostgreSQL optimiser can do some optimisation that the current MySQL optimiser can't do. Most notable is doing joins when you don't have the proper keys in place and doing a join where you are using different keys combined with OR. The MySQL benchmark suite at `http://www.mysql.com/information/benchmarks.html' shows you what kind of constructs you should watch out for when using different databases. * PostgreSQL has a bigger team of developers that contribute to the server. Drawbacks with PostgreSQL compared to MySQL Server: * `VACUUM' makes PostgreSQL hard to use in a 24/7 environment. * Only transactional tables. * Much slower `INSERT', `DELETE', and `UPDATE'. For a complete list of drawbacks, you should also examine the first table in this section. Benchmarking MySQL and PostgreSQL ................................. The only `Open Source' benchmark that we know of that can be used to benchmark MySQL Server and PostgreSQL (and other databases) is our own. It can be found at `http://www.mysql.com/information/benchmarks.html'. We have many times asked the PostgreSQL developers and some PostgreSQL users to help us extend this benchmark to make it the definitive benchmark for databases, but unfortunately we haven't gotten any feedback for this. We, the MySQL developers, have, because of this, spent a lot of hours to get maximum performance from PostgreSQL for the benchmarks, but because we don't know PostgreSQL intimately, we are sure that there are things that we have missed. We have on the benchmark page documented exactly how we did run the benchmark so that it should be easy for anyone to repeat and verify our results. The benchmarks are usually run with and without the `--fast' option. When run with `--fast' we are trying to use every trick the server can do to get the code to execute as fast as possible. The idea is that the normal run should show how the server would work in a default setup and the `--fast' run shows how the server would do if the application developer would use extensions in the server to make his application run faster. When running with PostgreSQL and `--fast' we do a `VACUUM' after every major table `UPDATE' and `DROP TABLE' to make the database in perfect shape for the following `SELECT's. The time for `VACUUM' is measured separately. When running with PostgreSQL 7.1.1 we could, however, not run with `--fast' because during the `INSERT' test, the postmaster (the PostgreSQL daemon) died and the database was so corrupted that it was impossible to restart postmaster. After this happened twice, we decided to postpone the `--fast' test until the next PostgreSQL release. The details about the machine we run the benchmark on can be found on the benchmark page. Before going to the other benchmarks we know of, we would like to give some background on benchmarks. It's very easy to write a test that shows *any* database to be the best database in the world, by just restricting the test to something the database is very good at and not testing anything that the database is not good at. If one, after doing this, summarises the result as a single figure, things are even easier. This would be like us measuring the speed of MySQL Server compared to PostgreSQL by looking at the summary time of the MySQL benchmarks on our web page. 
Based on this, MySQL Server would be more than 40 times faster than PostgreSQL, something that is, of course, not true. We could make things even worse by just taking the test where PostgreSQL performs worst and claim that MySQL Server is more than 2000 times faster than PostgreSQL. The fact is that MySQL does a lot of optimisations that PostgreSQL doesn't do. This is, of course, also true the other way around. An SQL optimiser is a very complex thing, and a company could spend years just making the optimiser faster and faster. When looking at the benchmark results, you should look for the operations that you actually perform in your application and use those results to decide which database would be best suited for your application. The benchmark results also show things a particular database is not good at and should give you a notion of things to avoid and what you may have to do in other ways. We know of two benchmark tests that claim that PostgreSQL performs better than MySQL Server. Both of these were multi-user tests, a type of test that we here at MySQL AB haven't had time to write and include in the benchmark suite, mainly because it's a big task to do this in a manner that is fair to all databases. One is the benchmark paid for by Great Bridge, the company that for 16 months attempted to build a business based on PostgreSQL but has now ceased operations. This is probably the worst benchmark we have ever seen anyone conduct. It was not only tuned to test what PostgreSQL is absolutely best at, it was also totally unfair to every other database involved in the test. *Note*: We know that even some of the main PostgreSQL developers did not like the way Great Bridge conducted the benchmark, so we don't blame the PostgreSQL team for the way the benchmark was done. This benchmark has been condemned in a lot of postings and newsgroups, so here we will just briefly repeat some things that were wrong with it. * The tests were run with an expensive commercial tool that makes it impossible for an `Open Source' company like us to verify the benchmarks, or even check how the benchmarks were really done. The tool is not even a true benchmark tool, but an application/setup testing tool. To refer to this as a "standard" benchmark tool is to stretch the truth a long way. * Great Bridge admitted that they had optimised the PostgreSQL database (with `VACUUM' before the test) and tuned the startup for the tests, something they hadn't done for any of the other databases involved. They say "This process optimises indexes and frees up disk space a bit. The optimised indexes boost performance by some margin." Our benchmarks clearly indicate that running a lot of selects on a database with and without `VACUUM' can easily differ in speed by a factor of 10. * The test results were also strange. The AS3AP test documentation mentions that the test does "selections, simple joins, projections, aggregates, one-tuple updates, and bulk updates." PostgreSQL is good at doing `SELECT's and `JOIN's (especially after a `VACUUM'), but doesn't perform as well on `INSERT's or `UPDATE's. The benchmarks seem to indicate that only `SELECT's were done (or very few updates). This could easily explain the good results for PostgreSQL in this test. The reasons for the bad MySQL results will become obvious a bit further down in this document. * They ran the so-called benchmark from a Windows machine against a Linux machine over ODBC, a setup that no normal database user would ever use when running a heavy multi-user application.
This tested more the ODBC driver and the Windows protocol used between the clients than the database itself. * When running the database against Oracle and MS-SQL (Great Bridge has indirectly indicated the databases they used in the test), they didn't use the native protocol but instead ODBC. Anyone that has ever used Oracle knows that all real applications use the native interface instead of ODBC. Doing a test through ODBC and claiming that the results had anything to do with using the database in a real-world situation can't be regarded as fair. They should have done two tests with and without ODBC to provide the right facts (after having gotten experts to tune all involved databases, of course). * They refer to the TPC-C tests, but they don't mention anywhere that the test they did was not a true TPC-C test and they were not even allowed to call it a TPC-C test. A TPC-C test can only be conducted by the rules approved by the TPC Council (`http://www.tpc.org/'). Great Bridge didn't do that. By doing this they have both violated the TPC trademark and miscredited their own benchmarks. The rules set by the TPC Council are very strict to ensure that no one can produce false results or make unprovable statements. Apparently Great Bridge wasn't interested in doing this. * After the first test, we contacted Great Bridge and mentioned to them some of the obvious mistakes they had done with MySQL Server: - Running with a debug version of our ODBC driver - Running on a Linux system that wasn't optimised for threads - Using an old MySQL version when there was a recommended newer one available - Not starting MySQL Server with the right options for heavy multi-user use (the default installation of MySQL Server is tuned for minimal resource use) Great Bridge did run a new test, with our optimised ODBC driver and with better startup options for MySQL Server, but refused to either use our updated glibc library or our standard binary (used by 80% of our users), which was statically linked with a fixed glibc library. According to what we know, Great Bridge did nothing to ensure that the other databases were set up correctly to run well in their test environment. We are sure, however, that they didn't contact Oracle or Microsoft to ask for their advice in this matter. ;) * The benchmark was paid for by Great Bridge, and they decided to publish only partial, chosen results (instead of publishing it all). Tim Perdue, a long-time PostgreSQL fan and a reluctant MySQL user, published a comparison on PHPbuilder (`http://www.phpbuilder.com/columns/tim20001112.php3'). When we became aware of the comparison, we phoned Tim Perdue about this because there were a lot of strange things in his results. For example, he claimed that MySQL Server had a problem with five users in his tests, when we know that there are users with similar machines as his that are using MySQL Server with 2000 simultaneous connections doing 400 queries per second. (In this case the limit was the web bandwidth, not the database.) It sounded like he was using a Linux kernel that either had some problems with many threads, such as kernels before 2.4, which had a problem with many threads on multi-CPU machines. We have documented in this manual how to fix this and Tim should be aware of this problem. The other possible problem could have been an old glibc library and that Tim didn't use a MySQL binary from our site, which is linked with a corrected glibc library, but had compiled a version of his own. 
In any of these cases, the symptom would have been exactly what Tim had measured. We asked Tim whether we could get access to his data so that we could repeat the benchmark, and whether he could check the MySQL version on the machine to find out what was wrong. He promised to come back to us about this, but has not done so yet. Because of this we can't put any trust in this benchmark either. :( Over time things also change and the preceding benchmarks are not that relevant anymore. MySQL Server now has a couple of different storage engines with different speed/concurrency tradeoffs. *Note Table types::. It would be interesting to see how the above tests would run with the different transactional table types in MySQL Server. PostgreSQL has, of course, also gained new features since the test was made. As these tests are not publicly available, there is no way for us to know how the databases would perform in the same tests today. Conclusion: The only benchmarks that exist today that anyone can download and run against MySQL Server and PostgreSQL are the MySQL benchmarks. We here at MySQL AB believe that `Open Source' databases should be tested with `Open Source' tools! This is the only way to ensure that no one conducts tests that nobody can reproduce and then uses them to claim that one database is better than another. Without knowing all the facts it's impossible to answer the claims of the tester. The thing we find strange is that every test we have seen about PostgreSQL that is impossible to reproduce claims that PostgreSQL is better in most cases, while our tests, which anyone can reproduce, clearly show otherwise. With this we don't want to say that PostgreSQL isn't good at many things (it is!) or that it isn't faster than MySQL Server under certain conditions. We would just like to see a fair test where PostgreSQL performs very well, so that we could get some friendly competition going! For more information about our benchmark suite, see *Note MySQL Benchmarks::. We are working on an even better benchmark suite, including multi-user tests, and better documentation of what the individual tests really do and how to add more tests to the suite. MySQL Installation ****************** This chapter describes how to obtain and install MySQL: * For a list of sites from which you can obtain MySQL, see *Note Getting MySQL: Getting MySQL. * To see which platforms are supported, see *Note Which OS::. Please note that not all supported systems are equally well suited for running MySQL. On some it is much more robust and efficient than on others; see *Note Which OS:: for details. * Several versions of MySQL are available in both binary and source distributions. We also provide public access to our current source tree for those who want to see our most recent developments and help us test new code. To determine which version and type of distribution you should use, see *Note Which version::. When in doubt, use the binary distribution. * Installation instructions for binary and source distributions are described in *Note Installing binary:: and *Note Installing source::. Each set of instructions includes a section on system-specific problems you may run into. * For post-installation procedures, see *Note Post-installation::. These procedures apply whether you install MySQL using a binary or source distribution. Quick Standard Installation of MySQL ==================================== Installing MySQL on Linux ------------------------- The recommended way to install MySQL on Linux is by using the RPM packages.
The MySQL RPMs are currently being built on a SuSE Linux 7.3 system but should work on most versions of Linux that support `rpm' and use `glibc'. If you have problems with an RPM file, for example, if you receive the error "`Sorry, the host 'xxxx' could not be looked up'", see *Note Binary notes-Linux::. The RPM files you may want to use are: * `MySQL-server-VERSION.i386.rpm' The MySQL server. You will need this unless you only want to connect to a MySQL server running on another machine. Please note that this package was called `MySQL-VERSION.i386.rpm' before MySQL 4.0.10. * `MySQL-client-VERSION.i386.rpm' The standard MySQL client programs. You probably always want to install this package. * `MySQL-bench-VERSION.i386.rpm' Tests and benchmarks. Requires Perl and the msql-mysql-modules RPMs. * `MySQL-devel-VERSION.i386.rpm' Libraries and include files needed if you want to compile other MySQL clients, such as the Perl modules. * `MySQL-shared-VERSION.i386.rpm' This package contains the shared libraries (`libmysqlclient.so*') which certain languages and applications need to dynamically load and use MySQL. * `MySQL-embedded-VERSION.i386.rpm' The embedded MySQL server library (MySQL 4.x and onwards only). * `MySQL-VERSION.src.rpm' This contains the source code for all of the previous packages. It can also be used to rebuild the RPMs on other architectures (for example, Alpha or SPARC). To see all files in an RPM package, run: shell> rpm -qpl MySQL-VERSION.i386.rpm To perform a standard minimal installation, run: shell> rpm -i MySQL-server-VERSION.i386.rpm MySQL-client-VERSION.i386.rpm To install just the client package, run: shell> rpm -i MySQL-client-VERSION.i386.rpm The RPM places data in `/var/lib/mysql'. The RPM also creates the appropriate entries in `/etc/init.d/' to start the server automatically at boot time. (This means that if you have performed a previous installation, you may want to make a copy of your previously installed MySQL startup file if you made any changes to it, so you don't lose your changes.) If you want to install the MySQL RPM on older Linux distributions that do not support init scripts in `/etc/init.d' (directly or via a symlink), you should create a symbolic link pointing to the old location before installing the RPM: shell> cd /etc ; ln -s rc.d/init.d . However, all current major Linux distributions should already support this new directory layout as it is required for LSB (Linux Standard Base) compliance. After installing the RPM file(s), the `mysqld' daemon should be up and running and you should now be able to start using MySQL. *Note Post-installation::. If something goes wrong, you can find more information in the binary installation chapter. *Note Installing binary::. Installing MySQL on Windows --------------------------- The MySQL server for Windows is available in two distribution types: 1. The binary distribution contains a setup program which installs everything you need so that you can start the server immediately. 2. The source distribution contains all the code and support files for building the executables using the VC++ 6.0 compiler. *Note Windows source build::. Generally speaking, you should use the binary distribution. You will need the following: * A 32-bit Windows operating system such as 9x, Me, NT, 2000, or XP. The NT family (NT, Windows 2000 and XP) permits running the MySQL server as a service. *Note NT start::. If you want to use tables bigger than 4G, you should install MySQL on an NTFS or newer filesystem.
Don't forget to use `MAX_ROWS' and `AVG_ROW_LENGTH' when you create the table. *Note CREATE TABLE::. * TCP/IP protocol support. * A copy of the MySQL binary or source distribution for Windows, which can be downloaded from `http://www.mysql.com/downloads/'. Note: The distribution files are supplied in a zipped format, and we recommend the use of an FTP client with a resume feature to avoid corruption of files during the download process. * A `ZIP' program to unpack the distribution file. * Enough space on the hard drive to unpack, install, and create the databases in accordance with your requirements. * If you plan to connect to the MySQL server via `ODBC', you will also need the `MyODBC' driver. *Note ODBC::. Installing the Binaries ....................... 1. If you are working on an NT/2000/XP server, log on as a user with administrator privileges. 2. If you are upgrading an earlier MySQL installation, it is necessary to stop the server. If you are running the server as a service, use: C:\> NET STOP MySQL Otherwise, use: C:\mysql\bin> mysqladmin -u root shutdown 3. On NT/2000/XP machines, if you want to change the server executable (e.g., -max or -nt), it is also necessary to remove the service: C:\mysql\bin> mysqld-max-nt --remove 4. Unzip the distribution file to a temporary directory. 5. Run the `setup.exe' file to begin the installation process. If you want to install into a directory other than the default `c:\mysql', use the `Browse' button to specify your preferred directory. 6. Finish the install process. Preparing the Windows MySQL Environment ....................................... Starting with MySQL 3.23.38, the Windows distribution includes both the normal and the MySQL-Max server binaries. Here is a list of the different MySQL servers you can use: *Binary* *Description* `mysqld' Compiled with full debugging and automatic memory allocation checking, symbolic links, InnoDB, and BDB tables. `mysqld-opt' Optimised binary with no support for transactional tables. `mysqld-nt' Optimised binary for NT/2000/XP with support for named pipes. You can run this version on Windows 9x/Me, but in this case no named pipes are created and you must have TCP/IP installed. `mysqld-max' Optimised binary with support for symbolic links, InnoDB and BDB tables. `mysqld-max-nt' Like `mysqld-max', but compiled with support for named pipes. Starting from 3.23.50, named pipes are only enabled if one starts mysqld with `--enable-named-pipe'. All of the preceding binaries are optimised for the Pentium Pro processor but should work on any Intel processor >= i386. You will need to use an option file to specify your MySQL configuration under the following circumstances: * The installation or data directories are different from the default locations (`c:\mysql' and `c:\mysql\data'). * You want to use one of these servers: * mysqld.exe * mysqld-max.exe * mysqld-max-nt.exe * You need to tune the server settings. Normally you can use the `WinMySQLAdmin' tool to edit the option file `my.ini'. In this case you don't have to worry about the following section. There are two option files with the same function: `my.cnf' and `my.ini'. However, to avoid confusion, it's best if you use only one of them. Both files are plain text. The `my.cnf' file, if used, should be created in the root directory of the C drive. The `my.ini' file, if used, should be created in the Windows system directory. (This directory is typically something like `C:\WINDOWS' or `C:\WINNT'.
You can determine its exact location from the value of the `windir' environment variable.) MySQL looks first for the `my.ini' file, then for the `my.cnf' file. If your PC uses a boot loader where the C drive isn't the boot drive, your only option is to use the `my.ini' file. Also note that if you use the `WinMySQLAdmin' tool, it uses only the `my.ini' file. The `\mysql\bin' directory contains a help file with instructions for using this tool. Using `notepad.exe', create the option file and edit the `[mysqld]' section to specify values for the `basedir' and `datadir' parameters:
[mysqld]
# set basedir to installation path, e.g., c:/mysql
basedir=the_install_path
# set datadir to location of data directory,
# e.g., c:/mysql/data or d:/mydata/data
datadir=the_data_path
Note that Windows pathnames should be specified in option files using forward slashes rather than backslashes. If you do use backslashes, you must double them. If you would like to use a data directory different from the default of `c:\mysql\data', you must copy the entire contents of the `c:\mysql\data' directory to the new location. If you want to use the `InnoDB' transactional tables, you need to manually create two new directories to hold the InnoDB data and log files, e.g., `c:\ibdata' and `c:\iblogs'. You will also need to add some extra lines to the option file. *Note InnoDB start::. If you don't want to use `InnoDB' tables, add the `skip-innodb' option to the option file. Now you are ready to test starting the server. Starting the Server for the First Time ...................................... Testing from a DOS command prompt is the best thing to do because the server displays status messages in the DOS window. If something is wrong with your configuration, these messages will make it easier for you to identify and fix any problems. Make sure you are in the directory where the server is located, then enter this command: C:\mysql\bin> mysqld-max --standalone You should see the following messages as the server starts up:
InnoDB: The first specified datafile c:\ibdata\ibdata1 did not exist:
InnoDB: a new database to be created!
InnoDB: Setting file c:\ibdata\ibdata1 size to 209715200
InnoDB: Database physically writes the file full: wait...
InnoDB: Log file c:\iblogs\ib_logfile0 did not exist: new to be created
InnoDB: Setting log file c:\iblogs\ib_logfile0 size to 31457280
InnoDB: Log file c:\iblogs\ib_logfile1 did not exist: new to be created
InnoDB: Setting log file c:\iblogs\ib_logfile1 size to 31457280
InnoDB: Log file c:\iblogs\ib_logfile2 did not exist: new to be created
InnoDB: Setting log file c:\iblogs\ib_logfile2 size to 31457280
InnoDB: Doublewrite buffer not found: creating new
InnoDB: Doublewrite buffer created
InnoDB: creating foreign key constraint system tables
InnoDB: foreign key constraint system tables created
011024 10:58:25 InnoDB: Started
For further information about running MySQL on Windows, see *Note Windows::. Installing MySQL on Mac OS X ---------------------------- Beginning with MySQL 4.0.11, you can install MySQL on Mac OS X 10.2 ("Jaguar") using a Mac OS X `PKG' binary package instead of the binary tarball distribution. Please note that older versions of Mac OS X (e.g. 10.1.x) are not supported by this package! The package is located inside a disk image (`.dmg') file that you first need to mount by double-clicking its icon in the Finder. The Finder should then mount the image and display its contents.
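If you prefer to work from a terminal instead of the Finder, the disk image can usually also be mounted on the command line with the standard Mac OS X `hdiutil' tool. This is only an illustrative sketch; the file name below is a placeholder for whichever package you actually downloaded:
shell> hdiutil attach /path/to/mysql-package.dmg
The mounted image then shows up under `/Volumes', just as if it had been opened by double-clicking it in the Finder.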
*NOTE*: Before proceeding with the installation, please make sure that no other MySQL server is running! Please shut down all running MySQL instances before continuing, either by using the MySQL Manager Application (on Mac OS X Server) or via `mysqladmin shutdown' on the command line. To actually install the MySQL PKG, double-click the package icon. This will launch the Mac OS Package Installer, which will guide you through the installation of MySQL. The Mac OS X PKG of MySQL will install itself into `/usr/local/mysql-' and will also install a symbolic link `/usr/local/mysql', pointing to the new location. If a directory named `/usr/local/mysql' already exists, it will be renamed to `/usr/local/mysql.bak' first. Additionally, the installer will install the mysql grant tables by executing `mysql_install_db' after the installation. The installation layout is similar to that of the binary distribution; all MySQL binaries are located in the directory `/usr/local/mysql/bin'. The MySQL socket will be put into `/etc/mysql.sock' by default. *Note Installation layouts::. The installation requires a user account named `mysql' (which should exist by default on Mac OS X 10.2 and up). If you are running Mac OS X Server, you already have a version of MySQL installed: * Mac OS X Server 10.2-10.2.2 come with MySQL 3.23.51 installed * Mac OS X Server 10.2.3 and 10.2.4 ship with MySQL 3.23.53 This manual section covers the installation of the official MySQL Mac OS X PKG only. Make sure to read Apple's help about installing MySQL (run the "Help View" application, select "Mac OS X Server" help, do a search for "MySQL", and read the item entitled "Installing MySQL"). Especially note that the pre-installed version of MySQL on Mac OS X Server is started with the command `safe_mysqld' instead of `mysqld_safe'! If you previously used Marc Liyanage's MySQL packages for Mac OS X from `http://www.entropy.ch', you can simply follow the update instructions for packages using the binary installation layout as given on his pages. If you are upgrading from Marc's version or from the Mac OS X Server version of MySQL to the official MySQL PKG, you also need to convert the existing MySQL privilege tables. *Note Upgrading-from-3.23::. After the installation, you can start up MySQL by running the following commands in a terminal window. Please note that you need to have administrator privileges to perform this task!
shell> cd /usr/local/mysql
shell> sudo ./bin/mysqld_safe
(Enter your password)
(Press CTRL+Z)
shell> bg
(Press CTRL+D to exit the shell)
You should now be able to connect to the MySQL server, e.g. by running `/usr/local/mysql/bin/mysql'. To enable the automatic startup of MySQL on bootup, you can download Marc Liyanage's MySQL StartupItem from the following location: `http://www2.entropy.ch/download/mysql-startupitem.pkg.tar.gz' We plan to add a StartupItem to the official MySQL PKG in the near future. Please note that installing a new MySQL PKG does not remove the directory of an older installation - unfortunately the Mac OS X Installer does not yet offer the functionality required to properly upgrade previously installed packages. After you have copied over the MySQL database files from the previous version and have successfully started the new version, you should consider removing the old installation files to save disk space. Additionally, you should also remove older versions of the Package Receipt directories located in `/Library/Receipts/mysql-.pkg'.
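As an example of such a cleanup, the following commands would remove a renamed old installation directory and an old package receipt. This is only a sketch: the receipt name contains the old version number (not shown here), so double-check the exact paths on your system, and make sure you have already copied your database files over to the new installation before deleting anything:
shell> sudo rm -rf /usr/local/mysql.bak
shell> sudo rm -rf /Library/Receipts/mysql-OLD-VERSION.pkg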
General Installation Issues =========================== How to Get MySQL ---------------- Check the MySQL homepage (`http://www.mysql.com/') for information about the current version and for downloading instructions. Our main mirror is located at `http://mirrors.sunsite.dk/mysql/'. For a complete up-to-date list of MySQL web/download mirrors, see `http://www.mysql.com/downloads/mirrors.html'. There you will also find information about becoming a MySQL mirror site and how to report a bad or out-of-date mirror. Verifying Package Integrity Using `MD5 Checksums' or `GnuPG' ------------------------------------------------------------ After you have downloaded the MySQL package that suits your needs and before you attempt to install it, you should make sure it is intact and has not been tampered with. MySQL AB offers two means of integrity checking: `MD5 checksums' and cryptographic signatures using `GnuPG', the `GNU Privacy Guard'. Verifying the `MD5 Checksum' ---------------------------- After you have downloaded the package, you should check whether the MD5 checksum matches the one provided on the MySQL download pages. Each package has an individual checksum that you can verify with the following command: shell> md5sum <package> Note that not all operating systems support the `md5sum' command; on some it is simply called `md5', and others do not ship it at all. On Linux, it is part of the `GNU Text Utilities' package, which is available for a wide range of platforms. You can download the source code from `http://www.gnu.org/software/textutils/' as well. If you have `OpenSSL' installed, you can also use the command `openssl md5 <package>' instead. A DOS/Windows implementation of the `md5' command is available from `http://www.fourmilab.ch/md5/'. Example: shell> md5sum mysql-standard-4.0.10-gamma-pc-linux-i686.tar.gz 155836a7ed8c93aee6728a827a6aa153 mysql-standard-4.0.10-gamma-pc-linux-i686.tar.gz Check whether the resulting checksum matches the one printed on the download page right below the respective package. Most mirror sites also offer a file named `MD5SUMS', which includes the MD5 checksums for all files in the `Downloads' directory. Please note, however, that it is very easy to modify this file, so it is not a very reliable method! If in doubt, you should consult different mirror sites and compare the results. Signature Checking Using `GnuPG' -------------------------------- A more reliable method of verifying the integrity of a package is using cryptographic signatures. MySQL AB uses the `GNU Privacy Guard' (`GnuPG'), an `Open Source' alternative to the very well-known `Pretty Good Privacy' (`PGP') by Phil Zimmermann. See `http://www.gnupg.org/' and `http://www.openpgp.org/' for more information about `OpenPGP'/`GnuPG' and how to obtain and install `GnuPG' on your system. Most Linux distributions already ship with `GnuPG' installed by default. Beginning with MySQL 4.0.10 (February 2003), MySQL AB has started signing its downloadable packages with `GnuPG'. Cryptographic signatures are a much more reliable method of verifying the integrity and authenticity of a file. To verify the signature for a specific package, you first need to obtain a copy of MySQL AB's public GPG build key. You can either cut and paste it directly from here, or obtain it from `http://www.keyserver.net/'.
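If your `gpg' installation can talk to a keyserver, you can usually also fetch the key by its ID (shown below) instead of cutting and pasting it. The keyserver host here is only an example; any public keyserver that carries the key should work:
shell> gpg --keyserver www.keyserver.net --recv-keys 5072E1F5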
Key ID: pub 1024D/5072E1F5 2003-02-03 MySQL Package signing key (www.mysql.com) Fingerprint: A4A9 4068 76FC BD3C 4567 70C8 8C71 8D3B 5072 E1F5 Public Key (ASCII-armored): -----BEGIN PGP PUBLIC KEY BLOCK----- Version: GnuPG v1.0.6 (GNU/Linux) Comment: For info see http://www.gnupg.org mQGiBD4+owwRBAC14GIfUfCyEDSIePvEW3SAFUdJBtoQHH/nJKZyQT7h9bPlUWC3 RODjQReyCITRrdwyrKUGku2FmeVGwn2u2WmDMNABLnpprWPkBdCk96+OmSLN9brZ fw2vOUgCmYv2hW0hyDHuvYlQA/BThQoADgj8AW6/0Lo7V1W9/8VuHP0gQwCgvzV3 BqOxRznNCRCRxAuAuVztHRcEAJooQK1+iSiunZMYD1WufeXfshc57S/+yeJkegNW hxwR9pRWVArNYJdDRT+rf2RUe3vpquKNQU/hnEIUHJRQqYHo8gTxvxXNQc7fJYLV K2HtkrPbP72vwsEKMYhhr0eKCbtLGfls9krjJ6sBgACyP/Vb7hiPwxh6rDZ7ITnE kYpXBACmWpP8NJTkamEnPCia2ZoOHODANwpUkP43I7jsDmgtobZX9qnrAXw+uNDI QJEXM6FSbi0LLtZciNlYsafwAPEOMDKpMqAK6IyisNtPvaLd8lH0bPAnWqcyefep rv0sxxqUEMcM3o7wwgfN83POkDasDbs3pjwPhxvhz6//62zQJ7Q7TXlTUUwgUGFj a2FnZSBzaWduaW5nIGtleSAod3d3Lm15c3FsLmNvbSkgPGJ1aWxkQG15c3FsLmNv bT6IXQQTEQIAHQUCPj6jDAUJCWYBgAULBwoDBAMVAwIDFgIBAheAAAoJEIxxjTtQ cuH1cY4AnilUwTXn8MatQOiG0a/bPxrvK/gCAJ4oinSNZRYTnblChwFaazt7PF3q zIhMBBMRAgAMBQI+PqPRBYMJZgC7AAoJEElQ4SqycpHyJOEAn1mxHijft00bKXvu cSo/pECUmppiAJ41M9MRVj5VcdH/KN/KjRtW6tHFPYhMBBMRAgAMBQI+QoIDBYMJ YiKJAAoJELb1zU3GuiQ/lpEAoIhpp6BozKI8p6eaabzF5MlJH58pAKCu/ROofK8J Eg2aLos+5zEYrB/LsrkCDQQ+PqMdEAgA7+GJfxbMdY4wslPnjH9rF4N2qfWsEN/l xaZoJYc3a6M02WCnHl6ahT2/tBK2w1QI4YFteR47gCvtgb6O1JHffOo2HfLmRDRi Rjd1DTCHqeyX7CHhcghj/dNRlW2Z0l5QFEcmV9U0Vhp3aFfWC4Ujfs3LU+hkAWzE 7zaD5cH9J7yv/6xuZVw411x0h4UqsTcWMu0iM1BzELqX1DY7LwoPEb/O9Rkbf4fm Le11EzIaCa4PqARXQZc4dhSinMt6K3X4BrRsKTfozBu74F47D8Ilbf5vSYHbuE5p /1oIDznkg/p8kW+3FxuWrycciqFTcNz215yyX39LXFnlLzKUb/F5GwADBQf+Lwqq a8CGrRfsOAJxim63CHfty5mUc5rUSnTslGYEIOCR1BeQauyPZbPDsDD9MZ1ZaSaf anFvwFG6Llx9xkU7tzq+vKLoWkm4u5xf3vn55VjnSd1aQ9eQnUcXiL4cnBGoTbOW I39EcyzgslzBdC++MPjcQTcA7p6JUVsP6oAB3FQWg54tuUo0Ec8bsM8b3Ev42Lmu QT5NdKHGwHsXTPtl0klk4bQk4OajHsiy1BMahpT27jWjJlMiJc+IWJ0mghkKHt92 6s/ymfdf5HkdQ1cyvsz5tryVI3Fx78XeSYfQvuuwqp2H139pXGEkg0n6KdUOetdZ Whe70YGNPw1yjWJT1IhMBBgRAgAMBQI+PqMdBQkJZgGAAAoJEIxxjTtQcuH17p4A n3r1QpVC9yhnW2cSAjq+kr72GX0eAJ4295kl6NxYEuFApmr1+0uUq/SlsQ== =YJkx -----END PGP PUBLIC KEY BLOCK----- You can import this key into your public `GPG' keyring by using `gpg --import'. See the `GPG' documentation for more info on how to work with public keys. After you have downloaded and imported the public build key, now download your desired MySQL package and the corresponding signature, which is also available from the download page. The signature has the file name extension `.asc'. For example, the signature for `mysql-standard-4.0.10-gamma-pc-linux-i686.tar.gz' would be `mysql-standard-4.0.10-gamma-pc-linux-i686.tar.gz.asc'. Make sure that both files are stored in the same directory and then run the following command to verify the signature for this file: shell> gpg --verify .asc Example: shell> gpg --verify mysql-standard-4.0.10-gamma-pc-linux-i686.tar.gz.asc gpg: Warning: using insecure memory! gpg: Signature made Mon 03 Feb 2003 08:50:39 PM MET using DSA key ID 5072E1F5 gpg: Good signature from "MySQL Package signing key (www.mysql.com) " The "Good signature" message indicates that everything is all right. For `RPM' packages, there is no separate signature - `RPM' packages actually have a built-in `GPG' signature and `MD5 checksum'. 
You can verify them by running the following command: shell> rpm --checksig <package>.rpm Example: shell> rpm --checksig MySQL-server-4.0.10-0.i386.rpm MySQL-server-4.0.10-0.i386.rpm: md5 gpg OK *Note:* If you are using RPM 4.1 and it complains about `(GPG) NOT OK (MISSING KEYS: GPG#5072e1f5)' (even though you have imported the key into your GPG public keyring), you need to import the key into the RPM keyring first. RPM 4.1 no longer uses your GPG keyring (or GPG itself), but rather maintains its own keyring (because it is a system-wide application and the GPG public keyring is a user-specific file). To import the MySQL public key into the RPM keyring, please use the following command: shell> rpm --import <key file> Example: shell> rpm --import mysql_pubkey.asc In case you notice that the `MD5 checksum' or `GPG' signatures do not match, first try to download the respective package one more time, maybe from another mirror site. If you repeatedly cannot successfully verify the integrity of the package, please notify us about such incidents, including the full package name and the download site you have been using, at or . Operating Systems Supported by MySQL ------------------------------------ We use GNU Autoconf, so it is possible to port MySQL to all modern systems with working POSIX threads and a C++ compiler. (To compile only the client code, a C++ compiler is required but not threads.) We use and develop the software ourselves primarily on Sun Solaris (Versions 2.5 - 2.7) and SuSE Linux Version 7.x. Note that for many operating systems, the native thread support works only in the latest versions. MySQL has been reported to compile successfully on the following operating system/thread package combinations: * AIX 4.x, 5.x with native threads. *Note IBM-AIX::. * Amiga. * BSDI 2.x with the MIT-pthreads package. *Note BSDI::. * BSDI 3.0, 3.1 and 4.x with native threads. *Note BSDI::. * DEC Unix 4.x with native threads. *Note Alpha-DEC-UNIX::. * FreeBSD 2.x with the MIT-pthreads package. *Note FreeBSD::. * FreeBSD 3.x and 4.x with native threads. *Note FreeBSD::. * HP-UX 10.20 with the DCE threads or the MIT-pthreads package. *Note HP-UX 10.20::. * HP-UX 11.x with the native threads. *Note HP-UX 11.x::. * Linux 2.0+ with LinuxThreads 0.7.1+ or `glibc' 2.0.7+. *Note Linux::. * Mac OS X. *Note Mac OS X::. * NetBSD 1.3/1.4 Intel and NetBSD 1.3 Alpha (requires GNU make). *Note NetBSD::. * OpenBSD > 2.5 with native threads. OpenBSD < 2.5 with the MIT-pthreads package. *Note OpenBSD::. * OS/2 Warp 3, FixPack 29 and OS/2 Warp 4, FixPack 4. *Note OS/2::. * SGI Irix 6.x with native threads. *Note SGI-Irix::. * Solaris 2.5 and above with native threads on SPARC and x86. *Note Solaris::. * SunOS 4.x with the MIT-pthreads package. *Note Solaris::. * Caldera (SCO) OpenServer with a recent port of the FSU Pthreads package. *Note Caldera::. * Caldera (SCO) UnixWare 7.0.1. *Note Caldera Unixware::. * Tru64 Unix. * Windows 9x, Me, NT, 2000 and XP. *Note Windows::. Note that not all platforms are suited equally well for running MySQL. How well a certain platform is suited for a high-load mission-critical MySQL server is determined by the following factors: * General stability of the thread library. A platform may have an excellent reputation otherwise, but if the thread library is unstable in the code that is called by MySQL, even if everything else is perfect, MySQL will be only as stable as the thread library. * The ability of the kernel and/or thread library to take advantage of *SMP* on multi-processor systems.
In other words, when a process creates a thread, it should be possible for that thread to run on a different CPU than the original process. * The ability of the kernel and/or the thread library to run many threads which acquire and release a mutex over a short critical region frequently without excessive context switches. In other words, if the implementation of `pthread_mutex_lock()' is too anxious to yield CPU time, this will hurt MySQL tremendously. If this issue is not taken care of, adding extra CPUs will actually make MySQL slower. * General filesystem stability/performance. * The ability of the filesystem to deal with large files at all, and to deal with them efficiently, if your tables are big. * Our level of expertise here at MySQL AB with the platform. If we know a platform well, we introduce platform-specific optimisations/fixes enabled at compile time. We can also provide advice on configuring your system optimally for MySQL. * The amount of testing of similar configurations we have done internally. * The number of users who have successfully run MySQL on that platform in similar configurations. If this number is high, the chances of hitting platform-specific surprises are much smaller. Based on the preceding criteria, the best platforms for running MySQL at this point are x86 with SuSE Linux 7.1, 2.4 kernel, and ReiserFS (or any similar Linux distribution) and SPARC with Solaris 2.7 or 2.8. FreeBSD comes third, but we really hope it will join the top club once the thread library is improved. We also hope that at some point we will be able to include in the top category all other platforms on which MySQL compiles and runs okay, but not quite with the same level of stability and performance. This will require some effort on our part in cooperation with the developers of the OS/library components MySQL depends upon. If you are interested in making one of those components better, are in a position to influence their development, and need more detailed instructions on what MySQL needs to run better, send an e-mail to . Please note that the preceding comparison is not to say that one OS is better or worse than the other in general. We are talking about choosing a particular OS for a dedicated purpose (running MySQL), and compare platforms in that regard only. With this in mind, the result of this comparison would be different if we took more issues into account. And in some cases, the reason one OS is better than another could simply be that we have put forth more effort into testing on and optimising for that particular platform. We are just stating our observations to help you decide which platform to use for MySQL in your setup. Which MySQL Version to Use -------------------------- The first decision to make is whether you want to use the latest development release or the last stable release: * Normally, if you are beginning to use MySQL for the first time or trying to port it to some system for which there is no binary distribution, we recommend going with the stable release (currently version 3.23). Note that all MySQL releases are checked with the MySQL benchmarks and an extensive test suite before each release (even the development releases). * Otherwise, if you are running an old system and want to upgrade, but don't want to take chances with a non-seamless upgrade, you should upgrade to the latest version in the same branch you are using (where only the last version number is newer than yours). We have tried to fix only fatal bugs and make small, relatively safe changes to that version.
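If you are not sure which version you are currently running before doing such an in-branch upgrade, the standard client tools will tell you. For example (assuming the client binaries are in your path and the server is running locally):
shell> mysql --version
shell> mysqladmin -u root -p version
The first command reports the version of the `mysql' client; the second also reports the version of the running server.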
The second decision to make is whether you want to use a source distribution or a binary distribution. In most cases you should probably use a binary distribution, if one exists for your platform, as this generally will be easier to install than a source distribution. In the following cases you probably will be better off with a source installation: * If you want to install MySQL at some explicit location. (The standard binary distributions are "ready to run" at any place, but you may want to get even more flexibility). * To be able to satisfy different user requirements, we are providing two different binary versions: one compiled with the non-transactional storage engines (a small, fast binary), and one configured with the most important extended options like transaction-safe tables. Both versions are compiled from the same source distribution. All native `MySQL' clients can connect to both MySQL versions. The extended MySQL binary distribution is marked with the `-max' suffix and is configured with the same options as `mysqld-max'. *Note `mysqld-max': mysqld-max. If you want to use the MySQL-Max RPM, you must first install the standard MySQL RPM. * If you want to configure `mysqld' with some extra features that are not in the standard binary distributions. Here is a list of the most common extra options that you may want to use: * `--with-innodb' * `--with-berkeley-db' * `--with-raid' * `--with-libwrap' * `--with-named-z-lib (This is done for some of the binaries)' * `--with-debug[=full]' * The default binary distribution is normally compiled with support for all character sets and should work on a variety of processors from the same processor family. If you want a faster MySQL server you may want to recompile it with support for only the character sets you need, use a better compiler (like `pgcc'), or use compiler options that are better optimised for your processor. * If you have found a bug and reported it to the MySQL development team you will probably receive a patch that you need to apply to the source distribution to get the bug fixed. * If you want to read (and/or modify) the C and C++ code that makes up MySQL, you should get a source distribution. The source code is always the ultimate manual. Source distributions also contain more tests and examples than binary distributions. The MySQL naming scheme uses release numbers that consist of three numbers and a suffix. For example, a release name like `mysql-3.21.17-beta' is interpreted like this: * The first number (`3') describes the file format. All Version 3 releases have the same file format. * The second number (`21') is the release level. Normally there are two to choose from. One is the release/stable branch (currently `23') and the other is the development branch (currently `4.0'). Normally both are stable, but the development version may have quirks, may be missing documentation on new features, or may fail to compile on some systems. * The third number (`17') is the version number within the release level. This is incremented for each new distribution. Usually you want the latest version for the release level you have chosen. * The suffix (`beta') indicates the stability level of the release. The possible suffixes are: - `alpha' indicates that the release contains some large section of new code that hasn't been 100% tested. Known bugs (usually there are none) should be documented in the News section. *Note News::. There are also new commands and extensions in most alpha releases. 
Active development that may involve major code changes can occur on an alpha release, but everything will be tested before doing a release. There should be no known bugs in any MySQL release. - `beta' means that all new code has been tested. No major new features that could cause corruption on old code are added. There should be no known bugs. A version changes from alpha to beta when there haven't been any reported fatal bugs within an alpha version for at least a month and we don't plan to add any features that could make any old command more unreliable. - `gamma' is a beta that has been around a while and seems to work fine. Only minor fixes are added. This is what many other companies call a release. - If there is no suffix, it means that the version has been run for a while at many different sites with no reports of bugs other than platform-specific bugs. Only critical bug fixes are applied to the release. This is what we call a stable release. All versions of MySQL are run through our standard tests and benchmarks to ensure that they are relatively safe to use. Because the standard tests are extended over time to check for all previously found bugs, the test suite keeps getting better. Note that all releases have been tested at least with: An internal test suite This is part of a production system for a customer. It has many tables with hundreds of megabytes of data. The MySQL benchmark suite This runs a range of common queries. It is also a test to see whether the latest batch of optimisations actually made the code faster. *Note MySQL Benchmarks::. The `crash-me' test This tries to determine what features the database supports and what its capabilities and limitations are. *Note MySQL Benchmarks::. Another test is that we use the newest MySQL version in our internal production environment, on at least one machine. We have more than 100 gigabytes of data to work with. Installation Layouts -------------------- This section describes the default layout of the directories created by installing binary and source distributions. A binary distribution is installed by unpacking it at the installation location you choose (typically `/usr/local/mysql') and creates the following directories in that location: *Directory* *Contents of directory* `bin' Client programs and the `mysqld' server `data' Log files, databases `include' Include (header) files `lib' Libraries `scripts' `mysql_install_db' `share/mysql'Error message files `sql-bench' Benchmarks A source distribution is installed after you configure and compile it. By default, the installation step installs files under `/usr/local', in the following subdirectories: *Directory* *Contents of directory* `bin' Client programs and scripts `include/mysql'Include (header) files `info' Documentation in Info format `lib/mysql' Libraries `libexec' The `mysqld' server `share/mysql'Error message files `sql-bench' Benchmarks and `crash-me' test `var' Databases and log files Within an installation directory, the layout of a source installation differs from that of a binary installation in the following ways: * The `mysqld' server is installed in the `libexec' directory rather than in the `bin' directory. * The data directory is `var' rather than `data'. * `mysql_install_db' is installed in the `/usr/local/bin' directory rather than in `/usr/local/mysql/scripts'. * The header file and library directories are `include/mysql' and `lib/mysql' rather than `include' and `lib'. 
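If you would like a source installation to end up grouped under a single directory, much as a binary distribution is, you can point `configure' at a dedicated prefix. The following is only a sketch using standard `configure' options; the paths are examples:
shell> ./configure --prefix=/usr/local/mysql --localstatedir=/usr/local/mysql/data
shell> make
shell> make install
With options like these, the client programs, libraries, and the `mysqld' server are all installed under `/usr/local/mysql', and the databases and log files go into the directory given by `--localstatedir'.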
You can create your own binary installation from a compiled source distribution by executing the script `scripts/make_binary_distribution'. How and When Updates Are Released --------------------------------- MySQL is evolving quite rapidly here at MySQL AB and we want to share this with other MySQL users. We try to make a release when we have very useful features that others seem to have a need for. We also try to help out users who request features that are easy to implement. We take note of what our licensed users want to have, and we especially take note of what our extended e-mail support customers want, and try to help them out. No one has to download a new release. The News section will tell you if the new release has something you really want. *Note News::. We use the following policy when updating MySQL: * For each minor update, the last number in the version string is incremented. When there are major new features or minor incompatibilities with previous versions, the second number in the version string is incremented. When the file format changes, the first number is increased. * Stable-tested releases are meant to appear about 1-2 times a year, but if small bugs are found, a release with only bug fixes will be released. * Working releases/bug fixes to old releases are meant to appear about every 1-8 weeks. * Binary distributions for some platforms will be made by us for major releases. Other people may make binary distributions for other systems, but probably less frequently. * We usually make patches available as soon as we have located and fixed small bugs. They are posted to and will be added to the next release. * For non-critical but annoying bugs, we will add the fixes to the MySQL source repository and they will be included in the next release. * If there is, by any chance, a fatal bug in a release, we will make a new release as soon as possible. We would like other companies to do this, too. The current stable release is Version 3.23; we have already moved active development to Version 4.0. Bugs will still be fixed in the stable version. We don't believe in a complete freeze, as this also leaves out bug fixes and things that "must be done." "Somewhat frozen" means that we may add small things that "almost surely will not affect anything that's already working." MySQL uses a slightly different naming scheme from most other products. In general it's relatively safe to use any version that has been out for a couple of weeks without being replaced with a new version. *Note Which version::. MySQL Binaries Compiled by MySQL AB ----------------------------------- As a service, we at MySQL AB provide a set of binary distributions of MySQL that are compiled at our site or at sites where customers have kindly given us access to their machines. These distributions are generated using the script `Build-tools/Do-compile', which compiles the source code and creates the binary `tar.gz' archive using `scripts/make_binary_distribution'. These binaries are configured and built with the following compilers and options.
Binaries built on MySQL AB development systems: Linux 2.4.xx i386 with `gcc' 2.95.3 `CFLAGS="-O2 -mcpu=pentiumpro" CXX=gcc CXXFLAGS="-O2 -mcpu=pentiumpro -felide-constructors" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --enable-assembler --disable-shared --with-client-ldflags=-all-static --with-mysqld-ldflags=-all-static' Linux 2.4.xx ia64 with `ecc' (Intel C++ Itanium Compiler 7.0) `CC=ecc CFLAGS=-tpp1 CXX=ecc CXXFLAGS=-tpp1 ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile' Linux 2.4.xx alpha with `ccc' (Compaq C V6.2-505 / Compaq C++ V6.3-006) `CC=ccc CFLAGS="-fast -arch generic" CXX=cxx CXXFLAGS="-fast -arch generic -noexceptions -nortti" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --with-mysqld-ldflags=-non_shared --with-client-ldflags=-non_shared --disable-shared' Linux 2.2.xx sparc with `egcs' 1.1.2 `CC=gcc CFLAGS="-O3 -fno-omit-frame-pointer" CXX=gcc CXXFLAGS="-O3 -fno-omit-frame-pointer -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --enable-assembler --disable-shared' Linux 2.4.xx s390 with `gcc' 2.95.3 `CFLAGS="-O2" CXX=gcc CXXFLAGS="-O2 -felide-constructors" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --disable-shared --with-client-ldflags=-all-static --with-mysqld-ldflags=-all-static' Sun Solaris 2.8 sparc with `gcc' 3.2 `CC=gcc CFLAGS="-O3 -fno-omit-frame-pointer" CXX=gcc CXXFLAGS="-O3 -fno-omit-frame-pointer -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --enable-assembler --with-named-z-libs=no --with-named-curses-libs=-lcurses --disable-shared' Sun Solaris 2.9 sparc with `gcc' 2.95.3 `CC=gcc CFLAGS="-O3 -fno-omit-frame-pointer" CXX=gcc CXXFLAGS="-O3 -fno-omit-frame-pointer -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --enable-assembler --with-named-curses-libs=-lcurses --disable-shared' Sun Solaris 2.9 sparc with `cc-5.0' (Sun Forte 5.0) `CC=cc-5.0 CXX=CC ASFLAGS="-xarch=v9" CFLAGS="-Xa -xstrconst -mt -D_FORTEC_ -xarch=v9" CXXFLAGS="-noex -mt -D_FORTEC_ -xarch=v9" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --enable-assembler --with-named-z-libs=no --enable-thread-safe-client --disable-shared' IBM AIX 4.3.2 ppc with `gcc' 3.2.1 `CFLAGS="-O2 -mcpu=powerpc -Wa,-many " CXX=gcc CXXFLAGS="-O2 -mcpu=powerpc -Wa,-many -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --with-named-z-libs=no --disable-shared' IBM AIX 5.1.0 ppc with `gcc' 3.2.1 `CFLAGS="-O2 -mcpu=powerpc -Wa,-many" CXX=gcc CXXFLAGS="-O2 -mcpu=powerpc -Wa,-many -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --with-server-suffix="-pro" --enable-thread-safe-client --enable-local-infile --with-named-z-libs=no --disable-shared --with-innodb' HP-UX 10.20 pa-risc1.1 with `gcc' 3.1 `CFLAGS="-DHPUX -I/opt/dce/include -O3 -fPIC" 
CXX=gcc CXXFLAGS="-DHPUX -I/opt/dce/include -felide-constructors -fno-exceptions -fno-rtti -O3 -fPIC" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --with-pthread --with-named-thread-libs=-ldce --with-lib-ccflags=-fPIC --disable-shared' HP-UX 11.11 pa-risc2.0 with `aCC' (HP ANSI C++ B3910B A.03.33) `CC=cc CXX=aCC CFLAGS=+DD64 CXXFLAGS=+DD64 ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --disable-shared' Apple Mac OS X 10.2 powerpc with `gcc' 3.1 `CC=gcc CFLAGS="-O3 -fno-omit-frame-pointer" CXX=gcc CXXFLAGS="-O3 -fno-omit-frame-pointer -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --disable-shared' FreeBSD 4.7 i386 with `gcc' 2.95.4 `CFLAGS=-DHAVE_BROKEN_REALPATH ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --enable-assembler --with-named-z-libs=not-used --disable-shared' The following binaries are built on third-party systems kindly provided to MySQL AB by other users. Please note that these are only provided as a courtesy. Since MySQL AB does not have full control over these systems, we can only provide limited support for the binaries built on these systems. SCO Unix 3.2v5.0.6 i386 with `gcc' 2.95.3 `CFLAGS="-O3 -mpentium" LDFLAGS=-static CXX=gcc CXXFLAGS="-O3 -mpentium -felide-constructors" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --with-named-z-libs=no --enable-thread-safe-client --disable-shared' Caldera Open Unix 8.0.0 i386 with `CC' 3.2 `CC=cc CFLAGS="-O" CXX=CC ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --with-named-z-libs=no --enable-thread-safe-client --disable-shared' Compaq Tru64 OSF/1 V5.1 732 alpha with `cc/cxx' (Compaq C V6.3-029i / DIGITAL C++ V6.1-027) `CC="cc -pthread" CFLAGS="-O4 -ansi_alias -ansi_args -fast -inline speed -speculate all" CXX="cxx -pthread" CXXFLAGS="-O4 -ansi_alias -fast -inline speed -speculate all -noexceptions -nortti" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --with-prefix=/usr/local/mysql --with-named-thread-libs="-lpthread -lmach -lexc -lc" --disable-shared --with-mysqld-ldflags=-all-static' SGI Irix 6.5 IP32 with `gcc' 3.0.1 `CC=gcc CFLAGS="-O3 -fno-omit-frame-pointer" CXXFLAGS="-O3 -fno-omit-frame-pointer -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --disable-shared' The following compile options have been used for binary packages MySQL AB used to provide in the past. These binaries are currently not being updated anymore, but the compile options are kept here for reference purposes.
Linux 2.2.x with x686 with `gcc' 2.95.2 `CFLAGS="-O3 -mpentiumpro" CXX=gcc CXXFLAGS="-O3 -mpentiumpro -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --enable-assembler --with-mysqld-ldflags=-all-static --disable-shared --with-extra-charsets=complex' SunOS 4.1.4 2 sun4c with `gcc' 2.7.2.1 `CC=gcc CXX=gcc CXXFLAGS="-O3 -felide-constructors" ./configure --prefix=/usr/local/mysql --disable-shared --with-extra-charsets=complex --enable-assembler' SunOS 5.5.1 (and above) sun4u with `egcs' 1.0.3a or 2.90.27 or gcc 2.95.2 and newer `CC=gcc CFLAGS="-O3" CXX=gcc CXXFLAGS="-O3 -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --with-low-memory --with-extra-charsets=complex --enable-assembler' SunOS 5.6 i86pc with `gcc' 2.8.1 `CC=gcc CXX=gcc CXXFLAGS=-O3 ./configure --prefix=/usr/local/mysql --with-low-memory --with-extra-charsets=complex' BSDI BSD/OS 3.1 i386 with `gcc' 2.7.2.1 `CC=gcc CXX=gcc CXXFLAGS=-O ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex' BSDI BSD/OS 2.1 i386 with `gcc' 2.7.2 `CC=gcc CXX=gcc CXXFLAGS=-O3 ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex' AIX 2 4 with `gcc' 2.7.2.2 `CC=gcc CXX=gcc CXXFLAGS=-O3 ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex' Anyone who has more optimal options for any of the preceding configurations listed can always mail them to the developer's mailing list at . RPM distributions prior to MySQL Version 3.22 are user-contributed. Beginning with Version 3.22, the RPMs are generated by us at MySQL AB. If you want to compile a debug version of MySQL, you should add `--with-debug' or `--with-debug=full' to the preceding configure lines and remove any `-fomit-frame-pointer' options. For the Windows distribution, please see *Note Windows installation::. Installing a MySQL Binary Distribution -------------------------------------- See also *Note Windows binary installation::, *Note Linux-RPM::, and *Note Building clients::. You need the following tools to install a MySQL binary distribution: * GNU `gunzip' to uncompress the distribution. * A reasonable `tar' to unpack the distribution. GNU `tar' is known to work. Sun `tar' is known to have problems. An alternative installation method under Linux is to use RPM-based (RPM Package Manager) distributions. *Note Linux-RPM::. If you run into problems, *please always use `mysqlbug'* when posting questions to . Even if the problem isn't a bug, `mysqlbug' gathers system information that will help others solve your problem. By not using `mysqlbug', you lessen the likelihood of getting a solution to your problem! You will find `mysqlbug' in the `bin' directory after you unpack the distribution. *Note Bug reports::. The basic commands you must execute to install and use a MySQL binary distribution are: shell> groupadd mysql shell> useradd -g mysql mysql shell> cd /usr/local shell> gunzip < /path/to/mysql-VERSION-OS.tar.gz | tar xvf - shell> ln -s full-path-to-mysql-VERSION-OS mysql shell> cd mysql shell> scripts/mysql_install_db shell> chown -R root . shell> chown -R mysql data shell> chgrp -R mysql . shell> bin/safe_mysqld --user=mysql & or shell> bin/mysqld_safe --user=mysql & if you are running MySQL 4.x You can add new users using the `bin/mysql_setpermission' script if you install the `DBI' and `Msql-Mysql-modules' Perl modules. A more detailed description follows. 
To install a binary distribution, follow these steps, then proceed to *Note Post-installation::, for post-installation setup and testing: 1. Pick the directory under which you want to unpack the distribution, and move into it. In the following example, we unpack the distribution under `/usr/local' and create a directory `/usr/local/mysql' into which MySQL is installed. (The following instructions, therefore, assume you have permission to create files in `/usr/local'. If that directory is protected, you will need to perform the installation as `root'.) 2. Obtain a distribution file from one of the sites listed in *Note Getting MySQL: Getting MySQL. MySQL binary distributions are provided as compressed `tar' archives and have names like `mysql-VERSION-OS.tar.gz', where `VERSION' is a number (for example, `3.21.15'), and `OS' indicates the type of operating system for which the distribution is intended (for example, `pc-linux-gnu-i586'). 3. If you see a binary distribution marked with the `-max' suffix, this means that the binary has support for transaction-safe tables and other features. *Note `mysqld-max': mysqld-max. Note that all binaries are built from the same MySQL source distribution. 4. Add a user and group for `mysqld' to run as: shell> groupadd mysql shell> useradd -g mysql mysql These commands add the `mysql' group and the `mysql' user. The syntax for `useradd' and `groupadd' may differ slightly on different versions of Unix. They may also be called `adduser' and `addgroup'. You may wish to call the user and group something else instead of `mysql'. 5. Change into the intended installation directory: shell> cd /usr/local 6. Unpack the distribution and create the installation directory: shell> gunzip < /path/to/mysql-VERSION-OS.tar.gz | tar xvf - shell> ln -s full-path-to-mysql-VERSION-OS mysql The first command creates a directory named `mysql-VERSION-OS'. The second command makes a symbolic link to that directory. This lets you refer more easily to the installation directory as `/usr/local/mysql'. 7. Change into the installation directory: shell> cd mysql You will find several files and subdirectories in the `mysql' directory. The most important for installation purposes are the `bin' and `scripts' subdirectories. `bin' This directory contains client programs and the server. You should add the full pathname of this directory to your `PATH' environment variable so that your shell finds the MySQL programs properly. *Note Environment variables::. `scripts' This directory contains the `mysql_install_db' script used to initialise the `mysql' database containing the grant tables that store the server access permissions. 8. If you would like to use `mysqlaccess' and have the MySQL distribution in some non-standard place, you must change the location where `mysqlaccess' expects to find the `mysql' client. Edit the `bin/mysqlaccess' script at approximately line 18. Search for a line that looks like this: $MYSQL = '/usr/local/bin/mysql'; # path to mysql executable Change the path to reflect the location where `mysql' is actually stored on your system. If you do not do this, you will get a `Broken pipe' error when you run `mysqlaccess'. 9. Create the MySQL grant tables (necessary only if you haven't installed MySQL before): shell> scripts/mysql_install_db Note that MySQL versions older than Version 3.22.10 started the MySQL server when you run `mysql_install_db'. This is no longer true! 10.
Change ownership of binaries to `root' and ownership of the data directory to the user that you will run `mysqld' as: shell> chown -R root /usr/local/mysql/. shell> chown -R mysql /usr/local/mysql/data shell> chgrp -R mysql /usr/local/mysql/. The first command changes the `owner' attribute of the files to the `root' user, the second one changes the `owner' attribute of the data directory to the `mysql' user, and the third one changes the `group' attribute to the `mysql' group. 11. If you want to install support for the Perl `DBI'/`DBD' interface, see *Note Perl support::. 12. If you would like MySQL to start automatically when you boot your machine, you can copy `support-files/mysql.server' to the location where your system has its startup files. More information can be found in the `support-files/mysql.server' script itself and in *Note Automatic start::. After everything has been unpacked and installed, you should initialise and test your distribution. You can start the MySQL server with the following command: shell> bin/safe_mysqld --user=mysql & Now proceed to *Note `safe_mysqld': safe_mysqld, and *Note Post-installation::. Installing a MySQL Source Distribution ====================================== Before you proceed with the source installation, check first to see if our binary is available for your platform and if it will work for you. We put a lot of effort into making sure that our binaries are built with the best possible options. You need the following tools to build and install MySQL from source: * GNU `gunzip' to uncompress the distribution. * A reasonable `tar' to unpack the distribution. GNU `tar' is known to work. Sun `tar' is known to have problems. * A working ANSI C++ compiler. `gcc' >= 2.95.2, `egcs' >= 1.0.2 or `egcs 2.91.66', SGI C++, and SunPro C++ are some of the compilers that are known to work. `libg++' is not needed when using `gcc'. `gcc' 2.7.x has a bug that makes it impossible to compile some perfectly legal C++ files, such as `sql/sql_base.cc'. If you only have `gcc' 2.7.x, you must upgrade your `gcc' to be able to compile MySQL. `gcc' 2.8.1 is also known to have problems on some platforms, so it should be avoided if a new compiler exists for the platform. `gcc' >= 2.95.2 is recommended when compiling MySQL Version 3.23.x. * A good `make' program. GNU `make' is always recommended and is sometimes required. If you have problems, we recommend trying GNU `make' 3.75 or newer. If you are using a recent version of `gcc', recent enough to understand the `-fno-exceptions' option, it is *very important* that you use it. Otherwise, you may compile a binary that crashes randomly. We also recommend that you use `-felide-constructors' and `-fno-rtti' along with `-fno-exceptions'. When in doubt, do the following: CFLAGS="-O3" CXX=gcc CXXFLAGS="-O3 -felide-constructors -fno-exceptions \ -fno-rtti" ./configure --prefix=/usr/local/mysql --enable-assembler \ --with-mysqld-ldflags=-all-static On most systems this will give you a fast and stable binary. If you run into problems, *please always use `mysqlbug'* when posting questions to . Even if the problem isn't a bug, `mysqlbug' gathers system information that will help others solve your problem. By not using `mysqlbug', you lessen the likelihood of getting a solution to your problem! You will find `mysqlbug' in the `scripts' directory after you unpack the distribution. *Note Bug reports::. 
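For example, with the GNU versions of these tools you can quickly check what is installed before you start the build (the exact output will of course differ on your system):
     shell> gcc --version
     shell> make --version
     shell> tar --version
     shell> gunzip --version
If `gcc' reports a version older than 2.95.2, you should upgrade it before compiling MySQL Version 3.23.x, as noted above.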
Quick Installation Overview --------------------------- The basic commands you must execute to install a MySQL source distribution are: shell> groupadd mysql shell> useradd -g mysql mysql shell> gunzip < mysql-VERSION.tar.gz | tar -xvf - shell> cd mysql-VERSION shell> ./configure --prefix=/usr/local/mysql shell> make shell> make install shell> scripts/mysql_install_db shell> chown -R root /usr/local/mysql shell> chown -R mysql /usr/local/mysql/var shell> chgrp -R mysql /usr/local/mysql shell> cp support-files/my-medium.cnf /etc/my.cnf shell> /usr/local/mysql/bin/safe_mysqld --user=mysql & or shell> /usr/local/mysql/bin/mysqld_safe --user=mysql & if you are running MySQL 4.x. If you want to have support for InnoDB tables, you should edit the `/etc/my.cnf' file and remove the `#' character before the parameter that starts with `innodb_...'. *Note Option files::, and *Note InnoDB start::. If you start from a source RPM, do the following: shell> rpm --rebuild --clean MySQL-VERSION.src.rpm This will make a binary RPM that you can install. You can add new users using the `bin/mysql_setpermission' script if you install the `DBI' and `Msql-Mysql-modules' Perl modules. A more detailed description follows. To install a source distribution, follow these steps, then proceed to *Note Post-installation::, for post-installation initialisation and testing: 1. Pick the directory under which you want to unpack the distribution, and move into it. 2. Obtain a distribution file from one of the sites listed in *Note Getting MySQL: Getting MySQL. 3. If you are interested in using Berkeley DB tables with MySQL, you will need to obtain a patched version of the Berkeley DB source code. Please read the chapter on Berkeley DB tables before proceeding. *Note BDB::. MySQL source distributions are provided as compressed `tar' archives and have names like `mysql-VERSION.tar.gz', where `VERSION' is a number like 4.0.12. 4. Add a user and group for `mysqld' to run as: shell> groupadd mysql shell> useradd -g mysql mysql These commands add the `mysql' group and the `mysql' user. The syntax for `useradd' and `groupadd' may differ slightly on different versions of Unix. They may also be called `adduser' and `addgroup'. You may wish to call the user and group something else instead of `mysql'. 5. Unpack the distribution into the current directory: shell> gunzip < /path/to/mysql-VERSION.tar.gz | tar xvf - This command creates a directory named `mysql-VERSION'. 6. Change into the top-level directory of the unpacked distribution: shell> cd mysql-VERSION Note that currently you must configure and build MySQL from this top-level directory. You cannot build it in a different directory. 7. Configure the release and compile everything: shell> ./configure --prefix=/usr/local/mysql shell> make When you run `configure', you might want to specify some options. Run `./configure --help' for a list of options. *Note `configure' options: configure options, discusses some of the more useful options. If `configure' fails, and you are going to send mail to to ask for assistance, please include any lines from `config.log' that you think can help solve the problem. Also include the last couple of lines of output from `configure' if `configure' aborts. Post the bug report using the `mysqlbug' script. *Note Bug reports::. If the compile fails, see *Note Compilation problems::, for help with a number of common problems. 8. Install everything: shell> make install You might need to run this command as `root'. 9. 
Create the MySQL grant tables (necessary only if you haven't installed MySQL before): shell> scripts/mysql_install_db Note that MySQL versions older than Version 3.22.10 started the MySQL server when you run `mysql_install_db'. This is no longer true! 10. Change ownership of binaries to `root' and ownership of the data directory to the user that you will run `mysqld' as: shell> chown -R root /usr/local/mysql shell> chown -R mysql /usr/local/mysql/var shell> chgrp -R mysql /usr/local/mysql The first command changes the `owner' attribute of the files to the `root' user, the second one changes the `owner' attribute of the data directory to the `mysql' user, and the third one changes the `group' attribute to the `mysql' group. 11. If you want to install support for the Perl `DBI'/`DBD' interface, see *Note Perl support::. 12. If you would like MySQL to start automatically when you boot your machine, you can copy `support-files/mysql.server' to the location where your system has its startup files. More information can be found in the `support-files/mysql.server' script itself and in *Note Automatic start::. After everything has been installed, you should initialise and test your distribution: shell> /usr/local/mysql/bin/safe_mysqld --user=mysql & If that command fails immediately with `mysqld daemon ended', you can find some information in the file `mysql-data-directory/'hostname'.err'. The likely reason is that you already have another `mysqld' server running. *Note Multiple servers::. Now proceed to *Note Post-installation::. Applying Patches ---------------- Sometimes patches appear on the mailing list or are placed in the patches area of the MySQL web site (`http://www.mysql.com/downloads/patches.html'). To apply a patch from the mailing list, save the message in which the patch appears in a file, change into the top-level directory of your MySQL source tree, and run these commands: shell> patch -p1 < patch-file-name shell> rm config.cache shell> make clean Patches from the FTP site are distributed as plain text files or as files compressed with `gzip'. Apply a plain patch as shown previously for mailing list patches. To apply a compressed patch, change into the top-level directory of your MySQL source tree and run these commands: shell> gunzip < patch-file-name.gz | patch -p1 shell> rm config.cache shell> make clean After applying a patch, follow the instructions for a normal source install, beginning with the `./configure' step. After running the `make install' step, restart your MySQL server. You may need to bring down any currently running server before you run `make install'. (Use `mysqladmin shutdown' to do this.) Some systems do not allow you to install a new version of a program if it replaces the version that is currently executing. Typical `configure' Options --------------------------- The `configure' script gives you a great deal of control over how you configure your MySQL distribution. Typically you do this using options on the `configure' command-line. You can also affect `configure' using certain environment variables. *Note Environment variables::. For a list of options supported by `configure', run this command: shell> ./configure --help Some of the more commonly-used `configure' options are described here: * To compile just the MySQL client libraries and client programs and not the server, use the `--without-server' option: shell> ./configure --without-server If you don't have a C++ compiler, `mysql' will not compile (it is the one client program that requires C++). 
In this case, you can remove the code in `configure' that tests for the C++ compiler and then run `./configure' with the `--without-server' option. The compile step will still try to build `mysql', but you can ignore any warnings about `mysql.cc'. (If `make' stops, try `make -k' to tell it to continue with the rest of the build even if errors occur.) * If you want to get an embedded MySQL library (`libmysqld.a'), you should use the `--with-embedded-server' option. * If you don't want your log files and database directories located under `/usr/local/var', use a `configure' command, something like one of these: shell> ./configure --prefix=/usr/local/mysql shell> ./configure --prefix=/usr/local \ --localstatedir=/usr/local/mysql/data The first command changes the installation prefix so that everything is installed under `/usr/local/mysql' rather than the default of `/usr/local'. The second command preserves the default installation prefix, but overrides the default location for database directories (normally `/usr/local/var') and changes it to `/usr/local/mysql/data'. After you have compiled MySQL, you can change these options with option files. *Note Option files::. * If you are using Unix and you want the MySQL socket located somewhere other than the default location (normally in the directory `/tmp' or `/var/run'), use a `configure' command like this: shell> ./configure --with-unix-socket-path=/usr/local/mysql/tmp/mysql.sock Note that the given file must be an absolute pathname! You can also change the location of `mysql.sock' later by using the MySQL option files. *Note Problems with mysql.sock::. * If you want to compile statically linked programs (for example, to make a binary distribution, to get more speed, or to work around problems with some RedHat Linux distributions), run `configure' like this: shell> ./configure --with-client-ldflags=-all-static \ --with-mysqld-ldflags=-all-static * If you are using `gcc' and don't have `libg++' or `libstdc++' installed, you can tell `configure' to use `gcc' as your C++ compiler: shell> CC=gcc CXX=gcc ./configure When you use `gcc' as your C++ compiler, it will not attempt to link in `libg++' or `libstdc++'. This may be a good idea even if you have the above libraries installed, as some versions of these libraries have caused strange problems for MySQL users in the past.
Here are some common environment variables to set depending on the compiler you are using: *Compiler* *Recommended options* gcc 2.7.2.1 CC=gcc CXX=gcc CXXFLAGS="-O3 -felide-constructors" egcs 1.0.3a CC=gcc CXX=gcc CXXFLAGS="-O3 -felide-constructors -fno-exceptions -fno-rtti" gcc 2.95.2 CFLAGS="-O3 -mpentiumpro" CXX=gcc CXXFLAGS="-O3 -mpentiumpro \ -felide-constructors -fno-exceptions -fno-rtti" pgcc 2.90.29 or newer CFLAGS="-O3 -mpentiumpro -mstack-align-double" CXX=gcc \ CXXFLAGS="-O3 -mpentiumpro -mstack-align-double -felide-constructors \ -fno-exceptions -fno-rtti" In most cases you can get a reasonably optimal MySQL binary by using the options from the preceding table and adding the following options to the configure line: --prefix=/usr/local/mysql --enable-assembler \ --with-mysqld-ldflags=-all-static The full configure line would, in other words, be something like the following for all recent gcc versions: CFLAGS="-O3 -mpentiumpro" CXX=gcc CXXFLAGS="-O3 -mpentiumpro \ -felide-constructors -fno-exceptions -fno-rtti" ./configure \ --prefix=/usr/local/mysql --enable-assembler \ --with-mysqld-ldflags=-all-static The binaries we provide on the MySQL web site at `http://www.mysql.com/' are all compiled with full optimisation and should be perfect for most users. *Note MySQL binaries::. There are some things you can tweak to make an even faster binary, but this is only for advanced users. *Note Compile and link options::. If the build fails and produces errors about your compiler or linker not being able to create the shared library `libmysqlclient.so.#' (`#' is a version number), you can work around this problem by giving the `--disable-shared' option to `configure'. In this case, `configure' will not build a shared `libmysqlclient.so.#' library. * You can configure MySQL not to use `DEFAULT' column values for non-`NULL' columns (that is, columns that are not allowed to be `NULL'). This causes `INSERT' statements to generate an error unless you explicitly specify values for all columns that require a non-`NULL' value. To suppress use of default values, run `configure' like this: shell> CXXFLAGS=-DDONT_USE_DEFAULT_FIELDS ./configure * By default, MySQL uses the ISO-8859-1 (Latin1) character set. To change the default set, use the `--with-charset' option: shell> ./configure --with-charset=CHARSET `CHARSET' may be one of `big5', `cp1251', `cp1257', `czech', `danish', `dec8', `dos', `euc_kr', `gb2312', `gbk', `german1', `hebrew', `hp8', `hungarian', `koi8_ru', `koi8_ukr', `latin1', `latin2', `sjis', `swe7', `tis620', `ujis', `usa7', or `win1251ukr'. *Note Character sets::. If you want to convert characters between the server and the client, you should take a look at the `SET CHARACTER SET' command. *Note `SET': SET OPTION. *Warning*: If you change character sets after having created any tables, you will have to run `myisamchk -r -q --set-character-set=charset' on every table. Your indexes may be sorted incorrectly otherwise. (This can happen if you install MySQL, create some tables, then reconfigure MySQL to use a different character set and reinstall it.) With the option `--with-extra-charsets=LIST' you can define which additional character sets should be compiled into the server. Here `LIST' is either a list of character sets separated with spaces, `complex' to include all character sets that can't be dynamically loaded, or `all' to include all character sets into the binaries.
* To configure MySQL with debugging code, use the `--with-debug' option: shell> ./configure --with-debug This causes a safe memory allocator to be included that can find some errors and that provides output about what is happening. *Note Debugging server::. * If your client programs are using threads, you also need to compile a thread-safe version of the MySQL client library with the `--enable-thread-safe-client' configure option. This will create a `libmysqlclient_r' library with which you should link your threaded applications. *Note Threaded clients::. * Options that pertain to particular systems can be found in the system-specific section of this manual. *Note Operating System Specific Notes::. Installing from the Development Source Tree ------------------------------------------- *Caution*: You should read this section only if you are interested in helping us test our new code. If you just want to get MySQL up and running on your system, you should use a standard release distribution (either a source or binary distribution will do). To obtain our most recent development source tree, use these instructions: 1. Download `BitKeeper' from `http://www.bitmover.com/cgi-bin/download.cgi'. You will need `BitKeeper' 3.0 or newer to access our repository. 2. Follow the instructions to install it. 3. After `BitKeeper' is installed, first go to the directory you want to work from, and then use one of the following commands to clone the MySQL version branch of your choice: To clone the 3.23 branch, use this command: shell> bk clone bk://mysql.bkbits.net/mysql-3.23 mysql-3.23 To clone the 4.0 branch, use this command: shell> bk clone bk://mysql.bkbits.net/mysql-4.0 mysql-4.0 To clone the 4.1 branch, use this command: shell> bk clone bk://mysql.bkbits.net/mysql-4.1 mysql-4.1 In the preceding examples the source tree will be set up in the `mysql-3.23/', `mysql-4.0/', or `mysql-4.1/' subdirectory of your current directory. If you are behind a firewall and can only initiate HTTP connections, you can also use `BitKeeper' via HTTP. If you are required to use a proxy server, simply set the environment variable `http_proxy' to point to your proxy: shell> export http_proxy="http://your.proxy.server:8080/" Now, simply replace the `bk://' with `http://' when doing a clone. Example: shell> bk clone http://mysql.bkbits.net/mysql-4.1 mysql-4.1 The initial download of the source tree may take a while, depending on the speed of your connection - please be patient. 4. You will need GNU `make', `autoconf 2.53 (or newer)', `automake 1.5', `libtool 1.4', and `m4' to run the next set of commands. Note that `automake 1.7 or newer' doesn't yet work. If you are trying to configure MySQL 4.1, you will also need `bison 1.75'. Older versions of `bison' may report this error: `sql_yacc.yy:#####: fatal error: maximum table size (32767) exceeded'. Note: the maximum table size is not actually exceeded; the error is caused by bugs in these earlier `bison' versions. The typical commands to run in a shell are:
     cd mysql-4.0
     bk -r get -Sq
     aclocal; autoheader; autoconf; automake
     (cd innobase ; aclocal; autoheader; autoconf; automake) # for InnoDB
     (cd bdb/dist ; sh s_all ) # for Berkeley DB
     ./configure # Add your favorite options here
     make
If you get some strange error during this stage, check that you really have `libtool' installed! A collection of our standard configure scripts is located in the `BUILD/' subdirectory. If you are lazy, you can use `BUILD/compile-pentium-debug'.
To compile on a different architecture, modify the script by removing flags that are Pentium-specific. 5. When the build is done, run `make install'. Be careful with this on a production machine; the command may overwrite your live release installation. If you have another installation of MySQL, we recommend that you run `./configure' with different values for the `prefix', `with-tcp-port', and `unix-socket-path' options than those used for your production server. 6. Play hard with your new installation and try to make the new features crash. Start by running `make test'. *Note MySQL test suite::. 7. If you have gotten to the `make' stage and the distribution does not compile, please report it to . If you have installed the latest versions of the required GNU tools, and they crash trying to process our configuration files, please report that also. However, if you execute `aclocal' and get a `command not found' error or a similar problem, do not report it. Instead, make sure all the necessary tools are installed and that your `PATH' variable is set correctly so that your shell can find them. 8. After the initial `bk clone' operation to get the source tree, you should run `bk pull' periodically to get the updates. 9. You can examine the change history for the tree with all the diffs by using `bk sccstool'. If you see some funny diffs or code that you have a question about, do not hesitate to send e-mail to . Also, if you think you have a better idea on how to do something, send an e-mail to the same address with a patch. `bk diffs' will produce a patch for you after you have made changes to the source. If you do not have the time to code your idea, just send a description. 10. `BitKeeper' has a nice help utility that you can access via `bk helptool'. 11. Please note that any commits (`bk ci' or `bk citool') will trigger the posting of a message with the changeset to our internals mailing list, as well as the usual openlogging.org submission with just the changeset comments. Generally, you wouldn't need to use commit (since the public tree will not allow `bk push'), but rather use the `bk diffs' method described previously. You can also browse changesets, comments, and source code online, for example at `http://mysql.bkbits.net:8080/mysql-4.1' for MySQL 4.1. The manual is in a separate tree, which can be cloned with: shell> bk clone bk://mysql.bkbits.net/mysqldoc mysqldoc Problems Compiling MySQL? ------------------------- All MySQL programs compile cleanly for us with no warnings on Solaris or Linux using `gcc'. On other systems, warnings may occur due to differences in system include files. See *Note MIT-pthreads:: for warnings that may occur when using MIT-pthreads. For other problems, check the following list. The solution to many problems involves reconfiguring. If you do need to reconfigure, take note of the following: * If `configure' is run after it already has been run, it may use information that was gathered during its previous invocation. This information is stored in `config.cache'. When `configure' starts up, it looks for that file and reads its contents if it exists, on the assumption that the information is still correct. That assumption is invalid when you reconfigure. * Each time you run `configure', you must run `make' again to recompile. However, you may want to remove old object files from previous builds first because they were compiled using different configuration options.
To prevent old configuration information or object files from being used, run these commands before rerunning `configure': shell> rm config.cache shell> make clean Alternatively, you can run `make distclean'. The following list describes some of the problems that have been found to occur most often when compiling MySQL: * If you get errors when compiling `sql_yacc.cc', such as the ones shown here, you have probably run out of memory or swap space:
     Internal compiler error: program cc1plus got fatal signal 11
or
     Out of virtual memory
or
     Virtual memory exhausted
The problem is that `gcc' requires huge amounts of memory to compile `sql_yacc.cc' with inline functions. Try running `configure' with the `--with-low-memory' option: shell> ./configure --with-low-memory This option causes `-fno-inline' to be added to the compile line if you are using `gcc' and `-O0' if you are using something else. You should try the `--with-low-memory' option even if you have so much memory and swap space that you think you can't possibly have run out. This problem has been observed to occur even on systems with generous hardware configurations, and the `--with-low-memory' option usually fixes it. * By default, `configure' picks `c++' as the compiler name and GNU `c++' links with `-lg++'. If you are using `gcc', that behaviour can cause problems during configuration such as this: configure: error: installation or configuration problem: C++ compiler cannot create executables. You might also observe problems during compilation related to `g++', `libg++', or `libstdc++'. One cause of these problems is that you may not have `g++', or you may have `g++' but not `libg++' or `libstdc++'. Take a look at the `config.log' file. It should contain the exact reason why your C++ compiler didn't work! To work around these problems, you can use `gcc' as your C++ compiler. Try setting the environment variable `CXX' to `"gcc -O3"'. For example: shell> CXX="gcc -O3" ./configure This works because `gcc' compiles C++ sources as well as `g++' does, but does not link in `libg++' or `libstdc++' by default. Another way to fix these problems, of course, is to install `g++', `libg++', and `libstdc++'. However, we recommend that you not use `libg++' or `libstdc++' with MySQL, as this will only increase the binary size of `mysqld' without giving you any benefits. Some versions of these libraries have also caused strange problems for MySQL users in the past. * If your compile fails with errors, such as any of the following, you must upgrade your version of `make' to GNU `make':
     making all in mit-pthreads
     make: Fatal error in reader: Makefile, line 18: Badly formed macro assignment
or
     make: file `Makefile' line 18: Must be a separator (:
or
     pthread.h: No such file or directory
Solaris and FreeBSD are known to have troublesome `make' programs. GNU `make' Version 3.75 is known to work. * If you want to define flags to be used by your C or C++ compilers, do so by adding the flags to the `CFLAGS' and `CXXFLAGS' environment variables. You can also specify the compiler names this way using `CC' and `CXX'. For example: shell> CC=gcc shell> CFLAGS=-O3 shell> CXX=gcc shell> CXXFLAGS=-O3 shell> export CC CFLAGS CXX CXXFLAGS See *Note MySQL binaries::, for a list of flag definitions that have been found to be useful on various systems.
* If you get an error message like this, you need to upgrade your `gcc' compiler: client/libmysql.c:273: parse error before `__attribute__' `gcc' 2.8.1 is known to work, but we recommend using `gcc' 2.95.2 or `egcs' 1.0.3a instead. * If you get errors such as those shown here when compiling `mysqld', `configure' didn't correctly detect the type of the last argument to `accept()', `getsockname()', or `getpeername()':
     cxx: Error: mysqld.cc, line 645: In this statement, the referenced type of the pointer value "&length" is "unsigned long", which is not compatible with "int".
     new_sock = accept(sock, (struct sockaddr *)&cAddr, &length);
To fix this, edit the `config.h' file (which is generated by `configure'). Look for these lines:
     /* Define as the base type of the last arg to accept */
     #define SOCKET_SIZE_TYPE XXX
Change `XXX' to `size_t' or `int', depending on your operating system. (Note that you will have to do this each time you run `configure' because `configure' regenerates `config.h'.) * The `sql_yacc.cc' file is generated from `sql_yacc.yy'. Normally the build process doesn't need to create `sql_yacc.cc', because MySQL comes with an already generated copy. However, if you do need to re-create it, you might encounter this error: "sql_yacc.yy", line xxx fatal: default action causes potential... This is a sign that your version of `yacc' is deficient. You probably need to install `bison' (the GNU version of `yacc') and use that instead. * If you need to debug `mysqld' or a MySQL client, run `configure' with the `--with-debug' option, then recompile and link your clients with the new client library. *Note Debugging client::. MIT-pthreads Notes ------------------ This section describes some of the issues involved in using MIT-pthreads. Note that on Linux you should *not* use MIT-pthreads but install LinuxThreads! *Note Linux::. If your system does not provide native thread support, you will need to build MySQL using the MIT-pthreads package. This includes older FreeBSD systems, SunOS 4.x, Solaris 2.4 and earlier, and some others. *Note Which OS::. Note that beginning with MySQL 4.0.2, MIT-pthreads are no longer part of the source distribution! If you require this package, you need to download it separately. After downloading, extract this source archive into the top level of the MySQL source directory. It will create a new subdirectory `mit-pthreads'. * On most systems, you can force MIT-pthreads to be used by running `configure' with the `--with-mit-threads' option: shell> ./configure --with-mit-threads Building in a non-source directory is not supported when using MIT-pthreads because we want to minimise our changes to this code. * The checks that determine whether to use MIT-pthreads occur only during the part of the configuration process that deals with the server code. If you have configured the distribution using `--without-server' to build only the client code, clients will not know whether MIT-pthreads is being used and will use Unix socket connections by default. Because Unix sockets do not work under MIT-pthreads on some platforms, this means you will need to use `-h' or `--host' when you run client programs. * When MySQL is compiled using MIT-pthreads, system locking is disabled by default for performance reasons. You can tell the server to use system locking with the `--external-locking' option. This is only needed if you want to be able to run two MySQL servers against the same data files (not recommended).
* Sometimes the pthread `bind()' command fails to bind to a socket without any error message (at least on Solaris). The result is that all connections to the server fail. For example: shell> mysqladmin version mysqladmin: connect to server at '' failed; error: 'Can't connect to mysql server on localhost (146)' The solution to this is to kill the `mysqld' server and restart it. This has only happened to us when we have forced the server down and done a restart immediately. * With MIT-pthreads, the `sleep()' system call isn't interruptible with `SIGINT' (break). This is only noticeable when you run `mysqladmin --sleep'. You must wait for the `sleep()' call to terminate before the interrupt is served and the process stops. * When linking, you may receive warning messages like these (at least on Solaris); they can be ignored: ld: warning: symbol `_iob' has differing sizes: (file /my/local/pthreads/lib/libpthread.a(findfp.o) value=0x4; file /usr/lib/libc.so value=0x140); /my/local/pthreads/lib/libpthread.a(findfp.o) definition taken ld: warning: symbol `__iob' has differing sizes: (file /my/local/pthreads/lib/libpthread.a(findfp.o) value=0x4; file /usr/lib/libc.so value=0x140); /my/local/pthreads/lib/libpthread.a(findfp.o) definition taken * Some other warnings also can be ignored: implicit declaration of function `int strtoll(...)' implicit declaration of function `int strtoul(...)' * We haven't gotten `readline' to work with MIT-pthreads. (This isn't needed, but may be interesting for someone.) Windows Source Distribution --------------------------- You will need the following: * VC++ 6.0 compiler (updated with Service Pack 4 or 5 and the Pre-processor package) The Pre-processor package is necessary for the macro assembler. More details at: `http://msdn.microsoft.com/vstudio/sp/vs6sp5/faq.asp'. * The MySQL source distribution for Windows, which can be downloaded from `http://www.mysql.com/downloads/'. Building MySQL 1. Create a work directory (e.g., workdir). 2. Unpack the source distribution in the aforementioned directory. 3. Start the VC++ 6.0 compiler. 4. In the `File' menu, select `Open Workspace'. 5. Open the `mysql.dsw' workspace you find in the work directory. 6. From the `Build' menu, select `Set Active Configuration'. 7. Select `mysqld - Win32 Debug' and click OK. 8. Press `F7' to begin the build of the debug server, libs, and some client applications. 9. When the compilation finishes, copy the libs and the executables to a separate directory. 10. Compile the release versions that you want, in the same way. 11. Create the directory for the MySQL files: e.g., `c:\mysql' 12. From the workdir directory, copy the following directories to the `c:\mysql' directory: * Data * Docs * Share 13. Create the directory `c:\mysql\bin' and copy all the servers and clients that you compiled previously. 14. If you want, also create the `lib' directory and copy the libs that you compiled previously. 15. Do a clean using Visual Studio. Set up and start the server in the same way as for the binary Windows distribution. *Note Windows prepare environment::. Post-installation Setup and Testing =================================== Once you've installed MySQL (from either a binary or source distribution), you need to initialise the grant tables, start the server, and make sure that the server works okay. You may also wish to arrange for the server to be started and stopped automatically when your system starts up and shuts down.
Normally you install the grant tables and start the server like this for installation from a source distribution: shell> ./scripts/mysql_install_db shell> cd mysql_installation_directory shell> ./bin/safe_mysqld --user=mysql & For a binary distribution (not RPM or pkg packages), do this: shell> cd mysql_installation_directory shell> ./scripts/mysql_install_db shell> ./bin/safe_mysqld --user=mysql & or shell> ./bin/mysqld_safe --user=mysql & if you are running MySQL 4.x. This creates the `mysql' database, which will hold all database privileges, the `test' database, which you can use to test MySQL, and also privilege entries for the user that runs `mysql_install_db' and for a `root' user (without any passwords). This also starts the `mysqld' server. `mysql_install_db' will not overwrite any old privilege tables, so it should be safe to run in any circumstances. If you don't want to have the `test' database, you can remove it with `mysqladmin -u root drop test'. Testing is most easily done from the top-level directory of the MySQL distribution. For a binary distribution, this is your installation directory (typically something like `/usr/local/mysql'). For a source distribution, this is the main directory of your MySQL source tree. In the commands shown in this section and in the following subsections, `BINDIR' is the path to the location in which programs like `mysqladmin' and `safe_mysqld' are installed. For a binary distribution, this is the `bin' directory within the distribution. For a source distribution, `BINDIR' is probably `/usr/local/bin', unless you specified an installation directory other than `/usr/local' when you ran `configure'. `EXECDIR' is the location in which the `mysqld' server is installed. For a binary distribution, this is the same as `BINDIR'. For a source distribution, `EXECDIR' is probably `/usr/local/libexec'. Testing is described in detail below: 1. If necessary, start the `mysqld' server and set up the initial MySQL grant tables containing the privileges that determine how users are allowed to connect to the server. This is normally done with the `mysql_install_db' script: shell> scripts/mysql_install_db Typically, `mysql_install_db' needs to be run only the first time you install MySQL. Therefore, if you are upgrading an existing installation, you can skip this step. (However, `mysql_install_db' is quite safe to use and will not update any tables that already exist, so if you are unsure of what to do, you can always run `mysql_install_db'.) `mysql_install_db' creates six tables (`user', `db', `host', `tables_priv', `columns_priv', and `func') in the `mysql' database. A description of the initial privileges is given in *Note Default privileges::. Briefly, these privileges allow the MySQL `root' user to do anything, and allow anybody to create or use databases with a name of `test' or starting with `test_'. If you don't set up the grant tables, the following error will appear in the log file when you start the server: mysqld: Can't find file: 'host.frm' This may also happen with a binary MySQL distribution if you don't start MySQL by executing exactly `./bin/safe_mysqld'! *Note `safe_mysqld': safe_mysqld. You might need to run `mysql_install_db' as `root'. However, if you prefer, you can run the MySQL server as an unprivileged (non-`root') user, provided that the user can read and write files in the database directory. Instructions for running MySQL as an unprivileged user are given in *Note Changing MySQL user: Changing MySQL user.
If you have problems with `mysql_install_db', see *Note `mysql_install_db': mysql_install_db. There are some alternatives to running the `mysql_install_db' script as it is provided in the MySQL distribution: * You may want to edit `mysql_install_db' before running it, to change the initial privileges that are installed into the grant tables. This is useful if you want to install MySQL on a lot of machines with the same privileges. In this case you probably should need only to add a few extra `INSERT' statements to the `mysql.user' and `mysql.db' tables! * If you want to change things in the grant tables after installing them, you can run `mysql_install_db', then use `mysql -u root mysql' to connect to the grant tables as the MySQL `root' user and issue SQL statements to modify the grant tables directly. * It is possible to re-create the grant tables completely after they have already been created. You might want to do this if you've already installed the tables but then want to re-create them after editing `mysql_install_db'. For more information about these alternatives, see *Note Default privileges::. 2. Start the MySQL server like this: shell> cd mysql_installation_directory shell> bin/safe_mysqld & If you have problems starting the server, see *Note Starting server::. 3. Use `mysqladmin' to verify that the server is running. The following commands provide a simple test to check that the server is up and responding to connections: shell> BINDIR/mysqladmin version shell> BINDIR/mysqladmin variables The output from `mysqladmin version' varies slightly depending on your platform and version of MySQL, but should be similar to that shown here: shell> BINDIR/mysqladmin version mysqladmin Ver 8.14 Distrib 3.23.32, for linux on i586 Copyright (C) 2000 MySQL AB & MySQL Finland AB & TCX DataKonsult AB This software comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to modify and redistribute it under the GPL license. Server version 3.23.32-debug Protocol version 10 Connection Localhost via Unix socket TCP port 3306 UNIX socket /tmp/mysql.sock Uptime: 16 sec Threads: 1 Questions: 9 Slow queries: 0 Opens: 7 Flush tables: 2 Open tables: 0 Queries per second avg: 0.000 Memory in use: 132K Max memory used: 16773K To get a feeling for what else you can do with `BINDIR/mysqladmin', invoke it with the `--help' option. 4. Verify that you can shut down the server: shell> BINDIR/mysqladmin -u root shutdown 5. Verify that you can restart the server. Do this using `safe_mysqld' or by invoking `mysqld' directly. For example: shell> BINDIR/safe_mysqld --log & If `safe_mysqld' fails, try running it from the MySQL installation directory (if you are not already there). If that doesn't work, see *Note Starting server::. 6. Run some simple tests to verify that the server is working. The output should be similar to what is shown here: shell> BINDIR/mysqlshow +-----------+ | Databases | +-----------+ | mysql | +-----------+ shell> BINDIR/mysqlshow mysql Database: mysql +--------------+ | Tables | +--------------+ | columns_priv | | db | | func | | host | | tables_priv | | user | +--------------+ shell> BINDIR/mysql -e "SELECT host,db,user FROM db" mysql +------+--------+------+ | host | db | user | +------+--------+------+ | % | test | | | % | test_% | | +------+--------+------+ There is also a benchmark suite in the `sql-bench' directory (under the MySQL installation directory) that you can use to compare how MySQL performs on different platforms. 
The `sql-bench/Results' directory contains the results from many runs against different databases and platforms. To run all tests, execute these commands: shell> cd sql-bench shell> run-all-tests If you don't have the `sql-bench' directory, you are probably using an RPM for a binary distribution. (Source distribution RPMs include the benchmark directory.) In this case, you must first install the benchmark suite before you can use it. Beginning with MySQL Version 3.22, there are benchmark RPM files named `mysql-bench-VERSION-i386.rpm' that contain benchmark code and data. If you have a source distribution, you can also run the tests in the `tests' subdirectory. For example, to run `auto_increment.tst', do this: shell> BINDIR/mysql -vvf test < ./tests/auto_increment.tst The expected results are shown in the `./tests/auto_increment.res' file. Problems Running `mysql_install_db' ----------------------------------- The purpose of the `mysql_install_db' script is to generate new MySQL privilege tables. It will not affect any other data! It will also not do anything if you already have MySQL privilege tables installed! If you want to re-create your privilege tables, you should take down the `mysqld' server, if it's running, and then do something like:
     mv mysql-data-directory/mysql mysql-data-directory/mysql-old
     mysql_install_db
This section lists problems you might encounter when you run `mysql_install_db': *`mysql_install_db' doesn't install the grant tables* You may find that `mysql_install_db' fails to install the grant tables and terminates after displaying the following messages: starting mysqld daemon with databases from XXXXXX mysql daemon ended In this case, you should examine the log file very carefully! The log should be located in the directory `XXXXXX' named by the error message, and should indicate why `mysqld' didn't start. If you don't understand what happened, include the log when you post a bug report using `mysqlbug'! *Note Bug reports::. *There is already a `mysqld' daemon running* In this case, you probably don't have to run `mysql_install_db' at all. You have to run `mysql_install_db' only once, when you install MySQL the first time. *Installing a second `mysqld' daemon doesn't work when one daemon is running* This can happen when you already have an existing MySQL installation, but want to put a new installation in a different place (for example, for testing, or perhaps you simply want to run two installations at the same time). Generally the problem that occurs when you try to run the second server is that it tries to use the same socket and port as the old one. In this case you will get the error message: `Can't start server: Bind on TCP/IP port: Address already in use' or `Can't start server: Bind on unix socket...'. *Note Installing many servers::. *You don't have write access to `/tmp'* If you don't have write access to create a socket file at the default place (in `/tmp') or permission to create temporary files in `/tmp', you will get an error when running `mysql_install_db' or when starting or using `mysqld'. You can specify a different socket and temporary directory as follows: shell> TMPDIR=/some_tmp_dir/ shell> MYSQL_UNIX_PORT=/some_tmp_dir/mysqld.sock shell> export TMPDIR MYSQL_UNIX_PORT See *Note Problems with mysql.sock::. `some_tmp_dir' should be the path to some directory for which you have write permission. *Note Environment variables::.
After this you should be able to run `mysql_install_db' and start the server with these commands: shell> scripts/mysql_install_db shell> BINDIR/safe_mysqld & *`mysqld' crashes immediately* If you are running RedHat Version 5.0 with a version of `glibc' older than 2.0.7-5, you should make sure you have installed all `glibc' patches! There is a lot of information about this in the MySQL mail archives. Links to the mail archives are available online at `http://lists.mysql.com/'. Also, see *Note Linux::. You can also start `mysqld' manually using the `--skip-grant-tables' option and add the privilege information yourself using `mysql': shell> BINDIR/safe_mysqld --skip-grant-tables & shell> BINDIR/mysql -u root mysql From `mysql', manually execute the SQL commands in `mysql_install_db'. Make sure you run `mysqladmin flush-privileges' or `mysqladmin reload' afterward to tell the server to reload the grant tables. Problems Starting the MySQL Server ---------------------------------- If you are going to use tables that support transactions (InnoDB, BDB), you should first create a `my.cnf' file and set startup options for the table types you plan to use. *Note Table types::. Generally, you start the `mysqld' server in one of these ways: * By invoking `mysql.server'. This script is used primarily at system startup and shutdown, and is described more fully in *Note Automatic start::. * By invoking `safe_mysqld', which tries to determine the proper options for `mysqld' and then runs it with those options. *Note `safe_mysqld': safe_mysqld. * For Windows NT/2000/XP, please see *Note NT start::. * By invoking `mysqld' directly. When the `mysqld' daemon starts up, it changes the directory to the data directory. This is where it expects to write log files and the pid (process ID) file, and where it expects to find databases. The data directory location is hardwired in when the distribution is compiled. However, if `mysqld' expects to find the data directory somewhere other than where it really is on your system, it will not work properly. If you have problems with incorrect paths, you can find out what options `mysqld' allows and what the default path settings are by invoking `mysqld' with the `--help' option. You can override the defaults by specifying the correct pathnames as command-line arguments to `mysqld'. (These options can be used with `safe_mysqld' as well.) Normally you should need to tell `mysqld' only the base directory under which MySQL is installed. You can do this with the `--basedir' option. You can also use `--help' to check the effect of changing path options (note that `--help' *must* be the final option of the `mysqld' command). For example: shell> EXECDIR/mysqld --basedir=/usr/local --help Once you determine the path settings you want, start the server without the `--help' option. Whichever method you use to start the server, if it fails to start up correctly, check the log file to see if you can find out why. Log files are located in the data directory (typically `/usr/local/mysql/data' for a binary distribution, `/usr/local/var' for a source distribution, and `\mysql\data\mysql.err' on Windows). Look in the data directory for files with names of the form `host_name.err' and `host_name.log' where `host_name' is the name of your server host. 
Then check the last few lines of these files:

shell> tail host_name.err
shell> tail host_name.log

Look for something like the following in the log file:

000729 14:50:10  bdb:  Recovery function for LSN 1 27595 failed
000729 14:50:10  bdb:  warning: ./test/t1.db: No such file or directory
000729 14:50:10  Can't init databases

This means that you didn't start `mysqld' with `--bdb-no-recover' and Berkeley DB found something wrong with its log files when it tried to recover your databases. To be able to continue, you should move the old Berkeley DB log file away from the database directory to some other place, where you can later examine it. The log files are named `log.0000000001', where the number will increase over time.

If you are running `mysqld' with BDB table support and `mysqld' core dumps at startup, this could be because of some problem with the BDB recover log. In this case you can try starting `mysqld' with `--bdb-no-recover'. If this helps, then you should remove all `log.*' files from the data directory and try starting `mysqld' again.

If you get the following error, it means that some other program (or another `mysqld' server) is already using the TCP/IP port or socket `mysqld' is trying to use:

Can't start server: Bind on TCP/IP port: Address already in use
or
Can't start server : Bind on unix socket...

Use `ps' to make sure that you don't have another `mysqld' server running. If you can't find another server running, you can try to execute the command `telnet your-host-name tcp-ip-port-number' and press Enter a couple of times. If you don't get an error message like `telnet: Unable to connect to remote host: Connection refused', something is using the TCP/IP port `mysqld' is trying to use. See *Note mysql_install_db:: and *Note Multiple servers::.

If `mysqld' is currently running, you can find out what path settings it is using by executing this command:

shell> mysqladmin variables

or

shell> mysqladmin -h 'your-host-name' variables

If you get `Errcode 13', which means `Permission denied', when starting `mysqld', this means that you don't have the right to read/create files in the MySQL database or log directory. In this case you should either start `mysqld' as the root user or change the permissions for the involved files and directories so that you have the right to use them.

If `safe_mysqld' starts the server but you can't connect to it, you should make sure you have an entry in `/etc/hosts' that looks like this:

127.0.0.1       localhost

This problem occurs only on systems that don't have a working thread library and for which MySQL must be configured to use MIT-pthreads.

If you can't get `mysqld' to start, you can try to make a trace file to find the problem. *Note Making trace files::.

If you are using InnoDB tables, refer to the InnoDB-specific startup options. *Note InnoDB start::.

If you are using BDB (Berkeley DB) tables, you should familiarise yourself with the different BDB-specific startup options. *Note BDB start::.

Starting and Stopping MySQL Automatically
-----------------------------------------

The `mysql.server' and `safe_mysqld' scripts can be used to start the server automatically at system startup time. `mysql.server' can also be used to stop the server.
The `mysql.server' script can be used to start or stop the server by invoking it with `start' or `stop' arguments:

shell> mysql.server start
shell> mysql.server stop

`mysql.server' can be found in the `share/mysql' directory under the MySQL installation directory or in the `support-files' directory of the MySQL source tree.

Before `mysql.server' starts the server, it changes directory to the MySQL installation directory, then invokes `safe_mysqld'. You might need to edit `mysql.server' if you have a binary distribution that you've installed in a non-standard location. Modify it to `cd' into the proper directory before it runs `safe_mysqld'. If you want the server to run as some specific user, add an appropriate `user' line to the `/etc/my.cnf' file, as shown later in this section.

`mysql.server stop' brings down the server by sending a signal to it. You can also take down the server manually by executing `mysqladmin shutdown'.

You need to add these start and stop commands to the appropriate places in your `/etc/rc*' files when you want to start up MySQL automatically on your server. On most current Linux distributions, it is sufficient to copy the file `mysql.server' into the `/etc/init.d' directory (or `/etc/rc.d/init.d' on older Red Hat systems). Afterwards, run the following command to enable the startup of MySQL on system bootup:

shell> chkconfig --add mysql.server

As an alternative to the above, some operating systems also use `/etc/rc.local' or `/etc/init.d/boot.local' to start additional services on bootup. To start up MySQL using this method, you could append something like the following to it:

/bin/sh -c 'cd /usr/local/mysql ; ./bin/safe_mysqld --user=mysql &'

You can also add options for `mysql.server' in a global `/etc/my.cnf' file. A typical `/etc/my.cnf' file might look like this:

[mysqld]
datadir=/usr/local/mysql/var
socket=/var/tmp/mysql.sock
port=3306
user=mysql
[mysql_server]
basedir=/usr/local/mysql

The `mysql.server' script understands the following options: `datadir', `basedir', and `pid-file'.

The following table shows which option groups each of the startup scripts reads from option files:

*Script*          *Option groups*
`mysqld'          `mysqld' and `server'
`mysql.server'    `mysql.server', `mysqld', and `server'
`safe_mysqld'     `mysql.server', `mysqld', and `server'

*Note Option files::.

Upgrading/Downgrading MySQL
===========================

You can always move the MySQL format files and data files between different versions on the same architecture as long as you have the same base version of MySQL. The current base version is 3. If you change the character set when running MySQL (which may also change the sort order), you must run `myisamchk -r -q --set-character-set=charset' on all tables. Otherwise, your indexes may not be ordered correctly.

If you are afraid of new versions, you can always rename your old `mysqld' to something like `mysqld-old-version-number'. If your new `mysqld' then does something unexpected, you can simply shut it down and restart with your old `mysqld'! When you do an upgrade you should also back up your old databases, of course.

If, after an upgrade, you experience problems with recompiled client programs, such as `Commands out of sync' errors or unexpected core dumps, you probably have used an old header or library file when compiling your programs. In this case you should check the date of your `mysql.h' file and `libmysqlclient.a' library to verify that they are from the new MySQL distribution. If not, please recompile your programs!
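For example (the paths below assume a typical installation under `/usr/local'; adjust them to your own layout), you can compare the timestamps like this:

shell> ls -l /usr/local/include/mysql/mysql.h
shell> ls -l /usr/local/lib/mysql/libmysqlclient.a

If the dates are older than your new MySQL distribution, recompile your clients against the new headers and library.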
If you have problems such as the new `mysqld' server not wanting to start, or not being able to connect without a password, check that you don't have some old `my.cnf' file left over from your old installation! You can check this with `program-name --print-defaults'. If this outputs anything other than the program name, you have an active `my.cnf' file that will affect things!

It is a good idea to rebuild and reinstall the `Msql-Mysql-modules' distribution whenever you install a new release of MySQL, particularly if you notice symptoms such as all your `DBI' scripts dumping core after you upgrade MySQL.

Upgrading From Version 4.0 to Version 4.1
-----------------------------------------

In general, this is what you have to do when upgrading to 4.1 from an earlier MySQL version:

   * Run the script `mysql_fix_privilege_tables' to generate the new password field that is needed for secure handling of passwords.

The following is a more complete list of what to watch out for when upgrading to Version 4.1:

   * Functions that return a DATE, DATETIME, or TIME result now have the result fixed up when it is returned to the client.
     mysql> SELECT cast("2001-1-1" as DATE)
         -> '2001-01-01'

   * All columns and tables now have a character set, which shows up in `SHOW CREATE TABLE' and `mysqldump'. (MySQL 4.0.6 and above can read the new dump files, but earlier MySQL versions cannot.)

   * `TIMESTAMP' is now returned as a string of the form `'YYYY-MM-DD HH:MM:SS''. If you want to have it as a number, you should add +0 to the timestamp column. Different timestamp lengths are not supported.

   * If you are running multiple servers on the same Windows machine, you should use a different `--shared_memory_base_name' option for each server.

*Note* that the table definition format (`.frm') has changed slightly in 4.1. MySQL 4.0.11 can read the new `.frm' format, but older versions cannot. If you need to move tables from 4.1 to an earlier MySQL version, you should use `mysqldump'. *Note mysqldump::.

Upgrading From Version 3.23 to Version 4.0
------------------------------------------

In general, this is what you have to do when upgrading to 4.0 from an earlier MySQL version:

   * Run the script `mysql_fix_privilege_tables' to add new privileges and features to the MySQL privilege tables.

   * Edit any MySQL startup scripts or configuration files to not use any of the deprecated options listed below.

   * Convert your old ISAM files to MyISAM files with the command `mysql_convert_table_format database'. Note that this should be run only if all tables in the given database are ISAM or MyISAM tables. If that is not the case, you should instead run `ALTER TABLE table_name TYPE=MyISAM' on each ISAM table.

   * Ensure that you don't have any MySQL clients that use shared libraries (like the Perl Msql-Mysql-modules). If you have, you should recompile them, because the structures used in `libmysqlclient.so' have changed.

MySQL 4.0 will work even if you don't do the above, but you will not be able to use the new security privileges that MySQL 4.0 provides, and you may run into problems when upgrading later to MySQL 4.1 or newer. The ISAM file format still works in MySQL 4.0, but it is deprecated and will be disabled in MySQL 5.0.

Old clients should work with a Version 4.0 server without any problems.

Even if you do the above, you can still downgrade to MySQL 3.23.52 or newer if you run into problems with the MySQL 4.0 series. In this case you have to do a `mysqldump' of any tables using a full-text index and restore these in 3.23 (because 4.0 uses a new format for full-text indexes).
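For example (the database and table names here are only placeholders), you might dump such a table on the 4.0 server before the downgrade and reload it once the 3.23 server is running again:

shell> mysqldump --opt db_name articles > articles.sql
shell> mysql db_name < articles.sql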
The following is a more complete list of what to watch out for when upgrading to Version 4.0:

   * MySQL 4.0 has a lot of new privileges in the `mysql.user' table. *Note GRANT::.

     To get these new privileges to work, you must run the `mysql_fix_privilege_tables' script. Until this script is run, all users have the `SHOW DATABASES', `CREATE TEMPORARY TABLES', and `LOCK TABLES' privileges. `SUPER' and `EXECUTE' privileges take their value from `PROCESS'. `REPLICATION SLAVE' and `REPLICATION CLIENT' take their values from `FILE'.

     If you have any scripts that create new users, you may want to change them to use the new privileges. If you are not using `GRANT' commands in the scripts, this is a good time to change your scripts.

     In version 4.0.2 the option `--safe-show-database' is deprecated (and no longer does anything). *Note Privileges options::.

     If you get access denied errors for new users in version 4.0.2, you should check whether you need some of the new grants that you didn't need before. In particular, you will need `REPLICATION SLAVE' (instead of `FILE') for new slaves.

   * The startup parameters `myisam_max_sort_file_size' and `myisam_max_extra_sort_file_size' are now given in bytes (they were given in megabytes before 4.0.3).

   * External system locking of MyISAM/ISAM files is now turned off by default. You can turn it on with `--external-locking'. (For most users this is never needed.)

   * The following startup variables/options have been renamed:

     *From*                           *To*
     `myisam_bulk_insert_tree_size'   `bulk_insert_buffer_size'
     `query_cache_startup_type'       `query_cache_type'
     `record_buffer'                  `read_buffer_size'
     `record_rnd_buffer'              `read_rnd_buffer_size'
     `sort_buffer'                    `sort_buffer_size'
     `warnings'                       `log-warnings'
     `err-log'                        `--log-error' (for `mysqld_safe')

     The startup options `record_buffer', `sort_buffer', and `warnings' will still work in MySQL 4.0 but are deprecated.

   * The following SQL variables have changed name:

     *From*                       *To*
     `SQL_BIG_TABLES'             `BIG_TABLES'
     `SQL_LOW_PRIORITY_UPDATES'   `LOW_PRIORITY_UPDATES'
     `SQL_MAX_JOIN_SIZE'          `MAX_JOIN_SIZE'
     `SQL_QUERY_CACHE_TYPE'       `QUERY_CACHE_TYPE'

     The old names still work in MySQL 4.0 but are deprecated.

   * You have to use `SET GLOBAL SQL_SLAVE_SKIP_COUNTER=#' instead of `SET SQL_SLAVE_SKIP_COUNTER=#'.

   * The `mysqld' startup options `--skip-locking' and `--enable-locking' have been renamed to `--skip-external-locking' and `--external-locking'.

   * `SHOW MASTER STATUS' now returns an empty set if the binary log is not enabled.

   * `SHOW SLAVE STATUS' now returns an empty set if the slave is not initialised.

   * `mysqld' now has the option `--temp-pool' enabled by default, as this gives better performance on some operating systems (most notably Linux).

   * `DOUBLE' and `FLOAT' columns now honour the `UNSIGNED' flag on storage (before, `UNSIGNED' was ignored for these columns).

   * `ORDER BY column DESC' now always sorts `NULL' values first; in 3.23 this was not always consistent. Note: MySQL 4.0.11 restored the original behaviour.

   * `SHOW INDEX' has two more columns (`Null' and `Index_type') than it had in 3.23.

   * `CHECK', `SIGNED', `LOCALTIME', and `LOCALTIMESTAMP' are now reserved words.

   * The result of all bitwise operators `|', `&', `<<', `>>', and `~' is now unsigned. This may cause problems if you are using them in a context where you want a signed result. *Note Cast Functions::.

   * *Note*: when you use subtraction between integer values where one is of type `UNSIGNED', the result will be unsigned!
In other words, before upgrading to MySQL 4.0, you should check your application for cases where you are subtracting a value from an unsigned entity and want a negative answer, or subtracting an unsigned value from an integer column. You can disable this behaviour by using the `--sql-mode=NO_UNSIGNED_SUBTRACTION' option when starting `mysqld'. *Note Cast Functions::.

   * To use `MATCH ... AGAINST (... IN BOOLEAN MODE)' with your tables, you need to rebuild them with `REPAIR TABLE table_name USE_FRM'.

   * `LOCATE()' and `INSTR()' are case-sensitive if one of the arguments is a binary string. Otherwise they are case-insensitive.

   * `STRCMP()' now uses the current character set when doing comparisons, which means that the default comparison behaviour now is case-insensitive.

   * `HEX(string)' now returns the characters in the string converted to hexadecimal. If you want to convert a number to hexadecimal, you should ensure that you call `HEX()' with a numeric argument.

   * In 3.23, `INSERT INTO ... SELECT' always had `IGNORE' enabled. In 4.0.1, MySQL will stop (and possibly roll back) in case of an error if you don't specify `IGNORE'.

   * `safe_mysqld' has been renamed to `mysqld_safe'. For some time, our binary distributions will include `safe_mysqld' as a symlink to `mysqld_safe'.

   * The old C API functions `mysql_drop_db', `mysql_create_db', and `mysql_connect' are no longer supported unless you compile MySQL with `CFLAGS=-DUSE_OLD_FUNCTIONS'. Instead of doing this, it is preferable to change the client to use the new 4.0 API.

   * In the `MYSQL_FIELD' structure, `length' and `max_length' have changed from `unsigned int' to `unsigned long'. This should not cause any problems, except that they may generate warning messages when used as arguments in the `printf()' class of functions.

   * You should use `TRUNCATE TABLE' when you want to delete all rows from a table and you don't care how many rows were deleted. (`TRUNCATE TABLE' is faster than `DELETE FROM table_name'.)

   * You will get an error if you have an active `LOCK TABLES' or transaction when trying to execute `TRUNCATE TABLE' or `DROP DATABASE'.

   * You should use integers to store values in `BIGINT' columns (instead of using strings, as you did in MySQL 3.23). Using strings will still work, but using integers is more efficient.

   * The format of `SHOW OPEN TABLES' has changed.

   * Multi-threaded clients should use `mysql_thread_init()' and `mysql_thread_end()'. *Note Threaded clients::.

   * If you want to recompile the Perl DBD::mysql module, you must get Msql-Mysql-modules version 1.2218 or newer, because the older DBD modules used the deprecated `drop_db()' call.

   * `RAND(seed)' returns a different random number series in 4.0 than in 3.23; this was done to further differentiate `RAND(seed)' and `RAND(seed+1)'.

   * The default type returned by `IFNULL(A,B)' is now set to be the more 'general' of the types of `A' and `B'. (The order is `STRING', `REAL', or `INTEGER'.)

Upgrading From Version 3.22 to Version 3.23
-------------------------------------------

MySQL Version 3.23 supports tables of the new `MyISAM' type and the old `ISAM' type. You don't have to convert your old tables to use these with Version 3.23. By default, all new tables will be created with type `MyISAM' (unless you start `mysqld' with the `--default-table-type=isam' option). You can change an `ISAM' table to a `MyISAM' table with `ALTER TABLE table_name TYPE=MyISAM' or the Perl script `mysql_convert_table_format'.
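For example (the database and table names here are only placeholders), you can convert a single table from the `mysql' client:

mysql> ALTER TABLE mytable TYPE=MyISAM;

or convert every table in a database at once with:

shell> mysql_convert_table_format db_name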
Version 3.22 and 3.21 clients will work without any problems with a Version 3.23 server.

The following list tells what you have to watch out for when upgrading to Version 3.23:

   * All tables that use the `tis620' character set must be fixed with `myisamchk -r' or `REPAIR TABLE'.

   * If you do a `DROP DATABASE' on a symbolically linked database, both the link and the original database are deleted. (This didn't happen in 3.22 because configure didn't detect the `readlink' system call.)

   * `OPTIMIZE TABLE' now works only for `MyISAM' tables. For other table types, you can use `ALTER TABLE' to optimise the table. During `OPTIMIZE TABLE' the table is now locked from other threads.

   * The MySQL client `mysql' is now by default started with the option `--no-named-commands (-g)'. This option can be disabled with `--enable-named-commands (-G)'. This may cause incompatibility problems in some cases, for example in SQL scripts that use named commands without a semicolon! Long format commands still work from the first line.

   * Date functions that work on parts of dates (like `MONTH()') will now return 0 for `0000-00-00' dates. (MySQL 3.22 returned `NULL'.)

   * If you are using the `german' character sort order, you must repair all your tables with `isamchk -r', as we have made some changes in the sort order!

   * The default return type of `IF' will now depend on both arguments and not only the first argument.

   * `AUTO_INCREMENT' will not work with negative numbers. The reason for this is that negative numbers caused problems when wrapping from -1 to 0. `AUTO_INCREMENT' for MyISAM tables is now handled at a lower level and is much faster than before. For MyISAM tables, old numbers are also not reused anymore, even if you delete some rows from the table.

   * `CASE', `DELAYED', `ELSE', `END', `FULLTEXT', `INNER', `RIGHT', `THEN', and `WHEN' are now reserved words.

   * `FLOAT(X)' is now a true floating-point type and not a value with a fixed number of decimals.

   * When declaring `DECIMAL(length,dec)', the length argument no longer includes a place for the sign or the decimal point.

   * A `TIME' string must now be of one of the following formats: `[[[DAYS] [H]H:]MM:]SS[.fraction]' or `[[[[[H]H]H]H]MM]SS[.fraction]'.

   * `LIKE' now compares strings using the same character comparison rules as `='. If you require the old behaviour, you can compile MySQL with the `CXXFLAGS=-DLIKE_CMP_TOUPPER' flag.

   * `REGEXP' is now case-insensitive for normal (not binary) strings.

   * When you check/repair tables, you should use `CHECK TABLE' or `myisamchk' for `MyISAM' tables (`.MYI') and `isamchk' for ISAM (`.ISM') tables.

   * If you want your `mysqldump' files to be compatible between MySQL Version 3.22 and Version 3.23, you should not use the `--opt' or `--all' option to `mysqldump'.

   * Check all your calls to `DATE_FORMAT()' to make sure there is a `%' before each format character. (MySQL Version 3.22 and later already allowed this syntax.)

   * `mysql_fetch_fields_direct' is now a function (it was a macro), and it returns a pointer to a `MYSQL_FIELD' rather than a `MYSQL_FIELD' value.

   * `mysql_num_fields()' can no longer be used on a `MYSQL*' object (it's now a function that takes `MYSQL_RES*' as an argument), so you should use `mysql_field_count()' instead.

   * In MySQL Version 3.22, the output of `SELECT DISTINCT ...' was almost always sorted. In Version 3.23, you must use `GROUP BY' or `ORDER BY' to obtain sorted output.

   * `SUM()' now returns `NULL', instead of 0, if there are no matching rows. This is according to ANSI SQL.
* An `AND' or `OR' with `NULL' values will now return `NULL' instead of 0. This mostly affects queries that use `NOT' on an `AND/OR' expression as `NOT NULL' = `NULL'. `LPAD()' and `RPAD()' will shorten the result string if it's longer than the length argument. Upgrading from Version 3.21 to Version 3.22 ------------------------------------------- Nothing that affects compatibility has changed between versions 3.21 and 3.22. The only pitfall is that new tables that are created with `DATE' type columns will use the new way to store the date. You can't access these new fields from an old version of `mysqld'. After installing MySQL Version 3.22, you should start the new server and then run the `mysql_fix_privilege_tables' script. This will add the new privileges that you need to use the `GRANT' command. If you forget this, you will get `Access denied' when you try to use `ALTER TABLE', `CREATE INDEX', or `DROP INDEX'. If your MySQL root user requires a password, you should give this as an argument to `mysql_fix_privilege_tables'. The C API interface to `mysql_real_connect()' has changed. If you have an old client program that calls this function, you must place a `0' for the new `db' argument (or recode the client to send the `db' element for faster connections). You must also call `mysql_init()' before calling `mysql_real_connect()'! This change was done to allow the new `mysql_options()' function to save options in the `MYSQL' handler structure. The `mysqld' variable `key_buffer' has changed names to `key_buffer_size', but you can still use the old name in your startup files. Upgrading from Version 3.20 to Version 3.21 ------------------------------------------- If you are running a version older than Version 3.20.28 and want to switch to Version 3.21, you need to do the following: You can start the `mysqld' Version 3.21 server with `safe_mysqld --old-protocol' to use it with clients from a Version 3.20 distribution. In this case, the new client function `mysql_errno()' will not return any server error, only `CR_UNKNOWN_ERROR' (but it works for client errors), and the server uses the old `password()' checking rather than the new one. If you are *not* using the `--old-protocol' option to `mysqld', you will need to make the following changes: * All client code must be recompiled. If you are using ODBC, you must get the new `MyODBC' 2.x driver. * The script `scripts/add_long_password' must be run to convert the `Password' field in the `mysql.user' table to `CHAR(16)'. * All passwords must be reassigned in the `mysql.user' table (to get 62-bit rather than 31-bit passwords). * The table format hasn't changed, so you don't have to convert any tables. MySQL Version 3.20.28 and above can handle the new `user' table format without affecting clients. If you have a MySQL version earlier than Version 3.20.28, passwords will no longer work with it if you convert the `user' table. So to be safe, you should first upgrade to at least Version 3.20.28 and then upgrade to Version 3.21. The new client code works with a 3.20.x `mysqld' server, so if you experience problems with 3.21.x, you can use the old 3.20.x server without having to recompile the clients again. If you are not using the `--old-protocol' option to `mysqld', old clients will issue the error message: ERROR: Protocol mismatch. Server Version = 10 Client Version = 9 The new Perl `DBI'/`DBD' interface also supports the old `mysqlperl' interface. 
The only change you have to make if you use `mysqlperl' is to change the arguments to the `connect()' function. The new arguments are: `host', `database', `user', and `password' (the `user' and `password' arguments have changed places). *Note Perl `DBI' Class: Perl DBI Class. The following changes may affect queries in old applications: * `HAVING' must now be specified before any `ORDER BY' clause. * The parameters to `LOCATE()' have been swapped. * There are some new reserved words. The most notable are `DATE', `TIME', and `TIMESTAMP'. Upgrading to Another Architecture --------------------------------- If you are using MySQL Version 3.23, you can copy the `.frm', `.MYI', and `.MYD' files between different architectures that support the same floating-point format. (MySQL takes care of any byte-swapping issues.) The MySQL `ISAM' data and index files (`.ISD' and `*.ISM', respectively) are architecture-dependent and in some cases OS-dependent. If you want to move your applications to another machine that has a different architecture or OS than your current machine, you should not try to move a database by simply copying the files to the other machine. Use `mysqldump' instead. By default, `mysqldump' will create a file full of SQL statements. You can then transfer the file to the other machine and feed it as input to the `mysql' client. Try `mysqldump --help' to see what options are available. If you are moving the data to a newer version of MySQL, you should use `mysqldump --opt' with the newer version to get a fast, compact dump. The easiest (although not the fastest) way to move a database between two machines is to run the following commands on the machine on which the database is located: shell> mysqladmin -h 'other hostname' create db_name shell> mysqldump --opt db_name \ | mysql -h 'other hostname' db_name If you want to copy a database from a remote machine over a slow network, you can use: shell> mysqladmin create db_name shell> mysqldump -h 'other hostname' --opt --compress db_name \ | mysql db_name You can also store the result in a file, then transfer the file to the target machine and load the file into the database there. For example, you can dump a database to a file on the source machine like this: shell> mysqldump --quick db_name | gzip > db_name.contents.gz (The file created in this example is compressed.) Transfer the file containing the database contents to the target machine and run these commands there: shell> mysqladmin create db_name shell> gunzip < db_name.contents.gz | mysql db_name You can also use `mysqldump' and `mysqlimport' to accomplish the database transfer. For big tables, this is much faster than simply using `mysqldump'. In the following commands, `DUMPDIR' represents the full pathname of the directory you use to store the output from `mysqldump'. First, create the directory for the output files and dump the database: shell> mkdir DUMPDIR shell> mysqldump --tab=DUMPDIR db_name Then transfer the files in the `DUMPDIR' directory to some corresponding directory on the target machine and load the files into MySQL there: shell> mysqladmin create db_name # create database shell> cat DUMPDIR/*.sql | mysql db_name # create tables in database shell> mysqlimport db_name DUMPDIR/*.txt # load data into tables Also, don't forget to copy the `mysql' database because that's where the grant tables (`user', `db', `host') are stored. You may have to run commands as the MySQL `root' user on the new machine until you have the `mysql' database in place. 
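For example (a minimal sketch using the same `mysqldump' technique shown above; replace the host name with that of the target machine), you could transfer the grant tables like this:

shell> mysqldump --opt mysql | mysql -h 'other hostname' mysql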
After you import the `mysql' database on the new machine, execute `mysqladmin flush-privileges' so that the server reloads the grant table information.

Operating System Specific Notes
===============================

Linux Notes (All Linux Versions)
--------------------------------

The following notes regarding `glibc' apply only to the situation when you build MySQL yourself. If you are running Linux on an x86 machine, in most cases it is much better for you to just use our binary. We link our binaries against the best patched version of `glibc' we can come up with and with the best compiler options, in an attempt to make it suitable for a high-load server. So if you read the following text, and are in doubt about what you should do, try our binary first to see if it meets your needs, and worry about your own build only after you have discovered that our binary is not good enough. In that case, we would appreciate a note about it, so we can build a better binary next time. For a typical user, even for setups with a lot of concurrent connections and/or tables exceeding the 2G limit, our binary in most cases is the best choice.

MySQL uses LinuxThreads on Linux. If you are using an old Linux version that doesn't have `glibc2', you must install LinuxThreads before trying to compile MySQL. You can get LinuxThreads at `http://www.mysql.com/downloads/os-linux.html'.

*Note*: we have seen some strange problems with Linux 2.2.14 and MySQL on SMP systems. If you have an SMP system, we recommend you upgrade to Linux 2.4 as soon as possible! Your system will be faster and more stable by doing this!

Note that `glibc' versions before and including Version 2.1.1 have a fatal bug in `pthread_mutex_timedwait' handling, which is used when you do `INSERT DELAYED'. We recommend that you not use `INSERT DELAYED' before upgrading glibc.

If you plan to have 1000+ concurrent connections, you will need to make some changes to LinuxThreads, recompile it, and relink MySQL against the new `libpthread.a'. Increase `PTHREAD_THREADS_MAX' in `sysdeps/unix/sysv/linux/bits/local_lim.h' to 4096 and decrease `STACK_SIZE' in `linuxthreads/internals.h' to 256 KB. (The paths are relative to the root of `glibc'.) Note that MySQL will not be stable with around 600-1000 connections if `STACK_SIZE' is the default of 2 MB.

If MySQL can't open enough files or connections, it may be that you haven't configured Linux to handle enough files. In Linux 2.2 and onward, you can check the number of allocated file handles by doing:

cat /proc/sys/fs/file-max
cat /proc/sys/fs/dquot-max
cat /proc/sys/fs/super-max

If you have more than 16 MB of memory, you should add something like the following to your init scripts (e.g. `/etc/init.d/boot.local' on SuSE Linux):

echo 65536 > /proc/sys/fs/file-max
echo 8192 > /proc/sys/fs/dquot-max
echo 1024 > /proc/sys/fs/super-max

You can also run the preceding commands from the command-line as root, but these settings will be lost the next time your computer reboots. Alternatively, you can set these parameters on bootup by using the `sysctl' tool, which is used by many Linux distributions (SuSE has added it as well, beginning with SuSE Linux 8.0). Just put the following values into a file named `/etc/sysctl.conf':

# Increase some values for MySQL
fs.file-max = 65536
fs.dquot-max = 8192
fs.super-max = 1024

You should also add the following to `/etc/my.cnf':

[safe_mysqld]
open-files-limit=8192

This should allow MySQL to create up to 8192 connections + files.
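If you want to verify the limit that the running server actually got, you can check the corresponding server variable (a quick sketch; `open_files_limit' is the variable name in recent 3.23/4.0 servers, so check the output of `SHOW VARIABLES' if your version differs):

shell> mysqladmin variables | grep open_files_limit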
The `STACK_SIZE' constant in LinuxThreads controls the spacing of thread stacks in the address space. It needs to be large enough so that there will be plenty of room for the stack of each individual thread, but small enough to keep the stack of some threads from running into the global `mysqld' data. Unfortunately, the Linux implementation of `mmap()', as we have experimentally discovered, will successfully unmap an already mapped region if you ask it to map out an address already in use, zeroing out the data on the entire page, instead of returning an error. So, the safety of `mysqld' or any other threaded application depends on the "gentleman" behaviour of the code that creates threads. The user must take measures to make sure the number of running threads at any time is sufficiently low for thread stacks to stay away from the global heap. With `mysqld', you should enforce this "gentleman" behaviour by setting a reasonable value for the `max_connections' variable.

If you build MySQL yourself and do not want to mess with patching LinuxThreads, you should set `max_connections' to a value no higher than 500. It should be even less if you have a large key buffer, large heap tables, or some other things that make `mysqld' allocate a lot of memory, or if you are running a 2.2 kernel with a 2G patch. If you are using our binary or RPM version 3.23.25 or later, you can safely set `max_connections' at 1500, assuming no large key buffer or heap tables with lots of data. The more you reduce `STACK_SIZE' in LinuxThreads, the more threads you can safely create. We recommend values between 128 KB and 256 KB.

If you use a lot of concurrent connections, you may suffer from a "feature" in the 2.2 kernel that penalises a process for forking or cloning a child in an attempt to prevent a fork bomb attack. This will cause MySQL not to scale well as you increase the number of concurrent clients. On single-CPU systems, we have seen this manifested as very slow thread creation, which means it may take a long time to connect to MySQL (as long as 1 minute), and it may take just as long to shut it down. On multiple-CPU systems, we have observed a gradual drop in query speed as the number of clients increases. In the process of trying to find a solution, we have received a kernel patch from one of our users, who claimed it made a lot of difference for his site. The patch is available at `http://www.mysql.com/Downloads/Patches/linux-fork.patch'. We have now done rather extensive testing of this patch on both development and production systems. It has significantly improved MySQL performance without causing any problems, and we now recommend it to our users who are still running high-load servers on 2.2 kernels.

This issue has been fixed in the 2.4 kernel, so if you are not satisfied with the current performance of your system, rather than patching your 2.2 kernel, it might be easier to just upgrade to 2.4, which will also give you a nice SMP boost in addition to fixing this fairness bug. We have tested MySQL on the 2.4 kernel on a 2-CPU machine and found that MySQL scales *much* better: there was virtually no slowdown in query throughput all the way up to 1000 clients, and the MySQL scaling factor (computed as the ratio of maximum throughput to the throughput with one client) was 180%. We have observed similar results on a 4-CPU system: virtually no slowdown as the number of clients was increased up to 1000, and a 300% scaling factor.
So for a high-load SMP server we would definitely recommend the 2.4 kernel at this point. We have discovered that it is essential to run the `mysqld' process with the highest possible priority on the 2.4 kernel to achieve maximum performance. This can be done by adding a `renice -20 $$' command to `safe_mysqld'. In our testing on a 4-CPU machine, increasing the priority gave a 60% increase in throughput with 400 clients.

We are currently also trying to collect more info on how well MySQL performs on the 2.4 kernel on 4-way and 8-way systems. If you have access to such a system and have done some benchmarks, please send us a mail with the results; we will include them in the manual.

There is another issue that greatly hurts MySQL performance, especially on SMP systems. The implementation of mutexes in LinuxThreads in `glibc-2.1' is very bad for programs with many threads that only hold the mutex for a short time. On an SMP system, ironic as it is, if you link MySQL against unmodified LinuxThreads, removing processors from the machine improves MySQL performance in many cases. We have made a patch available for `glibc 2.1.3' to correct this behaviour (`http://www.mysql.com/Downloads/Linux/linuxthreads-2.1-patch').

With `glibc-2.2.2', MySQL version 3.23.36 will use the adaptive mutex, which is much better than even the patched one in `glibc-2.1.3'. Be warned, however, that under some conditions the current mutex code in `glibc-2.2.2' overspins, which hurts MySQL performance. The chance of this condition can be reduced by renicing the `mysqld' process to the highest priority. We have also been able to correct the overspin behaviour with a patch, available at `http://www.mysql.com/Downloads/Linux/linuxthreads-2.2.2.patch'. It combines the correction of overspin, maximum number of threads, and stack spacing all in one. You will need to apply it in the `linuxthreads' directory with `patch -p0 < linuxthreads-2.2.2.patch'.

If `mysqld' crashes in `gethostbyaddr()' with a recent `glibc', the reason is likely that the new `glibc' needs more than 128K memory on stack for this call; the fix is to start `mysqld' with a thread stack of at least 192K. This stack size is now the default on MySQL 4.0.10 and above.

If you are using gcc 3.0 and above to compile MySQL, you must install the `libstdc++v3' library before compiling MySQL; if you don't do this you will get an error about a missing `__cxa_pure_virtual' symbol during linking!

On some older Linux distributions, `configure' may produce an error like this:

Syntax error in sched.h. Change _P to __P in the
/usr/include/sched.h file.
See the Installation chapter in the Reference Manual.

Just do what the error message says and add an extra underscore to the `_P' macro that has only one underscore, then try again.

You may get some warnings when compiling; those shown here can be ignored:

mysqld.cc -o objs-thread/mysqld.o
mysqld.cc: In function `void init_signals()':
mysqld.cc:315: warning: assignment of negative value `-1' to `long unsigned int'
mysqld.cc: In function `void * signal_hand(void *)':
mysqld.cc:346: warning: assignment of negative value `-1' to `long unsigned int'

`mysql.server' can be found in the `share/mysql' directory under the MySQL installation directory or in the `support-files' directory of the MySQL source tree.

If `mysqld' always core dumps when it starts up, the problem may be that you have an old `/lib/libc.a'. Try renaming it, then remove `sql/mysqld' and do a new `make install' and try again. This problem has been reported on some Slackware installations.
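For example, a sketch of the steps just described (assuming you are in the top level of the MySQL source tree, and keeping a backup of the old library) might look like this:

shell> mv /lib/libc.a /lib/libc.a.old
shell> rm sql/mysqld
shell> make install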
If you get the following error when linking `mysqld', it means that your `libg++.a' is not installed correctly:

/usr/lib/libc.a(putc.o): In function `_IO_putc':
putc.o(.text+0x0): multiple definition of `_IO_putc'

You can avoid using `libg++.a' by running `configure' like this:

shell> CXX=gcc ./configure

Linux SPARC Notes
.................

In some implementations, `readdir_r()' is broken. The symptom is that `SHOW DATABASES' always returns an empty set. This can be fixed by removing `HAVE_READDIR_R' from `config.h' after configuring and before compiling.

Some problems will require patching your Linux installation. The patch can be found at `http://www.mysql.com/Downloads/patches/Linux-sparc-2.0.30.diff'. This patch is against the Linux distribution `sparclinux-2.0.30.tar.gz' that is available at `vger.rutgers.edu' (a version of Linux that was never merged with the official 2.0.30). You must also install LinuxThreads Version 0.6 or newer.

Linux Alpha Notes
.................

MySQL Version 3.23.12 is the first MySQL version that was tested on Linux-Alpha. If you plan to use MySQL on Linux-Alpha, you should ensure that you have this version or newer. We have tested MySQL on Alpha with our benchmarks and test suite, and it appears to work nicely.

We currently build the MySQL binary packages on SuSE Linux 7.0 for AXP, kernel 2.4.4-SMP, Compaq C compiler (V6.2-505) and Compaq C++ compiler (V6.3-006) on a Compaq DS20 machine with an Alpha EV6 processor. You can find these compilers at `http://www.support.compaq.com/alpha-tools/'. By using these compilers, instead of gcc, we get about 9-14% better performance with MySQL.

Note that until MySQL version 3.23.52 and 4.0.2 we optimised the binary for the current CPU only (by using the `-fast' compile option); this meant that you could only use our binaries if you had an Alpha EV6 processor. In all subsequent releases we added the `-arch generic' flag to our compile options, which makes sure the binary runs on all Alpha processors. We also compile statically to avoid library problems.

CC=ccc CFLAGS="-fast -arch generic" CXX=cxx \
CXXFLAGS="-fast -arch generic -noexceptions -nortti" \
./configure --prefix=/usr/local/mysql --disable-shared \
--with-extra-charsets=complex --enable-thread-safe-client \
--with-mysqld-ldflags=-non_shared --with-client-ldflags=-non_shared

If you want to use egcs, the following configure line worked for us:

CFLAGS="-O3 -fomit-frame-pointer" CXX=gcc \
CXXFLAGS="-O3 -fomit-frame-pointer -felide-constructors \
-fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql \
--disable-shared

Some known problems when running MySQL on Linux-Alpha:

   * Debugging threaded applications like MySQL will not work with `gdb 4.18'. You should download and use gdb 5.1 instead!

   * If you try linking `mysqld' statically when using `gcc', the resulting image will core dump at start. In other words, *don't* use `--with-mysqld-ldflags=-all-static' with `gcc'.

Linux PowerPC Notes
...................

MySQL should work on MkLinux with the newest `glibc' package (tested with `glibc' 2.0.7).

Linux MIPS Notes
................

To get MySQL to work on Qube2 (Linux MIPS), you need the newest `glibc' libraries (`glibc-2.0.7-29C2' is known to work). You must also use the `egcs' C++ compiler (`egcs-1.0.2-9', `gcc 2.95.2' or newer).

Linux IA64 Notes
................
To get MySQL to compile on Linux IA64, we use the following compile line (using `gcc-2.96'):

CC=gcc CFLAGS="-O3 -fno-omit-frame-pointer" CXX=gcc \
CXXFLAGS="-O3 -fno-omit-frame-pointer -felide-constructors \
-fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql \
"--with-comment=Official MySQL binary" --with-extra-charsets=complex

On IA64 the MySQL client binaries use shared libraries. This means that if you install our binary distribution in some place other than `/usr/local/mysql', you need to either modify `/etc/ld.so.conf' or add the path to the directory where you have `libmysqlclient.so' to the `LD_LIBRARY_PATH' environment variable. *Note Link errors::.

Windows Notes
-------------

This section describes using MySQL on Windows. This information is also provided in the `README' file that comes with the MySQL Windows distribution. *Note Windows installation::.

Starting MySQL on Windows 95, 98 or Me
......................................

MySQL uses TCP/IP to connect a client to a server. (This will allow any machine on your network to connect to your MySQL server.) Because of this, you must install TCP/IP on your machine before starting MySQL. You can find TCP/IP on your Windows CD-ROM.

Note that if you are using an old Windows 95 release (for example OSR2), it's likely that you have an old Winsock package; MySQL requires Winsock 2! You can get the newest Winsock from `http://www.microsoft.com/'. Windows 98 has the new Winsock 2 library, so the above doesn't apply there.

To start the `mysqld' server, you should start an MS-DOS window and type:

C:\> C:\mysql\bin\mysqld

This will start `mysqld' in the background without a window.

You can kill the MySQL server by executing:

C:\> C:\mysql\bin\mysqladmin -u root shutdown

This calls the MySQL administration utility as user `root', which is the default Administrator in the MySQL grant system. Please note that the MySQL grant system is wholly independent from any login users under Windows.

Note that Windows 95/98/Me don't support creation of named pipes. So on those platforms, you can only use named pipes to connect to a remote MySQL server running on a Windows NT/2000/XP server host. (The MySQL server must also support named pipes, of course. For example, using `mysqld-opt' under NT/2000/XP will not allow named pipe connections. You should use either `mysqld-nt' or `mysqld-max-nt'.)

If `mysqld' doesn't start, please check the `\mysql\data\mysql.err' file to see if the server wrote any message there to indicate the cause of the problem. You can also try to start the server with `mysqld --standalone'; in this case, you may get some useful information on the screen that may help solve the problem.

The last option is to start `mysqld' with `--standalone --debug'. In this case `mysqld' will write a log file `C:\mysqld.trace' that should contain the reason why `mysqld' doesn't start. *Note Making trace files::.

Use `mysqld --help' to display all the options that `mysqld' understands!

Starting MySQL on Windows NT, 2000 or XP
........................................

To get MySQL to work with TCP/IP on Windows NT 4, you must install service pack 3 (or newer)!

Normally you should install MySQL as a service on Windows NT/2000/XP. In case the server was already running, first stop it using the following command:

C:\mysql\bin> mysqladmin -u root shutdown

This calls the MySQL administration utility as user `root', which is the default `Administrator' in the MySQL grant system.
Please note that the MySQL grant system is wholly independent from any login users under Windows.

Now install the server service:

C:\mysql\bin> mysqld-max-nt --install

If any options are required, they must be specified as "Start parameters" in the Windows `Services' utility before you start the MySQL service. The `Services' utility (`Windows Service Control Manager') can be found in the `Windows Control Panel' (under `Administrative Tools' on Windows 2000). It is advisable to close the Services utility while performing the `--install' or `--remove' operations; this prevents some odd errors.

For information about which server binary to run, see *Note Windows prepare environment::.

Please note that from MySQL version 3.23.44, you have the choice of setting up the service as `Manual' instead (if you don't wish the service to be started automatically during the boot process):

C:\mysql\bin> mysqld-max-nt --install-manual

The service is installed with the name `MySQL'. Once installed, it can be started immediately from the `Services' utility, or by using the command `NET START MySQL'. Once running, `mysqld-max-nt' can be stopped using `mysqladmin shutdown', from the Services utility, or by using the command `NET STOP MySQL'.

When running as a service, the operating system will automatically stop the MySQL service on computer shutdown. In MySQL versions < 3.23.47, Windows only waited a few seconds for the shutdown to complete, and killed the database server process if the time limit was exceeded (potentially causing problems). For instance, at the next startup the `InnoDB' storage engine had to do crash recovery. Starting from MySQL version 3.23.48, Windows will wait longer for the MySQL server shutdown to complete. If you notice this is not enough for your installation, it is safest to run the MySQL server not as a service, but from the command prompt, and shut it down with `mysqladmin shutdown'.

There is a problem in that Windows NT (but not Windows 2000/XP) by default waits only 20 seconds for a service to shut down, and after that kills the service process. You can increase this default by opening the Registry Editor `\winnt\system32\regedt32.exe' and editing the value of `WaitToKillServiceTimeout' at `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control' in the Registry tree. Specify the new, larger value in milliseconds; for example, 120000 will make Windows NT wait up to 120 seconds.

Please note that when run as a service, `mysqld-max-nt' has no access to a console and so no messages can be seen. Errors can be checked in `c:\mysql\data\mysql.err'.

If you have problems installing `mysqld-max-nt' as a service, try starting it with the full path:

C:\> C:\mysql\bin\mysqld-max-nt --install

If this doesn't work, you can get `mysqld-max-nt' to start properly by fixing the path in the registry!

If you don't want to start `mysqld-max-nt' as a service, you can start it as follows:

C:\> C:\mysql\bin\mysqld-max-nt --standalone

or

C:\> C:\mysql\bin\mysqld --standalone --debug

The last method gives you a debug trace in `C:\mysqld.trace'. *Note Making trace files::.

Running MySQL on Windows
........................

MySQL supports TCP/IP on all Windows platforms and named pipes on NT/2000/XP. The default is to use named pipes for local connections on NT/2000/XP and TCP/IP for all other cases if the client has TCP/IP installed. The host name specifies which protocol is used:

*Host name*       *Protocol*
NULL (none)       On NT/2000/XP, try named pipes first; if that
                  doesn't work, use TCP/IP. On 9x/Me, TCP/IP is used.
.                 Named pipes
localhost         TCP/IP to current host
hostname          TCP/IP
You can force a MySQL client to use named pipes by specifying the `--pipe' option or by specifying `.' as the host name. Use the `--socket' option to specify the name of the pipe. In MySQL 4.1 you should use the `--protocol=PIPE' option.

Note that starting from 3.23.50, named pipes are only enabled if `mysqld' is started with `--enable-named-pipe'. This is because some users have experienced problems shutting down the MySQL server when named pipes are used.

You can test whether MySQL is working by executing the following commands:

C:\> C:\mysql\bin\mysqlshow
C:\> C:\mysql\bin\mysqlshow -u root mysql
C:\> C:\mysql\bin\mysqladmin version status proc
C:\> C:\mysql\bin\mysql test

If `mysqld' is slow to answer connections on Windows 9x/Me, there is probably a problem with your DNS. In this case, start `mysqld' with `--skip-name-resolve' and use only `localhost' and IP numbers in the MySQL grant tables. You can also avoid DNS when connecting to a `mysqld-nt' MySQL server running on NT/2000/XP by using the `--pipe' argument to specify use of named pipes. This works for most MySQL clients.

There are two versions of the MySQL command-line tool:

*Binary*   *Description*
`mysql'    Compiled on native Windows, which offers very limited
           text editing capabilities.
`mysqlc'   Compiled with the Cygnus GNU compiler and libraries,
           which offers `readline' editing.

If you want to use `mysqlc.exe', you must copy `C:\mysql\lib\cygwinb19.dll' to your Windows system directory (`\windows\system' or similar place).

The default privileges on Windows give all local users full privileges to all databases without specifying a password. To make MySQL more secure, you should set a password for all users and remove the row in the `mysql.user' table that has `Host='localhost'' and `User='''. You should also add a password for the `root' user. The following example starts by removing the anonymous user that has all privileges, then sets a `root' user password:

C:\> C:\mysql\bin\mysql mysql
mysql> DELETE FROM user WHERE Host='localhost' AND User='';
mysql> QUIT
C:\> C:\mysql\bin\mysqladmin reload
C:\> C:\mysql\bin\mysqladmin -u root password your_password

After you've set the password, if you want to take down the `mysqld' server, you can do so using this command:

C:\> mysqladmin --user=root --password=your_password shutdown

If you are using the old shareware version of MySQL Version 3.21 under Windows, the above command will fail with an error: `parse error near 'SET password''. The solution for this is to download and upgrade to the latest MySQL version, which is now freely available.

With the current MySQL versions you can easily add new users and change privileges with the `GRANT' and `REVOKE' commands. *Note GRANT::.

Connecting to a Remote MySQL from Windows with SSH
..................................................

Here is a note about how to get a secure connection to a remote MySQL server with SSH (by David Carlson):

   * Install an SSH client on your Windows machine. As a user, the best non-free one I've found is `SecureCRT' from `http://www.vandyke.com/'. Another option is `f-secure' from `http://www.f-secure.com/'. You can also find some free ones on `Google' at `http://directory.google.com/Top/Computers/Security/Products_and_Tools/Cryptography/SSH/Clients/Windows/'.

   * Start your Windows SSH client. Set `Host_Name = yourmysqlserver_URL_or_IP'. Set `userid=your_userid' to log in to your server (probably not the same as your MySQL username/password).
   * Set up port forwarding. Either do a remote forward (Set `local_port: 3306', `remote_host: yourmysqlservername_or_ip', `remote_port: 3306') or a local forward (Set `port: 3306', `host: localhost', `remote port: 3306').

   * Save everything; otherwise you'll have to redo it the next time.

   * Log in to your server with the SSH session you just created.

   * On your Windows machine, start some ODBC application (such as Access).

   * Create a new file in Windows and link to MySQL using the ODBC driver the same way you normally do, except type in `localhost' for the MySQL host server, not `yourmysqlservername'.

You should now have an ODBC connection to MySQL, encrypted using SSH.

Splitting Data Across Different Disks on Windows
................................................

Beginning with MySQL Version 3.23.16, the `mysqld-max' and `mysqld-max-nt' servers in the MySQL distribution are compiled with the `-DUSE_SYMDIR' option. This allows you to put a database on a different disk by adding a symbolic link to it (in a manner similar to the way that symbolic links work on Unix).

On Windows, you make a symbolic link to a database by creating a file that contains the path to the destination directory and saving this in the `mysql_data' directory under the filename `database.sym'. Note that the symbolic link will be used only if the directory `mysql_data_dir\database' doesn't exist.

For example, if the MySQL data directory is `C:\mysql\data' and you want to have database `foo' located at `D:\data\foo', you should create the file `C:\mysql\data\foo.sym' that contains the text `D:\data\foo\'. After that, all tables created in the database `foo' will be created in `D:\data\foo'.

Note that because of the speed penalty you get when opening every table, we have not enabled this by default even if you have compiled MySQL with support for this. To enable symlinks you should put the following entry in your `my.cnf' or `my.ini' file:

[mysqld]
use-symbolic-links

In MySQL 4.0 we will enable symlinks by default. Then you should instead use the `skip-symlink' option if you want to disable this.

Compiling MySQL Clients on Windows
..................................

In your source files, you should include `windows.h' before you include `mysql.h':

#if defined(_WIN32) || defined(_WIN64)
#include <windows.h>
#endif
#include <mysql.h>

You can either link your code with the dynamic `libmysql.lib' library, which is just a wrapper to load in `libmysql.dll' on demand, or link with the static `mysqlclient.lib' library.

Note that as the mysqlclient libraries are compiled as threaded libraries, you should also compile your code to be multi-threaded!

MySQL-Windows Compared to Unix MySQL
....................................

MySQL-Windows has by now proven itself to be very stable. This version of MySQL has the same features as the corresponding Unix version with the following exceptions:

*Windows 95 and threads*
     Windows 95 leaks about 200 bytes of main memory for each thread creation. Each connection in MySQL creates a new thread, so you shouldn't run `mysqld' for an extended time on Windows 95 if your server handles many connections! Other versions of Windows don't suffer from this bug.

*Concurrent reads*
     MySQL depends on the `pread()' and `pwrite()' calls to be able to mix `INSERT' and `SELECT'. Currently we use mutexes to emulate `pread()'/`pwrite()'. We will, in the long run, replace the file level interface with a virtual interface so that we can use the `readfile()'/`writefile()' interface on NT/2000/XP to get more speed.
     The current implementation limits the number of open files MySQL can use to 1024, which means that you will not be able to run as many concurrent threads on NT/2000/XP as on Unix.

*Blocking read*
     MySQL uses a blocking read for each connection. This means that:

        * A connection will not be disconnected automatically after 8 hours, as happens with the Unix version of MySQL.

        * If a connection hangs, it's impossible to break it without killing MySQL.

        * `mysqladmin kill' will not work on a sleeping connection.

        * `mysqladmin shutdown' can't abort as long as there are sleeping connections.

     We plan to fix this problem when our Windows developers have figured out a nice workaround.

*`DROP DATABASE'*
     You can't drop a database that is in use by some thread.

*Killing MySQL from the task manager*
     You can't kill MySQL from the task manager or with the shutdown utility in Windows 95. You must take it down with `mysqladmin shutdown'.

*Case-insensitive names*
     Filenames are case-insensitive on Windows, so database and table names are also case-insensitive in MySQL for Windows. The only restriction is that database and table names must be specified using the same case throughout a given statement. *Note Name case sensitivity::.

*The `\' directory character*
     Pathname components in Windows 95 are separated by the `\' character, which is also the escape character in MySQL. If you are using `LOAD DATA INFILE' or `SELECT ... INTO OUTFILE', you must double the `\' character:

     mysql> LOAD DATA INFILE "C:\\tmp\\skr.txt" INTO TABLE skr;
     mysql> SELECT * INTO OUTFILE 'C:\\tmp\\skr.txt' FROM skr;

     Alternatively, use Unix style filenames with `/' characters:

     mysql> LOAD DATA INFILE "C:/tmp/skr.txt" INTO TABLE skr;
     mysql> SELECT * INTO OUTFILE 'C:/tmp/skr.txt' FROM skr;

*`Can't open named pipe' error*
     If you use a MySQL 3.22 version on NT with the newest mysql-clients, you will get the following error:

     error 2017: can't open named pipe to host: . pipe...

     This is because the release version of MySQL uses named pipes on NT by default. You can avoid this error by using the `--host=localhost' option to the new MySQL clients or by creating an option file `C:\my.cnf' that contains the following information:

     [client]
     host = localhost

     Starting from 3.23.50, named pipes are only enabled if `mysqld' is started with `--enable-named-pipe'.

*`Access denied for user' error*
     If you get the error `Access denied for user: 'some-user@unknown' to database 'mysql'' when accessing a MySQL server on the same machine, this means that MySQL can't resolve your host name properly. To fix this, you should create a file `\windows\hosts' with the following information:

     127.0.0.1       localhost

*`ALTER TABLE'*
     While you are executing an `ALTER TABLE' statement, the table is locked from usage by other threads. This has to do with the fact that on Windows, you can't delete a file that is in use by another thread. (In the future, we may find some way to work around this problem.)

   * `DROP TABLE' on a table that is in use by a `MERGE' table will not work on Windows, because the `MERGE' handler does the table mapping hidden from the upper layer of MySQL. Because Windows doesn't allow you to drop files that are open, you first must flush all `MERGE' tables (with `FLUSH TABLES') or drop the `MERGE' table before dropping the table (see the example after this list). We will fix this at the same time we introduce `VIEW's.

   * `DATA DIRECTORY' and `INDEX DIRECTORY' directives in `CREATE TABLE' are ignored on Windows, because Windows doesn't support symbolic links.
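For example (the table name here is only a placeholder), assuming `t1' is one of the tables underlying a `MERGE' table, you would flush the open tables before dropping it:

mysql> FLUSH TABLES;
mysql> DROP TABLE t1;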
Here are some open issues for anyone who might want to help us with the Windows release: * Make a single-user `MYSQL.DLL' server. This should include everything in a standard MySQL server, except thread creation. This will make MySQL much easier to use in applications that don't need a true client/server and don't need to access the server from other hosts. * Add some nice start and shutdown icons to the MySQL installation. * When registering `mysqld' as a service with `--install' (on NT) it would be nice if you could also add default options on the command-line. For the moment, the workaround is to list the parameters in the `C:\my.cnf' file instead. * It would be really nice to be able to kill `mysqld' from the task manager. For the moment, you must use `mysqladmin shutdown'. * Port `readline' to Windows for use in the `mysql' command-line tool. * GUI versions of the standard MySQL clients (`mysql', `mysqlshow', `mysqladmin', and `mysqldump') would be nice. * It would be nice if the socket read and write functions in `net.c' were interruptible. This would make it possible to kill open threads with `mysqladmin kill' on Windows. * `mysqld' always starts in the "C" locale and not in the default locale. We would like to have `mysqld' use the current locale for the sort order. * Add macros to use the faster thread-safe increment/decrement methods provided by Windows. Other Windows-specific issues are described in the `README' file that comes with the MySQL-Windows distribution. Solaris Notes ------------- On Solaris, you may run into trouble even before you get the MySQL distribution unpacked! Solaris `tar' can't handle long file names, so you may see an error like this when you unpack MySQL: x mysql-3.22.12-beta/bench/Results/ATIS-mysql_odbc-NT_4.0-cmp-db2,\ informix,ms-sql,mysql,oracle,solid,sybase, 0 bytes, 0 tape blocks tar: directory checksum error In this case, you must use GNU `tar' (`gtar') to unpack the distribution. You can find a precompiled copy for Solaris at `http://www.mysql.com/downloads/os-solaris.html'. Sun native threads work only on Solaris 2.5 and higher. For Version 2.4 and earlier, MySQL will automatically use MIT-pthreads. *Note MIT-pthreads::. If you get the following error from configure: checking for restartable system calls... configure: error can not run test programs while cross compiling This means that you have something wrong with your compiler installation! In this case you should upgrade your compiler to a newer version. You may also be able to solve this problem by inserting the following row into the `config.cache' file: ac_cv_sys_restartable_syscalls=${ac_cv_sys_restartable_syscalls='no'} If you are using Solaris on a SPARC, the recommended compiler is `gcc' 2.95.2. You can find this at `http://gcc.gnu.org/'. Note that `egcs' 1.1.1 and `gcc' 2.8.1 don't work reliably on SPARC! The recommended `configure' line when using `gcc' 2.95.2 is: CC=gcc CFLAGS="-O3" \ CXX=gcc CXXFLAGS="-O3 -felide-constructors -fno-exceptions -fno-rtti" \ ./configure --prefix=/usr/local/mysql --with-low-memory --enable-assembler If you have an UltraSPARC, you can get 4% more performance by adding "-mcpu=v8 -Wa,-xarch=v8plusa" to CFLAGS and CXXFLAGS. 
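For example, on an UltraSPARC the recommended `gcc' 2.95.2 configure line shown above might become the following (a sketch only, combining the flags just mentioned; adjust paths and options for your own setup):
CC=gcc CFLAGS="-O3 -mcpu=v8 -Wa,-xarch=v8plusa" \
CXX=gcc CXXFLAGS="-O3 -mcpu=v8 -Wa,-xarch=v8plusa -felide-constructors -fno-exceptions -fno-rtti" \
./configure --prefix=/usr/local/mysql --with-low-memory --enable-assembler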
If you have Sun's Forte 5.0 (or newer) compiler, you can run `configure' like this:
CC=cc CFLAGS="-Xa -fast -native -xstrconst -mt" \
CXX=CC CXXFLAGS="-noex -mt" \
./configure --prefix=/usr/local/mysql --enable-assembler
You can create a 64 bit binary with:
CC=cc CFLAGS="-Xa -fast -native -xstrconst -mt -xarch=v9" \
CXX=CC CXXFLAGS="-noex -mt -xarch=v9" ASFLAGS="-xarch=v9" \
./configure --prefix=/usr/local/mysql --enable-assembler
In the MySQL benchmarks, we got a 4% speedup on an UltraSPARC when using Forte 5.0 in 32 bit mode compared to using gcc 3.2 with -mcpu flags. If you create a 64 bit binary, it's 4% slower than the 32 bit binary, but mysqld can instead handle more threads and memory.
If you get a problem with `fdatasync' or `sched_yield', you can fix this by adding `LIBS=-lrt' to the configure line.
The following paragraph is only relevant for compilers older than WorkShop 5.3: You may also have to edit the `configure' script to change this line:
#if !defined(__STDC__) || __STDC__ != 1
to this:
#if !defined(__STDC__)
If you turn on `__STDC__' with the `-Xc' option, the Sun compiler can't compile with the Solaris `pthread.h' header file. This is a Sun bug (broken compiler or broken include file).
If `mysqld' issues the error message shown here when you run it, you have tried to compile MySQL with the Sun compiler without enabling the multi-thread option (`-mt'):
libc internal error: _rmutex_unlock: rmutex not held
Add `-mt' to `CFLAGS' and `CXXFLAGS' and try again.
If you are using the SFW version of gcc (which comes with Solaris 8), you must add `/opt/sfw/lib' to the environment variable `LD_LIBRARY_PATH' before running configure.
If you are using the gcc available from `sunfreeware.com', you may have many problems. You should recompile gcc and GNU binutils on the machine you will be running them from to avoid any problems.
If you get the following error when compiling MySQL with `gcc', it means that your `gcc' is not configured for your version of Solaris:
shell> gcc -O3 -g -O2 -DDBUG_OFF -o thr_alarm ...
./thr_alarm.c: In function `signal_hand':
./thr_alarm.c:556: too many arguments to function `sigwait'
The proper thing to do in this case is to get the newest version of `gcc' and compile it with your current `gcc' compiler! At least for Solaris 2.5, almost all binary versions of `gcc' have old, unusable include files that will break all programs that use threads (and possibly other programs)!
Solaris doesn't provide static versions of all system libraries (`libpthreads' and `libdl'), so you can't compile MySQL with `--static'. If you try to do so, you will get the error:
ld: fatal: library -ldl: not found
or
undefined reference to `dlopen'
or
cannot find -lrt
If too many processes try to connect very rapidly to `mysqld', you will see this error in the MySQL log:
Error in accept: Protocol error
You might try starting the server with the `--set-variable back_log=50' option as a workaround for this. Please note that `--set-variable' is deprecated since MySQL 4.0; just use `--back_log=50' on its own. *Note Command-line options::.
If you are linking your own MySQL client, you might get the following error when you try to execute it:
ld.so.1: ./my: fatal: libmysqlclient.so.#: open failed: No such file or directory
The problem can be avoided by one of the following methods:
* Link the client with the following flag (instead of `-Lpath'): `-Wl,r/full-path-to-libmysqlclient.so'.
* Copy `libmysqlclient.so' to `/usr/lib'.
* Add the pathname of the directory where `libmysqlclient.so' is located to the `LD_RUN_PATH' environment variable before running your client. If you have problems with configure trying to link with `-lz' and you don't have `zlib' installed, you have two options: * If you want to be able to use the compressed communication protocol, you need to get and install zlib from ftp.gnu.org. * Configure with `--with-named-z-libs=no'. If you are using gcc and have problems with loading `UDF' functions into MySQL, try adding `-lgcc' to the link line for the `UDF' function. If you would like MySQL to start automatically, you can copy `support-files/mysql.server' to `/etc/init.d' and create a symbolic link to it named `/etc/rc3.d/S99mysql.server'. As Solaris doesn't support core files for `setuid()' applications, you can't get a core file from `mysqld' if you are using the `--user' option. Solaris 2.7/2.8 Notes ..................... You can normally use a Solaris 2.6 binary on Solaris 2.7 and 2.8. Most of the Solaris 2.6 issues also apply for Solaris 2.7 and 2.8. Note that MySQL Version 3.23.4 and above should be able to autodetect new versions of Solaris and enable workarounds for the following problems! Solaris 2.7 / 2.8 has some bugs in the include files. You may see the following error when you use `gcc': /usr/include/widec.h:42: warning: `getwc' redefined /usr/include/wchar.h:326: warning: this is the location of the previous definition If this occurs, you can do the following to fix the problem: Copy `/usr/include/widec.h' to `.../lib/gcc-lib/os/gcc-version/include' and change line 41 from: #if !defined(lint) && !defined(__lint) to #if !defined(lint) && !defined(__lint) && !defined(getwc) Alternatively, you can edit `/usr/include/widec.h' directly. Either way, after you make the fix, you should remove `config.cache' and run `configure' again! If you get errors like this when you run `make', it's because `configure' didn't detect the `curses.h' file (probably because of the error in `/usr/include/widec.h'): In file included from mysql.cc:50: /usr/include/term.h:1060: syntax error before `,' /usr/include/term.h:1081: syntax error before `;' The solution to this is to do one of the following: * Configure with `CFLAGS=-DHAVE_CURSES_H CXXFLAGS=-DHAVE_CURSES_H ./configure'. * Edit `/usr/include/widec.h' as indicted above and rerun configure. * Remove the `#define HAVE_TERM' line from `config.h' file and run `make' again. If you get a problem that your linker can't find `-lz' when linking your client program, the problem is probably that your `libz.so' file is installed in `/usr/local/lib'. You can fix this by one of the following methods: * Add `/usr/local/lib' to `LD_LIBRARY_PATH'. * Add a link to `libz.so' from `/lib'. * If you are using Solaris 8, you can install the optional zlib from your Solaris 8 CD distribution. * Configure MySQL with the `--with-named-z-libs=no' option. Solaris x86 Notes ................. On Solaris 2.8 on x86, `mysqld' will core dump if you run 'strip' in. If you are using `gcc' or `egcs' on Solaris x86 and you experience problems with core dumps under load, you should use the following `configure' command: CC=gcc CFLAGS="-O3 -fomit-frame-pointer -DHAVE_CURSES_H" \ CXX=gcc \ CXXFLAGS="-O3 -fomit-frame-pointer -felide-constructors -fno-exceptions \ -fno-rtti -DHAVE_CURSES_H" \ ./configure --prefix=/usr/local/mysql This will avoid problems with the `libstdc++' library and with C++ exceptions. 
If this doesn't help, you should compile a debug version and run it with a trace file or under `gdb'. *Note Using gdb on mysqld::. BSD Notes --------- This section provides information for the various BSD flavours, as well as specific versions within those. FreeBSD Notes ............. FreeBSD 3.x is recommended for running MySQL since the thread package is much more integrated. The easiest and therefore the preferred way to install is to use the mysql-server and mysql-client ports available on `http://www.freebsd.org/'. Using these gives you: * A working MySQL with all optimisations known to work on your version of FreeBSD enabled. * Automatic configuration and build. * Startup scripts installed in /usr/local/etc/rc.d. * Ability to see which files that are installed with pkg_info -L. And to remove them all with pkg_delete if you no longer want MySQL on that machine. It is recommended you use MIT-pthreads on FreeBSD 2.x and native threads on Versions 3 and up. It is possible to run with native threads on some late 2.2.x versions but you may encounter problems shutting down `mysqld'. The MySQL `Makefile's require GNU make (`gmake') to work. If you want to compile MySQL you need to install GNU `make' first. Be sure to have your name resolver setup correct. Otherwise, you may experience resolver delays or failures when connecting to `mysqld'. Make sure that the `localhost' entry in the `/etc/hosts' file is correct (otherwise, you will have problems connecting to the database). The `/etc/hosts' file should start with a line: 127.0.0.1 localhost localhost.your.domain The recommended way to compile and install MySQL on FreeBSD with gcc (2.95.2 and up) is: CC=gcc CFLAGS="-O2 -fno-strength-reduce" \ CXX=gcc CXXFLAGS="-O2 -fno-rtti -fno-exceptions -felide-constructors \ -fno-strength-reduce" \ ./configure --prefix=/usr/local/mysql --enable-assembler gmake gmake install ./scripts/mysql_install_db cd /usr/local/mysql ./bin/mysqld_safe & If you notice that `configure' will use MIT-pthreads, you should read the MIT-pthreads notes. *Note MIT-pthreads::. If you get an error from `make install' that it can't find `/usr/include/pthreads', `configure' didn't detect that you need MIT-pthreads. This is fixed by executing these commands: shell> rm config.cache shell> ./configure --with-mit-threads FreeBSD is also known to have a very low default file handle limit. *Note Not enough file handles::. Uncomment the ulimit -n section in safe_mysqld or raise the limits for the `mysqld' user in /etc/login.conf (and rebuild it with cap_mkdb /etc/login.conf). Also be sure you set the appropriate class for this user in the password file if you are not using the default (use: chpass mysqld-user-name). *Note `safe_mysqld': safe_mysqld. If you have a lot of memory you should consider rebuilding the kernel to allow MySQL to take more than 512M of RAM. Take a look at `option MAXDSIZ' in the LINT config file for more info. If you get problems with the current date in MySQL, setting the `TZ' variable will probably help. *Note Environment variables::. To get a secure and stable system you should only use FreeBSD kernels that are marked `-RELEASE'. NetBSD notes ............ To compile on NetBSD you need GNU `make'. Otherwise, the compile will crash when `make' tries to run `lint' on C++ files. OpenBSD 2.5 Notes ................. On OpenBSD Version 2.5, you can compile MySQL with native threads with the following options: CFLAGS=-pthread CXXFLAGS=-pthread ./configure --with-mit-threads=no OpenBSD 2.8 Notes ................. 
Our users have reported that OpenBSD 2.8 has a threading bug which causes problems with MySQL. The OpenBSD Developers have fixed the problem, but as of January 25th, 2001, it's only available in the "-current" branch. The symptoms of this threading bug are: slow response, high load, high CPU usage, and crashes. If you get an error like `Error in accept:: Bad file descriptor' or error 9 when trying to open tables or directories, the problem is probably that you haven't allocated enough file descriptors for MySQL. In this case try starting `safe_mysqld' as root with the following options: `--user=mysql --open-files-limit=2048' BSD/OS Version 2.x Notes ........................ If you get the following error when compiling MySQL, your `ulimit' value for virtual memory is too low: item_func.h: In method `Item_func_ge::Item_func_ge(const Item_func_ge &)': item_func.h:28: virtual memory exhausted make[2]: *** [item_func.o] Error 1 Try using `ulimit -v 80000' and run `make' again. If this doesn't work and you are using `bash', try switching to `csh' or `sh'; some BSDI users have reported problems with `bash' and `ulimit'. If you are using `gcc', you may also use have to use the `--with-low-memory' flag for `configure' to be able to compile `sql_yacc.cc'. If you get problems with the current date in MySQL, setting the `TZ' variable will probably help. *Note Environment variables::. BSD/OS Version 3.x Notes ........................ Upgrade to BSD/OS Version 3.1. If that is not possible, install BSDIpatch M300-038. Use the following command when configuring MySQL: shell> env CXX=shlicc++ CC=shlicc2 \ ./configure \ --prefix=/usr/local/mysql \ --localstatedir=/var/mysql \ --without-perl \ --with-unix-socket-path=/var/mysql/mysql.sock The following is also known to work: shell> env CC=gcc CXX=gcc CXXFLAGS=-O3 \ ./configure \ --prefix=/usr/local/mysql \ --with-unix-socket-path=/var/mysql/mysql.sock You can change the directory locations if you wish, or just use the defaults by not specifying any locations. If you have problems with performance under heavy load, try using the `--skip-thread-priority' option to `mysqld'! This will run all threads with the same priority; on BSDI Version 3.1, this gives better performance (at least until BSDI fixes their thread scheduler). If you get the error `virtual memory exhausted' while compiling, you should try using `ulimit -v 80000' and run `make' again. If this doesn't work and you are using `bash', try switching to `csh' or `sh'; some BSDI users have reported problems with `bash' and `ulimit'. BSD/OS Version 4.x Notes ........................ BSDI Version 4.x has some thread-related bugs. If you want to use MySQL on this, you should install all thread-related patches. At least M400-023 should be installed. On some BSDI Version 4.x systems, you may get problems with shared libraries. The symptom is that you can't execute any client programs, for example, `mysqladmin'. In this case you need to reconfigure not to use shared libraries with the `--disable-shared' option to configure. Some customers have had problems on BSDI 4.0.1 that the `mysqld' binary after a while can't open tables. This is because some library/system related bug causes `mysqld' to change current directory without asking for this! The fix is to either upgrade to 3.23.34 or after running `configure' remove the line `#define HAVE_REALPATH' from `config.h' before running make. 
Note that the above means that you can't symbolic link a database directories to another database directory or symbolic link a table to another database on BSDI! (Making a symbolic link to another disk is okay). Mac OS X Notes -------------- Mac OS X 10.x ............. MySQL should work without any problems on Mac OS X 10.x (Darwin). You don't need the pthread patches for this OS! This also applies to Mac OS X 10.x Server. Compiling for the Server platform is the same as for the client version of Mac OS X. However please note that MySQL comes preinstalled on the Server! *Note Mac OS X installation::. Mac OS X Server 1.2 (Rhapsody) .............................. Before trying to configure MySQL on Mac OS X server you must first install the pthread package from `http://www.prnet.de/RegEx/mysql.html'. Our binary for Mac OS X is compiled on Darwin 6.3 with the following configure line: CC=gcc CFLAGS="-O3 -fno-omit-frame-pointer" CXX=gcc \ CXXFLAGS="-O3 -fno-omit-frame-pointer -felide-constructors \ -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql \ --with-extra-charsets=complex --enable-thread-safe-client \ --enable-local-infile --disable-shared You might want to also add aliases to your shell's resource file to access `mysql' and `mysqladmin' from the command-line: alias mysql '/usr/local/mysql/bin/mysql' alias mysqladmin '/usr/local/mysql/bin/mysqladmin' Alternatively, you could simply add `/usr/local/mysql/bin' to your `PATH' environment variable, e.g. by adding the following to `$HOME/.tcshrc': setenv PATH $PATH:/usr/local/bin Other Unix Notes ---------------- HP-UX Notes for Binary Distributions .................................... Some of the binary distributions of MySQL for HP-UX are distributed as an HP depot file and as a tar file. To use the depot file you must be running at least HP-UX 10.x to have access to HP's software depot tools. The HP version of MySQL was compiled on an HP 9000/8xx server under HP-UX 10.20, and uses MIT-pthreads. It is known to work well under this configuration. MySQL Version 3.22.26 and newer can also be built with HP's native thread package. Other configurations that may work: * HP 9000/7xx running HP-UX 10.20+ * HP 9000/8xx running HP-UX 10.30 The following configurations almost definitely won't work: * HP 9000/7xx or 8xx running HP-UX 10.x where x < 2 * HP 9000/7xx or 8xx running HP-UX 9.x To install the distribution, use one of the commands here, where `/path/to/depot' is the full pathname of the depot file: * To install everything, including the server, client and development tools: shell> /usr/sbin/swinstall -s /path/to/depot mysql.full * To install only the server: shell> /usr/sbin/swinstall -s /path/to/depot mysql.server * To install only the client package: shell> /usr/sbin/swinstall -s /path/to/depot mysql.client * To install only the development tools: shell> /usr/sbin/swinstall -s /path/to/depot mysql.developer The depot places binaries and libraries in `/opt/mysql' and data in `/var/opt/mysql'. The depot also creates the appropriate entries in `/etc/init.d' and `/etc/rc2.d' to start the server automatically at boot time. Obviously, this entails being `root' to install. To install the HP-UX tar.gz distribution, you must have a copy of GNU `tar'. HP-UX Version 10.20 Notes ......................... There are a couple of small problems when compiling MySQL on HP-UX. We recommend that you use `gcc' instead of the HP-UX native compiler, because `gcc' produces better code! We recommend using gcc 2.95 on HP-UX. 
Don't use high optimisation flags (like -O6) as this may not be safe on HP-UX. The following configure line should work with gcc 2.95: CFLAGS="-I/opt/dce/include -fpic" \ CXXFLAGS="-I/opt/dce/include -felide-constructors -fno-exceptions \ -fno-rtti" CXX=gcc ./configure --with-pthread \ --with-named-thread-libs='-ldce' --prefix=/usr/local/mysql --disable-shared The following configure line should work with gcc 3.1: CFLAGS="-DHPUX -I/opt/dce/include -O3 -fPIC" CXX=gcc \ CXXFLAGS="-DHPUX -I/opt/dce/include -felide-constructors -fno-exceptions \ -fno-rtti -O3 -fPIC" ./configure --prefix=/usr/local/mysql \ --with-extra-charsets=complex --enable-thread-safe-client \ --enable-local-infile --with-pthread \ --with-named-thread-libs=-ldce --with-lib-ccflags=-fPIC --disable-shared HP-UX Version 11.x Notes ........................ For HP-UX Version 11.x we recommend MySQL Version 3.23.15 or later. Because of some critical bugs in the standard HP-UX libraries, you should install the following patches before trying to run MySQL on HP-UX 11.0: PHKL_22840 Streams cumulative PHNE_22397 ARPA cumulative This will solve the problem of getting `EWOULDBLOCK' from `recv()' and `EBADF' from `accept()' in threaded applications. If you are using `gcc' 2.95.1 on an unpatched HP-UX 11.x system, you will get the error: In file included from /usr/include/unistd.h:11, from ../include/global.h:125, from mysql_priv.h:15, from item.cc:19: /usr/include/sys/unistd.h:184: declaration of C function ... /usr/include/sys/pthread.h:440: previous declaration ... In file included from item.h:306, from mysql_priv.h:158, from item.cc:19: The problem is that HP-UX doesn't define `pthreads_atfork()' consistently. It has conflicting prototypes in `/usr/include/sys/unistd.h':184 and `/usr/include/sys/pthread.h':440 (details below). One solution is to copy `/usr/include/sys/unistd.h' into `mysql/include' and edit `unistd.h' and change it to match the definition in `pthread.h'. Here's the diff: 183,184c183,184 < extern int pthread_atfork(void (*prepare)(), void (*parent)(), < void (*child)()); --- > extern int pthread_atfork(void (*prepare)(void), void (*parent)(void), > void (*child)(void)); After this, the following configure line should work: CFLAGS="-fomit-frame-pointer -O3 -fpic" CXX=gcc \ CXXFLAGS="-felide-constructors -fno-exceptions -fno-rtti -O3" \ ./configure --prefix=/usr/local/mysql --disable-shared If you are using MySQL 4.0.5 with the HP-UX compiler you can use: (tested with cc B.11.11.04): CC=cc CXX=aCC CFLAGS=+DD64 CXXFLAGS=+DD64 ./configure --with-extra-character-set=complex You can ignore any errors of the following type: aCC: warning 901: unknown option: `-3': use +help for online documentation If you get the following error from `configure' checking for cc option to accept ANSI C... no configure: error: MySQL requires a ANSI C compiler (and a C++ compiler). Try gcc. See the Installation chapter in the Reference Manual. Check that you don't have the path to the K&R compiler before the path to the HP-UX C and C++ compiler. Another reason for not beeing able to compile is that you didn't define the `+DD64' flags above. IBM-AIX notes ............. 
Automatic detection of `xlC' is missing from Autoconf, so a `configure' command something like this is needed when compiling MySQL (this example uses the IBM compiler):
export CC="xlc_r -ma -O3 -qstrict -qoptimize=3 -qmaxmem=8192"
export CXX="xlC_r -ma -O3 -qstrict -qoptimize=3 -qmaxmem=8192"
export CFLAGS="-I /usr/local/include"
export LDFLAGS="-L /usr/local/lib"
export CPPFLAGS=$CFLAGS
export CXXFLAGS=$CFLAGS
./configure --prefix=/usr/local \
--localstatedir=/var/mysql \
--sysconfdir=/etc/mysql \
--sbindir='/usr/local/bin' \
--libexecdir='/usr/local/bin' \
--enable-thread-safe-client \
--enable-large-files
Above are the options used to compile the MySQL distribution that can be found at `http://www-frec.bull.com/'. If you change the `-O3' to `-O2' in the above configure line, you must also remove the `-qstrict' option (this is a limitation in the IBM C compiler).
If you are using `gcc' or `egcs' to compile MySQL, you *must* use the `-fno-exceptions' flag, as the exception handling in `gcc'/`egcs' is not thread-safe! (This is tested with `egcs' 1.1.) There are also some known problems with IBM's assembler, which may cause it to generate bad code when used with gcc.
We recommend the following `configure' line with `egcs' and `gcc 2.95' on AIX:
CC="gcc -pipe -mcpu=power -Wa,-many" \
CXX="gcc -pipe -mcpu=power -Wa,-many" \
CXXFLAGS="-felide-constructors -fno-exceptions -fno-rtti" \
./configure --prefix=/usr/local/mysql --with-low-memory
The `-Wa,-many' is necessary for the compile to be successful. IBM is aware of this problem but is in no hurry to fix it because a workaround is available. We don't know if the `-fno-exceptions' is required with `gcc 2.95', but as MySQL doesn't use exceptions and the above option generates faster code, we recommend that you always use this option with `egcs / gcc'.
If you get a problem with assembler code, try changing the -mcpu=xxx to match your CPU. Typically power2, power, or powerpc may need to be used; alternatively, you might need to use 604 or 604e. I'm not positive but I would think using "power" would likely be safe most of the time, even on a power2 machine.
If you don't know what your CPU is, then do a "uname -m"; this will give you back a string that looks like "000514676700", with a format of xxyyyyyymmss where xx and ss are always 0's, yyyyyy is a unique system id and mm is the id of the CPU Planar. A chart of these values can be found at `http://publib.boulder.ibm.com/doc_link/en_US/a_doc_lib/cmds/aixcmds5/uname.htm'. This will give you a machine type and a machine model you can use to determine what type of CPU you have.
If you have problems with signals (MySQL dies unexpectedly under high load) you may have found an OS bug with threads and signals. In this case you can tell MySQL not to use signals by configuring with:
shell> CFLAGS=-DDONT_USE_THR_ALARM CXX=gcc \
CXXFLAGS="-felide-constructors -fno-exceptions -fno-rtti \
-DDONT_USE_THR_ALARM" \
./configure --prefix=/usr/local/mysql --with-debug --with-low-memory
This doesn't affect the performance of MySQL, but has the side effect that you can't kill clients that are "sleeping" on a connection with `mysqladmin kill' or `mysqladmin shutdown'. Instead, the client will die when it issues its next command.
On some versions of AIX, linking with `libbind.a' makes `getservbyname' core dump. This is an AIX bug and should be reported to IBM.
For AIX 4.2.1 and gcc you have to do the following changes.
After configuring, edit `config.h' and `include/my_config.h' and change the line that says #define HAVE_SNPRINTF 1 to #undef HAVE_SNPRINTF And finally, in `mysqld.cc' you need to add a prototype for initgoups. #ifdef _AIX41 extern "C" int initgroups(const char *,int); #endif If you need to allocate a lot of memory to the mysqld process, it's not enough to just set 'ulimit -d unlimited'. You may also have to set in `mysqld_safe' something like: export LDR_CNTRL='MAXDATA=0x80000000' You can find more about using a lot of memory at: `http://publib16.boulder.ibm.com/pseries/en_US/aixprggd/genprogc/lrg_prg_support.htm'. SunOS 4 Notes ............. On SunOS 4, MIT-pthreads is needed to compile MySQL, which in turn means you will need GNU `make'. Some SunOS 4 systems have problems with dynamic libraries and `libtool'. You can use the following `configure' line to avoid this problem: shell> ./configure --disable-shared --with-mysqld-ldflags=-all-static When compiling `readline', you may get warnings about duplicate defines. These may be ignored. When compiling `mysqld', there will be some `implicit declaration of function' warnings. These may be ignored. Alpha-DEC-UNIX Notes (Tru64) ............................ If you are using egcs 1.1.2 on Digital Unix, you should upgrade to gcc 2.95.2, as egcs on DEC has some serious bugs! When compiling threaded programs under Digital Unix, the documentation recommends using the `-pthread' option for `cc' and `cxx' and the libraries `-lmach -lexc' (in addition to `-lpthread'). You should run `configure' something like this: CC="cc -pthread" CXX="cxx -pthread -O" \ ./configure --with-named-thread-libs="-lpthread -lmach -lexc -lc" When compiling `mysqld', you may see a couple of warnings like this: mysqld.cc: In function void handle_connections()': mysqld.cc:626: passing long unsigned int *' as argument 3 of accept(int,sockadddr *, int *)' You can safely ignore these warnings. They occur because `configure' can detect only errors, not warnings. If you start the server directly from the command-line, you may have problems with it dying when you log out. (When you log out, your outstanding processes receive a `SIGHUP' signal.) If so, try starting the server like this: shell> nohup mysqld [options] & `nohup' causes the command following it to ignore any `SIGHUP' signal sent from the terminal. Alternatively, start the server by running `safe_mysqld', which invokes `mysqld' using `nohup' for you. *Note `safe_mysqld': safe_mysqld. If you get a problem when compiling mysys/get_opt.c, just remove the line #define _NO_PROTO from the start of that file! 
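One way to strip that line (a minimal sketch using standard Unix tools; run it from the top of your MySQL source tree and keep a backup, since your copy of the file may differ):
shell> cp mysys/get_opt.c mysys/get_opt.c.orig
shell> grep -v '#define _NO_PROTO' mysys/get_opt.c.orig > mysys/get_opt.c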
If you are using Compac's CC compiler, the following configure line should work: CC="cc -pthread" CFLAGS="-O4 -ansi_alias -ansi_args -fast -inline speed all -arch host" CXX="cxx -pthread" CXXFLAGS="-O4 -ansi_alias -ansi_args -fast -inline speed all -arch host \ -noexceptions -nortti" export CC CFLAGS CXX CXXFLAGS ./configure \ --prefix=/usr/local/mysql \ --with-low-memory \ --enable-large-files \ --enable-shared=yes \ --with-named-thread-libs="-lpthread -lmach -lexc -lc" gnumake If you get a problem with libtool, when compiling with shared libraries as above, when linking `mysql', you should be able to get around this by issuing: cd mysql /bin/sh ../libtool --mode=link cxx -pthread -O3 -DDBUG_OFF \ -O4 -ansi_alias -ansi_args -fast -inline speed \ -speculate all \ -arch host -DUNDEF_HAVE_GETHOSTBYNAME_R \ -o mysql mysql.o readline.o sql_string.o completion_hash.o \ ../readline/libreadline.a -lcurses \ ../libmysql/.libs/libmysqlclient.so -lm cd .. gnumake gnumake install scripts/mysql_install_db Alpha-DEC-OSF/1 Notes ..................... If you have problems compiling and have DEC `CC' and `gcc' installed, try running `configure' like this: CC=cc CFLAGS=-O CXX=gcc CXXFLAGS=-O3 \ ./configure --prefix=/usr/local/mysql If you get problems with the `c_asm.h' file, you can create and use a 'dummy' `c_asm.h' file with: touch include/c_asm.h CC=gcc CFLAGS=-I./include \ CXX=gcc CXXFLAGS=-O3 \ ./configure --prefix=/usr/local/mysql Note that the following problems with the `ld' program can be fixed by downloading the latest DEC (Compaq) patch kit from: `http://ftp.support.compaq.com/public/unix/'. On OSF/1 V4.0D and compiler "DEC C V5.6-071 on Digital Unix V4.0 (Rev. 878)" the compiler had some strange behaviour (undefined `asm' symbols). `/bin/ld' also appears to be broken (problems with `_exit undefined' errors occuring while linking `mysqld'). On this system, we have managed to compile MySQL with the following `configure' line, after replacing `/bin/ld' with the version from OSF 4.0C: CC=gcc CXX=gcc CXXFLAGS=-O3 ./configure --prefix=/usr/local/mysql With the Digital compiler "C++ V6.1-029", the following should work: CC=cc -pthread CFLAGS=-O4 -ansi_alias -ansi_args -fast -inline speed -speculate all \ -arch host CXX=cxx -pthread CXXFLAGS=-O4 -ansi_alias -ansi_args -fast -inline speed -speculate all \ -arch host -noexceptions -nortti export CC CFLAGS CXX CXXFLAGS ./configure --prefix=/usr/mysql/mysql --with-mysqld-ldflags=-all-static \ --disable-shared --with-named-thread-libs="-lmach -lexc -lc" In some versions of OSF/1, the `alloca()' function is broken. Fix this by removing the line in `config.h' that defines `'HAVE_ALLOCA''. The `alloca()' function also may have an incorrect prototype in `/usr/include/alloca.h'. This warning resulting from this can be ignored. `configure' will use the following thread libraries automatically: `--with-named-thread-libs="-lpthread -lmach -lexc -lc"'. When using `gcc', you can also try running `configure' like this: shell> CFLAGS=-D_PTHREAD_USE_D4 CXX=gcc CXXFLAGS=-O3 ./configure ... If you have problems with signals (MySQL dies unexpectedly under high load), you may have found an OS bug with threads and signals. In this case you can tell MySQL not to use signals by configuring with: shell> CFLAGS=-DDONT_USE_THR_ALARM \ CXXFLAGS=-DDONT_USE_THR_ALARM \ ./configure ... This doesn't affect the performance of MySQL, but has the side effect that you can't kill clients that are "sleeping" on a connection with `mysqladmin kill' or `mysqladmin shutdown'. 
Instead, the client will die when it issues its next command. With `gcc' 2.95.2, you will probably run into the following compile error: sql_acl.cc:1456: Internal compiler error in `scan_region', at except.c:2566 Please submit a full bug report. To fix this you should change to the `sql' directory and do a "cut and paste" of the last `gcc' line, but change `-O3' to `-O0' (or add `-O0' immediately after `gcc' if you don't have any `-O' option on your compile line). After this is done you can just change back to the top-level directly and run `make' again. SGI Irix Notes .............. If you are using Irix Version 6.5.3 or newer `mysqld' will only be able to create threads if you run it as a user with `CAP_SCHED_MGT' privileges (like `root') or give the `mysqld' server this privilege with the following shell command: shell> chcap "CAP_SCHED_MGT+epi" /opt/mysql/libexec/mysqld You may have to undefine some things in `config.h' after running `configure' and before compiling. In some Irix implementations, the `alloca()' function is broken. If the `mysqld' server dies on some `SELECT' statements, remove the lines from `config.h' that define `HAVE_ALLOC' and `HAVE_ALLOCA_H'. If `mysqladmin create' doesn't work, remove the line from `config.h' that defines `HAVE_READDIR_R'. You may have to remove the `HAVE_TERM_H' line as well. SGI recommends that you install all of the patches on this page as a set: `http://support.sgi.com/surfzone/patches/patchset/6.2_indigo.rps.html' At the very minimum, you should install the latest kernel rollup, the latest `rld' rollup, and the latest `libc' rollup. You definitely need all the POSIX patches on this page, for pthreads support: `http://support.sgi.com/surfzone/patches/patchset/6.2_posix.rps.html' If you get the something like the following error when compiling `mysql.cc': "/usr/include/curses.h", line 82: error(1084): invalid combination of type Type the following in the top-level directory of your MySQL source tree: shell> extra/replace bool curses_bool < /usr/include/curses.h \ > include/curses.h shell> make There have also been reports of scheduling problems. If only one thread is running, things go slow. Avoid this by starting another client. This may lead to a 2-to-10-fold increase in execution speed thereafter for the other thread. This is a poorly understood problem with Irix threads; you may have to improvise to find solutions until this can be fixed. If you are compiling with `gcc', you can use the following `configure' command: CC=gcc CXX=gcc CXXFLAGS=-O3 \ ./configure --prefix=/usr/local/mysql --enable-thread-safe-client \ --with-named-thread-libs=-lpthread On Irix 6.5.11 with native Irix C and C++ compilers ver. 7.3.1.2, the following is reported to work CC=cc CXX=CC CFLAGS='-O3 -n32 -TARG:platform=IP22 -I/usr/local/include \ -L/usr/local/lib' CXXFLAGS='-O3 -n32 -TARG:platform=IP22 \ -I/usr/local/include -L/usr/local/lib' ./configure \ --prefix=/usr/local/mysql --with-innodb --with-berkeley-db \ --with-libwrap=/usr/local \ --with-named-curses-libs=/usr/local/lib/libncurses.a Caldera (SCO) Notes ................... The current port is tested only on a "sco3.2v5.0.4" and "sco3.2v5.0.5" system. There has also been a lot of progress on a port to "sco 3.2v4.2". For the moment the recommended compiler on OpenServer is gcc 2.95.2. With this you should be able to compile MySQL with just: CC=gcc CXX=gcc ./configure ... (options) 1. For OpenServer 5.0.X you need to use gcc-2.95.2p1 or newer from the Skunkware. 
The Skunkware site is `http://www.caldera.com/skunkware/'; browse the OpenServer packages there, or use ftp to ftp2.caldera.com and look in the pub/skunkware/osr5/devtools/gcc directory.
2. You need the port of GCC 2.5.x for this product and the Development system. They are required on this version of Caldera (SCO) Unix. You cannot just use the GCC Dev system.
3. You should get the FSU Pthreads package and install it first. This can be found at `http://www.cs.wustl.edu/~schmidt/ACE_wrappers/FSU-threads.tar.gz'. You can also get a precompiled package from `http://www.mysql.com/Downloads/SCO/FSU-threads-3.5c.tar.gz'.
4. FSU Pthreads can be compiled with Caldera (SCO) Unix 4.2 with tcpip, or with OpenServer 3.0 or Open Desktop 3.0 (OS 3.0, ODT 3.0) with the Caldera (SCO) Development System installed, using a good port of GCC 2.5.x. For ODT or OS 3.0 you will need a good port of GCC 2.5.x; there are a lot of problems without a good port. The port for this product requires the SCO Unix Development system. Without it, you are missing the libraries and the linker that is needed.
5. To build FSU Pthreads on your system, do the following:
a. Run `./configure' in the `threads/src' directory and select the SCO OpenServer option. This command copies `Makefile.SCO5' to `Makefile'.
b. Run `make'.
c. To install in the default `/usr/include' directory, login as root, then `cd' to the `threads/src' directory, and run `make install'.
6. Remember to use GNU `make' when making MySQL.
7. If you don't start `safe_mysqld' as root, you probably will get only the default 110 open files per process. `mysqld' will write a note about this in the log file.
8. With SCO 3.2V5.0.5, you should use FSU Pthreads version 3.5c or newer. You should also use gcc 2.95.2 or newer! The following `configure' command should work:
shell> ./configure --prefix=/usr/local/mysql --disable-shared
9. With SCO 3.2V4.2, you should use FSU Pthreads version 3.5c or newer. The following `configure' command should work:
shell> CFLAGS="-D_XOPEN_XPG4" CXX=gcc CXXFLAGS="-D_XOPEN_XPG4" \
./configure \
--prefix=/usr/local/mysql \
--with-named-thread-libs="-lgthreads -lsocket -lgen -lgthreads" \
--with-named-curses-libs="-lcurses"
You may get some problems with some include files. In this case, you can find new SCO-specific include files at `http://www.mysql.com/Downloads/SCO/SCO-3.2v4.2-includes.tar.gz'. You should unpack this file in the `include' directory of your MySQL source tree.
Caldera (SCO) development notes:
* MySQL should automatically detect FSU Pthreads and link `mysqld' with `-lgthreads -lsocket -lgthreads'.
* The Caldera (SCO) development libraries are re-entrant in FSU Pthreads. Caldera claims that its libraries' functions are re-entrant, so they must be re-entrant with FSU Pthreads. FSU Pthreads on OpenServer tries to use the SCO scheme to make re-entrant libraries.
* FSU Pthreads (at least the version at `http://www.mysql.com/') comes linked with GNU `malloc'. If you encounter problems with memory usage, make sure that `gmalloc.o' is included in `libgthreads.a' and `libgthreads.so'.
* In FSU Pthreads, the following system calls are pthreads-aware: `read()', `write()', `getmsg()', `connect()', `accept()', `select()', and `wait()'.
* The CSSA-2001-SCO.35.2 patch (listed in custom as the erg711905-dscr_remap security patch, version 2.0.0) breaks FSU threads and makes mysqld unstable. You have to remove this one if you want to run mysqld on an OpenServer 5.0.6 machine.
If you want to install DBI on Caldera (SCO), you have to edit the `Makefile' in DBI-xxx and each subdirectory.
Note that the following assumes gcc 2.95.2 or newer:
OLD:                                  NEW:
CC = cc                               CC = gcc
CCCDLFLAGS = -KPIC -W1,-Bexport       CCCDLFLAGS = -fpic
CCDLFLAGS = -wl,-Bexport              CCDLFLAGS =
LD = ld                               LD = gcc -G -fpic
LDDLFLAGS = -G -L/usr/local/lib       LDDLFLAGS = -L/usr/local/lib
LDFLAGS = -belf -L/usr/local/lib      LDFLAGS = -L/usr/local/lib
LD = ld                               LD = gcc -G -fpic
OPTIMISE = -Od                        OPTIMISE = -O1
OLD:
CCCFLAGS = -belf -dy -w0 -U M_XENIX -DPERL_SCO5 -I/usr/local/include
NEW:
CCFLAGS = -U M_XENIX -DPERL_SCO5 -I/usr/local/include
This is because the Perl dynaloader will not load the `DBI' modules if they were compiled with `icc' or `cc'. Perl works best when compiled with `cc'.
Caldera (SCO) Unixware Version 7.0 Notes
........................................
You must use a version of MySQL at least as recent as Version 3.22.13 because that version fixes some portability problems under Unixware.
We have been able to compile MySQL with the following `configure' command on Unixware Version 7.0.1:
CC=cc CXX=CC ./configure --prefix=/usr/local/mysql
If you want to use `gcc', you must use `gcc' 2.95.2 or newer.
Caldera provides libsocket.so.2 at `ftp://stage.caldera.com/pub/security/tools' for pre-OSR506 security fixes. Also, the telnetd fix is available as both libsocket.so.2 and libresolv.so.1, with instructions for installing on pre-OSR506 systems. It's probably a good idea to install the above patches before trying to compile/use MySQL.
OS/2 Notes
----------
MySQL uses quite a few open files. Because of this, you should add something like the following to your `CONFIG.SYS' file:
SET EMXOPT=-c -n -h1024
If you don't do this, you will probably run into the following error:
File 'xxxx' not found (Errcode: 24)
When using MySQL with OS/2 Warp 3, FixPack 29 or above is required. With OS/2 Warp 4, FixPack 4 or above is required. This is a requirement of the Pthreads library. MySQL must be installed in a partition that supports long filenames such as HPFS, FAT32, etc.
The `INSTALL.CMD' script must be run from OS/2's own `CMD.EXE' and may not work with replacement shells such as `4OS2.EXE'.
The `scripts/mysql-install-db' script has been renamed. It is now called `install.cmd' and is a REXX script, which will set up the default MySQL security settings and create the WorkPlace Shell icons for MySQL.
Dynamic module support is compiled in but not fully tested. Dynamic modules should be compiled using the Pthreads run-time library.
gcc -Zdll -Zmt -Zcrtdll=pthrdrtl -I../include -I../regex -I.. \
-o example udf_example.cc -L../lib -lmysqlclient udf_example.def
mv example.dll example.udf
*Note*: Due to limitations in OS/2, UDF module name stems must not exceed 8 characters. Modules are stored in the `/mysql2/udf' directory; the `safe-mysqld.cmd' script will put this directory in the `BEGINLIBPATH' environment variable. When using UDF modules, specified extensions are ignored; the extension is assumed to be `.udf'. For example, in Unix, the shared module might be named `example.so' and you would load a function from it like this:
mysql> CREATE FUNCTION metaphon RETURNS STRING SONAME "example.so";
In OS/2, the module would be named `example.udf', but you would not specify the module extension:
mysql> CREATE FUNCTION metaphon RETURNS STRING SONAME "example";
BeOS Notes
----------
We are really interested in getting MySQL to work on BeOS, but unfortunately we don't have any person who knows BeOS or has time to do a port. We are interested in finding someone to do a port, and we will help them with any technical questions they may have while doing the port.
We have previously talked with some BeOS developers that have said that MySQL is 80% ported to BeOS, but we haven't heard from them in a while. Novell NetWare Notes -------------------- We are really interested in getting MySQL to work on NetWare, but unfortunately we don't have any person who knows NetWare or has time to do a port. We are interested in finding someone to do a port, and we will help them with any technical questions they may have while doing the port. Perl Installation Comments ========================== Installing Perl on Unix ----------------------- Perl support for MySQL is provided by means of the `DBI'/`DBD' client interface. *Note Perl::. The Perl `DBD'/`DBI' client code requires Perl Version 5.004 or later. The interface *will not work* if you have an older version of Perl. MySQL Perl support also requires that you've installed MySQL client programming support. If you installed MySQL from RPM files, client programs are in the client RPM, but client programming support is in the developer RPM. Make sure you've installed the latter RPM. As of Version 3.22.8, Perl support is distributed separately from the main MySQL distribution. If you want to install Perl support, the files you will need can be obtained from `http://www.mysql.com/downloads/api-dbi.html'. The Perl distributions are provided as compressed `tar' archives and have names like `MODULE-VERSION.tar.gz', where `MODULE' is the module name and `VERSION' is the version number. You should get the `Data-Dumper', `DBI', and `Msql-Mysql-modules' distributions and install them in that order. The installation procedure is shown here. The example shown is for the `Data-Dumper' module, but the procedure is the same for all three distributions: 1. Unpack the distribution into the current directory: shell> gunzip < Data-Dumper-VERSION.tar.gz | tar xvf - This command creates a directory named `Data-Dumper-VERSION'. 2. Change into the top-level directory of the unpacked distribution: shell> cd Data-Dumper-VERSION 3. Build the distribution and compile everything: shell> perl Makefile.PL shell> make shell> make test shell> make install The `make test' command is important because it verifies that the module is working. Note that when you run that command during the `Msql-Mysql-modules' installation to exercise the interface code, the MySQL server must be running or the test will fail. It is a good idea to rebuild and reinstall the `Msql-Mysql-modules' distribution whenever you install a new release of MySQL, particularly if you notice symptoms such as all your `DBI' scripts dumping core after you upgrade MySQL. If you don't have the right to install Perl modules in the system directory or if you to install local Perl modules, the following reference may help you: `http://www.iserver.com/support/contrib/perl5/modules.html' Look under the heading `Installing New Modules that Require Locally Installed Modules'. Installing ActiveState Perl on Windows -------------------------------------- To install the MySQL `DBD' module with ActiveState Perl on Windows, you should do the following: * Get ActiveState Perl from `http://www.activestate.com/Products/ActivePerl/' and install it. * Open a DOS shell. * If required, set the HTTP_proxy variable. 
For example, you might try:
set HTTP_proxy=my.proxy.com:3128
* Start the PPM program:
C:\> c:\perl\bin\ppm.pl
* If you have not already done so, install `DBI':
ppm> install DBI
* If this succeeds, run the following command:
install \
ftp://ftp.de.uu.net/pub/CPAN/authors/id/JWIED/DBD-mysql-1.2212.x86.ppd
The above should work at least with ActiveState Perl Version 5.6.
If you can't get the above to work, you should instead install the `MyODBC' driver and connect to the MySQL server through ODBC:
use DBI;
$dbh= DBI->connect("DBI:ODBC:$dsn","$user","$password") ||
die "Got error $DBI::errstr when connecting to $dsn\n";
Installing the MySQL Perl Distribution on Windows
-------------------------------------------------
The MySQL Perl distribution contains `DBI', `DBD:MySQL' and `DBD:ODBC'.
* Get the Perl distribution for Windows from `http://www.mysql.com/downloads/os-win32.html'.
* Unzip the distribution in `C:' so that you get a `C:\PERL' directory.
* Add the directory `C:\PERL\BIN' to your path.
* Add the directory `C:\PERL\BIN\MSWIN32-x86-thread' or `C:\PERL\BIN\MSWIN32-x86' to your path.
* Test that `perl' works by executing `perl -v' in a DOS shell.
Problems Using the Perl `DBI'/`DBD' Interface
---------------------------------------------
If Perl reports that it can't find the `../mysql/mysql.so' module, then the problem is probably that Perl can't locate the shared library `libmysqlclient.so'. You can fix this by any of the following methods:
* Compile the `Msql-Mysql-modules' distribution with `perl Makefile.PL -static -config' rather than `perl Makefile.PL'.
* Copy `libmysqlclient.so' to the directory where your other shared libraries are located (probably `/usr/lib' or `/lib').
* On Linux you can add the pathname of the directory where `libmysqlclient.so' is located to the `/etc/ld.so.conf' file.
* Add the pathname of the directory where `libmysqlclient.so' is located to the `LD_RUN_PATH' environment variable.
If you get the following errors from `DBD-mysql', you are probably using `gcc' (or using an old binary compiled with `gcc'):
/usr/bin/perl: can't resolve symbol '__moddi3'
/usr/bin/perl: can't resolve symbol '__divdi3'
Add `-L/usr/lib/gcc-lib/... -lgcc' to the link command when the `mysql.so' library gets built (check the output from `make' for `mysql.so' when you compile the Perl client). The `-L' option should specify the pathname of the directory where `libgcc.a' is located on your system.
Another cause of this problem may be that Perl and MySQL aren't both compiled with `gcc'. In this case, you can solve the mismatch by compiling both with `gcc'.
If you get the following error from `Msql-Mysql-modules' when you run the tests:
t/00base............install_driver(mysql) failed: Can't load '../blib/arch/auto/DBD/mysql/mysql.so' for module DBD::mysql: ../blib/arch/auto/DBD/mysql/mysql.so: undefined symbol: uncompress at /usr/lib/perl5/5.00503/i586-linux/DynaLoader.pm line 169.
it means that you need to add the compression library, -lz, to the link line. This can be done by making the following change in the file `lib/DBD/mysql/Install.pm':
$sysliblist .= " -lm";
to
$sysliblist .= " -lm -lz";
After this, you *must* run 'make realclean' and then proceed with the installation from the beginning.
If you want to use the Perl module on a system that doesn't support dynamic linking (like Caldera/SCO) you can generate a static version of Perl that includes `DBI' and `DBD-mysql'.
The way this works is that you generate a version of Perl with the `DBI' code linked in and install it on top of your current Perl. Then you use that to build a version of Perl that additionally has the `DBD' code linked in, and install that. On Caldera (SCO), you must have the following environment variables set: shell> LD_LIBRARY_PATH=/lib:/usr/lib:/usr/local/lib:/usr/progressive/lib or shell> LD_LIBRARY_PATH=/usr/lib:/lib:/usr/local/lib:/usr/ccs/lib:\ /usr/progressive/lib:/usr/skunk/lib shell> LIBPATH=/usr/lib:/lib:/usr/local/lib:/usr/ccs/lib:\ /usr/progressive/lib:/usr/skunk/lib shell> MANPATH=scohelp:/usr/man:/usr/local1/man:/usr/local/man:\ /usr/skunk/man: First, create a Perl that includes a statically linked `DBI' by running these commands in the directory where your `DBI' distribution is located: shell> perl Makefile.PL -static -config shell> make shell> make install shell> make perl Then you must install the new Perl. The output of `make perl' will indicate the exact `make' command you will need to execute to perform the installation. On Caldera (SCO), this is `make -f Makefile.aperl inst_perl MAP_TARGET=perl'. Next, use the just-created Perl to create another Perl that also includes a statically-linked `DBD::mysql' by running these commands in the directory where your `Msql-Mysql-modules' distribution is located: shell> perl Makefile.PL -static -config shell> make shell> make install shell> make perl Finally, you should install this new Perl. Again, the output of `make perl' indicates the command to use. Tutorial Introduction ********************* This chapter provides a tutorial introduction to MySQL by showing how to use the `mysql' client program to create and use a simple database. `mysql' (sometimes referred to as the "terminal monitor" or just "monitor") is an interactive program that allows you to connect to a MySQL server, run queries, and view the results. `mysql' may also be used in batch mode: you place your queries in a file beforehand, then tell `mysql' to execute the contents of the file. Both ways of using `mysql' are covered here. To see a list of options provided by `mysql', invoke it with the `--help' option: shell> mysql --help This chapter assumes that `mysql' is installed on your machine and that a MySQL server is available to which you can connect. If this is not true, contact your MySQL administrator. (If *you* are the administrator, you will need to consult other sections of this manual.) This chapter describes the entire process of setting up and using a database. If you are interested only in accessing an already-existing database, you may want to skip over the sections that describe how to create the database and the tables it contains. Because this chapter is tutorial in nature, many details are necessarily left out. Consult the relevant sections of the manual for more information on the topics covered here. Connecting to and Disconnecting from the Server =============================================== To connect to the server, you'll usually need to provide a MySQL user name when you invoke `mysql' and, most likely, a password. If the server runs on a machine other than the one where you log in, you'll also need to specify a hostname. Contact your administrator to find out what connection parameters you should use to connect (that is, what host, user name, and password to use). 
Once you know the proper parameters, you should be able to connect like this: shell> mysql -h host -u user -p Enter password: ******** The `********' represents your password; enter it when `mysql' displays the `Enter password:' prompt. If that works, you should see some introductory information followed by a `mysql>' prompt: shell> mysql -h host -u user -p Enter password: ******** Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 459 to server version: 3.22.20a-log Type 'help' for help. mysql> The prompt tells you that `mysql' is ready for you to enter commands. Some MySQL installations allow users to connect as the anonymous (unnamed) user to the server running on the local host. If this is the case on your machine, you should be able to connect to that server by invoking `mysql' without any options: shell> mysql After you have connected successfully, you can disconnect any time by typing `QUIT' at the `mysql>' prompt: mysql> QUIT Bye You can also disconnect by pressing Control-D. Most examples in the following sections assume you are connected to the server. They indicate this by the `mysql>' prompt. Entering Queries ================ Make sure you are connected to the server, as discussed in the previous section. Doing so will not in itself select any database to work with, but that's okay. At this point, it's more important to find out a little about how to issue queries than to jump right in creating tables, loading data into them, and retrieving data from them. This section describes the basic principles of entering commands, using several queries you can try out to familiarise yourself with how `mysql' works. Here's a simple command that asks the server to tell you its version number and the current date. Type it in as shown here following the `mysql>' prompt and press Enter: mysql> SELECT VERSION(), CURRENT_DATE; +--------------+--------------+ | VERSION() | CURRENT_DATE | +--------------+--------------+ | 3.22.20a-log | 1999-03-19 | +--------------+--------------+ 1 row in set (0.01 sec) mysql> This query illustrates several things about `mysql': * A command normally consists of a SQL statement followed by a semicolon. (There are some exceptions where a semicolon is not needed. `QUIT', mentioned earlier, is one of them. We'll get to others later.) * When you issue a command, `mysql' sends it to the server for execution and displays the results, then prints another `mysql>' to indicate that it is ready for another command. * `mysql' displays query output as a table (rows and columns). The first row contains labels for the columns. The rows following are the query results. Normally, column labels are the names of the columns you fetch from database tables. If you're retrieving the value of an expression rather than a table column (as in the example just shown), `mysql' labels the column using the expression itself. * `mysql' shows how many rows were returned and how long the query took to execute, which gives you a rough idea of server performance. These values are imprecise because they represent wall clock time (not CPU or machine time), and because they are affected by factors such as server load and network latency. (For brevity, the "rows in set" line is not shown in the remaining examples in this chapter.) Keywords may be entered in any lettercase. The following queries are equivalent: mysql> SELECT VERSION(), CURRENT_DATE; mysql> select version(), current_date; mysql> SeLeCt vErSiOn(), current_DATE; Here's another query. 
It demonstrates that you can use `mysql' as a simple calculator: mysql> SELECT SIN(PI()/4), (4+1)*5; +-------------+---------+ | SIN(PI()/4) | (4+1)*5 | +-------------+---------+ | 0.707107 | 25 | +-------------+---------+ The commands shown thus far have been relatively short, single-line statements. You can even enter multiple statements on a single line. Just end each one with a semicolon: mysql> SELECT VERSION(); SELECT NOW(); +--------------+ | VERSION() | +--------------+ | 3.22.20a-log | +--------------+ +---------------------+ | NOW() | +---------------------+ | 1999-03-19 00:15:33 | +---------------------+ A command need not be given all on a single line, so lengthy commands that require several lines are not a problem. `mysql' determines where your statement ends by looking for the terminating semicolon, not by looking for the end of the input line. (In other words, `mysql' accepts free-format input: it collects input lines but does not execute them until it sees the semicolon.) Here's a simple multiple-line statement: mysql> SELECT -> USER() -> , -> CURRENT_DATE; +--------------------+--------------+ | USER() | CURRENT_DATE | +--------------------+--------------+ | joesmith@localhost | 1999-03-18 | +--------------------+--------------+ In this example, notice how the prompt changes from `mysql>' to `->' after you enter the first line of a multiple-line query. This is how `mysql' indicates that it hasn't seen a complete statement and is waiting for the rest. The prompt is your friend, because it provides valuable feedback. If you use that feedback, you will always be aware of what `mysql' is waiting for. If you decide you don't want to execute a command that you are in the process of entering, cancel it by typing `\c': mysql> SELECT -> USER() -> \c mysql> Here, too, notice the prompt. It switches back to `mysql>' after you type `\c', providing feedback to indicate that `mysql' is ready for a new command. The following table shows each of the prompts you may see and summarises what they mean about the state that `mysql' is in:
     *Prompt*    *Meaning*
     `mysql>'    Ready for new command.
     `    ->'    Waiting for next line of multiple-line command.
     `    '>'    Waiting for next line, collecting a string that begins with a single quote (`'').
     `    ">'    Waiting for next line, collecting a string that begins with a double quote (`"').
Multiple-line statements commonly occur by accident when you intend to issue a command on a single line, but forget the terminating semicolon. In this case, `mysql' waits for more input: mysql> SELECT USER() -> If this happens to you (you think you've entered a statement but the only response is a `->' prompt), most likely `mysql' is waiting for the semicolon. If you don't notice what the prompt is telling you, you might sit there for a while before realising what you need to do. Enter a semicolon to complete the statement, and `mysql' will execute it: mysql> SELECT USER() -> ; +--------------------+ | USER() | +--------------------+ | joesmith@localhost | +--------------------+ The `'>' and `">' prompts occur during string collection. In MySQL, you can write strings surrounded by either `'' or `"' characters (for example, `'hello'' or `"goodbye"'), and `mysql' lets you enter strings that span multiple lines. When you see a `'>' or `">' prompt, it means that you've entered a line containing a string that begins with a `'' or `"' quote character, but have not yet entered the matching quote that terminates the string.
That's fine if you really are entering a multiple-line string, but how likely is that? Not very. More often, the `'>' and `">' prompts indicate that you've inadvertently left out a quote character. For example: mysql> SELECT * FROM my_table WHERE name = "Smith AND age < 30; "> If you enter this `SELECT' statement, then press Enter and wait for the result, nothing will happen. Instead of wondering why this query takes so long, notice the clue provided by the `">' prompt. It tells you that `mysql' expects to see the rest of an unterminated string. (Do you see the error in the statement? The string `"Smith' is missing the second quote.) At this point, what do you do? The simplest thing is to cancel the command. However, you cannot just type `\c' in this case, because `mysql' interprets it as part of the string that it is collecting! Instead, enter the closing quote character (so `mysql' knows you've finished the string), then type `\c': mysql> SELECT * FROM my_table WHERE name = "Smith AND age < 30; "> "\c mysql> The prompt changes back to `mysql>', indicating that `mysql' is ready for a new command. It's important to know what the `'>' and `">' prompts signify, because if you mistakenly enter an unterminated string, any further lines you type will appear to be ignored by `mysql'--including a line containing `QUIT'! This can be quite confusing, especially if you don't know that you need to supply the terminating quote before you can cancel the current command. Creating and Using a Database ============================= Now that you know how to enter commands, it's time to access a database. Suppose you have several pets in your home (your menagerie) and you'd like to keep track of various types of information about them. You can do so by creating tables to hold your data and loading them with the desired information. Then you can answer different sorts of questions about your animals by retrieving data from the tables. This section shows you how to: * Create a database * Create a table * Load data into the table * Retrieve data from the table in various ways * Use multiple tables The menagerie database will be simple (deliberately), but it is not difficult to think of real-world situations in which a similar type of database might be used. For example, a database like this could be used by a farmer to keep track of livestock, or by a veterinarian to keep track of patient records. A menagerie distribution containing some of the queries and sample data used in the following sections can be obtained from the MySQL web site. It's available in either compressed `tar' format (`http://www.mysql.com/Downloads/Contrib/Examples/menagerie.tar.gz') or Zip format (`http://www.mysql.com/Downloads/Contrib/Examples/menagerie.zip'). Use the `SHOW' statement to find out what databases currently exist on the server: mysql> SHOW DATABASES; +----------+ | Database | +----------+ | mysql | | test | | tmp | +----------+ The list of databases is probably different on your machine, but the `mysql' and `test' databases are likely to be among them. The `mysql' database is required because it describes user access privileges. The `test' database is often provided as a workspace for users to try things out. Note that you may not see all databases if you don't have the `SHOW DATABASES' privilege. *Note GRANT::. If the `test' database exists, try to access it: mysql> USE test Database changed Note that `USE', like `QUIT', does not require a semicolon.
(You can terminate such statements with a semicolon if you like; it does no harm.) The `USE' statement is special in another way, too: it must be given on a single line. You can use the `test' database (if you have access to it) for the examples that follow, but anything you create in that database can be removed by anyone else with access to it. For this reason, you should probably ask your MySQL administrator for permission to use a database of your own. Suppose you want to call yours `menagerie'. The administrator needs to execute a command like this: mysql> GRANT ALL ON menagerie.* TO your_mysql_name; where `your_mysql_name' is the MySQL user name assigned to you. Creating and Selecting a Database --------------------------------- If the administrator creates your database for you when setting up your permissions, you can begin using it. Otherwise, you need to create it yourself: mysql> CREATE DATABASE menagerie; Under Unix, database names are case-sensitive (unlike SQL keywords), so you must always refer to your database as `menagerie', not as `Menagerie', `MENAGERIE', or some other variant. This is also true for table names. (Under Windows, this restriction does not apply, although you must refer to databases and tables using the same lettercase throughout a given query.) Creating a database does not select it for use; you must do that explicitly. To make `menagerie' the current database, use this command: mysql> USE menagerie Database changed Your database needs to be created only once, but you must select it for use each time you begin a `mysql' session. You can do this by issuing a `USE' statement as shown above. Alternatively, you can select the database on the command-line when you invoke `mysql'. Just specify its name after any connection parameters that you might need to provide. For example: shell> mysql -h host -u user -p menagerie Enter password: ******** Note that `menagerie' is not your password on the command just shown. If you want to supply your password on the command-line after the `-p' option, you must do so with no intervening space (for example, as `-pmypassword', not as `-p mypassword'). However, putting your password on the command-line is not recommended, because doing so exposes it to snooping by other users logged in on your machine. Creating a Table ---------------- Creating the database is the easy part, but at this point it's empty, as `SHOW TABLES' will tell you: mysql> SHOW TABLES; Empty set (0.00 sec) The harder part is deciding what the structure of your database should be: what tables you will need and what columns will be in each of them. You'll want a table that contains a record for each of your pets. This can be called the `pet' table, and it should contain, as a bare minimum, each animal's name. Because the name by itself is not very interesting, the table should contain other information. For example, if more than one person in your family keeps pets, you might want to list each animal's owner. You might also want to record some basic descriptive information such as species and sex. How about age? That might be of interest, but it's not a good thing to store in a database. Age changes as time passes, which means you'd have to update your records often. Instead, it's better to store a fixed value such as date of birth. Then, whenever you need age, you can calculate it as the difference between the current date and the birth date. MySQL provides functions for doing date arithmetic, so this is not difficult. 
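For instance, here is a minimal sketch of that kind of date arithmetic (the exact age expression used for the `pet' table is developed in the `Date Calculations' section later in this chapter; the birth date shown here is just an illustrative literal):

     mysql> SELECT (TO_DAYS(CURRENT_DATE) - TO_DAYS('1993-02-04'))/365 AS years;

`TO_DAYS()' converts a date to a day number, so the difference between two dates, divided by 365, gives an approximate age in years.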
Storing birth date rather than age has other advantages, too: * You can use the database for tasks such as generating reminders for upcoming pet birthdays. (If you think this type of query is somewhat silly, note that it is the same question you might ask in the context of a business database to identify clients to whom you'll soon need to send out birthday greetings, for that computer-assisted personal touch.) * You can calculate age in relation to dates other than the current date. For example, if you store death date in the database, you can easily calculate how old a pet was when it died. You can probably think of other types of information that would be useful in the `pet' table, but the ones identified so far are sufficient for now: name, owner, species, sex, birth, and death. Use a `CREATE TABLE' statement to specify the layout of your table: mysql> CREATE TABLE pet (name VARCHAR(20), owner VARCHAR(20), -> species VARCHAR(20), sex CHAR(1), birth DATE, death DATE); `VARCHAR' is a good choice for the `name', `owner', and `species' columns because the column values will vary in length. The lengths of those columns need not all be the same, and need not be `20'. You can pick any length from `1' to `255', whatever seems most reasonable to you. (If you make a poor choice and it turns out later that you need a longer field, MySQL provides an `ALTER TABLE' statement.) Several types of values can be chosen to represent sex in animal records, such as `"m"' and `"f"', or perhaps `"male"' and `"female"'. It's simplest to use the single characters `"m"' and `"f"'. The use of the `DATE' data type for the `birth' and `death' columns is a fairly obvious choice. Now that you have created a table, `SHOW TABLES' should produce some output: mysql> SHOW TABLES; +---------------------+ | Tables in menagerie | +---------------------+ | pet | +---------------------+ To verify that your table was created the way you expected, use a `DESCRIBE' statement: mysql> DESCRIBE pet; +---------+-------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +---------+-------------+------+-----+---------+-------+ | name | varchar(20) | YES | | NULL | | | owner | varchar(20) | YES | | NULL | | | species | varchar(20) | YES | | NULL | | | sex | char(1) | YES | | NULL | | | birth | date | YES | | NULL | | | death | date | YES | | NULL | | +---------+-------------+------+-----+---------+-------+ You can use `DESCRIBE' any time, for example, if you forget the names of the columns in your table or what types they are. Loading Data into a Table ------------------------- After creating your table, you need to populate it. The `LOAD DATA' and `INSERT' statements are useful for this. Suppose your pet records can be described as shown here. (Observe that MySQL expects dates in `'YYYY-MM-DD'' format; this may be different from what you are used to.)
     *name*    *owner*  *species*  *sex*  *birth*     *death*
     Fluffy    Harold   cat        f      1993-02-04
     Claws     Gwen     cat        m      1994-03-17
     Buffy     Harold   dog        f      1989-05-13
     Fang      Benny    dog        m      1990-08-27
     Bowser    Diane    dog        m      1998-08-31  1995-07-29
     Chirpy    Gwen     bird       f      1998-09-11
     Whistler  Gwen     bird              1997-12-09
     Slim      Benny    snake      m      1996-04-29
Because you are beginning with an empty table, an easy way to populate it is to create a text file containing a row for each of your animals, then load the contents of the file into the table with a single statement.
You could create a text file `pet.txt' containing one record per line, with values separated by tabs, and given in the order in which the columns were listed in the `CREATE TABLE' statement. For missing values (such as unknown sexes or death dates for animals that are still living), you can use `NULL' values. To represent these in your text file, use `\N'. For example, the record for Whistler the bird would look like this (where the whitespace between values is a single tab character): *name* *owner* *species**sex**birth* *death* `Whistler'`Gwen' `bird' `\N' `1997-12-09'`\N' To load the text file `pet.txt' into the `pet' table, use this command: mysql> LOAD DATA LOCAL INFILE "pet.txt" INTO TABLE pet; You can specify the column value separator and end of line marker explicitly in the `LOAD DATA' statement if you wish, but the defaults are tab and linefeed. These are sufficient for the statement to read the file `pet.txt' properly. When you want to add new records one at a time, the `INSERT' statement is useful. In its simplest form, you supply values for each column, in the order in which the columns were listed in the `CREATE TABLE' statement. Suppose Diane gets a new hamster named Puffball. You could add a new record using an `INSERT' statement like this: mysql> INSERT INTO pet -> VALUES ('Puffball','Diane','hamster','f','1999-03-30',NULL); Note that string and date values are specified as quoted strings here. Also, with `INSERT', you can insert `NULL' directly to represent a missing value. You do not use `\N' like you do with `LOAD DATA'. From this example, you should be able to see that there would be a lot more typing involved to load your records initially using several `INSERT' statements rather than a single `LOAD DATA' statement. Retrieving Information from a Table ----------------------------------- The `SELECT' statement is used to pull information from a table. The general form of the statement is: SELECT what_to_select FROM which_table WHERE conditions_to_satisfy `what_to_select' indicates what you want to see. This can be a list of columns, or `*' to indicate "all columns." `which_table' indicates the table from which you want to retrieve data. The `WHERE' clause is optional. If it's present, `conditions_to_satisfy' specifies conditions that rows must satisfy to qualify for retrieval. Selecting All Data .................. The simplest form of `SELECT' retrieves everything from a table: mysql> SELECT * FROM pet; +----------+--------+---------+------+------------+------------+ | name | owner | species | sex | birth | death | +----------+--------+---------+------+------------+------------+ | Fluffy | Harold | cat | f | 1993-02-04 | NULL | | Claws | Gwen | cat | m | 1994-03-17 | NULL | | Buffy | Harold | dog | f | 1989-05-13 | NULL | | Fang | Benny | dog | m | 1990-08-27 | NULL | | Bowser | Diane | dog | m | 1998-08-31 | 1995-07-29 | | Chirpy | Gwen | bird | f | 1998-09-11 | NULL | | Whistler | Gwen | bird | NULL | 1997-12-09 | NULL | | Slim | Benny | snake | m | 1996-04-29 | NULL | | Puffball | Diane | hamster | f | 1999-03-30 | NULL | +----------+--------+---------+------+------------+------------+ This form of `SELECT' is useful if you want to review your entire table, for instance, after you've just loaded it with your initial dataset. As it happens, the output just shown reveals an error in your datafile: Bowser appears to have been born after he died! Consulting your original pedigree papers, you find that the correct birth year is 1989, not 1998. 
There are at least a couple of ways to fix this: * Edit the file `pet.txt' to correct the error, then empty the table and reload it using `DELETE' and `LOAD DATA': mysql> SET AUTOCOMMIT=1; # Used for quick re-create of the table mysql> DELETE FROM pet; mysql> LOAD DATA LOCAL INFILE "pet.txt" INTO TABLE pet; However, if you do this, you must also re-enter the record for Puffball. * Fix only the erroneous record with an `UPDATE' statement: mysql> UPDATE pet SET birth = "1989-08-31" WHERE name = "Bowser"; As shown above, it is easy to retrieve an entire table. But typically you don't want to do that, particularly when the table becomes large. Instead, you're usually more interested in answering a particular question, in which case you specify some constraints on the information you want. Let's look at some selection queries in terms of questions about your pets that they answer. Selecting Particular Rows ......................... You can select only particular rows from your table. For example, if you want to verify the change that you made to Bowser's birth date, select Bowser's record like this: mysql> SELECT * FROM pet WHERE name = "Bowser"; +--------+-------+---------+------+------------+------------+ | name | owner | species | sex | birth | death | +--------+-------+---------+------+------------+------------+ | Bowser | Diane | dog | m | 1989-08-31 | 1995-07-29 | +--------+-------+---------+------+------------+------------+ The output confirms that the year is correctly recorded now as 1989, not 1998. String comparisons are normally case-insensitive, so you can specify the name as `"bowser"', `"BOWSER"', etc. The query result will be the same. You can specify conditions on any column, not just `name'. For example, if you want to know which animals were born after 1998, test the `birth' column: mysql> SELECT * FROM pet WHERE birth >= "1998-1-1"; +----------+-------+---------+------+------------+-------+ | name | owner | species | sex | birth | death | +----------+-------+---------+------+------------+-------+ | Chirpy | Gwen | bird | f | 1998-09-11 | NULL | | Puffball | Diane | hamster | f | 1999-03-30 | NULL | +----------+-------+---------+------+------------+-------+ You can combine conditions, for example, to locate female dogs: mysql> SELECT * FROM pet WHERE species = "dog" AND sex = "f"; +-------+--------+---------+------+------------+-------+ | name | owner | species | sex | birth | death | +-------+--------+---------+------+------------+-------+ | Buffy | Harold | dog | f | 1989-05-13 | NULL | +-------+--------+---------+------+------------+-------+ The preceding query uses the `AND' logical operator. There is also an `OR' operator: mysql> SELECT * FROM pet WHERE species = "snake" OR species = "bird"; +----------+-------+---------+------+------------+-------+ | name | owner | species | sex | birth | death | +----------+-------+---------+------+------------+-------+ | Chirpy | Gwen | bird | f | 1998-09-11 | NULL | | Whistler | Gwen | bird | NULL | 1997-12-09 | NULL | | Slim | Benny | snake | m | 1996-04-29 | NULL | +----------+-------+---------+------+------------+-------+ `AND' and `OR' may be intermixed.
If you do that, it's a good idea to use parentheses to indicate how conditions should be grouped: mysql> SELECT * FROM pet WHERE (species = "cat" AND sex = "m") -> OR (species = "dog" AND sex = "f"); +-------+--------+---------+------+------------+-------+ | name | owner | species | sex | birth | death | +-------+--------+---------+------+------------+-------+ | Claws | Gwen | cat | m | 1994-03-17 | NULL | | Buffy | Harold | dog | f | 1989-05-13 | NULL | +-------+--------+---------+------+------------+-------+ Selecting Particular Columns ............................ If you don't want to see entire rows from your table, just name the columns in which you're interested, separated by commas. For example, if you want to know when your animals were born, select the `name' and `birth' columns: mysql> SELECT name, birth FROM pet; +----------+------------+ | name | birth | +----------+------------+ | Fluffy | 1993-02-04 | | Claws | 1994-03-17 | | Buffy | 1989-05-13 | | Fang | 1990-08-27 | | Bowser | 1989-08-31 | | Chirpy | 1998-09-11 | | Whistler | 1997-12-09 | | Slim | 1996-04-29 | | Puffball | 1999-03-30 | +----------+------------+ To find out who owns pets, use this query: mysql> SELECT owner FROM pet; +--------+ | owner | +--------+ | Harold | | Gwen | | Harold | | Benny | | Diane | | Gwen | | Gwen | | Benny | | Diane | +--------+ However, notice that the query simply retrieves the `owner' field from each record, and some of them appear more than once. To minimise the output, retrieve each unique output record just once by adding the keyword `DISTINCT': mysql> SELECT DISTINCT owner FROM pet; +--------+ | owner | +--------+ | Benny | | Diane | | Gwen | | Harold | +--------+ You can use a `WHERE' clause to combine row selection with column selection. For example, to get birth dates for dogs and cats only, use this query: mysql> SELECT name, species, birth FROM pet -> WHERE species = "dog" OR species = "cat"; +--------+---------+------------+ | name | species | birth | +--------+---------+------------+ | Fluffy | cat | 1993-02-04 | | Claws | cat | 1994-03-17 | | Buffy | dog | 1989-05-13 | | Fang | dog | 1990-08-27 | | Bowser | dog | 1989-08-31 | +--------+---------+------------+ Sorting Rows ............ You may have noticed in the preceding examples that the result rows are displayed in no particular order. However, it's often easier to examine query output when the rows are sorted in some meaningful way. To sort a result, use an `ORDER BY' clause. Here are animal birthdays, sorted by date: mysql> SELECT name, birth FROM pet ORDER BY birth; +----------+------------+ | name | birth | +----------+------------+ | Buffy | 1989-05-13 | | Bowser | 1989-08-31 | | Fang | 1990-08-27 | | Fluffy | 1993-02-04 | | Claws | 1994-03-17 | | Slim | 1996-04-29 | | Whistler | 1997-12-09 | | Chirpy | 1998-09-11 | | Puffball | 1999-03-30 | +----------+------------+ On character type columns, sorting--like all other comparison operations--is normally performed in a case-insensitive fashion. This means that the order will be undefined for columns that are identical except for their case. You can force a case-sensitive sort by using the BINARY cast: `ORDER BY BINARY(field)'.
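For example, to sort the pet names case-sensitively rather than case-insensitively, you could write the query like this (a sketch only; with the sample data in this chapter the ordering happens to come out the same either way, because no two names differ only in lettercase):

     mysql> SELECT name FROM pet ORDER BY BINARY(name);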
To sort in reverse order, add the `DESC' (descending) keyword to the name of the column you are sorting by: mysql> SELECT name, birth FROM pet ORDER BY birth DESC; +----------+------------+ | name | birth | +----------+------------+ | Puffball | 1999-03-30 | | Chirpy | 1998-09-11 | | Whistler | 1997-12-09 | | Slim | 1996-04-29 | | Claws | 1994-03-17 | | Fluffy | 1993-02-04 | | Fang | 1990-08-27 | | Bowser | 1989-08-31 | | Buffy | 1989-05-13 | +----------+------------+ You can sort on multiple columns. For example, to sort by type of animal, then by birth date within animal type with youngest animals first, use the following query: mysql> SELECT name, species, birth FROM pet ORDER BY species, birth DESC; +----------+---------+------------+ | name | species | birth | +----------+---------+------------+ | Chirpy | bird | 1998-09-11 | | Whistler | bird | 1997-12-09 | | Claws | cat | 1994-03-17 | | Fluffy | cat | 1993-02-04 | | Fang | dog | 1990-08-27 | | Bowser | dog | 1989-08-31 | | Buffy | dog | 1989-05-13 | | Puffball | hamster | 1999-03-30 | | Slim | snake | 1996-04-29 | +----------+---------+------------+ Note that the `DESC' keyword applies only to the column name immediately preceding it (`birth'); `species' values are still sorted in ascending order. Date Calculations ................. MySQL provides several functions that you can use to perform calculations on dates, for example, to calculate ages or extract parts of dates. To determine how many years old each of your pets is, compute the difference in the year part of the current date and the birth date, then subtract one if the current date occurs earlier in the calendar year than the birth date. The following query shows, for each pet, the birth date, the current date, and the age in years. mysql> SELECT name, birth, CURRENT_DATE, -> (YEAR(CURRENT_DATE)-YEAR(birth)) -> - (RIGHT(CURRENT_DATE,5)<RIGHT(birth,5)) -> AS age -> FROM pet; +----------+------------+--------------+------+ | name | birth | CURRENT_DATE | age | +----------+------------+--------------+------+ | Fluffy | 1993-02-04 | 2001-08-29 | 8 | | Claws | 1994-03-17 | 2001-08-29 | 7 | | Buffy | 1989-05-13 | 2001-08-29 | 12 | | Fang | 1990-08-27 | 2001-08-29 | 11 | | Bowser | 1989-08-31 | 2001-08-29 | 11 | | Chirpy | 1998-09-11 | 2001-08-29 | 2 | | Whistler | 1997-12-09 | 2001-08-29 | 3 | | Slim | 1996-04-29 | 2001-08-29 | 5 | | Puffball | 1999-03-30 | 2001-08-29 | 2 | +----------+------------+--------------+------+ Here, `YEAR()' pulls out the year part of a date and `RIGHT()' pulls off the rightmost five characters that represent the `MM-DD' (calendar year) part of the date. The part of the expression that compares the `MM-DD' values evaluates to 1 or 0, which adjusts the year difference down a year if `CURRENT_DATE' occurs earlier in the year than `birth'. The full expression is somewhat ungainly, so an alias (`age') is used to make the output column label more meaningful. The query works, but the result could be scanned more easily if the rows were presented in some order.
This can be done by adding an `ORDER BY name' clause to sort the output by name: mysql> SELECT name, birth, CURRENT_DATE, -> (YEAR(CURRENT_DATE)-YEAR(birth)) -> - (RIGHT(CURRENT_DATE,5)<RIGHT(birth,5)) -> AS age -> FROM pet ORDER BY name; +----------+------------+--------------+------+ | name | birth | CURRENT_DATE | age | +----------+------------+--------------+------+ | Bowser | 1989-08-31 | 2001-08-29 | 11 | | Buffy | 1989-05-13 | 2001-08-29 | 12 | | Chirpy | 1998-09-11 | 2001-08-29 | 2 | | Claws | 1994-03-17 | 2001-08-29 | 7 | | Fang | 1990-08-27 | 2001-08-29 | 11 | | Fluffy | 1993-02-04 | 2001-08-29 | 8 | | Puffball | 1999-03-30 | 2001-08-29 | 2 | | Slim | 1996-04-29 | 2001-08-29 | 5 | | Whistler | 1997-12-09 | 2001-08-29 | 3 | +----------+------------+--------------+------+ To sort the output by `age' rather than `name', just use a different `ORDER BY' clause: mysql> SELECT name, birth, CURRENT_DATE, -> (YEAR(CURRENT_DATE)-YEAR(birth)) -> - (RIGHT(CURRENT_DATE,5)<RIGHT(birth,5)) -> AS age -> FROM pet ORDER BY age; +----------+------------+--------------+------+ | name | birth | CURRENT_DATE | age | +----------+------------+--------------+------+ | Chirpy | 1998-09-11 | 2001-08-29 | 2 | | Puffball | 1999-03-30 | 2001-08-29 | 2 | | Whistler | 1997-12-09 | 2001-08-29 | 3 | | Slim | 1996-04-29 | 2001-08-29 | 5 | | Claws | 1994-03-17 | 2001-08-29 | 7 | | Fluffy | 1993-02-04 | 2001-08-29 | 8 | | Fang | 1990-08-27 | 2001-08-29 | 11 | | Bowser | 1989-08-31 | 2001-08-29 | 11 | | Buffy | 1989-05-13 | 2001-08-29 | 12 | +----------+------------+--------------+------+ A similar query can be used to determine age at death for animals that have died. You determine which animals these are by checking whether the `death' value is `NULL'. Then, for those with non-`NULL' values, compute the difference between the `death' and `birth' values: mysql> SELECT name, birth, death, -> (YEAR(death)-YEAR(birth)) - (RIGHT(death,5)<RIGHT(birth,5)) -> AS age -> FROM pet WHERE death IS NOT NULL ORDER BY age; +--------+------------+------------+------+ | name | birth | death | age | +--------+------------+------------+------+ | Bowser | 1989-08-31 | 1995-07-29 | 5 | +--------+------------+------------+------+ The query uses `death IS NOT NULL' rather than `death <> NULL' because `NULL' is a special value. This is explained later. *Note Working with `NULL': Working with NULL. What if you want to know which animals have birthdays next month? For this type of calculation, year and day are irrelevant; you simply want to extract the month part of the `birth' column. MySQL provides several date-part extraction functions, such as `YEAR()', `MONTH()', and `DAYOFMONTH()'. `MONTH()' is the appropriate function here. To see how it works, run a simple query that displays the value of both `birth' and `MONTH(birth)': mysql> SELECT name, birth, MONTH(birth) FROM pet; +----------+------------+--------------+ | name | birth | MONTH(birth) | +----------+------------+--------------+ | Fluffy | 1993-02-04 | 2 | | Claws | 1994-03-17 | 3 | | Buffy | 1989-05-13 | 5 | | Fang | 1990-08-27 | 8 | | Bowser | 1989-08-31 | 8 | | Chirpy | 1998-09-11 | 9 | | Whistler | 1997-12-09 | 12 | | Slim | 1996-04-29 | 4 | | Puffball | 1999-03-30 | 3 | +----------+------------+--------------+ Finding animals with birthdays in the upcoming month is easy, too. Suppose the current month is April.
Then the month value is `4' and you look for animals born in May (month 5) like this: mysql> SELECT name, birth FROM pet WHERE MONTH(birth) = 5; +-------+------------+ | name | birth | +-------+------------+ | Buffy | 1989-05-13 | +-------+------------+ There is a small complication if the current month is December, of course. You don't just add one to the month number (`12') and look for animals born in month 13, because there is no such month. Instead, you look for animals born in January (month 1). You can even write the query so that it works no matter what the current month is. That way you don't have to use a particular month number in the query. `DATE_ADD()' allows you to add a time interval to a given date. If you add a month to the value of `NOW()', then extract the month part with `MONTH()', the result produces the month in which to look for birthdays: mysql> SELECT name, birth FROM pet -> WHERE MONTH(birth) = MONTH(DATE_ADD(NOW(), INTERVAL 1 MONTH)); A different way to accomplish the same task is to add `1' to get the next month after the current one (after using the modulo function (`MOD') to wrap around the month value to `0' if it is currently `12'): mysql> SELECT name, birth FROM pet -> WHERE MONTH(birth) = MOD(MONTH(NOW()), 12) + 1; Note that `MONTH' returns a number between 1 and 12. And `MOD(something,12)' returns a number between 0 and 11. So the addition has to be after the `MOD()', otherwise we would go from November (11) to January (1). Working with `NULL' Values .......................... The `NULL' value can be surprising until you get used to it. Conceptually, `NULL' means missing value or unknown value and it is treated somewhat differently than other values. To test for `NULL', you cannot use the arithmetic comparison operators such as `=', `<', or `<>'. To demonstrate this for yourself, try the following query: mysql> SELECT 1 = NULL, 1 <> NULL, 1 < NULL, 1 > NULL; +----------+-----------+----------+----------+ | 1 = NULL | 1 <> NULL | 1 < NULL | 1 > NULL | +----------+-----------+----------+----------+ | NULL | NULL | NULL | NULL | +----------+-----------+----------+----------+ Clearly you get no meaningful results from these comparisons. Use the `IS NULL' and `IS NOT NULL' operators instead: mysql> SELECT 1 IS NULL, 1 IS NOT NULL; +-----------+---------------+ | 1 IS NULL | 1 IS NOT NULL | +-----------+---------------+ | 0 | 1 | +-----------+---------------+ Note that in MySQL, 0 or `NULL' means false and anything else means true. The default truth value from a boolean operation is 1. This special treatment of `NULL' is why, in the previous section, it was necessary to determine which animals are no longer alive using `death IS NOT NULL' instead of `death <> NULL'. Two `NULL' values are regarded as equal in a `GROUP BY'. When doing an `ORDER BY', `NULL' values are presented first if you do `ORDER BY ... ASC' and last if you do `ORDER BY ... DESC'. Note that between MySQL 4.0.2 - 4.0.10, `NULL' values incorrectly were always sorted first regardless of the sort direction. Pattern Matching ................ MySQL provides standard SQL pattern matching as well as a form of pattern matching based on extended regular expressions similar to those used by Unix utilities such as `vi', `grep', and `sed'. SQL pattern matching allows you to use `_' to match any single character and `%' to match an arbitrary number of characters (including zero characters). In MySQL, SQL patterns are case-insensitive by default. Some examples are shown here. 
Note that you do not use `=' or `<>' when you use SQL patterns; use the `LIKE' or `NOT LIKE' comparison operators instead. To find names beginning with `b': mysql> SELECT * FROM pet WHERE name LIKE "b%"; +--------+--------+---------+------+------------+------------+ | name | owner | species | sex | birth | death | +--------+--------+---------+------+------------+------------+ | Buffy | Harold | dog | f | 1989-05-13 | NULL | | Bowser | Diane | dog | m | 1989-08-31 | 1995-07-29 | +--------+--------+---------+------+------------+------------+ To find names ending with `fy': mysql> SELECT * FROM pet WHERE name LIKE "%fy"; +--------+--------+---------+------+------------+-------+ | name | owner | species | sex | birth | death | +--------+--------+---------+------+------------+-------+ | Fluffy | Harold | cat | f | 1993-02-04 | NULL | | Buffy | Harold | dog | f | 1989-05-13 | NULL | +--------+--------+---------+------+------------+-------+ To find names containing a `w': mysql> SELECT * FROM pet WHERE name LIKE "%w%"; +----------+-------+---------+------+------------+------------+ | name | owner | species | sex | birth | death | +----------+-------+---------+------+------------+------------+ | Claws | Gwen | cat | m | 1994-03-17 | NULL | | Bowser | Diane | dog | m | 1989-08-31 | 1995-07-29 | | Whistler | Gwen | bird | NULL | 1997-12-09 | NULL | +----------+-------+---------+------+------------+------------+ To find names containing exactly five characters, use the `_' pattern character: mysql> SELECT * FROM pet WHERE name LIKE "_____"; +-------+--------+---------+------+------------+-------+ | name | owner | species | sex | birth | death | +-------+--------+---------+------+------------+-------+ | Claws | Gwen | cat | m | 1994-03-17 | NULL | | Buffy | Harold | dog | f | 1989-05-13 | NULL | +-------+--------+---------+------+------------+-------+ The other type of pattern matching provided by MySQL uses extended regular expressions. When you test for a match for this type of pattern, use the `REGEXP' and `NOT REGEXP' operators (or `RLIKE' and `NOT RLIKE', which are synonyms). Some characteristics of extended regular expressions are: * `.' matches any single character. * A character class `[...]' matches any character within the brackets. For example, `[abc]' matches `a', `b', or `c'. To name a range of characters, use a dash. `[a-z]' matches any lowercase letter, whereas `[0-9]' matches any digit. * `*' matches zero or more instances of the thing preceding it. For example, `x*' matches any number of `x' characters, `[0-9]*' matches any number of digits, and `.*' matches any number of anything. * The pattern matches if it occurs anywhere in the value being tested. (SQL patterns match only if they match the entire value.) * To anchor a pattern so that it must match the beginning or end of the value being tested, use `^' at the beginning or `$' at the end of the pattern. To demonstrate how extended regular expressions work, the `LIKE' queries shown previously are rewritten here to use `REGEXP'. 
To find names beginning with `b', use `^' to match the beginning of the name: mysql> SELECT * FROM pet WHERE name REGEXP "^b"; +--------+--------+---------+------+------------+------------+ | name | owner | species | sex | birth | death | +--------+--------+---------+------+------------+------------+ | Buffy | Harold | dog | f | 1989-05-13 | NULL | | Bowser | Diane | dog | m | 1989-08-31 | 1995-07-29 | +--------+--------+---------+------+------------+------------+ Prior to MySQL Version 3.23.4, `REGEXP' is case-sensitive, and the previous query will return no rows. To match either lowercase or uppercase `b', use this query instead: mysql> SELECT * FROM pet WHERE name REGEXP "^[bB]"; From MySQL 3.23.4 on, to force a `REGEXP' comparison to be case-sensitive, use the `BINARY' keyword to make one of the strings a binary string. This query will match only lowercase `b' at the beginning of a name: mysql> SELECT * FROM pet WHERE name REGEXP BINARY "^b"; To find names ending with `fy', use `$' to match the end of the name: mysql> SELECT * FROM pet WHERE name REGEXP "fy$"; +--------+--------+---------+------+------------+-------+ | name | owner | species | sex | birth | death | +--------+--------+---------+------+------------+-------+ | Fluffy | Harold | cat | f | 1993-02-04 | NULL | | Buffy | Harold | dog | f | 1989-05-13 | NULL | +--------+--------+---------+------+------------+-------+ To find names containing a lowercase or uppercase `w', use this query: mysql> SELECT * FROM pet WHERE name REGEXP "w"; +----------+-------+---------+------+------------+------------+ | name | owner | species | sex | birth | death | +----------+-------+---------+------+------------+------------+ | Claws | Gwen | cat | m | 1994-03-17 | NULL | | Bowser | Diane | dog | m | 1989-08-31 | 1995-07-29 | | Whistler | Gwen | bird | NULL | 1997-12-09 | NULL | +----------+-------+---------+------+------------+------------+ Because a regular expression pattern matches if it occurs anywhere in the value, it is not necessary in the previous query to put a wildcard on either side of the pattern to get it to match the entire value like it would be if you used a SQL pattern. To find names containing exactly five characters, use `^' and `$' to match the beginning and end of the name, and five instances of `.' in between: mysql> SELECT * FROM pet WHERE name REGEXP "^.....$"; +-------+--------+---------+------+------------+-------+ | name | owner | species | sex | birth | death | +-------+--------+---------+------+------------+-------+ | Claws | Gwen | cat | m | 1994-03-17 | NULL | | Buffy | Harold | dog | f | 1989-05-13 | NULL | +-------+--------+---------+------+------------+-------+ You could also write the previous query using the `{n}' "repeat-`n'-times" operator: mysql> SELECT * FROM pet WHERE name REGEXP "^.{5}$"; +-------+--------+---------+------+------------+-------+ | name | owner | species | sex | birth | death | +-------+--------+---------+------+------------+-------+ | Claws | Gwen | cat | m | 1994-03-17 | NULL | | Buffy | Harold | dog | f | 1989-05-13 | NULL | +-------+--------+---------+------+------------+-------+ Counting Rows ............. Databases are often used to answer the question, "How often does a certain type of data occur in a table?" For example, you might want to know how many pets you have, or how many pets each owner has, or you might want to perform various kinds of censuses on your animals. Counting the total number of animals you have is the same question as "How many rows are in the `pet' table?" 
because there is one record per pet. The `COUNT()' function counts the number of non-`NULL' results, so the query to count your animals looks like this: mysql> SELECT COUNT(*) FROM pet; +----------+ | COUNT(*) | +----------+ | 9 | +----------+ Earlier, you retrieved the names of the people who owned pets. You can use `COUNT()' if you want to find out how many pets each owner has: mysql> SELECT owner, COUNT(*) FROM pet GROUP BY owner; +--------+----------+ | owner | COUNT(*) | +--------+----------+ | Benny | 2 | | Diane | 2 | | Gwen | 3 | | Harold | 2 | +--------+----------+ Note the use of `GROUP BY' to group together all records for each `owner'. Without it, all you get is an error message: mysql> SELECT owner, COUNT(owner) FROM pet; ERROR 1140 at line 1: Mixing of GROUP columns (MIN(),MAX(),COUNT()...) with no GROUP columns is illegal if there is no GROUP BY clause `COUNT()' and `GROUP BY' are useful for characterising your data in various ways. The following examples show different ways to perform animal census operations. Number of animals per species: mysql> SELECT species, COUNT(*) FROM pet GROUP BY species; +---------+----------+ | species | COUNT(*) | +---------+----------+ | bird | 2 | | cat | 2 | | dog | 3 | | hamster | 1 | | snake | 1 | +---------+----------+ Number of animals per sex: mysql> SELECT sex, COUNT(*) FROM pet GROUP BY sex; +------+----------+ | sex | COUNT(*) | +------+----------+ | NULL | 1 | | f | 4 | | m | 4 | +------+----------+ (In this output, `NULL' indicates sex unknown.) Number of animals per combination of species and sex: mysql> SELECT species, sex, COUNT(*) FROM pet GROUP BY species, sex; +---------+------+----------+ | species | sex | COUNT(*) | +---------+------+----------+ | bird | NULL | 1 | | bird | f | 1 | | cat | f | 1 | | cat | m | 1 | | dog | f | 1 | | dog | m | 2 | | hamster | f | 1 | | snake | m | 1 | +---------+------+----------+ You need not retrieve an entire table when you use `COUNT()'. For example, the previous query, when performed just on dogs and cats, looks like this: mysql> SELECT species, sex, COUNT(*) FROM pet -> WHERE species = "dog" OR species = "cat" -> GROUP BY species, sex; +---------+------+----------+ | species | sex | COUNT(*) | +---------+------+----------+ | cat | f | 1 | | cat | m | 1 | | dog | f | 1 | | dog | m | 2 | +---------+------+----------+ Or, if you wanted the number of animals per sex only for known-sex animals: mysql> SELECT species, sex, COUNT(*) FROM pet -> WHERE sex IS NOT NULL -> GROUP BY species, sex; +---------+------+----------+ | species | sex | COUNT(*) | +---------+------+----------+ | bird | f | 1 | | cat | f | 1 | | cat | m | 1 | | dog | f | 1 | | dog | m | 2 | | hamster | f | 1 | | snake | m | 1 | +---------+------+----------+ Using More Than one Table ......................... The `pet' table keeps track of which pets you have. If you want to record other information about them, such as events in their lives like visits to the vet or when litters are born, you need another table. What should this table look like? It needs: * To contain the pet name so you know which animal each event pertains to. * A date so you know when the event occurred. * A field to describe the event. * An event type field, if you want to be able to categorise events. 
Given these considerations, the `CREATE TABLE' statement for the `event' table might look like this: mysql> CREATE TABLE event (name VARCHAR(20), date DATE, -> type VARCHAR(15), remark VARCHAR(255)); As with the `pet' table, it's easiest to load the initial records by creating a tab-delimited text file containing the information:
     *name*    *date*       *type*    *remark*
     Fluffy    1995-05-15   litter    4 kittens, 3 female, 1 male
     Buffy     1993-06-23   litter    5 puppies, 2 female, 3 male
     Buffy     1994-06-19   litter    3 puppies, 3 female
     Chirpy    1999-03-21   vet       needed beak straightened
     Slim      1997-08-03   vet       broken rib
     Bowser    1991-10-12   kennel
     Fang      1991-10-12   kennel
     Fang      1998-08-28   birthday  Gave him a new chew toy
     Claws     1998-03-17   birthday  Gave him a new flea collar
     Whistler  1998-12-09   birthday  First birthday
Load the records like this: mysql> LOAD DATA LOCAL INFILE "event.txt" INTO TABLE event; Based on what you've learned from the queries you've run on the `pet' table, you should be able to perform retrievals on the records in the `event' table; the principles are the same. But when is the `event' table by itself insufficient to answer questions you might ask? Suppose you want to find out the ages of each pet when they had their litters. The `event' table indicates when this occurred, but to calculate the age of the mother, you need her birth date. Because that is stored in the `pet' table, you need both tables for the query: mysql> SELECT pet.name, -> (TO_DAYS(date) - TO_DAYS(birth))/365 AS age, -> remark -> FROM pet, event -> WHERE pet.name = event.name AND type = "litter"; +--------+------+-----------------------------+ | name | age | remark | +--------+------+-----------------------------+ | Fluffy | 2.27 | 4 kittens, 3 female, 1 male | | Buffy | 4.12 | 5 puppies, 2 female, 3 male | | Buffy | 5.10 | 3 puppies, 3 female | +--------+------+-----------------------------+ There are several things to note about this query: * The `FROM' clause lists two tables because the query needs to pull information from both of them. * When combining (joining) information from multiple tables, you need to specify how records in one table can be matched to records in the other. This is easy because they both have a `name' column. The query uses a `WHERE' clause to match up records in the two tables based on the `name' values. * Because the `name' column occurs in both tables, you must be specific about which table you mean when referring to the column. This is done by prepending the table name to the column name. You need not have two different tables to perform a join. Sometimes it is useful to join a table to itself, if you want to compare records in a table to other records in that same table. For example, to find breeding pairs among your pets, you can join the `pet' table with itself to pair up males and females of like species: mysql> SELECT p1.name, p1.sex, p2.name, p2.sex, p1.species -> FROM pet AS p1, pet AS p2 -> WHERE p1.species = p2.species AND p1.sex = "f" AND p2.sex = "m"; +--------+------+--------+------+---------+ | name | sex | name | sex | species | +--------+------+--------+------+---------+ | Fluffy | f | Claws | m | cat | | Buffy | f | Fang | m | dog | | Buffy | f | Bowser | m | dog | +--------+------+--------+------+---------+ In this query, we specify aliases for the table name in order to refer to the columns and keep straight which instance of the table each column reference is associated with.
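The joins above list the tables separated by commas in the `FROM' clause and do the matching in the `WHERE' clause. If you prefer, the litter query can also be written with an explicit join clause; this sketch is only an alternative spelling of the query already shown and should produce the same result:

     mysql> SELECT pet.name,
         ->        (TO_DAYS(date) - TO_DAYS(birth))/365 AS age,
         ->        remark
         ->        FROM pet INNER JOIN event ON pet.name = event.name
         ->        WHERE type = "litter";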
Getting Information About Databases and Tables ============================================== What if you forget the name of a database or table, or what the structure of a given table is (for example, what its columns are called)? MySQL addresses this problem through several statements that provide information about the databases and tables it supports. You have already seen `SHOW DATABASES', which lists the databases managed by the server. To find out which database is currently selected, use the `DATABASE()' function: mysql> SELECT DATABASE(); +------------+ | DATABASE() | +------------+ | menagerie | +------------+ If you haven't selected any database yet, the result is blank. To find out what tables the current database contains (for example, when you're not sure about the name of a table), use this command: mysql> SHOW TABLES; +---------------------+ | Tables in menagerie | +---------------------+ | event | | pet | +---------------------+ If you want to find out about the structure of a table, the `DESCRIBE' command is useful; it displays information about each of a table's columns: mysql> DESCRIBE pet; +---------+-------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +---------+-------------+------+-----+---------+-------+ | name | varchar(20) | YES | | NULL | | | owner | varchar(20) | YES | | NULL | | | species | varchar(20) | YES | | NULL | | | sex | char(1) | YES | | NULL | | | birth | date | YES | | NULL | | | death | date | YES | | NULL | | +---------+-------------+------+-----+---------+-------+ `Field' indicates the column name, `Type' is the data type for the column, `NULL' indicates whether the column can contain `NULL' values, `Key' indicates whether the column is indexed, and `Default' specifies the column's default value. If you have indexes on a table, `SHOW INDEX FROM tbl_name' produces information about them. Examples of Common Queries ========================== Here are examples of how to solve some common problems with MySQL. Some of the examples use the table `shop' to hold the price of each article (item number) for certain traders (dealers). Supposing that each trader has a single fixed price per article, then (`article', `dealer') is a primary key for the records. Start the command-line tool `mysql' and select a database: mysql your-database-name (In most MySQL installations, you can use the database-name 'test'). You can create the example table as: CREATE TABLE shop ( article INT(4) UNSIGNED ZEROFILL DEFAULT '0000' NOT NULL, dealer CHAR(20) DEFAULT '' NOT NULL, price DOUBLE(16,2) DEFAULT '0.00' NOT NULL, PRIMARY KEY(article, dealer)); INSERT INTO shop VALUES (1,'A',3.45),(1,'B',3.99),(2,'A',10.99),(3,'B',1.45),(3,'C',1.69), (3,'D',1.25),(4,'D',19.95); Okay, so the example data is: mysql> SELECT * FROM shop; +---------+--------+-------+ | article | dealer | price | +---------+--------+-------+ | 0001 | A | 3.45 | | 0001 | B | 3.99 | | 0002 | A | 10.99 | | 0003 | B | 1.45 | | 0003 | C | 1.69 | | 0003 | D | 1.25 | | 0004 | D | 19.95 | +---------+--------+-------+ The Maximum Value for a Column ------------------------------ "What's the highest item number?" SELECT MAX(article) AS article FROM shop +---------+ | article | +---------+ | 4 | +---------+ The Row Holding the Maximum of a Certain Column ----------------------------------------------- "Find number, dealer, and price of the most expensive article." 
In ANSI SQL (and MySQL Version 4.1) this is easily done with a subquery: SELECT article, dealer, price FROM shop WHERE price=(SELECT MAX(price) FROM shop) In MySQL versions prior to 4.1, just do it in two steps: 1. Get the maximum price value from the table with a `SELECT' statement. 2. Using this value compile the actual query: SELECT article, dealer, price FROM shop WHERE price=19.95 Another solution is to sort all rows descending by price and only get the first row using the MySQL-specific `LIMIT' clause: SELECT article, dealer, price FROM shop ORDER BY price DESC LIMIT 1 *NOTE*: If there are several most expensive articles (for example, each 19.95) the `LIMIT' solution shows only one of them! Maximum of Column per Group --------------------------- "What's the highest price per article?" SELECT article, MAX(price) AS price FROM shop GROUP BY article +---------+-------+ | article | price | +---------+-------+ | 0001 | 3.99 | | 0002 | 10.99 | | 0003 | 1.69 | | 0004 | 19.95 | +---------+-------+ The Rows Holding the Group-wise Maximum of a Certain Field ---------------------------------------------------------- "For each article, find the dealer(s) with the most expensive price." In ANSI SQL (and MySQL Version 4.1 or greater), I'd do it with a subquery like this: SELECT article, dealer, price FROM shop s1 WHERE price=(SELECT MAX(s2.price) FROM shop s2 WHERE s1.article = s2.article); In MySQL versions prior to 4.1 it's best to do it in several steps: 1. Get the list of (article,maxprice). 2. For each article get the corresponding rows that have the stored maximum price. This can easily be done with a temporary table: CREATE TEMPORARY TABLE tmp ( article INT(4) UNSIGNED ZEROFILL DEFAULT '0000' NOT NULL, price DOUBLE(16,2) DEFAULT '0.00' NOT NULL); LOCK TABLES shop read; INSERT INTO tmp SELECT article, MAX(price) FROM shop GROUP BY article; SELECT shop.article, dealer, shop.price FROM shop, tmp WHERE shop.article=tmp.article AND shop.price=tmp.price; UNLOCK TABLES; DROP TABLE tmp; If you don't use a `TEMPORARY' table, you must also lock the 'tmp' table. "Can it be done with a single query?" Yes, but only by using a quite inefficient trick that I call the "MAX-CONCAT trick": SELECT article, SUBSTRING( MAX( CONCAT(LPAD(price,6,'0'),dealer) ), 7) AS dealer, 0.00+LEFT( MAX( CONCAT(LPAD(price,6,'0'),dealer) ), 6) AS price FROM shop GROUP BY article; +---------+--------+-------+ | article | dealer | price | +---------+--------+-------+ | 0001 | B | 3.99 | | 0002 | A | 10.99 | | 0003 | C | 1.69 | | 0004 | D | 19.95 | +---------+--------+-------+ The last example can, of course, be made a bit more efficient by doing the splitting of the concatenated column in the client. Using user variables -------------------- You can use MySQL user variables to remember results without having to store them in temporary variables in the client. *Note Variables::. For example, to find the articles with the highest and lowest price you can do: mysql> SELECT @min_price:=MIN(price),@max_price:=MAX(price) FROM shop; mysql> SELECT * FROM shop WHERE price=@min_price OR price=@max_price; +---------+--------+-------+ | article | dealer | price | +---------+--------+-------+ | 0003 | D | 1.25 | | 0004 | D | 19.95 | +---------+--------+-------+ Using Foreign Keys ------------------ In MySQL 3.23.44 and up, `InnoDB' tables support checking of foreign key constraints. *Note InnoDB::. See also *Note ANSI diff Foreign Keys::. You don't actually need foreign keys to join 2 tables.
The only thing MySQL currently doesn't do (in table types other than `InnoDB'), is `CHECK' to make sure that the keys you use really exist in the table(s) you're referencing and it doesn't automatically delete rows from a table with a foreign key definition. If you use your keys like normal, it'll work just fine: CREATE TABLE person ( id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT, name CHAR(60) NOT NULL, PRIMARY KEY (id) ); CREATE TABLE shirt ( id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT, style ENUM('t-shirt', 'polo', 'dress') NOT NULL, color ENUM('red', 'blue', 'orange', 'white', 'black') NOT NULL, owner SMALLINT UNSIGNED NOT NULL REFERENCES person(id), PRIMARY KEY (id) ); INSERT INTO person VALUES (NULL, 'Antonio Paz'); INSERT INTO shirt VALUES (NULL, 'polo', 'blue', LAST_INSERT_ID()), (NULL, 'dress', 'white', LAST_INSERT_ID()), (NULL, 't-shirt', 'blue', LAST_INSERT_ID()); INSERT INTO person VALUES (NULL, 'Lilliana Angelovska'); INSERT INTO shirt VALUES (NULL, 'dress', 'orange', LAST_INSERT_ID()), (NULL, 'polo', 'red', LAST_INSERT_ID()), (NULL, 'dress', 'blue', LAST_INSERT_ID()), (NULL, 't-shirt', 'white', LAST_INSERT_ID()); SELECT * FROM person; +----+---------------------+ | id | name | +----+---------------------+ | 1 | Antonio Paz | | 2 | Lilliana Angelovska | +----+---------------------+ SELECT * FROM shirt; +----+---------+--------+-------+ | id | style | color | owner | +----+---------+--------+-------+ | 1 | polo | blue | 1 | | 2 | dress | white | 1 | | 3 | t-shirt | blue | 1 | | 4 | dress | orange | 2 | | 5 | polo | red | 2 | | 6 | dress | blue | 2 | | 7 | t-shirt | white | 2 | +----+---------+--------+-------+ SELECT s.* FROM person p, shirt s WHERE p.name LIKE 'Lilliana%' AND s.owner = p.id AND s.color <> 'white'; +----+-------+--------+-------+ | id | style | color | owner | +----+-------+--------+-------+ | 4 | dress | orange | 2 | | 5 | polo | red | 2 | | 6 | dress | blue | 2 | +----+-------+--------+-------+ Searching on Two Keys --------------------- MySQL doesn't yet optimise when you search on two different keys combined with `OR' (searching on one key with different `OR' parts is optimised quite well): SELECT field1_index, field2_index FROM test_table WHERE field1_index = '1' OR field2_index = '1' The reason is that we haven't yet had time to come up with an efficient way to handle this in the general case. (The `AND' handling is, in comparison, now completely general and works very well.) For the moment you can solve this very efficiently by using a `TEMPORARY' table. This type of optimisation is also very good if you are using very complicated queries where the SQL server does the optimisations in the wrong order. CREATE TEMPORARY TABLE tmp SELECT field1_index, field2_index FROM test_table WHERE field1_index = '1'; INSERT INTO tmp SELECT field1_index, field2_index FROM test_table WHERE field2_index = '1'; SELECT * from tmp; DROP TABLE tmp; The above way to solve this query is in effect a `UNION' of two queries. *Note UNION::. Calculating Visits Per Day -------------------------- The following shows an idea of how you can use the bit group functions to calculate the number of days per month a user has visited a web page. 
CREATE TABLE t1 (year YEAR(4), month INT(2) UNSIGNED ZEROFILL, day INT(2) UNSIGNED ZEROFILL); INSERT INTO t1 VALUES(2000,1,1),(2000,1,20),(2000,1,30),(2000,2,2), (2000,2,23),(2000,2,23); SELECT year,month,BIT_COUNT(BIT_OR(1<<day)) AS days FROM t1 GROUP BY year,month; Which returns: +------+-------+------+ | year | month | days | +------+-------+------+ | 2000 | 01 | 3 | | 2000 | 02 | 2 | +------+-------+------+ The query calculates how many different days appear in the table for each year/month combination, with automatic removal of duplicate entries. Using `mysql' in Batch Mode =========================== In the previous sections, you used `mysql' interactively to enter queries and view the results. You can also run `mysql' in batch mode. To do this, put the commands you want to run in a file, then tell `mysql' to read its input from the file: shell> mysql < batch-file If you are running `mysql' under Windows and have some special characters in the file that cause problems, you can do: dos> mysql -e "source batch-file" If you need to specify connection parameters on the command-line, the command might look like this: shell> mysql -h host -u user -p < batch-file Enter password: ******** When you use `mysql' this way, you are creating a script file, then executing the script. If you want the script to continue even if you have errors, you should use the `--force' command-line option. Why use a script? Here are a few reasons: * If you run a query repeatedly (say, every day or every week), making it a script allows you to avoid retyping it each time you execute it. * You can generate new queries from existing ones that are similar by copying and editing script files. * Batch mode can also be useful while you're developing a query, particularly for multiple-line commands or multiple-statement sequences of commands. If you make a mistake, you don't have to retype everything. Just edit your script to correct the error, then tell `mysql' to execute it again. * If you have a query that produces a lot of output, you can run the output through a pager rather than watching it scroll off the top of your screen: shell> mysql < batch-file | more * You can catch the output in a file for further processing: shell> mysql < batch-file > mysql.out * You can distribute your script to other people so they can run the commands, too. * Some situations do not allow for interactive use, for example, when you run a query from a `cron' job. In this case, you must use batch mode. The default output format is different (more concise) when you run `mysql' in batch mode than when you use it interactively. For example, the output of `SELECT DISTINCT species FROM pet' looks like this when run interactively: +---------+ | species | +---------+ | bird | | cat | | dog | | hamster | | snake | +---------+ But like this when run in batch mode: species bird cat dog hamster snake If you want to get the interactive output format in batch mode, use `mysql -t'. To echo the commands that are executed to the output, use `mysql -vvv'. You can also use scripts from the `mysql' command-line prompt by using the `source' command: mysql> source filename; Queries from Twin Project ========================= At Analytikerna and Lentus, we have been doing the systems and field work for a big research project. This project is a collaboration between the Institute of Environmental Medicine at Karolinska Institutet Stockholm and the Section on Clinical Research in Aging and Psychology at the University of Southern California. The project involves a screening part where all twins in Sweden older than 65 years are interviewed by telephone. Twins who meet certain criteria are passed on to the next stage. In this latter stage, twins who want to participate are visited by a doctor/nurse team. Some of the examinations include physical and neuropsychological examination, laboratory testing, neuroimaging, psychological status assessment, and family history collection. In addition, data are collected on medical and environmental risk factors.
More information about Twin studies can be found at: `http://www.imm.ki.se/TWIN/TWINUKW.HTM' The latter part of the project is administered with a web interface written using Perl and MySQL. Each night all data from the interviews are moved into a MySQL database. Find all Non-distributed Twins ------------------------------ The following query is used to determine who goes into the second part of the project: SELECT CONCAT(p1.id, p1.tvab) + 0 AS tvid, CONCAT(p1.christian_name, " ", p1.surname) AS Name, p1.postal_code AS Code, p1.city AS City, pg.abrev AS Area, IF(td.participation = "Aborted", "A", " ") AS A, p1.dead AS dead1, l.event AS event1, td.suspect AS tsuspect1, id.suspect AS isuspect1, td.severe AS tsevere1, id.severe AS isevere1, p2.dead AS dead2, l2.event AS event2, h2.nurse AS nurse2, h2.doctor AS doctor2, td2.suspect AS tsuspect2, id2.suspect AS isuspect2, td2.severe AS tsevere2, id2.severe AS isevere2, l.finish_date FROM twin_project AS tp /* For Twin 1 */ LEFT JOIN twin_data AS td ON tp.id = td.id AND tp.tvab = td.tvab LEFT JOIN informant_data AS id ON tp.id = id.id AND tp.tvab = id.tvab LEFT JOIN harmony AS h ON tp.id = h.id AND tp.tvab = h.tvab LEFT JOIN lentus AS l ON tp.id = l.id AND tp.tvab = l.tvab /* For Twin 2 */ LEFT JOIN twin_data AS td2 ON p2.id = td2.id AND p2.tvab = td2.tvab LEFT JOIN informant_data AS id2 ON p2.id = id2.id AND p2.tvab = id2.tvab LEFT JOIN harmony AS h2 ON p2.id = h2.id AND p2.tvab = h2.tvab LEFT JOIN lentus AS l2 ON p2.id = l2.id AND p2.tvab = l2.tvab, person_data AS p1, person_data AS p2, postal_groups AS pg WHERE /* p1 gets main twin and p2 gets his/her twin. */ /* ptvab is a field inverted from tvab */ p1.id = tp.id AND p1.tvab = tp.tvab AND p2.id = p1.id AND p2.ptvab = p1.tvab AND /* Just the sceening survey */ tp.survey_no = 5 AND /* Skip if partner died before 65 but allow emigration (dead=9) */ (p2.dead = 0 OR p2.dead = 9 OR (p2.dead = 1 AND (p2.death_date = 0 OR (((TO_DAYS(p2.death_date) - TO_DAYS(p2.birthday)) / 365) >= 65)))) AND ( /* Twin is suspect */ (td.future_contact = 'Yes' AND td.suspect = 2) OR /* Twin is suspect - Informant is Blessed */ (td.future_contact = 'Yes' AND td.suspect = 1 AND id.suspect = 1) OR /* No twin - Informant is Blessed */ (ISNULL(td.suspect) AND id.suspect = 1 AND id.future_contact = 'Yes') OR /* Twin broken off - Informant is Blessed */ (td.participation = 'Aborted' AND id.suspect = 1 AND id.future_contact = 'Yes') OR /* Twin broken off - No inform - Have partner */ (td.participation = 'Aborted' AND ISNULL(id.suspect) AND p2.dead = 0)) AND l.event = 'Finished' /* Get at area code */ AND SUBSTRING(p1.postal_code, 1, 2) = pg.code /* Not already distributed */ AND (h.nurse IS NULL OR h.nurse=00 OR h.doctor=00) /* Has not refused or been aborted */ AND NOT (h.status = 'Refused' OR h.status = 'Aborted' OR h.status = 'Died' OR h.status = 'Other') ORDER BY tvid; Some explanations: `CONCAT(p1.id, p1.tvab) + 0 AS tvid' We want to sort on the concatenated `id' and `tvab' in numerical order. Adding `0' to the result causes MySQL to treat the result as a number. column `id' This identifies a pair of twins. It is a key in all tables. column `tvab' This identifies a twin in a pair. It has a value of `1' or `2'. column `ptvab' This is an inverse of `tvab'. When `tvab' is `1' this is `2', and vice versa. It exists to save typing and to make it easier for MySQL to optimise the query. This query demonstrates, among other things, how to do lookups on a table from the same table with a join (`p1' and `p2'). 
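For readers who want to experiment with the self-join technique in isolation, here is a minimal sketch that uses only the `person_data' table and the columns referenced in the query above (`id', `tvab', `ptvab', `surname'): SELECT p1.id, p1.surname AS twin, p2.surname AS partner FROM person_data AS p1, person_data AS p2 WHERE p2.id = p1.id AND p2.ptvab = p1.tvab; Each result row pairs one twin (`p1') with his or her partner (`p2') by matching the pair `id' and the inverted `tvab' value.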
In the Twin query above, this self-join is used to check whether a twin's partner died before the age of 65. If so, the row is not returned. All of the above exist in all tables with twin-related information. We have a key on both `id,tvab' (all tables), and `id,ptvab' (`person_data') to make queries faster. On our production machine (a 200MHz UltraSPARC), this query returns about 150-200 rows and takes less than one second. The current number of records in the tables used above: *Table* *Rows* `person_data' 71074 `lentus' 5291 `twin_project' 5286 `twin_data' 2012 `informant_data' 663 `harmony' 381 `postal_groups' 100 Show a Table on Twin Pair Status -------------------------------- Each interview ends with a status code called `event'. The query shown here is used to display a table of all twin pairs, combined by event. This indicates in how many pairs both twins are finished, in how many pairs one twin is finished and the other refused, and so on. SELECT t1.event, t2.event, COUNT(*) FROM lentus AS t1, lentus AS t2, twin_project AS tp WHERE /* We are looking at one pair at a time */ t1.id = tp.id AND t1.tvab=tp.tvab AND t1.id = t2.id /* Just the screening survey */ AND tp.survey_no = 5 /* This makes each pair only appear once */ AND t1.tvab='1' AND t2.tvab='2' GROUP BY t1.event, t2.event; Using MySQL with Apache ======================= There are programs that let you authenticate your users from a MySQL database and also let you write your log files into a MySQL table. You can change the Apache logging format to be easily readable by MySQL by putting the following into the Apache configuration file: LogFormat \ "\"%h\",%{%Y%m%d%H%M%S}t,%>s,\"%b\",\"%{Content-Type}o\", \ \"%U\",\"%{Referer}i\",\"%{User-Agent}i\"" In MySQL you can do something like this: LOAD DATA INFILE '/local/access_log' INTO TABLE table_name FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '\\' Database Administration *********************** Configuring MySQL ================= `mysqld' Command-line Options ----------------------------- In most cases you should manage mysqld options through option files. *Note Option files::. `mysqld' and `mysql.server' read options from the `mysqld' and `server' groups. `mysqld_safe' reads options from the `mysqld', `server', `mysqld_safe', and `safe_mysqld' groups. An embedded MySQL server usually reads options from the `server', `embedded', and `xxxxx_SERVER' groups, where `xxxxx' is the name of the application. `mysqld' accepts the following command-line options. For a full list, execute `mysqld --help'. `--ansi' Use ANSI SQL syntax instead of MySQL syntax. *Note ANSI mode::. `-b, --basedir=path' Path to installation directory. All paths are usually resolved relative to this. `--big-tables' Allow big result sets by saving all temporary sets on file. It solves most 'table full' errors, but also slows down queries for which in-memory tables would suffice. Since Version 3.23.2, MySQL is able to handle this automatically by using memory for small temporary tables and switching to disk tables where necessary. `--bind-address=IP' IP address to bind to. `--console' Write the error log messages to stderr/stdout even if `--log-error' is specified. On Windows, mysqld will not close the console screen if this option is used. `--character-sets-dir=path' Directory where character sets are. *Note Character sets::. `--chroot=path' Put the `mysqld' daemon in a chroot environment at startup. This is a recommended security measure as of MySQL 4.0 (MySQL 3.23 is not able to provide a 100% closed chroot jail).
It somewhat limits `LOAD DATA INFILE' and `SELECT ... INTO OUTFILE' though. `--core-file' Write a core file if `mysqld' dies. For some systems you must also specify `--core-file-size' to `safe_mysqld'. *Note `safe_mysqld': safe_mysqld. Note that on some systems, like Solaris, you will not get a core file if you are also using the `--user' option. `-h, --datadir=path' Path to the database root. `--debug[...]=' If MySQL is configured with `--with-debug', you can use this option to get a trace file of what `mysqld' is doing. *Note Making trace files::. `--default-character-set=charset' Set the default character set. *Note Character sets::. `--default-table-type=type' Set the default table type for tables. *Note Table types::. `--delay-key-write[= OFF | ON | ALL]' How MyISAM `DELAYED KEYS' should be used. *Note Server parameters::. `--delay-key-write-for-all-tables; In MySQL 4.0.3 you should use --delay-key-write=ALL instead.' Don't flush key buffers between writes for any `MyISAM' table. *Note Server parameters::. `--des-key-file=filename' Read the default keys used by `DES_ENCRYPT()' and `DES_DECRYPT()' from this file. `--enable-external-locking (was --enable-locking)' Enable system locking. Note that if you use this option on a system on which `lockd' does not fully work (as on Linux), you will easily get mysqld to deadlock. `--enable-named-pipe' Enable support for named pipes (only on NT/Win2000/XP). `-T, --exit-info' This is a bit mask of different flags one can use for debugging the mysqld server; one should not use this option if one doesn't know exactly what it does! `--flush' Flush all changes to disk after each SQL command. Normally MySQL only does a write of all changes to disk after each SQL command and lets the operating system handle the syncing to disk. *Note Crashing::. `-?, --help' Display short help and exit. `--init-file=file' Read SQL commands from this file at startup. `-L, --language=...' Client error messages in given language. May be given as a full path. *Note Languages::. `-l, --log[=file]' Log connections and queries to file. *Note Query log::. `--log-bin=[file]' Log all queries that changes data to the file. Used for backup and replication. *Note Binary log::. `--log-bin-index[=file]' Index file for binary log file names. *Note Binary log::. `--log-error[=file]' Log errors and startup messages to this file. *Note Error log::. `--log-isam[=file]' Log all ISAM/MyISAM changes to file (only used when debugging ISAM/MyISAM). `--log-slow-queries[=file]' Log all queries that have taken more than `long_query_time' seconds to execute to file. *Note Slow query log::. `--log-update[=file]' Log updates to `file.#' where `#' is a unique number if not given. *Note Update log::. `--log-long-format' Log some extra information to update log. If you are using `--log-slow-queries' then queries that are not using indexes are logged to the slow query log. `--low-priority-updates' Table-modifying operations (`INSERT'/`DELETE'/`UPDATE') will have lower priority than selects. It can also be done via `{INSERT | REPLACE | UPDATE | DELETE} LOW_PRIORITY ...' to lower the priority of only one query, or by `SET LOW_PRIORITY_UPDATES=1' to change the priority in one thread. *Note Table locking::. `--memlock' Lock the `mysqld' process in memory. This works only if your system supports the `mlockall()' system call (like Solaris). This may help if you have a problem where the operating system is causing `mysqld' to swap on disk. 
`--myisam-recover [=option[,option...]]]' Option is any combination of `DEFAULT', `BACKUP', `FORCE' or `QUICK'. You can also set this explicitly to `""' if you want to disable this option. If this option is used, `mysqld' will on open check if the table is marked as crashed or if the table wasn't closed properly. (The last option only works if you are running with `--skip-external-locking'.) If this is the case `mysqld' will run check on the table. If the table was corrupted, `mysqld' will attempt to repair it. The following options affects how the repair works. *Option* *Description* DEFAULT The same as not giving any option to `--myisam-recover'. BACKUP If the data table was changed during recover, save a backup of the `table_name.MYD' datafile as `table_name-datetime.BAK'. FORCE Run recover even if we will lose more than one row from the .MYD file. QUICK Don't check the rows in the table if there aren't any delete blocks. Before a table is automatically repaired, MySQL will add a note about this in the error log. If you want to be able to recover from most things without user intervention, you should use the options `BACKUP,FORCE'. This will force a repair of a table even if some rows would be deleted, but it will keep the old datafile as a backup so that you can later examine what happened. `--pid-file=path' Path to pid file used by `safe_mysqld'. `-P, --port=...' Port number to listen for TCP/IP connections. `-o, --old-protocol' Use the 3.20 protocol for compatibility with some very old clients. *Note Upgrading-from-3.20::. `--one-thread' Only use one thread (for debugging under Linux). *Note Debugging server::. `-O, --set-variable var=option' Give a variable a value. `--help' lists variables. You can find a full description for all variables in the `SHOW VARIABLES' section in this manual. *Note SHOW VARIABLES::. The tuning server parameters section includes information of how to optimise these. Please note that `--set-variable' is deprecated since MySQL 4.0, just use `--var=option' on its own. *Note Server parameters::. In MySQL 4.0.2 one can set a variable directly with `--variable-name=option' and `set-variable' is not anymore needed in option files. If you want to restrict the maximum value a startup option can be set to with `SET', you can define this by using the `--maximum-variable-name' command line option. *Note SET OPTION::. Note that when setting a variable to a value, MySQL may automatically correct it to stay within a given range and also adjusts the value a little to fix for the used algorithm. `--safe-mode' Skip some optimise stages. `--safe-show-database' With this option, the `SHOW DATABASES' command returns only those databases for which the user has some kind of privilege. From version 4.0.2 this option is deprecated and doesn't do anything (the option is enabled by default) as we now have the `SHOW DATABASES' privilege. *Note GRANT::. `--safe-user-create' If this is enabled, a user can't create new users with the GRANT command, if the user doesn't have `INSERT' privilege to the `mysql.user' table or any column in this table. `--skip-bdb' Disable usage of BDB tables. This will save memory and may speed up some things. `--skip-concurrent-insert' Turn off the ability to select and insert at the same time on `MyISAM' tables. (This is only to be used if you think you have found a bug in this feature.) `--skip-delay-key-write; In MySQL 4.0.3 you should use --delay-key-write=OFF instead.' Ignore the `DELAY_KEY_WRITE' option for all tables. *Note Server parameters::. 
`--skip-grant-tables' This option causes the server not to use the privilege system at all. This gives everyone *full access* to all databases! (You can tell a running server to start using the grant tables again by executing `mysqladmin flush-privileges' or `mysqladmin reload'.) `--skip-host-cache' Never use host name cache for faster name-ip resolution, but query DNS server on every connect instead. *Note DNS::. `--skip-innodb' Disable usage of Innodb tables. This will save memory and disk space and speed up some things. `--skip-external-locking (was --skip-locking)' Don't use system locking. To use `isamchk' or `myisamchk' you must shut down the server. *Note Stability::. Note that in MySQL Version 3.23 you can use `REPAIR' and `CHECK' to repair/check `MyISAM' tables. `--skip-name-resolve' Hostnames are not resolved. All `Host' column values in the grant tables must be IP numbers or `localhost'. *Note DNS::. `--skip-networking' Don't listen for TCP/IP connections at all. All interaction with `mysqld' must be made via Unix sockets. This option is highly recommended for systems where only local requests are allowed. *Note DNS::. `--skip-new' Don't use new, possible wrong routines. `--skip-symlink' Don't delete or rename files that a symlinked file in the data directory points to. `--skip-safemalloc' If MySQL is configured with `--with-debug=full', all programs will check the memory for overruns for every memory allocation and memory freeing. As this checking is very slow, you can avoid this, when you don't need memory checking, by using this option. `--skip-show-database' Don't allow `SHOW DATABASES' command, unless the user has the `SHOW DATABASES' privilege. From version 4.0.2 you should no longer need this option, since access can now be granted specifically with the `SHOW DATABASES' privilege. `--skip-stack-trace' Don't write stack traces. This option is useful when you are running `mysqld' under a debugger. On some systems you also have to use this option to get a core file. *Note Debugging server::. `--skip-thread-priority' Disable using thread priorities for faster response time. `--socket=path' Socket file to use for local connections instead of default `/tmp/mysql.sock'. `--sql-mode=option[,option[,option...]]' Option can be any combination of: `REAL_AS_FLOAT', `PIPES_AS_CONCAT', `ANSI_QUOTES', `IGNORE_SPACE', `SERIALIZE', `ONLY_FULL_GROUP_BY'. It can also be empty (`""') if you want to reset this. By specifying all of the above options is same as using -ansi. With this option one can turn on only needed SQL modes. *Note ANSI mode::. `--temp-pool' Using this option will cause most temporary files created to use a small set of names, rather than a unique name for each new file. This is to work around a problem in the Linux kernel dealing with creating a bunch of new files with different names. With the old behaviour, Linux seems to 'leak' memory, as it's being allocated to the directory entry cache instead of the disk cache. `--transaction-isolation= { READ-UNCOMMITTED | READ-COMMITTED | REPEATABLE-READ | SERIALIZABLE }' Sets the default transaction isolation level. *Note SET TRANSACTION::. `-t, --tmpdir=path' Path for temporary files. It may be useful if your default `/tmp' directory resides on a partition too small to hold temporary tables. Starting from MySQL 4.1, this option accepts several paths separated by colon `:' (semicolon `;' on Windows). They will be used in round-robin fashion. 
`-u, --user= [user_name | userid]' Run `mysqld' daemon as user `user_name' or `userid' (numeric). This option is *mandatory* when starting `mysqld' as root. Starting from MySQL 3.23.56 and 4.0.12: To avoid a possible security hole where a user adds an `--user=root' option to some `my.cnf' file, `mysqld' will only use the first `--user' option specified and give a warning if there are multiple options. Note that `/etc/my.cnf' and `datadir/my.cnf' may override a command line option - therefore it is recommended to put this option in `/etc/my.cnf'. `-V, --version' Output version information and exit. `-W, --log-warnings (Was --warnings)' Print out warnings like `Aborted connection...' to the `.err' file. *Note Communication errors::. One can change most values for a running server with the `SET' command. *Note SET OPTION::. `my.cnf' Option Files --------------------- MySQL can, since Version 3.22, read default startup options for the server and for clients from option files. MySQL reads default options from the following files on Unix: *Filename* *Purpose* `/etc/my.cnf' Global options `DATADIR/my.cnf' Server-specific options `defaults-extra-file' The file specified with -defaults-extra-file=# `~/.my.cnf' User-specific options `DATADIR' is the MySQL data directory (typically `/usr/local/mysql/data' for a binary installation or `/usr/local/var' for a source installation). Note that this is the directory that was specified at configuration time, not the one specified with `--datadir' when `mysqld' starts up! (`--datadir' has no effect on where the server looks for option files, because it looks for them before it processes any command-line arguments.) MySQL reads default options from the following files on Windows: *Filename* *Purpose* `windows-system-directory\my.ini'Global options `C:\my.cnf' Global options Note that on Windows, you should specify all paths with `/' instead of `\'. If you use `\', you need to specify this twice, as `\' is the escape character in MySQL. MySQL tries to read option files in the order listed above. If multiple option files exist, an option specified in a file read later takes precedence over the same option specified in a file read earlier. Options specified on the command-line take precedence over options specified in any option file. Some options can be specified using environment variables. Options specified on the command-line or in option files take precedence over environment variable values. *Note Environment variables::. The following programs support option files: `mysql', `mysqladmin', `mysqld', `mysqld_safe', `mysql.server', `mysqldump', `mysqlimport', `mysqlshow', `mysqlcheck', `myisamchk', and `myisampack'. Any long option that may be given on the command-line when running a MySQL program can be given in an option file as well (without the leading double dash). Run the program with `--help' to get a list of available options. An option file can contain lines of the following forms: `#comment' Comment lines start with `#' or `;'. Empty lines are ignored. `[group]' `group' is the name of the program or group for which you want to set options. After a group line, any `option' or `set-variable' lines apply to the named group until the end of the option file or another group line is given. `option' This is equivalent to `--option' on the command-line. `option=value' This is equivalent to `--option=value' on the command-line. `set-variable = variable=value' This is equivalent to `--set-variable variable=value' on the command-line. 
This syntax must be used to set a `mysqld' variable. Please note that `--set-variable' is deprecated since MySQL 4.0; just use `--variable=value' on its own. The `client' group allows you to specify options that apply to all MySQL clients (not `mysqld'). This is the perfect group to use to specify the password you use to connect to the server. (But make sure the option file is readable and writable only by yourself.) Note that for options and values, all leading and trailing blanks are automatically deleted. You may use the escape sequences `\b', `\t', `\n', `\r', `\\', and `\s' in your value string (`\s' == blank). Here is a typical global option file: [client] port=3306 socket=/tmp/mysql.sock [mysqld] port=3306 socket=/tmp/mysql.sock set-variable = key_buffer_size=16M set-variable = max_allowed_packet=1M [mysqldump] quick Here is a typical user option file: [client] # The following password will be sent to all standard MySQL clients password=my_password [mysql] no-auto-rehash set-variable = connect_timeout=2 [mysqlhotcopy] interactive-timeout If you have a source distribution, you will find sample configuration files named `my-xxxx.cnf' in the `support-files' directory. If you have a binary distribution, look in the `DIR/support-files' directory, where `DIR' is the pathname to the MySQL installation directory (typically `/usr/local/mysql'). Currently there are sample configuration files for small, medium, large, and very large systems. You can copy `my-xxxx.cnf' to your home directory (rename the copy to `.my.cnf') to experiment with this. All MySQL clients that support option files support the following options: *Option* *Description* --no-defaults Don't read any option files. --print-defaults Print the program name and all options that it will get. --defaults-file=full-path-to-default-file Only use the given configuration file. --defaults-extra-file=full-path-to-default-file Read this configuration file after the global configuration file but before the user configuration file. Note that the above options must come first on the command-line to work! `--print-defaults' may however be used directly after the `--defaults-xxx-file' commands. Note for developers: Option file handling is implemented simply by processing all matching options (that is, options in the appropriate group) before any command-line arguments. This works nicely for programs that use the last instance of an option that is specified multiple times. If you have an old program that handles multiply-specified options this way but doesn't read option files, you need to add only two lines to give it that capability. Check the source code of any of the standard MySQL clients to see how to do this. In shell scripts you can use the `my_print_defaults' command to parse the config files: shell> my_print_defaults client mysql --port=3306 --socket=/tmp/mysql.sock --no-auto-rehash The above output contains all options for the groups 'client' and 'mysql'. Installing Many Servers on the Same Machine ------------------------------------------- In some cases you may want to have many different `mysqld' daemons (servers) running on the same machine. You may for example want to run a new version of MySQL for testing together with an old version that is in production. Another case is when you want to give different users access to different `mysqld' servers that they manage themselves.
One way to get a new server running is by starting it with a different socket and port as follows: shell> MYSQL_UNIX_PORT=/tmp/mysqld-new.sock shell> MYSQL_TCP_PORT=3307 shell> export MYSQL_UNIX_PORT MYSQL_TCP_PORT shell> scripts/mysql_install_db shell> bin/safe_mysqld & The environment variables appendix includes a list of other environment variables you can use to affect `mysqld'. *Note Environment variables::. The above is the quick and dirty way that one commonly uses for testing. The nice thing with this is that all connections you do in the above shell will automatically be directed to the new running server! If you need to do this more permanently, you should create an option file for each server. *Note Option files::. In your startup script that is executed at boot time you should specify for both servers: `safe_mysqld --defaults-file=path-to-option-file' At least the following options should be different per server: * port=# * socket=path * pid-file=path The following options should be different, if they are used: * log=path * log-bin=path * log-update=path * log-isam=path * bdb-logdir=path * shared-memory-base-name (New in MySQL 4.1) If you want more performance, you can also specify the following differently: * tmpdir=path * bdb-tmpdir=path *Note Command-line options::. Starting from MySQL 4.1, `tmpdir' can be set to a list of paths separated by colon `:' (semicolon `;' on Windows). They will be used in round-robin fashion. This feature can be used to spread load between several physical disks. If you are installing binary MySQL versions (.tar files) and start them with `./bin/safe_mysqld' then in most cases the only option you need to add/change is the `socket' and `port' argument to `safe_mysqld'. *Note Running Multiple MySQL Servers on the Same Machine: Multiple servers. Running Multiple MySQL Servers on the Same Machine -------------------------------------------------- There are circumstances when you might want to run multiple servers on the same machine. For example, you might want to test a new MySQL release while leaving your existing production setup undisturbed. Or you might be an Internet service provider that wants to provide independent MySQL installations for different customers. If you want to run multiple servers, the easiest way is to compile the servers with different TCP/IP ports and socket files so they are not both listening to the same TCP/IP port or socket file. *Note `mysqld_multi': mysqld_multi. Assume an existing server is configured for the default port number and socket file. Then configure the new server with a `configure' command something like this: shell> ./configure --with-tcp-port=port_number \ --with-unix-socket-path=file_name \ --prefix=/usr/local/mysql-3.22.9 Here `port_number' and `file_name' should be different from the default port number and socket file pathname, and the `--prefix' value should specify an installation directory different from the one under which the existing MySQL installation is located. You can check the socket used by any currently executing MySQL server with this command: shell> mysqladmin -h hostname --port=port_number variables Note that if you specify "`localhost'" as a hostname, `mysqladmin' will default to using Unix sockets instead of TCP/IP. In MySQL 4.1 you can also specify the protocol to use by using the `--protocol=(TCP | SOCKET | PIPE | MEMORY)' option. 
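For example (a sketch only, reusing the example port 3307 from above), you could force a TCP/IP connection to the second server even when connecting to `localhost': shell> mysqladmin --protocol=TCP --host=localhost --port=3307 variables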
If you have a MySQL server running on the port you used, you will get a list of some of the most important configurable variables in MySQL, including the socket name. You don't have to recompile a new MySQL server just to start with a different port and socket. You can change the port and socket to be used by specifying them at runtime as options to `safe_mysqld': shell> /path/to/safe_mysqld --socket=file_name --port=port_number `mysqld_multi' can also take `safe_mysqld' (or `mysqld') as an argument and pass the options from a configuration file to `safe_mysqld' and further to `mysqld'. If you run the new server on the same database directory as another server with logging enabled, you should also specify the name of the log files to `safe_mysqld' with `--log', `--log-update', or `--log-slow-queries'. Otherwise, both servers may be trying to write to the same log file. *Warning*: normally you should never have two servers that update data in the same database! If your OS doesn't support fault-free system locking, this may lead to unpleasant surprises! If you want to use another database directory for the second server, you can use the `--datadir=path' option to `safe_mysqld'. *Note* also that starting several MySQL servers (`mysqlds') on different machines and letting them access one data directory over `NFS' is generally a *bad idea*! The problem is that `NFS' will become a speed bottleneck; it is not meant for such use. And last but not least, you would still have to come up with a solution for making sure that two or more `mysqlds' do not interfere with each other. At the moment there is no platform that can do the file locking (usually via the `lockd' daemon) 100% reliably in every situation. `NFS' would add one more possible risk, making the work even more complicated for the `lockd' daemon to handle. So make it easy for yourself and forget about the idea. The working solution is to have one computer with an operating system that efficiently handles threads and have several CPUs in it. When you want to connect to a MySQL server that is running with a different port than the port that is compiled into your client, you can use one of the following methods: * Start the client with `--host 'hostname' --port=port_number' to connect with TCP/IP, or `[--host localhost] --socket=file_name' to connect via a Unix socket. * Start the client with `--protocol=tcp' to connect with TCP/IP and `--protocol=socket' to connect via a Unix socket. * In your C or Perl programs, you can give the port or socket arguments when connecting to the MySQL server. * If you are using the Perl `DBD::mysql' module you can read the options from the MySQL option files. *Note Option files::. $dsn = "DBI:mysql:test;mysql_read_default_group=client; mysql_read_default_file=/usr/local/mysql/data/my.cnf"; $dbh = DBI->connect($dsn, $user, $password); * Set the `MYSQL_UNIX_PORT' and `MYSQL_TCP_PORT' environment variables to point to the Unix socket and TCP/IP port before you start your clients. If you normally use a specific socket or port, you should place commands to set these environment variables in your `.login' file. *Note Environment variables::. * Specify the default socket and TCP/IP port in the `.my.cnf' file in your home directory. *Note Option files::. General Security Issues and the MySQL Access Privilege System ============================================================= MySQL has an advanced but non-standard security/privilege system. This section describes how it works.
General Security Guidelines --------------------------- Anyone using MySQL on a computer connected to the Internet should read this section to avoid the most common security mistakes. In discussing security, we emphasise the necessity of fully protecting the entire server host (not simply the MySQL server) against all types of applicable attacks: eavesdropping, altering, playback, and denial of service. We do not cover all aspects of availability and fault tolerance here. MySQL uses security based on Access Control Lists (ACLs) for all connections, queries, and other operations that a user may attempt to perform. There is also some support for SSL-encrypted connections between MySQL clients and servers. Many of the concepts discussed here are not specific to MySQL at all; the same general ideas apply to almost all applications. When running MySQL, follow these guidelines whenever possible: * *Do not ever give anyone (except the mysql root user) access to the `user' table in the `mysql' database!* This is critical. *The encrypted password is the real password in MySQL.* Anyone who knows the password which is listed in the `user' table and has access to the host listed for the account *can easily log in as that user*. * Learn the MySQL access privilege system. The `GRANT' and `REVOKE' commands are used for controlling access to MySQL. Do not grant any more privileges than necessary. Never grant privileges to all hosts. Checklist: - Try `mysql -u root'. If you are able to connect successfully to the server without being asked for a password, you have problems. Anyone can connect to your MySQL server as the MySQL `root' user with full privileges! Review the MySQL installation instructions, paying particular attention to the item about setting a `root' password. - Use the command `SHOW GRANTS' and check to see who has access to what. Remove those privileges that are not necessary using the `REVOKE' command. * Do not keep any plain-text passwords in your database. When your computer becomes compromised, the intruder can take the full list of passwords and use them. Instead use `MD5()', `SHA1()' or another one-way hashing function. * Do not choose passwords from dictionaries. There are special programs to break them. Even passwords like "xfish98" are very bad. Much better is "duag98" which contains the same word "fish" but typed one key to the left on a standard QWERTY keyboard. Another method is to use "Mhall" which is taken from the first characters of each word in the sentence "Mary had a little lamb." This is easy to remember and type, but difficult to guess for someone who does not know it. * Invest in a firewall. This protects you from at least 50% of all types of exploits in any software. Put MySQL behind the firewall or in a demilitarised zone (DMZ). Checklist: - Try to scan your ports from the Internet using a tool such as `nmap'. MySQL uses port 3306 by default. This port should be inaccessible from untrusted hosts. Another simple way to check whether or not your MySQL port is open is to try the following command from some remote machine, where `server_host' is the hostname of your MySQL server: shell> telnet server_host 3306 If you get a connection and some garbage characters, the port is open, and should be closed on your firewall or router, unless you really have a good reason to keep it open. If `telnet' just hangs or the connection is refused, everything is OK; the port is blocked. * Do not trust any data entered by your users. 
They can try to trick your code by entering special or escaped character sequences in web forms, URLs, or whatever application you have built. Be sure that your application remains secure if a user enters something like "`; DROP DATABASE mysql;'". This is an extreme example, but large security leaks and data loss may occur as a result of hackers using similar techniques, if you do not prepare for them. Also remember to check numeric data. A common mistake is to protect only strings. Sometimes people think that if a database contains only publicly available data that it need not be protected. This is incorrect. At least denial-of-service type attacks can be performed on such databases. The simplest way to protect from this type of attack is to use apostrophes around the numeric constants: `SELECT * FROM table WHERE ID='234'' rather than `SELECT * FROM table WHERE ID=234'. MySQL automatically converts this string to a number and strips all non-numeric symbols from it. Checklist: - All web applications: * Try to enter `'' and `"' in all your web forms. If you get any kind of MySQL error, investigate the problem right away. * Try to modify any dynamic URLs by adding `%22' (`"'), `%23' (`#'), and `%27' (`'') in the URL. * Try to modify datatypes in dynamic URLs from numeric ones to character ones containing characters from previous examples. Your application should be safe against this and similar attacks. * Try to enter characters, spaces, and special symbols instead of numbers in numeric fields. Your application should remove them before passing them to MySQL or your application should generate an error. Passing unchecked values to MySQL is very dangerous! * Check data sizes before passing them to MySQL. * Consider having your application connect to the database using a different user name than the one you use for administrative purposes. Do not give your applications any more access privileges than they need. - Users of PHP: * Check out the `addslashes()' function. As of PHP 4.0.3, a `mysql_escape_string()' function is available that is based on the function of the same name in the MySQL C API. - Users of MySQL C API: * Check out the `mysql_real_escape_string()' API call. - Users of MySQL++: * Check out the `escape' and `quote' modifiers for query streams. - Users of Perl DBI: * Check out the `quote()' method or use placeholders. - Users of Java JDBC: * Use a `PreparedStatement' object and placeholders. * Do not transmit plain (unencrypted) data over the Internet. These data are accessible to everyone who has the time and ability to intercept it and use it for their own purposes. Instead, use an encrypted protocol such as SSL or SSH. MySQL supports internal SSL connections as of Version 4.0.0. SSH port-forwarding can be used to create an encrypted (and compressed) tunnel for the communication. * Learn to use the `tcpdump' and `strings' utilities. For most cases, you can check whether MySQL data streams are unencrypted by issuing a command like the following: shell> tcpdump -l -i eth0 -w - src or dst port 3306 | strings (This works under Linux and should work with small modifications under other systems.) Warning: If you do not see data this doesn't always actually mean that it is encrypted. If you need high security, you should consult with a security expert. How to Make MySQL Secure Against Crackers ----------------------------------------- When you connect to a MySQL server, you normally should use a password. 
The password is not transmitted in clear text over the connection, however the encryption algorithm is not very strong, and with some effort a clever attacker can crack the password if he is able to sniff the traffic between the client and the server. If the connection between the client and the server goes through an untrusted network, you should use an SSH tunnel to encrypt the communication. All other information is transferred as text that can be read by anyone who is able to watch the connection. If you are concerned about this, you can use the compressed protocol (in MySQL Version 3.22 and above) to make things much harder. To make things even more secure you should use `ssh'. You can find an `Open Source' `ssh' client at `http://www.openssh.org/', and a commercial `ssh' client at `http://www.ssh.com/'. With this, you can get an encrypted TCP/IP connection between a MySQL server and a MySQL client. If you are using MySQL 4.0, you can also use internal OpenSSL support. *Note Secure connections::. To make a MySQL system secure, you should strongly consider the following suggestions: * Use passwords for all MySQL users. Remember that anyone can log in as any other person as simply as `mysql -u other_user db_name' if `other_user' has no password. It is common behaviour with client/server applications that the client may specify any user name. You can change the password of all users by editing the `mysql_install_db' script before you run it, or only the password for the MySQL `root' user like this: shell> mysql -u root mysql mysql> UPDATE user SET Password=PASSWORD('new_password') -> WHERE user='root'; mysql> FLUSH PRIVILEGES; * Don't run the MySQL daemon as the Unix `root' user. This is very dangerous, because any user with the `FILE' privilege will be able to create files as `root' (for example, `~root/.bashrc'). To prevent this, `mysqld' will refuse to run as `root' unless it is specified directly using a `--user=root' option. `mysqld' can be run as an ordinary unprivileged user instead. You can also create a new Unix user `mysql' to make everything even more secure. If you run `mysqld' as another Unix user, you don't need to change the `root' user name in the `user' table, because MySQL user names have nothing to do with Unix user names. To start `mysqld' as another Unix user, add a `user' line that specifies the user name to the `[mysqld]' group of the `/etc/my.cnf' option file or the `my.cnf' option file in the server's data directory. For example: [mysqld] user=mysql This will cause the server to start as the designated user whether you start it manually or by using `safe_mysqld' or `mysql.server'. For more details, see *Note Changing MySQL user::. * Don't support symlinks to tables (this can be disabled with the `--skip-symlink' option). This is especially important if you run `mysqld' as root as anyone that has write access to the mysqld data directories could then delete any file in the system! *Note Symbolic links to tables::. * Check that the Unix user that `mysqld' runs as is the only user with read/write privileges in the database directories. * Don't give the `PROCESS' privilege to all users. The output of `mysqladmin processlist' shows the text of the currently executing queries, so any user who is allowed to execute that command might be able to see if another user issues an `UPDATE user SET password=PASSWORD('not_secure')' query. 
`mysqld' reserves an extra connection for users who have the `PROCESS' privilege, so that a MySQL `root' user can log in and check things even if all normal connections are in use. * Don't give the `FILE' privilege to all users. Any user that has this privilege can write a file anywhere in the filesystem with the privileges of the `mysqld' daemon! To make this a bit safer, all files generated with `SELECT ... INTO OUTFILE' are writeable by everyone, and you cannot overwrite existing files. The `FILE' privilege may also be used to read any world readable file that is accessible to the Unix user that the server runs as. One can also read any file to the current database (which the user need some privilege for). This could be abused, for example, by using `LOAD DATA' to load `/etc/passwd' into a table, which can then be read with `SELECT'. * If you don't trust your DNS, you should use IP numbers instead of hostnames in the grant tables. In any case, you should be very careful about creating grant table entries using hostname values that contain wildcards! * If you want to restrict the number of connections for a single user, you can do this by setting the `max_user_connections' variable in `mysqld'. Startup Options for `mysqld' Concerning Security ------------------------------------------------ The following `mysqld' options affect security: `--local-infile[=(0|1)]' If one uses `--local-infile=0' then one can't use `LOAD DATA LOCAL INFILE'. `--safe-show-database' With this option, the `SHOW DATABASES' command returns only those databases for which the user has some kind of privilege. From version 4.0.2 this option is deprecated and doesn't do anything (the option is enabled by default) as we now have the `SHOW DATABASES' privilege. *Note GRANT::. `--safe-user-create' If this is enabled, an user can't create new users with the `GRANT' command, if the user doesn't have the `INSERT' privilege for the `mysql.user' table. If you want to give a user access to just create new users with those privileges that the user has right to grant, you should give the user the following privilege: mysql> GRANT INSERT(user) ON mysql.user TO 'user'@'hostname'; This will ensure that the user can't change any privilege columns directly, but has to use the `GRANT' command to give privileges to other users. `--skip-grant-tables' This option causes the server not to use the privilege system at all. This gives everyone *full access* to all databases! (You can tell a running server to start using the grant tables again by executing `mysqladmin flush-privileges' or `mysqladmin reload'.) `--skip-name-resolve' Hostnames are not resolved. All `Host' column values in the grant tables must be IP numbers or `localhost'. `--skip-networking' Don't allow TCP/IP connections over the network. All connections to `mysqld' must be made via Unix sockets. This option is unsuitable when using a MySQL version prior to 3.23.27 with the MIT-pthreads package, because Unix sockets were not supported by MIT-pthreads at that time. `--skip-show-database' Don't allow `SHOW DATABASES' command, unless the user has the `SHOW DATABASES' privilege. From version 4.0.2 you should no longer need this option, since access can now be granted specifically with the `SHOW DATABASES' privilege. Security issues with LOAD DATA LOCAL ------------------------------------ In MySQL 3.23.49 and MySQL 4.0.2, we added some new options to deal with possible security issues when it comes to `LOAD DATA LOCAL'. 
There are two possible problems with supporting this command: As the reading of the file is initiated from the server, one could theoretically create a patched MySQL server that could read any file on the client machine that the current user has read access to, when the client issues a query against the table. In a web environment where the clients are connecting from a web server, a user could use `LOAD DATA LOCAL' to read any files that the web server process has read access to (assuming a user could run any command against the SQL server). There are two separate fixes for this: If you don't configure MySQL with `--enable-local-infile', then `LOAD DATA LOCAL' will be disabled by all clients, unless one calls `mysql_options(... MYSQL_OPT_LOCAL_INFILE, 0)' in the client. *Note `mysql_options()': mysql_options. For the `mysql' command-line client, `LOAD DATA LOCAL' can be enabled by specifying the option `--local-infile[=1]', or disabled with `--local-infile=0'. By default, all MySQL clients and libraries are compiled with `--enable-local-infile', to be compatible with MySQL 3.23.48 and before. One can disable all `LOAD DATA LOCAL' commands in the MySQL server by starting `mysqld' with `--local-infile=0'. In the case that `LOAD DATA LOCAL INFILE' is disabled in the server or the client, you will get the error message (1148): The used command is not allowed with this MySQL version What the Privilege System Does ------------------------------ The primary function of the MySQL privilege system is to authenticate a user connecting from a given host, and to associate that user with privileges on a database such as `SELECT', `INSERT', `UPDATE' and `DELETE'. Additional functionality includes the ability to have an anonymous user and to grant privileges for MySQL-specific functions such as `LOAD DATA INFILE' and administrative operations. How the Privilege System Works ------------------------------ The MySQL privilege system ensures that all users may do exactly the things that they are supposed to be allowed to do. When you connect to a MySQL server, your identity is determined by *the host from which you connect* and *the user name you specify*. The system grants privileges according to your identity and *what you want to do*. MySQL considers both your hostname and user name in identifying you because there is little reason to assume that a given user name belongs to the same person everywhere on the Internet. For example, the user `joe' who connects from `office.com' need not be the same person as the user `joe' who connects from `elsewhere.com'. MySQL handles this by allowing you to distinguish users on different hosts that happen to have the same name: you can grant `joe' one set of privileges for connections from `office.com', and a different set of privileges for connections from `elsewhere.com'. MySQL access control involves two stages: * Stage 1: The server checks whether you are even allowed to connect. * Stage 2: Assuming you can connect, the server checks each request you issue to see whether you have sufficient privileges to perform it. For example, if you try to select rows from a table in a database or drop a table from the database, the server makes sure you have the `SELECT' privilege for the table or the `DROP' privilege for the database. The server uses the `user', `db', and `host' tables in the `mysql' database at both stages of access control. 
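If you want to see what the server has stored, and assuming your account can read the `mysql' database, you can inspect these grant tables directly with ordinary queries, for example: mysql> SELECT Host, User FROM mysql.user; mysql> SELECT Host, Db, User FROM mysql.db;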
The fields in these grant tables are shown here: *Table name* `user' `db' `host' *Scope `Host' `Host' `Host' fields* `User' `Db' `Db' `Password' `User' *Privilege `Select_priv' `Select_priv' `Select_priv' fields* `Insert_priv' `Insert_priv' `Insert_priv' `Update_priv' `Update_priv' `Update_priv' `Delete_priv' `Delete_priv' `Delete_priv' `Index_priv' `Index_priv' `Index_priv' `Alter_priv' `Alter_priv' `Alter_priv' `Create_priv' `Create_priv' `Create_priv' `Drop_priv' `Drop_priv' `Drop_priv' `Grant_priv' `Grant_priv' `Grant_priv' `References_priv' `Reload_priv' `Shutdown_priv' `Process_priv' `File_priv' For the second stage of access control (request verification), the server may, if the request involves tables, additionally consult the `tables_priv' and `columns_priv' tables. The fields in these tables are shown here: *Table name* `tables_priv' `columns_priv' *Scope `Host' `Host' fields* `Db' `Db' `User' `User' `Table_name' `Table_name' `Column_name' *Privilege `Table_priv' `Column_priv' fields* `Column_priv' *Other `Timestamp' `Timestamp' fields* `Grantor' Each grant table contains scope fields and privilege fields. Scope fields determine the scope of each entry in the tables, that is, the context in which the entry applies. For example, a `user' table entry with `Host' and `User' values of `'thomas.loc.gov'' and `'bob'' would be used for authenticating connections made to the server by `bob' from the host `thomas.loc.gov'. Similarly, a `db' table entry with `Host', `User', and `Db' fields of `'thomas.loc.gov'', `'bob'' and `'reports'' would be used when `bob' connects from the host `thomas.loc.gov' to access the `reports' database. The `tables_priv' and `columns_priv' tables contain scope fields indicating tables or table/column combinations to which each entry applies. For access-checking purposes, comparisons of `Host' values are case-insensitive. `User', `Password', `Db', and `Table_name' values are case-sensitive. `Column_name' values are case-insensitive in MySQL Version 3.22.12 or later. Privilege fields indicate the privileges granted by a table entry, that is, what operations can be performed. The server combines the information in the various grant tables to form a complete description of a user's privileges. The rules used to do this are described in *Note Request access::. Scope fields are strings, declared as shown here; the default value for each is the empty string: *Field name* *Type* *Notes* `Host' `CHAR(60)' `User' `CHAR(16)' `Password' `CHAR(16)' `Db' `CHAR(64)' (`CHAR(60)' for the `tables_priv' and `columns_priv' tables) `Table_name' `CHAR(60)' `Column_name' `CHAR(60)' In the `user', `db' and `host' tables, all privilege fields are declared as `ENUM('N','Y')'each can have a value of `'N'' or `'Y'', and the default value is `'N''. In the `tables_priv' and `columns_priv' tables, the privilege fields are declared as `SET' fields: *Table *Field *Possible set elements* name* name* `tables_priv'`Table_priv'`'Select', 'Insert', 'Update', 'Delete', 'Create', 'Drop', 'Grant', 'References', 'Index', 'Alter'' `tables_priv'`Column_priv'`'Select', 'Insert', 'Update', 'References'' `columns_priv'`Column_priv'`'Select', 'Insert', 'Update', 'References'' Briefly, the server uses the grant tables like this: * The `user' table scope fields determine whether to allow or reject incoming connections. For allowed connections, any privileges granted in the `user' table indicate the user's global (superuser) privileges. These privileges apply to *all* databases on the server. 
* The `db' and `host' tables are used together: - The `db' table scope fields determine which users can access which databases from which hosts. The privilege fields determine which operations are allowed. - The `host' table is used as an extension of the `db' table when you want a given `db' table entry to apply to several hosts. For example, if you want a user to be able to use a database from several hosts in your network, leave the `Host' value empty in the user's `db' table entry, then populate the `host' table with an entry for each of those hosts. This mechanism is described more detail in *Note Request access::. * The `tables_priv' and `columns_priv' tables are similar to the `db' table, but are more fine-grained: they apply at the table and column levels rather than at the database level. Note that administrative privileges (`RELOAD', `SHUTDOWN', etc.) are specified only in the `user' table. This is because administrative operations are operations on the server itself and are not database-specific, so there is no reason to list such privileges in the other grant tables. In fact, only the `user' table need be consulted to determine whether you can perform an administrative operation. The `FILE' privilege is specified only in the `user' table, too. It is not an administrative privilege as such, but your ability to read or write files on the server host is independent of the database you are accessing. The `mysqld' server reads the contents of the grant tables once, when it starts up. Changes to the grant tables take effect as indicated in *Note Privilege changes::. When you modify the contents of the grant tables, it is a good idea to make sure that your changes set up privileges the way you want. For help in diagnosing problems, see *Note Access denied::. For advice on security issues, see *Note Security::. A useful diagnostic tool is the `mysqlaccess' script, which Yves Carlier has provided for the MySQL distribution. Invoke `mysqlaccess' with the `--help' option to find out how it works. Note that `mysqlaccess' checks access using only the `user', `db' and `host' tables. It does not check table- or column-level privileges. Privileges Provided by MySQL ---------------------------- Information about user privileges is stored in the `user', `db', `host', `tables_priv', and `columns_priv' tables in the `mysql' database (that is, in the database named `mysql'). The MySQL server reads the contents of these tables when it starts up and under the circumstances indicated in *Note Privilege changes::. 
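As a reminder of how such changes are applied in practice, here is a minimal sketch (reusing the example account `bob'@`thomas.loc.gov' from above) of modifying a grant table directly and then telling the server to reload it: mysql> UPDATE mysql.user SET Select_priv='Y' WHERE User='bob' AND Host='thomas.loc.gov'; mysql> FLUSH PRIVILEGES; Using `GRANT' and `REVOKE' instead of direct updates avoids the need for the explicit reload.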
The names used in this manual to refer to the privileges provided by MySQL version 4.0.2 are shown here, along with the table column name associated with each privilege in the grant tables and the context in which the privilege applies:

     *Privilege*                 *Column*                  *Context*
     `ALTER'                     `Alter_priv'              tables
     `DELETE'                    `Delete_priv'             tables
     `INDEX'                     `Index_priv'              tables
     `INSERT'                    `Insert_priv'             tables
     `SELECT'                    `Select_priv'             tables
     `UPDATE'                    `Update_priv'             tables
     `CREATE'                    `Create_priv'             databases, tables, or indexes
     `DROP'                      `Drop_priv'               databases or tables
     `GRANT'                     `Grant_priv'              databases or tables
     `REFERENCES'                `References_priv'         databases or tables
     `CREATE TEMPORARY TABLES'   `Create_tmp_table_priv'   server administration
     `EXECUTE'                   `Execute_priv'            server administration
     `FILE'                      `File_priv'               file access on server
     `LOCK TABLES'               `Lock_tables_priv'        server administration
     `PROCESS'                   `Process_priv'            server administration
     `RELOAD'                    `Reload_priv'             server administration
     `REPLICATION CLIENT'        `Repl_client_priv'        server administration
     `REPLICATION SLAVE'         `Repl_slave_priv'         server administration
     `SHOW DATABASES'            `Show_db_priv'            server administration
     `SHUTDOWN'                  `Shutdown_priv'           server administration
     `SUPER'                     `Super_priv'              server administration

The `SELECT', `INSERT', `UPDATE', and `DELETE' privileges allow you to perform operations on rows in existing tables in a database. `SELECT' statements require the `SELECT' privilege only if they actually retrieve rows from a table. You can execute certain `SELECT' statements even without permission to access any of the databases on the server. For example, you could use the `mysql' client as a simple calculator:

     mysql> SELECT 1+1;
     mysql> SELECT PI()*2;

The `INDEX' privilege allows you to create or drop (remove) indexes. The `ALTER' privilege allows you to use `ALTER TABLE'. The `CREATE' and `DROP' privileges allow you to create new databases and tables, or to drop (remove) existing databases and tables. Note that if you grant the `DROP' privilege for the `mysql' database to a user, that user can drop the database in which the MySQL access privileges are stored!

The `GRANT' privilege allows you to give to other users those privileges you yourself possess.

The `FILE' privilege gives you permission to read and write files on the server using the `LOAD DATA INFILE' and `SELECT ... INTO OUTFILE' statements. Any user to whom this privilege is granted can read any world-readable file accessible by the MySQL server and create a new world-readable file in any directory where the MySQL server can write. The user can also read any file in the current database directory. The user cannot, however, change any existing file.

The remaining privileges are used for administrative operations, which are performed using the `mysqladmin' program. The table here shows which `mysqladmin' commands each administrative privilege allows you to execute:

     *Privilege*   *Commands permitted to privilege holders*
     `RELOAD'      `reload', `refresh', `flush-privileges', `flush-hosts',
                   `flush-logs', and `flush-tables'
     `SHUTDOWN'    `shutdown'
     `PROCESS'     `processlist'
     `SUPER'       `kill'

The `reload' command tells the server to re-read the grant tables. The `refresh' command flushes all tables and opens and closes the log files. `flush-privileges' is a synonym for `reload'. The other `flush-*' commands perform functions similar to `refresh' but are more limited in scope, and may be preferable in some instances. For example, if you want to flush just the log files, `flush-logs' is a better choice than `refresh'. The `shutdown' command shuts down the server.
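As a brief illustration of how these administrative privileges map to `mysqladmin' commands, an account holding the `RELOAD' privilege could run the following from the shell (a sketch only; substitute whatever administrative account and password you actually use):

     shell> mysqladmin -u root -p reload
     shell> mysqladmin -u root -p flush-logs

The first command re-reads the grant tables; the second flushes only the log files, which, as noted above, is more limited in scope than a full `refresh'.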
The `processlist' command displays information about the threads executing within the server. The `kill' command kills server threads. You can always display or kill your own threads, but you need the `PROCESS' privilege to display and `SUPER' privilege to kill threads initiated by other users. *Note KILL::. It is a good idea in general to grant privileges only to those users who need them, but you should exercise particular caution in granting certain privileges: * The `GRANT' privilege allows users to give away their privileges to other users. Two users with different privileges and with the `GRANT' privilege are able to combine privileges. * The `ALTER' privilege may be used to subvert the privilege system by renaming tables. * The `FILE' privilege can be abused to read any world-readable file on the server or any file in the current database directory on the server into a database table, the contents of which can then be accessed using `SELECT'. * The `SHUTDOWN' privilege can be abused to deny service to other users entirely, by terminating the server. * The `PROCESS' privilege can be used to view the plain text of currently executing queries, including queries that set or change passwords. * Privileges on the `mysql' database can be used to change passwords and other access privilege information. (Passwords are stored encrypted, so a malicious user cannot simply read them to know the plain text password.) If they can access the `mysql.user' password column, they can use it to log into the MySQL server for the given user. (With sufficient privileges, the same user can replace a password with a different one.) There are some things that you cannot do with the MySQL privilege system: * You cannot explicitly specify that a given user should be denied access. That is, you cannot explicitly match a user and then refuse the connection. * You cannot specify that a user has privileges to create or drop tables in a database but not to create or drop the database itself. Connecting to the MySQL Server ------------------------------ MySQL client programs generally require that you specify connection parameters when you want to access a MySQL server: the host you want to connect to, your user name, and your password. For example, the `mysql' client can be started like this (optional arguments are enclosed between `[' and `]'): shell> mysql [-h host_name] [-u user_name] [-pyour_pass] Alternate forms of the `-h', `-u', and `-p' options are `--host=host_name', `--user=user_name', and `--password=your_pass'. Note that there is _no space_ between `-p' or `--password=' and the password following it. *Note*: Specifying a password on the command-line is not secure! Any user on your system may then find out your password by typing a command like: `ps auxww'. *Note Option files::. `mysql' uses default values for connection parameters that are missing from the command-line: * The default hostname is `localhost'. * The default user name is your Unix login name. * No password is supplied if `-p' is missing. Thus, for a Unix user `joe', the following commands are equivalent: shell> mysql -h localhost -u joe shell> mysql -h localhost shell> mysql -u joe shell> mysql Other MySQL clients behave similarly. On Unix systems, you can specify different default values to be used when you make a connection, so that you need not enter them on the command-line each time you invoke a client program. 
This can be done in a couple of ways: * You can specify connection parameters in the `[client]' section of the `.my.cnf' configuration file in your home directory. The relevant section of the file might look like this: [client] host=host_name user=user_name password=your_pass *Note Option files::. * You can specify connection parameters using environment variables. The host can be specified for `mysql' using `MYSQL_HOST'. The MySQL user name can be specified using `USER' (this is for Windows only). The password can be specified using `MYSQL_PWD' (but this is insecure; see the next section). *Note Environment variables::. Access Control, Stage 1: Connection Verification ------------------------------------------------ When you attempt to connect to a MySQL server, the server accepts or rejects the connection based on your identity and whether you can verify your identity by supplying the correct password. If not, the server denies access to you completely. Otherwise, the server accepts the connection, then enters Stage 2 and waits for requests. Your identity is based on two pieces of information: * The host from which you connect * Your MySQL user name Identity checking is performed using the three `user' table scope fields (`Host', `User', and `Password'). The server accepts the connection only if a `user' table entry matches your hostname and user name, and you supply the correct password. Values in the `user' table scope fields may be specified as follows: * A `Host' value may be a hostname or an IP number, or `'localhost'' to indicate the local host. * You can use the wildcard characters `%' and `_' in the `Host' field. * A `Host' value of `'%'' matches any hostname. * A blank `Host' value means that the privilege should be anded with the entry in the `host' table that matches the given host name. You can find more information about this in the next chapter. * As of MySQL Version 3.23, for `Host' values specified as IP numbers, you can specify a netmask indicating how many address bits to use for the network number. For example: mysql> GRANT ALL PRIVILEGES ON db.* -> TO david@'192.58.197.0/255.255.255.0'; This will allow everyone to connect from an IP where the following is true: user_ip & netmask = host_ip. In the above example all IP:s in the interval 192.58.197.0 - 192.58.197.255 can connect to the MySQL server. * Wildcard characters are not allowed in the `User' field, but you can specify a blank value, which matches any name. If the `user' table entry that matches an incoming connection has a blank user name, the user is considered to be the anonymous user (the user with no name), rather than the name that the client actually specified. This means that a blank user name is used for all further access checking for the duration of the connection (that is, during Stage 2). * The `Password' field can be blank. This does not mean that any password matches, it means the user must connect without specifying a password. Non-blank `Password' values represent encrypted passwords. MySQL does not store passwords in plaintext form for anyone to see. Rather, the password supplied by a user who is attempting to connect is encrypted (using the `PASSWORD()' function). The encrypted password is then used when the client/server is checking if the password is correct. (This is done without the encrypted password ever traveling over the connection.) Note that from MySQL's point of view the encrypted password is the REAL password, so you should not give anyone access to it! 
In particular, don't give normal users read access to the tables in the `mysql' database! From version 4.1, MySQL employs a different password and login mechanism that is secure even if TCP/IP packets are sniffed and/or the mysql database is captured. The examples here show how various combinations of `Host' and `User' values in `user' table entries apply to incoming connections: `Host' *value* `User' *Connections matched by entry* *value* `'thomas.loc.gov'' `'fred'' `fred', connecting from `thomas.loc.gov' `'thomas.loc.gov'' `''' Any user, connecting from `thomas.loc.gov' `'%'' `'fred'' `fred', connecting from any host `'%'' `''' Any user, connecting from any host `'%.loc.gov'' `'fred'' `fred', connecting from any host in the `loc.gov' domain `'x.y.%'' `'fred'' `fred', connecting from `x.y.net', `x.y.com',`x.y.edu', etc. (this is probably not useful) `'144.155.166.177'' `'fred'' `fred', connecting from the host with IP address `144.155.166.177' `'144.155.166.%'' `'fred'' `fred', connecting from any host in the `144.155.166' class C subnet `'144.155.166.0/255.255.255.0''`'fred'' Same as previous example Because you can use IP wildcard values in the `Host' field (for example, `'144.155.166.%'' to match every host on a subnet), there is the possibility that someone might try to exploit this capability by naming a host `144.155.166.somewhere.com'. To foil such attempts, MySQL disallows matching on hostnames that start with digits and a dot. Thus, if you have a host named something like `1.2.foo.com', its name will never match the `Host' column of the grant tables. Only an IP number can match an IP wildcard value. An incoming connection may be matched by more than one entry in the `user' table. For example, a connection from `thomas.loc.gov' by `fred' would be matched by several of the entries just shown above. How does the server choose which entry to use if more than one matches? The server resolves this question by sorting the `user' table after reading it at startup time, then looking through the entries in sorted order when a user attempts to connect. The first matching entry is the one that is used. `user' table sorting works as follows. Suppose the `user' table looks like this: +-----------+----------+- | Host | User | ... +-----------+----------+- | % | root | ... | % | jeffrey | ... | localhost | root | ... | localhost | | ... +-----------+----------+- When the server reads in the table, it orders the entries with the most-specific `Host' values first (`'%'' in the `Host' column means "any host" and is least specific). Entries with the same `Host' value are ordered with the most-specific `User' values first (a blank `User' value means "any user" and is least specific). The resulting sorted `user' table looks like this: +-----------+----------+- | Host | User | ... +-----------+----------+- | localhost | root | ... | localhost | | ... | % | jeffrey | ... | % | root | ... +-----------+----------+- When a connection is attempted, the server looks through the sorted entries and uses the first match found. For a connection from `localhost' by `jeffrey', the entries with `'localhost'' in the `Host' column match first. Of those, the entry with the blank user name matches both the connecting hostname and user name. (The `'%'/'jeffrey'' entry would have matched, too, but it is not the first match in the table.) Here is another example. Suppose the `user' table looks like this: +----------------+----------+- | Host | User | ... +----------------+----------+- | % | jeffrey | ... 
| thomas.loc.gov | | ... +----------------+----------+- The sorted table looks like this: +----------------+----------+- | Host | User | ... +----------------+----------+- | thomas.loc.gov | | ... | % | jeffrey | ... +----------------+----------+- A connection from `thomas.loc.gov' by `jeffrey' is matched by the first entry, whereas a connection from `whitehouse.gov' by `jeffrey' is matched by the second. A common misconception is to think that for a given user name, all entries that explicitly name that user will be used first when the server attempts to find a match for the connection. This is simply not true. The previous example illustrates this, where a connection from `thomas.loc.gov' by `jeffrey' is first matched not by the entry containing `'jeffrey'' as the `User' field value, but by the entry with no user name! If you have problems connecting to the server, print out the `user' table and sort it by hand to see where the first match is being made. If connection was successful, but your privileges are not what you expected you may use `CURRENT_USER()' function (new in version 4.0.6) to see what user/host combination your connection actually matched. *Note `CURRENT_USER()': Miscellaneous functions. Access Control, Stage 2: Request Verification --------------------------------------------- Once you establish a connection, the server enters Stage 2. For each request that comes in on the connection, the server checks whether you have sufficient privileges to perform it, based on the type of operation you wish to perform. This is where the privilege fields in the grant tables come into play. These privileges can come from any of the `user', `db', `host', `tables_priv', or `columns_priv' tables. The grant tables are manipulated with `GRANT' and `REVOKE' commands. *Note `GRANT': GRANT. (You may find it helpful to refer to *Note Privileges::, which lists the fields present in each of the grant tables.) The `user' table grants privileges that are assigned to you on a global basis and that apply no matter what the current database is. For example, if the `user' table grants you the `DELETE' privilege, you can delete rows from any database on the server host! In other words, `user' table privileges are superuser privileges. It is wise to grant privileges in the `user' table only to superusers such as server or database administrators. For other users, you should leave the privileges in the `user' table set to `'N'' and grant privileges on a database-specific basis only, using the `db' and `host' tables. The `db' and `host' tables grant database-specific privileges. Values in the scope fields may be specified as follows: * The wildcard characters `%' and `_' can be used in the `Host' and `Db' fields of either table. If you wish to use for instance a `_' character as part of a database name, specify it as `\_' in the `GRANT' command. * A `'%'' `Host' value in the `db' table means "any host." A blank `Host' value in the `db' table means "consult the `host' table for further information." * A `'%'' or blank `Host' value in the `host' table means "any host." * A `'%'' or blank `Db' value in either table means "any database." * A blank `User' value in either table matches the anonymous user. The `db' and `host' tables are read in and sorted when the server starts up (at the same time that it reads the `user' table). The `db' table is sorted on the `Host', `Db', and `User' scope fields, and the `host' table is sorted on the `Host' and `Db' scope fields. 
As with the `user' table, sorting puts the most-specific values first and least-specific values last, and when the server looks for matching entries, it uses the first match that it finds. The `tables_priv' and `columns_priv' tables grant table- and column-specific privileges. Values in the scope fields may be specified as follows: * The wildcard characters `%' and `_' can be used in the `Host' field of either table. * A `'%'' or blank `Host' value in either table means "any host." * The `Db', `Table_name' and `Column_name' fields cannot contain wildcards or be blank in either table. The `tables_priv' and `columns_priv' tables are sorted on the `Host', `Db', and `User' fields. This is similar to `db' table sorting, although the sorting is simpler because only the `Host' field may contain wildcards. The request verification process is described here. (If you are familiar with the access-checking source code, you will notice that the description here differs slightly from the algorithm used in the code. The description is equivalent to what the code actually does; it differs only to make the explanation simpler.) For administrative requests (`SHUTDOWN', `RELOAD', etc.), the server checks only the `user' table entry, because that is the only table that specifies administrative privileges. Access is granted if the entry allows the requested operation and denied otherwise. For example, if you want to execute `mysqladmin shutdown' but your `user' table entry doesn't grant the `SHUTDOWN' privilege to you, access is denied without even checking the `db' or `host' tables. (They contain no `Shutdown_priv' column, so there is no need to do so.) For database-related requests (`INSERT', `UPDATE', etc.), the server first checks the user's global (superuser) privileges by looking in the `user' table entry. If the entry allows the requested operation, access is granted. If the global privileges in the `user' table are insufficient, the server determines the user's database-specific privileges by checking the `db' and `host' tables: 1. The server looks in the `db' table for a match on the `Host', `Db', and `User' fields. The `Host' and `User' fields are matched to the connecting user's hostname and MySQL user name. The `Db' field is matched to the database the user wants to access. If there is no entry for the `Host' and `User', access is denied. 2. If there is a matching `db' table entry and its `Host' field is not blank, that entry defines the user's database-specific privileges. 3. If the matching `db' table entry's `Host' field is blank, it signifies that the `host' table enumerates which hosts should be allowed access to the database. In this case, a further lookup is done in the `host' table to find a match on the `Host' and `Db' fields. If no `host' table entry matches, access is denied. If there is a match, the user's database-specific privileges are computed as the intersection (*not* the union!) of the privileges in the `db' and `host' table entries, that is, the privileges that are `'Y'' in both entries. (This way you can grant general privileges in the `db' table entry and then selectively restrict them on a host-by-host basis using the `host' table entries.) After determining the database-specific privileges granted by the `db' and `host' table entries, the server adds them to the global privileges granted by the `user' table. If the result allows the requested operation, access is granted. 
Otherwise, the server checks the user's table and column privileges in the `tables_priv' and `columns_priv' tables and adds those to the user's privileges. Access is allowed or denied based on the result. Expressed in boolean terms, the preceding description of how a user's privileges are calculated may be summarised like this: global privileges OR (database privileges AND host privileges) OR table privileges OR column privileges It may not be apparent why, if the global `user' entry privileges are initially found to be insufficient for the requested operation, the server adds those privileges to the database-, table-, and column-specific privileges later. The reason is that a request might require more than one type of privilege. For example, if you execute an `INSERT ... SELECT' statement, you need both `INSERT' and `SELECT' privileges. Your privileges might be such that the `user' table entry grants one privilege and the `db' table entry grants the other. In this case, you have the necessary privileges to perform the request, but the server cannot tell that from either table by itself; the privileges granted by the entries in both tables must be combined. The `host' table can be used to maintain a list of secure servers. At TcX, the `host' table contains a list of all machines on the local network. These are granted all privileges. You can also use the `host' table to indicate hosts that are *not* secure. Suppose you have a machine `public.your.domain' that is located in a public area that you do not consider secure. You can allow access to all hosts on your network except that machine by using `host' table entries like this: +--------------------+----+- | Host | Db | ... +--------------------+----+- | public.your.domain | % | ... (all privileges set to 'N') | %.your.domain | % | ... (all privileges set to 'Y') +--------------------+----+- Naturally, you should always test your entries in the grant tables (for example, using `mysqlaccess') to make sure your access privileges are actually set up the way you think they are. Causes of `Access denied' Errors -------------------------------- If you encounter `Access denied' errors when you try to connect to the MySQL server, the following list indicates some courses of action you can take to correct the problem: * After installing MySQL, did you run the `mysql_install_db' script to set up the initial grant table contents? If not, do so. *Note Default privileges::. Test the initial privileges by executing this command: shell> mysql -u root test The server should let you connect without error. You should also make sure you have a file `user.MYD' in the MySQL database directory. Ordinarily, this is `PATH/var/mysql/user.MYD', where `PATH' is the pathname to the MySQL installation root. * After a fresh installation, you should connect to the server and set up your users and their access permissions: shell> mysql -u root mysql The server should let you connect because the MySQL `root' user has no password initially. That is also a security risk, so setting the `root' password is something you should do while you're setting up your other MySQL users. If you try to connect as `root' and get this error: Access denied for user: '@unknown' to database mysql this means that you don't have an entry in the `user' table with a `User' column value of `'root'' and that `mysqld' cannot resolve the hostname for your client. 
In this case, you must restart the server with the `--skip-grant-tables' option and edit your `/etc/hosts' or `\windows\hosts' file to add an entry for your host.

   * If you get an error like the following:

          shell> mysqladmin -u root -pxxxx ver
          Access denied for user: 'root@localhost' (Using password: YES)

     It means that you are using a wrong password. *Note Passwords::. If you have forgotten the root password, you can restart `mysqld' with `--skip-grant-tables' to change the password. *Note Resetting permissions::.

     If you get the above error even though you haven't specified a password, it means that you have a wrong password in some `my.ini' file. *Note Option files::. You can avoid using option files with the `--no-defaults' option, as follows:

          shell> mysqladmin --no-defaults -u root ver

   * If you updated an existing MySQL installation from a version earlier than Version 3.22.11 to Version 3.22.11 or later, did you run the `mysql_fix_privilege_tables' script? If not, do so. The structure of the grant tables changed with MySQL Version 3.22.11 when the `GRANT' statement became functional.

   * If your privileges seem to have changed in the middle of a session, it may be that a superuser has changed them. Reloading the grant tables affects new client connections, but it also affects existing connections as indicated in *Note Privilege changes::.

   * If you can't get your password to work, remember that you must use the `PASSWORD()' function if you set the password with the `INSERT', `UPDATE', or `SET PASSWORD' statements. The `PASSWORD()' function is unnecessary if you specify the password using the `GRANT ... IDENTIFIED BY' statement or the `mysqladmin password' command. *Note Passwords::.

   * `localhost' is a synonym for your local hostname, and is also the default host to which clients try to connect if you specify no host explicitly. However, connections to `localhost' do not work if you are using a MySQL version prior to 3.23.27 that uses MIT-pthreads (`localhost' connections are made using Unix sockets, which were not supported by MIT-pthreads at that time). To avoid this problem on such systems, you should use the `--host' option to name the server host explicitly. This will make a TCP/IP connection to the `mysqld' server. In this case, you must have your real hostname in `user' table entries on the server host. (This is true even if you are running a client program on the same host as the server.)

   * If you get an `Access denied' error when trying to connect to the database with `mysql -u user_name db_name', you may have a problem with the `user' table. Check this by executing `mysql -u root mysql' and issuing this SQL statement:

          mysql> SELECT * FROM user;

     The result should include an entry with the `Host' and `User' columns matching your computer's hostname and your MySQL user name.

   * The `Access denied' error message will tell you who you are trying to log in as, the host from which you are trying to connect, and whether or not you were using a password. Normally, you should have one entry in the `user' table that exactly matches the hostname and user name that were given in the error message. For example, if you get an error message that contains `Using password: NO', it means that you tried to log in without a password.

   * If you get the following error when you try to connect from a different host than the one on which the MySQL server is running, then there is no row in the `user' table that matches that host:

          Host ...
is not allowed to connect to this MySQL server You can fix this by using the command-line tool `mysql' (on the server host!) to add a row to the `user', `db', or `host' table for the user/hostname combination from which you are trying to connect and then execute `mysqladmin flush-privileges'. If you are not running MySQL Version 3.22 and you don't know the IP number or hostname of the machine from which you are connecting, you should put an entry with `'%'' as the `Host' column value in the `user' table and restart `mysqld' with the `--log' option on the server machine. After trying to connect from the client machine, the information in the MySQL log will indicate how you really did connect. (Then replace the `'%'' in the `user' table entry with the actual hostname that shows up in the log. Otherwise, you'll have a system that is insecure.) Another reason for this error on Linux is that you are using a binary MySQL version that is compiled with a different glibc version than the one you are using. In this case you should either upgrade your OS/glibc or download the source MySQL version and compile this yourself. A source RPM is normally trivial to compile and install, so this isn't a big problem. * If you get an error message where the hostname is not shown or where the hostname is an IP, even if you try to connect with a hostname: shell> mysqladmin -u root -pxxxx -h some-hostname ver Access denied for user: 'root@' (Using password: YES) This means that MySQL got some error when trying to resolve the IP to a hostname. In this case you can execute `mysqladmin flush-hosts' to reset the internal DNS cache. *Note DNS::. Some permanent solutions are: - Try to find out what is wrong with your DNS server and fix this. - Specify IPs instead of hostnames in the MySQL privilege tables. - Start `mysqld' with `--skip-name-resolve'. - Start `mysqld' with `--skip-host-cache'. - Connect to `localhost' if you are running the server and the client on the same machine. - Put the client machine names in `/etc/hosts'. * If `mysql -u root test' works but `mysql -h your_hostname -u root test' results in `Access denied', then you may not have the correct name for your host in the `user' table. A common problem here is that the `Host' value in the user table entry specifies an unqualified hostname, but your system's name resolution routines return a fully qualified domain name (or vice-versa). For example, if you have an entry with host `'tcx'' in the `user' table, but your DNS tells MySQL that your hostname is `'tcx.subnet.se'', the entry will not work. Try adding an entry to the `user' table that contains the IP number of your host as the `Host' column value. (Alternatively, you could add an entry to the `user' table with a `Host' value that contains a wildcard--for example, `'tcx.%''. However, use of hostnames ending with `%' is *insecure* and is *not* recommended!) * If `mysql -u user_name test' works but `mysql -u user_name other_db_name' doesn't work, you don't have an entry for `other_db_name' listed in the `db' table. * If `mysql -u user_name db_name' works when executed on the server machine, but `mysql -u host_name -u user_name db_name' doesn't work when executed on another client machine, you don't have the client machine listed in the `user' table or the `db' table. * If you can't figure out why you get `Access denied', remove from the `user' table all entries that have `Host' values containing wildcards (entries that contain `%' or `_'). 
A very common error is to insert a new entry with `Host'=`'%'' and `User'=`'some user'', thinking that this will allow you to specify `localhost' to connect from the same machine. The reason that this doesn't work is that the default privileges include an entry with `Host'=`'localhost'' and `User'=`'''. Because that entry has a `Host' value `'localhost'' that is more specific than `'%'', it is used in preference to the new entry when connecting from `localhost'! The correct procedure is to insert a second entry with `Host'=`'localhost'' and `User'=`'some_user'', or to remove the entry with `Host'=`'localhost'' and `User'=`'''. * If you get the following error, you may have a problem with the `db' or `host' table: Access to database denied If the entry selected from the `db' table has an empty value in the `Host' column, make sure there are one or more corresponding entries in the `host' table specifying which hosts the `db' table entry applies to. If you get the error when using the SQL commands `SELECT ... INTO OUTFILE' or `LOAD DATA INFILE', your entry in the `user' table probably doesn't have the `FILE' privilege enabled. * Remember that client programs will use connection parameters specified in configuration files or environment variables. *Note Environment variables::. If a client seems to be sending the wrong default connection parameters when you don't specify them on the command-line, check your environment and the `.my.cnf' file in your home directory. You might also check the system-wide MySQL configuration files, though it is far less likely that client connection parameters will be specified there. *Note Option files::. If you get `Access denied' when you run a client without any options, make sure you haven't specified an old password in any of your option files! *Note Option files::. * If you make changes to the grant tables directly (using an `INSERT' or `UPDATE' statement) and your changes seem to be ignored, remember that you must issue a `FLUSH PRIVILEGES' statement or execute a `mysqladmin flush-privileges' command to cause the server to re-read the privilege tables. Otherwise, your changes have no effect until the next time the server is restarted. Remember that after you set the `root' password with an `UPDATE' command, you won't need to specify it until after you flush the privileges, because the server won't know you've changed the password yet! * If you have access problems with a Perl, PHP, Python, or ODBC program, try to connect to the server with `mysql -u user_name db_name' or `mysql -u user_name -pyour_pass db_name'. If you are able to connect using the `mysql' client, there is a problem with your program and not with the access privileges. (Note that there is no space between `-p' and the password; you can also use the `--password=your_pass' syntax to specify the password. If you use the `-p' option alone, MySQL will prompt you for the password.) * For testing, start the `mysqld' daemon with the `--skip-grant-tables' option. Then you can change the MySQL grant tables and use the `mysqlaccess' script to check whether your modifications have the desired effect. When you are satisfied with your changes, execute `mysqladmin flush-privileges' to tell the `mysqld' server to start using the new grant tables. *Note*: reloading the grant tables overrides the `--skip-grant-tables' option. This allows you to tell the server to begin using the grant tables again without bringing it down and restarting it. 
   * If everything else fails, start the `mysqld' daemon with a debugging option (for example, `--debug=d,general,query'). This will print host and user information about attempted connections, as well as information about each command issued. *Note Making trace files::.

   * If you have any other problems with the MySQL grant tables and feel you must post the problem to the mailing list, always provide a dump of the MySQL grant tables. You can dump the tables with the `mysqldump mysql' command. As always, post your problem using the `mysqlbug' script. *Note Bug reports::. In some cases you may need to restart `mysqld' with `--skip-grant-tables' to run `mysqldump'.

MySQL User Account Management
=============================

`GRANT' and `REVOKE' Syntax
---------------------------

     GRANT priv_type [(column_list)] [, priv_type [(column_list)] ...]
         ON {tbl_name | * | *.* | db_name.*}
         TO user_name [IDENTIFIED BY [PASSWORD] 'password']
             [, user_name [IDENTIFIED BY 'password'] ...]
         [REQUIRE
             NONE |
             [{SSL| X509}]
             [CIPHER cipher [AND]]
             [ISSUER issuer [AND]]
             [SUBJECT subject]]
         [WITH [GRANT OPTION | MAX_QUERIES_PER_HOUR # |
                MAX_UPDATES_PER_HOUR # | MAX_CONNECTIONS_PER_HOUR #]]

     REVOKE priv_type [(column_list)] [, priv_type [(column_list)] ...]
         ON {tbl_name | * | *.* | db_name.*}
         FROM user_name [, user_name ...]

`GRANT' is implemented in MySQL Version 3.22.11 or later. For earlier MySQL versions, the `GRANT' statement does nothing.

The `GRANT' and `REVOKE' commands allow system administrators to create users and grant and revoke rights to MySQL users at four privilege levels:

*Global level*
     Global privileges apply to all databases on a given server. These privileges are stored in the `mysql.user' table.

*Database level*
     Database privileges apply to all tables in a given database. These privileges are stored in the `mysql.db' and `mysql.host' tables.

*Table level*
     Table privileges apply to all columns in a given table. These privileges are stored in the `mysql.tables_priv' table.

*Column level*
     Column privileges apply to single columns in a given table. These privileges are stored in the `mysql.columns_priv' table.

If you grant privileges to a user that doesn't exist, that user is created. For examples of how `GRANT' works, see *Note Adding users::.

For the `GRANT' and `REVOKE' statements, `priv_type' may be specified as any of the following:

     `ALL [PRIVILEGES]'        Sets all simple privileges except `WITH GRANT OPTION'
     `ALTER'                   Allows usage of `ALTER TABLE'
     `CREATE'                  Allows usage of `CREATE TABLE'
     `CREATE TEMPORARY TABLES' Allows usage of `CREATE TEMPORARY TABLE'
     `DELETE'                  Allows usage of `DELETE'
     `DROP'                    Allows usage of `DROP TABLE'.
     `EXECUTE'                 Allows the user to run stored procedures (for MySQL 5.0)
     `FILE'                    Allows usage of `SELECT ... INTO OUTFILE' and
                               `LOAD DATA INFILE'.
     `INDEX'                   Allows usage of `CREATE INDEX' and `DROP INDEX'
     `INSERT'                  Allows usage of `INSERT'
     `LOCK TABLES'             Allows usage of `LOCK TABLES' on tables for which
                               one has the `SELECT' privilege.
     `PROCESS'                 Allows usage of `SHOW FULL PROCESSLIST'
     `REFERENCES'              For the future
     `RELOAD'                  Allows usage of `FLUSH'
     `REPLICATION CLIENT'      Gives the right to the user to ask where the
                               slaves/masters are.
     `REPLICATION SLAVE'       Needed for the replication slaves (to read
                               binlogs from master).
     `SELECT'                  Allows usage of `SELECT'
     `SHOW DATABASES'          `SHOW DATABASES' shows all databases.
     `SHUTDOWN'                Allows usage of `mysqladmin shutdown'
     `SUPER'                   Allows one connection (once) even if
                               `max_connections' is reached, and allows execution
                               of the commands `CHANGE MASTER', `KILL thread',
                               `mysqladmin debug', `PURGE MASTER LOGS' and
                               `SET GLOBAL'
     `UPDATE'                  Allows usage of `UPDATE'
     `USAGE'                   Synonym for "no privileges."
     `GRANT OPTION'            Synonym for `WITH GRANT OPTION'

`USAGE' can be used when you want to create a user that has no privileges.

The privileges `CREATE TEMPORARY TABLES', `EXECUTE', `LOCK TABLES', `REPLICATION ...', `SHOW DATABASES' and `SUPER' are new in version 4.0.2. To use these new privileges after upgrading to 4.0.2, you have to run the `mysql_fix_privilege_tables' script. In older MySQL versions, the `PROCESS' privilege gives the same rights as the new `SUPER' privilege.

To revoke the `GRANT' privilege from a user, use a `priv_type' value of `GRANT OPTION':

     mysql> REVOKE GRANT OPTION ON ... FROM ...;

The only `priv_type' values you can specify for a table are `SELECT', `INSERT', `UPDATE', `DELETE', `CREATE', `DROP', `GRANT OPTION', `INDEX', and `ALTER'. The only `priv_type' values you can specify for a column (that is, when you use a `column_list' clause) are `SELECT', `INSERT', and `UPDATE'.

You can set global privileges by using `ON *.*' syntax. You can set database privileges by using `ON db_name.*' syntax. If you specify `ON *' and you have a current database, you will set the privileges for that database. (*Warning*: if you specify `ON *' and you *don't* have a current database, you will affect the global privileges!)

*Please note*: the `_' and `%' wildcards are allowed when specifying database names in `GRANT' commands. This means that if you wish to use for instance a `_' character as part of a database name, you should specify it as `\_' in the `GRANT' command, to prevent the user from being able to access additional databases matching the wildcard pattern, e.g., `GRANT ... ON `foo\_bar`.* TO ...'.

In order to accommodate granting rights to users from arbitrary hosts, MySQL supports specifying the `user_name' value in the form `user@host'. If you want to specify a `user' string containing special characters (such as `-'), or a `host' string containing special characters or wildcard characters (such as `%'), you can quote the user or host name (for example, `'test-user'@'test-hostname''). You can specify wildcards in the hostname. For example, `user@'%.loc.gov'' applies to `user' for any host in the `loc.gov' domain, and `user@'144.155.166.%'' applies to `user' for any host in the `144.155.166' class C subnet. The simple form `user' is a synonym for `user@"%"'. MySQL doesn't support wildcards in user names. Anonymous users are defined by inserting entries with `User=''' into the `mysql.user' table or by creating a user with an empty name with the `GRANT' command.

*Note*: if you allow anonymous users to connect to the MySQL server, you should also grant privileges to all local users as `user@localhost' because otherwise the anonymous user entry for the local host in the `mysql.user' table will be used when the user tries to log into the MySQL server from the local machine! You can verify if this applies to you by executing this query:

     mysql> SELECT Host,User FROM mysql.user WHERE User='';

For the moment, `GRANT' only supports host, table, database, and column names up to 60 characters long. A user name can be up to 16 characters. The privileges for a table or column are formed from the logical OR of the privileges at each of the four privilege levels.
For example, if the `mysql.user' table specifies that a user has a global `SELECT' privilege, this can't be denied by an entry at the database, table, or column level. The privileges for a column can be calculated as follows: global privileges OR (database privileges AND host privileges) OR table privileges OR column privileges In most cases, you grant rights to a user at only one of the privilege levels, so life isn't normally as complicated as above. The details of the privilege-checking procedure are presented in *Note Privilege system::. If you grant privileges for a user/hostname combination that does not exist in the `mysql.user' table, an entry is added and remains there until deleted with a `DELETE' command. In other words, `GRANT' may create `user' table entries, but `REVOKE' will not remove them; you must do that explicitly using `DELETE'. In MySQL Version 3.22.12 or later, if a new user is created or if you have global grant privileges, the user's password will be set to the password specified by the `IDENTIFIED BY' clause, if one is given. If the user already had a password, it is replaced by the new one. If you don't want to send the password in clear text you can use the `PASSWORD' option followed by a scrambled password from SQL function `PASSWORD()' or the C API function `make_scrambled_password(char *to, const char *password)'. *Warning*: if you create a new user but do not specify an `IDENTIFIED BY' clause, the user has no password. This is insecure. Passwords can also be set with the `SET PASSWORD' command. *Note `SET': SET OPTION. If you grant privileges for a database, an entry in the `mysql.db' table is created if needed. When all privileges for the database have been removed with `REVOKE', this entry is deleted. If a user doesn't have any privileges on a table, the table is not displayed when the user requests a list of tables (for example, with a `SHOW TABLES' statement). The `WITH GRANT OPTION' clause gives the user the ability to give to other users any privileges the user has at the specified privilege level. You should be careful to whom you give the `GRANT' privilege, as two users with different privileges may be able to join privileges! `MAX_QUERIES_PER_HOUR #', `MAX_UPDATES_PER_HOUR #' and `MAX_CONNECTIONS_PER_HOUR #' are new in MySQL version 4.0.2. These options limit the number of queries/updates and logins the user can do during one hour. If `#' is 0 (default), then this means that there are no limitations for that user. *Note User resources::. Note: to specify any of these options for an existing user without adding other additional privileges, use `GRANT USAGE ... WITH MAX_...'. You cannot grant another user a privilege you don't have yourself; the `GRANT' privilege allows you to give away only those privileges you possess. Be aware that when you grant a user the `GRANT' privilege at a particular privilege level, any privileges the user already possesses (or is given in the future!) at that level are also grantable by that user. Suppose you grant a user the `INSERT' privilege on a database. If you then grant the `SELECT' privilege on the database and specify `WITH GRANT OPTION', the user can give away not only the `SELECT' privilege, but also `INSERT'. If you then grant the `UPDATE' privilege to the user on the database, the user can give away the `INSERT', `SELECT' and `UPDATE'. You should not grant `ALTER' privileges to a normal user. If you do that, the user can try to subvert the privilege system by renaming tables! 
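The `WITH GRANT OPTION' escalation described above can be made concrete with a short sketch (the `helper' account and `mydb' database names are made up for illustration):

     mysql> GRANT INSERT ON mydb.* TO helper@localhost IDENTIFIED BY 'some_pass';
     mysql> GRANT SELECT ON mydb.* TO helper@localhost WITH GRANT OPTION;

After the second statement, `helper' holds the `GRANT' privilege for `mydb', so it can pass on both its `SELECT' and its `INSERT' privilege (and any privilege it is given on `mydb' later) to other accounts.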
Note that if you are using table or column privileges for even one user, the server examines table and column privileges for all users and this will slow down MySQL a bit.

When `mysqld' starts, all privileges are read into memory. Database, table, and column privileges take effect at once, and user-level privileges take effect the next time the user connects. Modifications to the grant tables that you perform using `GRANT' or `REVOKE' are noticed by the server immediately. If you modify the grant tables manually (using `INSERT', `UPDATE', etc.), you should execute a `FLUSH PRIVILEGES' statement or run `mysqladmin flush-privileges' to tell the server to reload the grant tables. *Note Privilege changes::.

The biggest differences between the ANSI SQL and MySQL versions of `GRANT' are:

   * In MySQL, privileges are given for a username + hostname combination and not only for a username.

   * ANSI SQL doesn't have global or database-level privileges, and ANSI SQL doesn't support all privilege types that MySQL supports. MySQL doesn't support the ANSI SQL `TRIGGER' or `UNDER' privileges.

   * ANSI SQL privileges are structured in a hierarchical manner. If you remove a user, all privileges the user has granted are revoked. In MySQL the granted privileges are not automatically revoked; you have to revoke them yourself if needed.

   * In MySQL, if you have the `INSERT' privilege on only some of the columns in a table, you can execute `INSERT' statements on the table; the columns for which you don't have the `INSERT' privilege will be set to their default values. ANSI SQL requires you to have the `INSERT' privilege on all columns.

   * When you drop a table in ANSI SQL, all privileges for the table are revoked. If you revoke a privilege in ANSI SQL, all privileges that were granted based on this privilege are also revoked. In MySQL, privileges can be dropped only with explicit `REVOKE' commands or by manipulating the MySQL grant tables.

For a description of using `REQUIRE', see *Note Secure connections::.

MySQL User Names and Passwords
------------------------------

There are several distinctions between the way user names and passwords are used by MySQL and the way they are used by Unix or Windows:

   * User names, as used by MySQL for authentication purposes, have nothing to do with Unix user names (login names) or Windows user names. Most MySQL clients by default try to log in using the current Unix user name as the MySQL user name, but that is for convenience only. Client programs allow a different name to be specified with the `-u' or `--user' options. This means that you can't make a database secure in any way unless all MySQL user names have passwords. Anyone may attempt to connect to the server using any name, and they will succeed if they specify any name that doesn't have a password.

   * MySQL user names can be up to 16 characters long; Unix user names typically are limited to 8 characters.

   * MySQL passwords have nothing to do with Unix passwords. There is no necessary connection between the password you use to log in to a Unix machine and the password you use to access a database on that machine.

   * MySQL encrypts passwords using a different algorithm than the one used during the Unix login process. See the descriptions of the `PASSWORD()' and `ENCRYPT()' functions in *Note Miscellaneous functions::. Note that even though the password is stored 'scrambled', knowing your 'scrambled' password is enough to be able to connect to the MySQL server!
From version 4.1, MySQL employs a different password and login mechanism that is secure even if TCP/IP packets are sniffed and/or the mysql database is captured. MySQL users and their privileges are normally created with the `GRANT' command. *Note GRANT::. When you login to a MySQL server with a command-line client you should specify the password with `--password=your-password'. *Note Connecting::. mysql --user=monty --password=guess database_name If you want the client to prompt for a password, you should use `--password' without any argument mysql --user=monty --password database_name or the short form: mysql -u monty -p database_name Note that in the last example the password is *not* 'database_name'. If you want to use the `-p' option to supply a password you should do so like this: mysql -u monty -pguess database_name On some systems, the library call that MySQL uses to prompt for a password will automatically cut the password to 8 characters. Internally MySQL doesn't have any limit for the length of the password. When Privilege Changes Take Effect ---------------------------------- When `mysqld' starts, all grant table contents are read into memory and become effective at that point. Modifications to the grant tables that you perform using `GRANT', `REVOKE', or `SET PASSWORD' are noticed by the server immediately. If you modify the grant tables manually (using `INSERT', `UPDATE', etc.), you should execute a `FLUSH PRIVILEGES' statement or run `mysqladmin flush-privileges' or `mysqladmin reload' to tell the server to reload the grant tables. Otherwise, your changes will have _no effect_ until you restart the server. If you change the grant tables manually but forget to reload the privileges, you will be wondering why your changes don't seem to make any difference! When the server notices that the grant tables have been changed, existing client connections are affected as follows: * Table and column privilege changes take effect with the client's next request. * Database privilege changes take effect at the next `USE db_name' command. * Global privilege changes and password changes take effect the next time the client connects. Setting Up the Initial MySQL Privileges --------------------------------------- After installing MySQL, you set up the initial access privileges by running `scripts/mysql_install_db'. *Note Quick install::. The `mysql_install_db' script starts up the `mysqld' server, then initialises the grant tables to contain the following set of privileges: * The MySQL `root' user is created as a superuser who can do anything. Connections must be made from the local host. *Note*: The initial `root' password is empty, so anyone can connect as `root' _without a password_ and be granted all privileges. * An anonymous user is created that can do anything with databases that have a name of `'test'' or starting with `'test_''. Connections must be made from the local host. This means any local user can connect without a password and be treated as the anonymous user. * Other privileges are denied. For example, normal users can't use `mysqladmin shutdown' or `mysqladmin processlist'. *Note*: the default privileges are different for Windows. *Note Windows running::. Because your installation is initially wide open, one of the first things you should do is specify a password for the MySQL `root' user. 
You can do this as follows (note that you specify the password using the `PASSWORD()' function): shell> mysql -u root mysql mysql> SET PASSWORD FOR root@localhost=PASSWORD('new_password'); If you know what you are doing, you can also directly manipulate the privilege tables: shell> mysql -u root mysql mysql> UPDATE user SET Password=PASSWORD('new_password') -> WHERE user='root'; mysql> FLUSH PRIVILEGES; Another way to set the password is by using the `mysqladmin' command: shell> mysqladmin -u root password new_password Only users with write/update access to the `mysql' database can change the password for others users. All normal users (not anonymous ones) can only change their own password with either of the above commands or with `SET PASSWORD=PASSWORD('new password')'. Note that if you update the password in the `user' table directly using the first method, you must tell the server to re-read the grant tables (with `FLUSH PRIVILEGES'), because the change will go unnoticed otherwise. Once the `root' password has been set, thereafter you must supply that password when you connect to the server as `root'. You may wish to leave the `root' password blank so that you don't need to specify it while you perform additional setup or testing. However, be sure to set it before using your installation for any real production work. See the `scripts/mysql_install_db' script to see how it sets up the default privileges. You can use this as a basis to see how to add other users. If you want the initial privileges to be different from those just described above, you can modify `mysql_install_db' before you run it. To re-create the grant tables completely, remove all the `.frm', `.MYI', and `.MYD' files in the directory containing the `mysql' database. (This is the directory named `mysql' under the database directory, which is listed when you run `mysqld --help'.) Then run the `mysql_install_db' script, possibly after editing it first to have the privileges you want. *Note*: for MySQL versions older than Version 3.22.10, you should not delete the `.frm' files. If you accidentally do this, you should copy them back from your MySQL distribution before running `mysql_install_db'. Adding New Users to MySQL ------------------------- You can add users two different ways: by using `GRANT' statements or by manipulating the MySQL grant tables directly. The preferred method is to use `GRANT' statements, because they are more concise and less error-prone. *Note GRANT::. There are also a lot of contributed programs like `phpmyadmin' that can be used to create and administrate users. The following examples show how to use the `mysql' client to set up new users. These examples assume that privileges are set up according to the defaults described in the previous section. This means that to make changes, you must be on the same machine where `mysqld' is running, you must connect as the MySQL `root' user, and the `root' user must have the `INSERT' privilege for the `mysql' database and the `RELOAD' administrative privilege. Also, if you have changed the `root' user password, you must specify it for the `mysql' commands here. 
You can add new users by issuing `GRANT' statements: shell> mysql --user=root mysql mysql> GRANT ALL PRIVILEGES ON *.* TO monty@localhost -> IDENTIFIED BY 'some_pass' WITH GRANT OPTION; mysql> GRANT ALL PRIVILEGES ON *.* TO monty@"%" -> IDENTIFIED BY 'some_pass' WITH GRANT OPTION; mysql> GRANT RELOAD,PROCESS ON *.* TO admin@localhost; mysql> GRANT USAGE ON *.* TO dummy@localhost; These `GRANT' statements set up three new users: `monty' A full superuser who can connect to the server from anywhere, but who must use a password `'some_pass'' to do so. Note that we must issue `GRANT' statements for both `monty@localhost' and `monty@"%"'. If we don't add the entry with `localhost', the anonymous user entry for `localhost' that is created by `mysql_install_db' will take precedence when we connect from the local host, because it has a more specific `Host' field value and thus comes earlier in the `user' table sort order. `admin' A user who can connect from `localhost' without a password and who is granted the `RELOAD' and `PROCESS' administrative privileges. This allows the user to execute the `mysqladmin reload', `mysqladmin refresh', and `mysqladmin flush-*' commands, as well as `mysqladmin processlist' . No database-related privileges are granted. (They can be granted later by issuing additional `GRANT' statements.) `dummy' A user who can connect without a password, but only from the local host. The global privileges are all set to `'N''the `USAGE' privilege type allows you to create a user with no privileges. It is assumed that you will grant database-specific privileges later. You can also add the same user access information directly by issuing `INSERT' statements and then telling the server to reload the grant tables: shell> mysql --user=root mysql mysql> INSERT INTO user VALUES('localhost','monty',PASSWORD('some_pass'), -> 'Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y'); mysql> INSERT INTO user VALUES('%','monty',PASSWORD('some_pass'), -> 'Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y'); mysql> INSERT INTO user SET Host='localhost',User='admin', -> Reload_priv='Y', Process_priv='Y'; mysql> INSERT INTO user (Host,User,Password) -> VALUES('localhost','dummy',''); mysql> FLUSH PRIVILEGES; Depending on your MySQL version, you may have to use a different number of `'Y'' values above (versions prior to Version 3.22.11 had fewer privilege columns). For the `admin' user, the more readable extended `INSERT' syntax that is available starting with Version 3.22.11 is used. Note that to set up a superuser, you need only create a `user' table entry with the privilege fields set to `'Y''. No `db' or `host' table entries are necessary. The privilege columns in the `user' table were not set explicitly in the last `INSERT' statement (for the `dummy' user), so those columns are assigned the default value of `'N''. This is the same thing that `GRANT USAGE' does. The following example adds a user `custom' who can connect from hosts `localhost', `server.domain', and `whitehouse.gov'. He wants to access the `bankaccount' database only from `localhost', the `expenses' database only from `whitehouse.gov', and the `customer' database from all three hosts. He wants to use the password `stupid' from all three hosts. 
To set up this user's privileges using `GRANT' statements, run these commands:

     shell> mysql --user=root mysql
     mysql> GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP
         ->     ON bankaccount.*
         ->     TO custom@localhost
         ->     IDENTIFIED BY 'stupid';
     mysql> GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP
         ->     ON expenses.*
         ->     TO custom@whitehouse.gov
         ->     IDENTIFIED BY 'stupid';
     mysql> GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP
         ->     ON customer.*
         ->     TO custom@'%'
         ->     IDENTIFIED BY 'stupid';

The reason that we issue separate `GRANT' statements for the user `custom' is that we want to give the user access to MySQL both from the local machine with Unix sockets and from the remote machine `whitehouse.gov' over TCP/IP.

To set up the user's privileges by modifying the grant tables directly, run these commands (note the `FLUSH PRIVILEGES' at the end):

     shell> mysql --user=root mysql
     mysql> INSERT INTO user (Host,User,Password)
         -> VALUES('localhost','custom',PASSWORD('stupid'));
     mysql> INSERT INTO user (Host,User,Password)
         -> VALUES('server.domain','custom',PASSWORD('stupid'));
     mysql> INSERT INTO user (Host,User,Password)
         -> VALUES('whitehouse.gov','custom',PASSWORD('stupid'));
     mysql> INSERT INTO db
         -> (Host,Db,User,Select_priv,Insert_priv,Update_priv,Delete_priv,
         -> Create_priv,Drop_priv)
         -> VALUES
         -> ('localhost','bankaccount','custom','Y','Y','Y','Y','Y','Y');
     mysql> INSERT INTO db
         -> (Host,Db,User,Select_priv,Insert_priv,Update_priv,Delete_priv,
         -> Create_priv,Drop_priv)
         -> VALUES
         -> ('whitehouse.gov','expenses','custom','Y','Y','Y','Y','Y','Y');
     mysql> INSERT INTO db
         -> (Host,Db,User,Select_priv,Insert_priv,Update_priv,Delete_priv,
         -> Create_priv,Drop_priv)
         -> VALUES('%','customer','custom','Y','Y','Y','Y','Y','Y');
     mysql> FLUSH PRIVILEGES;

The first three `INSERT' statements add `user' table entries that allow user `custom' to connect from the various hosts with the given password, but grant no permissions to him (all privileges are set to the default value of `'N''). The next three `INSERT' statements add `db' table entries that grant privileges to `custom' for the `bankaccount', `expenses', and `customer' databases, but only when accessed from the proper hosts. As usual, when the grant tables are modified directly, the server must be told to reload them (with `FLUSH PRIVILEGES') so that the privilege changes take effect.

If you want to give a specific user access from any machine in a given domain, you can issue a `GRANT' statement like the following:

     mysql> GRANT ...
         ->     ON *.*
         ->     TO myusername@"%.mydomainname.com"
         ->     IDENTIFIED BY 'mypassword';

To do the same thing by modifying the grant tables directly, do this:

     mysql> INSERT INTO user VALUES ('%.mydomainname.com', 'myusername',
         ->     PASSWORD('mypassword'),...);
     mysql> FLUSH PRIVILEGES;

Limiting user resources
-----------------------

Starting from MySQL 4.0.2, you can limit certain resources per user. Previously, the only available method of limiting usage of MySQL server resources was setting the `max_user_connections' startup variable to a non-zero value. But this method is strictly global and does not allow for management of individual users, which could be of particular interest to Internet Service Providers. Therefore, management of three resources is introduced on the individual user level:

   * Number of all queries per hour: All commands that can be run by a user.

   * Number of all updates per hour: Any command that changes any table or database.

   * Number of connections made per hour: New connections opened per hour.
A user in this context is a single entry in the `user' table, which is uniquely identified by its `user' and `host' columns.

By default, users are not limited in their use of the above resources unless limits are granted to them. These limits can be granted *only* via a global `GRANT (*.*)', using this syntax:

     GRANT ... WITH MAX_QUERIES_PER_HOUR N1
                    MAX_UPDATES_PER_HOUR N2
                    MAX_CONNECTIONS_PER_HOUR N3;

You can specify any combination of the above resources. N1, N2 and N3 are integers and stand for counts per hour.

If a user reaches any of these limits within one hour, the connection is terminated or refused and an appropriate error message is issued.

Current usage values for a particular user can be flushed (set to zero) by issuing a `GRANT' statement with any of the above clauses, including a `GRANT' statement with the current values. Also, current values for all users are flushed if privileges are reloaded (in the server or using `mysqladmin reload') or if the `FLUSH USER_RESOURCES' command is issued.

The feature is enabled as soon as a single user is granted any of the limiting `GRANT' clauses. As a prerequisite for enabling this feature, the `user' table in the `mysql' database must contain the additional columns, as defined in the table creation scripts `mysql_install_db' and `mysql_install_db.sh' in the `scripts' subdirectory.

Setting Up Passwords
--------------------

In most cases you should use `GRANT' to set up your users/passwords, so the following applies only to advanced users. *Note `GRANT': GRANT.

The examples in the preceding sections illustrate an important principle: when you store a non-empty password using `INSERT' or `UPDATE' statements, you must use the `PASSWORD()' function to encrypt it. This is because the `user' table stores passwords in encrypted form, not as plaintext. If you forget that fact, you are likely to attempt to set passwords like this:

     shell> mysql -u root mysql
     mysql> INSERT INTO user (Host,User,Password)
         -> VALUES('%','jeffrey','biscuit');
     mysql> FLUSH PRIVILEGES;

The result is that the plaintext value `'biscuit'' is stored as the password in the `user' table. When the user `jeffrey' attempts to connect to the server using this password, the `mysql' client encrypts it with `PASSWORD()', generates an authentication vector based on the *encrypted* password and a random number obtained from the server, and sends the result to the server. The server uses the `Password' value in the `user' table (which is the *unencrypted* value `'biscuit'') to perform the same calculations and compares the results. The comparison fails and the server rejects the connection:

     shell> mysql -u jeffrey -pbiscuit test
     Access denied

Passwords must be encrypted when they are inserted in the `user' table, so the `INSERT' statement should have been specified like this instead:

     mysql> INSERT INTO user (Host,User,Password)
         -> VALUES('%','jeffrey',PASSWORD('biscuit'));

You must also use the `PASSWORD()' function when you use `SET PASSWORD' statements:

     mysql> SET PASSWORD FOR jeffrey@"%" = PASSWORD('biscuit');

If you set passwords using the `GRANT ... IDENTIFIED BY' statement or the `mysqladmin password' command, the `PASSWORD()' function is unnecessary. They both take care of encrypting the password for you, so you would specify a password of `'biscuit'' like this:

     mysql> GRANT USAGE ON *.* TO jeffrey@"%" IDENTIFIED BY 'biscuit';

or

     shell> mysqladmin -u jeffrey password biscuit

*Note*: `PASSWORD()' is different from Unix password encryption. *Note User names::.
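If you want to verify that a password really was stored in encrypted form, you can (as `root') inspect the relevant row in the `user' table. This is just a quick sanity check, assuming the user `jeffrey' from the example above:

     mysql> SELECT Host, User, Password FROM user WHERE User='jeffrey';

The `Password' column should show a scrambled string rather than the plaintext word `biscuit'; if you see the plaintext, the password was stored without `PASSWORD()' and must be set again.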
Keeping Your Password Secure ---------------------------- It is inadvisable to specify your password in a way that exposes it to discovery by other users. The methods you can use to specify your password when you run client programs are listed here, along with an assessment of the risks of each method: * Never give a normal user access to the `mysql.user' table. Knowing the encrypted password for a user makes it possible to login as this user. The passwords are only scrambled so that one shouldn't be able to see the real password you used (if you happen to use a similar password with your other applications). * Use a `-pyour_pass' or `--password=your_pass' option on the command line. This is convenient but insecure, because your password becomes visible to system status programs (such as `ps') that may be invoked by other users to display command-lines. (MySQL clients typically overwrite the command-line argument with zeroes during their initialisation sequence, but there is still a brief interval during which the value is visible.) * Use a `-p' or `--password' option (with no `your_pass' value specified). In this case, the client program solicits the password from the terminal: shell> mysql -u user_name -p Enter password: ******** The `*' characters represent your password. It is more secure to enter your password this way than to specify it on the command-line because it is not visible to other users. However, this method of entering a password is suitable only for programs that you run interactively. If you want to invoke a client from a script that runs non-interactively, there is no opportunity to enter the password from the terminal. On some systems, you may even find that the first line of your script is read and interpreted (incorrectly) as your password! * Store your password in a configuration file. For example, you can list your password in the `[client]' section of the `.my.cnf' file in your home directory: [client] password=your_pass If you store your password in `.my.cnf', the file should not be group or world readable or writable. Make sure the file's access mode is `400' or `600'. *Note Option files::. * You can store your password in the `MYSQL_PWD' environment variable, but this method must be considered extremely insecure and should not be used. Some versions of `ps' include an option to display the environment of running processes; your password will be in plain sight for all to see if you set `MYSQL_PWD'. Even on systems without such a version of `ps', it is unwise to assume there is no other method to observe process environments. *Note Environment variables::. All in all, the safest methods are to have the client program prompt for the password or to specify the password in a properly protected `.my.cnf' file. Using Secure Connections ------------------------ Basics ...... Beginning with version 4.0.0, MySQL has support for SSL encrypted connections. To understand how MySQL uses SSL, it's necessary to explain some basic SSL and X509 concepts. People who are already familiar with them can skip this part. By default, MySQL uses unencrypted connections between the client and the server. This means that someone could watch all your traffic and look at the data being sent or received. They could even change the data while it is in transit between client and server. Sometimes you need to move information over public networks in a secure fashion; in such cases, using an unencrypted connection is unacceptable. 
SSL is a protocol that uses different encryption algorithms to ensure that data received over a public network can be trusted. It has mechanisms to detect any change, loss or replay of data. SSL also incorporates algorithms to recognise and provide identity verification using the X509 standard.

Encryption is a way to make any kind of data unreadable to unauthorised parties. In fact, today's practice requires many additional security properties from encryption algorithms. They should resist many kinds of known attacks, such as reordering encrypted messages or replaying data twice.

X509 is a standard that makes it possible to identify someone on the Internet. It is most commonly used in e-commerce applications. In basic terms, there should be some company (called a "Certificate Authority") that assigns electronic certificates to anyone who needs them. Certificates rely on asymmetric encryption algorithms that have two encryption keys (a public key and a secret key). A certificate owner can prove his identity by showing his certificate to the other party. A certificate contains its owner's public key. Any data encrypted with this public key can be decrypted only using the corresponding secret key, which is held by the owner of the certificate.

MySQL doesn't use encrypted connections by default, because doing so would make the client/server protocol much slower. Any kind of additional functionality requires the computer to do additional work, and encrypting data is a CPU-intensive operation that requires time and can delay MySQL's main tasks. By default MySQL is tuned to be as fast as possible.

If you need more information about SSL, X509, or encryption, you should use your favourite Internet search engine and search for the keywords in which you are interested.

Requirements
............

To get secure connections to work with MySQL, you must do the following:

  1. Install the OpenSSL library. We have tested MySQL with OpenSSL 0.9.6. `http://www.openssl.org/'.

  2. Configure MySQL with `--with-vio --with-openssl'.

  3. If you are using an old MySQL installation, you have to update your `mysql.user' table with some new SSL-related columns. You can do this by running the `mysql_fix_privilege_tables.sh' script. This is necessary if your grant tables date from a version prior to MySQL 4.0.0.

  4. You can check whether a running `mysqld' server supports OpenSSL by checking whether `SHOW VARIABLES LIKE 'have_openssl'' returns `YES'.

Setting Up SSL Certificates for MySQL
.....................................

Here is an example for setting up SSL certificates for MySQL:

     DIR=`pwd`/openssl
     PRIV=$DIR/private

     mkdir $DIR $PRIV $DIR/newcerts
     cp /usr/share/ssl/openssl.cnf $DIR
     replace ./demoCA $DIR -- $DIR/openssl.cnf

     # Create necessary files: $database, $serial and $new_certs_dir
     # directory (optional)

     touch $DIR/index.txt
     echo "01" > $DIR/serial

     #
     # Generation of Certificate Authority(CA)
     #

     openssl req -new -x509 -keyout $PRIV/cakey.pem -out $DIR/cacert.pem \
         -config $DIR/openssl.cnf

     # Sample output:
     # Using configuration from /home/monty/openssl/openssl.cnf
     # Generating a 1024 bit RSA private key
     # ................++++++
     # .........++++++
     # writing new private key to '/home/monty/openssl/private/cakey.pem'
     # Enter PEM pass phrase:
     # Verifying password - Enter PEM pass phrase:
     # -----
     # You are about to be asked to enter information that will be incorporated
     # into your certificate request.
     # What you are about to enter is what is called a Distinguished Name or a DN.
# There are quite a few fields but you can leave some blank # For some fields there will be a default value, # If you enter '.', the field will be left blank. # ----- # Country Name (2 letter code) [AU]:FI # State or Province Name (full name) [Some-State]:. # Locality Name (eg, city) []: # Organization Name (eg, company) [Internet Widgits Pty Ltd]:MySQL AB # Organizational Unit Name (eg, section) []: # Common Name (eg, YOUR name) []:MySQL admin # Email Address []: # # Create server request and key # openssl req -new -keyout $DIR/server-key.pem -out \ $DIR/server-req.pem -days 3600 -config $DIR/openssl.cnf # Sample output: # Using configuration from /home/monty/openssl/openssl.cnf # Generating a 1024 bit RSA private key # ..++++++ # ..........++++++ # writing new private key to '/home/monty/openssl/server-key.pem' # Enter PEM pass phrase: # Verifying password - Enter PEM pass phrase: # ----- # You are about to be asked to enter information that will be incorporated # into your certificate request. # What you are about to enter is what is called a Distinguished Name or a DN. # There are quite a few fields but you can leave some blank # For some fields there will be a default value, # If you enter '.', the field will be left blank. # ----- # Country Name (2 letter code) [AU]:FI # State or Province Name (full name) [Some-State]:. # Locality Name (eg, city) []: # Organization Name (eg, company) [Internet Widgits Pty Ltd]:MySQL AB # Organizational Unit Name (eg, section) []: # Common Name (eg, YOUR name) []:MySQL server # Email Address []: # # Please enter the following 'extra' attributes # to be sent with your certificate request # A challenge password []: # An optional company name []: # # Remove the passphrase from the key (optional) # openssl rsa -in $DIR/server-key.pem -out $DIR/server-key.pem # # Sign server cert # openssl ca -policy policy_anything -out $DIR/server-cert.pem \ -config $DIR/openssl.cnf -infiles $DIR/server-req.pem # Sample output: # Using configuration from /home/monty/openssl/openssl.cnf # Enter PEM pass phrase: # Check that the request matches the signature # Signature ok # The Subjects Distinguished Name is as follows # countryName :PRINTABLE:'FI' # organizationName :PRINTABLE:'MySQL AB' # commonName :PRINTABLE:'MySQL admin' # Certificate is to be certified until Sep 13 14:22:46 2003 GMT (365 days) # Sign the certificate? [y/n]:y # # # 1 out of 1 certificate requests certified, commit? [y/n]y # Write out database with 1 new entries # Data Base Updated # # Create client request and key # openssl req -new -keyout $DIR/client-key.pem -out \ $DIR/client-req.pem -days 3600 -config $DIR/openssl.cnf # Sample output: # Using configuration from /home/monty/openssl/openssl.cnf # Generating a 1024 bit RSA private key # .....................................++++++ # .............................................++++++ # writing new private key to '/home/monty/openssl/client-key.pem' # Enter PEM pass phrase: # Verifying password - Enter PEM pass phrase: # ----- # You are about to be asked to enter information that will be incorporated # into your certificate request. # What you are about to enter is what is called a Distinguished Name or a DN. # There are quite a few fields but you can leave some blank # For some fields there will be a default value, # If you enter '.', the field will be left blank. # ----- # Country Name (2 letter code) [AU]:FI # State or Province Name (full name) [Some-State]:. 
# Locality Name (eg, city) []: # Organization Name (eg, company) [Internet Widgits Pty Ltd]:MySQL AB # Organizational Unit Name (eg, section) []: # Common Name (eg, YOUR name) []:MySQL user # Email Address []: # # Please enter the following 'extra' attributes # to be sent with your certificate request # A challenge password []: # An optional company name []: # # Remove a passphrase from the key (optional) # openssl rsa -in $DIR/client-key.pem -out $DIR/client-key.pem # # Sign client cert # openssl ca -policy policy_anything -out $DIR/client-cert.pem \ -config $DIR/openssl.cnf -infiles $DIR/client-req.pem # Sample output: # Using configuration from /home/monty/openssl/openssl.cnf # Enter PEM pass phrase: # Check that the request matches the signature # Signature ok # The Subjects Distinguished Name is as follows # countryName :PRINTABLE:'FI' # organizationName :PRINTABLE:'MySQL AB' # commonName :PRINTABLE:'MySQL user' # Certificate is to be certified until Sep 13 16:45:17 2003 GMT (365 days) # Sign the certificate? [y/n]:y # # # 1 out of 1 certificate requests certified, commit? [y/n]y # Write out database with 1 new entries # Data Base Updated # # Create a my.cnf file that you can use to test the certificates # cnf="" cnf="$cnf [client]" cnf="$cnf ssl-ca=$DIR/cacert.pem" cnf="$cnf ssl-cert=$DIR/client-cert.pem" cnf="$cnf ssl-key=$DIR/client-key.pem" cnf="$cnf [mysqld]" cnf="$cnf ssl-ca=$DIR/cacert.pem" cnf="$cnf ssl-cert=$DIR/server-cert.pem" cnf="$cnf ssl-key=$DIR/server-key.pem" echo $cnf | replace " " ' ' > $DIR/my.cnf # # To test MySQL mysqld --defaults-file=$DIR/my.cnf & mysql --defaults-file=$DIR/my.cnf You can also test your setup by modifying the above `my.cnf' file to refer to the demo certificates in the mysql-source-dist/SSL direcory. `GRANT' Options ............... MySQL can check X509 certificate attributes in addition to the normal username/password scheme. All the usual options are still required (username, password, IP address mask, database/table name). There are different possibilities to limit connections: * Without any SSL or X509 options, all kind of encrypted/unencrypted connections are allowed if the username and password are valid. * `REQUIRE SSL' option limits the server to allow only SSL encrypted connections. Note that this option can be omitted if there are any ACL records which allow non-SSL connections. mysql> GRANT ALL PRIVILEGES ON test.* TO root@localhost -> IDENTIFIED BY "goodsecret" REQUIRE SSL; * `REQUIRE X509' means that the client should have a valid certificate but we do not care about the exact certificate, issuer or subject. The only restriction is that it should be possible to verify its signature with one of the CA certificates. mysql> GRANT ALL PRIVILEGES ON test.* TO root@localhost -> IDENTIFIED BY "goodsecret" REQUIRE X509; * `REQUIRE ISSUER "issuer"' places a restriction on connection attempts: The client must present a valid X509 certificate issued by CA `"issuer"'. Using X509 certificates always implies encryption, so the `SSL' option is unneccessary. mysql> GRANT ALL PRIVILEGES ON test.* TO root@localhost -> IDENTIFIED BY "goodsecret" -> REQUIRE ISSUER "C=FI, ST=Some-State, L=Helsinki, "> O=MySQL Finland AB, CN=Tonu Samuel/Email=tonu@mysql.com"; * `REQUIRE SUBJECT "subject"' requires clients to have valid X509 certificate with subject `"subject"' on it. If the client presents a certificate that is valid but has a different `"subject"', the connection is disallowed. 
     mysql> GRANT ALL PRIVILEGES ON test.* TO root@localhost
         -> IDENTIFIED BY "goodsecret"
         -> REQUIRE SUBJECT "C=EE, ST=Some-State, L=Tallinn,
         "> O=MySQL demo client certificate,
         "> CN=Tonu Samuel/Email=tonu@mysql.com";

   * `REQUIRE CIPHER "cipher"' is needed to ensure that sufficiently strong ciphers and key lengths are used. SSL itself can be weak if old algorithms with short encryption keys are used. Using this option, we can require a specific cipher method in order to allow a connection.

     mysql> GRANT ALL PRIVILEGES ON test.* TO root@localhost
         -> IDENTIFIED BY "goodsecret"
         -> REQUIRE CIPHER "EDH-RSA-DES-CBC3-SHA";

The `SUBJECT', `ISSUER', and `CIPHER' options can be combined in the `REQUIRE' clause like this:

     mysql> GRANT ALL PRIVILEGES ON test.* TO root@localhost
         -> IDENTIFIED BY "goodsecret"
         -> REQUIRE SUBJECT "C=EE, ST=Some-State, L=Tallinn,
         "> O=MySQL demo client certificate,
         "> CN=Tonu Samuel/Email=tonu@mysql.com"
         -> AND ISSUER "C=FI, ST=Some-State, L=Helsinki,
         "> O=MySQL Finland AB, CN=Tonu Samuel/Email=tonu@mysql.com"
         -> AND CIPHER "EDH-RSA-DES-CBC3-SHA";

Starting from MySQL 4.0.4 the `AND' keyword is optional between `REQUIRE' options. The order of the options does not matter, but no option can be specified twice.

Disaster Prevention and Recovery
================================

Database Backups
----------------

Because MySQL tables are stored as files, it is easy to do a backup. To get a consistent backup, do a `LOCK TABLES' on the relevant tables followed by `FLUSH TABLES' for the tables. *Note `LOCK TABLES': LOCK TABLES. *Note `FLUSH': FLUSH. You only need a read lock; this allows other threads to continue to query the tables while you are making a copy of the files in the database directory. The `FLUSH TABLES' is needed to ensure that all active index pages are written to disk before you start the backup.

Starting from 3.23.56 and 4.0.12, `BACKUP TABLE' will not allow you to overwrite existing files, as this would be a security risk.

If you want to make an SQL-level backup of a table, you can use `SELECT INTO OUTFILE' or `BACKUP TABLE'. *Note SELECT::. *Note BACKUP TABLE::.

Another way to back up a database is to use the `mysqldump' program or the `mysqlhotcopy' script. *Note `mysqldump': mysqldump. *Note `mysqlhotcopy': mysqlhotcopy.

  1. Do a full backup of your databases:

          shell> mysqldump --tab=/path/to/some/dir --opt --all

     or

          shell> mysqlhotcopy database /path/to/some/dir

     You can also simply copy all table files (`*.frm', `*.MYD', and `*.MYI' files) as long as the server isn't updating anything. The `mysqlhotcopy' script uses this method.

  2. Stop `mysqld' if it's running, then start it with the `--log-update[=file_name]' option. *Note Update log::. The update log file(s) provide you with the information you need to replicate changes to the database that are made subsequent to the point at which you executed `mysqldump'.

If you have to restore something, try to recover your tables using `REPAIR TABLE' or `myisamchk -r' first. That should work in 99.9% of all cases. If `myisamchk' fails, try the following procedure (this will only work if you have started MySQL with `--log-update', *note Update log::):

  1. Restore the original `mysqldump' backup.

  2. Execute the following command to re-run the updates in the binary log:

          shell> mysqlbinlog hostname-bin.[0-9]* | mysql

     If you are using the update log you can use:

          shell> ls -1 -t -r hostname.[0-9]* | xargs cat | mysql

     `ls' is used to get all the update log files in the right order.
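As a minimal example of the `mysqldump' approach mentioned above, assuming a single database named `mydb' and a writable backup directory (both names are only placeholders; add `-u' and `-p' options as needed), you could dump the database to a file and later restore it from that file:

     shell> mysqldump --opt mydb > /path/to/some/dir/mydb.sql
     shell> mysql mydb < /path/to/some/dir/mydb.sql

The dump file contains the `CREATE TABLE' and `INSERT' statements needed to rebuild the tables, so it can also be inspected or edited with a text editor before restoring.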
You can also do selective backups with `SELECT * INTO OUTFILE 'file_name' FROM tbl_name' and restore with `LOAD DATA INFILE 'file_name' REPLACE ...'. To avoid duplicate records, you need a `PRIMARY KEY' or a `UNIQUE' key in the table. The `REPLACE' keyword causes old records to be replaced with new ones when a new record duplicates an old record on a unique key value.

If you get performance problems when making backups on your system, you can solve this by setting up replication and doing the backups on the slave instead of on the master. *Note Replication Intro::.

If you are using a Veritas filesystem, you can do:

  1. From a client (or Perl), execute: `FLUSH TABLES WITH READ LOCK'.

  2. From another shell, execute: `mount vxfs snapshot'.

  3. From the first client, execute: `UNLOCK TABLES'.

  4. Copy files from the snapshot.

  5. Unmount the snapshot.

`BACKUP TABLE' Syntax
---------------------

     BACKUP TABLE tbl_name[,tbl_name...] TO '/path/to/backup/directory'

Copies to the backup directory the minimum number of table files needed to restore the table, after flushing any buffered changes to disk. Currently works only for `MyISAM' tables. For `MyISAM' tables, it copies the `.frm' (definition) and `.MYD' (data) files. The index file can be rebuilt from those two.

Before using this command, please see *Note Backup::.

During the backup, a read lock will be held for each table, one at a time, as they are being backed up. If you want to back up several tables as a snapshot, you must first issue `LOCK TABLES' to obtain a read lock for each table in the group.

The command returns a table with the following columns:

*Column*     *Value*
Table        Table name
Op           Always "backup"
Msg_type     One of `status', `error', `info' or `warning'.
Msg_text     The message.

Note that `BACKUP TABLE' is only available in MySQL version 3.23.25 and later.

`RESTORE TABLE' Syntax
----------------------

     RESTORE TABLE tbl_name[,tbl_name...] FROM '/path/to/backup/directory'

Restores the table(s) from the backup that was made with `BACKUP TABLE'. Existing tables will not be overwritten; if you try to restore over an existing table, you will get an error. Restoring will take longer than backing up due to the need to rebuild the index. The more keys you have, the longer it will take. Just like `BACKUP TABLE', `RESTORE TABLE' currently works only for `MyISAM' tables.

The command returns a table with the following columns:

*Column*     *Value*
Table        Table name
Op           Always "restore"
Msg_type     One of `status', `error', `info' or `warning'.
Msg_text     The message.

`CHECK TABLE' Syntax
--------------------

     CHECK TABLE tbl_name[,tbl_name...] [option [option...]]

     option = QUICK | FAST | MEDIUM | EXTENDED | CHANGED

`CHECK TABLE' works only on `MyISAM' and `InnoDB' tables. On `MyISAM' tables it's the same thing as running `myisamchk -m table_name' on the table. If you don't specify any option, `MEDIUM' is used.

Checks the table(s) for errors. For `MyISAM' tables the key statistics are updated. The command returns a table with the following columns:

*Column*     *Value*
Table        Table name.
Op           Always "check".
Msg_type     One of `status', `error', `info', or `warning'.
Msg_text     The message.

Note that you can get many rows of information for each checked table. The last row will be of `Msg_type status' and should normally be `OK'. If you don't get `OK' or `Table is already up to date', you should normally run a repair of the table. *Note Table maintenance::. `Table is already up to date' means that, for the given check type, the table's handler told MySQL that there was no need to check the table.
The different check types stand for the following:

*Type*       *Meaning*
`QUICK'      Don't scan the rows to check for wrong links.
`FAST'       Only check tables which haven't been closed properly.
`CHANGED'    Only check tables which have been changed since the last check or haven't been closed properly.
`MEDIUM'     Scan rows to verify that deleted links are okay. This also calculates a key checksum for the rows and verifies this with a calculated checksum for the keys.
`EXTENDED'   Do a full key lookup for all keys for each row. This ensures that the table is 100% consistent, but will take a long time!

For dynamically sized `MyISAM' tables, a started check will always do a `MEDIUM' check. For statically sized rows, the row scan is skipped for `QUICK' and `FAST', as the rows are very seldom corrupted.

You can combine check options, as in:

     CHECK TABLE test_table FAST QUICK;

which would simply do a quick check on the table to see whether it was closed properly.

*Note*: in some cases `CHECK TABLE' will change the table! This happens if the table is marked as 'corrupted' or 'not closed properly' but `CHECK TABLE' doesn't find any problems in the table. In this case, `CHECK TABLE' will mark the table as okay.

If a table is corrupted, it's most likely that the problem is in the indexes and not in the data part. All of the above check types check the indexes thoroughly and should thus find most errors.

If you just want to check a table that you assume is okay, you should use no check options or the `QUICK' option. The latter should be used when you are in a hurry and can take the very small risk that `QUICK' doesn't find an error in the datafile. (In most cases, under normal usage, MySQL should find any error in the datafile. If this happens, the table will be marked as 'corrupted', in which case the table can't be used until it's repaired.)

`FAST' and `CHANGED' are mostly intended to be used from a script (for example, to be executed from `cron') if you want to check your tables from time to time. In most cases `FAST' is to be preferred over `CHANGED'. (The only case when it isn't is when you suspect that you have found a bug in the `MyISAM' code.)

`EXTENDED' is only to be used after you have run a normal check but still get strange errors from a table when MySQL tries to update a row or find a row by key (this is very unlikely if a normal check has succeeded!).

Some things reported by `CHECK TABLE' can't be corrected automatically:

   * `Found row where the auto_increment column has the value 0'.

     This means that the table contains a row in which the `AUTO_INCREMENT' index column has the value 0. (It's possible to create a row where the `AUTO_INCREMENT' column is 0 by explicitly setting the column to 0 with an `UPDATE' statement.)

     This isn't an error in itself, but could cause trouble if you decide to dump the table and restore it or do an `ALTER TABLE' on the table. In this case the `AUTO_INCREMENT' column will change value, according to the rules of `AUTO_INCREMENT' columns, which could cause problems such as a duplicate key error.

     To get rid of the warning, just execute an `UPDATE' statement to set the column to some value other than 0.

`REPAIR TABLE' Syntax
---------------------

     REPAIR TABLE tbl_name[,tbl_name...] [QUICK] [EXTENDED] [USE_FRM]

`REPAIR TABLE' works only on `MyISAM' tables and is the same as running `myisamchk -r table_name' on the table. Normally you should never have to run this command, but if disaster strikes you are very likely to get back all your data from a MyISAM table with `REPAIR TABLE'.
If your tables get corrupted a lot, you should try to find the reason for this! *Note Crashing::. *Note MyISAM table problems::.

`REPAIR TABLE' repairs a possibly corrupted table. The command returns a table with the following columns:

*Column*     *Value*
Table        Table name
Op           Always "repair"
Msg_type     One of `status', `error', `info' or `warning'.
Msg_text     The message.

Note that you can get many rows of information for each repaired table. The last row will be of `Msg_type status' and should normally be `OK'. If you don't get `OK', you should try repairing the table with `myisamchk -o', as `REPAIR TABLE' does not yet implement all the options of `myisamchk'. In the near future, we will make it more flexible.

If `QUICK' is given, MySQL will try to do a `REPAIR' of only the index tree. If you use `EXTENDED', MySQL will create the index row by row instead of creating one index at a time with sorting; this may be better than sorting on fixed-length keys if you have long `CHAR' keys that compress very well. This type of repair is like that done by `myisamchk --safe-recover'.

As of MySQL 4.0.2, there is a `USE_FRM' mode for `REPAIR'. Use it if the `.MYI' file is missing or if its header is corrupted. In this mode MySQL will recreate the table, using information from the `.frm' file. This kind of repair cannot be done with `myisamchk'.

Using `myisamchk' for Table Maintenance and Crash Recovery
-----------------------------------------------------------

Starting with MySQL Version 3.23.13, you can check MyISAM tables with the `CHECK TABLE' command. *Note CHECK TABLE::. You can repair tables with the `REPAIR TABLE' command. *Note REPAIR TABLE::.

To check/repair MyISAM tables (`.MYI' and `.MYD') you should use the `myisamchk' utility. To check/repair ISAM tables (`.ISM' and `.ISD') you should use the `isamchk' utility. *Note Table types::. In the following text we will talk about `myisamchk', but everything also applies to the old `isamchk'.

You can use the `myisamchk' utility to get information about your database tables, check and repair them, or optimise them. The following sections describe how to invoke `myisamchk' (including a description of its options), how to set up a table maintenance schedule, and how to use `myisamchk' to perform its various functions.

You can, in most cases, also use the command `OPTIMIZE TABLE' to optimise and repair tables, but this is not as fast or reliable (in case of real fatal errors) as `myisamchk'. On the other hand, `OPTIMIZE TABLE' is easier to use and you don't have to worry about flushing tables. *Note `OPTIMIZE TABLE': OPTIMIZE TABLE.

Even though the repair in `myisamchk' is quite safe, it's always a good idea to make a backup _before_ doing a repair (or anything that could make a lot of changes to a table).

`myisamchk' Invocation Syntax
.............................

`myisamchk' is invoked like this:

     shell> myisamchk [options] tbl_name

The `options' specify what you want `myisamchk' to do. They are described here. (You can also get a list of options by invoking `myisamchk --help'.) With no options, `myisamchk' simply checks your table. To get more information or to tell `myisamchk' to take corrective action, specify options as described here and in the following sections.

`tbl_name' is the database table you want to check/repair. If you run `myisamchk' somewhere other than in the database directory, you must specify the path to the file, because `myisamchk' has no idea where your database is located.
Actually, `myisamchk' doesn't care whether the files you are working on are located in a database directory; you can copy the files that correspond to a database table into another location and perform recovery operations on them there.

You can name several tables on the `myisamchk' command-line if you wish. You can also specify a name as an index file name (with the `.MYI' suffix), which allows you to specify all tables in a directory by using the pattern `*.MYI'. For example, if you are in a database directory, you can check all the tables in the directory like this:

     shell> myisamchk *.MYI

If you are not in the database directory, you can check all the tables there by specifying the path to the directory:

     shell> myisamchk /path/to/database_dir/*.MYI

You can even check all tables in all databases by specifying a wildcard with the path to the MySQL data directory:

     shell> myisamchk /path/to/datadir/*/*.MYI

The recommended way to quickly check all tables is:

     myisamchk --silent --fast /path/to/datadir/*/*.MYI
     isamchk --silent /path/to/datadir/*/*.ISM

If you want to check all tables and repair all tables that are corrupted, you can use the following lines:

     myisamchk --silent --force --fast --update-state -O key_buffer=64M \
        -O sort_buffer=64M -O read_buffer=1M -O write_buffer=1M \
        /path/to/datadir/*/*.MYI
     isamchk --silent --force -O key_buffer=64M -O sort_buffer=64M \
        -O read_buffer=1M -O write_buffer=1M /path/to/datadir/*/*.ISM

The above assumes that you have more than 64M of memory free.

Note that if you get an error like:

     myisamchk: warning: 1 clients is using or hasn't closed the table properly

it means that you are trying to check a table that has been updated by another program (such as the `mysqld' server) that hasn't yet closed the file or that died without closing the file properly. If `mysqld' is running, you must force a sync/close of all tables with `FLUSH TABLES' and ensure that no one is using the tables while you are running `myisamchk'. In MySQL Version 3.23 the easiest way to avoid this problem is to use `CHECK TABLE' instead of `myisamchk' to check tables.

General Options for `myisamchk'
...............................

`myisamchk' supports the following options.

`-# or --debug=debug_options'
     Output a debug log. The `debug_options' string often is `'d:t:o,filename''.

`-? or --help'
     Display a help message and exit.

`-O var=option, --set-variable var=option'
     Set the value of a variable. Please note that `--set-variable' is deprecated since MySQL 4.0; just use `--var=option' on its own. The possible variables and their default values for `myisamchk' can be examined with `myisamchk --help':

     *Variable*            *Value*
     key_buffer_size       523264
     read_buffer_size      262136
     write_buffer_size     262136
     sort_buffer_size      2097144
     sort_key_blocks       16
     decode_bits           9

     `sort_buffer_size' is used when the keys are repaired by sorting keys, which is the normal case when you use `--recover'.

     `key_buffer_size' is used when you are checking the table with `--extend-check' or when the keys are repaired by inserting keys row by row into the table (as when doing normal inserts). Repairing through the key buffer is used in the following cases:

        * If you use `--safe-recover'.

        * If the temporary files needed to sort the keys would be more than twice as big as when creating the key file directly. This is often the case when you have big `CHAR', `VARCHAR' or `TEXT' keys, as the sort needs to store the whole keys during sorting.
If you have lots of temporary space and you want to force `myisamchk' to repair by sorting, you can use the `--sort-recover' option. Repairing through the key buffer takes much less disk space than sorting, but is also much slower.

If you want a faster repair, set the above variables to about 1/4 of your available memory. You can set both variables to big values, as only one of the above buffers will be used at a time.

`-s or --silent'
     Silent mode. Write output only when errors occur. You can use `-s' twice (`-ss') to make `myisamchk' very silent.

`-v or --verbose'
     Verbose mode. Print more information. This can be used with `-d' and `-e'. Use `-v' multiple times (`-vv', `-vvv') for more verbosity!

`-V or --version'
     Print the `myisamchk' version and exit.

`-w or --wait'
     Instead of giving an error if the table is locked, wait until the table is unlocked before continuing. Note that if you are running `mysqld' on the table with `--skip-external-locking', the table can only be locked by another `myisamchk' command.

Check Options for `myisamchk'
.............................

`-c or --check'
     Check the table for errors. This is the default operation if you are not giving `myisamchk' any options that override this.

`-e or --extend-check'
     Check the table very thoroughly (which is quite slow if you have many indexes). This option should only be used in extreme cases. Normally, `myisamchk' or `myisamchk --medium-check' should, in most cases, be able to find out if there are any errors in the table. If you are using `--extend-check' and have much memory, you should increase the value of `key_buffer_size' a lot!

`-F or --fast'
     Check only tables that haven't been closed properly.

`-C or --check-only-changed'
     Check only tables that have changed since the last check.

`-f or --force'
     Restart `myisamchk' with `-r' (repair) on the table, if `myisamchk' finds any errors in the table.

`-i or --information'
     Print informational statistics about the table that is checked.

`-m or --medium-check'
     Faster than extend-check, but only finds 99.99% of all errors. Should, however, be good enough for most cases.

`-U or --update-state'
     Store in the `.MYI' file when the table was checked and whether the table crashed. This should be used to get the full benefit of the `--check-only-changed' option, but you shouldn't use this option if the `mysqld' server is using the table and you are running `mysqld' with `--skip-external-locking'.

`-T or --read-only'
     Don't mark the table as checked. This is useful if you use `myisamchk' to check a table that is in use by some other application that doesn't use locking (like `mysqld --skip-external-locking').

Repair Options for myisamchk
............................

The following options are used if you start `myisamchk' with `-r' or `-o':

`-D # or --data-file-length=#'
     Max length of the datafile (when re-creating the datafile when it's 'full').

`-e or --extend-check'
     Try to recover every possible row from the datafile. Normally this will also find a lot of garbage rows. Don't use this option unless you are totally desperate.

`-f or --force'
     Overwrite old temporary files (`table_name.TMD') instead of aborting.

`-k # or --keys-used=#'
     If you are using ISAM, tells the ISAM storage engine to update only the first `#' indexes. If you are using `MyISAM', tells which keys to use, where each binary bit stands for one key (the first key is bit 0). This can be used to get faster inserts! Deactivated indexes can be reactivated by using `myisamchk -r'.

`-l or --no-symlinks'
     Do not follow symbolic links.
     Normally `myisamchk' repairs the table that a symlink points at. This option doesn't exist in MySQL 4.0, as MySQL 4.0 will not remove symlinks during repair.

`-r or --recover'
     Can fix almost anything except unique keys that aren't unique (which is an extremely unlikely error with ISAM/MyISAM tables). If you want to recover a table, this is the option to try first. Only if `myisamchk' reports that the table can't be recovered by `-r' should you try `-o'. (Note that in the unlikely case that `-r' fails, the datafile is still intact.) If you have lots of memory, you should increase the size of `sort_buffer_size'!

`-o or --safe-recover'
     Uses an old recovery method (reads through all rows in order and updates all index trees based on the found rows); this is an order of magnitude slower than `-r', but can handle a couple of very unlikely cases that `-r' cannot handle. This recovery method also uses much less disk space than `-r'. Normally one should always first repair with `-r', and only if this fails use `-o'. If you have lots of memory, you should increase the size of `key_buffer_size'!

`-n or --sort-recover'
     Force `myisamchk' to use sorting to resolve the keys even if the temporary files would be very big.

`--character-sets-dir=...'
     Directory where character sets are stored.

`--set-character-set=name'
     Change the character set used by the index.

`-t or --tmpdir=path'
     Path for storing temporary files. If this is not set, `myisamchk' will use the environment variable `TMPDIR' for this. Starting from MySQL 4.1, `tmpdir' can be set to a list of paths separated by colon `:' (semicolon `;' on Windows). They will be used in round-robin fashion.

`-q or --quick'
     Faster repair by not modifying the datafile. One can give a second `-q' to force `myisamchk' to modify the original datafile in case of duplicate keys.

`-u or --unpack'
     Unpack a file packed with `myisampack'.

Other Options for `myisamchk'
.............................

Besides checking and repairing tables, `myisamchk' can perform the following other actions:

`-a or --analyze'
     Analyse the distribution of keys. This improves join performance by enabling the join optimiser to better choose in which order it should join the tables and which keys it should use. You can check the calculated key distribution with `myisamchk --describe --verbose table_name' or by using `SHOW KEYS' in MySQL.

`-d or --description'
     Prints some information about the table.

`-A or --set-auto-increment[=value]'
     Force `AUTO_INCREMENT' to start at this or a higher value. If no value is given, sets the next `AUTO_INCREMENT' value to the highest used value for the auto key, plus 1.

`-S or --sort-index'
     Sort the index tree blocks in high-low order. This will optimise seeks and make table scanning by key faster.

`-R or --sort-records=#'
     Sorts records according to an index. This makes your data much more localised and may speed up ranged `SELECT' and `ORDER BY' operations on this index. (It may be very slow to do a sort the first time!) To find out a table's index numbers, use `SHOW INDEX', which shows a table's indexes in the same order that `myisamchk' sees them. Indexes are numbered beginning with 1.

`myisamchk' Memory Usage
........................

Memory allocation is important when you run `myisamchk'. `myisamchk' uses no more memory than you specify with the `-O' options. If you are going to use `myisamchk' on very large files, you should first decide how much memory you want it to use. The default is to use only about 3M to fix things. By using larger values, you can get `myisamchk' to operate faster.
For example, if you have more than 32M of RAM, you could use options such as these (in addition to any other options you might specify):

     shell> myisamchk -O sort=16M -O key=16M -O read=1M -O write=1M ...

Using `-O sort=16M' should probably be enough for most cases.

Be aware that `myisamchk' uses temporary files in `TMPDIR'. If `TMPDIR' points to a memory filesystem, you may easily get out-of-memory errors. If this happens, set `TMPDIR' to point at some directory with more space and restart `myisamchk'.

When repairing, `myisamchk' will also need a lot of disk space:

   * Double the size of the record file (the original one and a copy). This space is not needed if one does a repair with `--quick', as in this case only the index file will be re-created. This space is needed on the same disk as the original record file!

   * Space for the new index file that replaces the old one. The old index file is truncated at the start, so one can usually ignore this space. This space is needed on the same disk as the original index file!

   * When using `--recover' or `--sort-recover' (but not when using `--safe-recover'), you will need space for a sort buffer of: `(largest_key + row_pointer_length)*number_of_rows * 2'. You can check the length of the keys and the row_pointer_length with `myisamchk -dv table'. This space is allocated on the temporary disk (specified by `TMPDIR' or `--tmpdir=#').

If you have a problem with disk space during repair, you can try to use `--safe-recover' instead of `--recover'.

Using `myisamchk' for Crash Recovery
....................................

If you run `mysqld' with `--skip-external-locking' (which is the default on some systems, like Linux), you can't reliably use `myisamchk' to check a table when `mysqld' is using the same table. If you can be sure that no one is accessing the tables through `mysqld' while you run `myisamchk', you only have to do `mysqladmin flush-tables' before you start checking the tables. If you can't guarantee the above, you must take down `mysqld' while you check the tables. If you run `myisamchk' while `mysqld' is updating the tables, you may get a warning that a table is corrupt even if it isn't.

If you are not using `--skip-external-locking', you can use `myisamchk' to check tables at any time. While you do this, all clients that try to update the table will wait until `myisamchk' is ready before continuing.

If you use `myisamchk' to repair or optimise tables, you *must* always ensure that the `mysqld' server is not using the table (this also applies if you are using `--skip-external-locking'). If you don't take down `mysqld', you should at least do a `mysqladmin flush-tables' before you run `myisamchk'. Your tables *may be corrupted* if the server and `myisamchk' access the tables simultaneously.

This chapter describes how to check for and deal with data corruption in MySQL databases. If your tables get corrupted frequently, you should try to find the reason for this! *Note Crashing::. The `MyISAM' table section contains reasons why a table could become corrupted. *Note MyISAM table problems::.

When performing crash recovery, it is important to understand that each table `tbl_name' in a database corresponds to three files in the database directory:

*File*             *Purpose*
`tbl_name.frm'     Table definition (form) file
`tbl_name.MYD'     Datafile
`tbl_name.MYI'     Index file

Each of these three file types is subject to corruption in various ways, but problems occur most often in datafiles and index files.
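A quick way to see these three files for a particular table is simply to list them in the database directory. The path below is only an example; use the data directory reported by `mysqld --help' on your system:

     shell> ls -l /path/to/datadir/db_name/tbl_name.*

You should see one `.frm', one `.MYD', and one `.MYI' file for the table.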
`myisamchk' works by creating a copy of the `.MYD' (data) file row by row. It ends the repair stage by removing the old `.MYD' file and renaming the new file to the original file name. If you use `--quick', `myisamchk' does not create a temporary `.MYD' file, but instead assumes that the `.MYD' file is correct and only generates a new index file without touching the `.MYD' file. This is safe, because `myisamchk' automatically detects if the `.MYD' file is corrupt and aborts the repair in this case. You can also give two `--quick' options to `myisamchk'. In this case, `myisamchk' does not abort on some errors (like duplicate key) but instead tries to resolve them by modifying the `.MYD' file. Normally the use of two `--quick' options is useful only if you have too little free disk space to perform a normal repair. In this case you should at least make a backup before running `myisamchk'. How to Check Tables for Errors .............................. To check a MyISAM table, use the following commands: `myisamchk tbl_name' This finds 99.99% of all errors. What it can't find is corruption that involves *only* the datafile (which is very unusual). If you want to check a table, you should normally run `myisamchk' without options or with either the `-s' or `--silent' option. `myisamchk -m tbl_name' This finds 99.999% of all errors. It checks first all index entries for errors and then it reads through all rows. It calculates a checksum for all keys in the rows and verifies that they checksum matches the checksum for the keys in the index tree. `myisamchk -e tbl_name' This does a complete and thorough check of all data (`-e' means "extended check"). It does a check-read of every key for each row to verify that they indeed point to the correct row. This may take a long time on a big table with many keys. `myisamchk' will normally stop after the first error it finds. If you want to obtain more information, you can add the `--verbose' (`-v') option. This causes `myisamchk' to keep going, up through a maximum of 20 errors. In normal usage, a simple `myisamchk' (with no arguments other than the table name) is sufficient. `myisamchk -e -i tbl_name' Like the previous command, but the `-i' option tells `myisamchk' to print some informational statistics, too. How to Repair Tables .................... In the following section we only talk about using `myisamchk' on `MyISAM' tables (extensions `.MYI' and `.MYD'). If you are using `ISAM' tables (extensions `.ISM' and `.ISD'), you should use `isamchk' instead. Starting with MySQL Version 3.23.14, you can repair MyISAM tables with the `REPAIR TABLE' command. *Note REPAIR TABLE::. The symptoms of a corrupted table include queries that abort unexpectedly and observable errors such as these: * `tbl_name.frm' is locked against change * Can't find file `tbl_name.MYI' (Errcode: ###) * Unexpected end of file * Record file is crashed * Got error ### from table handler To get more information about the error you can run `perror ###'. 
Here are the most common errors that indicate a problem with the table:

     shell> perror 126 127 132 134 135 136 141 144 145
     126 = Index file is crashed / Wrong file format
     127 = Record-file is crashed
     132 = Old database file
     134 = Record was already deleted (or record file crashed)
     135 = No more room in record file
     136 = No more room in index file
     141 = Duplicate unique key or constraint on write or update
     144 = Table is crashed and last repair failed
     145 = Table was marked as crashed and should be repaired

Note that error 135, no more room in record file, is not an error that can be fixed by a simple repair. In this case you have to do:

     ALTER TABLE table MAX_ROWS=xxx AVG_ROW_LENGTH=yyy;

In the other cases, you must repair your tables. `myisamchk' can usually detect and fix most things that go wrong.

The repair process involves up to four stages, described here. Before you begin, you should `cd' to the database directory and check the permissions of the table files. Make sure they are readable by the Unix user that `mysqld' runs as (and by you, because you need to access the files you are checking). If it turns out you need to modify files, they must also be writable by you.

If you are using MySQL Version 3.23.16 and above, you can (and should) use the `CHECK' and `REPAIR' commands to check and repair `MyISAM' tables. *Note CHECK TABLE::. *Note REPAIR TABLE::.

The manual section about table maintenance includes the options to `isamchk'/`myisamchk'. *Note Table maintenance::.

The following section is for the cases where the above commands fail or if you want to use the extended features that `isamchk'/`myisamchk' provides.

If you are going to repair a table from the command-line, you must first take down the `mysqld' server. Note that when you do `mysqladmin shutdown' on a remote server, the `mysqld' server will still be alive for a while after `mysqladmin' returns, until all queries are stopped and all keys have been flushed to disk.

*Stage 1: Checking your tables*

Run `myisamchk *.MYI' or `myisamchk -e *.MYI' if you have more time. Use the `-s' (silent) option to suppress unnecessary information.

If the `mysqld' server is stopped, you should use the `--update-state' option to tell `myisamchk' to mark the table as 'checked'.

You have to repair only those tables for which `myisamchk' announces an error. For such tables, proceed to Stage 2.

If you get weird errors when checking (such as `out of memory' errors), or if `myisamchk' crashes, go to Stage 3.

*Stage 2: Easy safe repair*

Note: If you want repairing to go much faster, you should add `-O sort_buffer=# -O key_buffer=#' (where # is about 1/4 of the available memory) to all `isamchk'/`myisamchk' commands.

First, try `myisamchk -r -q tbl_name' (`-r -q' means "quick recovery mode"). This will attempt to repair the index file without touching the datafile. If the datafile contains everything that it should and the delete links point at the correct locations within the datafile, this should work, and the table is fixed. Start repairing the next table. Otherwise, use the following procedure:

  1. Make a backup of the datafile before continuing.

  2. Use `myisamchk -r tbl_name' (`-r' means "recovery mode"). This will remove incorrect records and deleted records from the datafile and reconstruct the index file.

  3. If the preceding step fails, use `myisamchk --safe-recover tbl_name'. Safe recovery mode uses an old recovery method that handles a few cases that regular recovery mode doesn't (but is slower).
If you get weird errors when repairing (such as `out of memory' errors), or if `myisamchk' crashes, go to Stage 3.

*Stage 3: Difficult repair*

You should only reach this stage if the first 16K block in the index file is destroyed or contains incorrect information, or if the index file is missing. In this case, it's necessary to create a new index file. Do so as follows:

  1. Move the datafile to some safe place.

  2. Use the table description file to create new (empty) data and index files:

          shell> mysql db_name
          mysql> SET AUTOCOMMIT=1;
          mysql> TRUNCATE TABLE table_name;
          mysql> quit

     If your SQL version doesn't have `TRUNCATE TABLE', use `DELETE FROM table_name' instead.

  3. Copy the old datafile back onto the newly created datafile. (Don't just move the old file back onto the new file; you want to retain a copy in case something goes wrong.)

Go back to Stage 2. `myisamchk -r -q' should work now. (This shouldn't be an endless loop.)

As of MySQL 4.0.2 you can also use `REPAIR ... USE_FRM', which performs the whole procedure automatically.

*Stage 4: Very difficult repair*

You should reach this stage only if the description file has also crashed. That should never happen, because the description file isn't changed after the table is created:

  1. Restore the description file from a backup and go back to Stage 3. You can also restore the index file and go back to Stage 2. In the latter case, you should start with `myisamchk -r'.

  2. If you don't have a backup but know exactly how the table was created, create a copy of the table in another database. Remove the new datafile, then move the description and index files from the other database to your crashed database. This gives you new description and index files, but leaves the datafile alone. Go back to Stage 2 and attempt to reconstruct the index file.

Table Optimisation
..................

To coalesce fragmented records and eliminate wasted space resulting from deleting or updating records, run `myisamchk' in recovery mode:

     shell> myisamchk -r tbl_name

You can optimise a table in the same way using the SQL `OPTIMIZE TABLE' statement. `OPTIMIZE TABLE' does a repair of the table and a key analysis, and also sorts the index tree to give faster key lookups. There is also no possibility of unwanted interaction between a utility and the server, because the server does all the work when you use `OPTIMIZE TABLE'. *Note OPTIMIZE TABLE::.

`myisamchk' also has a number of other options you can use to improve the performance of a table:

   * `-S', `--sort-index'

   * `-R index_num', `--sort-records=index_num'

   * `-a', `--analyze'

For a full description of these options, *Note myisamchk syntax::.

Setting Up a Table Maintenance Regimen
--------------------------------------

Starting with MySQL Version 3.23.13, you can check MyISAM tables with the `CHECK TABLE' command. *Note CHECK TABLE::. You can repair tables with the `REPAIR TABLE' command. *Note REPAIR TABLE::.

It is a good idea to perform table checks on a regular basis rather than waiting for problems to occur. For maintenance purposes, you can use `myisamchk -s' to check tables. The `-s' option (short for `--silent') causes `myisamchk' to run in silent mode, printing messages only when errors occur.

It's also a good idea to check tables when the server starts up. For example, whenever the machine has done a reboot in the middle of an update, you usually need to check all the tables that could have been affected. (This is an "expected crashed table".)
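For example, one way to check only the tables whose index files have changed during the last day is to combine `find' with `myisamchk'. This is only a sketch, assuming a Unix `find' that supports `-mtime', that the data directory path is adjusted to match your installation, and that the server is down or you have run `mysqladmin flush-tables' first:

     shell> find /path/to/datadir -name '*.MYI' -mtime -1 | xargs myisamchk --silent --fast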
You could add a test to `safe_mysqld' that runs `myisamchk' to check all tables that have been modified during the last 24 hours if there is an old `.pid' (process ID) file left after a reboot. (The `.pid' file is created by `mysqld' when it starts up and removed when it terminates normally. The presence of a `.pid' file at system startup time indicates that `mysqld' terminated abnormally.) An even better test would be to check any table whose last-modified time is more recent than that of the `.pid' file. You should also check your tables regularly during normal system operation. At MySQL AB, we run a `cron' job to check all our important tables once a week, using a line like this in a `crontab' file: 35 0 * * 0 /path/to/myisamchk --fast --silent /path/to/datadir/*/*.MYI This prints out information about crashed tables so we can examine and repair them when needed. As we haven't had any unexpectedly crashed tables (tables that become corrupted for reasons other than hardware trouble) for a couple of years now (this is really true), once a week is more than enough for us. We recommend that to start with, you execute `myisamchk -s' each night on all tables that have been updated during the last 24 hours, until you come to trust MySQL as much as we do. Normally you don't need to maintain MySQL tables that much. If you are changing tables with dynamic size rows (tables with `VARCHAR', `BLOB' or `TEXT' columns) or have tables with many deleted rows you may want to from time to time (once a month?) defragment/reclaim space from the tables. You can do this by using `OPTIMIZE TABLE' on the tables in question or if you can take the `mysqld' server down for a while do: isamchk -r --silent --sort-index -O sort_buffer_size=16M */*.ISM myisamchk -r --silent --sort-index -O sort_buffer_size=16M */*.MYI Getting Information About a Table --------------------------------- To get a description of a table or statistics about it, use the commands shown here. We explain some of the information in more detail later: * myisamchk -d tbl_name Runs `myisamchk' in "describe mode" to produce a description of your table. If you start the MySQL server using the `--skip-external-locking' option, `myisamchk' may report an error for a table that is updated while it runs. However, because `myisamchk' doesn't change the table in describe mode, there isn't any risk of destroying data. * myisamchk -d -v tbl_name To produce more information about what `myisamchk' is doing, add `-v' to tell it to run in verbose mode. * myisamchk -eis tbl_name Shows only the most important information from a table. It is slow because it must read the whole table. * myisamchk -eiv tbl_name This is like `-eis', but tells you what is being done. Example of `myisamchk -d' output: MyISAM file: company.MYI Record format: Fixed length Data records: 1403698 Deleted blocks: 0 Recordlength: 226 table description: Key Start Len Index Type 1 2 8 unique double 2 15 10 multip. text packed stripped 3 219 8 multip. double 4 63 10 multip. text packed stripped 5 167 2 multip. unsigned short 6 177 4 multip. unsigned long 7 155 4 multip. text 8 138 4 multip. unsigned long 9 177 4 multip. 
unsigned long 193 1 text Example of `myisamchk -d -v' output: MyISAM file: company Record format: Fixed length File-version: 1 Creation time: 1999-10-30 12:12:51 Recover time: 1999-10-31 19:13:01 Status: checked Data records: 1403698 Deleted blocks: 0 Datafile parts: 1403698 Deleted data: 0 Datafilepointer (bytes): 3 Keyfile pointer (bytes): 3 Max datafile length: 3791650815 Max keyfile length: 4294967294 Recordlength: 226 table description: Key Start Len Index Type Rec/key Root Blocksize 1 2 8 unique double 1 15845376 1024 2 15 10 multip. text packed stripped 2 25062400 1024 3 219 8 multip. double 73 40907776 1024 4 63 10 multip. text packed stripped 5 48097280 1024 5 167 2 multip. unsigned short 4840 55200768 1024 6 177 4 multip. unsigned long 1346 65145856 1024 7 155 4 multip. text 4995 75090944 1024 8 138 4 multip. unsigned long 87 85036032 1024 9 177 4 multip. unsigned long 178 96481280 1024 193 1 text Example of `myisamchk -eis' output: Checking MyISAM file: company Key: 1: Keyblocks used: 97% Packed: 0% Max levels: 4 Key: 2: Keyblocks used: 98% Packed: 50% Max levels: 4 Key: 3: Keyblocks used: 97% Packed: 0% Max levels: 4 Key: 4: Keyblocks used: 99% Packed: 60% Max levels: 3 Key: 5: Keyblocks used: 99% Packed: 0% Max levels: 3 Key: 6: Keyblocks used: 99% Packed: 0% Max levels: 3 Key: 7: Keyblocks used: 99% Packed: 0% Max levels: 3 Key: 8: Keyblocks used: 99% Packed: 0% Max levels: 3 Key: 9: Keyblocks used: 98% Packed: 0% Max levels: 4 Total: Keyblocks used: 98% Packed: 17% Records: 1403698 M.recordlength: 226 Packed: 0% Recordspace used: 100% Empty space: 0% Blocks/Record: 1.00 Record blocks: 1403698 Delete blocks: 0 Recorddata: 317235748 Deleted data: 0 Lost space: 0 Linkdata: 0 User time 1626.51, System time 232.36 Maximum resident set size 0, Integral resident set size 0 Non physical pagefaults 0, Physical pagefaults 627, Swaps 0 Blocks in 0 out 0, Messages in 0 out 0, Signals 0 Voluntary context switches 639, Involuntary context switches 28966 Example of `myisamchk -eiv' output: Checking MyISAM file: company Data records: 1403698 Deleted blocks: 0 - check file-size - check delete-chain block_size 1024: index 1: index 2: index 3: index 4: index 5: index 6: index 7: index 8: index 9: No recordlinks - check index reference - check data record references index: 1 Key: 1: Keyblocks used: 97% Packed: 0% Max levels: 4 - check data record references index: 2 Key: 2: Keyblocks used: 98% Packed: 50% Max levels: 4 - check data record references index: 3 Key: 3: Keyblocks used: 97% Packed: 0% Max levels: 4 - check data record references index: 4 Key: 4: Keyblocks used: 99% Packed: 60% Max levels: 3 - check data record references index: 5 Key: 5: Keyblocks used: 99% Packed: 0% Max levels: 3 - check data record references index: 6 Key: 6: Keyblocks used: 99% Packed: 0% Max levels: 3 - check data record references index: 7 Key: 7: Keyblocks used: 99% Packed: 0% Max levels: 3 - check data record references index: 8 Key: 8: Keyblocks used: 99% Packed: 0% Max levels: 3 - check data record references index: 9 Key: 9: Keyblocks used: 98% Packed: 0% Max levels: 4 Total: Keyblocks used: 9% Packed: 17% - check records and index references [LOTS OF ROW NUMBERS DELETED] Records: 1403698 M.recordlength: 226 Packed: 0% Recordspace used: 100% Empty space: 0% Blocks/Record: 1.00 Record blocks: 1403698 Delete blocks: 0 Recorddata: 317235748 Deleted data: 0 Lost space: 0 Linkdata: 0 User time 1639.63, System time 251.61 Maximum resident set size 0, Integral resident set size 0 Non physical pagefaults 0, 
Physical pagefaults 10580, Swaps 0 Blocks in 4 out 0, Messages in 0 out 0, Signals 0 Voluntary context switches 10604, Involuntary context switches 122798 Here are the sizes of the data and index files for the table used in the preceding examples: -rw-rw-r-- 1 monty tcx 317235748 Jan 12 17:30 company.MYD -rw-rw-r-- 1 davida tcx 96482304 Jan 12 18:35 company.MYM Explanations for the types of information `myisamchk' produces are given here. The "keyfile" is the index file. "Record" and "row" are synonymous: * ISAM file Name of the ISAM (index) file. * Isam-version Version of ISAM format. Currently always 2. * Creation time When the datafile was created. * Recover time When the index/datafile was last reconstructed. * Data records How many records are in the table. * Deleted blocks How many deleted blocks still have reserved space. You can optimise your table to minimise this space. *Note Optimisation::. * Data file: Parts For dynamic record format, this indicates how many data blocks there are. For an optimised table without fragmented records, this is the same as `Data records'. * Deleted data How many bytes of non-reclaimed deleted data there are. You can optimise your table to minimise this space. *Note Optimisation::. * Data file pointer The size of the datafile pointer, in bytes. It is usually 2, 3, 4, or 5 bytes. Most tables manage with 2 bytes, but this cannot be controlled from MySQL yet. For fixed tables, this is a record address. For dynamic tables, this is a byte address. * Keyfile pointer The size of the index file pointer, in bytes. It is usually 1, 2, or 3 bytes. Most tables manage with 2 bytes, but this is calculated automatically by MySQL. It is always a block address. * Max datafile length How long the table's datafile (`.MYD' file) can become, in bytes. * Max keyfile length How long the table's key file (`.MYI' file) can become, in bytes. * Recordlength How much space each record takes, in bytes. * Record format The format used to store table rows. The examples shown above use `Fixed length'. Other possible values are `Compressed' and `Packed'. * table description A list of all keys in the table. For each key, some low-level information is presented: - Key This key's number. - Start Where in the record this index part starts. - Len How long this index part is. For packed numbers, this should always be the full length of the column. For strings, it may be shorter than the full length of the indexed column, because you can index a prefix of a string column. - Index `unique' or `multip.' (multiple). Indicates whether one value can exist multiple times in this index. - Type What data-type this index part has. This is an ISAM data-type with the options `packed', `stripped' or `empty'. - Root Address of the root index block. - Blocksize The size of each index block. By default this is 1024, but the value may be changed at compile time. - Rec/key This is a statistical value used by the optimiser. It tells how many records there are per value for this key. A unique key always has a value of 1. This may be updated after a table is loaded (or greatly changed) with `myisamchk -a'. If this is not updated at all, a default value of 30 is given. * In the first example above, the 9th key is a multi-part key with two parts. * Keyblocks used What percentage of the keyblocks are used. Because the table used in the examples had just been reorganised with `myisamchk', the values are very high (very near the theoretical maximum). * Packed MySQL tries to pack keys with a common suffix. 
This can only be used for `CHAR'/`VARCHAR'/`DECIMAL' keys. For long strings like names, this can significantly reduce the space used. In the third example above, the 4th key is 10 characters long and a 60% reduction in space is achieved. * Max levels How deep the B-tree for this key is. Large tables with long keys get high values. * Records How many rows are in the table. * M.recordlength The average record length. For tables with fixed-length records, this is the exact record length. * Packed MySQL strips spaces from the end of strings. The `Packed' value indicates the percentage of savings achieved by doing this. * Recordspace used What percentage of the datafile is used. * Empty space What percentage of the datafile is unused. * Blocks/Record Average number of blocks per record (that is, how many links a fragmented record is composed of). This is always 1.0 for fixed-format tables. This value should stay as close to 1.0 as possible. If it gets too big, you can reorganise the table with `myisamchk'. *Note Optimisation::. * Recordblocks How many blocks (links) are used. For fixed format, this is the same as the number of records. * Deleteblocks How many blocks (links) are deleted. * Recorddata How many bytes in the datafile are used. * Deleted data How many bytes in the datafile are deleted (unused). * Lost space If a record is updated to a shorter length, some space is lost. This is the sum of all such losses, in bytes. * Linkdata When the dynamic table format is used, record fragments are linked with pointers (4 to 7 bytes each). `Linkdata' is the sum of the amount of storage used by all such pointers. If a table has been compressed with `myisampack', `myisamchk -d' prints additional information about each table column. See *Note `myisampack': myisampack, for an example of this information and a description of what it means. Database Administration Language Reference ========================================== `OPTIMIZE TABLE' Syntax ----------------------- OPTIMIZE TABLE tbl_name[,tbl_name]... `OPTIMIZE TABLE' should be used if you have deleted a large part of a table or if you have made many changes to a table with variable-length rows (tables that have `VARCHAR', `BLOB', or `TEXT' columns). Deleted records are maintained in a linked list and subsequent `INSERT' operations reuse old record positions. You can use `OPTIMIZE TABLE' to reclaim the unused space and to defragment the datafile. For the moment, `OPTIMIZE TABLE' works only on `MyISAM' and `BDB' tables. For `BDB' tables, `OPTIMIZE TABLE' is currently mapped to `ANALYZE TABLE'. *Note `ANALYZE TABLE': ANALYZE TABLE. You can get `OPTIMIZE TABLE' to work on other table types by starting `mysqld' with `--skip-new' or `--safe-mode', but in this case `OPTIMIZE TABLE' is just mapped to `ALTER TABLE'. `OPTIMIZE TABLE' works the following way: * If the table has deleted or split rows, repair the table. * If the index pages are not sorted, sort them. * If the statistics are not up to date (and the repair couldn't be done by sorting the index), update them. `OPTIMIZE TABLE' for a `MyISAM' table is equivalent to running `myisamchk --quick --check-only-changed --sort-index --analyze' on the table. Note that the table is locked during the time `OPTIMIZE TABLE' is running! `ANALYZE TABLE' Syntax ---------------------- ANALYZE TABLE tbl_name[,tbl_name...] Analyse and store the key distribution for the table. During the analysis, the table is locked with a read lock. This works on `MyISAM' and `BDB' tables. 
This is equivalent to running `myisamchk -a' on the table. MySQL uses the stored key distribution to decide in which order tables should be joined when one does a join on something other than a constant. The command returns a table with the following columns: *Column* *Value* Table Table name Op Always "analyze" Msg_type One of `status', `error', `info' or `warning'. Msg_text The message. You can check the stored key distribution with the `SHOW INDEX' command. *Note SHOW DATABASE INFO::. If the table hasn't changed since the last `ANALYZE TABLE' command, the table will not be analysed again. `FLUSH' Syntax -------------- FLUSH flush_option [,flush_option] ... You should use the `FLUSH' command if you want to clear some of the internal caches MySQL uses. To execute `FLUSH', you must have the `RELOAD' privilege. `flush_option' can be any of the following: *Option* *Description* `HOSTS' Empties the host cache tables. You should flush the host tables if some of your hosts change IP number or if you get the error message `Host ... is blocked'. When more than `max_connect_errors' errors occur in a row for a given host while connecting to the MySQL server, MySQL assumes something is wrong and blocks the host from further connection requests. Flushing the host tables allows the host to attempt to connect again. *Note Blocked host::. You can start `mysqld' with `-O max_connect_errors=999999999' to avoid this error message. `DES_KEY_FILE' Reloads the DES keys from the file that was specified with the `--des-key-file' option at server startup time. `LOGS' Closes and reopens all log files. If you have specified the update log file or a binary log file without an extension, the extension number of the log file will be incremented by one relative to the previous file. If you have used an extension in the file name, MySQL will close and reopen the update log file. *Note Update log::. This is the same thing as sending the `SIGHUP' signal to the `mysqld' server. `PRIVILEGES' Reloads the privileges from the grant tables in the `mysql' database. `QUERY CACHE' Defragments the query cache to better utilise its memory. This command will not remove any queries from the cache, unlike `RESET QUERY CACHE'. `TABLES' Closes all open tables and forces all tables in use to be closed. This also flushes the query cache. `[TABLE | TABLES] tbl_name [,tbl_name...]' Flushes only the given tables. `TABLES WITH READ LOCK' Closes all open tables and locks all tables for all databases with a read lock until you execute `UNLOCK TABLES'. This is a very convenient way to get backups if you have a filesystem, like Veritas, that can take snapshots in time. `STATUS' Resets most status variables to zero. This is something one should only use when debugging a query. `USER_RESOURCES' Resets all user resources to zero. This will enable blocked users to log in again. *Note User resources::. You can also access each of the commands shown above with the `mysqladmin' utility, using the `flush-hosts', `flush-logs', `reload', or `flush-tables' commands. Also take a look at the `RESET' command used with replication. *Note `RESET': RESET. `RESET' Syntax -------------- RESET reset_option [,reset_option] ... The `RESET' command is used to clear things. It also acts as a stronger version of the `FLUSH' command. *Note `FLUSH': FLUSH. To execute `RESET', you must have the `RELOAD' privilege. *Option* *Description* `MASTER' Deletes all binary logs listed in the index file, resetting the binlog index file to be empty.
(In pre-3.23.26 versions the command was called `FLUSH MASTER'.) `SLAVE' Makes the slave forget its replication position in the master logs. (In pre-3.23.26 versions the command was called `FLUSH SLAVE'.) `QUERY CACHE' Removes all query results from the query cache. `KILL' Syntax ------------- KILL thread_id Each connection to `mysqld' runs in a separate thread. You can see which threads are running with the `SHOW PROCESSLIST' command and kill a thread with the `KILL thread_id' command. If you have the `PROCESS' privilege, you can see all threads. If you have the `SUPER' privilege, you can kill all threads. Otherwise, you can only see and kill your own threads. You can also use the `mysqladmin processlist' and `mysqladmin kill' commands to examine and kill threads. When you do a `KILL', a thread-specific `kill flag' is set for the thread. In most cases it may take some time for the thread to die because the kill flag is only checked at specific intervals. * In `SELECT', `ORDER BY' and `GROUP BY' loops, the flag is checked after reading a block of rows. If the kill flag is set, the statement is aborted. * When doing an `ALTER TABLE', the kill flag is checked before each block of rows is read from the original table. If the kill flag is set, the command is aborted and the temporary table is deleted. * When doing an `UPDATE' or `DELETE', the kill flag is checked after each block read and after each updated or deleted row. If the kill flag is set, the statement is aborted. Note that if you are not using transactions, the changes will not be rolled back! * `GET_LOCK()' will abort with `NULL'. * An `INSERT DELAYED' thread will quickly flush all rows it has in memory and die. * If the thread is in the table lock handler (state: `Locked'), the table lock will be quickly aborted. * If the thread is waiting for free disk space in a `write' call, the write is aborted with a disk full error message. `SHOW' Syntax ------------- SHOW DATABASES [LIKE wild] or SHOW [OPEN] TABLES [FROM db_name] [LIKE wild] or SHOW [FULL] COLUMNS FROM tbl_name [FROM db_name] [LIKE wild] or SHOW INDEX FROM tbl_name [FROM db_name] or SHOW TABLE STATUS [FROM db_name] [LIKE wild] or SHOW STATUS [LIKE wild] or SHOW VARIABLES [LIKE wild] or SHOW LOGS or SHOW [FULL] PROCESSLIST or SHOW GRANTS FOR user or SHOW CREATE TABLE table_name or SHOW MASTER STATUS or SHOW MASTER LOGS or SHOW SLAVE STATUS or SHOW WARNINGS [LIMIT #] or SHOW ERRORS [LIMIT #] or SHOW TABLE TYPES `SHOW' provides information about databases, tables, columns, or status information about the server. If the `LIKE wild' part is used, the `wild' string can use the SQL `%' and `_' wildcard characters. Retrieving Information about Databases, Tables, Columns, and Indexes ................................................................... You can use `db_name.tbl_name' as an alternative to the `tbl_name FROM db_name' syntax. These two statements are equivalent: mysql> SHOW INDEX FROM mytable FROM mydb; mysql> SHOW INDEX FROM mydb.mytable; `SHOW DATABASES' lists the databases on the MySQL server host. You can also get this list using the `mysqlshow' command-line tool. From version 4.0.2 you will only see those databases for which you have some kind of privilege, if you don't have the global `SHOW DATABASES' privilege. `SHOW TABLES' lists the tables in a given database. You can also get this list using the `mysqlshow db_name' command. *Note*: if a user doesn't have any privileges for a table, the table will not show up in the output from `SHOW TABLES' or `mysqlshow db_name'.
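For example (the database and table names shown here are only illustrative):

     mysql> SHOW DATABASES;
     mysql> SHOW TABLES FROM test LIKE 'my%';
     shell> mysqlshow test

The first two statements can be issued from any client connection; the last one shows the same table list from the command line with the `mysqlshow' tool.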
`SHOW OPEN TABLES' lists the tables that are currently open in the table cache. *Note Table cache::. The `Comment' field tells how many times the table is `cached' and `in_use'. `SHOW COLUMNS' lists the columns in a given table. If you specify the `FULL' option, you will also get the privileges you have for each column. If the column types are different from what you expect them to be based on a `CREATE TABLE' statement, note that MySQL sometimes changes column types. *Note Silent column changes::. The `DESCRIBE' statement provides information similar to `SHOW COLUMNS'. *Note `DESCRIBE': DESCRIBE. `SHOW FIELDS' is a synonym for `SHOW COLUMNS', and `SHOW KEYS' is a synonym for `SHOW INDEX'. You can also list a table's columns or indexes with `mysqlshow db_name tbl_name' or `mysqlshow -k db_name tbl_name'. `SHOW INDEX' returns the index information in a format that closely resembles the `SQLStatistics' call in ODBC. The following columns are returned: *Column* *Meaning* `Table' Name of the table. `Non_unique' 0 if the index can't contain duplicates. `Key_name' Name of the index. `Seq_in_index' Column sequence number in index, starting with 1. `Column_name' Column name. `Collation' How the column is sorted in the index. In MySQL, this can have values `A' (Ascending) or `NULL' (Not sorted). `Cardinality' Number of unique values in the index. This is updated by running `isamchk -a'. `Sub_part' Number of indexed characters if the column is only partly indexed. `NULL' if the entire key is indexed. `Null' Contains 'YES' if the column may contain `NULL'. `Index_type' Index method used. `Comment' Various remarks. In MySQL < 4.0.2 it tells whether the index is `FULLTEXT' or not. Note that as the `Cardinality' is counted based on statistics stored as integers, it's not necessarily accurate for small tables. The `Null' and `Index_type' columns were added in MySQL 4.0.2. `SHOW TABLE STATUS' ................... SHOW TABLE STATUS [FROM db_name] [LIKE wild] `SHOW TABLE STATUS' (new in Version 3.23) works like `SHOW STATUS', but provides a lot of information about each table. You can also get this list using the `mysqlshow --status db_name' command. The following columns are returned: *Column* *Meaning* `Name' Name of the table. `Type' Type of table. *Note Table types::. `Row_format' The row storage format (Fixed, Dynamic, or Compressed). `Rows' Number of rows. `Avg_row_length' Average row length. `Data_length' Length of the datafile. `Max_data_length' Max length of the datafile. `Index_length' Length of the index file. `Data_free' Number of allocated but not used bytes. `Auto_increment' Next autoincrement value. `Create_time' When the table was created. `Update_time' When the datafile was last updated. `Check_time' When the table was last checked. `Create_options' Extra options used with `CREATE TABLE'. `Comment' The comment used when creating the table (or some information about why MySQL couldn't access the table information). `InnoDB' tables will report the free space in the tablespace in the table comment. `SHOW STATUS' ............. `SHOW STATUS' provides server status information (like `mysqladmin extended-status').
The output resembles that shown here, though the format and numbers probably differ: +--------------------------+------------+ | Variable_name | Value | +--------------------------+------------+ | Aborted_clients | 0 | | Aborted_connects | 0 | | Bytes_received | 155372598 | | Bytes_sent | 1176560426 | | Connections | 30023 | | Created_tmp_disk_tables | 0 | | Created_tmp_tables | 8340 | | Created_tmp_files | 60 | | Delayed_insert_threads | 0 | | Delayed_writes | 0 | | Delayed_errors | 0 | | Flush_commands | 1 | | Handler_delete | 462604 | | Handler_read_first | 105881 | | Handler_read_key | 27820558 | | Handler_read_next | 390681754 | | Handler_read_prev | 6022500 | | Handler_read_rnd | 30546748 | | Handler_read_rnd_next | 246216530 | | Handler_update | 16945404 | | Handler_write | 60356676 | | Key_blocks_used | 14955 | | Key_read_requests | 96854827 | | Key_reads | 162040 | | Key_write_requests | 7589728 | | Key_writes | 3813196 | | Max_used_connections | 0 | | Not_flushed_key_blocks | 0 | | Not_flushed_delayed_rows | 0 | | Open_tables | 1 | | Open_files | 2 | | Open_streams | 0 | | Opened_tables | 44600 | | Questions | 2026873 | | Select_full_join | 0 | | Select_full_range_join | 0 | | Select_range | 99646 | | Select_range_check | 0 | | Select_scan | 30802 | | Slave_running | OFF | | Slave_open_temp_tables | 0 | | Slow_launch_threads | 0 | | Slow_queries | 0 | | Sort_merge_passes | 30 | | Sort_range | 500 | | Sort_rows | 30296250 | | Sort_scan | 4650 | | Table_locks_immediate | 1920382 | | Table_locks_waited | 0 | | Threads_cached | 0 | | Threads_created | 30022 | | Threads_connected | 1 | | Threads_running | 1 | | Uptime | 80380 | +--------------------------+------------+ The status variables listed above have the following meaning: *Variable* *Meaning* `Aborted_clients' Number of connections aborted because the client died without closing the connection properly. *Note Communication errors::. `Aborted_connects' Number of tries to connect to the MySQL server that failed. *Note Communication errors::. `Bytes_received' Number of bytes received from all clients. `Bytes_sent' Number of bytes sent to all clients. `Com_xxx' Number of times each xxx command has been executed. `Connections' Number of connection attempts to the MySQL server. `Created_tmp_disk_tables'Number of implicit temporary tables on disk created while executing statements. `Created_tmp_tables' Number of implicit temporary tables in memory created while executing statements. `Created_tmp_files' How many temporary files `mysqld' has created. `Delayed_insert_threads'Number of delayed insert handler threads in use. `Delayed_writes' Number of rows written with `INSERT DELAYED'. `Delayed_errors' Number of rows written with `INSERT DELAYED' for which some error occurred (probably `duplicate key'). `Flush_commands' Number of executed `FLUSH' commands. `Handler_commit' Number of internal `COMMIT' commands. `Handler_delete' Number of times a row was deleted from a table. `Handler_read_first' Number of times the first entry was read from an index. If this is high, it suggests that the server is doing a lot of full index scans, for example, `SELECT col1 FROM foo', assuming that col1 is indexed. `Handler_read_key' Number of requests to read a row based on a key. If this is high, it is a good indication that your queries and tables are properly indexed. `Handler_read_next' Number of requests to read next row in key order. This will be incremented if you are querying an index column with a range constraint. 
This also will be incremented if you are doing an index scan. `Handler_read_prev' Number of requests to read the previous row in key order. This is mainly used to optimise `ORDER BY ... DESC'. `Handler_read_rnd' Number of requests to read a row based on a fixed position. This will be high if you are doing a lot of queries that require sorting of the result. `Handler_read_rnd_next' Number of requests to read the next row in the datafile. This will be high if you are doing a lot of table scans. Generally this suggests that your tables are not properly indexed or that your queries are not written to take advantage of the indexes you have. `Handler_rollback' Number of internal `ROLLBACK' commands. `Handler_update' Number of requests to update a row in a table. `Handler_write' Number of requests to insert a row in a table. `Key_blocks_used' The number of used blocks in the key cache. `Key_read_requests' The number of requests to read a key block from the cache. `Key_reads' The number of physical reads of a key block from disk. `Key_write_requests' The number of requests to write a key block to the cache. `Key_writes' The number of physical writes of a key block to disk. `Max_used_connections' The maximum number of connections in use simultaneously. `Not_flushed_key_blocks' Key blocks in the key cache that have changed but haven't yet been flushed to disk. `Not_flushed_delayed_rows' Number of rows waiting to be written in `INSERT DELAYED' queues. `Open_tables' Number of tables that are open. `Open_files' Number of files that are open. `Open_streams' Number of streams that are open (used mainly for logging). `Opened_tables' Number of tables that have been opened. `Rpl_status' Status of failsafe replication. (Not yet in use.) `Select_full_join' Number of joins without keys. (If this is not 0, you should carefully check the indexes of your tables.) `Select_full_range_join' Number of joins where we used a range search on a reference table. `Select_range' Number of joins where we used ranges on the first table. (It's normally not critical even if this is big.) `Select_scan' Number of joins where we did a full scan of the first table. `Select_range_check' Number of joins without keys where we check for key usage after each row. (If this is not 0, you should carefully check the indexes of your tables.) `Questions' Number of queries sent to the server. `Slave_open_temp_tables' Number of temporary tables currently open by the slave thread. `Slave_running' Is `ON' if this is a slave that is connected to a master. `Slow_launch_threads' Number of threads that have taken more than `slow_launch_time' to create. `Slow_queries' Number of queries that have taken more than `long_query_time'. *Note Slow query log::. `Sort_merge_passes' Number of merge passes the sort algorithm has had to do. If this value is large you should consider increasing `sort_buffer'. `Sort_range' Number of sorts that were done with ranges. `Sort_rows' Number of sorted rows. `Sort_scan' Number of sorts that were done by scanning the table. `ssl_xxx' Variables used by SSL; not yet implemented. `Table_locks_immediate' Number of times a table lock was acquired immediately. Available after 3.23.33. `Table_locks_waited' Number of times a table lock could not be acquired immediately and a wait was needed. If this is high, and you have performance problems, you should first optimise your queries, and then either split your table(s) or use replication. Available after 3.23.33. `Threads_cached' Number of threads in the thread cache.
`Threads_connected' Number of currently open connections. `Threads_created' Number of threads created to handle connections. `Threads_running' Number of threads that are not sleeping. `Uptime' How many seconds the server has been up. Some comments about the above: * If `Opened_tables' is big, then your `table_cache' variable is probably too small. * If `Key_reads' is big, then your `key_buffer_size' variable is probably too small. The *cache miss rate* can be calculated with `Key_reads'/`Key_read_requests'. * If `Handler_read_rnd' is big, then you probably have a lot of queries that require MySQL to scan whole tables or you have joins that don't use keys properly. * If `Threads_created' is big, you may want to increase the `thread_cache_size' variable. The thread cache miss rate can be calculated with `Threads_created'/`Connections'. * If `Created_tmp_disk_tables' is big, you may want to increase the `tmp_table_size' variable to make the temporary tables memory-based instead of disk-based. `SHOW VARIABLES' ................ SHOW [GLOBAL | SESSION] VARIABLES [LIKE wild] `SHOW VARIABLES' shows the values of some MySQL system variables. You can also get this information using the `mysqladmin variables' command. If the default values are unsuitable, you can set most of these variables using command-line options when `mysqld' starts up. *Note Command-line options::. The options `GLOBAL' and `SESSION' are new in MySQL 4.0.3. With `GLOBAL' you will get the variables that will be used for new connections to MySQL. With `SESSION' you will get the values that are in effect for the current connection. If you are not using either option, `SESSION' is used. You can change most options with the `SET' command. *Note `SET': SET OPTION. The output resembles that shown here, though the format and numbers may differ somewhat: +---------------------------------+------------------------------+ | Variable_name | Value | +---------------------------------+------------------------------| | back_log | 50 | | basedir | /usr/local/mysql | | bdb_cache_size | 8388572 | | bdb_log_buffer_size | 32768 | | bdb_home | /usr/local/mysql | | bdb_max_lock | 10000 | | bdb_logdir | | | bdb_shared_data | OFF | | bdb_tmpdir | /tmp/ | | bdb_version | Sleepycat Software: ...
| | binlog_cache_size | 32768 | | bulk_insert_buffer_size | 8388608 | | character_set | latin1 | | character_sets | latin1 big5 czech euc_kr | | concurrent_insert | ON | | connect_timeout | 5 | | convert_character_set | | | datadir | /usr/local/mysql/data/ | | delay_key_write | ON | | delayed_insert_limit | 100 | | delayed_insert_timeout | 300 | | delayed_queue_size | 1000 | | flush | OFF | | flush_time | 0 | | ft_boolean_syntax | + -><()~*:""&| | | ft_min_word_len | 4 | | ft_max_word_len | 254 | | ft_max_word_len_for_sort | 20 | | ft_stopword_file | (built-in) | | have_bdb | YES | | have_innodb | YES | | have_isam | YES | | have_raid | NO | | have_symlink | DISABLED | | have_openssl | YES | | have_query_cache | YES | | init_file | | | innodb_additional_mem_pool_size | 1048576 | | innodb_buffer_pool_size | 8388608 | | innodb_data_file_path | ibdata1:10M:autoextend | | innodb_data_home_dir | | | innodb_file_io_threads | 4 | | innodb_force_recovery | 0 | | innodb_thread_concurrency | 8 | | innodb_flush_log_at_trx_commit | 0 | | innodb_fast_shutdown | ON | | innodb_flush_method | | | innodb_lock_wait_timeout | 50 | | innodb_log_arch_dir | | | innodb_log_archive | OFF | | innodb_log_buffer_size | 1048576 | | innodb_log_file_size | 5242880 | | innodb_log_files_in_group | 2 | | innodb_log_group_home_dir | ./ | | innodb_mirrored_log_groups | 1 | | interactive_timeout | 28800 | | join_buffer_size | 131072 | | key_buffer_size | 16773120 | | language | /usr/local/mysql/share/... | | large_files_support | ON | | local_infile | ON | | locked_in_memory | OFF | | log | OFF | | log_update | OFF | | log_bin | OFF | | log_slave_updates | OFF | | log_slow_queries | OFF | | log_warnings | OFF | | long_query_time | 10 | | low_priority_updates | OFF | | lower_case_table_names | OFF | | max_allowed_packet | 1047552 | | max_binlog_cache_size | 4294967295 | | max_binlog_size | 1073741824 | | max_connections | 100 | | max_connect_errors | 10 | | max_delayed_threads | 20 | | max_heap_table_size | 16777216 | | max_join_size | 4294967295 | | max_sort_length | 1024 | | max_user_connections | 0 | | max_tmp_tables | 32 | | max_write_lock_count | 4294967295 | | myisam_max_extra_sort_file_size | 268435456 | | myisam_max_sort_file_size | 2147483647 | | myisam_recover_options | force | | myisam_sort_buffer_size | 8388608 | | net_buffer_length | 16384 | | net_read_timeout | 30 | | net_retry_count | 10 | | net_write_timeout | 60 | | open_files_limit | 0 | | pid_file | /usr/local/mysql/name.pid | | port | 3306 | | protocol_version | 10 | | read_buffer_size | 131072 | | read_rnd_buffer_size | 262144 | | rpl_recovery_rank | 0 | | query_cache_limit | 1048576 | | query_cache_size | 0 | | query_cache_type | ON | | safe_show_database | OFF | | server_id | 0 | | slave_net_timeout | 3600 | | skip_external_locking | ON | | skip_networking | OFF | | skip_show_database | OFF | | slow_launch_time | 2 | | socket | /tmp/mysql.sock | | sort_buffer_size | 2097116 | | sql_mode | 0 | | table_cache | 64 | | table_type | MYISAM | | thread_cache_size | 3 | | thread_stack | 131072 | | tx_isolation | READ-COMMITTED | | timezone | EEST | | tmp_table_size | 33554432 | | tmpdir | /tmp/:/mnt/hd2/tmp/ | | version | 4.0.4-beta | | wait_timeout | 28800 | +---------------------------------+------------------------------+ Each option is described here. Values for buffer sizes, lengths, and stack sizes are given in bytes. You can specify values with a suffix of `K' or `M' to indicate kilobytes or megabytes. For example, `16M' indicates 16 megabytes. 
The case of suffix letters does not matter; `16M' and `16m' are equivalent: * `ansi_mode'. Is `ON' if `mysqld' was started with `--ansi'. *Note ANSI mode::. * `back_log' The number of outstanding connection requests MySQL can have. This comes into play when the main MySQL thread gets *very* many connection requests in a very short time. It then takes some time (although very little) for the main thread to check the connection and start a new thread. The `back_log' value indicates how many requests can be stacked during this short time before MySQL momentarily stops answering new requests. You need to increase this only if you expect a large number of connections in a short period of time. In other words, this value is the size of the listen queue for incoming TCP/IP connections. Your operating system has its own limit on the size of this queue. The manual page for the Unix `listen(2)' system call should have more details. Check your OS documentation for the maximum value for this variable. Attempting to set `back_log' higher than your operating system limit will be ineffective. * `basedir' The value of the `--basedir' option. * `bdb_cache_size' The buffer that is allocated to cache index and rows for `BDB' tables. If you don't use `BDB' tables, you should start `mysqld' with `--skip-bdb' to not waste memory for this cache. * `bdb_log_buffer_size' The buffer that is allocated to cache index and rows for `BDB' tables. If you don't use `BDB' tables, you should set this to 0 or start `mysqld' with `--skip-bdb' to not waste memory for this cache. * `bdb_home' The value of the `--bdb-home' option. * `bdb_max_lock' The maximum number of locks (10,000 by default) you can have active on a BDB table. You should increase this if you get errors of type `bdb: Lock table is out of available locks' or `Got error 12 from ...' when you have do long transactions or when `mysqld' has to examine a lot of rows to calculate the query. * `bdb_logdir' The value of the `--bdb-logdir' option. * `bdb_shared_data' Is `ON' if you are using `--bdb-shared-data'. * `bdb_tmpdir' The value of the `--bdb-tmpdir' option. * `binlog_cache_size'. The size of the cache to hold the SQL statements for the binary log during a transaction. If you often use big, multi-statement transactions you can increase this to get more performance. *Note COMMIT::. * `bulk_insert_buffer_size' (was `myisam_bulk_insert_tree_size') MyISAM uses special tree-like cache to make bulk inserts (that is, `INSERT ... SELECT', `INSERT ... VALUES (...), (...), ...', and `LOAD DATA INFILE') faster. This variable limits the size of the cache tree in bytes per thread. Setting it to 0 will disable this optimisation. *Note*: this cache is only used when adding data to non-empty table. Default value is 8 MB. * `character_set' The default character set. * `character_sets' The supported character sets. * `concurrent_inserts' If `ON' (the default), MySQL will allow you to use `INSERT' on `MyISAM' tables at the same time as you run `SELECT' queries on them. You can turn this option off by starting `mysqld' with `--safe' or `--skip-new'. * `connect_timeout' The number of seconds the `mysqld' server is waiting for a connect packet before responding with `Bad handshake'. * `datadir' The value of the `--datadir' option. * `delay_key_write' Option for MyISAM tables. Can have one of the following values: OFF All CREATE TABLE ... DELAYED_KEY_WRITES are ignored. ON (default) MySQL will honor the `DELAY_KEY_WRITE' option for `CREATE TABLE'. 
ALL All new opened tables are treated as if they were created with the `DELAY_KEY_WRITE' option. If `DELAY_KEY_WRITE' is enabled this means that the key buffer for tables with this option will not get flushed on every index update, but only when a table is closed. This will speed up writes on keys a lot, but you should add automatic checking of all tables with `myisamchk --fast --force' if you use this. * `delayed_insert_limit' After inserting `delayed_insert_limit' rows, the `INSERT DELAYED' handler will check if there are any `SELECT' statements pending. If so, it allows these to execute before continuing. * `delayed_insert_timeout' How long a `INSERT DELAYED' thread should wait for `INSERT' statements before terminating. * `delayed_queue_size' What size queue (in rows) should be allocated for handling `INSERT DELAYED'. If the queue becomes full, any client that does `INSERT DELAYED' will wait until there is room in the queue again. * `flush' This is `ON' if you have started MySQL with the `--flush' option. * `flush_time' If this is set to a non-zero value, then every `flush_time' seconds all tables will be closed (to free up resources and sync things to disk). We only recommend this option on Windows 9x/Me, or on systems where you have very little resources. * `ft_boolean_syntax' List of operators supported by `MATCH ... AGAINST(... IN BOOLEAN MODE)'. *Note Fulltext Search::. * `ft_min_word_len' The minimum length of the word to be included in a `FULLTEXT' index. *Note: `FULLTEXT' indexes must be rebuilt after changing this variable.* (This option is new for MySQL 4.0.) * `ft_max_word_len' The maximum length of the word to be included in a `FULLTEXT' index. *Note: `FULLTEXT' indexes must be rebuilt after changing this variable.* (This option is new for MySQL 4.0.) * `ft_max_word_len_for_sort' The maximum length of the word in a `FULLTEXT' index to be used in fast index recreation method in `REPAIR', `CREATE INDEX', or `ALTER TABLE'. Longer words are inserted the slow way. The rule of the thumb is as follows: with `ft_max_word_len_for_sort' increasing, *MySQL* will create bigger temporary files (thus slowing the process down, due to disk I/O), and will put fewer keys in one sort block (again, decreasing the efficiency). When `ft_max_word_len_for_sort' is too small, instead, *MySQL* will insert a lot of words into index the slow way, but short words will be inserted very quickly. * `ft_stopword_file' The file to read the list of stopwords for fulltext search from. All the words from the file will be used, comments are *not* honored. By default, built-in list of stopwords is used (as defined in `myisam/ft_static.c'). Setting this parameter to an empty string (`""') will disable stopword filtering. *Note: `FULLTEXT' indexes must be rebuilt after changing this variable.* (This option is new for MySQL 4.0.10) * `have_innodb' `YES' if `mysqld' supports InnoDB tables. `DISABLED' if `--skip-innodb' is used. * `have_bdb' `YES' if `mysqld' supports Berkeley DB tables. `DISABLED' if `--skip-bdb' is used. * `have_raid' `YES' if `mysqld' supports the `RAID' option. * `have_openssl' `YES' if `mysqld' supports SSL (encryption) on the client/server protocol. * `init_file' The name of the file specified with the `--init-file' option when you start the server. This is a file of SQL statements you want the server to execute when it starts. * `interactive_timeout' The number of seconds the server waits for activity on an interactive connection before closing it. 
An interactive client is defined as a client that uses the `CLIENT_INTERACTIVE' option to `mysql_real_connect()'. See also `wait_timeout'. * `join_buffer_size' The size of the buffer that is used for full joins (joins that do not use indexes). The buffer is allocated one time for each full join between two tables. Increase this value to get a faster full join when adding indexes is not possible. (Normally the best way to get fast joins is to add indexes.) * `key_buffer_size' Index blocks are buffered and are shared by all threads. `key_buffer_size' is the size of the buffer used for index blocks. Increase this to get better index handling (for all reads and multiple writes) to as much as you can afford; 64M on a 256M machine that mainly runs MySQL is quite common. If you, however, make this too big (for instance more than 50% of your total memory) your system may start to page and become extremely slow. Remember that because MySQL does not cache data reads, you will have to leave some room for the OS filesystem cache. You can check the performance of the key buffer by doing `SHOW STATUS' and examine the variables `Key_read_requests', `Key_reads', `Key_write_requests', and `Key_writes'. The `Key_reads/Key_read_request' ratio should normally be < 0.01. The `Key_write/Key_write_requests' is usually near 1 if you are using mostly updates/deletes but may be much smaller if you tend to do updates that affect many at the same time or if you are using `DELAY_KEY_WRITE'. *Note `SHOW': SHOW. To get even more speed when writing many rows at the same time, use `LOCK TABLES'. *Note `LOCK TABLES': LOCK TABLES. * `language' The language used for error messages. * `large_file_support' If `mysqld' was compiled with options for big file support. * `locked_in_memory' If `mysqld' was locked in memory with `--memlock' * `log' If logging of all queries is enabled. * `log_update' If the update log is enabled. * `log_bin' If the binary log is enabled. * `log_slave_updates' If the updates from the slave should be logged. * `long_query_time' If a query takes longer than this (in seconds), the `Slow_queries' counter will be incremented. If you are using `--log-slow-queries', the query will be logged to the slow query logfile. This value is measured in real time, not CPU time, so a query that may be under the threshold on a lightly loaded system may be above the threshold on a heavily loaded one. *Note Slow query log::. * `lower_case_table_names' If set to 1 table names are stored in lowercase on disk and table name comparisons will be case-insensitive. From version 4.0.2, this option also applies to database names. *Note Name case sensitivity::. * `max_allowed_packet' The maximum size of one packet. The message buffer is initialised to `net_buffer_length' bytes, but can grow up to `max_allowed_packet' bytes when needed. This value by default is small, to catch big (possibly wrong) packets. You must increase this value if you are using big `BLOB' columns. It should be as big as the biggest `BLOB' you want to use. The protocol limits for `max_allowed_packet' is 16M in MySQL 3.23 and 1G in MySQL 4.0. * `max_binlog_cache_size' If a multi-statement transaction requires more than this amount of memory, one will get the error "Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage". * `max_binlog_size' Available after 3.23.33. If a write to the binary (replication) log exceeds the given value, rotate the logs. You cannot set it to less than 1024 bytes, or more than 1 GB. Default is 1 GB. 
* `max_connections' The number of simultaneous clients allowed. Increasing this value increases the number of file descriptors that `mysqld' requires. See below for comments on file descriptor limits. *Note Too many connections::. * `max_connect_errors' If there is more than this number of interrupted connections from a host this host will be blocked from further connections. You can unblock a host with the command `FLUSH HOSTS'. * `max_delayed_threads' Don't start more than this number of threads to handle `INSERT DELAYED' statements. If you try to insert data into a new table after all `INSERT DELAYED' threads are in use, the row will be inserted as if the `DELAYED' attribute wasn't specified. * `max_heap_table_size' Don't allow creation of heap tables bigger than this. * `max_join_size' Joins that are probably going to read more than `max_join_size' records return an error. Set this value if your users tend to perform joins that lack a `WHERE' clause, that take a long time, and that return millions of rows. * `max_sort_length' The number of bytes to use when sorting `BLOB' or `TEXT' values (only the first `max_sort_length' bytes of each value are used; the rest are ignored). * `max_user_connections' The maximum number of active connections for a single user (0 = no limit). * `max_tmp_tables' (This option doesn't yet do anything.) Maximum number of temporary tables a client can keep open at the same time. * `max_write_lock_count' After this many write locks, allow some read locks to run in between. * `myisam_recover_options' The value of the `--myisam-recover' option. * `myisam_sort_buffer_size' The buffer that is allocated when sorting the index when doing a `REPAIR' or when creating indexes with `CREATE INDEX' or `ALTER TABLE'. * `myisam_max_extra_sort_file_size'. If the temporary file used for fast index creation would be bigger than using the key cache by the amount specified here, then prefer the key cache method. This is mainly used to force long character keys in large tables to use the slower key cache method to create the index. *Note* that this parameter is given in megabytes before 4.0.3 and in bytes starting from this version. * `myisam_max_sort_file_size' The maximum size of the temporary file MySQL is allowed to use while recreating the index (during `REPAIR', `ALTER TABLE' or `LOAD DATA INFILE'. If the file-size would be bigger than this, the index will be created through the key cache (which is slower). *Note* that this parameter is given in megabytes before 4.0.3 and in bytes starting from this version. * `net_buffer_length' The communication buffer is reset to this size between queries. This should not normally be changed, but if you have very little memory, you can set it to the expected size of a query. (That is, the expected length of SQL statements sent by clients. If statements exceed this length, the buffer is automatically enlarged, up to `max_allowed_packet' bytes.) * `net_read_timeout' Number of seconds to wait for more data from a connection before aborting the read. Note that when we don't expect data from a connection, the timeout is defined by `write_timeout'. See also `slave_net_timeout'. * `net_retry_count' If a read on a communication port is interrupted, retry this many times before giving up. This value should be quite high on `FreeBSD' as internal interrupts are sent to all threads. * `net_write_timeout' Number of seconds to wait for a block to be written to a connection before aborting the write. 
* `open_files_limit' If this is not 0, then `mysqld' will use this value to reserve file descriptors to use with `setrlimit()'. If this value is 0 then `mysqld' will reserve `max_connections*5' or `max_connections + table_cache*2' (whichever is larger) number of files. You should try increasing this if `mysqld' gives you the error 'Too many open files'. * `pid_file' The value of the `--pid-file' option. * `port' The value of the `--port' option. * `protocol_version' The protocol version used by the MySQL server. * `read_buffer_size' (was `record_buffer') Each thread that does a sequential scan allocates a buffer of this size for each table it scans. If you do many sequential scans, you may want to increase this value. * `record_rnd_buffer_size' When reading rows in sorted order after a sort, the rows are read through this buffer to avoid a disk seeks. Can improve `ORDER BY' by a lot if set to a high value. As this is a thread-specific variable, one should not set this big globally, but just change this when running some specific big queries. * `query_cache_limit' Don't cache results that are bigger than this. (Default 1M). * `query_cache_size' The memory allocated to store results from old queries. If this is 0, the query cache is disabled (default). * `query_cache_type' This may be set (only numeric) to *Value**Alias* *Comment* 0 OFF Don't cache or retrieve results. 1 ON Cache all results except `SELECT SQL_NO_CACHE ...' queries. 2 DEMAND Cache only `SELECT SQL_CACHE ...' queries. * `safe_show_database' Don't show databases for which the user doesn't have any database or table privileges. This can improve security if you're concerned about people being able to see what databases other users have. See also `skip_show_database'. * `server_id' The value of the `--server-id' option. * `skip_locking' Is OFF if `mysqld' uses external locking. * `skip_networking' Is ON if we only allow local (socket) connections. * `skip_show_database' This prevents people from doing `SHOW DATABASES' if they don't have the `PROCESS' privilege. This can improve security if you're concerned about people being able to see what databases other users have. See also `safe_show_database'. * `slave_net_timeout' Number of seconds to wait for more data from a master/slave connection before aborting the read. * `slow_launch_time' If creating the thread takes longer than this value (in seconds), the `Slow_launch_threads' counter will be incremented. * `socket' The Unix socket used by the server. * `sort_buffer_size' Each thread that needs to do a sort allocates a buffer of this size. Increase this value for faster `ORDER BY' or `GROUP BY' operations. *Note Temporary files::. * `table_cache' The number of open tables for all threads. Increasing this value increases the number of file descriptors that `mysqld' requires. You can check if you need to increase the table cache by checking the `Opened_tables' variable. *Note `Opened_tables': SHOW STATUS. If this variable is big and you don't do `FLUSH TABLES' a lot (which just forces all tables to be closed and reopenend), then you should increase the value of this variable. For more information about the table cache, see *Note Table cache::. * `table_type' The default table type. * `thread_cache_size' How many threads we should keep in a cache for reuse. When a client disconnects, the client's threads are put in the cache if there aren't more than `thread_cache_size' threads from before. 
All new threads are first taken from the cache, and only when the cache is empty is a new thread created. This variable can be increased to improve performance if you have a lot of new connections. (Normally this doesn't give a notable performance improvement if you have a good thread implementation.) By examining the difference between the `Connections' and `Threads_created' status variables (*note SHOW STATUS:: for details) you can see how efficient the thread cache is. * `thread_concurrency' On Solaris, `mysqld' will call `thr_setconcurrency()' with this value. `thr_setconcurrency()' permits the application to give the threads system a hint for the desired number of threads that should be run at the same time. * `thread_stack' The stack size for each thread. Many of the limits detected by the `crash-me' test are dependent on this value. The default is large enough for normal operation. *Note MySQL Benchmarks::. * `timezone' The timezone for the server. * `tmp_table_size' If an in-memory temporary table exceeds this size, MySQL will automatically convert it to an on-disk `MyISAM' table. Increase the value of `tmp_table_size' if you do many advanced `GROUP BY' queries and you have lots of memory. * `tmpdir' The directory used for temporary files and temporary tables. Starting from MySQL 4.1, it can be set to a list of paths separated by colon `:' (semicolon `;' on Windows). They will be used in round-robin fashion. This feature can be used to spread load between several physical disks. * `version' The version number for the server. * `wait_timeout' The number of seconds the server waits for activity on a non-interactive connection before closing it. On thread startup `SESSION.WAIT_TIMEOUT' is initialised from `GLOBAL.WAIT_TIMEOUT' or `GLOBAL.INTERACTIVE_TIMEOUT' depending on the type of client (as defined by the `CLIENT_INTERACTIVE' connect option). See also `interactive_timeout'. The manual section that describes tuning MySQL contains some information about how to tune the above variables. *Note Server parameters::. `SHOW LOGS' ........... `SHOW LOGS' shows you status information about existing log files. It currently only displays information about Berkeley DB log files. * `File' shows the full path to the log file * `Type' shows the type of the log file (`BDB' for Berkeley DB log files) * `Status' shows the status of the log file (`FREE' if the file can be removed, or `IN USE' if the file is needed by the transaction subsystem) `SHOW PROCESSLIST' .................. `SHOW [FULL] PROCESSLIST' shows you which threads are running. You can also get this information using the `mysqladmin processlist' command. If you have the `SUPER' privilege, you can see all threads. Otherwise, you can see only your own threads. *Note `KILL': KILL. If you don't use the `FULL' option, then only the first 100 characters of each query will be shown. Starting from 4.0.12, MySQL reports the hostname for TCP/IP connections as `hostname:client_port' to make it easier to find out which client is doing what. This command is very useful if you get the 'too many connections' error message and want to find out what's going on. MySQL reserves one extra connection for a client with the `SUPER' privilege to ensure that you will always be able to log in and check the system (assuming you are not giving this privilege to all your users). Some states commonly seen in `mysqladmin processlist': * `Checking table' The thread is performing [automatic] checking of the table.
`SHOW LOGS'
...........

`SHOW LOGS' shows you status information about existing log files. It currently only displays information about Berkeley DB log files.
* `File' shows the full path to the log file.
* `Type' shows the type of the log file (`BDB' for Berkeley DB log files).
* `Status' shows the status of the log file (`FREE' if the file can be removed, or `IN USE' if the file is needed by the transaction subsystem).

`SHOW PROCESSLIST'
..................

`SHOW [FULL] PROCESSLIST' shows you which threads are running. You can also get this information using the `mysqladmin processlist' command. If you have the `SUPER' privilege, you can see all threads. Otherwise, you can see only your own threads. *Note `KILL': KILL. If you don't use the `FULL' option, then only the first 100 characters of each query will be shown. Starting from 4.0.12, MySQL reports the hostname for TCP/IP connections as `hostname:client_port' to make it easier to find out which client is doing what.
This command is very useful if you get the 'too many connections' error message and want to find out what's going on. MySQL reserves one extra connection for a client with the `SUPER' privilege to ensure that you will always be able to log in and check the system (assuming you are not giving this privilege to all your users).
Some states commonly seen in `mysqladmin processlist':
* `Checking table' The thread is performing [automatic] checking of the table.
* `Closing tables' Means that the thread is flushing the changed table data to disk and closing the used tables. This should be a fast operation. If it is not, you should check that you don't have a full disk and that the disk is not in very heavy use.
* `Connect Out' Slave is connecting to its master.
* `Copying to tmp table on disk' The temporary result set was larger than `tmp_table_size' and the thread is now changing the in-memory temporary table to a disk-based one to save memory.
* `Creating tmp table' The thread is creating a temporary table to hold a part of the result for the query.
* `deleting from main table' The thread is executing the first part of a multi-table delete and is deleting only from the first table.
* `deleting from reference tables' The thread is executing the second part of a multi-table delete and is deleting the matched rows from the other tables.
* `Flushing tables' The thread is executing `FLUSH TABLES' and is waiting for all threads to close their tables.
* `Killed' Someone has sent a kill to the thread and it should abort the next time it checks the kill flag. The flag is checked in each major loop in MySQL, but in some cases it may still take a short time for the thread to die. If the thread is locked by some other thread, the kill will take effect as soon as the other thread releases its lock.
* `Sending data' The thread is processing rows for a `SELECT' statement and is also sending data to the client.
* `Sorting for group' The thread is doing a sort to satisfy a `GROUP BY'.
* `Sorting for order' The thread is doing a sort to satisfy an `ORDER BY'.
* `Opening tables' This simply means that the thread is trying to open a table. This should be a very fast procedure, unless something prevents the open. For example, an `ALTER TABLE' or a `LOCK TABLE' statement can prevent opening a table until the command is finished.
* `Removing duplicates' The query was using `SELECT DISTINCT' in such a way that MySQL couldn't optimise the distinct away at an early stage. Because of this, MySQL has to do an extra stage to remove all duplicated rows before sending the result to the client.
* `Reopen table' The thread got a lock for the table, but noticed after getting the lock that the underlying table structure had changed. It has freed the lock, closed the table, and is now trying to reopen it.
* `Repair by sorting' The repair code is using sorting to create indexes.
* `Repair with keycache' The repair code is creating keys one by one through the key cache. This is much slower than `Repair by sorting'.
* `Searching rows for update' The thread is doing a first phase to find all matching rows before updating them. This has to be done if the `UPDATE' is changing the index that is used to find the involved rows.
* `Sleeping' The thread is waiting for the client to send a new command to it.
* `System lock' The thread is waiting to get an external system lock for the table. If you are not using multiple mysqld servers that are accessing the same tables, you can disable system locks with the `--skip-external-locking' option.
* `Upgrading lock' The `INSERT DELAYED' handler is trying to get a lock for the table to insert rows.
* `Updating' The thread is searching for rows to update and updating them.
* `User Lock' The thread is waiting on a `GET_LOCK()'.
* `Waiting for tables' The thread got a notification that the underlying structure for a table has changed and it needs to reopen the table to get the new structure. To be able to reopen the table, it must however wait until all other threads have closed the table in question. This notification happens if another thread has used `FLUSH TABLES' or one of the following commands on the table in question: `FLUSH TABLES table_name', `ALTER TABLE', `RENAME TABLE', `REPAIR TABLE', `ANALYZE TABLE', or `OPTIMIZE TABLE'.
* `waiting for handler insert' The `INSERT DELAYED' handler has processed all inserts and is waiting for new ones.
Most states are very quick operations. If threads last in any of these states for many seconds, there may be a problem that needs to be investigated.
There are some other states that are not mentioned here, but most of them are useful only for finding bugs in `mysqld'.
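To put the states above in context, a `SHOW FULL PROCESSLIST' result typically looks something like the following; the thread ids, users, hosts, and queries shown here are made up for illustration, but the columns are the ones the server actually returns:

     mysql> SHOW FULL PROCESSLIST;
     +----+-------+--------------+------+---------+------+--------------+-----------------------+
     | Id | User  | Host         | db   | Command | Time | State        | Info                  |
     +----+-------+--------------+------+---------+------+--------------+-----------------------+
     |  6 | monty | localhost    | test | Query   |    0 | NULL         | SHOW FULL PROCESSLIST |
     |  9 | jani  | web1:1042    | shop | Query   |   12 | Sending data | SELECT * FROM orders  |
     +----+-------+--------------+------+---------+------+--------------+-----------------------+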
`SHOW GRANTS'
.............

`SHOW GRANTS FOR user' lists the grant commands that must be issued to duplicate the grants for a user.

     mysql> SHOW GRANTS FOR root@localhost;
     +---------------------------------------------------------------------+
     | Grants for root@localhost                                           |
     +---------------------------------------------------------------------+
     | GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION |
     +---------------------------------------------------------------------+

To list the grants for the current session, you can use the `CURRENT_USER()' function (new in version 4.0.6) to find out what user the session was authenticated as. *Note `CURRENT_USER()': Miscellaneous functions.

`SHOW CREATE TABLE'
...................

Shows a `CREATE TABLE' statement that will create the given table:

     mysql> SHOW CREATE TABLE t\G
     *************************** 1. row ***************************
            Table: t
     Create Table: CREATE TABLE t (
       id int(11) default NULL auto_increment,
       s char(60) default NULL,
       PRIMARY KEY (id)
     ) TYPE=MyISAM

`SHOW CREATE TABLE' will quote table and column names according to the `SQL_QUOTE_SHOW_CREATE' option. *Note `SET SQL_QUOTE_SHOW_CREATE': SET OPTION.

`SHOW WARNINGS | ERRORS'
........................

     SHOW WARNINGS [LIMIT #]
     SHOW ERRORS [LIMIT #]

This command is implemented in MySQL 4.1.0. It shows the errors, warnings, and notes that resulted from the last command. The errors/warnings are reset for each new command that uses a table.
The MySQL server sends back the total number of warnings and errors you got for the last command; this can be retrieved by calling `mysql_warning_count()'. Up to `max_error_count' messages are stored (`max_error_count' is both a global and a thread-specific variable). You can retrieve the number of errors from `@error_count' and the number of warnings from `@warning_count'.
`SHOW WARNINGS' shows all errors, warnings, and notes you got for the last command, while `SHOW ERRORS' shows you only the errors.

     mysql> DROP TABLE IF EXISTS no_such_table;
     mysql> SHOW WARNINGS;
     +-------+------+-------------------------------+
     | Level | Code | Message                       |
     +-------+------+-------------------------------+
     | Note  | 1051 | Unknown table 'no_such_table' |
     +-------+------+-------------------------------+

`SHOW TABLE TYPES'
..................

     SHOW TABLE TYPES

This command is implemented in MySQL 4.1.0. `SHOW TABLE TYPES' shows you status information about the table types. This is particularly useful for checking whether a table type is supported, or to see what the default table type is.
     mysql> SHOW TABLE TYPES;
     +--------+---------+------------------------------------------------------------+
     | Type   | Support | Comment                                                    |
     +--------+---------+------------------------------------------------------------+
     | MyISAM | DEFAULT | Default type from 3.23 with great performance              |
     | HEAP   | YES     | Hash based, stored in memory, useful for temporary tables  |
     | MERGE  | YES     | Collection of identical MyISAM tables                      |
     | ISAM   | YES     | Obsolete table type; Is replaced by MyISAM                 |
     | InnoDB | YES     | Supports transactions, row-level locking and foreign keys  |
     | BDB    | NO      | Supports transactions and page-level locking               |
     +--------+---------+------------------------------------------------------------+
     6 rows in set (0.00 sec)

A 'Support' value of `DEFAULT' indicates that the particular table type is supported and that it is the default type. If the server is started with `--default-table-type=InnoDB', the InnoDB 'Support' field will have the value `DEFAULT'.

`SHOW PRIVILEGES'
.................

     SHOW PRIVILEGES

This command is implemented in MySQL 4.1.0. `SHOW PRIVILEGES' shows the list of system privileges that the underlying MySQL server supports.

     mysql> show privileges;
     +------------+--------------------------+-------------------------------------------------------+
     | Privilege  | Context                  | Comment                                               |
     +------------+--------------------------+-------------------------------------------------------+
     | Select     | Tables                   | To retrieve rows from table                           |
     | Insert     | Tables                   | To insert data into tables                            |
     | Update     | Tables                   | To update existing rows                               |
     | Delete     | Tables                   | To delete existing rows                               |
     | Index      | Tables                   | To create or drop indexes                             |
     | Alter      | Tables                   | To alter the table                                    |
     | Create     | Databases,Tables,Indexes | To create new databases and tables                    |
     | Drop       | Databases,Tables         | To drop databases and tables                          |
     | Grant      | Databases,Tables         | To give to other users those privileges you possess   |
     | References | Databases,Tables         | To have references on tables                          |
     | Reload     | Server Admin             | To reload or refresh tables, logs and privileges      |
     | Shutdown   | Server Admin             | To shutdown the server                                |
     | Process    | Server Admin             | To view the plain text of currently executing queries |
     | File       | File access on server    | To read and write files on the server                 |
     +------------+--------------------------+-------------------------------------------------------+
     14 rows in set (0.00 sec)

MySQL Localisation and International Usage
==========================================

The Character Set Used for Data and Sorting
-------------------------------------------

By default, MySQL uses the ISO-8859-1 (Latin1) character set with sorting according to Swedish/Finnish rules. This character set is suitable for the USA and western Europe.
All standard MySQL binaries are compiled with `--with-extra-charsets=complex'. This adds code to all standard programs so that they can handle `latin1' and all multi-byte character sets within the binary. Other character sets will be loaded from a character-set definition file when needed.
The character set determines what characters are allowed in names and how things are sorted by the `ORDER BY' and `GROUP BY' clauses of the `SELECT' statement.
You can change the character set with the `--default-character-set' option when you start the server. The character sets available depend on the `--with-charset=charset' and `--with-extra-charsets= list-of-charset | complex | all' options to `configure', and the character set configuration files listed in `SHAREDIR/charsets/Index'. *Note configure options::.
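For example, to make the server use `latin2' as its default character set, you could put the following in an option file and restart the server, then verify what is in effect with `SHOW VARIABLES'. (This is only a sketch: `latin2' is just one of the available character sets, and the exact variable names shown by `SHOW VARIABLES' differ slightly between MySQL versions.)

     [mysqld]
     default-character-set=latin2

     mysql> SHOW VARIABLES LIKE 'character_set%';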
If you change the character set when running MySQL (which may also change the sort order), you must run `myisamchk -r -q --set-character-set=charset' on all tables. Otherwise, your indexes may not be ordered correctly.
When a client connects to a MySQL server, the server sends the default character set in use to the client. The client will switch to use this character set for this connection.
You should use `mysql_real_escape_string()' when escaping strings for an SQL query. `mysql_real_escape_string()' is identical to the old `mysql_escape_string()' function, except that it takes the `MYSQL' connection handle as the first parameter.
If the client is compiled with different paths than those where the server is installed, and the user who configured MySQL didn't include all character sets in the MySQL binary, you must tell the client where it can find the additional character sets it will need if the server runs with a different character set than the client. You can specify this by putting the following in a MySQL option file:

     [client]
     character-sets-dir=/usr/local/mysql/share/mysql/charsets

where the path points to the directory in which the dynamic MySQL character sets are stored.
You can force the client to use a specific character set by specifying:

     [client]
     default-character-set=character-set-name

but normally this is never needed.

German character set
....................

To get German sorting order, you should start `mysqld' with `--default-character-set=latin1_de'. This will give you the following characteristics.
When sorting and comparing strings, the following mapping is done on the strings before the comparison:

     ä -> ae
     ö -> oe
     ü -> ue
     ß -> ss

All accented characters are converted to their unaccented uppercase counterparts, and all letters are converted to uppercase.
When comparing strings with `LIKE', the one-to-two character mapping is not done. All letters are converted to uppercase, and accents are removed from all letters except `Ü', `ü', `Ö', `ö', `Ä', and `ä'.

Non-English Error Messages
--------------------------

`mysqld' can issue error messages in the following languages: Czech, Danish, Dutch, English (the default), Estonian, French, German, Greek, Hungarian, Italian, Japanese, Korean, Norwegian, Norwegian-ny, Polish, Portuguese, Romanian, Russian, Slovak, Spanish, and Swedish.
To start `mysqld' with a particular language, use either the `--language=lang' or `-L lang' option. For example:

     shell> mysqld --language=swedish

or:

     shell> mysqld --language=/usr/local/share/swedish

Note that all language names are specified in lowercase.
The language files are located (by default) in `MYSQL_BASE_DIR/share/LANGUAGE/'.
To update the error message file, you should edit the `errmsg.txt' file and execute the following command to generate the `errmsg.sys' file:

     shell> comp_err errmsg.txt errmsg.sys

If you upgrade to a newer version of MySQL, remember to repeat your changes with the new `errmsg.txt' file.

Adding a New Character Set
--------------------------

To add another character set to MySQL, use the following procedure.
Decide if the set is simple or complex. If the character set does not need to use special string collating routines for sorting and does not need multi-byte character support, it is simple. If it needs either of those features, it is complex. For example, `latin1' and `danish' are simple character sets, while `big5' and `czech' are complex character sets.
In the following section, we have assumed that you name your character set `MYSET'.
For a simple character set, do the following:
1. Add MYSET to the end of the `sql/share/charsets/Index' file and assign a unique number to it.
2. Create the file `sql/share/charsets/MYSET.conf'. (You can use `sql/share/charsets/latin1.conf' as a base for this.) The syntax for the file is very simple:
   * Comments start with a '#' character and continue to the end of the line.
   * Words are separated by arbitrary amounts of whitespace.
   * When defining the character set, every word must be a number in hexadecimal format.
   * The `ctype' array takes up the first 257 words. The `to_lower[]', `to_upper[]', and `sort_order[]' arrays take up 256 words each after that. *Note Character arrays::.
3. Add the character set name to the `CHARSETS_AVAILABLE' and `COMPILED_CHARSETS' lists in `configure.in'.
4. Reconfigure, recompile, and test.

For a complex character set, do the following:

1. Create the file `strings/ctype-MYSET.c' in the MySQL source distribution.
2. Add MYSET to the end of the `sql/share/charsets/Index' file and assign a unique number to it.
3. Look at one of the existing `ctype-*.c' files (for example, `strings/ctype-big5.c') to see what needs to be defined. Note that the arrays in your file must have names like `ctype_MYSET', `to_lower_MYSET', and so on. These correspond to the arrays for a simple character set. *Note Character arrays::.
4. Near the top of the file, place a special comment like this:

     /*
      * This comment is parsed by configure to create ctype.c,
      * so don't change it unless you know what you are doing.
      *
      * .configure. number_MYSET=MYNUMBER
      * .configure. strxfrm_multiply_MYSET=N
      * .configure. mbmaxlen_MYSET=N
      */

   The `configure' program uses this comment to include the character set into the MySQL library automatically. The `strxfrm_multiply' and `mbmaxlen' lines are explained in the following sections. Include them only if you need the string collating functions or the multi-byte character set functions, respectively.
5. You should then create some of the following functions:
   * `my_strncoll_MYSET()'
   * `my_strcoll_MYSET()'
   * `my_strxfrm_MYSET()'
   * `my_like_range_MYSET()'
   *Note String collating::.
6. Add the character set name to the `CHARSETS_AVAILABLE' and `COMPILED_CHARSETS' lists in `configure.in'.
7. Reconfigure, recompile, and test.

The file `sql/share/charsets/README' includes some more instructions.
If you want to have the character set included in the MySQL distribution, mail a patch to .

The Character Definition Arrays
-------------------------------

`to_lower[]' and `to_upper[]' are simple arrays that hold the lowercase and uppercase characters corresponding to each member of the character set. For example:

     to_lower['A'] should contain 'a'
     to_upper['a'] should contain 'A'

`sort_order[]' is a map indicating how characters should be ordered for comparison and sorting purposes. Quite often (but not for all character sets) this is the same as `to_upper[]' (which means sorting will be case-insensitive). MySQL will sort characters based on the value of `sort_order[character]'. For more complicated sorting rules, see the discussion of string collating below. *Note String collating::.
`ctype[]' is an array of bit values, with one element per character. (Note that `to_lower[]', `to_upper[]', and `sort_order[]' are indexed by character value, but `ctype[]' is indexed by character value + 1. This is an old legacy that makes it possible to handle `EOF'.)
You can find the following bitmask definitions in `m_ctype.h':

     #define _U  01    /* Uppercase */
     #define _L  02    /* Lowercase */
     #define _N  04    /* Numeral (digit) */
     #define _S  010   /* Spacing character */
     #define _P  020   /* Punctuation */
     #define _C  040   /* Control character */
     #define _B  0100  /* Blank */
     #define _X  0200  /* heXadecimal digit */

The `ctype[]' entry for each character should be the union of the applicable bitmask values that describe the character. For example, `'A'' is an uppercase character (`_U') as well as a hexadecimal digit (`_X'), so `ctype['A'+1]' should contain the value:

     _U + _X = 01 + 0200 = 0201

String Collating Support
------------------------

If the sorting rules for your language are too complex to be handled with the simple `sort_order[]' table, you need to use the string collating functions.
Right now the best documentation on this is the character sets that are already implemented. Look at the `big5', `czech', `gbk', `sjis', and `tis160' character sets for examples.
You must specify the `strxfrm_multiply_MYSET=N' value in the special comment at the top of the file. `N' should be set to the maximum ratio the strings may grow during `my_strxfrm_MYSET' (it must be a positive integer).

Multi-byte Character Support
----------------------------

If you want to add support for a new character set that includes multi-byte characters, you need to use the multi-byte character functions.
Right now the best documentation on this is the character sets that are already implemented. Look at the `euc_kr', `gb2312', `gbk', `sjis', and `ujis' character sets for examples. These are implemented in the `ctype-charset.c' files in the `strings' directory.
You must specify the `mbmaxlen_MYSET=N' value in the special comment at the top of the source file. `N' should be set to the size in bytes of the largest character in the set.

Problems With Character Sets
----------------------------

If you try to use a character set that is not compiled into your binary, you can run into a couple of different problems:
* Your program has the wrong path to where the character sets are stored. (The default is `/usr/local/mysql/share/mysql/charsets'.) This can be fixed by using the `--character-sets-dir' option to the program in question.
* The character set is a multi-byte character set that can't be loaded dynamically. In this case you have to recompile the program with support for the character set.
* The character set is a dynamic character set, but you don't have a configuration file for it. In this case you should install the configuration file for the character set from a new MySQL distribution.
* Your `Index' file doesn't contain the name of the character set:

     ERROR 1105: File '/usr/local/share/mysql/charsets/?.conf' not found (Errcode: 2)

  In this case you should either get a new `Index' file or add the names of any missing character sets to the current one by hand.
For `MyISAM' tables, you can check the character set name and number for a table with `myisamchk -dvv table_name'.

MySQL Server-Side Scripts and Utilities
=======================================

Overview of the Server-Side Scripts and Utilities
-------------------------------------------------

All MySQL programs take many different options. However, every MySQL program provides a `--help' option that you can use to get a full description of the program's different options. For example, try `mysql --help'.
You can override default options for all standard programs with an option file. *Note Option files::.
The following list briefly describes the server-side MySQL programs:

`myisamchk'
     Utility to describe, check, optimise, and repair MySQL tables. Because `myisamchk' has many functions, it is described in its own chapter. *Note MySQL Database Administration::.
`make_binary_distribution'
     Makes a binary release of a compiled MySQL. This could be sent by FTP to `/pub/mysql/Incoming' on `support.mysql.com' for the convenience of other MySQL users.
`mysqlbug'
     The MySQL bug report script. This script should always be used when filing a bug report to the MySQL list.
`mysqld'
     The SQL daemon. This should always be running.
`mysql_install_db'
     Creates the MySQL grant tables with default privileges. This is usually executed only once, when first installing MySQL on a system.

`safe_mysqld', The Wrapper Around `mysqld'
------------------------------------------

Note that in MySQL 4.0 `safe_mysqld' was renamed to `mysqld_safe'.
`safe_mysqld' is the recommended way to start a `mysqld' daemon on Unix. `safe_mysqld' adds some safety features such as restarting the server when an error occurs and logging run-time information to a log file.
If you don't use `--mysqld=#' or `--mysqld-version=#', `safe_mysqld' will use an executable named `mysqld-max' if it exists; if not, `safe_mysqld' will start `mysqld'. This makes it very easy to test using `mysqld-max' instead of `mysqld': just copy `mysqld-max' to the directory where you have `mysqld' and it will be used.
Normally you should never edit the `safe_mysqld' script. Instead, put the options for `safe_mysqld' in the `[safe_mysqld]' section of the `my.cnf' file (see the example following the option list below). `safe_mysqld' will read all options from the `[mysqld]', `[server]', and `[safe_mysqld]' sections of the option files. *Note Option files::.
Note that all options on the command line to `safe_mysqld' are passed to `mysqld'. If you want to use any options for `safe_mysqld' that `mysqld' doesn't support, you must specify these in the option file.
Most of the options to `safe_mysqld' are the same as the options to `mysqld'. *Note Command-line options::.
`safe_mysqld' supports the following options:

`--basedir=path'
`--core-file-size=#'
     Size of the core file `mysqld' should be able to create. Passed to `ulimit -c'.
`--datadir=path'
`--defaults-extra-file=path'
`--defaults-file=path'
`--err-log=path'
     This option is marked obsolete in 4.0; use `--log-error' instead.
`--log-error=path'
     Write the error log to the given file. *Note Error log::.
`--ledir=path'
     Path to `mysqld'.
`--log=path'
`--mysqld=mysqld-version'
     Name of the `mysqld' version in the `ledir' directory you want to start.
`--mysqld-version=version'
     Similar to `--mysqld=', but here you give only the suffix for `mysqld'. For example, if you use `--mysqld-version=max', `safe_mysqld' will start the `ledir/mysqld-max' version. If the argument to `--mysqld-version' is empty, `ledir/mysqld' will be used.
`--no-defaults'
`--open-files-limit=#'
     Number of files `mysqld' should be able to open. Passed to `ulimit -n'. Note that you need to start `safe_mysqld' as root for this to work properly!
`--pid-file=path'
`--port=#'
`--socket=path'
`--timezone=#'
     Set the timezone (the `TZ' variable) to the value of this parameter.
`--user=#'
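As mentioned above, options for `safe_mysqld' are best kept in the option file rather than edited into the script. A minimal sketch of such a section might look like this (the path and values here are only examples):

     [safe_mysqld]
     log-error=/var/log/mysqld.err
     open-files-limit=8192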
The `safe_mysqld' script is written so that it normally is able to start a server that was installed from either a source or a binary version of MySQL, even if these install the server in slightly different locations. `safe_mysqld' expects one of these conditions to be true:
* The server and databases can be found relative to the directory from which `safe_mysqld' is invoked. `safe_mysqld' looks under its working directory for `bin' and `data' directories (for binary distributions) or for `libexec' and `var' directories (for source distributions). This condition should be met if you execute `safe_mysqld' from your MySQL installation directory (for example, `/usr/local/mysql' for a binary distribution).
* If the server and databases cannot be found relative to the working directory, `safe_mysqld' attempts to locate them by absolute pathnames. Typical locations are `/usr/local/libexec' and `/usr/local/var'. The actual locations are determined at the time the distribution from which `safe_mysqld' comes was built. They should be correct if MySQL was installed in a standard location.
Because `safe_mysqld' will try to find the server and databases relative to its own working directory, you can install a binary distribution of MySQL anywhere, as long as you start `safe_mysqld' from the MySQL installation directory:

     shell> cd mysql_installation_directory
     shell> bin/safe_mysqld &

If `safe_mysqld' fails, even when invoked from the MySQL installation directory, you can modify it to use the path to `mysqld' and the pathname options that are correct for your system. Note that if you upgrade MySQL in the future, your modified version of `safe_mysqld' will be overwritten, so you should make a copy of your edited version that you can reinstall.

`mysqld_multi', A Program for Managing Multiple MySQL Servers
-------------------------------------------------------------

`mysqld_multi' is meant for managing several `mysqld' processes that listen for connections on different Unix sockets and TCP/IP ports.
The program will search for groups named `[mysqld#]' in `my.cnf' (or in the file named by the `--config-file=...' option), where `#' can be any positive number starting from 1. This number is referred to in the following discussion as the option group number, or GNR. Group numbers distinguish option groups from one another and are used as arguments to `mysqld_multi' to specify which servers you want to start, stop, or obtain status for.
Options listed in these groups should be the same as you would use in the usual `[mysqld]' group used for starting `mysqld'. (See, for example, *Note Automatic start::.) However, for `mysqld_multi', be sure that each group includes options for values such as the port, socket, etc., to be used for each individual `mysqld' process.
`mysqld_multi' is invoked using the following syntax:

     Usage: mysqld_multi [OPTIONS] {start|stop|report} [GNR,GNR,GNR...]
     or     mysqld_multi [OPTIONS] {start|stop|report} [GNR-GNR,GNR,GNR-GNR,...]

Each GNR represents an option group number. You can start, stop, or report on any GNR, or several of them at the same time. For an example of how you might set up an option file, use this command:

     shell> mysqld_multi --example

The GNR values in the list can be comma-separated or combined with a dash; in the latter case, all the GNRs between GNR1 and GNR2 will be affected. With no GNR argument, all groups listed in the option file will be either started, stopped, or reported. Note that you must not have any whitespace in the GNR list; anything after a whitespace character is ignored.
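For example, with an option file defining groups `[mysqld2]', `[mysqld3]', `[mysqld4]', and `[mysqld6]' (as in the sample configuration shown later in this section), typical invocations look like this; the group numbers are simply whatever you have defined:

     shell> mysqld_multi start 2-4,6
     shell> mysqld_multi report
     shell> mysqld_multi stop 6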
`mysqld_multi' supports the following options:

`--config-file=...'
     Alternative config file. Note: This will not affect this program's own options (group `[mysqld_multi]'), but only the `[mysqld#]' groups. Without this option, everything will be searched from the ordinary `my.cnf' file.
`--example'
     Display an example option file.
`--help'
     Print this help and exit.
`--log=...'
     Log file. Full path to and the name for the log file. Note: If the file exists, everything will be appended.
`--mysqladmin=...'
     `mysqladmin' binary to be used for a server shutdown.
`--mysqld=...'
     `mysqld' binary to be used. Note that you can also give `safe_mysqld' to this option; the options are passed to `mysqld'. Just make sure you have `mysqld' in your `PATH' environment variable or fix `safe_mysqld'.
`--no-log'
     Print to stdout instead of the log file. By default the log file is turned on.
`--password=...'
     Password for the user for `mysqladmin'.
`--tcp-ip'
     Connect to the MySQL server(s) via the TCP/IP port instead of the Unix socket. This affects stopping and reporting. If a socket file is missing, the server may still be running, but can be accessed only via the TCP/IP port. By default, connections are made using the Unix socket.
`--user=...'
     MySQL user for `mysqladmin'.
`--version'
     Print the version number and exit.

Some notes about `mysqld_multi':
* Make sure that the MySQL user who is stopping the `mysqld' services (for example, using the `mysqladmin' program) has the same username and password for all the data directories accessed (for the `mysql' database), and make sure that the user has the `SHUTDOWN' privilege! If you have many data directories and many different `mysql' databases with different passwords for the MySQL `root' user, you may want to create a common `multi_admin' user for each, using the same password (see below). Here is an example of how to do it:

     shell> mysql -u root -S /tmp/mysql.sock -proot_password -e "GRANT SHUTDOWN ON *.* TO multi_admin@localhost IDENTIFIED BY 'multipass'"

  *Note Privileges::. You will have to do the above for each `mysqld' running in each data directory that you have (just change the socket, `-S=...').
* The `pid-file' option is very important if you are using `safe_mysqld' to start `mysqld' (for example, `--mysqld=safe_mysqld'). Every `mysqld' should have its own `pid-file'. The advantage of using `safe_mysqld' instead of `mysqld' directly here is that `safe_mysqld' "guards" every `mysqld' process and will restart it if a `mysqld' process terminates due to a signal sent using `kill -9', or for other reasons such as a segmentation fault (which MySQL should never do, of course ;). Please note that the `safe_mysqld' script may require that you start it from a certain place. This means that you may have to `cd' to a certain directory before you start `mysqld_multi'. If you have problems starting, please see the `safe_mysqld' script. Check especially the lines:

     --------------------------------------------------------------------------
     MY_PWD=`pwd`
     # Check if we are starting this relative (for the binary release)
     if test -d /data/mysql -a -f ./share/mysql/english/errmsg.sys -a -x ./bin/mysqld
     --------------------------------------------------------------------------

  *Note `safe_mysqld': safe_mysqld. The above test should be successful, or you may encounter problems.
* Beware of the dangers of starting multiple `mysqld' servers in the same data directory. Use separate data directories, unless you *know* what you are doing!
* The socket file and the TCP/IP port must be different for every `mysqld'.
* The first and fifth `mysqld' groups were intentionally left out of the example. You may have 'gaps' in the config file. This gives you more flexibility.
  The order in which the `mysqld' servers are started or stopped depends on the order in which they appear in the config file.
* When you want to refer to a certain group using GNR with this program, just use the number at the end of the group name. For example, the GNR for a group named `[mysqld17]' is 17.
* You may want to use the `--user' option for `mysqld', but in order to do this you need to run the `mysqld_multi' script as the Unix `root' user. Having the option in the config file doesn't matter; you will just get a warning if you are not the superuser and the `mysqld' servers are started under *your* Unix account. *Important*: Make sure that the `pid-file' and the data directory are read+write (+execute for the latter) accessible to the Unix user that the specific `mysqld' process is started as. *Do not* use the Unix root account for this, unless you *know* what you are doing!
* *Most important*: Make sure that you understand the meanings of the options that are passed to the `mysqld' servers and *why one would want* to have separate `mysqld' processes. Starting multiple `mysqld' servers in one data directory *will not* give you extra performance in a threaded system! *Note Multiple servers::.

Here is an example of a config file for use with `mysqld_multi':

     # This file should probably be in your home dir (~/.my.cnf) or /etc/my.cnf
     # Version 2.1 by Jani Tolonen
     [mysqld_multi]
     mysqld     = /usr/local/bin/safe_mysqld
     mysqladmin = /usr/local/bin/mysqladmin
     user       = multi_admin
     password   = multipass

     [mysqld2]
     socket     = /tmp/mysql.sock2
     port       = 3307
     pid-file   = /usr/local/mysql/var2/hostname.pid2
     datadir    = /usr/local/mysql/var2
     language   = /usr/local/share/mysql/english
     user       = john

     [mysqld3]
     socket     = /tmp/mysql.sock3
     port       = 3308
     pid-file   = /usr/local/mysql/var3/hostname.pid3
     datadir    = /usr/local/mysql/var3
     language   = /usr/local/share/mysql/swedish
     user       = monty

     [mysqld4]
     socket     = /tmp/mysql.sock4
     port       = 3309
     pid-file   = /usr/local/mysql/var4/hostname.pid4
     datadir    = /usr/local/mysql/var4
     language   = /usr/local/share/mysql/estonia
     user       = tonu

     [mysqld6]
     socket     = /tmp/mysql.sock6
     port       = 3311
     pid-file   = /usr/local/mysql/var6/hostname.pid6
     datadir    = /usr/local/mysql/var6
     language   = /usr/local/share/mysql/japanese
     user       = jani

*Note Option files::.

`myisampack', The MySQL Compressed Read-only Table Generator
------------------------------------------------------------

`myisampack' is used to compress MyISAM tables, and `pack_isam' is used to compress ISAM tables. Because ISAM tables are deprecated, we will only discuss `myisampack' here, but everything said about `myisampack' should also be true for `pack_isam'.
`myisampack' works by compressing each column in the table separately. The information needed to decompress columns is read into memory when the table is opened. This results in much better performance when accessing individual records, because you only have to uncompress exactly one record, not a much larger disk block as when using Stacker on MS-DOS. Usually, `myisampack' packs the datafile 40%-70%.
MySQL uses memory mapping (`mmap()') on compressed tables and falls back to normal read/write file usage if `mmap()' doesn't work.
Please note the following:
* After packing, the table is read-only. This is generally intended (such as when accessing packed tables on a CD). Allowing writes to a packed table is also on our TODO list, but with low priority.
* `myisampack' can also pack `BLOB' or `TEXT' columns. The older `pack_isam' (for `ISAM' tables) cannot do this.
`myisampack' is invoked like this: shell> myisampack [options] filename ... Each filename should be the name of an index (`.MYI') file. If you are not in the database directory, you should specify the pathname to the file. It is permissible to omit the `.MYI' extension. `myisampack' supports the following options: `-b, --backup' Make a backup of the table as `tbl_name.OLD'. `-#, --debug=debug_options' Output debug log. The `debug_options' string often is `'d:t:o,filename''. `-f, --force' Force packing of the table even if it becomes bigger or if the temporary file exists. `myisampack' creates a temporary file named `tbl_name.TMD' while it compresses the table. If you kill `myisampack', the `.TMD' file may not be deleted. Normally, `myisampack' exits with an error if it finds that `tbl_name.TMD' exists. With `--force', `myisampack' packs the table anyway. `-?, --help' Display a help message and exit. `-j big_tbl_name, --join=big_tbl_name' Join all tables named on the command-line into a single table `big_tbl_name'. All tables that are to be combined *must* be identical (same column names and types, same indexes, etc.). `-p #, --packlength=#' Specify the record length storage size, in bytes. The value should be 1, 2, or 3. (`myisampack' stores all rows with length pointers of 1, 2, or 3 bytes. In most normal cases, `myisampack' can determine the right length value before it begins packing the file, but it may notice during the packing process that it could have used a shorter length. In this case, `myisampack' will print a note that the next time you pack the same file, you could use a shorter record length.) `-s, --silent' Silent mode. Write output only when errors occur. `-t, --test' Don't actually pack table, just test packing it. `-T dir_name, --tmp_dir=dir_name' Use the named directory as the location in which to write the temporary table. `-v, --verbose' Verbose mode. Write information about progress and packing result. `-V, --version' Display version information and exit. `-w, --wait' Wait and retry if table is in use. If the `mysqld' server was invoked with the `--skip-external-locking' option, it is not a good idea to invoke `myisampack' if the table might be updated during the packing process. The sequence of commands shown here illustrates a typical table compression session: shell> ls -l station.* -rw-rw-r-- 1 monty my 994128 Apr 17 19:00 station.MYD -rw-rw-r-- 1 monty my 53248 Apr 17 19:00 station.MYI -rw-rw-r-- 1 monty my 5767 Apr 17 19:00 station.frm shell> myisamchk -dvv station MyISAM file: station Isam-version: 2 Creation time: 1996-03-13 10:08:58 Recover time: 1997-02-02 3:06:43 Data records: 1192 Deleted blocks: 0 Datafile: Parts: 1192 Deleted data: 0 Datafile pointer (bytes): 2 Keyfile pointer (bytes): 2 Max datafile length: 54657023 Max keyfile length: 33554431 Recordlength: 834 Record format: Fixed length table description: Key Start Len Index Type Root Blocksize Rec/key 1 2 4 unique unsigned long 1024 1024 1 2 32 30 multip. 
text 10240 1024 1 Field Start Length Type 1 1 1 2 2 4 3 6 4 4 10 1 5 11 20 6 31 1 7 32 30 8 62 35 9 97 35 10 132 35 11 167 4 12 171 16 13 187 35 14 222 4 15 226 16 16 242 20 17 262 20 18 282 20 19 302 30 20 332 4 21 336 4 22 340 1 23 341 8 24 349 8 25 357 8 26 365 2 27 367 2 28 369 4 29 373 4 30 377 1 31 378 2 32 380 8 33 388 4 34 392 4 35 396 4 36 400 4 37 404 1 38 405 4 39 409 4 40 413 4 41 417 4 42 421 4 43 425 4 44 429 20 45 449 30 46 479 1 47 480 1 48 481 79 49 560 79 50 639 79 51 718 79 52 797 8 53 805 1 54 806 1 55 807 20 56 827 4 57 831 4 shell> myisampack station.MYI Compressing station.MYI: (1192 records) - Calculating statistics normal: 20 empty-space: 16 empty-zero: 12 empty-fill: 11 pre-space: 0 end-space: 12 table-lookups: 5 zero: 7 Original trees: 57 After join: 17 - Compressing file 87.14% shell> ls -l station.* -rw-rw-r-- 1 monty my 127874 Apr 17 19:00 station.MYD -rw-rw-r-- 1 monty my 55296 Apr 17 19:04 station.MYI -rw-rw-r-- 1 monty my 5767 Apr 17 19:00 station.frm shell> myisamchk -dvv station MyISAM file: station Isam-version: 2 Creation time: 1996-03-13 10:08:58 Recover time: 1997-04-17 19:04:26 Data records: 1192 Deleted blocks: 0 Datafile: Parts: 1192 Deleted data: 0 Datafilepointer (bytes): 3 Keyfile pointer (bytes): 1 Max datafile length: 16777215 Max keyfile length: 131071 Recordlength: 834 Record format: Compressed table description: Key Start Len Index Type Root Blocksize Rec/key 1 2 4 unique unsigned long 10240 1024 1 2 32 30 multip. text 54272 1024 1 Field Start Length Type Huff tree Bits 1 1 1 constant 1 0 2 2 4 zerofill(1) 2 9 3 6 4 no zeros, zerofill(1) 2 9 4 10 1 3 9 5 11 20 table-lookup 4 0 6 31 1 3 9 7 32 30 no endspace, not_always 5 9 8 62 35 no endspace, not_always, no empty 6 9 9 97 35 no empty 7 9 10 132 35 no endspace, not_always, no empty 6 9 11 167 4 zerofill(1) 2 9 12 171 16 no endspace, not_always, no empty 5 9 13 187 35 no endspace, not_always, no empty 6 9 14 222 4 zerofill(1) 2 9 15 226 16 no endspace, not_always, no empty 5 9 16 242 20 no endspace, not_always 8 9 17 262 20 no endspace, no empty 8 9 18 282 20 no endspace, no empty 5 9 19 302 30 no endspace, no empty 6 9 20 332 4 always zero 2 9 21 336 4 always zero 2 9 22 340 1 3 9 23 341 8 table-lookup 9 0 24 349 8 table-lookup 10 0 25 357 8 always zero 2 9 26 365 2 2 9 27 367 2 no zeros, zerofill(1) 2 9 28 369 4 no zeros, zerofill(1) 2 9 29 373 4 table-lookup 11 0 30 377 1 3 9 31 378 2 no zeros, zerofill(1) 2 9 32 380 8 no zeros 2 9 33 388 4 always zero 2 9 34 392 4 table-lookup 12 0 35 396 4 no zeros, zerofill(1) 13 9 36 400 4 no zeros, zerofill(1) 2 9 37 404 1 2 9 38 405 4 no zeros 2 9 39 409 4 always zero 2 9 40 413 4 no zeros 2 9 41 417 4 always zero 2 9 42 421 4 no zeros 2 9 43 425 4 always zero 2 9 44 429 20 no empty 3 9 45 449 30 no empty 3 9 46 479 1 14 4 47 480 1 14 4 48 481 79 no endspace, no empty 15 9 49 560 79 no empty 2 9 50 639 79 no empty 2 9 51 718 79 no endspace 16 9 52 797 8 no empty 2 9 53 805 1 17 1 54 806 1 3 9 55 807 20 no empty 3 9 56 827 4 no zeros, zerofill(2) 2 9 57 831 4 no zeros, zerofill(1) 2 9 The information printed by `myisampack' is described here: `normal' The number of columns for which no extra packing is used. `empty-space' The number of columns containing values that are only spaces; these will occupy 1 bit. `empty-zero' The number of columns containing values that are only binary 0's; these will occupy 1 bit. 
`empty-fill'
     The number of integer columns that don't occupy the full byte range of their type; these are changed to a smaller type (for example, an `INTEGER' column may be changed to `MEDIUMINT').
`pre-space'
     The number of decimal columns that are stored with leading spaces. In this case, each value will contain a count for the number of leading spaces.
`end-space'
     The number of columns that have a lot of trailing spaces. In this case, each value will contain a count for the number of trailing spaces.
`table-lookup'
     The column had only a small number of different values, which were converted to an `ENUM' before Huffman compression.
`zero'
     The number of columns for which all values are zero.
`Original trees'
     The initial number of Huffman trees.
`After join'
     The number of distinct Huffman trees left after joining trees to save some header space.

After a table has been compressed, `myisamchk -dvv' prints additional information about each field:

`Type'
     The field type may contain the following descriptors:
     `constant' All rows have the same value.
     `no endspace' Don't store endspace.
     `no endspace, not_always' Don't store endspace and don't do endspace compression for all values.
     `no endspace, no empty' Don't store endspace. Don't store empty values.
     `table-lookup' The column was converted to an `ENUM'.
     `zerofill(n)' The most significant `n' bytes in the value are always 0 and are not stored.
     `no zeros' Don't store zeros.
     `always zero' 0 values are stored in 1 bit.
`Huff tree'
     The Huffman tree associated with the field.
`Bits'
     The number of bits used in the Huffman tree.

After you have run `pack_isam'/`myisampack' you must run `isamchk'/`myisamchk' to re-create the index. At this time you can also sort the index blocks and create the statistics needed for the MySQL optimiser to work more efficiently:

     myisamchk -rq --analyze --sort-index table_name.MYI
     isamchk   -rq --analyze --sort-index table_name.ISM

After you have installed the packed table into the MySQL database directory, you should do `mysqladmin flush-tables' to force `mysqld' to start using the new table.
If you want to unpack a packed table, you can do this with the `--unpack' option to `isamchk' or `myisamchk'.

`mysqld-max', An Extended `mysqld' Server
-----------------------------------------

`mysqld-max' is the MySQL server (`mysqld') configured with the following `configure' options:

     *Option*                    *Comment*
     --with-server-suffix=-max   Add a suffix to the `mysqld' version string.
     --with-innodb               Support for InnoDB tables.
     --with-bdb                  Support for Berkeley DB (BDB) tables.
     CFLAGS=-DUSE_SYMDIR         Symbolic link support for Windows.

You can find the MySQL-max binaries at `http://www.mysql.com/downloads/mysql-max-3.23.html'.
The Windows MySQL binary distribution includes both the standard `mysqld.exe' binary and the `mysqld-max.exe' binary; see `http://www.mysql.com/downloads/mysql-3.23.html'. *Note Windows installation::.
Note that as InnoDB and Berkeley DB are not available for all platforms, some of the `Max' binaries may not have support for both of them. You can check which table types are supported by issuing the following query:

     mysql> SHOW VARIABLES LIKE "have_%";
     +---------------+-------+
     | Variable_name | Value |
     +---------------+-------+
     | have_bdb      | YES   |
     | have_innodb   | NO    |
     | have_isam     | YES   |
     | have_raid     | NO    |
     | have_openssl  | NO    |
     +---------------+-------+

The meanings of the values are:

`YES'
     The option is activated and usable.
`NO'
     MySQL is not compiled with support for this option.
`DISABLED'
     The `xxxx' option is disabled because `mysqld' was started with `--skip-xxxx', or because `mysqld' was not started with all the options needed to enable it. In this case, the `hostname.err' file should contain a reason why the option is disabled.

*Note*: To be able to create InnoDB tables you *must* edit your startup options to include at least the `innodb_data_file_path' option. *Note InnoDB start::.
To get better performance for BDB tables, you should add some configuration options for these too. *Note BDB start::.
`safe_mysqld' will automatically try to start any `mysqld' binary with the `-max' suffix. This makes it very easy to test out another `mysqld' binary in an existing installation. Just run `configure' with the options you want and then install the new `mysqld' binary as `mysqld-max' in the same directory where your old `mysqld' binary is. *Note `safe_mysqld': safe_mysqld.
The `mysqld-max' RPM uses the above-mentioned `safe_mysqld' feature. It just installs the `mysqld-max' executable, and `safe_mysqld' will automatically use this executable when `safe_mysqld' is restarted.
The following table shows which table types our standard MySQL-Max binaries include:

     *System*             `BDB'   `InnoDB'
     AIX 4.3              N       Y
     HP-UX 11.0           N       Y
     Linux-Alpha          N       Y
     Linux-Intel          Y       Y
     Linux-IA64           N       Y
     Solaris-Intel        N       Y
     Solaris-SPARC        Y       Y
     Caldera (SCO) OSR5   Y       Y
     UnixWare             Y       Y
     Windows/NT           Y       Y

MySQL Client-Side Scripts and Utilities
=======================================

Overview of the Client-Side Scripts and Utilities
-------------------------------------------------

All MySQL clients that communicate with the server using the `mysqlclient' library use the following environment variables:

     *Name*              *Description*
     `MYSQL_UNIX_PORT'   The default socket; used for connections to `localhost'
     `MYSQL_TCP_PORT'    The default TCP/IP port
     `MYSQL_PWD'         The default password
     `MYSQL_DEBUG'       Debug-trace options when debugging
     `TMPDIR'            The directory where temporary tables/files are created

Use of `MYSQL_PWD' is insecure. *Note Connecting::.
The `mysql' client uses the file named in the `MYSQL_HISTFILE' environment variable to save the command-line history. The default value for the history file is `$HOME/.mysql_history', where `$HOME' is the value of the `HOME' environment variable. *Note Environment variables::.
All MySQL programs take many different options. However, every MySQL program provides a `--help' option that you can use to get a full description of the program's different options. For example, try `mysql --help'.
You can override default options for all standard client programs with an option file. *Note Option files::.
The following list briefly describes the client-side MySQL programs:

`msql2mysql'
     A shell script that converts `mSQL' programs to MySQL. It doesn't handle all cases, but it gives a good start when converting.
`mysqlaccess'
     A script that checks the access privileges for a host, user, and database combination.
`mysqladmin'
     Utility for performing administrative operations, such as creating or dropping databases, reloading the grant tables, flushing tables to disk, and reopening log files. `mysqladmin' can also be used to retrieve version, process, and status information from the server. *Note `mysqladmin': mysqladmin.
`mysqldump'
     Dumps a MySQL database into a file as SQL statements or as tab-separated text files. Enhanced freeware originally by Igor Romanenko. *Note `mysqldump': mysqldump.
`mysqlimport'
     Imports text files into their respective tables using `LOAD DATA INFILE'. *Note `mysqlimport': mysqlimport.
`mysqlshow'
     Displays information about databases, tables, columns, and indexes.
`replace'
     A utility program that is used by `msql2mysql', but that has more general applicability as well. `replace' changes strings in place in files or on the standard input. It uses a finite state machine to match longer strings first, and can be used to swap strings. For example, this command swaps `a' and `b' in the given files:

          shell> replace a b b a -- file1 file2 ...
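As a small illustration of the environment variables listed earlier in this overview, the following shell commands make every client that uses the `mysqlclient' library connect to TCP/IP port 3307 by default and keep the `mysql' command-line history out of your home directory (a sketch only; the port number and path are just examples, and the syntax shown is for Bourne-style shells):

     shell> MYSQL_TCP_PORT=3307
     shell> export MYSQL_TCP_PORT
     shell> MYSQL_HISTFILE=/dev/null
     shell> export MYSQL_HISTFILE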
`mysql', The Command-line Tool
------------------------------

`mysql' is a simple SQL shell (with GNU `readline' capabilities). It supports interactive and non-interactive use. When used interactively, query results are presented in an ASCII-table format. When used non-interactively (for example, as a filter), the result is presented in tab-separated format. (The output format can be changed using command-line options.) You can run scripts simply like this:

     shell> mysql database < script.sql > output.tab

If you have problems due to insufficient memory in the client, use the `--quick' option! This forces `mysql' to use `mysql_use_result()' rather than `mysql_store_result()' to retrieve the result set.
Using `mysql' is very easy. Just start it as follows: `mysql database' or `mysql --user=user_name --password=your_password database'. Type an SQL statement, end it with `;', `\g', or `\G' and press Enter.
`mysql' supports the following options:

`-?, --help'
     Display this help and exit.
`-A, --no-auto-rehash'
     No automatic rehashing. One has to use 'rehash' to get table and field completion. This gives a quicker start of mysql.
`--prompt=...'
     Set the mysql prompt to the specified format.
`-b, --no-beep'
     Turn off beep-on-error.
`-B, --batch'
     Print results with a tab as separator, each row on a new line. Doesn't use the history file.
`--character-sets-dir=...'
     Directory where character sets are located.
`-C, --compress'
     Use compression in the server/client protocol.
`-#, --debug[=...]'
     Debug log. Default is 'd:t:o,/tmp/mysql.trace'.
`-D, --database=...'
     Database to use. This is mainly useful in the `my.cnf' file.
`--default-character-set=...'
     Set the default character set.
`-e, --execute=...'
     Execute the command and quit. (Output like with `--batch'.)
`-E, --vertical'
     Print the output of a query (rows) vertically. Without this option, you can also force this output by ending your statements with `\G'.
`-f, --force'
     Continue even if we get an SQL error.
`-g, --no-named-commands'
     Named commands are disabled. Use the \* form only, or use named commands only at the beginning of a line ending with a semicolon (`;'). Since Version 10.9, the client starts with this option *enabled* by default! With the -g option, long-format commands still work from the first line, however.
`-G, --enable-named-commands'
     Named commands are *enabled*. Long-format commands are allowed, as well as shortened \* commands.
`-i, --ignore-space'
     Ignore spaces after function names.
`-h, --host=...'
     Connect to the given host.
`-H, --html'
     Produce HTML output.
`-X, --xml'
     Produce XML output.
`-L, --skip-line-numbers'
     Don't write line numbers for errors. Useful when one wants to compare result files that include error messages.
`--no-pager'
     Disable the pager and print to stdout. See also interactive help (\h).
`--no-tee'
     Disable the outfile. See also interactive help (\h).
`-n, --unbuffered'
     Flush the buffer after each query.
`-N, --skip-column-names'
     Don't write column names in results.
`-O, --set-variable var=option'
     Give a variable a value. `--help' lists variables.
     Please note that `--set-variable' is deprecated since MySQL 4.0; just use `--var=option' on its own.
`-o, --one-database'
     Only update the default database. This is useful for skipping updates to other databases in the update log.
`--pager[=...]'
     Output type. Default is your `PAGER' environment variable. Valid pagers are less, more, cat [> filename], etc. See also interactive help (\h). This option does not work in batch mode. Pager works only in Unix.
`-p[password], --password[=...]'
     Password to use when connecting to the server. If a password is not given on the command line, you will be prompted for it. Note that if you use the short form `-p' you can't have a space between the option and the password.
`-P port_num, --port=port_num'
     TCP/IP port number to use for the connection.
`--protocol=(TCP | SOCKET | PIPE | MEMORY)'
     Specify the connection protocol to use. New in MySQL 4.1.
`-q, --quick'
     Don't cache the result; print it row by row. This may slow down the server if the output is suspended. Doesn't use the history file.
`-r, --raw'
     Write column values without escape conversion. Used with `--batch'.
`--reconnect'
     If the connection is lost, automatically try to reconnect once to the server.
`-s, --silent'
     Be more silent.
`-S, --socket=...'
     Socket file to use for the connection.
`-t, --table'
     Output in table format. This is the default in non-batch mode.
`-T, --debug-info'
     Print some debug information at exit.
`--tee=...'
     Append everything into the outfile. See also interactive help (\h). Does not work in batch mode.
`-u, --user=#'
     User for login if not the current user.
`-U, --safe-updates[=#], --i-am-a-dummy[=#]'
     Only allow `UPDATE' and `DELETE' statements that use keys. See below for more information about this option. You can reset this option if you have it in your `my.cnf' file by using `--safe-updates=0'.
`-v, --verbose'
     More verbose output (-v -v -v gives the table output format).
`-V, --version'
     Output version information and exit.
`-w, --wait'
     Wait and retry if the connection is down, instead of aborting.

You can also set the following variables with `-O' or `--set-variable'; please note that `--set-variable' is deprecated since MySQL 4.0, just use `--var=option' on its own:

     *Variable Name*      *Default*  *Description*
     connect_timeout      0          Number of seconds before connection timeout.
     max_allowed_packet   16777216   Maximum packet length to send to or receive from the server.
     net_buffer_length    16384      Buffer size for TCP/IP and socket communication.
     select_limit         1000       Automatic limit for `SELECT' when using `--i-am-a-dummy'.
     max_join_size        1000000    Automatic limit for rows in a join when using `--i-am-a-dummy'.

If the `mysql' client loses its connection to the server while sending a query, it will immediately and automatically try to reconnect once to the server and send the query again. Note that even if it succeeds in reconnecting, your first connection has ended and all your previous session objects are lost: temporary tables, user variables, and session variables. Therefore, this behaviour may be dangerous, as in the following example where the server was shut down and restarted without you knowing it:

     mysql> set @a=1;
     Query OK, 0 rows affected (0.05 sec)
     mysql> insert into t values(@a);
     ERROR 2006: MySQL server has gone away
     No connection. Trying to reconnect...
     Connection id:    1
     Current database: test
     Query OK, 1 row affected (1.30 sec)
     mysql> select * from t;
     +------+
     | a    |
     +------+
     | NULL |
     +------+
     1 row in set (0.05 sec)

The `@a' user variable has been lost with the connection, and after the reconnection it is undefined.
To protect against this risk, you can start the `mysql' client with the `--disable-reconnect' option.
If you type 'help' on the command line, `mysql' will print out the commands that it supports:

     mysql> help

     MySQL commands:
     help    (\h)    Display this text.
     ?       (\h)    Synonym for `help'.
     clear   (\c)    Clear command.
     connect (\r)    Reconnect to the server. Optional arguments are db and host.
     edit    (\e)    Edit command with $EDITOR.
     ego     (\G)    Send command to mysql server, display result vertically.
     exit    (\q)    Exit mysql. Same as quit.
     go      (\g)    Send command to mysql server.
     nopager (\n)    Disable pager, print to stdout.
     notee   (\t)    Don't write into outfile.
     pager   (\P)    Set PAGER [to_pager]. Print the query results via PAGER.
     print   (\p)    Print current command.
     prompt  (\R)    Change your mysql prompt.
     quit    (\q)    Quit mysql.
     rehash  (\#)    Rebuild completion hash.
     source  (\.)    Execute a SQL script file. Takes a file name as an argument.
     status  (\s)    Get status information from the server.
     tee     (\T)    Set outfile [to_outfile]. Append everything into given outfile.
     use     (\u)    Use another database. Takes database name as argument.

The `pager' command works only in Unix.
The `status' command gives you some information about the connection and the server you are using. If you are running in `--safe-updates' mode, `status' will also print the values for the `mysql' variables that affect your queries.
A useful startup option for beginners (introduced in MySQL Version 3.23.11) is `--safe-updates' (or `--i-am-a-dummy' for users who have at some time done a `DELETE FROM table_name' but forgot the `WHERE' clause). When using this option, `mysql' sends the following command to the MySQL server when opening the connection:

     SET SQL_SAFE_UPDATES=1,SQL_SELECT_LIMIT=#select_limit#,
         SQL_MAX_JOIN_SIZE=#max_join_size#

where `#select_limit#' and `#max_join_size#' are variables that can be set from the `mysql' command line. *Note `SET': SET OPTION.
The effect of the above is:
* You are not allowed to execute an `UPDATE' or `DELETE' statement if you don't have a key constraint in the `WHERE' part. One can, however, force an `UPDATE'/`DELETE' by using `LIMIT':

     UPDATE table_name SET not_key_column=# WHERE not_key_column=# LIMIT 1;

* All big results are automatically limited to `#select_limit#' rows.
* `SELECT's that will probably need to examine more than `#max_join_size#' row combinations will be aborted.
Some useful hints about the `mysql' client:
Some data is much more readable when displayed vertically instead of in the usual horizontal box-type output. For example, longer text that includes newlines is often much easier to read with vertical output:

     mysql> SELECT * FROM mails WHERE LENGTH(txt) < 300 LIMIT 300,1\G
     *************************** 1. row ***************************
       msg_nro: 3068
          date: 2000-03-01 23:29:50
     time_zone: +0200
     mail_from: Monty
         reply: monty@no.spam.com
       mail_to: "Thimble Smith"
           sbj: UTF-8
           txt: >>>>> "Thimble" == Thimble Smith writes:
     Thimble> Hi. I think this is a good idea. Is anyone familiar with UTF-8
     Thimble> or Unicode? Otherwise, I'll put this on my TODO list and see what
     Thimble> happens.
     Yes, please do that.
     Regards,
     Monty
          file: inbox-jani-1
          hash: 190402944
     1 row in set (0.09 sec)

For logging, you can use the `tee' option. `tee' can be started with the option `--tee=...', or interactively from the command line with the command `tee'. All the data displayed on the screen will also be appended to the given file. This can be very useful for debugging purposes as well. `tee' can be disabled from the command line with the command `notee'.
Executing `tee' again starts logging again. Without a parameter the previous file will be used. Note that `tee' will flush the results into the file after each command, just before the command-line appears again waiting for the next command. Browsing, or searching the results in the interactive mode in Unix less, more, or any other similar program, is now possible with option `--pager[=...]'. Without argument, `mysql' client will look for environment variable PAGER and set `pager' to that. `pager' can be started from the interactive command-line with command `pager' and disabled with command `nopager'. The command takes an argument optionally and the `pager' will be set to that. Command `pager' can be called without an argument, but this requires that the option `--pager' was used, or the `pager' will default to stdout. `pager' works only in Unix, since it uses the popen() function, which doesn't exist in Windows. In Windows, the `tee' option can be used instead, although it may not be as handy as `pager' can be in some situations. A few tips about `pager': * You can use it to write to a file: mysql> pager cat > /tmp/log.txt and the results will only go to a file. You can also pass any options for the programs that you want to use with the `pager': mysql> pager less -n -i -S * From the above do note the option '-S'. You may find it very useful when browsing the results; try the option with horizontal output (end commands with '\g', or ';') and with vertical output (end commands with '\G'). Sometimes a very wide result set is hard to be read from the screen, with option -S to less you can browse the results within the interactive less from left to right, preventing lines longer than your screen from being continued to the next line. This can make the result set much more readable. You can swith the mode between on and off within the interactive less with '-S'. See the 'h' for more help about less. * You can combine very complex ways to handle the results, for example the following would send the results to two files in two different directories, on two different hard-disks mounted on /dr1 and /dr2, yet let the results still be seen on the screen via less: mysql> pager cat | tee /dr1/tmp/res.txt | \ tee /dr2/tmp/res2.txt | less -n -i -S You can also combine the two functions above; have the `tee' enabled, `pager' set to 'less' and you will be able to browse the results in unix 'less' and still have everything appended into a file the same time. The difference between Unix `tee' used with the `pager' and the `mysql' client in-built `tee', is that the in-built `tee' works even if you don't have the Unix `tee' available. The in-built `tee' also logs everything that is printed on the screen, where the Unix `tee' used with `pager' doesn't log quite that much. Last, but not least, the interactive `tee' is more handy to switch on and off, when you want to log something into a file, but want to be able to turn the feature off sometimes. From MySQL version 4.0.2 it is possible to change the prompt in the `mysql' command-line client. You can use the following prompt options: *Option**Description* \v mysqld version \d database in use \h host connected to \p port connected on \u username \U full username@host \\ `\' \n new line break \t tab \ space \_ space \R military hour time (0-23) \r standard hour time (1-12) \m minutes \y two digit year \Y four digit year \D full date format \s seconds \w day of the week in three letter format (Mon, Tue, ...) 
\P am/pm \o month in number format \O month in three letter format (Jan, Feb, ...) \c counter that counts up for each command you do `\' followed by any other letter just becomes that letter. You may set the prompt in the following places: *Environment Variable* You may set the `MYSQL_PS1' environment variable to a prompt string. For example: shell> export MYSQL_PS1="(\u@\h) [\d]> " *`my.cnf'* *`.my.cnf'* You may set the `prompt' option in any MySQL configuration file, in the `mysql' group. For example: [mysql] prompt=(\u@\h) [\d]>\_ *Command Line* You may set the `--prompt' option on the command line to `mysql'. For example: shell> mysql --prompt="(\u@\h) [\d]> " (user@host) [database]> *Interactively* You may also use the `prompt' (or `\R') command to change your prompt interactively. For example: mysql> prompt (\u@\h) [\d]>\_ PROMPT set to '(\u@\h) [\d]>\_' (user@host) [database]> (user@host) [database]> prompt Returning to default PROMPT of mysql> mysql> `mysqladmin', Administrating a MySQL Server ------------------------------------------- A utility for performing administrative operations. The syntax is: shell> mysqladmin [OPTIONS] command [command-option] command ... You can get a list of the options your version of `mysqladmin' supports by executing `mysqladmin --help'. The current `mysqladmin' supports the following commands: `create databasename' Create a new database. `drop databasename' Delete a database and all its tables. `extended-status' Gives an extended status message from the server. `flush-hosts' Flush all cached hosts. `flush-logs' Flush all logs. `flush-tables' Flush all tables. `flush-privileges' Reload grant tables (same as reload). `kill id,id,...' Kill mysql threads. `password new-password' Set a new password. Change old password to new-password. `ping' Check if mysqld is alive. `processlist' Show list of active threads in server. `reload' Reload grant tables. `refresh' Flush all tables and close and open logfiles. `shutdown' Take server down. `slave-start' Start slave replication thread. `slave-stop' Stop slave replication thread. `status' Gives a short status message from the server. `variables' Prints variables available. `version' Get version info from server. All commands can be shortened to their unique prefix. For example: shell> mysqladmin proc stat +----+-------+-----------+----+-------------+------+-------+------+ | Id | User | Host | db | Command | Time | State | Info | +----+-------+-----------+----+-------------+------+-------+------+ | 6 | monty | localhost | | Processlist | 0 | | | +----+-------+-----------+----+-------------+------+-------+------+ Uptime: 10077 Threads: 1 Questions: 9 Slow queries: 0 Opens: 6 Flush tables: 1 Open tables: 2 Memory in use: 1092K Max memory used: 1116K The `mysqladmin status' command result has the following columns:
*Column*         *Description*
Uptime           Number of seconds the MySQL server has been up.
Threads          Number of active threads (clients).
Questions        Number of questions from clients since `mysqld' was started.
Slow queries     Queries that have taken more than `long_query_time' seconds. *Note Slow query log::.
Opens            How many tables `mysqld' has opened.
Flush tables     Number of `flush ...', `refresh', and `reload' commands.
Open tables      Number of tables that are open now.
Memory in use    Memory allocated directly by the `mysqld' code (only available when MySQL is compiled with --with-debug=full).
Max memory used  Maximum memory allocated directly by the `mysqld' code (only available when MySQL is compiled with --with-debug=full).
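For example, a few typical invocations might look like this (a minimal sketch; the database name `mytestdb' is only illustrative):
shell> mysqladmin -u root -p create mytestdb
shell> mysqladmin -u root -p ping
shell> mysqladmin -u root -p flush-logs
shell> mysqladmin -u root -p shutdown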
If you do `mysqladmin shutdown' on a socket (in other words, on the computer where `mysqld' is running), `mysqladmin' will wait until the MySQL `pid-file' is removed to ensure that the `mysqld' server has stopped properly. Using `mysqlcheck' for Table Maintenance and Crash Recovery ----------------------------------------------------------- Since MySQL version 3.23.38 you can use a new checking and repairing tool for `MyISAM' tables. The difference from `myisamchk' is that `mysqlcheck' should be used when the `mysqld' server is running, whereas `myisamchk' should be used when it is not. The benefit is that you no longer have to take the server down for checking or repairing your tables. `mysqlcheck' uses the MySQL server commands `CHECK', `REPAIR', `ANALYZE' and `OPTIMIZE' in a convenient way for the user. There are three alternative ways to invoke `mysqlcheck': shell> mysqlcheck [OPTIONS] database [tables] shell> mysqlcheck [OPTIONS] --databases DB1 [DB2 DB3...] shell> mysqlcheck [OPTIONS] --all-databases So it can be used in a similar way to `mysqldump' when it comes to choosing which databases and tables to process. `mysqlcheck' does have a special feature compared to the other clients; the default behaviour, checking tables (-c), can be changed by renaming the binary. So if you want to have a tool that repairs tables by default, you should just copy `mysqlcheck' to a new name, `mysqlrepair', or alternatively make a symbolic link to `mysqlcheck' and name the symbolic link `mysqlrepair'. If you invoke `mysqlrepair' now, it will repair tables by default. The names that you can use to change the default behaviour of `mysqlcheck' are listed here: mysqlrepair: The default option will be -r mysqlanalyze: The default option will be -a mysqloptimize: The default option will be -o The options available for `mysqlcheck' are listed here; please check what your version supports with `mysqlcheck --help'. `-A, --all-databases' Check all the databases. This is the same as --databases with all databases selected. `-1, --all-in-1' Instead of making one query for each table, execute all queries in 1 query separately for each database. Table names will be in a comma-separated list. `-a, --analyze' Analyse given tables. `--auto-repair' If a checked table is corrupted, automatically fix it. Repairing will be done after all tables have been checked, if corrupted ones were found. `-#, --debug=...' Output debug log. Often this is 'd:t:o,filename'. `--character-sets-dir=...' Directory where character sets are. `-c, --check' Check tables for errors. `-C, --check-only-changed' Check only tables that have changed since last check or haven't been closed properly. `--compress' Use compression in server/client protocol. `-?, --help' Display this help message and exit. `-B, --databases' To check several databases. Note the difference in usage; in this case no tables are given. All name arguments are regarded as database names. `--default-character-set=...' Set the default character set. `-F, --fast' Check only tables that haven't been closed properly. `-f, --force' Continue even if we get an SQL error. `-e, --extended' If you are using this option with CHECK TABLE, it will ensure that the table is 100 percent consistent, but will take a long time. If you are using this option with REPAIR TABLE, it will run an extended repair on the table, which may not only take a long time to execute, but may also produce a lot of garbage rows! `-h, --host=...' Connect to host.
`-m, --medium-check' Faster than extended-check, but only finds 99.99 percent of all errors. Should be good enough for most cases. `-o, --optimize' Optimise table `-p, --password[=...]' Password to use when connecting to server. If password is not given it's solicited on the tty. `-P, --port=...' Port number to use for TCP/IP connections. ``--protocol=(TCP | SOCKET | PIPE | MEMORY)'' To specify the connect protocol to use. New in MySQL 4.1. `-q, --quick' If you are using this option with CHECK TABLE, it prevents the check from scanning the rows to check for wrong links. This is the fastest check. If you are using this option with REPAIR TABLE, it will try to repair only the index tree. This is the fastest repair method for a table. `-r, --repair' Can fix almost anything except unique keys that aren't unique. `-s, --silent' Print only error messages. `-S, --socket=...' Socket file to use for connection. `--tables' Overrides option -databases (-B). `-u, --user=#' User for login if not current user. `-v, --verbose' Print info about the various stages. `-V, --version' Output version information and exit. `mysqldump', Dumping Table Structure and Data --------------------------------------------- Utility to dump a database or a collection of database for backup or for transferring the data to another SQL server (not necessarily a MySQL server). The dump will contain SQL statements to create the table and/or populate the table. If you are doing a backup on the server, you should consider using the `mysqlhotcopy' instead. *Note `mysqlhotcopy': mysqlhotcopy. shell> mysqldump [OPTIONS] database [tables] OR mysqldump [OPTIONS] --databases [OPTIONS] DB1 [DB2 DB3...] OR mysqldump [OPTIONS] --all-databases [OPTIONS] If you don't give any tables or use the `--databases' or `--all-databases', the whole database(s) will be dumped. You can get a list of the options your version of `mysqldump' supports by executing `mysqldump --help'. Note that if you run `mysqldump' without `--quick' or `--opt', `mysqldump' will load the whole result set into memory before dumping the result. This will probably be a problem if you are dumping a big database. Note that if you are using a new copy of the `mysqldump' program and you are going to do a dump that will be read into a very old MySQL server, you should not use the `--opt' or `-e' options. `mysqldump' supports the following options: `--add-locks' Add `LOCK TABLES' before and `UNLOCK TABLE' after each table dump. (To get faster inserts into MySQL.) `--add-drop-table' Add a `drop table' before each create statement. `-A, --all-databases' Dump all the databases. This will be same as `--databases' with all databases selected. `-a, --all' Include all MySQL-specific create options. `--allow-keywords' Allow creation of column names that are keywords. This works by prefixing each column name with the table name. `-c, --complete-insert' Use complete insert statements (with column names). `-C, --compress' Compress all information between the client and the server if both support compression. `-B, --databases' To dump several databases. Note the difference in usage. In this case no tables are given. All name arguments are regarded as database names. `USE db_name;' will be included in the output before each new database. `--delayed' Insert rows with the `INSERT DELAYED' command. `-e, --extended-insert' Use the new multiline `INSERT' syntax. (Gives more compact and faster inserts statements.) `-#, --debug[=option_string]' Trace usage of the program (for debugging). 
`--help' Display a help message and exit. `--fields-terminated-by=...' `--fields-enclosed-by=...' `--fields-optionally-enclosed-by=...' `--fields-escaped-by=...' `--lines-terminated-by=...' These options are used with the `-T' option and have the same meaning as the corresponding clauses for `LOAD DATA INFILE'. *Note `LOAD DATA': LOAD DATA. `-F, --flush-logs' Flush log file in the MySQL server before starting the dump. `-f, --force,' Continue even if we get a SQL error during a table dump. `-h, --host=..' Dump data from the MySQL server on the named host. The default host is `localhost'. `-l, --lock-tables.' Lock all tables before starting the dump. The tables are locked with `READ LOCAL' to allow concurrent inserts in the case of `MyISAM' tables. Please note that when dumping multiple databases, `--lock-tables' will lock tables for each database separately. So using this option will not guarantee your tables will be logically consistent between databases. Tables in different databases may be dumped in completely different states. `-K, --disable-keys' `/*!40000 ALTER TABLE tb_name DISABLE KEYS */;' and `/*!40000 ALTER TABLE tb_name ENABLE KEYS */;' will be put in the output. This will make loading the data into a MySQL 4.0 server faster as the indexes are created after all data are inserted. `-n, --no-create-db' `CREATE DATABASE /*!32312 IF NOT EXISTS*/ db_name;' will not be put in the output. The above line will be added otherwise, if a `--databases' or `--all-databases' option was given. `-t, --no-create-info' Don't write table creation information (the `CREATE TABLE' statement). `-d, --no-data' Don't write any row information for the table. This is very useful if you just want to get a dump of the structure for a table! `--opt' Same as `--quick --add-drop-table --add-locks --extended-insert --lock-tables'. Should give you the fastest possible dump for reading into a MySQL server. `-pyour_pass, --password[=your_pass]' The password to use when connecting to the server. If you specify no `=your_pass' part, `mysqldump' you will be prompted for a password. `-P, --port=...' Port number to use for TCP/IP connections. ``--protocol=(TCP | SOCKET | PIPE | MEMORY)'' To specify the connect protocol to use. New in MySQL 4.1. `-q, --quick' Don't buffer query, dump directly to stdout. Uses `mysql_use_result()' to do this. `-Q, --quote-names' Quote table and column names within ``' characters. `-r, --result-file=...' Direct output to a given file. This option should be used in MSDOS, because it prevents new line `\n' from being converted to `\n\r' (new line + carriage return). `--single-transaction' This option issues a `BEGIN' SQL command before dumping data from server. It is mostly useful with `InnoDB' tables and `READ_COMMITTED' transaction isolation level, as in this mode it will dump the consistent state of the database at the time then `BEGIN' was issued without blocking any applications. When using this option you should keep in mind that only transactional tables will be dumped in a consistent state, e.g., any `MyISAM' or `HEAP' tables dumped while using this option may still change state. The `--single-transaction' option was added in version 4.0.2. This option is mutually exclusive with the `--lock-tables' option as `LOCK TABLES' already commits a previous transaction internally. `-S /path/to/socket, --socket=/path/to/socket' The socket file to use when connecting to `localhost' (which is the default host). `--tables' Overrides option -databases (-B). 
`-T, --tab=path-to-some-directory' Creates a `table_name.sql' file, which contains the SQL CREATE commands, and a `table_name.txt' file, which contains the data, for each given table. The format of the `.txt' file is made according to the `--fields-xxx' and `--lines-xxx' options. *Note*: This option only works if `mysqldump' is run on the same machine as the `mysqld' daemon, and the user/group that `mysqld' is running as (normally user `mysql', group `mysql') needs to have permission to create/write a file at the location you specify. `-u user_name, --user=user_name' The MySQL user name to use when connecting to the server. The default value is your Unix login name. `-O var=option, --set-variable var=option' Set the value of a variable. The possible variables are listed below. Please note that `--set-variable' is deprecated since MySQL 4.0, just use `--var=option' on its own. `-v, --verbose' Verbose mode. Print out more information on what the program does. `-V, --version' Print version information and exit. `-w, --where='where-condition'' Dump only selected records. Note that quotes are mandatory: "--where=user='jimf'" "-wuserid>1" "-wuserid<1" `-X, --xml' Dumps a database as well-formed XML. `-x, --first-slave' Locks all tables across all databases. `-O net_buffer_length=#, where # < 16M' When creating multi-row-insert statements (as with option `--extended-insert' or `--opt'), `mysqldump' will create rows up to `net_buffer_length' length. If you increase this variable, you should also ensure that the `max_allowed_packet' variable in the MySQL server is bigger than the `net_buffer_length'. The most common use of `mysqldump' is probably for making a backup of whole databases. *Note Backup::. mysqldump --opt database > backup-file.sql You can read this back into MySQL with: mysql database < backup-file.sql or mysql -e "source /path-to-backup/backup-file.sql" database However, it's also very useful to populate another MySQL server with information from a database: mysqldump --opt database | mysql --host=remote-host -C database It is possible to dump several databases with one command: mysqldump --databases database1 [database2 ...] > my_databases.sql If all the databases are wanted, one can use: mysqldump --all-databases > all_databases.sql `mysqlhotcopy', Copying MySQL Databases and Tables -------------------------------------------------- `mysqlhotcopy' is a Perl script that uses `LOCK TABLES', `FLUSH TABLES' and `cp' or `scp' to quickly make a backup of a database. It's the fastest way to make a backup of the database or single tables, but it can only be run on the same machine where the database directories are. mysqlhotcopy db_name [/path/to/new_directory] mysqlhotcopy db_name_1 ... db_name_n /path/to/new_directory mysqlhotcopy db_name./regex/ `mysqlhotcopy' supports the following options: `-?, --help' Display a help screen and exit `-u, --user=#' User for database login `-p, --password=#' Password to use when connecting to server `-P, --port=#' Port to use when connecting to local server `-S, --socket=#' Socket to use when connecting to local server `--allowold' Don't abort if target already exists (rename it _old) `--keepold' Don't delete previous (now renamed) target when done `--noindices' Don't include full index files in copy to make the backup smaller and faster. The indexes can later be reconstructed with `myisamchk -rq'. `--method=#' Method for copy (`cp' or `scp').
`-q, --quiet' Be silent except for errors `--debug' Enable debug `-n, --dryrun' Report actions without doing them `--regexp=#' Copy all databases with names matching regexp `--suffix=#' Suffix for names of copied databases `--checkpoint=#' Insert checkpoint entry into specified db.table `--flushlog' Flush logs once all tables are locked. `--tmpdir=#' Temporary directory (instead of /tmp). You can use `perldoc mysqlhotcopy' to get more complete documentation for `mysqlhotcopy'. `mysqlhotcopy' reads the groups `[client]' and `[mysqlhotcopy]' from the option files. To be able to execute `mysqlhotcopy' you need write access to the backup directory, the `SELECT' privilege for the tables you are about to copy and the MySQL `RELOAD' privilege (to be able to execute `FLUSH TABLES'). `mysqlimport', Importing Data from Text Files --------------------------------------------- `mysqlimport' provides a command-line interface to the `LOAD DATA INFILE' SQL statement. Most options to `mysqlimport' correspond directly to the same options to `LOAD DATA INFILE'. *Note `LOAD DATA': LOAD DATA. `mysqlimport' is invoked like this: shell> mysqlimport [options] database textfile1 [textfile2 ...] For each text file named on the command-line, `mysqlimport' strips any extension from the filename and uses the result to determine which table to import the file's contents into. For example, files named `patient.txt', `patient.text', and `patient' would all be imported into a table named `patient'. `mysqlimport' supports the following options: `-c, --columns=...' This option takes a comma-separated list of field names as an argument. The field list is used to create a proper `LOAD DATA INFILE' command, which is then passed to MySQL. *Note `LOAD DATA': LOAD DATA. `-C, --compress' Compress all information between the client and the server if both support compression. `-#, --debug[=option_string]' Trace usage of the program (for debugging). `-d, --delete' Empty the table before importing the text file. `--fields-terminated-by=...' `--fields-enclosed-by=...' `--fields-optionally-enclosed-by=...' `--fields-escaped-by=...' `--lines-terminated-by=...' These options have the same meaning as the corresponding clauses for `LOAD DATA INFILE'. *Note `LOAD DATA': LOAD DATA. `-f, --force' Ignore errors. For example, if a table for a text file doesn't exist, continue processing any remaining files. Without `--force', `mysqlimport' exits if a table doesn't exist. `--help' Display a help message and exit. `-h host_name, --host=host_name' Import data to the MySQL server on the named host. The default host is `localhost'. `-i, --ignore' See the description for the `--replace' option. `-l, --lock-tables' Lock *all* tables for writing before processing any text files. This ensures that all tables are synchronised on the server. `-L, --local' Read input files from the client. By default, text files are assumed to be on the server if you connect to `localhost' (which is the default host). `-pyour_pass, --password[=your_pass]' The password to use when connecting to the server. If you specify no `=your_pass' part, `mysqlimport' you will be prompted for a password. `-P port_num, --port=port_num' TCP/IP port number to use for connection. ``--protocol=(TCP | SOCKET | PIPE | MEMORY)'' To specify the connect protocol to use. New in MySQL 4.1. `-r, --replace' The `--replace' and `--ignore' options control handling of input records that duplicate existing records on unique key values. 
If you specify `--replace', new rows replace existing rows that have the same unique key value. If you specify `--ignore', input rows that duplicate an existing row on a unique key value are skipped. If you don't specify either option, an error occurs when a duplicate key value is found, and the rest of the text file is ignored. `-s, --silent' Silent mode. Write output only when errors occur. `-S /path/to/socket, --socket=/path/to/socket' The socket file to use when connecting to `localhost' (which is the default host). `-u user_name, --user=user_name' The MySQL user name to use when connecting to the server. The default value is your Unix login name. `-v, --verbose' Verbose mode. Print out more information what the program does. `-V, --version' Print version information and exit. Here is a sample run using `mysqlimport': $ mysql --version mysql Ver 9.33 Distrib 3.22.25, for pc-linux-gnu (i686) $ uname -a Linux xxx.com 2.2.5-15 #1 Mon Apr 19 22:21:09 EDT 1999 i586 unknown $ mysql -e 'CREATE TABLE imptest(id INT, n VARCHAR(30))' test $ ed a 100 Max Sydow 101 Count Dracula . w imptest.txt 32 q $ od -c imptest.txt 0000000 1 0 0 \t M a x S y d o w \n 1 0 0000020 1 \t C o u n t D r a c u l a \n 0000040 $ mysqlimport --local test imptest.txt test.imptest: Records: 2 Deleted: 0 Skipped: 0 Warnings: 0 $ mysql -e 'SELECT * FROM imptest' test +------+---------------+ | id | n | +------+---------------+ | 100 | Max Sydow | | 101 | Count Dracula | +------+---------------+ `mysqlshow', Showing Databases, Tables, and Columns --------------------------------------------------- `mysqlshow' can be used to quickly look at which databases exist, their tables, and the table's columns. With the `mysql' program you can get the same information with the `SHOW' commands. *Note SHOW::. `mysqlshow' is invoked like this: shell> mysqlshow [OPTIONS] [database [table [column]]] * If no database is given, all matching databases are shown. * If no table is given, all matching tables in the database are shown. * If no column is given, all matching columns and column types in the table are shown. Note that in newer MySQL versions, you only see those database/tables/columns for which you have some privileges. If the last argument contains a shell or SQL wildcard (`*', `?', `%' or `_') then only what's matched by the wildcard is shown. If a database contains underscore(s), those should be escaped with backslash (some Unix shells will require two), in order to get tables / columns properly. '*' are converted into SQL '%' wildcard and '?' into SQL '_' wildcard. This may cause some confusion when you try to display the columns for a table with a `_' as in this case `mysqlshow' only shows you the table names that match the pattern. This is easily fixed by adding an extra `%' last on the command-line (as a separate argument). `mysql_config', Get compile options for compiling clients --------------------------------------------------------- `mysql_config' provides you with useful information how to compile your MySQL client and connect it to MySQL. `mysql_config' supports the following options: `--cflags' Compiler flags to find include files `--libs' Libs and options required to link with the MySQL client library. `--socket' The default socket name, defined when configuring MySQL. `--port' The default port number, defined when configuring MySQL. `--version' Version number and version for the MySQL distribution `--libmysqld-libs' Libs and options required to link with the MySQL embedded server. 
If you execute `mysql_config' without any options it will print all options it supports plus the value of all options: shell> mysql_config Usage: /usr/local/mysql/bin/mysql_config [OPTIONS] Options: --cflags [-I'/usr/local/mysql/include/mysql'] --libs [-L'/usr/local/mysql/lib/mysql' -lmysqlclient -lz -lcrypt -lnsl -lm -L/usr/lib -lssl -lcrypto] --socket [/tmp/mysql.sock] --port [3306] --version [4.0.8-gamma] --libmysqld-libs [ -L'/usr/local/mysql/lib/mysql' -lmysqld -lpthread -lz -lcrypt -lnsl -lm -lpthread -lrt] You can use this to compile a MySQL client as follows: CFG=/usr/local/mysql/bin/mysql_config sh -c "gcc -o progname `$CFG --cflags` progname.c `$CFG --libs`" `perror', Explaining Error Codes -------------------------------- For most system errors MySQL will, in addition to an internal text message, also print the system error code in one of the following styles: `message ... (errno: #)' or `message ... (Errcode: #)'. You can find out what the error code means by either examining the documentation for your system or by using the `perror' utility. `perror' prints a description for a system error code, or a MyISAM/ISAM storage engine (table handler) error code. `perror' is invoked like this: shell> perror [OPTIONS] [ERRORCODE [ERRORCODE...]] Example: shell> perror 13 64 Error code 13: Permission denied Error code 64: Machine is not on the network Note that the error messages are mostly system dependent! How to Run SQL Commands from a Text File ---------------------------------------- The `mysql' client is typically used interactively, like this: shell> mysql database However, it's also possible to put your SQL commands in a file and tell `mysql' to read its input from that file. To do so, create a text file `text_file' that contains the commands you wish to execute. Then invoke `mysql' as shown here: shell> mysql database < text_file You can also start your text file with a `USE db_name' statement. In this case, it is unnecessary to specify the database name on the command line: shell> mysql < text_file If you are already running `mysql', you can execute a SQL script file using the `source' command: mysql> source filename; For more information about batch mode, *Note Batch mode::. The MySQL Log Files =================== MySQL has several different log files that can help you find out what's going on inside `mysqld':
*Log file*       *Description*
The error log    Problems encountered starting, running, or stopping `mysqld'.
The isam log     Logs all changes to the ISAM tables. Used only for debugging the isam code.
The query log    Established connections and executed queries.
The update log   Deprecated: Stores all statements that change data.
The binary log   Stores all statements that change something. Also used for replication.
The slow log     Stores all queries that took more than `long_query_time' to execute or didn't use indexes.
All logs can be found in the `mysqld' data directory. You can force `mysqld' to reopen the log files (or in some cases switch to a new log) by executing `FLUSH LOGS'. *Note FLUSH::. The Error Log ------------- The error log file contains information indicating when `mysqld' was started and stopped and also any critical errors found when running. If `mysqld' dies unexpectedly and `mysqld_safe' needs to restart `mysqld', `mysqld_safe' will write a `restarted mysqld' row in this file. This log also holds a warning if `mysqld' notices a table that needs to be automatically checked or repaired.
On some operating systems, the error log will contain a stack trace for where `mysqld' died. This can be used to find out where `mysqld' died. *Note Using stack trace::. Beginning with MySQL 4.0.10 you can specify where `mysqld' stores the error log file with the option `--log-error[=filename]'. If no file name is given `mysqld' will use `mysql-data-dir/'hostname'.err' on Unix and `\mysql\data\mysql.err' on windows. If you execute `flush logs' the old file will be prefixed with `--old' and `mysqld' will create a new empty log file. In older MySQL versions the error log handling was done by `mysqld_safe' which redirected the error file to `'hostname'.err'. One could change this file name with the option `--err-log=filename'. If you don't specify `--log-error' or if you use the `--console' option the errors will be written to stderr (the terminal). On windows the output is always done to the `.err' file if `--console' is not given. The General Query Log --------------------- If you want to know what happens within `mysqld', you should start it with `--log[=file]'. This will log all connections and queries to the log file (by default named `'hostname'.log'). This log can be very useful when you suspect an error in a client and want to know exactly what `mysqld' thought the client sent to it. Older versions of the `mysql.server' script (from MySQL 3.23.4 to 3.23.8) pass `safe_mysqld' a `--log' option (enable general query log). If you need better performance when you start using MySQL in a production environment, you can remove the `--log' option from `mysql.server' or change it to `--log-bin'. *Note Binary log::. The entries in this log are written as `mysqld' receives the questions. This may be different from the order in which the statements are executed. This is in contrast to the update log and the binary log which are written after the query is executed, but before any locks are released. The Update Log -------------- *Note*: the update log is replaced by the binary log. *Note Binary log::. With this you can do anything that you can do with the update log. When started with the `--log-update[=file_name]' option, `mysqld' writes a log file containing all SQL commands that update data. If no filename is given, it defaults to the name of the host machine. If a filename is given, but it doesn't contain a path, the file is written in the data directory. If `file_name' doesn't have an extension, `mysqld' will create log file names like so: `file_name.###', where `###' is a number that is incremented each time you execute `mysqladmin refresh', execute `mysqladmin flush-logs', execute the `FLUSH LOGS' statement, or restart the server. *Note*: for the above scheme to work, you must not create your own files with the same filename as the update log + some extensions that may be regarded as a number, in the directory used by the update log! If you use the `--log' or `-l' options, `mysqld' writes a general log with a filename of `hostname.log', and restarts and refreshes do not cause a new log file to be generated (although it is closed and reopened). In this case you can copy it (on Unix) by doing: mv hostname.log hostname-old.log mysqladmin flush-logs cp hostname-old.log to-backup-directory rm hostname-old.log Update logging is smart because it logs only statements that really update data. So an `UPDATE' or a `DELETE' with a `WHERE' that finds no rows is not written to the log. It even skips `UPDATE' statements that set a column to the value it already has. 
The update logging is done immediately after a query completes but before any locks are released or any commit is done. This ensures that the log reflects the execution order. If you want to update a database from update log files, you could do the following (assuming your update logs have names of the form `file_name.###'): shell> ls -1 -t -r file_name.[0-9]* | xargs cat | mysql `ls' is used to get all the log files in the right order. This can be useful if you have to revert to backup files after a crash and you want to redo the updates that occurred between the time of the backup and the crash. The Binary Update Log --------------------- The intention is that the binary log should replace the update log, so we recommend that you switch to this log format as soon as possible! The binary log contains all information that is available in the update log in a more efficient format. It also contains information about how long each query took that updated the database. It doesn't contain queries that don't modify any data. If you want to log all queries (for example to find a problem query) you should use the general query log. *Note Query log::. The binary log is also used when you are replicating a slave from a master. *Note Replication::. When started with the `--log-bin[=file_name]' option, `mysqld' writes a log file containing all SQL commands that update data. If no file name is given, it defaults to the name of the host machine followed by `-bin'. If a file name is given, but it doesn't contain a path, the file is written in the data directory. If you supply an extension to `--log-bin=filename.extension', the extension will be silently removed. To the binary log filename `mysqld' will append an extension that is a number that is incremented each time you execute `mysqladmin refresh', execute `mysqladmin flush-logs', execute the `FLUSH LOGS' statement or restart the server. A new binary log will also automatically be created when it reaches `max_binlog_size'. You can delete all inactive binary log files with the `RESET MASTER' command. *Note RESET::. You can use the following options to `mysqld' to affect what is logged to the binary log: *Option* *Description* `binlog-do-db=database_name' Tells the master that it should log updates to the binary log if the current database (i.e., the one selected by `USE') is 'database_name'. All other databases that are not explicitly mentioned are ignored. Note that if you use this you should ensure that you only do updates in the current database. (Example: `binlog-do-db=some_database') `binlog-ignore-db=database_name' Tells the master that updates where the current database (i.e., the one selected by `USE') is 'database_name' should not be stored in the binary log. Note that if you use this you should ensure that you only do updates in the current database. (Example: `binlog-ignore-db=some_database') To be able to know which different binary log files have been used, `mysqld' will also create a binary log index file that contains the name of all used binary log files. By default this has the same name as the binary log file, with the extension `'.index''. You can change the name of the binary log index file with the `--log-bin-index=[filename]' option. You should not manually edit this file while `mysqld' is running; doing this would confuse `mysqld'. If you are using replication, you should not delete old binary log files until you are sure that no slave will ever need to use them.
One way to do this is to do `mysqladmin flush-logs' once a day and then remove any logs that are more than 3 days old. You can remove them manually, or preferably using `PURGE MASTER LOGS TO' (*note Replication SQL::) which will also safely update the binary log index file for you. You can examine the binary log file with the `mysqlbinlog' command. For example, you can update a MySQL server from the binary log as follows: shell> mysqlbinlog log-file | mysql -h server_name You can also use the `mysqlbinlog' program to read the binary log directly from a remote MySQL server! `mysqlbinlog --help' will give you more information about how to use this program! If you are using `BEGIN [WORK]' or `SET AUTOCOMMIT=0', you must use the MySQL binary log for backups instead of the old update log. The binary logging is done immediately after a query completes but before any locks are released or any commit is done. This ensures that the log reflects the execution order. Updates to non-transactional tables are stored in the binary log immediately after execution. For transactional tables such as `BDB' or `InnoDB' tables, all updates (`UPDATE', `DELETE' or `INSERT') that change tables are cached until a `COMMIT' command is sent to the server. At this point `mysqld' writes the whole transaction to the binary log before the `COMMIT' is executed. Every thread will, on start, allocate a buffer of `binlog_cache_size' to buffer queries. If a query is bigger than this, the thread will open a temporary file to store the transaction. The temporary file will be deleted when the thread ends. The `max_binlog_cache_size' (default 4G) can be used to restrict the total size used to cache a multi-query transaction. If a transaction is bigger than this it will fail and roll back. If you are using the update or binary log, concurrent inserts will be converted to normal inserts when using `CREATE ... SELECT' or `INSERT ... SELECT'. This is to ensure that you can recreate an exact copy of your tables by applying the log on a backup. The Slow Query Log ------------------ When started with the `--log-slow-queries[=file_name]' option, `mysqld' writes a log file containing all SQL commands that took more than `long_query_time' to execute. The time to get the initial table locks is not counted as execution time. The slow query log is logged after the query is executed and after all locks have been released. This may be different from the order in which the statements are executed. If no file name is given, it defaults to the name of the host machine suffixed with `-slow.log'. If a filename is given, but doesn't contain a path, the file is written in the data directory. The slow query log can be used to find queries that take a long time to execute and are thus candidates for optimisation. With a large log, that can become a difficult task. You can pipe the slow query log through the `mysqldumpslow' command to get a summary of the queries which appear in the log. If you are using `--log-long-format', queries that are not using indexes are also logged. *Note Command-line options::. Log File Maintenance -------------------- The MySQL Server can create a number of different log files, which make it easy to see what is going on. *Note Log Files::. One must, however, regularly clean up these files, to ensure that the logs don't take up too much disk space. When using MySQL with log files, you will, from time to time, want to remove/backup old log files and tell MySQL to start logging on new files. *Note Backup::.
On a Linux (`Red Hat') installation, you can use the `mysql-log-rotate' script for this. If you installed MySQL from an RPM distribution, the script should have been installed automatically. Note that you should be careful with this if you are using the log for replication! On other systems you must install a short script yourself that you start from `cron' to handle log files. You can force MySQL to start using new log files by using `mysqladmin flush-logs' or by using the SQL command `FLUSH LOGS'. If you are using MySQL Version 3.21 you must use `mysqladmin refresh'. The above command does the following: * If standard logging (`--log') or slow query logging (`--log-slow-queries') is used, closes and reopens the log file (`mysql.log' and ``hostname`-slow.log' as default). * If update logging (`--log-update') is used, closes the update log and opens a new log file with a higher sequence number. If you are using only an update log, you only have to flush the logs and then move away the old update log files to a backup. If you are using the normal logging, you can do something like: shell> cd mysql-data-directory shell> mv mysql.log mysql.old shell> mysqladmin flush-logs and then take a backup and remove `mysql.old'. Replication in MySQL ==================== This section describes the various replication features in MySQL. It serves as a reference to the options available with replication. You will be introduced to replication and learn how to implement it. Toward the end, there are some frequently asked questions and descriptions of problems and how to solve them. We suggest that you visit our website at `http://www.mysql.com/' often and read updates to this section. Replication is constantly being improved, and we update the manual frequently with the most current information. Introduction ------------ One way replication can be used is to increase both robustness and speed. For robustness you can have two systems and can switch to the backup if you have problems with the master. The extra speed is achieved by sending a part of the non-updating queries to the replica server. Of course this only works if non-updating queries dominate, but that is the normal case. Starting in Version 3.23.15, MySQL supports one-way replication internally. One server acts as the master, while the other acts as the slave. Note that one server could play the roles of master in one pair and slave in the other. The master server keeps a binary log of updates (*note Binary log::) and an index file to binary logs to keep track of log rotation. The slave, upon connecting, informs the master where it left off since the last successfully propagated update, catches up on the updates, and then blocks and waits for the master to notify it of the new updates. Note that if you are replicating a database, all updates to this database should be done through the master! Another benefit of using replication is that one can get live backups of the system by doing a backup on a slave instead of doing it on the master. *Note Backup::. Replication Implementation Overview ----------------------------------- MySQL replication is based on the server keeping track of all changes to your database (updates, deletes, etc) in the binary log (*note Binary log::) and the slave server(s) reading the saved queries from the master server's binary log so that the slave can execute the same queries on its copy of the data. 
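As a quick way to observe this process on a running setup (a minimal sketch; both are ordinary SQL statements issued from the `mysql' client), you can ask the master which binary log it is currently writing and ask a slave how far it has got:
mysql> SHOW MASTER STATUS;
mysql> SHOW SLAVE STATUS;
The slave's status output reports, among other things, which master log file and position it is working from.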
It is *very important* to realise that the binary log is simply a record starting from a fixed point in time (the moment you enable binary logging). Any slaves which you set up will need copies of all the data from your master as it existed the moment that you enabled binary logging on the master. If you start your slaves with data that doesn't agree with what was on the master *when the binary log was started*, your slaves may fail. Please see the following table for an indication of master-slave compatibility between different versions. With regard to version 4.0, we recommend using the same version on both sides.
                           *Master*           *Master*  *Master*  *Master*
                           *3.23.33 and up*   *4.0.0*   *4.0.1*   *4.0.3 and up*
*Slave* *3.23.33 and up*   yes                no        no        no
*Slave* *4.0.0*            no                 yes       no        no
*Slave* *4.0.1*            yes                no        yes       no
*Slave* *4.0.3 and up*     yes                no        no        yes
*Note*: MySQL Version 4.0.2 is not recommended for replication. Starting from 4.0.0, one can use `LOAD DATA FROM MASTER' to set up a slave. Be aware that `LOAD DATA FROM MASTER' currently works only if all the tables on the master are `MyISAM' type, and will acquire a global read lock, so no writes are possible while the tables are being transferred from the master. This limitation is of a temporary nature, and is due to the fact that we have not yet implemented hot lock-free table backup. It will be removed in the future 4.0 branch versions once we implement hot backup enabling `LOAD DATA FROM MASTER' to work without blocking master updates. Due to the above limitation, we recommend that at this point you use `LOAD DATA FROM MASTER' only if the dataset on the master is relatively small, or if a prolonged read lock on the master is acceptable. While the actual speed of `LOAD DATA FROM MASTER' may vary from system to system, a good rule for a rough estimate of how long it is going to take is 1 second per 1 MB of the datafile. You will get close to the estimate if both master and slave are equivalent to a 700 MHz Pentium, are connected through a 100 Mbit/s network, and your index file is about half the size of your data file. Of course, your mileage will vary from system to system; the above rule just gives you a rough order-of-magnitude estimate. Once a slave is properly configured and running, it will simply connect to the master and wait for updates to process. If the master goes away or the slave loses connectivity with your master, it will keep trying to connect every `master-connect-retry' seconds until it is able to reconnect and resume listening for updates. Each slave keeps track of where it left off. The master server has no knowledge of how many slaves there are or which ones are up-to-date at any given time. The next section explains the master/slave setup process in more detail. How To Set Up Replication ------------------------- Here is a quick description of how to set up complete replication on your current MySQL server. It assumes you want to replicate all your databases and have not configured replication before. You will need to shut down your master server briefly to complete the steps outlined here. While this method is the most straightforward way to set up a slave, it is not the only one. For example, if you already have a snapshot of the master, and the master already has server id set and binary logging enabled, you can set up a slave without shutting the master down or even blocking the updates. For more details, please see *Note Replication FAQ::.
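To give you an idea of where the procedure below ends up, here is a minimal sketch of the option-file settings it puts in place (the server id values are only examples; any unique values will do, as explained in steps 4 and 5):
# master my.cnf
[mysqld]
log-bin
server-id=1
# slave my.cnf
[mysqld]
server-id=2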
If you want to become a real MySQL replication guru, we suggest that you begin by studying, pondering, and trying all commands mentioned in *Note Replication SQL::. You should also familiarise yourself with replication startup options in `my.cnf' in *Note Replication Options::. 1. Make sure you have a recent version of MySQL installed on the master and slave(s). Use Version 3.23.29 or higher. Previous releases used a different binary log format and had bugs which have been fixed in newer releases. Please do not report bugs until you have verified that the problem is present in the latest release. 2. Set up a special replication user on the master with the `FILE' (in MySQL versions older than 4.0.2) or `REPLICATION SLAVE' privilege in newer MySQL versions. You must also give this user permission to connect from all the slaves. If the user is only doing replication (which is recommended), you don't need to grant any additional privileges. For example, to create a user named `repl' which can access your master from any host, you might use this command: mysql> GRANT FILE ON *.* TO repl@"%" IDENTIFIED BY ''; # master < 4.0.2 mysql> GRANT REPLICATION SLAVE ON *.* TO repl@"%" IDENTIFIED BY ''; # master >= 4.0.2 If you plan to use the `LOAD TABLE FROM MASTER' or `LOAD DATA FROM MASTER' commands (available starting from version 4.0.0), you will also need to grant the `RELOAD' and `SUPER' privileges on the master to the above user. 3. If you are using MyISAM tables, flush all the tables and block write queries by executing the `FLUSH TABLES WITH READ LOCK' command. mysql> FLUSH TABLES WITH READ LOCK; and then take a snapshot of the data on your master server. The easiest way to do this (on Unix) is to simply use *tar* to produce an archive of your entire data directory. The exact data directory location depends on your installation. tar -cvf /tmp/mysql-snapshot.tar /path/to/data-dir Windows users can use `WinZIP' or similar software to create an archive of the data directory. After or during the process of taking a snapshot, read the value of the current binary log name and the offset on the master: mysql> SHOW MASTER STATUS; +---------------+----------+--------------+-------------------------------+ | File | Position | Binlog_do_db | Binlog_ignore_db | +---------------+----------+--------------+-------------------------------+ | mysql-bin.003 | 73 | test,bar | foo,manual,sasha_likes_to_run | +---------------+----------+--------------+-------------------------------+ 1 row in set (0.06 sec) The `File' column shows the name of the log, while `Position' shows the offset. In the above example, the binary log value is `mysql-bin.003' and the offset is 73. Record the values - you will need to use them later when you are setting up the slave. Once you have taken the snapshot and recorded the log name and offset, you can re-enable write activity on the master: mysql> UNLOCK TABLES; If you are using InnoDB tables, ideally you should use the InnoDB Hot Backup tool that is available to those who purchase MySQL commercial licenses, support, or the backup tool itself. It will take a consistent snapshot without acquiring any locks on the master server, and record the log name and offset corresponding to the snapshot to be later used on the slave. More information about the tool is available at `http://www.innodb.com/hotbackup.html'.
Without the hot backup tool, the quickest way to take a snapshot of InnoDB tables is to shut the master server down and copy the data files, the logs, and the table definition files (`.frm'). To record the current log file name and offset, you should do the following before you shut down the server: mysql> FLUSH TABLES WITH READ LOCK; mysql> SHOW MASTER STATUS; And then record the log name and the offset from the output of `SHOW MASTER STATUS' as was shown earlier. Once you have recorded the log name and the offset, shut the server down without unlocking the tables to make sure it goes down with the snapshot corresponding to the current log file and offset: shell> mysqladmin -uroot shutdown If the master has been previously running without `log-bin' enabled, the values of log name and position will be empty when you run `SHOW MASTER STATUS'. In that case, record the empty string ('') for the log name, and 4 for the offset. 4. Make sure that `my.cnf' on the master has `log-bin' (if it is not there already) and `server-id=unique number' in the `[mysqld]' section. If those options are not present, add them and restart the server. It is very important that the id of the slave is different from the id of the master. Think of `server-id' as something similar to the IP address - it uniquely identifies the server instance in the community of replication partners. [mysqld] log-bin server-id=1 5. Add the following to `my.cnf' on the slave(s): server-id=<unique number> replacing the value in <> with what is relevant to your system. `server-id' must be different for each server participating in replication. If you don't specify a server-id, it will be set to 1 if you have not defined `master-host', else it will be set to 2. Note that in the case of `server-id' omission the master will refuse connections from all slaves, and the slave will refuse to connect to a master. Thus, omitting `server-id' is only good for backup with a binary log. 6. While the slave is running, make it forget about the old replication configuration if it has been replicating previously: mysql> RESET SLAVE; 7. Copy the snapshot data into your data directory on your slave(s). Make sure that the privileges on the files and directories are correct. The user which MySQL runs as needs to be able to read and write to them, just as on the master. 8. Restart the slave(s). 9. Once the slave comes up, execute the following command: mysql> CHANGE MASTER TO MASTER_HOST='<master host name>', MASTER_USER='<replication user name>', MASTER_PASSWORD='<replication password>', MASTER_LOG_FILE='<recorded log file name>', MASTER_LOG_POS=<recorded log position>; replacing the values in <> with the actual values relevant to your system. 10. Start the slave thread: mysql> SLAVE START; After you have done the above, the slave(s) should connect to the master and catch up on any updates which happened since the snapshot was taken. If you have forgotten to set `server-id' for the slave you will get the following error in the error log file: Warning: one should set server_id to a non-0 value if master_host is set. The server will not act as a slave. If you have forgotten to do this for the master, the slaves will not be able to connect to the master. If a slave is not able to replicate for any reason, you will find error messages in the error log on the slave. Once a slave is replicating, you will find a file called `master.info' in the same directory as your error log. The `master.info' file is used by the slave to keep track of how much of the master's binary log it has processed. *Do not* remove or edit the file, unless you really know what you are doing.
Even in that case, it is preferred that you use `CHANGE MASTER TO' command. Now that you have a snapshot, you can use it to set up other slaves. To do so, follow the slave portion of the procedure described above. You do not need to take another snapshot of the master. Replication Features and Known Problems --------------------------------------- Here is an explanation of what is supported and what is not: * Replication will be done correctly with `AUTO_INCREMENT', `LAST_INSERT_ID()', and `TIMESTAMP' values. * `RAND()' in updates does not replicate properly. Use `RAND(some_non_rand_expr)' if you are replicating updates with `RAND()'. You can, for example, use `UNIX_TIMESTAMP()' for the argument to `RAND()'. * You have to use the same character set (`--default-character-set') on the master and the slave. If not, you may get duplicate key errors on the slave, because a key that is regarded as unique in the master character set may not be unique in the slave character set. * In 3.23, `LOAD DATA INFILE' will be handled properly as long as the file still resides on the master server at the time of update propagation. `LOAD LOCAL DATA INFILE' will be skipped. In 4.0, this limitation is not present - all forms of `LOAD DATA INFILE' are properly replicated. * Update queries that use user variables are not replication-safe (yet). * `FLUSH' commands are not stored in the binary log and are because of this not replicated to the slaves. This is not normally a problem as `FLUSH' doesn't change anything. This does however mean that if you update the MySQL privilege tables directly without using the `GRANT' statement and you replicate the `mysql' privilege database, you must do a `FLUSH PRIVILEGES' on your slaves to put the new privileges into effect. * Temporary tables starting in 3.23.29 are replicated properly with the exception of the case when you shut down slave server ( not just slave thread), you have some temporary tables open, and they are used in subsequent updates. To deal with this problem shutting down the slave, do `SLAVE STOP', check `Slave_open_temp_tables' variable to see if it is 0, then issue `mysqladmin shutdown'. If the number is not 0, restart the slave thread with `SLAVE START' and see if you have better luck next time. There will be a cleaner solution, but it has to wait until version 4.0. In earlier versions temporary tables are not replicated properly - we recommend that you either upgrade, or execute `SET SQL_LOG_BIN=0' on your clients before all queries with temp tables. * MySQL only supports one master and many slaves. In 4.x, we will add a voting algorithm to automatically change master if something goes wrong with the current master. We will also introduce 'agent' processes to help do load balancing by sending select queries to different slaves. * Starting in Version 3.23.26, it is safe to connect servers in a circular master-slave relationship with `log-slave-updates' enabled. Note, however, that many queries will not work right in this kind of setup unless your client code is written to take care of the potential problems that can happen from updates that occur in different sequence on different servers. This means that you can do a setup like the following: A -> B -> C -> A This setup will only works if you only do non conflicting updates between the tables. In other words, if you insert data in A and C, you should never insert a row in A that may have a conflicting key with a row insert in C. 
You should also not update the same rows on two servers if the order in which the updates are applied matters. Note that the log format has changed in Version 3.23.26 so that pre-3.23.26 slaves will not be able to read it.

* If the query on the slave gets an error, the slave thread will terminate, and a message will appear in the `.err' file. You should then connect to the slave manually, fix the cause of the error (for example, a non-existent table), and then run the `SLAVE START' SQL command (available starting in Version 3.23.16). In Version 3.23.15, you will have to restart the server.

* If the connection to the master is lost, the slave will retry immediately, and then, in case of failure, retry every `master-connect-retry' (default 60) seconds. Because of this, it is safe to shut down the master, and then restart it after a while. The slave will also be able to deal with network connectivity outages. However, the slave will notice the network outage only after receiving no data from the master for `slave_net_timeout' seconds. So if your outages are short, you may want to decrease `slave_net_timeout'; see *Note SHOW VARIABLES::.

* Shutting down the slave (cleanly) is also safe, as it keeps track of where it left off. Unclean shutdowns might produce problems, especially if the disk cache was not synced before the system died. Your system fault tolerance will be greatly increased if you have a good UPS.

* If the master is listening on a non-standard port, you will also need to specify this with the `master-port' parameter in `my.cnf'.

* In Version 3.23.15, all of the tables and databases will be replicated. Starting in Version 3.23.16, you can restrict replication to a set of databases with `replicate-do-db' directives in `my.cnf' or just exclude a set of databases with `replicate-ignore-db'. Note that up until Version 3.23.23, there was a bug that did not properly deal with `LOAD DATA INFILE' if you did it in a database that was excluded from replication.

* Starting in Version 3.23.16, `SET SQL_LOG_BIN = 0' will turn off replication (binary) logging on the master, and `SET SQL_LOG_BIN = 1' will turn it back on - you must have the `SUPER' (in MySQL 4.0.2 and above) or `PROCESS' (in older MySQL versions) privilege to do this.

* Starting in Version 3.23.19, you can clean up stale replication leftovers when something goes wrong and you want a clean start, using the `FLUSH MASTER' and `FLUSH SLAVE' commands. In Version 3.23.26 we have renamed them to `RESET MASTER' and `RESET SLAVE' respectively to clarify what they do. The old `FLUSH' variants still work, though, for compatibility.

* Starting in Version 3.23.23, you can change masters and adjust the log position with `CHANGE MASTER TO'.

* Starting in Version 3.23.23, you can tell the master that updates in certain databases should not be logged to the binary log with `binlog-ignore-db'.

* Starting in Version 3.23.26, you can use `replicate-rewrite-db' to tell the slave to apply updates from one database on the master to a database with a different name on the slave.

* Starting in Version 3.23.28, you can use `PURGE MASTER LOGS TO 'log-name'' to get rid of old logs while the slave is running. This will remove all old logs before, but not including, `'log-name''.

* Due to the non-transactional nature of MyISAM tables, it is possible to have a query that will only partially update a table and return an error code. This can happen, for example, on a multi-row insert that has one row violating a key constraint, or if a long update query is killed after updating some of the rows. If that happens on the master, the slave thread will exit and wait for the DBA to decide what to do about it, unless the error code is legitimate and the query execution results in the same error code on the slave. If this error code validation behaviour is not desirable, some (or all) errors can be masked out with the `slave-skip-errors' option, starting in Version 3.23.47.

* While individual tables can be excluded from replication with `replicate-do-table'/`replicate-ignore-table' or `replicate-wild-do-table'/`replicate-wild-ignore-table', there are currently some design deficiencies that in some rather rare cases produce unexpected results. The replication protocol does not inform the slave explicitly which tables are going to be modified by the query, so the slave has to parse the query to know this. To avoid redundant parsing for queries that will end up actually being executed, table exclusion is currently implemented by sending the query to the standard MySQL parser, which will short-circuit the query and report success if it detects that the table should be ignored. In addition to several inefficiencies, this approach is also more bug prone, and there are two known bugs as of Version 3.23.49. The first is that, because the parser automatically opens the table when parsing some queries, the ignored table has to exist on the slave. The other bug is that if the ignored table gets partially updated, the slave thread will not notice that the table actually should have been ignored and will suspend the replication process. While the above bugs are conceptually very simple to fix, we have not yet found a way to do this without a significant code change that would compromise the stability status of the 3.23 branch. There is a workaround for both, in the rare case that they happen to affect your application: use `slave-skip-errors'.
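As promised above, here is a sketch of the safe shutdown check for a slave that may have open replicated temporary tables. This is illustrative only; `Slave_open_temp_tables' is the variable mentioned in the list above, and if your version does not accept the `LIKE' clause here you can look for it in the plain `SHOW STATUS' output instead:

     mysql> SLAVE STOP;
     mysql> SHOW STATUS LIKE 'Slave_open_temp_tables';
     +------------------------+-------+
     | Variable_name          | Value |
     +------------------------+-------+
     | Slave_open_temp_tables | 0     |
     +------------------------+-------+

     shell> mysqladmin shutdown

If the value is not 0, run `SLAVE START' again and retry the shutdown later.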
Replication Options in `my.cnf'
-------------------------------

If you are using replication, we recommend that you use MySQL Version 3.23.33 or later. Older versions work, but they do have some bugs and are missing some features. Some of the options mentioned here may not be available in your version if it is not the most recent one. For all options specific to the 4.0 branch, there is a note indicating so. Otherwise, if you discover that the option you are interested in is not available in your 3.23 version, and you really need it, please upgrade to the most recent 3.23 branch. Please be aware that the 4.0 branch is still in alpha, so some things may not be working as smoothly as you would like. If you really would like to try the new features of 4.0, we recommend you do it in such a way that in case there is a problem your mission-critical applications will not be disrupted.

On both master and slave you need to use the `server-id' option. This sets a unique replication id. You should pick a unique value in the range from 1 to 2^32-1 for each master and slave. Example: `server-id=3'

The following table describes the options you can use for the `MASTER':

*Option* *Description*

`log-bin=filename' Write a binary update log to the specified location. Note that if you give it a parameter with an extension (for example, `log-bin=/mysql/logs/replication.log'), versions up to 3.23.24 will not work right during replication if you do `FLUSH LOGS'. The problem is fixed in Version 3.23.25. If you are using this kind of log name, `FLUSH LOGS' will be ignored for the binary log. To clear the log, run `FLUSH MASTER', and do not forget to run `FLUSH SLAVE' on all slaves. In Versions 3.23.26 and later, you should use `RESET MASTER' and `RESET SLAVE'. You can use this option if you want a log name that is independent of your hostname (which could be useful in case you rename your host one day).

`log-bin-index=filename' Because the user could issue the `FLUSH LOGS' command, we need to know which log is currently active and which ones have been rotated out and in what sequence. This information is stored in the binary log index file. The default is ``hostname`.index'. You should not need to change this. Example: `log-bin-index=db.index'

`sql-bin-update-same' If set, setting `SQL_LOG_BIN' to a value will automatically set `SQL_LOG_UPDATE' to the same value and vice versa.

`binlog-do-db=database_name' Tells the master that it should log updates to the binary log if the current database (i.e. the one selected by `USE') is `database_name'. All other databases that are not explicitly mentioned are ignored. Note that if you use this, you should ensure that you do updates only in the current database. Example: `binlog-do-db=sales'

`binlog-ignore-db=database_name' Tells the master that updates where the current database (i.e. the one selected by `USE') is `database_name' should not be stored in the binary log. Note that if you use this, you should ensure that you do updates only in the current database. Example: `binlog-ignore-db=accounting'
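As a minimal sketch of how these master options might be combined, the following `[mysqld]' section enables binary logging, assigns a server id, and keeps updates to a scratch database out of the binary log. The database name is a placeholder chosen for this example, not a recommendation:

     [mysqld]
     server-id=1
     log-bin
     binlog-ignore-db=test

If you want log names that are independent of the hostname, give `log-bin' a filename as described in the table above.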
The following table describes the options you can use for the `SLAVE':

*Option* *Description*

`master-host=host' Master hostname or IP address for replication. If not set, the slave thread will not be started. Note that the setting of `master-host' will be ignored if there exists a valid `master.info' file. Probably a better name for this option would have been something like `bootstrap-master-host', but it is too late to change now. Example: `master-host=db-master.mycompany.com'

`master-user=username' The username the slave thread will use for authentication when connecting to the master. The user must have the `FILE' privilege. If the master user is not set, user `test' is assumed. The value in `master.info' will take precedence if it can be read. Example: `master-user=scott'

`master-password=password' The password the slave thread will authenticate with when connecting to the master. If not set, an empty password is assumed. The value in `master.info' will take precedence if it can be read. Example: `master-password=tiger'

`master-port=portnumber' The port the master is listening on. If not set, the compiled setting of `MYSQL_PORT' is assumed. If you have not tinkered with `configure' options, this should be 3306. The value in `master.info' will take precedence if it can be read. Example: `master-port=3306'

`master-connect-retry=seconds' The number of seconds the slave thread will sleep before retrying to connect to the master in case the master goes down or the connection is lost. Default is 60. Example: `master-connect-retry=60'

`master-ssl' Available after 4.0.0. Turns SSL on for replication. Be warned that this is a relatively new feature. Example: `master-ssl'

`master-ssl-key' Available after 4.0.0. Master SSL keyfile name. Only applies if you have enabled `master-ssl'. Example: `master-ssl-key=SSL/master-key.pem'

`master-ssl-cert' Available after 4.0.0. Master SSL certificate file name. Only applies if you have enabled `master-ssl'. Example: `master-ssl-cert=SSL/master-cert.pem'

`master-info-file=filename' The location of the file that remembers where we left off on the master during the replication process.
The default is `master.info' in the data directory. You should not need to change this. Example: `master-info-file=master.info'

`report-host' Available after 4.0.0. Hostname or IP of the slave to be reported to the master during slave registration. Will appear in the output of `SHOW SLAVE HOSTS'. Leave unset if you do not want the slave to register itself with the master. Note that it is not sufficient for the master to simply read the IP of the slave off the socket once the slave connects. Due to `NAT' and other routing issues, that IP may not be valid for connecting to the slave from the master or other hosts. Example: `report-host=slave1.mycompany.com'

`report-port' Available after 4.0.0. Port for connecting to the slave, reported to the master during slave registration. Set it only if the slave is listening on a non-default port or if you have a special tunnel from the master or other clients to the slave. If not sure, leave this option unset.

`replicate-do-table=db_name.table_name' Tells the slave thread to restrict replication to the specified table. To specify more than one table, use the directive multiple times, once for each table. This will work for cross-database updates, in contrast to `replicate-do-db'. Example: `replicate-do-table=some_db.some_table'

`replicate-ignore-table=db_name.table_name' Tells the slave thread to not replicate any command that updates the specified table (even if any other tables may be updated by the same command). To specify more than one table to ignore, use the directive multiple times, once for each table. This will work for cross-database updates, in contrast to `replicate-ignore-db'. Example: `replicate-ignore-table=db_name.some_table'

`replicate-wild-do-table=db_name.table_name' Tells the slave thread to restrict replication to queries where any of the updated tables match the specified wildcard pattern. To specify more than one table, use the directive multiple times, once for each table. This will work for cross-database updates. Example: `replicate-wild-do-table=foo%.bar%' will replicate only updates that use a table in any database whose name starts with `foo' and whose table name starts with `bar'. Note that if you do `replicate-wild-do-table=foo%.%' then the rule will be propagated to `CREATE DATABASE' and `DROP DATABASE', i.e. these two statements will be replicated if the database name matches the database pattern ('foo%' here) (this magic is triggered by '%' being the table pattern).

`replicate-wild-ignore-table=db_name.table_name' Tells the slave thread to not replicate a query where any table matches the given wildcard pattern. To specify more than one table to ignore, use the directive multiple times, once for each table. This will work for cross-database updates. Example: `replicate-wild-ignore-table=foo%.bar%' will not replicate updates to tables in databases whose names start with `foo' and whose table names start with `bar'. Note that if you do `replicate-wild-ignore-table=foo%.%' then the rule will be propagated to `CREATE DATABASE' and `DROP DATABASE', i.e. these two statements will not be replicated if the database name matches the database pattern ('foo%' here) (this magic is triggered by '%' being the table pattern).

`replicate-ignore-db=database_name' Tells the slave thread to not replicate any command where the current database (i.e. the one selected by `USE') is `database_name'. To specify more than one database to ignore, use the directive multiple times, once for each database.
You should not use this directive if you are using cross-database updates and you don't want these updates to be replicated. The main reason for this "just-check-the-current-database" behaviour is that it's hard to know from the command alone whether a query should be replicated or not; for example, if you are using multi-table-delete or multi-table-update commands in MySQL 4.x that go across multiple databases. It's also very fast to just check the current database, as this only has to be done once at connect time or when the database changes. If you need cross-database updates to work, make sure you have 3.23.28 or later, and use `replicate-wild-ignore-table=db_name.%'. Example: `replicate-ignore-db=some_db'

`replicate-do-db=database_name' Tells the slave thread to restrict replication to commands where the current database (i.e. the one selected by `USE') is `database_name'. To specify more than one database, use the directive multiple times, once for each database. Note that this will not replicate cross-database queries such as `UPDATE some_db.some_table SET foo='bar'' while having selected a different or no database. If you need cross-database updates to work, make sure you have 3.23.28 or later, and use `replicate-wild-do-table=db_name.%'. Example: `replicate-do-db=some_db'

`log-slave-updates' Tells the slave to log the updates from the slave thread to the slave's binary log. Off by default. Of course, it requires that the slave be started with binary logging enabled (the `log-bin' option). You have to use `log-slave-updates' to chain several slaves; for example, for the setup A -> B -> C (C is a slave of B, which is a slave of A) to work, you need to start B with the `log-slave-updates' option.

`replicate-rewrite-db=from_name->to_name' Tells the slave to translate the current database (i.e. the one selected by `USE') to `to_name' if it was `from_name' on the master. Only statements involving tables may be affected (`CREATE DATABASE' and `DROP DATABASE' won't), and only if `from_name' was the current database on the master. This will not work for cross-database updates. Example: `replicate-rewrite-db=master_db_name->slave_db_name'

`slave-skip-errors=[err_code1,err_code2,... | all]' Available only in 3.23.47 and later. Tells the slave thread to continue replication when a query returns an error from the provided list. Normally, replication will discontinue when an error is encountered, giving the user a chance to resolve the inconsistency in the data manually. Do not use this option unless you fully understand why you are getting the errors. If there are no bugs in your replication setup and client programs, and no bugs in MySQL itself, you should never get an abort with an error. Indiscriminate use of this option will result in slaves being hopelessly out of sync with the master and you having no idea how the problem happened. For error codes, you should use the numbers provided by the error message in your slave error log and in the output of `SHOW SLAVE STATUS'. A full list of error messages can be found in the source distribution in `Docs/mysqld_error.txt'. You can (but should not) also use the very non-recommended value `all', which will ignore all error messages and keep barging along regardless. Needless to say, if you use it, we make no promises regarding your data integrity. Please do not complain if your data on the slave is not anywhere close to what it is on the master in this case - you have been warned. Example: `slave-skip-errors=1062,1053' or `slave-skip-errors=all'

`skip-slave-start' Tells the slave server not to start the slave thread on startup. The user can start it later with `SLAVE START'.

`slave_compressed_protocol=#' If 1, use compression on the slave/client protocol if both slave and master support it.

`slave_net_timeout=#' Number of seconds to wait for more data from the master before aborting the read.
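To illustrate how several of the slave options above fit together, here is a sketch of a slave `[mysqld]' section. All names, the password, and the choice of filters are placeholders for illustration only; adjust or drop each line to match your setup:

     [mysqld]
     server-id=2
     master-host=db-master.mycompany.com
     master-user=scott
     master-password=tiger
     master-port=3306
     master-connect-retry=60
     replicate-wild-ignore-table=test.%
     log-bin
     log-slave-updates

Remember that once a valid `master.info' file exists, the `master-*' lines are ignored in favour of it, as noted in the table above.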
SQL Commands Related to Replication
-----------------------------------

Replication can be controlled through the SQL interface. Here is a summary of the commands:

*Command* *Description*

`SLAVE START' Starts the slave thread. As of MySQL 4.0.2, you can add `IO_THREAD' or `SQL_THREAD' options to the statement to start the I/O thread or the SQL thread. The I/O thread reads queries from the master server and stores them in the relay log. The SQL thread reads the relay log and executes the queries. (Slave)

`SLAVE STOP' Stops the slave thread. Like `SLAVE START', this statement may be used with the `IO_THREAD' and `SQL_THREAD' options. (Slave)

`SET SQL_LOG_BIN=0' Disables update logging if the user has the `SUPER' privilege. Ignored otherwise. (Master)

`SET SQL_LOG_BIN=1' Re-enables update logging if the user has the `SUPER' privilege. Ignored otherwise. (Master)

`SET GLOBAL SQL_SLAVE_SKIP_COUNTER=n' Skips the next `n' events from the master. Only valid when the slave thread is not running; otherwise, gives an error. Useful for recovering from replication glitches.

`RESET MASTER' Deletes all binary logs listed in the index file, resetting the binlog index file to be empty. In pre-3.23.26 versions, use `FLUSH MASTER'. (Master)

`RESET SLAVE' Makes the slave forget its replication position in the master logs. In pre-3.23.26 versions the command was called `FLUSH SLAVE'. (Slave)

`LOAD TABLE tblname FROM MASTER' Downloads a copy of the table from the master to the slave. Implemented mainly for debugging of `LOAD DATA FROM MASTER', but some "gourmet" users might find it useful for other things. Do not use it if you consider yourself the average "non-hacker" type user. Requires that the replication user which is used to connect to the master has the `RELOAD' and `SUPER' privileges on the master. Please read the timeout notes in the description of `LOAD DATA FROM MASTER' below; they apply here too. (Slave)

`LOAD DATA FROM MASTER' Available starting in 4.0.0. Takes a snapshot of the master and copies it to the slave. Requires that the replication user which is used to connect to the master has the `RELOAD' and `SUPER' privileges on the master. Updates the values of `MASTER_LOG_FILE' and `MASTER_LOG_POS' so that the slave will start replicating from the correct position. Will honor table and database exclusion rules specified with the `replicate-*' options. So far it works only with `MyISAM' tables and acquires a global read lock on the master while taking the snapshot. In the future it is planned to make it work with `InnoDB' tables and to remove the need for a global read lock using the non-blocking online backup feature. If you are loading big tables, you may have to increase the values of `net_read_timeout' and `net_write_timeout' on both your master and slave; see *Note SHOW VARIABLES::. Note that `LOAD DATA FROM MASTER' does *NOT* copy any tables from the `mysql' database. This is to make it easy to have different users and privileges on the master and the slave.

`CHANGE MASTER TO master_def_list' Changes the master parameters to the values specified in `master_def_list' and restarts the slave thread.
`master_def_list' is a comma-separated list of `master_def' where `master_def' is one of the following: `MASTER_HOST', `MASTER_USER', `MASTER_PASSWORD', `MASTER_PORT', `MASTER_CONNECT_RETRY', `MASTER_LOG_FILE', `MASTER_LOG_POS'. For example:

     CHANGE MASTER TO
         MASTER_HOST='master2.mycompany.com',
         MASTER_USER='replication',
         MASTER_PASSWORD='bigs3cret',
         MASTER_PORT=3306,
         MASTER_LOG_FILE='master2-bin.001',
         MASTER_LOG_POS=4;

You only need to specify the values that need to be changed. The values that you omit will stay the same, with the exception of when you change the host or the port. In that case, the slave will assume that since you are connecting to a different host or a different port, the master is different. Therefore, the old values of log and position are not applicable anymore, and will automatically be reset to an empty string and 0, respectively (the start values). Note that if you restart the slave, it will remember its last master. If this is not desirable, you should delete the `master.info' file before restarting, and the slave will read its master from `my.cnf' or the command line. This command is useful for setting up a slave when you have the snapshot of the master and have recorded the log and the offset on the master that the snapshot corresponds to. You can run `CHANGE MASTER TO MASTER_LOG_FILE='log_name_on_master', MASTER_LOG_POS=log_offset_on_master' on the slave after restoring the snapshot. (Slave)

`SHOW MASTER STATUS' Provides status information on the binlog of the master. (Master)

`SHOW SLAVE HOSTS' Available after 4.0.0. Gives a listing of slaves currently registered with the master. (Master)

`SHOW SLAVE STATUS' Provides status information on essential parameters of the slave thread. (Slave)

`SHOW MASTER LOGS' Only available starting in Version 3.23.28. Lists the binary logs on the master. You should use this command prior to `PURGE MASTER LOGS TO' to find out how far you should go. (Master)

`SHOW BINLOG EVENTS [IN 'logname'] [FROM pos] [LIMIT [offset,] rows]' Shows the events in the binary update log. Primarily used for testing/debugging, but can also be used by regular clients that for some reason need to read the binary log contents. (Master)

`SHOW NEW MASTER FOR SLAVE WITH MASTER_LOG_FILE='logfile' AND MASTER_LOG_POS=pos AND MASTER_LOG_SEQ=log_seq AND MASTER_SERVER_ID=server_id' This command is used when a slave of a possibly dead/unavailable master needs to be switched to replicate off another slave that has been replicating the same master. The command will return recalculated replication coordinates (the slave's current binary log file name and position within that file). The output can be used in a subsequent `CHANGE MASTER TO' command. Normal users should never need to run this command. It is primarily reserved for internal use by the fail-safe replication code. We may later change the syntax if we find a more intuitive way to describe this operation.

`PURGE MASTER LOGS TO 'logname'' Available starting in Version 3.23.28. Deletes all the replication logs that are listed in the log index as being prior to the specified log, and removes them from the log index, so that the given log now becomes the first. Example:

     PURGE MASTER LOGS TO 'mysql-bin.010'

This command will do nothing and fail with an error if you have an active slave that is currently reading one of the logs you are trying to delete. However, if you have a dormant slave, and happen to purge one of the logs it wants to read, the slave will be unable to replicate once it comes up. The command is safe to run while slaves are replicating - you do not need to stop them. You must first check all the slaves with `SHOW SLAVE STATUS' to see which log they are on, then do a listing of the logs on the master with `SHOW MASTER LOGS', find the earliest log among all the slaves (if all the slaves are up to date, this will be the last log on the list), back up all the logs you are about to delete (optional), and purge up to the target log.
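Putting the safe purging procedure just described into concrete commands: run `SHOW SLAVE STATUS' on every slave and note which log each one is reading, list the logs on the master, and then purge up to (but not including) the earliest log still in use. The log name here is a placeholder:

     mysql> SHOW SLAVE STATUS;
     mysql> SHOW MASTER LOGS;
     mysql> PURGE MASTER LOGS TO 'mysql-bin.007';

If the earliest log any slave still needs is `mysql-bin.007', this removes `mysql-bin.001' through `mysql-bin.006' and leaves the rest in place.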
Replication FAQ
---------------

*Q*: How do I configure a slave if the master is already running and I do not want to stop it?

*A*: There are several options. If you have taken a backup of the master at some point and recorded the binlog name and offset (from the output of `SHOW MASTER STATUS') corresponding to the snapshot, do the following:

* Make sure a unique server id is assigned to the slave.

* Execute `CHANGE MASTER TO MASTER_HOST='master-host-name', MASTER_USER='master-user-name', MASTER_PASSWORD='master-pass', MASTER_LOG_FILE='recorded-log-name', MASTER_LOG_POS=recorded_log_pos'

* Execute `SLAVE START'

If you do not have a backup of the master already, here is a quick way to take one consistently:

* `FLUSH TABLES WITH READ LOCK'

* `gtar zcf /tmp/backup.tar.gz /var/lib/mysql' (or a variation of this)

* `SHOW MASTER STATUS' - make sure to record the output - you will need it later

* `UNLOCK TABLES'

Afterwards, follow the instructions for the case when you have a snapshot and have recorded the log name and offset. You can use the same snapshot to set up several slaves. As long as the binary logs of the master are left intact, you can wait as long as several days or in some cases maybe a month to set up a slave once you have the snapshot of the master. In theory the waiting gap can be infinite. The two practical limitations are the disk space of the master getting filled with old logs, and the amount of time it will take the slave to catch up.

In version 4.0.0 and newer, you can also use `LOAD DATA FROM MASTER'. This is a convenient command that will take a snapshot, restore it to the slave, and adjust the log name and offset on the slave all at once. In the future, `LOAD DATA FROM MASTER' will be the recommended way to set up a slave. Be warned, however, that the read lock may be held for a long time if you use this command. It is not yet implemented as efficiently as we would like. If you have large tables, the preferred method at this time is still a local `tar' snapshot after executing `FLUSH TABLES WITH READ LOCK'.

*Q*: Does the slave need to be connected to the master all the time?

*A*: No, it does not. You can have the slave go down or stay disconnected for hours or even days, then reconnect, catch up on the updates, and then disconnect or go down for a while again. So you can, for example, use a master-slave setup over a dial-up link that is up only for short periods of time. The implication of this is that at any given time the slave is not guaranteed to be in sync with the master unless you take some special measures. In the future, we will have the option to block the master until at least one slave is in sync.

*Q*: How do I force the master to block updates until the slave catches up?

*A*: Execute the following commands:

* Master: `FLUSH TABLES WITH READ LOCK'

* Master: `SHOW MASTER STATUS' - record the log name and the offset

* Slave: `SELECT MASTER_POS_WAIT('recorded_log_name', recorded_log_offset)' - when the `SELECT' returns, the slave is in sync with the master

* Master: `UNLOCK TABLES' - now the master will continue updates.
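For illustration, here is the same blocking procedure as a concrete session; the log name and offset are placeholders that you would take from the `SHOW MASTER STATUS' output:

     Master:  mysql> FLUSH TABLES WITH READ LOCK;
     Master:  mysql> SHOW MASTER STATUS;
     Slave:   mysql> SELECT MASTER_POS_WAIT('master-bin.035', 7841);
     Master:  mysql> UNLOCK TABLES;

`MASTER_POS_WAIT()' does not return until the slave has read and applied all updates up to the given position in the master's binary log.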
*Q*: Why do I sometimes see more than one `Binlog_Dump' thread on the master after I have restarted the slave?

*A*: `Binlog_Dump' is a continuous process that is handled by the server in the following way:

* Catch up on the updates.

* Once there are no more updates left, go into `pthread_cond_wait()', from which we can be awakened either by an update or a kill.

* On wake up, check the reason. If we are not supposed to die, continue the `Binlog_Dump' loop.

* If there is some fatal error, such as detecting a dead client, terminate the loop.

So if the slave thread stops on the slave, the corresponding `Binlog_Dump' thread on the master will not notice it until after at least one update to the master (or a kill), which is needed to wake it up from `pthread_cond_wait()'. In the meantime, the slave could have opened another connection, which resulted in another `Binlog_Dump' thread. The above problem should not be present in Version 3.23.26 and later versions. In Version 3.23.26 we added `server-id' to each replication server, and now all the old zombie threads are killed on the master when a new replication thread connects from the same slave.

*Q*: How do I rotate replication logs?

*A*: In Version 3.23.28 and later you should use the `PURGE MASTER LOGS TO' command after determining which logs can be deleted, and optionally backing them up first. In earlier versions the process is much more painful, and cannot be safely done without stopping all the slaves in the case that you plan to re-use log names. You will need to stop the slave threads, edit the binary log index file, delete all the old logs, restart the master, start the slave threads, and then remove the old log files.

*Q*: How do I upgrade on a hot replication setup?

*A*: If you are upgrading pre-3.23.26 versions, you should just lock the master tables, let the slave catch up, then run `FLUSH MASTER' on the master, and `FLUSH SLAVE' on the slave to reset the logs, then restart new versions of the master and the slave. Note that the slave can stay down for some time - since the master is logging all the updates, the slave will be able to catch up once it is up and can connect. After 3.23.26, we have locked the replication protocol against modifications, so you can upgrade masters and slaves on the fly to a newer 3.23 version and you can have different versions of MySQL running on the slave and the master, as long as they are both newer than 3.23.26.

*Q*: What issues should I be aware of when setting up two-way replication?

*A*: MySQL replication currently does not support any locking protocol between master and slave to guarantee the atomicity of a distributed (cross-server) update. In other words, it is possible for client A to make an update to co-master 1, and in the meantime, before it propagates to co-master 2, client B could make an update to co-master 2 that will make the update of client A work differently than it did on co-master 1. Thus when the update of client A makes it to co-master 2, it will produce tables that are different from what you have on co-master 1, even after all the updates from co-master 2 have also propagated.
So you should not co-chain two servers in a two-way replication relationship, unless you are sure that your updates can safely happen in any order, or unless you take care of mis-ordered updates somehow in the client code. You must also realise that two-way replication actually does not improve performance very much, if at all, as far as updates are concerned. Both servers need to do the same amount of updates each, as you would have one server do. The only difference is that there will be a little less lock contention, because the updates originating on another server will be serialised in one slave thread. This benefit, though, might be offset by network delays.

*Q*: How can I use replication to improve the performance of my system?

*A*: You should set up one server as the master and direct all writes to it, and configure as many slaves as you have the money and rackspace for, distributing the reads among the master and the slaves. You can also start the slaves with `--skip-bdb', `--low-priority-updates' and `--delay-key-write=ALL' to get speed improvements for the slave. In this case the slave will use non-transactional `MyISAM' tables instead of `BDB' tables to get more speed.

*Q*: What should I do to prepare my client code to use performance-enhancing replication?

*A*: If the part of your code that is responsible for database access has been properly abstracted/modularised, converting it to run with the replicated setup should be very smooth and easy - just change the implementation of your database access to read from some slave or the master, and to always write to the master. If your code does not have this level of abstraction, setting up a replicated system will give you the opportunity and motivation to clean it up. You should start by creating a wrapper library/module with the following functions:

* `safe_writer_connect()'

* `safe_reader_connect()'

* `safe_reader_query()'

* `safe_writer_query()'

`safe_' means that the function will take care of handling all the error conditions. You should then convert your client code to use the wrapper library. It may be a painful and scary process at first, but it will pay off in the long run. All applications that follow the above pattern will be able to take advantage of a one-master/many-slaves solution. The code will be a lot easier to maintain, and adding troubleshooting options will be trivial. You will just need to modify one or two functions, for example, to log how long each query took, or which query, among your many thousands, gave you an error. If you have written a lot of code already, you may want to automate the conversion task by using Monty's `replace' utility, which comes with the standard distribution of MySQL, or just write your own Perl script. Hopefully, your code follows some recognisable pattern. If not, then you are probably better off rewriting it anyway, or at least going through and manually beating it into a pattern. Note that, of course, you can use different names for the functions. What is important is having a unified interface for connecting for reads, connecting for writes, doing a read, and doing a write.

*Q*: When and how much can MySQL replication improve the performance of my system?

*A*: MySQL replication is most beneficial for a system with frequent reads and not so frequent writes. In theory, by using a one-master/many-slaves setup you can scale by adding more slaves until you either run out of network bandwidth, or your update load grows to the point that the master cannot handle it.
In order to determine how many slaves you can get before the added benefits begin to level out, and how much you can improve the performance of your site, you need to know your query patterns, and empirically (by benchmarking) determine the relationship between the throughput on reads (reads per second, or `max_reads') and on writes (writes per second, or `max_writes') on a typical master and a typical slave. The example here will show you a rather simplified calculation of what you can get with replication for our imagined system.

Let's say our system load consists of 10% writes and 90% reads, and we have determined that `max_reads' = 1200 - 2 * `max_writes', or in other words, our system can do 1200 reads per second with no writes, our average write is twice as slow as our average read, and the relationship is linear. Let us suppose that our master and slave are of the same capacity, and we have N slaves and 1 master. Then we have for each server (master or slave):

     reads = 1200 - 2 * writes               (from benchmarks)
     reads = 9 * writes / (N + 1)            (reads are split, but writes go to all servers)
     9 * writes / (N + 1) + 2 * writes = 1200
     writes = 1200 / (2 + 9/(N + 1))

So if N = 0, which means we have no replication, our system can handle 1200/11, about 109 writes per second (which means we will have 9 times as many reads due to the nature of our application). If N = 1, we can get up to 184 writes per second. If N = 8, we get up to 400. If N = 17, 480 writes. Eventually, as N approaches infinity (and our budget negative infinity), we can get very close to 600 writes per second, increasing system throughput about 5.5 times. However, with only 8 servers, we have already increased it almost 4 times.

Note that our computations assumed infinite network bandwidth, and neglected several other factors that could turn out to be significant on your system. In many cases, you may not be able to make a computation similar to the one above that will accurately predict what will happen on your system if you add N replication slaves. However, answering the following questions should help you decide whether, and by how much, replication will improve the performance of your system:

* What is the read/write ratio on your system?

* How much more write load can one server handle if you reduce the reads?

* How many slaves do you have bandwidth for on your network?

*Q*: How can I use replication to provide redundancy/high availability?

*A*: With the currently available features, you would have to set up a master and a slave (or several slaves), and write a script that will monitor the master to see if it is up, and instruct your applications and the slaves to change masters in case of failure. Some suggestions:

* To tell a slave to change its master, use the `CHANGE MASTER TO' command.

* A good way to keep your applications informed as to the location of the master is by having a dynamic DNS entry for the master. With `bind' you can use `nsupdate' to dynamically update your DNS.

* You should run your slaves with the `log-bin' option and without `log-slave-updates'. This way the slave will be ready to become a master as soon as you issue `STOP SLAVE' and `RESET MASTER' on it, and `CHANGE MASTER TO' on the other slaves (a sketch of this follows the list). It will also help you catch spurious updates that may happen because of misconfiguration of the slave (ideally, you want to configure access rights so that no client can update the slave, except for the slave thread) combined with bugs in your client programs (they should never update the slave directly).

We are currently working on integrating an automatic master election system into MySQL, but until it is ready, you will have to create your own monitoring tools.
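A minimal sketch of that promotion, assuming one slave is being promoted and the other slaves should now replicate from it (the host name and the exact statements you need may differ on your setup):

     On the slave being promoted:
          mysql> SLAVE STOP;
          mysql> RESET MASTER;
     On each of the other slaves:
          mysql> SLAVE STOP;
          mysql> CHANGE MASTER TO MASTER_HOST='promoted-slave.mycompany.com';
          mysql> SLAVE START;

Depending on your setup you may also need to pass `MASTER_USER', `MASTER_PASSWORD', and `MASTER_PORT' in the `CHANGE MASTER TO' statement, and to repoint your applications at the new master (for example via the dynamic DNS entry mentioned above).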
*Q*: How does the slave server keep track of where it is on the master?

*A*: The slave uses a file in the data directory, defined by the `master-info-file=filename' path. This file holds all the information needed by the slave to request new updates. The file contains the following information:

*Line#* *Description*
1       Binary log file id
2       Log file position
3       Host (master)
4       Login user
5       Login password
6       Login port
7       Interval, the length of time between reconnects

Troubleshooting Replication
---------------------------

If you have followed the instructions, and your replication setup is not working, first eliminate the user error factor by checking the following:

* Is the master logging to the binary log? Check with `SHOW MASTER STATUS'. If it is, `Position' will be non-zero. If not, verify that you have given the master the `log-bin' option and have set `server-id'.

* Is the slave running? Check with `SHOW SLAVE STATUS'. The answer is found in the `Slave_running' column. If not, verify the slave options and check the error log for messages.

* If the slave is running, did it establish a connection with the master? Do `SHOW PROCESSLIST', find the thread with the `system user' value in the `User' column and `none' in the `Host' column, and check the `State' column. If it says `connecting to master', verify the privileges for the replication user on the master, the master host name, your DNS setup, whether the master is actually running, whether it is reachable from the slave, and if all that seems okay, read the error logs.

* If the slave was running, but then stopped, look at the `SHOW SLAVE STATUS' output and check the error logs. It usually happens when some query that succeeded on the master fails on the slave. This should never happen if you have taken a proper snapshot of the master, and never modify the data on the slave outside of the slave thread. If it does, it is a bug; read below on how to report it.

* If a query that succeeded on the master refuses to run on the slave, and a full database resync (the proper thing to do) does not seem feasible, try the following:

  - First see if there is some stray record in the way. Understand how it got there, then delete it and run `SLAVE START'.

  - If the above does not work or does not apply, try to understand if it would be safe to make the update manually (if needed) and then ignore the next query from the master.

  - If you have decided you can skip the next query, do `SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1; SLAVE START;' to skip a query that does not use `AUTO_INCREMENT' or `LAST_INSERT_ID()', or `SET GLOBAL SQL_SLAVE_SKIP_COUNTER=2; SLAVE START;' otherwise. The reason queries that use `AUTO_INCREMENT' or `LAST_INSERT_ID()' are different is that they take two events in the binary log of the master.

  - If you are sure the slave started out perfectly in sync with the master, and no one has updated the tables involved outside of the slave thread, report the bug, so you will not have to do the above tricks again.

* Make sure you are not running into an old bug by upgrading to the most recent version.

* If all else fails, read the error logs. If they are big, `grep -i slave /path/to/your-log.err' on the slave. There is no generic pattern to search for on the master, as the only errors it logs are general system errors - if it can, it will send the error to the slave when things go wrong.
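To make the checklist above concrete, a quick diagnostic pass might look like the following; the output you see will of course differ, and the error log path is a placeholder:

     On the master:
          mysql> SHOW MASTER STATUS;
     On the slave:
          mysql> SHOW SLAVE STATUS;
          mysql> SHOW PROCESSLIST;

     shell> grep -i slave /path/to/your-log.err

A non-zero `Position' on the master, `Slave_running' showing `Yes' on the slave, and a replication thread in `SHOW PROCESSLIST' that is past the `connecting to master' state are the quick signs that the basics are in place.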
When you have determined that there is no user error involved, and replication still either does not work at all or is unstable, it is time to start working on a bug report. We need to get as much information as possible from you to be able to track down the bug. Please do spend some time and effort preparing a good bug report. Ideally, we would like to have a test case in the format found in the `mysql-test/t/rpl*' directory of the source tree. If you submit a test case like that, you can expect a patch within a day or two in most cases, although, of course, your mileage may vary depending on a number of factors. The second best option is to write a simple program with easily configurable connection arguments for the master and the slave that will demonstrate the problem on our systems. You can write one in Perl or in C, depending on which language you know better. If you have one of the above ways to demonstrate the bug, use `mysqlbug' to prepare a bug report and send it to . If you have a phantom - a problem that does occur but you cannot duplicate "at will":

* Verify that there is no user error involved. For example, if you update the slave outside of the slave thread, the data will be out of sync, and you can have unique key violations on updates, in which case the slave thread will stop and wait for you to clean up the tables manually to bring them in sync.

* Run the slave with `log-slave-updates' and `log-bin' - this will keep a log of all updates on the slave.

* Save all evidence before resetting the replication. If we have no or only sketchy information, it will take us a while to track down the problem. The evidence you should collect is:

  - All binary logs on the master

  - All binary logs on the slave

  - The output of `SHOW MASTER STATUS' on the master at the time you discovered the problem

  - The output of `SHOW SLAVE STATUS' on the slave at the time you discovered the problem

  - Error logs on the master and on the slave

* Use `mysqlbinlog' to examine the binary logs. The following should be helpful to find the trouble query, for example:

     mysqlbinlog -j pos_from_slave_status /path/to/log_from_slave_status | head

Once you have collected the evidence on the phantom problem, try hard to isolate it into a separate test case first. Then report the problem to with as much info as possible.

MySQL Optimisation
******************

Optimisation is a complicated task because it ultimately requires understanding of the whole system. While it may be possible to do some local optimisations with little knowledge of your system or application, the more optimal you want your system to become, the more you will have to know about it. This chapter will try to explain and give some examples of different ways to optimise MySQL. Remember, however, that there are always some (increasingly harder) additional ways to make the system even faster.

Optimisation Overview
=====================

The most important part for getting a system fast is of course the basic design. You also need to know what kinds of things your system will be doing, and what your bottlenecks are. The most common bottlenecks are:

* Disk seeks. It takes time for the disk to find a piece of data. With modern disks in 1999, the mean time for this is usually lower than 10ms, so we can in theory do about 100 seeks a second. This time improves slowly with new disks and is very hard to optimise for a single table. The way to optimise this is to spread the data over more than one disk.

* Disk reading/writing.
When the disk is at the correct position we need to read the data. With modern disks in 1999, one disk delivers something like 10-20MB/s. This is easier to optimise than seeks because you can read in parallel from multiple disks.

* CPU cycles. When we have the data in main memory (or if it was already there) we need to process it to get to our result. Having small tables compared to the memory is the most common limiting factor. But then, with small tables speed is usually not the problem.

* Memory bandwidth. When the CPU needs more data than can fit in the CPU cache, the main memory bandwidth becomes a bottleneck. This is an uncommon bottleneck for most systems, but one should be aware of it.

MySQL Design Limitations/Tradeoffs
----------------------------------

When using the MyISAM storage engine, MySQL uses extremely fast table locking (multiple readers / single writers). The biggest problem with this table type is when you have a mix of a steady stream of updates and slow selects on the same table. If this is a problem with some tables, you can use another table type for these. *Note Table types::.

MySQL can work with both transactional and non-transactional tables. To be able to work smoothly with non-transactional tables (which can't roll back if something goes wrong), MySQL has the following rules:

* All columns have default values.

* If you insert a 'wrong' value in a column, like a `NULL' in a `NOT NULL' column or a too-big numerical value in a numerical column, MySQL will, instead of giving an error, set the column to the 'best possible value'. For numerical values this is 0, the smallest possible value, or the largest possible value. For strings this is either the empty string or the longest possible string that can be in the column.

* All calculated expressions return a value that can be used instead of signaling an error condition. For example, 1/0 returns `NULL'.

The reason for the above rules is that we can't check these conditions before the query starts to execute. If we encounter a problem after updating a few rows, we can't just roll back, as the table type may not support this. We can't stop, because in that case the update would be 'half done', which is probably the worst possible scenario. In this case it's better to 'do the best you can' and then continue as if nothing happened. The above means that one should not use MySQL to check field content; one should do this in the application.
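A small illustration of the rules above, assuming the default (non-strict) behaviour described in this section; the table and column names are made up for the example:

     mysql> CREATE TABLE coercion_demo (n TINYINT NOT NULL, s CHAR(3) NOT NULL);
     mysql> INSERT INTO coercion_demo VALUES (1000, 'abcdef');
     mysql> SELECT * FROM coercion_demo;
     +-----+-----+
     | n   | s   |
     +-----+-----+
     | 127 | abc |
     +-----+-----+
     mysql> SELECT 1/0;
     +------+
     | 1/0  |
     +------+
     | NULL |
     +------+

The too-big number is clamped to the largest value that fits in the column, the too-long string is truncated, and the division by zero yields `NULL' instead of an error.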
Portability
-----------

Because all SQL servers implement different parts of SQL, it takes work to write portable SQL applications. For very simple selects/inserts it is very easy, but the more you need the harder it gets. If you want an application that is fast with many databases it becomes even harder! To make a complex application portable you need to choose a number of SQL servers that it should work with. You can use the MySQL `crash-me' program/web-page `http://www.mysql.com/information/crash-me.php' to find functions, types, and limits you can use with a selection of database servers. Crash-me does not yet test everything possible, but it is still comprehensive, with about 450 things tested. For example, you shouldn't have column names longer than 18 characters if you want to be able to use Informix or DB2. Both the MySQL benchmarks and the `crash-me' programs are very database-independent. By taking a look at how we have handled this, you can get a feeling for what you have to do to write your application database-independent.

The benchmarks themselves can be found in the `sql-bench' directory in the MySQL source distribution. They are written in Perl with the DBI database interface (which solves the access part of the problem). See `http://www.mysql.com/information/benchmarks.html' for the results from this benchmark. As you can see in these results, all databases have some weak points. That is, they have different design compromises that lead to different behaviour. If you strive for database independence, you need to get a good feeling for each SQL server's bottlenecks. MySQL is very fast in retrieving and updating things, but will have a problem in mixing slow readers/writers on the same table. Oracle, on the other hand, has a big problem when you try to access rows that you have recently updated (until they are flushed to disk). Transactional databases in general are not very good at generating summary tables from log tables, as in this case row locking is almost useless.

To get your application _really_ database-independent, you need to define an easily extendable interface through which you manipulate your data. As C++ is available on most systems, it makes sense to use a C++ class interface to the databases. If you use some feature that is specific to one database (like the `REPLACE' command in MySQL), you should code a method for the other SQL servers to implement the same feature (but slower). With MySQL you can use the `/*! */' syntax to add MySQL-specific keywords to a query. The code inside `/**/' will be treated as a comment (ignored) by most other SQL servers.

If high performance is more important than exactness, as in some web applications, it is possible to create an application layer that caches all results to give you even higher performance. By letting old results 'expire' after a while, you can keep the cache reasonably fresh. This provides a method to handle high load spikes, in which case you can dynamically increase the cache and set the expire timeout higher until things get back to normal. In this case the table creation information should contain information about the initial size of the cache and how often the table should normally be refreshed.

What Have We Used MySQL For?
----------------------------

During MySQL's initial development, the features of MySQL were made to fit our largest customer, who handles data warehousing for a couple of the biggest retailers in Sweden. From all stores, we get weekly summaries of all bonus card transactions, and we are expected to provide useful information for the store owners to help them find out how their advertisement campaigns are affecting their customers. The data is quite huge (about 7 million summary transactions per month), and we have data for 4-10 years that we need to present to the users. We got weekly requests from the customers, who want 'instant' access to new reports from this data. We solved this by storing all information per month in compressed 'transaction' tables. We have a set of simple macros (scripts) that generate summary tables grouped by different criteria (product group, customer id, store ...) from the transaction tables. The reports are web pages that are dynamically generated by a small Perl script that parses a web page, executes the SQL statements in it, and inserts the results. We would have used PHP or mod_perl instead, but they were not available at that time. For graphical data we wrote a simple tool in `C' that can produce GIFs based on the result of an SQL query (with some processing of the result).
This is also dynamically executed from the Perl script that parses the `HTML' files. In most cases a new report can simply be done by copying an existing script and modifying the SQL query in it. In some cases, we will need to add more fields to an existing summary table or generate a new one, but this is also quite simple, as we keep all transaction tables on disk. (Currently we have at least 50G of transaction tables and 200G of other customer data.) We also let our customers access the summary tables directly with ODBC so that the advanced users can themselves experiment with the data. We haven't had any problems handling this with a quite modest Sun Ultra SPARCstation (2x200 MHz). We recently upgraded one of our servers to a 2-CPU 400 MHz UltraSPARC, and we are now planning to start handling transactions on the product level, which would mean a ten-fold increase of data. We think we can keep up with this by just adding more disks to our systems. We are also experimenting with Intel-Linux to be able to get more CPU power cheaper. Now that we have the binary portable database format (new in Version 3.23), we will start to use this for some parts of the application.

Our initial feelings are that Linux will perform much better on low-to-medium load and Solaris will perform better when you start to get a high load because of extreme disk IO, but we don't yet have anything conclusive about this. After some discussion with a Linux kernel developer, this might be a side effect of Linux giving so many resources to the batch job that the interactive performance gets very low. This makes the machine feel very slow and unresponsive while big batches are going. Hopefully this will be better handled in future Linux kernels.

The MySQL Benchmark Suite
-------------------------

This should contain a technical description of the MySQL benchmark suite (and `crash-me'), but that description is not written yet. Currently, you can get a good idea of the benchmark by looking at the code and results in the `sql-bench' directory in any MySQL source distribution. This benchmark suite is meant to tell any user what things a given SQL implementation performs well or poorly at. Note that this benchmark is single-threaded, so it measures the minimum time for the operations. We plan to add a lot of multi-threaded tests to the benchmark suite in the future. For example (run on the same NT 4.0 machine):

     *Reading 2000000 rows by index*   *Seconds*   *Seconds*
     mysql                                 367         249
     mysql_odbc                            464
     db2_odbc                             1206
     informix_odbc                      121126
     ms-sql_odbc                          1634
     oracle_odbc                         20800
     solid_odbc                            877
     sybase_odbc                         17614

     *Inserting (350768) rows*         *Seconds*   *Seconds*
     mysql                                 381         206
     mysql_odbc                            619
     db2_odbc                             3460
     informix_odbc                        2692
     ms-sql_odbc                          4012
     oracle_odbc                         11291
     solid_odbc                           1801
     sybase_odbc                          4802

In the above test MySQL was run with an 8M index cache. We have gathered some more benchmark results at `http://www.mysql.com/information/benchmarks.html'. Note that Oracle is not included because they asked to be removed. All Oracle benchmarks have to be passed by Oracle! We believe that makes Oracle benchmarks *very* biased because the above benchmarks are supposed to show what a standard installation can do for a single client.

To run the benchmark suite, you have to download a MySQL source distribution, install the Perl DBI driver, the Perl DBD driver for the database you want to test, and then do:

     cd sql-bench
     perl run-all-tests --server=#

where # is one of the supported servers.
You can get a list of all options and supported servers by doing `run-all-tests --help'.

`crash-me' tries to determine what features a database supports and what its capabilities and limitations are by actually running queries. For example, it determines:

* What column types are supported

* How many indexes are supported

* What functions are supported

* How big a query can be

* How big a `VARCHAR' column can be

You can find the results from `crash-me' for a lot of different databases at `http://www.mysql.com/information/crash-me.php'.

Using Your Own Benchmarks
-------------------------

You should definitely benchmark your application and database to find out where the bottlenecks are. By fixing a bottleneck (or by replacing it with a 'dummy module') you can then easily identify the next bottleneck (and so on). Even if the overall performance for your application is sufficient, you should at least make a plan for each bottleneck, and decide how to solve it if someday you really need the extra performance. For an example of portable benchmark programs, look at the MySQL benchmark suite. *Note MySQL Benchmarks: MySQL Benchmarks. You can take any program from this suite and modify it for your needs. By doing this, you can try different solutions to your problem and test which is really the fastest solution for you.

It is very common that some problems only occur when the system is very heavily loaded. We have had many customers who contact us when they have a (tested) system in production and have encountered load problems. In every one of these cases so far, the problems have been with basic design (table scans are *not good* at high load) or OS/library issues. Most of this would be a *lot* easier to fix if the systems were not already in production. To avoid problems like this, you should put some effort into benchmarking your whole application under the worst possible load! You can use Super Smack for this; it is available at `http://www.mysql.com/Downloads/super-smack/super-smack-1.0.tar.gz'. As the name suggests, it can bring your system down to its knees if you ask it to, so make sure to use it only on your development systems.

Optimising `SELECT's and Other Queries
======================================

First, one thing that affects all queries: the more complex your permission setup is, the more overhead you get. If you do not have any `GRANT' statements done, MySQL will optimise the permission checking somewhat. So if you have a very high volume it may be worth the time to avoid grants. Otherwise, more permission checks result in more overhead.

If your problem is with some explicit MySQL function, you can always time this in the MySQL client:

     mysql> SELECT BENCHMARK(1000000,1+1);
     +------------------------+
     | BENCHMARK(1000000,1+1) |
     +------------------------+
     |                      0 |
     +------------------------+
     1 row in set (0.32 sec)

The above shows that MySQL can execute 1,000,000 `+' expressions in 0.32 seconds on a `PentiumII 400MHz'. All MySQL functions should be very optimised, but there may be some exceptions, and `BENCHMARK(loop_count,expression)' is a great tool to find out if this is a problem with your query.

`EXPLAIN' Syntax (Get Information About a `SELECT')
---------------------------------------------------

     EXPLAIN tbl_name
or
     EXPLAIN SELECT select_options

`EXPLAIN tbl_name' is a synonym for `DESCRIBE tbl_name' or `SHOW COLUMNS FROM tbl_name'.
When you precede a `SELECT' statement with the keyword `EXPLAIN', MySQL explains how it would process the `SELECT', providing information about how tables are joined and in which order. With the help of `EXPLAIN', you can see when you must add indexes to tables to get a faster `SELECT' that uses indexes to find the records. You should frequently run `ANALYZE TABLE' to update table statistics such as cardinality of keys which can affect the choices the optimiser makes. *Note ANALYZE TABLE::. You can also see if the optimiser joins the tables in an optimal order. To force the optimiser to use a specific join order for a `SELECT' statement, add a `STRAIGHT_JOIN' clause. For non-simple joins, `EXPLAIN' returns a row of information for each table used in the `SELECT' statement. The tables are listed in the order they would be read. MySQL resolves all joins using a single-sweep multi-join method. This means that MySQL reads a row from the first table, then finds a matching row in the second table, then in the third table and so on. When all tables are processed, it outputs the selected columns and backtracks through the table list until a table is found for which there are more matching rows. The next row is read from this table and the process continues with the next table. In MySQL version 4.1 the `EXPLAIN' output was changed to work better with constructs like `UNION's, subqueries and derived tables. Most notable is the addition of two new columns: `id' and `select_type'. Output from `EXPLAIN' consists of the following columns: `id' `SELECT' identifier, the sequential number of this `SELECT' within the query. `select_type' Type of `SELECT' clause, which can be any of the following: `SIMPLE' Simple `SELECT' (without `UNION's or subqueries). `PRIMARY' Outermost `SELECT'. `UNION' Second and further `UNION' `SELECT's. `DEPENDENT UNION' Second and further `UNION' `SELECTS's, dependent on outer subquery. `SUBSELECT' First `SELECT' in subquery. `DEPENDENT SUBSELECT' First `SELECT', dependent on outer subquery. `DERIVED' Derived table `SELECT'. `table' The table to which the row of output refers. `type' The join type. The different join types are listed here, ordered from best to worst type: `system' The table has only one row (= system table). This is a special case of the `const' join type. `const' The table has at most one matching row, which will be read at the start of the query. Because there is only one row, values from the column in this row can be regarded as constants by the rest of the optimiser. `const' tables are very fast as they are read only once! `eq_ref' One row will be read from this table for each combination of rows from the previous tables. This is the best possible join type, other than the `const' types. It is used when all parts of an index are used by the join and the index is `UNIQUE' or a `PRIMARY KEY'. `ref' All rows with matching index values will be read from this table for each combination of rows from the previous tables. `ref' is used if the join uses only a leftmost prefix of the key, or if the key is not `UNIQUE' or a `PRIMARY KEY' (in other words, if the join cannot select a single row based on the key value). If the key that is used matches only a few rows, this join type is good. `range' Only rows that are in a given range will be retrieved, using an index to select the rows. The `key' column indicates which index is used. The `key_len' contains the longest key part that was used. The `ref' column will be `NULL' for this type. 
`index'
     This is the same as `ALL', except that only the index tree is scanned. This is usually faster than `ALL', as the index file is usually smaller than the datafile.

`ALL'
     A full table scan will be done for each combination of rows from the previous tables. This is normally not good if the table is the first table not marked `const', and usually *very* bad in all other cases. You normally can avoid `ALL' by adding more indexes, so that the row can be retrieved based on constant values or column values from earlier tables.

`possible_keys'
     The `possible_keys' column indicates which indexes MySQL could use to find the rows in this table. Note that this column is totally independent of the order of the tables. That means that some of the keys in `possible_keys' may not be usable in practice with the generated table order.

     If this column is empty, there are no relevant indexes. In this case, you may be able to improve the performance of your query by examining the `WHERE' clause to see if it refers to some column or columns that would be suitable for indexing. If so, create an appropriate index and check the query with `EXPLAIN' again. *Note ALTER TABLE::.

     To see what indexes a table has, use `SHOW INDEX FROM tbl_name'.

`key'
     The `key' column indicates the key (index) that MySQL actually decided to use. The key is `NULL' if no index was chosen. To force MySQL to use a key listed in the `possible_keys' column, use `USE KEY/IGNORE KEY' in your query. *Note SELECT::.

     Also, running `myisamchk --analyze' (*note myisamchk syntax::) or `ANALYZE TABLE' (*note ANALYZE TABLE::) on the table will help the optimiser choose better indexes.

`key_len'
     The `key_len' column indicates the length of the key that MySQL decided to use. The length is `NULL' if the `key' is `NULL'. Note that this tells you how many parts of a multi-part key MySQL will actually use.

`ref'
     The `ref' column shows which columns or constants are used with the `key' to select rows from the table.

`rows'
     The `rows' column indicates the number of rows MySQL believes it must examine to execute the query.

`Extra'
     This column contains additional information about how MySQL will resolve the query. Here is an explanation of the different text strings that can be found in this column:

    `Distinct'
         MySQL will not continue searching for more rows for the current row combination after it has found the first matching row.

    `Not exists'
         MySQL was able to do a `LEFT JOIN' optimisation on the query and will not examine more rows in this table for the previous row combination after it finds one row that matches the `LEFT JOIN' criteria. Here is an example of this:

              SELECT * FROM t1 LEFT JOIN t2 ON t1.id=t2.id WHERE t2.id IS NULL;

         Assume that `t2.id' is defined with `NOT NULL'. In this case MySQL will scan `t1' and look up the rows in `t2' through `t1.id'. If MySQL finds a matching row in `t2', it knows that `t2.id' can never be `NULL', and will not scan through the rest of the rows in `t2' that have the same `id'. In other words, for each row in `t1', MySQL only needs to do a single lookup in `t2', independent of how many matching rows there are in `t2'.

    `range checked for each record (index map: #)'
         MySQL didn't find a really good index to use. Instead, for each row combination in the preceding tables, it will do a check on which index to use (if any), and use this index to retrieve the rows from the table. This isn't very fast, but is faster than having to do a join without an index.
    `Using filesort'
         MySQL will need to do an extra pass to find out how to retrieve the rows in sorted order. The sort is done by going through all rows according to the `join type' and storing the sort key + pointer to the row for all rows that match the `WHERE'. Then the keys are sorted. Finally the rows are retrieved in sorted order.

    `Using index'
         The column information is retrieved from the table using only information in the index tree without having to do an additional seek to read the actual row. This can be done when all the used columns for the table are part of the same index.

    `Using temporary'
         To resolve the query MySQL will need to create a temporary table to hold the result. This typically happens if you do an `ORDER BY' on a different column set than you did a `GROUP BY' on.

    `Using where'
         A `WHERE' clause will be used to restrict which rows will be matched against the next table or sent to the client. If you don't have this information and the table is of type `ALL' or `index', you may have something wrong in your query (if you don't intend to fetch/examine all rows from the table).

If you want to get your queries as fast as possible, you should look out for `Using filesort' and `Using temporary'.

You can get a good indication of how good a join is by multiplying all values in the `rows' column of the `EXPLAIN' output. This should tell you roughly how many rows MySQL must examine to execute the query. This number is also used when you restrict queries with the `max_join_size' variable. *Note Server parameters::.

The following example shows how a `JOIN' can be optimised progressively using the information provided by `EXPLAIN'.

Suppose you have the `SELECT' statement shown here, that you examine using `EXPLAIN':

     EXPLAIN SELECT tt.TicketNumber, tt.TimeIn,
                    tt.ProjectReference, tt.EstimatedShipDate,
                    tt.ActualShipDate, tt.ClientID,
                    tt.ServiceCodes, tt.RepetitiveID,
                    tt.CurrentProcess, tt.CurrentDPPerson,
                    tt.RecordVolume, tt.DPPrinted, et.COUNTRY,
                    et_1.COUNTRY, do.CUSTNAME
             FROM tt, et, et AS et_1, do
             WHERE tt.SubmitTime IS NULL
                 AND tt.ActualPC = et.EMPLOYID
                 AND tt.AssignedPC = et_1.EMPLOYID
                 AND tt.ClientID = do.CUSTNMBR;

For this example, assume that:

   * The columns being compared have been declared as follows:

          *Table*   *Column*      *Column type*
          `tt'      `ActualPC'    `CHAR(10)'
          `tt'      `AssignedPC'  `CHAR(10)'
          `tt'      `ClientID'    `CHAR(10)'
          `et'      `EMPLOYID'    `CHAR(15)'
          `do'      `CUSTNMBR'    `CHAR(15)'

   * The tables have the indexes shown here:

          *Table*   *Index*
          `tt'      `ActualPC'
          `tt'      `AssignedPC'
          `tt'      `ClientID'
          `et'      `EMPLOYID' (primary key)
          `do'      `CUSTNMBR' (primary key)

   * The `tt.ActualPC' values aren't evenly distributed.

Initially, before any optimisations have been performed, the `EXPLAIN' statement produces the following information:

     table type possible_keys                key  key_len ref  rows  Extra
     et    ALL  PRIMARY                      NULL NULL    NULL 74
     do    ALL  PRIMARY                      NULL NULL    NULL 2135
     et_1  ALL  PRIMARY                      NULL NULL    NULL 74
     tt    ALL  AssignedPC,ClientID,ActualPC NULL NULL    NULL 3872
           range checked for each record (key map: 35)

Because `type' is `ALL' for each table, this output indicates that MySQL is doing a full join for all tables! This will take quite a long time, as the product of the number of rows in each table must be examined! For the case at hand, this is `74 * 2135 * 74 * 3872 = 45,268,558,720' rows. If the tables were bigger, you can only imagine how long it would take.

One problem here is that MySQL can't (yet) use indexes on columns efficiently if they are declared differently.
In this context, `VARCHAR' and `CHAR' are the same unless they are declared as different lengths. Because `tt.ActualPC' is declared as `CHAR(10)' and `et.EMPLOYID' is declared as `CHAR(15)', there is a length mismatch.

To fix this disparity between column lengths, use `ALTER TABLE' to lengthen `ActualPC' from 10 characters to 15 characters:

     mysql> ALTER TABLE tt MODIFY ActualPC VARCHAR(15);

Now `tt.ActualPC' and `et.EMPLOYID' are both `VARCHAR(15)'. Executing the `EXPLAIN' statement again produces this result:

     table type   possible_keys                key     key_len ref         rows  Extra
     tt    ALL    AssignedPC,ClientID,ActualPC NULL    NULL    NULL        3872  Using where
     do    ALL    PRIMARY                      NULL    NULL    NULL        2135
           range checked for each record (key map: 1)
     et_1  ALL    PRIMARY                      NULL    NULL    NULL        74
           range checked for each record (key map: 1)
     et    eq_ref PRIMARY                      PRIMARY 15      tt.ActualPC 1

This is not perfect, but is much better (the product of the `rows' values is now less by a factor of 74). This version is executed in a couple of seconds.

A second alteration can be made to eliminate the column length mismatches for the `tt.AssignedPC = et_1.EMPLOYID' and `tt.ClientID = do.CUSTNMBR' comparisons:

     mysql> ALTER TABLE tt MODIFY AssignedPC VARCHAR(15),
         ->                MODIFY ClientID   VARCHAR(15);

Now `EXPLAIN' produces the output shown here:

     table type   possible_keys key      key_len ref           rows  Extra
     et    ALL    PRIMARY       NULL     NULL    NULL          74
     tt    ref    AssignedPC,   ActualPC 15      et.EMPLOYID   52    Using where
                  ClientID,
                  ActualPC
     et_1  eq_ref PRIMARY       PRIMARY  15      tt.AssignedPC 1
     do    eq_ref PRIMARY       PRIMARY  15      tt.ClientID   1

This is almost as good as it can get. The remaining problem is that, by default, MySQL assumes that values in the `tt.ActualPC' column are evenly distributed, and that isn't the case for the `tt' table. Fortunately, it is easy to tell MySQL about this:

     shell> myisamchk --analyze PATH_TO_MYSQL_DATABASE/tt
     shell> mysqladmin refresh

Now the join is perfect, and `EXPLAIN' produces this result:

     table type   possible_keys key     key_len ref           rows  Extra
     tt    ALL    AssignedPC,   NULL    NULL    NULL          3872  Using where
                  ClientID,
                  ActualPC
     et    eq_ref PRIMARY       PRIMARY 15      tt.ActualPC   1
     et_1  eq_ref PRIMARY       PRIMARY 15      tt.AssignedPC 1
     do    eq_ref PRIMARY       PRIMARY 15      tt.ClientID   1

Note that the `rows' column in the output from `EXPLAIN' is an educated guess from the MySQL join optimiser. To optimise a query, you should check whether the numbers are even close to the truth. If not, you may get better performance by using `STRAIGHT_JOIN' in your `SELECT' statement and trying to list the tables in a different order in the `FROM' clause.

Estimating Query Performance
----------------------------

In most cases you can estimate the performance by counting disk seeks. For small tables, you can usually find a row in one disk seek (as the index is probably cached). For bigger tables, you can estimate that (using B-tree indexes) you will need this many seeks to find a row:

     log(row_count) / log(index_block_length / 3 * 2 / (index_length + data_pointer_length)) + 1

In MySQL an index block is usually 1024 bytes and the data pointer is usually 4 bytes. A 500,000 row table with an index length of 3 (medium integer) gives you:

     log(500,000) / log(1024/3*2/(3+4)) + 1 = 4 seeks

As the above index would require about 500,000 * 7 * 3/2 = 5.2M (assuming that the index buffers are filled to 2/3, which is typical), you will probably have much of the index in memory, and you will probably need only 1-2 calls to read data from the OS to find the row.
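To plug real numbers into the formula above, you can look up the index size of a table with `SHOW TABLE STATUS' (a minimal sketch; `tbl_name' is a placeholder for your own table):

     mysql> SHOW TABLE STATUS LIKE 'tbl_name';

The `Index_length' column of the output reports the total size of the table's indexes in bytes, which gives a rough idea of how much of the index can fit in the index cache.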
For writes, however, you will need 4 seek requests (as above) to find where to place the new index value, and normally 2 seeks to update the index and write the row.

Note that the above doesn't mean that your application will slowly degenerate by log N! As long as everything is cached by the OS or SQL server, things will only get marginally slower as the table grows. After the data gets too big to be cached, things will start to go much slower until your application is only bound by disk seeks (which increase by log N). To avoid this, increase the index cache as the data grows. *Note Server parameters::.

Speed of `SELECT' Queries
-------------------------

In general, when you want to make a slow `SELECT ... WHERE' faster, the first thing to check is whether you can add an index. *Note MySQL indexes: MySQL indexes. All references between different tables should usually be done with indexes. You can use the `EXPLAIN' command to determine which indexes are used for a `SELECT'. *Note `EXPLAIN': EXPLAIN.

Some general tips:

   * To help MySQL optimise queries better, run `myisamchk --analyze' on a table after it has been loaded with relevant data. This updates a value for each index part that indicates the average number of rows that have the same value. (For unique indexes, this is always 1, of course.) MySQL will use this to decide which index to choose when you connect two tables with 'a non-constant expression'. You can check the result from the `analyze' run by doing `SHOW INDEX FROM table_name' and examining the `Cardinality' column.

   * To sort an index and data according to an index, use `myisamchk --sort-index --sort-records=1' (if you want to sort on index 1). If you have a unique index from which you want to read all records in order according to that index, this is a good way to make that faster. Note, however, that this sorting isn't written optimally and will take a long time for a large table!

How MySQL Optimises `WHERE' Clauses
-----------------------------------

The `WHERE' optimisations are put in the `SELECT' part here because they are mostly used with `SELECT', but the same optimisations apply for `WHERE' in `DELETE' and `UPDATE' statements.

Also note that this section is incomplete. MySQL does many optimisations, and we have not had time to document them all.

Some of the optimisations performed by MySQL are listed here:

   * Removal of unnecessary parentheses:

          ((a AND b) AND c OR (((a AND b) AND (c AND d))))
          -> (a AND b AND c) OR (a AND b AND c AND d)

   * Constant folding:

          (a<b AND b=c) AND a=5
          -> b>5 AND b=c AND a=5

   * Constant condition removal (needed because of constant folding):

          (B>=5 AND B=5) OR (B=6 AND 5=5) OR (B=7 AND 5=6)
          -> B=5 OR B=6

   * Constant expressions used by indexes are evaluated only once.

   * `COUNT(*)' on a single table without a `WHERE' is retrieved directly from the table information for `MyISAM' and `HEAP' tables. This is also done for any `NOT NULL' expression when used with only one table.

   * Early detection of invalid constant expressions. MySQL quickly detects that some `SELECT' statements are impossible and returns no rows.

   * `HAVING' is merged with `WHERE' if you don't use `GROUP BY' or group functions (`COUNT()', `MIN()'...).

   * For each sub-join, a simpler `WHERE' is constructed to get a fast `WHERE' evaluation for each sub-join and also to skip records as soon as possible.

   * All constant tables are read first, before any other tables in the query. A constant table is:

        - An empty table or a table with 1 row.
- A table that is used with a `WHERE' clause on a `UNIQUE' index, or a `PRIMARY KEY', where all index parts are used with constant expressions and the index parts are defined as `NOT NULL'. All the following tables are used as constant tables: mysql> SELECT * FROM t WHERE primary_key=1; mysql> SELECT * FROM t1,t2 -> WHERE t1.primary_key=1 AND t2.primary_key=t1.id; * The best join combination to join the tables is found by trying all possibilities. If all columns in `ORDER BY' and in `GROUP BY' come from the same table, then this table is preferred first when joining. * If there is an `ORDER BY' clause and a different `GROUP BY' clause, or if the `ORDER BY' or `GROUP BY' contains columns from tables other than the first table in the join queue, a temporary table is created. * If you use `SQL_SMALL_RESULT', MySQL will use an in-memory temporary table. * Each table index is queried, and the best index that spans fewer than 30% of the rows is used. If no such index can be found, a quick table scan is used. * In some cases, MySQL can read rows from the index without even consulting the datafile. If all columns used from the index are numeric, then only the index tree is used to resolve the query. * Before each record is output, those that do not match the `HAVING' clause are skipped. Some examples of queries that are very fast: mysql> SELECT COUNT(*) FROM tbl_name; mysql> SELECT MIN(key_part1),MAX(key_part1) FROM tbl_name; mysql> SELECT MAX(key_part2) FROM tbl_name -> WHERE key_part_1=constant; mysql> SELECT ... FROM tbl_name -> ORDER BY key_part1,key_part2,... LIMIT 10; mysql> SELECT ... FROM tbl_name -> ORDER BY key_part1 DESC,key_part2 DESC,... LIMIT 10; The following queries are resolved using only the index tree (assuming the indexed columns are numeric): mysql> SELECT key_part1,key_part2 FROM tbl_name WHERE key_part1=val; mysql> SELECT COUNT(*) FROM tbl_name -> WHERE key_part1=val1 AND key_part2=val2; mysql> SELECT key_part2 FROM tbl_name GROUP BY key_part1; The following queries use indexing to retrieve the rows in sorted order without a separate sorting pass: mysql> SELECT ... FROM tbl_name -> ORDER BY key_part1,key_part2,... ; mysql> SELECT ... FROM tbl_name -> ORDER BY key_part1 DESC,key_part2 DESC,... ; How MySQL Optimises `DISTINCT' ------------------------------ `DISTINCT' is converted to a `GROUP BY' on all columns, `DISTINCT' combined with `ORDER BY' will in many cases also need a temporary table. When combining `LIMIT #' with `DISTINCT', MySQL will stop as soon as it finds `#' unique rows. If you don't use columns from all used tables, MySQL will stop the scanning of the not used tables as soon as it has found the first match. SELECT DISTINCT t1.a FROM t1,t2 where t1.a=t2.a; In the case, assuming `t1' is used before `t2' (check with `EXPLAIN'), then MySQL will stop reading from `t2' (for that particular row in `t1') when the first row in `t2' is found. How MySQL Optimises `LEFT JOIN' and `RIGHT JOIN' ------------------------------------------------ `A LEFT JOIN B' in MySQL is implemented as follows: * The table `B' is set to be dependent on table `A' and all tables that `A' is dependent on. * The table `A' is set to be dependent on all tables (except `B') that are used in the `LEFT JOIN' condition. * All `LEFT JOIN' conditions are moved to the `WHERE' clause. * All standard join optimisations are done, with the exception that a table is always read after all tables it is dependent on. If there is a circular dependence then MySQL will issue an error. 
   * All standard `WHERE' optimisations are done.

   * If there is a row in `A' that matches the `WHERE' clause, but there wasn't any row in `B' that matched the `LEFT JOIN' condition, then an extra `B' row is generated with all columns set to `NULL'.

   * If you use `LEFT JOIN' to find rows that don't exist in some table and you have the test `column_name IS NULL' in the `WHERE' part, where `column_name' is a column that is declared as `NOT NULL', then MySQL will stop searching for more rows (for a particular key combination) after it has found one row that matches the `LEFT JOIN' condition.

`RIGHT JOIN' is implemented analogously to `LEFT JOIN'.

The table read order forced by `LEFT JOIN' and `STRAIGHT_JOIN' will help the join optimiser (which calculates in which order tables should be joined) do its work much more quickly, as there are fewer table permutations to check.

Note that the above means that if you do a query of this type:

     SELECT * FROM a,b LEFT JOIN c ON (c.key=a.key) LEFT JOIN d ON (d.key=a.key) WHERE b.key=d.key

MySQL will do a full scan on `b', as the `LEFT JOIN' will force it to be read before `d'. The fix in this case is to change the query to:

     SELECT * FROM b,a LEFT JOIN c ON (c.key=a.key) LEFT JOIN d ON (d.key=a.key) WHERE b.key=d.key

How MySQL Optimises `ORDER BY'
------------------------------

In some cases MySQL can use an index to satisfy an `ORDER BY' or `GROUP BY' request without doing any extra sorting.

The index can also be used even if the `ORDER BY' doesn't match the index exactly, as long as all the unused index parts and all the extra `ORDER BY' columns are constants in the `WHERE' clause. The following queries will use the index to resolve the `ORDER BY' / `GROUP BY' part:

     SELECT * FROM t1 ORDER BY key_part1,key_part2,...
     SELECT * FROM t1 WHERE key_part1=constant ORDER BY key_part2
     SELECT * FROM t1 WHERE key_part1=constant GROUP BY key_part2
     SELECT * FROM t1 ORDER BY key_part1 DESC,key_part2 DESC
     SELECT * FROM t1 WHERE key_part1=1 ORDER BY key_part1 DESC,key_part2 DESC

Some cases where MySQL can *not* use indexes to resolve the `ORDER BY' (note that MySQL will still use indexes to find the rows that match the `WHERE' clause):

   * You are doing an `ORDER BY' on different keys:
          SELECT * FROM t1 ORDER BY key1,key2

   * You are doing an `ORDER BY' using non-consecutive key parts:
          SELECT * FROM t1 WHERE key2=constant ORDER BY key_part2

   * You are mixing `ASC' and `DESC':
          SELECT * FROM t1 ORDER BY key_part1 DESC,key_part2 ASC

   * The key used to fetch the rows is not the same one that is used to do the `ORDER BY':
          SELECT * FROM t1 WHERE key2=constant ORDER BY key1

   * You are joining many tables and the columns you are doing an `ORDER BY' on are not all from the first not-`const' table that is used to retrieve rows. (This is the first table in the `EXPLAIN' output that doesn't use a `const' row fetch method.)

   * You have different `ORDER BY' and `GROUP BY' expressions.

   * The table index used is an index type that doesn't store rows in order (like the `HASH' index in `HEAP' tables).

In the cases where MySQL has to sort the result, it uses the following algorithm:

   * Read all rows according to key or by table scanning. Rows that don't match the `WHERE' clause are skipped.

   * Store the sort-key in a buffer (of size `sort_buffer').

   * When the buffer gets full, run a qsort on it and store the result in a temporary file. Save a pointer to the sorted block.
     (In the case where all rows fit into the sort buffer, no temporary file is created.)

   * Repeat the above until all rows have been read.

   * Do a multi-merge of up to `MERGEBUFF' (7) regions to one block in another temporary file. Repeat until all blocks from the first file are in the second file.

   * Repeat the following until there are fewer than `MERGEBUFF2' (15) blocks left.

   * On the last multi-merge, only the pointer to the row (the last part of the sort-key) is written to a result file.

   * Now the code in `sql/records.cc' will be used to read through the rows in sorted order by using the row pointers in the result file. To optimise this, we read in a big block of row pointers, sort them, and then read the rows in sorted order into a row buffer (`record_rnd_buffer').

With `EXPLAIN SELECT ... ORDER BY' you can check whether MySQL can use indexes to resolve the query. If you get `Using filesort' in the `extra' column, then MySQL can't use indexes to resolve the `ORDER BY'. *Note EXPLAIN::.

If you want to have a higher `ORDER BY' speed, you should first see if you can get MySQL to use indexes instead of having to do an extra sorting phase. If this is not possible, you can:

   * Increase the size of the `sort_buffer' variable.

   * Increase the size of the `record_rnd_buffer' variable.

   * Change `tmpdir' to point to a dedicated disk with lots of empty space. If you use MySQL 4.1 or later you can spread load between several physical disks by setting `tmpdir' to a list of paths separated by colon `:' (semicolon `;' on Windows). They will be used in round-robin fashion. *Note:* These paths should end up on different *physical* disks, not different partitions of the same disk.

By default MySQL sorts all `GROUP BY x,y[,...]' queries as if you had specified `ORDER BY x,y[,...]'. MySQL will optimise away such an `ORDER BY' without any speed penalty. If in some cases you don't want to have the result sorted, you can specify `ORDER BY NULL':

     INSERT INTO foo SELECT a,COUNT(*) FROM bar GROUP BY a ORDER BY NULL;

How MySQL Optimises `LIMIT'
---------------------------

In some cases MySQL will handle a query differently when you are using `LIMIT #' and not using `HAVING':

   * If you are selecting only a few rows with `LIMIT', MySQL will use indexes in some cases when it normally would prefer to do a full table scan.

   * If you use `LIMIT #' with `ORDER BY', MySQL will end the sorting as soon as it has found the first `#' rows instead of sorting the whole table.

   * When combining `LIMIT #' with `DISTINCT', MySQL will stop as soon as it finds `#' unique rows.

   * In some cases a `GROUP BY' can be resolved by reading the key in order (or doing a sort on the key) and then calculating summaries until the key value changes. In this case `LIMIT #' will not calculate any unnecessary `GROUP BY's.

   * As soon as MySQL has sent the first `#' rows to the client, it will abort the query (if you are not using `SQL_CALC_FOUND_ROWS').

   * `LIMIT 0' will always quickly return an empty set. This is useful to check the query and to get the column types of the result columns.

   * When the server uses temporary tables to resolve the query, `LIMIT #' is used to calculate how much space is required.

Speed of `INSERT' Queries
-------------------------

The time to insert a record consists approximately of:

   * Connect: (3)
   * Sending query to server: (2)
   * Parsing query: (2)
   * Inserting record: (1 x size of record)
   * Inserting indexes: (1 x number of indexes)
   * Close: (1)

where the numbers are somewhat proportional to the overall time.
This does not take into consideration the initial overhead to open tables (which is done once for each concurrently running query).

The size of the table slows down the insertion of indexes by log N (B-trees).

Some ways to speed up inserts:

   * If you are inserting many rows from the same client at the same time, use `INSERT' statements with multiple value lists. This is much faster (many times faster in some cases) than using separate `INSERT' statements. If you are adding data to a non-empty table, you may tune the `bulk_insert_buffer_size' variable to make it even faster. *Note `bulk_insert_buffer_size': SHOW VARIABLES.

   * If you are inserting a lot of rows from different clients, you can get higher speed by using the `INSERT DELAYED' statement. *Note `INSERT': INSERT.

   * Note that with `MyISAM' tables you can insert rows at the same time `SELECT's are running if there are no deleted rows in the tables.

   * When loading a table from a text file, use `LOAD DATA INFILE'. This is usually 20 times faster than using a lot of `INSERT' statements. *Note `LOAD DATA': LOAD DATA.

   * It is possible with some extra work to make `LOAD DATA INFILE' run even faster when the table has many indexes. Use the following procedure:

       1. Optionally create the table with `CREATE TABLE'. For example, using `mysql' or Perl-DBI.

       2. Execute a `FLUSH TABLES' statement or the shell command `mysqladmin flush-tables'.

       3. Use `myisamchk --keys-used=0 -rq /path/to/db/tbl_name'. This will remove all usage of all indexes from the table.

       4. Insert data into the table with `LOAD DATA INFILE'. This will not update any indexes and will therefore be very fast.

       5. If you are going to only read the table in the future, run `myisampack' on it to make it smaller. *Note Compressed format::.

       6. Re-create the indexes with `myisamchk -r -q /path/to/db/tbl_name'. This will create the index tree in memory before writing it to disk, which is much faster because it avoids lots of disk seeks. The resulting index tree is also perfectly balanced.

       7. Execute a `FLUSH TABLES' statement or the shell command `mysqladmin flush-tables'.

     Note that `LOAD DATA INFILE' also does the above optimisation if you insert into an empty table; the main difference from the above procedure is that you can let `myisamchk' allocate much more temporary memory for the index creation than you might want MySQL to allocate for every index re-creation.

     Since MySQL 4.0 you can also use `ALTER TABLE tbl_name DISABLE KEYS' instead of `myisamchk --keys-used=0 -rq /path/to/db/tbl_name' and `ALTER TABLE tbl_name ENABLE KEYS' instead of `myisamchk -r -q /path/to/db/tbl_name'. This way you can also skip the `FLUSH TABLES' steps.

   * You can speed up insertions that are done over multiple statements by locking your tables:

          mysql> LOCK TABLES a WRITE;
          mysql> INSERT INTO a VALUES (1,23),(2,34),(4,33);
          mysql> INSERT INTO a VALUES (8,26),(6,29);
          mysql> UNLOCK TABLES;

     The main speed difference is that the index buffer is flushed to disk only once, after all `INSERT' statements have completed. Normally there would be as many index buffer flushes as there are different `INSERT' statements. Locking is not needed if you can insert all rows with a single statement.

     For transactional tables, you should use `BEGIN/COMMIT' instead of `LOCK TABLES' to get a speedup.

     Locking will also lower the total time of multi-connection tests, but the maximum wait time for some threads will go up (because they wait for locks).
For example: thread 1 does 1000 inserts thread 2, 3, and 4 does 1 insert thread 5 does 1000 inserts If you don't use locking, 2, 3, and 4 will finish before 1 and 5. If you use locking, 2, 3, and 4 probably will not finish before 1 or 5, but the total time should be about 40% faster. As `INSERT', `UPDATE', and `DELETE' operations are very fast in MySQL, you will obtain better overall performance by adding locks around everything that does more than about 5 inserts or updates in a row. If you do very many inserts in a row, you could do a `LOCK TABLES' followed by an `UNLOCK TABLES' once in a while (about each 1000 rows) to allow other threads access to the table. This would still result in a nice performance gain. Of course, `LOAD DATA INFILE' is much faster for loading data. To get some more speed for both `LOAD DATA INFILE' and `INSERT', enlarge the key buffer. *Note Server parameters::. Speed of `UPDATE' Queries ------------------------- Update queries are optimised as a `SELECT' query with the additional overhead of a write. The speed of the write is dependent on the size of the data that is being updated and the number of indexes that are updated. Indexes that are not changed will not be updated. Also, another way to get fast updates is to delay updates and then do many updates in a row later. Doing many updates in a row is much quicker than doing one at a time if you lock the table. Note that, with dynamic record format, updating a record to a longer total length may split the record. So if you do this often, it is very important to `OPTIMIZE TABLE' sometimes. *Note `OPTIMIZE TABLE': OPTIMIZE TABLE. Speed of `DELETE' Queries ------------------------- If you want to delete all rows in the table, you should use `TRUNCATE TABLE table_name'. *Note TRUNCATE::. The time to delete a record is exactly proportional to the number of indexes. To delete records more quickly, you can increase the size of the index cache. *Note Server parameters::. Other Optimisation Tips ----------------------- Unsorted tips for faster systems: * Use persistent connections to the database to avoid the connection overhead. If you can't use persistent connections and you are doing a lot of new connections to the database, you may want to change the value of the `thread_cache_size' variable. *Note Server parameters::. * Always check that all your queries really use the indexes you have created in the tables. In MySQL you can do this with the `EXPLAIN' command. *Note Explain: (manual)EXPLAIN. * Try to avoid complex `SELECT' queries on `MyISAM' tables that are updated a lot. This is to avoid problems with table locking. * The new `MyISAM' tables can insert rows in a table without deleted rows at the same time another table is reading from it. If this is important for you, you should consider methods where you don't have to delete rows or run `OPTIMIZE TABLE' after you have deleted a lot of rows. * Use `ALTER TABLE ... ORDER BY expr1,expr2...' if you mostly retrieve rows in `expr1,expr2...' order. By using this option after big changes to the table, you may be able to get higher performance. * In some cases it may make sense to introduce a column that is 'hashed' based on information from other columns. If this column is short and reasonably unique it may be much faster than a big index on many columns. 
     In MySQL it's very easy to use this extra column:

          SELECT * FROM table_name WHERE hash=MD5(CONCAT(col1,col2)) AND col1='constant' AND col2='constant'

   * For tables that change a lot you should try to avoid all `VARCHAR' or `BLOB' columns. You will get dynamic row length as soon as you are using a single `VARCHAR' or `BLOB' column. *Note Table types::.

   * It's not normally useful to split a table into different tables just because the rows get 'big'. To access a row, the biggest performance hit is the disk seek to find the first byte of the row. After finding the data, most new disks can read the whole row fast enough for most applications. The only cases where it really matters to split up a table are if it's a dynamic row size table (see above) that you can change to a fixed row size, or if you very often need to scan the table but don't need most of the columns. *Note Table types::.

   * If you very often need to calculate things based on information from a lot of rows (like counts of things), it's probably much better to introduce a new table and update the counter in real time. An update of type `UPDATE table SET count=count+1 WHERE index_column=constant' is very fast! This is really important when you use MySQL table types like MyISAM and ISAM that have only table locking (multiple readers / single writer). This will also give better performance with most databases, as the row locking manager in this case will have less to do.

   * If you need to collect statistics from big log tables, use summary tables instead of scanning the whole table. Maintaining the summaries should be much faster than trying to do statistics 'live'. It's much faster to regenerate new summary tables from the logs when things change (depending on business decisions) than to have to change the running application!

   * If possible, you should classify reports as 'live' or 'statistical', where the data needed for statistical reports is generated only from summary tables that are generated from the actual data.

   * Take advantage of the fact that columns have default values. Insert values explicitly only when the value to be inserted differs from the default. This reduces the parsing that MySQL needs to do and improves the insert speed.

   * In some cases it's convenient to pack and store data in a blob. In this case you have to add some extra code in your application to pack/unpack things in the blob, but this may save a lot of accesses at some stage. This is practical when you have data that doesn't conform to a static table structure.

   * Normally you should try to keep all data non-redundant (what is called 3rd normal form in database theory), but you should not be afraid of duplicating things or creating summary tables if you need these to gain more speed.

   * Stored procedures or UDFs (user-defined functions) may be a good way to get more performance. In this case you should, however, always have a way to do this some other (slower) way if you use a database that doesn't support them.

   * You can always gain something by caching queries/answers in your application and trying to do many inserts/updates at the same time. If your database supports lock tables (like MySQL and Oracle), this should help to ensure that the index cache is only flushed once after all updates.

   * Use `INSERT /*! DELAYED */' when you do not need to know when your data is written. This speeds things up because many records can be written with a single disk write.

   * Use `INSERT /*! LOW_PRIORITY */' when you want your selects to be more important.
   * Use `SELECT /*! HIGH_PRIORITY */' to get selects that jump the queue. That is, the select is done even if there is somebody waiting to do a write.

   * Use the multi-line `INSERT' statement to store many rows with one SQL command (many SQL servers support this).

   * Use `LOAD DATA INFILE' to load bigger amounts of data. This is faster than normal inserts and will be even faster when `myisamchk' is integrated in `mysqld'.

   * Use `AUTO_INCREMENT' columns to make unique values.

   * Use `OPTIMIZE TABLE' once in a while to avoid fragmentation when using a dynamic table format. *Note `OPTIMIZE TABLE': OPTIMIZE TABLE.

   * Use `HEAP' tables to get more speed when possible. *Note Table types::.

   * When using a normal web server setup, images should be stored as files. That is, store only a file reference in the database. The main reason for this is that a normal web server is much better at caching files than database contents, so it's much easier to get a fast system if you are using files.

   * Use in-memory tables for non-critical data that is accessed often (like information about the last shown banner for users that don't have cookies).

   * Columns with identical information in different tables should be declared identically and have identical names. Before Version 3.23 you got slow joins otherwise. Try to keep the names simple (use `name' instead of `customer_name' in the customer table). To make your names portable to other SQL servers you should keep them shorter than 18 characters.

   * If you need really high speed, you should take a look at the low-level interfaces for data storage that the different SQL servers support! For example, by accessing the MySQL `MyISAM' tables directly, you could get a speed increase of 2-5 times compared to using the SQL interface. To be able to do this the data must be on the same server as the application, and usually it should only be accessed by one process (because external file locking is really slow). One could eliminate the above problems by introducing low-level `MyISAM' commands in the MySQL server (this could be one easy way to get more performance if needed). By carefully designing the database interface, it should be quite easy to support this type of optimisation.

   * In many cases it's faster to access data from a database (using a live connection) than to access a text file, just because the database is likely to be more compact than the text file (if you are using numerical data), and this will involve fewer disk accesses. You will also save code because you don't have to parse your text files to find line and column boundaries.

   * You can also use replication to speed things up. *Note Replication::.

   * Declaring a table with `DELAY_KEY_WRITE=1' will make the updating of indexes faster, as these are not flushed to disk until the file is closed. The downside is that you should run `myisamchk' on these tables before you start `mysqld' to ensure that they are okay if something killed `mysqld' in the middle. As the key information can always be generated from the data, you should not lose anything by using `DELAY_KEY_WRITE'.

Locking Issues
==============

How MySQL Locks Tables
----------------------

You can find a discussion of the different locking methods in the appendix. *Note Locking methods::.

All locking in MySQL is deadlock-free, except for `InnoDB' and `BDB' type tables. This is managed by always requesting all needed locks at once at the beginning of a query and always locking the tables in the same order.
`InnoDB' type tables automatically acquire their row locks and `BDB' type tables their page locks during the processing of SQL statements, not at the start of the transaction.

The locking method MySQL uses for `WRITE' locks works as follows:

   * If there are no locks on the table, put a write lock on it.
   * Otherwise, put the lock request in the write lock queue.

The locking method MySQL uses for `READ' locks works as follows:

   * If there are no write locks on the table, put a read lock on it.
   * Otherwise, put the lock request in the read lock queue.

When a lock is released, the lock is made available to the threads in the write lock queue first, then to the threads in the read lock queue.

This means that if you have many updates on a table, `SELECT' statements will wait until there are no more updates.

To work around this for the case where you want to do many `INSERT' and `SELECT' operations on a table, you can insert rows into a temporary table and update the real table with the records from the temporary table once in a while. This can be done with the following code:

     mysql> LOCK TABLES real_table WRITE, insert_table WRITE;
     mysql> INSERT INTO real_table SELECT * FROM insert_table;
     mysql> TRUNCATE TABLE insert_table;
     mysql> UNLOCK TABLES;

You can use the `LOW_PRIORITY' option with `INSERT', `UPDATE', or `DELETE', or `HIGH_PRIORITY' with `SELECT', if you want to prioritise retrieval in some specific cases. You can also start `mysqld' with `--low-priority-updates' to get the same behaviour.

Using `SQL_BUFFER_RESULT' can also help make table locks shorter. *Note SELECT::.

You could also change the locking code in `mysys/thr_lock.c' to use a single queue. In this case, write locks and read locks would have the same priority, which might help some applications.

Table Locking Issues
--------------------

The table locking code in MySQL is deadlock-free.

MySQL uses table locking (instead of row locking or column locking) on all table types, except `InnoDB' and `BDB' tables, to achieve a very high lock speed. For large tables, table locking is much better than row locking for most applications, but there are, of course, some pitfalls.

For `InnoDB' and `BDB' tables, MySQL only uses table locking if you explicitly lock the table with `LOCK TABLES'. For these table types we recommend that you not use `LOCK TABLES' at all, because `InnoDB' uses automatic row-level locking and `BDB' uses page-level locking to ensure transaction isolation.

In MySQL Version 3.23.7 and above, you can insert rows into `MyISAM' tables at the same time other threads are reading from the table. Note that currently this only works if there are no holes after deleted rows in the table at the time the insert is made. When all holes have been filled with new data, concurrent inserts will automatically be enabled again.

Table locking enables many threads to read from a table at the same time, but if a thread wants to write to a table, it must first get exclusive access. During the update, all other threads that want to access this particular table will wait until the update is done.

As updates on tables normally are considered to be more important than `SELECT's, all statements that update a table have higher priority than statements that retrieve information from a table. This should ensure that updates are not 'starved' because someone issues a lot of heavy queries against a specific table. (You can change this by using `LOW_PRIORITY' with the statement that does the update or `HIGH_PRIORITY' with the `SELECT' statement.)
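For example (a minimal sketch; `log_table' is a hypothetical table), the following statements let a retrieval jump ahead of pending updates, and let an update yield to pending retrievals:

     mysql> SELECT HIGH_PRIORITY * FROM log_table WHERE user_id=42;
     mysql> UPDATE LOW_PRIORITY log_table SET visits=visits+1 WHERE user_id=42;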
Starting from MySQL Version 3.23.7, you can use the `max_write_lock_count' variable to force MySQL to temporarily give all `SELECT' statements that are waiting for a table a higher priority after a specific number of inserts on the table.

Table locking is, however, not very good under the following scenario:

   * A client issues a `SELECT' that takes a long time to run.

   * Another client then issues an `UPDATE' on the same table. This client will wait until the `SELECT' is finished.

   * Another client issues another `SELECT' statement on the same table. As `UPDATE' has higher priority than `SELECT', this `SELECT' will wait for the `UPDATE' to finish. It will also wait for the first `SELECT' to finish!

   * A thread is waiting for something like `full disk', in which case all threads that want to access the problem table will also be put in a waiting state until more disk space is made available.

Some possible solutions to this problem are:

   * Try to get the `SELECT' statements to run faster. You may have to create some summary tables to do this.

   * Start `mysqld' with `--low-priority-updates'. This will give all statements that update (modify) a table lower priority than a `SELECT' statement. In this case the last `SELECT' statement in the previous scenario would execute before the `UPDATE' statement.

   * You can give a specific `INSERT', `UPDATE', or `DELETE' statement lower priority with the `LOW_PRIORITY' attribute.

   * Start `mysqld' with a low value for `max_write_lock_count' to give `READ' locks after a certain number of `WRITE' locks.

   * You can specify that all updates from a specific thread should be done with low priority by using the SQL command `SET LOW_PRIORITY_UPDATES=1'. *Note `SET': SET OPTION.

   * You can specify that a specific `SELECT' is very important with the `HIGH_PRIORITY' attribute. *Note `SELECT': SELECT.

   * If you have problems with `INSERT' combined with `SELECT', switch to the new `MyISAM' tables, as these support concurrent `SELECT's and `INSERT's.

   * If you mainly mix `INSERT' and `SELECT' statements, the `DELAYED' attribute to `INSERT' will probably solve your problems. *Note `INSERT': INSERT.

   * If you have problems with `SELECT' and `DELETE', the `LIMIT' option to `DELETE' may help. *Note `DELETE': DELETE.

Optimising Database Structure
=============================

Design Choices
--------------

MySQL keeps row data and index data in separate files. Many (almost all) other databases mix row and index data in the same file. We believe that the MySQL choice is better for a very wide range of modern systems.

Another way to store the row data is to keep the information for each column in a separate area (examples are SDBM and Focus). This causes a performance hit for every query that accesses more than one column. Because this degenerates so quickly when more than one column is accessed, we believe that this model is not good for general-purpose databases.

The more common case is that the index and data are stored together (as in Oracle/Sybase et al). In this case you will find the row information in the leaf page of the index. The good thing with this layout is that it, in many cases, depending on how well the index is cached, saves a disk read. The bad things with this layout are:

   * Table scanning is much slower because you have to read through the indexes to get at the data.

   * You can't use only the index table to retrieve data for a query.

   * You lose a lot of space, as you must duplicate indexes from the nodes (as you can't store the row in the nodes).
* Deletes will degenerate the table over time (as indexes in nodes are usually not updated on delete). * It's harder to cache only the index data. Get Your Data as Small as Possible ---------------------------------- One of the most basic optimisation is to get your data (and indexes) to take as little space on the disk (and in memory) as possible. This can give huge improvements because disk reads are faster and normally less main memory will be used. Indexing also takes less resources if done on smaller columns. MySQL supports a lot of different table types and row formats. Choosing the right table format may give you a big performance gain. *Note Table types::. You can get better performance on a table and minimise storage space using the techniques listed here: * Use the most efficient (smallest) types possible. MySQL has many specialised types that save disk space and memory. * Use the smaller integer types if possible to get smaller tables. For example, `MEDIUMINT' is often better than `INT'. * Declare columns to be `NOT NULL' if possible. It makes everything faster and you save one bit per column. Note that if you really need `NULL' in your application you should definitely use it. Just avoid having it on all columns by default. * If you don't have any variable-length columns (`VARCHAR', `TEXT', or `BLOB' columns), a fixed-size record format is used. This is faster but unfortunately may waste some space. *Note `MyISAM' table formats: MyISAM table formats. * The primary index of a table should be as short as possible. This makes identification of one row easy and efficient. * For each table, you have to decide which storage/index method to use. *Note Table types::. * Only create the indexes that you really need. Indexes are good for retrieval but bad when you need to store things fast. If you mostly access a table by searching on a combination of columns, make an index on them. The first index part should be the most used column. If you are *always* using many columns, you should use the column with more duplicates first to get better compression of the index. * If it's very likely that a column has a unique prefix on the first number of characters, it's better to only index this prefix. MySQL supports an index on a part of a character column. Shorter indexes are faster not only because they take less disk space but also because they will give you more hits in the index cache and thus fewer disk seeks. *Note Server parameters::. * In some circumstances it can be beneficial to split into two a table that is scanned very often. This is especially true if it is a dynamic format table and it is possible to use a smaller static format table that can be used to find the relevant rows when scanning the table. How MySQL Uses Indexes ---------------------- Indexes are used to find rows with a specific value of one column fast. Without an index MySQL has to start with the first record and then read through the whole table until it finds the relevant rows. The bigger the table, the more this costs. If the table has an index for the columns in question, MySQL can quickly get a position to seek to in the middle of the datafile without having to look at all the data. If a table has 1000 rows, this is at least 100 times faster than reading sequentially. Note that if you need to access almost all 1000 rows it is faster to read sequentially because we then avoid disk seeks. All MySQL indexes (`PRIMARY', `UNIQUE', and `INDEX') are stored in B-trees. 
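As an illustration of the point above (a minimal sketch; the table and column names are hypothetical), adding an index on the column used in the `WHERE' clause lets MySQL seek directly to the matching rows instead of scanning the whole table, which you can verify with `EXPLAIN':

     mysql> CREATE INDEX idx_last_name ON customer (last_name);
     mysql> EXPLAIN SELECT * FROM customer WHERE last_name='Smith';

After the index is created, the `type' column in the `EXPLAIN' output should typically change from `ALL' to `ref' for this query.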
Strings are automatically prefix- and end-space compressed. *Note `CREATE INDEX': CREATE INDEX. Indexes are used to: * Quickly find the rows that match a `WHERE' clause. * Retrieve rows from other tables when performing joins. * Find the `MAX()' or `MIN()' value for a specific indexed column. This is optimised by a preprocessor that checks if you are using `WHERE' key_part_# = constant on all key parts < N. In this case MySQL will do a single key lookup and replace the `MIN()' expression with a constant. If all expressions are replaced with constants, the query will return at once: SELECT MIN(key_part2),MAX(key_part2) FROM table_name where key_part1=10 * Sort or group a table if the sorting or grouping is done on a leftmost prefix of a usable key (for example, `ORDER BY key_part_1,key_part_2 '). The key is read in reverse order if all key parts are followed by `DESC'. *Note ORDER BY optimisation::. * In some cases a query can be optimised to retrieve values without consulting the datafile. If all used columns for some table are numeric and form a leftmost prefix for some key, the values may be retrieved from the index tree for greater speed: SELECT key_part3 FROM table_name WHERE key_part1=1 Suppose you issue the following `SELECT' statement: mysql> SELECT * FROM tbl_name WHERE col1=val1 AND col2=val2; If a multiple-column index exists on `col1' and `col2', the appropriate rows can be fetched directly. If separate single-column indexes exist on `col1' and `col2', the optimiser tries to find the most restrictive index by deciding which index will find fewer rows and using that index to fetch the rows. If the table has a multiple-column index, any leftmost prefix of the index can be used by the optimiser to find rows. For example, if you have a three-column index on `(col1,col2,col3)', you have indexed search capabilities on `(col1)', `(col1,col2)', and `(col1,col2,col3)'. MySQL can't use a partial index if the columns don't form a leftmost prefix of the index. Suppose you have the `SELECT' statements shown here: mysql> SELECT * FROM tbl_name WHERE col1=val1; mysql> SELECT * FROM tbl_name WHERE col2=val2; mysql> SELECT * FROM tbl_name WHERE col2=val2 AND col3=val3; If an index exists on `(col1,col2,col3)', only the first query shown above uses the index. The second and third queries do involve indexed columns, but `(col2)' and `(col2,col3)' are not leftmost prefixes of `(col1,col2,col3)'. MySQL also uses indexes for `LIKE' comparisons if the argument to `LIKE' is a constant string that doesn't start with a wildcard character. For example, the following `SELECT' statements use indexes: mysql> SELECT * FROM tbl_name WHERE key_col LIKE "Patrick%"; mysql> SELECT * FROM tbl_name WHERE key_col LIKE "Pat%_ck%"; In the first statement, only rows with `"Patrick" <= key_col < "Patricl"' are considered. In the second statement, only rows with `"Pat" <= key_col < "Pau"' are considered. The following `SELECT' statements will not use indexes: mysql> SELECT * FROM tbl_name WHERE key_col LIKE "%Patrick%"; mysql> SELECT * FROM tbl_name WHERE key_col LIKE other_col; In the first statement, the `LIKE' value begins with a wildcard character. In the second statement, the `LIKE' value is not a constant. MySQL 4.0 does another optimisation on `LIKE'. If you use `... LIKE "%string%"' and `string' is longer than 3 characters, MySQL will use the `Turbo Boyer-Moore' algorithm to initialise the pattern for the string and then use this pattern to perform the search quicker. 
Searching using `column_name IS NULL' will use indexes if column_name is an index. MySQL normally uses the index that finds the least number of rows. An index is used for columns that you compare with the following operators: `=', `>', `>=', `<', `<=', `BETWEEN', and a `LIKE' with a non-wildcard prefix like `'something%''. Any index that doesn't span all `AND' levels in the `WHERE' clause is not used to optimise the query. In other words: To be able to use an index, a prefix of the index must be used in every `AN