A program I wrote recently needed to connect to a DB2 database. Setting up the environment took a few wrong turns and close to two days of reading documentation, so I am recording the working steps here.

Environment: CentOS Linux 5.x, 64-bit, with PHP 5.3 and Python 2.6 installed.

1) Install the IBM Data Server Driver Package (DS Driver)

Download page: https://www-304.ibm.com/support/docview.wss?uid=swg27016878

The package I downloaded is: v10.5fp3_linuxx64_dsdriver.tar.gz

tar xvzf v10.5fp3_linuxx64_dsdriver.tar.gz
mkdir -p /opt/ibm
cp -r dsdriver /opt/ibm/
cd /opt/ibm/dsdriver
./installDSDriver
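After the installer finishes, the driver needs its environment variables set before PHP/Python DB2 extensions can find the client libraries. A minimal sketch, assuming installDSDriver generated a `db2profile` script in the driver directory (check the installer's output for the exact filename):

```shell
# Add to ~/.bash_profile (assumed path: installDSDriver writes a
# db2profile script into /opt/ibm/dsdriver; verify in the install log):
. /opt/ibm/dsdriver/db2profile
```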


MySQL 5.5.23 installation:

1. Install the required build dependencies

yum install gcc-c++ gperf ncurses-devel readline-devel libaio-devel bison openssl-devel cmake

yum groupinstall "Development tools"

 

2. MySQL 5.5 and later are built with cmake, so install cmake (on CentOS 6, or if the yum install above already provided cmake, this step can be skipped):

wget http://www.cmake.org/files/v2.8/cmake-2.8.8.tar.gz
tar xzf cmake-2.8.8.tar.gz
cd cmake-2.8.8

./configure

make && make install

3. Build MySQL:

After downloading and extracting the source:

cd mysql-5.5.23

cmake -DCMAKE_INSTALL_PREFIX=/data1/app/services/mysql5_5 -DWITH_MYISAM_STORAGE_ENGINE=1 -DWITH_INNOBASE_STORAGE_ENGINE=1 -DWITH_ARCHIVE_STORAGE_ENGINE=1 -DWITH_BLACKHOLE_STORAGE_ENGINE=1 -DWITH_MEMORY_STORAGE_ENGINE=1 -DWITH_FEDERATED_STORAGE_ENGINE=1 -DWITH_PARTITION_STORAGE_ENGINE=1 -DWITH_READLINE=1 -DWITH_SSL=system -DWITH_ZLIB=system -DENABLED_LOCAL_INFILE=1 -DDEFAULT_CHARSET=utf8 -DDEFAULT_COLLATION=utf8_general_ci -DEXTRA_CHARSETS=all -DMYSQL_TCP_PORT=3306 -DMYSQL_USER=mysql

make && make install

cd /data1/app/services/mysql5_5
scripts/mysql_install_db --user=mysql --datadir=data

chown -R root  .
chown -R mysql data
chgrp -R mysql .
cp support-files/mysql.server /etc/init.d/mysql55
chmod 755 /etc/init.d/mysql55
chkconfig --level 345 mysql55 on

 

Start it:

service mysql55 start

Set the root password:

mysql -h 127.0.0.1 

GRANT ALL PRIVILEGES ON *.* TO root@"%" IDENTIFIED BY 'password' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO root@"127.0.0.1" IDENTIFIED BY 'password' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO root@"localhost" IDENTIFIED BY 'password' WITH GRANT OPTION;
flush privileges;

 

 

Below is a sample my.cnf for MySQL 5.5:

[client]
port            = 3306
default-character-set = utf8
socket          = /data1/app/tmp/mysql.sock

[mysqld]
datadir = /data1/app/services/mysql5_5/data
socket = /data1/app/tmp/mysql.sock
pid-file = /data1/app/tmp/mysql.pid
user = mysql
port = 3306

character-set-server = utf8
collation-server = utf8_general_ci
#default-storage-engine=INNODB

skip-name-resolve
skip-external-locking
key_buffer = 128M
max_allowed_packet = 4M
table_cache = 32512
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 2M
myisam_sort_buffer_size = 8M
thread_cache_size = 32
query_cache_size = 256M
tmp_table_size = 128M

# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 16
lower_case_table_names = 1
back_log = 500
max_connect_errors = 1000
max_connections = 1600

slow_query_log_file = /data1/app/log/mysql/mysql-slow.log
slow_query_log = 1
log-queries-not-using-indexes
long_query_time = 1
wait_timeout = 86400

server-id = 1
log-bin = mysql-bin
expire-logs-days = 7
log-error = mysql-bin.err
binlog_do_db = flowershop
sync-binlog = 0

innodb_buffer_pool_size = 24G
innodb_additional_mem_pool_size = 128M
innodb_file_per_table
innodb_open_files = 65535
innodb_lock_wait_timeout = 10
innodb_rollback_on_timeout = on
innodb_log_file_size = 1G
innodb_log_buffer_size = 256M
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_data_file_path = ibdata1:20M:autoextend
#innodb_file_io_threads = 8
innodb_read_io_threads = 8
innodb_write_io_threads = 8
#innodb_force_recovery = 1
innodb_thread_concurrency = 30
innodb_log_files_in_group = 3
innodb_max_dirty_pages_pct = 80
innodb_file_format = Barracuda
innodb_file_format_max = Barracuda
innodb_io_capacity = 400
innodb_use_native_aio = 1

#binlog_format = mixed
#transaction-isolation = READ-UNCOMMITTED

[mysqld_safe]
log-error=/data1/app/log/mysql/mysql.log
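A quick back-of-envelope check of the memory this configuration can consume (a sketch only; real usage depends on the workload, and the per-connection buffers are not all allocated at full size up front):

```shell
# Values copied from the my.cnf above, in MB.
awk 'BEGIN {
  global   = 128 + 256 + 24*1024 + 256   # key_buffer + query_cache + innodb_buffer_pool + innodb_log_buffer
  per_conn = 2 + 2 + 2 + 8               # sort + read + read_rnd + myisam_sort buffers
  printf "global ~%d MB, worst case per-connection ~%d MB x 1600 connections\n", global, per_conn
}'
```

With max_connections = 1600, the worst-case per-connection buffers alone could add over 20 GB on top of the 24 GB buffer pool, so keep an eye on max_connections on a box sized like this.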

To have the verticadb database start at system boot, add the following line to /etc/rc.local:

su - dbadmin -c '/opt/vertica/bin/adminTools -t start_db -d dbname -p password -i'

 

The output looks something like this:

su - dbadmin -c '/opt/vertica/bin/adminTools -t start_db -d verticadb -p password -i'

        Node Status: v_verticadb_node0001: (DOWN)
        Node Status: v_verticadb_node0001: (DOWN)
        Node Status: v_verticadb_node0001: (DOWN)
        Node Status: v_verticadb_node0001: (DOWN)
        Node Status: v_verticadb_node0001: (DOWN)
        Node Status: v_verticadb_node0001: (UP)

 

Reprinted from: http://www.fromdual.com/mysql-handler-read-status-variables

Because I do a lot of performance-tuning gigs, I often come into contact with these status variables. In the beginning I had trouble understanding them, and now I have trouble memorizing which name maps to which meaning. So I wrote this little summary:
Prepare the example

To show you the effect I have worked out a little example:

CREATE TABLE test (
    id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
  , data  VARCHAR(32)
  , ts    TIMESTAMP
  , INDEX (data)
);

INSERT INTO test
VALUES (NULL, 'abc', NOW()), (NULL, 'abc', NOW()), (NULL, 'abd', NOW()), (NULL, 'acd', NOW()), (NULL, 'def', NOW())
     , (NULL, 'pqr', NOW()), (NULL, 'stu', NOW()), (NULL, 'vwx', NOW()), (NULL, 'yza', NOW()), (NULL, 'def', NOW())
;

SELECT * FROM test;
+----+------+---------------------+
| id | data | ts                  |
+----+------+---------------------+
|  1 | abc  | 2008-01-18 16:28:40 |
|  2 | abc  | 2008-01-18 16:28:40 |
|  3 | abd  | 2008-01-18 16:28:40 |
|  4 | acd  | 2008-01-18 16:28:40 |
|  5 | def  | 2008-01-18 16:28:40 |
|  6 | pqr  | 2008-01-18 16:28:40 |
|  7 | stu  | 2008-01-18 16:28:40 |
|  8 | vwx  | 2008-01-18 16:28:40 |
|  9 | yza  | 2008-01-18 16:28:40 |
| 10 | def  | 2008-01-18 16:28:40 |
+----+------+---------------------+

To see the effect of a query do the following steps:

    FLUSH STATUS;
    Execute the query
    SHOW SESSION STATUS LIKE 'handler_read%';
    Do an EXPLAIN of the query
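The four steps above can be wrapped in a small helper (a sketch; it assumes a local server, the `mysql` client on the PATH, and the `test` schema from above; adjust host and credentials to taste). Running everything in one client invocation keeps it in one session, which matters because the counters are per-session:

```shell
# probe QUERY: reset the session counters, run the query, then show
# the handler counters and the execution plan, all in one session.
probe() {
  mysql -h 127.0.0.1 test -e "
    FLUSH STATUS;
    $1;
    SHOW SESSION STATUS LIKE 'Handler_read%';
    EXPLAIN $1;"
}
# Example: probe "SELECT data FROM test"
```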

Handler_read_first

The number of times the first entry was read from an index. If this value is high, it suggests that the server is doing a lot of full index scans.

+-------------+          +---+---+
| Table       |          | In|ex |
|             |          |   |   |
|             |          |   |   |
|             |          |   |   |
|             |          |   |   |
|             |          |   v   |
|             |          |       |
|             |          |       |
+-------------+          +-------+

SELECT data FROM test;
10 rows in set

+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| Handler_read_first    | 1     |
| Handler_read_key      | 0     |
| Handler_read_next     | 10    |
+-----------------------+-------+

EXPLAIN SELECT data FROM test;
+----+-------------+-------+-------+---------------+------+---------+------+------+-------------+
| id | select_type | table | type  | possible_keys | key  | key_len | ref  | rows | Extra       |
+----+-------------+-------+-------+---------------+------+---------+------+------+-------------+
|  1 | SIMPLE      | test  | index | NULL          | data | 35      | NULL |   10 | Using index |
+----+-------------+-------+-------+---------------+------+---------+------+------+-------------+

So basically we can say that we had 1 full index scan, which did 10+1 index fetches.

Let us do some more examples

SELECT data FROM test WHERE data BETWEEN 'A' AND 'O';
6 rows in set

+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| Handler_read_first    | 0     |
| Handler_read_key      | 1     |
| Handler_read_next     | 6     |
+-----------------------+-------+

+----+-------------+-------+-------+---------------+------+---------+------+------+--------------------------+
| id | select_type | table | type  | possible_keys | key  | key_len | ref  | rows | Extra                    |
+----+-------------+-------+-------+---------------+------+---------+------+------+--------------------------+
|  1 | SIMPLE      | test  | range | data          | data | 35      | NULL |    5 | Using where; Using index |
+----+-------------+-------+-------+---------------+------+---------+------+------+--------------------------+

Here it seems the query does not start with Handler_read_first, even though it theoretically could. Instead we get a Handler_read_key. We can also see the "wrong" row estimate of the optimizer in the execution plan.

With this example the query really could start from the beginning…

SELECT data FROM test WHERE data < 'O';
6 rows in set

+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| Handler_read_first    | 0     |
| Handler_read_key      | 1     |
| Handler_read_next     | 6     |
+-----------------------+-------+

+----+-------------+-------+-------+---------------+------+---------+------+------+--------------------------+
| id | select_type | table | type  | possible_keys | key  | key_len | ref  | rows | Extra                    |
+----+-------------+-------+-------+---------------+------+---------+------+------+--------------------------+
|  1 | SIMPLE      | test  | range | data          | data | 35      | NULL |    5 | Using where; Using index |
+----+-------------+-------+-------+---------------+------+---------+------+------+--------------------------+

But it does not!

The same for this query:

SELECT data FROM test WHERE data LIKE 'a%';
4 rows in set

+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| Handler_read_first    | 0     |
| Handler_read_key      | 1     |
| Handler_read_next     | 4     |
+-----------------------+-------+

+----+-------------+-------+-------+---------------+------+---------+------+------+--------------------------+
| id | select_type | table | type  | possible_keys | key  | key_len | ref  | rows | Extra                    |
+----+-------------+-------+-------+---------------+------+---------+------+------+--------------------------+
|  1 | SIMPLE      | test  | range | data          | data | 35      | NULL |    4 | Using where; Using index |
+----+-------------+-------+-------+---------------+------+---------+------+------+--------------------------+

And this query does something completely different:

SELECT data FROM test WHERE data IN ('abc', 'abd', 'acd');
4 rows in set

+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| Handler_read_first    | 0     |
| Handler_read_key      | 3     |
| Handler_read_next     | 4     |
+-----------------------+-------+

+----+-------------+-------+-------+---------------+------+---------+------+------+--------------------------+
| id | select_type | table | type  | possible_keys | key  | key_len | ref  | rows | Extra                    |
+----+-------------+-------+-------+---------------+------+---------+------+------+--------------------------+
|  1 | SIMPLE      | test  | range | data          | data | 35      | NULL |    4 | Using where; Using index |
+----+-------------+-------+-------+---------------+------+---------+------+------+--------------------------+

I was not able to get any Handler_read_first count other than by a real full index scan. So I would say that Handler_read_first is equivalent to the number of full index scans.

A full index scan is better than a full table scan but still not good because they burn a lot of CPU cycles. But sometimes you cannot avoid it…
Handler_read_key

The number of requests to read a row based on a key. If this value is high, it is a good indication that your tables are properly indexed for your queries.

See also the examples in the previous chapter.

+-------------+          +-------+
| Table       |          | Index |
|             | <------  |       | <--+
|             |          |       |
|             |          |       |
|             |          |       |
|             |          |       |
|             |          |       |
|             |          |       |
+-------------+          +-------+

SELECT data FROM test where data = 'abc';
2 rows in set

+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| Handler_read_first    | 0     |
| Handler_read_key      | 1     |
| Handler_read_next     | 2     |
+-----------------------+-------+

+----+-------------+-------+------+---------------+------+---------+-------+------+--------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref   | rows | Extra                    |
+----+-------------+-------+------+---------------+------+---------+-------+------+--------------------------+
|  1 | SIMPLE      | test  | ref  | data          | data | 35      | const |    2 | Using where; Using index |
+----+-------------+-------+------+---------------+------+---------+-------+------+--------------------------+

What makes me wonder in this example (and also in the previous one) is that, based on the query, there is IMHO no reason to access the table (row) at all…

SELECT * FROM test where data = 'pqr';
1 row in set

+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| Handler_read_first    | 0     |
| Handler_read_key      | 1     |
| Handler_read_next     | 1     |
+-----------------------+-------+

+----+-------------+-------+------+---------------+------+---------+-------+------+-------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref   | rows | Extra       |
+----+-------------+-------+------+---------------+------+---------+-------+------+-------------+
|  1 | SIMPLE      | test  | ref  | data          | data | 35      | const |    1 | Using where |
+----+-------------+-------+------+---------------+------+---------+-------+------+-------------+

In this example it clearly makes sense…!
Handler_read_next

The number of requests to read the next row in key order. This value is incremented if you are querying an index column with a range constraint or if you are doing an index scan.

See also the examples in the previous chapters.

+-------------+          +-------+
| Table       |          | Index |
|             |          |       |
|             | <------  |   +   |
|             | <------  |   |   |
|             | <------  |   v   |
|             |          |       |
|             |          |       |
|             |          |       |
+-------------+          +-------+

Handler_read_prev

The number of requests to read the previous row in key order. This read method is mainly used to optimize ORDER BY … DESC.

+-------------+          +-------+
| Table       |          | Index |
|             |          |       |
|             | <------  |   ^   |
|             | <------  |   |   |
|             | <------  |   +   |
|             |          |       |
|             |          |       |
|             |          |       |
+-------------+          +-------+

SELECT data FROM test ORDER BY data DESC;
10 rows in set

+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| Handler_read_first    | 0     |
| Handler_read_key      | 0     |
| Handler_read_next     | 0     |
| Handler_read_prev     | 10    |
+-----------------------+-------+

+----+-------------+-------+-------+---------------+------+---------+------+------+-------------+
| id | select_type | table | type  | possible_keys | key  | key_len | ref  | rows | Extra       |
+----+-------------+-------+-------+---------------+------+---------+------+------+-------------+
|  1 | SIMPLE      | test  | index | NULL          | data | 35      | NULL |   10 | Using index |
+----+-------------+-------+-------+---------------+------+---------+------+------+-------------+

There is no Handler_read_last status implemented, although there could be one according to the HANDLER functions [1].

SELECT * FROM test where data between 'A' and 'B' ORDER BY data DESC;
4 rows in set

+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| Handler_read_first    | 0     |
| Handler_read_key      | 1     |
| Handler_read_next     | 0     |
| Handler_read_prev     | 4     |
+-----------------------+-------+

+----+-------------+-------+-------+---------------+------+---------+------+------+-------------+
| id | select_type | table | type  | possible_keys | key  | key_len | ref  | rows | Extra       |
+----+-------------+-------+-------+---------------+------+---------+------+------+-------------+
|  1 | SIMPLE      | test  | range | data          | data | 35      | NULL |    4 | Using where |
+----+-------------+-------+-------+---------------+------+---------+------+------+-------------+

Handler_read_rnd

The number of requests to read a row based on a fixed position. This value is high if you are doing a lot of queries that require sorting of the result. You probably have a lot of queries that require MySQL to scan entire tables or you have joins that don't use keys properly.

This status comes into account if the old file_sort mechanism is used [2].

To make this work we have to modify our table slightly:

ALTER TABLE test ADD COLUMN file_sort text;

UPDATE test SET file_sort = 'abcdefghijklmnopqrstuvwxyz' WHERE id = 1;
UPDATE test SET file_sort = 'bcdefghijklmnopqrstuvwxyza' WHERE id = 2;
UPDATE test SET file_sort = 'cdefghijklmnopqrstuvwxyzab' WHERE id = 3;
UPDATE test SET file_sort = 'defghijklmnopqrstuvwxyzabc' WHERE id = 4;
UPDATE test SET file_sort = 'efghijklmnopqrstuvwxyzabcd' WHERE id = 5;
UPDATE test SET file_sort = 'fghijklmnopqrstuvwxyzabcde' WHERE id = 6;
UPDATE test SET file_sort = 'ghijklmnopqrstuvwxyzabcdef' WHERE id = 7;
UPDATE test SET file_sort = 'hijklmnopqrstuvwxyzabcdefg' WHERE id = 8;
UPDATE test SET file_sort = 'ijklmnopqrstuvwxyzabcdefgh' WHERE id = 9;
UPDATE test SET file_sort = 'jklmnopqrstuvwxyzabcdefghi' WHERE id = 10;

SELECT * FROM test ORDER BY file_sort asc;
10 rows in set

+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| Handler_read_rnd      | 10    |
| Handler_read_rnd_next | 11    |
+-----------------------+-------+

+----+-------------+-------+------+---------------+------+---------+------+------+----------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra          |
+----+-------------+-------+------+---------------+------+---------+------+------+----------------+
|  1 | SIMPLE      | test  | ALL  | NULL          | NULL | NULL    | NULL |   10 | Using filesort |
+----+-------------+-------+------+---------------+------+---------+------+------+----------------+

This is really a performance killer and should be avoided whenever possible!
Handler_read_rnd_next

The number of requests to read the next row in the data file. This value is high if you are doing a lot of table scans. Generally this suggests that your tables are not properly indexed or that your queries are not written to take advantage of the indexes you have.

+------+------+          +-------+
| Table|      |          | Index |
|      |      |          |       |
|      |      |          |       |
|      |      |          |       |
|      |      |          |       |
|      v      |          |       |
|             |          |       |
|             |          |       |
+-------------+          +-------+

SELECT * FROM test;
10 rows in set

+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| Handler_read_rnd_next | 11    |
+-----------------------+-------+

+----+-------------+-------+------+---------------+------+---------+------+------+-------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra |
+----+-------------+-------+------+---------------+------+---------+------+------+-------+
|  1 | SIMPLE      | test  | ALL  | NULL          | NULL | NULL    | NULL |   10 |       |
+----+-------------+-------+------+---------------+------+---------+------+------+-------+

Obviously, filtering also does not have an impact on the amount of work performed:

SELECT * FROM test WHERE ts = '2008-01-18 17:33:39';
Empty set

+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| Handler_read_rnd_next | 11    |
+-----------------------+-------+

+----+-------------+-------+------+---------------+------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
+----+-------------+-------+------+---------------+------+---------+------+------+-------------+
|  1 | SIMPLE      | test  | ALL  | NULL          | NULL | NULL    | NULL |   10 | Using where |
+----+-------------+-------+------+---------------+------+---------+------+------+-------------+

Literature

[1] HANDLER Syntax
[2] File sort

While looking at the MySQL slow log today I noticed some queries with absurdly long query times, like these:

# User@Host: flowershop2[flowershop2] @  [10.176.202.139]
# Query_time: 18446744073709.550781  Lock_time: 0.000027 Rows_sent: 0  Rows_examined: 1
SET timestamp=1332464662;
UPDATE f_users SET uCash = '9861', uSellFlowerTime = '1332464662' WHERE uId = '1829711071';
# Time: 120322 21:06:22
# User@Host: flowershop2[flowershop2] @  [10.176.227.212]
# Query_time: 18446744073709.550781  Lock_time: 0.000022 Rows_sent: 0  Rows_examined: 1
SET timestamp=1332464782;
UPDATE f_user_actions SET uaSize = '11' WHERE uId = '1551376611' AND uaId = '1332129864763';
# Time: 120322 21:07:22
# User@Host: flowershop2[flowershop2] @  [10.176.202.139]
# Query_time: 18446744073709.550781  Lock_time: 0.000020 Rows_sent: 2  Rows_examined: 2
SET timestamp=1332464842;

 

Query_time: 18446744073709.550781 is obviously not a real query time.

A web search suggests it is a MySQL bug: http://bugs.mysql.com/bug.php?id=47813

The bug was reported back in 2009, and it apparently still has not been fixed.

An article I read analyzes it as possibly caused by multiple CPUs: if a query starts executing on one CPU and is later migrated to a second one, this problem can occur.

The article: http://www.mysqlperformanceblog.com/2009/05/17/what-time-18446744073709550000-means/
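The magic value is easy to verify: it is exactly 2^64 microseconds rendered as seconds, i.e. an unsigned 64-bit microsecond counter that went slightly negative and wrapped around. Double-precision rounding even reproduces the logged digits:

```shell
# 2^64 microseconds expressed in seconds reproduces the bogus
# Query_time from the slow log, down to the last digit.
awk 'BEGIN { printf "%.6f\n", 2.0^64 / 1000000 }'
# prints 18446744073709.550781
```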


Original post: http://blog.csdn.net/kongxx/article/details/6656658

Today I upgraded Vertica from 4.0.x to 5.0.4 and found that the method described in my earlier post on configuring ports for multiple Vertica 4.x database instances no longer works. After some investigation, the following steps do the trick:

1. First, stop all database instances;

2. Create the database instances, e.g. mydb1 and mydb2;

3. Edit /opt/vertica/config/admintools.conf and change the ports in the mydb1 and mydb2 sections to the values you want, like this:

    [Configuration]
    install_opts = ['-u', 'fkong']
    default_base = /home/dbadmin
    show_hostnames = False
    format = 3

    [Cluster]
    hosts = 127.0.0.1
    spread_hosts =

    [Nodes]
    node0001 = 127.0.0.1,/opt/dbadmin,/opt/dbadmin
    v_mydb1_node0001 = 127.0.0.1,/opt/vertica/db/mydb1,/opt/vertica/db/mydb1
    v_mydb2_node0001 = 127.0.0.1,/opt/vertica/db/mydb2,/opt/vertica/db/mydb2

    [Database:mydb1]
    host = 127.0.0.1
    restartpolicy = ksafe
    port = 5551
    path = /opt/vertica/db/mydb1
    nodes = v_mydb1_node0001

    [Database:mydb2]
    host = 127.0.0.1
    restartpolicy = ksafe
    port = 5552
    path = /opt/vertica/db/mydb2
    nodes = v_mydb2_node0001

 

4. Restart the Vertica 5.x service.

Dropping a primary key (primary key values in a Vertica database are not enforced as unique):
SELECT ANALYZE_CONSTRAINTS('fb_s.c_log');

Find the key name, then:

ALTER TABLE fb_s.c_log DROP CONSTRAINT C_PRIMARY;

SELECT ANALYZE_CONSTRAINTS('fb_s.user_info');

ALTER TABLE fb_s.user_info DROP CONSTRAINT C_PRIMARY;

Creating a user and a schema:

CREATE user fb_s_sql IDENTIFIED BY 'password';
CREATE SCHEMA fb_s_sql;

Granting privileges:
GRANT ALL ON SCHEMA fb_s_sql TO fb_s_sql;
GRANT ALL ON SCHEMA fb_s TO fb_s_sql;

GRANT ALL ON TABLE fb_s_sql.sqllog TO fb_s_sql;

Creating tables:

CREATE TABLE fb_s.c_log (
    uid int  NOT NULL,
    cash int,
    gold int,
    level int,
    rtime datetime,
    tid varchar(20),
    act varchar(50),
    item varchar(500),
    value int,
    value2 int,
    time datetime
);

CREATE TABLE fb_s.new_c_log (
  uid integer PRIMARY KEY NOT NULL,
  cash integer,
  gold integer,
  level integer,
  rtime datetime,
  tid varchar(20),
  act varchar(50),
  item varchar(500),
  value integer,
  value2 integer,
  time datetime NOT NULL
)
PARTITION BY EXTRACT(year FROM time)*100 + EXTRACT(month FROM time);

The second table is partitioned by the time column.
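The partition expression packs each row's time into a yyyymm integer, so one partition is created per month; for example:

```shell
# EXTRACT(year FROM time) * 100 + EXTRACT(month FROM time) for a
# 2011-08 timestamp gives partition key 201108.
awk 'BEGIN { year = 2011; month = 8; print year * 100 + month }'
# prints 201108
```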

Adding and altering columns:

ALTER TABLE fb_s.c_log ADD COLUMN value2 integer default 0;
ALTER TABLE fb_s.c_log ALTER COLUMN duration SET DEFAULT 0;
ALTER TABLE fb_s.c_log ALTER COLUMN mesg SET DEFAULT '';

Copying data between two tables:
insert into fb_s.c_log (uid,cash,gold,level,rtime,tid,act,item,value,value2,time)
(select * from fb_s.c_logbak);

Copying data between two databases:

Export from the source database:

vsql -d topcity -U dbadmin -w password -F ',' -At -o fs_user_info.csv -c "SELECT * FROM fb_s.user_info;" &
vsql -d topcity -U dbadmin -w password -F ',' -At -o fs_c_log.csv -c "SELECT * FROM fb_s.c_log;" &

Import into the target database:
COPY fb_s.user_info FROM '/opt/fs_user_info.csv' EXCEPTIONS '/tmp/exp.log' DELIMITER ',';
COPY fb_s.c_log FROM '/opt/fs_c_log.csv' EXCEPTIONS '/tmp/exp.log' DELIMITER ',';
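COPY silently diverts bad rows into the EXCEPTIONS file, so it pays to look at that file after each load. A small helper sketch (the path matches the commands above):

```shell
# check_rejects FILE: report whether a COPY wrote anything to its
# EXCEPTIONS file.
check_rejects() {
  if [ -s "$1" ]; then
    echo "rejected rows logged in $1:"
    head -5 "$1"
  else
    echo "no rejected rows"
  fi
}
# Example: check_rejects /tmp/exp.log
```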

Step 1: stop mysqld
service mysqld stop

Step 2: start mysqld again, skipping the grant tables
/usr/local/mysql/bin/mysqld_safe --skip-grant-tables &

Step 3: connect and change the password
mysql -u root
mysql>use mysql;
mysql>UPDATE user SET password=PASSWORD("new_pass") WHERE user="root";
mysql>flush privileges;
mysql>\q

Step 4: restart mysql normally and log in with the new password.

yum install gcc gcc-c++

groupadd mysql
useradd --shell /sbin/nologin -g mysql mysql
tar zxvf mysql-5.1.58.tar.gz
cd mysql-5.1.58

 

# Set CFLAGS and CXXFLAGS; in particular, make sure the HAVE_DLOPEN option is enabled
export CFLAGS='-O2 -DHAVE_DLOPEN -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic'

export CXXFLAGS='-O2 -DHAVE_DLOPEN -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic'

./configure --prefix=/data1/app/services/mysql \
--with-charset=utf8 \
--with-collation=utf8_general_ci \
--with-extra-charsets=complex \
--enable-thread-safe-client \
--enable-local-infile \
--enable-assembler \
--with-big-tables \
--localstatedir=/data1/app/services/mysql/data \
--with-unix-socket-path=/data1/app/tmp/mysql.sock \
--with-mysqld-user=mysql \
--with-plugins=max-no-ndb

make
make install

cp support-files/my-medium.cnf /etc/my.cnf
cp support-files/mysql.server /etc/init.d/mysql
chmod u+x /etc/init.d/mysql

cd /data1/app/services/mysql
bin/mysql_install_db --user=mysql --force --datadir=data
chown -R root  .
chown -R mysql data
chgrp -R mysql .

chkconfig --add mysql
chkconfig --level 345 mysql on
bin/mysqld_safe --user=mysql &
service mysql restart

 

ln -s /data1/app/mysql52/bin/mysql /sbin/mysql
ln -s /data1/app/mysql52/bin/mysqladmin /sbin/mysqladmin

 

my.cnf reference (the key lines are ignore-builtin-innodb and plugin-load, which enable the InnoDB plugin):

[mysqld]
datadir=/data1/app/services/mysql/data
socket=/data1/app/tmp/mysql.sock
pid-file=/data1/app/tmp/mysql.pid
user=mysql
port=3308

ignore-builtin-innodb
plugin-load=innodb=ha_innodb_plugin.so

# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).

old_passwords=1

character-set-server = utf8
collation-server = utf8_general_ci
default-storage-engine=INNODB

skip-name-resolve
skip-external-locking
key_buffer = 16M
max_allowed_packet = 1M
table_cache = 32512
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 2M
myisam_sort_buffer_size = 8M
thread_cache_size = 32
query_cache_size = 8M
tmp_table_size = 64M

# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 16
lower_case_table_names = 1
max_connections = 100

slow_query_log_file = /data1/app/log/mysql-slow.log
slow_query_log = 1
log-queries-not-using-indexes
long_query_time = 1
wait_timeout = 86400

log-bin = mysql-bin
server-id = 1
expire-logs-days = 7

innodb_buffer_pool_size = 256M
innodb_additional_mem_pool_size = 16M
innodb_file_per_table
innodb_open_files = 65535
innodb_lock_wait_timeout = 50
innodb_log_file_size = 512M
innodb_log_buffer_size = 16M
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2

binlog_format = mixed
transaction-isolation = READ-UNCOMMITTED

[mysqld_safe]
log-error=/data1/app/log/mysql.log

 

Start the database and take a look inside:

[root@fbtw-topcity log]# mysql -h 127.0.0.1 -P 3308
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.58-log Source distribution

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> show plugin;         
+------------+----------+----------------+---------------------+---------+
| Name       | Status   | Type           | Library             | License |
+------------+----------+----------------+---------------------+---------+
| binlog     | ACTIVE   | STORAGE ENGINE | NULL                | GPL     |
| partition  | ACTIVE   | STORAGE ENGINE | NULL                | GPL     |
| ARCHIVE    | ACTIVE   | STORAGE ENGINE | NULL                | GPL     |
| BLACKHOLE  | ACTIVE   | STORAGE ENGINE | NULL                | GPL     |
| CSV        | ACTIVE   | STORAGE ENGINE | NULL                | GPL     |
| FEDERATED  | DISABLED | STORAGE ENGINE | NULL                | GPL     |
| MEMORY     | ACTIVE   | STORAGE ENGINE | NULL                | GPL     |
| MyISAM     | ACTIVE   | STORAGE ENGINE | NULL                | GPL     |
| MRG_MYISAM | ACTIVE   | STORAGE ENGINE | NULL                | GPL     |
| InnoDB     | ACTIVE   | STORAGE ENGINE | ha_innodb_plugin.so | GPL     |
+------------+----------+----------------+---------------------+---------+
10 rows in set, 1 warning (0.00 sec)

mysql> show variables like 'innodb_%';
+---------------------------------+------------------------+
| Variable_name                   | Value                  |
+---------------------------------+------------------------+
| innodb_adaptive_flushing        | ON                     |
| innodb_adaptive_hash_index      | ON                     |
| innodb_additional_mem_pool_size | 16777216               |
| innodb_autoextend_increment     | 8                      |
| innodb_autoinc_lock_mode        | 1                      |
| innodb_buffer_pool_size         | 268435456              |
| innodb_change_buffering         | inserts                |
| innodb_checksums                | ON                     |
| innodb_commit_concurrency       | 0                      |
| innodb_concurrency_tickets      | 500                    |
| innodb_data_file_path           | ibdata1:10M:autoextend |
| innodb_data_home_dir            |                        |
| innodb_doublewrite              | ON                     |
| innodb_fast_shutdown            | 1                      |
| innodb_file_format              | Antelope               |
| innodb_file_format_check        | Antelope               |
| innodb_file_per_table           | ON                     |
| innodb_flush_log_at_trx_commit  | 2                      |
| innodb_flush_method             | O_DIRECT               |
| innodb_force_recovery           | 0                      |
| innodb_io_capacity              | 200                    |
| innodb_lock_wait_timeout        | 50                     |
| innodb_locks_unsafe_for_binlog  | OFF                    |
| innodb_log_buffer_size          | 16777216               |
| innodb_log_file_size            | 536870912              |
| innodb_log_files_in_group       | 2                      |
| innodb_log_group_home_dir       | ./                     |
| innodb_max_dirty_pages_pct      | 75                     |
| innodb_max_purge_lag            | 0                      |
| innodb_mirrored_log_groups      | 1                      |
| innodb_old_blocks_pct           | 37                     |
| innodb_old_blocks_time          | 0                      |
| innodb_open_files               | 65535                  |
| innodb_read_ahead_threshold     | 56                     |
| innodb_read_io_threads          | 4                      |
| innodb_replication_delay        | 0                      |
| innodb_rollback_on_timeout      | OFF                    |
| innodb_spin_wait_delay          | 6                      |
| innodb_stats_method             | nulls_equal            |
| innodb_stats_on_metadata        | ON                     |
| innodb_stats_sample_pages       | 8                      |
| innodb_strict_mode              | OFF                    |
| innodb_support_xa               | ON                     |
| innodb_sync_spin_loops          | 30                     |
| innodb_table_locks              | ON                     |
| innodb_thread_concurrency       | 0                      |
| innodb_thread_sleep_delay       | 10000                  |
| innodb_use_sys_malloc           | ON                     |
| innodb_version                  | 1.0.17                 |
| innodb_write_io_threads         | 4                      |
+---------------------------------+------------------------+
50 rows in set (0.02 sec)

mysql>

If the InnoDB log files are too small, checkpoints are triggered too often, causing frequent I/O requests. So how big is big enough?

    ========================================
    : (none) 16:13:13> pager grep sequence
    PAGER set to 'grep sequence'
    : (none) 16:13:14>
    : (none) 16:13:15> SHOW engine innodb STATUS\G SELECT sleep(60); SHOW engine innodb STATUS\G
    Log sequence number 1450 485101299
    1 row in set (0.09 sec)


    1 row in set (1 min 0.01 sec)

    Log sequence number 1450 505024667
    1 row in set (0.00 sec)

    : (none) 16:14:37> nopager
    PAGER set to stdout
    : (none) 16:14:43> select (505024667-485101299)/1024/1024;
    +---------------------------------+
    | (505024667-485101299)/1024/1024 |
    +---------------------------------+
    |                     19.00040436 |
    +---------------------------------+
    1 row in set (0.00 sec)
    ========================================
    Notice the log sequence number. That's the total number of bytes written to the transaction log.
    Sampling during peak hours shows about 19MB of log generated per minute, so setting the total InnoDB log size to 19MB*60 = 1140MB should be plenty.
    60 minutes is a rule-of-thumb window; you can also size it differently, e.g. 500MB files with 3 files in the group, which is relatively safe.
    You can also measure the log volume with:
    show status like 'Innodb_os_log_written'; select sleep(60); show status like 'Innodb_os_log_written';
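The sizing arithmetic above as a reusable one-liner (numbers are the two LSN samples taken during the 60-second window):

```shell
# Difference of the two log sequence numbers = bytes of redo written
# in the 60s sample; scale to a 60-minute window per the rule of thumb.
awk 'BEGIN {
  mb_per_min = (505024667 - 485101299) / 1024 / 1024
  printf "%.2f MB/min -> %.0f MB of redo for 60 min\n", mb_per_min, mb_per_min * 60
}'
# prints 19.00 MB/min -> 1140 MB of redo for 60 min
```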