Author | A few tips: the best shortcut to recruiting software talent in mainland China

In reply to: I use MyISAM, but it's not at the million-record level; it was only stable at 200,000 to 500,000 records -- 文成武德周,乾坤日月明,天地正气锋 - (390 Byte) 2005-1-19 Wed, 08:03 (1014 reads)
ykliu
Title: 海归少校 (Returnee Major)
Joined: 2004/07/21  Posts: 78
Haigui points: 10818
Posted by ykliu in 海归招聘 (Returnee Recruitment), from 海归网 http://www.haiguinet.com
I am sure there are many MySQL experts here. I am not trying to pretend to be one; I just have some questions about your solution.

First of all, the instability and high CPU cost may not be caused by MySQL itself. For example:
1. Maybe your hard drive cannot handle the high data transfer rate, and most of the CPU cost is actually spent waiting on IO.
2. Maybe you are low on memory, so the system is swapping heavily.
3. Maybe you allocated too little memory to MySQL.
There are many other things you could try to tune before blaming MySQL. I assume you did that.
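To make point 3 concrete: a MyISAM-era server is usually tuned through my.cnf. The values below are a hypothetical starting point, not recommendations for any particular workload:

```ini
# my.cnf -- hypothetical values, adjust to your RAM and workload
[mysqld]
key_buffer_size  = 256M   # MyISAM index cache; the most important MyISAM setting
table_cache      = 512    # open-table cache; matters when data is split across many tables
sort_buffer_size = 4M     # per-connection sort buffer
read_buffer_size = 1M     # per-connection sequential-scan buffer
```

If key_buffer_size is left at its tiny default, index reads that should be memory hits become disk seeks, which looks exactly like "MySQL is slow and CPU/IO-bound".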
You are actually manually partitioning your data; that is a normal approach to handling mass data. The problem is that MySQL doesn't support views, so you are creating quite a problem for yourself.

In your example you partition by 50,000. I guess that's a typo and should be 500,000, so for 10M records you need 20 tables.

If your application only selects a single record from your tables by newsid, you can create a procedure or function to pre-determine which table to select from. However, if you want to select records based on any other column of the table, you will have trouble, and you will also have problems with range selections. Since there are no views in MySQL, you have to UNION all the tables to accomplish that, and the CPU and IO cost is going to be huge.
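The routing idea can be sketched in a few lines. This uses Python with SQLite as a stand-in for MySQL; the 500,000-row partition size, the news_N table names, and the (newsid, title) schema are my own assumptions for illustration:

```python
import sqlite3

PARTITION_SIZE = 500_000

def table_for(newsid: int) -> str:
    """Route a newsid to its partition table, e.g. news_0, news_1, ..."""
    return f"news_{newsid // PARTITION_SIZE}"

conn = sqlite3.connect(":memory:")
for i in range(3):  # 3 partitions instead of 20, to keep the demo small
    conn.execute(f"CREATE TABLE news_{i} (newsid INTEGER PRIMARY KEY, title TEXT)")

# Each insert goes to exactly one table, chosen by the routing function.
for newsid, title in [(7, "a"), (500_123, "b"), (1_200_456, "c")]:
    conn.execute(f"INSERT INTO {table_for(newsid)} VALUES (?, ?)", (newsid, title))

# Point lookup by newsid: cheap, touches a single partition.
row = conn.execute(
    f"SELECT title FROM {table_for(500_123)} WHERE newsid = ?", (500_123,)
).fetchone()

# Selection by any other column: without views you must UNION ALL every
# partition, which is exactly where the CPU/IO cost blows up.
union = " UNION ALL ".join(f"SELECT newsid, title FROM news_{i}" for i in range(3))
rows = conn.execute(f"SELECT newsid FROM ({union}) WHERE title >= 'b'").fetchall()
```

The point lookup stays O(1) in the number of partitions, but the title query scans all of them; a real view would at least hide the UNION, which MySQL at this point cannot do.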
Remember, we haven't even touched the UPDATE, INSERT, and DELETE parts yet.

OK, I am not trying to apply for your position, haha. Just my 2 cents.