MongoDB High-Availability Cluster (Part 4): Sharding

After the setup in the previous post, "MongoDB High-Availability Cluster (Part 3): Inside the Replica Set", two problems were still unsolved:
  • Every secondary holds a full copy of the database. Won't the secondaries come under too much pressure?
  • When the data volume grows beyond what the machines can support, can the cluster scale out automatically?
Early in a system's life, while data volume is still small, this causes no real trouble, but as the data keeps growing, some machine will sooner or later hit a hardware bottleneck. MongoDB's headline feature is its architecture for massive data, so it has to solve this problem, and "sharding" is the mechanism it provides for exactly that.
How do traditional databases handle massive reads and writes? In one phrase: divide and conquer. The architecture diagram below, which taobao's 岳旭强 presented at InfoQ, makes the idea clear:

The diagram above includes TDDL, taobao's data access layer component. Its main jobs are SQL parsing and routing: based on the functionality an application requests, it parses the SQL, determines which business database and which table to query, and returns the results. In detail:


So much for traditional database architecture; how does NoSQL get there? To scale MySQL automatically you must add a data access layer and drive the scaling from application code, and adding, removing and backing up databases also has to be controlled programmatically. Once the number of database nodes grows, maintaining all of this becomes a serious headache. MongoDB, however, handles all of it through its own internal machinery. Impressive! Let's look at the mechanisms MongoDB uses for routing and sharding:



The diagram shows four components: mongos, config server, shard, and replica set.
mongos is the entry point for requests to the database cluster. All requests are coordinated through mongos, so there is no need to build a router into the application: mongos itself is a request dispatch center that forwards each data request to the appropriate shard server. Production deployments usually run multiple mongos instances as entry points, so that one of them failing does not leave the whole cluster unreachable.
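For example, with the deployment built later in this post (three mongos instances, each on port 20000), an application could list all of them in its connection string so the driver can fail over between entry points; the database name xmj here is just for illustration:

mongodb://172.17.0.2:20000,172.17.0.3:20000,172.17.0.4:20000/xmj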
config server: as the name implies, the configuration servers store all of the cluster's metadata (routing and shard configuration). mongos itself has no persistent store for shard and routing information; it only caches it in memory, while the config servers hold the authoritative copy. A mongos loads its configuration from the config servers when it first starts or restarts, and when the configuration later changes, the config servers notify every mongos to refresh its state, so routing stays accurate. Production deployments run multiple config servers, because the sharding metadata they hold must not be lost: even if one goes down, the cluster keeps running as long as at least one copy survives.
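You can browse this metadata yourself from a mongos by looking into the config database; a quick sketch (collection names as in MongoDB 3.4):

mongos> use config
mongos> db.shards.find()    // one document per shard
mongos> db.chunks.find()    // chunk ranges and the shard that owns each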
shard: the sharding itself. As noted above, even the most capable single machine has a ceiling. It is like warfare: one soldier, no matter how strong, cannot beat an entire division. And as the saying goes, three cobblers together out-plan a Zhuge Liang; this is where the strength of a team shows. The internet works the same way: what one ordinary machine cannot do, several machines together can, as the diagram below shows:



A single machine's collection, Collection1, stores 1 TB of data: far too much load! Split across 4 machines, each holds 256 GB, spreading out the pressure that was concentrated on one machine. You might ask why not simply fit the one machine with a bigger disk. But storage space is not the only constraint: a running database is also bounded by disk reads and writes, network I/O, CPU and memory. Once the sharding rules are configured in a MongoDB cluster, operating the database through mongos automatically forwards each data request to the right shard machine. In production, choose the shard key carefully: it determines how evenly the data spreads across the shards. If one shard ends up with 1 TB while the others get nothing, you would have been better off not sharding at all!
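One common way to hedge against a hot or monotonically increasing key is a hashed shard key; a minimal sketch (the test.events namespace is hypothetical, not part of this tutorial's setup):

mongos> sh.enableSharding("test")
mongos> sh.shardCollection("test.events", { "uid": "hashed" })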
replica set: the previous two posts covered this in detail, so why does it show up here again? Because without replica sets, the four shards in the diagram are an incomplete architecture: if one shard died, a quarter of the data would be lost. A highly available sharded architecture therefore builds a replica set for each shard to keep that shard reliable. A typical production layout is 2 data-bearing members + 1 arbiter per shard.
With all that said, let's get hands-on and build a highly available MongoDB cluster:
First decide the number of each component: 3 mongos, 3 config servers, and 3 shards (3 shard servers); each shard additionally has one replica and one arbiter, i.e. 3 × 2 = 6 more members, for a total of 15 instances to deploy. These instances can run on separate machines or share machines. Our test resources are limited, so we prepared only 3 machines; instances on the same machine just need distinct ports. Here is the physical deployment:


With the architecture settled, time to install the software!
1. Prepare the machines
172.17.0.2
172.17.0.3
172.17.0.4
2. Install
tar xf mongodb-linux-x86_64-rhel62-3.4.2.tgz
mv mongodb-linux-x86_64-rhel62-3.4.2   /usr/local/mongodb
3. On each machine, create five directories: mongos, config, shard1, shard2, shard3.

Since mongos stores no data, it only needs a log directory.
cat mkdir_dir.sh
#!/bin/bash
cd /usr/local/mongodb
# mongos log directory
mkdir -p mongos/log
# config server log directory
mkdir -p config/log
# config server data directory
mkdir -p config/data
# shard1 data directory
mkdir -p shard1/data
# shard2 data directory
mkdir -p shard2/data
# shard3 data directory
mkdir -p shard3/data
# shard1 log directory
mkdir -p shard1/log
# shard2 log directory
mkdir -p shard2/log
# shard3 log directory
mkdir -p shard3/log

4. Plan the port for each of the 5 components. Since every machine runs mongos, config server, shard1, shard2 and shard3 at the same time, the instances must be distinguished by port.
The ports can be chosen freely; in this post mongos is 20000, config server is 21000, shard1 is 22001, shard2 is 22002, and shard3 is 22003.
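Laid out as a matrix, every one of the three hosts runs the same five processes:

host          mongos   config server   shard1   shard2   shard3
172.17.0.2    20000    21000           22001    22002    22003
172.17.0.3    20000    21000           22001    22002    22003
172.17.0.4    20000    21000           22001    22002    22003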
5. Start the config service on each server. Version 3.4 requires the config servers to form a replica set; earlier versions had no such requirement.
# configuration file
cat configsrv.conf
pidfilepath = /usr/local/mongodb/config/log/configsrv.pid
dbpath = /usr/local/mongodb/config/data
logpath = /usr/local/mongodb/config/log/configsrv.log
logappend = true

# enable journaling; data operations are written to files under the journal directory
journal = true

bind_ip = 0.0.0.0
port = 21000
fork = true

#declare this is a config db of a cluster;
configsvr = true

#profile = 1
#slowms = 500

# enable authentication
#auth=true

# enable the web monitoring interface
#httpinterface=true
#rest=true

# replica set name
replSet=configs

# cluster authentication key
#clusterAuthMode=keyFile
#keyFile=/usr/local/mongodb/mongo.key

#declare this is a shard db of a cluster;
#shardsvr = true

# oplog size; the oplog is a capped collection, defaulting to 5% of disk capacity (min 1 GB, max 50 GB)
#oplogSize=2000

# maximum number of connections
maxConns=20000
#——————————————–

Start it:
mongod -f configsrv.conf
Create the replica set:
mongo --port 21000
>use admin;
>config={
    _id: "configs",
    members: [
        {
            _id: 0,
            host: "172.17.0.2:21000"
        },
        {
            _id: 1,
            host: "172.17.0.3:21000"
        },
        {
            _id: 2,
            host: "172.17.0.4:21000"
        }
    ]
}

>rs.initiate(config)
{ "ok": 1 }

6. Configure the shard replica sets (on all three machines) and start them.
# configuration file
#shard1
cat shard1.conf
#——————————————–
pidfilepath = /usr/local/mongodb/shard1/log/shard1.pid
dbpath = /usr/local/mongodb/shard1/data
logpath = /usr/local/mongodb/shard1/log/shard1.log
logappend = true
bind_ip = 0.0.0.0
port = 22001
fork = true
journal = true

profile = 1
slowms = 500

# enable the web monitoring interface
httpinterface=true
rest=true

# replica set name
replSet=shard1

#declare this is a shard db of a cluster;
shardsvr = true

# cluster authentication key
#clusterAuthMode=keyFile
#keyFile=/usr/local/mongodb/mongo.key

#declare this is a config db of a cluster;
#configsvr = true

# enable authentication
#auth=true

# maximum number of connections
maxConns=20000
#——————————————–
Start it: mongod -f shard1.conf
#mongo --port 22001
use admin;
config = { _id:"shard1", members:[
{_id:0,host:"172.17.0.2:22001"},
{_id:1,host:"172.17.0.3:22001",[/size][/font][/backcolor][backcolor=rgb(255, 255, 255)][font=Roboto, "][size=16px]arbiterOnly: true},
{_id:2,host:"172.17.0.4:22001"}
]
};

rs.initiate(config);
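As a sanity check after rs.initiate(), you can list the members and flag the arbiter from the replica set configuration; a small sketch, run in the same shell once its prompt shows shard1:PRIMARY or shard1:SECONDARY:

shard1:PRIMARY> rs.conf().members.forEach(function(m){ print(m.host + (m.arbiterOnly ? "  (arbiter)" : "")) })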


#shard2
cat shard2.conf
#——————————————–
pidfilepath = /usr/local/mongodb/shard2/log/shard2.pid
dbpath = /usr/local/mongodb/shard2/data
logpath = /usr/local/mongodb/shard2/log/shard2.log
logappend = true
bind_ip = 0.0.0.0
port = 22002
fork = true
journal = true

profile = 1
slowms = 500

# enable the web monitoring interface
httpinterface=true
rest=true

# replica set name
replSet=shard2

#declare this is a shard db of a cluster;
shardsvr = true

# cluster authentication key
#clusterAuthMode=keyFile
#keyFile=/usr/local/mongodb/mongo.key

#declare this is a config db of a cluster;
#configsvr = true

# enable authentication
#auth=true

# maximum number of connections
maxConns=20000
#——————————————–
Start it: mongod -f shard2.conf
#mongo --port 22002
use admin;
config = { _id:"shard2", members:[
{_id:0,host:"172.17.0.2:22002"},
{_id:1,host:"172.17.0.3:22002"},
{_id:2,host:"172.17.0.4:22002"}
]
};

rs.initiate(config);


#shard3
cat shard3.conf
#——————————————–
pidfilepath = /usr/local/mongodb/shard3/log/shard3.pid
dbpath = /usr/local/mongodb/shard3/data
logpath = /usr/local/mongodb/shard3/log/shard3.log
logappend = true
bind_ip = 0.0.0.0
port = 22003
fork = true
journal = true

profile = 1
slowms = 500

# enable the web monitoring interface
httpinterface=true
rest=true

# replica set name
replSet=shard3

#declare this is a shard db of a cluster;
shardsvr = true

# cluster authentication key
#clusterAuthMode=keyFile
#keyFile=/usr/local/mongodb/mongo.key

#declare this is a config db of a cluster;
#configsvr = true

# enable authentication
#auth=true

# maximum number of connections
maxConns=20000
#——————————————–
Start it: mongod -f shard3.conf
#mongo --port 22003
use admin;
config = { _id:"shard3", members:[
{_id:0,host:"172.17.0.2:22003"},
{_id:1,host:"172.17.0.3:22003"},
{_id:2,host:"172.17.0.4:22003"}
]
};

rs.initiate(config);



7. Configure the router servers. The config servers and shard servers must be started first; then start the router instances.
Start the router instances (on all three machines):
cat mongos.conf
#——————————————————————————-
pidfilepath = /usr/local/mongodb/mongos/log/mongos.pid
logpath = /usr/local/mongodb/mongos/log/mongos.log
logappend = true
bind_ip = 0.0.0.0
port = 20000
fork = true

# enable authentication
#auth=true

# enable the web monitoring interface
#httpinterface=true
#rest=true

# replica set name
#replSet=dev

# cluster authentication key
#clusterAuthMode=keyFile
#keyFile=/usr/local/mongodb/mongo.key

# config servers to connect to; there must be exactly 1 or 3; "configs" is the config server replica set name
configdb = configs/172.17.0.2:21000,172.17.0.3:21000,172.17.0.4:21000

# maximum number of connections
maxConns=20000
#——————————————————————————-

# start mongos on all three servers
mongos -f mongos.conf
Check the current sharding status:
#mongo --port 20000

mongos> sh.status()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("589997810aa6bd4d2aa3fb04")
}
  shards:
  active mongoses:
    "3.4.2" : 3
  autosplit:
    Currently enabled: yes
  balancer:
    Currently enabled:  yes
    Currently running:  no
        Balancer lock taken at Sat Mar 04 2017 12:00:03 GMT+0000 (UTC) by ConfigServer:Balancer
    Failed balancer rounds in last 5 attempts:  0
    Migration Results for the last 24 hours:
        No recent migrations
  databases:


8. Add the shards.
Log in to a mongos server and run the following (shard3 is left out for now; it is added later, in the tag-aware sharding section):
mongos>sh.addShard("shard1/172.17.0.2:22001,172.17.0.3:22001,172.17.0.4:22001")
mongos>sh.addShard("shard2/172.17.0.2:22002,172.17.0.3:22002,172.17.0.4:22002")
# check the current shard status
mongos> use admin
switched to db admin
mongos> db.runCommand({listshards:1})
{
    "shards" : [
        {
            "_id" : "shard1",
            "host" : "shard1/172.17.0.2:22001,172.17.0.3:22001,172.17.0.4:22001",
            "state" : 1
        },
        {
            "_id" : "shard2",
            "host" : "shard2/172.17.0.2:22002,172.17.0.3:22002,172.17.0.4:22002",
            "state" : 1
        }
    ],
    "ok" : 1
}


Check the current status information:
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("58ac0fac33fb547d2a2f7aa3")
}
  shards:
    {  "_id" : "shard1",  "host" : "shard1/172.17.0.2:22001,172.17.0.3:22001,172.17.0.4:22001",  "state" : 1 }
    {  "_id" : "shard2",  "host" : "shard2/172.17.0.2:22002,172.17.0.3:22002,172.17.0.4:22002",  "state" : 1 }
  active mongoses:
    "3.4.2" : 3
  autosplit:
    Currently enabled: yes
  balancer:
    Currently enabled:  yes
    Currently running:  no
        Balancer lock taken at Sat Mar 04 2017 12:02:43 GMT+0000 (UTC) by ConfigServer:Balancer
    Failed balancer rounds in last 5 attempts:  0
    Migration Results for the last 24 hours:
        No recent migrations
  databases:


Enable sharding on the database:
mongos> sh.enableSharding("xmj")
{ "ok" : 1 }
Sharding a collection requires the collection to have an index on the shard key.
Insert 100,000 test documents (the shell prints only the last insert's WriteResult):
mongos>for(i=1;i<100000;i++){db.u1.insert({"uid":i,"name":"jesse"+i})}
WriteResult({ "nInserted" : 1 })

Create an index on uid:
mongos> db.u1.createIndex({"uid":1})
{
    "raw" : {
        "configs/172.17.0.2:21000,172.17.0.3:21000,172.17.0.4:21000" : {
            "createdCollectionAutomatically" : false,
            "numIndexesBefore" : 2,
            "numIndexesAfter" : 2,
            "note" : "all indexes already exist",
            "ok" : 1,
            "$gleStats" : {
                "lastOpTime" : {
                    "ts" : Timestamp(1488632533, 1),
                    "t" : NumberLong(2)
                },
                "electionId" : ObjectId("7fffffff0000000000000002")
            }
        }
    },
    "ok" : 1
}

Check the indexes (the ns fields below read admin.u1 because this shell session was still on the admin database; the collection actually sharded below is xmj.u1):
mongos> db.u1.getIndexes()
[
    {
        "v" : 2,
        "key" : {
            "_id" : 1
        },
        "name" : "_id_",
        "ns" : "admin.u1"
    },
    {
        "v" : 2,
        "key" : {
            "uid" : 1
        },
        "name" : "uid_1",
        "ns" : "admin.u1"
    }
]


Specify uid as the collection's shard key:
mongos> sh.shardCollection("xmj.u1",{"uid":1})
{ "collectionsharded" : "xmj.u1", "ok" : 1 }
Insert another 100,000 documents to test:
for(i=100000;i<200000;i++){ db.u1.insert({"uid":i,"name":"jesse"+i}) }
WriteResult({ "nInserted" : 1 })
Check the sharding status:
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("58b991a0ab014f43bc119ad9")
}
  shards:
    {  "_id" : "shard1",  "host" : "shard1/172.17.0.2:22001,172.17.0.3:22001,172.17.0.4:22001",  "state" : 1 }
    {  "_id" : "shard2",  "host" : "shard2/172.17.0.2:22002,172.17.0.3:22002,172.17.0.4:22002",  "state" : 1 }
  active mongoses:
    "3.4.2" : 3
 autosplit:
    Currently enabled: yes
  balancer:
    Currently enabled:  yes
    Currently running:  no
        Balancer lock taken at Sat Mar 04 2017 12:04:03 GMT+0000 (UTC) by ConfigServer:Balancer
    Failed balancer rounds in last 5 attempts:  2
    Last reported error:  could not find host matching read preference { mode: "primary" } for set shard1
    Time of Reported error:  Sat Mar 04 2017 12:04:23 GMT+0000 (UTC)
    Migration Results for the last 24 hours:
        No recent migrations
  databases:
    {  "_id" : "zly",  "primary" : "shard2",  "partitioned" : true }
        zly.u1
            shard key: { "uid" : 1 }
            unique: false
            balancing: true
            chunks:
                shard2    1
            { "uid" : { "$minKey" : 1 } } -->> { "uid" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0)
    {  "_id" : "xmj",  "primary" : "shard2",  "partitioned" : true }
        xmj.u1
            shard key: { "uid" : 1 }
            unique: false
            balancing: true
            chunks:
                shard2    1
            { "uid" : { "$minKey" : 1 } } -->> { "uid" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0)


Removing a shard:
1. Confirm the balancer is enabled:
mongos> sh.getBalancerState()
true
2. Perform the removal:
mongos> use admin
switched to db admin
mongos> db.runCommand({removeShard:"shard1"})
{
    "msg" : "draining started successfully",
    "state" : "started",
    "shard" : "shard1",
    "note" : "you need to drop or movePrimary these databases",
    "dbsToMove" : [ ],
    "ok" : 1
}
Run the command again to check on the draining process:
mongos> db.runCommand({removeShard:"shard1"})
{
    "msg" : "removeshard completed successfully",
    "state" : "completed",
    "shard" : "shard1",
    "ok" : 1
}
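Draining moves every chunk off the departing shard before the removal completes, which can take a while with real data volumes. One hedged way to watch the progress is to count the chunks the shard still owns in the config database:

mongos> use config
mongos> db.chunks.find({ shard: "shard1" }).count()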
Check the shard status:
mongos> db.runCommand({listshards:1})
{
    "shards": [
        {
            "_id": "shard2",
            "host": "shard2/172.17.0.2:22002,172.17.0.3:22002,172.17.0.4:22002",
            "state": 1
        }
    ],
    "ok": 1
}
mongos> db.printShardingStatus()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("58b991a0ab014f43bc119ad9")
}
  shards:
    {  "_id" : "shard2",  "host" : "shard2/172.17.0.2:22002,172.17.0.3:22002,172.17.0.4:22002",  "state" : 1 }
  active mongoses:
    "3.4.2" : 3
 autosplit:
    Currently enabled: yes
  balancer:
    Currently enabled:  yes
    Currently running:  no
        Balancer lock taken at Sat Mar 04 2017 12:04:03 GMT+0000 (UTC) by ConfigServer:Balancer
    Failed balancer rounds in last 5 attempts:  2
    Last reported error:  could not find host matching read preference { mode: "primary" } for set shard1
    Time of Reported error:  Sat Mar 04 2017 12:04:23 GMT+0000 (UTC)
    Migration Results for the last 24 hours:
        No recent migrations
  databases:
    {  "_id" : "zly",  "primary" : "shard2",  "partitioned" : true }
        zly.u1
            shard key: { "uid" : 1 }
            unique: false
            balancing: true
            chunks:
                shard2    1
            { "uid" : { "$minKey" : 1 } } -->> { "uid" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0)
    {  "_id" : "xmj",  "primary" : "shard2",  "partitioned" : true }
        xmj.u1
            shard key: { "uid" : 1 }
            unique: false
            balancing: true
            chunks:
                shard2    1
            { "uid" : { "$minKey" : 1 } } -->> { "uid" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0) 


Adding the removed shard back into the cluster:
Clear the shard's data directory; once the old data is deleted, start the processes and create the replica set the same way as above:
#mongo --port 22001
use admin;
config = { _id:"shard1", members:[
{_id:0,host:"172.17.0.2:22001"},
{_id:1,host:"172.17.0.3:22001"},
{_id:2,host:"172.17.0.4:22001"}
]
};

rs.initiate(config);
Then create a new collection and insert one test document, to later test migration of non-sharded data:
shard1:PRIMARY> use xmj2
switched to db xmj2
shard1:PRIMARY> db.u2.insert({"uid":1,"username":"jesse"})
WriteResult({ "nInserted" : 1 })
mongos>sh.addShard("shard1/172.17.0.2:22001,172.17.0.3:22001,172.17.0.4:22001")
{ "shardAdded" : "shard1", "ok" : 1 }
Insert more data to test:
mongos> use xmj
switched to db xmj
mongos> for(i=200000;i<300000;i++){ db.u1.insert({"uid":i,"name":"jesse"+i}) };
WriteResult({ "nInserted" : 1 })
Check the data:
mongos>  sh.status()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("58b991a0ab014f43bc119ad9")
}
  shards:
    {  "_id" : "shard1",  "host" : "shard1/172.17.0.2:22001,172.17.0.3:22001,172.17.0.4:22001",  "state" : 1 }
    {  "_id" : "shard2",  "host" : "shard2/172.17.0.2:22002,172.17.0.3:22002,172.17.0.4:22002",  "state" : 1 }
  active mongoses:
    "3.4.2" : 3
 autosplit:
    Currently enabled: yes
  balancer:
    Currently enabled:  yes
    Currently running:  no
        Balancer lock taken at Sat Mar 04 2017 12:04:03 GMT+0000 (UTC) by ConfigServer:Balancer
    Failed balancer rounds in last 5 attempts:  2
    Last reported error:  could not find host matching read preference { mode: "primary" } for set shard1
    Time of Reported error:  Sat Mar 04 2017 12:04:23 GMT+0000 (UTC)
    Migration Results for the last 24 hours:
        1 : Success
  databases:
    {  "_id" : "zly",  "primary" : "shard2",  "partitioned" : true }
        zly.u1
            shard key: { "uid" : 1 }
            unique: false
            balancing: true
            chunks:
                shard2    1
            { "uid" : { "$minKey" : 1 } } -->> { "uid" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0)
    {  "_id" : "xmj",  "primary" : "shard2",  "partitioned" : true }
        xmj.u1
            shard key: { "uid" : 1 }
            unique: false
            balancing: true
            chunks:
                shard1    1
                shard2    2
            { "uid" : { "$minKey" : 1 } } -->> { "uid" : 200001 } on : shard1 Timestamp(2, 0)
            { "uid" : 200001 } -->> { "uid" : 200017 } on : shard2 Timestamp(2, 1)
            { "uid" : 200017 } -->> { "uid" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 3)
    {  "_id" : "xmj2",  "primary" : "shard1",  "partitioned" : false }


Test migrating the non-sharded database xmj2 to shard2:
mongos> use admin
switched to db admin
mongos>
mongos> db.runCommand({movePrimary:"xmj2",to:"shard2"})
{
    "primary" : "shard2:shard2/172.17.0.2:22002,172.17.0.3:22002,172.17.0.4:22002",
    "ok" : 1
}
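One caveat with movePrimary on MongoDB 3.4: other mongos instances may keep routing to the old primary from their cached metadata, so the usual advice is to restart every mongos or flush its routing cache. A sketch of the latter, run against each mongos:

mongos> use admin
mongos> db.runCommand({ flushRouterConfig: 1 })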
Check the migrated data:
shard2:PRIMARY> use xmj2
switched to db xmj2
shard2:PRIMARY> show dbs;
admin  0.000GB
local  0.004GB
xmj    0.004GB
xmj2   0.000GB
shard2:PRIMARY> db.u2.find()
{ "_id" : ObjectId("58bac9983a224d8a39836e48"), "uid" : 1, "username" : "jesse" }
Controlling collection placement, based on tag-aware sharding, requires three shards to try out.
Add the new replica set to the cluster as a shard:
mongos> sh.addShard("shard3/172.17.0.2:22003,172.17.0.3:22003,172.17.0.4:22003")
{ "shardAdded" : "shard3", "ok" : 1 }
mongos> use admin
switched to db admin

Check the status:
mongos> db.runCommand({"listshards":1})
{
    "shards" : [
        {
            "_id" : "shard2",
            "host" : "shard2/172.17.0.2:22002,172.17.0.3:22002,172.17.0.4:22002",
            "state" : 1
        },
        {
            "_id" : "shard1",
            "host" : "shard1/172.17.0.2:22001,172.17.0.3:22001,172.17.0.4:22001",
            "state" : 1
        },
        {
            "_id" : "shard3",
            "host" : "shard3/172.17.0.2:22003,172.17.0.3:22003,172.17.0.4:22003",
            "state" : 1
        }
    ],
    "ok" : 1
}

Create a tag for each shard:
mongos> sh.addShardTag("shard1","a1")
{ "ok" : 1 }
mongos> sh.addShardTag("shard2","b1")
{ "ok" : 1 }
mongos> sh.addShardTag("shard3","c1")
{ "ok" : 1 }

Monitor lock state (the locks collection lives in the config database):
mongos> db.locks.find()
{ "_id" : "balancer", "state" : 2, "ts" : ObjectId("58ac0f3133fb547d2a2f7a57"), "who" : "ConfigServer:Balancer", "process" : "ConfigServer", "when" : ISODate("2017-03-04T10:00:14.023Z"), "why" : "CSRS Balancer" }
{ "_id" : "xmj", "state" : 0, "ts" : ObjectId("58ac17443652ce40ecefee48"), "who" : "master:50000:1487672420:6860575566816274554:conn1", "process" : "master:50000:1487672420:6860575566816274554", "when" : ISODate("2017-03-04T10:32:36.540Z"), "why" : "enableSharding" }
{ "_id" : "xmj.u1", "state" : 0, "ts" : ObjectId("58ac0f3133fb547d2a2f7a57"), "who" : "ConfigServer:Balancer", "process" : "ConfigServer", "when" : ISODate("2017-02-22T03:20:19.030Z"), "why" : "Migrating chunk(s) in collection xmj.u1" }
{ "_id" : "xmj2-movePrimary", "state" : 0, "ts" : ObjectId("58ac21743652ce40ecf4dc69"), "who" : "master:50000:1487672420:6860575566816274554:conn4", "process" : "master:50000:1487672420:6860575566816274554", "when" : ISODate("2017-03-04T11:16:04.434Z"), "why" : "Moving primary shard of xmj2" }
{ "_id" : "movies", "state" : 0, "ts" : ObjectId("58ad08063652ce40ecf4dc9d"), "who" : "master:50000:1487672420:6860575566816274554:conn5", "process" : "master:50000:1487672420:6860575566816274554", "when" : ISODate("2017-02-22T03:39:50.952Z"), "why" : "enableSharding" }
{ "_id" : "movies.a1", "state" : 0, "ts" : ObjectId("58ad0c9f3652ce40ecf5045d"), "who" : "master:50000:1487672420:6860575566816274554:conn7", "process" : "master:50000:1487672420:6860575566816274554", "when" : ISODate("2017-02-22T03:59:27.285Z"), "why" : "drop" }
{ "_id" : "movies.b1", "state" : 0, "ts" : ObjectId("58ad0c963652ce40ecf50453"), "who" : "master:50000:1487672420:6860575566816274554:conn7", "process" : "master:50000:1487672420:6860575566816274554", "when" : ISODate("2017-02-22T03:59:18.942Z"), "why" : "drop" }
{ "_id" : "movies.c1", "state" : 0, "ts" : ObjectId("58ad0c993652ce40ecf50458"), "who" : "master:50000:1487672420:6860575566816274554:conn7", "process" : "master:50000:1487672420:6860575566816274554", "when" : ISODate("2017-02-22T03:59:21.956Z"), "why" : "drop" }
{ "_id" : "jesse", "state" : 0, "ts" : ObjectId("58ad3f4d3652ce40ecfa3ebe"), "who" : "master:50000:1487672420:6860575566816274554:conn10", "process" : "master:50000:1487672420:6860575566816274554", "when" : ISODate("2017-02-22T07:35:41.737Z"), "why" : "enableSharding" }
