
Database and Table Sharding with ShardingSphere

Lang
2021-10-27 / last updated 2022-01-21

I. Preface

To get read/write splitting, I previously used mybatis-plus's multi-datasource solution, which means putting an @DS("slave") annotation on every method that performs a read. Extremely tedious.
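
For context, a minimal sketch of what that looked like (assuming mybatis-plus's dynamic-datasource starter; the service and mapper names are illustrative, not from the original project):

import java.util.List;

import org.springframework.stereotype.Service;

import com.baomidou.dynamic.datasource.annotation.DS;
import com.baomidou.mybatisplus.extension.service.impl.ServiceImpl;

// Illustrative only: with the dynamic-datasource approach, every read method
// has to be pointed at the slave explicitly.
@Service
public class UserServiceImpl extends ServiceImpl<UserMapper, User> implements UserService {

    @DS("slave") // repeated on every read method -- the tedious part
    public List<User> listAll() {
        return list();
    }
}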

Later I noticed that my company's sharding setup used Sharding-JDBC, so I gave it a try, and it turned out to be remarkably smooth.

II. Some Concepts

1. Vertical table splitting

A single table has many columns, but only a few of them are accessed frequently. The rarely accessed columns and large text columns can be moved into one table, while the hot data stays in another.

2. Vertical database splitting

Vertical table splitting improves performance somewhat, but it may still fall short of requirements, and disk space keeps running low, because all the data is still confined to a single server. Splitting tables within one database only addresses individual tables growing too large; it does not distribute the tables across servers, so every table still competes for the same machine's CPU, memory, network I/O, and disk.

Vertical database splitting means grouping tables by business domain and distributing them across different databases, each of which can sit on its own server. Its core idea is a dedicated database per business concern.

  • Decouples business concerns at the data layer and keeps boundaries clear
  • Lets each business's data be managed, maintained, monitored, and scaled independently
  • Under high concurrency, raises the limits on I/O and database connections and eases single-machine hardware bottlenecks
  • By grouping tables by business, spreading them across databases, and deploying those databases on different servers, the load is shared by multiple machines; it still does not, however, solve the problem of a single table holding too much data.

3. Horizontal database splitting

Horizontal database splitting distributes rows of the same table across different databases according to some rule; each database can sit on its own server.

  • Resolves the performance bottleneck of a single database facing large data volumes and high concurrency.
  • Improves the system's stability and availability.

4. Horizontal table splitting

Horizontal table splitting distributes rows of the same table across multiple tables within the same database according to some rule (a concrete routing sketch follows the list below).

  • Mitigates the performance problems caused by a single oversized table
  • Avoids I/O contention and reduces the chance of table-level locks; each resulting table holds only part of the data, so individual tables stay small and lookups stay fast
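
To make the rule concrete, here is a minimal, illustrative Java sketch (names assumed, not from the original post) of the modulo routing that the inline strategy in the configuration below expresses as tab_user$->{id % 3}:

// Illustrative horizontal table routing over the sharding column.
public class TableRouter {

    private static final int TABLE_COUNT = 3;

    // Map a row's id to one of tab_user0, tab_user1, tab_user2.
    static String route(long id) {
        return "tab_user" + (id % TABLE_COUNT);
    }

    public static void main(String[] args) {
        System.out.println(route(7L)); // 7 % 3 = 1 -> tab_user1
    }
}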

III. Using ShardingSphere

1. Maven dependency

<dependency>
    <groupId>org.apache.shardingsphere</groupId>
    <artifactId>sharding-jdbc-spring-boot-starter</artifactId>
    <version>4.1.1</version>
</dependency>

2. Spring configuration

spring:
  autoconfigure:
    exclude:
      - org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration
      - com.alibaba.druid.spring.boot.autoconfigure.DruidDataSourceAutoConfigure
  shardingsphere:
    datasource:
      names: master0,slave0,slave1
      master0:
        type: com.alibaba.druid.pool.DruidDataSource
        driver-class-name: com.mysql.jdbc.Driver
        url: jdbc:mysql://xxxx:3306/yrblog1?characterEncoding=utf-8
        username: root
        password: xxx
      slave0:
        type: com.alibaba.druid.pool.DruidDataSource
        driver-class-name: com.mysql.jdbc.Driver
        url: jdbc:mysql://xxxx:3306/yrblog2?characterEncoding=utf-8
        username: root
        password: xxx
      slave1:
        type: com.alibaba.druid.pool.DruidDataSource
        driver-class-name: com.mysql.jdbc.Driver
        url: jdbc:mysql://xxx:3306/yrblog3?characterEncoding=utf-8
        username: root
        password: xxx
    sharding:
      tables:
        tab_user:
          actual-data-nodes: master0.tab_user$->{0..2} # the physical tables to split across
          table-strategy:
            inline:
              sharding-column: id # sharding column (here the primary key)
              algorithm-expression: tab_user$->{id % 3} # route by id modulo 3
          key-generator:
            column: id
            type: SNOWFLAKE
      master-slave-rules:
        master0:
          master-data-source-name: master0 # the master datasource
          slave-data-source-names: slave0,slave1 # the slave datasources
    props:
      sql:
        show: true

The heart of the whole setup is this YAML configuration:

  • spring.autoconfigure.exclude excludes the auto-configuration of the other datasources
  • spring.shardingsphere.datasource configures the master and slave datasources; here, one master and two slaves
  • spring.shardingsphere.sharding is where the sharding strategy is configured
  • spring.shardingsphere.props enables SQL logging

With that, the configuration is done. Master-slave replication itself still has to be set up in MySQL. Beyond that, no code changes are needed at all: you use mybatis exactly as if there were a single database and a single table.
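
For instance, writes always go to the master. A hedged sketch (the add endpoint and the User setter are illustrative, not from the original post):

// Illustrative: inserts are routed to master0; the SNOWFLAKE key-generator
// fills in id, and id % 3 then picks the physical table.
@RequestMapping("add")
public boolean add(String name) {
    User user = new User();
    user.setName(name);
    return userService.save(user);
}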

IV. Testing

@RequestMapping("list")
public List<User> list(){
    List<User> users = userService.list();
    return  users;
}

Hitting this query-all endpoint, we can see from the logs that reads are automatically sent to the slave databases, alternating between slave0 and slave1, and that the result is always the union of the three tables tab_user0, tab_user1, and tab_user2:

Logic SQL: SELECT  id,name  FROM tab_user
SQLStatement: SelectStatementContext
Actual SQL: slave1 ::: SELECT  id,name  FROM tab_user0
Actual SQL: slave0 ::: SELECT  id,name  FROM tab_user1
Actual SQL: slave1 ::: SELECT  id,name  FROM tab_user2
Logic SQL: SELECT  id,name  FROM tab_user
SQLStatement: SelectStatementContext
Actual SQL: slave0 ::: SELECT  id,name  FROM tab_user0
Actual SQL: slave1 ::: SELECT  id,name  FROM tab_user1
Actual SQL: slave0 ::: SELECT  id,name  FROM tab_user2
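
By contrast, a query that carries the sharding column is routed to a single physical table. A hedged sketch (assuming the mybatis-plus getById helper on the same service; the endpoint is illustrative):

// Illustrative: selecting by the sharding column id lets ShardingSphere hit
// exactly one table, e.g. id = 4 -> 4 % 3 = 1 -> only tab_user1 is queried.
@RequestMapping("get")
public User get(Long id) {
    return userService.getById(id);
}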

Enterprise-grade sharding configuration

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xmlns:sharding="http://shardingjdbc.io/schema/shardingjdbc/sharding"
       xmlns:master-slave="http://shardingjdbc.io/schema/shardingjdbc/masterslave"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                        http://www.springframework.org/schema/beans/spring-beans.xsd
                        http://www.springframework.org/schema/context
                        http://www.springframework.org/schema/context/spring-context.xsd
                        http://www.springframework.org/schema/tx
                        http://www.springframework.org/schema/tx/spring-tx.xsd
                        http://shardingjdbc.io/schema/shardingjdbc/sharding
                        http://shardingjdbc.io/schema/shardingjdbc/sharding/sharding.xsd
                        http://shardingjdbc.io/schema/shardingjdbc/masterslave
                        http://shardingjdbc.io/schema/shardingjdbc/masterslave/master-slave.xsd">
    <!-- alibaba druid 父数据源-->
    <bean id="parentDataSource" class="com.alibaba.druid.pool.DruidDataSource" abstract="true" init-method="init" destroy-method="close">
        <property name="driverClassName" value="${jdbc.driver:com.mysql.jdbc.Driver}" />
        <!-- 基本属性 url、product、password -->
        <property name="username" value="${jdbc.user}" />
        <property name="password" value="${jdbc.pwd}" />
        <!-- 配置初始化大小、最小、最大 -->
        <property name="initialSize" value="${jdbc.init:5}" />
        <property name="minIdle" value="${jdbc.minConn:5}" />
        <property name="maxActive" value="${jdbc.maxConn:20}" />
        <!-- 配置获取连接等待超时的时间 -->
        <property name="maxWait" value="${jdbc0.maxWait:60000}" />
        <!-- 配置间隔多久才进行一次检测,检测需要关闭的空闲连接,单位是毫秒 -->
        <property name="timeBetweenEvictionRunsMillis" value="${jdbc.timeBetweenEvictionRunsMillis:60000}" />
        <!-- 配置一个连接在池中最小生存的时间,单位是毫秒 -->
        <property name="minEvictableIdleTimeMillis" value="${jdbc.minEvictableIdleTimeMillis:90000}" />
        <property name="validationQuery" value="SELECT 'x'" />
        <property name="testWhileIdle" value="true" />
        <property name="testOnBorrow" value="true" />
        <property name="testOnReturn" value="false" />
        <!-- 打开PSCache,并且指定每个连接上PSCache的大小 -->
        <property name="poolPreparedStatements" value="true" />
        <property name="maxPoolPreparedStatementPerConnectionSize" value="20" />
        <!-- 配置监控统计拦截的filters -->
        <property name="filters" value="stat" />
        <property name="connectionInitSqls" value="set names utf8mb4;"/>
    </bean>

    <!-- db 数据源0 -->
    <bean id="db_m_0" parent="parentDataSource">
        <property name="url" value="${jdbc.m0.url}"/>
    </bean>
    <bean id="db_s_0" parent="parentDataSource">
        <property name="url" value="${jdbc.s0.url}"/>
    </bean>

    <!-- db 数据源1 -->
    <bean id="db_m_1" parent="parentDataSource">
        <property name="url" value="${jdbc.m1.url}"/>
    </bean>
    <bean id="db_s_1" parent="parentDataSource">
        <property name="url" value="${jdbc.s1.url}"/>
    </bean>

      <!--主从配置-->
    <master-slave:data-source id="db_ms_0" master-data-source-name="db_m_0" slave-data-source-names="db_s_0"/>

    <master-slave:data-source id="db_ms_1" master-data-source-name="db_m_1" slave-data-source-names="db_s_1"/>
    <!-- 分库策略 -->
    <sharding:standard-strategy id="databaseStrategy" precise-algorithm-class="com.zbmy.rabbit.channel.server.sharding.DatabaseShardingAlgorithm" sharding-column="org_id"/>
    <!--分表策略-->
    <sharding:standard-strategy id="channelTableStrategy" precise-algorithm-class="com.zbmy.rabbit.channel.server.sharding.ChannelTableShardingAlgorithm" sharding-column="org_id"/>
    <sharding:standard-strategy id="channelPushRecordTableStrategy" precise-algorithm-class="com.zbmy.rabbit.channel.server.sharding.ChannelPushRecordTableShardingAlgorithm" sharding-column="task_id"/>
    <sharding:standard-strategy id="channelTaskMappingTableStrategy" precise-algorithm-class="com.zbmy.rabbit.channel.server.sharding.ChannelTaskMappingTableShardingAlgorithm" sharding-column="channel_id"/>

    <sharding:standard-strategy id="openidChannelRelTableShardingAlgorithm" precise-algorithm-class="com.zbmy.rabbit.channel.server.sharding.OpenidChannelRelTableShardingAlgorithm" sharding-column="openid"/>
    <sharding:standard-strategy id="scanSubEventTableShardingAlgorithm" precise-algorithm-class="com.zbmy.rabbit.channel.server.sharding.ScanSubEventTableShardingAlgorithm" sharding-column="channel_id"/>
    <sharding:standard-strategy id="channelRoleTableShardingAlgorithm" precise-algorithm-class="com.zbmy.rabbit.channel.server.sharding.ChannelRoleTableShardingAlgorithm" sharding-column="org_id"/>
    <sharding:standard-strategy id="channelPointRecordTableShardingAlgorithm" precise-algorithm-class="com.zbmy.rabbit.channel.server.sharding.ChannelPointRecordTableShardingAlgorithm" sharding-column="channel_id"/>
    <sharding:standard-strategy id="taskNoticeTableShardingAlgorithm" precise-algorithm-class="com.zbmy.rabbit.channel.server.sharding.TaskNoticeTableShardingAlgorithm" sharding-column="task_id"/>
    <sharding:standard-strategy id="channelTempQrcodeTableShardingAlgorithm" precise-algorithm-class="com.zbmy.rabbit.channel.server.sharding.ChannelTempQrcodeTableShardingAlgorithm" sharding-column="org_id"/>
    <sharding:standard-strategy id="taskInterestsExchangeTableShardingAlgorithm" precise-algorithm-class="com.zbmy.rabbit.channel.server.sharding.TaskInterestsExchangeTableShardingAlgorithm" sharding-column="org_id"/>
    <sharding:standard-strategy id="channelEmployeeTagRelTablesShardingAlgorithm"  precise-algorithm-class="com.zbmy.rabbit.channel.server.sharding.ChannelEmployeeTagRelTablesShardingAlgorithm"  sharding-column="org_id" />
    <sharding:standard-strategy id="channelPushTaskNoticeChannelRelTablesShardingAlgorithm" precise-algorithm-class="com.zbmy.rabbit.channel.server.sharding.ChannelPushTaskNoticeChannelRelTablesShardingAlgorithm" sharding-column="task_id"/>
    <sharding:standard-strategy id="materialClickRecordTablesShardingAlgorithm"  precise-algorithm-class="com.zbmy.rabbit.channel.server.sharding.MaterialClickRecordTablesShardingAlgorithm"  sharding-column="org_id" />
    <sharding:standard-strategy id="temporaryMaterialRecordTablesShardingAlgorithm" precise-algorithm-class="com.zbmy.rabbit.channel.server.sharding.TemporaryMaterialRecordTablesShardingAlgorithm" sharding-column="org_id"/>

    <sharding:data-source id="shardingDataSource">
        <sharding:sharding-rule data-source-names="db_ms_0,db_ms_1" default-data-source-name="db_ms_0">
            <sharding:table-rules>
                <sharding:table-rule logic-table="t_channel" actual-data-nodes="db_ms_${0..1}.t_channel_${0..63}" database-strategy-ref="databaseStrategy" table-strategy-ref="channelTableStrategy" />
                <sharding:table-rule logic-table="t_channel_role_rel" actual-data-nodes="db_ms_${0..1}.t_channel_role_rel_${0..31}" database-strategy-ref="databaseStrategy" table-strategy-ref="channelRoleTableShardingAlgorithm" />
                <sharding:table-rule logic-table="t_channel_push_record" actual-data-nodes="db_ms_${0..1}.t_channel_push_record_${0..127}" database-strategy-ref="databaseStrategy" table-strategy-ref="channelPushRecordTableStrategy" />
                <sharding:table-rule logic-table="t_channel_task_mapping" actual-data-nodes="db_ms_${0..1}.t_channel_task_mapping_${0..63}" database-strategy-ref="databaseStrategy" table-strategy-ref="channelTaskMappingTableStrategy" />
                <sharding:table-rule logic-table="t_channel_group" actual-data-nodes="db_ms_${0..1}.t_channel_group" database-strategy-ref="databaseStrategy"/>
                <sharding:table-rule logic-table="t_channel_link" actual-data-nodes="db_ms_${0..1}.t_channel_link" database-strategy-ref="databaseStrategy"/>
                <sharding:table-rule logic-table="t_channel_push_task" actual-data-nodes="db_ms_${0..1}.t_channel_push_task" database-strategy-ref="databaseStrategy"/>
                <sharding:table-rule logic-table="t_openid_channel_rel" actual-data-nodes="db_ms_${0..1}.t_openid_channel_rel_${0..7}" database-strategy-ref="databaseStrategy" table-strategy-ref="openidChannelRelTableShardingAlgorithm" />
                <sharding:table-rule logic-table="t_scan_sub_event" actual-data-nodes="db_ms_${0..1}.t_scan_sub_event_${0..31}" database-strategy-ref="databaseStrategy" table-strategy-ref="scanSubEventTableShardingAlgorithm" />
                <sharding:table-rule logic-table="t_channel_push_task_poster" actual-data-nodes="db_ms_${0..1}.t_channel_push_task_poster" database-strategy-ref="databaseStrategy"/>
                <sharding:table-rule logic-table="t_channel_push_task_dept_rel" actual-data-nodes="db_ms_${0..1}.t_channel_push_task_dept_rel" database-strategy-ref="databaseStrategy"/>
                <sharding:table-rule logic-table="t_channel_point_record" actual-data-nodes="db_ms_${0..1}.t_channel_point_record_${0..31}" database-strategy-ref="databaseStrategy" table-strategy-ref="channelPointRecordTableShardingAlgorithm"/>
                <sharding:table-rule logic-table="t_task_interests" actual-data-nodes="db_ms_${0..1}.t_task_interests" database-strategy-ref="databaseStrategy"/>
                <sharding:table-rule logic-table="t_task_notice" actual-data-nodes="db_ms_${0..1}.t_task_notice_${0..31}" database-strategy-ref="databaseStrategy" table-strategy-ref="taskNoticeTableShardingAlgorithm" />
                <sharding:table-rule logic-table="t_channel_temp_qrcode" actual-data-nodes="db_ms_${0..1}.t_channel_temp_qrcode_${0..7}" database-strategy-ref="databaseStrategy" table-strategy-ref="channelTempQrcodeTableShardingAlgorithm" />
                <sharding:table-rule logic-table="t_task_target" actual-data-nodes="db_ms_${0..1}.t_task_target" database-strategy-ref="databaseStrategy"/>
                <sharding:table-rule logic-table="t_task_notice_batch" actual-data-nodes="db_ms_${0..1}.t_task_notice_batch" database-strategy-ref="databaseStrategy"/>
                <sharding:table-rule logic-table="t_task_interests_exchange" actual-data-nodes="db_ms_${0..1}.t_task_interests_exchange_${0..7}" database-strategy-ref="databaseStrategy" table-strategy-ref="taskInterestsExchangeTableShardingAlgorithm"/>
                <sharding:table-rule logic-table="t_channel_show_business" actual-data-nodes="db_ms_${0..1}.t_channel_show_business" database-strategy-ref="databaseStrategy"/>
                <sharding:table-rule logic-table="t_employee_tag" actual-data-nodes="db_ms_${0..1}.t_employee_tag" database-strategy-ref="databaseStrategy"/>
                <sharding:table-rule logic-table="t_employee_tag_rel" actual-data-nodes="db_ms_${0..1}.t_employee_tag_rel_${0..7}" database-strategy-ref="databaseStrategy" table-strategy-ref="channelEmployeeTagRelTablesShardingAlgorithm" />
                <sharding:table-rule logic-table="t_channel_push_task_notice_channel_rel" actual-data-nodes="db_ms_${0..1}.t_channel_push_task_notice_channel_rel_${0..127}" database-strategy-ref="databaseStrategy" table-strategy-ref="channelPushTaskNoticeChannelRelTablesShardingAlgorithm" />
                <sharding:table-rule logic-table="t_material_click_record" actual-data-nodes="db_ms_${0..1}.t_material_click_record_${0..7}" database-strategy-ref="databaseStrategy" table-strategy-ref="materialClickRecordTablesShardingAlgorithm" />
                <sharding:table-rule logic-table="t_temporary_material_record" actual-data-nodes="db_ms_${0..1}.t_temporary_material_record_${0..7}" database-strategy-ref="databaseStrategy" table-strategy-ref="temporaryMaterialRecordTablesShardingAlgorithm" />
                <sharding:table-rule logic-table="t_channel_select_artificial_label" actual-data-nodes="db_ms_${0..1}.t_channel_select_artificial_label" database-strategy-ref="databaseStrategy"/>
            </sharding:table-rules>
        </sharding:sharding-rule>
        <sharding:props>
        	<prop key="metrics.enable">${metrics.enable:true}</prop>
            <prop key="sql.show">${sql.show:true}</prop>
        </sharding:props>
    </sharding:data-source>

    <!-- 事务管理器 -->
    <bean id="transactionManagerSharding" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
        <property name="dataSource" ref="dynamicDataSource"/>
    </bean>

    <bean id="sqlSessionFactorySharding" class="org.mybatis.spring.SqlSessionFactoryBean">
        <property name="dataSource" ref="dynamicDataSource" />
        <property name="configLocation" value="classpath:mybatis/mybatis-config.xml" />
        <!-- mapper和resultmap配置路径 -->
        <property name="mapperLocations">
            <list>
                <value>classpath:mybatis/sql/**/*Mapper.xml</value>
            </list>
        </property>
    </bean>

    <!-- 扫描指定路径下的Mapper -->
    <bean class="org.mybatis.spring.mapper.MapperScannerConfigurer">
        <property name="basePackage" value="com.zbmy.rabbit.channel.server.mapper" />
        <property name="sqlSessionFactoryBeanName" value="sqlSessionFactorySharding"/>
    </bean>

    <bean class="com.zbmy.rabbit.channel.server.sharding.SpringUtils"></bean>

    <tx:annotation-driven transaction-manager="transactionManagerSharding" proxy-target-class="true" />
    <!--忽略spring解析不到的属性-->
    <context:property-placeholder ignore-unresolvable="true"/>
</beans><!---->

Sharding greatly increases maintenance costs. If your business volume doesn't force the issue, don't shard.
