<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[kevin]]></title><description><![CDATA[Goodbye is not a farewell, but a promise]]></description><link>http://blog.liu-kevin.com/</link><image><url>http://blog.liu-kevin.com/favicon.png</url><title>kevin</title><link>http://blog.liu-kevin.com/</link></image><generator>Ghost 1.26</generator><lastBuildDate>Mon, 27 Apr 2026 15:26:02 GMT</lastBuildDate><atom:link href="http://blog.liu-kevin.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[expect on Linux]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Suppose there is a script named test.sh.</p>
<p>During execution it prompts for a password; this can be automated with expect.</p>
<p>Assuming the password is 123456 and a Password prompt appears when input is required, the following script handles it:</p>
<pre><code>#!/usr/bin/expect -f

# Set the timeout to 60 seconds
set timeout 60

# Launch the target script
spawn ./test.sh

# Wait for the password prompt string; the common &quot;Password:&quot; is used here
expect {
    &quot;*Password:*&quot; {
        send &quot;123456\r&quot;
        exp_continue
    }
}
</code></pre>
<p>If multiple prompts must be answered:</p>
<pre><code>#!/usr/bin/expect -f

# Set the timeout to 60 seconds
set timeout 60

# Launch the target script
spawn ./test.sh</code></pre></div>]]></description><link>http://blog.liu-kevin.com/2026/01/12/linuxzhi-expect/</link><guid isPermaLink="false">6964e7536ec746000182dc95</guid><category><![CDATA[linux]]></category><dc:creator><![CDATA[凯文]]></dc:creator><pubDate>Mon, 12 Jan 2026 12:23:34 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Suppose there is a script named test.sh.</p>
<p>During execution it prompts for a password; this can be automated with expect.</p>
<p>Assuming the password is 123456 and a Password prompt appears when input is required, the following script handles it:</p>
<pre><code>#!/usr/bin/expect -f

# Set the timeout to 60 seconds
set timeout 60

# Launch the target script
spawn ./test.sh

# Wait for the password prompt string; the common &quot;Password:&quot; is used here
expect {
    &quot;*Password:*&quot; {
        send &quot;123456\r&quot;
        exp_continue
    }
}
</code></pre>
<p>If multiple prompts must be answered:</p>
<pre><code>#!/usr/bin/expect -f

# Set the timeout to 60 seconds
set timeout 60

# Launch the target script
spawn ./test.sh

# Wait for the password prompt string; the common &quot;Password:&quot; is used here
expect {
    &quot;*Password:*&quot; {
        send &quot;123456\r&quot;
        exp_continue
    }
    &quot;*Option&gt;:*&quot; {
        send &quot;3\r&quot;
        exp_continue
    }
}
</code></pre>
<p>With this, the script sends 123456 at the Password prompt and 3 at the Option prompt.</p>
<p>Passing arguments</p>
<pre><code>#!/usr/bin/expect -f

# Require at least one argument
if {[llength $argv] &lt; 1} {
    puts &quot;At least one argument is required&quot;
    exit 1
}

set target [lindex $argv 0]

# Set the timeout to 60 seconds
set timeout 60

# Launch the target script
spawn ./sshdd.sh

# Wait for the password prompt string; the common &quot;Password:&quot; is used here
expect {
    &quot;*Password*&quot; {
        send &quot;xxxxxx\r&quot;
        exp_continue
    }

    &quot;*Option&gt;:*&quot; {
        send &quot;3\r&quot;
        exp_continue
    }
    &quot;*op-sven-opsec01.gz01*&quot; {
        send &quot;dssh $target\r&quot;
        exp_continue
    }
}
</code></pre>
<p>The script above automates logging in to a server at a given IP.</p>
<p>As written, the script exits 60 seconds after the scripted steps finish. To stay in the resulting session, interact is needed.</p>
<p>The script quits right after the scripted interaction because control is never handed back to the user, so the whole process terminates. This is what happens when the interact command is not used.</p>
<pre><code>#!/usr/bin/expect -f

# Require at least one argument
if {[llength $argv] &lt; 1} {
    puts &quot;At least one argument is required&quot;
    exit 1
}

set target [lindex $argv 0]

# Set the timeout to 60 seconds
set timeout 60

# Launch the target script
spawn ./sshdd.sh

# Wait for the password prompt string; the common &quot;Password:&quot; is used here
expect {
    &quot;*Password*&quot; {
        send &quot;xxxxxx\r&quot;
        exp_continue
    }

    &quot;*Option&gt;:*&quot; {
        send &quot;3\r&quot;
        exp_continue
    }
    &quot;*op-sven-opsec01.gz01*&quot; {
        send &quot;dssh $target\r&quot;
        exp_continue
    }
}

interact

</code></pre>
<p>With the script above, control is handed back to the user only after 60 seconds with no further matches. To return control immediately after a particular step, for example right after op-sven-opsec01.gz01, use the following:</p>
<pre><code>#!/usr/bin/expect -f

# Require at least one argument
if {[llength $argv] &lt; 1} {
    puts &quot;At least one argument is required&quot;
    exit 1
}

set target [lindex $argv 0]

# Set the timeout to 60 seconds
set timeout 60

# Launch the target script
spawn ./sshdd.sh

# Wait for the password prompt string; the common &quot;Password:&quot; is used here
expect {
    &quot;*Password*&quot; {
        send &quot;xxxxxx\r&quot;
        exp_continue
    }

    &quot;*Option&gt;:*&quot; {
        send &quot;3\r&quot;
        exp_continue
    }
    &quot;*op-sven-opsec01.gz01*&quot; {
        send &quot;dssh $target\r&quot;
# Wait 30 seconds after sending the command
        after 30000
        interact
    }
}

interact
</code></pre>
</div>]]></content:encoded></item><item><title><![CDATA[Service startup error: IOException parsing XML document from URL]]></title><description><![CDATA[<div class="kg-card-markdown"><h1 id>Error</h1>
<pre><code>org.springframework.beans.factory.BeanDefinitionStoreException: IOException parsing XML document from URL [jar:file:/home/xiaoju/freight-driver-mall/lib/freight-driver-mall-dal-1.0.0-SNAPSHOT.jar!/freight-driver-mall-mybatis.xml]; nested exception is java.net.ConnectException: Connection timed out
        at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:413)
        at org.springframework.beans.factory.xml.</code></pre></div>]]></description><link>http://blog.liu-kevin.com/2025/12/05/fu-wu-qi-dong-bao-cuo/</link><guid isPermaLink="false">6932a9906ec746000182dc87</guid><dc:creator><![CDATA[凯文]]></dc:creator><pubDate>Fri, 05 Dec 2025 09:57:15 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><h1 id>Error</h1>
<pre><code>org.springframework.beans.factory.BeanDefinitionStoreException: IOException parsing XML document from URL [jar:file:/home/xiaoju/freight-driver-mall/lib/freight-driver-mall-dal-1.0.0-SNAPSHOT.jar!/freight-driver-mall-mybatis.xml]; nested exception is java.net.ConnectException: Connection timed out
        at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:413)
        at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:338)
        at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:310)
        at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:196)
        at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:232)
        at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:203)
        at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.lambda$loadBeanDefinitionsFromImportedResources$0(ConfigurationClassBeanDefinitionReader.java:390)
        at java.base/java.util.LinkedHashMap.forEach(LinkedHashMap.java:986)
        at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitionsFromImportedResources(ConfigurationClassBeanDefinitionReader.java:354)
        at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitionsForConfigurationClass(ConfigurationClassBeanDefinitionReader.java:156)
        at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitions(ConfigurationClassBeanDefinitionReader.java:129)
        at org.springframework.context.annotation.ConfigurationClassPostProcessor.processConfigBeanDefinitions(ConfigurationClassPostProcessor.java:343)
        at org.springframework.context.annotation.ConfigurationClassPostProcessor.postProcessBeanDefinitionRegistry(ConfigurationClassPostProcessor.java:247)
        at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanDefinitionRegistryPostProcessors(PostProcessorRegistrationDelegate.java:311)
        at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:112)
        at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:756)
        at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:573)
        at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:147)
        at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:732)
        at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:409)
        at org.springframework.boot.SpringApplication.run(SpringApplication.java:308)
        at com.didichuxing.framework.boot.DidiBootApplication.run(DidiBootApplication.java:47)
        at com.didichuxing.framework.boot.DidiBootApplication.run(DidiBootApplication.java:66)
        at com.didichuxing.framework.boot.DidiBootApplication.run(DidiBootApplication.java:59)
</code></pre>
<h1 id>Investigation</h1>
<p>While the service starts, netstat shows a TCP connection stuck being established:</p>
<pre><code>netstat -ant | grep -E 'SYN_SENT|SYN_RECV'
</code></pre>
<p>The stack trace above points to freight-driver-mall-mybatis.xml,<br>
so the unreachable address must be something referenced in that file:</p>
<pre><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot; ?&gt;
&lt;!DOCTYPE configuration PUBLIC &quot;-//mybatis.org//DTD Config 3.0//EN&quot;
        &quot;http://mybatis.org/dtd/mybatis-3-config.dtd&quot;&gt;
&lt;configuration&gt;
</code></pre>
<p>Namely <code>http://mybatis.org/dtd/mybatis-3-config.dtd</code>: the mybatis.org domain is unreachable from the host, so resolving the DTD over the network times out.</p>
<h1 id>Solutions</h1>
<h3 id="mybatisorg">Option 1: restore network access to mybatis.org</h3>
<h3 id="orgapacheibatisbuilderxmlmybatis3mapperdtd">Option 2: switch to the classpath copy org/apache/ibatis/builder/xml/mybatis-3-mapper.dtd</h3>
<pre><code>&lt;!DOCTYPE mapper
        PUBLIC &quot;-//mybatis.org//DTD Mapper 3.0//EN&quot;
        &quot;org/apache/ibatis/builder/xml/mybatis-3-mapper.dtd&quot;&gt;
</code></pre>
</div>]]></content:encoded></item><item><title><![CDATA[Apollo issue: unable to load the corresponding configs]]></title><description><![CDATA[<div class="kg-card-markdown"><h1 id>Problem</h1>
<p>After a service restart, the Apollo config data inside the service is gone; once a single config is edited, that edited entry becomes readable again.</p>
<h1 id="apollo">Apollo data initialization code</h1>
<ul>
<li>On service startup, a PostConstruct method calls process with no configName</li>
<li>On a config change, only the affected configName is re-initialized; process is called with a configName value</li>
</ul>
<pre><code>private void process(String configName) {
        synchronized (LOCK) {
            if (StringUtils.isBlank(configName)) {//No1
                // full refresh
                Condition condition = new Condition();
                condition.with(&quot;__key&quot;, &quot;value&quot;);//No3
                Collection&lt;Config&gt; allConfigs = Apollo.getConfigsByNamespaceAndConditions(this.namespace(), condition).getAllConfigs();//No2
                for (Config</code></pre></div>]]></description><link>http://blog.liu-kevin.com/2025/11/20/apollowen-ti-wu-fa-huo-qu-dao-dui-ying-de-pei-zhi/</link><guid isPermaLink="false">691f200e6ec746000182dc7b</guid><dc:creator><![CDATA[凯文]]></dc:creator><pubDate>Thu, 20 Nov 2025 14:32:35 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><h1 id>Problem</h1>
<p>After a service restart, the Apollo config data inside the service is gone; once a single config is edited, that edited entry becomes readable again.</p>
<h1 id="apollo">Apollo data initialization code</h1>
<ul>
<li>On service startup, a PostConstruct method calls process with no configName</li>
<li>On a config change, only the affected configName is re-initialized; process is called with a configName value</li>
</ul>
<pre><code>private void process(String configName) {
        synchronized (LOCK) {
            if (StringUtils.isBlank(configName)) {//No1
                // full refresh
                Condition condition = new Condition();
                condition.with(&quot;__key&quot;, &quot;value&quot;);//No3
                Collection&lt;Config&gt; allConfigs = Apollo.getConfigsByNamespaceAndConditions(this.namespace(), condition).getAllConfigs();//No2
                for (Config config : allConfigs) {
                    T obj = parseConfig(config);
                    if (obj != null) {
                        context.put(config.getConfigName(), obj);
                    }
                }
            } else {
                // single config change
                Config config = Apollo.getConfig(this.namespace(), configName);
                T obj = parseConfig(config);
                if (obj != null) {
                    context.put(config.getConfigName(), obj);
                }
            }
        }
    }
</code></pre>
<h1 id>Investigation</h1>
<ul>
<li>Edit a config on the node</li>
<li>Attach a remote debugger from a local machine</li>
<li>When execution reaches No1, set configName to an empty string</li>
<li>Step through to No2</li>
<li>Apollo.getConfigsByNamespaceAndConditions calls getMatchConfigsByCondition(namespaceConfig, condition, configNameRules.value())</li>
<li>Inside that method, buildConfigNameHashValues(configNameRules, condition) hashes the condition's filter fields against the fields declared in the Apollo operations template's id rules<br>
<img src="http://blog.liu-kevin.com/content/images/2025/11/D-Chat_20251120222348.png" alt="D-Chat_20251120222348"></li>
<li>isNotMatchConfigName then checks whether the template's uniqueness-rule values match and filters out configs that do not<br>
<img src="http://blog.liu-kevin.com/content/images/2025/11/D-Chat_20251120222714.png" alt="D-Chat_20251120222714"></li>
</ul>
<h1 id>Root cause</h1>
<ul>
<li>Calling Apollo.getConfigsByNamespaceAndConditions without a condition fails (return NamespaceConfig.createInvalidResult(namespace, &quot;condition is null&quot;);)</li>
<li>So everyone passes some condition (No3)</li>
<li>with a key and value that were written arbitrarily</li>
<li>By bad luck, the template's uniqueness field here is exactly key, so the full refresh queried for configs whose key field equals value, found none, and initialized nothing</li>
<li>From the code, the condition cannot be omitted, but an empty condition might work; that is unverified, so for now the fix is simply to change the condition's key to avoid other surprises</li>
</ul>
<pre><code>Condition condition = new Condition();
condition.with(&quot;__key&quot;, &quot;value&quot;);
</code></pre>
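<p>The root cause can be reproduced as a standalone sketch: filtering a config list with a condition whose key/value pair exists in no config matches nothing, so the full refresh initializes an empty context. The names below (matches, filter) are illustrative, not the actual Apollo internals:</p>

```java
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.stream.Collectors;

public class ConditionFilterSketch {

    // A config is modeled as a map of field name to value; matches() mirrors the
    // uniqueness-rule check: every condition field must equal the config's value.
    static boolean matches(Map<String, String> config, Map<String, String> condition) {
        return condition.entrySet().stream()
                .allMatch(e -> Objects.equals(config.get(e.getKey()), e.getValue()));
    }

    static List<Map<String, String>> filter(List<Map<String, String>> configs,
                                            Map<String, String> condition) {
        return configs.stream().filter(c -> matches(c, condition)).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Map<String, String>> configs = List.of(
                Map.of("key", "cityA", "content", "[]"),
                Map.of("key", "cityB", "content", "{}"));

        // The arbitrary condition from the post: no config has key equal to "value",
        // so the "full refresh" sees zero configs and initializes nothing.
        System.out.println(filter(configs, Map.of("key", "value")).size()); // 0
        // A condition on a value that actually exists matches as expected.
        System.out.println(filter(configs, Map.of("key", "cityA")).size()); // 1
    }
}
```

Renaming the condition key so it no longer collides with a real uniqueness field is exactly the interim fix described above.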
</div>]]></content:encoded></item><item><title><![CDATA[MyBatis error: java.lang.IllegalStateException: Cannot determine target DataSource for lookup key [null]]]></title><description><![CDATA[<div class="kg-card-markdown"><h1 id>Problem</h1>
<p>With the Maven-generated mapper files, selects work, but inserts fail with:</p>
<pre><code>org.apache.ibatis.executor.ExecutorException: Error selecting key or setting result to parameter object. Cause: java.lang.IllegalStateException: Cannot determine target DataSource for lookup key [null]
</code></pre>
<h1 id>Cause</h1>
<p>Tracing the source shows the generated insert in mapper.xml contains:</p>
<pre><code>&lt;selectKey keyProperty=&quot;taskId&quot; order=&quot;AFTER&quot; resultType=&quot;java.lang.Long&quot;&gt;</code></pre></div>]]></description><link>http://blog.liu-kevin.com/2025/06/27/mybaticsbao-cuo-java-lang-illegalstateexception-cannot-determine-target-datasource-for-lookup-key-null/</link><guid isPermaLink="false">685e02a6d3ec870001342e79</guid><dc:creator><![CDATA[凯文]]></dc:creator><pubDate>Fri, 27 Jun 2025 02:38:52 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><h1 id>Problem</h1>
<p>With the Maven-generated mapper files, selects work, but inserts fail with:</p>
<pre><code>org.apache.ibatis.executor.ExecutorException: Error selecting key or setting result to parameter object. Cause: java.lang.IllegalStateException: Cannot determine target DataSource for lookup key [null]
</code></pre>
<h1 id>Cause</h1>
<p>Tracing the source shows the generated insert in mapper.xml contains:</p>
<pre><code>&lt;selectKey keyProperty=&quot;taskId&quot; order=&quot;AFTER&quot; resultType=&quot;java.lang.Long&quot;&gt;
  SELECT LAST_INSERT_ID()
&lt;/selectKey&gt;
</code></pre>
<p>while the datasource is bound via an annotation on the mapper interface:</p>
<pre><code>@Mapper
@SelectDataSource(&quot;ds2&quot;)
</code></pre>
<p><img src="http://blog.liu-kevin.com/content/images/2025/06/D-Chat_20250627103605.png" alt="D-Chat_20250627103605"></p>
<p>When this selectKey statement executes, no matching datasource can be determined.</p>
<h1 id>Solutions</h1>
<h3 id>Option 1</h3>
<p>Remove the selectKey from mapper.xml,<br>
or change the insert to:</p>
<pre><code>&lt;insert id=&quot;insert&quot; keyColumn=&quot;id&quot; keyProperty=&quot;id&quot; useGeneratedKeys=&quot;true&quot;
</code></pre>
<h3 id>Option 2</h3>
<p>Replace @SelectDataSource(&quot;ds2&quot;) with the following:</p>
<pre><code>@MapperScan(basePackages = {&quot;com.didichuxing.dd596.driver.mall.dal.mapper.apply&quot;,
        &quot;com.didichuxing.dd596.driver.mall.dal.mapper.stock&quot;,
        &quot;com.didichuxing.dd596.driver.mall.dal.mapper.task&quot; ,
        &quot;com.didichuxing.dd596.driver.mall.dal.mapper.qrcode&quot;,
        &quot;com.didichuxing.dd596.driver.mall.dal.mapper.record&quot;,
        &quot;com.didichuxing.dd596.driver.mall.dal.mapper.redeem&quot;,
        &quot;com.didichuxing.dd596.driver.mall.dal.mapper.withhold&quot;
},
        sqlSessionFactoryRef = &quot;driverMallSqlSessionFactory&quot;)
</code></pre>
</div>]]></content:encoded></item><item><title><![CDATA[Debugging in pre-production]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Do not debug transactional operations in the pre-production environment,<br>
since a missing index or similar issue can keep a table locked and impact production.</p>
</div>]]></description><link>http://blog.liu-kevin.com/2025/03/02/yu-fa/</link><guid isPermaLink="false">67c40b24d3ec870001342e4e</guid><dc:creator><![CDATA[凯文]]></dc:creator><pubDate>Sun, 02 Mar 2025 07:40:03 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Do not debug transactional operations in the pre-production environment,<br>
since a missing index or similar issue can keep a table locked and impact production.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Manually starting a transaction in Spring]]></title><description><![CDATA[<div class="kg-card-markdown"><p>When a Spring method is private, or a method is called without going through the proxy, transactions cannot be handled with @Transactional and must be managed manually, as follows:</p>
<pre><code>// Specify the transaction manager bound to the target datasource
@Resource(name = &quot;driverMallTx&quot;)
private PlatformTransactionManager transactionManager;


private void dth(){
DefaultTransactionDefinition def = new DefaultTransactionDefinition();
    // Set the isolation level and other attributes
    def.setIsolationLevel(TransactionDefinition.ISOLATION_REPEATABLE_READ);

    TransactionStatus status = transactionManager.getTransaction(def);
    boolean success = true;
    try {
        doSomeThing();
    } catch (Exception e) {
        success = false;
        throw e;
    }finally {
        if(success)</code></pre></div>]]></description><link>http://blog.liu-kevin.com/2025/02/25/springshou-dong-kai-qi-shi-wu/</link><guid isPermaLink="false">67bdb417d3ec870001342e4a</guid><category><![CDATA[java]]></category><dc:creator><![CDATA[凯文]]></dc:creator><pubDate>Tue, 25 Feb 2025 12:17:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>当某个spring方法为private时，或 调用某个方法未通过代理调用时，则无法使用@Transactional的方式处理事务，而需要手动处理，具体方式如下</p>
<pre><code>// Specify the transaction manager bound to the target datasource
@Resource(name = &quot;driverMallTx&quot;)
private PlatformTransactionManager transactionManager;


private void dth(){
DefaultTransactionDefinition def = new DefaultTransactionDefinition();
    // Set the isolation level and other attributes
    def.setIsolationLevel(TransactionDefinition.ISOLATION_REPEATABLE_READ);

    TransactionStatus status = transactionManager.getTransaction(def);
    boolean success = true;
    try {
        doSomeThing();
    } catch (Exception e) {
        success = false;
        throw e;
    }finally {
        if(success){
            transactionManager.commit(status);
        }else {
            transactionManager.rollback(status);
        }
    }


}
</code></pre>
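<p>The commit-on-success, rollback-on-exception shape in dth() does not depend on Spring; a minimal standalone sketch of the same pattern, with TxManager as a hypothetical stand-in for PlatformTransactionManager:</p>

```java
public class ManualTxSketch {

    // Hypothetical stand-in for PlatformTransactionManager.
    interface TxManager {
        void commit();
        void rollback();
    }

    // Run the action in a "transaction": commit on success, rollback on any exception,
    // and rethrow the exception so the caller still sees the failure.
    static void inTransaction(TxManager tx, Runnable action) {
        boolean success = true;
        try {
            action.run();
        } catch (RuntimeException e) {
            success = false;
            throw e;
        } finally {
            if (success) {
                tx.commit();
            } else {
                tx.rollback();
            }
        }
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        TxManager tx = new TxManager() {
            public void commit() { log.append("commit;"); }
            public void rollback() { log.append("rollback;"); }
        };
        inTransaction(tx, () -> log.append("work;"));
        try {
            inTransaction(tx, () -> { throw new IllegalStateException("boom"); });
        } catch (IllegalStateException expected) { /* rolled back */ }
        System.out.println(log); // work;commit;rollback;
    }
}
```

With the real PlatformTransactionManager, commit(status) and rollback(status) take the TransactionStatus, but the control flow is identical.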
</div>]]></content:encoded></item><item><title><![CDATA[Notes on a production SQL optimization]]></title><description><![CDATA[<div class="kg-card-markdown"><h1 id="sql">sql</h1>
<pre><code>select 'false' as QUERYID, id, uid, order_id, start_position, end_position, reply_order_price, finish_order_price, member_level, scene_id, member_score_type, member_score, rob_result_time, record_type, gmt_create, gmt_modify, is_deleted, is_member, pk_result 
from member_rob_order_record 
WHERE  is_</code></pre></div>]]></description><link>http://blog.liu-kevin.com/2024/12/26/ji-lu-yi-ci-xian-shang-sqlyou-hua/</link><guid isPermaLink="false">676cd449d3ec870001342e3b</guid><category><![CDATA[mysql]]></category><dc:creator><![CDATA[凯文]]></dc:creator><pubDate>Thu, 26 Dec 2024 04:04:35 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><h1 id="sql">sql</h1>
<pre><code>select 'false' as QUERYID, id, uid, order_id, start_position, end_position, reply_order_price, finish_order_price, member_level, scene_id, member_score_type, member_score, rob_result_time, record_type, gmt_create, gmt_modify, is_deleted, is_member, pk_result 
from member_rob_order_record 
WHERE  is_deleted = 0 and uid = 7916485614431901 and rob_result_time between '2024-11-19 16:56:46' and '2024-12-19 16:56:46' ;
</code></pre>
<h1 id>Current state</h1>
<p>The existing index is uid&amp;rob_result_time.<br>
Slow-query monitoring shows many of these queries taking more than 100ms.</p>
<h1 id>Adjustment 1</h1>
<p>Only the count&amp;price are needed, so return just those columns:</p>
<pre><code>select count(1) as count,sum(finish_order_price) as orderPrice from member_rob_order_record where uid = 7951668099743255 AND rob_result_time &gt;= '2024-11-26 11:40:11' AND rob_result_time &lt;= '2024-12-26 11:40:11' and record_type = 1 and finish_order_price &gt;0 AND is_deleted = 0;

</code></pre>
<h1 id>Observation</h1>
<p>The slow queries did not decrease. The uid&amp;rob_result_time index matches 800+ rows, and each of them requires a lookup back to the clustered index. To cut those lookups, create the following covering index:</p>
<pre><code>ALTER TABLE member_rob_order_record ADD INDEX idx_uid_time_type_status_price (uid,is_deleted,record_type,rob_result_time,finish_order_price);
</code></pre>
<ul>
<li>rob_result_time is a range predicate, so it is placed near the end of the index</li>
</ul>
<h1 id>Second observation</h1>
<p><img src="http://blog.liu-kevin.com/content/images/2024/12/D-Chat_20241226120357.png" alt="D-Chat_20241226120357"></p>
<p>No slow queries remain, so the cost was indeed in the row lookups.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Pitfalls of the MySQL timestamp type]]></title><description><![CDATA[<div class="kg-card-markdown"><h1 id>Table</h1>
<pre><code>
CREATE TABLE `hy_goods_config` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键',
  `type` tinyint(4) NOT NULL DEFAULT '0' COMMENT '类型 0-goods 1-sku',
  `obj_code` varchar(64) NOT NULL DEFAULT '0' COMMENT '对象code',
  `sub_type` tinyint(4) NOT NULL DEFAULT '0' COMMENT '配置类型0-城市',
  `content` json NOT</code></pre></div>]]></description><link>http://blog.liu-kevin.com/2024/12/17/mysql-timestamplei-xing-shi-yong-wen-ti/</link><guid isPermaLink="false">67612e41d3ec870001342e37</guid><category><![CDATA[mysql]]></category><dc:creator><![CDATA[凯文]]></dc:creator><pubDate>Tue, 17 Dec 2024 08:11:05 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><h1 id>Table</h1>
<pre><code>
CREATE TABLE `hy_goods_config` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键',
  `type` tinyint(4) NOT NULL DEFAULT '0' COMMENT '类型 0-goods 1-sku',
  `obj_code` varchar(64) NOT NULL DEFAULT '0' COMMENT '对象code',
  `sub_type` tinyint(4) NOT NULL DEFAULT '0' COMMENT '配置类型0-城市',
  `content` json NOT NULL COMMENT '配置内容',
  `is_deleted` tinyint(4) NOT NULL DEFAULT '0' COMMENT '删除标记0-未删除  1-已删除',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modified` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '更新时间',
  `start_time` timestamp NOT NULL DEFAULT '2000-01-01 00:00:00' COMMENT '开始时间',
  `end_time` timestamp NOT NULL DEFAULT '1972-01-01 00:00:00' COMMENT '结束时间',
  PRIMARY KEY (`id`),
  KEY `idx_obj_id` (`obj_code`)
) ENGINE=InnoDB AUTO_INCREMENT=874 DEFAULT CHARSET=utf8mb4 COMMENT='商品配置'
</code></pre>
<h1 id>Statement</h1>
<pre><code>INSERT INTO hy_goods_config (TYPE, obj_code, sub_type, content, start_time, end_time) values(1, '11111', 2, '[]', NULL, NULL)
</code></pre>
<h1 id>Problem</h1>
<p>Both start_time and end_time end up set to the current time.</p>
<h1 id>Cause</h1>
<p>The MySQL system variable <code>explicit_defaults_for_timestamp</code> controls how TIMESTAMP columns behave. When it is OFF (the default in MySQL 5.6 and earlier), inserting NULL into a TIMESTAMP column stores the current time instead, even if a different default value was declared, which is exactly what the INSERT above does.</p>
<p>Check the current setting with <code>SHOW VARIABLES LIKE 'explicit_defaults_for_timestamp'</code>.</p>
<p>Also note that the minimum time the timestamp type supports is <code>1970-01-01 00:00:01</code>.</p>
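<p>The minimum value 1970-01-01 00:00:01 is one second after the Unix epoch in UTC, since TIMESTAMP is stored as seconds since the epoch; a quick illustrative check with java.time:</p>

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneOffset;

public class TimestampRangeSketch {
    public static void main(String[] args) {
        // MySQL TIMESTAMP stores seconds since the Unix epoch (UTC);
        // the minimum storable value is '1970-01-01 00:00:01'.
        Instant min = LocalDateTime.parse("1970-01-01T00:00:01").toInstant(ZoneOffset.UTC);
        System.out.println(min.getEpochSecond()); // 1

        // A default like '1972-01-01 00:00:00' is therefore representable...
        Instant ok = LocalDateTime.parse("1972-01-01T00:00:00").toInstant(ZoneOffset.UTC);
        System.out.println(ok.getEpochSecond() > 0); // true

        // ...while anything at or before the epoch has a non-positive epoch second
        // and cannot be stored in a TIMESTAMP column.
        Instant bad = LocalDateTime.parse("1969-12-31T23:59:59").toInstant(ZoneOffset.UTC);
        System.out.println(bad.getEpochSecond()); // -1
    }
}
```

This is why the end_time default of '1972-01-01 00:00:00' is accepted while anything before 1970-01-01 00:00:01 UTC is not.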
</div>]]></content:encoded></item><item><title><![CDATA[Production flame graph analysis]]></title><description><![CDATA[<div class="kg-card-markdown"><h1 id="log4j">log4j</h1>
<p><img src="http://blog.liu-kevin.com/content/images/2024/11/D-Chat_20241122162626.png" alt="D-Chat_20241122162626"></p>
<p>The flame graph shows an unusually slow logging operation; tracing the code pins it down to the following:</p>
<p><img src="http://blog.liu-kevin.com/content/images/2024/11/D-Chat_20241122163438.png" alt="D-Chat_20241122163438"></p>
<p>Searching the logging configuration for includeLocation shows it set to true; change it to false.</p>
<p><img src="http://blog.liu-kevin.com/content/images/2024/11/D-Chat_20241122163454.png" alt="D-Chat_20241122163454"></p>
<p>Effect after the change<br>
<img src="http://blog.liu-kevin.com/content/images/2024/11/D-Chat_20241125220544.png" alt="D-Chat_20241125220544"></p>
<h1 id="beancopy">bean copy</h1>
<p><img src="http://blog.liu-kevin.com/content/images/2024/11/D-Chat_20241122165956.png" alt="D-Chat_20241122165956"></p>
<p>The code in question, <code>BeanMapper.copy(rpcData,result);</code>, uses <code>DozerBeanMapper dozer = new DozerBeanMapper();</code>; it was replaced with <code>mapstruct</code>.</p>
<p>Effect after the change</p>
<p><img src="http://blog.liu-kevin.com/content/images/2024/11/D-Chat_20241125182800.png" alt="D-Chat_20241125182800"></p>
<blockquote>
<p>A flame graph shows CPU time slices; time spent blocked on IO is invisible in it, so it cannot be equated directly with response time (rt).</p>
</blockquote>
</div>]]></description><link>http://blog.liu-kevin.com/2024/11/22/xian-shang-huo-yan-tu-fen-xi/</link><guid isPermaLink="false">67403fafd3ec870001342e2f</guid><dc:creator><![CDATA[凯文]]></dc:creator><pubDate>Fri, 22 Nov 2024 08:55:12 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><h1 id="log4j">log4j</h1>
<p><img src="http://blog.liu-kevin.com/content/images/2024/11/D-Chat_20241122162626.png" alt="D-Chat_20241122162626"></p>
<p>The flame graph shows an unusually slow logging operation; tracing the code pins it down to the following:</p>
<p><img src="http://blog.liu-kevin.com/content/images/2024/11/D-Chat_20241122163438.png" alt="D-Chat_20241122163438"></p>
<p>Searching the logging configuration for includeLocation shows it set to true; change it to false.</p>
<p><img src="http://blog.liu-kevin.com/content/images/2024/11/D-Chat_20241122163454.png" alt="D-Chat_20241122163454"></p>
<p>Effect after the change<br>
<img src="http://blog.liu-kevin.com/content/images/2024/11/D-Chat_20241125220544.png" alt="D-Chat_20241125220544"></p>
<h1 id="beancopy">bean copy</h1>
<p><img src="http://blog.liu-kevin.com/content/images/2024/11/D-Chat_20241122165956.png" alt="D-Chat_20241122165956"></p>
<p>The code in question, <code>BeanMapper.copy(rpcData,result);</code>, uses <code>DozerBeanMapper dozer = new DozerBeanMapper();</code>; it was replaced with <code>mapstruct</code>.</p>
<p>Effect after the change</p>
<p><img src="http://blog.liu-kevin.com/content/images/2024/11/D-Chat_20241125182800.png" alt="D-Chat_20241125182800"></p>
<blockquote>
<p>A flame graph shows CPU time slices; time spent blocked on IO is invisible in it, so it cannot be equated directly with response time (rt).</p>
</blockquote>
</div>]]></content:encoded></item><item><title><![CDATA[Process instance orchestration]]></title><description><![CDATA[<div class="kg-card-markdown"><h1 id>Modules</h1>
<p><img src="http://blog.liu-kevin.com/content/images/2024/11/18c33a7cfff650492112f796cd978bdc.png" alt="18c33a7cfff650492112f796cd978bdc"></p>
<h1 id="instancegroup">InstanceGroup</h1>
<pre><code>public class InstanceGroup {

    /**
     * 将某几个组，打包为一个新的组
     *
     * @param instanceGroups
     * @return
     */
    public static InstanceGroup getInstance(InstanceGroup... instanceGroups) {
        InstanceGroup result = new InstanceGroup();
        result.instanceGroups = Arrays.stream(instanceGroups).collect(Collectors.toList());
        return result;
    }

    /**
     * Bundle several processes into one group
     *
     * @param processClassList
     * @return
     */
    public static InstanceGroup getInstance(Class&lt;? extends AbstractGoodsQueryProcess&gt;... processClassList) {
        InstanceGroup result = new InstanceGroup();
        result.processClassList</code></pre></div>]]></description><link>http://blog.liu-kevin.com/2024/11/21/processbian-pai/</link><guid isPermaLink="false">673f2625d3ec870001342e2c</guid><dc:creator><![CDATA[凯文]]></dc:creator><pubDate>Thu, 21 Nov 2024 12:40:57 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><h1 id>Modules</h1>
<p><img src="http://blog.liu-kevin.com/content/images/2024/11/18c33a7cfff650492112f796cd978bdc.png" alt="18c33a7cfff650492112f796cd978bdc"></p>
<h1 id="instancegroup">InstanceGroup</h1>
<pre><code>public class InstanceGroup {

    /**
     * Bundle several groups into a new group
     *
     * @param instanceGroups
     * @return
     */
    public static InstanceGroup getInstance(InstanceGroup... instanceGroups) {
        InstanceGroup result = new InstanceGroup();
        result.instanceGroups = Arrays.stream(instanceGroups).collect(Collectors.toList());
        return result;
    }

    /**
     * Bundle several processes into one group
     *
     * @param processClassList
     * @return
     */
    public static InstanceGroup getInstance(Class&lt;? extends AbstractGoodsQueryProcess&gt;... processClassList) {
        InstanceGroup result = new InstanceGroup();
        result.processClassList = Arrays.stream(processClassList).collect(Collectors.toList());
        return result;
    }

    /**
     * Build an executor group from an instanceGroup &amp; processClassList
     *
     * @param instanceGroup    the minimal dependency set, i.e. prerequisites required by the processClass steps that follow
     * @param processClassList the operations this group executes
     * @return
     */
    public static InstanceGroup getInstance(InstanceGroup instanceGroup, Class&lt;? extends AbstractGoodsQueryProcess&gt;... processClassList) {
        InstanceGroup result = new InstanceGroup();
        result.instanceGroups = new ArrayList&lt;&gt;();
        result.instanceGroups.add(instanceGroup);
        result.processClassList = Arrays.stream(processClassList).collect(Collectors.toList());
        return result;
    }

    /**
     * 通过instanceGroups&amp;processClassList获取一个执行器组
     *
     * @param instanceGroups   最小集依赖，即后续processClass执行过程中必须的前置依赖
     * @param processClassList 本组执行的操作
     * @return
     */
    public static InstanceGroup getInstance(List&lt;InstanceGroup&gt; instanceGroups, Class&lt;? extends AbstractGoodsQueryProcess&gt;... processClassList) {
        InstanceGroup result = new InstanceGroup();
        result.instanceGroups = instanceGroups;
        result.processClassList = Arrays.stream(processClassList).collect(Collectors.toList());
        return result;
    }

    private List&lt;Class&lt;? extends AbstractGoodsQueryProcess&gt;&gt; processClassList;

    private List&lt;InstanceGroup&gt; instanceGroups;


    public boolean isClassProcess() {
        return this.processClassList != null;
    }

    public List&lt;Class&lt;? extends GoodsQueryProcess&gt;&gt; getProcessClass() {
        List&lt;Class&lt;? extends GoodsQueryProcess&gt;&gt; result = new ArrayList&lt;&gt;();
        if (!CollectionUtils.isEmpty(instanceGroups)) {
            for (InstanceGroup instanceGroup : instanceGroups) {
                List&lt;Class&lt;? extends GoodsQueryProcess&gt;&gt; instanceGroupProcessClass = instanceGroup.getProcessClass();
                instanceGroupProcessClass.removeAll(result);
                result.addAll(instanceGroupProcessClass);
            }
        }
        if (!CollectionUtils.isEmpty(this.processClassList)) {
            this.processClassList.removeAll(result);
            result.addAll(this.processClassList);
        }
        return result;
    }

}
</code></pre>
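<p>getProcessClass above flattens nested groups into an ordered, de-duplicated union: dependencies come first and a duplicate keeps its earliest position. The same merge, sketched standalone over plain strings (the names are illustrative only):</p>

```java
import java.util.ArrayList;
import java.util.List;

public class OrderedUnionSketch {

    // Merge lists into one, keeping first-seen order and dropping duplicates,
    // which is the strategy getProcessClass uses when flattening nested groups.
    @SafeVarargs
    static List<String> union(List<String>... groups) {
        List<String> result = new ArrayList<>();
        for (List<String> group : groups) {
            for (String item : group) {
                if (!result.contains(item)) {
                    result.add(item);
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> scopeFilter = List.of("idsInit", "scopeInit", "appIdFilter");
        List<String> infoFilter = List.of("idsInit", "infoInit", "statusFilter");
        // "idsInit" appears in both groups but runs only once, in its first position.
        System.out.println(union(scopeFilter, infoFilter));
        // [idsInit, scopeInit, appIdFilter, infoInit, statusFilter]
    }
}
```

This is why a shared prerequisite such as GoodsIdsInitProcess executes only once even when several groups declare it.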
<h1 id="instancegroupconstant">InstanceGroupConstant</h1>
<pre><code>public interface InstanceGroupConstant {

    //==================================base===========================================
    /**
     * Goods sale-scope filtering
     * Steps:
     * 1. initialize goods ids
     * 2. initialize goods sale scope
     * 3. filter by appId
     * 4. filter by carType
     * 5. filter by driver tag
     */
    InstanceGroup BASE_FILTER_GOODS_SCOPE = InstanceGroup.getInstance(GoodsIdsInitProcess.class, GoodsScopeInitProcess.class, GoodsAppIdFilter.class, GoodsCarTypeFilter.class, DriverTagFilter.class);

    /**
     * Goods info filtering
     * Steps:
     * 1. initialize goods ids
     * 2. initialize goods info
     * 3. filter by goods category
     * 4. filter by on-sale status
     */
    InstanceGroup BASE_FILTER_GOODS_INFO = InstanceGroup.getInstance(GoodsIdsInitProcess.class, GoodsInfoInitProcess.class, GoodsCategoryFilter.class, GoodsOnSaleFilter.class);

    /**
     * SKU (sub-goods) info initialization
     * Steps:
     * 1. initialize goods ids
     * 2. initialize SKUs
     */
    InstanceGroup BASE_GOODS_SKU_INIT = InstanceGroup.getInstance(GoodsIdsInitProcess.class, GoodsSkuInfoInitProcess.class);

    /**
     * Goods SKU filtering
     * Depends on sale-scope initialization BASE_INIT_GOODS_SCOPE
     * Steps:
     * 1. filter by purchase conditions
     */
    InstanceGroup BASE_FILTER_GOODS_SKU_SCENE = InstanceGroup.getInstance(BASE_GOODS_SKU_INIT, DriverSceneFilter.class);

    /**
     * Paging &amp; data assembly
     * Depends on sale-scope filtering, goods filtering, and SKU initialization
     * Steps:
     * 1. paging
     * 2. assemble the data
     */
    InstanceGroup BASE_PAGE_ASSEMBLE = InstanceGroup.getInstance(Arrays.asList(BASE_FILTER_GOODS_SCOPE, BASE_FILTER_GOODS_INFO, BASE_GOODS_SKU_INIT),
            GoodsPageProcess.class,
            GoodsAutoRenewPriceInitProcess.class,
            GoodsAssembleProcess.class);


    //=======================point========================================
    /**
     * 积分前端tag过滤
     * 依赖商品信息初始化
     * 执行的操作
     * 1. 前端tag过滤
     */
    InstanceGroup POINT_FILTER_GOODS_VIEW_TYPE = InstanceGroup.getInstance(BASE_FILTER_GOODS_INFO, GoodsViewTypeFilter.class);

    /**
     * 子商品价格过滤
     * 依赖子商品初始化
     * 执行的操作
     * 1. 价格过滤
     */
    InstanceGroup POINT_FILTER_GOODS_POINT_PRICE = InstanceGroup.getInstance(BASE_GOODS_SKU_INIT, GoodsPointPriceFilter.class);


    /**
     * 积分自定义排序
     * 依赖商品信息初始化&amp;子商品信息初始化
     * 执行的操作
     * 1. 积分自定义排序
     */
    InstanceGroup POINT_SORT_CUSTOM = InstanceGroup.getInstance(Arrays.asList(BASE_FILTER_GOODS_INFO, BASE_GOODS_SKU_INIT), PointsGoodsSortProcess.class);


    //========================group==========================================


    InstanceGroup GROUP_FILTER_GOODS_ANT_SCOPE = InstanceGroup.getInstance(Arrays.asList(BASE_FILTER_GOODS_SCOPE, BASE_FILTER_GOODS_INFO));


}
</code></pre>
<h1 id="processinstanceenum">ProcessInstanceEnum</h1>
<pre><code>public enum ProcessInstanceEnum {

    /**
     * 通用流程
     * 不符合购买条件的商品不返回
     */
    COMMON(InstanceGroup.getInstance(
            InstanceGroupConstant.GROUP_FILTER_GOODS_ANT_SCOPE,
            InstanceGroupConstant.BASE_FILTER_GOODS_SKU_SCENE,
            InstanceGroupConstant.BASE_PAGE_ASSEMBLE
    )),


    /**
     * 通用流程
     * 与COMMON的区别是不执行driverSceneFilter校验，由上游校验
     */
    COMMON_INCLUDE_BUY_LIMIT_GOODS(InstanceGroup.getInstance(
            InstanceGroupConstant.GROUP_FILTER_GOODS_ANT_SCOPE,
            InstanceGroupConstant.BASE_PAGE_ASSEMBLE
    )),

    /**
     * 积分列表流程
     */
    POINT_CUSTOM_SORT(InstanceGroup.getInstance(
            InstanceGroupConstant.GROUP_FILTER_GOODS_ANT_SCOPE,
            InstanceGroupConstant.POINT_FILTER_GOODS_VIEW_TYPE,
            InstanceGroupConstant.POINT_FILTER_GOODS_POINT_PRICE,
            InstanceGroupConstant.POINT_SORT_CUSTOM,
            InstanceGroupConstant.BASE_PAGE_ASSEMBLE
    )),

    ;

    ProcessInstanceEnum(InstanceGroup instanceGroup) {
        this.instanceGroup = instanceGroup;
    }

    private InstanceGroup instanceGroup;


    public List&lt;Class&lt;? extends GoodsQueryProcess&gt;&gt; getProcessClass() {
        return this.instanceGroup.getProcessClass();
    }
}
</code></pre>
<h1 id>使用</h1>
<pre><code>public class GoodsQueryComponent {
    @Resource
    private List&lt;GoodsQueryProcess&gt; goodsQueryProcesses;

    private Map&lt;ProcessInstanceEnum, List&lt;GoodsQueryProcess&gt;&gt; instanceMap = new ConcurrentHashMap&lt;&gt;();

    @PostConstruct
    private void init() {
        Map&lt;Class, GoodsQueryProcess&gt; processMap = new HashMap&lt;&gt;();
        for (GoodsQueryProcess goodsQueryProcess : goodsQueryProcesses) {
            processMap.put(goodsQueryProcess.getClass(), goodsQueryProcess);
        }
        for (ProcessInstanceEnum value : ProcessInstanceEnum.values()) {
            List&lt;GoodsQueryProcess&gt; processes = new ArrayList&lt;&gt;();
            for (Class&lt;? extends GoodsQueryProcess&gt; processClass : value.getProcessClass()) {
                processes.add(processMap.get(processClass));
            }
            instanceMap.put(value, processes);
        }
    }

    public List&lt;GoodsSkuInfoBO&gt; queryGoodsSkus(ProcessInstanceEnum instance, GoodsQueryCondition condition) {
        GoodsQueryContext context = new GoodsQueryContext(condition);
        List&lt;GoodsQueryProcess&gt; list = instanceMap.get(instance);
        for (GoodsQueryProcess goodsQueryProcess : list) {
            goodsQueryProcess.process(context);
        }
        return context.getResult();
    }

}
</code></pre>
<h1 id>总结</h1>
<p>这样处理的好处是：可以任意定义流程实例及各process的执行顺序，以满足不同场景的需求</p>
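<p>作为补充，下面用一个精简的可运行示意还原上述模式：枚举中定义不同的流程实例，按序执行各process（PipelineDemo、Flow等命名均为示意，并非正文中的真实类，用Consumer代替GoodsQueryProcess）：</p>

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

public class PipelineDemo {

    // 用Consumer<List<String>>代替GoodsQueryProcess，context用List<String>示意
    public enum Flow {
        COMMON(ctx -> ctx.add("init"), ctx -> ctx.add("filter"), ctx -> ctx.add("page")),
        SIMPLE(ctx -> ctx.add("init"), ctx -> ctx.add("page"));

        private final List<Consumer<List<String>>> processes;

        @SafeVarargs
        Flow(Consumer<List<String>>... processes) {
            this.processes = Arrays.asList(processes);
        }

        // 与queryGoodsSkus一样：按定义顺序依次执行各process
        public List<String> run() {
            List<String> context = new ArrayList<>();
            for (Consumer<List<String>> p : processes) {
                p.accept(context);
            }
            return context;
        }
    }

    public static void main(String[] args) {
        System.out.println(Flow.COMMON.run());
        System.out.println(Flow.SIMPLE.run());
    }
}
```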
</div>]]></content:encoded></item><item><title><![CDATA[dubbo filter使用介绍]]></title><description><![CDATA[<div class="kg-card-markdown"><h1 id>结论</h1>
<blockquote>
<p>先把结论放最前面，可忽略后面的过程</p>
</blockquote>
<ul>
<li>org.apache.dubbo.rpc.Filter与com.alibaba.dubbo.rpc.Filter没区别</li>
<li>spring.dubbo.provider.filter: 无该配置，服务中的该配置是用于xml中配置provider的filter</li>
<li>provider filter或service filter可具体给这个provider或service配置过滤器</li>
<li>如果想去掉某个filter,可在 provider filter或service filter配置的前面加<code>-</code></li>
<li>排序在filter类上定义,order越小，优先级越高</li>
</ul>
<pre><code>@Activate(group = { CommonConstants.PROVIDER }, order = -100)
</code></pre>
<ul>
<li>优先级越高的filter<code>invoker.invoke(invocation)</code>前的越早执行，<code>invoker.invoke(invocation)</code>后的越晚执行</li>
</ul>
<h1 id>配置</h1>
<h3 id="orgapachedubborpcfiltercomalibabadubborpcfilter"><code>org.apache.dubbo.rpc.</code></h3></div>]]></description><link>http://blog.liu-kevin.com/2024/11/06/dubbo-filter-2/</link><guid isPermaLink="false">672a0cb5d3ec870001342e1e</guid><dc:creator><![CDATA[凯文]]></dc:creator><pubDate>Wed, 06 Nov 2024 09:46:23 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><h1 id>结论</h1>
<blockquote>
<p>先把结论放最前面，可忽略后面的过程</p>
</blockquote>
<ul>
<li>org.apache.dubbo.rpc.Filter与com.alibaba.dubbo.rpc.Filter没区别</li>
<li>spring.dubbo.provider.filter: 无该配置，服务中的该配置是用于xml中配置provider的filter</li>
<li>provider filter或service filter可具体给这个provider或service配置过滤器</li>
<li>如果想去掉某个filter,可在 provider filter或service filter配置的前面加<code>-</code></li>
<li>排序在filter类上定义,order越小，优先级越高</li>
</ul>
<pre><code>@Activate(group = { CommonConstants.PROVIDER }, order = -100)
</code></pre>
<ul>
<li>优先级越高的filter<code>invoker.invoke(invocation)</code>前的越早执行，<code>invoker.invoke(invocation)</code>后的越晚执行</li>
</ul>
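<p>上述"优先级越高，invoke前越早执行、invoke后越晚执行"的洋葱模型，可用一个与dubbo无关的最小责任链示意来验证（这里的Filter接口为示意定义，并非dubbo的真实接口）：</p>

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Supplier;

public class FilterChainDemo {

    public interface Filter {
        String invoke(Supplier<String> next, List<String> log);
    }

    public static Filter named(String name) {
        return (next, log) -> {
            log.add(name + "-before"); // invoker.invoke(invocation)之前
            String result = next.get();
            log.add(name + "-after");  // invoker.invoke(invocation)之后
            return result;
        };
    }

    // 按优先级从高到低传入：优先级高的包在链的最外层
    public static String call(List<Filter> chain, List<String> log) {
        Supplier<String> invoker = () -> "result";
        for (int i = chain.size() - 1; i >= 0; i--) {
            Filter f = chain.get(i);
            Supplier<String> next = invoker;
            invoker = () -> f.invoke(next, log);
        }
        return invoker.get();
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        call(Arrays.asList(named("high"), named("low")), log);
        // 优先级高的high：before最早执行、after最晚执行
        System.out.println(log);
    }
}
```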
<h1 id>配置</h1>
<h3 id="orgapachedubborpcfiltercomalibabadubborpcfilter"><code>org.apache.dubbo.rpc.Filter</code>与<code>com.alibaba.dubbo.rpc.Filter</code>配置</h3>
<pre><code>apiExceptionFilterV1=com.didichuxing.dd596.driver.car.biz.filter.ApiExceptionFilterV1
</code></pre>
<h3 id="providerfilter">provider filter配置</h3>
<pre><code>&lt;dubbo:provider registry=&quot;registry&quot; group=&quot;${dubbo.group.default}&quot; filter=&quot;-DidiErrorHandleFilter&quot;&gt;

</code></pre>
<h3 id="servicefilter">service filter配置</h3>
<pre><code>&lt;dubbo:service
            registry=&quot;registry&quot; interface=&quot;com.didichuxing.dd596.driver.car.api.VehicleRemoteService&quot; filter=&quot;apiExceptionFilter&quot;
            ref=&quot;vehicleRemoteService&quot; version=&quot;1.0.0${dubbo.version.suffix.provider}&quot; /&gt;

</code></pre>
<h1 id="dubboexceptionfilter">服务中配置的dubbo exception filter不生效</h1>
<h3 id="1">原因1</h3>
<p>当不配置到service时，已存在优先级更低的过滤器ErrorHandleFilter。优先级更低意味着<code>invoker.invoke(invocation)</code>之后的逻辑更先执行，exception先被它处理掉，等ApiExceptionFilter执行时已不存在exception，所以ApiExceptionFilter不会执行</p>
<h3 id="2">原因2</h3>
<p>当不存在ErrorHandleFilter时，在ApiExceptionFilter中虽然改写了result，但是未清除exception信息，导致后面的ProviderExceptionFilter执行时，再次改写result</p>
<h3 id="service">问题：为什么添加到service上就生效了</h3>
<p>当把ApiExceptionFilter添加到service上时，虽然仍未调整优先级，但会先执行其它全局过滤器，最后才执行该ApiExceptionFilter。ApiExceptionFilter执行时改写了result，打印日志的filter会将改写后的结果打印出来（request_out），但之后的error filter会再次重写result，所以最终获取到的响应仍有问题</p>
<h1 id="1">附1-测试过程</h1>
<h3 id>初始</h3>
<ul>
<li>
<ol>
<li>org.apache.dubbo.rpc.Filter中配置apiExceptionFilter</li>
</ol>
</li>
<li>
<ol start="2">
<li>com.alibaba.dubbo.rpc.Filter中配置apiExceptionFilterV2</li>
</ol>
</li>
<li>
<ol start="3">
<li>provider filter添加apiExceptionFilterV3</li>
</ol>
</li>
<li>
<ol start="4">
<li>service filter添加apiExceptionFilterV4</li>
</ol>
</li>
</ul>
<blockquote>
<p>其中apiExceptionFilterV3、apiExceptionFilterV4均在<code>org.apache.dubbo.rpc.Filter</code>中配置</p>
</blockquote>
<p>结果上述4个apiExceptionFilter均会执行</p>
<p>结论<br>
所以3、4配置或不配置都不影响过滤器的执行</p>
<h3 id>测试<code>-</code></h3>
<p>在3、4中配置时，前面加上<code>-</code>，结果v1、v2执行，v3、v4不执行，说明如果添加了过滤器，可在某个provider或service中排除</p>
<h3 id="comalibabadubborpcfilter">测试<code>com.alibaba.dubbo.rpc.Filter</code></h3>
<p>将apiExceptionFilterV3、apiExceptionFilterV4均放入 <code>com.alibaba.dubbo.rpc.Filter</code>中<br>
v1、v2执行</p>
<h3 id="activate">测试Activate</h3>
<p>新增 v6、v7、v8<br>
v6: 只添加@Activate(group = &quot;provider&quot;, order = 6)注解<br>
v7: 不添加注解，添加org.apache.dubbo.rpc.Filter<br>
v8: 不添加注解，添加org.apache.dubbo.rpc.Filter及spring.dubbo.provider.filter</p>
<p>v6、v7、v8均不会执行</p>
<h3 id="springdubboproviderfilter">测试不添加<code>spring.dubbo.provider.filter</code></h3>
<p>v9: 只添加@Activate(group = &quot;provider&quot;, order = 9)注解，添加spring.dubbo.provider.filter<br>
结果v9并不会执行</p>
<h3 id="activate">测试只添加@Activate注解</h3>
<p>v10: 只添加@Activate(group = &quot;provider&quot;, order = 10)注解，service filter中添加<br>
service filter中无论添加ApiExceptionFilterV10还是apiExceptionFilterV10，服务均无法启动</p>
<h1 id="2apiexceptionfilter">附2-ApiExceptionFilter</h1>
<pre><code>@Activate(group = &quot;provider&quot;)
public class ApiExceptionFilter implements Filter {

    private static final ILog LOGGER = LogFactory.getLog(ApiExceptionFilter.class);

    @Override
    public Result invoke(Invoker&lt;?&gt; invoker, Invocation invocation) throws RpcException {
        Result result = invoker.invoke(invocation);
        try {
            // 处理异常
            if (result != null &amp;&amp; result.hasException()) {
                Throwable e = result.getException();
                if (e instanceof NullPointerException) {
                    LOGGER.error(&quot;api_exception||errMsg=java.lang.NullPointerException||e=&quot;,e);
                } else if (e instanceof IllegalArgumentException) {
                    return buildResult(result, UserErrorEnum.PARAM_ERROR.getCode(), StringUtils.isNotEmpty(e.getMessage()) ? e.getMessage() : &quot;Parameter invalid&quot;);
                } else if (e instanceof CarBizException) {
                    return buildResult(result, ((CarBizException) e).getCode(), ((CarBizException) e).getMsg());
                } else if (e instanceof RpcBizException) {
                    // 包装返回的errMsg
                    int code = ((RpcBizException) e).getCode();
                    String msg = ((RpcBizException) e).getMsg();
                    return buildResult(result, code, msg);
                } else if (e instanceof RpcException) {
                    return buildResult(result, ((RpcException) e).getCode(), e.getMessage());
                } else if (e instanceof Exception) {
                    LOGGER.error(&quot;api_exception||e=&quot;, e);
                    return buildResult(result, UserErrorEnum.SYSTEM_ERROR.getCode(), UserErrorEnum.SYSTEM_ERROR.getDesc());
                }
            }
            return result;
        } catch (Exception e) {
            LOGGER.error(&quot;api_exception_invoke_error||errMsg={}&quot;, e);
            return result;
        }

    }

    private Result buildResult(Result result, int code, String msg) {

        RpcResult rpcResult = new RpcResult();
        // 处理RPC错误
        rpcResult.setCode(code);
        rpcResult.setMsg(msg);
        result.setValue(rpcResult);
        return result;
    }
}

</code></pre>
<h3 id>待优化点</h3>
<ul>
<li>将优先级调到最高<code>@Activate(group = &quot;provider&quot;,order = 100)</code></li>
<li>构造 result时，设置exception为null</li>
</ul>
<pre><code>private Result buildResult(Result result, int code, String msg) {

    RpcResult rpcResult = new RpcResult();
    // 处理RPC错误
    rpcResult.setCode(code);
    rpcResult.setMsg(msg);
    result.setValue(rpcResult);
    result.setException(null);
    return result;
}
</code></pre>
</div>]]></content:encoded></item><item><title><![CDATA[spring datasource多数据源配置]]></title><description><![CDATA[<div class="kg-card-markdown"><h1 id>配置</h1>
<pre><code>mybatis.datasource.xxx.configLocation=classpath*:xxx-mybatis.xml
mybatis.datasource.xxx.mapperLocations=classpath*:sqlmap/*/*.xml
mybatis.datasource.xxx.type=com.alibaba.druid.pool.DruidDataSource
mybatis.datasource.xxx.driverClassName=com.mysql.jdbc.Driver
mybatis.datasource.xxx.url=jdbc:mysql://100.xx.xx.31:xxx/xxx?useUnicode=true&amp;characterEncoding=utf-8&amp;</code></pre></div>]]></description><link>http://blog.liu-kevin.com/2024/09/23/spring-datasourcepei-zhi/</link><guid isPermaLink="false">66a5f4bbd3ec870001342dee</guid><category><![CDATA[java]]></category><dc:creator><![CDATA[凯文]]></dc:creator><pubDate>Mon, 23 Sep 2024 06:19:20 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><h1 id>配置</h1>
<pre><code>mybatis.datasource.xxx.configLocation=classpath*:xxx-mybatis.xml
mybatis.datasource.xxx.mapperLocations=classpath*:sqlmap/*/*.xml
mybatis.datasource.xxx.type=com.alibaba.druid.pool.DruidDataSource
mybatis.datasource.xxx.driverClassName=com.mysql.jdbc.Driver
mybatis.datasource.xxx.url=jdbc:mysql://100.xx.xx.31:xxx/xxx?useUnicode=true&amp;characterEncoding=utf-8&amp;allowMultiQueries=true&amp;useSSL=false
mybatis.datasource.xxx.username=xxx
mybatis.datasource.xxx.password=xxxx
mybatis.datasource.xxx.initialSize=20
mybatis.datasource.xxx.minIdle=20
mybatis.datasource.xxx.maxActive=50
mybatis.datasource.xxx.timeBetweenEvictionRunsMillis=1800
mybatis.datasource.xxx.minEvictableIdleTimeMillis=60000
mybatis.datasource.xxx.maxEvictableIdleTimeMillis=100000
mybatis.datasource.xxx.maxWait=1000
mybatis.datasource.xxx.maxIdleTime=60000
mybatis.datasource.xxx.idleConnectionTestPeriod=60000
mybatis.datasource.xxx.validationQuery=SELECT 1
mybatis.datasource.xxx.testOnBorrow=false
mybatis.datasource.xxx.testWhileIdle=true
mybatis.datasource.xxx.phyTimeoutMillis=25200000
mybatis.datasource.xxx.keepAlive=true
mybatis.datasource.xxx.testOnReturn=false
</code></pre>
<h1 id="properties">properties</h1>
<pre><code>
@Component
@Getter
@Setter
@PropertySource(value = { XXXProperties.LOCATION })
@ConfigurationProperties(prefix = XXXProperties.PREFIX)
public class XXXProperties {

    static final String PREFIX = &quot;mybatis.datasource.xxx&quot;;

    static final String LOCATION = &quot;classpath:application-datasource.properties&quot;;

    private String mapperLocations;

    private String configLocation;

    private String type;

    private String driverClassName;
}

</code></pre>
<h1 id="datasourceconfig">datasource config</h1>
<pre><code>
@Configuration
@MapperScan(basePackages = &quot;com.xxxx.mapper&quot;, sqlSessionFactoryRef = &quot;xxxSqlSessionFactory&quot;)
public class DataSourceConfig {

    @Resource
    private XXXProperties xxxProperties;

    @Resource
    private Environment environment;

    @Bean(name = &quot;xxxDataSource&quot;)
    public DataSource xxxDataSource() {

        DataSource ds = DataSourceBuilder.create()
            .type((Class&lt;? extends DataSource&gt;) ClassUtils.resolveClassName(xxxProperties.getType(), null))
            .driverClassName(xxxProperties.getDriverClassName()).build();

        return Binder.get(environment).bind(XXXProperties.PREFIX, ds.getClass()).get();
    }

    @Bean(name = &quot;xxxSqlSessionFactory&quot;)
    public SqlSessionFactory xxxSqlSessionFactory(@Qualifier(&quot;xxxDataSource&quot;) DataSource dataSource)
        throws Exception {

        SqlSessionFactoryBean sessionFactory = new SqlSessionFactoryBean();
        sessionFactory.setDataSource(dataSource);
        sessionFactory.setConfigLocation(
            new PathMatchingResourcePatternResolver().getResources(xxxProperties.getConfigLocation())[0]
        );
        sessionFactory.setMapperLocations(
            new PathMatchingResourcePatternResolver().
                getResources(xxxProperties.getMapperLocations()));

        return sessionFactory.getObject();
    }

    @Bean(name = &quot;xxxSqlSession&quot;)
    public SqlSessionTemplate xxxSqlSession(
        @Qualifier(&quot;xxxSqlSessionFactory&quot;) SqlSessionFactory sqlSessionFactory) {
        return new SqlSessionTemplate(sqlSessionFactory);
    }

    @Bean(name = &quot;xxxTx&quot;)
    public DataSourceTransactionManager xxxTx(@Qualifier(&quot;xxxDataSource&quot;) DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }
}
</code></pre>
<p>如果需要配置多数据源，则创建多个配置及DataSourceConfig，并指定相应的mapper</p>
<p>如果dataSource要调整为HighAvailableDataSource及NodeListener的使用方式，则可以如下创建dataSource,并且在配置文件对象中添加url、password及username</p>
<pre><code>DataSource ds = DataSourceBuilder.create()
    .type((Class&lt;? extends DataSource&gt;) ClassUtils.resolveClassName(distributionProperties.getType(), null))
    .driverClassName(distributionProperties.getDriverClassName())
    .build();
DataSource result = Binder.get(environment)
    .bind(DistributionProperties.PREFIX, ds.getClass())
    .get();
DisfNodeListener nodeListener = new DisfNodeListener();
nodeListener.setUrl(distributionProperties.getUrl());
nodeListener.setUsername(distributionProperties.getUsername());
nodeListener.setPassword(distributionProperties.getPassword());
HighAvailableDataSource highAvailableDataSource = (HighAvailableDataSource) result;
highAvailableDataSource.setNodeListener(nodeListener);
return highAvailableDataSource;
</code></pre>
</div>]]></content:encoded></item><item><title><![CDATA[redis cache抽象实现]]></title><description><![CDATA[<div class="kg-card-markdown"><h1 id>概述</h1>
<p>redis缓存在项目中经常用到，如果每个需要的地方独立实现，会有各种各样的问题，如实现的功能不健壮，代码冗余</p>
<p>本篇的宗旨是构建一个抽象、功能完善、代码健壮的缓存模块，使接入方能够快速、便捷地使用缓存</p>
<h1 id="class">重要的class</h1>
<h3 id="cache">Cache</h3>
<p>redis中缓存的对象</p>
<pre><code>/**
 * @author liuk
 * 封装此对象的目的
 * 1. 当从redis批量获取对象时，能够清楚获取到的value对应的key
 * 当然redis返回时，是与请求时的key的顺序是一致的，如果未取到会返回null,通过顺序的一致性也是可以确定获取到的value对应的key的
 * 2. 如果该key确实不存在对应的对象，防止一直透过缓存查数据库，故该场景保存的value=null
 */
@Data
public class Cache&lt;K, V&gt; {

    public Cache() {
    }
    public Cache(K key, V value) {
        this.key = key;
        this.</code></pre></div>]]></description><link>http://blog.liu-kevin.com/2024/08/10/redis-cache/</link><guid isPermaLink="false">66ac8fe8d3ec870001342df3</guid><category><![CDATA[java]]></category><category><![CDATA[redis]]></category><dc:creator><![CDATA[凯文]]></dc:creator><pubDate>Sat, 10 Aug 2024 03:22:07 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><h1 id>概述</h1>
<p>redis缓存在项目中经常用到，如果每个需要的地方独立实现，会有各种各样的问题，如实现的功能不健壮，代码冗余</p>
<p>本篇的宗旨是构建一个抽象、功能完善、代码健壮的缓存模块，使接入方能够快速、便捷地使用缓存</p>
<h1 id="class">重要的class</h1>
<h3 id="cache">Cache</h3>
<p>redis中缓存的对象</p>
<pre><code>/**
 * @author liuk
 * 封装此对象的目的
 * 1. 当从redis批量获取对象时，能够清楚获取到的value对应的key
 * 当然redis返回时，是与请求时的key的顺序是一致的，如果未取到会返回null,通过顺序的一致性也是可以确定获取到的value对应的key的
 * 2. 如果该key确实不存在对应的对象，防止一直透过缓存查数据库，故该场景保存的value=null
 */
@Data
public class Cache&lt;K, V&gt; {

    public Cache() {
    }
    public Cache(K key, V value) {
        this.key = key;
        this.value = value;
    }
    /**
     * 缓存key
     */
    private K key;
    /**
     * 缓存value
     */
    private V value;
}
</code></pre>
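<p>Cache对象第2点提到的"缓存空值防穿透"，可用下例示意：用HashMap模拟redis、用&quot;NULL&quot;字符串代表value=null的Cache（均为示意，并非真实的序列化方式）：</p>

```java
import java.util.HashMap;
import java.util.Map;

public class NullCacheDemo {
    // 用HashMap模拟redis：value为"NULL"字符串时，代表缓存的是value=null的Cache对象
    public static final Map<String, String> FAKE_REDIS = new HashMap<>();
    public static int dbQueryCount = 0;

    public static String query(String key) {
        String cached = FAKE_REDIS.get(key);
        if (cached != null) {
            // 命中缓存：可能是真实值，也可能是"该key确实无数据"的空值标记
            return "NULL".equals(cached) ? null : cached;
        }
        dbQueryCount++;
        String dbValue = null; // 假设数据库中也不存在该key
        FAKE_REDIS.put(key, dbValue == null ? "NULL" : dbValue);
        return dbValue;
    }

    public static void main(String[] args) {
        query("goods:1");
        query("goods:1");
        // 第二次命中空值缓存，不再穿透到数据库
        System.out.println(dbQueryCount);
    }
}
```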
<h3 id="rediskeyenum">RedisKeyEnum</h3>
<p>redis key的枚举，包括key的构造及expire</p>
<pre><code>public enum RedisKeyEnum {

    APPLY_BATCH_EXEC_LOCK_KEY(&quot;xx:xx:exec_lock_key:%s&quot;, 10);

    RedisKeyEnum(String key, int expire) {
        this.key = key;
        this.expire = expire;
    }

    private String key;
    /**
     * second
     */
    private int expire;

    public String getKey(Object... args) {
        return String.format(key, args);
    }

    public int getExpire() {
        return expire;
    }
}
</code></pre>
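<p>key的构造基于String.format，使用方式如下（为保证示例可独立运行，这里内嵌了一份与正文一致的枚举定义）：</p>

```java
public class RedisKeyDemo {

    public enum RedisKeyEnum {
        APPLY_BATCH_EXEC_LOCK_KEY("xx:xx:exec_lock_key:%s", 10);

        private final String key;
        private final int expire;

        RedisKeyEnum(String key, int expire) {
            this.key = key;
            this.expire = expire;
        }

        // 占位符%s被参数依次填充，得到最终的redis key
        public String getKey(Object... args) {
            return String.format(key, args);
        }

        public int getExpire() {
            return expire;
        }
    }

    public static void main(String[] args) {
        System.out.println(RedisKeyEnum.APPLY_BATCH_EXEC_LOCK_KEY.getKey(123L));
        System.out.println(RedisKeyEnum.APPLY_BATCH_EXEC_LOCK_KEY.getExpire());
    }
}
```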
<h3 id="rediscache">RedisCache</h3>
<p>cache的接口，不止是redis cache   所有的cache基本都是以下方法的实现</p>
<pre><code>public interface RedisCache&lt;K, V&gt; {

    /**
     * 获取缓存
     *
     * @param code
     * @return
     */
    V getByCache(K code);


    /**
     * 批量获取缓存
     *
     * @param codes
     * @return
     */
    Map&lt;K, V&gt; getByCache(Set&lt;K&gt; codes);

    /**
     * 删除缓存
     *
     * @param code
     */
    void delCache(K code);
}

</code></pre>
<h3 id="abstractrediscache">AbstractRedisCache</h3>
<p>抽象redis cache实现</p>
<pre><code>
/**
 * @author liuk
 */
public abstract class AbstractRedisCache&lt;K, V&gt; implements RedisCache&lt;K, V&gt; {
    private static final ILog LOGGER = LogFactory.getLog(AbstractRedisCache.class);

    /**
     * 需要注入自己的的redis commands 工具
     */
    @Resource
    private JedisCommands jedisCommands;
    /**
     * 需要注入自己的的redis commands 批量获取key value的工具
     */
    @Resource
    private MultiKeyCommands multiKeyCommands;

    /**
     * 获取缓存
     *
     * @param code
     * @return
     */
    //todo key 抽象   解决多参数问题 以后再改造吧
    public V getByCache(K code) {
        if(code == null){
            return null;
        }
        String cacheStr = this.get(code);
        if (!StringUtils.isEmpty(cacheStr)) {
            Cache&lt;K, V&gt; cache = map2cache(cacheStr);
            return cache.getValue();
        }
        //缓存未命中，回源创建并回填缓存
        LOGGER.info(&quot;redis_cache||createCache||code={}&quot;, code);
        V value = createCache(code);
        this.set(code, value);
        return value;
    }

    private Cache&lt;K, V&gt; map2cache(String cacheStr) {
        Cache&lt;K, V&gt; cache = JSONObject.parseObject(cacheStr, Cache.class);
        V v = JSONObject.parseObject(cacheStr).getObject(&quot;value&quot;, valueClass());
        cache.setValue(v);
        return cache;
    }

    /**
     * 批量获取缓存
     *
     * @param codes
     * @return
     */
    public Map&lt;K, V&gt; getByCache(Set&lt;K&gt; codes) {
        
        if(CollectionUtils.isEmpty(codes)){
            return new HashMap&lt;&gt;();
        }
        Map&lt;K, V&gt; result = new HashMap&lt;&gt;();
        int max = 20;
        Set&lt;K&gt; noCacheGoodsIds = new HashSet&lt;&gt;(codes);
        Map&lt;String, K&gt; codesMap = map2String(codes);
        List&lt;K&gt; codeList = new ArrayList&lt;&gt;(codes);

        for (int i = 0; i &lt; codes.size(); i += max) {
            List&lt;K&gt; tempCodes = codeList.subList(i, Math.min(i + max, codes.size()));
            //redis 操作弱依赖
            List&lt;String&gt; values = this.mget(tempCodes);
            if (org.springframework.util.CollectionUtils.isEmpty(values)) {
                continue;
            }
            for (String value : values) {
                //mget未命中的key，对应位置返回null，跳过
                if (StringUtils.isEmpty(value)) {
                    continue;
                }
                Cache&lt;K, V&gt; cache = map2cache(value);
                if (cache.getValue() != null) {
                    result.put(codesMap.get(String.valueOf(cache.getKey())), cache.getValue());
                }
                noCacheGoodsIds.remove(codesMap.get(String.valueOf(cache.getKey())));

            }
        }
        if (!CollectionUtils.isEmpty(noCacheGoodsIds)) {
            LOGGER.info(&quot;redis_cache||createCache||size={}&quot;, noCacheGoodsIds.size());
            noCacheGoodsIds = noCacheGoodsIds.stream().filter(Objects::nonNull)
                    .collect(Collectors.toSet());
            Map&lt;K, V&gt; values = createCache(noCacheGoodsIds);
            this.set(values);
            
            if (values != null) {
                result.putAll(values);
            }
        }
        return result;
    }

    /**
     * 异步 and 弱依赖
     *
     * @param cacheMap
     */
    private void set(Map&lt;K, V&gt; cacheMap) {
        ThreadPoolExecutorEnum.BATCH_EXEC_THREAD_POOL.execute(() -&gt; {
            try {
                RedisKeyEnum keyEnum = keyEnum();
                for (K key : cacheMap.keySet()) {
                    Cache&lt;K, V&gt; cache = new Cache&lt;&gt;(key, cacheMap.get(key));
                    jedisCommands.setex(keyEnum.getKey(key), keyEnum.getExpire(), JSONObject.toJSONString(cache));
                }
            } catch (Exception e) {
                LOGGER.error(&quot;redis_cache||set||cacheMap.size={}&quot;, cacheMap.size(), e);
            }
        });

    }

    /**
     * redis弱依赖，超时或故障不影响业务
     * todo redis sentinal熔断
     *
     * @param keyList
     * @return
     */
    private List&lt;String&gt; mget(List&lt;K&gt; keyList) {
        try {
            RedisKeyEnum keyEnum = keyEnum();
            String[] keys = new String[keyList.size()];
            for (int j = 0; j &lt; keyList.size(); j++) {
                keys[j] = keyEnum.getKey(keyList.get(j));
            }
            List&lt;String&gt; values = multiKeyCommands.mget(keys);
            return values;
        } catch (Exception e) {
            LOGGER.error(&quot;redis_cache||mget||keys.size={}&quot;, keyList.size(), e);
        }
        return new ArrayList&lt;&gt;();
    }

    /**
     * redis弱依赖，超时或故障不影响业务
     * 当redis超时时，直接调用createCache  是仍然能获取到value的，但是数据库压力会比较大，需考虑是否可承压，可通过开关控制是否可直接调用 createValue
     *
     * @param key
     * @return
     */
    private String get(K key) {
        try {
            RedisKeyEnum keyEnum = keyEnum();
            String cacheStr = jedisCommands.get(keyEnum.getKey(key));
            return cacheStr;
        } catch (Exception e) {
            LOGGER.error(&quot;redis_cache||get||key={}&quot;, key, e);
        }
        return null;
    }

    /**
     * redis弱依赖，当redis超时或故障时不影响业务
     * 此处如有需要可考虑异步 参考 delCache
     * @param key
     * @param value
     */
    private void set(K key, V value) {
        try {
            RedisKeyEnum keyEnum = keyEnum();
            Cache&lt;K, V&gt; cache = new Cache&lt;&gt;(key, value);
            jedisCommands.setex(keyEnum.getKey(key), keyEnum.getExpire(), JSONObject.toJSONString(cache));
        } catch (Exception e) {
            LOGGER.error(&quot;redis_cache||set||key={}&quot;, key, e);
        }
    }

    /**
     * 映射code，进行K与string的映射，否则当key是Long时，可能会映射为Integer而非Long
     * 故通过此方法，记录key的原值
     *
     * @param codes
     * @return
     */
    private Map&lt;String, K&gt; map2String(Set&lt;K&gt; codes) {
        Map&lt;String, K&gt; result = new HashMap&lt;&gt;();
        for (K code : codes) {
            result.put(String.valueOf(code), code);
        }
        return result;
    }

    /**
     * 删除缓存
     * 弱依赖，删除失败不影响现有逻辑，等缓存超时，但是也需要关注删除失败的场景，如果出现，则后续缓存失效前使用的都是老数据，根据数据及时性，做不同的逻辑
     *
     * @param code
     */
    public void delCache(K code) {
        try {

            RedisKeyEnum keyEnum = keyEnum();
            String key = keyEnum.getKey(code);

            //有些是在事务中的操作，为了尽快释放事务，异步删除
            ThreadPoolExecutorEnum.REDIS_CACHE_THREAD_POOL.execute(() -&gt; {
                jedisCommands.del(key);
            });

            //延时双删，防止主备延迟，导致上面删除后，仍然读到旧的数据，此处延时双删下，如不需要可删除
            ScheduledExecutor.run(() -&gt; {
                jedisCommands.del(key);
            }, 3);
        } catch (Exception e) {
            LOGGER.error(&quot;redis_cache||delCache||key={}&quot;, code, e);
        }

    }

    /**
     * 缓存不存在时单个生成缓存对象
     *
     * @param key
     * @return
     */
    private V createCache(K key) {
        Map&lt;K, V&gt; map = createCache(Sets.newHashSet(key));
        if (map != null &amp;&amp; map.containsKey(key)) {
            return map.get(key);
        }
        return null;
    }

    /**
     * 缓存不存在时，用于批量生成缓存对象
     *
     * @param key
     * @return
     */
    protected abstract Map&lt;K, V&gt; createCache(Set&lt;K&gt; key);

    /**
     * 缓存key枚举  存在key及expire
     *
     * @return
     */
    protected abstract RedisKeyEnum keyEnum();

    /**
     * 缓存class类型，用于将字符串map为V 对象，因为泛型在编译后会被擦除，所以通过泛型无法正常映射
     *
     * @return
     */
    protected abstract Class&lt;V&gt; valueClass();
}


</code></pre>
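<p>接入方只需继承并实现createCache、keyEnum、valueClass三个抽象方法。下面用一个本地HashMap模拟redis的精简示意演示接入方式（MiniCache、GoodsNameCache均为示意类，省略了keyEnum/valueClass、批量接口与异常处理）：</p>

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class CacheUsageDemo {

    // 精简版抽象缓存：get miss时调用createCache并回填（用HashMap模拟redis）
    public abstract static class MiniCache<K, V> {
        private final Map<K, V> fakeRedis = new HashMap<>();

        public V getByCache(K key) {
            if (fakeRedis.containsKey(key)) {
                return fakeRedis.get(key);
            }
            Map<K, V> created = createCache(Collections.singleton(key));
            V value = created.get(key); // 可能为null，空值同样回填，防止穿透
            fakeRedis.put(key, value);
            return value;
        }

        protected abstract Map<K, V> createCache(Set<K> keys);
    }

    // 接入方：只需实现createCache（真实接入还需实现keyEnum()与valueClass()）
    public static class GoodsNameCache extends MiniCache<Long, String> {
        public int dbHit = 0;

        @Override
        protected Map<Long, String> createCache(Set<Long> keys) {
            dbHit++; // 统计回源次数
            Map<Long, String> map = new HashMap<>();
            for (Long id : keys) {
                map.put(id, "goods-" + id);
            }
            return map;
        }
    }

    public static void main(String[] args) {
        GoodsNameCache cache = new GoodsNameCache();
        System.out.println(cache.getByCache(1L));
        System.out.println(cache.getByCache(1L)); // 第二次命中缓存，不回源
        System.out.println(cache.dbHit);
    }
}
```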
<h3 id="threadpoolexecutorenum">ThreadPoolExecutorEnum</h3>
<p>异常处理线程池</p>
<pre><code>public enum ThreadPoolExecutorEnum {

    BATCH_EXEC_THREAD_POOL(2, 5, 60, 100, &quot;batch_exec_thread_pool&quot;, new ThreadPoolExecutor.AbortPolicy()),
    REDIS_CACHE_THREAD_POOL(2, 5, 60, 100, &quot;cache_batch_set_thread_pool&quot;, new ThreadPoolExecutor.CallerRunsPolicy()),
    ;

    private TaxiThreadPoolExecutor executor;

    ThreadPoolExecutorEnum(int corePoolSize, int maxPoolSize, int keepAliveTime, int queueSize, String name, RejectedExecutionHandler handler) {
        //TaxiThreadPoolExecutor 保证trace等数据完整
        this.executor = new TaxiThreadPoolExecutor(corePoolSize, maxPoolSize, keepAliveTime, TimeUnit.SECONDS,
                new ArrayBlockingQueue&lt;&gt;(queueSize), new ThreadFactoryBuilder(name), handler);
    }


    public void execute(Runnable runnable) {
        executor.execute(runnable);
    }
}
</code></pre>
<h3 id>延时处理器</h3>
<p>用于进行延时处理</p>
<pre><code>public class ScheduledExecutor {

    /**
     *  延时处理器
     */
    private static ScheduledExecutorService executorService = new TaxiScheduledThreadPoolExecutorWrapper(2,
            new ThreadFactoryBuilder(&quot;scheduled_executor&quot;));


    /**
     * 防止出现死锁，该线程池不可在其它地方使用
     */
    private static ExecutorService runExecutor = ExecutorServiceMetrics.monitor(
            Metrics.globalRegistry, new TaxiThreadPoolExecutor(2, 5, 60, TimeUnit.SECONDS,
                    new ArrayBlockingQueue&lt;&gt;(1000), new ThreadFactoryBuilder(&quot;scheduled_run_thread_pool&quot;), new ThreadPoolExecutor.DiscardPolicy()),
            &quot;scheduled_run_thread_pool&quot;
    );
    /**
     * 延时执行
     *
     * @param runnable
     * @param second   秒
     */
    public static void run(Runnable runnable, int second) {
        //ScheduledExecutorService 用于延时，为了防止阻塞其它的延时处理逻辑，延时后通过新的线程池执行
        Runnable threadPoolRunnable = () -&gt; {
            runExecutor.execute(runnable);
        };
        executorService.schedule(threadPoolRunnable, second, TimeUnit.SECONDS);
    }

}
</code></pre>
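<p>delCache中"立即删一次+延时再删一次"的延时双删，可用JDK自带的ScheduledExecutorService做一个可运行示意（正文使用的是包装后的线程池，这里的delayMillis参数与计数方式均为示意）：</p>

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class DoubleDeleteDemo {

    // 返回实际执行的删除次数：立即删一次 + 延时再删一次
    public static int doubleDelete(long delayMillis) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        AtomicInteger delCount = new AtomicInteger();
        CountDownLatch latch = new CountDownLatch(1);

        Runnable del = delCount::incrementAndGet; // 用计数模拟jedisCommands.del

        del.run(); // 第一次删除：立即执行（正文中为异步线程池执行）
        scheduler.schedule(() -> {
            del.run(); // 第二次删除：延时执行，规避主从延迟读到旧数据
            latch.countDown();
        }, delayMillis, TimeUnit.MILLISECONDS);

        latch.await(5, TimeUnit.SECONDS);
        scheduler.shutdown();
        return delCount.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(doubleDelete(100));
    }
}
```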
<h1 id="key">复杂的数据key缓存</h1>
<ul>
<li>支持复杂的key（如对象）及泛型的数据缓存(如泛型)</li>
<li>支持是否只读缓存，不createCache</li>
</ul>
<h3 id="cachekey">CacheKey</h3>
<pre><code>public interface CacheKey {

    /**
     * 单个缓存查询
     *
     * @return
     */
    String key();

    /**
     * 当无缓存时是否创建缓存
     *
     * @return
     */
    boolean isCreateCache();
}
</code></pre>
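<p>CacheKey的一个实现示例如下（假设按"城市+商品"维度缓存，CityGoodsKey类名及其key格式均为示意）：</p>

```java
public class CacheKeyDemo {

    public interface CacheKey {
        String key();
        boolean isCreateCache();
    }

    // 示意：按"城市+商品"复合维度构造缓存key
    public static class CityGoodsKey implements CacheKey {
        private final int cityId;
        private final long goodsId;
        private final boolean createCache;

        public CityGoodsKey(int cityId, long goodsId, boolean createCache) {
            this.cityId = cityId;
            this.goodsId = goodsId;
            this.createCache = createCache;
        }

        @Override
        public String key() {
            return "xx:city_goods:" + cityId + ":" + goodsId;
        }

        @Override
        public boolean isCreateCache() {
            // 只读场景返回false：缓存miss时不回源createCache
            return createCache;
        }
    }

    public static void main(String[] args) {
        CityGoodsKey key = new CityGoodsKey(1, 100L, false);
        System.out.println(key.key());
        System.out.println(key.isCreateCache());
    }
}
```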
<h3 id="abstractcomplexrediscache">AbstractComplexRedisCache</h3>
<pre><code>/**
 * @author liuk
 * 支持复杂的key及集合的缓存，支持批量获取
 */
public abstract class AbstractComplexRedisCache&lt;K extends CacheKey, V&gt; implements RedisCache&lt;K, V&gt; {
    private static final ILog LOGGER = LogFactory.getLog(AbstractComplexRedisCache.class);

    @Resource
    private JedisCommands jedisCommands;


    @Resource
    private MultiKeyCommands multiKeyCommands;

    /**
     * 获取缓存
     *
     * @param cacheKey
     * @return
     */
    public V getByCache(K cacheKey) {
        if (cacheKey == null) {
            return null;
        }
        String cacheStr = this.get(cacheKey);
        if (!StringUtils.isEmpty(cacheStr)) {
            return getValue(cacheKey,cacheStr);
        }
        //缓存未命中
        LOGGER.info(&quot;redis_cache||createCache||code={}&quot;, cacheKey.key());
        if (cacheKey.isCreateCache()) {
            V value = createCache(cacheKey);
            this.set(cacheKey, value);
            return value;
        } else {
            return null;
        }
    }

    private Cache&lt;K, V&gt; map2cache(K cacheKey, String cacheStr) {
        Cache&lt;K, V&gt; cache = new Cache&lt;&gt;();
        V v = JSONObject.parseObject(cacheStr).getObject(&quot;value&quot;, type());
        cache.setValue(v);
        cache.setKey(cacheKey);
        return cache;
    }


    /**
     * redis弱依赖，超时或故障不影响业务
     *
     * @param key
     * @return
     */
    private String get(K key) {
        try {
            String cacheStr = jedisCommands.get(key.key());
            return cacheStr;
        } catch (Exception e) {
            LOGGER.error(&quot;redis_cache||get||key={}&quot;, key, e);
        }
        return null;
    }

    /**
     * redis弱依赖，超时或故障不影响业务
     *
     * @param cacheKey
     * @param value
     */
    private void set(K cacheKey, V value) {
        try {
            if (encapsulation()) {
                Cache&lt;K, V&gt; cache = new Cache&lt;&gt;(cacheKey, value);
                jedisCommands.setex(cacheKey.key(), this.expire(), JSONObject.toJSONString(cache));
            } else {
                jedisCommands.setex(cacheKey.key(), this.expire(), String.valueOf(value));
            }
        } catch (Exception e) {
            LOGGER.error(&quot;redis_cache||set||key={}&quot;, cacheKey.key(), e);
        }
    }


    /**
     * Delete a cache entry
     *
     * @param code
     */
    public void delCache(K code) {
        try {

            String key = code.key();

            //some callers are inside a transaction; delete asynchronously to release it sooner
            ThreadPoolExecutorEnum.REDIS_CACHE_THREAD_POOL.execute(() -&gt; {
                jedisCommands.del(key);
            });

            //delayed double delete
            ScheduledExecutor.run(() -&gt; {
                jedisCommands.del(key);
            }, 1);
        } catch (Exception e) {
            LOGGER.error(&quot;redis_cache||delCache||key={}&quot;, code);
        }

    }

    /**
     * Create a single cache object
     *
     * @param key
     * @return
     */
    protected abstract V createCache(K key);


    /**
     * Create cache objects in batch
     *
     * @param key
     * @return
     */
    protected abstract Map&lt;K, V&gt; createCache(Set&lt;K&gt; key);

    /**
     * Cache value type
     *
     * @return
     */
    protected abstract Type type();

    /**
     * Expiration time in seconds
     *
     * @return
     */
    protected abstract int expire();


    @Override
    public Map&lt;K, V&gt; getByCache(Set&lt;K&gt; codes) {

        Map&lt;K, V&gt; result = new HashMap&lt;&gt;();

        List&lt;String&gt; keys = new ArrayList&lt;&gt;();
        Map&lt;String, K&gt; keyMap = new HashMap&lt;&gt;();
        for (K code : codes) {
            String key = code.key();
            keys.add(key);
            keyMap.put(key, code);
        }
        int max = 20;
        Set&lt;K&gt; noCacheGoodsIds = new HashSet&lt;&gt;(codes);
        for (int i = 0; i &lt; codes.size(); i += max) {
            List&lt;String&gt; tempCodes = keys.subList(i, Math.min(i + max, codes.size()));
            //Redis access is a weak dependency
            List&lt;String&gt; values = this.mget(tempCodes);
            if (org.springframework.util.CollectionUtils.isEmpty(values)) {
                continue;
            }

            for (int valIndex = 0; valIndex &lt; values.size(); valIndex++) {
                String value = values.get(valIndex);
                K key = keyMap.get(tempCodes.get(valIndex));
                //mget can return null for missing keys; skip them
                if (StringUtils.isEmpty(value)) {
                    continue;
                }
                V cache = getValue(key,value);
                if (cache != null) {
                    result.put(key, cache);
                }
                noCacheGoodsIds.remove(key);
            }
        }
        if (!CollectionUtils.isEmpty(noCacheGoodsIds)) {
            LOGGER.info(&quot;redis_cache||createCache||size={}&quot;, noCacheGoodsIds.size());
            noCacheGoodsIds = noCacheGoodsIds.stream().filter(Objects::nonNull).filter(CacheKey::isCreateCache).collect(Collectors.toSet());
            Map&lt;K, V&gt; values = createCache(noCacheGoodsIds);
            this.set(values);
            if (values != null) {
                result.putAll(values);
            }
        }
        return result;
    }

    private List&lt;String&gt; mget(List&lt;String&gt; keyList) {
        try {
            String[] keys = new String[keyList.size()];
            for (int j = 0; j &lt; keyList.size(); j++) {
                keys[j] = keyList.get(j);
            }
            List&lt;String&gt; values = multiKeyCommands.mget(keys);
            return values;
        } catch (Exception e) {
            LOGGER.error(&quot;redis_cache||mget||keys.size={}&quot;, keyList.size());
        }
        return new ArrayList&lt;&gt;();
    }

    private void set(Map&lt;K, V&gt; cacheMap) {
        if (cacheMap == null || cacheMap.size() == 0) {
            return;
        }
        ThreadPoolExecutorEnum.REDIS_CACHE_THREAD_POOL.execute(() -&gt; {
            try {
                for (K key : cacheMap.keySet()) {
                    if(encapsulation()){
                        Cache&lt;K, V&gt; cache = new Cache&lt;&gt;(key, cacheMap.get(key));
                        jedisCommands.setex(key.key(), this.expire(), JSONObject.toJSONString(cache));
                    }else {
                        jedisCommands.setex(key.key(), this.expire(), String.valueOf(cacheMap.get(key)));
                    }

                }
            } catch (Exception e) {
                LOGGER.error(&quot;redis_cache||set||cacheMap.size={}&quot;, cacheMap.size());
            }
        });

    }
    /**
     * Read the Redis value; different implementations decode it differently
     */
    private V getValue(K cacheKey,String cacheStr){
        if(encapsulation()){
            Cache&lt;K, V&gt; cache = map2cache(cacheKey, cacheStr);
            return cache.getValue();
        }else if(type().equals(Integer.class)){
            //TODO: temporary special case for plain Integer values; revisit later
            V v = (V) (Object) Integer.parseInt(cacheStr);
            return v;
        }else {
            throw new BizException(ErrorCodeEnum.SYSTEM_ERROR);
        }
    }
    
    /**
     * Whether to wrap the value in a Cache object.
     * Values that need raw operations such as incr must not be wrapped.
     *
     * @return
     */
    protected boolean encapsulation() {
        return true;
    }
    
    
    /**
     * Refresh a single cache entry
     *
     * @param code
     */
    public void refreshCache(K code) {
        try {
            //some callers are inside a transaction; refresh asynchronously to release it sooner
            ThreadPoolExecutorEnum.REDIS_CACHE_THREAD_POOL.execute(() -&gt; {
                V value = createCache(code);
                set(code, value);
            });

            //delayed refresh
            ScheduledExecutor.run(() -&gt; {
                V value = createCache(code);
                set(code, value);
            }, 3);
        } catch (Exception e) {
            LOGGER.error(&quot;redis_cache||refreshCache||key={}&quot;, code);
        }
    }

    /**
     * Refresh in batch: when deleting would send a burst of misses through to the database, refresh instead of delete
     *
     * @param codes
     */
    public void refreshCache(Set&lt;K&gt; codes) {
        try {
            //some callers are inside a transaction; refresh asynchronously to release it sooner
            ThreadPoolExecutorEnum.REDIS_CACHE_THREAD_POOL.execute(() -&gt; {
                Map&lt;K, V&gt; values = createCache(codes);
                set(values);
            });

            //delayed refresh
            ScheduledExecutor.run(() -&gt; {
                Map&lt;K, V&gt; values = createCache(codes);
                set(values);
            }, 3);
        } catch (Exception e) {
            LOGGER.error(&quot;redis_cache||refreshCache||keys={}&quot;, JSONObject.toJSONString(codes));
        }
    }


}
</code></pre>
<h1 id="usage">Usage</h1>
<h3 id="abstractrediscache">Using AbstractRedisCache</h3>
<pre><code>@Service
public class GoodsDao extends AbstractRedisCache&lt;Long, Goods&gt; {
    
    @Resource
    private GoodsCustomMapper goodsCustomMapper;
    
    @Override
    protected Map&lt;Long, Goods&gt; createCache(Set&lt;Long&gt; key) {
        Map&lt;Long, Goods&gt; result = new HashMap&lt;&gt;();
        GoodsExample goodsExample = new GoodsExample();
        goodsExample.createCriteria().andIsDeletedEqualTo(DeletedStatusEnum.NORMAL.getCode())
                .andGoodsIdIn(new ArrayList&lt;&gt;(key));
        List&lt;Goods&gt; goodsList = goodsCustomMapper.selectByExample(goodsExample);
        if (CollectionUtils.isEmpty(goodsList)) {
            return result;
        }
        for (Goods goods : goodsList) {
            result.put(goods.getGoodsId(), goods);
        }
        return result;
    }


    @Override
    protected RedisCacheKeyEnum keyEnum() {
        return RedisCacheKeyEnum.GOODS_CACHE_KEY;
    }

    @Override
    protected Class&lt;Goods&gt; valueClass() {
        return Goods.class;
    }
}
</code></pre>
<h3 id="abstractcomplexrediscache">Using AbstractComplexRedisCache</h3>
<pre><code>public interface CacheKey {

    /**
     * The key for a single cache lookup
     *
     * @return
     */
    String key();

    /**
     * Whether to create the cache entry on a miss
     *
     * @return
     */
    boolean isCreateCache();
}
</code></pre>
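The `CityGoodsCacheKey` used in the next example is not shown in the post; a minimal sketch of such a `CacheKey` implementation might look like the following (the field names and the key format here are assumptions, and the interface is repeated only so the sketch compiles on its own):

```java
// CacheKey, copied from the post, so the sketch compiles on its own
interface CacheKey {
    String key();
    boolean isCreateCache();
}

// Hypothetical key class in the spirit of CityGoodsCacheKey (which the post
// does not show); the fields and the key format are assumptions
class DemoCityGoodsCacheKey implements CacheKey {
    private final long cityId;
    private final int mallType;

    DemoCityGoodsCacheKey(long cityId, int mallType) {
        this.cityId = cityId;
        this.mallType = mallType;
    }

    @Override
    public String key() {
        // a stable, prefixed Redis key derived from the fields
        return "city_goods_" + mallType + "_" + cityId;
    }

    @Override
    public boolean isCreateCache() {
        // on a miss, always fall through to the database
        return true;
    }
}
```

The only hard requirement is that `key()` is stable and unique per logical entry; everything else is free-form.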
<pre><code>@Component
public class CityGoodsCache extends AbstractComplexRedisCache&lt;CityGoodsCacheKey, List&lt;Long&gt;&gt; {

    @Resource
    private GoodsCustomMapper goodsCustomMapper;

    /**
     * Single-key load: query all goods ids for this city key
     *
     * @param key
     * @return
     */
    @Override
    protected List&lt;Long&gt; createCache(CityGoodsCacheKey key) {
        GoodsCityQueryCondition condition = new GoodsCityQueryCondition();
        condition.setMallType(key.getMallType());
        condition.setGoodsState(GoodsShelvesStateEnum.put.getCode());
        condition.setCityId(key.getCityId().toString());
        condition.setBizType(key.getBizType());
        List&lt;Goods&gt; goods = goodsCustomMapper.selectByCityId(condition);
        List&lt;Long&gt; goodsIdList = new ArrayList&lt;&gt;();
        if (CollectionUtils.isNotEmpty(goods)) {
            goodsIdList = goods.stream().map(Goods::getGoodsId).collect(Collectors.toList());
        }
        return goodsIdList;
    }

    @Override
    protected Map&lt;CityGoodsCacheKey, List&lt;Long&gt;&gt; createCache(Set&lt;CityGoodsCacheKey&gt; key) {
        Map&lt;CityGoodsCacheKey, List&lt;Long&gt;&gt; result = new HashMap&lt;&gt;();
        for (CityGoodsCacheKey cacheKey : key) {
            result.put(cacheKey, createCache(cacheKey));
        }
        return result;
    }

    @Override
    protected int expire() {
        return 60 * 10;
    }

    @Override
    protected Type type() {
        return new TypeToken&lt;List&lt;Long&gt;&gt;() {
        }.getType();
    }
}
</code></pre>
<ul>
<li>For more advanced usage, see <code>https://blog.liu-kevin.com/ghost/#/editor/67c844aed3ec870001342e51</code></li>
</ul>
<h1 id="versioned-cache-component">A cache component with version support</h1>
<p>When using AbstractComplexRedisCache, the obvious way to upgrade a cache is to change the cache key.<br>
That, however, creates two problems:</p>
<ul>
<li>for a while, the cached data volume doubles</li>
<li>during a rolling deploy, the new key may already be cached while the underlying data changes through the old key, so the new key serves stale data until it expires</li>
</ul>
<p>To solve both problems, a version number is stored with each cache entry.<br>
Each key then has exactly one cached copy. When data changes through an old node, new nodes read the same key, see that its version is lower than their own, and automatically reload the latest data; old nodes see a version no lower than their own and use the entry directly.</p>
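The version comparison described above can be sketched standalone as follows (the class and method here are illustrative only; in the component itself this logic sits inside `isCreateCache`, which additionally consults `CacheKey.isCreateCache` and handles a total miss):

```java
// Standalone sketch of the version comparison only
class VersionCheck {

    /**
     * @param cachedVersion  version stored inside the cache entry (null for legacy entries written without one)
     * @param currentVersion version compiled into this node
     * @return true if the entry is stale and must be rebuilt from the database
     */
    static boolean shouldRecreate(Integer cachedVersion, int currentVersion) {
        if (cachedVersion == null) {
            // legacy entry without a version: keep using it rather than stampede the database
            return false;
        }
        // written by an older node after a data change: rebuild
        return cachedVersion < currentVersion;
    }
}
```

Note the asymmetry: a newer node rebuilds entries written by older nodes, but an older node happily serves entries written by newer ones, which is what keeps a rolling deploy consistent.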
<h3 id="cache">The Cache object</h3>
<pre><code>@Data
public class ComplexCache&lt;K, V&gt; extends Cache&lt;K, V&gt; {
    public ComplexCache() {
        //init(0) already sets traceId, timestamp, tag and version
        init(0);
    }

    public ComplexCache(K key, V value) {
        super(key, value);
        init(0);
    }

    public ComplexCache(K key, V value,int version) {
        super(key, value);
        init(version);
    }

    private void init(int version) {
        this.traceId = TraceContext.getContext().getTraceId();
        this.timestamp = System.currentTimeMillis();
        this.tag = 0;
        this.version = version;
    }


    //traceId at cache time, to help locate problems
    private String traceId;
    //cache version
    private Integer version;
    //timestamp at cache time
    private Long timestamp;
    /**
     * 0 - valid, 1 - createCacheIng; marks whether the entry needs a refresh, to prevent a stampede to the database when the cache expires. Not implemented in this version.
     */
    private Integer tag;

}
</code></pre>
<h3 id="cache-component">The cache abstract component</h3>
<pre><code>
public abstract class AbstractComplexRedisVersionCache&lt;K extends CacheKey, V&gt; implements RedisCache&lt;K, V&gt; {
    private static final ILog LOGGER = LogFactory.getLog(AbstractComplexRedisVersionCache.class);

    @Resource
    private JedisCommands jedisCommands;


    @Resource
    private MultiKeyCommands multiKeyCommands;

    /**
     * Read from cache
     *
     * @param cacheKey
     * @return
     */
    public V getByCache(K cacheKey) {
        if (cacheKey == null) {
            return null;
        }
        String cacheStr = this.get(cacheKey);
        ComplexCache&lt;K, V&gt; cache = null;
        if (!StringUtils.isEmpty(cacheStr)) {
            if (encapsulation()) {
                cache = map2cache(cacheKey, cacheStr);
            } else {
                return getBaseValue(cacheStr);
            }
        }
        boolean isCreateCache = isCreateCache(cacheKey, cache);
        if (!isCreateCache) {
            if (cache == null) {
                return null;
            }
            return cache.getValue();
        }
        //cache miss or stale version: create and store
        LOGGER.info(&quot;redis_cache||createCache||code={}&quot;, cacheKey.key());

        V value = createCache(cacheKey);
        this.set(cacheKey, value);
        return value;

    }

    /**
     * Whether the cache entry should be (re)created
     *
     * @param cacheKey
     * @param cache
     * @return
     */
    private boolean isCreateCache(K cacheKey, ComplexCache&lt;K, V&gt; cache) {
        if (!cacheKey.isCreateCache()) {
            return false;
        }
        if (cache == null) {
            return true;
        }
        if (cache.getVersion() == null) {
            return false;
        }
        if (cache.getVersion() &lt; version()) {
            return true;
        }
        return false;
    }

    private V getBaseValue(String cacheStr) {
        if (type().equals(Integer.class)) {
            //TODO: temporary special case for plain Integer values; revisit later
            V v = (V) (Object) Integer.parseInt(cacheStr);
            return v;
        } else {
            throw new BizException(ErrorCodeEnum.SYSTEM_ERROR);
        }
    }

    private ComplexCache&lt;K, V&gt; map2cache(K cacheKey, String cacheStr) {
        ComplexCache&lt;K, V&gt; cache = new ComplexCache&lt;&gt;();
        JSONObject cacheJSON = JSONObject.parseObject(cacheStr);
        V v = cacheJSON.getObject(&quot;value&quot;, type());
        cache.setValue(v);
        cache.setKey(cacheKey);
        cache.setTag(cacheJSON.getInteger(&quot;tag&quot;));
        cache.setTimestamp(cacheJSON.getLong(&quot;timestamp&quot;));
        cache.setVersion(cacheJSON.getInteger(&quot;version&quot;));
        cache.setTraceId(cacheJSON.getString(&quot;traceId&quot;));
        return cache;
    }


    /**
     * Redis is a weak dependency: a timeout or failure must not break business logic
     *
     * @param key
     * @return
     */
    private String get(K key) {
        try {
            String cacheStr = jedisCommands.get(key.key());
            return cacheStr;
        } catch (Exception e) {
            LOGGER.error(&quot;redis_cache||get||key={}&quot;, key);
        }
        return null;
    }

    /**
     * Redis is a weak dependency: a timeout or failure must not break business logic
     *
     * @param cacheKey
     * @param value
     */
    private void set(K cacheKey, V value) {
        try {
            if (encapsulation()) {
                ComplexCache&lt;K, V&gt; cache = new ComplexCache&lt;&gt;(cacheKey, value, version());
                jedisCommands.setex(cacheKey.key(), this.expire(), JSONObject.toJSONString(cache));
            } else {
                jedisCommands.setex(cacheKey.key(), this.expire(), String.valueOf(value));
            }
        } catch (Exception e) {
            LOGGER.error(&quot;redis_cache||set||key={}&quot;, cacheKey.key());
        }
    }


    /**
     * Delete a cache entry
     *
     * @param code
     */
    public void delCache(K code) {
        try {
            String key = code.key();

            //some callers are inside a transaction; delete asynchronously to release it sooner
            ThreadPoolExecutorEnum.REDIS_CACHE_THREAD_POOL.execute(() -&gt; {
                jedisCommands.del(key);
            });

            //delayed double delete
            ScheduledExecutor.run(() -&gt; {
                jedisCommands.del(key);
            }, 3);
        } catch (Exception e) {
            LOGGER.error(&quot;redis_cache||delCache||key={}&quot;, code);
        }
    }

    /**
     * Refresh a single cache entry
     *
     * @param code
     */
    public void refreshCache(K code) {
        try {
            //some callers are inside a transaction; refresh asynchronously to release it sooner
            ThreadPoolExecutorEnum.REDIS_CACHE_THREAD_POOL.execute(() -&gt; {
                V value = createCache(code);
                set(code, value);
            });

            //delayed refresh
            ScheduledExecutor.run(() -&gt; {
                V value = createCache(code);
                set(code, value);
            }, 3);
        } catch (Exception e) {
            LOGGER.error(&quot;redis_cache||refreshCache||key={}&quot;, code);
        }
    }

    /**
     * Refresh cache entries in batch
     *
     * @param codes
     */
    public void refreshCache(Set&lt;K&gt; codes) {
        try {
            //some callers are inside a transaction; refresh asynchronously to release it sooner
            ThreadPoolExecutorEnum.REDIS_CACHE_THREAD_POOL.execute(() -&gt; {
                Map&lt;K, V&gt; values = createCache(codes);
                set(values);
            });

            //delayed refresh
            ScheduledExecutor.run(() -&gt; {
                Map&lt;K, V&gt; values = createCache(codes);
                set(values);
            }, 3);
        } catch (Exception e) {
            LOGGER.error(&quot;redis_cache||refreshCache||keys={}&quot;, JSONObject.toJSONString(codes));
        }
    }

    @Override
    public Map&lt;K, V&gt; getByCache(Set&lt;K&gt; codes) {

        Map&lt;K, V&gt; result = new HashMap&lt;&gt;();

        List&lt;String&gt; keys = new ArrayList&lt;&gt;();
        Map&lt;String, K&gt; keyMap = new HashMap&lt;&gt;();
        for (K code : codes) {
            String key = code.key();
            keys.add(key);
            keyMap.put(key, code);
        }
        int max = 20;
        Set&lt;K&gt; noCacheGoodsIds = new HashSet&lt;&gt;(codes);
        for (int i = 0; i &lt; codes.size(); i += max) {
            List&lt;String&gt; tempCodes = keys.subList(i, Math.min(i + max, codes.size()));
            //Redis access is a weak dependency
            List&lt;String&gt; values = this.mget(tempCodes);
            if (org.springframework.util.CollectionUtils.isEmpty(values)) {
                continue;
            }

            for (int valIndex = 0; valIndex &lt; values.size(); valIndex++) {
                String value = values.get(valIndex);
                K key = keyMap.get(tempCodes.get(valIndex));
                //mget can return null for missing keys; skip them
                if (StringUtils.isEmpty(value)) {
                    continue;
                }

                boolean isCreateCache;
                V v;
                if (!encapsulation()) {
                    isCreateCache = isCreateCache(key, null);
                    v = getBaseValue(value);
                } else {
                    ComplexCache&lt;K, V&gt; cache = map2cache(key, value);
                    isCreateCache = isCreateCache(key, cache);
                    v = cache.getValue();
                }
                if (!isCreateCache) {
                    if (v != null) {
                        result.put(key, v);
                    }
                    noCacheGoodsIds.remove(key);
                }
            }
        }
        if (!CollectionUtils.isEmpty(noCacheGoodsIds)) {
            LOGGER.info(&quot;redis_cache||createCache||size={}&quot;, noCacheGoodsIds.size());
            noCacheGoodsIds = noCacheGoodsIds.stream().filter(Objects::nonNull)
                    .filter(CacheKey::isCreateCache)
                    .collect(Collectors.toSet());
            Map&lt;K, V&gt; values = createCache(noCacheGoodsIds);
            this.set(values);
            if (values != null) {
                result.putAll(values);
            }
        }
        return result;
    }

    private List&lt;String&gt; mget(List&lt;String&gt; keyList) {
        try {
            String[] keys = new String[keyList.size()];
            for (int j = 0; j &lt; keyList.size(); j++) {
                keys[j] = keyList.get(j);
            }
            List&lt;String&gt; values = multiKeyCommands.mget(keys);
            return values;
        } catch (Exception e) {
            LOGGER.error(&quot;redis_cache||mget||keys.size={}&quot;, keyList.size());
        }
        return new ArrayList&lt;&gt;();
    }

    private void set(Map&lt;K, V&gt; cacheMap) {
        if (cacheMap == null || cacheMap.size() == 0) {
            return;
        }
        ThreadPoolExecutorEnum.REDIS_CACHE_THREAD_POOL.execute(() -&gt; {
            try {
                for (K key : cacheMap.keySet()) {
                    if (encapsulation()) {
                        ComplexCache&lt;K, V&gt; cache = new ComplexCache&lt;&gt;(key, cacheMap.get(key), version());
                        jedisCommands.setex(key.key(), this.expire(), JSONObject.toJSONString(cache));
                    } else {
                        jedisCommands.setex(key.key(), this.expire(), String.valueOf(cacheMap.get(key)));
                    }

                }
            } catch (Exception e) {
                LOGGER.error(&quot;redis_cache||set||cacheMap.size={}&quot;, cacheMap.size());
            }
        });

    }

    /**
     * Whether to wrap the value in a Cache object.
     * Values that need raw operations such as incr must not be wrapped.
     *
     * @return
     */
    protected boolean encapsulation() {
        return true;
    }


    /**
     * Create a single cache object
     *
     * @param key
     * @return
     */
    protected abstract V createCache(K key);


    /**
     * Create cache objects in batch
     *
     * @param key
     * @return
     */
    protected abstract Map&lt;K, V&gt; createCache(Set&lt;K&gt; key);

    /**
     * Cache value type
     *
     * @return
     */
    protected abstract Type type();

    /**
     * Expiration time in seconds
     *
     * @return
     */
    protected abstract int expire();

    /**
     * Current cache version; bump it when the cached structure changes
     *
     * @return
     */
    protected int version() {
        return 0;
    }
}

</code></pre>
</div>]]></content:encoded></item><item><title><![CDATA[A simple bitmap implementation in Java]]></title><description><![CDATA[<div class="kg-card-markdown"><p>When you need to store true/false values in as little space as possible, consider using a bitmap.</p>
<p>Bitmaps are widely used, for example in Bloom filters.</p>
<p>Below is how to implement a bitmap in Java for storing state data (0/1, true/false)</p>
<pre><code>public class BitMap {

    /**
     * Binary storage; the array holds chars like [0,0,1,1]
     */
    char[] binaryChars = null;

    /**
     * Growth factor: when binaryChars is too small it grows by this ratio
     */
    private static final double DILATATION_FACTOR = 1.3;

    /**
     * defaultChar: the char for an unset bit; since we store 0/1 this is '0'
     */
    private char defaultChar = '0';
    /**
     * valueChar: the char for a set bit; since we store 0/1 this is '1'
     */
    private char valueChar = '1';

    /**
     * Build a BitMap from its string form
     *
     * @param</code></pre></div>]]></description><link>http://blog.liu-kevin.com/2024/08/09/jian-yi-bitmap/</link><guid isPermaLink="false">66b5a8f3d3ec870001342df8</guid><category><![CDATA[java]]></category><dc:creator><![CDATA[凯文]]></dc:creator><pubDate>Fri, 09 Aug 2024 05:28:30 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>When you need to store true/false values in as little space as possible, consider using a bitmap.</p>
<p>Bitmaps are widely used, for example in Bloom filters.</p>
<p>Below is how to implement a bitmap in Java for storing state data (0/1, true/false)</p>
<pre><code>public class BitMap {

    /**
     * Binary storage; the array holds chars like [0,0,1,1]
     */
    char[] binaryChars = null;

    /**
     * Growth factor: when binaryChars is too small it grows by this ratio
     */
    private static final double DILATATION_FACTOR = 1.3;

    /**
     * defaultChar: the char for an unset bit; since we store 0/1 this is '0'
     */
    private char defaultChar = '0';
    /**
     * valueChar: the char for a set bit; since we store 0/1 this is '1'
     */
    private char valueChar = '1';

    /**
     * Build a BitMap from its string form
     *
     * @param data
     */
    public BitMap(String data) {
        // parse as an integer; Long has a size limit, BigInteger does not
        BigInteger number = new BigInteger(data);
        //convert to binary
        String binaryString = number.toString(2);
        //convert to a binary char array
        binaryChars = binaryString.toCharArray();
    }

    /**
     * Create a bitmap with the given number of bits.
     * With size bits, set() accepts index up to size-1 without triggering growth.
     *
     * @param size
     */
    public BitMap(int size) {
        binaryChars = new char[size];
        for (int i = 0; i &lt; binaryChars.length; i++) {
            binaryChars[i] = defaultChar;
        }
    }

    /**
     * Default size of 10
     */
    public BitMap() {
        this(10);
    }

    /**
     * Current length of the backing array.
     * Differs from the length of the toString() result, which trims leading zeros.
     * @return
     */
    public int size() {
        return binaryChars.length;
    }



    /**
     * Set the bit at a position
     *
     * @param index position, counted from the right end of the array towards the left, so that converting to BigInteger cannot drop trailing zeros
     * @param value true or false
     */
    public void set(int index, boolean value) {
        if (index &gt;= binaryChars.length) {
            //grow the array
            dilatation(index);
        }
        //store from the low end, so the high bits are not dropped when converting to a number
        binaryChars[binaryChars.length - 1 - index] = value ? valueChar : defaultChar;
    }

    /**
     * Grow the array
     *
     * @param size the requested index; the new length is the smaller of
     *             size+10 and size*DILATATION_FACTOR, but at least size+1
     */
    private void dilatation(int size) {
        //for small sizes (int) (size * DILATATION_FACTOR) can be &lt;= size, so force at least size + 1
        int max = Math.max(size + 1, Math.min(size + 10, (int) (size * DILATATION_FACTOR)));
        char[] temp = new char[max];
        //migrate from the right end into the new array; positions beyond the old data become defaultChar
        for (int i = 0; i &lt; temp.length; i++) {
            if (i &lt; binaryChars.length) {
                temp[temp.length - 1 - i] = binaryChars[binaryChars.length - 1 - i];
            } else {
                temp[temp.length - 1 - i] = defaultChar;
            }
        }
        binaryChars = temp;
    }

    /**
     * Whether the bit at index is set
     *
     * @param index
     * @return
     */
    public boolean get(int index) {
        //an index beyond binaryChars.length was never stored, so return false
        if (index &gt;= binaryChars.length) {
            return false;
        }
        //is it equal to valueChar
        return binaryChars[binaryChars.length - 1 - index] == valueChar;
    }

    /**
     * Convert to a string; going through BigInteger drops the leading zeros
     *
     * @return
     */
    public String toString() {
        BigInteger num = new BigInteger(new String(binaryChars), 2);
        return String.valueOf(num);
    }

    /**
     * toString
     * @param radix the base to render in, e.g. 2
     * @return
     */
    public String toString(int radix){
        BigInteger num = new BigInteger(new String(binaryChars), 2);
        String binaryString = num.toString(radix);
        return binaryString;
    }
    
    public static void main(String[] args) {
        BitMap bitMapUtil = new BitMap(10);
        bitMapUtil.set(0, true);
        System.out.println(bitMapUtil);

        bitMapUtil = new BitMap(bitMapUtil.toString());
        bitMapUtil.set(11, true);
        System.out.println(bitMapUtil);
        System.out.println(bitMapUtil.get(11));
        System.out.println(bitMapUtil.get(10));
        System.out.println(bitMapUtil.get(100));
        bitMapUtil.set(11, false);
        System.out.println(bitMapUtil);
    }
}

</code></pre>
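<p>For comparison, the JDK already ships <code>java.util.BitSet</code>, which packs bits into a <code>long[]</code> (64 flags per long), grows automatically, and returns false for indexes that were never set; it is usually the better choice in production. A rough BitSet equivalent of the <code>main()</code> demo above:</p>

```java
import java.util.BitSet;

// Rough java.util.BitSet equivalent of the BitMap demo above
class BitSetDemo {
    // Returns the same sequence of reads as the BitMap main() demo
    static boolean[] demo() {
        BitSet bits = new BitSet(10);
        bits.set(0);                         // like bitMapUtil.set(0, true)
        bits.set(11);                        // grows transparently, like dilatation()
        boolean afterSet11 = bits.get(11);   // true
        boolean bit10 = bits.get(10);        // false: never set
        boolean bit100 = bits.get(100);      // false: beyond stored data
        bits.clear(11);                      // like set(11, false)
        boolean afterClear11 = bits.get(11); // false
        return new boolean[]{afterSet11, bit10, bit100, afterClear11};
    }
}
```

The hand-rolled BitMap above still has one advantage for the blog's use case: its decimal string form round-trips through `new BitMap(String)`, whereas BitSet needs explicit serialization.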
</div>]]></content:encoded></item><item><title><![CDATA[Why ScheduledThreadPoolExecutor does not allow specifying maximumPoolSize]]></title><description><![CDATA[<div class="kg-card-markdown"><p>ScheduledThreadPoolExecutor does not let you specify maximumPoolSize, and this is a consequence of its internal implementation. In the ScheduledThreadPoolExecutor constructors, maximumPoolSize is hard-coded to Integer.MAX_VALUE, so however you try to configure it, the effective maximum thread count is always Integer.MAX_VALUE. The main reason for this design is to ensure that delayed tasks are processed promptly rather than being held back by a pool-size limit.</p>
<p>ScheduledThreadPoolExecutor uses a DelayedWorkQueue, which is unbounded and can therefore hold arbitrarily many tasks. Because the queue never fills up, and a ThreadPoolExecutor only creates threads beyond corePoolSize when its queue is full, setting maximumPoolSize would be meaningless: the queue never becomes the limiting factor. corePoolSize determines the number of threads in the pool, while keepAliveTime defines how long surplus idle threads survive once the pool exceeds corePoolSize. Since maximumPoolSize is effectively infinite, the actual pool size is governed by corePoolSize and by how fast the queue can be drained.</p>
<p>Moreover, since ScheduledThreadPoolExecutor is mainly responsible for scheduling delayed tasks, it should not run the business logic itself. Even with a large corePoolSize, threads are wasted in practice when there are few concurrent tasks. A common approach is therefore to use ScheduledThreadPoolExecutor purely as a task scheduler and hand the actual business logic to a separate standard thread pool, avoiding the waste of resources.</p>
</div>]]></description><link>http://blog.liu-kevin.com/2024/08/08/scheduledthreadpoolexecutor-wei-shi-yao-bu-neng-zhi-ding-maximumpoolsize/</link><guid isPermaLink="false">66b47836d3ec870001342df6</guid><dc:creator><![CDATA[凯文]]></dc:creator><pubDate>Thu, 08 Aug 2024 07:48:22 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>ScheduledThreadPoolExecutor does not let you specify maximumPoolSize, and this is a consequence of its internal implementation. In the ScheduledThreadPoolExecutor constructors, maximumPoolSize is hard-coded to Integer.MAX_VALUE, so however you try to configure it, the effective maximum thread count is always Integer.MAX_VALUE. The main reason for this design is to ensure that delayed tasks are processed promptly rather than being held back by a pool-size limit.</p>
<p>ScheduledThreadPoolExecutor uses a DelayedWorkQueue, which is unbounded and can therefore hold arbitrarily many tasks. Because the queue never fills up, and a ThreadPoolExecutor only creates threads beyond corePoolSize when its queue is full, setting maximumPoolSize would be meaningless: the queue never becomes the limiting factor. corePoolSize determines the number of threads in the pool, while keepAliveTime defines how long surplus idle threads survive once the pool exceeds corePoolSize. Since maximumPoolSize is effectively infinite, the actual pool size is governed by corePoolSize and by how fast the queue can be drained.</p>
<p>Moreover, since ScheduledThreadPoolExecutor is mainly responsible for scheduling delayed tasks, it should not run the business logic itself. Even with a large corePoolSize, threads are wasted in practice when there are few concurrent tasks. A common approach is therefore to use ScheduledThreadPoolExecutor purely as a task scheduler and hand the actual business logic to a separate standard thread pool, avoiding the waste of resources.</p>
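<p>The scheduler-plus-worker-pool split described above can be sketched as follows (a minimal illustration: the pool sizes, queue capacity, and the 10 ms delay are arbitrary choices, not recommendations):</p>

```java
import java.util.concurrent.*;

// Sketch of the practice described above: a single-threaded scheduler only
// triggers work, and a bounded ThreadPoolExecutor runs the business logic.
class SchedulerHandoff {

    static String runOnce() {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        ExecutorService workers = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(100),           // bounded queue
                new ThreadPoolExecutor.CallerRunsPolicy());
        try {
            CompletableFuture<String> result = new CompletableFuture<>();
            // the scheduler thread only submits; the workers do the actual work
            scheduler.schedule(
                    () -> workers.execute(() -> result.complete("done")),
                    10, TimeUnit.MILLISECONDS);
            return result.get(5, TimeUnit.SECONDS);
        } catch (Exception e) {
            return "error: " + e;
        } finally {
            scheduler.shutdown();
            workers.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(runOnce());
    }
}
```

Because the worker pool has a bounded queue and a real maximumPoolSize, it provides the backpressure that the scheduler itself cannot.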
</div>]]></content:encoded></item></channel></rss>