Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

Problem

Merge two sorted linked lists and return the result as one sorted list. The new list should be made by splicing together the nodes of the two input lists.

Example 1

Input: l1 = [1,2,4], l2 = [1,3,4]
Output: [1,1,2,3,4,4]

Example 2

Input: l1 = [], l2 = []
Output: []

Example 3

Input: l1 = [], l2 = [0]
Output: [0]

Quick Analysis

This is an Easy problem and it looks simple enough: merge two linked lists by comparing node values as you walk them. Ideally, to be economical, you would splice the two lists together in place rather than allocating new nodes.

Solution Code

public ListNode mergeTwoLists(ListNode l1, ListNode l2) {
    // Boundary checks on the inputs: if either list is null, just return the other
    if (l1 == null) {
        return l2;
    }
    if (l2 == null) {
        return l1;
    }
    // Allocate the head node of the merged list
    ListNode merged = new ListNode();
    // Pointer to the node currently being filled
    ListNode current = merged;
    // I originally guarded this while with "l1 and l2 not both null",
    // but that is unnecessary: the first two ifs inside are the exit conditions
    while (true) {
        if (l1 == null) {
            // Similar to the boundary check at the top, except here the rest of l2
            // must be spliced onto the merged list; a plain "current = l2" would
            // break the link from the nodes already built and lose the merged tail
            current.val = l2.val;
            current.next = l2.next;
            break;
        }
        if (l2 == null) {
            current.val = l1.val;
            current.next = l1.next;
            break;
        }
        // Both lists are non-empty: take the smaller head value
        if (l1.val < l2.val) {
            current.val = l1.val;
            l1 = l1.next;
        } else {
            current.val = l2.val;
            l2 = l2.next;
        }
        // Allocate the next node; this could also live at the top of the loop
        current.next = new ListNode();
        current = current.next;
    }
    // Return the head node
    return merged;
}
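For comparison, the more textbook formulation uses a dummy head and splices the existing nodes instead of copying values into freshly allocated ones. A quick sketch of that alternative (not the solution above):

public ListNode mergeTwoLists(ListNode l1, ListNode l2) {
    ListNode dummy = new ListNode(); // placeholder head, discarded at the end
    ListNode tail = dummy;
    while (l1 != null && l2 != null) {
        if (l1.val <= l2.val) {
            tail.next = l1;
            l1 = l1.next;
        } else {
            tail.next = l2;
            l2 = l2.next;
        }
        tail = tail.next;
    }
    // At most one list still has nodes left; splice the remainder on
    tail.next = (l1 != null) ? l1 : l2;
    return dummy.next;
}

This allocates nothing beyond the dummy node, which is also the "merge in place" idea mentioned in the analysis.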

Result

The ConsumeQueue essentially locates the messages of a topic by their offsets in the CommitLog, and its files are fixed-size as well.

// ConsumeQueue file size, default is 300,000 (30W) entries
private int mapedFileSizeConsumeQueue = 300000 * ConsumeQueue.CQ_STORE_UNIT_SIZE;

public static final int CQ_STORE_UNIT_SIZE = 20;

So each file is 300,000 × 20 bytes = 6,000,000 bytes, roughly 5.7 MB.


The ConsumeQueue is built by the doReput method that runs inside org.apache.rocketmq.store.DefaultMessageStore.ReputMessageService. The initial reputFromOffset is set, and the service started, by the following code in org.apache.rocketmq.store.DefaultMessageStore#start:

log.info("[SetReputOffset] maxPhysicalPosInLogicQueue={} clMinOffset={} clMaxOffset={} clConfirmedOffset={}",
maxPhysicalPosInLogicQueue, this.commitLog.getMinOffset(), this.commitLog.getMaxOffset(), this.commitLog.getConfirmOffset());
this.reputMessageService.setReputFromOffset(maxPhysicalPosInLogicQueue);
this.reputMessageService.start();

Let's look at the logic of doReput:

private void doReput() {
    if (this.reputFromOffset < DefaultMessageStore.this.commitLog.getMinOffset()) {
        log.warn("The reputFromOffset={} is smaller than minPyOffset={}, this usually indicate that the dispatch behind too much and the commitlog has expired.",
            this.reputFromOffset, DefaultMessageStore.this.commitLog.getMinOffset());
        this.reputFromOffset = DefaultMessageStore.this.commitLog.getMinOffset();
    }
    for (boolean doNext = true; this.isCommitLogAvailable() && doNext; ) {

        if (DefaultMessageStore.this.getMessageStoreConfig().isDuplicationEnable()
            && this.reputFromOffset >= DefaultMessageStore.this.getConfirmOffset()) {
            break;
        }

        // Fetch message data from the CommitLog starting at the offset
        SelectMappedBufferResult result = DefaultMessageStore.this.commitLog.getData(reputFromOffset);
        if (result != null) {
            try {
                this.reputFromOffset = result.getStartOffset();

                for (int readSize = 0; readSize < result.getSize() && doNext; ) {
                    // Validate the message and convert it into a DispatchRequest
                    DispatchRequest dispatchRequest =
                        DefaultMessageStore.this.commitLog.checkMessageAndReturnSize(result.getByteBuffer(), false, false);
                    int size = dispatchRequest.getBufferSize() == -1 ? dispatchRequest.getMsgSize() : dispatchRequest.getBufferSize();

                    if (dispatchRequest.isSuccess()) {
                        if (size > 0) {
                            // Dispatch the request; this builds both the ConsumeQueue and the IndexFile
                            DefaultMessageStore.this.doDispatch(dispatchRequest);

                            if (BrokerRole.SLAVE != DefaultMessageStore.this.getMessageStoreConfig().getBrokerRole()
                                && DefaultMessageStore.this.brokerConfig.isLongPollingEnable()) {
                                DefaultMessageStore.this.messageArrivingListener.arriving(dispatchRequest.getTopic(),
                                    dispatchRequest.getQueueId(), dispatchRequest.getConsumeQueueOffset() + 1,
                                    dispatchRequest.getTagsCode(), dispatchRequest.getStoreTimestamp(),
                                    dispatchRequest.getBitMap(), dispatchRequest.getPropertiesMap());
                            }

                            this.reputFromOffset += size;
                            readSize += size;
                            if (DefaultMessageStore.this.getMessageStoreConfig().getBrokerRole() == BrokerRole.SLAVE) {
                                DefaultMessageStore.this.storeStatsService
                                    .getSinglePutMessageTopicTimesTotal(dispatchRequest.getTopic()).incrementAndGet();
                                DefaultMessageStore.this.storeStatsService
                                    .getSinglePutMessageTopicSizeTotal(dispatchRequest.getTopic())
                                    .addAndGet(dispatchRequest.getMsgSize());
                            }
                        } else if (size == 0) {
                            this.reputFromOffset = DefaultMessageStore.this.commitLog.rollNextFile(this.reputFromOffset);
                            readSize = result.getSize();
                        }
                    } else if (!dispatchRequest.isSuccess()) {

                        if (size > 0) {
                            log.error("[BUG]read total count not equals msg total size. reputFromOffset={}", reputFromOffset);
                            this.reputFromOffset += size;
                        } else {
                            doNext = false;
                            // If user open the dledger pattern or the broker is master node,
                            // it will not ignore the exception and fix the reputFromOffset variable
                            if (DefaultMessageStore.this.getMessageStoreConfig().isEnableDLegerCommitLog() ||
                                DefaultMessageStore.this.brokerConfig.getBrokerId() == MixAll.MASTER_ID) {
                                log.error("[BUG]dispatch message to consume queue error, COMMITLOG OFFSET: {}",
                                    this.reputFromOffset);
                                this.reputFromOffset += result.getSize() - readSize;
                            }
                        }
                    }
                }
            } finally {
                result.release();
            }
        } else {
            doNext = false;
        }
    }
}

The dispatch logic leads here:

class CommitLogDispatcherBuildConsumeQueue implements CommitLogDispatcher {

    @Override
    public void dispatch(DispatchRequest request) {
        final int tranType = MessageSysFlag.getTransactionValue(request.getSysFlag());
        switch (tranType) {
            case MessageSysFlag.TRANSACTION_NOT_TYPE:
            case MessageSysFlag.TRANSACTION_COMMIT_TYPE:
                DefaultMessageStore.this.putMessagePositionInfo(request);
                break;
            case MessageSysFlag.TRANSACTION_PREPARED_TYPE:
            case MessageSysFlag.TRANSACTION_ROLLBACK_TYPE:
                break;
        }
    }
}

public void putMessagePositionInfo(DispatchRequest dispatchRequest) {
    ConsumeQueue cq = this.findConsumeQueue(dispatchRequest.getTopic(), dispatchRequest.getQueueId());
    cq.putMessagePositionInfoWrapper(dispatchRequest);
}

The actual write happens here:

private boolean putMessagePositionInfo(final long offset, final int size, final long tagsCode,
    final long cqOffset) {

    if (offset + size <= this.maxPhysicOffset) {
        log.warn("Maybe try to build consume queue repeatedly maxPhysicOffset={} phyOffset={}", maxPhysicOffset, offset);
        return true;
    }

    this.byteBufferIndex.flip();
    this.byteBufferIndex.limit(CQ_STORE_UNIT_SIZE);
    this.byteBufferIndex.putLong(offset);
    this.byteBufferIndex.putInt(size);
    this.byteBufferIndex.putLong(tagsCode);
    // ... (the method goes on to append this 20-byte entry to the mapped file)
}

This also reveals the ConsumeQueue storage format: an 8-byte CommitLog offset, a 4-byte message size, and the 8-byte hashCode of the message tag — 20 bytes per entry, exactly CQ_STORE_UNIT_SIZE.
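To make the 20-byte layout concrete, here is a minimal sketch (my illustration, not RocketMQ code) of decoding one ConsumeQueue entry from a ByteBuffer:

import java.nio.ByteBuffer;

// One ConsumeQueue entry: offset(8 bytes) + size(4 bytes) + tagsCode(8 bytes) = 20 bytes
static void readEntry(ByteBuffer buf) {
    long commitLogOffset = buf.getLong(); // where the message sits in the CommitLog
    int msgSize = buf.getInt();           // how many bytes to read from there
    long tagsCode = buf.getLong();        // the tag hashCode, used for broker-side tag filtering
    System.out.printf("offset=%d size=%d tagsCode=%d%n", commitLogOffset, msgSize, tagsCode);
}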

Actually this title is a bit off — it should really be "dynamic data source switching with Druid", merely applied inside a Spring Boot project. Preparing the code took quite a while. I used this approach during a database migration and found Druid to be very capable; it makes dynamic data source switching very convenient.

The scenario here differs slightly from how I originally used it: in that project the switch was driven by a configuration center and applied globally, while this example adds the ability to choose the data source per method via an annotation on the interface.

The first part is the core: how to switch data sources on top of Spring JDBC and Druid. We extend org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource; its determineCurrentLookupKey method is invoked to obtain the object that decides which data source to use — the lookupKey. Reading that class, you can see the lookupKey is exactly what routes the call to a concrete data source.

public class DynamicDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        if (DatabaseContextHolder.getDatabaseType() != null) {
            return DatabaseContextHolder.getDatabaseType().getName();
        }
        return DatabaseType.MASTER1.getName();
    }
}

How is the lookupKey used? That is down to the DataSource configuration. Originally we could configure the data source directly through Spring's JDBC support as a plain DataSource bean.

Now we use Druid as the data source and configure the DynamicDataSource so that a key selects the corresponding DataSource — the master1 and master2 beans configured below:

<bean id="master1" class="com.alibaba.druid.pool.DruidDataSource" init-method="init"
destroy-method="close"
p:driverClassName="com.mysql.cj.jdbc.Driver"
p:url="${master1.demo.datasource.url}"
p:username="${master1.demo.datasource.username}"
p:password="${master1.demo.datasource.password}"
p:initialSize="5"
p:minIdle="1"
p:maxActive="10"
p:maxWait="60000"
p:timeBetweenEvictionRunsMillis="60000"
p:minEvictableIdleTimeMillis="300000"
p:validationQuery="SELECT 'x'"
p:testWhileIdle="true"
p:testOnBorrow="false"
p:testOnReturn="false"
p:poolPreparedStatements="false"
p:maxPoolPreparedStatementPerConnectionSize="20"
p:connectionProperties="config.decrypt=true"
p:filters="stat,config"/>

<bean id="master2" class="com.alibaba.druid.pool.DruidDataSource" init-method="init"
destroy-method="close"
p:driverClassName="com.mysql.cj.jdbc.Driver"
p:url="${master2.demo.datasource.url}"
p:username="${master2.demo.datasource.username}"
p:password="${master2.demo.datasource.password}"
p:initialSize="5"
p:minIdle="1"
p:maxActive="10"
p:maxWait="60000"
p:timeBetweenEvictionRunsMillis="60000"
p:minEvictableIdleTimeMillis="300000"
p:validationQuery="SELECT 'x'"
p:testWhileIdle="true"
p:testOnBorrow="false"
p:testOnReturn="false"
p:poolPreparedStatements="false"
p:maxPoolPreparedStatementPerConnectionSize="20"
p:connectionProperties="config.decrypt=true"
p:filters="stat,config"/>

<bean id="dataSource" class="com.nicksxs.springdemo.config.DynamicDataSource">
<property name="targetDataSources">
<map key-type="java.lang.String">
<!-- master -->
<entry key="master1" value-ref="master1"/>
<!-- slave -->
<entry key="master2" value-ref="master2"/>
</map>
</property>
<property name="defaultTargetDataSource" ref="master1"/>
</bean>

Now back to the beginning and the DatabaseContextHolder. It keeps the DatabaseType in a ThreadLocal. Why ThreadLocal? As mentioned above, we want each interface method to pick its own data source, with the choices isolated per thread and not interfering with one another (see my earlier post on the legendary ThreadLocal). DatabaseType itself is just a simple enum:

public class DatabaseContextHolder {
    public static final ThreadLocal<DatabaseType> databaseTypeThreadLocal = new ThreadLocal<>();

    public static DatabaseType getDatabaseType() {
        return databaseTypeThreadLocal.get();
    }

    public static void putDatabaseType(DatabaseType databaseType) {
        databaseTypeThreadLocal.set(databaseType);
    }

    public static void clearDatabaseType() {
        databaseTypeThreadLocal.remove();
    }
}

public enum DatabaseType {
    MASTER1("master1", "1"),
    MASTER2("master2", "2");

    private final String name;
    private final String value;

    DatabaseType(String name, String value) {
        this.name = name;
        this.value = value;
    }

    public String getName() {
        return name;
    }

    public String getValue() {
        return value;
    }

    public static DatabaseType getDatabaseType(String name) {
        if (MASTER2.name.equals(name)) {
            return MASTER2;
        }
        return MASTER1;
    }
}
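Before wiring in the annotation, the switch can already be exercised by hand. A small sketch of the intended usage pattern (my example; the mapper call underneath is hypothetical):

// Route subsequent queries on this thread to master2, and always clean up afterwards
DatabaseContextHolder.putDatabaseType(DatabaseType.MASTER2);
try {
    // any JDBC/MyBatis call here resolves to the master2 DruidDataSource,
    // because determineCurrentLookupKey() now returns "master2"
    // e.g. studentMapper.selectAll();
} finally {
    DatabaseContextHolder.clearDatabaseType(); // don't leak the value to pooled threads
}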

As shown above, switching data sources is simply a matter of setting the lookupKey dynamically via putDatabaseType. To drive this from configuration on the interface, we need an annotation:

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface DataSource {
    String value();
}

The annotation can then be placed on interface methods, like this:

public interface StudentService {

    @DataSource("master1")
    Student queryOne();

    @DataSource("master2")
    Student queryAnother();

}

The data source is then set via an aspect:

@Aspect
@Component
@Order(-1)
public class DataSourceAspect {

    @Pointcut("execution(* com.nicksxs.springdemo.service..*.*(..))")
    public void pointCut() {

    }

    @Before("pointCut()")
    public void before(JoinPoint point) {
        Object target = point.getTarget();
        System.out.println(target.toString());
        String method = point.getSignature().getName();
        System.out.println(method);
        Class<?>[] classz = target.getClass().getInterfaces();
        Class<?>[] parameterTypes = ((MethodSignature) point.getSignature())
            .getMethod().getParameterTypes();
        try {
            Method m = classz[0].getMethod(method, parameterTypes);
            System.out.println("method " + m.getName());
            if (m.isAnnotationPresent(DataSource.class)) {
                DataSource data = m.getAnnotation(DataSource.class);
                System.out.println("dataSource:" + data.value());
                DatabaseContextHolder.putDatabaseType(DatabaseType.getDatabaseType(data.value()));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @After("pointCut()")
    public void after() {
        DatabaseContextHolder.clearDatabaseType();
    }
}

The before advice looks up the interface method, checks whether it carries the @DataSource annotation, and reads its value (the way the value maps to a DatabaseType is not great, but let's ignore that for now); the after advice then clears the ThreadLocal once the pointcut completes.

Here is the data in my master1,

and the data in master2.

Now run a simple demo:

@Override
public void run(String... args) {
    LOGGER.info("run here");
    System.out.println(studentService.queryOne());
    System.out.println(studentService.queryAnother());
}

And take a look at the output of the run.

This approach is useful beyond database migrations — it can also implement fine-grained read/write splitting and the like. Consider this a quick note and share.

This topic came out of a sharing session within my team. Since Spring Boot 2.x, cglib has been the default AOP implementation; let's compare briefly. In an old 1.x project, AopAutoConfiguration looks like this:

@Configuration
@ConditionalOnClass({ EnableAspectJAutoProxy.class, Aspect.class, Advice.class })
@ConditionalOnProperty(prefix = "spring.aop", name = "auto", havingValue = "true", matchIfMissing = true)
public class AopAutoConfiguration {

    @Configuration
    @EnableAspectJAutoProxy(proxyTargetClass = false)
    @ConditionalOnProperty(prefix = "spring.aop", name = "proxy-target-class", havingValue = "false", matchIfMissing = true)
    public static class JdkDynamicAutoProxyConfiguration {
    }

    @Configuration
    @EnableAspectJAutoProxy(proxyTargetClass = true)
    @ConditionalOnProperty(prefix = "spring.aop", name = "proxy-target-class", havingValue = "true", matchIfMissing = false)
    public static class CglibAutoProxyConfiguration {
    }

}

In 2.x it has become:

@Configuration(proxyBeanMethods = false)
@ConditionalOnProperty(prefix = "spring.aop", name = "auto", havingValue = "true", matchIfMissing = true)
public class AopAutoConfiguration {

    @Configuration(proxyBeanMethods = false)
    @ConditionalOnClass(Advice.class)
    static class AspectJAutoProxyingConfiguration {

        @Configuration(proxyBeanMethods = false)
        @EnableAspectJAutoProxy(proxyTargetClass = false)
        @ConditionalOnProperty(prefix = "spring.aop", name = "proxy-target-class", havingValue = "false")
        static class JdkDynamicAutoProxyConfiguration {

        }

        @Configuration(proxyBeanMethods = false)
        @EnableAspectJAutoProxy(proxyTargetClass = true)
        @ConditionalOnProperty(prefix = "spring.aop", name = "proxy-target-class", havingValue = "true",
            matchIfMissing = true)
        static class CglibAutoProxyConfiguration {

        }

    }
}

Why AopAutoConfiguration gets loaded at all was covered in my earlier post on Spring Boot auto-configuration, for anyone interested. Comparing the two versions, the matchIfMissing flag has flipped: in 2.x it is CglibAutoProxyConfiguration that matches when spring.aop.proxy-target-class is unset, so Spring Boot 2.x uses cglib as the default dynamic proxy implementation (setting spring.aop.proxy-target-class=false restores JDK dynamic proxies).

Now for the problem. The code is a simple Spring Boot app that inserts into a database, with @Transactional on the insert path:

@Mapper
public interface StudentMapper {
    // Just inserts one row
    @Insert("insert into student(name, age) values ('nick', '18')")
    Long insert();
}

@Component
public class StudentManager {

    @Resource
    private StudentMapper studentMapper;

    public Long createStudent() {
        return studentMapper.insert();
    }
}

@Component
public class StudentServiceImpl implements StudentService {

    @Resource
    private StudentManager studentManager;

    // Self-reference
    @Resource
    private StudentServiceImpl studentService;

    @Override
    @Transactional
    public Long createStudent() {
        Long id = studentManager.createStudent();
        Long id2 = studentService.createStudent2();
        return 1L;
    }

    @Transactional
    private Long createStudent2() {
        // Integer t = Integer.valueOf("aaa");
        return studentManager.createStudent();
    }
}

The first public method, createStudent, calls the manager-layer create method and then calls createStudent2 through the injected studentService self-reference. Run it and, sure enough, it blows up — and this very error kept me puzzled for a long time.


It threw a NullPointerException, and createStudent2 had in fact been entered: inside it, studentManager was null. Recall how cglib works: it proxies by subclassing, so calling a proxied method dispatches to the cglib-generated subclass, where the aspect logic lives, which then invokes the target method on the target object. But inheritance cannot proxy final or private methods, since they cannot be overridden. So my first expectation was that the call to createStudent2 through studentService would fail at the call itself — we would never get inside the method.

Then I realized I had made a rather silly mistake: in Java, invoking the parent class's private method this way is actually reachable. The parent's private method is not maintained in the subclass's InstanceKlass; it is reached indirectly through the subclass's InstanceKlass following the _super pointer to the parent class. So the subclass instance can get there after all. Then why the NPE? The report says studentManager is null, which points toward dependency injection: if injection never happened, studentManager would indeed be null. So was the bean never injected? But then why did the first call work?

This genuinely took a long time to track down. Without further ado, the code:

@Override
protected Object invokeJoinpoint() throws Throwable {
    if (this.methodProxy != null) {
        // Here target is the original proxied bean
        return this.methodProxy.invoke(this.target, this.arguments);
    } else {
        return super.invokeJoinpoint();
    }
}

This is from org.springframework.aop.framework.CglibAopProxy.CglibMethodInvocation. Note that it does not call super (the parent's method) directly; it invokes the method on the target object through methodProxy — the target being the original, fully injected studentService bean — so the Spring-managed bean does the real work and everything behaves. cglib itself proxies by subclassing, shifting the call into the subclass where the advice is woven in; in naive, standalone usage we would call invokeSuper() to run the parent's implementation, but in the Spring scenario the call must be routed back to the target bean so Spring's machinery keeps working. Now the problem above can be explained. createStudent is public, so cglib overrides and proxies it; when the advice runs, the business logic executes on the Spring-managed target bean. createStudent2, however, is private: it is never proxied, so there is no "route back to the target" step. The call resolves purely through inheritance — the generated subclass has no such method, so the JVM follows the subclass's InstanceKlass to its _super and invokes the parent's private method there. The crucial point: the receiver of that call is the cglib-generated proxy instance, not the Spring-managed, dependency-injected bean, so its studentManager field was never populated — hence the NPE.
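To see the difference outside of Spring, here is a tiny self-contained cglib sketch (my own demo, not Spring's code) contrasting routing the call to a separately prepared target with invoking super on the proxy instance itself:

import net.sf.cglib.proxy.Enhancer;
import net.sf.cglib.proxy.MethodInterceptor;

public class CglibTargetDemo {

    public static class Service {
        String dep; // populated after construction, like Spring's field injection
        public String call() { return "dep=" + dep; }
    }

    public static void main(String[] args) {
        Service target = new Service();
        target.dep = "injected"; // simulate Spring's dependency injection on the real bean

        Enhancer enhancer = new Enhancer();
        enhancer.setSuperclass(Service.class);
        enhancer.setCallback((MethodInterceptor) (obj, method, params, proxy) ->
            // Spring-style: run the method on the injected *target* bean.
            // A naive proxy.invokeSuper(obj, params) would run it on the proxy
            // instance itself, whose dep field was never injected (dep == null).
            "advice -> " + proxy.invoke(target, params));

        Service proxyObj = (Service) enhancer.create();
        System.out.println(proxyObj.call()); // prints: advice -> dep=injected
    }
}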

So when is the target object that methodProxy calls into actually set? During the bean lifecycle: org.springframework.beans.factory.config.BeanPostProcessor#postProcessAfterInitialization is called as the bean initializes, and the AOP auto-proxy creator implements it:

@Override
public Object postProcessAfterInitialization(@Nullable Object bean, String beanName) {
    if (bean != null) {
        Object cacheKey = getCacheKey(bean.getClass(), beanName);
        if (this.earlyProxyReferences.remove(cacheKey) != bean) {
            return wrapIfNecessary(bean, beanName, cacheKey);
        }
    }
    return bean;
}

The concrete logic lives in org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator#wrapIfNecessary:

protected Object getCacheKey(Class<?> beanClass, @Nullable String beanName) {
    if (StringUtils.hasLength(beanName)) {
        return (FactoryBean.class.isAssignableFrom(beanClass) ?
                BeanFactory.FACTORY_BEAN_PREFIX + beanName : beanName);
    }
    else {
        return beanClass;
    }
}

/**
 * Wrap the given bean if necessary, i.e. if it is eligible for being proxied.
 * @param bean the raw bean instance
 * @param beanName the name of the bean
 * @param cacheKey the cache key for metadata access
 * @return a proxy wrapping the bean, or the raw bean instance as-is
 */
protected Object wrapIfNecessary(Object bean, String beanName, Object cacheKey) {
    if (StringUtils.hasLength(beanName) && this.targetSourcedBeans.contains(beanName)) {
        return bean;
    }
    if (Boolean.FALSE.equals(this.advisedBeans.get(cacheKey))) {
        return bean;
    }
    if (isInfrastructureClass(bean.getClass()) || shouldSkip(bean.getClass(), beanName)) {
        this.advisedBeans.put(cacheKey, Boolean.FALSE);
        return bean;
    }

    // Create proxy if we have advice.
    Object[] specificInterceptors = getAdvicesAndAdvisorsForBean(bean.getClass(), beanName, null);
    if (specificInterceptors != DO_NOT_PROXY) {
        this.advisedBeans.put(cacheKey, Boolean.TRUE);
        // The raw, fully injected bean is wrapped as the proxy's TargetSource here
        Object proxy = createProxy(
            bean.getClass(), beanName, specificInterceptors, new SingletonTargetSource(bean));
        this.proxyTypes.put(cacheKey, proxy.getClass());
        return proxy;
    }

    this.advisedBeans.put(cacheKey, Boolean.FALSE);
    return bean;
}

The proxy class itself is then created in org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator#createProxy.

CommitLog Structure

The CommitLog is where the broker — RocketMQ's server side — stores messages. Like Kafka, writes are sequential, although messages are variable-length. Each file defaults to 1G = 1024 * 1024 * 1024 bytes. The file name is 20 digits long, left-padded with zeros, the rest being the starting offset: 00000000000000000000 is the first file, with starting offset 0 and size 1G = 1,073,741,824 bytes; once it is full, the second file is named 00000000001073741824, with starting offset 1073741824. Messages are appended sequentially and writing rolls over to the next file when one fills up. The definition in code:

// CommitLog file size, default is 1G
private int mapedFileSizeCommitLog = 1024 * 1024 * 1024;


A local demo confirms this. A few points here are quite clever (personal opinion): each file is exactly 1G, and the next file is named after the size-based offset, so when fetching a message, simple arithmetic on the offset tells you which file it lives in.
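A toy illustration (mine, mirroring what UtilAll.offset2FileName does) of how a global offset maps to a file name plus a position inside that file:

// Each CommitLog file is 1G; the file name is the 20-digit, zero-padded starting offset
static final long FILE_SIZE = 1024L * 1024 * 1024;

static String locate(long offset) {
    long fileStart = offset - (offset % FILE_SIZE); // starting offset = file name
    long posInFile = offset % FILE_SIZE;            // where the message sits inside it
    return String.format("%020d @ %d", fileStart, posInFile);
}

// locate(0)            -> 00000000000000000000 @ 0
// locate(1073741825L)  -> 00000000001073741824 @ 1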

The CommitLog write path in the code starts here:

public PutMessageResult putMessage(final MessageExtBrokerInner msg) {
    // Set the storage time
    msg.setStoreTimestamp(System.currentTimeMillis());
    // Set the message body BODY CRC (consider the most appropriate setting
    // on the client)
    msg.setBodyCRC(UtilAll.crc32(msg.getBody()));
    // Back to Results
    AppendMessageResult result = null;

    StoreStatsService storeStatsService = this.defaultMessageStore.getStoreStatsService();

    String topic = msg.getTopic();
    int queueId = msg.getQueueId();

    final int tranType = MessageSysFlag.getTransactionValue(msg.getSysFlag());
    if (tranType == MessageSysFlag.TRANSACTION_NOT_TYPE
        || tranType == MessageSysFlag.TRANSACTION_COMMIT_TYPE) {
        // Delay Delivery
        if (msg.getDelayTimeLevel() > 0) {
            if (msg.getDelayTimeLevel() > this.defaultMessageStore.getScheduleMessageService().getMaxDelayLevel()) {
                msg.setDelayTimeLevel(this.defaultMessageStore.getScheduleMessageService().getMaxDelayLevel());
            }

            topic = ScheduleMessageService.SCHEDULE_TOPIC;
            queueId = ScheduleMessageService.delayLevel2QueueId(msg.getDelayTimeLevel());

            // Backup real topic, queueId
            MessageAccessor.putProperty(msg, MessageConst.PROPERTY_REAL_TOPIC, msg.getTopic());
            MessageAccessor.putProperty(msg, MessageConst.PROPERTY_REAL_QUEUE_ID, String.valueOf(msg.getQueueId()));
            msg.setPropertiesString(MessageDecoder.messageProperties2String(msg.getProperties()));

            msg.setTopic(topic);
            msg.setQueueId(queueId);
        }
    }

    long eclipseTimeInLock = 0;
    MappedFile unlockMappedFile = null;
    MappedFile mappedFile = this.mappedFileQueue.getLastMappedFile();

    putMessageLock.lock(); //spin or ReentrantLock ,depending on store config
    try {
        long beginLockTimestamp = this.defaultMessageStore.getSystemClock().now();
        this.beginTimeInLock = beginLockTimestamp;

        // Here settings are stored timestamp, in order to ensure an orderly
        // global
        msg.setStoreTimestamp(beginLockTimestamp);

        if (null == mappedFile || mappedFile.isFull()) {
            mappedFile = this.mappedFileQueue.getLastMappedFile(0); // Mark: NewFile may be cause noise
        }
        if (null == mappedFile) {
            log.error("create mapped file1 error, topic: " + msg.getTopic() + " clientAddr: " + msg.getBornHostString());
            beginTimeInLock = 0;
            return new PutMessageResult(PutMessageStatus.CREATE_MAPEDFILE_FAILED, null);
        }

        result = mappedFile.appendMessage(msg, this.appendMessageCallback);
        switch (result.getStatus()) {
            case PUT_OK:
                break;
            case END_OF_FILE:
                unlockMappedFile = mappedFile;
                // Create a new file, re-write the message
                mappedFile = this.mappedFileQueue.getLastMappedFile(0);
                if (null == mappedFile) {
                    // XXX: warn and notify me
                    log.error("create mapped file2 error, topic: " + msg.getTopic() + " clientAddr: " + msg.getBornHostString());
                    beginTimeInLock = 0;
                    return new PutMessageResult(PutMessageStatus.CREATE_MAPEDFILE_FAILED, result);
                }
                result = mappedFile.appendMessage(msg, this.appendMessageCallback);
                break;
            case MESSAGE_SIZE_EXCEEDED:
            case PROPERTIES_SIZE_EXCEEDED:
                beginTimeInLock = 0;
                return new PutMessageResult(PutMessageStatus.MESSAGE_ILLEGAL, result);
            case UNKNOWN_ERROR:
                beginTimeInLock = 0;
                return new PutMessageResult(PutMessageStatus.UNKNOWN_ERROR, result);
            default:
                beginTimeInLock = 0;
                return new PutMessageResult(PutMessageStatus.UNKNOWN_ERROR, result);
        }

        eclipseTimeInLock = this.defaultMessageStore.getSystemClock().now() - beginLockTimestamp;
        beginTimeInLock = 0;
    } finally {
        putMessageLock.unlock();
    }

    if (eclipseTimeInLock > 500) {
        log.warn("[NOTIFYME]putMessage in lock cost time(ms)={}, bodyLength={} AppendMessageResult={}", eclipseTimeInLock, msg.getBody().length, result);
    }

    if (null != unlockMappedFile && this.defaultMessageStore.getMessageStoreConfig().isWarmMapedFileEnable()) {
        this.defaultMessageStore.unlockMappedFile(unlockMappedFile);
    }

    PutMessageResult putMessageResult = new PutMessageResult(PutMessageStatus.PUT_OK, result);

    // Statistics
    storeStatsService.getSinglePutMessageTopicTimesTotal(msg.getTopic()).incrementAndGet();
    storeStatsService.getSinglePutMessageTopicSizeTotal(topic).addAndGet(result.getWroteBytes());

    handleDiskFlush(result, putMessageResult, msg);
    handleHA(result, putMessageResult, msg);

    return putMessageResult;
}

As we saw, the CommitLog directory consists of 1G files. In the implementation this is org.apache.rocketmq.store.MappedFileQueue, which internally keeps a queue of MappedFile. For writes it always grabs the last file via org.apache.rocketmq.store.MappedFileQueue#getLastMappedFile(); if no file exists yet, or the last one is full, it calls org.apache.rocketmq.store.MappedFileQueue#getLastMappedFile(long):

public MappedFile getLastMappedFile(final long startOffset, boolean needCreate) {
    long createOffset = -1;
    // Delegate to the no-arg overload, which just takes the last file in mappedFileQueue
    MappedFile mappedFileLast = getLastMappedFile();

    // If there is none, compute the creation offset
    if (mappedFileLast == null) {
        createOffset = startOffset - (startOffset % this.mappedFileSize);
    }

    // If there is one but the current file is full
    if (mappedFileLast != null && mappedFileLast.isFull()) {
        // The previous file's starting offset plus one file size, i.e. 1G
        createOffset = mappedFileLast.getFileFromOffset() + this.mappedFileSize;
    }

    if (createOffset != -1 && needCreate) {
        // Turn createOffset into a file name and create the file
        String nextFilePath = this.storePath + File.separator + UtilAll.offset2FileName(createOffset);
        String nextNextFilePath = this.storePath + File.separator
            + UtilAll.offset2FileName(createOffset + this.mappedFileSize);
        MappedFile mappedFile = null;

        // If an allocateMappedFileService exists, submit a request to it
        if (this.allocateMappedFileService != null) {
            mappedFile = this.allocateMappedFileService.putRequestAndReturnMappedFile(nextFilePath,
                nextNextFilePath, this.mappedFileSize);
        } else {
            try {
                // Otherwise create it directly
                mappedFile = new MappedFile(nextFilePath, this.mappedFileSize);
            } catch (IOException e) {
                log.error("create mappedFile exception", e);
            }
        }

        if (mappedFile != null) {
            if (this.mappedFiles.isEmpty()) {
                mappedFile.setFirstCreateInQueue(true);
            }
            this.mappedFiles.add(mappedFile);
        }

        return mappedFile;
    }

    return mappedFileLast;
}

First, the direct creation path:

public MappedFile(final String fileName, final int fileSize) throws IOException {
    init(fileName, fileSize);
}

private void init(final String fileName, final int fileSize) throws IOException {
    this.fileName = fileName;
    this.fileSize = fileSize;
    this.file = new File(fileName);
    this.fileFromOffset = Long.parseLong(this.file.getName());
    boolean ok = false;

    ensureDirOK(this.file.getParent());

    try {
        // Create the fileChannel through a RandomAccessFile
        this.fileChannel = new RandomAccessFile(this.file, "rw").getChannel();
        // Establish the mmap mapping
        this.mappedByteBuffer = this.fileChannel.map(MapMode.READ_WRITE, 0, fileSize);
        TOTAL_MAPPED_VIRTUAL_MEMORY.addAndGet(fileSize);
        TOTAL_MAPPED_FILES.incrementAndGet();
        ok = true;
    } catch (FileNotFoundException e) {
        log.error("create file channel " + this.fileName + " Failed. ", e);
        throw e;
    } catch (IOException e) {
        log.error("map file " + this.fileName + " Failed. ", e);
        throw e;
    } finally {
        if (!ok && this.fileChannel != null) {
            this.fileChannel.close();
        }
    }
}
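For anyone who has not used the mmap API directly, a self-contained sketch of the same pattern (hypothetical file name, not RocketMQ code):

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;

public class MmapDemo {
    public static void main(String[] args) throws Exception {
        // Map a 1 KB region of the file into memory, read-write
        try (RandomAccessFile raf = new RandomAccessFile("00000000000000000000", "rw");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 1024);
            // Writes land in the page cache; the OS flushes them eventually, or
            // force() flushes explicitly (what RocketMQ's flush services do at scale)
            buf.put("hello commitlog".getBytes(StandardCharsets.UTF_8));
            buf.force();
        }
    }
}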

If the request goes through AllocateMappedFileService instead, some asynchronous machinery comes into play:

public MappedFile putRequestAndReturnMappedFile(String nextFilePath, String nextNextFilePath, int fileSize) {
    int canSubmitRequests = 2;
    if (this.messageStore.getMessageStoreConfig().isTransientStorePoolEnable()) {
        if (this.messageStore.getMessageStoreConfig().isFastFailIfNoBufferInStorePool()
            && BrokerRole.SLAVE != this.messageStore.getMessageStoreConfig().getBrokerRole()) { // if broker is slave, don't fast fail even no buffer in pool
            canSubmitRequests = this.messageStore.getTransientStorePool().remainBufferNumbs() - this.requestQueue.size();
        }
    }
    // Put the request into requestTable
    AllocateRequest nextReq = new AllocateRequest(nextFilePath, fileSize);
    boolean nextPutOK = this.requestTable.putIfAbsent(nextFilePath, nextReq) == null;
    // requestTable is a ConcurrentHashMap keyed by file path, guarding against concurrent allocation of the same file
    if (nextPutOK) {
        // Check whether the request may go to the TransientStorePool; this touches on read/write splitting, to be covered separately
        if (canSubmitRequests <= 0) {
            log.warn("[NOTIFYME]TransientStorePool is not enough, so create mapped file error, " +
                "RequestQueueSize : {}, StorePoolSize: {}", this.requestQueue.size(), this.messageStore.getTransientStorePool().remainBufferNumbs());
            this.requestTable.remove(nextFilePath);
            return null;
        }
        // Push it onto the blocking queue
        boolean offerOK = this.requestQueue.offer(nextReq);
        if (!offerOK) {
            log.warn("never expected here, add a request to preallocate queue failed");
        }
        canSubmitRequests--;
    }

    // I suspect the second request here is to pre-allocate one extra CommitLog file
    AllocateRequest nextNextReq = new AllocateRequest(nextNextFilePath, fileSize);
    boolean nextNextPutOK = this.requestTable.putIfAbsent(nextNextFilePath, nextNextReq) == null;
    if (nextNextPutOK) {
        if (canSubmitRequests <= 0) {
            log.warn("[NOTIFYME]TransientStorePool is not enough, so skip preallocate mapped file, " +
                "RequestQueueSize : {}, StorePoolSize: {}", this.requestQueue.size(), this.messageStore.getTransientStorePool().remainBufferNumbs());
            this.requestTable.remove(nextNextFilePath);
        } else {
            boolean offerOK = this.requestQueue.offer(nextNextReq);
            if (!offerOK) {
                log.warn("never expected here, add a request to preallocate queue failed");
            }
        }
    }

    if (hasException) {
        log.warn(this.getServiceName() + " service has exception. so return null");
        return null;
    }

    AllocateRequest result = this.requestTable.get(nextFilePath);
    try {
        // Wait here for the allocation thread to finish
        if (result != null) {
            boolean waitOK = result.getCountDownLatch().await(waitTimeOut, TimeUnit.MILLISECONDS);
            if (!waitOK) {
                log.warn("create mmap timeout " + result.getFilePath() + " " + result.getFileSize());
                return null;
            } else {
                this.requestTable.remove(nextFilePath);
                return result.getMappedFile();
            }
        } else {
            log.error("find preallocate mmap failed, this never happen");
        }
    } catch (InterruptedException e) {
        log.warn(this.getServiceName() + " service has exception. ", e);
    }

    return null;
}

The actual file operations are carried out by AllocateMappedFileService's run method:

public void run() {
    log.info(this.getServiceName() + " service started");

    while (!this.isStopped() && this.mmapOperation()) {

    }
    log.info(this.getServiceName() + " service end");
}

private boolean mmapOperation() {
    boolean isSuccess = false;
    AllocateRequest req = null;
    try {
        // Take a request from the blocking queue
        req = this.requestQueue.take();
        AllocateRequest expectedRequest = this.requestTable.get(req.getFilePath());
        if (null == expectedRequest) {
            log.warn("this mmap request expired, maybe cause timeout " + req.getFilePath() + " "
                + req.getFileSize());
            return true;
        }
        if (expectedRequest != req) {
            log.warn("never expected here, maybe cause timeout " + req.getFilePath() + " "
                + req.getFileSize() + ", req:" + req + ", expectedRequest:" + expectedRequest);
            return true;
        }

        if (req.getMappedFile() == null) {
            long beginTime = System.currentTimeMillis();

            MappedFile mappedFile;
            if (messageStore.getMessageStoreConfig().isTransientStorePoolEnable()) {
                try {
                    // Create through the transientStorePool
                    mappedFile = ServiceLoader.load(MappedFile.class).iterator().next();
                    mappedFile.init(req.getFilePath(), req.getFileSize(), messageStore.getTransientStorePool());
                } catch (RuntimeException e) {
                    log.warn("Use default implementation.");
                    // Fall back to the default creation
                    mappedFile = new MappedFile(req.getFilePath(), req.getFileSize(), messageStore.getTransientStorePool());
                }
            } else {
                // Default creation
                mappedFile = new MappedFile(req.getFilePath(), req.getFileSize());
            }

            long eclipseTime = UtilAll.computeEclipseTimeMilliseconds(beginTime);
            if (eclipseTime > 10) {
                int queueSize = this.requestQueue.size();
                log.warn("create mappedFile spent time(ms) " + eclipseTime + " queue size " + queueSize
                    + " " + req.getFilePath() + " " + req.getFileSize());
            }

            // pre write mappedFile
            if (mappedFile.getFileSize() >= this.messageStore.getMessageStoreConfig()
                .getMapedFileSizeCommitLog()
                &&
                this.messageStore.getMessageStoreConfig().isWarmMapedFileEnable()) {
                mappedFile.warmMappedFile(this.messageStore.getMessageStoreConfig().getFlushDiskType(),
                    this.messageStore.getMessageStoreConfig().getFlushLeastPagesWhenWarmMapedFile());
            }

            req.setMappedFile(mappedFile);
            this.hasException = false;
            isSuccess = true;
        }
    } catch (InterruptedException e) {
        log.warn(this.getServiceName() + " interrupted, possibly by shutdown.");
        this.hasException = true;
        return false;
    } catch (IOException e) {
        log.warn(this.getServiceName() + " service has exception. ", e);
        this.hasException = true;
        if (null != req) {
            requestQueue.offer(req);
            try {
                Thread.sleep(1);
            } catch (InterruptedException ignored) {
            }
        }
    } finally {
        if (req != null && isSuccess)
            // Wake up the caller waiting in putRequestAndReturnMappedFile
            req.getCountDownLatch().countDown();
    }
    return true;
}