
How RocketMQ Retrieves a Specific Message (Source Code Walkthrough)

Overview

What is message lookup?

Message lookup means fetching a message from MQ using the msgId supplied by the caller.

How does lookup work when RocketMQ has multiple nodes?

Question: in a distributed RocketMQ deployment the data is spread across nodes; even messages of the same topic are not necessarily on the same broker. How does the client know which node to query?

Guess 1: visit the broker nodes one by one and query each of them for the data.

Guess 2: some kind of data registry exists that knows where every message is stored; querying it yields the message's exact location, from which the content can then be fetched.

What actually happens:

1. The message id embeds the address (IP and port) of the broker that holds the message, plus the message's offset in that broker's commitlog.

2. The client parses the broker address out of the msgId string and queries that specific broker for the message, as illustrated below.
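
As an illustration (the values below are invented, not a real message), a 32-character offset-style msgId is simply three hex fields concatenated:

// Hypothetical 32-character msgId, split into its hex fields (values invented for illustration):
//   C0A80101           -> broker IP   192.168.1.1   (8 hex chars, 4 bytes)
//   00002329           -> broker port 9001          (8 hex chars, 4 bytes)
//   0000000000ABCDEF   -> commitlog offset 11259375 (16 hex chars, 8 bytes)
String msgId = "C0A80101" + "00002329" + "0000000000ABCDEF";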

Question: there are multiple commitlog files, so an offset alone presumably cannot tell which file the message is in?

What actually happens: within a single broker node the offset is globally unique; the commitlog files do not each start their offsets from 0. All commitlog files on one node share a single offset space, and each file is named after the offset of its first message. The target commitlog file can therefore be determined from the offset together with the file names, as sketched below.
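
A minimal sketch of that mapping, assuming the default 1 GB commitlog file size; the variable names are illustrative, not RocketMQ source:

long mappedFileSize = 1024L * 1024 * 1024;                 // default size of one commitlog file
long globalOffset = 2_500_000_000L;                        // hypothetical offset taken from a msgId

long fileFirstOffset = globalOffset - (globalOffset % mappedFileSize);
String fileName = String.format("%020d", fileFirstOffset); // commitlog files are named by their first offset
long positionInFile = globalOffset % mappedFileSize;       // read position inside that file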

Reading the Source

0. How it is used

MessageExt msg = consumer.viewMessage(msgId);
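
A minimal runnable sketch of that call, using a DefaultMQPullConsumer so that no message listener needs to be registered; the group name, namesrv address, and msgId value are placeholders:

import org.apache.rocketmq.client.consumer.DefaultMQPullConsumer;
import org.apache.rocketmq.common.message.MessageExt;

public class ViewMessageDemo {
    public static void main(String[] args) throws Exception {
        // Group name, namesrv address, and the msgId below are placeholders.
        DefaultMQPullConsumer consumer = new DefaultMQPullConsumer("demo_consumer_group");
        consumer.setNamesrvAddr("127.0.0.1:9876");
        consumer.start();

        String msgId = "C0A80101000023290000000000ABCDEF"; // hypothetical offset msgId
        MessageExt msg = consumer.viewMessage(msgId);
        System.out.printf("topic=%s body=%s%n", msg.getTopic(), new String(msg.getBody()));

        consumer.shutdown();
    }
}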

1. Parsing the message id

A quick look at this part is enough.

public class MessageId {
    private SocketAddress address;
    private long offset;

    public MessageId(SocketAddress address, long offset) {
        this.address = address;
        this.offset = offset;
    }

    // getters and setters omitted
}

// from MQAdminImpl.java
public MessageExt viewMessage(
    String msgId) throws RemotingException, MQBrokerException, InterruptedException, MQClientException {

    MessageId messageId = null;
    try {
        // Parse the address and offset out of the msgId string:
        // address = ip:port
        // offset  = the message's offset in the commitlog
        messageId = MessageDecoder.decodeMessageId(msgId);
    } catch (Exception e) {
        throw new MQClientException(ResponseCode.NO_MESSAGE, "query message by id finished, but no message.");
    }
    return this.mQClientFactory.getMQClientAPIImpl().viewMessage(RemotingUtil.socketAddress2String(messageId.getAddress()),
        messageId.getOffset(), timeoutMillis);
}

// from MessageDecoder.java
public static MessageId decodeMessageId(final String msgId) throws UnknownHostException {
    SocketAddress address;
    long offset;
    // Distinguish IPv4 from IPv6:
    // a 32-character msgId carries an IPv4 address, a longer one carries IPv6
    int ipLength = msgId.length() == 32 ? 4 * 2 : 16 * 2;

    byte[] ip = UtilAll.string2bytes(msgId.substring(0, ipLength));
    byte[] port = UtilAll.string2bytes(msgId.substring(ipLength, ipLength + 8));
    ByteBuffer bb = ByteBuffer.wrap(port);
    int portInt = bb.getInt(0);
    address = new InetSocketAddress(InetAddress.getByAddress(ip), portInt);

    // offset
    byte[] data = UtilAll.string2bytes(msgId.substring(ipLength + 8, ipLength + 8 + 16));
    bb = ByteBuffer.wrap(data);
    offset = bb.getLong(0);

    return new MessageId(address, offset);
}

2. Client-side RPC over the long-lived connection

To send a request a connection must exist first, and the method below contains the operations that create one. Note that on the first call the connection may not be established yet, and establishing it takes some time. The code accounts for that: if the timeout has already elapsed by the time the channel is ready, the request is not sent at all. The intent is presumably to keep the calling thread blocked for as short a time as possible.

// from NettyRemotingClient.java
@Override
public RemotingCommand invokeSync(String addr, final RemotingCommand request, long timeoutMillis)
    throws InterruptedException, RemotingConnectException, RemotingSendRequestException, RemotingTimeoutException {
    long beginStartTime = System.currentTimeMillis();
    // Return an existing channel for this address if there is one, otherwise create it
    final Channel channel = this.getAndCreateChannel(addr);
    if (channel != null && channel.isActive()) {
        try {
            // before-RPC hooks
            doBeforeRpcHooks(addr, request);
            // If establishing the channel already used up the timeout, throw without sending the request
            long costTime = System.currentTimeMillis() - beginStartTime;
            if (timeoutMillis < costTime) {
                throw new RemotingTimeoutException("invokeSync call timeout");
            }
            // synchronous call
            RemotingCommand response = this.invokeSyncImpl(channel, request, timeoutMillis - costTime);
            // after-RPC hooks
            doAfterRpcHooks(RemotingHelper.parseChannelRemoteAddr(channel), request, response);
            return response;
        } catch (RemotingSendRequestException e) {
            log.warn("invokeSync: send request exception, so close the channel[{}]", addr);
            this.closeChannel(addr, channel);
            throw e;
        } catch (RemotingTimeoutException e) {
            if (nettyClientConfig.isClientCloseSocketIfTimeout()) {
                this.closeChannel(addr, channel);
                log.warn("invokeSync: close socket because of timeout, {}ms, {}", timeoutMillis, addr);
            }
            log.warn("invokeSync: wait response timeout exception, the channel[{}]", addr);
            throw e;
        }
    } else {
        this.closeChannel(addr, channel);
        throw new RemotingConnectException(addr);
    }
}

Next, let's see what the synchronous call does. Note that it builds a future object and puts it into the pending-response table, sends the request frame, and then parks the calling thread until it is woken up (waitResponse waits on a CountDownLatch internally).

// from NettyRemotingAbstract.java
public RemotingCommand invokeSyncImpl(final Channel channel, final RemotingCommand request,
    final long timeoutMillis)
    throws InterruptedException, RemotingSendRequestException, RemotingTimeoutException {
    // request id
    final int opaque = request.getOpaque();

    try {
        // stub representing this in-flight request
        final ResponseFuture responseFuture = new ResponseFuture(channel, opaque, timeoutMillis, null, null);
        // register it in the table of requests awaiting a response
        this.responseTable.put(opaque, responseFuture);
        final SocketAddress addr = channel.remoteAddress();
        // send the request; update the future's state once the write completes
        channel.writeAndFlush(request).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture f) throws Exception {
                if (f.isSuccess()) { // write succeeded: mark the request as sent
                    responseFuture.setSendRequestOK(true);
                    return;
                } else {
                    responseFuture.setSendRequestOK(false);
                }

                // write failed: remove the future from the table (it is useless now, release the resources)
                responseTable.remove(opaque);
                responseFuture.setCause(f.cause());
                // putResponse wakes up the waiting thread
                responseFuture.putResponse(null);
                log.warn("send a request command to channel <" + addr + "> failed.");
            }
        });

        // Wait only for a bounded time, never forever:
        // on a normal response the waiting thread is woken up and continues,
        // on timeout the thread wakes up by itself once the time is up
        RemotingCommand responseCommand = responseFuture.waitResponse(timeoutMillis);
        if (null == responseCommand) {
            if (responseFuture.isSendRequestOK()) {
                throw new RemotingTimeoutException(RemotingHelper.parseSocketAddressAddr(addr), timeoutMillis,
                    responseFuture.getCause());
            } else {
                throw new RemotingSendRequestException(RemotingHelper.parseSocketAddressAddr(addr), responseFuture.getCause());
            }
        }

        return responseCommand;
    } finally {
        // On a normal response this releases the future (the normal path).
        // On timeout the request is abandoned; if its response arrives later it is simply dropped
        // (there is no matching future left to hand it to).
        this.responseTable.remove(opaque);
    }
}
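
For reference, here is a minimal sketch of the wait/notify pair inside ResponseFuture, built on the CountDownLatch mentioned above. The class and field names are illustrative; the real class carries more state (callback, semaphore, timestamps):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.apache.rocketmq.remoting.protocol.RemotingCommand;

// Illustrative sketch only; not the actual org.apache.rocketmq.remoting ResponseFuture.
class ResponseFutureSketch {
    private final CountDownLatch latch = new CountDownLatch(1);
    private volatile RemotingCommand responseCommand;

    // Called by the requesting thread: block until a response arrives or the timeout elapses.
    public RemotingCommand waitResponse(long timeoutMillis) throws InterruptedException {
        latch.await(timeoutMillis, TimeUnit.MILLISECONDS);
        return responseCommand; // null means timeout or send failure
    }

    // Called by the Netty I/O thread (or by the send listener on failure) to publish the result and wake the waiter.
    public void putResponse(RemotingCommand response) {
        this.responseCommand = response;
        latch.countDown();
    }
}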

Now let's look at how an incoming response frame is handled. We know how a JDK Future works: the task is handed to another thread, which writes its result into the Future when it finishes, waking up any threads waiting on that result. The mechanism here is much the same, except that the executing thread lives on the server; when the server is done it sends the result back over the long-lived connection, and the client uses the id carried in the frame to find the matching future object in the pending-response table, after which the handling is just as described above.

class NettyClientHandler extends SimpleChannelInboundHandler<RemotingCommand> {

    // called once the lower layers have decoded a RemotingCommand frame
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, RemotingCommand msg) throws Exception {
        processMessageReceived(ctx, msg);
    }
}

public void processMessageReceived(ChannelHandlerContext ctx, RemotingCommand msg) throws Exception {
    final RemotingCommand cmd = msg;
    if (cmd != null) {
        // dispatch by command type
        switch (cmd.getType()) {
            case REQUEST_COMMAND:
                processRequestCommand(ctx, cmd);
                break;
            case RESPONSE_COMMAND:
                processResponseCommand(ctx, cmd);
                break;
            default:
                break;
        }
    }
}

public void processResponseCommand(ChannelHandlerContext ctx, RemotingCommand cmd) {
    // the request id this response answers
    final int opaque = cmd.getOpaque();
    // look up the matching request in the pending-response table
    final ResponseFuture responseFuture = responseTable.get(opaque);
    if (responseFuture != null) {
        // store the response in the ResponseFuture; the waiting thread reads the result from it
        responseFuture.setResponseCommand(cmd);
        // the request is done, release it from the table
        responseTable.remove(opaque);

        // if there is a callback, invoke it (on the current thread)
        if (responseFuture.getInvokeCallback() != null) {
            executeInvokeCallback(responseFuture);
        } else {
            // otherwise wake up the waiting thread, which handles the result itself
            responseFuture.putResponse(cmd);
            responseFuture.release();
        }
    } else {
        log.warn("receive response, but not matched any request, " + RemotingHelper.parseChannelRemoteAddr(ctx.channel()));
        log.warn(cmd.toString());
    }
}

To summarize, the client-side sequence is roughly: build a ResponseFuture keyed by the request's opaque id, register it in the pending-response table, write the request to the channel, and block on the future; when the response frame arrives, the I/O thread looks the future up by its opaque id, fills in the result, and either runs the callback or wakes the waiting thread.

3. Processing on the server side

// TODO: details of the server-side commitlog file mapping still to be added

class NettyServerHandler extends SimpleChannelInboundHandler<RemotingCommand> {

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, RemotingCommand msg) throws Exception {
        processMessageReceived(ctx, msg);
    }
}

// from NettyRemotingAbstract.java
public void processMessageReceived(ChannelHandlerContext ctx, RemotingCommand msg) throws Exception {
    final RemotingCommand cmd = msg;
    if (cmd != null) {
        switch (cmd.getType()) {
            case REQUEST_COMMAND: // the server takes this branch
                processRequestCommand(ctx, cmd);
                break;
            case RESPONSE_COMMAND:
                processResponseCommand(ctx, cmd);
                break;
            default:
                break;
        }
    }
}

// from NettyRemotingAbstract.java
public void processRequestCommand(final ChannelHandlerContext ctx, final RemotingCommand cmd) {
    // look up a processor registered for this request code
    final Pair<NettyRequestProcessor, ExecutorService> matched = this.processorTable.get(cmd.getCode());
    // if there is none, fall back to the default processor (which may also be absent)
    final Pair<NettyRequestProcessor, ExecutorService> pair = null == matched ? this.defaultRequestProcessor : matched;
    final int opaque = cmd.getOpaque();

    if (pair != null) {
        Runnable run = new Runnable() {
            @Override
            public void run() {
                try {
                    doBeforeRpcHooks(RemotingHelper.parseChannelRemoteAddr(ctx.channel()), cmd);
                    final RemotingResponseCallback callback = new RemotingResponseCallback() {
                        @Override
                        public void callback(RemotingCommand response) {
                            doAfterRpcHooks(RemotingHelper.parseChannelRemoteAddr(ctx.channel()), cmd, response);
                            if (!cmd.isOnewayRpc()) {
                                if (response != null) { // non-null: this layer writes the response back to the caller
                                    response.setOpaque(opaque);
                                    response.markResponseType();
                                    try {
                                        ctx.writeAndFlush(response);
                                    } catch (Throwable e) {
                                        log.error("process request over, but response failed", e);
                                        log.error(cmd.toString());
                                        log.error(response.toString());
                                    }
                                } else { // null: the processor already wrote the response itself, nothing to do here
                                }
                            }
                        }
                    };
                    if (pair.getObject1() instanceof AsyncNettyRequestProcessor) { // QueryMessageProcessor is an async processor
                        AsyncNettyRequestProcessor processor = (AsyncNettyRequestProcessor) pair.getObject1();
                        processor.asyncProcessRequest(ctx, cmd, callback);
                    } else {
                        NettyRequestProcessor processor = pair.getObject1();
                        RemotingCommand response = processor.processRequest(ctx, cmd);
                        doAfterRpcHooks(RemotingHelper.parseChannelRemoteAddr(ctx.channel()), cmd, response);
                        callback.callback(response);
                    }
                } catch (Throwable e) {
                    log.error("process request exception", e);
                    log.error(cmd.toString());

                    if (!cmd.isOnewayRpc()) {
                        final RemotingCommand response = RemotingCommand.createResponseCommand(RemotingSysResponseCode.SYSTEM_ERROR,
                            RemotingHelper.exceptionSimpleDesc(e));
                        response.setOpaque(opaque);
                        ctx.writeAndFlush(response);
                    }
                }
            }
        };

        if (pair.getObject1().rejectRequest()) {
            final RemotingCommand response = RemotingCommand.createResponseCommand(RemotingSysResponseCode.SYSTEM_BUSY,
                "[REJECTREQUEST]system busy, start flow control for a while");
            response.setOpaque(opaque);
            ctx.writeAndFlush(response);
            return;
        }

        try {
            final RequestTask requestTask = new RequestTask(run, ctx.channel(), cmd);
            pair.getObject2().submit(requestTask);
        } catch (RejectedExecutionException e) {
            if ((System.currentTimeMillis() % 10000) == 0) {
                log.warn(RemotingHelper.parseChannelRemoteAddr(ctx.channel())
                    + ", too many requests and system thread pool busy, RejectedExecutionException "
                    + pair.getObject2().toString()
                    + " request code: " + cmd.getCode());
            }

            if (!cmd.isOnewayRpc()) {
                final RemotingCommand response = RemotingCommand.createResponseCommand(RemotingSysResponseCode.SYSTEM_BUSY,
                    "[OVERLOAD]system busy, start flow control for a while");
                response.setOpaque(opaque);
                ctx.writeAndFlush(response);
            }
        }
    } else {
        String error = " request type " + cmd.getCode() + " not supported";
        final RemotingCommand response =
            RemotingCommand.createResponseCommand(RemotingSysResponseCode.REQUEST_CODE_NOT_SUPPORTED, error);
        response.setOpaque(opaque);
        ctx.writeAndFlush(response);
        log.error(RemotingHelper.parseChannelRemoteAddr(ctx.channel()) + error);
    }
}

// from QueryMessageProcessor.java
@Override
public RemotingCommand processRequest(ChannelHandlerContext ctx, RemotingCommand request)
    throws RemotingCommandException {
    switch (request.getCode()) {
        case RequestCode.QUERY_MESSAGE:
            return this.queryMessage(ctx, request);
        case RequestCode.VIEW_MESSAGE_BY_ID: // look up a message by its msgId
            return this.viewMessageById(ctx, request);
        default:
            break;
    }

    return null;
}

public RemotingCommand viewMessageById(ChannelHandlerContext ctx, RemotingCommand request)
    throws RemotingCommandException {
    final RemotingCommand response = RemotingCommand.createResponseCommand(null);
    final ViewMessageRequestHeader requestHeader =
        (ViewMessageRequestHeader) request.decodeCommandCustomHeader(ViewMessageRequestHeader.class);

    response.setOpaque(request.getOpaque());

    // getMessageStore returns the store over the memory-mapped commitlog files;
    // the message is then read at the given offset
    final SelectMappedBufferResult selectMappedBufferResult =
        this.brokerController.getMessageStore().selectOneMessageByOffset(requestHeader.getOffset());
    if (selectMappedBufferResult != null) {
        response.setCode(ResponseCode.SUCCESS);
        response.setRemark(null);

        // write the response back to the client over the socket
        try {
            // the response object's data becomes the header,
            // the message content becomes the body
            FileRegion fileRegion =
                new OneMessageTransfer(response.encodeHeader(selectMappedBufferResult.getSize()),
                    selectMappedBufferResult);
            ctx.channel().writeAndFlush(fileRegion).addListener(new ChannelFutureListener() {
                @Override
                public void operationComplete(ChannelFuture future) throws Exception {
                    selectMappedBufferResult.release();
                    if (!future.isSuccess()) {
                        log.error("Transfer one message from page cache failed, ", future.cause());
                    }
                }
            });
        } catch (Throwable e) {
            log.error("", e);
            selectMappedBufferResult.release();
        }

        return null; // the response has already been written here; null tells the outer layer not to handle it
    } else {
        response.setCode(ResponseCode.SYSTEM_ERROR);
        response.setRemark("can not find message by the offset, " + requestHeader.getOffset());
    }

    return response;
}
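
The commitlog file mapping marked as a TODO earlier is not covered in this article. As a rough conceptual sketch only (class and method names here are invented, not the actual DefaultMessageStore/CommitLog code), selecting one message by its global offset can be pictured like this:

import java.nio.ByteBuffer;

// Conceptual sketch only; names are invented, not RocketMQ source.
class CommitLogLookupSketch {
    static final long MAPPED_FILE_SIZE = 1024L * 1024 * 1024; // default commitlog file size

    interface MappedFile {
        ByteBuffer slice(int pos, int size); // view into the memory-mapped file contents
    }

    // Pretend lookup of the mapped file whose name (= its first offset) covers the given offset.
    MappedFile findMappedFileByOffset(long offset) {
        throw new UnsupportedOperationException("illustrative only");
    }

    ByteBuffer selectOneMessageByOffset(long commitLogOffset) {
        // 1. Pick the file that contains this offset (index = offset / MAPPED_FILE_SIZE).
        MappedFile file = findMappedFileByOffset(commitLogOffset);
        int pos = (int) (commitLogOffset % MAPPED_FILE_SIZE);

        // 2. The first 4 bytes of a stored message hold its total length.
        int totalSize = file.slice(pos, 4).getInt(0);

        // 3. Return a buffer covering exactly that one message.
        return file.slice(pos, totalSize);
    }
}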

Summary

That concludes this article on how RocketMQ retrieves a specific message. For more on the topic, search 移动技术网's earlier articles, and we hope you will keep supporting 移动技术网!
