Is it possible to use hardware de-multiplexing for high-load network servers?

Problem description

For example, with asynchronous IO over TCP/IP (using POSIX poll/select or the more advanced epoll, kqueue, poll_set, IOCP), the network driver is started by interrupts on different CPU cores (hardware demultiplexer), receives the messages, and dumps them into a single kernel-level buffer (multiplexer). Then our acceptor thread, using epoll / kqueue / poll_set / IOCP, receives from this single buffer a list of descriptors of the sockets on which messages arrived, and again scatters them (demultiplexer) across the threads of a thread pool running on different CPU cores.

In short, the scheme looks like: hardware interrupt (hardware demultiplexer) -> network driver in kernel space (multiplexer) -> the user's acceptor in user space using epoll / kqueue / poll_set / IOCP (demultiplexer).
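To make the last link of that chain concrete, here is a minimal sketch of the user-space demultiplexer stage using epoll. It assumes `listen_fd` is an already-listening, non-blocking TCP socket and `handle_client()` is a hypothetical per-connection handler; error handling is omitted for brevity.

```c
#include <sys/epoll.h>
#include <sys/socket.h>

#define MAX_EVENTS 64

void handle_client(int fd);   /* hypothetical per-connection handler */

void acceptor_loop(int listen_fd)
{
    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    struct epoll_event events[MAX_EVENTS];
    for (;;) {
        /* The kernel multiplexes readiness of all sockets into one queue;
           epoll_wait() hands us a batch of ready descriptors. */
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            if (events[i].data.fd == listen_fd) {
                /* New connection: register it with the same epoll instance. */
                int client = accept(listen_fd, NULL, NULL);
                ev.events = EPOLLIN;
                ev.data.fd = client;
                epoll_ctl(epfd, EPOLL_CTL_ADD, client, &ev);
            } else {
                /* In the scheme above this work would be scattered across a
                   thread pool; here it is handled inline for simplicity. */
                handle_client(events[i].data.fd);
            }
        }
    }
}
```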

Is it not easier and faster to get rid of the last two links and use only the "hardware demultiplexer"?

An example. When a network packet arrives, the network card interrupts the CPU. On most systems today these interrupts are distributed across cores, i.e. this distribution is itself a hardware demultiplexer. After receiving such an interrupt we can immediately process this network message and then wait for the next interrupt. All of the demultiplexing work is done at the hardware level, by using CPU interrupts.

In the Cortex-A5 MPCore: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0434b/CCHDBEBE.html

Is such an approach feasible in all of Linux, or in real-time *nix systems such as QNX, and are there public projects where this approach is used, perhaps nginx?

UPDATE:

The simple answer to my question is yes: I can use hardware demultiplexing through /proc/irq/<N>/smp_affinity: http://www.alexonlinux.com/smp-affinity-and-proper-interrupt-handling-in-linux
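For illustration, a minimal sketch of setting that affinity programmatically rather than with the shell. The IRQ number and the CPU mask below are example values, not taken from the article; the real IRQ of the NIC (or of one of its queues) has to be looked up in /proc/interrupts, and the write requires root.

```c
/* Sketch: pin one IRQ to one CPU core by writing a hex bitmask to
   /proc/irq/<N>/smp_affinity. IRQ 24 and mask "2" are hypothetical values. */
#include <stdio.h>

int main(void)
{
    const int irq = 24;                       /* hypothetical NIC IRQ number */
    char path[64];
    snprintf(path, sizeof path, "/proc/irq/%d/smp_affinity", irq);

    FILE *f = fopen(path, "w");
    if (!f) { perror("fopen"); return 1; }
    fputs("2\n", f);                          /* bitmask 0x2 -> CPU core 1 */
    return fclose(f) == 0 ? 0 : 1;
}
```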

But a second note: it is not such a good thing, because different parts of one packet can be handled by different cores, and the cache synchronization required for coherency (L1(CoreX) -> L3 -> L1(CoreY)) can take time: http://www.alexonlinux.com/why-interrupt-affinity-with-multiple-cores-is-not-such-a-good-thing

Solutions:

• Hard-bind different Ethernet adapters (their IRQs) to different individual CPU cores (see the sketch after this list)
• Use large packets and small messages, so that a packet usually contains an entire message
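As a complement to the first solution, here is a minimal Linux-specific sketch of pinning the worker thread that consumes an adapter's packets to the same core its IRQ was bound to, so the received data stays in that core's cache. `pthread_setaffinity_np()` is a GNU extension, and the core number is only an example matching the mask used above.

```c
/* Sketch: pin the calling thread to one CPU core (GNU/Linux only). */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

void pin_current_thread_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);                      /* e.g. core 1, matching mask 0x2 */
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}
```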

QUESTION: But maybe there are better solutions, for example using soft-IRQs (without hardware IRQs) when we manually receive a batch of network packets from the network adapter. Are there?

Accepted answer

The simple answer to my question is yes: I can use hardware demultiplexing through /proc/irq/<N>/smp_affinity: http://www.alexonlinux.com/smp-affinity-and-proper-interrupt-handling-in-linux

But a second note: it is not such a good thing, because different parts of one packet can be handled by different cores, and the cache synchronization required for coherency (L1(CoreX) -> L3 -> L1(CoreY)) can take time: http://www.alexonlinux.com/why-interrupt-affinity-with-multiple-cores-is-not-such-a-good-thing

Solutions:

• Hard-bind different Ethernet adapters (their IRQs) to different individual CPU cores
• Use large packets and small messages, so that a packet usually contains an entire message



