How do I parallelize a for loop through a C++ std::list using OpenMP?

                  Problem Description

                  I would like to iterate through all elements in an std::list in parallel fashion using OpenMP. The loop should be able to alter the elements of the list. Is there a simple solution for this? It seems that OpenMP 3.0 supports parallel for loops when the iterator is a Random Access Iterator, but not otherwise. In any case, I would prefer to use OpenMP 2.0 as I don't have full control over which compilers are available to me.

                  If my container were a vector, I might use:

                  #pragma omp parallel for
                  for (auto it = v.begin(); it != v.end(); ++it) {
                      it->process();
                  }
                  

                  I understand that I could copy the list into a vector, do the loop, then copy everything back. However, I would like to avoid this complexity and overhead if possible.

                  Recommended Answer

                  If you decide to use OpenMP 3.0, you can use the task feature:

                  #pragma omp parallel
                  #pragma omp single
                  {
                    for(auto it = l.begin(); it != l.end(); ++it)
                       #pragma omp task firstprivate(it)
                         it->process();
                    #pragma omp taskwait
                  }
                  

                  This will execute the loop in one thread, but delegate the processing of elements to others.

                  Without OpenMP 3.0, the easiest way would be to write pointers to all elements of the list into a vector and iterate over that one. This way nothing has to be copied back and you avoid the overhead of copying the elements themselves, so it shouldn't have much overhead:

                  std::vector<my_element*> elements; // my_element is whatever is in the list
                  for(auto it = list.begin(); it != list.end(); ++it)
                    elements.push_back(&(*it));
                  
                  #pragma omp parallel shared(elements)
                  {
                    #pragma omp for
                    for(size_t i = 0; i < elements.size(); ++i) // or use iterators in newer OpenMP
                        elements[i]->process();
                  }
                  

                  If you want to avoid copying even the pointers, you can always create a parallelized for loop by hand. You can either have the threads access interleaved elements of the list (as proposed by KennyTM) or split the range into roughly equal contiguous parts before iterating over those. The latter seems preferable, since the threads avoid accessing list nodes currently processed by other threads (even if only the next pointer), which could lead to false sharing. This would look roughly like this:

                  #pragma omp parallel
                  {
                    int thread_count = omp_get_num_threads();
                    int thread_num   = omp_get_thread_num();
                    size_t chunk_size= list.size() / thread_count;
                    auto begin = list.begin();
                    std::advance(begin, thread_num * chunk_size);
                    auto end = begin;
                    if(thread_num == thread_count - 1) // last thread iterates the remaining sequence
                       end = list.end();
                    else
                       std::advance(end, chunk_size);
                    #pragma omp barrier
                    for(auto it = begin; it != end; ++it)
                      it->process();
                  }
                  

                  The barrier is not strictly needed; however, if process mutates the processed element (meaning it is not a const method), there might be some sort of false sharing without it, if threads iterate over a sequence that is already being mutated. This way will iterate 3*n times over the sequence (where n is the number of threads), so scaling might be less than optimal for a high number of threads.

                  To reduce the overhead you could put the generation of the ranges outside of the #pragma omp parallel, but then you need to know how many threads will form the parallel section. So you'd probably have to set num_threads manually, or use omp_get_max_threads() and handle the case where the number of threads created is fewer than omp_get_max_threads() (which is only an upper bound). In that case you could assign each thread several chunks (using #pragma omp for should do that):

                  int max_threads = omp_get_max_threads();
                  std::vector<std::pair<std::list<...>::iterator, std::list<...>::iterator> > chunks;
                  chunks.reserve(max_threads); 
                  size_t chunk_size= list.size() / max_threads;
                  auto cur_iter = list.begin();
                  for(int i = 0; i < max_threads - 1; ++i)
                  {
                     auto last_iter = cur_iter;
                     std::advance(cur_iter, chunk_size);
                     chunks.push_back(std::make_pair(last_iter, cur_iter));
                  }
                  chunks.push_back(std::make_pair(cur_iter, list.end()));
                  
                  #pragma omp parallel shared(chunks)
                  {
                    #pragma omp for
                    for(int i = 0; i < max_threads; ++i)
                      for(auto it = chunks[i].first; it != chunks[i].second; ++it)
                        it->process();
                  }
                  

                  This will take only three iterations over the list (two, if you can get the size of the list without iterating). I think that is about the best you can do for non-random-access iterators without using tasks or iterating over some out-of-place data structure (like a vector of pointers).
