
Simple version: parallel code need not be re-entrant, but all re-entrant code can be run in parallel.

More complex version: there are four ways to run a program. These are "Single Instruction, Single Data" (i.e. a single-threaded program), "Single Instruction, Multi Data" (vector processing would be an example of this), "Multi Instruction, Single Data" (a good way to program genetic algorithms) and "Multi Instruction, Multi Data" (traditional, hard-core parallelism). SIMD would need to be re-entrant to be parallel; otherwise you can't be running the same instructions. :) SIMD is fashionable, but it is limited to those cases where you are operating on the data in parallel. If you want to experiment with dynamic methods (heuristics, genetic algorithms, self-learning networks), or if you want to apply multiple algorithms to the same data (e.g. data mining using a range of specialist algorithms), then you're going to be running a vast number of completely different routines that may have no components in common. If so, you wouldn't care whether they were re-entrant or not. In practice, you're likely to use a blend of SIMD, MISD and MIMD in any "real-world" program. People who write "pure" code of one type or another usually end up with something that is ugly, hard to maintain, and feels wrong for the problem. On the other hand, pure code usually requires the fewest messaging and other communication libraries, as you're only doing one type of communication, and you can optimize the hell out of the network, which is very likely to saturate with many problems. I have some small amount of experience with OpenMP, which allows one to modify C++ or Fortran code using pragmas to direct the compiler regarding parallelization of the code.
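For a flavour of what that looks like, here is a minimal sketch; the pragma is standard OpenMP, but scale_all and its body are invented for illustration:

    #include <cmath>
    #include <vector>

    // The pragma asks the compiler to split the loop iterations across a
    // team of threads. Compile with OpenMP enabled (e.g. -fopenmp for GCC
    // or Clang); without it the pragma is ignored and the loop runs serially.
    std::vector<double> scale_all(const std::vector<double>& in)
    {
        std::vector<double> out(in.size());
        #pragma omp parallel for
        for (long i = 0; i < static_cast<long>(in.size()); i++)
            out[i] = std::sqrt(in[i]);   // each i is independent, so this is safe
        return out;
    }

The appeal is that the serial code is untouched: delete the pragma and you have an ordinary loop.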

Reentrancy is a factor, because it's a class of dependencies, but there are many other kinds of dependency. Consider a for loop:

    for (int i = 0; i < 100; i++) doSomething(i);

Can this be parallelized? Perhaps the author meant it exactly as it's written there: first doSomething(0), then doSomething(1), and so on. Or maybe he doesn't care about the order, and doSomething just needs to run once for each i in 0..99.
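To make the second reading concrete, here is a sketch, with the body of doSomething invented purely for illustration, of a loop where the order genuinely doesn't matter:

    #include <vector>

    std::vector<int> results(100);

    // No cross-iteration dependencies: call i reads and writes only
    // results[i], so the 100 calls could run in any order, or all at
    // once, and the final state would be identical.
    void doSomething(int i)
    {
        results[i] = i * i;
    }

    int main()
    {
        for (int i = 0; i < 100; i++) doSomething(i);   // order irrelevant here
    }

If doSomething instead appended to a shared log, or read results[i - 1], the order would start to matter again.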

If nothing in doSomething depends on the outcome of doSomething with a different i, the calls can be run in parallel and in any order. Now suppose each doSomething involves a lengthy calculation and an output at the end. As written, the output from doSomething(0) comes before the output from doSomething(1), and so on; then the calls can't simply run in parallel, because the output is a dependency. But the compiler could still run the lengthy calculations in parallel and synchronize only the fast outputs at the end. The art of automatic parallelization is to find overspecifications like the ordered loop where the order isn't really necessary; the more of these opportunities for parallelism the compiler can find, the better it is.
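One way to express that split by hand, rather than waiting for the compiler to find it, is OpenMP's ordered construct. This is a hedged sketch; slowCalculation stands in for the lengthy work:

    #include <cmath>
    #include <cstdio>

    // Stand-in for the expensive part of doSomething.
    double slowCalculation(int i)
    {
        double x = 0.0;
        for (int k = 1; k < 1000000; k++)
            x += std::sin(i + k) / k;
        return x;
    }

    int main()
    {
        // The calculations run across threads; only the cheap printf is
        // forced back into the original 0..99 order.
        #pragma omp parallel for ordered
        for (int i = 0; i < 100; i++) {
            double r = slowCalculation(i);     // parallel part
            #pragma omp ordered
            std::printf("%d: %f\n", i, r);     // serialized output
        }
    }

Threads stall briefly at the ordered block, but when slowCalculation dominates, most of the work overlaps, which is exactly the opportunity described above.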
