Alternative Languages
The languages described so far in this chapter have been extensions to what might be called standard C/C++. In some ways, C and C++ are not ideal languages for parallelization. One particular issue is the extensive use of pointers, which makes it hard to prove that memory accesses do not alias. As a consequence, other programming languages have been devised that either target the development of parallel applications or avoid some of the issues that afflict C/C++. For example, Fortress, initially developed by Sun Microsystems, has a model where loops are parallel by default unless otherwise specified. The Go language from Google includes the concept of goroutines that, rather like OpenMP tasks, can be executed in parallel with the main thread.
One area of interest is functional programming. With pure functional programming, the evaluation of an expression depends only on the parameters passed into that expression. Hence, functions can be evaluated in parallel, or in any order, and will produce the same result. We will consider Haskell as one example of a functional language.
The code in Listing 10.16 evaluates the Nth Fibonacci number in Haskell. The language allows the return values for functions to be defined for particular input values. So, in this instance, we define the return values for 0 and 1 as well as the general return value for all other numbers.
Listing 10.16 Evaluating the Nth Fibonacci Number in Haskell
fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)
Listing 10.17 shows the result of using this function interactively. The command :load requests that the module fib.hs be loaded; the function fib is then invoked with the parameter 10, and the runtime returns the value 55.
Listing 10.17 Asking Haskell to Provide the Tenth Fibonacci Number
GHCi, version 6.10.4: http://www.haskell.org/ghc/ :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer ... linking ... done.
Loading package base ... linking ... done.
Prelude> :load fib.hs
[1 of 1] Compiling Main ( fib.hs, interpreted )
Ok, modules loaded: Main.
*Main> fib 10
55
Listing 10.18 defines a second function, bif, a variant of the Fibonacci function. Suppose that we want to return
the sum of the two functions. The code defines a serial version of this
function and provides a main routine
that prints the result of calling this function.
Listing 10.18 Stand-Alone Serial Program
main = print ( serial 10 10 )
fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)
bif 0 = -1
bif 1 = 0
bif n = bif (n-1) + bif (n-2)
serial a b = fib a + bif b
Rather than interpreting this program, we can compile and run it as
shown in Listing 10.19.
Listing 10.19 Compiling and Running Serial Code
C:\> ghc -O --make test.hs
[1 of 1] Compiling Main ( test.hs, test.o )
Linking test.exe ...
C:\> test
21
The two functions should take about the same amount
of time to execute, so it would make sense to execute them in parallel. Listing
10.20 shows the code to do this.
Listing 10.20 Stand-Alone Parallel Program
import Control.Parallel
main = print ( parallel 20 20)
fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)
bif 0 = -1
bif 1 = 0
bif n = bif (n-1) + bif (n-2)
parallel a b
= let x = fib a
y = bif b
in x `par` (y `pseq` (x+y))
In the code, the let expressions are not assignments of values but declarations of local variables. The local variables will be evaluated only if they are needed; this is lazy evaluation. These local variables are used in the in expression, which performs the computation. The import statement at the start of the code imports the Control.Parallel module, which defines the `par` and `pseq` operators. These two operators are used so that the computation of x = fib a and y = bif b is performed in parallel: `par` sparks the evaluation of x off to another thread, while `pseq` ensures that the result (x+y) is computed only after the calculation of y has completed. Without these preparations, it is possible that both parallel threads might choose to compute the value of x first.
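The effect of lazy evaluation on let bindings can be seen in a small stand-alone sketch (the bindings x and y here are purely illustrative and do not appear in the earlier listings):

```haskell
-- A minimal sketch of lazy let bindings: x declares an infinite
-- computation, but because it is never demanded, it is never
-- evaluated, and the program terminates and prints only y.
main :: IO ()
main =
  let x = sum [1 ..]   -- never forced, so never evaluated
      y = 40 + 2
  in print y           -- prints 42
```

If x were evaluated eagerly, as an assignment in C would be, this program would never terminate; under lazy evaluation the binding is merely a recipe that is followed only on demand.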
The example given here exposes parallelism using low-level primitives. The preferred way of coding parallelism in Haskell is to use strategies. This approach separates the computation from the parallelization.
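As a minimal sketch of the strategies approach, the two functions from the earlier listings could be combined using the Eval monad from the Control.Parallel.Strategies module; rpar and rseq play roles analogous to `par` and `pseq`:

```haskell
import Control.Parallel.Strategies

fib :: Integer -> Integer
fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)

bif :: Integer -> Integer
bif 0 = -1
bif 1 = 0
bif n = bif (n-1) + bif (n-2)

-- runEval describes an evaluation plan: rpar sparks fib a for
-- parallel evaluation while rseq evaluates bif b before returning.
parallel :: Integer -> Integer -> Integer
parallel a b = runEval $ do
  x <- rpar (fib a)
  y <- rseq (bif b)
  return (x + y)

main :: IO ()
main = print (parallel 20 20)   -- prints 2584
```

Note that for the sparks to actually run in parallel, the program must be compiled with ghc -threaded and run with the +RTS -N runtime option; without them the sparks are evaluated sequentially, although the printed result is the same either way.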
Haskell highlights the key advantage of pure functional programming languages for writing parallel code: the result of a function call depends only on the parameters passed into it. The compiler therefore knows that a function call can be scheduled in any arbitrary order, and that its result does not depend on the time at which the call is made. The advantage this provides is that adding the `par` operator to produce a parallel version of an application is guaranteed not to change the result of the application. Hence, parallelization is a means of improving performance and not a source of bugs.