SequenceL

SequenceL
Paradigms Parallel computing, Functional, Purely functional, Declarative programming
Designed by Dr. Daniel Cooke,
Dr. Nelson Rushton,
Dr. Brad Nemanich
Developers Texas Tech University,
Texas Multicore Technologies
First appeared 1989
Typing discipline Static, type inference
Platform x86, POWER, ARM
OS Windows, OS X, Linux
Website texasmulticore.com

SequenceL is a general-purpose functional programming language and auto-parallelizing tool set, whose primary design objectives are performance on multi-core processor hardware, ease of programming, platform portability/optimization, and code clarity and readability. Its main advantage is that it can be used to write straightforward code that automatically takes full advantage of all the processing power available, without programmers needing to identify parallelisms, specify vectorization, avoid race conditions, or face the other challenges of manual directive-based programming approaches such as OpenMP.

Programs written in SequenceL can be compiled to multithreaded code that runs in parallel, with no explicit indications from a programmer of how or what to parallelize. As of 2015, versions of the SequenceL compiler generate parallel code in C++ and OpenCL, which allows it to work with most popular programming languages, including C, C++, C#, Fortran, Java, and Python. A platform-specific runtime manages the threads safely, automatically providing parallel performance according to the number of cores available.

History

SequenceL was initially developed over a 20-year period starting in 1989, mostly at Texas Tech University. Primary funding was from NASA, which originally wanted to develop a specification language which was "self-verifying"; that is, once written, the requirements could be executed, and the results verified against the desired outcome.

The principal researcher on the project was initially Dr. Daniel Cooke,[1] who was soon joined by Dr. Nelson Rushton (another Texas Tech professor) and later Dr. Brad Nemanich (then a PhD student under Cooke). The goal of creating a language that was simple enough to be readable, but unambiguous enough to be executable, drove the inventors to settle on a functional, declarative language approach, where a programmer describes desired results, rather than the means to achieve them. The language is then free to solve the problem in the most efficient manner that it can find.

As the language evolved, the researchers developed new computational approaches, including consume-simplify-produce (CSP).[2] In 1998, research began to apply SequenceL to parallel computing. This culminated in 2004, when the language took its more complete form with the addition of the normalize-transpose (NT) semantic,[3][4] which coincided with the major vendors of central processing units (CPUs) shifting to multi-core processors rather than continuing to increase clock speeds. NT is the semantic workhorse, used to simplify and decompose structures based on a dataflow-like execution strategy similar to GAMMA[5] and NESL.[6] The NT semantic achieves a goal similar to that of Lämmel and Peyton-Jones' boilerplate elimination.[7][8] All other features of the language are definable from these two laws, including recursion, subscripting structures, function references, and evaluation of function bodies.[9][10]

Though it was not the original intent, these new approaches allowed the language to parallelize a large fraction of the operations it performed, transparently to the programmer. In 2006, a prototype auto-parallelizing compiler was developed at Texas Tech University. In 2009, Texas Tech licensed the intellectual property to Texas Multicore Technologies (TMT),[11] for follow-on commercial development. Texas Tech continues research into SequenceL via further research grants, and collaboration with TMT.

SequenceL has been used in commercial-scale applications where algorithmic accuracy, code transparency, and multiprocessor performance have been crucial. These applications have included spacecraft navigation, aviation decision support, target tracking, computational fluid dynamics (CFD), image processing, video processing, audio signal processing, computer network scheduling, database processing, seismic migration processing, and industrial control systems.

Design

SequenceL is designed to be as simple as possible to learn and use, focusing on algorithmic code where it adds value; for example, the inventors chose not to reinvent I/O, since C already handles it well. As a result, the full language reference for SequenceL is only 40 pages, with copious examples, and its formal grammar has around 15 production rules.

SequenceL is strictly evaluated (like Lisp), statically typed with type inference (like Haskell), and uses a combination of infix and prefix operators that resemble standard, informal mathematical notation (like C, Pascal, Python, etc.). It is a purely declarative language, meaning that a programmer defines functions, in the mathematical sense, without giving instructions for their implementation. For example, the mathematical definition of matrix multiplication is as follows:

The product of the m×p matrix A with the p×n matrix B is the m×n matrix whose (i,j)'th entry is

   A[i,1]*B[1,j] + A[i,2]*B[2,j] + ... + A[i,p]*B[p,j]

The SequenceL definition mirrors that definition more or less exactly:

   matmul(A(2), B(2))[i,j] :=
       let k := 1...size(B); 
       in  sum( A[i,k] * B[k,j] );

The subscripts following each parameter A and B on the left hand side of the definition indicate that A and B are depth-2 structures (i.e., lists of lists of scalars), which are here thought of as matrices. From this formal definition, SequenceL infers the dimensions of the defined product from the formula for its (i, j)'th entry (as the set of pairs (i, j) for which the right hand side is defined) and computes each entry by the same formula as in the informal definition above. Notice there are no explicit instructions for iteration in this definition, or for the order in which operations are to be carried out. Because of this, the SequenceL compiler can perform operations in any order (including parallel order) which satisfies the defining equation. In this example, computation of coordinates in the product will be parallelized in a way that, for large matrices, scales linearly with the number of processors.
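
The independence of the entries can be made concrete with a short sketch. The following is a hedged Python analogue (not SequenceL, and not how the SequenceL compiler actually works) of the same entry-wise definition; the function name `matmul` and its structure are illustrative only:

```python
def matmul(A, B):
    """Multiply an m x p matrix A by a p x n matrix B, entry by entry.

    Each (i, j) entry depends only on row i of A and column j of B, so
    the comprehension below could be evaluated in any order, including
    concurrently, without changing the result -- the same independence
    the SequenceL compiler exploits when parallelizing.
    """
    m, p, n = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(n)]
            for i in range(m)]

# Example: a 2x2 product, computed entry by entry.
print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # prints [[19, 22], [43, 50]]
```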

As noted above, SequenceL has no built-in constructs for input/output (I/O) since it was designed to work in an additive manner with other programming languages. The decision to compile to multithreaded C++ and support the 20+ Simplified Wrapper and Interface Generator (SWIG) languages (C, C++, C#, Java, Python, etc.) means it easily fits into extant design flows, training, and tools. It can be used to enhance extant applications, create multicore libraries, and even create standalone applications by linking the resulting code with other code which performs I/O tasks. SequenceL functions can also be queried from an interpreter with given inputs, like Python and other interpreted languages.

Normalize–transpose

The main non-scalar construct of SequenceL is the sequence, which is essentially a list. Sequences may be nested to any level. To avoid the routine use of recursion common in many purely functional languages, SequenceL uses a technique termed normalize–transpose (NT), in which scalar operations are automatically distributed over elements of a sequence. For example, in SequenceL we have

   [1,2,3] + 10 = [11,12,13]

This results not from overloading the '+' operator, but from the effect of NT, which extends to all operations, both built-in and user-defined. As another example, if f() is a 3-argument function whose arguments are scalars, then for any appropriate x and z we will have

   f(x, [y1,...,yn], z) = [f(x,y1,z), ..., f(x,yn,z)]

The NT construct can be used for multiple arguments at once, as in, for example,

   [x1,...,xn] * [y1,...,yn] = [x1*y1, ..., x n*yn]

It also works when the expected argument is a non-scalar of any type T, and the actual argument is a list of objects of type T (or, in greater generality, any data structure whose coordinates are of type T). For example, if A is a matrix and Xs is a list of matrices [X1, ..., Xn], and given the above definition of matrix multiply, in SequenceL we would have

   matmul(A,Xs) = [matmul(A,X1),...,matmul(A,Xn)]

As a rule, NTs eliminate the need for iteration, recursion, or high level functional operators to

  1. do the same things to every member of a data structure, or to
  2. process corresponding parts of similarly shaped structures together.

This tends to account for most uses of iteration and recursion.
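
As a rough illustration of the two NT rules above, here is a minimal Python sketch (not part of SequenceL; the helper name `nt` is an assumption for illustration) that distributes a scalar function over list arguments, replicating ("normalizing") the scalars and mapping ("transposing") element-wise:

```python
def nt(f, *args):
    """Apply scalar function f, distributing it over any list arguments."""
    if not any(isinstance(a, list) for a in args):
        return f(*args)                      # all scalars: plain application
    n = len(next(a for a in args if isinstance(a, list)))
    # normalize: turn each scalar argument into a list of n copies
    cols = [a if isinstance(a, list) else [a] * n for a in args]
    # transpose: apply f element-wise, recursing to handle nested sequences
    return [nt(f, *row) for row in zip(*cols)]

print(nt(lambda x, y: x + y, [1, 2, 3], 10))       # prints [11, 12, 13]
print(nt(lambda x, y: x * y, [1, 2, 3], [4, 5, 6]))  # prints [4, 10, 18]
```

The recursion means the same rule also reaches into nested sequences, e.g. adding 10 to a list of lists adds it to every scalar.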

Example: prime numbers

A good example demonstrating the above concepts is finding prime numbers. A prime number is defined as

An integer greater than 1, with no positive divisors other than itself and 1.

So a positive integer z is prime if no number from 2 through z-1, inclusive, divides it evenly. SequenceL allows this problem to be programmed by literally transcribing the above definition into the language.

In SequenceL, a sequence of the numbers from 2 through z-1, inclusive, is just (2...(z-1)), so a program to find all of the primes between 100 and 200 can be written:

   prime(z) := z when none(z mod (2...(z-1)) = 0);

In English, this just says,

...return the argument if none of the numbers between 2, and 1 less than the argument itself, divide evenly into it.

If that condition isn’t met, the function returns nothing. As a result, running this program yields

   cmd:>prime(17)
   17
   cmd:>prime(18)
   empty

The string "between 100 and 200" doesn't appear in the program. Rather, a programmer will typically pass that part in as the argument. Since the program expects a scalar as an argument, passing it a sequence of numbers instead will cause SequenceL to perform the operation on each member of the sequence automatically. Since the function returns empty for failing values, the result will be the input sequence, filtered to contain only those numbers that satisfy the criterion for primality:

   cmd:>prime(100...200)
   [101,103,107,109,113,127,131,137,139,149,151,157,163,167,173,179,181,191,193,197,199]

In addition to solving this problem with a very short and readable program, SequenceL evaluates the nested sequences in parallel.
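
The behavior of the prime example can be sketched in Python (again, an analogue rather than SequenceL itself; the names `prime` and `primes_in` are illustrative). The scalar test returns the number or None (playing the role of empty), and mapping it over a range while dropping the empties mimics SequenceL's automatic NT distribution and empty-filtering:

```python
def prime(z):
    """Return z if no number in 2..z-1 divides it evenly, else None ("empty")."""
    return z if all(z % d != 0 for d in range(2, z)) else None

def primes_in(lo, hi):
    """Map prime() over lo..hi and keep only the non-empty results."""
    return [p for p in (prime(z) for z in range(lo, hi + 1)) if p is not None]

print(prime(17))           # prints 17
print(prime(18))           # prints None (SequenceL would print "empty")
print(primes_in(100, 200)) # prints the 21 primes between 100 and 200
```

The key difference is that in SequenceL the mapping and filtering are implicit in the semantics, and the per-element tests are parallelized automatically.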

Components

The following software components are available and supported by TMT for use in writing SequenceL code. All components are available on x86 platforms running Windows, OS X, and most varieties of Linux (including CentOS, RedHat, OpenSUSE, and Ubuntu), and on ARM and IBM POWER platforms running most varieties of Linux.

Interpreter

A command-line interpreter allows writing code directly into a command shell, or loading code from prewritten text files. This code can be executed, and the results evaluated, to assist in checking code correctness, or finding a quick answer. It is also available via the popular Eclipse integrated development environment (IDE). Code executed in the interpreter does not run in parallel; it executes in one thread.

Compiler

A command-line compiler reads SequenceL code and generates highly parallelized, vectorized C++, and optionally OpenCL, which must be linked with the SequenceL runtime library to execute.

Runtime

The runtime environment is a pre-compiled set of libraries which works with the compiled parallelized C++ code to execute optimally on the target platform. It builds on Intel Threading Building Blocks (TBB)[12] and handles things such as cache optimization, memory management, work-stealing queues, and performance monitoring.

Eclipse IDE plug-in with debugger

An Eclipse integrated development environment plug-in provides standard editing abilities (function rollup, syntax coloring, etc.) and a SequenceL debugging environment. This plug-in runs against the SequenceL interpreter, so it cannot be used to debug the multithreaded code; however, because parallelization is automatic, debugging parallel SequenceL code reduces to verifying the correctness of sequential SequenceL code. That is, if the code runs correctly sequentially, it should run correctly in parallel, so debugging in the interpreter is sufficient.

Libraries

Various math and other standard function libraries are included as SequenceL source code to streamline the programming process and serve as best practice examples. These may be imported, in much the same way that C or C++ libraries are #included.

Using it

Typically, SequenceL development is done in the Eclipse IDE or in a source code editor such as Notepad++, Vim, or Xcode, on a portion of existing code that a programmer wants to speed up or enhance. The code is then debugged and verified for correctness using the interpreter and SequenceL debugger. When done, it is passed through the SequenceL compiler, which emits highly parallelized, vectorized C++ and, optionally, OpenCL. The programmer then adds this code to their project, linking against the runtime library at compile time with their compiler of choice (from GCC to optimizing compilers), to generate a single parallelized, vectorized, high-performance executable.

References

  1. Dr. Daniel Cooke at Texas Multicore Technologies
  2. Consume-simplify-produce (CSP)
  3. Nemanich, Brad; Cooke, Daniel; Rushton, Nelson (2010), SequenceL: Transparency And Multi-Core Parallelisms (PDF), DAMP '10 Proceedings of the 5th ACM SIGPLAN workshop on Declarative Aspects of Multicore Programming, New York, NY, US: ACM, pp. 45–52
  4. Cooke, Daniel; Rushton, Nelson; Nemanich, Brad; Watson, Robert G.; Andersen, Per (March 2008), "Normalize, Transpose, and Distribute: An Automatic Approach for Handling Nonscalars" (PDF), ACM Transactions on Programming Languages and Systems, New York, NY, US: ACM, 30 (2)
  5. Banâtre, J.-P.; Le Métayer, D. (January 1993), "Programming by Multiset Transformation", Communications of the ACM, ACM, 36 (1): 98–111, doi:10.1145/151233.151242
  6. Blelloch, Guy (March 1996), "Programming Parallel Algorithms", Communications of the ACM, ACM, 39 (3): 85–97, doi:10.1145/227234.227246
  7. Lämmel, Ralf; Peyton-Jones, Simon (2003), "Scrap your boilerplate: a practical design pattern for generic programming", Proceedings of TLDI 2003, ACM Press
  8. Lämmel, Ralf; Peyton-Jones, Simon (2004), "Scrap more boilerplate: reflection, zips, and generalised casts", Proceedings of ICFP 2004, ACM Press
  9. Cooke, Daniel; Rushton, Nelson (2005), "Iterative and Parallel Algorithm Design from High Level Language Traces" (PDF), ICCS'05 Proceedings of the 5th international conference on Computational Science, Part III: 891–894, doi:10.1007/11428862_132, ISBN 978-3-540-26044-8
  10. Cooke, Daniel; Rushton, Nelson (June 27–30, 2005), "SequenceL – An Overview of a Simple Language", Proceedings of The 2005 International Conference on Programming Languages and Compilers, PLC 2005, Las Vegas, Nevada, US
  11. Texas Multicore Technologies, Inc.
  12. Intel Threading Building Blocks (TBB)

This article is issued from Wikipedia (version of 10/16/2016). The text is available under the Creative Commons Attribution/Share-Alike license, but additional terms may apply for the media files.